Fuzz testing
{{Redirect|Fuzzing||Fuzz (disambiguation)}}
'''Fuzz testing''' or '''fuzzing''' is a [[software testing]] technique, often automated or semi-automated, that involves providing invalid, unexpected, or [[random data]] to the inputs of a [[computer program]]. The program is then monitored for exceptions such as [[crash (computing)|crash]]es, failing built-in code [[assertion (computing)|assertion]]s, or potential [[memory leak]]s. Fuzzing is commonly used to test for security problems in software or computer systems.
The field of fuzzing originates with Barton Miller at the [[University of Wisconsin]] in 1988. This early work included not only the use of random unstructured testing, but also a systematic set of tools to evaluate a wide variety of software utilities on a variety of platforms, along with a systematic analysis of the kinds of errors exposed by this kind of testing. Miller's group also provided public access to their tool source code, test procedures, and raw result data.
There are two forms of fuzzing programs, ''mutation-based'' and ''generation-based'', which can be employed as [[white-box testing|white]]-, [[gray-box testing|grey]]-, or [[black-box testing|black]]-[[software testing#The box approach|box testing]].<ref name="sutton" /> [[File format]]s and [[protocol (computing)|network protocol]]s are the most common targets of testing, but any type of program input can be fuzzed. Interesting inputs include [[environment variable]]s, keyboard and mouse [[event (computing)|event]]s, and sequences of [[application programming interface|API]] calls. Even items not normally considered "input" can be fuzzed, such as the contents of [[database]]s, [[shared memory]], or the precise interleaving of [[thread (computer science)|thread]]s.
For the purpose of security, input that crosses a [[trust boundary]] is often the most interesting.<ref name="neystadt" /> For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.
==History==
The term "fuzz" or "fuzzing" originates from a 1988 class project, taught by Barton Miller at the University of Wisconsin.<ref name="AutoDO-1" /><ref name="AutoDO-2" /> The project developed a basic command-line fuzzer to test the reliability of [[Unix]] programs by bombarding them with random data until they crashed. The test was repeated in 1995, expanded to include testing of GUI-based tools (such as the [[X Window System]]), network protocols, and system library APIs.<ref name="sutton" /> Follow-on work included testing command- and GUI-based applications on both Windows and Mac OS X. | |||
One of the earliest examples of fuzzing dates from before 1983: "The Monkey" was a [[Macintosh]] application developed by [[Steve Capps]] that used journaling hooks to feed random events into Mac programs, and was used to test for bugs in [[MacPaint]].<ref name="AutoDO-3" />
Another early fuzz testing tool was ''crashme'', first released in 1991, which was intended to test the robustness of Unix and [[Unix-like]] operating systems by executing random machine instructions.<ref name="AutoDO-4" />
==Uses==
Fuzz testing is often employed as a [[black-box testing]] methodology in large software projects where a budget exists to develop test tools. Fuzz testing is one of the techniques that offers a high benefit-to-cost ratio.<ref name="AutoDO-5" />
The technique can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software can handle exceptions without crashing, rather than behaving correctly. This makes fuzz testing a bug-finding tool rather than an assurance of overall quality, and not a substitute for exhaustive testing or [[formal methods]].
As a gross measurement of reliability, fuzzing can suggest which parts of a program should get special attention, in the form of a [[code audit]], application of [[static code analysis]], or partial [[rewrite (programming)|rewrite]]s.
===Types of bugs===
As well as testing for outright crashes, fuzz testing is used to find bugs such as assertion failures and [[memory leak]]s (when coupled with a [[memory debugger]]). The methodology is useful against large applications, where any bug affecting [[memory safety]] is likely to be a severe [[vulnerability (computing)|vulnerability]].
Since fuzzing often generates invalid input, it is used for testing error-handling routines, which are important for software that does not control its input. Simple fuzzing can be thought of as a way to automate [[negative testing]].
Fuzzing can also find some types of "correctness" bugs. For example, it can be used to find incorrect-[[serialization]] bugs by complaining whenever a program's serializer emits something that the same program's parser rejects.<ref name="AutoDO-6" /> It can also find unintentional differences between two versions of a program<ref name="AutoDO-7" /> or between two implementations of the same specification.<ref name="AutoDO-8" />
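The following illustrative sketch shows such a round-trip check in Python; here <code>json.dumps</code> and <code>json.loads</code> merely stand in for the serializer and parser of the program under test:
<syntaxhighlight lang="python">
import json
import random
import string

# json.dumps/json.loads stand in for the program's own serializer and parser.
def random_value(depth=0):
    """Build a random nested value to feed the serializer."""
    choices = [
        lambda: random.randint(-10**6, 10**6),
        lambda: "".join(random.choices(string.printable, k=random.randint(0, 20))),
    ]
    if depth < 3:  # bound recursion so generation always terminates
        choices.append(lambda: [random_value(depth + 1) for _ in range(random.randint(0, 4))])
    return random.choice(choices)()

for _ in range(10000):
    value = random_value()
    text = json.dumps(value)
    try:
        round_tripped = json.loads(text)
    except ValueError as exc:
        # The serializer emitted something its own parser rejects: a correctness bug.
        print(f"parser rejected serializer output: {text!r} ({exc})")
        continue
    if round_tripped != value:
        print(f"round-trip mismatch: {value!r} -> {round_tripped!r}")
</syntaxhighlight>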
==Techniques==
Fuzzing programs fall into two categories. Mutation-based fuzzers mutate existing data samples to create test data, while generation-based fuzzers define new test data based on models of the input.<ref name="sutton" />
The simplest fuzzing technique is sending a stream of random bits to software, whether as command-line options, randomly mutated protocol packets, or events. Random input remains a powerful way to find bugs in command-line applications, network protocols, and GUI-based applications and services. Another common technique that is easy to implement is mutating existing input (e.g. files from a [[test suite]]) by flipping bits at random or moving blocks of the file around, as in the sketch below. However, the most successful fuzzers have a detailed understanding of the format or protocol being tested.
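A minimal mutation-based fuzzer along these lines might look like the following Python sketch; the target command and the seed file path are placeholders for a real harness:
<syntaxhighlight lang="python">
import random
import subprocess
import sys

SEED_FILE = "sample.input"      # an existing valid input (placeholder path)
TARGET = ["./target_program"]   # command under test (placeholder)

def mutate(data: bytes) -> bytes:
    """Flip a few random bits, and occasionally move a block of the input around."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 8)):
        pos = random.randrange(len(buf))
        buf[pos] ^= 1 << random.randrange(8)   # flip one random bit
    if random.random() < 0.2 and len(buf) > 16:
        i, j = sorted(random.randrange(len(buf)) for _ in range(2))
        block = buf[i:j]
        del buf[i:j]
        k = random.randrange(len(buf) + 1)
        buf[k:k] = block                        # reinsert the block elsewhere
    return bytes(buf)

seed = open(SEED_FILE, "rb").read()             # assumed non-empty
for trial in range(100000):
    case = mutate(seed)
    proc = subprocess.run(TARGET, input=case, capture_output=True)
    if proc.returncode < 0:  # on POSIX: killed by a signal, e.g. SIGSEGV
        open(f"crash-{trial}.bin", "wb").write(case)
        print(f"trial {trial}: crashed with signal {-proc.returncode}", file=sys.stderr)
</syntaxhighlight>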
The understanding can be based on a [[specification (technical standard)#Software development|specification]]. Building a specification-based fuzzer involves writing the entire array of specifications into the tool, then using model-based test generation techniques to walk through the specifications and add anomalies to the data contents, structures, messages, and sequences. This "smart fuzzing" technique is also known as robustness testing, syntax testing, grammar testing, and (input) fault injection.<ref name="AutoDO-9" /><ref name="AutoDO-10" /><ref name="AutoDO-11" /><ref name="AutoDO-12" /> The protocol awareness can also be created [[heuristic algorithm|heuristically]] from examples using a tool such as [[Sequitur algorithm|Sequitur]].<ref name="AutoDO-13" /> These fuzzers can ''generate'' [[test case]]s from scratch, or they can ''mutate'' examples from [[test suite]]s or real life. They can concentrate on ''valid'' or ''invalid'' input, with ''mostly-valid'' input tending to trigger the "deepest" error cases.
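A generation-based fuzzer can be sketched as a random walk over a grammar. The toy grammar and anomaly list below are invented for illustration; a real specification-based tool would encode the actual protocol:
<syntaxhighlight lang="python">
import random

# A toy grammar for a fictional key-value configuration protocol.
GRAMMAR = {
    "<message>": [["<header>", "\n", "<body>"]],
    "<header>": [["VERSION ", "<number>"]],
    "<body>": [["<pair>"], ["<pair>", "\n", "<body>"]],
    "<pair>": [["<key>", "=", "<value>"]],
    "<key>": [["name"], ["size"], ["mode"]],
    "<value>": [["<number>"], ["<string>"]],
    "<number>": [[str(n)] for n in (0, 1, 255, 65535)],   # boundary-ish numbers
    "<string>": [["abc"], ["A" * 1024]],                   # long string as a boundary case
}

# Anomalies injected in place of valid expansions: mostly-valid input overall.
ANOMALIES = ["", "\x00", "%n%n%n", "A" * 65536, "-1"]

def generate(symbol="<message>", anomaly_rate=0.05):
    if symbol not in GRAMMAR:
        return symbol                          # terminal: emit it literally
    if random.random() < anomaly_rate:
        return random.choice(ANOMALIES)        # occasionally inject an anomaly
    expansion = random.choice(GRAMMAR[symbol])
    return "".join(generate(s, anomaly_rate) for s in expansion)

for _ in range(5):
    print(repr(generate()))
</syntaxhighlight>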
Protocol-based fuzzing of implementations of published specifications has two limitations: 1) testing cannot proceed until the specification is relatively mature, since a specification is a prerequisite for writing such a fuzzer; and 2) many useful protocols are proprietary, or involve proprietary extensions to published protocols. If fuzzing is based only on published specifications, test coverage for new or proprietary protocols will be limited or nonexistent.
Fuzz testing can be combined with other testing techniques. White-box fuzzing uses [[symbolic execution]] and [[constraint solving]].<ref name="AutoDO-14" /> Evolutionary fuzzing leverages feedback from a heuristic (e.g., [[code coverage]] in grey-box harnessing,<ref name="AutoDO-15" /> or modeled attacker behavior in black-box harnessing<ref name="AutoDO-16" />), effectively automating the approach of ''[[exploratory testing]]''.
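A minimal grey-box loop can be sketched in Python using line coverage as the feedback heuristic. The <code>parse</code> function here is a contrived stand-in for an in-process target whose bug is guarded by a four-byte magic value, one comparison per line, so that each matched byte yields new coverage:
<syntaxhighlight lang="python">
import random
import sys

def parse(data: bytes):
    """Contrived stand-in for the function under test."""
    if len(data) >= 4:
        if data[0] == ord("F"):
            if data[1] == ord("U"):
                if data[2] == ord("Z"):
                    if data[3] == ord("Z"):
                        raise ValueError("reached the buggy branch")

def run_with_coverage(func, data):
    """Run func(data) while recording the set of executed (file, line) pairs."""
    covered = set()
    def tracer(frame, event, arg):
        if event == "line":
            covered.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        func(data)
        crashed = False
    except Exception:
        crashed = True
    finally:
        sys.settrace(None)
    return covered, crashed

corpus = [b"hello"]   # any seed input
seen = set()
for trial in range(200000):
    child = bytearray(random.choice(corpus))
    child[random.randrange(len(child))] = random.randrange(256)  # one-byte mutation
    covered, crashed = run_with_coverage(parse, bytes(child))
    if crashed:
        print(f"trial {trial}: crashing input {bytes(child)!r}")
        break
    if not covered <= seen:        # new coverage: promote the input to the corpus
        corpus.append(bytes(child))
        seen |= covered
</syntaxhighlight>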
==Reproduction and isolation==
Test case reduction is the process of extracting minimal [[test case]]s from an initial test case.<ref name="AutoDO-17" /><ref name="AutoDO-18" /> Test case reduction may be done manually, or using software tools, and usually involves a [[divide-and-conquer algorithm]], wherein parts of the test are removed one by one until only the essential core of the test case remains.
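The divide-and-conquer idea can be sketched as follows; <code>still_fails</code> is a placeholder for re-running the target on a candidate input (a toy substring check in this sketch):
<syntaxhighlight lang="python">
def still_fails(data: bytes) -> bool:
    """Placeholder: re-run the target and report whether the bug still reproduces."""
    return b"FUZZ" in data   # toy predicate for the sketch

def reduce(data: bytes) -> bytes:
    """Try removing chunks, halving the chunk size, while the failure persists."""
    chunk = len(data) // 2
    while chunk >= 1:
        i = 0
        while i < len(data):
            candidate = data[:i] + data[i + chunk:]   # remove one chunk
            if candidate and still_fails(candidate):
                data = candidate                      # keep the smaller test case
            else:
                i += chunk                            # chunk was essential; move on
        chunk //= 2
    return data

crash = b"xxxxFUZZyyyyy"
print(reduce(crash))   # -> b'FUZZ'
</syntaxhighlight>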
To make errors reproducible, fuzzing software will often record the input data it produces, usually before applying it to the software; the test data is then preserved even if the computer crashes outright. If the fuzz stream is [[pseudo-random number]]-generated, the seed value can be stored instead to reproduce the fuzz attempt, as in the sketch below. Once a bug is found, some fuzzing software will help to build a [[test case]] for [[debug]]ging, using test case reduction tools such as Delta or Lithium.
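A sketch of the seed-recording idea, with illustrative names: the seed is printed before any test case is generated, so a crashing run can be replayed exactly by passing the same seed back in:
<syntaxhighlight lang="python">
import os
import random
import sys

def fuzz_one(rng: random.Random) -> bytes:
    """Derive one test case entirely from the given PRNG, so it is seed-deterministic."""
    return bytes(rng.randrange(256) for _ in range(rng.randrange(1, 64)))

def run(seed: int) -> None:
    rng = random.Random(seed)
    case = fuzz_one(rng)
    # ... feed `case` to the target here ...
    print(f"seed={seed} produced {len(case)} bytes")

if len(sys.argv) > 1:
    run(int(sys.argv[1]))            # replay a recorded seed
else:
    seed = int.from_bytes(os.urandom(8), "big")
    print(f"seed={seed}")            # log the seed before fuzzing begins
    run(seed)
</syntaxhighlight>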
==Advantages and disadvantages==
The main problem with fuzzing to find program faults is that it generally only finds very simple faults. The computational complexity of the software testing problem is of [[big-oh|exponential order]] (<math>O(c^n)</math>, <math>c>1</math>) and every fuzzer takes shortcuts to find something interesting in a timeframe that a human cares about. A primitive fuzzer may have poor [[code coverage]]; for example, if the input includes a [[checksum]] which is not properly updated to match other random changes, only the checksum validation code will be verified. Code coverage tools are often used to estimate how "well" a fuzzer works, but these are only guidelines to fuzzer quality. Every fuzzer can be expected to find a different set of bugs.
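The checksum problem can be side-stepped by recomputing the checksum after each mutation, so the mutated input still passes validation and reaches deeper code. The trailing 4-byte CRC-32 layout below is an assumed toy format for the sketch:
<syntaxhighlight lang="python">
import random
import struct
import zlib

def fix_checksum(packet: bytes) -> bytes:
    """Recompute a trailing CRC-32 over the payload (toy format: payload + 4-byte CRC)."""
    payload = packet[:-4]
    return payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)

def mutate(packet: bytes) -> bytes:
    buf = bytearray(packet)
    pos = random.randrange(len(buf) - 4)   # mutate the payload only, not the CRC field
    buf[pos] ^= 1 << random.randrange(8)
    return fix_checksum(bytes(buf))        # keep the checksum valid after mutation

payload = b"example payload"
packet = payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)
mutated = mutate(packet)
# The mutated packet still carries a correct CRC, so it survives validation.
assert mutated[-4:] == struct.pack(">I", zlib.crc32(mutated[:-4]) & 0xFFFFFFFF)
</syntaxhighlight>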
On the other hand, bugs found using fuzz testing are sometimes severe, exploitable bugs that could be used by a real attacker. Discoveries have become more common as fuzz testing has become more widely known, as the same techniques and tools are now used by attackers to exploit deployed software. This is a major advantage over binary or source auditing, or even fuzzing's close cousin, [[fault injection]], which often relies on artificial fault conditions that are difficult or impossible to exploit.
The randomness of inputs used in fuzzing is often seen as a disadvantage, as catching a [[boundary value analysis|boundary value]] condition with random inputs is highly unlikely; most modern fuzzers address this by using [[deterministic algorithm]]s based on user inputs.
Fuzz testing enhances [[software security]] and [[safety engineering|software safety]] because it often finds odd oversights and defects which human testers would fail to find, and even careful human test designers would fail to create tests for.
==See also==
{{Portal|Software Testing}}
*[[Boundary value analysis]]
==References==
{{reflist|30em|refs=
<ref name="sutton" >{{ cite book | isbn = 0-321-44611-9 | title = Fuzzing: Brute Force Vulnerability Discovery'' | author = Michael Sutton, Adam Greene, Pedram Amini | publisher = Addison-Wesley | year = 2007 }}</ref> | |||
<ref name="neystadt" >{{ cite web | author = John Neystadt | title = Automated Penetration Testing with White-Box Fuzzing | url = http://msdn.microsoft.com/en-us/library/cc162782.aspx | publisher = Microsoft | date = February 2008 | accessdate = 2009-05-14 }}</ref> | |||
<ref name="AutoDO-1">Barton Miller (2008). "Preface". In Ari Takanen, Jared DeMott and Charlie Miller, ''Fuzzing for Software Security Testing and Quality Assurance'', ISBN 978-1-59693-214-2</ref> | |||
<ref name="AutoDO-2">{{ cite web | title = Fuzz Testing of Application Reliability | url = http://pages.cs.wisc.edu/~bart/fuzz/ | publisher = University of Wisconsin-Madison | accessdate = 2009-05-14 }}</ref> | |||
<ref name="AutoDO-3">{{ cite web | url = http://www.folklore.org/StoryView.py?story=Monkey_Lives.txt | title = Macintosh Stories: Monkey Lives | publisher = Folklore.org | date = 1999-02-22 | accessdate = 2010-05-28 }}</ref> | |||
<ref name="AutoDO-4">{{ cite web | title = crashme | url = http://crashme.codeplex.com/ | work = CodePlex | accessdate = 2012-06-26 }}</ref> | |||
<ref name="AutoDO-5">{{ cite web | url = http://pages.cs.wisc.edu/~bart/fuzz/fuzz-nt.html | author = Justin E. Forrester and Barton P. Miller | title = An Empirical Study of the Robustness of Windows NT Applications Using Random Testing }}</ref> | |||
<ref name="AutoDO-6">{{ cite web | url = http://www.squarefree.com/2007/08/02/fuzzing-for-correctness/ | author = Jesse Ruderman | title = Fuzzing for correctness }}</ref> | |||
<ref name="AutoDO-7">{{ cite web | url = http://www.squarefree.com/2008/12/23/fuzzing-tracemonkey/ | author = Jesse Ruderman | title = Fuzzing TraceMonkey }}</ref> | |||
<ref name="AutoDO-8">{{ cite web | url = http://www.squarefree.com/2008/12/23/differences/ | author = Jesse Ruderman | title = Some differences between JavaScript engines }}</ref> | |||
<ref name="AutoDO-9">{{ cite web | url = http://wurldtech.com/resources/SB_002_Robustness_Testing_With_Achilles.pdf | title = Robustness Testing Of Industrial Control Systems With Achilles | format = PDF | date = | accessdate = 2010-05-28 }}{{Dead link|date=November 2010|bot=H3llBot}}</ref> | |||
<ref name="AutoDO-10">{{ cite web | url = http://www.amazon.com/dp/1850328803 | title = Software Testing Techniques by Boris Beizer. International Thomson Computer Press; 2 Sub edition (June 1990) | publisher = Amazon.com | date = | accessdate = 2010-05-28 }}</ref> | |||
<ref name="AutoDO-11">{{ cite web | url = http://www.vtt.fi/inf/pdf/publications/2001/P448.pdf | title = Kaksonen, Rauli. (2001) A Functional Method for Assessing Protocol Implementation Security (Licentiate thesis). Espoo. Technical Research Centre of Finland, VTT Publications 447. 128 p. + app. 15 p. ISBN 951-38-5873-1 (soft back ed.) ISBN 951-38-5874-X (on-line ed.). | format = PDF | date = | accessdate = 2010-05-28 }}</ref> | |||
<ref name="AutoDO-12">{{ cite web | url = http://www.amazon.com/dp/0471183814 | title = Software Fault Injection: Inoculating Programs Against Errors by Jeffrey M. Voas and Gary McGraw | publisher = John Wiley & Sons | date = January 28, 1998 }}</ref> | |||
<ref name="AutoDO-13">{{ cite web | url = http://usenix.org/events/lisa06/tech/slides/kaminsky.pdf | author = Dan Kaminski | title = Black Ops 2006 | year = 2006 }}</ref> | |||
<ref name="AutoDO-14">{{ cite web | url = http://people.csail.mit.edu/akiezun/pldi-kiezun.pdf | title = Grammar-based Whitebox Fuzzing | publisher = Microsoft Research | author = Patrice Godefroid, Adam Kiezun, Michael Y. Levin }}</ref> | |||
<ref name="AutoDO-15">{{ cite web | url = http://www.vdalabs.com/tools/efs_gpf.html | title = VDA Labs }}</ref> | |||
<ref name="AutoDO-16">{{ cite web | url = http://car-online.fr/en/spaces/fabien_duchene/publications/2012-04-SecTest-ICST/ | title = XSS Vulnerability Detection Using Model Inference Assisted Evolutionary Fuzzing }}</ref> | |||
<ref name="AutoDO-17">{{ cite web | url = http://www.webkit.org/quality/reduction.html | title = Test Case Reduction | date = 2011-07-18 }}</ref> | |||
<ref name="AutoDO-18">{{ cite web | url = https://www-304.ibm.com/support/docview.wss?uid=swg21084174 | title = IBM Test Case Reduction Techniques | date = 2011-07-18 }}</ref> | |||
}} | |||
==Further reading==
*A. Takanen, J. DeMott, C. Miller, ''Fuzzing for Software Security Testing and Quality Assurance'', 2008, ISBN 978-1-59693-214-2
*H. Pohl, [http://www.softscheck.com/publications/softScheck%20Pohl%20Cost-Effective%20Identification%20of%20Less-Than%20Zero-Day%20Vulnerabilities%20WPE.pdf ''Cost-Effective Identification of Zero-Day Vulnerabilities with the Aid of Threat Modeling and Fuzzing''], 2011
==External links==
*[http://www.cs.wisc.edu/~bart/fuzz University of Wisconsin Fuzz Testing (the original fuzz project)] Source of papers and fuzz software.
*[http://iac.dtic.mil/iatac/download/Vol10_No1.pdf Look out! It's the Fuzz! (IATAC IAnewsletter 10-1)]
*[http://video.google.com/videoplay?docid=6509883355867972121 Designing Inputs That Make Software Fail], conference video including fuzz testing
*[http://www.ee.oulu.fi/research/ouspg/ Link to the Oulu (Finland) University Secure Programming Group]
*[http://docs.google.com/viewer?url=https%3A%2F%2Fgithub.com%2Fs7ephen%2FRuxxer%2Fraw%2Fmaster%2Fpresentations%2FRuxxer.ppt Building 'Protocol Aware' Fuzzing Frameworks]
[[Category:Software testing]]
[[Category:Computer security procedures]]