{{for|the scientific journal|Natural Computing (journal)}}
'''Natural computing''',<ref name="handbook_NC">G.Rozenberg, T.Back, J.Kok, Editors, Handbook of Natural Computing, Springer Verlag, 2012</ref> also called '''natural computation''', is a term introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials (e.g., molecules) to compute. The main fields of research that compose these three branches are [[artificial neural networks]], [[evolutionary algorithms]], [[swarm intelligence]], [[artificial immune systems]], fractal geometry, [[artificial life]], [[DNA computing]], and [[quantum computing]], among others.
 
Computational paradigms studied by natural computing are abstracted from natural phenomena as diverse as [[self-replication]], the functioning of the [[brain]], [[Darwinian evolution]], [[group behavior]], the [[immune system]], the defining properties of life forms, [[cell membranes]], and [[morphogenesis]].  
Besides traditional [[electronic hardware]], these computational paradigms can be implemented on alternative physical media such as biomolecules (DNA, RNA), or trapped-ion [[#Quantum computing|quantum computing]] devices.
 
Dually, one can view processes occurring in nature as information processing. Such processes include [[self-assembly]],
[[developmental process]]es, [[gene regulation]] networks, [[protein-protein interaction]] networks, biological transport ([[active transport]], [[passive transport]]) networks, and [[gene assembly]] in [[unicellular organism]]s. Efforts to
understand biological systems also  include engineering of semi-synthetic organisms, and understanding the universe itself from the point of view of information processing. Indeed, the idea was even  advanced that information is more fundamental than matter or energy.  
The Zuse-Fredkin thesis, dating back to the 1960s, states that the entire universe is a huge [[cellular automaton]] which continuously updates its rules.<ref name="Fredkin90">Fredkin, F. Digital mechanics: An informational process based on reversible universal CA. Physica D 45 (1990) 254-270</ref><ref name="Zuse67">Zuse, K. Rechnender Raum. Elektronische Datenverarbeitung 8 (1967) 336-344</ref>
Recently it has been suggested that the whole universe is a [[quantum computer]] that computes its own behaviour.<ref name="Lloyd06">Lloyd, S. Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos. Knopf, 2006</ref>
 
== Nature-inspired models of computation ==
 
The most established "classical" nature-inspired models of computation are cellular automata, neural computation, and evolutionary computation. More recent computational systems abstracted from natural processes include swarm intelligence, artificial immune systems,
membrane computing,  and amorphous computing.
Many of the major methods and algorithms are nature-inspired [[metaheuristic]] algorithms,<ref name="Yang08">Yang, X.-S., Nature-inspired metaheuristic algorithms, Luniver Press, (2008).</ref> including cellular automata, evolutionary computation, swarm intelligence, and others. Detailed reviews can be found in many books.<ref name="Olarius">Olarius S., Zomaya A. Y., Handbook of Bioinspired Algorithms and Applications, Chapman & Hall/CRC, 2005.</ref><ref name="de Castro">de Castro, L. N., Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications, CRC Press, 2006.</ref>
 
=== Cellular automata ===
 
: ''Further information: [[Cellular automaton]]''
 
A cellular automaton is a [[dynamical system]] consisting of a regular grid of cells. Space and time are discrete and each of the cells can be in a finite number of [[state (computer science)|states]]. The cellular automaton updates the states of its cells synchronously according to transition rules given ''a priori''. The next state of a cell is computed by a transition rule and depends only on its current state and the states of its neighbors.
 
[[Conway's game of life]] is one of the best-known examples of cellular automata, shown to be [[Turing completeness|computationally universal]]. Cellular automata have been applied to modelling a variety of phenomena such as communication, growth, reproduction, competition, evolution and other physical and biological processes.
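
As an illustration (not drawn from the cited references), a minimal Python sketch of one synchronous update step of the Game of Life could look as follows; the grid size, the toroidal boundary treatment, and the function names are choices made here for exposition only.

<syntaxhighlight lang="python">
# Illustrative sketch: one synchronous update step of Conway's Game of Life
# on a finite grid with periodic (toroidal) boundaries.

def life_step(grid):
    """Return the next generation of a 2-D grid of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges.
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            # Transition rule: a live cell survives with 2 or 3 live
            # neighbours; a dead cell becomes alive with exactly 3.
            nxt[r][c] = 1 if (live == 3 or (live == 2 and grid[r][c])) else 0
    return nxt

# A horizontal "blinker" in a 5x5 grid becomes vertical after one step.
grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1
print(life_step(grid))
</syntaxhighlight>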
 
=== Neural computation ===
 
: ''Further information: [[Neural computation]]''
 
Neural computation is the field of research that emerged from the comparison between [[computer|computing machines]] and the human [[nervous system]].<ref name="Neumann58">von Neumann, J. The Computer and the Brain. Yale University Press, 1958</ref>
This field aims both to understand  how the '''[[brain]]''' of [[living organisms]] works 
([[brain theory]] or [[computational neuroscience]]), and to design efficient algorithms  based on the principles of how the human brain processes information (Artificial Neural Networks, ANN <ref name="Arbib03">Arbib, M., editor. The Handbook of Brain Theory and Neural Networks. MIT Press, 2003.</ref>).
 
An [[artificial neural network]] is a network of [[artificial neurons]].<ref name="Rojas96">Rojas, R. Neural Networks: A Systematic Introduction. Springer, 1996</ref>
An artificial neuron ''A'' is equipped with a function <math>f_A</math>, receives ''n'' [[real number|real-valued]] inputs <math>x_1, x_2, \ldots, x_n</math> with respective '''[[weight function|weights]]''' <math>w_1, w_2, \ldots, w_n</math>, and outputs <math>f_A(w_1x_1 + w_2x_2 + \ldots + w_nx_n)</math>. Some neurons are selected to be the output neurons, and the network function is the vector-valued function that maps the ''n'' input values to the outputs of the ''m'' selected output neurons.
Note that different choices of weights produce different network functions for the same inputs. Back-propagation is a [[supervised learning|supervised learning method]] by which the weights of the connections in the network are repeatedly adjusted so as to minimize the difference between the vector of actual outputs and the vector of desired outputs. [[machine learning|Learning algorithm]]s based on [[backpropagation|backwards propagation of errors]] can be used to find optimal weights for a given [[network topology]] and set of input-output pairs.
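
The following minimal Python sketch, provided for illustration only, computes the output of a single artificial neuron as defined above and performs one gradient-based weight update in the spirit of back-propagation; the sigmoid activation, the learning rate, and all names are illustrative assumptions rather than part of any cited method.

<syntaxhighlight lang="python">
import math

# Illustrative sketch of a single artificial neuron: it applies its
# activation function f_A to the weighted sum of its inputs.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(weights, inputs, activation=sigmoid):
    return activation(sum(w * x for w, x in zip(weights, inputs)))

# One weight update by gradient descent on the squared error of a single
# neuron (the "delta rule"), the building block of back-propagation.
def delta_rule_update(weights, inputs, target, learning_rate=0.1):
    y = neuron_output(weights, inputs)
    error = y - target
    gradient = error * y * (1.0 - y)           # derivative of the sigmoid
    return [w - learning_rate * gradient * x    # move against the gradient
            for w, x in zip(weights, inputs)]

w = [0.5, -0.3, 0.8]
x = [1.0, 2.0, -1.0]
print(neuron_output(w, x))
print(delta_rule_update(w, x, target=1.0))
</syntaxhighlight>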
 
=== Evolutionary computation ===
 
: ''Further information: [[Evolutionary computation]]''
 
Evolutionary computation<ref name=BFM97>B&auml;ck, T., Fogel, D., Michalewicz, Z., editors. Handbook of Evolutionary Computation. IOP Publishing, U.K., 1997</ref> is a computational paradigm inspired by [[Darwinian evolution]].
 
An artificial evolutionary system is a computational system  based on the notion of simulated evolution. It comprises a constant- or variable-size population of individuals, a [[fitness (biology)|fitness criterion]], and genetically inspired operators that produce the next '''[[generation]]''' from the current one.
The initial population is typically generated randomly or heuristically, and typical operators
are [[mutation]] and [[genetic recombination|recombination]].  At each step, the individuals are evaluated according to the given fitness function ([[survival of the fittest]]). The next generation is obtained from selected individuals (parents) by using  genetically inspired operators. The choice of parents can be guided by a  selection operator which reflects the biological principle of [[mate selection]].  This  process of simulated [[evolution]] eventually converges towards a nearly optimal population of individuals, from the point of view of the fitness function.
 
The study of evolutionary systems has historically evolved along three main branches:
[[Evolution strategies]] provide a solution to [[optimization (mathematics)|parameter optimization problems]] for real-valued as well as discrete and mixed types of parameters. 
[[Evolutionary programming]] originally aimed at creating  optimal "intelligent agents" modelled, e.g., as finite state machines.
[[Genetic algorithms]]<ref name="Koza92">Koza, J. [http://www.ru.lv/~peter/zinatne/ebooks/MIT%20-%20Genetic%20Programming.pdf Genetic Programming: On the Programming of Computers by Means of Natural Selection]. MIT Press, 1992</ref> applied  the idea of evolutionary computation to the problem of finding a (nearly-)optimal solution to a given problem. Genetic algorithms initially consisted of an input population of individuals encoded as fixed-length bit strings, the genetic operators mutation (bit flips) and recombination (combination of a prefix of a parent with the suffix of the other),  and a problem-dependent fitness function.
Genetic algorithms have also been used to evolve computer programs, an approach known as [[genetic programming]], and today they are applied to real-valued parameter optimization problems as well as to many types of [[combinatorial tasks]].
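
A minimal Python sketch of a genetic algorithm on fixed-length bit strings, with bit-flip mutation and prefix/suffix recombination as described above, is given below for illustration; the OneMax fitness function, the population size, and the mutation rate are arbitrary choices made here for exposition, not values prescribed by the references.

<syntaxhighlight lang="python">
import random

# Illustrative genetic algorithm on fixed-length bit strings, using a toy
# fitness function (the number of 1s, the "OneMax" problem).

def fitness(individual):
    return sum(individual)

def mutate(individual, rate=0.01):
    # Bit-flip mutation: each bit is flipped independently with small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def recombine(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))   # prefix of one parent ...
    return parent_a[:cut] + parent_b[cut:]     # ... with the suffix of the other

def evolve(pop_size=30, length=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional ("roulette wheel") parent selection.
        weights = [fitness(ind) + 1 for ind in population]
        parents = random.choices(population, weights=weights, k=2 * pop_size)
        population = [mutate(recombine(parents[2 * i], parents[2 * i + 1]))
                      for i in range(pop_size)]
    return max(population, key=fitness)

print(fitness(evolve()))   # typically close to the maximum of 20
</syntaxhighlight>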
 
=== Swarm intelligence ===
 
[[Swarm intelligence]],<ref>Engelbrecht, A. Fundamentals of Computational Swarm Intelligence. Wiley and Sons, 2005.</ref> sometimes referred to as [[collective intelligence]], is defined as the problem-solving behavior that emerges from the interaction of [[intelligent agent|individual agents]] (e.g., [[bacteria]], [[ants]], [[termites]], [[bees]], [[spiders]], [[fish]], [[birds]]) which communicate with other agents by acting on their [[neighborhood (mathematics)|local environments]].
 
[[Particle swarm optimization]] applies this idea  to the problem of finding an optimal solution to a given problem
by a search through a  (multi-dimensional) [[solution space]].  The initial set-up is a swarm of ''particles'', each representing a  possible solution to the problem. Each particle has  its own  [[velocity]]  which depends  on its previous velocity (the inertia component), the tendency towards the past personal best position (the nostalgia component), and  its tendency towards  a global neighborhood  optimum or local neighborhood optimum (the social component). Particles thus move through a multidimensional space and  eventually converge towards a point between the [[maxima and minima|global best]] and their personal best.
Particle swarm optimization algorithms have been applied to various optimization problems, and to [[unsupervised learning]], [[game learning]], and [[scheduling (computing)|scheduling]] applications.
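
For illustration, a minimal Python sketch of the particle swarm update described above follows; the inertia and acceleration coefficients are common textbook choices rather than prescribed values, and all function names are hypothetical.

<syntaxhighlight lang="python">
import random

# Illustrative particle swarm optimization with inertia, "nostalgia"
# (personal best) and social (global best) components.

def pso_minimize(f, dim, n_particles=20, iterations=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=f)                    # global best position
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]                         # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # nostalgia
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Minimize the sphere function f(x) = sum(x_i^2); the optimum is the origin.
print(pso_minimize(lambda x: sum(v * v for v in x), dim=3))
</syntaxhighlight>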
 
In the  same vein, [[ant colony optimization|ant algorithms]] model the  foraging behaviour of ant colonies.
To find the best path between the nest and a source of food, ants rely on indirect communication: an ant lays a [[pheromone]] trail on the way back to the nest after finding food, and follows the concentration of pheromones when searching for food. Ant algorithms have been successfully applied to a variety of combinatorial optimization problems over discrete search spaces.
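
A simplified Python simulation of this pheromone mechanism, in the spirit of the classical two-path ("double bridge") setting, is sketched below for illustration; the evaporation rate and deposit rule are illustrative assumptions, and the sketch is not a full ant colony optimization algorithm.

<syntaxhighlight lang="python">
import random

# Illustrative pheromone dynamics on two paths between nest and food: ants
# choose a path with probability proportional to its pheromone level and
# deposit pheromone inversely proportional to the path length.

def double_bridge(short_len=1.0, long_len=2.0, ants=100, rounds=50,
                  evaporation=0.1):
    pheromone = {"short": 1.0, "long": 1.0}
    lengths = {"short": short_len, "long": long_len}
    for _ in range(rounds):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(ants):
            total = pheromone["short"] + pheromone["long"]
            path = "short" if random.random() < pheromone["short"] / total else "long"
            deposits[path] += 1.0 / lengths[path]   # shorter path, more pheromone
        for path in pheromone:
            pheromone[path] = (1 - evaporation) * pheromone[path] + deposits[path]
    return pheromone

print(double_bridge())   # pheromone concentrates on the short path
</syntaxhighlight>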
 
=== Artificial immune systems ===
 
Artificial immune systems  (a.k.a. immunological computation or [[immunocomputing]]) are computational systems inspired by the natural immune systems of biological organisms.
 
Viewed as an information processing system, the [[immune system|natural immune system]] of organisms performs many complex tasks in a [[parallel computation|parallel]] and [[distributed computing|distributed]] fashion.<ref name="Dasgupta98">Dasgupta, D. editor. Artificial Immune Systems and Their Applications. Springer, 1998</ref>
These include  distinguishing  between self and [[exogenous antigen|nonself]],<ref name=DeCastro>de Castro, L., Timmis, J. [http://www.springer.com/computer/artificial/book/978-1-85233-594-6 Artificial Immune Systems: A New Computational Intelligence Approach]. Springer, 2002.</ref> [[Neutralisation (immunology)|neutralization]] of nonself [[pathogens]] ([[viruses]], bacteria, [[fungi]], and [[parasitism|parasites]]), [[learning]], [[memory]], [[associative retrieval]], [[homeostasis|self-regulation]], and [[fault-tolerance]].
[[Artificial immune systems]] are abstractions of the natural immune system, emphasizing these computational aspects.  
Their applications include [[antivirus software|computer virus detection]], [[anomaly detection]] in a time series of data, [[fault diagnosis]], [[pattern recognition]], machine learning, [[bioinformatics]], optimization, [[robotics]] and [[Control theory|control]].
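
As one concrete illustration, the following Python sketch implements a simplified negative-selection procedure, a classical artificial-immune-system technique for distinguishing self from nonself; the bit-string representation, the similarity-based matching rule, and the threshold are assumptions made here for exposition only.

<syntaxhighlight lang="python">
import random

# Illustrative negative selection: random detectors that match "self"
# samples are discarded; surviving detectors flag nonself (anomalous) data.

def matches(detector, sample, threshold=6):
    # Simplified matching rule: Hamming similarity above a threshold.
    return sum(d == s for d, s in zip(detector, sample)) >= threshold

def generate_detectors(self_set, length=8, n_detectors=50):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = tuple(random.randint(0, 1) for _ in range(length))
        if not any(matches(candidate, s) for s in self_set):   # negative selection
            detectors.append(candidate)
    return detectors

self_set = [(0, 0, 0, 0, 1, 1, 1, 1), (0, 0, 0, 0, 0, 1, 1, 1)]
detectors = generate_detectors(self_set)
anomaly = (1, 1, 1, 1, 0, 0, 0, 0)
print(any(matches(d, anomaly) for d in detectors))   # likely True
</syntaxhighlight>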
 
=== Membrane computing ===
 
[[Membrane computing]] investigates computing models abstracted from the [[cell compartment|compartmentalized structure]] of living cells effected by [[cell membrane|membranes]].<ref name="Paun02">Paun, G. Membrane Computing: An Introduction. Springer, 2002</ref>
A generic membrane system (P-system) consists of cell-like compartments (regions) delimited by ''membranes'' and placed in a [[nested hierarchy|nested hierarchical]] structure. Each membrane-enveloped region contains objects, transformation rules which modify these objects, and transfer rules which specify whether objects are transferred outside or stay inside the region.
Regions communicate with each other via the transfer of objects.
The computation by a membrane system starts with an initial configuration, where the number ([[Multiplicity (mathematics)|multiplicity]]) of each object is set to some value for each region ([[multiset|multiset of objects]]).
It proceeds by choosing, [[nondeterminism|nondeterministically]] and in a [[parallelism (computing)|maximally parallel manner]],
which rules are applied to which objects. The output of the computation is collected from an ''a priori'' determined output region.
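
The following Python sketch illustrates, in a highly simplified form, one computation step of a single-region membrane system with multiset rewriting rules; communication between regions is omitted, and maximal parallelism is approximated by applying rules repeatedly in a nondeterministic order until none remains applicable. It is an expository sketch, not a faithful P-system implementation.

<syntaxhighlight lang="python">
import random
from collections import Counter

# Illustrative single-region membrane step: objects form a multiset, and
# each rule consumes one multiset and produces another. Products become
# available only in the next step.

def applicable(rule, contents):
    left, _ = rule
    return all(contents[obj] >= n for obj, n in left.items())

def step(contents, rules):
    contents = Counter(contents)
    produced = Counter()
    rules = rules[:]
    random.shuffle(rules)                 # nondeterministic rule order
    progress = True
    while progress:
        progress = False
        for left, right in rules:
            if applicable((left, right), contents):
                contents.subtract(left)   # consume the left-hand side
                produced.update(right)    # defer products to the next step
                progress = True
    return contents + produced

# Rule 1: a a -> b ;  Rule 2: b -> c c
rules = [(Counter({"a": 2}), Counter({"b": 1})),
         (Counter({"b": 1}), Counter({"c": 2}))]
print(step(Counter({"a": 5, "b": 1}), rules))   # e.g. Counter({'b': 2, 'c': 2, 'a': 1})
</syntaxhighlight>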
 
Applications of membrane systems include machine learning, modelling of biological processes ([[photosynthesis]], certain [[signaling pathways]], [[quorum sensing]] in bacteria, cell-mediated [[Immunity (medical)|immunity]]), as well as computer science applications such as [[computer graphics]], [[public-key cryptography]], [[approximation algorithm|approximation]] and [[sorting algorithms]], and the analysis of various [[computationally hard problems]].
 
=== Amorphous computing ===
 
In biological organisms, [[morphogenesis]] (the development of  well-defined shapes and functional structures) is achieved by the interactions between cells  guided by the genetic ''program'' encoded in the organism's DNA.
 
Inspired by this idea, [[amorphous computing]] aims at engineering well-defined shapes and patterns, or coherent computational behaviours, from the local interactions of a multitude of simple, unreliable, irregularly placed, asynchronous, identically programmed computing elements (particles).<ref name="AAC00">Abelson, H., Allen, D., Coore, D., Hanson, C., Homsy, G., Knight Jr., T., Nagpal, R., Rauch, E., Sussman, G., Weiss, R. [http://portal.acm.org/citation.cfm?id=332842 Amorphous computing]. Communications of the ACM 43, 5 (May 2000), 74-82</ref>
As a  programming paradigm, the aim is to find  new [[abstraction (computer science)|programming techniques]] that would work well for amorphous computing environments.  Amorphous computing also plays an important role as the basis for "[[#Cellular computing|cellular computing]]" (see the topics [[synthetic biology]] and [[cellular computing]], below).
 
== Synthesizing nature by means of computing ==
 
=== Artificial life ===
 
[[Artificial life]] (ALife) is a research field whose ultimate goal is to understand the essential properties of living organisms<ref name="Langton90">Langton, C., editor. Artificial Life. Addison-Wesley Longman, 1990</ref> by building, within electronic computers or other artificial media, ''[[ab initio]]'' systems that exhibit properties normally associated only with living organisms.
Early examples include [[Lindenmayer systems]] (L-systems), which have been used to model plant growth and development. An L-system is a parallel rewriting system that starts with an initial word and applies its rewriting rules in parallel to all letters of the word.<ref name="RoSa80">Rozenberg, G. and Salomaa, A. The Mathematical Theory of L Systems. Academic Press, 1980</ref>
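
For illustration, a minimal Python sketch of parallel rewriting in an L-system follows, using Lindenmayer's original algae rules (A → AB, B → A); the function name is arbitrary.

<syntaxhighlight lang="python">
# Illustrative L-system: every letter of the current word is rewritten
# in parallel at each step.

def lsystem(axiom, rules, steps):
    word = axiom
    for _ in range(steps):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
    return word

algae_rules = {"A": "AB", "B": "A"}
print(lsystem("A", algae_rules, 5))   # ABAABABAABAAB (word lengths follow the Fibonacci numbers)
</syntaxhighlight>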
 
Pioneering experiments in artificial life included the design of evolving "virtual  block creatures"  acting in simulated environments with realistic features such as [[kinetics (physics)|kinetics]], [[Dynamics (mechanics)|dynamics]], [[gravity]], [[collision]], and [[friction]].<ref name="Brooks00">Brooks. R. [http://www.nature.com/doifinder/10.1038/35023200 Artificial life: from robot dreams to reality]. Nature 406 (2000), 945-947</ref>
These artificial creatures were selected for their ability to swim, walk, or jump, and they competed for a common limited resource (controlling a cube). The simulation resulted in the evolution of creatures exhibiting surprising behaviour: some developed hands to grab the cube, others developed legs to move towards the cube. This computational approach was further combined with rapid manufacturing technology to build physical versions of the robots that had evolved virtually.<ref name="LiPo00">Lipson, H., Pollack, J. [http://www.nature.com/nature/journal/v406/n6799/full/406974a0.html Automatic design and manufacture of robotic lifeforms]. Nature 406 (2000), 974-978</ref> This marked the emergence of the field of '''mechanical artificial life'''.
 
The field of [[#Synthetic biology|synthetic biology]] explores a biological implementation of similar ideas.
Other  research directions within the field of artificial life include [[artificial chemistry]] as well as traditionally biological phenomena explored  in artificial systems, ranging from  computational processes such as [[coevolution|co-evolutionary]] adaptation and development, to physical processes such as growth, [[self-replication]], and [[self-repair]].
 
== Nature-inspired novel hardware ==
 
All of the computational techniques mentioned above, while inspired by nature,  have been implemented until now mostly on traditional [[electronic hardware]]. In contrast, the two paradigms introduced here, [[#Molecular computing|molecular computing]] and [[#Quantum computing|quantum computing]], employ radically different types of hardware.
 
=== Molecular computing ===
 
[[Molecular computer|Molecular computing]] (a.k.a. biomolecular computing, biocomputing, biochemical computing, [[DNA computing]]) is a computational paradigm in which data is encoded as [[biomolecules]] such as [[DNA sequence|DNA strands]], and molecular biology tools act  on the data to perform various operations (e.g., [[arithmetic operation|arithmetic]] or [[logical operations]]).
The first experimental realization of a special-purpose molecular computer was the 1994 breakthrough experiment by [[Leonard Adleman]], who solved a 7-node instance of the [[Hamiltonian Path Problem]] solely by manipulating DNA strands in test tubes.<ref name="Adleman94">Adleman, L. [http://www.usc.edu/dept/molecular-science/papers/fp-sci94.pdf Molecular computation of solutions to combinatorial problems]. Science 266 (1994), 1021-1024</ref>
DNA computations start  from an initial input  encoded as a DNA sequence (essentially a sequence over the four-letter alphabet {A, C, G, T}),
and proceed by a succession of  bio-operations such as cut-and-paste (by [[restriction enzymes]] and [[ligases]]),
extraction of strands containing a certain subsequence (by using Watson-Crick complementarity), copy (by  using [[polymerase chain reaction]] that employs the polymerase enzyme), and read-out.<ref name="Kari97">Kari, L. [http://www.csd.uwo.ca/~lila/intelh.pdf DNA computing - the arrival of biological mathematics]. The Mathematical Intelligencer 19, 2 (1997) 9-22</ref>
Recent experimental research succeeded in solving more complex instances of [[NP-complete]] problems such as a 20-variable instance of  [[3SAT]], and wet DNA implementations of finite state machines with potential applications to the design of [[Targeted drug delivery|smart drugs]].
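
The following Python sketch simulates, purely in software and for illustration, two of the bio-operations mentioned above: taking the Watson-Crick (reverse) complement of a strand, and extracting from a pool the strands that contain a given subsequence (in the laboratory this is done with complementary probe strands; here it is mimicked directly on strings).

<syntaxhighlight lang="python">
# Illustrative in-silico sketch of two DNA-computing bio-operations.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def watson_crick_complement(strand):
    # Reverse complement: complement each base and reverse the orientation.
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def extract(pool, pattern):
    # Keep only the strands that contain the given subsequence.
    return [strand for strand in pool if pattern in strand]

pool = ["ATTACGCG", "GGGTACCC", "ATTAGGGG"]
print(watson_crick_complement("ATTACGCG"))   # CGCGTAAT
print(extract(pool, "ATTA"))                 # ['ATTACGCG', 'ATTAGGGG']
</syntaxhighlight>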
 
[[File:Selfassemble Sierpinski.jpg|thumb|400px|right|DNA tile self-assembly of a  Sierpinski triangle, starting from a '''seed''' obtained by the
DNA origami technique<ref name=murata>Fujibayashi, K., Hariadi, R., Park, S-H., Winfree, E., Murata, S. [http://www.dna.caltech.edu/Papers/sierpinski_ribbons_2007final.pdf Toward reliable algorithmic self-assembly of DNA tiles: A fixed-width cellular automaton pattern]. Nano Letters 8(7) (2007) 1791-1797.</ref>]]
One of the most notable contributions of research in this field is to the understanding of [[self-assembly]].<ref name="ReLa07">Reif, J. and LaBean, T. [http://portal.acm.org/citation.cfm?id=1284647 Autonomous programmable biomolecular devices using self-assembled DNA nanostructures]. Communications of the ACM 50, 9 (Sept. 2007), 46-53</ref>
Self-assembly is the [[:wikt:bottom-up|bottom-up]] process by which objects autonomously come together to form complex structures. Instances in nature abound, and include  [[atoms]] binding by chemical bonds to form [[molecules]], and molecules forming [[crystals]] or [[macromolecules]]. Examples of  self-assembly research  topics include self-assembled DNA nanostructures<ref name="Seeman07">Seeman, N. [http://www.scientificamerican.com/article.cfm?id=nanotechnology-and-the-do Nanotechnology and the double helix]. Scientific American Reports, 17. 3 (2007), 30-39</ref> such as [[Sierpinski triangle]]s<ref name="RPW04">Rothemund, P., Papadakis, N., Winfree, E. [http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0020424 Algorithmic self-assembly of DNA Sierpinski triangles]. PLoS Biology 2, 12 (December  2004)</ref> or arbitrary nanoshapes obtained using the  [[DNA origami]]<ref name="Rothemund06">Rothemund, P. [http://www.nature.com/nature/journal/v440/n7082/abs/nature04586.html Folding DNA to create nanoscale shapes and patterns]. Nature 440 (2006) 297-302.</ref> technique, and DNA nanomachines<ref name="">Bath, J., Turberfield, A. [http://www.nature.com/nnano/journal/v2/n5/full/nnano.2007.104.html DNA nanomachines]. Nature Nanotechnology 2 (May 2007), 275-284</ref> such as DNA-based circuits ([[binary counter]], [[bit-wise cumulative XOR]]), ribozymes for logic operations, molecular switches ([[DNA tweezers]]), and autonomous molecular motors ([[DNA walkers]]).
 
Theoretical research in molecular computing has yielded several novel models of DNA computing (e.g., [[splicing systems]], introduced by Tom Head as early as 1987), and their computational power has been investigated.<ref name="PRS98">Paun, G., Rozenberg, G., Salomaa, A. DNA Computing: New Computing Paradigms. Springer, 1998</ref> Various subsets of bio-operations are now known to be able to achieve the computational power of [[Turing machines]].
 
=== Quantum computing ===
 
: ''Further information: [[Quantum computing]]''
 
A quantum computer<ref name="Hirvensalo04">Hirvensalo, M. Quantum Computing, 2nd Ed. Springer, 2004</ref> processes data stored as quantum bits ([[qubits]]), and uses  quantum mechanical phenomena such as [[quantum superposition|superposition]] and [[quantum entanglement|entanglement]] to perform computations.
A qubit can hold a "0", a "1", or a quantum superposition of these.
A quantum computer  operates on qubits with [[quantum gate|quantum logic gates]].
[[Shor's algorithm|Shor's polynomial-time algorithm]] for factoring integers and [[Grover's algorithm]] for quantum database search, which offers a quadratic speedup, show that quantum computers could potentially possess a significant advantage over electronic computers.
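
For illustration, the following Python sketch simulates a single qubit on a classical computer: the state is a pair of complex amplitudes, a Hadamard gate puts the basis state |0⟩ into an equal superposition, and measurement probabilities are the squared magnitudes of the amplitudes. Such classical simulation scales exponentially with the number of qubits, which is precisely what a quantum computer avoids; the gate and variable names here are chosen for exposition.

<syntaxhighlight lang="python">
import math

# Illustrative single-qubit simulation: state = [amplitude of |0>, amplitude of |1>].

def apply_gate(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

HADAMARD = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
            [1 / math.sqrt(2), -1 / math.sqrt(2)]]

ket0 = [1 + 0j, 0 + 0j]                    # the basis state |0>
superposed = apply_gate(HADAMARD, ket0)    # (|0> + |1>) / sqrt(2)
probabilities = [abs(a) ** 2 for a in superposed]
print(probabilities)                       # about [0.5, 0.5]: "0" and "1" equally likely
</syntaxhighlight>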
 
[[Quantum cryptography]] is not based on the [[Computational complexity theory|complexity of the computation]], but  on the special properties of [[quantum information]], such as the fact that  quantum information cannot be measured reliably and any attempt at measuring it results in an unavoidable and irreversible disturbance.
A successful open-air experiment in quantum cryptography was reported in 2007, in which data was transmitted securely over a distance of 144&nbsp;km.<ref name="Ursin07">Ursin, R. et al. [http://www.nature.com/nphys/journal/v3/n7/abs/nphys629.html Entanglement-based quantum communication over 144 km]. Nature Physics 3 (2007) 481-486</ref>
[[Quantum teleportation]] is another promising application, in which a quantum state (not matter or energy) is transferred to an arbitrary distant location. Implementations of practical quantum computers are based on  various substrates such as  [[ion trap|ion-traps]],
[[superconductors]], [[nuclear magnetic resonance]], etc.
As of 2006, the largest  quantum computing experiment  used  liquid state nuclear magnetic resonance quantum information processors, and could operate on up to 12 qubits.<ref name="Negrevergne06">Negrevergne, C. et al. [http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PRLTAO000096000017170501000001&idtype=cvips&gifs=yes Benchmarking quantum control methods on a 12-qubit system]. Physical Review Letters 96:art170501, 2006</ref>
 
== Nature as information processing ==
 
The dual aspect of natural computation is  that it aims to understand nature by regarding natural phenomena as information processing.
The two main directions  of research in this area are [[#Systems biology|systems biology]] and [[#Synthetic biology|synthetic biology]].
 
=== Systems biology ===
 
''Further information: [[systems biology]]''
 
Computational systems biology (or simply systems biology) is an integrative and qualitative approach that investigates the complex communications and interactions taking place in biological systems.
Thus, in systems biology, the focus of the study is the [[interaction network]]s themselves and the properties of biological systems that arise due to these networks, rather than the individual components of functional processes in an organism.  
This type of  research  on organic components has focused strongly on four different interdependent interaction networks:<ref name="Car07">Cardelli, L. [http://lucacardelli.name/Bibliography.htm Abstract machines of systems biology]
Bulletin of the EATCS 93 (2007), 176-204</ref> gene-regulatory networks, biochemical networks, transport networks, and carbohydrate networks.
 
[[Gene regulatory networks]] comprise gene-gene interactions, as well as interactions between genes and other substances in the cell.
[[Gene]]s are transcribed into [[messenger RNA]] (mRNA), and then translated into [[protein]]s according to the [[genetic code]].
Each gene is associated with other DNA segments ([[promoter (biology)|promoter]]s, [[enhancer (genetics)|enhancer]]s, or [[silencer (DNA)|silencer]]s) that act as [[binding site]]s for [[activator (genetics)|activators]] or [[repressor]]s for [[gene transcription]].
Genes  interact with each other either through their gene products (mRNA, proteins) which can regulate gene transcription, or through  small [[RNA species]] that can directly regulate genes.
These [[gene-gene interactions]], together with genes' interactions with other substances in the cell, form the most basic interaction
network: the [[gene regulatory network]]s. They perform information processing tasks within the cell, including the assembly and maintenance of other networks. Models of gene regulatory networks include random and probabilistic [[Boolean network]]s, [[asynchronous automata]], and [[network motif]]s.
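
As an illustration of a Boolean network model of gene regulation, the following Python sketch synchronously updates a hypothetical three-gene ring of mutual repressors; the network and its update functions are invented here for exposition and do not correspond to any real gene regulatory network.

<syntaxhighlight lang="python">
# Illustrative synchronous Boolean network: each gene is ON (1) or OFF (0),
# and every gene's next state is a Boolean function of the current states
# of its regulators.

def boolean_network_step(state, update_functions):
    # Synchronous update: all genes read the *current* state.
    return tuple(f(state) for f in update_functions)

# Gene i is ON at the next step iff its repressor (the previous gene in the ring) is OFF now.
ring_of_repressors = [
    lambda s: int(not s[2]),
    lambda s: int(not s[0]),
    lambda s: int(not s[1]),
]

state = (1, 0, 0)
for _ in range(6):       # the trajectory cycles with period 6
    print(state)
    state = boolean_network_step(state, ring_of_repressors)
</syntaxhighlight>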
 
Another viewpoint is that the entire genomic regulatory system is a computational system,  a ''genomic computer''. This interpretation allows  one to compare human-made electronic computation with computation as it occurs in nature.<ref name="IDD07">Istrail, S., De-Leon, B-T., Davidson, E. [http://www.cs.brown.edu/research/pubs/pdfs/2007/Istrail-2007-RGC.pdf The regulatory genome and the computer]. Developmental Biology 310 (2007), 187-195</ref>
{| class="wikitable"
|+ A comparison between genomic and electronic computers
! !! Genomic computer !! Electronic computer
|-
! Architecture
| changeable || rigid
|-
! Components construction
| as-needed basis || from the start
|-
! Coordination
| causal coordination || temporal synchrony
|-
! Distinction between hardware and software
| No || Yes
|-
! Transport media
| molecules and ions || wires
|}
In addition, unlike a conventional computer, robustness  in a genomic computer is achieved by various [[feedback mechanism]]s  by which  poorly functional processes are rapidly degraded,  poorly functional cells are killed by [[apoptosis]], and poorly functional organisms are out-competed by more fit species.
 
[[Biochemical networks]] refer to the interactions between proteins, and they perform various mechanical and metabolic tasks inside a cell. Two or more proteins may bind to each other via binding of their interaction sites, and form a dynamic protein complex ([[complexation]]). These protein complexes may act as [[catalysts]] for other chemical reactions, or may chemically modify each other.
Such modifications cause changes to the available binding sites of proteins. There are tens of thousands of proteins in a cell, and they interact with each other. To describe interactions at such a massive scale, [[Kohn maps]]<ref name=Kohn>Kohn, K. [http://www.molbiolcell.org/cgi/content/full/10/8/2703 Molecular interaction map of the mammalian cell cycle control and DNA repair systems]. Molecular Biology of the Cell 10(8) (1999) 2703-2734.</ref> were introduced
as a graphical notation to depict molecular interactions in succinct pictures. Other approaches to describing protein-protein interactions accurately and succinctly include the use of [[textual bio-calculus]]<ref name="Nagasaki">Nagasaki, M., Onami, S., Miyano, S., Kitano, H. [http://recomb2000.ims.u-tokyo.ac.jp/Posters/pdf/15.pdf Bio-calculus: its concept and molecular interaction]. Genome Informatics 10 (1999) 133-143.</ref> or [[pi-calculus]] enriched with stochastic features.<ref name="ReSh02">Regev, A., Shapiro, E. [http://www.nature.com/nature/journal/v419/n6905/full/419343a.html Cellular abstractions: Cells as computation]. Nature 419 (2002) 343</ref>
 
[[Transport network (biology)|Transport network]]s refer to the separation and transport of substances mediated by lipid membranes.
Some lipids can self-assemble into biological membranes. A lipid membrane  consists of a [[lipid bilayer]] in which  proteins and other molecules are embedded, being able to travel along this layer. Through lipid bilayers, substances are transported between the inside and outside of membranes to interact with other molecules.  
Formalisms depicting  transport networks  include membrane systems and  [[brane calculi]].<ref name=Cardelli>Cardelli, L. [http://lucacardelli.name/Papers/Brane%20Calculi.pdf Brane calculi: Interactions of biological membranes]. In LNCS 3082, pages 257-280. Springer, 2005.</ref>
 
=== Synthetic biology ===
 
: ''Further information: [[synthetic biology]]''
 
Synthetic biology aims at engineering synthetic biological components, with the ultimate goal of assembling whole biological systems from their constituent components.  The history of synthetic biology can be traced back to the 1960s, when [[Francois Jacob]] and [[Jacques Monod]] discovered the  mathematical logic in gene regulation. Genetic engineering techniques, based on [[recombinant DNA]] technology,  are  a precursor of today's synthetic biology which  extends these techniques to entire systems of genes and gene products.
 
Along with the possibility of synthesizing longer and longer DNA strands, the prospect  of creating synthetic genomes with the purpose of building entirely artificial [[synthetic organism]]s became a reality.
Indeed, rapid assembly of chemically synthesized short DNA strands made it possible to generate a 5386bp synthetic genome of a virus.<ref name="Smith03">Smith, H., Hutchison III, C., Pfannkoch, C., and Venter, C. [http://www.pnas.org/content/100/26/15440.full Generating a synthetic genome by whole genome assembly: {phi}X174 bacteriophage from synthetic oligonucleotides]. ''PNAS 100'', 26 (2003), 15440-15445.</ref>
 
Alternatively, Smith et al. found about 100 genes that can be removed individually from the genome of ''[[Mycoplasma genitalium]]''.
This discovery  paves the way to the assembly of a minimal but still viable artificial genome consisting of the essential genes only.
 
A third approach to engineering  semi-synthetic cells is the construction of a single type of RNA-like molecule with the ability of self-replication.<ref name="SLS04">Sazani, P., Larralde, R., Szostak, J. [http://genetics.mgh.harvard.edu/szostakweb/publications/Szostak_pdfs/Sazani_et_al_2004_JACS_SI.pdf A small aptamer with strong and specific recognition of the triphosphate of ATP]. Journal of the American Chemical Society, 126(27) (2004) 8370-8371</ref> Such a molecule could be obtained by guiding the rapid evolution of an initial population of RNA-like molecules, by selection for the desired traits.
 
Another effort in this field is towards engineering multi-cellular systems by designing, e.g., [[cell signaling|cell-to-cell communication modules]] used to coordinate living bacterial cell populations.<ref name="WeKn01">Weiss, R., Knight, Jr., T. [http://www.springerlink.com/content/tavjkk4hymen6glj/ Engineered communications for microbial robotics]. In LNCS 2054, pages 1-16, Springer, 2001</ref>
 
=== Cellular computing ===
 
Computation in living cells (a.k.a. [[cellular computing]], or [[in-vivo computing]]) is another approach to understanding nature as computation.
One particular study in this area is that of the computational nature of gene assembly  in unicellular organisms called [[ciliate]]s.
Ciliates store a copy of their DNA  containing functional genes in the [[macronucleus]], and another "encrypted" copy in the [[micronucleus]]. Conjugation of two ciliates consists of the exchange of their micronuclear genetic information, leading to the formation of two new micronuclei, followed by each ciliate re-assembling the information from its new micronucleus to construct a new  functional macronucleus.
The latter process is called [[gene assembly]], or gene re-arrangement. It involves re-ordering some fragments of DNA ([[permutation]]s and possibly [[Chromosomal inversion|inversion]]) and deleting other fragments from the micronuclear copy.
From the computational point of view, the study of this gene assembly process led to many challenging research themes and results, such as the Turing universality of various models of this process.<ref>Landweber, L. and Kari, L. [http://scholarsportal.info/pdflinks/08110515452920340.pdf The evolution of cellular computing: Nature's solution to a computational problem]. ''Biosystems'', 52, 1/3 (1999) 3-13.</ref>
From the biological point of view, a plausible hypothesis about the "bioware" that implements the gene-assembly process was proposed, based on [[template guided recombination]].<ref>{{cite journal | last1 = Angeleska | first1 = A. | last2 = Jonoska | first2 = N. | last3 = Saito | first3 = M. | last4 = Landweber | first4 = L. | year = 2007 | title = RNA-guided DNA assembly | url = http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WMD-4NYD8XF-3&_user=940030&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000048763&_version=1&_urlVersion=0&_userid=940030&md5=41a35e4629fee3fde72940f1b0775cce | journal = [[Journal of Theoretical Biology]] | volume = 248 | issue = | pages = 706–720 | doi=10.1016/j.jtbi.2007.06.007}}</ref><ref>Prescott, D., Ehrenfeucht, A., and Rozenberg, G. [http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WMD-484VCX1-1&_user=940030&_coverDate=06%2F07%2F2003&_rdoc=6&_fmt=high&_orig=browse&_srch=doc-info(%23toc%236932%232003%23997779996%23424670%23FLA%23display%23Volume)&_cdi=6932&_sort=d&_docanchor=&_ct=13&_acct=C000048763&_version=1&_urlVersion=0&_userid=940030&md5=36f664865c20a4521bb61f931a19449b Template-guided recombination for IES elimination and unscrambling of genes in stichotrichous ciliates]. ''J. Theoretical Biology'' 222, 3 (2003), 323-330.</ref>
 
Other approaches to cellular computing include developing an ''[[in vivo]]'' programmable and autonomous finite-state automaton with ''[[Escherichia coli|E. coli]]'',<ref name="NSS06">Nakagawa, H., Sakamoto, K., Sakakibara, Y. [http://www.springerlink.com/content/x3j1x0573681g1t4/ Development of an ''in vivo'' computer based on Escherichia Coli]. In LNCS 3892, pages 203-212, Springer, 2006</ref> and designing and constructing ''in vivo'' cellular logic gates and genetic circuits that harness the cell's existing biochemical processes (see for example <ref name="z2010b">Zabet NR, Hone ANW, Chu DF  [http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12433 Design principles of transcriptional logic circuits]. In Artificial Life XII Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, pages 186-193. MIT Press, August 2010.</ref><ref name="z2010a">Zabet NR [http://www.cs.kent.ac.uk/pubs/2010/3098/index.html Towards Modular, Scalable and Optimal Design of Transcriptional Logic Systems].2010.</ref>).
 
== See also ==
 
* [[DNA computing]]
* [[Quantum computing]]
* [[Synthetic biology]]
* [[Turing machine]]
* [[Natural Computing (journal)|Natural Computing Journal]]
 
== Further reading ==
This article was written based on the following references with the kind permission of their authors:
* {{cite journal
| author = Lila Kari, Grzegorz Rozenberg
| title = The Many Facets of Natural Computing
| journal = Communications of the ACM
| volume = 51
|date=October 2008
| pages = 72–83
| url=http://portal.acm.org/citation.cfm?id=1400200
}}
* {{cite journal
| author = Leandro Nunes de Castro
| title = Fundamentals of Natural Computing: An Overview
| journal = Physics of Life Reviews
| volume = 4
|date=March 2007
| pages = 1–36
| url=http://linkinghub.elsevier.com/retrieve/pii/S1571064506000315
}}
 
Many of the constituent research areas of natural computing have their own specialized journals and books series.
Journals and book series dedicated to the broad field of Natural Computing include the journals [http://www.igi-global.com/ijncr International Journal of Natural Computing Research] (IGI Global), [http://www.springer.com/computer/foundations/journal/11047 Natural Computing] (Springer Verlag), [http://www.elsevier.com/wps/find/journaldescription.cws_home/505625/description#description Theoretical Computer Science, Series C: Theory of Natural Computing] (Elsevier), [http://www.springer.com/series/4190 the Natural Computing book series] (Springer Verlag), and the [http://www.springer.com/computer/foundations/book/978-3-540-92911-6 Handbook of Natural Computing] (G.Rozenberg, T.Back, J.Kok, Editors, Springer Verlag).
* {{cite journal | first1 = E. | last1 = Ridge | first2 = D. | last2 = Kudenko | first3 = D. | last3 = Kazakov | first4 = E. | last4 = Curry | id = {{citeseerx|10.1.1.64.3403}} | title = Moving Nature-Inspired Algorithms to Parallel, Asynchronous and Decentralised Environments | journal = Self-Organization and Autonomic Informatics (I) | year = 2005 | volume = 135 | pages = 35–49 }}
*''Swarms and Swarm Intelligence'' by Michael G. Hinchey, Roy Sterritt, and Chris Rouff.
 
== References ==
{{reflist}}
 
[[Category:Natural computation| ]]
[[Category:Theoretical computer science]]
