ITIL

ITIL, formerly known as the Information Technology Infrastructure Library, is a set of practices for IT service management (ITSM) that focuses on aligning IT services with the needs of the business. In its current form (known as the ITIL 2011 edition), ITIL is published as a series of five core volumes, each of which covers a different ITSM lifecycle stage. Although ITIL underpins ISO/IEC 20000 (previously BS 15000), the international standard for IT service management, there are some differences between the ISO 20000 standard and the ITIL framework.

ITIL describes processes, procedures, tasks, and checklists which are not organization-specific but can be applied by an organization to establish integration with the organization’s strategy, deliver value, and maintain a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure, and it is used to demonstrate compliance and to measure improvement.

Since July 2013, ITIL has been owned by AXELOS Ltd, a joint venture between HM Cabinet Office and Capita Plc. AXELOS licenses organisations to use the ITIL intellectual property, accredits licensed Examination Institutes, and manages updates to the framework.

ggplot2

ggplot2 is a data visualization package for the statistical programming language R. Created by Hadley Wickham in 2005, ggplot2 is an implementation of Leland Wilkinson’s Grammar of Graphics—a general scheme for data visualization which breaks up graphs into semantic components such as scales and layers. ggplot2 can serve as a replacement for the base graphics in R and contains a number of defaults for web and print display of common scales. Since 2005, ggplot2 has grown in use to become one of the most popular R packages.[1][2] It is licensed under GNU GPL v2.[3]
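
To make those semantic components concrete, here is a minimal sketch of the grammar in code. It uses plotnine, a Python port of ggplot2’s grammar (the R original composes plots the same way); the dataset and aesthetic choices are illustrative, not from the article.

    from plotnine import ggplot, aes, geom_point, geom_smooth, scale_x_log10, labs
    from plotnine.data import mtcars

    # A plot is built by adding independent components: data plus aesthetic
    # mappings, then layers (geoms), then scales and labels.
    plot = (
        ggplot(mtcars, aes(x="wt", y="mpg"))  # data and aesthetic mappings
        + geom_point()                        # layer 1: raw observations
        + geom_smooth(method="lm")            # layer 2: fitted linear trend
        + scale_x_log10()                     # an explicit scale component
        + labs(x="weight (1000 lbs, log scale)", y="miles per gallon")
    )
    plot.save("mtcars.png")  # render to file

Each `+` adds one component of the grammar, which is the composability the Grammar of Graphics is built around.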

On 2 March 2012, ggplot2 version 0.9.0 was released with numerous changes to internal organization, scale construction and layers.[4] An update dealing primarily with bug fixes was released on 9 May 2012, incrementing the version to 0.9.1.[5]

On 25 February 2014, Hadley Wickham formally announced that “ggplot2 is shifting to maintenance mode. This means that we are no longer adding new features, but we will continue to fix major bugs, and consider new features submitted as pull requests. In recognition of this significant milestone, the next version of ggplot2 will be 1.0.0”.[6]

Solved game

A solved game is a game whose outcome (win, lose, or draw) can be correctly predicted from any position, given that both players play perfectly. Games which have not been solved are said to be “unsolved”. Games for which only some positions have been solved are said to be “partially solved”. This article focuses on two-player games that have been solved.

A two-player game can be “solved” on several levels:[1][2]

Ultra-weak

Prove whether the first player will win, lose, or draw from the initial position, given perfect play on both sides. This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any moves of the perfect play.

Weak

Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game. That is, produce at least one complete ideal game (all moves start to end) with proof that each move is optimal for the player making it. It does not necessarily mean a computer program using the solution will play optimally against an imperfect opponent. For example, the checkers program Chinook will never turn a drawn position into a losing position (since the weak solution of checkers proves that it is a draw), but it might possibly turn a winning position into a drawn position because Chinook does not expect the opponent to play a move that will not win but could possibly lose, and so it does not analyze such moves completely.

Strong

Provide an algorithm that can produce perfect play (moves) from any position, even if mistakes have already been made on one or both sides.

Despite the name, many game theorists believe that “ultra-weak” proofs are the deepest, most interesting, and most valuable. “Ultra-weak” proofs require a scholar to reason about the abstract properties of the game, and show how these properties lead to certain outcomes if perfect play is realized.[citation needed]

By contrast, “strong” proofs often proceed by brute force — using a computer to exhaustively search a game tree to figure out what would happen if perfect play were realized. The resulting proof gives an optimal strategy for every possible position on the board. However, these proofs aren’t as helpful in understanding deeper reasons why some games are solvable as a draw, and other, seemingly very similar games are solvable as a win.

Given the rules of any two-person game with a finite number of positions, one can always trivially construct a minimax algorithm that would exhaustively traverse the game tree. However, since for many non-trivial games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database and are effectively nothing more than that.
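
As an illustrative sketch of such an exhaustive traversal (Python written for illustration, not part of the source): the memoized minimax below strongly solves tic-tac-toe, discussed further below, a game small enough that the “trivial” algorithm really does finish in reasonable time.

    from functools import lru_cache

    # All eight winning lines on a 3x3 board indexed 0..8.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return 1 if board[a] == "X" else -1
        return 0

    @lru_cache(maxsize=None)
    def value(board, to_move):
        """Game-theoretic value with perfect play: +1 X wins, -1 O wins, 0 draw."""
        w = winner(board)
        if w != 0 or "." not in board:
            return w  # terminal position: win, loss, or draw
        nxt = "O" if to_move == "X" else "X"
        children = [value(board[:i] + to_move + board[i+1:], nxt)
                    for i, c in enumerate(board) if c == "."]
        return max(children) if to_move == "X" else min(children)

    print(value("." * 9, "X"))  # 0: tic-tac-toe is a draw with perfect play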

As an example of a strong solution, the game of tic-tac-toe is solvable as a draw for both players with perfect play (a result that even schoolchildren can determine by hand). Games like nim also admit a rigorous analysis using combinatorial game theory.
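
For nim, the combinatorial-game-theory result is compact enough to state in code. A minimal sketch (standard normal-play nim, where taking the last object wins): the player to move is losing exactly when the XOR, or “nim-sum”, of the heap sizes is zero.

    from functools import reduce
    from operator import xor

    def nim_sum(heaps):
        return reduce(xor, heaps, 0)

    def winning_move(heaps):
        """Return (heap index, new heap size) restoring nim-sum 0,
        or None if the position is already lost for the side to move."""
        s = nim_sum(heaps)
        if s == 0:
            return None  # every move hands the opponent a nim-sum-0 position
        for i, h in enumerate(heaps):
            if h ^ s < h:
                return i, h ^ s  # shrink heap i down to h XOR s

    print(winning_move([3, 4, 5]))  # (0, 1): reduce the 3-heap to 1

Reducing heap 0 from 3 to 1 restores a zero nim-sum (1 XOR 4 XOR 5 = 0), leaving the opponent in a losing position.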

Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if its solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (e.g. Maharajah and the Sepoys). An ultra-weak solution (e.g. Chomp or Hex on a sufficiently large board) generally does not affect playability.

In non-perfect information games, one also has the notion of an essentially weakly solved game.[3] A game is said to be essentially weakly solved if a human lifetime of play is not sufficient to establish with statistical significance that the strategy is not an exact solution. As an example, the poker variant heads-up limit Texas hold ’em has been essentially weakly solved by the poker bot Cepheus.[3][4][5]

Perfect play

In game theory, perfect play is the behavior or strategy of a player that leads to the best possible outcome for that player regardless of the response by the opponent. Based on the rules of a game, every possible final position can be evaluated (as a win, loss, or draw). By backward reasoning, one can then recursively evaluate a non-final position as having the same value as the best-valued position reachable in one move, from the perspective of the player whose move it is. Thus a transition between positions can never result in a better evaluation for the moving player, and a perfect move in a position is a transition between equally evaluated positions. As an example, a perfect player in a drawn position will always get a draw or a win, never a loss. If there are multiple options with the same outcome, perfect play is sometimes considered the fastest method leading to a good result, or the slowest method leading to a bad result.
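
The backward reasoning and the tie-breaking rule can be sketched on an explicit game tree (the tree below is invented for illustration): scaling a terminal value by the remaining depth makes the evaluator prefer the fastest win and, symmetrically, the slowest loss.

    # Children of each internal position; the two players alternate moves.
    TREE = {
        "start": ["quick", "slow"],
        "quick": ["q_win"],            # a win two plies from the start
        "slow":  ["s_mid"],
        "s_mid": ["s_win"],            # the same win, one ply later
    }
    LEAVES = {"q_win": 1, "s_win": 1}  # +1 = win for the maximizing player

    def value(pos, maximizing=True, depth=0):
        if pos in LEAVES:
            # Sooner outcomes keep more magnitude (10 is any constant
            # larger than the tree depth).
            return LEAVES[pos] * (10 - depth)
        vals = [value(c, not maximizing, depth + 1) for c in TREE[pos]]
        return max(vals) if maximizing else min(vals)

    # Both lines win, but the shorter one scores higher (8 vs 7), so a
    # perfect player prefers the fastest route to the win.
    print(value("start"))  # 8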

Perfect play can be generalized to non-perfect information games, as the strategy that would guarantee the highest minimal expected outcome regardless of the strategy of the opponent. As an example, the perfect strategy for Rock, Paper, Scissors would be to randomly choose each of the options with equal (1/3) probability. The disadvantage in this example is that this strategy will never exploit non-optimal strategies of the opponent, so the expected outcome of this strategy versus any strategy will always be equal to the minimal expected outcome.
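
A quick numerical check of this claim (an illustrative sketch; the payoff matrix is the standard zero-sum one for Rock, Paper, Scissors): against the uniform 1/3 strategy, every opposing strategy, pure or mixed, has expected value exactly 0, which is both the guarantee and the reason the strategy exploits nothing.

    # Payoff to the row player; index order: rock, paper, scissors.
    PAYOFF = [
        [ 0, -1,  1],  # rock: loses to paper, beats scissors
        [ 1,  0, -1],  # paper: beats rock, loses to scissors
        [-1,  1,  0],  # scissors: loses to rock, beats paper
    ]
    uniform = [1/3, 1/3, 1/3]

    def expected_value(p, q):
        """Expected payoff to the row player for mixed strategies p and q."""
        return sum(p[i] * q[j] * PAYOFF[i][j]
                   for i in range(3) for j in range(3))

    for q in ([1, 0, 0], [0, 1, 0], [0.5, 0.3, 0.2]):
        print(round(expected_value(uniform, q), 12))  # 0.0 in every case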

Although the optimal strategy of a game may not (yet) be known, a game-playing computer might still benefit from solutions of the game from certain endgame positions (in the form of endgame tablebases), which will allow it to play perfectly after some point in the game. Computer chess programs are well known for doing this.

Solved games

Awari (a game of the Mancala family)
The variant of Oware allowing game-ending “grand slams” was strongly solved by Henri Bal and John Romein at the Vrije Universiteit in Amsterdam, Netherlands (2002). Either player can force the game into a draw.
Checkers
See “Draughts, English”
Chopsticks
The second player can always force a win.[6]
Connect Four
Solved first by James D. Allen (Oct 1, 1988), and independently by Victor Allis (Oct 16, 1988).[7] The first player can force a win. Strongly solved by John Tromp’s 8-ply database[8] (Feb 4, 1995). Weakly solved for all board sizes where width + height is at most 15[7] (Feb 18, 2006).
Draughts, English (Checkers)
This 8×8 variant of draughts was weakly solved on April 29, 2007 by the team of Jonathan Schaeffer, known for Chinook, the “World Man-Machine Checkers Champion”. From the standard starting position, both players can guarantee a draw with perfect play.[9] Checkers is the largest game that has been solved to date, with a search space of 5×10²⁰.[10] The number of calculations involved was 10¹⁴, and they were done over a period of 18 years. The process involved around 200 desktop computers at its peak, falling to around 50 near the end.[11]

The announcement paper, “Checkers Is Solved” (Schaeffer et al., 2007), summarized the result:

“The game of checkers has roughly 500 billion billion possible positions (5 × 10²⁰). The task of solving the game, determining the final result in a game with no mistakes made by either player, is daunting. Since 1989, almost continuously, dozens of computers have been working on solving checkers, applying state-of-the-art artificial intelligence techniques to the proving process. This paper announces that checkers is now solved: perfect play by both sides leads to a draw. This is the most challenging popular game to be solved to date, roughly one million times as complex as Connect Four. Artificial intelligence technology has been used to generate strong heuristic-based game-playing programs, such as Deep Blue for chess. Solving a game takes this to the next level by replacing the heuristics with perfection.”

Pay it forward

The expression “pay it forward” is used to describe the concept of asking the beneficiary of a good deed to repay it to others instead of to the original benefactor. The concept is old, but the phrase may have been coined by Lily Hardy Hammond in her 1916 book In the Garden of Delight.[1]

“Pay it forward” is implemented in contract law of loans in the concept of third-party beneficiaries. Specifically, the creditor offers the debtor the option of “paying” the debt forward by lending it to a third person instead of paying it back to the original creditor. Debt and payments can be monetary or by good deeds. A related type of transaction, which starts with a gift instead of a loan, is alternative giving.

Hindsight bias

Hindsight bias, also known as the knew-it-all-along effect or creeping determinism, is the inclination, after an event has occurred, to see the event as having been predictable, despite there having been little or no objective basis for predicting it prior to its occurrence.[1][2] It is a multifaceted phenomenon that can affect different stages of designs, processes, contexts, and situations.[3] Hindsight bias may cause memory distortion, where the recollection and reconstruction of content can lead to false theoretical outcomes. It has been suggested that the effect can cause extreme methodological problems while trying to analyze, understand, and interpret results in experimental studies. A basic example of hindsight bias is when, after viewing the outcome of a potentially unforeseeable event, a person believes he or she “knew it all along”. Such examples are present in the writings of historians describing outcomes of battles, physicians recalling clinical trials, and in judicial systems trying to attribute responsibility and predictability of accidents.

Power and dominance

Nonverbal expressions of power and dominance are gestures or motions that assert one’s authority over another, such as:

  • handshakes
  • waving
  • smiling

The colors one wears also affect others’ perceptions of one’s authority:

  • Purple: people of high status adorn their clothing with purple to distinguish themselves as noble or wealthy.
  • Red: people attribute greater authority to others wearing red.

It is human to strive for power and dominance in social settings, and simple gestures can establish authority. For example:

  • a firmer handshake,
  • better posture, and
  • causing slight interruptions in conversation

can raise one’s authority in group situations.

Many peers, however, view nonverbal expressions of power and dominance as manipulation for self-gain, and their abuse can be disastrous.

Men and women have different perceptions of nonverbal expressions of power and dominance. Nodding, for instance, is often misinterpreted in cross-gender communication: women interpret a nod as a signal of understanding, while men interpret it as a signal of agreement. Such small miscommunications and misinterpretations can lead to disagreement and confrontation.

Russell (as cited in Dunbar & Burgoon, 2005) describes, “the fundamental concept in social science is power, in the same way that energy is the fundamental concept in physics”. Power and dominance-submission are two key concepts in relationships, especially close relationships where individuals rely on one another to achieve their goals (Dunbar & Burgoon, 2005), and as such it is important to be able to identify indicators of dominance.

Power and dominance are different concepts, yet they share similarities. Power is the ability to influence behavior (Bachrach & Lawler; Berger; Burgoon et al.; Foa & Foa; French & Raven; Gray-Little & Burks; Henley; Olson & Cromwell; Rollins & Bahr, as cited in Dunbar & Burgoon, 2005) and may or may not be fully evident until challenged by an equal force (Huston, as cited in Dunbar & Burgoon, 2005). Unlike power, which may be latent, dominance is manifest, reflecting individual (Komter, as cited in Dunbar & Burgoon, 2005), situational, and relationship patterns in which control attempts are either accepted or rejected (Rogers-Millar & Millar, as cited in Dunbar & Burgoon, 2005). Moskowitz, Suh, and Desaulniers (1994) mention two similar ways that people can relate to the world in interpersonal relationships: agency and communion. Agency includes status and is a continuum from assertiveness-dominance to passive-submissiveness; it can be measured by subtracting submissiveness from dominance. Communion is a second way to interact with others and includes love, with a continuum from warm-agreeable to cold-hostile-quarrelsome. Power and dominance relate in such a way that those with the greatest and the least power typically do not assert dominance, while those in more equal relationships make more control attempts (Dunbar & Burgoon, 2005).

As one can see, power and dominance are important, intertwined concepts that greatly impact relationships. In order to understand how dominance plays out in relationships, one must understand the influence of gender and social roles while watching for verbal and nonverbal indicators of dominance.

Wagon-wheel effect

The wagon-wheel effect (alternatively, the stagecoach-wheel effect or stroboscopic effect) is an optical illusion in which a spoked wheel appears to rotate differently from its true rotation. The wheel can appear to rotate more slowly than the true rotation, it can appear stationary, or it can appear to rotate in the opposite direction from the true rotation. This last form of the effect is sometimes called the reverse rotation effect.

The wagon-wheel effect is most often seen in film or television depictions of stagecoaches or wagons in Western movies, although recordings of any regularly spoked wheel will show it, such as helicopter rotors and aircraft propellers. In these recorded media, the effect is a result of temporal aliasing.[1] It can also commonly be seen when a rotating wheel is illuminated by flickering light. These forms of the effect are known as stroboscopic effects: the original smooth rotation of the wheel is visible only intermittently. A version of the wagon-wheel effect can also be seen under continuous illumination.
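
The arithmetic of temporal aliasing can be sketched in a few lines (an illustrative Python sketch; the spoke count and frame rate are assumed values, not from the article). Between frames, the recording only preserves the fractional part of the spoke motion, folded into plus or minus half a spoke spacing, so the wheel can appear frozen or reversed:

    def apparent_rotation(rev_per_sec, spokes=8, fps=24.0):
        """Apparent spoke movement per frame, in fractions of a spoke spacing."""
        spoke_passes_per_frame = rev_per_sec * spokes / fps
        # Only the fractional part survives sampling; fold it into [-0.5, 0.5):
        return (spoke_passes_per_frame + 0.5) % 1.0 - 0.5

    print(apparent_rotation(3.0))  #  0.0    : 24 spoke-passes/s at 24 fps, frozen
    print(apparent_rotation(3.2))  #  0.067..: slightly faster, slow forward drift
    print(apparent_rotation(2.9))  # -0.033..: slightly slower, apparent reversal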

Rushton (1967[5]) observed the wagon-wheel effect under continuous illumination while humming. The humming vibrates the eyes in their sockets, effectively creating stroboscopic conditions within the eye. By humming at a frequency of a multiple of the rotation frequency, he was able to stop the rotation. By humming at slightly higher and lower frequencies, he was able to make the rotation reverse slowly and to make the rotation go slowly in the direction of rotation. A similar stroboscopic effect is now commonly observed by people eating crunchy foods, such as carrots, while watching TV: the image appears to shimmer.[6] The crunching vibrates the eyes at a multiple of the frame rate of the TV. Besides vibrations of the eyes, the effect can be produced by observing wheels via a vibrating mirror. Rear-view mirrors in vibrating cars can produce the effect.

Truly continuous illumination

The first to observe the wagon-wheel effect under truly continuous illumination (such as from the sun) was Schouten (1967[7]). He distinguished three forms of subjective stroboscopy which he called alpha, beta, and gamma: Alpha stroboscopy occurs at 8–12 cycles per second; the wheel appears to become stationary, although “some sectors [spokes] look as though they are performing a hurdle race over the standing ones” (p. 48). Beta stroboscopy occurs at 30–35 cycles per second: “The distinctness of the pattern has all but disappeared. At times a definite counterrotation is seen of a grayish striped pattern” (pp. 48–49). Gamma stroboscopy occurs at 40–100 cycles per second: “The disk appears almost uniform except that at all sector frequencies a standing grayish pattern is seen … in a quivery sort of standstill” (pp. 49–50). Schouten interpreted beta stroboscopy, reversed rotation, as consistent with there being Reichardt detectors in the human visual system for encoding motion. Because the spoked wheel patterns he used (radial gratings) are regular, they can strongly stimulate detectors for the true rotation, but also weakly stimulate detectors for the reverse rotation.
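
Since Reichardt detectors carry much of the explanatory weight here, a minimal correlation-detector sketch may help (illustrative Python; the stimulus, spacing, and delay are invented). Two nearby inputs are correlated after delaying one of them, and mirror-image subunits are subtracted, so the sign of the output reports the detected direction; a periodic stimulus can also weakly drive the “wrong” subunit, as in the reversed-rotation account above:

    import math

    def reichardt(signal_a, signal_b, delay):
        """Correlation-type motion detector: positive output means motion
        from point a toward point b, negative means the reverse."""
        n = len(signal_a) - delay
        a_to_b = sum(signal_a[t] * signal_b[t + delay] for t in range(n))
        b_to_a = sum(signal_b[t] * signal_a[t + delay] for t in range(n))
        return a_to_b - b_to_a  # opponent subtraction of the two subunits

    # A drifting sinusoid sampled at two points; b lags a by a quarter cycle,
    # i.e. the pattern is moving from a toward b.
    T, f = 200, 0.05
    a = [math.sin(2 * math.pi * f * t) for t in range(T)]
    b = [math.sin(2 * math.pi * f * t - math.pi / 2) for t in range(T)]
    print(reichardt(a, b, delay=5))  # positive: motion a -> b is detected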

There are two broad theories for the wagon-wheel effect under truly continuous illumination. The first is that human visual perception takes a series of still frames of the visual scene and that movement is perceived much like a movie. The second is Schouten’s theory: that moving images are processed by visual detectors sensitive to the true motion and also by detectors sensitive to opposite motion from temporal aliasing. There is evidence for both theories, but the weight of evidence favours the latter.

Discrete frames theory

Purves, Paydarfar, and Andrews (1996[8]) proposed the discrete-frames theory. One piece of evidence for this theory comes from Dubois and VanRullen (2011[9]). They reviewed experiences of users of LSD who often report that under the influence of the drug a moving object is seen trailing a series of still images behind it. They asked such users to match their drug experiences with movies simulating such trailing images viewed when not under the drug. They found that users selected movies around 15–20 Hz. This is between Schouten’s alpha and beta rates.

Other evidence for the theory is reviewed next.

Temporal aliasing theory

Kline, Holcombe, and Eagleman (2004[10]) confirmed the observation of reversed rotation with regularly spaced dots on a rotating drum. They called this “illusory motion reversal”. They showed that these occurred only after a long time of viewing the rotating display (from about 30 seconds to as long as 10 minutes for some observers). They also showed that the incidences of reversed rotation were independent in different parts of the visual field. This is inconsistent with discrete frames covering the entire visual scene. Kline, Holcombe, and Eagleman (2006[11]) also showed that reversed rotation of a radial grating in one part of the visual field was independent of superimposed orthogonal motion in the same part of the visual field. The orthogonal motion was of a circular grating contracting so as to have the same temporal frequency as the radial grating. This is inconsistent with discrete frames covering local parts of visual scene. Kline et al. concluded that the reverse rotations were consistent with Reichardt detectors for the reverse direction of rotation becoming sufficiently active to dominate perception of the true rotation in a form of rivalry. The long time required to see the reverse rotation suggests that neural adaptation of the detectors responding to the true rotation has to occur before the weakly stimulated reverse-rotation detectors can contribute to perception.

Some small doubts about the results of Kline et al. (2004) sustain adherents of the discrete-frame theory. These doubts include Kline et al.’s finding in some observers more instances of simultaneous reversals from different parts of the visual field than would be expected by chance, and finding in some observers differences in the distribution of the durations of reversals from that expected by a pure rivalry process (Rojas, Carmona-Fontaine, López-Calderón, & Aboitiz, 2006[12]).

In 2008, Kline and Eagleman demonstrated that illusory reversals of two spatially overlapping motions could be perceived separately, providing further evidence that illusory motion reversal is not caused by temporal sampling.[13] They also showed that illusory motion reversal occurs with non-uniform and non-periodic stimuli (for example, a spinning belt of sandpaper), which also cannot be compatible with discrete sampling. Kline and Eagleman proposed instead that the effect results from a “motion during-effect”, meaning that a motion after-effect becomes superimposed on the real motion.

Dangers

Because this effect can make moving machinery appear stationary or slow, it is advised that single-phase lighting be avoided in workshops and factories. For example, a factory lit from a single-phase supply with basic fluorescent lighting will have a flicker at twice the mains frequency, either 100 or 120 Hz (depending on country); thus, any machinery rotating at multiples of this frequency may appear not to be turning. Since the most common types of AC motor are locked to the mains frequency, this can pose a considerable hazard to operators of lathes and other rotating equipment. Solutions include deploying the lighting over a full 3-phase supply, or using high-frequency controllers that drive the lights at safer frequencies.[14] Traditional incandescent light bulbs, whose filaments glow continuously, offer another option, albeit at the expense of increased power consumption. Smaller incandescent lights can be used as task lighting on equipment to help combat this effect, avoiding the cost of operating larger quantities of incandescent lighting in a workshop environment.
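
The hazard arithmetic is easy to check (an illustrative sketch assuming 50 Hz mains, hence 100 Hz flicker; the symmetry counts and speed limits are invented): a part with n-fold rotational symmetry looks frozen whenever n times its rotation frequency is an integer multiple of the flicker frequency.

    FLICKER_HZ = 100.0  # fluorescent flicker at twice a 50 Hz mains frequency

    def stationary_rpms(n_symmetry, max_rpm=4000):
        """Rotation speeds (RPM) at which an n-fold-symmetric part appears frozen."""
        rpms, k = [], 1
        while FLICKER_HZ * 60.0 * k / n_symmetry <= max_rpm:
            rpms.append(FLICKER_HZ * 60.0 * k / n_symmetry)
            k += 1
        return rpms

    print(stationary_rpms(4))         # e.g. a 4-jaw chuck: [1500.0, 3000.0]
    print(stationary_rpms(1, 12000))  # a single chalk mark: [6000.0, 12000.0]

Note that 3000 RPM is exactly the synchronous speed of a two-pole motor on 50 Hz mains, which is why mains-locked motors are singled out above.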


Spiral and Agile

Agile is an implementation of the iterative model. The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts.


The spiral model is a risk-driven process model generator for software projects. Based on the unique risk patterns of a given project, the spiral model guides a team to adopt elements of one or more process models, such as incremental, waterfall, or evolutionary prototyping.

This model was first described by Barry Boehm in his 1986 paper “A Spiral Model of Software Development and Enhancement”.[1] In 1988 Boehm published a similar paper[2] to a wider audience. These papers introduce a diagram that has been reproduced in many subsequent publications discussing the spiral model.

[Figure: Spiral model (Boehm, 1988). A number of misconceptions stem from oversimplifications in this widely circulated diagram.[3]]

These early papers use the term “process model” to refer to the spiral model as well as to incremental, waterfall, prototyping, and other approaches. However, the spiral model’s characteristic risk-driven blending of other process models’ features is already present:

[R]isk-driven subsetting of the spiral model steps allows the model to accommodate any appropriate mixture of a specification-oriented, prototype-oriented, simulation-oriented, automatic transformation-oriented, or other approach to software development.[2]

In later publications,[3] Boehm describes the spiral model as a “process model generator”, where choices based on a project’s risks generate an appropriate process model for the project. Thus, the incremental, waterfall, prototyping, and other process models are special cases of the spiral model that fit the risk patterns of certain projects.

Boehm also identifies a number of misconceptions arising from oversimplifications in the original spiral model diagram. The most dangerous of these misconceptions are:

  • that the spiral is simply a sequence of waterfall increments;
  • that all project activities follow a single spiral sequence; and
  • that every activity in the diagram must be performed, and in the order shown.

While these misconceptions may fit the risk patterns of a few projects, they are not true for most projects.

To better distinguish them from “hazardous spiral look-alikes”, Boehm lists six characteristics common to all authentic applications of the spiral model.

The Six Invariants

Authentic applications of the spiral model are driven by cycles that always display six characteristics. Boehm illustrates each with an example of a “hazardous spiral look-alike” that violates the invariant.[3]

Define artifacts concurrently

Sequentially defining the key artifacts for a project often lowers the possibility of developing a system that meets stakeholder “win conditions” (objectives and constraints).

This invariant excludes “hazardous spiral look-alike” processes that use a sequence of incremental waterfall passes in settings where the underlying assumptions of the waterfall model do not apply. Boehm lists these assumptions as follows:

  1. The requirements are known in advance of implementation.
  2. The requirements have no unresolved, high-risk implications, such as risks due to cost, schedule, performance, safety, security, user interfaces, organizational impacts, etc.
  3. The nature of the requirements will not change very much during development or evolution.
  4. The requirements are compatible with all the key system stakeholders’ expectations, including users, customer, developers, maintainers, and investors.
  5. The right architecture for implementing the requirements is well understood.
  6. There is enough calendar time to proceed sequentially.

In situations where these assumptions do apply, it is a project risk not to specify the requirements and proceed sequentially. The waterfall model thus becomes a risk-driven special case of the spiral model.

Perform four basic activities in every cycle

This invariant identifies the four basic activities that must occur in each cycle of the spiral model:

  1. Consider the win conditions of all success-critical stakeholders.
  2. Identify and evaluate alternative approaches for satisfying the win conditions.
  3. Identify and resolve risks that stem from the selected approach(es).
  4. Obtain approval from all success-critical stakeholders, plus commitment to pursue the next cycle.

Project cycles that omit or shortchange any of these activities risk wasting effort by pursuing options that are unacceptable to key stakeholders, or are too risky.

Some “hazardous spiral look-alike” processes violate this invariant by excluding key stakeholders from certain sequential phases or cycles. For example, system maintainers and administrators might not be invited to participate in definition and development of the system. As a result, the system is at risk of failing to satisfy their win conditions.

Risk determines level of effort

For any project activity (e.g., requirements analysis, design, prototyping, testing), the project team must decide how much effort is enough. In authentic spiral process cycles, these decisions are made by minimizing overall risk.

For example, investing additional time testing a software product often reduces the risk due to the marketplace rejecting a shoddy product. However, additional testing time might increase the risk due to a competitor’s early market entry. From a spiral model perspective, testing should be performed until the total risk is minimized, and no further.
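
A toy sketch of that trade-off (all numbers invented for illustration; a real project would estimate these curves from its own risk analysis): the risk of market rejection falls with testing time while the risk of late entry grows, and the spiral rule says to stop where the sum is smallest.

    def rejection_risk(days):   # expected loss from shipping a shoddy product
        return 100.0 * (0.9 ** days)

    def late_entry_risk(days):  # expected loss from a competitor shipping first
        return 2.0 * days

    def total_risk(days):
        return rejection_risk(days) + late_entry_risk(days)

    best = min(range(61), key=total_risk)
    print(best, round(total_risk(best), 1))  # 16 days: test until here, then stop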

“Hazardous spiral look-alikes” that violate this invariant include evolutionary processes that ignore risk due to scalability issues, and incremental processes that invest heavily in a technical architecture that must be redesigned or replaced to accommodate future increments of the product.

Risk determines degree of detail

For any project artifact (e.g., requirements specification, design document, test plan), the project team must decide how much detail is enough. In authentic spiral process cycles, these decisions are made by minimizing overall risk.

Considering requirements specification as an example, the project should precisely specify those features where risk is reduced through precise specification (e.g., interfaces between hardware and software, interfaces between prime and sub contractors). Conversely, the project should not precisely specify those features where precise specification increases risk (e.g., graphical screen layouts, behavior of off-the-shelf components).

Use anchor point milestones

Boehm’s original description of the spiral model did not include any process milestones. In later refinements, he introduces three anchor point milestones that serve as progress indicators and points of commitment. These anchor point milestones can be characterized by key questions.

  1. Life Cycle Objectives (LCO). Is there a sufficient definition of a technical and management approach to satisfying everyone’s win conditions? If the stakeholders agree that the answer is “Yes”, then the project has cleared this LCO milestone. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to “Yes.”
  2. Life Cycle Architecture (LCA). Is there a sufficient definition of the preferred approach to satisfying everyone’s win conditions, and are all significant risks eliminated or mitigated? If the stakeholders agree that the answer is “Yes”, then the project has cleared this LCA milestone. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to “Yes.”
  3. Initial Operational Capability (IOC). Is there sufficient preparation of the software, site, users, operators, and maintainers to satisfy everyone’s win conditions by launching the system? If the stakeholders agree that the answer is “Yes”, then the project has cleared the IOC milestone and is launched. Otherwise, the project can be abandoned, or the stakeholders can commit to another cycle to try to get to “Yes.”

“Hazardous spiral look-alikes” that violate this invariant include evolutionary and incremental processes that commit significant resources to implementing a solution with a poorly defined architecture.

The three anchor point milestones fit easily into the Rational Unified Process (RUP), with LCO marking the boundary between RUP’s Inception and Elaboration phases, LCA marking the boundary between Elaboration and Construction phases, and IOC marking the boundary between Construction and Transition phases.

Focus on the system and its life cycle

This invariant highlights the importance of the overall system and the long-term concerns spanning its entire life cycle. It excludes “hazardous spiral look-alikes” that focus too much on initial development of software code. These processes can result from following published approaches to object-oriented or structured software analysis and design, while neglecting other aspects of the project’s process needs.

Agile and Spiral techniques: differences and similarities

The spiral model is iterative development. A typical iteration will be somewhere between 6 months and 2 years and will include all aspects of the lifecycle – requirements analysis, risk analysis, planning, design and architecture, and then a release of either a prototype (which is either evolved or thrown away, depending on the specific methods chosen by the project team) or working software. These steps are repeated until the project is either cancelled or finished.

Agile development, on the other hand, includes a number of different methodologies with specific guidance as to the steps to take to produce a software project, such as Extreme Programming, Scrum, and Crystal Clear. The commonality between all of the agile methods is that they are iterative and incremental. The iterations in the agile methods are typically shorter – 2 to 4 weeks in most cases, and each iteration ends with a working software product. However, unlike the spiral model, the software produced isn’t a prototype – it is always high quality code that is expanded into the final product.

Agile has more restrictions on it than spiral does; it is a square/rectangle relationship. Agile is a spiral, but spiral is not agile, and the two are separated by more than just “incremental execution in order of risk”. Agile accounts for shorter schedules and more frequent releases, whereas spiral tends to imply “big design up front”, in which many spirals are planned out in advance, each in order of risk. Spiral by itself is just incremental execution in order of risk.

Agile is spiral, but one creates detailed plans for just one increment at a time, and agile adds a number of other things as well. Spiral is a very technical approach; agile, by contrast, recognizes that technology is built by people. The Agile Manifesto has four values that go above and beyond Boehm’s straightforward risk-management approach.

Agile is a type of iterative SDLC, while spiral is a type of incremental SDLC. Scrum is one type of agile method; others include DSDM, FDD, and XP. All SDLCs after waterfall follow the same set of activities (requirements analysis, design, coding, and testing) in different combinations, so the basic set of actions is the same whether execution is sequential, iterative, or incremental.

  1. As far as agile and spiral are concerned, both share the advantage of handling changing requirements well.
  2. Both favor short-term releases.
  3. Risk management is easier due to the shorter duration of each cycle.
  4. Cross-functional teams help the product and project go smoothly.