Deep learning

Software libraries

https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software

  • Deeplearning4j — An open-source deep-learning library written for Java/C++ with LSTMs and convolutional networks. It provides parallelization with Spark on CPUs and GPUs.
  • Gensim — A toolkit for natural language processing implemented in the Python programming language.
  • Keras — An open-source deep learning framework for the Python programming language.
  • Microsoft CNTK (Computational Network Toolkit) — Microsoft’s open-source deep-learning toolkit for Windows and Linux. It provides parallelization with CPUs and GPUs across multiple servers.
  • MXNet — An open source deep learning framework that allows you to define, train, and deploy deep neural networks.
  • OpenNN — An open source C++ library which implements deep neural networks and provides parallelization with CPUs.
  • PaddlePaddle — An open-source C++/CUDA library with a Python API for a scalable deep learning platform with CPUs and GPUs, originally developed by Baidu.
  • TensorFlow — Google’s open source machine learning library in C++ and Python with APIs for both. It provides parallelization with CPUs and GPUs.
  • Theano — An open source machine learning library for Python supported by the University of Montreal and Yoshua Bengio’s team.
  • Torch — An open source software library for machine learning based on the Lua programming language and used by Facebook.
  • Caffe — A deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors.
  • DIANNE — A modular open-source deep learning framework in Java/OSGi developed at Ghent University, Belgium. It provides parallelization with CPUs and GPUs across multiple servers.
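
To give a concrete feel for what working with one of these frameworks looks like, here is a minimal, illustrative Keras sketch that defines, trains, and evaluates a tiny classifier. The layer sizes and the synthetic data are placeholders, not recommendations.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Placeholder data: 1000 samples with 20 features, and a toy label rule.
x_train = np.random.rand(1000, 20)
y_train = (x_train.sum(axis=1) > 10).astype(int)

# Define a small fully connected network.
model = Sequential([
    Dense(64, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Train, then report [loss, accuracy] on the training data.
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(x_train, y_train, verbose=0))
```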

Go engine

[youtube https://www.youtube.com/watch?v=vU77itJptK0&w=560&h=315]

Mastering the game of Go with deep neural networks and tree search

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
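To make the idea of combining policy and value networks with tree search more concrete, here is a minimal Python sketch of the kind of selection rule such a search can use: each node mixes its averaged value estimate Q with an exploration bonus weighted by the policy network's prior. This only illustrates the general PUCT-style idea; the paper's exact constants, backup rules, and rollout mixing are not reproduced, and all names here are hypothetical.

```python
import math

class Node:
    """Per-action statistics in the search tree (hypothetical names)."""
    def __init__(self, prior):
        self.prior = prior      # P(s, a): probability from the policy network
        self.visits = 0         # N(s, a): number of simulations through this node
        self.value_sum = 0.0    # W(s, a): accumulated value-network evaluations
        self.children = {}      # action -> Node

    def q(self):
        """Mean action value Q(s, a)."""
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """PUCT-style selection: maximize Q + U, where the bonus U is large
    for moves the policy network favors but the search has rarely tried,
    so prior knowledge guides exploration."""
    total_visits = sum(ch.visits for ch in node.children.values())
    def score(ch):
        u = c_puct * ch.prior * math.sqrt(total_visits) / (1 + ch.visits)
        return ch.q() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

# Tiny demonstration with made-up statistics.
root = Node(prior=1.0)
a, b = Node(prior=0.6), Node(prior=0.4)
a.visits, a.value_sum = 3, 1.5   # Q = 0.5
b.visits, b.value_sum = 1, 0.9   # Q = 0.9
root.children = {"move_a": a, "move_b": b}
print(select_child(root)[0])  # "move_b": its high value estimate outweighs the lower prior
```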
Continue reading “Go engine”

Dancing links

In computer science, dancing links is the technique suggested by Donald Knuth to efficiently implement his Algorithm X.[1] Algorithm X is a recursive, nondeterministic, depth-first, backtracking algorithm that finds all solutions to the exact cover problem. Some of the better-known exact cover problems include tiling, the n queens problem, and Sudoku.

The name dancing links stems from the way the algorithm works, as iterations of the algorithm cause the links to “dance” with partner links so as to resemble an “exquisitely choreographed dance.” Knuth credits Hiroshi Hitotsumatsu and Kōhei Noshita with having invented the idea in 1979,[2] but it is his paper that popularized it.

“Algorithm X” is the name Donald Knuth used in his paper “Dancing Links” to refer to “the most obvious trial-and-error approach” for finding all solutions to the exact cover problem.[1] Technically, Algorithm X is a recursive, nondeterministic, depth-first, backtracking algorithm. While Algorithm X is generally useful as a succinct explanation of how the exact cover problem may be solved, Knuth’s intent in presenting it was merely to demonstrate the utility of the dancing links technique via an efficient implementation he called DLX.[1]

The exact cover problem is represented in Algorithm X using a matrix A consisting of 0s and 1s. The goal is to select a subset of the rows so that the digit 1 appears in each column exactly once.

Algorithm X functions as follows:

  1. If the matrix A has no columns, the current partial solution is a valid solution; terminate successfully.
  2. Otherwise choose a column c (deterministically).
  3. Choose a row r such that A[r, c] = 1 (nondeterministically).
  4. Include row r in the partial solution.
  5. For each column j such that A[r, j] = 1,
    for each row i such that A[i, j] = 1,
      delete row i from matrix A;
    then delete column j from matrix A.
  6. Repeat this algorithm recursively on the reduced matrix A.

The nondeterministic choice of r means that the algorithm essentially clones itself into independent subalgorithms; each subalgorithm inherits the current matrix A, but reduces it with respect to a different row r. If column c is entirely zero, there are no subalgorithms and the process terminates unsuccessfully.

The subalgorithms form a search tree in a natural way, with the original problem at the root and with level k containing each subalgorithm that corresponds to k chosen rows. Backtracking is the process of traversing the tree in preorder, depth first.

Any systematic rule for choosing column c in this procedure will find all solutions, but some rules work much better than others. To reduce the number of iterations, Knuth suggests that the column choosing algorithm select a column with the lowest number of 1s in it.
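A compact way to see the whole procedure, including the fewest-1s column heuristic, is the well-known set-based formulation of Algorithm X below. Note this is a sketch using Python dictionaries of sets rather than Knuth's doubly linked DLX structure, and the function names are ours.

```python
def solve(X, Y, solution=None):
    """Yield every exact cover. X maps each column to the set of rows
    that cover it; Y maps each row to the list of columns it covers."""
    if solution is None:
        solution = []
    if not X:                      # no columns left: partial solution is complete
        yield list(solution)
    else:
        c = min(X, key=lambda col: len(X[col]))   # fewest-1s heuristic
        for r in list(X[c]):       # the nondeterministic choice becomes iteration
            solution.append(r)
            removed = select(X, Y, r)
            yield from solve(X, Y, solution)
            deselect(X, Y, r, removed)
            solution.pop()

def select(X, Y, r):
    """Remove every column r covers, and every row clashing with r."""
    removed = []
    for j in Y[r]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)
        removed.append(X.pop(j))
    return removed

def deselect(X, Y, r, removed):
    """Undo select() on backtrack, restoring columns in reverse order."""
    for j in reversed(Y[r]):
        X[j] = removed.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)

# A standard example: rows A-F over columns 1-7; the unique cover is B, D, F.
Y = {'A': [1, 4, 7], 'B': [1, 4], 'C': [4, 5, 7],
     'D': [3, 5, 6], 'E': [2, 3, 6, 7], 'F': [2, 7]}
X = {j: {r for r in Y if j in Y[r]} for j in range(1, 8)}
print(list(solve(X, Y)))  # [['B', 'D', 'F']]
```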

Agile management

Agile management, or agile process management, or simply agile, refers to an iterative, incremental method of managing the design and build activities for engineering, information technology, and other business areas that aims to provide new product or service development in a highly flexible and interactive manner; an example is its application in Scrum, an original form of agile software development.[1] It requires capable individuals from the relevant business, openness to consistent customer input, and management openness to non-hierarchical forms of leadership. Agile can in fact be viewed as a broadening and generalization of the principles of the earlier successful array of Scrum concepts and techniques to more diverse business activities. Agile also traces its evolution to a “consensus event”, the publication of the “Agile Manifesto”, and it has conceptual links to lean techniques, kanban (看板), and the Six Sigma area of business ideas.[1]

Agile X techniques may also be called extreme process management. It is a variant of the iterative life cycle[2] where deliverables are submitted in stages. The main difference between agile and iterative development is that agile methods complete small portions of the deliverables in each delivery cycle (iteration),[3] while iterative methods evolve the entire set of deliverables over time, completing them near the end of the project. Both iterative and agile methods were developed as a reaction to various obstacles that developed in more sequential forms of project organization. For example, as technology projects grow in complexity, end users tend to have difficulty defining long-term requirements without being able to view progressive prototypes. Projects that develop in iterations can constantly gather feedback to help refine those requirements. According to Jean-Loup Richet (Research Fellow at ESSEC Institute for Strategic Innovation & Services), “this approach can be leveraged effectively for non-software products and for project management in general, especially in areas of innovation and uncertainty. The end result is a product or project that best meets current customer needs and is delivered with minimal costs, waste, and time, enabling companies to achieve bottom line gains earlier than via traditional approaches.”[4] Agile management also offers a simple framework promoting communication and reflection on past work amongst team members.[5]

Agile methods are mentioned in the Guide to the Project Management Body of Knowledge (PMBOK Guide) under the Project Lifecycle definition:

Adaptive project life cycle, a project life cycle, also known as change-driven or agile methods, that is intended to facilitate change and require a high degree of ongoing stakeholder involvement. Adaptive life cycles are also iterative and incremental, but differ in that iterations are very rapid (usually 2-4 weeks in length) and are fixed in time and resources.[6]

The Personal Software Process (PSP)

The Personal Software Process (PSP) is a structured software development process that is intended to help software engineers better understand and improve their performance by tracking their predicted and actual development of code. The PSP was created by Watts Humphrey to apply the underlying principles of the Software Engineering Institute’s (SEI) Capability Maturity Model (CMM) to the software development practices of a single developer. It claims to give software engineers the process skills necessary to work on a Team Software Process (TSP) team.

“Personal Software Process” and “PSP” are registered service marks of Carnegie Mellon University.[1]

Amazon Prime Air

https://en.wikipedia.org/wiki/Amazon_Prime_Air

http://www.amazon.com/b?node=8037720011

Amazon Prime Air is a conceptual drone-based delivery system currently in development by Amazon.com.

On December 1, 2013, Amazon.com CEO Jeff Bezos revealed plans for Amazon Prime Air in an interview on 60 Minutes. Amazon Prime Air will use multirotor Miniature Unmanned Air Vehicle (Miniature UAV, otherwise known as drone) technology to autonomously fly individual packages to customers’ doorsteps within 30 minutes of ordering.[1] To qualify for 30-minute delivery, the order must be less than five pounds (2.26 kg), must be small enough to fit in the cargo box that the craft will carry, and must have a delivery location within a ten-mile radius of a participating Amazon order fulfillment center.[1] 86% of packages sold by Amazon fit the weight qualification of the program.

Regulations

Presently, the biggest hurdle facing Amazon Prime Air is that commercial use of UAV technology is not yet legal in the United States.[2] In the FAA Modernization and Reform Act of 2012, Congress issued the Federal Aviation Administration a deadline of September 30, 2015 to accomplish a “safe integration of civil unmanned aircraft systems into the national airspace system.”[3]

In March 2015 the US Federal Aviation Administration (FAA) granted Amazon permission to begin US testing of a prototype. The company responded by claiming that the vehicle cleared for use was obsolete. In April 2015, the agency allowed the company to begin testing its current models. In the interim, the company had begun testing at a secret Canadian site 2,000 ft (610 m) from the US border.[4]

The agency mandated that Amazon’s drones fly no higher than 400 ft (122 m), no faster than 100 mph (161 km/h), and remain within the pilot’s line of sight. These rules are consistent with a proposed set of FAA guidelines. Ultimately, Amazon hopes to operate in a slice of airspace above 200 ft (61 m) and beneath 500 ft (152 m), with 500 ft being where general aviation begins. It plans to fly drones weighing a maximum of 55 lb (25 kg) within a 10 mi (16 km) radius of its warehouses, at speeds of up to 50 mph (80.5 km/h) with packages weighing up to 5 lb (2.26 kg) in tow.[5]

Public concerns

Public concerns regarding this technology include public safety, privacy, and package security issues.[2] Amazon states that “Safety will be our top priority, and our vehicles will be built with multiple redundancies and designed to commercial aviation standards.”[6] However, while privacy and security remain concerns, the FAA’s recently proposed rules for small UAS operations and certification address only technical and functional aspects.[7]

The fact that the drones’ navigational airspace lies below 500 feet is a big step toward safety management.[8]

Privacy

The drones’ constant connection to the Internet raises concerns over personal privacy. The primary purpose of the drones’ Internet connection will be to manage flight controls and communication between drones.[9] However, the extent of Amazon’s data collection from the drones is unclear.[10] Some proposed data inputs include automated object detection, GPS surveillance, gigapixel cameras, and enhanced image resolution.[11] Because of this, Amazon’s operating center will collect unknown amounts of information, both intentionally and unintentionally, throughout the delivery process. Neither Amazon nor the FAA has formed a clear policy on the management of this data.

The Common Vulnerability Scoring System (CVSS)

The Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities. CVSS attempts to assign severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to threat. Scores are calculated based on a formula that depends on several metrics that approximate the ease and impact of exploitation. Scores range from 0 to 10, with 10 being the most severe. While many utilize only the CVSS Base score for determining severity, Temporal and Environmental scores also exist, to factor in the availability of mitigations and how widespread vulnerable systems are within an organization, respectively.

The current version of CVSS is CVSSv3.0, released in June 2015.
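As an illustration of how a base score is computed, here is a short Python sketch following the formulas in the public CVSS v3.0 specification: an impact sub-score and an exploitability sub-score are combined and rounded up to one decimal. The weights are transcribed from the spec as we recall it; verify against the official calculator before relying on them.

```python
import math

# Metric weights per the CVSS v3.0 specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (Scope unchanged)
PR_C = {"N": 0.85, "L": 0.68, "H": 0.50}              # Privileges Required (Scope changed)
UI  = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                # Confidentiality/Integrity/Availability

def roundup(x):
    """Smallest number with one decimal place that is >= x."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope, c, i, a):
    pr_weight = (PR_C if scope == "C" else PR_U)[pr]
    exploitability = 8.22 * AV[av] * AC[ac] * pr_weight * UI[ui]
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope == "C":
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    if scope == "C":
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# Example vector CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))
```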

Tor

Tor is free software for enabling anonymous communication. The name is an acronym derived from the original software project name, The Onion Router;[7] however, the correct spelling is “Tor”, capitalizing only the first letter.[8] Tor directs Internet traffic through a free, worldwide, volunteer network consisting of more than seven thousand relays[9] to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult for Internet activity to be traced back to the user: this includes “visits to Web sites, online posts, instant messages, and other communication forms”.[10] Tor’s use is intended to protect the personal privacy of users, as well as their freedom and ability to conduct confidential communication by keeping their Internet activities from being monitored.

Onion routing is implemented by encryption in the application layer of a communication protocol stack, nested like the layers of an onion. Tor encrypts the data, including the destination IP address, multiple times and sends it through a virtual circuit comprising successive, randomly selected Tor relays. Each relay decrypts a layer of encryption to reveal only the next relay in the circuit in order to pass the remaining encrypted data on to it. The final relay decrypts the innermost layer of encryption and sends the original data to its destination without revealing, or even knowing, the source IP address. Because the routing of the communication is partly concealed at every hop in the Tor circuit, this method eliminates any single point at which the communicating peers can be determined through network surveillance that relies upon knowing its source and destination.
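The layered-encryption idea can be illustrated with a toy Python sketch using the `cryptography` package's Fernet recipe: the client wraps the payload in one encryption layer per relay, and each relay peels exactly one layer. This is only a sketch of the nesting concept; real Tor circuits also carry next-hop routing data inside each layer and negotiate per-hop keys over the circuit rather than sharing them in advance.

```python
from cryptography.fernet import Fernet

# Three hypothetical relays, each holding its own symmetric key.
relays = ["relay1", "relay2", "relay3"]
keys = {r: Fernet.generate_key() for r in relays}

# Client side: apply the exit relay's layer first, so the first relay's
# layer ends up outermost.
message = b"GET / HTTP/1.1"
onion = message
for r in reversed(relays):
    onion = Fernet(keys[r]).encrypt(onion)

# Each relay in turn peels exactly one layer and forwards the rest;
# only the final relay ever sees the plaintext.
for r in relays:
    onion = Fernet(keys[r]).decrypt(onion)
print(onion)  # b"GET / HTTP/1.1"
```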

An adversary might try to de-anonymize the user by some means. One way this may be achieved is by exploiting vulnerable software on the user’s computer.[11] The NSA has a technique that targets outdated Firefox browsers codenamed EgotisticalGiraffe,[12] and targets Tor users in general for close monitoring under its XKeyscore program.[13] Attacks against Tor are an active area of academic research,[14][15] which is welcomed by the Tor Project itself.[16]

Computer emergency response teams (CERT)

Computer emergency response teams (CERT) are expert groups that handle computer security incidents. Alternative names for such groups include computer emergency readiness team and computer security incident response team (CSIRT).

The name “Computer Emergency Response Team” was first used by the CERT Coordination Center (CERT-CC) at Carnegie Mellon University (CMU). The abbreviation CERT of the historic name was picked up by other teams around the world. Some teams took on the more specific name of CSIRT to point out the task of handling computer security incidents instead of other tech support work, and because CMU was threatening to take legal action against individuals or organisations who referred to any other team than CERT-CC as a CERT. After the turn of the century, CMU relaxed its position, and the terms CERT and CSIRT are now used interchangeably.

The history of CERTs is linked to the existence of malware, especially computer worms and viruses. Whenever a new technology arrives, its misuse is not long in following. The first worm in the IBM VNET was covered up. Shortly afterwards, on 3 November 1988, the so-called Morris Worm paralysed a good percentage of the Internet. This led to the formation of the first computer emergency response team at Carnegie Mellon University under U.S. Government contract. With the massive growth in the use of information and communications technologies over the subsequent years, the now-generic term ‘CERT’/‘CSIRT’ refers to an essential part of most large organisations’ structures. In many organisations the CERT evolves into an information security operations center.

Stuxnet

An Unprecedented Look at Stuxnet, the World’s First Digital Weapon


Stuxnet is a malicious computer worm believed to be a jointly built American-Israeli cyber weapon.[1] Although neither state has confirmed this openly,[2] anonymous US officials speaking to the Washington Post claimed the worm was developed during the Obama administration to sabotage Iran’s nuclear program with what would seem like a long series of unfortunate accidents.[3]

Stuxnet specifically targets programmable logic controllers (PLCs), which allow the automation of electromechanical processes such as those used to control machinery on factory assembly lines, amusement rides, or centrifuges for separating nuclear material. Exploiting four zero-day flaws,[4] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[5] Stuxnet’s design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in automobile or power plants), the majority of which reside in Europe, Japan and the US.[6] Stuxnet reportedly ruined almost one-fifth of Iran’s nuclear centrifuges.[7]

Stuxnet has three modules: a worm that executes all routines related to the main payload of the attack; a link file that automatically executes the propagated copies of the worm; and a rootkit component responsible for hiding all malicious files and processes, preventing detection of the presence of Stuxnet.[8]

Stuxnet is typically introduced to the target environment via an infected USB flash drive. The worm then propagates across the network, scanning for Siemens Step7 software on computers controlling a PLC. If either condition is absent, Stuxnet lies dormant inside the computer. If both conditions are fulfilled, Stuxnet introduces the infected rootkit onto the PLC and Step7 software, modifying the code and giving unexpected commands to the PLC while feeding a loop of normal operating values back to the users.[9][10]

In 2015, Kaspersky Lab‘s research into another highly sophisticated espionage platform, created by what it called the Equation Group, noted that the group had used two of the same zero-day attacks used by Stuxnet before they were used in Stuxnet, and that their use in both programs was similar. The researchers reported that “the similar type of usage of both exploits together in different computer worms, at around the same time, indicates that the EQUATION group and the Stuxnet developers are either the same or working closely together”.[11]:13

Continue reading “Stuxnet”