Anki is a program which makes remembering things easy. Because it’s a lot more efficient than traditional study methods, you can either greatly decrease your time spent studying, or greatly increase the amount you learn.
Anyone who needs to remember things in their daily life can benefit from Anki. Since it is content-agnostic and supports images, audio, videos and scientific markup (via LaTeX), the possibilities are endless.
- Learning a language
- Studying for medical and law exams
- Memorizing people’s names and faces
- Brushing up on geography
- Mastering long poems
- Even practicing guitar chords!
Tor is free software for enabling anonymous communication. The name is an acronym derived from the original software project name, The Onion Router; the correct spelling, however, is “Tor”, capitalizing only the first letter. Tor directs Internet traffic through a free, worldwide, volunteer network consisting of more than seven thousand relays to conceal a user’s location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult for Internet activity to be traced back to the user: this includes “visits to Web sites, online posts, instant messages, and other communication forms”. Tor’s use is intended to protect the personal privacy of users, as well as their freedom and ability to conduct confidential communication, by keeping their Internet activities from being monitored.
Onion routing is implemented by encryption in the application layer of a communication protocol stack, nested like the layers of an onion. Tor encrypts the data, including the destination IP address, multiple times and sends it through a virtual circuit comprising successive, randomly selected Tor relays. Each relay decrypts a layer of encryption to reveal only the next relay in the circuit, in order to pass the remaining encrypted data on to it. The final relay decrypts the innermost layer of encryption and sends the original data to its destination without revealing, or even knowing, the source IP address. Because the routing of the communication is partly concealed at every hop in the Tor circuit, this method eliminates any single point at which the communicating peers can be determined through network surveillance that relies upon knowing its source and destination.
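The layering described above can be sketched in a few lines of Python. This is a toy illustration only: XOR with per-relay keys stands in for Tor's real cryptography, and the relay names are invented.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption: XOR with a repeating key.
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Three hypothetical relays, each sharing a symmetric key with the client.
relay_keys = {"guard": os.urandom(16), "middle": os.urandom(16), "exit": os.urandom(16)}
circuit = ["guard", "middle", "exit"]

def wrap(message: bytes) -> bytes:
    # The client adds layers from the inside out (exit first, guard last),
    # so each relay can strip exactly one layer.
    for relay in reversed(circuit):
        message = xor_bytes(message, relay_keys[relay])
    return message

def route(cell: bytes) -> bytes:
    # Each relay in turn removes its layer and forwards the rest;
    # only the exit relay ever sees the original payload.
    for relay in circuit:
        cell = xor_bytes(cell, relay_keys[relay])
    return cell

original = b"GET /index.html"
assert route(wrap(original)) == original
```

Only after the last layer is peeled does the plaintext appear, which is why no single relay can link source and destination.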
An adversary might try to de-anonymize the user by some means. One way this may be achieved is by exploiting vulnerable software on the user’s computer. The NSA has a technique, codenamed EgotisticalGiraffe, that targets outdated Firefox browsers, and it targets Tor users in general for close monitoring under its XKeyscore program. Attacks against Tor are an active area of academic research, which is welcomed by the Tor Project itself.
BUGS is a software package for performing Bayesian inference Using Gibbs Sampling. The user specifies a statistical model, of (almost) arbitrary complexity, by simply stating the relationships between related variables. The software includes an ‘expert system’, which determines an appropriate MCMC (Markov chain Monte Carlo) scheme (based on the Gibbs sampler) for analysing the specified model. The user then controls the execution of the scheme and is free to choose from a wide range of output types.
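BUGS itself generates the sampling scheme automatically, but the core idea of Gibbs sampling is easy to illustrate by hand. The sketch below (a minimal stand-alone example, not BUGS code) alternately draws each coordinate of a correlated bivariate normal from its full conditional distribution; the correlation 0.8 and the burn-in length are arbitrary choices for the example.

```python
import random

def gibbs_bivariate_normal(rho: float, n_samples: int, burn_in: int = 500):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is itself normal:
        x | y ~ N(rho * y, 1 - rho^2)
        y | x ~ N(rho * x, 1 - rho^2)
    so each update is a single one-dimensional draw.
    """
    sd = (1.0 - rho * rho) ** 0.5
    x, y = 0.0, 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = random.gauss(rho * y, sd)   # draw x from p(x | y)
        y = random.gauss(rho * x, sd)   # draw y from p(y | x)
        if i >= burn_in:                # discard the warm-up draws
            samples.append((x, y))
    return samples

random.seed(0)
draws = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(x for x, _ in draws) / len(draws)
```

After burn-in, the empirical means approach 0 and the empirical correlation approaches rho, even though no joint draw is ever made directly.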
There are two main versions of BUGS, namely WinBUGS and OpenBUGS. This site is dedicated to OpenBUGS, an open-source version of the package, on which all future development work will be focused. OpenBUGS, therefore, represents the future of the BUGS project. WinBUGS, on the other hand, is an established and stable, stand-alone version of the software, which will remain available but not be further developed. The latest versions of OpenBUGS (from v3.0.7 onwards) have been designed to be at least as efficient and reliable as WinBUGS over a wide range of test applications. Please see the WinBUGS pages for more information. OpenBUGS runs on x86 machines with MS Windows, Unix/Linux or Macintosh (using Wine).
Note that software exists to run OpenBUGS (and analyse its output) from within both R and SAS, amongst others.
For additional details on the differences between OpenBUGS and WinBUGS see the OpenVsWin manual page.
‘For a long time I would go to bed early. Sometimes, the candle barely out, my eyes closed so quickly that I did not have the time to tell myself: I’m falling asleep.’
Marcel Proust, In Search of Lost Time
Taking the code literally
The performers read the machine-code version of Marcel Proust’s novel. For the eight hours of a working day the humans play computer. For this purpose the text is first deconstructed into its individual parts, the letters and characters, which in turn are encoded in ASCII, the code underlying digital text processing. Each letter is represented by an individual sequence of signs consisting of zeros and ones. The performance is situated in an ironic lab situation and attempts to find beauty inside the microstructures of the digital. In the act of reading, interpreting and presenting, the work of art emerges, posing questions about the nature of the digital and the analogue, of work and art, time and beauty.
From the analog to the digital and back again
The sequence of events of the performance is described in this manual.
Starting from the ASCII version of Marcel Proust’s novel ‘A la recherche du temps perdu’, the text is re-coded into zeros and ones and then read aloud by two performers alternately (one reading the zeros, the other the ones). The third person is the CPU (Central Processing Unit): she interprets the zeros and ones with the aid of an ASCII allocation table, cuts out the corresponding letter from the prepared sheets and hands it over to Display, who sticks it onto the wall panel.
After eight hours of performance about 250 characters can be processed.
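The encode/decode cycle the performers carry out by hand can be sketched as follows (an illustrative sketch; the novel's opening line is given without accents so it stays within 7-bit ASCII):

```python
def to_bits(text: str) -> str:
    # Each character becomes its 8-bit ASCII code, the zeros and
    # ones that the two readers speak aloud.
    return " ".join(format(ord(c), "08b") for c in text)

def from_bits(bits: str) -> str:
    # The "CPU" step: look each 8-bit group up in the ASCII table
    # to recover the letter.
    return "".join(chr(int(group, 2)) for group in bits.split())

# Opening line of the novel (accents omitted to stay within ASCII).
line = "Longtemps, je me suis couche de bonne heure."
assert from_bits(to_bits(line)) == line
```

At eight bits per character, the roughly 250 characters processed in a performance correspond to about 2,000 spoken zeros and ones.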
A five-minute extract from the performance “A la recherche du temps perdu” on 20 March 2006 in SPACE, London, during the xxxxx festival 2006 (http://1010.co.uk/xxxxx_arch.html). Performance by Karl Heinz Jeron and Valie Djordjevic. More info on http://khjeron.de/alarecherche
We are using the electronic versions of the first three parts of ‘A la recherche du temps perdu’ from Project Gutenberg.
The performance is licensed under the GNU General Public License.
First performance on 19 November 2005 at the allgirls gallery in Berlin.
True: Valie Djordjevic
False: Karl Heinz Jeron
CPU: Heissam el-Wardany
Display: Dani Djordjevic
Second performance on 20 March 2006 in SPACE, London, as part of the xxxxx event series.
True: Verena Brückner
False: Florian Kmet
CPU: Thomas Hörl
Display: Peter Kozek
When first discovered in 2010, the Stuxnet computer worm posed a baffling puzzle. Beyond its sophistication loomed a more troubling mystery: its purpose. Ralph Langner and team helped crack the code that revealed this digital warhead’s final target. In a fascinating look inside cyber-forensics, he explains how — and makes a bold (and, it turns out, correct) guess at its shocking origins.
Ralph Langner’s Stuxnet Deep Dive is the definitive technical presentation on the PLC attack portion of Stuxnet. He does a good job of presenting very technical details in a readable, logical way that you can follow in the video if you know something about programming and PLCs.
The main purpose of Ralph’s talk was to convince the audience with “100% certainty” that Stuxnet was designed specifically to attack the Natanz facility. He does this at least four different ways, and I have to agree there is no doubt.
Ralph Langner is a German control system security consultant. He has received worldwide recognition for his analysis of the Stuxnet malware.
- Stuxnet worm hits Iranian centrifuges – from mid-2009 to late 2010
- Iran complains facilities hit by Stars malware – April 2011
- Duqu trojan hits Iran’s computer systems – November 2011
- Flame virus targets PCs across the Middle East, including Iran and Israel – June 2012
- Iran says Stuxnet worm returns – December 2012
Somebody asked how one may count the number of floating point operations in a MATLAB program.
Prior to version 6, one used to be able to do this with the command flops, but this command is no longer available in newer versions of MATLAB. flops is a relic from the LINPACK days of MATLAB (LINPACK has since been replaced by LAPACK). With the use of LAPACK in MATLAB, it is more appropriate to use tic and toc to measure elapsed time instead. If you're interested to know why flops is obsolete, you may wish to read the exchanges in NA Digest on the subject.
Nevertheless, if you feel that you really do need a command to count floating point operations in MATLAB, what you can do is to install Tom Minka's Lightspeed MATLAB toolbox and use the flops counting operations therein.
To count flops, we first need to know what they are. What is a flop?
LAPACK is not the only place where the question "what is a flop?" is
relevant. Sparse matrix codes are another. Multifrontal and supernodal
factorization algorithms store L and U (and intermediate submatrices, for
the multifrontal method) as a set of dense submatrices. It's more
efficient that way, since the dense BLAS can be used within the dense
submatrices. It is often better to explicitly store some of the numerical
zeros, so that one ends up with fewer frontal matrices or supernodes.
So what happens when I compute zero times zero plus zero? Is that a flop
(or two flops)? I computed it, so one could argue that it counts. But it
was useless, so one could argue that it shouldn't count. Computing it
allowed me to use more BLAS-3, so I get a faster algorithm that happens to
do some useless flops. How do I compare the "mflop rate" of two
algorithms that make different decisions on what flops to perform and
which of those to include in the "flop count"?
A somewhat better measure would be to compare the two algorithms based on an
external count. For example, the "true" flop counts for sparse LU
factorization can be computed in Matlab from the pattern of L and U as:
[L,U,P] = lu (A) ;
Lnz = full (sum (spones (L))) - 1 ; % off diagonal nz in cols of L
Unz = full (sum (spones (U')))' - 1 ; % off diagonal nz in rows of U
flops = 2*Lnz*Unz + sum (Lnz) ;
The same can be done on the LU factors found by any other factorization
code. This does count a few spurious flops, namely the computation a_ij +
l_ik*u_kj is always counted as two flops, even if a_ij is initially zero.
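The same pattern-based count can be sketched in plain Python; the 3x3 factors below are invented for illustration.

```python
def lu_flops(L, U):
    # "True" flop count for LU factorization, derived only from the
    # nonzero patterns of L and U (mirrors the MATLAB snippet above):
    #   2 * sum_k Lnz[k] * Unz[k]   multiplies and adds in the updates,
    #   plus sum(Lnz)               divisions by the pivots.
    n = len(L)
    # Off-diagonal nonzeros in each column of L.
    Lnz = [sum(1 for i in range(n) if L[i][j] != 0) - 1 for j in range(n)]
    # Off-diagonal nonzeros in each row of U.
    Unz = [sum(1 for j in range(n) if U[i][j] != 0) - 1 for i in range(n)]
    return 2 * sum(l * u for l, u in zip(Lnz, Unz)) + sum(Lnz)

# Dense 3x3 example: step 1 costs 2 divisions + 8 update flops,
# step 2 costs 1 division + 2 update flops, 13 in total.
L = [[1, 0, 0], [2, 1, 0], [3, 4, 1]]
U = [[1, 2, 3], [0, 1, 2], [0, 0, 1]]
assert lu_flops(L, U) == 13
```

As in the MATLAB version, the count depends only on the sparsity patterns of the factors, so it can be applied to the output of any factorization code.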
However, even with this "better" measure, the algorithm that does more
flops can be much faster. You're better off picking the algorithm with
the smallest memory space requirements (which is not always the smallest
nnz (L+U)) and/or fastest run time.
So my vote is to either leave out the flop count, or at most return a
reasonable agreed-upon estimate (like the "true flop count" for LU, above)
that is somewhat independent of algorithmic details. Matrix multiply, for
example, should report 2*n^3, as Cleve states in his Winter 2000
newsletter, even though "better" methods with fewer flops (Strassen's
method) are available.
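The 2*n^3 figure corresponds to crediting one multiply and one add for every iteration of the naive triple loop, a convention that can be checked directly in a short sketch:

```python
def matmul_flop_count(n: int) -> int:
    # Naive triple-loop matrix multiply, counting one multiply and one
    # add per inner iteration: n * n * n iterations -> 2 * n^3 flops.
    A = [[1.0] * n for _ in range(n)]
    B = [[1.0] * n for _ in range(n)]
    C = [[0.0] * n for _ in range(n)]
    flops = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                flops += 2
    return flops

assert matmul_flop_count(4) == 2 * 4 ** 3
```

Reporting this conventional figure regardless of the algorithm actually used is exactly the point: it keeps "mflop rates" comparable across implementations, Strassen-like methods included.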
University of Florida
1: Radial aircraft engine
2: Oval distribution
3: Sewing machine principle
4: Maltese cross movement – from the second hand, which controls the clock
5: Gear-shift mechanism (automobile)
6: Universal joint for automatic constant velocity
7: Projectile-loading system
8: Rotary engine – an internal combustion engine in which the heat, rather than the piston's movement, causes the rotary motion
9: Inline engine – cylinders aligned in parallel