BLOGGER: LINKING TO A PDF OR WORD DOCUMENT IN A POST
http://www.blogsbyheather.com/2009/01/blogger-linking-to-a-pdf-or-word-document-in-a-post.html
São Paulo, Brazil. After an experiment in which monkeys moved and felt objects using only their minds, scientists believe they are one step closer to enabling paralyzed people to walk and to use artificial arms.
The animals were able to operate a virtual arm to search for objects using only their brain activity, which was picked up by implants, a so-called brain-machine interface.
The primates were also able to experience the sensation of touch, a crucial element of any solution for paralyzed people, because it lets them judge how much force to use when grasping and handling objects.
"This was one of the most difficult steps, and the fact that we achieved it opens the door to the dream of a person being able to walk again," said Miguel Nicolelis, a Brazilian neuroscientist who was part of the study carried out by a team at Duke University in North Carolina.
The results suggest it would be possible to build a kind of robotic "exoskeleton" that people could wear to feel objects, he said.
"The success we have had with primates makes us believe that humans could perform the same tasks much more easily in the future," Nicolelis said.
The study was published today in the journal Nature.
In the first part of the experiment, rhesus monkeys were rewarded with food for using their hands to operate a joystick and search for objects on a computer screen.
The joystick was then disconnected, leaving the monkeys to control a virtual arm on the screen through brain power alone.
Nicolelis said his goal is to use the technology to allow a young paraplegic athlete to take part in the opening ceremony of the 2014 soccer World Cup in Brazil.
Starting in 2012, the study will move to Brazil, Nicolelis said, and will be put into practice at the Neuroscience Institute in Natal, in the northeast of the country.
A QR code (abbreviated from Quick Response code) is a type of matrix barcode (or two-dimensional code) first designed for the automotive industry. More recently, the system has become popular outside of industry due to its fast readability and comparatively large storage capacity. The code consists of black modules arranged in a square pattern on a white background. The information encoded can be made up of any kind of data (e.g., binary, alphanumeric, or Kanji symbols)[1]
1: Radial engine of an airplane
2: Oval gears
3: Principle of the sewing machine
4: Maltese cross movement – drives the second hand, which controls the clock
5: Gear-shifting mechanism (automobile)
6: Universal joint for automatic constant velocity
7: Projectile loading system
8: Rotary engine – internal combustion engine; the heat, and not the movement of a piston, causes the rotary motion
9: Inline engine – cylinders aligned in parallel
Somebody asked how one may count the number of floating point operations in a MATLAB program.
Prior to version 6, one used to be able to do this with the command flops, but this command is no longer available in newer versions of MATLAB. flops is a relic from the LINPACK days of MATLAB (LINPACK has since been replaced by LAPACK). With the use of LAPACK in MATLAB, it is more appropriate to use tic and toc to measure elapsed time instead (cf. tic, toc).
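For example, a minimal timing sketch along those lines (the problem size n and the random test system below are arbitrary choices for illustration):

n = 1000 ;                   % arbitrary problem size for illustration
A = rand (n) ;
b = rand (n, 1) ;
tic ;                        % start the stopwatch
x = A \ b ;                  % solve the linear system
t = toc ;                    % elapsed (wall-clock) time in seconds
fprintf ('Solved a %d-by-%d system in %.3f seconds\n', n, n, t) ;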
If you're interested to know why flops is obsolete, you may wish to read the exchanges in NA digest regarding flops.
Nevertheless, if you feel that you really do need a command to count floating point operations in MATLAB, what you can do is to install Tom Minka's Lightspeed MATLAB toolbox and use the flops counting operations therein.
To count flops, we need to first know what they are. What is a flop?
LAPACK is not the only place where the question "what is a flop?" is
relevant. Sparse matrix codes are another. Multifrontal and supernodal
factorization algorithms store L and U (and intermediate submatrices, for
the multifrontal method) as a set of dense submatrices. It's more
efficient that way, since the dense BLAS can be used within the dense
submatrices. It is often better to explicitly store some of the numerical
zeros, so that one ends up with fewer frontal matrices or supernodes.
So what happens when I compute zero times zero plus zero? Is that a flop
(or two flops)? I computed it, so one could argue that it counts. But it
was useless, so one could argue that it shouldn't count. Computing it
allowed me to use more BLAS-3, so I get a faster algorithm that happens to
do some useless flops. How do I compare the "mflop rate" of two
algorithms that make different decisions on what flops to perform and
which of those to include in the "flop count"?
A somewhat better measure would be to compare the two algorithms based on an
external count. For example, the "true" flop counts for sparse LU
factorization can be computed in Matlab from the pattern of L and U as:
[L,U,P] = lu (A) ;                    % sparse LU factorization, P*A = L*U
Lnz = full (sum (spones (L))) - 1 ;   % off-diagonal nonzeros in the columns of L
Unz = full (sum (spones (U')))' - 1 ; % off-diagonal nonzeros in the rows of U
flops = 2*Lnz*Unz + sum (Lnz) ;       % 2 flops per multiply-add update, plus one divide per off-diagonal entry of L
The same can be done on the LU factors found by any other factorization
code. This does count a few spurious flops, namely the computation a_ij +
l_ik*u_kj is always counted as two flops, even if a_ij is initially zero.
However, even with this "better" measure, the algorithm that does more
flops can be much faster. You're better off picking the algorithm with
the smallest memory space requirements (which is not always the smallest
nnz (L+U)) and/or fastest run time.
So my vote is to either leave out the flop count, or at most return a
reasonable agreed-upon estimate (like the "true flop count" for LU, above)
that is somewhat independent of algorithmic details. Matrix multiply, for
example, should report 2*n^3, as Cleve states in his Winter 2000
newsletter, even though "better" methods with fewer flops (Strassen's
method) are available.
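A minimal sketch of reporting that conventional count as a rate (the size n is an arbitrary choice here, and tic/toc stands in for whatever timer is preferred):

n = 500 ;                    % arbitrary size for illustration
A = rand (n) ;
B = rand (n) ;
tic ;
C = A*B ;                    % one dense matrix multiply
t = toc ;
mflops = 2*n^3 / t / 1e6 ;   % conventional 2*n^3 count, regardless of method
fprintf ('matrix multiply: %.1f Mflop/s\n', mflops) ;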
Tim Davis
University of Florida
davis@cise.ufl.edu
x = A \ b ;
Is A square?  no => use QR to solve least squares problem.
Is A triangular?  yes => sparse triangular solve.
Is A symmetric with a positive diagonal?  yes => attempt Cholesky after symmetric minimum degree.
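A rough MATLAB sketch of that kind of dispatch follows; it is not MATLAB's actual backslash code, and issymmetric, istril/istriu, symamd, chol, qr, and lu are simply the stock functions assumed here for illustration:

[m, n] = size (A) ;
if m ~= n
    [C, R] = qr (A, b) ;            % overdetermined: sparse QR, C = Q'*b
    x = R \ C ;                     % least squares solution
elseif istril (A) || istriu (A)
    x = A \ b ;                     % already triangular: substitution only
elseif issymmetric (A) && all (diag (A) > 0)
    p = symamd (A) ;                % symmetric minimum degree ordering
    [R, flag] = chol (A (p,p)) ;
    if flag == 0
        x = zeros (n, 1) ;
        x (p) = R \ (R' \ b (p)) ;  % Cholesky solve on the permuted system
    else
        [L, U, P] = lu (A) ;        % not positive definite: fall back to LU
        x = U \ (L \ (P*b)) ;
    end
else
    [L, U, P] = lu (A) ;            % general square case: sparse LU
    x = U \ (L \ (P*b)) ;
end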
What is the computational complexity of inverting an nxn matrix? (In
general, not special cases such as a triangular matrix.)
Gaussian Elimination leads to O(n^3) complexity. The usual way to
count operations is to count one for each "division" (by a pivot) and
one for each "multiply-subtract" when you eliminate an entry.
Here's one way of arriving at the O(n^3) result:
At the beginning, when the first row has length n, it takes n
operations to zero out any entry in the first column (one division,
and n-1 multiply-subtracts to find the new entries along the row
containing that entry). To get the first column of zeroes therefore
takes n(n-1) operations.
In the next column, we need (n-1)(n-2) operations to get the second
column zeroed out.
In the third column, we need (n-2)(n-3) operations.
The sum of all of these operations is:
SUM_{i=1}^{n} i(i-1) = SUM_{i=1}^{n} i^2 - SUM_{i=1}^{n} i = n(n+1)(2n+1)/6 - n(n+1)/2 = (n^3 - n)/3
which goes as O(n^3). To finish the operation count for Gaussian
Elimination, you'll need to tally up the operations for the process
of back-substitution (you can check that this doesn't affect the
leading order of n^3).
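A small MATLAB sketch that tallies these elimination operations and checks them against the closed form above (the size n is an arbitrary choice):

n = 8 ;                                 % arbitrary matrix size for the check
count = 0 ;
for k = 1:n-1                           % zeroing out the entries below column k
    for i = k+1:n                       % each entry below the pivot
        count = count + 1 ;             % one division by the pivot
        count = count + (n - k) ;       % n-k multiply-subtracts along that row
    end
end
closed_form = n*(n+1)*(2*n+1)/6 - n*(n+1)/2 ;   % = (n^3 - n)/3
fprintf ('loop count = %d, closed form = %d\n', count, closed_form) ;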
You might think that the O(n^3) complexity is optimal, but in fact
there exists a method (Strassen's method) that requires only
O(n^log_2(7)) = O(n^2.807...) operations for a completely general
matrix. Of course, there is a constant C in front of the n^2.807. This
constant is not small (between 4 and 5), and the programming of
Strassen's algorithm is so awkward that Gaussian Elimination is often
still the preferred method.
Even Strassen's method is not optimal. I believe that the current
record stands at O(n^2.376), thanks to Don Coppersmith and Shmuel
Winograd. Here is a Web page that discusses these methods:
Fast Parallel Matrix Multiplication - Strategies for Practical
Hybrid Algorithms - Erik Ehrling
http://www.f.kth.se/~f95-eeh/exjobb/background.html
These methods exploit the close relation between matrix inversion and
matrix multiplication (which is also an O(n^3) task at first glance).
I hope this helps!
- Doctor Douglas, The Math Forum
http://mathforum.org/dr.math/