Eclipse, a universal tool – an open and extensible IDE

Eclipse: a professional tool within everyone's reach. Although Eclipse is written mostly in Java (except for its core) and its most popular use is as a Java IDE, Eclipse is language-neutral and can be adapted to any kind of language, for example C/C++, Cobol, C#, XML, and so on. The key characteristic of Eclipse is extensibility. Eclipse is a large structure made up of a core and many plug-ins that together provide the final functionality. Plug-ins interact through interfaces, or extension points, so new contributions integrate without difficulty or conflicts.

Eclipse was the product of a forty-million-dollar investment by IBM before it was offered as an open source product to the Eclipse.org consortium, initially made up of Borland and IBM. IBM continues to direct Eclipse development through its subsidiary OTI (Object Technologies International), the creator of Eclipse. OTI was acquired by IBM in 1996 and had established itself as a major developer of object-oriented (OO) tools since the days when the Smalltalk language was popular. OTI was the IBM division that produced the Visual Age products, which set the standard for object-oriented development tools. Many concepts pioneered in Smalltalk were applied to Java in Visual Age for Java (VA4J), which was itself written in Smalltalk. Eclipse is a rewrite of VA4J in Java. The foundation of Eclipse is the Rich Client Platform (RCP), made up of the following components:

  • Core platform – boots Eclipse, runs plug-ins
  • OSGi – a standard bundling framework
  • The Standard Widget Toolkit (SWT) – a portable widget toolkit
  • JFace – file handling, text handling, text editors
  • The Eclipse Workbench – views, editors, perspectives, wizards

Eclipse's widgets are implemented with a Java widget toolkit called SWT, unlike most Java applications, which use the standard Abstract Window Toolkit (AWT) or Swing. Eclipse's user interface also has an intermediate GUI layer called JFace, which simplifies building applications based on SWT.

The Eclipse integrated development environment (IDE) uses plug-ins to provide all of its functionality on top of the Rich Client Platform, unlike monolithic environments where all functionality is built in whether the user needs it or not. This plug-in mechanism is a lightweight platform for software components. Support for Java and CVS is provided in the Eclipse SDK. For client applications, Eclipse gives the programmer rich frameworks for developing graphical applications, defining and manipulating software models, building web applications, and so on. For example, GEF (Graphical Editing Framework) is an Eclipse plug-in for developing visual editors that can range from WYSIWYG word processors to UML diagram editors, graphical user interfaces (GUIs), and more.

The Eclipse SDK includes the Java development tools, offering an IDE with a built-in Java compiler and a complete model of the Java source files. This allows advanced refactoring techniques and code analysis. The IDE also uses a workspace, in this case a set of metadata over a flat file space, which allows external modifications to the files as long as the corresponding workspace resources are refreshed.

Core: its job is to determine which plug-ins are available in the Eclipse plug-ins directory. Each plug-in has an XML manifest file that lists the elements it needs from other plug-ins as well as the extension points it offers. Since the number of plug-ins can be very large, only the ones needed are loaded at the moment they are used, in order to minimize Eclipse's startup time and resource usage.

Workspace: manages the user's resources, organized into one or more projects. Each project corresponds to a directory inside the Eclipse working directory and contains files and folders.

User interface: displays the menus and toolbars, and is organized into perspectives that arrange the code editors and views. Unlike many applications written in Java, Eclipse looks and behaves like a native application. It is programmed with SWT (Standard Widget Toolkit) and JFace (a toolkit built on top of SWT), which uses the native widgets of each operating system. This has been a debated aspect of Eclipse, because SWT must be ported to each operating system to interact with its graphics system. In Java projects AWT and Swing can still be used, except when developing a plug-in for Eclipse.

Eclipse downloads come as distributions with different combinations of plug-ins depending on the intended use of the tool. One problem with these distributions is that on Windows XP the built-in unzipper sometimes fails, and it is preferable to use an external program such as 7-Zip, WinZip, or Info-ZIP.
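As a small illustration of the workspace and plug-in loading described above, the Eclipse launcher accepts a few useful command-line options (the workspace path is just an example):

# start Eclipse against a specific workspace directory
./eclipse -data ~/workspace-java

# clear the cached plug-in registry when newly installed plug-ins fail to show up
./eclipse -clean -data ~/workspace-java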

Subversion

What is Subversion?

Subversion is a free, open source version control system. That is, Subversion manages files and directories over time. A tree of files lives in a central repository. The repository is much like an ordinary file server, except that it remembers every change ever made to its files and directories. This lets you recover older versions of your data or examine the history of how the data changed. In this regard, many people think of a version control system as a sort of time machine.

Subversion provides:

Directory versioning
CVS only tracks the history of individual files, but Subversion implements a virtual versioned filesystem that tracks changes to whole directory trees over time. Both files and directories are under version control.
True version history
CVS is limited to file versioning. Operations such as copies and renames, which may happen to files but are really changes to the contents of their containing directory, are not supported by CVS. Additionally, in CVS you cannot replace a versioned file with something new of the same name without the new item inheriting the history of the old, perhaps completely unrelated, file. With Subversion you can add, delete, copy, and rename both files and directories, and every newly added file begins with a fresh, clean history all its own.
Atomic commits
A collection of modifications either goes into the repository completely, or not at all. This allows developers to construct and commit changes as logical chunks, and prevents problems that can occur when only a portion of a set of changes is successfully sent to the repository.
Versioned metadata
Each file and directory has an associated set of properties: keys and their values. You can create and store any arbitrary key/value pairs you wish. Properties are versioned over time, just like file contents.
Choice of network layers
Subversion has an abstracted notion of repository access, making it easy for people to implement new network mechanisms. Subversion can plug into the Apache HTTP Server as an extension module, which gives Subversion a big advantage in stability and interoperability, plus instant access to existing features provided by that server: authentication, authorization, wire compression, and so on. A more lightweight, standalone Subversion server process is also available; it speaks a custom protocol which can be easily tunneled over SSH. The default build works against Apache 2.0, but a build for Apache 2.2.4 can also be downloaded.
Consistent data handling
Subversion expresses file differences using a binary differencing algorithm, which works identically on both text (human-readable) and binary (human-unreadable) files. Both types of files are stored equally compressed in the repository, and differences are transmitted in both directions across the network.
Efficient branching and tagging
The cost of branching and tagging need not be proportional to the project size. Subversion creates branches and tags by simply copying the project, using a mechanism similar to a hard link, so these operations take only a very small, constant amount of time. (A short command sketch illustrating several of these features follows this list.)
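As a minimal sketch of the features above (the repository URL, paths, and messages are made-up examples):

# rename a file while keeping its history (true version history)
svn mv trunk/util.c trunk/stringutil.c
# the rename and any other staged changes go in as one atomic commit
svn commit -m "Rename util.c to stringutil.c"

# attach an arbitrary versioned property to a file (versioned metadata)
svn propset review-status approved trunk/stringutil.c

# create a branch and a tag as cheap copies (efficient branching and tagging)
svn copy http://svn.example.com/repos/project/trunk \
         http://svn.example.com/repos/project/branches/feature-x -m "Create feature-x branch"
svn copy http://svn.example.com/repos/project/trunk \
         http://svn.example.com/repos/project/tags/1.0 -m "Tag release 1.0"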

Subversion stores all versioned data in a central repository. TortoiseSVN is a sister project that provides integration with Windows Explorer. See Chapter 6, Server Configuration, to learn about the different types of server processes available and how to configure them. svnserve can run as a Windows service; to create the service see:

http://svn.haxx.se/dev/archive-2006-11/0348.shtml
http://httpd.apache.org/download.cgi

http://svnbook.red-bean.com/en/1.0/ch06s03.html

http://svn.collab.net/repos/svn/trunk/notes/windows-service.txt
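For a lightweight setup without Apache (the links above cover the Windows service registration and the Apache downloads), svnserve can be run as a standalone daemon and reached over its own protocol or through an SSH tunnel; the host name and repository path below are made-up examples:

# serve all repositories under /var/svn as a daemon
svnserve -d -r /var/svn

# check out a working copy over the svn:// protocol
svn checkout svn://svn.example.com/project/trunk project

# or tunnel the same protocol over SSH (no daemon needed on the server)
svn checkout svn+ssh://user@svn.example.com/var/svn/project/trunk project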

ASP.Net Security

Make sure you are very familiar with the following terms:

  • Authentication. Positively identifying the clients of your application; clients might include end-users, services, processes or computers.
  • Authorization. Defining what authenticated clients are allowed to see and do within the application.
  • Secure Communications. Ensuring that messages remain private and unaltered as they cross networks.
  • Impersonation. This is the technique used by a server application to access resources on behalf of a client. The client’s security context is used for access checks performed by the server.
  • Delegation. An extended form of impersonation that allows a server process that is performing work on behalf of a client, to access resources on a remote computer. This capability is natively provided by Kerberos on Microsoft® Windows® 2000 and later operating systems. Conventional impersonation (for example, that provided by NTLM) allows only a single network hop. When NTLM impersonation is used, the one hop is used between the client and server computers, restricting the server to local resource access while impersonating.
  • Security Context. Security context is a generic term used to refer to the collection of security settings that affect the security-related behavior of a process or thread. The attributes from a process’ logon session and access token combine to form the security context of the process.
  • Identity. Identity refers to a characteristic of a user or service that can uniquely identify it. For example, this is often a display name, which typically takes the form authority/user name.

Principles

There are a number of overarching principles that apply to the guidance. The following summarizes these principles:

  • Adopt the principle of least privilege. Processes that run script or execute code should run under a least privileged account to limit the potential damage that can be done if the process is compromised. If a malicious user manages to inject code into a server process, the privileges granted to that process determine to a large degree the types of operations the user is able to perform. Code that requires additional trust (and raised privileges) should be isolated within separate processes. The ASP.NET team made a conscious decision to run the ASP.NET account with least privileges.
  • Use defense in depth. Place check points within each of the layers and subsystems within your application. The check points are the gatekeepers that ensure that only authenticated and authorized users are able to access the next downstream layer.
  • Don’t trust user input. Applications should thoroughly validate all user input before performing operations with that input. The validation may include filtering out special characters. This preventive measure protects the application against accidental misuse or deliberate attacks by people who are attempting to inject malicious commands into the system. Common examples include SQL injection attacks, cross-site scripting attacks, and buffer overflow.
  • Use secure defaults. A common practice among developers is to use reduced security settings, simply to make an application work. If your application demands features that force you to reduce or change default security settings, test the effects and understand the implications before making the change.
  • Don’t rely on security by obscurity. Trying to hide secrets by using misleading variable names or storing them in odd file locations does not provide security; it quickly degenerates into a game of hide-and-seek. It’s better to use platform features or proven techniques for securing your data.
  • Check at the gate. You don’t always need to flow a user’s security context to the back end for authorization checks. Often, in a distributed system, this is not the best choice. Checking the client at the gate refers to authorizing the user at the first point of authentication (for example, within the Web application on the Web server), and determining which resources and operations (potentially provided by downstream services) the user should be allowed to access. If you design solid authentication and authorization strategies at the gate, you can circumvent the need to delegate the original caller’s security context all the way through to your application’s data tier.
  • Assume external systems are insecure. If you don’t own it, don’t assume security is taken care of for you.
  • Reduce surface area. Avoid exposing information that is not required. By doing so, you are potentially opening doors that can lead to additional vulnerabilities. Also, handle errors gracefully; don’t expose any more information than is required when returning an error message to the end user.
  • Fail to a secure mode. If your application fails, make sure it does not leave sensitive data unprotected. Also, do not provide too much detail in error messages; that is, don’t include details that could help an attacker exploit a vulnerability in your application. Write detailed error information to the Windows event log.
  • Remember you are only as secure as your weakest link. Security is a concern across all of your application tiers.
  • If you don’t use it, disable it. You can remove potential points of attack by disabling modules and components that your application does not require. For example, if your application doesn’t use output caching, then you should disable the ASP.NET output cache module. If a future security vulnerability is found in the module, your application is not threatened.

The following steps identify a process that will help you develop an authentication and authorization strategy for your application:

  1. Identify resources
  2. Choose an authorization strategy
  3. Choose the identities used for resource access
  4. Consider identity flow
  5. Choose an authentication approach
  6. Decide how to flow identity

Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication

Natas: chronicle of the Mexican virus

Taken from http://vx.netlux.org/lib/agm00.html

In 1992, Little Loc signed up for Prodigy to look for information about virii. Little Loc, alias of James Gentile, had at 16 written a mutating virus that spread quickly. The virus, Satan Bug, was written so that the very act of scanning a disk for infection infected every executable on it.

Satan Bug was the name of a 1970s TV series. Although Little Loc never watched the show, he saw the name in TV Guide and liked it. The icon that inspired the creation of Satan Bug was the work of Dark Avenger, a Bulgarian virus writer, and his virus Eddie, also known as Dark Avenger. Eddie used the scanning mechanisms of antivirus software to infect a machine and gradually corrupted the host's hard disk: a slow and painful death under the blades of the Dark Avenger.

Little Loc had a natural talent for writing virii, a craft he learned with no direct teacher and no formal training in programming. Following Eddie's model, Satan Bug attacked the command shell by installing itself in memory. On top of the Dark Avenger's powers, Satan Bug was encrypted and hid itself in the computer's memory. Its encryption features were based on Whale, a German virus. Whale was a heavyweight Swiss Army knife of tricks for hiding from antivirus programs.

Little Loc published Satan Bug's source code on a bulletin board and actively set about spreading his code. His motivation was to be recognized for his technical skill. Eventually, in 1993, Satan Bug infected the Secret Service's machines in Washington, D.C. and took them out of service for three days. The Secret Service pursued a line of investigation based on the hypothesis that the virus was a deliberate effort to attack United States government machines.

Little Loc changed his handle to Priest and wrote Jackal. Jackal was written as a counterattack against TBClean, an antivirus product of the Dutch company Thunderbyte, run by virus researcher Frans Veldman. One derivative of Jackal was Natas. In the same retaliatory spirit, Natas formats the hard disk when it detects the presence of TBClean.

Priest included Jackal's mechanisms for detecting antivirus programs in Natas (Satan spelled backwards), which arrived in Mexico City in the spring of 1994.

As the story goes, a consultant who sold antivirus services in Mexico City took it upon himself to propagate it vigorously. Through ignorance and incompetence, compounded by entrepreneurial enthusiasm and salesmanship, this enterprising fool managed to spread Natas across Mexico so quickly that urban legend places it as software of Mexican origin. A tragicomic script worthy of the best screenwriter.

While visiting bulletin boards dedicated to virii, the consultant contaminated a diskette with Natas. The software he used detected the virus in programs, but not in the hard disk's MBR (Master Boot Record). The consultant would visit his clients, run his scanning software from his infected diskette, and detect the very Natas infection he himself was causing. Alarmed, he would rush to the next machine and repeat the process, infecting every machine in the place. He would then immediately visit his best clients with the news that there was a Natas epidemic and that they had better scan their machines with the software he carried, which could detect Natas. He would then proceed to infect all their machines and carry the process on to the neighbor next door. He must have thought the Satan business was for real when, after the machines were formatted, the virus kept rising again out of nowhere. Chilling!

Natas reached Mexico from southern California. The consultant was a frequent visitor of BBSs in Santa Clarita that carried Natas, and its source code had been published in the magazine 40Hex. The good fellow downloaded the virus without understanding that you can sell your soul to the devil, but you cannot ask for it back. In May 1994, a month later, the consultant was desperately looking for help on the bulletin boards.

Natas was a typical Priest program. While resident in memory, it makes infected programs appear clean. It keeps a clean copy of the MBR and shows it to the user, to fool anyone who checks into thinking everything is fine. Natas infected diskettes and used antivirus scanning itself to spread.

I personally had an experience similar to this story. I had a Compaq Presario that was giving me trouble and requested a visit from a Compaq technician to check the machine. The technician had to leave without servicing it because all of his diskettes with diagnostic utilities were infected with a virus.

Ubuntu root

Taking a somewhat paternalistic approach, Ubuntu does not give access to the root account out of the box; privileged commands must instead be run through sudo. Since most Ubuntu documentation asks you to use sudo even with graphical applications, why recommend gksudo or kdesudo for graphical applications instead of sudo?

For example, a lot of guides (including the first book ever published about Ubuntu) will ask you to type this sort of command:

sudo gedit /etc/apt/sources.list

I will always recommend, however, that people use instead this sort of command:

gksudo gedit /etc/apt/sources.list

And reserve sudo for command-line applications, like so:

sudo nano /etc/apt/sources.list

Why is it an issue?
Well, to be perfectly honest, most of the time it isn’t. For a lot of applications, you can run them the improper way (using sudo for graphical applications) and see no adverse side effects.

There are other times, though, when side effects can be as mild as Firefox extensions not sticking, or as extreme as not being able to log in any more because the permissions on your .ICEauthority file changed. You can read a full discussion on the issue here.

These errors occur because sometimes when sudo launches an application, it launches with root privileges but uses the user’s configuration file.
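If you have already hit the problem, the usual culprit is a root-owned configuration file left in your home directory. A minimal check and fix from a text console might look like this (the exact files affected can vary):

# look for files in your home directory that sudo'ed GUI apps left owned by root
ls -l ~/.ICEauthority ~/.Xauthority

# give them back to your own user and login group
sudo chown "$USER": ~/.ICEauthority ~/.Xauthority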


LISP

Lisp (historically, LISP) is a family of computer programming languages with a long history and a distinctive, fully parenthesized Polish prefix notation.[1] Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today; only Fortran is older (by one year). Like Fortran, Lisp has changed a great deal since its early days, and a number of dialects have existed over its history. Today, the most widely known general-purpose Lisp dialects are Common Lisp and Scheme.

Lisp was originally created as a practical mathematical notation for computer programs, influenced by the notation of Alonzo Church‘s lambda calculus. It quickly became the favored programming language for artificial intelligence (AI) research. As one of the earliest programming languages, Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, conditionals, higher-order functions, recursion, and the self-hosting compiler.[2]

The name LISP derives from “LISt Processing”. Linked lists are one of Lisp language’s major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific languages embedded in Lisp.

The interchangeability of code and data also gives Lisp its instantly recognizable syntax. All program code is written as s-expressions, or parenthesized lists. A function call or syntactic form is written as a list with the function or operator’s name first, and the arguments following; for instance, a function f that takes three arguments might be called using (f arg1 arg2 arg3).

Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). McCarthy published its design in a paper in Communications of the ACM in 1960, entitled “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”[3] (“Part II” was never published). He showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms.

Information Processing Language was the first AI language, from 1955 or 1956, and already included many of the concepts, such as list-processing and recursion, which came to be used in Lisp.

McCarthy’s original notation used bracketed “M-expressions” that would be translated into S-expressions. As an example, the M-expression car[cons[A,B]] is equivalent to the S-expression (car (cons A B)). Once Lisp was implemented, programmers rapidly chose to use S-expressions, and M-expressions were abandoned. M-expressions surfaced again with short-lived attempts of MLISP[4] by Horace Enea and CGOL by Vaughan Pratt.

After having declined somewhat in the 1990s, Lisp has recently experienced a resurgence of interest. Most new activity is focused around open source implementations of Common Lisp, and includes the development of new portable libraries and applications. A new print edition of Practical Common Lisp by Peter Seibel, a tutorial for new Lisp programmers, was published in 2005.[20]

Many new Lisp programmers were inspired by writers such as Paul Graham and Eric S. Raymond to pursue a language others considered antiquated. New Lisp programmers often describe the language as an eye-opening experience and claim to be substantially more productive than in other languages.[21] This increase in awareness may be contrasted to the “AI winter” and Lisp’s brief gain in the mid-1990s.[22]

Dan Weinreb lists in his survey of Common Lisp implementations[23] eleven actively maintained Common Lisp implementations. Scieneer Common Lisp is a new commercial implementation forked from CMUCL with a first release in 2002.

The open source community has created new supporting infrastructure: CLiki is a wiki that collects Common Lisp related information, the Common Lisp directory lists resources, #lisp is a popular IRC channel (with support by a Lisp-written Bot), lisppaste supports the sharing and commenting of code snippets, Planet Lisp collects the contents of various Lisp-related blogs, on LispForum users discuss Lisp topics, Lispjobs is a service for announcing job offers and there is a weekly news service, Weekly Lisp News. Common-lisp.net is a hosting site for open source Common Lisp projects.

50 years of Lisp (1958–2008) has been celebrated at LISP50@OOPSLA.[24] There are regular local user meetings in Boston, Vancouver, and Hamburg. Other events include the European Common Lisp Meeting, the European Lisp Symposium and an International Lisp Conference.

The Scheme community actively maintains over twenty implementations. Several significant new implementations (Chicken, Gambit, Gauche, Ikarus, Larceny, Ypsilon) have been developed in the last few years. The Revised⁵ Report on the Algorithmic Language Scheme[25] standard of Scheme was widely accepted in the Scheme community. The Scheme Requests for Implementation process has created a lot of quasi-standard libraries and extensions for Scheme. User communities of individual Scheme implementations continue to grow. A new language standardization process was started in 2003 and led to the R6RS Scheme standard in 2007. Academic use of Scheme for teaching computer science seems to have declined somewhat. Some universities are no longer using Scheme in their computer science introductory courses.[citation needed]

There are several new dialects of Lisp: Arc, Nu, and Clojure.

The two major dialects of Lisp used for general-purpose programming today are Common Lisp and Scheme. These languages represent significantly different design choices.

Common Lisp is a successor to MacLisp. The primary influences were Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme.[26] It has many of the features of Lisp Machine Lisp (a large Lisp dialect used to program Lisp Machines), but was designed to be efficiently implementable on any personal computer or workstation. Common Lisp has a large language standard including many built-in data types, functions, macros and other language elements, as well as an object system (the Common Lisp Object System, or CLOS for short). Common Lisp also borrowed certain features from Scheme such as lexical scoping and lexical closures.

Scheme (designed earlier) is a more minimalist design, with a much smaller set of standard features but with certain implementation features (such as tail-call optimization and full continuations) not necessarily found in Common Lisp.

Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme. Scheme continues to evolve with a series of standards (Revisedⁿ Report on the Algorithmic Language Scheme) and a series of Scheme Requests for Implementation.

Clojure is a recent dialect of Lisp that principally targets the Java Virtual Machine, as well as the CLR, the Python VM, the Ruby VM YARV, and compiling to JavaScript. It is designed to be a pragmatic general-purpose language. Clojure draws considerable influences from Haskell and places a very strong emphasis on immutability.[27] Clojure is a compiled language, as it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime. Clojure provides access to Java frameworks and libraries, with optional type hints and type inference, so that calls to Java can avoid reflection and enable fast primitive operations.

In addition, Lisp dialects are used as scripting languages in a number of applications, with the most well-known being Emacs Lisp in the Emacs editor, AutoLisp and later Visual Lisp in AutoCAD, Nyquist in Audacity. The small size of a minimal but useful Scheme interpreter makes it particularly popular for embedded scripting. Examples include SIOD and TinyScheme, both of which have been successfully embedded in the GIMP image processor under the generic name “Script-fu”.[28] LIBREP, a Lisp interpreter by John Harper originally based on the Emacs Lisp language, has been embedded in the Sawfish window manager.[29] The Guile interpreter is used in GnuCash. Within GCC, the MELT plugin provides a Lisp-y dialect, translated into C, to extend the compiler by coding additional passes (in MELT).

Lisp was the first homoiconic programming language: the primary representation of program code is the same type of list structure that is also used for the main data structures. As a result, Lisp functions can be manipulated, altered or even created within a Lisp program without extensive parsing or manipulation of binary machine code. This is generally considered one of the primary advantages of the language with regard to its expressive power, and makes the language amenable to metacircular evaluation.

The ubiquitous if-then-else structure, now taken for granted as an essential element of any programming language, was invented by McCarthy for use in Lisp, where it saw its first appearance in a more general form (the cond structure). It was inherited by ALGOL, which popularized it.

Lisp deeply influenced Alan Kay, the leader of the research on Smalltalk, and then in turn Lisp was influenced by Smalltalk, by adopting object-oriented programming features (classes, instances, etc.) in the late 1970s. The Flavours object system (later CLOS) introduced multiple inheritance.

Lisp introduced the concept of automatic garbage collection, in which the system walks the heap looking for unused memory. Most of the modern sophisticated garbage collection algorithms such as generational garbage collection were developed for Lisp.

Largely because of its resource requirements with respect to early computing hardware (including early microprocessors), Lisp did not become as popular outside of the AI community as Fortran and the ALGOL-descended C language. Because of its suitability to complex and dynamic applications, Lisp is currently enjoying some resurgence of popular interest.

Emacs (/ˈiːmæks/) and its derivatives are a family of text editors that are characterized by their extensibility. The manual for one variant describes it as “the extensible, customizable, self-documenting, real-time display editor.”[2] Development began in the mid-1970s and continues actively as of 2013. Emacs has over 2,000 built-in commands and allows the user to combine these commands into macros to automate work. The use of Emacs Lisp, a variant of the Lisp programming language, provides a deep extension capability.

The original EMACS was written in 1976 by Richard Stallman and Guy L. Steele, Jr. as a set of Editor MACroS for the TECO editor.[3][4][5][6] It was inspired by the ideas of the TECO-macro editors TECMAC and TMACS.[7]

Emacs became, along with vi, one of the two main contenders in the traditional editor wars of Unix culture. The word “emacs” is often pluralized as emacsen by analogy with boxen and VAXen.[8]

The most popular, and most ported, version of Emacs is GNU Emacs, which was created by Stallman for the GNU Project.[9] XEmacs is a common variant that branched from GNU Emacs in 1991. Both of the variants use Emacs Lisp and are for the most part compatible with each other.

SLIME, the Superior Lisp Interaction Mode for Emacs, is an Emacs mode for developing Common Lisp applications. SLIME originates in an Emacs mode called SLIM written by Eric Marsden and developed as an open-source project by Luke Gorrie and Helmut Eller. Over 100 Lisp developers have contributed code to SLIME since the project was started in 2003. SLIME uses a backend called SWANK that is loaded into Common Lisp.

SLIME works with most of the major Common Lisp implementations, and implementations of some other programming languages also use SLIME.

There’s a remarkable number of Emacs Lisp programs out there, and they do just about everything, from handy mail-quoting utilities to an Emacs interface to IMDB and more! And while many such elisp hacks come bundled with Emacs, there are even more out there on the Internet, just waiting for you to try them out. The Emacs Lisp List and the EmacsWiki are both great resources for finding interesting and useful elisp.

So, you’ve gone and downloaded some elisp file (foo.el, say). Now, what do you do with it? Well, the community convention on the matter is to toss .el files into, say, ~/elisp/ (an elisp directory in your home directory). Once you have such a directory, you need to ensure that it’s present in Emacs’ load-path variable. This is typically done by adding something like this to your ~/.emacs file:

(add-to-list 'load-path "~/elisp")

Next, you’ll need to configure Emacs to load the new file. Most of the time, you should be able to add (require 'foo) to ~/.emacs (where foo means foo.el).
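Optionally, you can byte-compile the downloaded file from the shell so it loads a little faster; foo.el here is just the example name used above:

# byte-compile a single elisp file without loading your init files
emacs -Q --batch -f batch-byte-compile ~/elisp/foo.el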

Simplify! Use install.el

That’s often all you have to do, but there are lots of exceptions. Fortunately, Stefan Monnier’s install.el handles the vast majority of elisp files you’ll run into, and is very easy to use itself. Install it by following my directions above. Now, whenever you’d like to install an elisp file, simply invoke the install-file command (via M-x install-file RET). That’s it!

NEWS: EMACS 24.3 is finally available!

– emacs24 will be updated only when I change the build process or when new emacs24 versions are released
– emacs-snapshot is updated between once a week and once every two weeks on average. These versions are created from those of Julien Danjou for Debian unstable: http://emacs.naquadah.org/.

To build this PPA, I created this script: https://gist.github.com/2360655

Please report bugs to https://bugs.launchpad.net/emacs-snapshot/, but before reporting, please follow these steps that will ensure a clean installation:

$ sudo apt-get update
$ sudo apt-get install
$ sudo apt-get purge emacs-snapshot-common emacs-snapshot-bin-common emacs-snapshot emacs-snapshot-el emacs-snapshot-gtk emacs23 emacs23-bin-common emacs23-common emacs23-el emacs23-nox emacs23-lucid auctex emacs24 emacs24-bin-common emacs24-common emacs24-common-non-dfsg

To add this PPA:
$ sudo add-apt-repository ppa:cassou/emacs
$ sudo apt-get update

Then, for emacs-snapshot:
$ sudo apt-get install emacs-snapshot-el emacs-snapshot-gtk emacs-snapshot

*Or*, for emacs24:
$ sudo apt-get install emacs24 emacs24-el emacs24-common-non-dfsg

Adding this PPA to your system

You can update your system with unsupported packages from this untrusted PPA by adding ppa:cassou/emacs to your system’s Software Sources.

USB drive Ubuntu install using VirtualBox

There are many ways to create a live USB drive carrying an operating system like Ubuntu, but the method I will describe below is mainly based on using Sun’s VirtualBox.

While the method described in the Ubuntu documentation involves writing a Live CD image to a USB flash drive, which then has to extract and load the operating system into RAM, the method described on this page involves installing a fresh operating system onto a bootable flash drive that works the same way as a real HDD (except for the speed, of course). Thus, you should have a good USB 2.0 flash drive with decent I/O speeds and at least 4 GB of space (considering that the operating system itself, Karmic Koala, weighs about 2 GB).

(assuming you’ve already installed guest additions)

Click on Settings for your virtual machine and go to the USB tab. Check the two boxes, since you do want USB 2.0 support. In theory this is all, but there is one more step we will need to do afterwards to get this really working. That is true for Windows; Linux needs a bit more sweat.

You also need to set USB filters so that the USB devices get sent to the guest OS. A USB filter is a nice feature that allows you to automatically connect USB devices to your virtual machine. Any device listed in the filter box will be plugged in when you power on the guest operating system. Other devices will require that you manually connect them.

From the main Virtualbox window open the Settings dialog, then the USB section, then click the little “add filter” button on the right side of the screen. You should be able to create a filter from any currently connected USB devices.
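The same thing can be done from the command line with VBoxManage; the VM name, filter name, and vendor/product IDs below are placeholders you would replace with the values reported for your own flash drive:

# list USB devices visible to the host and note the flash drive's VendorId/ProductId
VBoxManage list usbhost

# add a filter to the VM so the device is captured automatically at power-on
VBoxManage usbfilter add 0 --target "Ubuntu-USB-Install" --name "usb-stick" --vendorid 0951 --productid 1607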

Much like VMware Tools for VMware products, the Guest Additions expose additional functionality in the virtual machine, boost performance, enhance sharing, and more. We’ve had a long tutorial, which explains how to achieve this in both Windows and Linux virtual machines. You will need to add your user to the VirtualBox group to be able to share USB resources. You can do this from the command line or try the GUI menus.
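From the command line it is a one-liner; log out and back in afterwards so the new group membership takes effect:

# add the current user to the vboxusers group so VirtualBox can capture USB devices
sudo usermod -aG vboxusers "$USER"

# verify the membership (effective at the next login)
groups "$USER"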

All right, so we’re running Ubuntu with the Gnome desktop. Therefore, go to System > Administration > Users and Groups. In the menu that opens, click on Manage Groups. Scroll and look for the vboxusers group. Click on the Properties button. Make sure your user is listed and checked in the Group Members field. You will need to log out and log back in for the change to take effect. Now, power on the virtual machine once more and see what happens.

I had the same problem and fixed it by ticking the VirtualBox group for my user. You can do this by installing gnome-system-tools (it does not come with Ubuntu 12.04 Precise Pangolin), either via the Ubuntu Software Center, Synaptic, or by typing in the terminal:

sudo apt-get install gnome-system-tools

Then head to your Dash home and type users. You will see two applications; the one you want is Users and Groups.

You then have to click on Advanced settings for your user and enter your password.

Now you will be shown a window with three tabs. Click on User Privileges, find the line that says Use VirtualBox virtualization solution, check it, and click OK.

After you’ve done this (maybe restart to be sure the host OS isn’t capturing any of the USB devices for itself; Ubuntu will try to automount the flash drive, so you might also want to check that it is unmounted), boot into the guest OS and you should see your USB devices.
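To confirm from the host that the flash drive is not mounted before powering on the virtual machine (the device name /dev/sdb1 is only an example; check which name your drive actually gets):

# see whether the flash drive is currently mounted
mount | grep /dev/sdb

# unmount it so the guest OS gets exclusive access to the device
sudo umount /dev/sdb1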

Good luck.

Edit: note on USB filters

It’s my understanding that a device being used by a guest OS with a USB filter will not be accessible by the host OS while the guest OS is running. Therefore, one should choose carefully which USB devices to create filters for.

You should create USB filters for things that you plan on only using with the guest OS (often peripherals that don’t work with the host OS and will only work with the guest OS) and when you won’t require being able to access the device from the host OS while the guest OS is running. For example I have a USB banking dongle from my bank, ICBC, that is not compatible with Linux so I use a virtualized installation of Windows XP for banking and use a USB filter to grab the USB dongle.

Examples of good devices to create filters for:

  • USB banking dongles that only work with guest OS
  • e-readers (Kindle, Nook, etc.) that you plan on using only (or primarily) with the guest OS.
  • external soundcards that only work with the guest OS or require the guest OS for full functionality

Examples of bad devices to create filters for:

  • USB input devices (mice or keyboards) that you would like to use with the host and guest OSes. Virtualbox will allow the guest OS access to these devices by default, so there is no need for the guest OS to directly control them (well, I could think of some specialized reasons, but I will digress…).
  • USB storage devices that you want the guest and the host OSes to both be able to access at the same time. Instead, mount the drive on the host OS and use shared folders to share the drive to the guest OS.

Remember that to paste in the terminal you have to use CTRL+SHIFT+V, as opposed to CTRL+V

You will probably have to enter your password to allow the installation and add a Y (as in yes) to finish installing the packages.
Press Alt+F2 and type ccsm (do you have CompizConfig Settings Manager installed?). Scroll to the bottom, find the “Move Window” icon and click on it. There is an option “Constrain Y”; uncheck it and you can pull windows wherever you want. If you are using “advanced desktop settings” and don’t have compiz-config-settings installed, open a terminal and type:

sudo apt-get install compizconfig-settings-manager

More reading

For a whole library full of tutorials, guides, howtos, tips and tricks on virtualization, feel free to click on any of the links below, preferably all.

VirtualBox 3 overview

Compiz Fusion in VirtualBox 3

DirectX in VirtualBox 3

Seamless mode in VirtualBox

VirtualBox desktop shortcuts

Portable VirtualBox

How to add new hard disks in VirtualBox – Tutorial

How to clone disks in VirtualBox – Tutorial

How to shrink/expand disks in VirtualBox – Tutorial

How to install VirtualBox Guest Additions – Tutorial

Network & sharing in VirtualBox – Tutorial

How to boot from CD-ROM in newer versions of VirtualBox – Tutorial

the Interceptor

What is the Interceptor?

The Interceptor is a wireless tap for wired networks. Basically, a network tap is a way to listen in on network traffic as it flows past. I haven’t done extensive research, but all the taps I found passed the copy of the traffic to a specified wired interface, which was then plugged into a machine so a user could monitor the traffic. The problem with this is that you have to be able to route the data from that wired port to your monitoring machine, either through a direct cable or through an existing network. The direct-cable method means your monitor has to be nearby the location you want to tap; routing over a network means you have to somehow encapsulate the data to get it across the network without it being affected en route.

The Interceptor does away with the wired monitor port and instead spits the traffic out over wireless, meaning the listener can be anywhere they can make a wireless connection to the device. As the data is encrypted (actually, double encrypted; see how it works), the person placing the tap doesn’t have to worry about unauthorized users seeing the traffic.

See here for more information on how it works.

What Hardware Is Required

This project has been built and tested on a Fon+ but should in theory work on any device which will run OpenWrt and has at least a pair of wired interfaces and a wireless one.

OpenWrt is an operating system primarily used on embedded devices to route network traffic. The main components are the Linux kernel, uClibc and BusyBox. All components have been optimized for size, to be small enough to fit the limited storage and memory available in home routers.

OpenWrt is configured using a command-line interface (ash), or a web interface (LuCI). There are about 3500 optional software packages available for install via the opkg package management system.
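Installing extra tools on top of a base OpenWrt image is done with opkg; tcpdump is just an example package here:

# refresh the package lists and install a package on the device
opkg update
opkg install tcpdump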

OpenWrt can be run on CPE routers, residential gateways, smartphones (e.g. Neo FreeRunner), pocket computers (e.g. Ben NanoNote), and small laptops (e.g. One Laptop per Child (OLPC)). But it is also possible to run on ordinary computers (e.g. x86). Many patches are being included upstream in the Linux mainline kernel.

Possible Uses

This isn’t intended to be a permanent, in-situ device. It is designed for short-term troubleshooting or information gathering on low-usage networks; as such, it will work well between a printer and a switch, but not between a switch and a router. Here are some possible situations for use:

  • Penetration testing – If you can gain physical access to a target’s office, drop the device between the office printer and the switch, then sit in the car park and collect a copy of all documents printed. Or get an appointment to see a boss, and when he leaves the room to get you a drink, drop it on his computer. The relatively low cost of the Fon+ means the device can almost be considered disposable, and if branded with the right stickers most users wouldn’t think twice about an extra small box on the network.
  • Troubleshooting – For sys-admins who want to monitor an area of network from the comfort of their desks, just put it in place and fire up your wireless.
  • IDS – If you want to see what traffic is being generated from a PC without interfering with the PC simply add the Interceptor and sit back and watch. As the traffic is cloned to a virtual interface on your monitoring machine you can use any existing tools to scan the data.

I’m sure there are plenty more uses, if you come up with any good ones, let me know.

Download

The Interceptor comes as a single tarball which can be downloaded from here.

It also requires a number of extra packages to be installed on a base OpenWrt install, they can be found on the OpenWrt download page.

Install Notes

There are two sets of install notes, a basic set and a detailed walk-through set. The basic set is the standard set of notes that comes with most packages; the detailed set is a full walk-through covering flashing the Fon+, installing dependencies, installing the Interceptor, starting it up, monitoring traffic, and finally shutting it down. Most people should find the basic set sufficient, but the detailed set is useful if you have any problems.

Limitations

The main limitation is bandwidth: the wired network can run at up to 100 Mb/s, but the top speed of the wireless is 54 Mb/s; add the overhead of encryption on top of that and the rate drops further. This is why the Interceptor won’t work well on high-traffic parts of the network.

From tests I’ve done, under high load the network seems to stay up and stable, but not all traffic ends up on the monitor interface. I haven’t done any research to find out where the traffic is being dropped; it could be DaemonLogger, the AP, or the VPN. This is good in that it means the device doesn’t affect the smooth running of the network, but it obviously means you may miss some important data. Be aware of this when working with the device.
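On the monitoring side, the cloned traffic arrives on a local virtual interface, so any standard capture tool can be pointed at it; the interface name tap0 below is an assumption and may differ in your setup:

# watch the tapped traffic live
sudo tcpdump -n -i tap0

# or write it to a file for later analysis in Wireshark or similar tools
sudo tcpdump -i tap0 -w interceptor-capture.pcap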

The software has no fail-safe in case of problems. If the hardware or software fails, the network connection being tapped will probably be lost. Don’t use the Interceptor in situations where uptime is critical unless you know what you are doing.

Support

If you have any problems or questions you can either drop me an email or visit the Hak5 forums.

Licence

The Interceptor is released under a Creative Commons licence, view the terms for more information.

 

the fonosfera

Here is the place to download and commit source code for the Fonera 2.0 firmware (aka fon-ng) and to report bugs. It is also the place that will host the fon-ng documentation. End user documentation for the Fonera 2.0 is on the Wiki: Fonera 2.0n and Fonera 2.0g.

Resources

Getting Started with fon-ng

Ubuntu Malware Removal Toolkit

Ubuntu Malware Removal Toolkit is an Ubuntu-based LiveCD focused on Windows malicious software removal. The purpose of this distribution is to create a portable environment that will make it easier to remove malware from infected Windows systems.

Features

  • Detect and clean Windows malware directly from the LiveCD using the best free tools
  • Easy to use, even for Linux novices
  • Custom Nautilus scripts make tasks like scanning or hashing multiple files or folders easier
  • Find information online with Firefox directly from the LiveCD
  • Windows network protocol support: Ubuntu MRT can browse Windows networks, resolve Windows hostnames, mount Windows shared folders, and use RDP to remotely control Windows servers
  • Easily create an Ubuntu MRT persistent LiveUSB directly from the LiveCD
  • Browse and query the Windows registry files, detect NTFS timestamp artifacts, and much more
  • Easily search online for multiple file hashes with a single mouse click (VirusTotal.com, Team Cymru MHR, and other services); see the command sketch after this list
  • Analyze network traffic using preinstalled tools like ntop and BotHunter
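As a minimal sketch of the hash-lookup workflow from the LiveCD shell, assuming the infected Windows partition is mounted at /mnt/windows (a made-up mount point):

# hash every executable directly under the Windows System32 directory
find /mnt/windows/Windows/System32 -maxdepth 1 -iname '*.exe' -exec sha256sum {} + > hashes.txt

# the resulting list can then be checked against VirusTotal or Team Cymru's Malware Hash Registry
head hashes.txt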


Clear code that works

Starting from a clear and intimate view of the software development process, Kent Beck has created a methodological approach that at first sight seems counterintuitive, but that has proven successful and is widely accepted in the programming community.

In Test Driven Development: By Example (Addison-Wesley Signature Series), the seminal TDD book, Beck applies the divide-and-conquer maxim to the precept of quality in code production: clear code that works.

Beck proposes, against the grain, that it is possible to separate code-quality considerations, from a software engineering perspective, from the verification of functionality, and that the first step in each iteration of the development process is to define and apply the functionality tests.

Beck uses a refactoring process to go from working code to clean code, using the elimination of redundancy, or duplication, as the methodological guide.

Drawing an analogy with a traffic light, Beck describes an iterative three-step process:

  1. Red. Start with a test that must fail, perhaps not even compile.
  2. Green. Make the code pass the test in the quickest and simplest way, with no regard whatsoever for code-quality norms and patterns.
  3. Refactor. Eliminate redundancy in code, tests, and data.

From such a simple approach Beck builds up the test-driven development methodology.