Eclipse: a professional tool within everyone's reach

Although Eclipse is written mostly in Java (except for the core) and its most popular use is as a Java IDE, Eclipse is language-neutral and adaptable to any kind of language, for example C/C++, Cobol, C#, XML, etc. The key characteristic of Eclipse is extensibility. Eclipse is a large structure made up of a core and many plug-ins that build up the final functionality. Plug-ins interact through interfaces, or extension points; in this way, new contributions integrate without difficulty or conflicts.
Eclipse was the product of a forty-million-dollar investment by IBM in its development before it was offered as an open source product to the Eclipse.org consortium, initially composed of Borland and IBM. IBM continues to lead Eclipse development through its subsidiary OTI (Object Technology International), the creator of Eclipse. OTI was acquired by IBM in 1996, having established itself as a major developer of object-oriented (OO) tools since the rise of the Smalltalk language. OTI was the IBM division that produced the Visual Age products, which set the standard for object-oriented development tools. Many concepts pioneered in Smalltalk were applied to Java, creating Visual Age for Java (VA4J). VA4J was written in Smalltalk; Eclipse is a rewrite of VA4J in Java. The foundation of Eclipse is the Rich Client Platform (RCP). The following components constitute the rich client platform:
Core platform – startup of Eclipse, running of plug-ins
OSGi – a standard bundling framework
The Standard Widget Toolkit (SWT) – a portable widget toolkit
JFace – file handling, text handling, text editors
The Eclipse Workbench – views, editors, perspectives, wizards
Eclipse's widgets are implemented by a widget toolkit for Java called SWT, unlike most Java applications, which use the standard Abstract Window Toolkit (AWT) or Swing. Eclipse's user interface also has an intermediate GUI layer called JFace, which simplifies building applications based on SWT.

The Eclipse integrated development environment (IDE) uses plug-ins to provide all of its functionality on top of the rich client platform, in contrast to monolithic environments where all the functionality is included whether the user needs it or not. This plug-in mechanism is a lightweight software-component platform. The Eclipse SDK includes support for Java and CVS. For client applications, Eclipse gives the programmer rich frameworks for developing graphical applications, defining and manipulating software models, web applications, etc. For example, GEF (Graphical Editing Framework) is an Eclipse plug-in for developing visual editors, ranging from WYSIWYG word processors to UML diagram editors, graphical user interfaces (GUIs), and so on.

The Eclipse SDK includes the Java development tools, offering an IDE with an internal Java compiler and a complete model of the Java source files. This enables advanced refactoring and code-analysis techniques. The IDE also uses a workspace, in this case a set of metadata over a flat file space, which allows external modifications to the files as long as the corresponding workspace is refreshed.

Core: its task is to determine which plug-ins are available in Eclipse's plug-in directory. Each plug-in has an XML manifest file that lists the elements it needs from other plug-ins as well as the extension points it offers (a sketch of such a manifest appears at the end of this section). Since the number of plug-ins can be very large, only the ones needed at a given moment are loaded, to minimize Eclipse's startup time and resource usage.

Workspace: manages the user's resources, organized into one or more projects. Each project corresponds to a directory inside Eclipse's working directory and contains files and folders.

User interface: displays the menus and toolbars, and is organized into perspectives, which configure the code editors and the views.

Unlike many applications written in Java, Eclipse has the look and feel of a native application. It is programmed with SWT (Standard Widget Toolkit) and JFace (a toolkit built on top of SWT), which uses the native widgets of each operating system. This has been a debated aspect of Eclipse, because SWT must be ported to each operating system to interact with its graphics system. In Java projects AWT and Swing may be used, except when developing a plug-in for Eclipse.

Eclipse can be downloaded in distributions with different combinations of plug-ins, depending on the intended use of the tool. One problem with these distributions is that on Windows XP the built-in decompressor sometimes fails, so it is preferable to use an external program such as 7-Zip, WinZip, or Info-ZIP.
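As a minimal sketch of the manifest just described: this hypothetical plugin.xml contributes one view to the workbench through the org.eclipse.ui.views extension point (a real extension point; the id, name and class values are invented for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <!-- Contribute a view through the org.eclipse.ui.views extension point.
        The id, name and class values below are hypothetical. -->
   <extension point="org.eclipse.ui.views">
      <view id="com.example.hello.view"
            name="Hello View"
            class="com.example.hello.HelloView"/>
   </extension>
</plugin>

The workbench reads this manifest at startup but only instantiates com.example.hello.HelloView the first time the view is opened; this is the lazy loading described above.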
Subversion is a free, open source version control system. That is, Subversion manages files and directories over time. A tree of files is kept in a central repository. The repository is much like an ordinary file server, except that it remembers every change ever made to its files and directories. This allows you to recover older versions of your data, or to examine the history of how the data changed. In this regard, many people think of version control systems as a kind of time machine.
Subversion provides:
Directory versioning
CVS only tracks the history of individual files, but Subversion implements a virtual versioned filesystem that tracks changes to whole directory trees over time. Both files and directories are under version control.
True version history
CVS is limited to versioning files. Operations such as copying and renaming, which can happen to files but are really changes to the contents of the containing directory, are not supported by CVS. Additionally, in CVS you cannot replace a versioned file with something new under the same name without the new item inheriting the history of the old file, which may be completely unrelated. With Subversion, you can add, delete, copy, and rename both files and directories. Every newly added file begins with a fresh, clean history all its own.
Atomic commits
Any collection of modifications either enters the repository completely, or not at all. This allows developers to build and commit changes as logical chunks, and prevents the problems that arise when only part of a set of submitted changes makes it in successfully.
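A quick sketch (file names invented) of how a logical change-set is committed as a unit:

$ svn add src/parser.c src/parser.h
$ svn commit -m "Add the parser module"

If the commit succeeds, both files enter the repository together as one new revision; if it fails partway (say, the network drops), neither does.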
Versioned metadata
Each file or directory has an associated set of properties: keys and their values. You can create and store any arbitrary key/value pairs you wish. Properties are versioned over time, just like file contents.
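A brief sketch (the property name and path are invented) of setting and committing a property:

$ svn propset copyright "(c) Example, Inc." calc/button.c
$ svn commit -m "Set the copyright property on button.c"

Because properties are versioned, earlier revisions of calc/button.c still carry whatever value the property had at the time.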
Choice of network layers
Subversion has an abstracted notion of repository access, making it easy to implement new network mechanisms. Subversion can plug into the Apache HTTP Server as an extension module. This gives Subversion a big advantage in stability and interoperability, and instant access to the existing features that server provides: authentication, authorization, wire compression, and so on. A standalone, more lightweight Subversion server is also available. This server speaks a custom protocol which can easily be tunneled over SSH.
The default version works with Apache 2.0, but it is possible to download a version for Apache 2.2.4.
Consistent data handling
Subversion expresses file differences using a binary differencing algorithm, which works identically on text files (human-readable) and binary files (human-unreadable). Both types of files are stored equally compressed in the repository, and differences are transmitted in both directions across the network.
Efficient branching and tagging
The cost of branching and tagging need not be proportional to the project size. Subversion creates branches and tags simply by copying the project, using a mechanism similar to a hard link. These operations therefore take only a very small, constant amount of time.
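In practice (repository URL and trunk/branches layout assumed for illustration), a branch is just a cheap server-side copy:

$ svn copy http://svn.example.com/repos/calc/trunk \
      http://svn.example.com/repos/calc/branches/my-branch \
      -m "Create a private branch of trunk"

The command completes in roughly constant time regardless of how large trunk is, since only a link-like copy is recorded.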
Lisp (historically, LISP) is a family of computer programming languages with a long history and a distinctive, fully parenthesized Polish prefix notation.[1] Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today; only Fortran is older (by one year). Like Fortran, Lisp has changed a great deal since its early days, and a number of dialects have existed over its history. Today, the most widely known general-purpose Lisp dialects are Common Lisp and Scheme.
The name LISP derives from “LISt Processing”. Linked lists are one of the Lisp language’s major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new domain-specific languages embedded in Lisp.
The interchangeability of code and data also gives Lisp its instantly recognizable syntax. All program code is written as s-expressions, or parenthesized lists. A function call or syntactic form is written as a list with the function or operator’s name first, and the arguments following; for instance, a function f that takes three arguments might be called using (f arg1 arg2 arg3).
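As a concrete (if minimal) illustration of code as data, here is a sketch in Common Lisp; my-unless is an invented name re-creating the built-in unless:

;; The macro receives its arguments as unevaluated list structure and
;; returns a new list, which becomes the code that is actually compiled.
(defmacro my-unless (test &rest body)
  `(if ,test nil (progn ,@body)))

;; Usage: expands to (if (> 3 4) nil (progn (print "..."))) before evaluation.
(my-unless (> 3 4)
  (print "3 is not greater than 4"))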
Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). McCarthy published its design in a paper in Communications of the ACM in 1960, entitled “Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I”[3] (“Part II” was never published). He showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms.
Information Processing Language was the first AI language, from 1955 or 1956, and already included many of the concepts, such as list-processing and recursion, which came to be used in Lisp.
McCarthy’s original notation used bracketed “M-expressions” that would be translated into S-expressions. As an example, the M-expression car[cons[A,B]] is equivalent to the S-expression (car (cons A B)). Once Lisp was implemented, programmers rapidly chose to use S-expressions, and M-expressions were abandoned. M-expressions surfaced again with the short-lived attempts of MLISP[4] by Horace Enea and CGOL by Vaughan Pratt.
After having declined somewhat in the 1990s, Lisp has recently experienced a resurgence of interest. Most new activity is focused around open source implementations of Common Lisp, and includes the development of new portable libraries and applications. A new print edition of Practical Common Lisp by Peter Seibel, a tutorial for new Lisp programmers, was published in 2005.[20]
Many new Lisp programmers were inspired by writers such as Paul Graham and Eric S. Raymond to pursue a language others considered antiquated. New Lisp programmers often describe the language as an eye-opening experience and claim to be substantially more productive than in other languages.[21] This increase in awareness may be contrasted to the “AI winter” and Lisp’s brief gain in the mid-1990s.[22]
Dan Weinreb lists in his survey of Common Lisp implementations[23] eleven actively maintained Common Lisp implementations. Scieneer Common Lisp is a new commercial implementation forked from CMUCL with a first release in 2002.
The open source community has created new supporting infrastructure: CLiki is a wiki that collects Common Lisp related information, the Common Lisp directory lists resources, #lisp is a popular IRC channel (with support by a Lisp-written Bot), lisppaste supports the sharing and commenting of code snippets, Planet Lisp collects the contents of various Lisp-related blogs, on LispForum users discuss Lisp topics, Lispjobs is a service for announcing job offers and there is a weekly news service, Weekly Lisp News. Common-lisp.net is a hosting site for open source Common Lisp projects.
50 years of Lisp (1958–2008) has been celebrated at LISP50@OOPSLA.[24] There are regular local user meetings in Boston, Vancouver, and Hamburg. Other events include the European Common Lisp Meeting, the European Lisp Symposium and an International Lisp Conference.
The Scheme community actively maintains over twenty implementations. Several significant new implementations (Chicken, Gambit, Gauche, Ikarus, Larceny, Ypsilon) have been developed in the last few years. The Revised⁵ Report on the Algorithmic Language Scheme[25] standard was widely accepted in the Scheme community. The Scheme Requests for Implementation process has created many quasi-standard libraries and extensions for Scheme. User communities of individual Scheme implementations continue to grow. A new language standardization process was started in 2003 and led to the R6RS Scheme standard in 2007. Academic use of Scheme for teaching computer science seems to have declined somewhat; some universities are no longer using Scheme in their introductory computer science courses.
There are several new dialects of Lisp: Arc, Nu, and Clojure.
The two major dialects of Lisp used for general-purpose programming today are Common Lisp and Scheme. These languages represent significantly different design choices.
Common Lisp is a successor to MacLisp. The primary influences were Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme.[26] It has many of the features of Lisp Machine Lisp (a large Lisp dialect used to program Lisp Machines), but was designed to be efficiently implementable on any personal computer or workstation. Common Lisp has a large language standard including many built-in data types, functions, macros and other language elements, as well as an object system (the Common Lisp Object System, CLOS for short). Common Lisp also borrowed certain features from Scheme such as lexical scoping and lexical closures.
Scheme (designed earlier) is a more minimalist design, with a much smaller set of standard features but with certain implementation features (such as tail-call optimization and full continuations) not necessarily found in Common Lisp.
Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald Jay Sussman. It was designed to have exceptionally clear and simple semantics and few different ways to form expressions. A wide variety of programming paradigms, including imperative, functional, and message passing styles, find convenient expression in Scheme. Scheme continues to evolve with a series of standards (Revisedⁿ Report on the Algorithmic Language Scheme) and a series of Scheme Requests for Implementation.
Clojure is a recent dialect of Lisp that principally targets the Java Virtual Machine, as well as the CLR, the Python VM, the Ruby VM YARV, and compiling to JavaScript. It is designed to be a pragmatic general-purpose language. Clojure draws considerable influences from Haskell and places a very strong emphasis on immutability.[27] Clojure is a compiled language, as it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime. Clojure provides access to Java frameworks and libraries, with optional type hints and type inference, so that calls to Java can avoid reflection and enable fast primitive operations.
In addition, Lisp dialects are used as scripting languages in a number of applications, with the most well-known being Emacs Lisp in the Emacs editor, AutoLisp and later Visual Lisp in AutoCAD, and Nyquist in Audacity. The small size of a minimal but useful Scheme interpreter makes it particularly popular for embedded scripting. Examples include SIOD and TinyScheme, both of which have been successfully embedded in the GIMP image processor under the generic name “Script-fu”.[28] LIBREP, a Lisp interpreter by John Harper originally based on the Emacs Lisp language, has been embedded in the Sawfish window manager.[29] The Guile interpreter is used in GnuCash. Within GCC, the MELT plugin provides a Lisp-y dialect, translated into C, to extend the compiler by coding additional passes (in MELT).
Lisp was the first homoiconic programming language: the primary representation of program code is the same type of list structure that is also used for the main data structures. As a result, Lisp functions can be manipulated, altered or even created within a Lisp program without extensive parsing or manipulation of binary machine code. This is generally considered one of the primary advantages of the language with regard to its expressive power, and makes the language amenable to metacircular evaluation.
The ubiquitous if-then-else structure, now taken for granted as an essential element of any programming language, was invented by McCarthy for use in Lisp, where it saw its first appearance in a more general form (the cond structure). It was inherited by ALGOL, which popularized it.
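For instance, a small illustrative Common Lisp function (sign is an invented name) shows cond as the general, multi-branch form:

(defun sign (x)
  (cond ((< x 0) 'negative)   ; the first clause whose test is true wins
        ((= x 0) 'zero)
        (t       'positive))) ; t serves as the final "else"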
Lisp deeply influenced Alan Kay, the leader of the research on Smalltalk, and then in turn Lisp was influenced by Smalltalk, adopting object-oriented programming features (classes, instances, etc.) in the late 1970s. The Flavors object system (later CLOS) introduced multiple inheritance.
Lisp introduced the concept of automatic garbage collection, in which the system walks the heap looking for unused memory. Most of the modern sophisticated garbage collection algorithms such as generational garbage collection were developed for Lisp.
Largely because of its resource requirements with respect to early computing hardware (including early microprocessors), Lisp did not become as popular outside of the AI community as Fortran and the ALGOL-descended C language. Because of its suitability to complex and dynamic applications, Lisp is currently enjoying some resurgence of popular interest.
Emacs (pron.: /ˈiːmæks/) and its derivatives are a family of text editors that are characterized by their extensibility. The manual for one variant describes it as “the extensible, customizable, self-documenting, real-time display editor.”[2] Development began in the mid-1970s and continues actively as of 2013. Emacs has over 2,000 built-in commands and allows the user to combine these commands into macros to automate work. The use of Emacs Lisp, a variant of the Lisp programming language, provides a deep extension capability.
The original EMACS was written in 1976 by Richard Stallman and Guy L. Steele, Jr. as a set of Editor MACroS for the TECO editor.[3][4][5][6] It was inspired by the ideas of the TECO-macro editors TECMAC and TMACS.[7]
Emacs became, along with vi, one of the two main contenders in the traditional editor wars of Unix culture. The word “emacs” is often pluralized as emacsen by analogy with boxen and VAXen.[8]
The most popular, and most ported, version of Emacs is GNU Emacs, which was created by Stallman for the GNU Project.[9] XEmacs is a common variant that branched from GNU Emacs in 1991. Both of the variants use Emacs Lisp and are for the most part compatible with each other.
SLIME, the Superior Lisp Interaction Mode for Emacs, is an Emacs mode for developing Common Lisp applications. SLIME originates in an Emacs mode called SLIM written by Eric Marsden and developed as an open-source project by Luke Gorrie and Helmut Eller. Over 100 Lisp developers have contributed code to SLIME since the project was started in 2003. SLIME uses a backend called SWANK that is loaded into Common Lisp.
SLIME works with the following Common Lisp implementations:
So, you’ve gone and downloaded some elisp file (foo.el, say). Now, what do you do with it? Well, the community convention on the matter is to toss .el files in, say, ~/elisp/ (an elisp directory in your home directory). Once you have such a directory you need to ensure that it’s present in Emacs’ load-path variable. This is typically done by adding something like this to your ~/.emacs file:
(add-to-list 'load-path "~/elisp")
Next, you’ll need to configure Emacs to load the new file. Most of the time, you should be able to add (require 'foo) to ~/.emacs (where foo means foo.el).
Simplify! Use install.el
That’s often all you have to do, but there are lots of exceptions. Fortunately, Stefan Monnier’s install.el handles the vast majority of elisp files you’ll run into, and is very easy to use itself. Install it by following my directions above. Now, whenever you’d like to install an elisp file, simply invoke the install-file command (via M-x install-file RET). That’s it!
– emacs24 will be updated only when I change the build process or when new emacs24 versions are released
– emacs-snapshot is updated between once a week and once every two weeks on average. These versions are built from Julien Danjou’s packages for Debian unstable: http://emacs.naquadah.org/.
To add this PPA:
$ sudo add-apt-repository ppa:cassou/emacs
$ sudo apt-get update
Then, for emacs-snapshot:
$ sudo apt-get install emacs-snapshot-el emacs-snapshot-gtk emacs-snapshot
*Or*, for emacs24:
$ sudo apt-get install emacs24 emacs24-el emacs24-common-non-dfsg
Adding this PPA to your system
You can update your system with unsupported packages from this untrusted PPA by adding ppa:cassou/emacs to your system’s Software Sources.
Starting from a clear and intimate view of the software development process, Kent Beck has created a methodological approach that at first glance seems counterintuitive, but that has proven successful and widely accepted in the programming community.
Going against the current, Beck proposes that it is possible to separate code-quality considerations, from the software-engineering perspective, from the verification of functionality, and that the first step in each iteration of the development process is to define and apply the functional tests.
Beck uses a refactoring process to move from working code to clean code, using the elimination of redundancy or duplication as a methodological guide.
By analogy with a traffic light, Beck describes an iterative 3-step process (a minimal code sketch follows the list):
Red. Start with a test that must fail, perhaps not even compile.
Green. Make the code pass the test in the most expedient and simple way, without any regard for code-quality norms and patterns.
Refactor. Eliminate redundancy in code, tests, and data.
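A minimal sketch of one red/green cycle in C (the add function and test are invented for illustration; any language and test framework would do):

/* Red: write the test first. It fails -- indeed it does not even link
   until add() exists. */
#include <assert.h>
#include <stdio.h>

int add(int a, int b);              /* function under test, not yet written */

int main(void)
{
    assert(add(2, 2) == 4);         /* the failing test comes first */
    puts("test passed");
    return 0;
}

/* Green: the simplest code that passes, with no thought yet for elegance. */
int add(int a, int b) { return a + b; }

/* Refactor: with the test green, remove duplication in code, tests and
   data, rerunning the test after every change. */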
The Sysinternals web site was created in 1996 by Mark Russinovich and Bryce Cogswell to host their advanced system utilities and technical information. Whether you’re an IT Pro or a developer, you’ll find Sysinternals utilities to help you manage, troubleshoot and diagnose your Windows systems and applications.
Sysinternals Live is a service that enables you to execute Sysinternals tools directly from the Web without hunting for and manually downloading them. Simply enter a tool’s Sysinternals Live path into Windows Explorer or a command prompt as http://live.sysinternals.com/<toolname> or \\live.sysinternals.com\tools\<toolname>.
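For example, to launch Process Explorer (procexp.exe) without downloading it first, you could enter the UNC path directly (this assumes the Windows WebDAV client service is available):

C:\> \\live.sysinternals.com\tools\procexp.exe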
There are many ways to create a live USB drive carrying an operating system like Ubuntu, but the method I will describe here is mainly based on using Sun’s VirtualBox. While the method described in the Ubuntu documentation involves installing a Live CD image on a USB flash drive, which then has to extract and load the operating system into RAM, the method I will describe on this page involves installing a fresh operating system on a bootable flash drive that works the same way as from a real HDD (except for the speed, of course). Thus, you should have a good bootable USB 2.0 flash drive, with decent I/O speeds, of at least 4GB (considering that the operating system itself, Karmic Koala, weighs ~2GB).
Divide your USB flash drive into two partitions
In order to separate the operating system from the documents you would like to save on the flash drive, it is advisable to divide your USB flash drive into two partitions. Only do this if your USB flash drive has more than 2GB of space and you do not need to save changes you make inside the operating system. To achieve this, you need to have GParted installed (or at least that is what I prefer). If you are not following this tutorial on a Linux machine, you’ll have to use whatever software you know best that works with your operating system (on Windows I recommend Acronis Disk Director and Partition Magic). To get GParted, type the following command in a terminal:
sudo apt-get install gparted
Now backup all data you have on your USB flash drive, because we will need to format it and create two partitions. Haven’t backed up your data? You’re playing with fire!
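If you prefer the command line, a rough equivalent of the GParted steps might look like this (it assumes the stick shows up as /dev/sdX and holds at least 4GB; double-check the device name with lsblk, since picking the wrong one destroys data):

$ sudo parted /dev/sdX mklabel msdos
$ sudo parted /dev/sdX mkpart primary fat32 1MiB 2GiB     # partition for the OS
$ sudo parted /dev/sdX mkpart primary fat32 2GiB 100%     # partition for documents
$ sudo mkfs.vfat -n SYSTEM /dev/sdX1
$ sudo mkfs.vfat -n DATA /dev/sdX2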
UNetbootin (Universal Netboot Installer) is a cross-platform utility that can create live USB systems and can load a variety of system utilities or install various Linux distributions and other operating systems without a CD.
Can load a variety of system utilities, such as Ophcrack, BackTrack.
Other operating systems can be loaded via pre-downloaded ISO image or floppy/hard drive disk image files.
Automatically detects all removable devices.
Supports LiveUSB persistence (preserving files across reboots; this feature is for Ubuntu only)
Multiple installs on the same device are not supported.
It is worth noting that UNetbootin’s meta-data is very out of date. For example, the latest version of Linux Mint offered in the drop-down menu is version 10, whilst the latest official release is version 14 (at time of writing, February 2013). However, UNetbootin can still be used to write a bootable Mint 14 ISO file onto a USB device, if the user first downloads the ISO file manually.
These drives can contain just about any tool you want: anti-virus, OS boot CDs, OS repair/recovery discs, programs, etc. This is a slightly more difficult section, depending on exactly what you want on your flash drive. It can also be time consuming.
Different programs and bootable Windows and Linux ISOs require different boot parameters, which is why some things work with one program and not another.
SARDU, XBOOT, and YUMI can create a multiboot utility flash drive but each officially support different programs / ISO’s. EasyBCD can create multiboot flash drives but requires you to PAY ATTENTION when configuring.
You’ll have to find which actually work best for you.
I haven’t found one that does everything I would like it to (do all the programs below and work) so I’ve got 2 utility / rescue flash drives.
Places to find help for the above programs.
1 – See the links on those programs home pages.
2 – Reboot.Pro
3 – 911CD
Additions tested:
– Windows 7 Recovery Discs (32 & 64-bit)
– Windows Vista Repair Discs (32 & 64-bit)
– UBCD4Win (SP3 slipstreamed pre-build, nlited to add drivers and update packs)
– Hiren’s Boot CD 14
– openSUSE 11.4 LiveCD (KDE)
– Linux Mint LiveCD (Gnome & KDE) (if Linux Mint works then Ubuntu should too)
– AVG
– Avira
– Kaspersky
– Microsoft Standalone System Sweeper
– Acronis True Image Home 2011
– Acronis Disc Director 11
Below are some notes on each program as of this writing.
They are not intended to bash anyone, they are just the results I came up with.
SARDU (2.0.3 beta 6)
– Do not rename ISO’s.
– openSUSE does not work.
– Hiren’s Boot CD support removed, AFAIK due to its questionable legality. (download v2.0.2c if you need this)
– UBCD4Win does not work in this version. (download v2.0.2c if you need this)
– Microsoft Standalone System Sweeper supported.
– To add Acronis to this see here.
Make sure all your ISO’s are in one folder.
Do not rename the ISO’s.
If you haven’t already downloaded them, clicking the button next to the check box will take you to the download page.
Plug in your preformatted flash drive
Click the CD/ISO picture (upper left) to load the ISO folder.
ISO’s already in the folder will be preselected.
Click the Search USB button on the right to find your flash drive.
Click the picture of the flash drive below it to start the process of making your bootable flash drive.
SARDU creates a multiboot USB drive, a multiboot DVD or multiboot CD (all-in-one) for free (personal and non commercial use, read the license). Hard disks (internal and external), SSD, USB flash drive and all removable memory disk and media are supported.
The multi bootable device can include comprehensive collections of antivirus rescue CD, utilities and popular Linux live distributions. Windows PE can also be included, as well as recovery disks and install media for Windows XP (Professional, Home and 64 Bit), Windows Vista, Windows Seven and Windows Eight.
A search led to a recommendation that I try XBOOT. Another source suggested that SARDU and XBOOT both might be more robust than YUMI.
Were there other possibilities? An AlternativeTo webpage listed Universal USB Installer (of which UNetbootin was apparently a clone) and WinToFlash as much more popular than SARDU, XBOOT, or YUMI, but these did not appear to be multiboot solutions. That is, they would load only one program onto the USB drive. At this point, Wikipedia’s list of tools to create live USB systems did not distinguish multiboot from single-boot tools — but it did make clear that there were many single- or multiboot tools out there. One source offered a way to use UNetbootin to create a multiboot flash drive, but it, too, sounded complicated. A search suggested that EasyBCD was another possibility, but it appeared that it was a boot manager that would let you decide whether to boot from, say, a hard drive partition containing Windows 7 rather than another partition containing Linux.
So I took it as a choice among YUMI, SARDU, or XBOOT. A search led to a thread with several user reports that tended to favor YUMI. As I had also found, one comment recommended formatting within YUMI rather than formatting the USB drive via Windows Explorer. One blogpage, written in spring 2011, seemed to find little difference in capabilities, between SARDU and YUMI, except that SARDU had the advantage of allowing the user to burn a CD or DVD containing one (or possibly more) installer. Two other webpages praised SARDU, but without offering specific comparisons against alternatives like YUMI. The XBOOT webpage seemed to indicate, as others had done, that XBOOT was preprogrammed to accept far fewer programs and distributions than YUMI; the same had also seemed to be true of SARDU.
YUMI (Your Universal Multiboot Installer) is the successor to MultibootISOs. It can be used to create a multiboot USB flash drive containing multiple operating systems, antivirus utilities, disc cloning, diagnostic tools, and more. Unlike MultibootISOs, which used grub to boot ISO files directly from USB, YUMI uses syslinux to boot extracted distributions stored on the USB device, and reverts to using grub to boot multiple ISO files from USB if necessary.
Aside from a few distributions, all files are stored within the Multiboot folder, making for a nicely organized Multiboot Drive that can still be used for other storage purposes.
Creating a YUMI Multiboot MultiSystem Bootable USB Flash Drive
YUMI works much like Universal USB Installer, except it can be used to install more than one distribution to run from your USB. Distributions can also be uninstalled using the same tool!
XBOOT is yet another neat little Multiboot ISO USB Creator. It is a Windows based application that can be used to create a Live Multiboot USB or even a Multiboot ISO file that can then be burnt to a CD/DVD. XBOOT supports many Linux Distributions and Utilities, and allows you to use your choice of a Grub or Syslinux bootloader. Also included is a built in QEMU emulator (enabling you to boot an ISO from within Windows).
This software boots your PC and analyzes your filesystems. It displays a graphical menu for you to select which system to boot. Intended to replace LILO and Loadlin, it is written in C with GCC and runs fully in real mode.
It can read FAT12, FAT16, FAT32, ext2, ext3, ext4 (with constant inode size) and ISO 9660 filesystems.
It has a graphical user interface with mouse support, and can be installed on any media: floppy, hard/USB disk partitions, hard/USB disk MBR, CD/DVD-ROM, DVD-RAM (i.e. FAT with 2048 bytes/sector). It can also use a serial port as input/output instead of the screen and keyboard.
Gujin can chain-load other bootloaders and load Linux kernels, and it has an unfinished loader for the Multiboot specification; but more importantly for our current interest, it can load standard ELF files (more exactly, GZIP-compressed ELF files).
There are two major ELF variants on the PC: ELF32 (with 32-bit load addresses, sizes and entry point) and ELF64 (with 64-bit fields). Gujin loads either of them and switches the processor into protected mode to jump to the ELF entry point.
Gujin will not try to set up memory paging at all; it is the job of the kernel to select which kind of paging it wants, so even 64-bit ELF files have to handle the transition from 32-bit to 64-bit mode themselves (because paging is needed to enter 64-bit mode).
The same goes for interrupts: Gujin switches to protected mode but does not re-enable interrupts. It is the job of the kernel being booted to set up the Interrupt Descriptor Table and handle each interrupt, because the BIOS will not help the kernel any more.
Gujin can also relocate the ELF file if it contains relocation information; see the --emit-relocs option of the “ld” linker in the “binutils” toolchain.
Because most applications will want to collect BIOS information before the switch to protected mode, Gujin can call a function of the ELF file in real mode; if this function returns zero, the loading process continues, otherwise an error message is displayed (for instance: “trying to execute a 64 bits application on a processor without this feature!”).
Moreover, in most cases the kernel can decide to return to the Gujin bootloader (if it has not erased it from memory) without forcing a reboot.
The Gujin bootloader is built using standard Linux tools; it does not need Linux to run (only a not-too-buggy BIOS), but we have to assume you are running Linux to use the GNU toolchain (GCC, binutils, GZIP…) to produce the ELF file for the kernel. Also, the Gujin installer needs either 32-bit or 64-bit Linux to run and to install the Gujin bootloader on a device.
Because Windows uses another executable format, you cannot install Cygwin/MinGW on Windows and use that compilation toolchain directly; you would have to build a cross-compiler toolchain to produce ELF files, and that is out of scope for this description (but not that difficult).
So enough text description, let’s try it!
Floppies being outdated, let’s say we want to use a USB stick as our test media. In some cases we may want to use an SD card instead; the process is basically the same.
We first need to check that the target PC will be able to boot that USB stick or SD card, when Gujin is installed on it.
To increase the chances of success, we will ask the Gujin installer to completely reformat this USB disk or SD card, so first back up any interesting files you have somewhere else.
If you do not trust anybody, download the source file gujin-*.tar.gz, extract it in a directory and type “make”; that will produce an executable named “gujin” (even on 64-bit Linux).
Remember to visit http://gujin.org to check whether there is a newer version of Gujin, and to increase the Gujin author’s counters and keep him happy.
Then we reformat that dedicated USB stick as a bootable FAT filesystem, erasing all its content: first become “root” by typing “su” or “sudo” (distribution dependent), then get the device name of your USB stick (let’s say it is /dev/sdg; it is sometimes /dev/mmcblk0 for SD cards), and type:
./gujin /dev/sdg --disk=BIOS:0x00,auto
Depending on the size of the USB stick, that will have created either a FAT16 or a FAT32 (or even a FAT12) filesystem, but that point is not really important.
You then unplug this device and replug it: most distributions will automatically mount the filesystem and display a window with its content: only a single file, which is the bootloader itself.
To check that this filesystem is correctly created, you can type:
/sbin/fsck.vfat /dev/sdg
With the Gujin installer parameters we used, this creates a “superfloppy” format on our USB stick, which is currently the format most PCs are able to understand and boot from.
That does not mean your own PC will boot it 100% of the time, due to BIOS bugs, so you now need to test whether this USB stick is bootable by your PC: unmount the USB stick, plug it into the test PC and power it on, and see whether Gujin starts (you will notice easily).
If it is not started try to check:
- that the boot order in the BIOS is set to boot USB devices first
- try the different USB devices if your BIOS has switchable items.
- try to tell the Gujin installer to use the Extended BIOS instead by typing (--disk=EBIOS:0x00,auto is the default):
./gujin /dev/sdg
- try to tell the Gujin installer to generate a real disk and not a superfloppy by:
./gujin --mbr /dev/sdg --disk=BIOS:0x00,auto
- try the two previous options together:
./gujin --mbr /dev/sdg --disk=EBIOS:0x00,auto
- try to use another (smaller) USB stick, some BIOS will only accept to boot from a FAT16 superfloppy
By now you should know a lot more about your BIOS, and have a bootable USB stick.
Then we want to generate this ELF kernel. Let’s try to generate a “hello world”: create a file (hello.c, say, since we will end up with hello.kgz) with this content:
const char msg1[] = "Hello protected-mode text world! please reboot ...";

#define STACKSIZE (64 * 1024)
static unsigned stack[STACKSIZE / 4] __attribute__ ((aligned(32)));

void _start (void)
{
    /* We are in flat non-paged memory and interrupts are disabled */
    asm (" mov %0,%%esp " : : "i" (&stack[STACKSIZE / 4]));

    volatile unsigned short *video_array = (volatile unsigned short *)0xB8000;
    unsigned cpt1;

    video_array += 10 * 80; /* a few empty lines */
    /* We want a blue background and light gray foreground, so 0x1700: */
    for (cpt1 = 0; cpt1 < sizeof(msg1) - 1; cpt1++)
        video_array[cpt1] = 0x1700 + msg1[cpt1];
    while (1)
        continue;
}
Then compile it like this (you may need to add “-fno-stack-protector” too, distribution dependent):
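The command itself is missing from this text; a plausible invocation, assuming a 32-bit freestanding build with no C library and relocations kept for Gujin (exact flags may vary by distribution and Gujin version), would be:

gcc -m32 -ffreestanding -nostdlib -fno-stack-protector -Wl,--emit-relocs -o hello hello.c
gzip -9 hello
mv hello.gz hello.kgz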
You just need to copy that hello.kgz file onto the USB stick and reboot with it; you will get a menu with “hello.kgz” displayed.
If you click on this filename, you will see “Hello protected-mode text world! please reboot …” displayed. (Because this hello world does not manage graphics modes, you have to “force start kernel in text mode” in the Gujin configuration.)
If you want slightly more complex “hello world” applications, you should download the Gujin install*.tar.gz pack.
There you will find a few KGZ files, like the previous example but with some addresses displayed to show the exact position of the application in memory.
You can copy hello_bios.kgz to the USB stick and run it; I hope the different functions of the source file are obvious.
The Gujin bootloader is willing to let the real-mode and protected-mode kernel access the bootloader’s internal data (such as what the current video mode is, or how to display strings…) as long as the kernel is licensed under the GPL license, i.e. when the GZIP kernel contains a GZIP comment describing its license. That is illustrated by cleandisk.kgz, compiled from this cleandisk.c source code and linked with this linker file.
The main point about the GPL license is that it is not possible to draw a line between a possible Gujin bug and a closed-source application: the kernel is not allowed to modify Gujin’s data while in real mode, but there is no hardware enforcement of this.
Obviously, Gujin will let you run an ELF file with a real-mode part AND a protected-mode part; just have a look at the example hello_gpl.kgz and its source code hello_gpl.c.
How Programming Languages Evolved
How the computer stores data
Numbering systems the computer likes
Different data types
Different programming styles
Procedural
Functional
Object Oriented
What does it mean when a language is “strongly” or “weakly” typed
What is compiling and do I need to do it?
When I would do it, and when I would not
Why, what advantage does it provide
Modern Computer Languages Overview
Bash
Perl
Python
Ruby
C
C++
Java
Vala
C#/Mono
Programming methodologies
Waterfall
RAD
Summary of Graphical Programming Libraries
GTK
Qt
FLTK
SDL
And finally, programmers’ tools:
Eclipse
NetBeans
Anjuta
This is a lot of topics, and this week will be an overview. It should get you enough information to recognize the labels on the map, even if you are not 100% sure where the map will lead you. That will be the task for the following weeks.
One of the fundamental problems with development methodologies (in fact, with any effort to standardize a process among people) is that the “ought to be”, in an idealistic moral sense, obscures the “is”. eXtreme Programming is a counterintuitive approach to increasing programmer productivity.
Despite the heroic efforts of the marketing team, and despite users’ need to keep costs down, a programmer is productive for around 2 to 4 hours a day on average. A monster in the closet, but a reality. Add to this the fact that programming is an art that only a few virtuosos can perform with ease: 5% of programmers (or fewer) do 95% of the work (or more). That is why the benefits of pair programming do not really come at a cost in productivity. On the contrary, a team of 2 programmers working under the extreme programming scheme is probably 2 to 3 times more productive than the same programmers working in isolation.
The emphasis on design and testing simply reflects a reality of the development cycle:
A defect introduced in coding is one defect, although fixing it may generate more defects.
An error in the design phase produces more than 10 defects in code.
An error in the requirements-gathering phase produces more than 100 defects in code.
That is why the development effort should concentrate on analysis and proceed in short iterations in which the system’s functionality quickly becomes apparent to the end user, so that the user can give the feedback needed to keep things moving effectively in the right direction.
The development effort should follow approximately the following weighting: