Build and Continuous Integration Tools

We've examined at length the benefits of automating your build process, and as you would expect, there are many off-the-shelf tools that will help you to do so.


Build Tools
The granddaddy of build tools is the venerable make. Things have moved on, however, and several much better choices are now available.

GNU Make: http://www.gnu.org/software/make/
Although based upon make, GNU Make supports a number of significant extensions allowing much more sophisticated control over the build process than has traditionally been available.

Autoconf: http://www.gnu.org/software/autoconf/
Autoconf is particularly appropriate for open source software that needs to support building in a wide range of different environments. It allows the build system to automatically determine what facilities are available on the host system and behave accordingly.

Jam: http://www.perforce.com/jam/jam.html
Jam is an alternative to make that typically requires much less configuration to build a given project.

Boost.Build: http://www.boost.org/doc/tools/build/
Built on top of Jam, Boost.Build provides a standard build system particularly appropriate to building C++ software.

SCons: http://www.scons.org/
This is a make replacement integrating autoconf-like functionality.

Ant: http://ant.apache.org/
Ant is a make replacement that has become the de facto standard build tool within the Java world.

Maven: http://maven.apache.org/
Maven is a software project management tool that does much more than simply manage the build process, bringing package management, deployment, and other facilities to the Java world and rapidly gaining mind share from Ant.

Capistrano: http://www.capify.org/
Not a build tool per se, Capistrano manages the task of deploying software on a number of different servers. Although particularly associated with Ruby on Rails, it can be used to deploy products created with any technology.


Continuous Integration Tools
Many of the proprietary systems we've already discussed (such as Microsoft's Visual Studio Team System) come with their own continuous integration solutions. In addition, there are a number of open source systems available:

CruiseControl: http://cruisecontrol.sourceforge.net/
This is probably the best known open source continuous integration system. As well as the main Java implementation, there are also .NET and Ruby on Rails variants.

Hudson: http://hudson.dev.java.net/
This is an open source J2EE continuous integration server.

Source of Information: Paul Butcher, Debug It! Find, Repair, and Prevent Bugs

Source Control and Issue-Tracking Systems

The problem with choosing a source control and issue-tracking system isn't so much finding one that's right for you as picking through the huge range available. So, what might sway your decision? Some things (not an exhaustive list) to consider include the following:

• Open source or commercial?

• Do you need to host it yourself (behind your firewall, for example), or do you want to use one of the many services that provide hosting for you?

• Do you need your source control and issue-tracking systems to be tightly integrated with each other?

• What level of support for distributed development do you need?

I can't possibly give a complete survey of all the different source control and issue-tracking systems here, but I can give you pointers to a few of the major players and why you might consider them.


Open Source Solutions
CVS: http://www.nongnu.org/cvs/
Until fairly recently, the only real open source choice was CVS. CVS has a number of well-known limitations, however, not least of which are the fact that check-ins aren't atomic and it doesn't version directory structures.

Subversion: http://subversion.tigris.org/
Over the last few years, CVS has been almost entirely supplanted by Subversion, which addresses most of CVS�s obvious weaknesses and has become the default open source choice.

Git: http://git.or.cz/
Coming up fast on the rails is Git, which is gaining mind share with a number of high-profile projects switching to it, in part because of its excellent support for distributed development.

Mercurial: http://www.selenic.com/mercurial/
This is a cross-platform, distributed system with very similar goals to Git and particularly good support for branching.

Bazaar: http://bazaar-vcs.org/
This is designed to just work and adapt to your team's workflow instead of imposing its own model.

Bugzilla: http://www.bugzilla.org/
For a long time, Bugzilla, developed as part of the Mozilla project, was the default open source choice for issue tracking. Recently a number of alternatives have become available, however.

Trac: http://trac.edgewall.org/
Trac takes a minimalist approach, aiming to stay out of developers' way as much as possible. It's particularly notable for the tight integration with its built-in wiki.

Redmine: http://www.redmine.org/
A relative newcomer on the scene, Redmine seems to be well supported and making good progress.

Where open source solutions have traditionally been weak is in the integration between source control and issue tracking, and with development environments. The situation has improved considerably recently, with IDEs such as Eclipse providing excellent Subversion support, for example.


Hosted Solutions
SourceForge: http://sourceforge.net/
SourceForge is the best known of a number of similar sites that provide hosting for open source projects, integrating a number of tools such as source control, issue tracking, documentation tools, and so on. Others include Google Code (http://code.google.com/hosting/) and language-specific sites such as RubyForge (http://rubyforge.org/).

GitHub: http://github.com/
GitHub provides Git hosting and recently gained a lot of attention when it started hosting the Ruby on Rails project.

Lighthouse: http://lighthouseapp.com/
This is a hosted issue-tracking system with integration for Subversion and Git.

Unfuddle: http://unfuddle.com/
This is a secure, hosted project management solution providing Subversion or Git hosting together with integrated issue tracking.

Rally: http://www.rallydev.com/
Rally provides Agile life-cycle management tools.

VersionOne: http://www.versionone.com/
This is a project management and planning tool designed specifically for agile software development. It is available for local installation as well as hosted.

Pivotal Tracker: http://www.pivotaltracker.com/
Tracker is a free, award-winning, story-based project-planning tool that allows teams to collaborate in real time.


Commercial Solutions
Perforce: http://www.perforce.com/
Perforce is a source control system that particularly concentrates on cross-platform support and performance. It also includes a simple issue-tracking system or can integrate with various open source or commercial solutions.

FogBugz: http://www.fogcreek.com/FogBugz/
FogBugz, from Fog Creek Software, is a flexible bug tracking and project planning tool, available for local installation or as a hosted solution. It's traditionally been available for Windows but is in the process of being ported to Linux and Macintosh.

Visual Studio Team System: http://msdn.microsoft.com/teamsystem/
Microsoft's Visual SourceSafe has long been a favorite punching bag, criticized for a range of failings. To be fair, Microsoft can hardly complain about this, given that it never seemed to use the product itself despite its famous policy of eating its own dog food. Thankfully, Microsoft's offering in this area seems to have improved no end recently with the introduction of Visual Studio Team System, a fully integrated source control and project management solution.

Rational ClearCase and ClearQuest: http://ibm.com/software/awdtools/clearcase/
The ClearCase source control system and its associated issue-tracking solution, ClearQuest, used to be considered the default enterprise choice. They are expensive and complex, however, and inappropriate for anything other than large teams with dedicated support organizations.

StarTeam: http://www.borland.com/starteam/
This is a fully integrated source control and project management system.

BitKeeper: http://www.bitkeeper.com/
This is a distributed system with similar goals to Git.

Source of Information: Paul Butcher, Debug It! Find, Repair, and Prevent Bugs

Debugging Builds

Many teams find it helpful to create a debugging build, which differs from a release build in various ways designed to help reproduce and diagnose problems.


Compiler Options
Most compilers provide you with a wide range of options that allow you to control exactly how they translate your source code into object code. Often it makes sense to use a different set of options during development and debugging from those used in production. Here are a few examples:

Optimization: Modern compilers can perform wonders, generating object code that is as efficient as, or more efficient than, hand-rolled machine code. In the process, however, they often restructure things so much that the relationship between source code and object code becomes muddied. This can, for example, make single-stepping in a debugger confusing or even impossible. As a result, debug builds often disable optimization.

Debugging information: To be able to single-step through the source, debuggers need to know how to map lines of source code to regions of object code. This information is typically excluded from a production release because it adds size and may give away information we would rather keep to ourselves.

Bounds checking: Some C/C++ compilers provide an ability to add bounds checking to arrays and other data structures.

There's more to a debugging build than just choosing different compiler options, however.


Debugging Subsystems
Sometimes it's worth thinking about replacing an entire subsystem with a version specifically designed to make debugging easier. This can be particularly useful if we can't easily control the behavior of the production version of the subsystem (because it's under the control of a third party, for example, or because its behavior has some random element).

Imagine, for example, that our software interfaces with a server provided by a third party and we're trying to debug a problem that occurs only when it returns a specific sequence of results. It may not be easy, or even possible, to ensure that it always returns that exact sequence on demand. Even if we can, its owners may not thank us for bombarding it with requests, especially if those requests aren't well formed (which is likely to be the case during debugging).

There is some overlap between a debugging subsystem and test doubles such as mocks and stubs. The difference is one of scale and scope. A test double is a short-lived object intended for use only within a single test. A debugging subsystem is normally a complete replacement for its associated production subsystem, implementing all of its interfaces and operating correctly across a wide range of use cases. It may even make sense to ship a debugging subsystem with the software so that end users can enable it to help us debug a problem in situ.

A debugging subsystem either can entirely replace its corresponding production system (emulating its entire behavior) or can be implemented as a shim that sits between the rest of the software and the production system, modifying its behavior as appropriate.

One particular subsystem you might want to consider bypassing during debugging is the user interface.


Solving the User Interface Problem
The needs of the end user and the needs of a developer are often very different. A graphical or web-based user interface might make it very easy for end users to achieve their goals, but it can get in the way during development and debugging because such interfaces are difficult to control programmatically.

For this reason (among others), it makes sense to ensure that the user interface layer is as thin as possible, just looking after the details of displaying information and soliciting input from the user. In particular, it should contain no business logic whatsoever. This should mean that you can replace it with an alternative such as a scripting language that can drive the rest of the software, which is likely to be much easier to work with from a debugging standpoint.

This might fall out in the wash if your software implements an object model such as (OLE) Automation under Windows or AppleScript support on the Mac. It might even be worth adding support for such an object model exclusively for debugging.

Another subsystem commonly replaced with a debugging version is the memory allocator.


Debugging Memory Allocators
In languages like C and C++, which don't provide automatic memory management, a debugging memory allocator can be worth its weight in gold. A debugging allocator can help you detect and solve a number of common problems:

• By keeping track of memory allocation and deallocation, it can detect memory leaks (memory that is allocated but not freed).

• By placing guards before and after allocated memory, it can detect buffer overflows and memory corruption.

• By filling memory regions with known patterns, it can detect instances where memory is used without being initialized.

• By filling deallocated memory with a known pattern and holding onto it, it can detect instances where memory is written to after it has been deallocated.


Built-in Control
As well as modifying the behavior of third-party code, we can also choose to have our own code behave differently in a debug build, building in the control that will prove useful during diagnosis. Examples include the following:

Disabling features: Sometimes your software might include features that are valuable in production but obfuscate things during debugging. Communication between one part of the application and another might be encrypted for security reasons, for example. Or data structures might be optimized to improve memory usage and execution speed. You are likely to make problems in these areas much easier to diagnose if you allow such features to be selectively disabled.

Providing alternative implementations: Sometimes there is more than one way to implement a module: one that is simple and easy to understand and another that is complex and optimized. By including both within the code and providing a means to switch between them, you can validate the results of the complex version against the simple one. This can help pinpoint whether the bug resides in the optimized version or elsewhere, and it can help with debugging even if it does lie elsewhere by making things simpler to understand.

Although we tend to talk about two different builds, debug and release, there's nothing to stop you from building other flavors. Many teams, for example, have an integration build that acts as a halfway house between a debug and a release build. It might, for example, have debugging symbols and assertions enabled like a debug build but have optimizations enabled like a release build.

Source of Information: Paul Butcher, Debug It! Find, Repair, and Prevent Bugs

Major Fields in Computing

You may have heard terms like "computer engineering," "computer science," "information systems," and "information technology" and wondered if they were synonymous. They are not, but they're related through the levels of abstraction. You need to understand these terms too.

Computer engineering focuses on the bottom levels of abstraction: hardware design and use. It is an extension of electrical engineering, covering the design and analysis of computer hardware. In fact, in a university, computer engineering is most often taught in the school of engineering. While a computer engineer understands programming, most of his or her focus is on hardware, not software.

Computer science, the subject of this book, is the systematic study of computing processes. It is concerned with the middle levels of abstraction, from hardware use to software use. Computer scientists work primarily in the development of software, either at the practical level (improving the speed at which a web page performs a search) or the theoretical (exploring the limits of computers' recognition of human speech). The hope is that every subject that begins as theoretical ends as practical. To better understand software design and development, the computer scientist must understand how a computer works, even if he or she can't create one, and thus is also an experienced computer user. In a university, computer science is most often taught in the school of science and mathematics.

Covering the top levels of abstraction is information systems, which is the study of how computing technology is used in business. Someone schooled in information systems has knowledge of business operations and programming, but is more interested in solving a business problem with existing solutions than in trying to invent novel solutions. In a university, information systems may be taught in the same department as computer science or it may be taught in the business school.

Finally, information technology is a broad term that does not refer to a particular field but instead covers all the levels of abstraction. While often restricted to mean the use of technology in business (similar to information systems), in general, the term encompasses the design, development, and implementation of computer hardware and software.

Source of Information: Computer Science Made Simple (Broadway Books, 2010)

Compare and contrast the three major multiprocessor architectures: SMP, massively parallel and cluster


An SMP (for Symmetrical Multiprocessor) is a system in which several processors share the same memory and function under the control of a single executing operating system. An SMP generally provides, within its configurability limits, the best scalability and the best price/performance ratio. SMPs are tending to become commodities at the entry level, although there are some problems in extending SMP to support a very large number of processors; at the time of writing, somewhere around 100 processors seems to be the upper boundary. While this number will increase over time, for applications that call for extreme processing capabilities (intensive numerical operations or decision support across very large databases), one turns to cluster or MPP systems.

A cluster is a collection of otherwise independent systems ("nodes") connected by a fast interconnect. Each node runs under the control of its own copy of the operating system, although cluster software gives the user the illusion that it is a single, extremely powerful, high-capacity system. Clusters support high availability in a very natural way: any applications running on a failing node can be retried on the surviving nodes. While clusters can provide high performance, they do require that applications be written to support the programming model provided by the cluster. Both DBMSs and integrated management packages (for Enterprise Resource Planning) have been ported to cluster systems. While it can be hard for a cluster to deliver on the promise of its performance for transaction processing, it is much more straightforward for decision support.

From an architectural point of view, the only difference between a cluster and an MPP is the maximum number of processors supported (several hundred for the MPP instead of tens), together with a very high-performance interconnect. Key applications for MPP are intensive scientific workloads and decision support. To leverage the available raw performance, applications must be carefully written to match the platform architecture, with extreme attention paid to extracting and exposing the maximum parallelism. Peer-to-peer networks, grid computing, and clusters of PCs can offer an economic alternative to an MPP system.

Source of Information: Server Architectures (Elsevier, 2005)

How much longer will system performance continue to increase?

A number of factors contribute to system performance: the performance of the processor(s), memory, I/O, network, and disks, as well as that of various software components (including the OS, middleware, and applications), all have an effect. However, the largest effects have been seen, and will continue to be seen, in processor performance and memory capacity. And as far as microprocessor performance and memory capacity are concerned, there seems little reason to suppose that the technology improvement curve will slow before 2010 or thereabouts; thus, microprocessor suppliers will have enough transistors and frequency capability on hand to keep processor performance on track for quite a while (although, if not constrained, the increase in thermal dissipation associated with the higher frequencies could be a roadblock to achieving the desired level of performance). Memory capacity increases will make it possible to track the growth in software size and in the amount of data handled; this growth in memory size will make it possible (through the use of cache technologies) to compensate in large part for the lack of equivalent improvements in disk and network access times. But the possibility of a breakdown in the smooth progression of system performance remains if memory access times and bandwidths get too far out of step with the demands of processor performance.

This growth is driven by software's insatiable appetite for storage capacity and processing power, and there are no objective reasons to suppose that the appetite, or the growth, will stop. The open questions, however, are the growth rate and the effect that these phenomena will have on the industry.

One could observe, across several information-processing domains, that the industry goes through successive phases: a phase of stability, followed by a phase of disruption. A stability phase is characterized by continuity in architectural concepts and incremental improvements. During such an equilibrium phase, we can observe intense technological developments, often far afield from the mainstream. A disruption is characterized by a sudden questioning of established verities, together with a flood of new ideas coming to a head. New architectural concepts, approaches and products appear; the market makes its choice (not always in favor of the new), and the structure of the industry moves on. A new phase of stability then commences.

Source of Information: Server Architectures (Elsevier, 2005)

Windows 7 Editions

After seeing the different versions of Windows that may be running in your environment, you will be pleased to know that the Windows 7 editions are not as complex. Microsoft released 11 different Windows 7 editions, counting the 32-bit and 64-bit versions separately. Some versions you might never see, as most administrators will be working with the Professional and/or Enterprise/Ultimate editions. Below is a listing of the 11 versions:

• Windows 7 Home Basic
• Windows 7 Starter
• Windows 7 Home Premium
• Windows 7 Home Premium 64-bit
• Windows 7 Home Premium N
• Windows 7 Professional
• Windows 7 Professional 64-bit
• Windows 7 Enterprise
• Windows 7 Enterprise 64-bit
• Windows 7 Ultimate
• Windows 7 Ultimate 64-bit


A brief introduction to each version:

• Windows 7 Starter – the most basic version of Windows 7, also known as the budget version. It can only be obtained preinstalled through a manufacturer and is expected to be popular on netbooks. Some features, including Windows Aero, are not included in this version.

• Windows 7 Home Basic – the home edition for emerging markets. This version will not be available for sale in developed countries such as the United States, United Kingdom, and Australia. Some features, including certain Windows Aero features, are not included in this version.

• Windows 7 Home Premium – the home edition for developed nations, aimed at the home market segment. Most computers sold at retail stores will include this version, with all home features including Windows Media Center, Windows Aero, and touchscreen support.

• Windows 7 Professional – the preferred edition for small businesses and enthusiasts. This edition includes everything in Home Premium as well as the capability to join Windows Server domains. Other features such as Remote Desktop, location-aware printing, Encrypting File System (EFS), Presentation Mode, and Windows XP Mode are also included in this version.

• Windows 7 Enterprise – the corporate edition of the operating system, available only through volume licensing. This version supports the Multilingual User Interface, BitLocker Drive Encryption, and UNIX application support. It is not available from manufacturers or at retail.

• Windows 7 Ultimate – the Enterprise edition for home and enthusiast users. This version includes everything from the Enterprise edition and is available through retail and some manufacturers.

The above explanation should be enough to give you an idea of which version you will be working with at home or in your environment. Windows Starter will most likely be seen only on netbooks in the future. Windows Home Basic will not be seen in developed markets, meaning Home Premium will be the choice for home users, while Professional and Enterprise will be the choice for corporate and business environments. The Ultimate edition will most likely be used by power home users and by some businesses that require the features that come with the Enterprise edition but do not have the licensing for it.

Source of Information: Microsoft Windows 7 Administrator's Reference (Syngress)

Top Spyware in 2010

Appin's research labs in Asia and Europe regularly analyze and reverse-engineer spyware and malware and evaluate which spyware is most widespread. A summary of the research on the top spyware follows:

1. PurtyScan: a pop-up ad that lures users with offers to find pornographic content or spyware on their computer and clean it up. Once clicked, however, it takes the user to a website loaded with yet more spyware and adware waiting to infiltrate the machine.

2. Gator: Gator tracks your online browsing so that it can tailor itself to your likes, then floods you with banner ads attempting to catch your attention. It often ends up on computers through file sharing on Kazaa or other P2P programs or through downloading freebies from random sites.

3. CoolWebSearch: a form of malware designed to hijack your Internet settings and forward you to its own web page, which is itself loaded with adware and spyware.

4. ISTbar/Aupdate: acts as a toolbar and sends a barrage of pop-up ads displaying pornographic images to your computer.

5. Perfect KeyLogger: a particularly harmful piece of code. It records everything you type on your computer and can communicate your most personal information back to its creator.

6. Trojan-Downloader.Generic: a backdoor that allows the attacker to download any Trojan onto the victim's computer. This is actually a family of spyware popularly called downloaders.

7. Trojan-Spy.Win32.Zbot.gen: spyware that connects back to a remote server and allows the attacker to gain remote access to your computer. This spyware is dangerous and can capture all sensitive data and passwords stored on your computer.

8. Explorer32.Hijacker: spyware that hijacks Windows' explorer.exe and gathers sensitive information from your computer.

Source of Information: Hakin9, November 2010
 