An Ecosystem of Distributions

Many hundreds of GNU/Linux distributions are in active use today. A quick look at DistroWatch's database (distrowatch.com) demonstrates the staggering number and growth of distributions. One of the first GNU/Linux distributions was called Softlanding Linux System, or SLS. For a number of reasons, a programmer named Patrick Volkerding thought he could improve on SLS. Because SLS was free software, Volkerding had the freedom to make a derivative version of SLS and distribute it. Volkerding did just this when he took SLS's code and used it as the framework or model upon which to create his own variant called Slackware. Subsequently, Slackware became the first widely successful GNU/Linux distribution and is maintained to this day.

With time, the landscape of GNU/Linux distributions has changed. However, the important role of derivation that made Slackware possible has remained fully intact and is still shaping this landscape. Today, the hundreds of GNU/Linux distributions serve a multitude of users for a myriad of purposes: There are distributions specially designed for children, for dentists, and for speakers of many of the world's languages. There are distributions for science, for business, for servers, for PDAs, for nonprofit organizations, for musicians, and for countless other groups.

Despite this diversity, the vast majority of derivatives can be traced back to one of two parent distributions: Red Hat and Debian. While it is not necessary to understand the details of how these projects differ, it's useful to know that Red Hat and Debian offer two compelling, but frequently different, platforms. Each project has strengths and weaknesses. For almost every group making a Linux-based OS, one of these projects acts as square one (with a few notable exceptions, such as the Gentoo project).

However, while the process of deriving distributions has allowed for a proliferation of OS platforms serving a vast multiplicity of needs, the derivative process has, historically, been largely a one-way process. New distributions based on Red Hat (Mandriva and Novell's SUSE, for example) begin with Red Hat or a subset of Red Hat technology and then customize and diverge. Very few of these changes ever make it back into Red Hat and, with time, distributions tend to diverge to the point of irreconcilable incompatibility.
While the software that each system includes remains largely consistent across all distributions, the way that it is packaged, presented, installed, and configured becomes increasingly differentiated. As this happens, interdistribution sharing and collaboration grow more and more difficult.

This growing divergence indicates a more general problem faced by distribution teams in getting changes upstream. Frequently, the users of GNU/Linux distributions find and report problems in their software. Frequently, distribution teams fix the bugs in question. While sometimes these bugs are in changes introduced by the distribution, they often exist in the upstream version of the software and the fix applies to every distribution.

What is not uncommon, but is unfortunately much less frequent, is for these bug fixes to be pushed upstream so that all distributions and users get to use them. This lack of collaboration is rarely due to malice, incompetence, or any tactical or strategic decision made by developers or their employers. Instead, tracking and monitoring changes across distributions and in relation to upstream developers is complicated and difficult. It's a fact of life that sometimes changes fall on the floor. These failures are simply the product of distribution-building processes, policies, and tools that approach distributions as products in and of themselves, not processes within an ecosystem.

Like many other distributions, Ubuntu is a derivative of Debian. Unlike the creators of many derivatives, the Ubuntu community has made it one of its primary goals to explore the possibility of a better derivation process with Debian, with Debian and Ubuntu's common upstreams (e.g., projects such as Linux or GNU), and with Ubuntu's own derivatives. A more in-depth discussion of Debian can help explain how Ubuntu positions itself within the free software world.

Source of Information: The Official Ubuntu Book, 5th Edition, Prentice Hall, 2010

What Is a Distribution?

It's clear to most people that Ubuntu is an OS. The full story is a little more complex. Ubuntu is what is called a distribution of GNU/Linux, or a distro for short. Understanding exactly what that means requires, once again, a little bit of history. In the early days of GNU and Linux, users needed a great deal of technical knowledge. Only geeks needed to apply. There were no Linux operating systems in the sense that we usually use the term; there was no single CD or set of disks that one could use to install. Instead, the software was dozens and even hundreds of individual programs, each built differently by a different individual, and each distributed separately. Installing each of the necessary applications would be incredibly time-consuming at best. In many cases, incompatibilities and the technical trickery necessary to install software made getting a GNU/Linux system on a hard disk prohibitively difficult. A great deal of knowledge of configuration and programming was necessary just to get a system up and running. As a result, very few people who were not programmers used these early GNU/Linux systems.

Early distributions were projects that collected all of the necessary pieces of software from all of the different places and put them together in an easier-to-install form with the most basic configuration already done. These distributions aimed to make using GNU/Linux more convenient and to bring it to larger groups of users. Today, almost nobody uses GNU/Linux without using a distribution. As a result, distribution names are well known. Ubuntu is such a project. Other popular distros include Red Hat and Fedora, Novell's SUSE, Gentoo, and of course Debian.

Most distributions contain a similar collection of software. For example, they all contain most of the core pieces of GNU and a Linux kernel. Almost all contain the X Window System and a set of applications on top of it that may include a Web browser, a desktop environment, and an office suite. While distributions started out distributing only the core pieces of the OS, they have grown to include an increasingly wide array of applications as well. A modern distribution includes all of the software that "comes with an OS," that is, several CDs or DVDs containing anything that most users might want and that the distribution is legally allowed to distribute.

Ubuntu, like other contemporary distros, offers a custom installer, a framework including software and servers to install new software once the system has been installed, a standard configuration method through which many programs can be configured, a standard method through which users can report bugs in their software, and much more. Frequently, distributions also contain large repositories of software on servers accessible through the Internet. To get a sense of scale, Ubuntu includes more than 30,000 pieces of software on its central servers; each piece of software is customized slightly and tested to work well with all of the other software on the system. That number grows daily.

What's important to realize is that the creators of distributions do not, for the most part, write or create the applications you use. The Ubuntu team did not write Linux, and it did not write GNU, although individuals on the team have contributed to both projects. Instead, the Ubuntu team takes GNU, Linux, and many thousands of other applications and then tests and integrates them to be accessible under a single installer. Ubuntu is the glue that lets you take a single CD, install hundreds of separate pieces of software, and have them work together as a single, integrated desktop system. If you were to pick up a CD of another distribution such as Debian, Red Hat, or Novell, the software installed would be nearly identical to the software in Ubuntu. The difference would be in the way the software is installed, serviced, upgraded, and presented and the way it integrates with other pieces of software on the system.

Source of Information: The Official Ubuntu Book, 5th Edition, Prentice Hall, 2010

The Two Faces of Privacy on Facebook

Because Facebook started out as a personal network, it has been a hard transition for people to get used to it as a professional network as well. Most people use LinkedIn as a professional network and Facebook as a personal network, while Twitter is a hybrid that people are still trying to figure out. But with Facebook's rate of growth, many have started to turn to it as a personal branding tool and professional network. Marketers have begun turning to Facebook with Facebook Pages, Groups, and advertisements as a way to reach out to their prospects, customers, and fans.

This transition has created a dilemma for many folks because they are resistant to using Facebook as a professional network, yet their colleagues, competitors, and companies are becoming active on it. Also, as we develop friendships in our industries, we want to extend those friendships and therefore turn to Facebook. This blurs the line between work and home even further. However, as Dawn Foster of WebWorkerDaily points out, we don't want to confuse "personal" with "private":

You can actually be professional and personal at the same time in social media without too much effort. When we talk about "being personal" on social media websites, I think that many people confuse "personal" with "private." The reality is that you get to decide what to share and what not to share, so you can still keep most areas of your private life private.

To deal with this dilemma, individuals typically have three options to choose from:

1. Maintain a single Facebook profile that combines personal and professional.

2. Maintain two different Facebook profiles: one personal and one professional.

3. Keep Facebook strictly personal and do not mix work into it.

Each of these options has upsides and downsides, and there is no clear answer or best practice yet. Even so, the choice will matter to you as a marketer or a company.

Source of Information: Facebook Marketing: Designing Your Next Marketing Campaign

Pinguy OS: Linux For Dummies

It's an old trope, one that has almost entirely lost its meaning over time, but there's another Linux distro that is designed for the average (or slightly below average) computer user.

This one, however, is about as close as any have come to creating an idiot-proof Linux distro. Pinguy OS (www.pinguyos.com) is based on Ubuntu and is mighty fun to play with.


A Little Background
Pinguy OS was built by a fellow named Antoni Norman who wanted a Linux OS that he could give to friends and family without having to constantly provide tech support. He'd been recommending Ubuntu but grew frustrated with how stark it was. Even though it's relatively easy to add software, codecs, and the like with Ubuntu's repositories, he found that many novice users were still having trouble finding everything they needed to make their system work as they wanted it to.

He looked at what his family and friends wanted to do with their computers, determined which applications would satisfy those needs in the most user-friendly way possible, and then decided to install those applications along with anything needed to make them run properly.

Along the way, Norman hit on an interesting problem with Linux repositories: It's great that virtually any and every application you could possibly want is right there for the taking, but most users don't know which applications they really want or need.

For example, everyone wants a word-processing application, a photo editor, a music player, and so on, but which one should you choose? The options can be overwhelming. Pinguy OS is kind of like having a personal shopper; you still have 50 choices for spaghetti sauce, but Pinguy OS more or less removes the decision-making process for you.

Pinguy OS is a simple and smart, if slightly counterintuitive, approach to a Linux distro. Instead of giving the user his or her own tabula rasa operating system as the user-friendly Ubuntu does, Pinguy OS packs in all the programs, patches, browser plug-ins, and other tweaks that (he's pretty sure) the average user will want, and it's all designed to be optimized from the get-go. He also used a variety of programs to tweak the interface so it's as attractive as possible, such as using MintMenu for the main menu. The result is an operating system that should "just work" and looks pretty, to boot.


A Tour
It was somewhat jarring to boot a Linux OS and be greeted by a relatively busy desktop. On the left side of the screen is a vertical dock with folders for documents, music, pictures, and so on, and another dock on the bottom of the screen displays applications. Docky (which, as it happens, created both on-screen docks), Firefox, Mozilla Thunderbird, the Deluge BitTorrent client, Rhythmbox, VLC media player, Terminal, and a trash can are pinned to the dock (again, created by Docky) by default, but it also displays any open applications.

The top of the screen has a straightforward file menu (not created by Docky), which contains Places, System, Applications, and Search; Desktop, with a Spotlight-like search bar; and the predictable File, Edit, View, Places, and Help.

One item that seems to clash a bit with Pinguy's raison d'être is the large, transparent widget that displays CPU usage and other under-the-hood information about your system. Although this is a nice tool for knowledgeable computer users, Pinguy OS's target audience probably won't even know what it's for, and it takes up a lot of screen space. It seems rather unnecessary.


Test Drive
For the most part, Pinguy OS is as advertised. Everything is intuitive, smooth, and responsive, and I was able to watch streaming video from a number of sites without having to install anything. (Netflix streaming was a no-go, however.) Little things make the overall experience enjoyable, such as the use of Elementary-Nautilus to pull album art metadata into Rhythmbox.

Although Norman set up Pinguy OS to open a given file in the "right" program, it didn't always work that way. For example, video opens in VLC and images open in Image Viewer by default, which is great, but my test MP3 files opened in Movie Player for some reason. (A similar oddity occurred when I connected an iOS device: the system didn't recognize an iPod as a music player, although it worked fine after I forced the system to open it in Rhythmbox.) It's not difficult to change those settings, but the whole point of Pinguy OS is that you shouldn't have to.

Some of the chosen applications make a lot of sense. For example, the office application (OpenOffice), Web browser (Firefox), and email application (Thunderbird) are great options. However, there seem to be a few applications with overlapping functions that add to the already large number of programs. Do we really need the Deluge and TED BitTorrent clients; Dropbox and Ubuntu One for online storage; Brasero and DeVeDe to burn discs; and GNOME MPlayer, VLC, and Rhythmbox to play media, to name a few?

At the same time, Pinguy OS is smart about efficiency in other areas. For example, under Gaming, instead of littering the section with a dozen casual games, Pinguy OS has just one entry: PlayOnLinux, which lets you play a lot of PC gaming titles on your Linux machine.


In Sum
What's somewhat amusing about Pinguy OS is that it has a certain Mac feel to it. It's pretty, it makes heavy use of docks, and everything is all set up for you ahead of time. Further, for the most part, it removes the need for people to choose which programs to use.

This is a terribly un-Linux-like approach and no doubt drives some purists crazy, but that's OK because, of course, Pinguy isn't for the purists. It's designed to reach out a bit further than even Ubuntu does, to the less-experienced user looking for a bona fide OS option beyond Mac or Windows machines. In fact, Pinguy OS is a rather logical evolution for many people looking to get on the open-source bandwagon (or people trying to convince others to do so).

Hopefully, Norman will continue to work on Pinguy OS and the open-source community will continue to support it. It's well worth the effort.

Source of Information: Computer Power User (CPU), January 2011

Ubuntu Subprojects, Derivatives, and Spin-offs

Finally, no introduction to Ubuntu is complete without an introduction to a growing list of Ubuntu subprojects and derivatives. While Ubuntu was derived from Debian, the project has also developed a number of derivatives of its own.

First and foremost among these is Kubuntu, a version of Ubuntu that uses KDE instead of GNOME as the default desktop environment. However, it is important to realize that the relationship between Kubuntu and Ubuntu is different from the relationship between Ubuntu and Debian. From a technical perspective, Kubuntu is fully within the Ubuntu distribution. Organizationally, the Kubuntu team works fully within Ubuntu as well.

A similar organization exists with the Edubuntu project, which aims to help develop Ubuntu so that a configuration of the distribution can be easily and effectively put into use in schools. Although the project has undergone a few changes in recent years, it remains focused on both educational and school-related software and on a Linux Terminal Server Project (LTSP) setup that allows schools to run many students' computers using one or more powerful servers and many "dumb" terminals that connect to the server and run software off it. This relatively simple technical trick translates into huge cost savings in educational settings.

The Xubuntu project is based on the lightweight Xfce desktop environment. Xubuntu is designed to be appropriate for older or less powerful computers with less memory or slower processors, or just for people who prefer a more responsive environment and a slimmer set of features. While it started as an unofficial project, Xubuntu has enjoyed great popularity and has become integrated as an official part of the core distribution.

The Netbook Edition, until recently known as the Netbook Remix, is a variant of the Ubuntu Desktop Edition, with GNOME, but customized for the smaller screens of netbooks. Canonical employees have also been working on providing Intel's Moblin UI for Ubuntu.

Other derivatives exist as well, such as Ubuntu Studio and Mythbuntu. A list of officially supported and recognized derivatives is available at www.ubuntu.com/products/whatisubuntu/derivatives.

In a way, it is through these derivatives that the work and goals of the Ubuntu project come together and are crystallized. It is only through the free and open source software movements' commitment to freely accessible source code that Ubuntu could be built at all. Similarly, it is only through Ubuntu's continued commitment to these ideals that derivatives can spring from Ubuntu. As a derivative with a view of distributions within an ecosystem, Ubuntu does not see the process of derivation as an insult or criticism. Far from it; Ubuntu thinks derivation is the highest form of compliment.

Outside of Ubuntu, Canonical's work is largely based around software projects such as Launchpad and Bazaar that are designed to facilitate precisely this sort of derivative process. This process, when practiced right, is one that describes an ecosystem of development in which everyone benefits: the derivative, Ubuntu, and Ubuntu's upstreams. Only through this derivative process does everyone get what they want.

Derivation, done correctly, allows groups to diverge where necessary while working together where possible. Ultimately, it leads to more work done, more happy users, and more overall collaboration. Through this enhanced collaboration, Ubuntu's philosophical and technical goals will be achieved. Through this profound community involvement, Bug #1 will be closed. Through this type of meaningful cooperation, internal and external to the project itself, the incredible growth of Ubuntu in its first four years will be sustained into the next four and the next forty.

Source of Information: The Official Ubuntu Book, 5th Edition, Prentice Hall, 2010

The History of Computing V - Linux

Some thought that the problem with Windows wasn't that it was nearly universal (there are advantages to everyone, as far as that's concerned) but that it was proprietary. That is, it was owned or controlled by one company, which means that no one outside of that company knows exactly how the product works. In computer science, the opposite of proprietary is open-source. That is, no one company owns or controls the product, and anyone can look at how the product works and even make suggestions on how to improve it.

In 1991, Linus Torvalds, a student in Finland, made an innocent post to an e-mail group. He wrote that he was working on a free version of UNIX, an operating system that had been in use for twenty years. The Torvalds version, which came to be called Linux (Linus UNIX), was released under an agreement called the General Public License. In essence, anyone could use Linux in any way and modify it in any way, as long as any changes made were not hidden or charged for. This ensured that Linux would remain "open."

"Windows versus Linux" has become the main battleground in the larger war of "proprietary versus open-source." Adherents on both sides believe their approach will lead to more effective software.

Source of Information: Computer Science Made Simple, Broadway, 2010

The History of Computing IV - Microsoft Windows

Those earning Microsoft certifications may be interested in knowing more about the history of the company. When IBM was readying its PC for launch, it needed an operating system, which is the core program that allows a computer to function. This is the program that starts to run when you turn on a computer, before you touch the keyboard or the mouse. Microsoft purchased an operating system from another company and adapted it for use on the IBM PC, calling its software MS-DOS, for Microsoft Disk Operating System.

Microsoft's initial fortunes were made with MS-DOS, but once Microsoft chairman Bill Gates saw the Apple Macintosh, he knew the days of MS-DOS were numbered. In 1985, Microsoft released its own GUI, a program that would run on top of MS-DOS, and called it "Windows."

Few people remember the original Windows today, or its sequel, Windows 2.0. They were crude interfaces by today's standards. They are most famous for starting a long legal battle with Apple, which claimed that the "look and feel" of Windows legally infringed on the designs of Apple computers. Microsoft, in turn, claimed that Apple had stolen the important ideas from Xerox anyway. Microsoft eventually won.

In 1990, Windows 3.0 was released, the first version that was truly popular with users and developers. In 1995, Windows 95 was released, which, unlike the previous versions, was an entire operating system and interface in one, and did not require that MS-DOS be installed beforehand on the system. Soon Windows 98, Windows NT, Windows 2000, and Windows XP were developed. These products dominate the industry, and the vast majority of computers run some form of Windows.

As Microsoft became ubiquitous in the computing world, a backlash developed. Critics claimed the company's software presence on almost every computer in the world gave it an unfair competitive advantage. The company became entangled in antitrust lawsuits from rival companies and governments around the globe.

Source of Information: Computer Science Made Simple, Broadway, 2010

The History of Computing III - Apple Macintosh

Although a great improvement over punch cards, some computer scientists saw limitations in a computer with only a keyboard for input and text for output. It was fine for researchers and computer experts to interact with the machine through obscure commands and oblique text messages, but if the computer was going into every home, it needed to interact with users in a different way.

In the early 1970s, researchers at Xerox developed a series of computers that communicated with the user through pictures, not just words. The culmination of their early efforts was the Xerox Star. It had "windows," a "mouse," and many other elements you would recognize today. Eventually this method of computer use, mostly visual with little text, would be called a graphical user interface (or GUI), and every computer would have one. Unfortunately for the executives at Xerox, they proved better at funding interesting projects than at marketing the results.

Steve Jobs, the president of Apple Computer, toured the Xerox research facility in 1979, having traded some Apple stock for Xerox stock. He'd been told about this new interface and wanted to see it. He left impressed, and decided that Apple's new computer, the "Apple Lisa," would be the first mass-produced computer with a graphical user interface. Many of the Xerox researchers would soon be working at Apple.

Not many Apple Lisas were sold. It was an expensive computer, costing $10,000 when it debuted in 1983. But because Jobs was convinced that the GUI was the model for the future, he tried again.

During the Super Bowl in 1984, Apple ran one of the most famous commercials in history to introduce their next computer, the Apple Macintosh. Directed by Ridley Scott, director of the movie Blade Runner, it depicted an Orwellian future of gray-clad workers who mindlessly pay homage to a "Big Brother" figure on a huge video screen, until an athletic woman in running clothes smashes the screen with a flying hammer. What this had to do with the Macintosh was never clear, but the commercial was widely discussed around office water coolers and was repeated on news programs. Soon everyone had heard of the "Mac." The new computer was cheaper than the Lisa, but less powerful. As with the Lisa, it was a slow seller at first, but the company stuck with it. There was no turning back to text-based computers.

Source of Information: Computer Science Made Simple, Broadway, 2010

The History of Computing II - The IBM PC

For decades after World War II, computers were shared. Though they had grown smaller since the days of ENIAC, they were still very large and expensive. As a consequence, entire universities or companies, or even groups of schools and companies, would share the use of a single, large, and, for the time, powerful computer.

The people who used this mainframe, as these computers were called, would often never see it. The computer was generally locked in a secure location, and users would connect to it through phone lines or other wiring. At first, all the user saw was a teletype, which is like a combination of an electric typewriter and a printer. But later, video screens were introduced, which showed green or orange text on a black background.

By the late 1970s, computer scientists realized that smaller, less powerful computers could be placed right on users' desks. "Garage" companies, so named because they were so small they might literally consist of one or two people working out of a garage, began making small computers for individuals, not whole companies. Sometimes these computers came in the form of kits.

IBM, the company that grew out of Hollerith's census tabulators, was at this time very successful making mainframes. At first, IBM was skeptical that a market even existed for smaller computers, but eventually it decided to act. In 1981 the company introduced the IBM PC. The "PC" stood for "personal computer," which is where that term for a single-user computer originates.

The price was over $1,500, which at the time was a lot of money, but it was still remarkably inexpensive compared to mainframes. In addition, the IBM name gave the personal computer instant legitimacy. Yet, the IBM PC did not mark a revolution in technology. The machine had been built almost entirely from off-the-shelf parts. What was revolutionary was the concept: A computer would appear on our desks at work, would come into our homes, would become an ordinary appliance like the telephone. At the start of 1983, when Time magazine would normally select its "Man of the Year" for 1982, the editors instead selected "The Computer" as its "Machine of the Year." The computer had come of age.

As you will find, today's computers are not simply a little faster than the original desktops. A gymnasium full of IBM PCs would not equal the power of a single system today. You may not know what all these terms mean, but you will before you finish this book. For now, just marvel at how little $1,500 once bought (or, to be optimistic, how much $1,500 buys now).

Source of Information: Computer Science Made Simple, Broadway, 2010

History of Computing I

The Abacus
The first efforts toward mechanical assistance aided in counting, not computation. An abacus is a mechanical device with beads sliding on rods, which is used as a counting device. It dates to at least the Roman Empire, and its ancestor, the counting board, was in use as far back as 500 B.C. The abacus is considered a counting device because all the computation is still done by the person using the device. The abacus did show, however, that a machine could be used to store numbers.


Jacquard's Mechanical Loom
A loom is a machine for weaving a pattern into fabric. Early loom designs were operated by hand. For each "line" in the design, certain threads were "pulled" by an experienced weaver (or a poor worker under the direction of a weaver) to get the finished pattern. As you might guess, this process was slow and tedious.

In 1801, Frenchman Joseph-Marie Jacquard invented a mechanical loom in which the threads to be pulled at each stage were dictated by a pattern stored in a series of punch cards. A punch card encodes data with holes in specific locations. In the case of weaving, every thread that could be pulled had a location on the card. If there was a hole in that location, the thread was pulled. Jacquard's loom used a series of these punch cards on a belt. The loom would weave the line dictated by the current card, then automatically advance to the next card.
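
To make the encoding idea concrete, here is a minimal sketch in Python; the card layouts and thread names are invented for illustration and are not part of the historical record.

# A toy model of Jacquard's idea: each "card" is a row of hole positions,
# and a hole (1) means the thread at that position is pulled for that line.
cards = [
    [1, 0, 0, 1],  # line 1 of the pattern
    [0, 1, 1, 0],  # line 2
    [1, 1, 0, 0],  # line 3
]
threads = ["red", "blue", "green", "gold"]  # one thread per hole position

for line_number, card in enumerate(cards, start=1):
    pulled = [thread for thread, hole in zip(threads, card) if hole]
    print(f"line {line_number}: pull {', '.join(pulled)}")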

Jacquard's loom is not necessarily a computer, because it does no mathematical calculations, but it introduced the important idea of a programmable machine. The loom is a "universal weaver" that processes different sets of punch cards to make different woven fabrics.



Babbage's Counting Machine
Eventually someone put together the ideas of a machine that could count and a machine that could execute a program, and wondered if a machine could be made to compute numbers. Charles Babbage, an English researcher, spent much of the 1800s trying to develop just such a machine.

One of Babbage's early designs was for a device he called the "Difference Engine," which produced successive terms in a mathematical series while an operator turned a crank. This may not seem a dramatic development, but at the time, mathematicians relied on tables of mathematical functions in which each value had been painstakingly calculated by hand. Thus, the Difference Engine was revolutionary.

Its success led Babbage to a more ambitious design: the Analytical Engine. Rather than being tied to a specific task like the Difference Engine, the Analytical Engine was conceived as a general-purpose computing device. Different programs would be fed to the machine using a belt of punch cards, just as in Jacquard's loom.

The Analytical Engine was never built, because Babbage ran out of money. Like many researchers today, he was dependent on government grants to continue his work. In addition, his design may not have been possible to implement using the technology of the day. He was undoubtedly a man ahead of his time.


Hollerith's Punch Cards
Under its Constitution, the U.S. government is required every ten years to count how many people reside in each state, a process known as the census. These numbers are used to determine the proportion of representatives each state receives in the House of Representatives.

Originally, this process was done entirely by hand. Census takers would fill out forms for each household, and then the results of these forms would be tabulated by state. This method was so onerous that the 1880 census took more than ten years to complete, which meant that the next census was starting before the results of the previous one were known. Clearly, something had to be done.

The government created a contest to find the best solution to the problem. In 1890 it was won by a census agent named Herman Hollerith. In his design, each census form was encoded into a punch card. Machines called "tabulators" could rapidly process stacks of these cards.

This method was not only dramatically faster than manual tabulation, but also allowed the government to track demographics as never before, ask more questions of each citizen, and break up data along multiple categories. For example, rather than counting men who were above a certain age or were veterans, the tabulators could count the men who were in both categories, which allowed the government to better anticipate the funds that would be needed for veterans' pensions.

The system was a resounding success and led to Hollerith's founding of the Tabulating Machine Company, which, several mergers later, became International Business Machines, or IBM, a company that would later dominate the world of computers for decades.


ABC
In the period from 1939 to 1942, John Atanasoff, a professor at Iowa State University, and Clifford Berry, a graduate student at the same school, created what is now considered the first modern computer. Their machine, which they called the Atanasoff-Berry Computer, or ABC, weighed about 700 pounds and had to be housed in the basement of the physics department. By current standards, it was a terribly slow machine, reportedly capable of only a single calculation every fifteen seconds. In contrast, a computer today can perform billions of calculations a second.

Atanasoff and Berry never completed the patent process on their work, and the machine itself was dismantled a few years after it was built, when the physics department needed its basement space back. This was unfortunate, as their pioneering work in the field was underappreciated. Credit for the first modern computer was instead bestowed on a more famous project: ENIAC.


ENIAC
Like Herman Hollerith's punch cards, the ENIAC story is driven by governmental need. When World War II began, the United States was woefully underprepared for military operations. The army needed to develop and test a large number of weapons in a short period of time. In particular, it had to perform a number of ballistics tests to create artillery tables: in essence, a book showing how far an artillery shell would fly from a specific gun, given wind conditions, the angle of the gun barrel, and so on.

Like the mathematical tables of Babbage's time, these artillery tables had been created by hand, but by now the army already had some devices for assisting in calculation. Called differential analyzers, they operated on mechanical principles (much like Babbage's machines), not on electronics. But something better was needed, so the army hired John Mauchly and J. Presper Eckert, computer scientists at the University of Pennsylvania. The machine they proposed, completed in 1946, was called ENIAC, which stands for Electronic Numerical Integrator and Computer. Like the ABC, it was truly a modern computer.

The term "modern" might seem too strong if you actually saw this machine. Computers of that era relied on the vacuum tube, a device resembling a lightbulb in which one electrical current can control another. This controlling aspect was used to build logical circuits, because by itself one vacuum tube doesn't do much. Indeed, ENIAC required about 19,000 vacuum tubes to do its work, filled an entire room, weighed thirty tons, and drew about 200 kilowatts (that is, 200,000 watts) of power. In comparison, a desktop computer purchased today would draw about 400 watts of power, which means ENIAC drew about 500 times more power, even though its actual ability to compute is dwarfed by the most inexpensive desktop computers of today.

What makes ENIAC so important is its reliance on electronics to solve a real-world problem. There were as few mechanical parts as possible, although some mechanics were inevitable. For example, ENIAC still used punch cards for input and output, and the parts that read and produced these cards were mechanical. The vacuum tubes were built into minicircuits that performed elementary logical functions and were built into larger circuits. Those circuits were built into even larger circuits, a design idea that is still used today.

For decades the ENIAC was considered the first computer, but in the 1970s the judge in a patent infringement case determined that ENIAC was based on the designs of the ABC. Other claims were also made, including those of Konrad Zuse, whose work in wartime Germany wasn't known to the rest of the world for decades; and the Mark I, a computer developed around the same time at Harvard. The question of what was the first modern computer may never be settled.


Knuth's Research
To this point computers were seen as increasingly useful tools, but computer science was not considered a serious discipline, separate from mathematics. One of the leading figures who changed this was Donald Knuth.

As an undergraduate studying physics and mathematics at the Case Institute of Technology in Cleveland in the 1950s, Knuth had his first contact with the school's IBM computer. From then on, computers and programs were his obsession. He wrote programs for the IBM computer to analyze the college's basketball statistics, and published research papers while still an undergraduate. When he completed the work for his bachelor's degree, the college was so impressed with his computer work that he was awarded a master's at the same time. His most famous accomplishment is The Art of Computer Programming, a proposed masterwork of seven volumes, of which three are completed. It's no exaggeration to say that Donald Knuth's writings are to computer science what those of Albert Einstein are to physics.

Source of Information: Computer Science Made Simple, Broadway, 2010

SUBJECT AREAS IN COMPUTER SCIENCE

Within the computer science field, computer scientists can work in many areas. Depending on the profession, some computer scientists may need to know a little about each area, while others may need deep knowledge of one or two areas.


Artificial Intelligence
Artificial intelligence can be described as programming computers to perform tasks that would require intelligence if humans were performing them. This is not the only definition, though, and of all the areas in computer science, this one has perhaps the most contentious boundaries. Some researchers believe artificial intelligence must mimic the processes of the human brain; others are interested only in solving problems that seem to require intelligence, like understanding a request written in English.


Theory of Computation
The theory of computation puts limits on what can be computed. Some limits are practical. It may be shown, for instance, that a computer could solve a certain problem, but it would take hundreds of years to get the result. Other limits are absolute. Strange as it may seem, some questions have a fixed, numerical answer that cannot be computed. Scientists in this area also compare programming solutions to specific tasks in a formal way. For example, a common computing task is sorting, which just means to put items in some order (like alphabetizing a list of student records by last name). Countless ways can be used to approach the sorting problem, each with advantages and disadvantages. Computational theory is used to determine which situations are most suited to a particular approach.
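
To illustrate the kind of comparison this area makes, here is a minimal sketch in Python; the comparison-counting harness and the choice of inputs are illustrative, not a formal analysis.

def insertion_sort_comparisons(items):
    """Sort a copy of items with insertion sort and count element comparisons."""
    data = list(items)
    comparisons = 0
    for i in range(1, len(data)):
        j = i
        while j > 0:
            comparisons += 1
            if data[j - 1] > data[j]:
                data[j - 1], data[j] = data[j], data[j - 1]
                j -= 1
            else:
                break
    return comparisons

# The same algorithm can be cheap or expensive depending on the situation:
already_sorted = list(range(1000))
reversed_order = list(range(1000, 0, -1))
print(insertion_sort_comparisons(already_sorted))   # about 1,000 comparisons
print(insertion_sort_comparisons(reversed_order))   # about 500,000 comparisons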


Human-Computer Interaction
The computer scientist working in human-computer interaction investigates how people use computers now and how people and computers can work together better in the future. This research is similar to graphic design. A graphic designer is a specialist who knows how the colors, fonts, arrangement of text, pictures, and other elements make a book, magazine, or advertisement easier for the viewer to understand. Now that computer interfaces are increasingly graphical, the same kinds of ideas are used, except that a computer is interactive. For example, many programs now have a "toolbar," which is a row of pictures that allow the user to select commonly used operations without navigating the entire menu of options. This kind of design innovation is a result of study in human-computer interaction.


Information Management
A database in general usage is any organized collection of data. In computer science, a database specifically means a collection of data that is stored in a computer-readable form. Examples include an online book catalog at the library or the account information for each person who has a VISA card. The information management area is concerned with how databases are created, stored, accessed, shared, updated, and secured.


Computer Graphics
Computer graphics is the generation of images through computers. It includes simple text displays as well as images that appear to be in three dimensions. An important part of computer graphics is computer visualization, which attempts to pictorially display data in a way that is most understandable for the user. For instance, a visualization can allow surgeons to preview a surgical procedure before actually performing it. Other forms of visualization involve data that have no natural pictorial form. As a result, these must be displayed in a form that tells a story or makes a point. If you've seen a graph or chart generated with a computer program that seemed to have no clear meaning, you know why this area is important. Like many areas in computer science, computer visualization is as much human psychology as machine capability. The computer visualization expert asks, "How do we as human beings process visuals?"

As computer graphics become more advanced, they culminate in virtual reality, in which graphics and sensory feedback are all-encompassing. This can be seen, for example, in a room in which every surface is covered with synchronized computer displays. It's important to note that virtual reality does not promise an experience indistinguishable from the "real world," although that may be a goal for some researchers. Rather, it is an experience in which the outside world is temporarily blocked from our senses.


Software Engineering
As previously discussed, a software engineer is involved with the entire process of a program�s development, not just in programming. Software engineering is concerned with improving the process of making software. This means creating new processes for software engineers to follow, new techniques for project management, new methods to test software to ensure its quality, and new metrics for measuring how effective any of the other new ideas have been.


MYTHS OF COMPUTER SCIENCE


Computer Science Is All about Math
The kind and degree of math involved with computer science depends on what area one works in. Most programming involves math no more advanced than high school algebra, but some specialties require more. Someone who writes a mortgage interest calculator would need to understand financial calculations. Someone who writes a program to plot the trajectory of a satellite through space needs to understand trigonometry and calculus. Most programs, though, are built upon basic operations like addition and multiplication.
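
As a concrete example of the financial calculation mentioned above, here is a minimal sketch in Python of the standard fixed-rate amortization formula; the loan figures are made up for illustration.

def monthly_payment(principal, annual_rate, years):
    """Fixed-rate loan payment: P * r * (1 + r)^n / ((1 + r)^n - 1)."""
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # total number of monthly payments
    growth = (1 + r) ** n
    return principal * r * growth / (growth - 1)

# A $200,000 loan at 5% over 30 years works out to roughly $1,074 per month.
print(round(monthly_payment(200_000, 0.05, 30), 2))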


Men Are Better Suited to Computer Science than Women
Judging by the number of men and women working in the field, one could say that men as a group are more interested in computer science than women. But there's nothing to suggest that men are better at it. Women may have avoided computer science because of an aversion to math (which is probably caused by another myth) and because of media portrayals of computer scientists as socially awkward, pasty-faced "geeks." Computer science is a field that rewards excellence, regardless of gender or ethnicity, and all those interested should apply.


Computer Science Is for Geniuses
Genius never hurt anyone in the sciences, but having a high IQ and a knack for programming and other computer science concepts are two different things. While the people at the top of any profession usually have extraordinary abilities (that's why they're at the top), plenty of "ordinary" people have excelled in this field.


Computer Security
Much sensitive data is stored on computers, including tax records, credit card bills, bank accounts, and medical histories. And with computers increasingly interconnected, it's become easier for data to be stolen. The old adage, "A chain is only as strong as its weakest link," shows its truth here in the information age, where every computer is a link to another. So it's no surprise that computer security is a rapidly growing field. Computer security involves finding ways to protect data from unauthorized access. This includes installing software to limit intrusions to a network, instructing employees on safe habits, and analyzing the aftermath of a system break-in to learn how to prevent a recurrence. A related field is computer forensics, though in fact this is almost the reverse of computer security since it involves breaking through security to retrieve partially deleted files. The purpose of this "break-in" is to obtain and analyze evidence to be used in a court trial.

Source of Information: Computer Science Made Simple, Broadway, 2010

What is COMPUTING ?

Computers have become a ubiquitous feature of modern life. It would be difficult to get through a day without some activity involving a computer, be it composing e-mail on a computer sitting on a desk, using a computer hidden inside a cell phone, or receiving a bill generated by a computer at the power company. Computer science allows all these activities to happen.

But what is computer science? It sounds simple enough: computer science is a branch of science that studies computers. But not everyone who works with computers is a computer scientist. The use and development of computers comprises a number of overlapping disciplines.

Before these disciplines are discussed, you need to understand a few terms.

A program is a series of steps to accomplish a given task. In general usage, a program might refer to everyday instructions, written in English, such as instructions to change a tire or register for a college class. In computer science, however, the term "program" refers to a series of steps given to a computer.

A computer is an electronic device for performing logical and mathematical operations based on its programs. The term includes not only the obvious electronic devices that have a screen, keyboard, printer, and so on, but also computers that are embedded into devices like those at supermarket checkout counters or in DVD players. What makes computers interesting and powerful is that they can be given arbitrary sets of instructions to perform.

Hardware refers to all the physical devices that make up a computer system, both those inside the computer "case" and those outside the case, like monitor, keyboard, and mouse.

Software refers to the programs the computer executes. For example, the word processor Microsoft Word, or the computer game "Half-Life," is software, as is a program that enables a cell phone display so the user can select a new ring-tone. By analogy, when you play a movie in a DVD player, the movie is the software and the player is the hardware.

A programmer is someone who creates programs.

User refers to a person who uses a software program or computer.
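
To tie these definitions together, here is a minimal sketch of a program in Python; the grade numbers are invented, and the point is simply that a program is a series of steps the computer follows exactly.

# A tiny program: a series of steps given to the computer.
grades = [88, 92, 79, 85]            # step 1: start with some data
total = sum(grades)                  # step 2: add the grades together
average = total / len(grades)        # step 3: divide by how many there are
print(f"Average grade: {average}")   # step 4: report the result to the user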

Source of Information: Computer Science Made Simple, Broadway, 2010

iPhone E-mail

Unlike many other smartphones, your iPhone can send, receive, and browse e-mail without getting weighed down with compromise. The iPhone doesn't settle for cramped, odd presentations. Your e-mail looks the way it should: the way it would if you were reading it on your home computer. That's because the iPhone provides an HTML-compatible rich-text client. Mail looks better because the client is better. It's made to work right.

iPhone mail works with most industry-standard e-mail systems. With it, you can send and receive photos; view Excel spreadsheets, PDF files, and Word documents; manage your accounts; and more.


Compatibility
iPhone Mail is surprisingly compatible. It works with virtually all major e-mail providers, including Gmail, AOL, Yahoo!, and Comcast. For businesses, iPhone plays relatively well with Microsoft Exchange. This high level of provider support is because of the iPhone's support of industry-standard protocols. The iPhone understands the most popular e-mail standards, namely, POP, IMAP, SMTP, and Exchange. If you're not already familiar with these standards, here's a brief overview.


POP
POP (aka POP3) stands for Post Office Protocol. It's probably the most common e-mail retrieval protocol in use today. It allows mail clients to connect to a server such as Gmail or AOL, retrieve messages, and disconnect afterward. This usually happens on a set schedule, such as every ten minutes or every hour; you do not receive mail until your client connects to the server and requests that new mail. POP works by checking in with a server, downloading your e-mail, and optionally leaving the original copies of your e-mail on the server. This leave-on-server option works well with the iPhone, because when you're on the go, you probably want to check your mail on the iPhone and retrieve it again later when you get back to the office or return home. POP also has its downsides. Unlike the newer and improved IMAP protocol, POP downloads entire messages all at once, so it's a bit of a space hog on portable devices. The 3 in POP3 indicates the third version of the protocol standard; POP1 and POP2 are obsolete.
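
To see what a POP conversation looks like from a client's point of view, here is a minimal sketch using Python's standard poplib module; the server name and credentials are placeholders, not real accounts.

import poplib

# Connect to a POP3 server over SSL (hostname and credentials are made up).
server = poplib.POP3_SSL("mail.example.com")
server.user("alice@example.com")
server.pass_("app-password")

message_count, mailbox_size = server.stat()
print(f"{message_count} messages, {mailbox_size} bytes waiting on the server")

# Download every message in full; because dele() is never called,
# the original copies stay on the server (the leave-on-server behavior).
for i in range(1, message_count + 1):
    response, lines, octets = server.retr(i)
    message_text = b"\r\n".join(lines).decode("utf-8", errors="replace")
    print(message_text.splitlines()[0])  # show the first header line

server.quit()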


SMTP
Mail clients use one protocol for receiving mail and another for sending mail. Your iPhone uses Simple Mail Transfer Protocol (SMTP) to send outgoing messages. SMTP contacts a mail server and transmits messages you've written along with any attachments including text, photos, and so forth. A common kind of SMTP, called SMTP-AUTH (AUTH stands for authorization), allows you to send secure, authorized mail. You provide your account name and a password. Your mail client authenticates itself to the server, and your e-mail is sent on its way.
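
Here is a minimal sketch of that authenticate-then-send flow using Python's standard smtplib and email modules; the addresses, server name, and password are placeholders.

import smtplib
from email.message import EmailMessage

# Build a simple message.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("Sent over an authenticated SMTP connection.")

# Connect over SSL, authenticate (SMTP-AUTH), and hand the message to the server.
with smtplib.SMTP_SSL("smtp.example.com", 465) as server:
    server.login("alice@example.com", "app-password")
    server.send_message(msg)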

The iPhone makes sending authenticated e-mail easy. Enter your account name and password into the Mail settings pane. Once you've done this, just use outgoing mail to send a note, share a web page's URL, or pass along a photo that you've just snapped with the iPhone's built-in camera. The iPhone takes care of all the protocol issues. You decide what to send and to whom to send it.


IMAP
IMAP stands for Internet Message Access Protocol. Like POP3, it allows you to receive e-mail on the iPhone. It's a newer and more flexible protocol. As the name suggests, IMAP was built around the Internet. It introduces advanced ways to connect to the mail server and use the limited bandwidth of mobile connections in the most efficient way. The key to understanding IMAP is to recognize that messages are meant to live on the server rather than go through a retrieve-and-delete cycle. You manage your mail on the IMAP server. You read your mail on a client, like the iPhone.

When you download mail with POP, you download entire messages. When you download mail with IMAP, you download headers instead, at least initially. Headers are the bit that tells you who the mail is from and what it's about. You don't download the main body of the message until you explicitly request it from the server. Since the header occupies only a fraction of the space of the message, you can download IMAP data a lot faster than you download POP. The rest of the message stays on the server until you're ready to read it.
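
Here is a minimal sketch of that headers-first approach using Python's standard imaplib module; the server name and credentials are placeholders.

import imaplib

# Connect to an IMAP server over SSL (hostname and credentials are made up).
server = imaplib.IMAP4_SSL("imap.example.com")
server.login("alice@example.com", "app-password")
server.select("INBOX", readonly=True)

# Fetch only the envelope headers; the message bodies (and any attachments)
# stay on the server until they are explicitly requested.
status, data = server.search(None, "ALL")
for num in data[0].split():
    status, header_data = server.fetch(
        num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT DATE)])"
    )
    print(header_data[0][1].decode("utf-8", errors="replace").strip())

server.logout()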

The same thing goes for attachments. Say that someone sends you a 10MB video. It doesn't get downloaded to your iPhone. It stays on the server until you're ready to watch it on your home computer. If you'd downloaded the message with POP, the entire video would have transferred with the message. With IMAP, you can read the message that came along with the video without having to download the video file itself until you're ready to watch it. The video attachment waits for you on the mail server.

IMAP also offers a feature that's called push e-mail. Geeks will tell you that technically speaking IMAP is not exactly the same thing as push e-mail. True push e-mail reaches out and tells your e-mail client whenever new mail arrives in the system. Instead, your iPhone IMAP client connects to and gently tickles the server until new mail arrives. This kind of always-on connection allows the iPhone to receive mail almost as soon as it arrives on the server. In practice, there's better intention there with push-style mail than actual results. Yahoo! and Gmail offer free IMAP accounts for iPhone users. To sign up for an account, point your browser to http://mail.yahoo.com or http://gmail.com.


Microsoft Exchange
Microsoft Exchange provides e-mail along with other enterprise-level services intended to support Outlook on the Web, personal computers, and mobile devices. Past versions of the iPhone did not support Exchange without, well, jumping through hoops. Exchange Server administrators had to open all sorts of security holes to get it to work, and they usually weren't too happy about doing that. Fortunately, iOS 4 provides much better compatibility with Microsoft Exchange, to the point that you can now configure multiple Exchange ActiveSync accounts on your iPhone for business use.

Exchange is more than just e-mail, though; it's also about sharing calendars and contacts. Since iOS 4, iPhones can receive push e-mail from an Exchange Server, access a company-wide global address list, accept or create calendar invitations, and even search e-mails that are stored on the server.

If you're using your iPhone in a corporate setting that uses Microsoft Exchange Server, it's best to work with your IT department to ensure that your device is connected to the server in the most secure way possible. If they're not familiar with how the iPhone works with Exchange, Apple has provided a white paper on Exchange deployment that is free to download: http://images.apple.com/iphone/business/docs/iPhone_EAS.pdf.

Source of Information: Taking Your iPhone 4 to the Max

Three Ways iPhone Messaging Is Better Than What's Available on Other Phones

When it comes to messaging, the iPhone offers several advantages over the rest of the field. First, there's the keyboard. The iPhone keyboard is far easier to use than any other cell phone keyboard. It makes it simple to type and edit messages.

For years, teenagers have been wearing their fingers to a nub texting each other. Texting on standard cell phones is hard work. Because of this, an entire language of texting shorthand has sprung up, from "lol" (laughing out loud) to "pos" (parents are watching over my shoulder) to "kthxbai" (OK, thank you, good bye). The idea is to keep everything short and easy to type. Unfortunately, clarity is sacrificed to ease of typing. The iPhone brings simple typing back and allows you to properly say "thank you," instead of "ty" or "thx."

Second, the iPhone remembers conversations. You can easily see entire text conversations on a single screen. The iPhone organizes messages by sender, not by date or time. This means that messages from one person, even messages that are separated by hours or days, are grouped together into one easy-to-read presentation.

Third, the iPhone makes it easy to do all the stuff connected with messaging that should be connected with messaging, such as calling the person back directly or updating your calendar when someone mentions meeting you for dinner. Jump out of the text program (tap Home), pop over to your calendar, and then return exactly where you left off.

NOTE: If you're new to texting and your text partner is used to cell phone keys, you might want to acquaint yourself with the keypress-saving SMS shorthand. Those abbreviations are more likely to come up with experienced texters. Search the Internet for text message abbreviations for assistance.

Source of Information :  Taking Your iPhone 4 to the Max

A Variety of Chats with iPhone

Your iPhone is not only a powerful phone but also a messaging heavyweight. It supports text and multimedia messaging out of the box, can do AIM-style instant messaging and Internet chats with third-party apps, and can even be used for voice and video chats. Social networking services have expanded exponentially over the past few years, and the iPhone has been at the forefront of this explosion.

Messaging allows you to communicate instantly with colleagues and friends. It brings people closer together without relying on the immediacy of a phone call. Instant messaging is asynchronous; you send a message, and the recipient can respond to it (or not) at their leisure. That's unlike a phone conversation, which is synchronous: you and the recipient have an immediate back-and-forth interchange of ideas.

Several big players dominate the messaging world: e-mail, SMS, MMS, IM, IRC, Facebook, and Twitter. If these names already sound familiar to you, feel free to skip ahead. If not, don't let the acronyms dampen your enthusiasm. You are probably already familiar with these technologies. Here's a quick rundown:

• SMS: SMS stands for Short Message Service. It's the feature that most people call text messaging. With it, you can send short messages from one phone to another. It lets you carry on typed conversations without placing a voice-based call. The iPhone fully supports SMS messaging, and it's the most common way to conduct iPhone chats. SMS operates on the same network as your voice phone calls, so it often works when your data connection is down. SMS messages can arrive any time your phone is turned on, and they don't require a special app to be running on your iPhone.

• MMS: MMS is the multimedia version of SMS; it stands for Multimedia Messaging Service. MMS messages can include audio, video, and images, and text can be formatted to include multiple fonts as well as italics and bold. If a picture can tell a thousand words, sending an MMS message with a photo might keep you from doing a lot of unnecessary text messaging. Like SMS, MMS is handled by your carrier rather than by a separate app, although the media itself travels over the carrier's data channel rather than the voice network.

• IM: IM means instant messaging. It's a way to communicate in real time by typing text. If you use AIM, Yahoo! Messenger, or iChat, you're using IM. There are many instant messaging apps for the iPhone, so you'll be able to keep in touch with your friends and co-workers easily.

• IM is different from text and multimedia messaging in five key ways. First, it requires a connection to a data network. Second, the service itself is free; you pay only for the data plan on your iPhone. Third, it can allow typed conversations with more than one person at a time. Fourth, you need to sign into an IM server. The server acts as the gateway for messages to pass between you and others. Finally, it requires that your IM app be up and running for the duration of the chat. If you've quit the app, you're not going to receive messages.

• IRC: Internet Relay Chat is a relatively ancient (1988) messaging technology that's still going strong. You can find IRC clients for nearly every operating system on Earth. Third-party developers have created IRC apps for iPhone, so you can continue using IRC into the future. Like IM apps, IRC apps must be constantly connected to a server in order to send and receive messages, and you must sign in to the server before conducting a typed conversation (a small sketch of that sign-in appears after this list).

• SMS, MMS, IM, and IRC messages also have one thing in common compared to e-mail: the messages are much shorter. Although you might provide a complete business proposal to a client in an e-mail, you'd be much more likely to send a short "I'm on my way home" message to your spouse in an SMS message.

• Facebook: As of July 2010, Facebook had more than 500 million users. It's a way for many people to get together and find friends, old and new. With your iPhone, you never need to be out of touch with Facebook. Facebook requires you to log into the Facebook service and use a special Facebook app to send messages, photos, and videos to your friends.

• Twitter: Twitter answers the question "What are you doing right now?" in 140 characters or fewer and then displays your answer to the world. There are many Twitter apps for iPhone, so you can find one that fits your particular style of tweeting. Tweets are essentially one-way blasts of information to the world, although two-way conversations that are tweeted back and forth are common. Twitter requires a special app and a connection to the Twitter service.
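
For the curious, here is a rough, illustrative sketch of the sign-in and keep-alive exchange an IRC app performs, written in Java against a plain socket. The server, nickname, and channel names are invented, and a real client would add registration handling, error checking, and a user interface.

import java.io.*;
import java.net.Socket;

// Illustrative IRC client skeleton: connect, sign in (NICK/USER), join a
// channel, say hello, and keep the connection alive by answering PINGs.
// The server, nick, and channel below are placeholders.
public class TinyIrcClient {
    private static void send(Writer out, String command) throws IOException {
        out.write(command + "\r\n"); // IRC commands are terminated by CRLF
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("irc.example.net", 6667);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             Writer out = new OutputStreamWriter(socket.getOutputStream())) {

            // Signing in: every IRC session starts by registering a nickname.
            send(out, "NICK iphone_user");
            send(out, "USER iphone_user 0 * :iPhone IRC demo");
            send(out, "JOIN #example");
            send(out, "PRIVMSG #example :Hello from a tiny IRC client");

            String line;
            while ((line = in.readLine()) != null) {
                // The server periodically sends PING; a client that stops
                // answering is disconnected, which is why IRC apps must stay
                // connected to keep receiving messages.
                if (line.startsWith("PING ")) {
                    send(out, "PONG " + line.substring(5));
                }
                System.out.println(line);
            }
        }
    }
}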

Source of Information : Taking Your iPhone 4 to the Max

Analyzing Facebook Ad Performance

Facebook provides an excellent reporting tool to gauge the performance of your ad campaign. Analyzing performance is important so that you can make adjustments, not only after your campaign but also during the campaign. By monitoring your performance in real time, or near real time, you can make immediate changes that can turn the tide of a lagging campaign that's only draining your bank account.

When you head into the analytics area, you're greeted with a dashboard that gives you an at-a-glance view of your campaign. From here you can edit your campaign, its status, the daily budget, or its duration. You can also see a roll-up report of the major key performance indicators of your campaign, including:

• Status
• Bid
• Type (CPC or CPM)
• Clicks
• Impressions
• CTR % (click-through rate)
• Avg. CPC
• Avg. CPM
• Amount Spent

You can see this information at a glance for all campaigns you are currently running or have run in the past.
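
The derived figures in that list follow from simple arithmetic: CTR is clicks divided by impressions (expressed as a percentage), average CPC is the amount spent divided by clicks, and average CPM is the amount spent per thousand impressions. The short sketch below runs the numbers on an invented campaign purely as an illustration.

// Illustration of how the dashboard metrics relate to each other.
// The clicks, impressions, and spend figures are invented for the example.
public class AdMetrics {
    public static void main(String[] args) {
        long clicks = 125;
        long impressions = 48_000;
        double amountSpent = 93.75; // in your account currency

        double ctrPercent = 100.0 * clicks / impressions;    // click-through rate
        double avgCpc = amountSpent / clicks;                 // cost per click
        double avgCpm = amountSpent / (impressions / 1000.0); // cost per 1,000 impressions

        System.out.printf("CTR: %.3f%%%n", ctrPercent);
        System.out.printf("Avg. CPC: %.2f%n", avgCpc);
        System.out.printf("Avg. CPM: %.2f%n", avgCpm);
    }
}

With these invented figures, CTR comes out to about 0.26 percent, average CPC to 0.75, and average CPM to roughly 1.95.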

You can also view a few different graphs for a campaign based on clicks, impressions, or CTR. Both the dashboard and the graphs are a great way to get an immediate overview of how the campaign is going, and they can help raise any red flags that may need deeper attention.


Additionally, Facebook provides three campaign reports:

• Advertising Performance: This is the same data provided on the dashboard, but it allows you to apply a few filters and export the data.

• Responder Demographics: This report provides demographic data, allowing you to see who is actually clicking on your ads. You can use it to adjust your targeting, optimize your ad text, or simply understand who is attracted to your ads. Again, this information can be filtered and exported.

• Responder Profiles: This report, according to Facebook, "provides information about the types of users who see or click on your ads based on interests that they have listed in their personal Facebook profiles."

Through the use of the three reports that Facebook provides and the main dashboard, you can measure and adjust your campaigns on the fly, making better use of the money that you invest and reaching your target audiences more effectively.

Source of Information : Facebook Marketing Designing Your Next Marketing Campaign

Common Reasons for an Ad on Facebook to be Rejected

Before creating a Facebook ad, it is useful to consider the common reasons that your ad could be rejected. Make sure that you don't use any of these tactics, and your ad should pass with flying colors. According to Facebook, the following are the 12 most common reasons for an ad to be rejected:

1. Capitalization of every word: Capitalizing every word can give you an unfair advantage over the ads that are running. Also, it is not proper grammar.

2. Capitalization of entire words: One of the easiest ways to scream "SPAM" to your target audience is to capitalize entire words. We've all seen these messages before. Admit it; you're usually, if not always, turned off by messages such as LEARN HOW TO IMPROVE YOUR SALARY. Even if the message is useful to you, the all-caps treatment makes it appear spammy.

3. Incorrect grammar, spelling, and slang: Do yourself a favor and use proper grammar and spelling. Don't use slang. You and your company will not be viewed as professional if you're not using proper grammar and spelling. Take time to proofread your ad before submitting it to Facebook for review.

4. Inaccurate ad text: Facebook is likely to reject ads that do not clearly state the company name, product, or offer.

5. Deceptive discounts and offers: This should fall under the "don't be dumb" rule, but unfortunately there are people who try to deceive by making one offer that attracts users, and then giving them something totally different when they click it. This will surely get your ad tossed by Facebook, and it will also damage your online reputation.

6. Irrelevant or inappropriate images: Only use images that are relevant to your ad. Also, make sure your images are clean. Similar to deceptive discounts, don't be dumb. Help to ensure Facebook is a safe and fun place for people to hang out.

7. Inappropriate targeting: Why would you want to spend money targeting one group while speaking to another? Target those that you want your ad to reach. Bottom line.

8. Destination: Facebook has developed some particular guidelines when it comes to where you can send your audience. Per Facebook: "All users must be sent to the same landing page when the ad is clicked. The destination may not include fake close behavior, pop-ups. Ads may only be directed to a website or iTunes. When linking to iTunes, the text must explicitly say so. Ads may not be directed to any other download such as PDF, PowerPoint, or Word documents."

9. Sentence structure: Simple enough: Use complete sentences. Sloppy grammar, spelling, and sentence structure will make your ad, and therefore your company, appear unprofessional.

10. Unacceptable language choice: Language can't be degrading, derogatory, inappropriate, sexual, or profane in any nature.

11. Incorrect punctuation: Similar to other areas that we've talked about, this is yet another reason that your ad will make your company appear unprofessional. Ensure that someone proofreads your ads before sending them to Facebook for review.

12. Symbols and numbers in place of words: The sublanguage of writing in 140 characters (for both text messages and now Twitter) has changed how many people word a sentence. Instead of full words such as "for," many now substitute the number 4. This is cause for rejection by Facebook. Don't substitute symbols or numbers for words.

Source of Information :  Facebook Marketing Designing Your Next Marketing Campaign

Architecture-Neutral Distribution Format

Within the framework of the OSF (Open Software Foundation), created at the end of the 1980s by a group of information processing systems manufacturers whose initial goal was to develop a UNIX operating system independent of Sun and AT&T, there was an attempt to promote a distribution technology independent of processor architecture. However, this technology, called ANDF (Architecture-Neutral Distribution Format), was not adopted by industry players. It consisted of distributing applications in a form of intermediate code (compilers produced this intermediate code instead of generating machine code). Installation then required translation of the intermediate code into machine code. One reason for the failure of this approach could be that software vendors using it might have been presented with a support problem, since the approach vastly increases the number of platforms on which the software could be run (to port to a new platform, one needed only a translation program for that platform). The Java approach, which we cover shortly, is different in that it specifies a Java Virtual Machine (JVM) to execute the Java programs, and the JVM is the piece to be validated. However, in the JIT (Just In Time) compilation strategy of Java, one sees a close parallel with ANDF.
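
As a toy illustration of the idea, the sketch below defines a tiny architecture-neutral "program" (a list of stack-machine instructions) together with an interpreter that any platform could supply; under ANDF the neutral form would instead be translated into machine code at installation time, and under Java's JIT at run time. Everything here, including the instruction set, is invented for the example, and it assumes a recent Java version.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy "architecture-neutral" program: a list of stack-machine instructions.
// The same instruction list could be shipped to any platform; only the
// interpreter (or, in ANDF/JIT, a translator to native code) is per-platform.
public class TinyIntermediateCode {
    enum Op { PUSH, ADD, MUL, PRINT }

    record Instr(Op op, int operand) {}

    static void run(List<Instr> program) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (Instr i : program) {
            switch (i.op()) {
                case PUSH -> stack.push(i.operand());
                case ADD -> stack.push(stack.pop() + stack.pop());
                case MUL -> stack.push(stack.pop() * stack.pop());
                case PRINT -> System.out.println(stack.peek());
            }
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4 expressed in the neutral format
        run(List.of(
                new Instr(Op.PUSH, 2),
                new Instr(Op.PUSH, 3),
                new Instr(Op.ADD, 0),
                new Instr(Op.PUSH, 4),
                new Instr(Op.MUL, 0),
                new Instr(Op.PRINT, 0)));
    }
}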

Source of Information : Elsevier Server Architectures 2005

System Abstraction Layer

SAL allows operating systems to be ported, in binary format, between various platforms based on the same processor architecture. The role of this layer is to hide the specific details of the platform from the operating systems. Among the principal functions of a SAL are the following (a small illustrative sketch appears at the end of this section):

• Taking responsibility for the initialization, testing, and configuration of the platform's hardware resources

• Providing the operating system with data structures describing the configuration

• Providing basic services for resource management, such as choosing which processor will load ("bootstrap") the system, along with basic hardware error management and error logging

• Hiding differences arising from the use of different processor architectures, or the use of different processors within a single architectural family

For IA-64 systems, Intel has introduced a further layer between the SAL and the hardware; this new PAL (Processor Abstraction Layer) layer is intended to hide the differences between the implementation details of successive generations of IA-64 processors from the SAL. For an operating system intended to be ported across processor architectures, this functionality has its natural home in a SAL.

The concept of a SAL was understood and implemented long before the term itself was introduced. As an example, the manufacturers of both proprietary and UNIX systems developed families of products which had differences in implementation (in particular, for performance reasons) and therefore often had to develop an abstraction layer to facilitate porting the OS between the various models in the range.

In the case of Windows NT, a similar abstraction layer was implemented, called HAL, for Hardware Abstraction Layer.
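
Purely as an illustration of the kind of contract such a layer offers, here is a hypothetical Java interface mirroring the SAL responsibilities listed above; none of these names come from an actual SAL, PAL, or HAL specification.

// Illustrative only: a hypothetical contract between an operating system and
// a system abstraction layer, mirroring the responsibilities listed above.
// None of these names come from a real SAL, PAL, or HAL specification.
public interface SystemAbstractionLayer {

    // Initialize, test, and configure the platform's hardware resources.
    void initializePlatform();

    // Hand the OS a description of the hardware configuration it is running on.
    String describeConfiguration();

    // Choose which processor will bootstrap the system.
    int selectBootProcessor();

    // Record a hardware error so the OS or an operator can act on it.
    void logHardwareError(String source, String details);
}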

Source of Information : Elsevier Server Architectures 2005

Flynn�s Classification of Parallel Architectures

On the level of systems architectures, Michael Flynn proposed a classification of parallel architectures:

• SISD (Single Instruction, Single Data), in which just one stream of instructions performs a transformation on a single stream of data (this is the simplest model and corresponds to the computer architecture described by John von Neumann)

• SIMD (Single Instruction, Multiple Data), in which the same stream of instructions is applied to disjoint sets of data

• MISD (Multiple Instruction, Single Data), in which several instruction streams are applied to the same stream of data (some sources hold that pipelined execution is the closest approximation to this approach, but we feel that this is a somewhat artificial view, since in a pipelined machine there is but one instruction stream)

• MIMD (Multiple Instruction, Multiple Data), in which independent instruction streams are applied to independent sets of data. As we will see in the case of parallel DBMSs, the same program (the DBMS) may be executed simultaneously, without synchronism, on disjoint sets of data; one then speaks of SPMD (Single Program, Multiple Data), a pattern sketched in the example after this list.
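
To make the SPMD idea tangible, here is a small illustrative Java sketch in which the same routine runs concurrently over disjoint slices of an array, with no synchronization between workers until the results are combined; it is not meant to model a parallel DBMS.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// SPMD illustration: the same program (sumSlice) runs on disjoint slices of
// the data, each worker proceeding independently; results are combined at the end.
public class SpmdSum {
    static long sumSlice(int[] data, int from, int to) {
        long sum = 0;
        for (int i = from; i < to; i++) sum += data[i];
        return sum;
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[1_000_000];
        Arrays.fill(data, 1);

        int workers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int slice = data.length / workers;

        List<Future<Long>> parts = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final int from = w * slice;
            final int to = (w == workers - 1) ? data.length : from + slice;
            parts.add(pool.submit(() -> sumSlice(data, from, to)));
        }

        long total = 0;
        for (Future<Long> part : parts) total += part.get();
        pool.shutdown();

        System.out.println("Total = " + total); // expect 1000000
    }
}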

Source of Information :  Elsevier Server Architectures 2005

Insecure Websites in DMZ Still Pose a Risk

Level of Trust
Normally, a website is considered to be part of the untrusted outer perimeter of a company's network infrastructure. Hence, system administrators usually put a web server in the DMZ part of the network and assume that the information security risk the website poses to the network is mitigated. However, several industry security standards have been introduced to protect public infrastructure such as web servers and name servers, in addition to the services directly covered by those standards' scope. Some companies are even hiring people with computer forensics degrees to find out where threats are coming from. Why is it so important to protect your website even if it is not closely connected to your critical data infrastructure?


Social Impact
Humans are the weakest link in the chain of a company's security. The experience gathered during more than 5 years of penetration testing shows that almost no large-scale company can resist a social-engineering attack vector. In companies with more than 30 employees, a penetration tester or a real intruder can pretext, deceive, and easily persuade at least 10% of the employees to open an attachment or follow a link to a malicious website containing an exploit pack and a viral payload. Basic countermeasures include restricting network access to all websites except whitelisted sites (which include your own website) or simply educating the employees. So, what happens when an intruder gains access to the website? The following list highlights what can be done with a web server located in the DMZ:

• Inject an exploit pack and payload into the main page or create malicious pages

• Send spam and scam letters to the company's employees inviting them to visit a malicious page on the website

• Install a rootkit and sniffer to maintain access and capture all passwords entered by system administrators or website maintainers

• Modify links from legitimate sites to malicious ones, for instance, redirecting an Internet banking link to http://ibank.y0urbank.ru instead of http://ibank.yourbank.com

• Pivot client-side payloads through the web server in the case of networks with restricted Internet access

This list includes only those risks related to successful network penetration. In addition, there are risks to the business's image, such as defacement, modification of sensitive public information (e.g., exchange rates on a bank's website, payment credentials on a charity's website, phone numbers, etc.), or denial of service by deleting everything and bringing the web server down.


Ways to Protect
There are several methodologies to assess website security and mitigate risks connected with the website. One of the most popular is the OWASP Testing Guide, which includes more than 300 checks and addresses almost all known web vulnerabilities. The PCI Data Security Standard refers to the top 10 most widespread vulnerabilities in the software, called the OWASP Top Ten, and is a basic requirement for any website dealing with credit card payments. For developers, there is also a Development Guide, the goal of which is to prevent mistakes affecting security.

For companies willing to protect their websites and networks, Informzaschita offers the following services:

• Complete website assessment according to the OWASP Testing Guide (300+ checks)

• Express assessment according to the OWASP Top Ten and deployment of Web Application Firewalls for small businesses or companies falling under PCI DSS requirements

• Complete PCI PA-DSS assessment for companies developing payment applications

• Automated web and network security scanning

Source of Information :  Hakin9 December 2010

The Power and Reach of Facebook Connect

Although 400+ million people is a lot, it still isn't everyone. Therefore, Facebook must continually come up with new ways to find users. And as it's out searching for new users, it also wants to ensure that it entices back its current user base as often as possible. This provides more opportunities for you, as a marketer, because it increases the possibility for you to connect with new prospects, current customers, and fans of your brand.

To accomplish all these goals, Facebook created Facebook Connect, which was released in December 2008. After only about a year, Facebook Connect had been integrated, in one form or another, on more than 80,000 websites. On those 80,000 websites, approximately 60 million users have engaged with Facebook Connect. That means that, as of this writing, approximately 20 percent of all Facebook users use Facebook Connect on websites other than Facebook. It's no surprise that many of the largest websites have turned to integrating Facebook Connect to help drive more visitors and then translate those visitors into repeat traffic and increased engagement. According to Facebook, two-thirds of the top 100 websites in the United States and 50 percent of the world's top 100 websites have implemented Facebook Connect.

Facebook Connect is a lot more than just a blue "Connect with Facebook" box that lets you sign in with your Facebook credentials instead of registering for the site. Facebook gives a basic and geeky definition of Facebook Connect, calling it "a powerful set of APIs for developers that lets users bring their identity and connections everywhere." Facebook Connect is a powerful tool with a complex set of features that can be integrated into websites. For marketers, integrating Facebook Connect helps bring more eyes to your website, giving you more opportunities to connect and convert, as well as the ability to implement the other features available within Facebook Connect. Additionally, it is more convenient for users who visit your website, since they don't need to create a new login just to leave a comment or interact with your website in some other way.

However, these implementations don't even begin to touch on the capabilities of Facebook Connect. Let's take a little time to walk through some of the features of Facebook Connect, along with a couple of examples. From those you should get a few ideas about how you may want to integrate Facebook Connect with your website or blog.

Source of Information : Facebook Marketing Designing Your Next Marketing Campaign

Creating a Personalized Experience with Facebook Connect

The capability of Facebook Connect to create a personalized experience is something we've only seen the tip of so far. Facebook Connect can use some of your information, such as age, gender, location, or content that you've uploaded to Facebook, to help build a story created just for you.

To fully understand what creating a personalized experience with Facebook Connect actually means, it is easiest to learn about how some have chosen to use this feature.

During Shark Week, the Discovery Channel looked for ways to pull people in to watch its Shark Week programming. The Discovery Channel realized that one way to do this would be to reach out into the communities where people are already hanging out. Instead of hoping that they tune in and watch commercials related to Shark Week, the Discovery Channel, with the assistance of C.C. Chapman and his team at Campfire (previously known as The Advance Guard), decided to use Facebook as a digital channel to find potential viewers.

The easy way out would have been to just run a Facebook ad with a slick call to action. No, that would be too simple. What if you could make users feel as though they were in a boat being attacked by a shark? What if you could make their hearts race and make it really real for them? That would be cool, right? That is exactly what Frenzied Waters, an application built with Facebook Connect for Shark Week, did. By using information that you've already made available to Facebook, such as biographical information and photos, the Frenzied Waters application created an experience that made you feel as though you were part of the shark attack.

Another slick use of Facebook Connect was for the launch of the 10th anniversary of Fight Club, a movie featuring Brad Pitt. When you visit WelcomeToFC.com, you're prompted to enter your Facebook credentials. After you enter your credentials, the website (actually Facebook Connect) grabs data from your Facebook account, such as your name, favorite movies, job information, and photos, and uses it to build a story from Fight Club around you. At the end of the experience, it then encourages you to purchase the Fight Club DVD.

What both of these applications accomplish is evoking emotion and bringing you into the experience. This tends to hook you and makes you want to run around and share it with your friends because of how cool it is. This word-of-mouth marketing is the best type of marketing that any brand or product could ask for.

Source of Information :  Facebook Marketing Designing Your Next Marketing Campaign

Using Facebook Connect as a Single Sign On

The concept of single sign-on is something that has continued to float around the Internet as users become increasingly frustrated with how many different websites they have to sign up for to access content. Concepts such as OpenID have attempted to create a single sign-on that people would use on all the websites they visit. The problem with OpenID, though, is that it requires website administrators to add OpenID functionality to their websites. In the middle of this debate over single sign-on, Facebook implemented Facebook Connect and instantly enabled 400+ million users to have a single sign-on. Yes, just as with OpenID, website administrators need to add additional code to their websites; however, unlike OpenID, Facebook Connect also gives you access to the many other features that have been covered. Facebook Connect also moves ahead of OpenID and other single sign-on concepts because of Facebook's brand recognition.

By becoming a single sign-on resource, Facebook also benefits because users become more reliant on the social network. Because you have the ability to use Facebook to sign in to all your favorite websites, you're likely to interact with Facebook more often. Even though you cannot access all the features of your Facebook account from those sites, and it wouldn't necessarily make sense for you to be able to, it does increase the amount that you interact with Facebook, increasing the likelihood that you'll actually head over to the social network and hang out there as well.

Another beneficial feature of using Facebook Connect as a single sign-on is that when you log in to Facebook, either directly or through any site that uses Facebook Connect, Facebook signs you in to all the websites on which you use Connect. This means that if you visit five sites that use Facebook Connect, you will not be required to re-enter your login credentials multiple times. Also, when you sign out of any one site, Facebook signs you out of all of them. This can be helpful if you use a public computer, so that you don't have to remember to log out of every website you visited during a browsing session.

As Facebook continues to grow, single sign-on using Facebook Connect will become one of the most popular features of Facebook Connect, if not the most popular.

Source of Information :  Facebook Marketing Designing Your Next Marketing Campaign

Sharing Stories into Your Stream with Facebook Connect

The ability to share stories into your stream is a feature of some of the more socially forward websites, such as Yelp. On Yelp, once you rate a restaurant, you're able to push that review and restaurant information to your Wall as a status update. This differs from the Share functionality: with the Share feature, you're sharing an article from a website or blog, whereas with stream stories you automatically push information to your Facebook Wall while you're interacting on other websites. This is beneficial to the websites that integrate it because the stream story usually carries some of the branding from the website, such as a logo, bringing increased brand awareness.

Several websites give you the option of publishing your activity to Facebook. You'll typically see this option with social networks and games that rely on users spreading information to bring awareness of their services. If you use the popular social bookmarking website Delicious, you can use Facebook Connect to authorize Delicious to post the websites that you save to its service on your Facebook Wall. You can also add a Delicious box to your home page that will automatically update as you run around the web bookmarking websites.

By linking together the various places you enjoy hanging out around the Internet, you're able to use Facebook as a lifestream to show your community on Facebook what you're interested in. But with that comes some cautionary advice: Ensure that the information you're allowing to flow into Facebook doesn't overwhelm your community and is actually useful to them. If not, they'll begin hiding your status updates, ignoring you, de-fanning your Page, or, at worst, unfriending you. If any of that happens, you close the door on everything that you've been working through this book to accomplish.

To help avoid flooding your community and to understand how and where your activity on the Internet is used, it is helpful to create what Louis Gray termed a social media data flow.

Essentially, by working through this process you create a visual map of where the data that you put into various social networks ends up. This can help you get a grip on whether you may be overwhelming your community, which can happen easily as using Facebook Connect all over the web becomes habitual.

Source of Information : Facebook Marketing Designing Your Next Marketing Campaign

An Ecosystem of Distributions

Many hundreds of GNU/Linux distributions are in active use today. A quick look at Distrowatch's database (distrowatch.com) demonstrates the staggering number and growth of distributions. One of the first GNU/Linux distributions was called Softlanding Linux System, or SLS. For a number of reasons, a programmer named Patrick Volkerding thought he could improve on SLS. Because SLS was free software, Volkerding had the freedom to make a derivative version of SLS and distribute it. Volkerding did just this when he took SLS's code and used it as the framework or model upon which to create his own variant called Slackware. Subsequently, Slackware became the first widely successful GNU/Linux distribution and is maintained to this day.

With time, the landscape of GNU/Linux distribution has changed. However, the important role of derivation that made Slackware possible has remained fully intact and is still shaping this landscape. Today, the hundreds of GNU/Linux distributions serve a multitude of users for a myriad of purposes: There are distributions specially designed for children, for dentists, and for speakers of many of the world's languages. There are distributions for science, for business, for servers, for PDAs, for nonprofit organizations, for musicians, and for countless other groups.

Despite this diversity, the vast majority of derivatives can be traced back to one of two parent distributions: Red Hat and Debian. While it is not necessary to understand the details of how these projects differ, it's useful to know that Red Hat and Debian offer two compelling, but frequently different, platforms. Each project has strengths and weaknesses. For almost every group making a Linux-based OS, one of these projects acts as square one (with a few notable exceptions, such as the Gentoo project).

However, while the process of deriving distributions has allowed for a proliferation of OS platforms serving a vast multiplicity of needs, the derivative process has, historically, been largely a one-way process. New distributions based on Red Hat (Mandriva and Novell's SUSE, for example) begin with Red Hat or a subset of Red Hat technology and then customize and diverge. Very few of these changes ever make it back into Red Hat and, with time, distributions tend to diverge to the point of irreconcilable incompatibility. While the software that each system includes remains largely consistent across all distributions, the way that it is packaged, presented, installed, and configured becomes increasingly differentiated. During this process, interdistribution sharing and collaboration grow in difficulty.

This growing divergence indicates a more general problem faced by distribution teams in getting changes upstream. Frequently, the users of GNU/Linux distributions find and report problems in their software. Frequently, distribution teams fix the bugs in question. While sometimes these bugs are in changes introduced by the distribution, they often exist in the upstream version of the software and the fix applies to every distribution. What is not uncommon, but is unfortunately much less frequent, is for these bug fixes to be pushed upstream so that all distributions and users get to use them. This lack of collaboration is rarely due to malice, incompetence, or any tactical or strategic decision made by developers or their employers. Instead, tracking and monitoring changes across distributions and in relation to upstream developers is complicated and difficult. It's a fact of life that sometimes changes fall on the floor. These failures are simply the product of distribution-building processes, policies, and tools that approach distributions as products in and of themselves, not processes within an ecosystem.

Like many other distributions, Ubuntu is a derivative of Debian. Unlike the creators of many derivatives, the Ubuntu community has made it one of its primary goals to explore the possibility of a better derivation process with Debian, with Debian and Ubuntu's common upstreams (e.g., projects such as Linux or GNU), and with Ubuntu's own derivatives. A more in-depth discussion of Debian can help explain how Ubuntu positions itself within the free software world.

Source of Information : Prentice Hall The Official Ubuntu Book 5th Edition 2010

Useful Software Development Tools

Here's a quick survey of some other candidates for every programmer's toolbox.

Testing Tools
FitNesse: http://fitnesse.org/
FitNesse is an acceptance testing tool that allows tests to be expressed as tables of input data and expected output data, described in Fit for Developing Software: Framework for Integrated Tests.

Watir: http://wtr.rubyforge.org/
Watir is an open source library for automating web browsers allowing automated testing of web applications. It started out on Internet Explorer on Windows but is in the process of being ported to other browsers.

Selenium: http://selenium.openqa.org/
Selenium is a cross-platform suite of tools to automate web application testing.
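
For a flavor of what a browser-automation test looks like, here is a minimal sketch using Selenium's Java WebDriver bindings; the URL and form-field name are placeholders, and the exact classes available depend on the Selenium version you install.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

// Minimal Selenium WebDriver sketch: drive a browser through a search form
// and check the result. The URL and element name are placeholders.
public class SearchSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://www.example.com/");
            WebElement query = driver.findElement(By.name("q"));
            query.sendKeys("selenium");
            query.submit();

            if (!driver.getTitle().toLowerCase().contains("selenium")) {
                throw new AssertionError("Unexpected page title: " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}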

Sahi: http://sahi.co.in/
Sahi is an automation and testing tool for web applications that runs as a proxy server.

The Grinder: http://grinder.sourceforge.net/
This is an open source load testing tool in which scripts are written in Jython.

JMeter: http://jakarta.apache.org/jmeter/
This is an open source load testing tool in which scripts are written in Java.

QuickTest Professional and LoadRunner: http://www.hp.com/
QuickTest Professional is an automated functional GUI testing tool, and LoadRunner is a performance and load testing product.

Peach Fuzzing Platform: http://peachfuzzer.com/
Peach is a fuzzer that is capable of performing both generation and mutation-based fuzzing.

RFuzz: http://rfuzz.rubyforge.org/
RFuzz is a Ruby library that allows web applications to be easily fuzz tested.


Runtime Analysis Tools
Valgrind: http://valgrind.org/
Valgrind is an instrumentation framework for Linux and includes, among other things, memory analysis and profiling tools.

BoundsChecker: http://www.compuware.com/products/devpartner/visualc.htm
BoundsChecker is part of Compuware�s DevPartner for Visual C++ BoundsChecker Suite. It analyzes running programs to detect memory and other issues.

Purify: http://www.ibm.com/software/awdtools/purify/
IBM�s Rational Purify detects memory leaks and corruption within running programs.

DTrace: http://opensolaris.org/os/community/dtrace/
DTrace is a highly regarded dynamic tracing framework created by Sun Microsystems for troubleshooting kernel and application problems. It is also incorporated in Mac OS X "Leopard," including a GUI called Instruments.


Network Analyzers
If your software relies upon network communication (and it's becoming difficult to find software which doesn't), it can be very useful to see what's really being transferred over the network. A network analyzer (sometimes called a packet sniffer) sits on the network capturing and analyzing all the packets crossing it. You can then filter these packets to extract only those that you're interested in and examine their contents. Broadly speaking, a packet sniffer is a low-level tool. It can capture all the traffic on the network but doesn't necessarily have a deep understanding of the protocol being used. So if, for example, the communication is encrypted, a packet sniffer is unlikely to be able to display the information being exchanged.

TCPDUMP: http://www.tcpdump.org/
TCPDUMP is a widely distributed open source packet sniffer.

Wireshark: http://www.wireshark.org/
Wireshark (previously known as Ethereal) is an open source tool that provides similar functionality to TCPDUMP, but it has a graphical front end and a wider selection of built-in analysis tools.


Debugging Proxies
A debugging proxy is a higher-level tool than a network analyzer, targeted at a particular protocol. You normally need to configure your software slightly differently so that it communicates via the proxy rather than directly, but having done so, you can very often get a deeper analysis of the conversation. Some debugging proxies can even view encrypted data.

Charles: http://www.charlesproxy.com/
Charles is a cross-platform HTTP proxy that, among other things, supports debugging encrypted communications.

Fiddler: http://www.fiddlertool.com/
Fiddler is a Windows HTTP proxy that, as its name suggests, allows you to "fiddle" with incoming or outgoing data.


Debuggers
In most cases, your choice of debugger is going to be governed by your choice of language, IDE, or tool chain, so there's little value in me providing a list of choices here. There is one particular debugger that I have to mention, however:

Firebug: http://getfirebug.com/
Firebug has transformed web development by providing dramatically improved client-side debugging facilities. It allows you to inspect and edit the DOM and CSS, as well as monitor and profile network activity, and it provides full JavaScript debugging support.

Source of Information : Paul Butcher - Debug it Find repair and prevent bugs

Useful Software Development Libraries

Not all tools are stand-alone; many of those covered in this section come in the form of libraries that we need to link with our own code.


Testing
The last few years have seen an explosion in the number of test frameworks, many of which are based upon the seminal JUnit. There's no way that I can begin to cover them all here, so I will restrict myself to referencing the "big two" in the Java community:

JUnit: http://www.junit.org/
This is the library that started it all.
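
For readers who have not seen one, here is a minimal test in the JUnit 4 style; the Calculator class under test is invented for the illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal JUnit 4 example. The Calculator class is a stand-in invented for
// this illustration; in a real project it would live in your production code.
public class CalculatorTest {

    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    public void addsTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}

Running the class with a JUnit runner, for example from your IDE or build tool, reports the test as passing or failing.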

TestNG: http://testng.org/
This is a more recent test framework, which builds upon the ideas in JUnit but takes a few different approaches and is starting to gain a considerable following.


Debugging Memory Allocators
In languages like C and C++ that don't provide automatic memory management, a debugging memory allocator is an essential tool for avoiding memory leaks, corruption, and other common issues.

libcwd: http://libcwd.sourceforge.net/
This is an open source debugging support library that provides memory debugging along with other features.

Microsoft Visual C++: http://msdn.microsoft.com/visualc/
Microsoft's Visual C++ ships with a debugging memory allocator built in. Search for Memory Leak Detection and Isolation in the documentation for further information.

Mudflap: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging
Mudflap is a technology built into some versions of the GNU C and C++ compiler that instruments all risky pointer and array dereferencing operations, some standard library string and heap functions, and some other associated constructs with range and validity tests.

Dinkumware: http://www.dinkumware.com/
Dinkumware sells C and C++ standard libraries that include comprehensive support for memory debugging.

Electric Fence: http://perens.com/works/software/ElectricFence/
This uses virtual memory hardware to detect memory overwrites and reuse of freed memory.


Logging
Logging frameworks provide the ability for your code to contain configurable logging that can be enabled, disabled, or increased in detail, typically at runtime and by individual feature.

log4j: http://logging.apache.org/log4j/
Apache log4j is probably the best-known Java logging library, and ports exist to most major languages.

Logback: http://logback.qos.ch/
Logback was designed by Ceki Gülcü, the founder of log4j, to be its successor.

java.util.logging: http://java.sun.com/j2se/1.4.2/docs/guide/util/logging/
As of 1.4.2, Java includes a standard logging API java.util.logging, commonly known as JUL.

SLF4J: http://www.slf4j.org/
The Simple Logging Facade for Java is an attempt to tame the plethora of Java logging APIs by providing a common interface that can write to different implementations at deployment time.
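
Here is a small sketch of what coding against the facade looks like; at deployment time it would be paired with a binding such as Logback or a log4j adapter, which is not shown, and the OrderProcessor class is invented for the example.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Small SLF4J example: the code logs against the facade only; the actual
// backend (Logback, log4j, java.util.logging, and so on) is chosen at
// deployment time by whichever binding is on the classpath.
public class OrderProcessor {
    private static final Logger log = LoggerFactory.getLogger(OrderProcessor.class);

    public void process(String orderId, int items) {
        // Parameterized messages avoid string concatenation when the level is disabled.
        log.debug("Processing order {} with {} items", orderId, items);
        try {
            // ... real work would go here ...
            log.info("Order {} processed", orderId);
        } catch (RuntimeException e) {
            log.error("Failed to process order {}", orderId, e);
            throw e;
        }
    }
}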

syslog-ng: http://www.balabit.com/network-security/syslog-ng/
syslog-ng is the most popular implementation of the BSD syslog protocol, allowing log data from many different systems to be integrated into a central repository, with rich content-based filtering.

Source of Information :  Paul Butcher - Debug it Find repair and prevent bugs
 