VERTICAL SCALING CHALLENGES AND FALLACIES OF DISTRIBUTED COMPUTING

The traditional choice has been in favor of consistency, and so system architects have in the past shied away from scaling out in favor of scaling up. Scaling up, or vertical scaling, involves moving to larger and more powerful machines. This works in many cases but is often characterized by the following:
• Vendor lock-in: Not everyone makes large and powerful machines, and those who do often rely on proprietary technologies to deliver the power and efficiency you desire. This means there is a possibility of vendor lock-in. Vendor lock-in in itself is not as bad as it is often made out to be; many applications over the years have been built and run successfully on proprietary technology. Nevertheless, it does restrict your choices and is less flexible than open alternatives.
• Higher costs: Powerful machines usually cost a lot more than commodity hardware.
• Data growth perimeter: Powerful and large machines work well until the data grows to fill them. At that point, there is no choice but to move to a yet larger machine or to scale out. Even the largest machine has a limit to the amount of data it can hold and the amount of processing it can carry out successfully. (In real life, a team of people is better than a superhero!)
• Proactive provisioning: Many applications have no idea of their eventual scale when they start out. When you scale vertically, you need to budget for that large scale upfront, yet it's extremely difficult to assess and plan scalability requirements because growth in usage, data, and transactions is often impossible to predict.
Given the challenges associated with vertical scaling, horizontal scaling has, for the past few years, become the scaling strategy of choice. Horizontal scaling implies that systems are distributed across multiple machines or nodes, each of which can be a cost-effective commodity machine (a minimal sharding sketch at the end of this section shows what such a distribution can look like). Anything distributed across multiple nodes is subject to the fallacies of distributed computing, a list of assumptions that developers take for granted in distributed systems but that often do not hold true. The fallacies are as follows:
• The network is reliable.
• Latency is zero.
• Bandwidth is infinite.
• The network is secure.
• Topology doesn't change.
• There is one administrator.
• Transport cost is zero.
• The network is homogeneous.
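These fallacies show up directly in code. As a minimal sketch (an illustration, not code from the original text, with a hypothetical node URL), the following request refuses to take the first two fallacies for granted: it bounds latency with a timeout and treats network failure as the normal case, retrying with backoff:

import time
import urllib.request
import urllib.error

def fetch_with_retries(url, timeout=2.0, retries=3, backoff=0.5):
    for attempt in range(retries):
        try:
            # Bound the wait: latency is not zero, so never block indefinitely.
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            # The network is not reliable: back off, then try again.
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))

# Hypothetical usage against one node of a cluster:
# data = fetch_with_retries("http://node1.example.com/status")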
The fallacies of distributed computing are attributed to Sun Microsystems (now part of Oracle). Peter Deutsch created the original seven on the list; Bill Joy, Tom Lyon, and James Gosling also contributed. Read more about the fallacies at http://blogs.oracle.com/jag/resource/Fallacies.html.
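To make the idea of distributing data across commodity nodes concrete, here is a minimal sketch (my illustration, not taken from any particular product) of modulo-based sharding, where each key is deterministically mapped to one of several nodes; the node names are hypothetical:

import hashlib

NODES = ["node0", "node1", "node2", "node3"]  # commodity machines

def node_for_key(key):
    # Hash the key so that every client computes the same stable mapping.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Reads and writes for a given key always land on the same node:
print(node_for_key("user:42"))

Note that simple modulo placement reshuffles most keys whenever a node is added or removed, which is why many distributed data stores use consistent hashing instead.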