To understand whether ACID expectations apply to distributed systems, you first need to explore the properties of distributed systems and see how they are affected by the ACID promise. Distributed systems come in varying shapes, sizes, and forms, but they all share a few typical characteristics and are exposed to similar complications. As distributed systems grow larger and more spread out, the complications become more challenging. On top of that, if the system needs to be highly available, the challenges only multiply.
Consider even a simple situation with two applications, each connected to its own database, with all four parts running on separate machines. Providing the ACID guarantee across such a setup is not trivial. In distributed systems, the ACID principles are applied using the model laid down by the X/Open XA specification, which calls for a transaction manager (or coordinator) to manage transactions that span multiple transactional resources. Even with a central coordinator, implementing isolation across multiple databases is extremely difficult, because different databases provide their isolation guarantees differently. A few techniques, such as two-phase locking (and its variant, Strong Strict Two-Phase Locking or SS2PL) and two-phase commit, ameliorate the situation a bit. However, these techniques involve blocking operations and keep parts of the system unavailable while a transaction is in process and data moves from one consistent state to another. XA-based distributed transactions do not work for long-running transactions, because keeping resources locked for a long time is impractical. Alternative strategies, such as compensating operations, help implement transactional fidelity in long-running distributed transactions. The challenges of resource unavailability in long-running transactions also appear in high-availability scenarios, and the problem takes center stage when there is little tolerance for resource unavailability and outage.
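To make the coordinator's role concrete, here is a minimal sketch of the two-phase commit protocol in Python. The `Participant` class and its methods are hypothetical stand-ins for transactional resources, not the actual XA API.

```python
# Minimal two-phase commit sketch. `Participant` is a hypothetical
# stand-in for an XA-style resource manager; real XA resources expose
# prepare/commit/rollback through the XA interface.

class Participant:
    def __init__(self, name):
        self.name = name
        self.staged = None

    def prepare(self, change):
        # Phase 1: durably stage the change and take locks.
        # Voting yes is a promise that commit() cannot fail later.
        self.staged = change
        return True

    def commit(self):
        # Phase 2a: make the staged change visible, release locks.
        print(f"{self.name}: committed {self.staged}")
        self.staged = None

    def rollback(self):
        # Phase 2b: discard the staged change, release locks.
        print(f"{self.name}: rolled back {self.staged}")
        self.staged = None


def coordinator(participants, change):
    # Phase 1: collect votes. Locks are held from here until
    # phase 2 completes -- this is the blocking window.
    votes = [p.prepare(change) for p in participants]
    if all(votes):
        for p in participants:
            p.commit()        # unanimous yes: commit everywhere
        return True
    for p in participants:
        p.rollback()          # any no vote: abort everywhere
    return False


coordinator([Participant("db1"), Participant("db2")], {"balance": 100})
```

Note that between prepare and commit every participant holds its locks; if the coordinator fails in that window, participants block until it recovers. That blocking window is exactly why XA is impractical for long-running work and why compensating operations are used instead.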
A logical way of assessing the problems involved in providing ACID-like guarantees in distributed systems is to understand how the following three factors are affected in such systems:
• Consistency
• Availability
• Partition Tolerance
Consistency, Availability, and Partition Tolerance (CAP) are the three pillars of Brewer's Theorem, which underlies much of the recent generation of thinking around transactional integrity in large, scalable distributed systems. Succinctly put, Brewer's Theorem states that in systems that are distributed or scaled out, it's impossible to achieve all three (Consistency, Availability, and Partition Tolerance) at the same time: you must make trade-offs and sacrifice at least one in favor of the other two. Before discussing the trade-offs, however, it's important to explore what these three factors mean and imply.
Consistency
Consistency is not a very well-defined term, but in the context of CAP it alludes to atomicity and isolation. Consistency means consistent reads and writes, so that concurrent operations see the same valid and consistent data state, which at a minimum means no stale data. In ACID, consistency means that data that does not satisfy predefined constraints is not persisted; that is not the same as consistency in CAP.

Brewer's Theorem was conjectured by Eric Brewer and presented by him (www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf) as a keynote address at the ACM Symposium on Principles of Distributed Computing (PODC) in 2000. Brewer's ideas on CAP developed as part of his work at UC Berkeley and at Inktomi. In 2002, Seth Gilbert and Nancy Lynch proved Brewer's conjecture, and hence it's now referred to as Brewer's Theorem (and sometimes as Brewer's CAP Theorem). In Gilbert and Lynch's proof, consistency is treated as atomicity. Their proof was published as a paper titled "Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services" and can be accessed online at http://theory.lcs.mit.edu/tds/papers/Gilbert/Brewer6.ps.

In a single-node situation, consistency can be achieved using the database's ACID semantics, but things get complicated as the system is scaled out and distributed.
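A toy illustration of what goes wrong once data is distributed, assuming nothing beyond a primary that replicates to a replica asynchronously. All names here are hypothetical; no real database API is implied. A read routed to the replica before replication catches up returns stale data, which is precisely the violation CAP consistency rules out.

```python
# Toy illustration of a stale read under asynchronous replication.

class Replica:
    def __init__(self):
        self.data = {}

class Primary:
    def __init__(self, replica):
        self.data = {}
        self.replica = replica
        self.pending = []          # replication log, applied lazily

    def write(self, key, value):
        self.data[key] = value
        self.pending.append((key, value))   # replicated later, not now

    def flush_replication(self):
        for key, value in self.pending:
            self.replica.data[key] = value
        self.pending.clear()

replica = Replica()
primary = Primary(replica)

primary.write("x", 1)
print(replica.data.get("x"))   # None: a stale read -- not CAP-consistent
primary.flush_replication()
print(replica.data.get("x"))   # 1: the replicas have converged
```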
Availability
Availability means the system is available to serve at the time it's needed. As a corollary, a system that is busy, uncommunicative, or unresponsive when accessed is not available. Some, especially those who try to refute the CAP Theorem and its importance, argue that a system with minor delays or minimal hold-ups is still an available system. Nevertheless, in terms of CAP the definition is not ambiguous: if a system is not available to serve a request at the very moment it's needed, it's not available. That said, many applications can compromise on availability, and that is a possible trade-off choice for them to make.
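A small sketch of that strict reading, with a hypothetical `call_service` standing in for any remote request: under CAP, an answer that arrives after the caller's deadline counts the same as no answer at all.

```python
import time

def call_service(simulated_latency):
    # Hypothetical remote call; only its latency is simulated here.
    time.sleep(simulated_latency)
    return "response"

def request_with_deadline(latency, deadline=0.1):
    # Under CAP's strict reading, exceeding the deadline means the
    # system was not available at the moment it was needed.
    start = time.monotonic()
    response = call_service(latency)
    if time.monotonic() - start > deadline:
        return None    # a late answer is treated as unavailability
    return response

print(request_with_deadline(0.01))   # 'response' -- available
print(request_with_deadline(0.5))    # None -- too slow, hence unavailable
```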
Partition Tolerance
Parallel processing and scaling out are proven methods and are being adopted as the model for scalability and higher performance, as opposed to scaling up and building massive supercomputers. The past few years have shown that building giant monolithic computational contraptions is expensive and impractical in most cases. Adding a number of commodity hardware units to a cluster and making them work together is a more cost-effective and resource-efficient solution. The emergence of cloud computing is testimony to this fact.
Because scaling out is the chosen path, partitions and occasional faults in a cluster are a given. The third pillar of CAP rests on partition tolerance, or fault tolerance. In other words, partition tolerance measures a system's ability to continue serving in the event that a few of its cluster members become unavailable.
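A sketch of the choice a partitioned system faces, assuming a hypothetical three-node cluster with majority-quorum writes: with one node unreachable a majority still exists and writes proceed, but with two unreachable a CP-style system refuses writes to preserve consistency, where an AP-style system would instead accept them and reconcile later.

```python
# Sketch of majority-quorum writes in a three-node cluster.
# Names are illustrative; no real clustering library is implied.

class Node:
    def __init__(self, name):
        self.name = name
        self.reachable = True
        self.data = {}

def quorum_write(nodes, key, value):
    # CP-style behavior: the write succeeds only if a majority of
    # nodes can acknowledge it; otherwise it is refused outright.
    alive = [n for n in nodes if n.reachable]
    if len(alive) <= len(nodes) // 2:
        return False        # no majority: refuse, staying consistent
    for n in alive:
        n.data[key] = value
    return True

cluster = [Node("a"), Node("b"), Node("c")]

cluster[2].reachable = False                 # one member partitioned away
print(quorum_write(cluster, "x", 1))         # True: majority still serves

cluster[1].reachable = False                 # second member lost
print(quorum_write(cluster, "x", 2))         # False: CP refuses the write
```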