
Discover Google File System: High Availability and Serializable Execution Benefits




Eager vs. Lazy Replication

• Eager replication: all replicas are updated as part of the original transaction.
• Lazy replication: one replica is updated by the originating transaction; the other replicas are updated asynchronously, typically as a separate transaction for each node.

Advantages of eager replication:

• All replicas are updated at once.
• Serializable execution.
• No need for reconciliation.

Advantages of lazy replication:

• Asynchronous updates.
• Improved response time.
• Usable for mobile nodes.

Disadvantages of eager replication:

• Reduced (update) performance.
• Increased response times.
• Not usable for mobile nodes.
• Poor scaling (the deadlock rate grows rapidly with system size).
• Anomalies are converted to waits/deadlocks.

Disadvantages of lazy replication:

• Stale versions.
• Conflicting transactions must be reconciled.
• Scaleup pitfall (the reconciliation rate grows cubically).
• System delusion.
• Collisions when nodes are disconnected.

Group vs. Master Ownership

• Group: any node can update its replica (update anywhere).
• Master: each object has a master node; only the master can update the primary copy of the object; the other replicas are read-only.

Propagation vs. ownership:

• Lazy Group: N transactions, N object owners.
• Eager Group: one transaction, N object owners.
• Lazy Master: N transactions, one object owner.
• Eager Master: one transaction, one object owner.
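To make the eager/lazy distinction concrete, here is a minimal, hypothetical sketch (the classes and functions are illustrative, not from the paper): an eager write applies the update at every replica inside one transaction, while a lazy write commits at the originating replica and queues a separate asynchronous propagation for each other node.

```python
# Hypothetical sketch contrasting eager and lazy update propagation.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value


def eager_update(replicas, key, value):
    """Eager replication: all replicas are updated as part of one transaction."""
    # In a real system this would use a distributed (e.g. two-phase) commit;
    # here we simply apply the write everywhere before "committing".
    for r in replicas:
        r.apply(key, value)
    return "committed"


def lazy_update(origin, others, key, value, queue):
    """Lazy replication: the originating replica commits immediately;
    the other replicas are updated later, one transaction per node."""
    origin.apply(key, value)           # commit locally, fast response
    for r in others:
        queue.append((r, key, value))  # propagate asynchronously
    return "committed locally"


def drain(queue):
    """Background propagation of queued lazy updates."""
    while queue:
        replica, key, value = queue.pop(0)
        replica.apply(key, value)


if __name__ == "__main__":
    a, b, c = Replica("A"), Replica("B"), Replica("C")
    eager_update([a, b, c], "x", 1)          # all replicas see x=1 at commit time
    pending = []
    lazy_update(a, [b, c], "y", 2, pending)  # only A sees y=2 right away
    drain(pending)                           # B and C converge later
```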

Notes on Lazy Group replication:

• Timestamps are used for reconciliation: objects carry timestamps, and each update carries the new value together with the old timestamp of the object.
• Collisions happen when nodes are disconnected.

Notes on Lazy Master replication:

• Better than eager replication, but still limited: Lazy-Master replication is not appropriate for mobile applications.
• No reconciliation is needed, but deadlocks may occur.
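A minimal sketch, assuming the simple per-object timestamps described above (the names and the reconciliation rule are illustrative, not from the paper): a receiving replica applies an incoming update only if the update's "old" timestamp still matches the object's current timestamp; otherwise the two updates collide and must be reconciled.

```python
import time

# Hypothetical sketch of timestamp-based reconciliation in lazy-group
# replication: updates carry the new value plus the old timestamp of the
# object; a mismatch at the receiving replica signals a collision.

class TimestampedObject:
    def __init__(self, value):
        self.value = value
        self.timestamp = time.time()


def apply_update(obj, new_value, old_timestamp, reconcile):
    """Apply a propagated update, or invoke reconciliation on a collision."""
    if obj.timestamp == old_timestamp:
        # No concurrent update happened: install the new value.
        obj.value = new_value
    else:
        # Someone else updated the object since this update was created:
        # the transactions collide and must be reconciled (by a rule or a human).
        obj.value = reconcile(obj.value, new_value)
    obj.timestamp = time.time()


if __name__ == "__main__":
    account = TimestampedObject(100)
    stale_ts = account.timestamp
    account.value, account.timestamp = 80, time.time()   # concurrent local update
    # A disconnected node's update arrives late and collides:
    apply_update(account, 90, stale_ts,
                 reconcile=lambda ours, theirs: min(ours, theirs))
    print(account.value)   # 80 under this (arbitrary) reconciliation rule
```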

The authors propose a new model, called two-tier replication, to address the problems of the previous models: the high deadlock and reconciliation rates are avoided by this restricted form of replication. Two-tier replication consists of two node types:

• Base nodes: always connected, store a replica, and master most objects.
• Mobile nodes: often disconnected, store a replica, and issue tentative transactions.

It also has two version types:

• Master version: exists at the object owner; other nodes may have older versions.
• Tentative version: the local version, updated by tentative transactions.

Advantages:

• Supports mobile nodes.
• Combines eager-master replication with local updates (a minimal sketch of the idea follows below).
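A minimal, hypothetical sketch of the two-tier idea (class names and the acceptance rule below are illustrative, not from the paper): a mobile node records tentative transactions against its local tentative versions while disconnected; on reconnection, the base node re-executes them against the master versions and rejects any transaction that no longer satisfies its acceptance criterion.

```python
# Hypothetical sketch of two-tier replication: mobile nodes queue tentative
# transactions while disconnected; a base node re-executes them against the
# master versions on reconnect, rejecting those that fail their acceptance test.

class BaseNode:
    """Always connected; masters (owns) the objects."""
    def __init__(self, master_versions):
        self.master = dict(master_versions)

    def reconnect(self, mobile):
        accepted, rejected = [], []
        for txn in mobile.tentative_log:
            if txn.acceptable(self.master):
                txn.apply(self.master)       # re-executed as a base transaction
                accepted.append(txn)
            else:
                rejected.append(txn)         # the mobile node is notified
        mobile.tentative_log.clear()
        mobile.local = dict(self.master)     # refresh the mobile replica
        return accepted, rejected


class MobileNode:
    """Often disconnected; keeps tentative versions and a tentative log."""
    def __init__(self, base_snapshot):
        self.local = dict(base_snapshot)
        self.tentative_log = []

    def tentative(self, txn):
        txn.apply(self.local)                # update the tentative version
        self.tentative_log.append(txn)


class Debit:
    """Example transaction: debit an account, acceptable if it stays >= 0."""
    def __init__(self, account, amount):
        self.account, self.amount = account, amount

    def apply(self, versions):
        versions[self.account] -= self.amount

    def acceptable(self, versions):
        return versions[self.account] - self.amount >= 0


if __name__ == "__main__":
    base = BaseNode({"acct": 100})
    phone = MobileNode(base.master)          # snapshot taken before disconnecting
    Debit("acct", 60).apply(base.master)     # another node debits at the base: 100 -> 40
    phone.tentative(Debit("acct", 60))       # looks fine against the stale tentative copy
    ok, bad = base.reconnect(phone)          # re-execution fails the acceptance test
    print(base.master["acct"], len(ok), len(bad))   # 40 0 1
```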

b. It is stated in the paper that eager updates cause deadlocks more than lazy updates. Why is that?

Eager replication keeps all replicas synchronized at all nodes by updating all the replicas as part of one atomic transaction. Because all nodes must be connected all the time, updates may cause waits and deadlocks; this is how serialization errors are prevented. Eager replication therefore gives serializable execution with no concurrency anomalies, but it reduces update performance and increases transaction response times because extra updates and messages are added to the transaction, and higher transaction rates mean higher deadlock rates. Eager replication is not suitable for mobile applications where most nodes are disconnected, since disconnected nodes stall updates. Quorum and cluster techniques improve update availability, but updates may still fail because of deadlocks.

Assume the database increases in size and replica updates are performed concurrently; the growth rate of deadlocks in eager replication will then be quadratic.

Replication of transactional data can cause unstable system performance. To keep replicas consistent, locks are needed, and locking causes waits and deadlocks. If the number of checkbooks per account increases by a factor of ten, the deadlock or reconciliation rate rises by a factor of a thousand.
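As a quick check of the arithmetic behind that factor of a thousand (a restatement of the cubic "scaleup pitfall" noted above, not a new derivation):

```latex
% Cubic growth: rate \propto k^3, so scaling k by a factor of 10 gives
\[
  \frac{\text{rate}(10k)}{\text{rate}(k)} \;=\; \frac{(10k)^3}{k^3} \;=\; 10^3 \;=\; 1000 .
\]
```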

Eager replication updates all replicas whenever a transaction updates any instance of the object. There are no serialization anomalies (inconsistencies) and no need for reconciliation in eager systems: locking detects potential anomalies and converts them to waits or deadlocks.

Lazy replication algorithms, by contrast, propagate replica updates asynchronously to the other nodes after the updating transaction commits. Some continuously connected systems use lazy replication to improve response time, and mobile applications require lazy replication. Lazy group replication uses timestamps for reconciliation.

In lazy master replication there are no reconciliations, but deadlocks may still occur. Disconnected operation and message delays mean that lazy replication has frequent reconciliation.

Transactions that would wait in an eager replication system instead face reconciliation in a lazy-group replication system. Waits are much more frequent than deadlocks because it takes two waits to make a deadlock; in addition, if waits are a rare event, then deadlocks are very rare (rare²). In eager replication, waits cause delays while deadlocks create application faults. In lazy replication, the more frequent waits are what determine the reconciliation frequency. Lazy-master replication is slightly less deadlock-prone than eager-group replication, primarily because its transactions have shorter duration, and the rate of deadlocks (failed transactions) increases with system size. Finally, when a large number of eager transactions try to update the same data at the same time, they deadlock because they need locks for concurrency control, whereas in a lazy system the propagated updates do not all run at the same time.
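As a back-of-the-envelope reading of the "rare²" remark (a loose sketch, not the paper's exact model): if the probability that a transaction waits is a small value p, a deadlock needs two such waits forming a cycle, so

```latex
\[
  P(\text{deadlock}) \;\approx\; p^{2} \;\ll\; p \;=\; P(\text{wait}),
  \qquad\text{e.g. } p = 10^{-3} \;\Rightarrow\; P(\text{deadlock}) \approx 10^{-6}.
\]
```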

GOOGLE FILE SYSTEM: (10)

2- In the paper titled “The Google File System” by Ghemawat et al., the authors describe how they designed and developed a distributed file system to deal with the huge amount of data at Google.

a. Describe what the system does to ensure high availability.

The authors describe the design and implementation of the Google distributed file system, which is built to handle large data sets and to run on commodity hardware that is cheaper, but has higher failure rates, than server-grade hardware.

The paper describes the design of the Google File System, which aims to provide high aggregate performance and fault tolerance while running on cheap commodity hardware. It uses a single-master, multiple-chunkserver model, with other design decisions made to minimize the load on the master.

The objective in building this system is to address the requirements revealed by studying Google's workloads, namely the importance of sustained bandwidth over latency, optimization for workloads with large reads and large sequential writes (appends), and atomicity for multiple clients appending to the same file.
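To make the single-master architecture concrete, here is a minimal, hypothetical sketch of the GFS read path (class and method names are illustrative, not real GFS APIs): the client converts a byte offset into a chunk index, asks the master only for the chunk handle and replica locations, and then fetches the data directly from a chunkserver, keeping bulk data traffic off the master.

```python
# Hypothetical sketch of the GFS read path: the master serves only metadata;
# the data itself flows between the client and a chunkserver.

CHUNK_SIZE = 64 * 1024 * 1024          # GFS uses fixed-size 64 MB chunks

class Master:
    def __init__(self):
        # filename -> list of chunk handles; chunk handle -> replica locations
        self.namespace = {}
        self.locations = {}

    def lookup(self, filename, chunk_index):
        """Return (chunk handle, replica locations) for one chunk."""
        handle = self.namespace[filename][chunk_index]
        return handle, self.locations[handle]


class Chunkserver:
    def __init__(self):
        self.chunks = {}                # chunk handle -> bytes

    def read(self, handle, offset, length):
        return self.chunks[handle][offset:offset + length]


class Client:
    def __init__(self, master, chunkservers):
        self.master = master
        self.chunkservers = chunkservers

    def read(self, filename, offset, length):
        """Translate a byte offset into a chunk index, ask the master for
        metadata, then read the data directly from one replica."""
        chunk_index = offset // CHUNK_SIZE
        chunk_offset = offset % CHUNK_SIZE
        handle, replicas = self.master.lookup(filename, chunk_index)
        server = self.chunkservers[replicas[0]]   # e.g. the closest replica
        return server.read(handle, chunk_offset, length)
```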

Availability and recoverability have to be provided on top of cheap hardware. Choosing cheap hardware made building a large system cost-effective, but it also meant that the individual computers in the system would not always be reliable: the cheap price tag went hand-in-hand with machines that have a tendency to fail.

The GFS developers built functions into the system to compensate for the inherent unreliability of individual components. Those functions include master and chunk replication, a streamlined recovery process, rebalancing, stale replica detection, garbage removal, and checksumming. While there is only one active master server per GFS cluster, copies of the master's state exist on other machines. Some copies, called shadow masters, provide limited services even while the primary master server is active. Those services are limited to read requests, since such requests do not alter data in any way. The shadow masters always lag a little behind the primary master, but usually only by fractions of a second. The master replicas maintain contact with the primary master, monitoring the operation log and polling chunkservers to keep track of data. If the primary master fails and cannot restart, a replica can take its place.

GFS replicates chunks to ensure that data is available even if hardware fails. It stores replicas on different machines across different racks; that way, if an entire rack were to fail, the data would still exist in an accessible form on other machines. GFS uses the unique chunk identifier to verify that each replica is valid: if one of a replica's handles does not match the chunk handle, the master server creates a new replica and assigns it to a chunkserver.

The master server also monitors the cluster as a whole and periodically rebalances the workload by shifting chunks from one chunkserver to another. It likewise monitors chunks to ensure that each replica is current: if a replica does not match the chunk's current version number, the master considers it a stale replica. The stale replica becomes garbage, and after three days the master deletes it. This delay is a safety measure: users can check on a garbage chunk before it is deleted, which prevents unwanted deletions.
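Below is a minimal, hypothetical sketch (illustrative names, not GFS code) of the two master-side checks just described: a replica whose version number is behind the chunk's current version is marked stale, and chunks that have lost replicas are queued for re-replication, with the chunks missing the most replicas handled first.

```python
import heapq

# Hypothetical sketch of master-side replica maintenance: stale replicas are
# detected by comparing chunk version numbers, and under-replicated chunks
# are re-replicated highest-priority-first (fewest remaining replicas first).

REPLICATION_TARGET = 3

def find_stale_replicas(current_version, reported):
    """reported: {chunkserver: version it reports for this chunk}."""
    return [server for server, version in reported.items()
            if version < current_version]

def rereplication_queue(chunks):
    """chunks: {handle: number of live, up-to-date replicas}.
    Returns chunk handles ordered by how many replicas they are missing."""
    heap = []
    for handle, live in chunks.items():
        missing = REPLICATION_TARGET - live
        if missing > 0:
            heapq.heappush(heap, (-missing, handle))   # most missing first
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]


if __name__ == "__main__":
    # One replica of chunk 0xA1 is behind the current version 7 -> stale.
    print(find_stale_replicas(7, {"cs1": 7, "cs2": 6, "cs3": 7}))   # ['cs2']
    # Chunk 0xB2 has only one replica left, so it is re-replicated first.
    print(rereplication_queue({"0xA1": 2, "0xB2": 1, "0xC3": 3}))   # ['0xB2', '0xA1']
```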

Data integrity is checked with checksums: each chunkserver keeps checksums for the blocks of the chunks it stores, and if a block's checksum does not match, the corrupted replica is reported, the master creates a new replica from a valid copy, and the corrupted one is removed. High availability also relies on fast recovery: both the master and the chunkservers are designed to restart in a few seconds.
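The following is an illustrative sketch of per-block checksumming; the 64 KB block size and 32-bit checksums come from the GFS paper, while the use of CRC-32 and the function names are assumptions made for the example.

```python
import zlib

# Illustrative sketch of per-block checksumming: each chunk is divided into
# 64 KB blocks and a 32-bit checksum is kept per block.  CRC-32 is an assumed
# checksum function for this example.

BLOCK_SIZE = 64 * 1024

def block_checksums(chunk_data):
    """Compute one 32-bit checksum per 64 KB block of a chunk."""
    return [zlib.crc32(chunk_data[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk_data), BLOCK_SIZE)]

def verify_read(chunk_data, stored_checksums, offset, length):
    """Verify the blocks that overlap a read before returning any data;
    return None to signal corruption (the caller would then read another
    replica and report the corrupted one)."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    for b in range(first, last + 1):
        block = chunk_data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]
        if zlib.crc32(block) != stored_checksums[b]:
            return None                      # corruption detected
    return chunk_data[offset:offset + length]


if __name__ == "__main__":
    data = bytes(200 * 1024)                 # a 200 KB chunk of zeros
    sums = block_checksums(data)
    assert verify_read(data, sums, 70 * 1024, 10) is not None
    corrupted = data[:70000] + b"\x01" + data[70001:]   # flip a byte in block 1
    assert verify_read(corrupted, sums, 64 * 1024, 10) is None
```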

From their experiments, the authors report the following recovery times:

• Experiment: one chunkserver killed. Cloning was limited to 40% of the chunkservers and 50 Mbps per clone to limit the impact on applications; all chunks were restored in 23.2 minutes.

• Experiment: two chunkservers killed. 266 of 16,000 chunks were reduced to a single replica; these chunks were re-replicated with higher priority, and 2x replication was achieved within 2 minutes.

b. The paper describes a HeartBeat message. What is it? What other data is piggybacked on it?

Heartbeats and handshakes are the GFS mechanisms that provide system updates through short electronic messages. These messages allow the master server to stay current with each chunkserver's status.

A GFS cluster contains a master server and a number of chunkservers. The master periodically communicates with the chunkservers using HeartBeat messages. Files are broken into large chunks that are replicated across a large number of chunkservers. The master stores in memory the file and chunk namespaces and mappings, as well as the locations of chunks. The master makes global policy decisions about where chunks are stored across different servers and different racks; however, a chunkserver has the final say as to which chunks are stored on it. At startup, the master builds up its in-memory structures by asking the chunkservers which chunks they hold, and by rebuilding the file system state from a checkpoint/log system similar to what we have previously seen. A new operation, record append, is introduced to support a multiple-producer model. Record appends are guaranteed to be atomic and to be written at least once; however, they may be written more than once, and this may differ from replica to replica (replicas are not guaranteed to be byte-identical), so the application is expected to deal with duplicate records.
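Here is an illustrative sketch (not GFS client-library code; the record format and names are assumptions) of how an application can cope with record-append semantics: producers tag each record with a unique identifier, and readers discard any record whose identifier has already been seen.

```python
import uuid

# Illustrative sketch of application-level handling of at-least-once record
# appends: duplicates may appear in the file, so readers deduplicate by ID.

def make_record(payload: bytes) -> dict:
    """Producer side: tag each appended record with a unique identifier."""
    return {"id": uuid.uuid4().hex, "payload": payload}

def read_unique(records):
    """Reader side: skip records whose ID has already been seen, so that
    at-least-once appends behave like exactly-once for the application."""
    seen = set()
    for rec in records:
        if rec["id"] in seen:
            continue            # duplicate produced by a retried append
        seen.add(rec["id"])
        yield rec["payload"]


if __name__ == "__main__":
    r1 = make_record(b"event-1")
    r2 = make_record(b"event-2")
    # A retried record append may leave r2 in the file twice:
    appended_file = [r1, r2, r2]
    print(list(read_unique(appended_file)))   # [b'event-1', b'event-2']
```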

The master can keep itself up to date because it controls all chunk placement and monitors chunkserver status with regular HeartBeat messages.

The lease mechanism is designed to minimize management overhead at the master. The master grants a lease to a primary replica (for 60 seconds), and leases are renewed using the HeartBeat messages exchanged between the master and the chunkservers. When a chunkserver dies, the master decrements the replica count of every chunk it held and re-replicates the chunks missing replicas in the background, giving highest priority to the chunks missing the greatest number of replicas. In short, the master periodically communicates with each chunkserver in HeartBeat messages to give it instructions and collect its state; lease grants and renewals, as well as the chunkserver's chunk state, are piggybacked on these messages.
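A minimal, hypothetical sketch of such an exchange (illustrative names and message fields, not the GFS wire protocol): the chunkserver piggybacks a report of the chunks it holds and any lease-renewal requests onto its HeartBeat, and the master records liveness, updates chunk locations, and returns the renewed leases.

```python
import time

# Hypothetical sketch of a HeartBeat exchange with piggybacked chunk state
# and lease renewals.

LEASE_SECONDS = 60

class Master:
    def __init__(self):
        self.last_seen = {}        # chunkserver -> last heartbeat time
        self.locations = {}        # chunk handle -> set of chunkservers
        self.leases = {}           # chunk handle -> (primary, expiry time)

    def heartbeat(self, server, chunk_report, lease_renewals):
        """Process one HeartBeat: record state, renew leases, send a reply."""
        now = time.time()
        self.last_seen[server] = now
        for handle in chunk_report:                       # piggybacked chunk state
            self.locations.setdefault(handle, set()).add(server)
        renewed = {}
        for handle in lease_renewals:                     # piggybacked lease renewals
            primary, _ = self.leases.get(handle, (server, 0))
            if primary == server:
                self.leases[handle] = (server, now + LEASE_SECONDS)
                renewed[handle] = LEASE_SECONDS
        return {"renewed_leases": renewed}


if __name__ == "__main__":
    master = Master()
    reply = master.heartbeat("chunkserver-7",
                             chunk_report=["0xA1", "0xB2"],
                             lease_renewals=["0xA1"])
    print(reply)   # {'renewed_leases': {'0xA1': 60}}
```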

