
Thread: Translation request for networking material

  1. #1
    abourwan (Member)
    Join Date: Nov 2010
    Posts: 78
    Country: Yemen

    Translation request for networking material

    Peace be upon you, guys.
    I have an excerpt from a networking book that I would like translated.
    Could someone kindly translate it for me? I would be grateful.
    By the way, the book is very large; it covers networks and their technologies.
    Here is the link to the book.
    My request is limited to translating pages 787–792 only.
    The book runs to roughly 1,000 pages or more (an encyclopedia).
    I tried Google Translate and the result was utterly hideous.
    I need someone fluent in English who can translate the words in context,
    so that I end up with a passage whose meaning hangs together.
    Here is the excerpt itself:


    [Figure 18.7: A gossip replica manager, showing its main state components]
    … manager has not received confirmation that this update has been received at all other
    replica managers. In the meantime, it propagates the update in gossip messages.
    Replica timestamp: This vector timestamp represents those updates that have been
    accepted by the replica manager – that is, placed in the manager’s log. It differs from
    the value timestamp in general, of course, because not all updates in the log are stable.
    Executed operation table: The same update may arrive at a given replica manager
    from a front end and in gossip messages from other replica managers. To prevent an
    update being applied twice, the ‘executed operation’ table is kept, containing the
    unique front-end-supplied identifiers of updates that have been applied to the value.
    The replica managers check this table before adding an update to the log.
    Timestamp table: This table contains a vector timestamp for each other replica
    manager, filled with timestamps that arrive from them in gossip messages. Replica
    managers use the table to establish when an update has been applied at all replica
    managers.
    The replica managers are numbered 0, 1, 2, …: the ith element of a vector timestamp
    held by replica manager i corresponds to the number of updates received from front ends
    by i, and the jth component (j ≠ i) equals the number of updates received by j and
    propagated to i in gossip messages. So, for example, in a three-manager gossip system
    a value timestamp of (2,4,5) at manager 0 would represent the fact that the value there
    reflects the first two updates accepted from front ends at manager 0, the first four at
    manager 1 and the first five at manager 2. The following sections look in more detail at
    how the timestamps are used to enforce the ordering.
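    To make the comparison and merge operations on vector timestamps concrete,
    here is a minimal Python sketch; the names vt_leq and vt_merge are
    illustrative assumptions, not taken from the book.

        # Vector timestamps are fixed-length lists of update counters,
        # one entry per replica manager.

        def vt_leq(a, b):
            # a <= b iff every element of a is <= the matching element of b
            # (the partial order used in the stability conditions below).
            return all(x <= y for x, y in zip(a, b))

        def vt_merge(a, b):
            # Element-wise maximum: the merge covers every update that
            # either timestamp covers.
            return [max(x, y) for x, y in zip(a, b)]

        # The example above: valueTS = (2,4,5) at manager 0 of a
        # three-manager system.
        valueTS = [2, 4, 5]
        print(vt_leq([2, 2, 3], valueTS))    # True: all components covered
        print(vt_leq([2, 4, 6], valueTS))    # False: third component not covered
        print(vt_merge(valueTS, [2, 4, 6]))  # [2, 4, 6]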
    Query operations • The simplest operation to consider is that of a query. Recall that a
    query request q contains a description of the operation and a timestamp q.prev sent by
    the front end. The latter reflects the latest version of the value that the front end has read
    or submitted as an update. Therefore the task of the replica manager is to return a value
    that is at least as recent as this. If valueTS is the replica’s value timestamp, then q can be
    applied to the replica’s value if:
    q.prev ≤ valueTS
    The replica manager keeps q on a list of pending query operations (that is, a hold-back
    queue) until this condition is fulfilled. It can either wait for the missing updates, which
    should eventually arrive in gossip messages, or request the updates from the replica
    managers concerned. For example, if valueTS is (2,5,5) and q.prev is (2,4,6), it can be
    seen that just one update is missing – from replica manager 2. (The front end that
    submitted q must have contacted a different replica manager previously for it to have
    seen this update, which the replica manager has not seen.)
    Once the query can be applied, the replica manager returns valueTS to the front
    end as the timestamp new shown in Figure 18.5. The front end then merges this with its
    timestamp: frontEndTS := merge(frontEndTS, new). The update at replica manager 1
    that the front end has not seen before the query in the example just given (q.prev has 4
    where the replica manager has 5) will be reflected in the update to frontEndTS (and
    potentially in the value returned, depending on the query).
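    A small sketch of query handling under this stability condition may help;
    the hold-back queue and the name process_query are illustrative
    assumptions, not the book's code.

        def vt_leq(a, b):
            return all(x <= y for x, y in zip(a, b))

        def vt_merge(a, b):
            return [max(x, y) for x, y in zip(a, b)]

        pending_queries = []   # hold-back queue for queries not yet stable

        def process_query(q_op, q_prev, value, valueTS):
            # Apply q only once the replica's value is at least as recent
            # as everything the front end has already seen.
            if vt_leq(q_prev, valueTS):
                return q_op(value), valueTS    # valueTS is returned as 'new'
            pending_queries.append((q_op, q_prev))  # wait for missing updates
            return None, None

        # At the front end, on receiving 'new':
        #   frontEndTS = vt_merge(frontEndTS, new)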
    Processing update operations in causal order • A front end submits an update request to
    one or more replica managers. Each update request u contains a specification of the
    update (its type and parameters) u.op, the front end’s timestamp u.prev, and a unique
    identifier that the front end generates, u.id. If the front end sends the same request u to
    several replica managers, it uses the same identifier in u each time – so that it will not
    be processed as several different but identical requests.
    When replica manager i receives an update request from a front end it checks that
    it has not already processed this request by looking up its operation identifier in the
    executed operation table and in the records in its log. The replica manager discards the
    update if it has already seen it; otherwise, it increments the ith element in its replica
    timestamp by one, to keep count of the number of updates it has received directly from
    front ends. Then the replica manager assigns to the update request u a unique vector
    timestamp whose derivation is given shortly, and a record for the update is placed in the
    replica manager’s log. If ts is the unique timestamp that the replica manager assigns to
    the update, then the update record is constructed and stored in the log as the following
    tuple:
    logRecord := <i, ts, u.op, u.prev, u.id>
    Replica manager i derives the timestamp ts from u.prev by replacing u.prev’s ith
    element with the ith element of its replica timestamp (which it has just incremented).
    This action makes ts unique, thus ensuring that all system components will correctly
    record whether or not they have observed the update. The remaining elements in ts are
    copied from u.prev, since it is these values sent by the front end that must be used to
    determine when the update is stable. The replica manager then immediately passes ts
    back to the front end, which merges it with its existing timestamp. Note that a front end
    can submit its update to several replica managers and receive different timestamps in
    return, all of which have to be merged into its timestamp.
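    As a sketch of this acceptance step at replica manager i (the record
    layout follows the logRecord tuple above; names such as receive_update
    are assumptions):

        def receive_update(i, u_op, u_prev, u_id, replicaTS, log, executed):
            # Discard the request if its identifier has been seen already,
            # either in the executed operation table or in the log.
            if u_id in executed or any(rec[4] == u_id for rec in log):
                return None
            replicaTS[i] += 1        # one more update direct from a front end
            ts = list(u_prev)
            ts[i] = replicaTS[i]     # makes ts unique to this acceptance
            log.append((i, ts, u_op, u_prev, u_id))  # <i, ts, u.op, u.prev, u.id>
            return ts                # returned immediately to the front end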
    The stability condition for an update u is similar to that for queries:
    u.prev ≤ valueTS
    This condition states that all the updates on which this update depends – that is, all the
    updates that have been observed by the front end that issued the update – have already
    been applied to the value. If this condition is not met at the time the update is submitted,
    it will be checked again when gossip messages arrive. When the stability condition has
    been met for an update record r, the replica manager applies the update to the value and
    updates the value timestamp and the executed operation table, executed:
    value := apply(value, r.u.op)
    valueTS := merge(valueTS, r.ts)
    executed := executed ∪ {r.u.id}
    The first of these three statements represents the application of the update to the value.
    In the second statement, the update’s timestamp is merged with that of the value. In the
    third, the update’s operation identifier is added to the set of identifiers of operations that
    have been executed – which is used to check for repeated operation requests.
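    A sketch of these three statements, applied once the stability condition
    holds (apply is the service-specific operation; the helper name
    apply_if_stable is an assumption):

        def vt_leq(a, b):
            return all(x <= y for x, y in zip(a, b))

        def vt_merge(a, b):
            return [max(x, y) for x, y in zip(a, b)]

        def apply_if_stable(r, value, valueTS, executed, apply):
            i, ts, op, prev, uid = r          # the logRecord tuple
            if not vt_leq(prev, valueTS):
                return value, valueTS         # not stable yet; recheck on gossip
            value = apply(value, op)          # value := apply(value, r.u.op)
            valueTS = vt_merge(valueTS, ts)   # valueTS := merge(valueTS, r.ts)
            executed.add(uid)                 # executed := executed ∪ {r.u.id}
            return value, valueTS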
    Forced and immediate update operations • Forced and immediate updates require
    special treatment. Recall that forced updates are totally as well as causally ordered. The
    basic method for ordering forced updates is for a unique sequence number to be
    appended to the timestamps associated with them, and to process them in order of this
    sequence number. As Chapter 15 explained, a general method for generating sequence
    numbers is to use a single sequencer process. But reliance upon a single process is
    inadequate in the context of a highly available service. The solution is to designate a
    so-called primary replica manager as the sequencer and to ensure that another replica
    manager can be elected to take over consistently as the sequencer should the primary
    fail. What is required is for a majority of replica managers (including the primary) to
    record which update is next in sequence before the operation can be applied. Then, as
    long as a majority of replica managers survive failure, this ordering decision will be
    honoured by a new primary elected from among the surviving replica managers.
    Immediate updates are ordered with respect to forced updates by using the primary
    replica manager to order them in this sequence. The primary also determines which
    causal updates are deemed to have preceded an immediate update. It does this by
    communicating and synchronizing with the other replica managers in order to reach
    agreement. Further details are provided in Ladin et al. [1992].
    Gossip messages • Replica managers send gossip messages containing information
    concerning one or more updates so that other replica managers can bring their state
    up-to-date. A replica manager uses the entries in its timestamp table to estimate which
    updates any other replica manager has not yet received (it is an estimate because that
    replica manager may have received more updates by now).
    A gossip message m consists of two items sent by the source replica manager: its
    log, m.log, and its replica timestamp, m.ts (see Figure 18.7). The replica manager that
    receives a gossip message has three main tasks:
    • to merge the arriving log with its own (it may contain updates not seen by the
    receiver before);
    • to apply any updates that have become stable and have not been executed before
    (stable updates in the arrived log may in turn make pending updates become
    stable);
    • to eliminate records from the log and entries in the executed operation table when
    it is known that the updates have been applied everywhere and for which there is
    no danger of repeats. Clearing redundant entries from the log and from the
    executed operation table is an important task, since they would otherwise grow
    without limit.
    Merging the log contained in an arrived gossip message with the receiver’s log is
    straightforward. Let replicaTS denote the recipient’s replica timestamp. A record r in
    m.log is added to the receiver’s log unless r.ts ≤ replicaTS – in which case it is already
    in the log or it has been applied to the value and then discarded.
    The replica manager merges the timestamp of the incoming gossip message with
    its own replica timestamp replicaTS, so that it corresponds to the additions to the log:
    replicaTS := merge(replicaTS, m.ts)
    When new update records have been merged into the log, the replica manager collects
    the set S of any updates in the log that are now stable. These can be applied to the value
    but care must be taken over the order in which they are applied so that the
    happened-before relation is observed. The replica manager sorts the updates in the set
    according to the partial order ‘≤’ between vector timestamps. It then applies the updates
    in this order, smallest first. That is, each r ∈ S is applied only when there is no s ∈ S
    such that s.prev < r.prev.
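    The following Python sketch pulls the first two receipt tasks together:
    merge the arriving log, merge replicaTS, then apply newly stable updates.
    Re-scanning until no record qualifies realizes the same smallest-first
    order, since a record is applied only after everything it causally
    depends on. Names such as receive_gossip are assumptions.

        def vt_leq(a, b):
            return all(x <= y for x, y in zip(a, b))

        def vt_merge(a, b):
            return [max(x, y) for x, y in zip(a, b)]

        def receive_gossip(m_log, m_ts, log, replicaTS, state):
            # Task 1: merge the arriving log, skipping covered records.
            for r in m_log:
                if not vt_leq(r[1], replicaTS):   # is r.ts <= replicaTS?
                    log.append(r)
            replicaTS[:] = vt_merge(replicaTS, m_ts)

            # Task 2: apply updates that have become stable, respecting
            # happened-before: a record is applied only once its prev is
            # covered by the value timestamp.
            progress = True
            while progress:
                progress = False
                for (i, ts, op, prev, uid) in log:
                    if uid not in state['executed'] and vt_leq(prev, state['valueTS']):
                        state['value'] = state['apply'](state['value'], op)
                        state['valueTS'] = vt_merge(state['valueTS'], ts)
                        state['executed'].add(uid)
                        progress = True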
    The replica manager then looks for records in the log that can be discarded. If the
    gossip message was sent by replica manager j and if tableTS is the table of replica
    timestamps of the replica managers, then the replica manager sets
    tableTS[j] := m.ts
    The replica manager can now discard any record r in the log for an update that has been
    received everywhere. That is, if c is the replica manager that created the record, then we
    require for all replica managers i:
    tableTS[i][c] ≥ r.ts[c]
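    A sketch of this discard test, with tableTS represented as a list of
    vector timestamps indexed by replica manager (the helper names are
    assumptions):

        def can_discard(r, tableTS):
            c, ts = r[0], r[1]   # c created the record; ts is its timestamp
            # Discard only if tableTS[i][c] >= r.ts[c] for every manager i.
            return all(row[c] >= ts[c] for row in tableTS)

        def discard_applied_records(log, tableTS):
            log[:] = [r for r in log if not can_discard(r, tableTS)]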
    The gossip architecture also defines how replica managers can discard entries in the
    executed operation table. It is important not to discard these entries too early; otherwise,
    a much-delayed operation could mistakenly be applied twice. Ladin et al. [1992]
    provide details of the scheme. In essence, front ends issue acknowledgements to the
    replies to their updates, so replica managers know when a front end will stop sending the
    update. They assume a maximum update propagation delay from that point.
    Update propagation • The gossip architecture does not specify when replica managers
    exchange gossip messages, or how a replica manager decides where to send its gossip
    messages. A robust update-propagation strategy is needed if all replica managers are to
    receive all updates in an acceptable time.
    The time it takes for all replica managers to receive a given update depends upon
    three factors:
    • the frequency and duration of network partitions;
    • the frequency with which replica managers send gossip messages;
    • the policy for choosing a partner with which to exchange gossip.
    The first factor is beyond the system’s control, although users can to some extent
    determine how often they work disconnectedly.
    The desired gossip-exchange frequency may be tuned to the application. Consider
    a bulletin board system shared between several sites. It seems unnecessary for every
    item to be dispatched immediately to all sites. But what if gossip is only exchanged after
    long periods, say once a day? If only causal updates are used, then it is quite possible for
    clients at each site to have their own consistent debates over the same bulletin board,
    oblivious to the discussions at the other sites. Then at, say, midnight, all the debates will
    be merged; but debates on the same topic are likely to be incongruous, when it would
    have been preferable for them to take account of one another. A gossip-exchange period
    of minutes or hours seems more appropriate in this case.
    There are several types of partner-selection policy. Golding and Long [1993]
    consider random, deterministic and topological policies for their ‘timestamped
    anti-entropy protocol’, which uses a gossip-style update propagation scheme.
    Random policies choose a partner randomly but with weighted probabilities so as
    to favour some partners over others – for example, near partners over far partners.
    Golding and Long found that such a policy works surprisingly well under simulations.
    Deterministic policies utilize a simple function of the replica manager’s state to make
    the choice of partner. For example, a replica manager could examine its timestamp table
    and choose the replica manager that appears to be the furthest behind in the updates it
    has received.
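    A minimal sketch of such a deterministic choice, assuming lag is measured
    as the element-wise shortfall of a partner's row in the timestamp table
    against our own replica timestamp (both the measure and the names are
    assumptions):

        def furthest_behind(me, replicaTS, tableTS):
            def lag(j):
                return sum(max(0, mine - theirs)
                           for mine, theirs in zip(replicaTS, tableTS[j]))
            return max((j for j in range(len(tableTS)) if j != me), key=lag)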
    Topological policies arrange the replica managers into a fixed graph. One
    possibility is a mesh: each replica manager sends gossip messages to the four replica
    managers to which it is connected. Another is to arrange the replica managers in a circle,
    with each passing on gossip only to its neighbour (in the clockwise direction, say), so
    that updates from any replica manager eventually traverse the circle. There are many
    other possible topologies, including trees.
    Different partner-selection policies such as these trade off the amount of
    communication against higher transmission latencies and the possibility that a single
    failure will affect other replica managers. The choice depends in practice on the relative
    importance of these factors. For example, the circle topology produces relatively little
    communication but is subject to high transmission latencies since gossip generally has
    to traverse several replica managers. Moreover, if one replica manager fails then the
    circle cannot function and needs to be reconfigured. By contrast, the random selection
    policy is not susceptible to failures but may produce more variable update propagation
    times.
    Discussion of the gossip architecture • The gossip architecture is aimed at achieving
    high availability for services. In its favour, this architecture ensures that clients can
    continue to obtain a service even when they are partitioned from the rest of the network,
    as long as at least one replica manager continues to function in their partition. But this
    type of availability is achieved at the expense of enforcing only relaxed consistency
    guarantees. For objects such as bank accounts, where sequential consistency is required,
    a gossip architecture can do no better than the fault-tolerant systems studied in Section
    18.3 and supply the service only in a majority partition.
    Its lazy approach to update propagation makes a gossip-based system
    inappropriate for updating replicas in near-real time, such as when users take part in a
    ‘real-time’ conference and update a shared document. A multicast-based system would
    be more appropriate for that case.
    The scalability of a gossip system is another issue. As the number of replica
    managers grows, so does the number of gossip messages that have to be transmitted and
    the size of the timestamps used. If a client makes a query, then this normally takes two
    messages (between front end and replica manager). If a client makes a causal update
    operation and if each of the R replica managers normally collects G updates into a gossip
    message, then the number of messages exchanged is 2 + (R – 1)/G. The first term
    represents communication between the front end and replica manager and the second is
    the update’s share of a gossip message sent to the other replica managers. Increasing G
    improves the number of messages but worsens the delivery latencies, because the replica
    manager waits for more updates to arrive before propagating them.
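    For example, with R = 5 replica managers and G = 10 updates collected per
    gossip message, a causal update costs about 2 + (5 – 1)/10 = 2.4 messages;
    raising G to 40 brings this down to 2.1 messages, at the price of updates
    sitting longer at the replica manager before being propagated.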
    One approach to making gossip-based services scalable is to make most of the
    replicas read-only. In other words, these replicas are updated by gossip messages but do
    not receive updates directly from front ends. This arrangement is potentially useful
    where the update/query ratio is small. Read-only replicas can be situated close to client
    groups and updates can be serviced by relatively few central replica managers. Gossip
    traffic is reduced since read-only replicas have no gossip to propagate, and vector
    timestamps need only contain entries for the updateable replicas.
    18.4.2 Bayou and the operational transformation approach
    The Bayou system [Terry et al. 1995, Petersen et al. 1997] provides data replication for
    high availability with weaker guarantees than sequential consistency, like the gossip
    architecture and the timestamped anti-entropy protocol. As in those systems, Bayou
    replica managers cope with variable connectivity by exchanging updates in pairs, in
    what the designers also call an anti-entropy protocol. But Bayou adopts a markedly
    different approach in that it enables domain-specific conflict detection and conflict
    resolution to take place.
    Consider the user who needs to update a diary while working disconnectedly. If
    strict consistency is required, in the gossip architecture updates would have to be
    performed using a forced (totally ordered) operation. But then only users in a majority
    partition could update the diary. The users’ access to the diary might thus be limited,
    regardless of whether they in fact are making updates that would break the diary’s …

  2. #2
    abourwan (Member)
    Join Date: Nov 2010
    Posts: 78
    Country: Yemen

    Re: Translation request for networking material

    I hope someone can help.
