Sun, 21 Nov 2010
http://www.shnenglu.com/qywyh/archive/2010/11/21/134208.html

1: Checking CPU load -- mpstat
mpstat -P ALL [interval [count]]

The parameters mean the following:
-P ALL    monitor all CPUs
interval  the time, in seconds, between two consecutive samples
count     the number of samples

The mpstat command obtains its data from /proc/stat.
The output fields mean the following:


CPU      processor ID
user     percentage of CPU time spent in user mode during the interval, not counting niced processes: Δuser/Δtotal*100
nice     percentage of CPU time spent by niced (lowered-priority) processes during the interval: Δnice/Δtotal*100
system   percentage of CPU time spent in kernel mode during the interval: Δsystem/Δtotal*100
iowait   percentage of time spent waiting for disk I/O during the interval: Δiowait/Δtotal*100
irq      percentage of time spent servicing hardware interrupts during the interval: Δirq/Δtotal*100
soft     percentage of time spent servicing software interrupts (softirqs) during the interval: Δsoftirq/Δtotal*100
idle     percentage of time the CPU was idle, excluding time spent waiting for disk I/O, during the interval: Δidle/Δtotal*100

intr/s   the number of interrupts the CPU received per second: Δintr/interval
The CPU's total working time is total_cur = user + system + nice + idle + iowait + irq + softirq

total_pre = pre_user + pre_system + pre_nice + pre_idle + pre_iowait + pre_irq + pre_softirq
user  = user_cur - user_pre
total = total_cur - total_pre

Here _cur denotes the current value and _pre the value sampled one interval earlier. All the percentages above are reported to two decimal places.
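The delta formulas above can be sketched in code (a toy illustration with made-up sample values, not part of mpstat itself):

```python
# Hypothetical helper illustrating the Δfield/Δtotal*100 formulas.
# Field names mirror /proc/stat's "cpu" line; the numbers are invented.

FIELDS = ("user", "nice", "system", "idle", "iowait", "irq", "softirq")

def cpu_percentages(pre: dict, cur: dict) -> dict:
    """Per-field CPU utilisation (%) between two samples, two decimals."""
    total = sum(cur[f] for f in FIELDS) - sum(pre[f] for f in FIELDS)
    return {f: round((cur[f] - pre[f]) / total * 100, 2) for f in FIELDS}

# Example: 1000 jiffies elapsed, 250 of them in user mode.
pre = dict(user=1000, nice=0, system=500, idle=7000, iowait=200, irq=10, softirq=40)
cur = dict(user=1250, nice=0, system=600, idle=7600, iowait=240, irq=15, softirq=45)
print(cpu_percentages(pre, cur)["user"])   # 25.0
```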

2: Checking disk I/O and CPU load -- vmstat
usage: vmstat [-V] [-n] [delay [count]]
              -V prints version.
              -n causes the headers not to be reprinted regularly.
              -a print inactive/active page stats.
              -d prints disk statistics
              -D prints disk table
              -p prints disk partition statistics
              -s prints vm table
              -m prints slabinfo
              -S unit size
              delay is the delay between updates in seconds. 
              unit size k:1000 K:1024 m:1000000 M:1048576 (default is K)
              count is the number of updates.

vmstat obtains its data from files under /proc (such as /proc/stat and /proc/meminfo).

The output fields mean the following:
FIELD DESCRIPTION FOR VM MODE
   Procs
       r: The number of processes waiting for run time.
       b: The number of processes in uninterruptible sleep.

   Memory
       swpd: the amount of virtual memory used.
       free: the amount of idle memory.
       buff: the amount of memory used as buffers.
       cache: the amount of memory used as cache.
       inact: the amount of inactive memory. (-a option)
       active: the amount of active memory. (-a option)

   Swap
       si: Amount of memory swapped in from disk (/s).
       so: Amount of memory swapped to disk (/s).

   IO
       bi: Blocks received from a block device (blocks/s).
       bo: Blocks sent to a block device (blocks/s).

   System
       in: The number of interrupts per second, including the clock.
       cs: The number of context switches per second.

   CPU
       These are percentages of total CPU time.
       us: Time spent running non-kernel code. (user time, including nice time)
       sy: Time spent running kernel code. (system time)
       id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
       wa: Time spent waiting for IO. Prior to Linux 2.5.41, shown as zero.
       st: Time spent in involuntary wait. Prior to Linux 2.6.11, shown as zero.
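As a small illustration of the field layout above, a data row of default vmstat output can be split into named fields like this (the sample row is made up):

```python
# Minimal sketch (not part of vmstat itself): parse one data row of
# default vmstat output into a dict keyed by the field names above.

VMSTAT_FIELDS = ("r", "b", "swpd", "free", "buff", "cache",
                 "si", "so", "bi", "bo", "in", "cs",
                 "us", "sy", "id", "wa", "st")

def parse_vmstat_row(row: str) -> dict:
    values = [int(v) for v in row.split()]
    return dict(zip(VMSTAT_FIELDS, values))

row = " 1  0      0 16936  85540 126384    0    0     5    11  101   25  3  1 95  1  0"
stats = parse_vmstat_row(row)
print(stats["us"], stats["id"])   # 3 95
```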

3: Checking memory usage -- free
usage: free [-b|-k|-m|-g] [-l] [-o] [-t] [-s delay] [-c count] [-V]
  -b,-k,-m,-g show output in bytes, KB, MB, or GB
  -l show detailed low and high memory statistics
  -o use old format (no -/+buffers/cache line)
  -t display total for RAM + swap
  -s update every [delay] seconds
  -c update [count] times
  -V display version information and exit

[root@Linux /tmp]# free

             total       used       free     shared    buffers     cached
Mem:        255268     238332      16936          0      85540     126384
-/+ buffers/cache:      26408     228860
Swap:       265000          0     265000

Mem: physical memory statistics.
-/+ buffers/cache: physical memory statistics adjusted for buffers and cache.
Swap: usage of the swap partition on disk; we will not go into it here.
The system's total physical memory is 255268 KB (256 MB), but the memory actually available is not the 16936 KB shown in the free column of the first row; that value only represents unallocated memory.

Row 1, Mem:
total: total physical memory.
used: total memory allocated, including buffers and cache, although part of that cache may not be in actual use.
free: unallocated memory.
shared: shared memory; ordinary systems rarely use it, so it is not discussed here.
buffers: memory allocated to buffers but not yet in use.
cached: memory allocated to cache but not yet in use. The difference between buffer and cache is explained below.
total = used + free
Row 2, -/+ buffers/cache:
used: the first row's used - buffers - cached; this is the amount of memory actually in use.
free: the sum of the unused buffers and cache plus the unallocated memory; this is the memory actually available to the system.
free2 = buffers1 + cached1 + free1   // free2 is from row 2; buffers1 etc. are from row 1

The difference between buffer and cache:
A buffer is something that has yet to be "written" to disk. 
A cache is something that has been "read" from the disk and stored for later use
Row 3:
From the operating system's point of view, the Mem row is what matters: buffers and cached both count as used, so the OS considers only 16936 KB free.
From an application's point of view, the -/+ buffers/cache row is what matters: buffers and cached count as available, because they exist to speed up file access and are reclaimed quickly whenever an application needs the memory.
So from an application's point of view, available memory = free memory + buffers + cached.
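Plugging the sample numbers from the free output above into these formulas (a quick check; all values in KB):

```python
# The two derived rows of free, computed from the sample Mem row above.

total, used, free = 255268, 238332, 16936
buffers, cached = 85540, 126384

used_real = used - buffers - cached   # row 2 "used": memory actually in use
available = free + buffers + cached   # row 2 "free": memory actually available

print(used_real, available)   # 26408 228860
assert total == used + free   # the Mem row is internally consistent
```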

swap
swap is the virtual memory partition under Linux: once physical memory is exhausted, disk space (the swap partition) is used as if it were memory.

4: Checking the network interfaces -- sar
See the man page for details.
4.1: Checking NIC traffic: sar -n DEV delay count
The maximum traffic a server NIC can sustain is determined by the card itself: 10M, 10/100 autosensing, 100M-plus, and 1G cards. Ordinary servers usually have 100 Mbit cards; some have gigabit.

Output fields:
IFACE
       Name of the network interface for which statistics are reported.

rxpck/s
       Total number of packets received per second.

txpck/s
       Total number of packets transmitted per second.

rxbyt/s
       Total number of bytes received per second.

txbyt/s
       Total number of bytes transmitted per second.

rxcmp/s
       Number of compressed packets received per second (for cslip etc.).

txcmp/s
       Number of compressed packets transmitted per second.

rxmcst/s
       Number of multicast packets received per second.
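Combining rxbyt/s and txbyt/s with the card's rated speed gives a rough link-utilisation figure; a minimal sketch, assuming a 100 Mbit link by default:

```python
# Rough NIC utilisation from sar's byte counters. rxbyt/s and txbyt/s
# are bytes per second; link speed is in bits per second.

def link_utilisation(rx_bytes_per_s: float, tx_bytes_per_s: float,
                     link_bps: float = 100e6) -> float:
    """Return utilisation of the busier direction, as a percentage."""
    busiest = max(rx_bytes_per_s, tx_bytes_per_s)
    return busiest * 8 / link_bps * 100

print(link_utilisation(6_250_000, 1_000_000))   # 50.0 on a 100 Mbit link
```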

4.2: Checking NIC errors: sar -n EDEV delay count
Output fields:
IFACE
       Name of the network interface for which statistics are reported.

rxerr/s
       Total number of bad packets received per second.

txerr/s
       Total number of errors that happened per second while transmitting packets.

coll/s
       Number of collisions that happened per second while transmitting packets.

rxdrop/s
       Number of received packets dropped per second because of a lack of space in linux buffers.

txdrop/s
       Number of transmitted packets dropped per second because of a lack of space in linux buffers.

txcarr/s
       Number of carrier-errors that happened per second while transmitting packets.

rxfram/s
       Number of frame alignment errors that happened per second on received packets.

rxfifo/s
       Number of FIFO overrun errors that happened per second on received packets.

txfifo/s
       Number of FIFO overrun errors that happened per second on transmitted packets.


5: Locating a problem process -- top, ps
top -d delay; see the man page for details.
ps aux   show detailed process information
ps axf   show the process tree

6: Checking which files a process has open -- lsof
Root privileges are required to see everything; otherwise you only see what falls within the logged-in user's permissions.

lsof -p 77        // files opened by the process with PID 77
lsof -d 4         // processes using file descriptor 4
lsof abc.txt      // processes that have abc.txt open
lsof -i :22       // processes using port 22
lsof -i tcp       // processes using the TCP protocol
lsof -i tcp:22    // processes using TCP port 22
lsof +d /tmp      // files under /tmp opened by processes
lsof +D /tmp      // same, but recurses into subdirectories; slower
lsof -u username  // files opened by processes belonging to that user

7: Checking how a program runs -- strace
usage: strace [-dffhiqrtttTvVxx] [-a column] [-e expr] ... [-o file]
              [-p pid] ... [-s strsize] [-u username] [-E var=val] ...
              [command [arg ...]]
   or: strace -c [-e expr] ... [-O overhead] [-S sortby] [-E var=val] ...
              [command [arg ...]]

Common options:
-f: trace child processes as well as the current process.
-c: count the time spent in, number of calls to, and errors from each system call.
-o file: write the output to file instead of to standard error (stderr).
-p pid: attach to the running process with the given pid; commonly used to debug background processes.

8: Checking disk usage -- df
test@wolf:~$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1              3945128   1810428   1934292  49% /
udev                    745568        80    745488   1% /dev
/dev/sda3             12649960   1169412  10837948  10% /usr/local
/dev/sda4             63991676  23179912  37561180  39% /data

9: Checking network connections -- netstat
Commonly used: netstat -lpn
Option descriptions:
 -p, --programs           display PID/Program name for sockets
 -l, --listening          display listening server sockets
 -n, --numeric            don't resolve names
 -a, --all, --listening   display all sockets (default: connected)



A brief history of Consensus, 2PC and Transaction Commit
Thu, 12 Aug 2010
http://www.shnenglu.com/qywyh/archive/2010/08/12/123258.html

*. "Time, Clocks and the Ordering of Events in a Distributed System" (1978)
    1. The issue is that in a distributed system you cannot tell if event A happened before event B, unless A caused B in some way. Each observer can see events happen in a different order, except for events that cause each other, ie there is only a partial ordering of events in a distributed system.
    2. Lamport defines the "happens before" relationship and operator, and goes on to give an algorithm that provides a total ordering of events in a distributed system, so that each process sees events in the same order as every other process.
    3. Lamport also introduces the concept of a distributed state machine: start a set of deterministic state machines in the same state and then make sure they process the same messages in the same order.
    4. Each machine is now a replica of the others. The key problem is making each replica agree what is the next message to process: a consensus problem.
    5. However, the system is not fault tolerant; if one process fails, the others have to wait for it to recover.
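The logical-clock construction behind points 2-4 can be sketched as follows (a minimal illustration of Lamport timestamps, not the full replicated-state-machine algorithm):

```python
# Minimal Lamport logical clock: each process keeps a counter,
# increments it on every local event, stamps outgoing messages,
# and on receipt advances to max(local, received) + 1.

class Process:
    def __init__(self, pid: int):
        self.pid = pid
        self.clock = 0

    def local_event(self) -> int:
        self.clock += 1
        return self.clock

    def send(self) -> int:
        self.clock += 1
        return self.clock          # timestamp carried by the message

    def receive(self, msg_ts: int) -> int:
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

# If A's send "happens before" B's receive, B's timestamp is larger.
a, b = Process(1), Process(2)
ts = a.send()          # a.clock becomes 1
print(b.receive(ts))   # 2

# Ties across processes are broken by process id, giving a total order.
order_key = lambda timestamp, pid: (timestamp, pid)
```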

*.  "Notes on Database Operating Systems" (1979).
    1. 2PC problem: Unfortunately 2PC would block if the TM (Transaction Manager) fails at the wrong time.

*.  "NonBlocking Commit Protocols" (1981)
    1. 3PC problem: The problem was coming up with a nice 3PC algorithm; it would take nearly 25 years!

*. "Impossibility of distributed consensus with one faulty process" (1985)
    1. this famous result is known as the "FLP" result
    2. By this time "consensus" was the name given to the problem of getting a bunch of processors to agree a value.
    3. The kernel of the problem is that you cannot tell the difference between a process that has stopped and one that is running very slowly, making dealing with faults in an asynchronous system almost impossible.
    4. a distributed algorithm has two properties: safety and liveness. 2PC is safe: no bad data is ever written to the databases, but its liveness properties aren't great: if the TM fails at the wrong point the system will block.
    5. The asynchronous case is more general than the synchronous case: an algorithm that works for an asynchronous system will also work for a synchronous system, but not vice versa.

*.  "The Byzantine Generals Problem" (1982)
    1. In this form of the consensus problem the processes can lie, and they can actively try to deceive other processes.

*.  "A Comparison of the Byzantine Agreement Problem and the Transaction Commit Problem." (1987) .
    1. At the time the best consensus algorithm was the Byzantine Generals, but this was too expensive to use for transactions.

*.  "Uniform consensus is harder than consensus" (2000)
    1. With uniform consensus all processes must agree on a value, even the faulty ones - a transaction should only commit if all RMs are prepared to commit.
   
*.  "The Part-Time Parliament" (submitted in 1990, published 1998)
    1. Paxos consensus algorithm
   
*.  "How to Build a Highly Available System Using Consensus" (1996).
    1. This paper provides a good introduction to building fault tolerant systems and Paxos.

*.  "Paxos Made Simple" (2001)
    1. The kernel of Paxos is that given a fixed number of processes, any majority of them must have at least one process in common. For example given three processes A, B and C the possible majorities are: AB, AC, or BC. If a decision is made when one majority is present eg AB, then at any time in the future when another majority is available at least one of the processes can remember what the previous majority decided. If the majority is AB then both processes will remember, if AC is present then A will remember and if BC is present then B will remember.
    2. Paxos can tolerate lost messages, delayed messages, repeated messages, and messages delivered out of order.
    3. It will reach consensus if there is a single leader for long enough that the leader can talk to a majority of processes twice. Any process, including leaders, can fail and restart; in fact all processes can fail at the same time, the algorithm is still safe. There can be more than one leader at a time.
    4. Paxos is an asynchronous algorithm; there are no explicit timeouts. However, it only reaches consensus when the system is behaving in a synchronous way, ie messages are delivered in a bounded period of time; otherwise it remains safe but may not make progress. There is a pathological case where Paxos will not reach consensus, in accordance with FLP, but this scenario is relatively easy to avoid in practice.
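The majority-intersection property in point 1 is easy to verify exhaustively for small clusters; a toy check:

```python
# Any two majorities of the same finite set must share at least one
# member. Checked exhaustively here for clusters of 3, 5, and 7 nodes.

from itertools import combinations

def majorities(nodes):
    need = len(nodes) // 2 + 1
    return [set(c) for k in range(need, len(nodes) + 1)
            for c in combinations(nodes, k)]

for n in (3, 5, 7):
    ms = majorities(list(range(n)))
    assert all(m1 & m2 for m1 in ms for m2 in ms)

print("every pair of majorities intersects")
```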

*.   "Consensus in the presence of partial synchrony" (1988)
    1. There are two versions of partial synchronous system: in one processes run at speeds within a known range and messages are delivered in bounded time but the actual values are not known a priori; in the other version the range of speeds of the processes and the upper bound for message deliver are known a priori, but they will only start holding at some unknown time in the future.
    2. The partial synchronous model is a better model for the real world than either the synchronous or asynchronous model; networks function in a predictable way most of the time, but occasionally go crazy.
   
*.   "Consensus on Transaction Commit" (2005).
    1. A third phase is only required if there is a fault, in accordance with the Skeen result. Given 2n+1 TM replicas, Paxos Commit will complete with up to n faulty replicas.
    2. Paxos Commit does not use Paxos to solve the transaction commit problem directly, ie it is not used to solve uniform consensus, rather it is used to make the system fault tolerant.
    3.  Recently there has been some discussion of the CAP conjecture: Consistency, Availability and Partition. The conjecture asserts that you cannot have all three in a distributed system: a system that is consistent, that can have faulty processes and that can handle a network partition.
    4. Now take a Paxos system with three nodes: A, B and C. We can reach consensus if two nodes are working, ie we can have consistency and availability. Now if C becomes partitioned and C is queried, it cannot respond because it cannot communicate with the other nodes; it doesn't know whether it has been partitioned, or if the other two nodes are down, or if the network is being very slow. The other two nodes can carry on, because they can talk to each other and they form a majority. So for the CAP conjecture, Paxos does not handle a partition because C cannot respond to queries. However, we could engineer our way around this. If we are inside a data center we can use two independent networks (Paxos doesn't mind if messages are repeated). If we are on the internet, then we could have our client query all nodes A, B and C, and if C is partitioned the client can query A or B unless it is partitioned in a similar way to C.
    5. In a synchronous network, if C is partitioned it can learn that it is partitioned when it does not receive messages within a fixed period of time, and thus can declare itself down to the client.

*.   "Co-Allocation, Fault Tolerance and Grid Computing" (2006).


[REF] http://betathoughts.blogspot.com/2007/06/brief-history-of-consensus-2pc-and.html



Lock-Free
Tue, 20 Jul 2010
http://www.shnenglu.com/qywyh/archive/2010/07/20/120886.html

A "wait-free" procedure can complete in a finite number of steps, regardless of the relative speeds of other threads.

A "lock-free" procedure guarantees progress of at least one of the threads executing the procedure. That means some threads can be delayed arbitrarily, but it is guaranteed that at least one thread makes progress at each step.

CAS: assuming the map hasn't changed since I last looked at it, copy it. Otherwise, start all over again.

Delayed update: In plain English, the loop says "I'll replace the old map with a new, updated one, and I'll be on the lookout for any other updates of the map, but I'll only do the replacement when the reference count of the existing map is one."
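The CAS loop reads the current value, copies and modifies it, attempts the swap, and starts over on failure. A sketch follows; Python has no hardware compare-and-swap, so compare_and_set below simulates one with a lock (on real hardware this is a single atomic instruction):

```python
import threading

class AtomicRef:
    """Simulated atomic reference; compare_and_set stands in for a
    hardware CAS instruction."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_set(self, expected, new) -> bool:
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

def lock_free_update(ref: AtomicRef, update):
    """Read-copy-CAS loop: retry until the map hasn't changed under us."""
    while True:
        old = ref.get()
        new = update(dict(old))        # copy, then modify the copy
        if ref.compare_and_set(old, new):
            return new

shared_map = AtomicRef({"hits": 0})
lock_free_update(shared_map, lambda m: {**m, "hits": m["hits"] + 1})
print(shared_map.get()["hits"])   # 1
```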





Lessons Learned from scaling Farmville
Fri, 16 Jul 2010
http://www.shnenglu.com/qywyh/archive/2010/07/16/120552.html


1.      Interactive games are write-heavy. Typical web apps read more than they write, so many common architectures may not be sufficient. Read-heavy apps can often get by with a caching layer in front of a single database. Write-heavy apps will need to partition so writes are spread out and/or use an in-memory architecture.

2.    Design every component as a degradable service. Isolate components so increased latencies in one area won't ruin another. Throttle usage to help alleviate problems. Turn off features when necessary.

3.    Cache Facebook data. When you are deeply dependent on an external component consider caching that component's data to improve latency.

4.    Plan ahead for new release related usage spikes.

5.      Sample. When analyzing large streams of data, looking for problems for example, not every piece of data needs to be processed. Sampling data can yield the same results for much less work.


The key ideas are to isolate troubled and highly latent services from causing latency and performance issues elsewhere through use of error and timeout throttling, and if needed, disable functionality in the application using on/off switches and functionality based throttles.
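The error-and-timeout throttling with on/off switches can be sketched as a minimal circuit breaker (names and thresholds here are illustrative, not FarmVille's actual code):

```python
# Toy circuit breaker: after `threshold` consecutive failures the
# feature is switched off (calls fail fast to a fallback) until reset.

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn, fallback=None):
        if self.open:
            return fallback          # degrade instead of adding latency
        try:
            result = fn()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True     # stop hitting the troubled service
            return fallback

def flaky():
    raise TimeoutError("upstream too slow")

cb = CircuitBreaker(threshold=2)
cb.call(flaky, fallback="cached")
cb.call(flaky, fallback="cached")
print(cb.open)   # True
```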




php copy on write
Tue, 18 May 2010
http://www.shnenglu.com/qywyh/archive/2010/05/18/115734.html

2. For assignment by reference: if the zval pointed to by the variable being copied has is_ref=0, copy-on-write takes place: the original zval's refcount is decremented, and the new variable and the referenced variable both point to a new zval with is_ref=1 and refcount=2. If the zval already has is_ref=1, the new variable points to it directly and refcount is incremented.
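The rule above can be modelled with a toy zval (a sketch of the described semantics, not PHP's actual engine):

```python
# Toy model of the zval split on assignment-by-reference. When the
# source zval has is_ref=0 and is shared (refcount > 1), it is split:
# the source keeps the old zval, and the two reference-linked variables
# get a fresh zval with is_ref=1, refcount=2.

class Zval:
    def __init__(self, value, is_ref=0, refcount=1):
        self.value, self.is_ref, self.refcount = value, is_ref, refcount

def assign_by_ref(var_zval: Zval) -> Zval:
    """Return the zval the new reference variable ends up pointing at."""
    if var_zval.is_ref == 0 and var_zval.refcount > 1:
        var_zval.refcount -= 1                 # old zval loses one user
        return Zval(var_zval.value, is_ref=1, refcount=2)
    var_zval.is_ref = 1
    var_zval.refcount += 1
    return var_zval

# $a and $b share a copy-on-write zval; $c = &$b forces a split.
shared = Zval("x", is_ref=0, refcount=2)
new = assign_by_ref(shared)
print(shared.refcount, new.is_ref, new.refcount)   # 1 1 2
```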


