Alternating optimization (2015-05-24)
http://www.shnenglu.com/guijie/archive/2015/05/24/210729.html

My personal understanding is that these several concepts are all equivalent.
‘alternating optimization’ or ‘alternative optimization’?
Sue (UTS) comment: ‘Alternating’ means you use this optimization with another optimization, one after the other. ‘Alternative’ means you use this optimization instead of any other.
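The post gives no code; purely as an illustration of the "one after the other" pattern, here is a minimal MATLAB sketch of alternating optimization on a made-up two-block objective f(x, y) = x^2 + y^2 + x*y - 3*x - 3*y, where each sub-problem has a closed-form minimizer:

% Alternating optimization on a made-up convex two-variable objective.
x = 0; y = 0;          % arbitrary starting point
for iter = 1:50
    x = (3 - y) / 2;   % exact minimizer of f over x with y held fixed
    y = (3 - x) / 2;   % exact minimizer of f over y with x held fixed
end
% The iterates converge to the joint minimizer (x, y) = (1, 1).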
How to use MATLAB to solve a quadratic optimization problem? (2012-11-21)
http://www.shnenglu.com/guijie/archive/2012/11/21/195475.html
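The post records only the question. A minimal sketch, assuming the Optimization Toolbox is installed: quadprog minimizes 0.5*x'*H*x + f'*x subject to A*x <= b, and the problem data below are made up purely for illustration.

% Solve min 0.5*x'*H*x + f'*x  s.t.  A*x <= b  (example data, assumed)
H = [2 0; 0 2];            % quadratic coefficient matrix (symmetric)
f = [-4; -6];              % linear coefficients
A = [1 1];                 % single constraint: x1 + x2 <= 3
b = 3;
x = quadprog(H, f, A, b);  % returns the constrained minimizer

Here the unconstrained minimum (where H*x = -f) lies at (2, 3), which violates x1 + x2 <= 3, so the constraint is active at the solution.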
Taylor series in several variables (2012-10-31)
http://www.shnenglu.com/guijie/archive/2012/10/31/194113.html
Excerpted from http://en.wikipedia.org/wiki/Taylor_series:
The Taylor series may also be generalized to functions of more than one variable with

T(x_1, \dots, x_d) = \sum_{n_1=0}^{\infty} \cdots \sum_{n_d=0}^{\infty} \frac{(x_1-a_1)^{n_1} \cdots (x_d-a_d)^{n_d}}{n_1! \cdots n_d!} \, \frac{\partial^{\,n_1+\cdots+n_d} f}{\partial x_1^{n_1} \cdots \partial x_d^{n_d}}(a_1, \dots, a_d).
For example, for a function f(x, y) that depends on two variables, x and y, the Taylor series to second order about the point (a, b) is

f(a,b) + (x-a)\, f_x(a,b) + (y-b)\, f_y(a,b) + \frac{1}{2!}\left[ (x-a)^2 f_{xx}(a,b) + 2(x-a)(y-b)\, f_{xy}(a,b) + (y-b)^2 f_{yy}(a,b) \right],

where the subscripts denote the respective partial derivatives.
In compact notation the full series can be written as

T(\mathbf{x}) = \sum_{|\alpha| \ge 0} \frac{(\mathbf{x}-\mathbf{a})^{\alpha}}{\alpha!} \, (\partial^{\alpha} f)(\mathbf{a}),

which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, again in full analogy to the single-variable case.
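A worked instance of the second-order formula (not in the original post; it is the standard example from the cited Wikipedia article): expand f(x, y) = e^x \ln(1+y) about (a, b) = (0, 0). The needed values are

f(0,0) = 0, \quad f_x(0,0) = 0, \quad f_y(0,0) = 1, \quad f_{xx}(0,0) = 0, \quad f_{xy}(0,0) = 1, \quad f_{yy}(0,0) = -1,

so to second order

e^x \ln(1+y) \approx y + xy - \frac{y^2}{2}.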
Gradient Descent (two examples: two strong papers that both use this method to solve their objective functions) (2012-10-19)
http://www.shnenglu.com/guijie/archive/2012/10/19/193522.html

http://en.wikipedia.org/wiki/Gradient_descent
http://zh.wikipedia.org/wiki/%E6%9C%80%E9%80%9F%E4%B8%8B%E9%99%8D%E6%B3%95

Gradient descent is based on the observation that if the multivariable function F(x) is defined and differentiable in a neighborhood of a point a, then F(x) decreases fastest if one goes from a in the direction of the negative gradient of F at a, -\nabla F(a).

Why does the step size have to change? Tianyi's explanation is good: if the step size is too large, the function value may go up, so the step size must be reduced. (The figure that belonged here was drawn on paper and then scanned.)

The explanation in "Gradient descent Intuition" of II. Linear Regression with One Variable, in Andrew Ng's Coursera course Machine Learning, is also very good: for a point on the right-hand side of the figure there, the derivative is positive, so the update term (minus the learning rate times the derivative) is negative, which makes the current a decrease.
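A short worked example of why a too-large step raises the function value (a standard argument, not from the post): for f(a) = a^2 the update is

a_{k+1} = a_k - \alpha f'(a_k) = (1 - 2\alpha)\, a_k,

so the iterates contract only when |1 - 2\alpha| < 1, i.e. 0 < \alpha < 1; for \alpha > 1 each step multiplies a_k by a factor of magnitude greater than 1, and f(a_k) rises at every iteration.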
Example 1: Fig. 1 ("Normalized graph Laplacian learning algorithm") of Toward the Optimization of Normalized Graph Laplacian (TNN 2011) is a very good example of gradient descent. Only Fig. 1 needs to be read; the rest can be skipped. Fig. 1 matches Prof. Shuning's lecture slides (Nonlinear Optimization, slide 4 on page 8, corresponding to textbook p. 124). The key is the line-search strategy, applying the step doubling/halving rule of slide 4 on page 4 of the same slides: as long as the objective decreases, move to the next search point and double the step size; otherwise stay at the current point and halve the step size. (A sketch of this rule is given below, after Example 2.)

Example 2: Distance Metric Learning for Large Margin Nearest Neighbor Classification (JMLR). The objective function is Eq. 14, a quadratic form in the matrix M; expanding it shows that it is linear in M and hence convex. Regarding the derivative with respect to M: the formula between Eqs. 18 and 19 in the appendix does not contain M.
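A minimal MATLAB sketch of the doubling/halving line-search rule from Example 1 (the objective, starting point, and tolerances are made up for illustration; this is not the paper's algorithm):

% Gradient descent with the step doubling/halving rule (illustrative).
f      = @(x) (x(1) - 1)^2 + 10*(x(2) + 2)^2;  % made-up smooth objective
grad_f = @(x) [2*(x(1) - 1); 20*(x(2) + 2)];   % its gradient
x = [5; 5];        % arbitrary starting point
step = 0.1;        % initial step size
for iter = 1:200
    g = grad_f(x);
    if norm(g) < 1e-6, break; end  % stop when the gradient nearly vanishes
    x_trial = x - step * g;        % tentative move along the negative gradient
    if f(x_trial) < f(x)           % objective decreased:
        x = x_trial;               %   accept the new search point
        step = 2 * step;           %   and double the step size
    else                           % objective did not decrease:
        step = step / 2;           %   stay at the current point, halve the step
    end
end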
# Fragment of a Newton-Raphson routine: f, f_d1, f_d2 (the function and its
# first and second derivatives), X, time, and count are defined in the
# enclosing function, which the final brace closes.
for (i in 2:time) {
  D1[i-1] <- f_d1(X[i-1])                  # first derivative at the current point
  D2[i-1] <- f_d2(X[i-1])                  # second derivative at the current point
  X[i] <- X[i-1] - D1[i-1] / D2[i-1]       # Newton-Raphson iteration step
  if (abs(D1[i-1]) < 0.05) break           # stop once the derivative is near zero
  points(X[i], f(X[i]), pch = 2, col = i)  # mark the new iterate on the plot
  count <- count + 1
}
return(list(x = X, derivative1 = D1, derivative2 = D2, count = count))
}