Search Results

Search found 3 results on 1 page for 'linearization'.

  • Numbering Regex Submatches

    - by gentlylisped
    Is there a canonical ordering of submatch expressions in a regular expression? For example, what is the order of the submatches in

        (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)

    Is it:

    a. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)
       (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))
       ([A-Z]+)
       ([0-9]{3})
       ([0-9]{3})
       ([0-9]{3})
       ([0-9]{3})

    b. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)
       (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))
       ([0-9]{3})
       ([0-9]{3})
       ([0-9]{3})
       ([0-9]{3})
       ([A-Z]+)

    or c. something else?

    Read the article
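    For the numbering question above: in every engine I can speak to (PCRE, Python's re, Java's java.util.regex, and Scala's thin wrapper around it), capture groups are numbered by the position of their opening parenthesis, read left to right, with the whole match listed first, which corresponds to ordering (b). A minimal Scala sketch to check it; the input string is a made-up sample:

        val pattern = """(([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)""".r
        val input = "127.000.000.001 LOCALHOST"   // hypothetical sample input

        pattern.findFirstMatchIn(input).foreach { m =>
          // Prints group 1 = the outer IP group, groups 2-5 = the octets,
          // group 6 = the [A-Z]+ group, i.e. opening-parenthesis order.
          (1 to m.groupCount).foreach(i => println(s"group $i: ${m.group(i)}"))
        }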

  • How are Scala traits compiled into Java bytecode?

    - by Justin Ardini
    I have played around with Scala for a while now, and I know that traits can act as the Scala equivalent of both interfaces and abstract classes. Recently I've been wondering how exactly traits are compiled into Java bytecode. I found some short explanations stating that traits are compiled exactly like Java interfaces when possible, and as an interface plus an additional class otherwise. I still don't understand, however, how Scala achieves class linearization, a feature not available in Java. Could anyone explain, or point me to a good source that explains, how traits compile to Java bytecode?

    Read the article
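    For the trait question above: as I understand it, Scala 2.12+ compiles a trait's concrete methods to Java 8 default methods, while earlier versions emitted a plain interface plus a static-method class (Foo$class) and forwarders in each inheriting class; in both schemes the linearization order is resolved at compile time, with each super call in a trait routed through a compiler-generated super accessor that statically targets the next trait in the concrete class's linearization. A toy sketch of what linearization means at the source level (my own example, not from the question):

        // Toy linearization demo; all names here are mine.
        trait Base { def describe: String = "Base" }
        trait A extends Base { override def describe: String = "A -> " + super.describe }
        trait B extends Base { override def describe: String = "B -> " + super.describe }

        // Linearization of C is: C, B, A, Base, AnyRef, Any.
        // super.describe inside B therefore targets A's method, not Base's.
        class C extends A with B

        object Demo extends App {
          println((new C).describe)   // prints: B -> A -> Base
        }

    Running javap -c on the compiled C.class shows the synthetic forwarders and super accessors that pin this order down in bytecode.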

  • difference equations in MATLAB - why the need to switch signs?

    - by jefflovejapan
    Perhaps this is more of a math question than a MATLAB one; I'm not really sure. I'm using MATLAB to compute an economic model (the New Hybrid ISLM model), and there's a confusing step where the author switches the sign of the solution. First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2", and "t" means "time t":

        %% --------------------------[2] MODEL proc-----------------------------%%
        % Define endogenous vars ('a' denotes t+1 values)
        syms y2a pi2a ya pia va y2t pi2t yt pit vt ;

        % Monetary policy rule
        ia = q1*ya + q2*pia;
        % ia = q1*(ya-yt) + q2*pia;   % option: speed-limit policy

        % Model equations
        IS   = rho*y2a + (1-rho)*yt - sigma*(ia-pi2a) - ya;
        AS   = beta*pi2a + (1-beta)*pit + alpha*ya - pia + va;
        dum1 = ya - y2t;
        dum2 = pia - pi2t;
        MPs  = phi*vt - va;
        optcon = [IS ; AS ; dum1 ; dum2 ; MPs];

    He then computes the matrix A:

        %% ------------------ [3] Linearization proc ------------------------%%
        % Differentiation
        xx   = [y2a pi2a ya pia va y2t pi2t yt pit vt];   % define vars
        jopt = jacobian(optcon, xx);

        % Define linear coefficients
        coef = eval(jopt);
        B = [ -coef(:,1:5)  ];
        C = [  coef(:,6:10) ];

        % B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]
        A = inv(C)*B;   % (linearized reduced form)

    As far as I understand, this A is the solution to the system: it's the matrix that turns time t+1 and t+2 variables into time t and t+1 variables (it's a forward-looking model). My question is essentially: why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step:

        B = [ -coef(:,1:5) ];

    Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.

    Read the article
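    On the sign question above, one hedged reading (my interpretation, not the textbook's own explanation): optcon stacks the model as a homogeneous system F = 0. Writing x_+ for the forward block of xx (columns 1-5, the t+1/t+2 variables) and x for the current block (columns 6-10), the Jacobian blocks are J_1 = coef(:,1:5) and J_2 = coef(:,6:10), so the linearized system reads J_1 x_+ + J_2 x = 0. The minus sign is what moving the forward-dated terms across the equals sign costs:

        \[
          J_1 x_+ + J_2 x = 0
          \;\Longrightarrow\;
          \underbrace{(-J_1)}_{B}\, x_+ = \underbrace{J_2}_{C}\, x
          \;\Longrightarrow\;
          x = C^{-1} B\, x_+ = A\, x_+ .
        \]

    Without the flip, A = C^{-1} J_1 would solve C x = J_1 x_+, a system with the wrong relative sign between the two blocks; reversing B is what keeps A consistent with F = 0.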
