Search Results

Search found 218 results on 9 pages for 'equations'.


  • assigning in system of differential equations

    - by Alireza
    Hi everyone! When I numerically solve a system of two differential equations: s1:=diff(nDi, t)=...; s2:=diff(nT, t)=...; ics:={...}; # initial conditions sys := {s1, s2, ics}: sol:=dsolve(sys,numeric); with respect to t, then the solution at, for example, t=4 has the form sol(4): [t=4, n1(t)=const1, n2(t)=const2]. Now, how is it possible to use the values of n1(t) and n2(t) for all t in another expression, say p, which involves n1(t) or n2(t) (like p = a + n1(t)*n2(t) + f(t), where a and f(t) are defined), and to plot p over an interval of t?
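
    In Maple, dsolve(..., numeric, output=listprocedure) returns procedures that can be evaluated at any t and reused inside other expressions. The same workflow can be sketched in SciPy, with made-up right-hand sides, a, and f(t) standing in for the asker's actual model:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Placeholder right-hand sides standing in for the asker's s1, s2.
def rhs(t, n):
    n1, n2 = n
    return [-0.3 * n1 + 0.1 * n2,      # d(n1)/dt  (assumed form)
            0.2 * n1 - 0.4 * n2]       # d(n2)/dt  (assumed form)

a = 1.0                                # assumed constant "a"
f = lambda t: np.sin(t)                # assumed known function f(t)

t_eval = np.linspace(0.0, 10.0, 400)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], t_eval=t_eval)

# p uses the solution components at every t, just like the asker's p.
p = a + sol.y[0] * sol.y[1] + f(t_eval)

plt.plot(t_eval, p)
plt.xlabel("t")
plt.ylabel("p(t)")
plt.show()
```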

    Read the article

  • Orbital equations, and power required to run them

    - by Adam Davis
    Due to a discussion on the SO IRC today, I'm curious about orbital mechanics: the equations needed to solve orbital problems, and the computing power required to solve complex problems. The question in particular is calculating when the Earth will plow into the Sun (or vice versa, depending on the frame of reference). I suspect that all the gravitational pulls within our solar system may need to be calculated, which makes me wonder what type of computer cluster is required, or whether this can be done on a single box. I don't have the experience to do a back-of-the-napkin test here, but perhaps you do? Also, many thanks to Gortok for the original inspiration (see comments).
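
    For scale: the physics is just Newtonian gravity applied pairwise, and the cost grows as O(n^2) per time step for n bodies, so a handful of planets integrates comfortably on a single machine. A rough sketch of the pairwise acceleration only (not a full integrator; the masses and positions below are approximate Sun/Earth values used purely as an illustration):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses):
    """Pairwise Newtonian gravitational acceleration on each body (O(n^2))."""
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]
            acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

# Toy two-body example: Sun and Earth (approximate values, metres / kg).
masses = np.array([1.989e30, 5.972e24])
positions = np.array([[0.0, 0.0], [1.496e11, 0.0]])
print(accelerations(positions, masses))  # Earth's entry is ~5.9e-3 m/s^2 toward the Sun
```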

    Read the article

  • Solving for the coefficients of linear equations with one known coefficient

    - by CppLearner
    clc; clear all; syms y a2 a3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % [ 0.5 0.25 0.125 ] [ a2 ] [ y ] % [ 1 1 1 ] [ a3 ] = [ 3 ] % [ 2 4 8 ] [ 6 ] [ 2 ] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% M = [0.5 0.25 0.125; 1 1 1; 2 4 8]; t = [a2 a3 6]; r = [y 3 2]; sol = M * t' s1 = solve(sol(1), a2) % solve for a2 s2 = solve(sol(2), a3) % solve for a3 This is what I have so far, and this is the output: sol = conj(a2)/2 + conj(a3)/4 + 3/4, conj(a2) + conj(a3) + 6, 2*conj(a2) + 4*conj(a3) + 48; s1 = - conj(a3)/2 - 3/2 - Im(a3)*i; s2 = - conj(a2) - 6 - 2*Im(a2)*i. sol looks like what we would have if we put the rows back into equation form: 0.5*a2 + 0.25*a3 + 0.125*a4 = y, a2 + a3 + a4 = 3, 2*a2 + 4*a3 + 8*a4 = 2, where a4 is known (a4 = 6). My problem is that I am stuck on how to use solve to actually solve these equations for the values of a2 and a3. s2 solves for a3, but it doesn't quite match what we have on paper: a2 + a3 + 6 = 3 should yield a3 = -3 - a2, yet the result carries conjugates and an imaginary part. Somehow I need to equate the vector solution sol to the values [y 3 2], row by row.
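
    In Matlab, declaring the symbols real (syms y a2 a3 real) typically removes the conj()/Im() terms, and handing the second and third row equations to solve together returns a2 and a3 in one go. A rough SymPy transcription of the same idea (a sketch, not the asker's code):

```python
import sympy as sp

# real symbols, mirroring the "syms ... real" fix for the conj()/Im() terms
y, a2, a3 = sp.symbols('y a2 a3', real=True)

M = sp.Matrix([[0.5, 0.25, 0.125],
               [1,   1,    1],
               [2,   4,    8]])
t = sp.Matrix([a2, a3, 6])     # a4 = 6 is known
r = sp.Matrix([y, 3, 2])

residual = M * t - r           # each row must equal zero
# Rows 2 and 3 contain only a2 and a3, so solve them simultaneously.
sol = sp.solve([residual[1], residual[2]], [a2, a3], dict=True)[0]
print(sol)                     # {a2: 17, a3: -20}
print(residual[0].subs(sol))   # the first row then pins down y: prints 4.25 - y
```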

    Read the article

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse stiff implicit ODE solver and have finished the code so far. I now tested the solver with the Van der Pol equation, and another stiff problem, which is of dimension 4. But to perform better tests I am searching for a bigger system. I'm thinking of the order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
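
    One standard way to manufacture a stiff, sparse test system of arbitrary size is the method of lines: discretising a 1-D reaction-diffusion equation in space gives an ODE system of any dimension N with a banded (tridiagonal) Jacobian, and the stiffness grows with the grid resolution. A sketch with an arbitrarily chosen PDE and parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 500                       # system dimension; pick 100...1000 as needed
h = 1.0 / (N + 1)             # grid spacing on (0, 1)

def rhs(t, u):
    """u_t = u_xx + u*(1 - u) with zero boundary values; stiffness grows like 1/h^2."""
    du = np.empty_like(u)
    du[0] = (u[1] - 2*u[0]) / h**2 + u[0]*(1 - u[0])
    du[-1] = (u[-2] - 2*u[-1]) / h**2 + u[-1]*(1 - u[-1])
    du[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2 + u[1:-1]*(1 - u[1:-1])
    return du

u0 = np.sin(np.pi * np.linspace(h, 1 - h, N))      # smooth initial profile
sol = solve_ivp(rhs, (0.0, 1.0), u0, method='BDF')  # reference run with a stiff solver
print(sol.y.shape)
```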

    Read the article

  • How is this number calculated?

    - by Hamid
    I have these numbers: A == 0x20000000, B == 18, C == (B/10), D == 0x20000004 == (A + C). A and D are in hex, but I'm not sure what the assumed numeric bases of the others are (although I'd assume base 10, since they don't explicitly state a base). It may or may not be relevant, but I'm dealing with memory addresses; A and D are pointers. The part I'm failing to understand is how 18/10 gives me 0x4. Edit: code for clarity: *address1 (the pointer points to address 0x20000000) printf("Test1: %p\n", address1); printf("Test2: %p\n", address1+(18/10)); printf("Test3: %p\n", address1+(21/10)); Output: Test1: 0x20000000 Test2: 0x20000004 Test3: 0x20000008
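
    The arithmetic is integer division followed by pointer scaling: 18/10 is evaluated in integer arithmetic as 1, and adding 1 to a pointer advances it by one pointee, apparently 4 bytes here. A quick check of the numbers (the 4-byte pointee size is an assumption read off the output):

```python
base = 0x20000000
size = 4                              # assumed size of the pointed-to type, in bytes

print(hex(base + (18 // 10) * size))  # 18//10 == 1  -> 0x20000004
print(hex(base + (21 // 10) * size))  # 21//10 == 2  -> 0x20000008
```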

    Read the article

  • Equation solver project

    - by Victor Barbu
    I would like to start a project aimed at students. My application has to solve any kind of equation, passed by the user as a string, exactly like Matlab's solve function. How should I do this? What programming language is best suited for this purpose? Thanks in advance. P.S. Two Matlab screenshots (not reproduced here) showed how I would like the user to enter an equation and receive the answer.
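
    As a rough feel for what "solve an equation passed as a string" involves, a computer-algebra library can handle both the parsing and the solving; a student-facing application would mostly be a front end around something like this (SymPy is used here purely as an illustration, not a recommendation of a particular stack):

```python
import sympy as sp

def solve_from_string(text):
    """Parse 'lhs = rhs' typed by the user and solve it symbolically."""
    lhs, rhs = text.split('=')
    equation = sp.Eq(sp.sympify(lhs), sp.sympify(rhs))
    return sp.solve(equation)

print(solve_from_string("x**2 - 5*x + 6 = 0"))   # [2, 3]
print(solve_from_string("sin(x) = 1/2"))         # [pi/6, 5*pi/6]
```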

    Read the article

  • Parsing basic math equations for children's educational software?

    - by Simucal
    Inspired by a recent TED talk, I want to write a small piece of educational software. The researcher created little miniature computers in the shape of blocks called "Siftables". [David Merril, inventor - with Siftables in the background.] There were many applications he used the blocks in, but my favorite was when each block was a number or a basic operation symbol. You could then re-arrange the blocks of numbers or operation symbols in a line, and it would display an answer on another Siftable block. So, I've decided I want to implement a software version of "Math Siftables" on a limited scale as my final project for a CS course I'm taking. What is the generally accepted way to parse and interpret a string of math expressions and, if it is valid, perform the operation? Is this a case where I should implement a full parser/lexer? I would imagine interpreting basic math expressions is a semi-common problem in computer science, so I'm looking for the right way to approach it. For example, if my Math Siftable blocks were arranged like: [1] [+] [2] this would be a valid sequence and I would perform the necessary operation to arrive at "3". However, if the child were to drag several operation blocks together, such as: [2] [÷] [÷] [5] it would obviously be invalid. Ultimately, I want to be able to parse and interpret any number of chains of operations with the blocks that the user can drag together. Can anyone explain to me or point me to resources for parsing basic math expressions? I'd prefer as language-agnostic an answer as possible.
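
    The usual lightweight answer is a tokenizer plus either the shunting-yard algorithm or a small recursive-descent parser; a full lexer/parser generator is overkill for single-number blocks and + - * /. A compact shunting-yard sketch that also rejects sequences with two operators in a row:

```python
OPS = {'+': (1, lambda a, b: a + b),
       '-': (1, lambda a, b: a - b),
       '*': (2, lambda a, b: a * b),
       '/': (2, lambda a, b: a / b)}

def evaluate(tokens):
    """Shunting-yard: tokens is a list like ['1', '+', '2']; raises ValueError if invalid."""
    values, ops, expect_number = [], [], True
    def apply(op):
        b, a = values.pop(), values.pop()
        values.append(OPS[op][1](a, b))
    for tok in tokens:
        if tok in OPS:
            if expect_number:                  # two operators in a row, e.g. [2][/][/][5]
                raise ValueError("operator in wrong position: " + tok)
            while ops and OPS[ops[-1]][0] >= OPS[tok][0]:
                apply(ops.pop())
            ops.append(tok)
            expect_number = True
        else:
            if not expect_number:
                raise ValueError("number in wrong position: " + tok)
            values.append(float(tok))
            expect_number = False
    if expect_number:                          # expression ended with an operator
        raise ValueError("incomplete expression")
    while ops:
        apply(ops.pop())
    return values[0]

print(evaluate(['1', '+', '2']))             # 3.0
print(evaluate(['2', '+', '3', '*', '4']))   # 14.0, precedence handled
```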

    Read the article

  • Finding the intersection of two vector equations.

    - by Matthew Mitchell
    I've been trying to solve this, and I derived an equation that allows the possibility of zero-division errors, which is not ideal: v1 = (a,b), v2 = (c,d), d1 = (e,f), d2 = (h,i), l1: v1 + λd1, l2: v2 + µd2. Equation to find the vector intersection of l1 and l2 programmatically by re-arranging for lambda: (a,b) + λ(e,f) = (c,d) + µ(h,i); a + λe = c + µh; b + λf = d + µi; µh = a + λe - c; µi = b + λf - d; µ = (a + λe - c)/h; µ = (b + λf - d)/i; (a + λe - c)/h = (b + λf - d)/i; a/h + λe/h - c/h = b/i + λf/i - d/i; λe/h - λf/i = (b/i - d/i) - (a/h - c/h); λ(e/h - f/i) = (b - d)/i - (a - c)/h; λ = ((b - d)/i - (a - c)/h)/(e/h - f/i). Intersection vector = (a + λe, b + λf). Not sure if it would even work in some cases; I haven't tested it. I need to know how to do this for values such as a-i in that example. Thank you.
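
    The divisions by h and i are what invite the zero-division errors; treating the problem as a 2x2 linear system and using a single determinant avoids them, since the determinant vanishes only when the lines are genuinely parallel. A sketch using the same a..i naming (illustrative, not tested against the asker's data):

```python
def intersect(a, b, e, f, c, d, h, i):
    """Intersection of l1: (a,b) + lam*(e,f) and l2: (c,d) + mu*(h,i).

    Solves  lam*e - mu*h = c - a,  lam*f - mu*i = d - b  by Cramer's rule;
    the only failure mode is det == 0, i.e. the direction vectors are parallel.
    """
    det = f * h - e * i
    if det == 0:
        return None                     # parallel (or coincident) lines
    lam = (h * (d - b) - i * (c - a)) / det
    return (a + lam * e, b + lam * f)

# x-axis through the origin vs. the vertical line x = 1:
print(intersect(0, 0, 1, 0, 1, -1, 0, 1))   # (1.0, 0.0)
```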

    Read the article

  • Solving simultaneous equations

    - by Milo
    Here is my problem: given x, y, z and ratio, where z is known and ratio is a known float representing a relative value, I need to find x and y. I know that: x / y == ratio and y - x == z. What I'm trying to do is build my own scroll pane, and I'm figuring out the scrollbar parameters. So, for example, if the scrollbar must be able to scroll 100 values (z) and the thumb must consume 80% of the bar (ratio = 0.8), then x would be 400 and y would be 500. Thanks
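
    Substituting x = ratio*y into y - x = z gives y = z/(1 - ratio) and x = ratio*z/(1 - ratio), which reproduces the 400/500 example. A one-function sketch (valid only for ratio != 1):

```python
def thumb_sizes(z, ratio):
    """Solve x/y == ratio and y - x == z for x and y (requires ratio != 1)."""
    y = z / (1.0 - ratio)
    x = ratio * y
    return x, y

print(thumb_sizes(100, 0.8))   # (400.0, 500.0)
```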

    Read the article

  • Formatting equations in LaTeX

    - by jetsam
    When I include a numbered equation in LaTeX, i.e. {\begin{equation} $$ $$ ... \end{equation} }, the space above the equation (the blank space between the preceding text and the equation) is huge. How do I make it smaller?
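
    Two usual culprits are a blank line before \begin{equation} (which starts a new paragraph) and the \abovedisplayskip length; note also that $$ ... $$ does not belong inside the equation environment. A minimal sketch of both adjustments (the lengths are illustrative, and depending on the class and font-size changes they may need to be reapplied):

```latex
\documentclass{article}
% shrink the vertical space above displayed equations (illustrative values)
\AtBeginDocument{%
  \setlength{\abovedisplayskip}{3pt plus 1pt minus 1pt}%
  \setlength{\abovedisplayshortskip}{0pt plus 1pt}%
}
\begin{document}
Some preceding text, with no blank line before the display.
\begin{equation}
  y = mx + b
\end{equation}
\end{document}
```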

    Read the article

  • Error in Ordinary Differential Equation representation

    - by Priya M
    UPDATE: I am trying to find the Lyapunov exponents described in the link LE. To figure out how it works, I am testing it on the following set of ordinary differential equations (chosen just to learn how to handle cos and sin terms in an ODE): f(1)=ALPHA*(y-x); f(2)=x*(R-z)-y; f(3) = 10*cos(x); with x=X(1); y=X(2); cos(y)=X(3); Here f(1) means dx/dt, f(2) is dy/dt, and f(3) in this case would be -10*sin(x). However, when writing x=X(1); y=X(2); I am unsure how to express the cos term. This is just a trial example, so that I know how to work with equations that contain cos, sin, etc. terms as functions of another variable. When using ode45 to solve these equations, [T,Res]=sol(3,@test_eq,@ode45,0,0.01,20,[7 2 100 ],10); it throws the following error: ??? Attempted to access (2); index must be a positive integer or logical. Error in ==> Eq at 19 x=X(1); y=X(2); cos(x)=X(3); Is my representation x=X(1); y=X(2); cos(y)=X(3); alright? How do I resolve the error? Thank you
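
    The assignment cos(y)=X(3) is the root of the problem: each state variable needs a plain name, and any cos or sin of a state is simply evaluated inside the right-hand side (or, if cos(x) itself is meant to be a state w, its derivative is written via the chain rule as dw/dt = -sin(x)*dx/dt). A sketch of the first, simpler reading in SciPy, with ALPHA and R as placeholder values:

```python
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, R = 10.0, 28.0            # placeholder constants

def rhs(t, X):
    x, y, z = X                  # every state gets a plain name; cos only appears on the RHS
    return [ALPHA * (y - x),
            x * (R - z) - y,
            10.0 * np.cos(x)]    # dz/dt = 10*cos(x); no need to write cos(x) = X(3)

sol = solve_ivp(rhs, (0.0, 20.0), [7.0, 2.0, 100.0], max_step=0.01)
print(sol.y[:, -1])
```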

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. Its pretty easy to get lost in the details – so I’ve tried to organize these methods in a way that I can quickly look this up. I’ve included links to detailed explanations and to C++ code examples. I’ve tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did ! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination –write the original matrix as the product of ints inverse matrix & its reduced row-echelon matrix à reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Transform the linear set of equations into a single equation & then use numerical integration (as integration formulas have Sums, it is implemented iteratively). 
an optimization of Gauss-Jacobi: 1.5 times faster, requires 0.25 iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or n solutions for an n order polynomial use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence à conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange a solution to isolate a variable then apply incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it ad midpoint into sub-intervals and then apply incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient à reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval , & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MutiRoot method for solving multiple roots Can be easily applied to an of n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations !!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step. 
Similar implementation to False Positive method Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points for as close as possible fit within range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points à concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Legrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences. 
For a given set of 3 data points , fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise à the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance –-> confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - Derivative of saddle points will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618.. 
Achieves reduction by evaluating a single function instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear , in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerative simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n-1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima via hill climbing Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sums of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. 
Provide an approximation of the derivative within a O(h^2) accuracy There is also central difference & extended central difference which has a O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage Trapezoidal Rule for default 2nd formulas, Simpson 1/3 Rule for substituting 3 point formulas, Simpson 3/8 Rule for 4 point formulas. For 4 point formulas use Bodes Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpsons Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) à a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. 
Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn à asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions à accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solution of differential equations are located at 2 different values of the independent variable x à more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication à convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine what are the solution eigenvalue & eigenvectors Similar Matrices - have same eigenvalues. 
Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but Computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition à suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix vua N-2 orthogonal transforms     Whats Next Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (eg MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott’s books – interesting) Sooner or later, I’ll cover the above topics as well.
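
    As a concrete instance of one of the methods catalogued above, a bare-bones Newton-Raphson root finder is only a few lines once the function and its derivative are supplied (an illustrative sketch, not code from the post):

```python
def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the update is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
print(newton_raphson(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0))   # 1.41421356...
```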

    Read the article

  • Why can't flowcharts or mathematical equations created in Microsoft Office and saved in .docx format be opened by LibreOffice?

    - by user33831
    I am using Ubuntu 11.10 and the LibreOffice that comes with it. Before this, I was a Windows user, and some of my previous documents were saved in .docx format. When I use LibreOffice to open those .docx files I can view all the text, but I can't view the flowcharts I drew or the mathematical equations. Another issue: if I create a new flowchart with LibreOffice and save it in a .docx file, when I re-open that file I can't view the flowcharts, yet they are still there, occupying space. No problem with the .odt format, of course. Does anyone know why this happens? Thanks in advance.

    Read the article

  • LaTeX --- Is there a way to shift the equation numbering one tab space from the right margin (shift towards left)?

    - by Murari
    I have been formatting my dissertation and one little problem is holding me up. I used the following code to typeset an equation: \begin{align} & R=\frac{P^2}{P+S'} \label{eqn:SCS}\\ &\mbox {where} \quad \mbox R = \mbox {Watershed Runoff} \notag\\ &\hspace{0.63in} \mbox P = \mbox{Rainfall} \notag\\ &\hspace{0.63in} \mbox S' = \mbox{Storage in the watershed $=\frac{1000}{CN}-10$ }\notag \end{align} My output requirement is that the equation should begin one tab space from the left margin, and the equation number should end one tab space from the right margin. With the above code, the equation begins at the right place but the numbering does not. Any help will be extremely appreciated. Thanks, MP

    Read the article

  • Runge-Kutta Method with adaptive step

    - by infoholic_anonymous
    I am implementing Runge-Kutta method with adaptive step in matlab. I get different results as compared to matlab's own ode45 and my own implementation of Runge-Kutta method with fixed step. What am I doing wrong in my code? Is it possible? function [ result ] = rk4_modh( f, int, init, h, h_min ) % % f - function handle % int - interval - pair (x_min, x_max) % init - initial conditions - pair (y1(0),y2(0)) % h_min - lower limit for h (step length) % h - initial step length % x - independent variable ( for example time ) % y - dependent variable - vertical vector - in our case ( y1, y2 ) function [ k1, k2, k3, k4, ka, y ] = iteration( f, h, x, y ) % core functionality performed within loop k1 = h * f(x,y); k2 = h * f(x+h/2, y+k1/2); k3 = h * f(x+h/2, y+k2/2); k4 = h * f(x+h, y+k3); ka = (k1 + 2*k2 + 2*k3 + k4)/6; y = y + ka; end % constants % relative error eW = 1e-10; % absolute error eB = 1e-10; s = 0.9; b = 5; % initialization i = 1; x = int(1); y = init; while true hy = y; hx = x; %algorithm [ k1, k2, k3, k4, ka, y ] = iteration( f, h, x, y ); % error estimation for j=1:2 [ hk1, hk2, hk3, hk4, hka, hy ] = iteration( f, h/2, hx, hy ); hx = hx + h/2; end err(:,i) = abs(hy - y); % step adjustment e = abs( hy ) * eW + eB; a = min( e ./ err(:,i) )^(0.2); mul = a * s; if mul >= 1 % step length admitted keepH(i) = h; k(:,:,i) = [ k1, k2, k3, k4, ka ]; previous(i,:) = [ x+h, y' ]; %' i = i + 1; if floor( x + h + eB ) == int(2) break; else h = min( [mul*h, b*h, int(2)-x] ); x = x + keepH(i-1); end else % step length requires further adjustments h = mul * h; if ( h < h_min ) error('Computation with given precision impossible'); end end end result = struct( 'val', previous, 'k', k, 'err', err, 'h', keepH ); end The function in question is: function [ res ] = fun( x, y ) % res(1) = y(2) + y(1) * ( 0.9 - y(1)^2 - y(2)^2 ); res(2) = -y(1) + y(2) * ( 0.9 - y(1)^2 - y(2)^2 ); res = res'; %' end The call is: res = rk4( @fun, [0,20], [0.001; 0.001], 0.008 ); The resulting plot for x1 : The result of ode45( @fun, [0, 20], [0.001, 0.001] ) is:
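
    The overall structure of the asker's method (advance one full step, redo it with two half steps, compare, then grow or shrink h) is easier to inspect in a stripped-down step-doubling RK4; the Python sketch below illustrates that generic idea on the asker's test system rather than fixing the Matlab code itself:

```python
import numpy as np

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

def rk4_adaptive(f, x0, x_end, y0, h=0.01, tol=1e-8, h_min=1e-6):
    """Step doubling: compare one h-step against two h/2-steps and adapt h."""
    x, y = x0, np.asarray(y0, float)
    xs, ys = [x], [y]
    while x < x_end:
        h = min(h, x_end - x)
        full = rk4_step(f, x, y, h)
        half = rk4_step(f, x + h/2, rk4_step(f, x, y, h/2), h/2)
        err = np.max(np.abs(half - full)) + 1e-300
        scale = 0.9 * (tol / err) ** 0.2         # RK4 local error ~ h^5
        if err <= tol:                           # accept the more accurate result
            x, y = x + h, half
            xs.append(x); ys.append(y)
        elif h * scale < h_min:
            raise RuntimeError("step size underflow")
        h *= min(max(scale, 0.1), 5.0)           # keep the change factor bounded
    return np.array(xs), np.array(ys)

# The asker's test system, transcribed as the example right-hand side.
f = lambda x, y: np.array([y[1] + y[0]*(0.9 - y[0]**2 - y[1]**2),
                           -y[0] + y[1]*(0.9 - y[0]**2 - y[1]**2)])
xs, ys = rk4_adaptive(f, 0.0, 20.0, [0.001, 0.001])
print(len(xs), ys[-1])
```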

    Read the article

  • What's the fastest way to approximate the period of data using Octave?

    - by John
    I have a set of data that is periodic (but not sinusoidal). I have a set of time values in one vector and a set of amplitudes in a second vector. I'd like to quickly approximate the period of the function. Any suggestions? Specifically, here's my current code. I'd like to approximate the period of the vector x(:,2) against the vector t. Ultimately, I'd like to do this for lots of initial conditions and calculate the period of each and plot the result. function xdot = f (x,t) xdot(1) =x(2); xdot(2) =-sin(x(1)); endfunction x0=[1;1.75]; #eventually, I'd like to try lots of values for x0(2) t = linspace (0, 50, 200); x = lsode ("f", x0, t) plot(x(:,1),x(:,2)); Thank you! John
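
    One quick approach is to measure the average spacing between upward crossings of the signal's mean; autocorrelation or the dominant FFT peak are the other usual options. A sketch in Python/NumPy (the same few lines translate directly to Octave; the test signal below is made up, with a known period of 2*pi):

```python
import numpy as np

def estimate_period(t, x):
    """Average spacing between upward crossings of the signal's mean value."""
    x = np.asarray(x, float) - np.mean(x)
    up = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]          # indices just before a crossing
    # linearly interpolate each crossing time for a bit more accuracy
    t_cross = t[up] - x[up] * (t[up + 1] - t[up]) / (x[up + 1] - x[up])
    return np.mean(np.diff(t_cross))

# Quick self-check on a non-sinusoidal periodic signal with period 2*pi:
t = np.linspace(0, 50, 2000)
x = np.sign(np.sin(t)) + 0.3 * np.sin(3 * t)
print(estimate_period(t, x))    # ~ 6.28
```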

    Read the article

  • Matlab help: I am given a second-order differential equation. How do I use Matlab to find the unit step response and impulse response?

    - by Cady Smith
    I have the second-order differential equation d^2(y(t))/dt^2 + B1*d(y(t))/dt + C1*y(t) = A1*x(t), where t is in seconds and t > 0. A1, B1, and C1 are constants: A1 = 3.8469x10^6, B1 = 325.6907, C1 = 3.8469x10^6. This system is linear, time-invariant, and causal; it is called H1. I want to use Matlab to compute and plot the impulse response function h1(t) and the unit step response function g1(t) of this system.
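
    Since H1 is LTI, its transfer function is H1(s) = A1 / (s^2 + B1*s + C1), and in Matlab the Control System Toolbox functions tf, impulse, and step plot the two responses directly. An equivalent SciPy sketch, shown only as an illustration:

```python
import matplotlib.pyplot as plt
from scipy import signal

A1, B1, C1 = 3.8469e6, 325.6907, 3.8469e6
H1 = signal.TransferFunction([A1], [1.0, B1, C1])   # A1 / (s^2 + B1*s + C1)

t_h, h = signal.impulse(H1)       # impulse response h1(t)
t_g, g = signal.step(H1)          # unit step response g1(t)

plt.plot(t_h, h, label="h1(t) impulse response")
plt.plot(t_g, g, label="g1(t) unit step response")
plt.xlabel("t [s]")
plt.legend()
plt.show()
```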

    Read the article

  • How to type all the math, stat, greek, equations EFFICIENTLY in libreoffice?

    - by kernel_panic
    I am preparing a physics report that is full of Greek letters, statistics, and calculus. I know there is the existing question about how to insert a Greek symbol, but my problem is that I can't keep fiddling with a drop-down/scroll list for every symbol (my paper is FULL of them). Is there a way to do something with my keyboard layout and turn it into something like the one Tony Stark uses in Ironman (I am not kidding)? I am literally tired of this fiddling; after half a day I have completed just 2 sheets.

    Read the article

  • Force apt-get to ask all install equations again?

    - by user204744
    I made a mistake. I answered the wrong way when installing something with apt-get. Now, when I purge it and reinstall it, it doesn't ask me the question any more. How do I force apt-get to ask me the question again? I'd really rather do this through the package manager than making the changes manually; that's the whole point of using a package manager.... The package in question is jackd. The first time I installed it, I answered "no" to the question about real-time priority. Now, I wish I'd said yes. However, even though I purge and install it again, I don't get the question. (I have tried dpkg-reconfigure jackd -- that doesn't give me any questions to answer.)

    Read the article

  • Netlogo programming question - is it possible to put balanced chemical equations in a model?

    - by user286190
    Hi, I was wondering whether it is possible to put balanced chemical equations, if possible including state symbols, into the existing NetLogo model that I am using. I have not seen any examples in the Models Library, so I was not sure whether it is possible. I want the model to allow the user to input a balanced chemical equilibrium equation, or to display the equations so the user can select from them if they do not want to enter their own. Any help will be greatly appreciated. Thanks. For example: ethane + oxygen -> carbon dioxide + steam, C2H6 + O2 -> CO2 + H2O

    Read the article
