Search Results

Search found 6922 results on 277 pages for 'moving platforms'.


  • Enhance GIMP’s Image Editing Power with Gimp Paint Studio

    - by Asian Angel
    Does your GIMP installation need a little super-charging? Using Gimp Paint Studio you can add a wonderful set of brushes, tools, and more to GIMP and take your work up to the next level. For our example we chose to install the beta version of Gimp Paint Studio on Ubuntu 10.10. Once you download the .zip file and unzip it, all that you need to do is manually transfer the contents shown here to the appropriate GIMP folders on your system. You can see the location of the destination folders here on our system… Note: Make certain to make a back-up copy of the “sessionrc and toolrc files” before you transfer Gimp Paint Studio into your installation (in case you would like to, or need to, revert to the originals later). When you finish transferring the files, start GIMP up and get ready to have fun. And if your experience is like ours, then you should see a noticeable difference in window size and arrangement from the default settings. Here are some samples of the exceptional artwork done by Ramon Miranda and Mozart Couto using Gimp Paint Studio. Really impressive! Artwork by Ramon Miranda & Mozart Couto. Watch the introduction video and see Gimp Paint Studio in action. Download Gimp Paint Studio for Linux, Windows, and Mac [Gimp Paint Studio Homepage] *Keep in mind that there are stable and beta releases available, so choose the version that you are most comfortable using. View the Installation Guides for Gimp Paint Studio *Page contains wonderful “video and written” versions for adding/installing Gimp Paint Studio to your system. Gimp Paint Studio Video Tutorials Library Visit the Gimp Paint Studio Gallery

    Read the article

  • Ask the Readers: Do You Use the Command Line?

    - by Asian Angel
    Most people have heard of it but not everyone is familiar or comfortable with how to use this bastion of geekdom. This week we would like to know if you use the command line or not. The command line…the bastion of ultimate geekery in many people’s eyes. You often hear people referring to doing things using the command line, so there must be something to it, right? For some people using the command line is the best, most efficient, and easiest way to do things on their systems. These are the people that many of us wish we were like. Next you have those who are proficient at using the command line but do not rely on it for everything they do on their systems. Then there are people who know how to perform some tasks or hacks using the command line but may not be as comfortable or knowledgeable as they wish to be using it. Moving on you find those who are interested in learning how to use the command line and just need a small push to get started. Perhaps you feel too intimidated to learn it and just need the right opportunity to come along. And maybe you do not care one way or the other so long as you get done what you want to do on your system. Or you may prefer to simply use a graphical interface since that is quicker and easier for you (along with being familiar). You can find the whole range of people when it comes to using the command line… This week we would like to know if you use the command line or not. What command line category do you fit into? Power user? Casual usage? Totally lost? Let us know in the comments!

    Read the article

  • SQL SERVER – Finding Shortest Distance between Two Shapes using Spatial Data Classes – Ramsetu or Adam’s Bridge

    - by pinaldave
    Recently I was reading an excellent blog post by Lenni Lobel on spatial databases. He has written about a very interesting function, ShortestLineTo, in the spatial data classes. I really love this new feature for finding the shortest distance between two shapes in SQL Server. Following is an example, the same one Lenni discusses in his blog article.

        DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -26, 14 -28, 20 -40, -20 -30))'
        DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))'
        SELECT @Shape1
        UNION ALL
        SELECT @Shape2
        UNION ALL
        SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25)
        GO

    When you run this script SQL Server finds the shortest distance between the two shapes and draws the line. We are using STBuffer so we can see the connecting line clearly. Now let us modify one of the objects and then see how the connecting shortest line works.

        DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))'
        DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))'
        SELECT @Shape1
        UNION ALL
        SELECT @Shape2
        UNION ALL
        SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25)
        GO

    Now once again let us modify the script and see how ShortestLineTo works.

        DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))'
        DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -18, -18 -20))'
        SELECT @Shape1
        UNION ALL
        SELECT @Shape2
        UNION ALL
        SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25)
        SELECT @Shape1.STDistance(@Shape2)
        GO

    You can see that as the objects change, the shortest lines move to the appropriate place. Even though this is a very small feature, I think it is really cool to know. While I was working on this example, I suddenly thought about the distance between Sri Lanka and India. The distance is very short; in fact, it is less than 30 km by sea. I decided to map India and Sri Lanka using the spatial data classes. To my surprise, the plotted shortest line is the same as Adam’s Bridge, or Ramsetu. Adam’s Bridge starts as a chain of shoals from the Dhanushkodi tip of India’s Pamban Island and ends at Sri Lanka’s Mannar Island. Geological evidence suggests that this bridge is a former land connection between India and Sri Lanka. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial
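    To get a real-world measurement rather than the planar geometry example above, the same idea can be expressed with the geography type. Here is a minimal sketch of the India–Sri Lanka measurement, assuming approximate, purely illustrative coordinates for Dhanushkodi and Talaimannar (STDistance on geography returns meters):

        -- Hypothetical points near Dhanushkodi (India) and Talaimannar (Sri Lanka);
        -- WKT order is POINT(longitude latitude), SRID 4326 = WGS 84
        DECLARE @Dhanushkodi geography = geography::STGeomFromText('POINT(79.44 9.15)', 4326)
        DECLARE @Talaimannar geography = geography::STGeomFromText('POINT(79.72 9.08)', 4326)
        -- Distance in kilometers along the ellipsoid
        SELECT @Dhanushkodi.STDistance(@Talaimannar) / 1000.0 AS DistanceInKm
        GO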

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage (from physical, virtual and logical hosts to fault detection and failover software to redundant networking components - the list goes on). And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great... When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer. Over the past 9 months, monitoring and management have been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January, the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes. You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
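    For a taste of what NDBINFO exposes, here is a minimal sketch (assuming a running MySQL Cluster 7.1 or later deployment, where the ndbinfo database is present) that checks memory consumption on each data node:

        -- Data and index memory used versus configured, per data node
        SELECT node_id, memory_type, used, total
        FROM ndbinfo.memoryusage;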

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it’s at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF or web frameworks like ASP.NET, Silverlight, or others are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development the data binding story hasn’t been great since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it’s certainly feasible to do it from scratch (many of us have done it this way for years), it’s definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps and some even integrate directly with server frameworks like Node.js. Here’s a quick list of a few of the available libraries that support data binding (if you like any others please add a comment and I’ll try to keep the list updated): AngularJS MVC framework for data binding (although it closely follows the MVVM pattern). Backbone.js MVC framework with support for models, key/value binding, custom events, and more. Derby Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates. Ember Provides support for templates that automatically update as data changes. JsViews Data binding framework that provides “interactive data-driven views built on top of JsRender templates”. jQXB Expression Binder Lightweight jQuery plugin that supports bi-directional data binding. KnockoutJS MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS check out John Papa’s course on Pluralsight. Meteor End-to-end framework that uses Node.js on the server and provides support for data binding on the client. Simpli5 JavaScript framework that provides support for two-way data binding. WinRT with HTML5/JavaScript If you’re building Windows 8 applications using HTML5 and JavaScript there’s built-in support for data binding in the WinJS library. I won’t have time to write about each of these frameworks, but in the next post I’m going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries, which is AngularJS. AngularJS provides an extremely clean way – in my opinion – to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I’m writing up the next post, feel free to visit the AngularJS developer guide if you’d like additional details about the API and want to get started using it.

    Read the article

  • Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    - by dwahlin
    There's rarely a boring day working in the world of software development. Part of the fun associated with being a developer is that change is guaranteed and the more you learn about a particular technology the more you realize there's always a different or better way to perform a task. I've had the opportunity to work on several different real-world Silverlight Line of Business (LOB) applications over the past few years and wanted to put together a list of some of the key things I've learned as well as key problems I've encountered and resolved. There are several different topics I could cover related to "lessons learned" (some of them were more painful than others) but I'll keep it to 5 items for this post and cover additional lessons learned in the future. The topics discussed were put together for a TechEd talk: Pick a Pattern and Stick To It; Data Binding and Nested Controls; Notify Users of Successes (and Failures); Get an Agent – A Service Agent; Extend Existing Controls. The first topic covered relates to architecture best practices and how the MVVM pattern can save you time in the long run. When I was first introduced to MVVM I thought it was a lot of work for very little payoff. I've since learned (the hard way in some cases) that my initial impressions were dead wrong and that my criticisms of the pattern were generally caused by doing things the wrong way. In addition to MVVM pros, the slides and sample app below also jump into data binding tricks in nested control scenarios and discuss how animations and media can be used to enhance LOB applications in subtle ways. Finally, creating a re-usable service agent to interact with backend services is discussed, as well as how existing controls make good candidates for customization. I tried to keep the samples simple while still covering the topics as much as possible, so if you’re new to Silverlight you should definitely be able to follow along with a little study and practice. I’d recommend starting with the SilverlightDemos.View project, moving to the SilverlightDemos.ViewModels project and then going to the SilverlightDemos.ServiceAgents project. All of the backend “Model” code can be found in the SilverlightDemos.Web project. Custom controls used in the app can be found in the SilverlightDemos.Controls project. Sample Code and Slides

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It’s pretty easy to get lost in the details – so I’ve tried to organize these methods in a way that I can quickly look this up. I’ve included links to detailed explanations and to C++ code examples. I’ve tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Transform the linear set of equations into a single equation & then use numerical integration (as integration formulas have Sums, it is implemented iteratively). 
an optimization of Gauss-Jacobi: 1.5 times faster, requires about a quarter of the iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or n solutions for an n-order polynomial; use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange a solution to isolate a variable then apply incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at its midpoint into sub-intervals and then apply incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations!!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step. 
Similar implementation to False Position method Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points for as close as possible a fit within the range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences. 
For a given set of 3 data points, fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise → the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance → confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - Derivative of saddle points will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618.. 
Achieves reduction by evaluating a single function instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear, in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerate simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n-1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima via hill climbing Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc. Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sums of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. 
Provide an approximation of the derivative within O(h^2) accuracy There are also central difference & extended central difference, which have O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage the Trapezoidal Rule for 2-point formulas, the Simpson 1/3 Rule for 3-point formulas, and the Simpson 3/8 Rule for 4-point formulas. For 5-point formulas use Bode's Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) → a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. 
Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn → asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions → accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solutions of differential equations are located at 2 different values of the independent variable x → more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication → convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine what are the solution eigenvalue & eigenvectors Similar Matrices - have same eigenvalues. 
Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but Computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What's Next Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (e.g. MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott’s books – interesting) Sooner or later, I’ll cover the above topics as well.

    Read the article

  • How To Rip a Music CD in Windows 7 Media Center

    - by DigitalGeekery
    If you’re a Media Center user, you already know that it can play and manage your digital music collection. But, did you know you can also rip a music CD in Windows 7 Media Center and have it automatically added to your music library? Rip a CD in Windows 7 Media Center Place your CD into your optical drive. From within Windows Media Center, open the Music Library and select the CD. If you haven’t previously ripped a CD in Windows 7 with either Windows Media Center or Windows Media Player, you’ll be prompted to select whether or not you’d like to add copy protection. Click Next. By default, your CD will be ripped to .WMA format. The rip settings for Windows Media Center are pulled from Windows Media Player. So to change the rip settings, we’ll need to do so in Media Player. Click Finish. From within Windows Media Player, click on Tools from the Menu bar, and select Options. If you are new to Windows Media Player 12, check out our beginner’s guide on how to manage your music with WMP 12. Select the Rip Music tab and choose your output format from the Format drop down list. You can also select the Audio quality (bit rate) by moving the slider bar under Audio quality. Click OK when you are finished. Now, you are ready to rip your CD. Click on Rip CD. Click Yes to confirm you want to rip the CD. You can follow the progress as each track is being converted. When the CD is finished you’re ready to start enjoying your music any time you wish in Windows 7 Media Center. Looking for some more tasks you can perform in Media Center with just a remote? Check out our earlier post on how to crop, edit, and print photos in Windows Media Center.

    Read the article

  • Silverlight Cream for April 29, 2010 -- #851

    - by Dave Campbell
    In this Issue: Carlos Figueira(-2-), Subodh Pushpak, Gergely Orosz, John Papa, Mike Snow(-2-), Rishi, Tim Heuer, Stefan Olson, and David Anson. Shoutouts: Josh Holmes blogged about a cool app the City of Miami has up: Miami 311: Built on Windows Azure Gergely Orosz reports on the state of a bug he found pre SL4: Silverlight 4 still displays large elements incorrectly Laura Foy and Charlie Kindel discuss WP7 on Channel 9: Windows Phone 7 Developer Tools Refresh Announced Charlie Kindel has an announcement, good instructions, and what's new notes on the Windows Phone Developer Tools CTP Refresh! Tim Heuer mentioned the workaround for this in his post (below), but I thought you might like to read Brandon Watson's debrief of what it's all about: Signed Assemblies Bug in the Windows Phone Tools CTP Refresh Laurent Bugnion posted about interrelations between versions of Blend and WP7 code... read it closely: Be careful when installing the Blend Windows Phone 7 Add-In From SilverlightCream.com: Consuming REST/POX services in Silverlight 4 Carlos Figueira has a pair of posts up about consuming services in Silverlight 4. This first one is about consuming REST/POX services. He provides a Service Contract that can be used with either and the full project code is available as well. Consuming REST/JSON services in Silverlight 4 In the second post, Carlos Figueira provides the code to allow WCF and Silverlight 4 to consume strongly-typed REST/JSON... and again, all the code is available. Silverlight and WCF caching Subodh Pushpak has a post up discussing caching in WCF, and has code demonstrating turning caching on at run-time. Detecting Silverlight Version Installed Gergely Orosz said it right when he said "Detecting the Silverlight version installed on a client machine isn’t entirely straightforward." ... and after reading this post, if you take the link to his ScottLogic blog, you'll get a full break-out of how it's done. Silverlight TV 22: Tim Heuer on Extending the SMF It's Thursday, and that means Silverlight TV! ... this week, John Papa has on Tim Heuer who has always been out there pushing media... and he's talking about SMF or Silverlight Media Framework for the uninitiated, and also extending it. Silverlight Tip of the Day #7 – Localized Resources Mike Snow has Tip Number 7 up and it's about localization... good end-to-end discussion and demonstration. Just thought I should use that to prove to my daughter that the tattoo she had put on the back of her neck actually reads "Eat More Broccoli" :) Silverlight Tip of the Day #8 – Detecting Alt, Shift, Control, Window & Apple Keys Combinations I just realized Mike Snow's site logo reads "Silverlight Tips of the Day" (bolding mine) ... that answers why I'm seeing more than one -- sorry Mike, couldn't pass it up :) ... Mike's second tip today and number 8 in the series is on detecting all the mouse button and ctl/alt/shift combinations in Silverlight. nRoute: More Wholesomeness, with SL 4 and .NET 4.0 Rishi has a post up announcing a new nRoute release for Silverlight 4 and .NET 4.0. He's tweaked the code to take advantage of enhancements in the new platforms, so check it out. Windows Phone 7 Developer Tools April 2010 Refresh Booya... Tim Heuer announced the release of the next drop in the WP7 tools ... dang wish I was at home today :) ... be sure to read the post for info such as the notes about Authenticode Assemblies and the release notes. 
Updates to Silverlight Multi-binding support Stefan Olson points up a SL4 change to Multi-binding support that he had previously blogged about. He shows the previous non-working example, and what you have to do to make it work now. Using XAML to create a custom wallpaper image for your mobile device David Anson has a solution for those pesky lost devices, and let me go on the record right now saying if anyone finds a WP7 phone laying around, just call me, it's mine :) [think that'd work??] ... ok, David's solution is a WPF app, "MobileDeviceHomeScreenMaker", in which you set the info and it produces a png you then put on your device. But seriously about that lost phone...

    Read the article

  • Resolving IIS7 HTTP Error 500.19 - Internal Server Error

    - by fatherjack
    LiveJournal Tags: RedGate Tools, SQL Server, Tips and Tricks, How To "The requested page cannot be accessed because the related configuration data for the page is invalid." As part of my work recently I was moving SQL Monitor from the bespoke XSP web server to be hosted on IIS instead. This didn't go smoothly. I was lucky to be helped by Red Gate's support team (http://twitter.com/kickasssupport). I had SQL Monitor installed and working fine on the XSP site but wanted to move to IIS, so I reinstalled the software and chose the IIS option. This wasn't possible as IIS wasn't installed on the server. I went to Control Panel, Windows Features and installed IIS and then returned to the SQL Monitor installer. Everything went as planned but when I browsed the site I got a huge error with the message "HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid." All links that I could find suggested it was a permissions issue, based on the directory where the config file was stored. I changed this any number of times and also tried altering its location. Nothing resolved the error. It was only when I was trying the installation again that I read through the details from Red Gate and noted that they referred to ASP settings that I didn't have. Essentially I was seeing this. I had installed IIS using the default settings and that DOESN'T include ASP. When this dawned on me I went back through the Windows components installation process and ticked the ASP service within the IIS role. Completing this and going back to the IIS management console I saw something like this; so many more options! When I clicked on the Authentication icon this time I got the option to not only enable Anonymous Authentication but also ASP.NET Impersonation (which is disabled by default). Once I had enabled this the SQL Monitor website worked without error. I think the HTTP Error 500.19 is misleading in this case and at the very least should be able to recognise if the ASP service is installed or not and then include a hint that it should be. I hope this helps some people and avoids wasting as much of your time as it did mine. Let me know if it helps you.

    Read the article

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic which is a standard feature of Oracle WebLogic 11g. Customers on other Web Application servers and other versions of Oracle WebLogic can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a new feature that is only present in Oracle Utilities Application Framework 4 and above that allows JMX to be used for management and monitoring of the Oracle Utilities Web Applications. In this case JMX can be used to perform monitoring as well as provide the management of the cache. In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring that is unique to the framework by specifying a JMX port number in the RMI Port number for JMX Web setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6740
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
        ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

    ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX Site. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes, I will use jconsole but any JSR160 compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for remote connection, ensure java is in your path and jconsole.jar in your classpath) you specify the URL in the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you try and view the file you may not be able to decipher it visually. For example: There are three Mbeans available to you: flushBean - This is a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided, for backward compatibility, but now are authorization controlled. JVMInfo - This is a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental, not operational, and is used for support purposes. The jsp versions of the JVMInfo utilities are still provided, for backward compatibility, but are now also authorization controlled. JVMSystem - This is an implementation of the Java system MXBeans for use in monitoring. 
We provide our own implementation of the base Mbeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This Mbean is disabled by default and can be enabled using the enableJVMSystemBeans operation. This Mbean allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and the Thread MX beans. Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160 compliant JMX browser or JMX console.

    Read the article

  • UPK 3.6.1 (is Coming)

    - by marc.santosusso
    In anticipation of the release of UPK 3.6.1, I'd like to briefly describe some of the features that will be available in this new version. Topic Editor in Tabs Topic Editors now open in tabs instead of separate Developer windows. This offers several improvements: First, the bubble editor can be docked and resized in the same way as other editor panes. That's right, you can resize the bubble editor! The second enhancement that this change brings is an improved undo and redo which allows each action to be undone and redone in the Topic Editor. New Sound Editor The topic and web page editors include a new sound editor with all the bells and whistles necessary to record, edit, import, and export sound. Sound can be captured during topic recording--which is great for a Subject Matter Expert (SME) to narrate what they're recording--or after the topic has been recorded. Sound can also be added to web pages and played on the concept panes of modules, sections and topics. Turn off bubbles in Topics Authors may opt to hide bubbles either per frame or for an entire topic. This is useful when you want to draw a user's attention to the content on the screen instead of the bubble. This feature works extremely well in conjunction with the new sound capabilities. For instance, consider recording conceptual information with narration and no bubbles. Presentation Output UPK content can be published as a Presentation in Microsoft PowerPoint format. Publishing for Presentation will create a presentation for each topic published. The presentation template can be customized using the same methods offered for the UPK document outputs, allowing your UPK-generated presentations to match your corporate branding. Autosave and Recovery The Developer will automatically save your work as often as you would like. This affords authors the ability to recover these automatically saved documents if their system or UPK were to close unexpectedly. The Developer defaults to save open documents every ten minutes. Package Editor Enhancement Files in packages will now open in the associated application when double-clicked. Authors can also choose to "Open with..." from the context menu (AKA right click menu.) See It! Window See It! mode may now be launched in a non-fullscreen window. This is available from the kp.html file in any Player package. This version of See It! mode offers on-screen navigation controls including previous frame, next frame, pause etc. Firefox Enhancements The UPK Player will now offer both Do It! mode and sound playback when viewed using the Firefox web browser. Player Support for Safari The UPK Player is now fully supported on the Safari web browser for both Mac OS and Windows platforms. Keep document checked out Authors may choose to keep a document checked out when performing a check in. This allows an author to have a new version created on the server and continue editing. Close button on individual tabs A close button has been added to the tabs making it easier to close a specific tab. Outline Editor Enhancements Authors will have the option to prevent concepts from immediately displaying in the Developer when an outline item is selected. This makes it faster to move around in the outline editor. Tell us which feature you're most excited to use in the comments.

    Read the article

  • Silverlight Cream for March 22, 2010 -- #817

    - by Dave Campbell
    In this Issue: Bart Czernicki, Tim Greenfield, Andrea Boschin(-2-), AfricanGeek, Fredrik Normén, Ian Griffiths, Christian Schormann, Pete Brown, Jeff Handley, Brad Abrams, and Tim Heuer. Shoutout: At the beginning of MIX10, Brad Abrams reported Silverlight 4 and RIA Services Release Candidate Available NOW From SilverlightCream.com: Using the Bing Maps Silverlight control on the Windows Phone 7 Bart Czernicki has a very cool BingMaps and WP7 tutorial up... you're going to want to bookmark this one for sure! Code included and external links... thanks Bart! Silverlight Rx DataClient within MVVM Tim Greenfield has a great post up about Rx and MVVM with Silverlight 3. Lots of good insight into Rx and interesting code bits. SilverVNC - a VNC Viewer with Silverlight 4.0 RC Andrea Boschin digs into Silverlight 4 RC and its full-trust sockets support and builds an implementation of the RFB protocol... give it a try and give Andrea some feedback. Chromeless Window for OOB applications in Silverlight 4.0 RC Andrea Boschin also has a post up on investigating the OOB no-chrome features in SL4RC. Windows Phone 7 and WCF AfricanGeek has his latest video tutorial up and it's on WCF and WP7... I've got a feeling we're all going to have to get our arms around this. Some steps for moving WCF RIA Services Preview to the RC version Fredrik Normén details his steps in transitioning to the RC version of RIA Services. Silverlight Business Apps: Module 8.5 - The Value of MEF with Silverlight Ian Griffiths has a video tutorial up at Channel 9 on MEF and Silverlight, posted by John Papa Introducing Blend 4 – For Silverlight, WPF and Windows Phone Christian Schormann has an early MIX10 post up about the new features in Expression Blend with regard to Silverlight, WPF, and WP7. Building your first Silverlight for Windows Phone Application Pete Brown has his first post up on building a WP7 app with the MIX10 bits. Lookups in DataGrid and DataForm with RIA Services Jeff Handley elaborates on a post by someone else about using lookup data in the DataGrid and DataForm with RIA Services Silverlight 4 + RIA Services - Ready for Business: Starting a New Project with the Business Application Template Brad Abrams is starting a series highlighting the key features of Silverlight 4 and RIA with the new releases. He has a post up, Silverlight 4 + RIA Services - Ready for Business: Index, including links and source. Then in this first post of the series, he introduces the Business Application Template. Custom Window Chrome and Events Watch a tutorial video by Tim Heuer on creating custom chrome for OOB apps.

    Read the article

  • Unstructured Data - The future of Data Administration

    Some have claimed that there is a problem with the way data is currently managed using the relational paradigm, due to the rise of unstructured data in modern business. PCMag.com defines unstructured data as data that does not reside in a fixed location. They further explain that unstructured data refers to data in a free-text form that is not bound to any specific structure. With the rise of unstructured data in the form of emails, spreadsheets, images, and documents, the critics have a right to argue that the relational paradigm is not as effective as the object-oriented data paradigm in managing this type of data. The relational paradigm relies heavily on structure and relationships in and between items of data. This type of paradigm works best in a relational database management system like Microsoft SQL Server, MySQL, or Oracle, because data is forced to conform to a structure in the form of tables, and relations can be derived from the existence of one or more tables. These critics also claim that database administrators have not kept up with reality because their primary focus in regard to data administration deals with structured data and the relational paradigm. The relational paradigm was developed in the 1970s as a way to improve data management when compared to standard flat files. Little has changed since then, and modern database administrators need to know more than just how to handle structured data. That is why critics claim that today's data professionals do not have the proper skills to store and maintain data for modern systems when compared to the skills of system designers, programmers, software engineers, and data designers, due to the industry trend of object-oriented design and development. I think that they are wrong. I do not disagree that the industry is moving toward an object-oriented approach to development, with the potential to use more of an object-oriented approach to data. However, I think that it is business itself that is limiting database administrators from changing how data is stored, because of the potential costs and impact that might occur by altering any part of stored data. Furthermore, database administrators, like all technology workers, are constantly trying to improve their technical skills in order to excel at their jobs, so I think that accusing data professionals is unjust when the root cause of the lack of innovation is controlled by business, and it is business that will suffer for its inability to keep up with technology. One way for database professionals to better prepare for the future of database management is to start working with data in the form of objects, so that the information stored within those objects can be extracted and used alongside the data stored under the relational paradigm. Furthermore, I think the use of pattern matching will increase with the increased use of unstructured data, because objects can be selected, filtered, and altered based on the existence of a pattern found within an object.
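    To make the pattern-matching idea concrete, here is a minimal sketch in Python (my own illustration, not from any product mentioned above). The email text, field names, and regular expressions are all hypothetical; the point is only that patterns can pull structured values out of free-form content so they can be related to data held under the relational paradigm:

        import re

        # A free-form email body: unstructured data in the sense used above.
        email_body = '''From: jane.doe@example.com
        Subject: Order 10045
        Please ship order 10045 by 2010-09-30.'''

        # Hypothetical patterns describing the structure we hope to find.
        patterns = {
            'sender':   r'From:\s*(\S+@\S+)',
            'order_id': r'order\s+(\d+)',
            'due_date': r'(\d{4}-\d{2}-\d{2})',
        }

        record = {}
        for field, regex in patterns.items():
            match = re.search(regex, email_body, re.IGNORECASE)
            if match:  # fields without a match are simply left out
                record[field] = match.group(1)

        print(record)
        # {'sender': 'jane.doe@example.com', 'order_id': '10045', 'due_date': '2010-09-30'}

    A record like this could then be inserted into an ordinary relational table, which is exactly the kind of bridge between the two paradigms argued for above.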

    Read the article

  • BI Publisher at Collaborate 2010

    - by mike.donohue
    Noelle and I are heading to Collaborate 2010 next week. There are over two dozen sessions on BI Publisher, including a Hands On Lab (see below). Very excited to see what our customers and partners will be presenting and how they are using BI Publisher to get better reports and reduce costs. My only regret is that many sessions are scheduled at the same time, so I won't get to see all of them. Noelle and I will be presenting the following:
    Monday, April 19
    2:30 pm - 3:30 pm  Introduction to Oracle Business Intelligence Publisher (Session: 227, Location: Reef F, By: Mike)
    2:30 pm - 3:30 pm  The Reporting Platform for Applications: Oracle Business Intelligence Publisher (Session: 73170, Location: South Seas Ballroom J, By: Noelle)
    3:45 pm - 4:45 pm  Oracle Business Intelligence Publisher Hands On Lab (1) (Session: 217, Location: Palm D, By: Noelle and Mike)
    Tuesday, April 20
    8:00 am - 9:00 am  Oracle Business Intelligence Publisher Best Practices (Session: 218, Location: Palm D, By: Noelle and Mike)
    We will also be at the BI Technology demo pod in the exhibit hall, so please stop by and say hello. All BI Publisher related sessions:
    Sunday, April 18
    2:00 pm - 2:50 pm  Customizing your Invoices in a Flash!
    3:00 pm - 3:50 pm  BI Publisher SIG Meeting - Part 1
    4:00 pm - 4:50 pm  BI Publisher SIG Meeting - Part 2
    Monday, April 19
    8:00 am - 9:00 am  XML Publisher and FSG for Beginners
    2:30 pm - 3:30 pm  Introduction to Oracle Business Intelligence Publisher
    2:30 pm - 3:30 pm  The Reporting Platform for Applications: Oracle Business Intelligence Publisher
    2:30 pm - 3:30 pm  What it Takes to Make Your Business Intelligence Implementation a Success (Bay Ballroom A)
    2:30 pm - 3:30 pm  XML Publisher - More Than Just Form Letters
    3:45 pm - 4:45 pm  JD Edwards EnterpriseOne Reporting and Batch Discussions presented by Technology SIG
    3:45 pm - 4:45 pm  Hands On Lab: Oracle Business Intelligence Publisher (1)
    Tuesday, April 20
    8:00 am - 9:00 am  Oracle Business Intelligence Publisher Best Practices
    8:00 am - 9:00 am  Creating XML Publisher Documents with PeopleCode
    10:30 am - 11:30 am  Moving to BI Publisher, Now What? Automated Fax and Email from Oracle EBS
    2:00 pm - 3:00 pm  Smart Reporting in Oracle Financials Release 12.1
    2:00 pm - 3:00 pm  Custom Check Printing Framework using XML Publisher
    2:00 pm - 3:00 pm  BI Publisher and Oracle BI for JD Edwards
    Wednesday, April 21
    8:00 am - 9:00 am  XML Publisher Tips for PeopleTools
    10:30 am - 11:30 am  JD Edwards World - Technical Upgrade Considerations
    10:30 am - 11:30 am  Data Visualization Best Practices: Know how to design and improve your BI & EPM reports, dashboards, and queries
    10:30 am - 11:30 am  Oracle BIEE End-to-End
    1:00 pm - 2:00 pm  Empower JD Edwards Users with Oracle BI Publisher for Ad Hoc Reporting
    1:00 pm - 2:00 pm  BIP and JD Edwards World - Good Stuff!
    2:15 pm - 3:15 pm  Proven Strategies for Increasing ROI with PeopleSoft HCM
    4:00 pm - 5:00 pm  Using Oracle BI Delivers to Send Reports to JD Edwards Users
    Thursday, April 22
    9:45 am - 10:45 am  PeopleSoft Recruiting Enhancements You Can Use
    9:45 am - 10:45 am  Reducing Cost with Oracle's BI Publisher
    Note (1): the Hands On Lab was not showing in the joint scheduler as of this posting, but it is definitely ON.

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    ++++++++++++++++++++++++++++++++++++++++++++++++++++
    The complete and most recent version of this article can be viewed from the My Oracle Support Knowledge Section: Master Note for Generic Data Warehousing [ID 1269175.1]
    ++++++++++++++++++++++++++++++++++++++++++++++++++++
    In this Document: Purpose | Master Note for Generic Data Warehousing | Components covered | Oracle Database Data Warehousing specific documents for recent versions | Technology Network Product Homes | Master Notes available in My Oracle Support | White Papers | Technical Presentations
    Platforms: 1-914CU. This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.
    Applies to: Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.2 - Release: 9.2 to 11.2. Information in this document applies to any platform.
    Purpose: Provide navigation path.
    Master Note for Generic Data Warehousing
    Components covered: Read Only Materialized Views; Query Rewrite; Database Object Partitioning; Parallel Execution and Parallel Query; Database Compression; Transportable Tablespaces; Oracle Online Analytical Processing (OLAP); Oracle Data Mining.
    Oracle Database Data Warehousing specific documents for recent versions: 11g Release 2 (11.2); 11g Release 1 (11.1); 10g Release 2 (10.2); 10g Release 1 (10.1); 9i Release 2 (9.2); 9i Release 1 (9.0).
    Technology Network Product Homes: Oracle Partitioning; Advanced Compression; Oracle Data Mining; Oracle OLAP.
    Master Notes available in My Oracle Support. These technical articles have been written by Oracle Support Engineers to provide proactive and top-level information and knowledge about the components of the database we handle under "Database Datawarehousing":
    Note 1166564.1 Master Note: Transportable Tablespaces (TTS) -- Common Questions and Issues
    Note 1087507.1 Master Note for MVIEW 'ORA-' error diagnosis (for Materialized View CREATE or REFRESH)
    Note 1102801.1 Master Note: How to Get a 10046 trace for a Parallel Query
    Note 1097154.1 Master Note Parallel Execution Wait Events
    Note 1107593.1 Master Note for the Oracle OLAP Option
    Note 1087643.1 Master Note for Oracle Data Mining
    Note 1215173.1 Master Note for Query Rewrite
    Note 1223705.1 Master Note for OLTP Compression
    Note 1269175.1 Master Note for Generic Data Warehousing
    White Papers
    Transportable Tablespaces: Database Upgrade Using Transportable Tablespaces: Oracle Database 11g Release 1 (February 2009); Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 (August 2008); Database Upgrade Using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007); Platform Migration Using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)
    Parallel Execution and Parallel Query: Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010); Effective Resource Utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (February 2010); Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009); Parallel Execution with Oracle Database 10g Release 2 (June 2005)
    Oracle Data Mining: Oracle Data Mining 11g Release 2 (March 2010)
    Partitioning: Partitioning with Oracle Database 11g Release 2 (September 2009); Partitioning in Oracle Database 11g (June 2007)
    Materialized Views and Query Rewrite: Oracle Materialized Views and Query Rewrite (May 2005); Improving Performance Using Query Rewrite in Oracle Database 10g (December 2003)
    Database Compression: Advanced Compression with Oracle Database 11g Release 2 (September 2009); Table Compression in Oracle Database 10g Release 2 (May 2005)
    Oracle OLAP: On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009); Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)
    Generic: Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010); Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009); Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009); Best Practices for a Data Warehouse on Oracle Database 11g (September 2008)
    Technical Presentations
    A selection of ObE - Oracle by Example documents:
    Generic: Using Basic Database Functionality for Data Warehousing (10g)
    Partitioning: Manipulating Partitions in Oracle Database (11g Release 1); Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1); Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g)
    Materialized View and Query Rewrite: Using Materialized Views and Query Rewrite Capabilities (10g); Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g)
    Oracle OLAP: Using Microsoft Excel With Oracle 11g Cubes (how to analyze data in Oracle OLAP Cubes using Excel's native capabilities); Using Oracle OLAP 11g With Oracle BI Enterprise Edition (Creating OBIEE Metadata for OLAP 11g Cubes and querying those in BI Answers); Building OLAP 11g Cubes; Querying OLAP 11g Cubes; Creating Interactive APEX Reports Over OLAP 11g Cubes
    A selection of presentations from the BIWA website:
    Extreme Data Warehousing With Exadata by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)
    Data Mining Made Easy! Introducing Oracle Data Miner 11g Release 2 New "Work flow" GUI by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB)
    Best Practices for Deploying a Data Warehouse on Oracle Database 11g by Maria Colgan (December 2009) (slides 3MB, recording 18MB, white paper 3MB)

    Read the article

  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products; most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations that get BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence", which outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes. Recently, Gartner published its 2009 market share report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle refers to this not as Corporate Performance Management but as Enterprise Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management. According to Gartner, Oracle's share is increasing, with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI software space based on total BI software revenue. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010. Gartner has ranked Oracle as #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share, with revenue growing by more than 6% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010. The Analytic Applications & Performance Management sub-segment is more fragmented. It has, for instance, a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, among the large vendors Oracle is again the one gaining the most share, with total BI software revenue growth close to 15% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010. I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been siloed technology platforms and application suites for way too long. Managing and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and Supply Chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They no longer evaluate single tools for different application areas, but buy into a complete ecosystem of hardware, software, and services. The best ecosystem is the one that offers the most options in environments where uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive.
In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware". Many vendors are active in the BI and EPM space. Big players can offer a lot, but there are always many areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open, and integrated still accurately describes the Oracle product strategy. frank

    Read the article

  • How to Sync Any Browser’s Bookmarks With Your iPad or iPhone

    - by Chris Hoffman
    Apple makes it easy to synchronize bookmarks between the Safari browser on a Mac and the Safari browser on iOS, but you don’t have to use Safari — or a Mac — to sync your bookmarks back and forth. You can do this with any browser. Whether you’re using Chrome, Firefox, or even Internet Explorer, there’s a way to sync your browser bookmarks so you can access your same bookmarks on your iPad. Safari on a Mac Apple’s iCloud service is the officially supported way to sync data with your iPad or iPhone. It’s included on Macs, but Apple also offers similar iCloud bookmark syncing features for Windows. On a Mac, this should be enabled by default. To check whether it’s enabled, you can launch the System Preferences panel on your Mac, open the iCloud preferences panel, and ensure the Safari option is checked. If you’re using Safari on Windows — well, you shouldn’t be. Apple is no longer updating Safari for Windows. iCloud allows you to synchronize bookmarks between other browsers on your Windows system and Safari on your iOS device, so Safari isn’t necessary. Internet Explorer, Firefox, or Chrome via iCloud To get started, download Apple’s iCloud Control Panel application for Windows and install it. Launch the iCloud Control Panel and log in with the same iCloud account (Apple ID) you use on your iPad or iPhone. You’ll be able to enable Bookmark syncing with Internet Explorer, Firefox, or Chrome. Click the Options button to select the browser you want to synchronize bookmarks with. (Note that bookmarks are called “favorites” in Internet Explorer.) You’ll be able to access your synced bookmarks in the Safari browser on your iPad or iPhone, and they’ll sync back and forth automatically over the Internet. Google Chrome Sync Google Chrome also has its own built-in sync feature and Google provides an official Chrome app for iPad and iPhone. If you’re a Chrome user, you can set up Chrome Sync on your desktop version of Chrome — you should already have this enabled if you have logged into your Chrome browser. You can check if this Chrome Sync is enabled by opening Chrome’s settings screen and seeing whether you’re signed in. Click the Advanced sync settings button and ensure bookmark syncing is enabled. Once you have Chrome Sync set up, you can install the Chrome app from the App Store and sign in with the same Google account. Your bookmarks, as well as other data like your open browser tabs, will automatically sync. This can be a better solution because the Chrome browser is available for so many platforms and you gain the ability to synchronize other browser data, such as your open browser tabs, between your devices. Unfortunately, the Chrome browser is slower than Apple’s own Safari browser on iPad and iPhone because of the way Apple limits third-party browsers, so using it involves a trade-off. Manual Bookmark Sync in iTunes iTunes also allows you to sync bookmarks between your computer and your iPad or iPhone. It does this the old-fashioned way, by initiating a manual sync when your device is plugged in via USB. To access this option, connect your device to your computer, select the device in iTunes, and click the Info tab. This is the more outdated way of synchronizing your bookmarks. This feature may be useful if you want to create a one-time copy of your bookmarks from your PC, but it’s nowhere near ideal for regular syncing. You don’t have to use this feature, just as you really don’t have to use iTunes anymore. In fact, this option is unavailable if you’ve set up iCloud syncing in iTunes. 
After you set up bookmark syncing via iCloud or Chrome Sync, bookmarks will sync immediately after you save, remove, or edit them.     

    Read the article

  • SQLAuthority News – Book Review – Beginning T-SQL 2008 by Kathi Kellenberger

    - by pinaldave
    Beginning T-SQL 2008 by Kathi Kellenberger Amazon Link Detail Review: Beginning T-SQL 2008 is one of the best books on the market if you are just beginning to work with Microsoft SQL Server, or have a little bit of experience and need to learn more quickly. Each chapter of the book introduces a new subject and builds upon topics covered in previous chapters. The author of the book, Kathi Kellenberger, understands that you need to form a solid foundation of knowledge before moving on to new topics, and sets up each subject nicely. Because the chapters move in an orderly progression, you continue to use skills you learned earlier. One of the best features of Beginning T-SQL 2008 is that each chapter has multiple examples and exercises. Many books introduce a topic and then never go back to it. This book gives enough examples that you will be familiar with the subject when you come across it in real life. The exercises at the end of each chapter mean that you will be using the skills you learned – and there is no better way to cement a subject in your brain. The book also includes discussions of the common errors that programmers will come across, how to avoid them, and how to fix them if they happen. Ms. Kellenberger understands that not only do mistakes happen, but they are bound to happen if you aren't trained properly. Mistakes are part of the learning process! The book begins by discussing relational theory, so that programmers will understand the way T-SQL works from the ground up. It also walks readers through writing accurate queries, combining set-based and procedural processing, embedding logic in stored functions, and so much more. Overall, the main goal of Beginning T-SQL 2008 is to introduce novices to SQL programming and quickly familiarize them with the basics of the language. The book is written with the idea that readers will not know any of the technical terms or vocabulary. However, if you are a little more familiar with SQL and looking to become better, you will still find this book very helpful. Rating: 4.5+ Stars Summary: I cannot recommend Beginning T-SQL 2008 highly enough. If you are going to buy any beginner's guide to Transact-SQL, this is the one you should spend your money on. You can save yourself a lot of time and effort later by using this very affordable manual to learn the basics, which will allow you to become an expert much faster. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the ancillary object. This posting describes ancillary objects, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where:
    - Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise).
    - Complex, finely tuned code is involved, where the original authors may no longer be available.
    - Critical production code is involved, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.
    Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.
    Design
    The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:
    - There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
    - The section contents are identical in either case; there is no need to alter data to accommodate multiple files.
    - It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
    - The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.
    Using Ancillary Objects (From the Solaris Linker and Libraries Guide)
    By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:
    - To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
    - To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
    - To stay within the 4 Gbyte size limit of a 32-bit object. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
    - To limit the exposure of internal implementation details.
    Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option:
    $ ld ... -z ancillary[=outfile] ...
    By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:
    - All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
    - All remaining non-allocable sections are written to the ancillary object.
    - The following non-allocable sections are written to both the primary object and the ancillary object: .shstrtab (the section name string table), .symtab (the full non-dynamic symbol table), .symtab_shndx (the symbol table extended index section associated with .symtab), .strtab (the non-dynamic string table associated with .symtab), and .SUNW_ancillary (which contains the information required to identify the primary and ancillary objects, and to identify the object being examined).
    The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc:
    $ cat hello.c
    #include <stdio.h>
    int main(int argc, char **argv)
    {
        (void) printf("hello, world\n");
        return (0);
    }
    $ cc -g -zancillary hello.c
    $ file a.out a.out.anc
    a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
    a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
    $ ./a.out
    hello world
    The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
    Index  Section Name     Type            Primary Flags                Ancillary Flags                 Primary Size  Ancillary Size
    13     .text            PROGBITS        ALLOC EXECINSTR              ALLOC EXECINSTR SUNW_ABSENT     0x131         0
    20     .data            PROGBITS        WRITE ALLOC                  WRITE ALLOC SUNW_ABSENT         0x4c          0
    21     .symtab          SYMTAB          0                            0                               0x450         0x450
    22     .strtab          STRTAB          STRINGS                      STRINGS                         0x1ad         0x1ad
    24     .debug_info      PROGBITS        SUNW_ABSENT                  0                               0             0x1a7
    28     .shstrtab        STRTAB          STRINGS                      STRINGS                         0x118         0x118
    29     .SUNW_ancillary  SUNW_ancillary  0                            0                               0x30          0x30
    The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime are found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime are placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.
    $ elfdump -T SUNW_ancillary a.out a.out.anc
    a.out:
    Ancillary Section:  .SUNW_ancillary
         index  tag                value
           [0]  ANC_SUNW_CHECKSUM  0x8724
           [1]  ANC_SUNW_MEMBER    0x1      a.out
           [2]  ANC_SUNW_CHECKSUM  0x8724
           [3]  ANC_SUNW_MEMBER    0x1a3    a.out.anc
           [4]  ANC_SUNW_CHECKSUM  0xfbe2
           [5]  ANC_SUNW_NULL      0
    a.out.anc:
    Ancillary Section:  .SUNW_ancillary
         index  tag                value
           [0]  ANC_SUNW_CHECKSUM  0xfbe2
           [1]  ANC_SUNW_MEMBER    0x1      a.out
           [2]  ANC_SUNW_CHECKSUM  0x8724
           [3]  ANC_SUNW_MEMBER    0x1a3    a.out.anc
           [4]  ANC_SUNW_CHECKSUM  0xfbe2
           [5]  ANC_SUNW_NULL      0
    The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.
    Debugger Access and Use of Ancillary Objects
    Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
    The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single-file case, to access and control the running program.
    Note: The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.
    ELF Implementation Details (From the Solaris Linker and Libraries Guide)
    To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.
    Part IV ELF Application Binary Interface
    Chapter 13: Object File Format
    Object File Format
    Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.
    e_type Identifies the object file type, as listed in the following table.
    Name               Value   Meaning
    ET_NONE            0       No file type
    ET_REL             1       Relocatable file
    ET_EXEC            2       Executable file
    ET_DYN             3       Shared object file
    ET_CORE            4       Core file
    ET_LOSUNW          0xfefe  Start operating system specific range
    ET_SUNW_ANCILLARY  0xfefe  Ancillary object file
    ET_HISUNW          0xfefd  End operating system specific range
    ET_LOPROC          0xff00  Start processor-specific range
    ET_HIPROC          0xffff  End processor-specific range
    Sections
    Edit Note: This overview section defines the section header structure, and provides a high-level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.
    ...
    sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.
    sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
    ...
    Table 13-5 ELF Section Types, sh_type
    Name                Value
    . . .
    SHT_LOSUNW          0x6fffffee
    SHT_SUNW_ancillary  0x6fffffee
    . . .
    ...
    SHT_LOSUNW - SHT_HISUNW: Values in this inclusive range are reserved for Oracle Solaris OS semantics.
    SHT_SUNW_ANCILLARY: Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.
    ...
    Table 13-8 ELF Section Attribute Flags
    Name                Value
    . . .
    SHF_MASKOS           0x0ff00000
    SHF_SUNW_NODISCARD   0x00100000
    SHF_SUNW_ABSENT      0x00200000
    SHF_SUNW_PRIMARY     0x00400000
    SHF_MASKPROC         0xf0000000
    . . .
    ...
    SHF_SUNW_ABSENT: Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.
    SHF_SUNW_PRIMARY: The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.
    ...
    Two members in the section header, sh_link and sh_info, hold special information, depending on section type.
    Table 13-9 ELF sh_link and sh_info Interpretation
    sh_type             sh_link                                                 sh_info
    . . .
    SHT_SUNW_ANCILLARY  The section header index of the associated string table  0
    . . .
    Special Sections
    Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.
    Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.
    Table 13-10 ELF Special Sections
    Name             Type                Attribute
    . . .
    .SUNW_ancillary  SHT_SUNW_ancillary  None
    . . .
    ...
    .SUNW_ancillary: Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.
    ...
    Ancillary Section
    Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.
    In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
    typedef struct {
            Elf32_Word      a_tag;
            union {
                    Elf32_Word      a_val;
                    Elf32_Addr      a_ptr;
            } a_un;
    } Elf32_Ancillary;

    typedef struct {
            Elf64_Xword     a_tag;
            union {
                    Elf64_Xword     a_val;
                    Elf64_Addr      a_ptr;
            } a_un;
    } Elf64_Ancillary;
    For each object with this type, a_tag controls the interpretation of a_un.
    a_val: These objects represent integer values with various interpretations.
    a_ptr: These objects represent file offsets or addresses.
    The following ancillary tags exist.
    Table 13-NEW1 ELF Ancillary Array Tags
    Name               Value  a_un
    ANC_SUNW_NULL      0      Ignored
    ANC_SUNW_CHECKSUM  1      a_val
    ANC_SUNW_MEMBER    2      a_ptr
    ANC_SUNW_NULL: Marks the end of the ancillary section.
    ANC_SUNW_CHECKSUM: Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.
    ANC_SUNW_MEMBER: Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.
    An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:
    Tag                Meaning
    ANC_SUNW_CHECKSUM  Checksum of this object
    ANC_SUNW_MEMBER    Name of object #1
    ANC_SUNW_CHECKSUM  Checksum for object #1
    . . .
    ANC_SUNW_MEMBER    Name of object N
    ANC_SUNW_CHECKSUM  Checksum for object N
    ANC_SUNW_NULL
    An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.
    Related Other Work
    The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
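    To make the debugger merging procedure described above concrete, here is a minimal sketch in Python (my own illustration, not Oracle-supplied code), assuming the third-party pyelftools library is available. For each section index it records which file actually carries the data, that is, the first file whose copy does not have SHF_SUNW_ABSENT set:

        from elftools.elf.elffile import ELFFile

        SHF_SUNW_ABSENT = 0x00200000  # value from Table 13-8 above

        def merged_section_map(primary_path, ancillary_path):
            # Both objects share one section header array, so iterate by index
            # and pick, per section, the file whose copy is not a placeholder.
            files = [open(primary_path, 'rb'), open(ancillary_path, 'rb')]
            try:
                elfs = [ELFFile(f) for f in files]
                merged = {}
                for idx in range(elfs[0].num_sections()):
                    for path, elf in zip([primary_path, ancillary_path], elfs):
                        sec = elf.get_section(idx)
                        if not (sec['sh_flags'] & SHF_SUNW_ABSENT):
                            # Data is present in this file; remember where it lives.
                            merged[sec.name] = (path, sec['sh_offset'], sec['sh_size'])
                            break
                return merged
            finally:
                for f in files:
                    f.close()

        # Example: merged_section_map('a.out', 'a.out.anc') would map '.text'
        # to a.out and '.debug_info' to a.out.anc for the hello.c build above.

    A real debugger would go on to read the section data from the indicated file, exactly as step 3 above describes; the point of the sketch is how little logic the single shared section header array requires.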

    Read the article

  • Silverlight Cream for March 08, 2010 -- #809

    - by Dave Campbell
    In this Issue: Michael Washington, Tim Greenfield, Bobby Diaz(-2-), Glenn Block(-2-), Nikhil Kothari, Jianqiang Bao(-2-), and Christopher Bennage.
    Shoutouts: Adam Kinney announced a big update for the Project Rosetta site today. Arpit Gupta has opened a new blog with a great logo: I think therefore I am dangerous :)
    From SilverlightCream.com:
    DotNetNuke Silverlight Traffic Module - If it's DNN and Silverlight, it has to be my buddy Michael Washington :) ... Michael has combined those stunning gauges you've seen with website traffic... just too cool!... grab the code and display yours too!
    Cool demonstration of Silverlight VideoBrush - This is a no-code post by Tim Greenfield, but I like the UX on this Jigsaw Puzzle page... and you can make your own.
    Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1 - Bobby Diaz has an informative post up on combining earthquake data with Bing Maps in Silverlight 3... check it out, then grab the recently posted Live Demo and Source Code.
    Adding Volcanos and Options - Earthquake Locator, part 2 - Bobby Diaz also added volcanic activity to his earthquake Bing Maps app, and updated the downloadable code and live demo.
    Building Hello MEF – Part IV – DeploymentCatalog - Glenn Block posted a pair of MEF posts yesterday... made me think I missed one :) ... the first one is about the DeploymentCatalog. Note he is going to be using the CodePlex bits in his posts.
    Building HelloMEF – Part V – Refactoring to ViewModel - Glenn Block's part V is about MEF and MVVM -- no, really! ... he is refactoring MVVM into the app with a nod to Josh Smith and Laurent Bugnion... get your head around this...
    The Case for ViewModel - Nikhil Kothari has a post up about the ViewModel, and how it facilitates designer/developer workflow, jumpstarts development, improves scaling, and makes asynchronous programming simpler.
    MMORPG programming in Silverlight Tutorial (12): Map Instance (Part I) - Jianqiang Bao has part 12 of his MMORPG game up... this one shows how to deal with obstructions on maps.
    MMORPG programming in Silverlight Tutorial (13): Perfect moving mechanism - Jianqiang Bao also has part 13 up, and this second one is about sprite movement around the obstructions.
    1 Simple Step for Commanding in Silverlight - Christopher Bennage blogged about Commanding in Silverlight; he begins with a blog post about commands in Silverlight 4, then goes on to demonstrate the Caliburn way of doing commanding.
    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    MIX10

    Read the article

  • SOA, Cloud and Service Technology Symposium a super success!

    - by JuergenKress
    The SOA, Cloud and Service Technology Symposium in London was a huge success, with more than 600 international attendees. Our SOA & BPM Community had a great presence there. At a joint booth with the specialized partners Link Consulting, eProseed, and Griffiths Waite, we presented the latest product updates and had many interesting discussions with customers and speakers. Special thanks to our HQ product management team (Demed, Tim, and Manas) for coming over right before OOW. Also, a very big thank you to Matthias Ziegler from Accenture for presenting our joint presentation on his own! If you missed the conference, here are links to the key presentations for your reference:
    Big Data and its impact on SOA - Demed L'Her [View PDF]
    Building 21st Century Service-Oriented Airports - Shyam Kumar [View PDF]
    Building Cloudy Services - Anne Thomas Manes [View PDF]
    Community Management: The Next Wave of SOA Governance and API Management - Tim E. Hall [View PDF]
    Elastic SOA in the Cloud - Steve Millidge [View PDF]
    Governing Shared Services: On-Premise & In the Cloud - Thomas Erl [View Video]
    Introducing the Cloud Computing Design Patterns Catalogue - Thomas Erl and Amin Naserpour [CloudPatterns.org]
    Lost in Translation - Common Mistakes Interpreting Patterns - Mark Simpson [View PDF]
    Moving Applications to the Cloud: Migration Options - Anne Thomas Manes [View PDF]
    New Paradigms for Application Architecture: From Applications to IT Services - Anne Thomas Manes [View PDF]
    NoSQL for Data Services, Data Virtualization & Big Data - Guido Schmutz [View PDF]
    A Pragmatic Approach to Cloud Computing - Andrea Morena [View PDF]
    The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts and Examples - Clemens Utschig and Manas Deb [View PDF]
    Service Modeling & BPM Business Value Patterns - Matthias Ziegler [View PDF] [Podcast]
    SOA Adoption in the Brazilian Ministry of Health - Case Study - Ricardo Puttini and Andre Toffanello [PDF Coming Soon]
    SOA Environments are a Big Data Problem - Markus Zirn, Splunk, and Maciej Barcz [View PDF]
    SOA Governance at EDP: A Global Energy Company - Manuel Rosa [View PDF]
    For all presentations, please visit the SOA, Cloud and Service Technology Symposium website. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Converting Openfire IM datetime values in SQL Server to / from VARCHAR(15) and DATETIME data types

    - by Brian Biales
    A client is using Openfire IM for their users, and would like some custom queries to audit user conversations (which are stored by Openfire in tables in the SQL Server database). Because Openfire supports multiple database servers and multiple platforms, the designers chose to store all date/time stamps in the database as 15-character strings, which get converted to Java Date objects in their code (Openfire is written in Java). I did some digging around, and, so I don't forget and in case someone else finds this useful, I will put the simple algorithms here for converting back and forth between SQL DATETIME and the Java string representation. The Java string representation is the number of milliseconds since 1/1/1970. SQL Server's DATETIME is actually represented as a float, the value being the number of days since 1/1/1900, with the portion after the decimal point representing the hours/minutes/seconds/milliseconds as a fractional part of a day. Try this and you will see that it returns the date 1/1/1900:
    SELECT CAST(0 AS DATETIME)
    The difference in days between SQL Server's 0 date of 1/1/1900 and the Java representation's 0 date of 1/1/1970 is found easily using the following SQL, which returns 25567. There are 25567 days between these dates.
    SELECT DATEDIFF(D, '1900-01-01', '1970-01-01')
    So to convert from the Java string to SQL Server's DATETIME, we need to convert the number of milliseconds to a floating point representation of the number of days since 1/1/1970, then add the 25567 to change this to the number of days since 1/1/1900. To convert to days, you need to divide the number by 1000 ms/s, then by 60 seconds/minute, then by 60 minutes/hour, then by 24 hours/day. Or simply divide by 1000*60*60*24, or 86400000. So, to summarize, we need to cast this string as a float, divide by 86400000 milliseconds/day, then add 25567 days, and cast the resulting value to a DATETIME. Here is an example:
    DECLARE @tmp as VARCHAR(15)
    SET @tmp = '1268231722123'
    SELECT @tmp as JavaTime,
           CAST((CAST(@tmp AS FLOAT) / 86400000) + 25567 AS DATETIME) as SQLTime
    Converting from SQL DATETIME back to the Java time format is not quite as simple, I found, because floats of that size do not convert nicely to strings; they end up in scientific notation using the CONVERT or CAST functions. But I found a couple of ways around that problem. You can convert a date to the number of seconds since 1/1/1970 very easily using the DATEDIFF function, as this value fits in an Int. If you don't need to worry about the milliseconds, simply cast this integer as a string, and then concatenate '000' at the end, essentially multiplying this number by 1000 and making it milliseconds since 1/1/1970. If, however, you do care about the milliseconds, you will need to use DATEPART to get the milliseconds part of the date, cast this integer to a string, pad zeros on the left to make sure this is three digits, and concatenate these three digits to the number-of-seconds string above. And finally, I discovered that by casting to DECIMAL(15,0) and then to VARCHAR(15), I avoid the scientific notation issue. So here are all my examples, pick the one you like best...
First, here is the simple approach if you don't care about the milliseconds:
    DECLARE @tmp as VARCHAR(15)
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15)) + '000'
    SELECT @tmp as JavaTime, @dt as SQLTime
If you want to keep the milliseconds:
    DECLARE @tmp as VARCHAR(15)
    DECLARE @dt as DATETIME
    DECLARE @ms as int
    SET @dt = '2010-03-10 14:35:22.123'
    SET @ms = DATEPART(ms, @dt)
    SET @tmp = CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
             + RIGHT('000' + CAST(@ms AS VARCHAR(3)), 3)
    SELECT @tmp as JavaTime, @dt as SQLTime
Or, in one fell swoop:
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SELECT @dt as SQLTime,
           CAST(DATEDIFF(s, '1970-01-01 00:00:00', @dt) AS VARCHAR(15))
           + RIGHT('000' + CAST(DATEPART(ms, @dt) AS VARCHAR(3)), 3) as JavaTime
And finally, a way to simply reverse the math used when converting from the Java date to the SQL date. Note the parentheses; watch out for operator precedence, because you want to subtract, then multiply:
    DECLARE @dt as DATETIME
    SET @dt = '2010-03-10 14:35:22.123'
    SELECT @dt as SQLTime,
           CAST(CAST((CAST(@dt as FLOAT) - 25567.0) * 86400000.0 as DECIMAL(15,0)) as VARCHAR(15)) as JavaTime
Interestingly, I found that converting to SQL DATETIME can lose some accuracy: when I converted the time above to Java time and then back to DATETIME, the number of milliseconds came back as 120, not 123. As I am not interested in the milliseconds, this is fine for me. But you may want to look into using DATETIME2 in SQL Server 2008 for more accuracy.
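    As a quick cross-check of the arithmetic above, here is the same round trip in Python (my own sketch, not part of the original Openfire code; a UTC clock is assumed, since the 15-character strings carry no time zone):

        from datetime import datetime, timedelta, timezone

        JAVA_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

        def java_millis_to_datetime(millis_str):
            # '1268231722123' -> 2010-03-10 14:35:22.123000+00:00
            return JAVA_EPOCH + timedelta(milliseconds=int(millis_str))

        def datetime_to_java_millis(dt):
            # Reverse the math using exact integer arithmetic:
            # milliseconds elapsed since the Java epoch.
            delta = dt - JAVA_EPOCH
            return str((delta.days * 86400 + delta.seconds) * 1000
                       + delta.microseconds // 1000)

        dt = java_millis_to_datetime('1268231722123')
        print(dt, datetime_to_java_millis(dt))

    Unlike the DATETIME round trip described above, this path keeps all three millisecond digits, which makes it a handy sanity check for the T-SQL versions.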

    Read the article

  • Oracle BI and XS Energy Drinks – Don’t Miss the Amway Presentation!

    - by Michelle Kimihira
By Maria Forney
Amway is a global leader in the direct sales industry, with $10.9B in annual sales in more than 100 countries and territories. The company has implemented a global BI framework that provides accurate, consistent, and timely insights to support global, regional, and local analytical research, business planning, performance measurement, and assessment. Oracle BI EE is used by 1,500 employees across Amway's sales, marketing, finance, and supply chain business units, as well as by Amway affiliates in Europe, Russia, South Africa, Japan, Australia, Latin America, Malaysia, Vietnam, and Indonesia.
Last week, I spoke with Dan Arganbright, Lead Data Analyst with Amway Global Sales, and Mike Olson, IT Manager with the Amway BI Competency Center, about their upcoming presentation at Oracle OpenWorld in San Francisco. Scheduled during a prime speaking slot on Monday, October 1 at 12:15pm in Moscone West, Room 2007, Dan and Mike will discuss their experience building Amway's Distributor Consulting solution, powered by Oracle BI EE. You can find more information here.
As background, Amway offers people an opportunity to own their own businesses, and offers consumers exclusive products in health and wellness, beauty, and home care. The Amway internal Sales organization is charged with consulting leadership-level Distributors to help them with data insights and ultimately grow their businesses. Until recently, this was a resource-intensive process of gathering and formatting data; in some markets, it took over 40 hours to collect the data and produce the analysis needed for one consultation session.
Amway began its global BI journey in 2006, and since then the company has migrated from multiple technology providers and integration points to an integrated strategic vendor approach. Today, the company has standardized on Oracle technology for BI, and it has achieved cost savings through the retirement of redundant technology platforms. In addition, Mike's organization has led the charge to align disparate BI organizations into a BI Competency Center. The following diagram highlights the simplicity of Amway's standardized architecture today.
Dubbed Distributor Consulting, Amway has developed a BI solution using the Oracle technology stack to help Distributor leaders grow their businesses. The Distributor Consulting solution gives Sales staff more than 40 metrics for delivering data-driven insights on the Distributors and organizations they support. Using Oracle BI EE, Exadata, and Oracle Data Integrator, Amway provides customized and personalized business intelligence, and the Oracle BI EE dashboards were developed by the Amway Sales organization itself, which demonstrates business empowerment of the technology.
Amway is also leveraging the power of BI to drive business growth in all of its markets. A new set of Distributor Segmentation metrics is enabling a better understanding of distributor behaviors. A Global Scorecard that Amway developed provides key metrics at a market and global level for executive-level discussions. Product Analysis teams can now highlight repeat purchase rates, product penetration, and the success of CRM campaigns.
In the words of Dan and Mike, the addition of Exadata 11 months ago has been “a game changer.” Amway has been able to dramatically reduce complexity, improve performance, and increase business productivity and cost savings. For example, the number of indexes on the global data warehouse was reduced from more than 1,000 to fewer than 20. Pulling data for the highest-level distributors or the largest markets in the company can now be done in minutes instead of hours. As a result, IT has shifted from performance tuning and keeping the system operational to higher-value, business-focused activities.
“The distributors that have been introduced to the BI reports have found them extremely helpful. Because they have never had this kind of information before, when they were presented with the reports, they wanted to take action immediately!” - Sales Development Manager in Latin America
Without giving away more, the Amway case study presentation will be one of the unique customer sessions at OpenWorld this year. Speakers Dan Arganbright and Mike Olson have planned an interactive and entertaining session on Monday, October 1 at 12:15pm in Moscone West, Room 2007. I'll see you there!

    Read the article


< Previous Page | 190 191 192 193 194 195 196 197 198 199 200 201  | Next Page >