Search Results

Search found 88206 results on 3529 pages for 'code coverage'.


  • XSL file handling through JavaScript

    - by Zaid Iqbal
    I want to handle my XSL file through my JavaScript code. I made my XSL file, but I want to change it dynamically at run time, for example adding more attributes to the header or the data. My JavaScript code is as follows:

    ```javascript
    <script>
    function loadXMLDoc(dname) {
        if (window.XMLHttpRequest) {
            xhttp = new XMLHttpRequest();
        } else {
            xhttp = new ActiveXObject("Microsoft.XMLHTTP");
        }
        xhttp.open("GET", dname, false);
        xhttp.send("");
        return xhttp.responseXML;
    }

    function displayResult() {
        xml = loadXMLDoc("cdcatalog.xml");
        xsl = loadXMLDoc("cdcatalog.xsl");
        // code for IE
        if (window.ActiveXObject) {
            ex = xml.transformNode(xsl);
            document.getElementById("example").innerHTML = ex;
        }
        // code for Mozilla, Firefox, Opera, etc.
        else if (document.implementation && document.implementation.createDocument) {
            xsltProcessor = new XSLTProcessor();
            xsltProcessor.importStylesheet(xsl);
            resultDocument = xsltProcessor.transformToFragment(xml, document);
            document.getElementById("example").appendChild(resultDocument);
        }
    }
    </script>
    ```

    My XSL file code:

    ```xml
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <html>
          <body>
            <h2>My CD Collection</h2>
            <table border="1">
              <tr bgcolor="#9acd32">
                <th align="left">Title</th>
                <th align="left">Artist</th>
                <th align="left">Country</th>
              </tr>
              <xsl:for-each select="catalog/cd">
                <tr>
                  <td><xsl:value-of select="title" /></td>
                  <td><xsl:value-of select="artist" /></td>
                  <td><xsl:value-of select="country" /></td>
                </tr>
              </xsl:for-each>
            </table>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>
    ```
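    Because loadXMLDoc returns the stylesheet as an XML DOM, you can edit it at run time before handing it to the processor. A minimal sketch of adding a column (the helper and its element positions are my own assumptions based on the exact stylesheet above, and only the Mozilla/XSLTProcessor path is covered; MSXML, the IE path, would need createNode instead of createElementNS):

    ```javascript
    var XSL_NS = "http://www.w3.org/1999/XSL/Transform";

    // Hypothetical helper: add a column for 'fieldName' to the loaded
    // stylesheet DOM before running the transform.
    function addColumn(xsl, fieldName) {
        // Header cell: append a <th> to the first table row.
        var headerRow = xsl.getElementsByTagName("tr")[0];
        var th = xsl.createElement("th");
        th.setAttribute("align", "left");
        th.appendChild(xsl.createTextNode(fieldName));
        headerRow.appendChild(th);

        // Data cell: append a <td><xsl:value-of/></td> to the row
        // inside the <xsl:for-each>.
        var dataRow = xsl.getElementsByTagName("tr")[1];
        var td = xsl.createElement("td");
        var valueOf = xsl.createElementNS(XSL_NS, "xsl:value-of");
        valueOf.setAttribute("select", fieldName.toLowerCase());
        td.appendChild(valueOf);
        dataRow.appendChild(td);
    }

    // Usage: modify the stylesheet DOM, then transform as in displayResult().
    var xsl = loadXMLDoc("cdcatalog.xsl");
    addColumn(xsl, "Year");   // assumes the XML has a <year> element per <cd>
    ```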


  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which combines Go/Obj-C interfaces/protocols with the ability to add virtual methods from any unit, not just the one which defines a class. The idea is to allow Ruby-ish open classes, so you can take a minimalist approach to library development and attach small pieces of functionality as they are actually needed by the whole program. The implementation involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping and pass it off, so you can have performance comparable to C/C++. In this scheme, methods may be added afterwards which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems like it would be very flexible (disregarding the potential for spaghetti code, which can happen with just about any model you use). By wrapping the system calls for binding methods in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call). Is there a valid use case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have access to, I can't quite bring myself to a use case that would justify the overhead, other than "it could make some kinds of code more extensible in future scenarios."
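    For concreteness, here is a minimal C sketch of the plumbing described above (all names are hypothetical; this is the shape of the idea, not an existing library): each class carries a growable method table that any module may append to during initialization, and casting to an interface builds a vtable once by name lookup.

    ```c
    #include <stdlib.h>
    #include <string.h>

    typedef struct { const char *name; void (*fn)(void *self); } Method;

    typedef struct {
        Method *methods;      /* open table: any module may append */
        size_t  count, cap;
    } ClassInfo;

    /* Called during module initialization to attach a late-bound method. */
    void class_add_method(ClassInfo *ci, const char *name, void (*fn)(void *)) {
        if (ci->count == ci->cap) {
            ci->cap = ci->cap ? ci->cap * 2 : 8;
            ci->methods = realloc(ci->methods, ci->cap * sizeof(Method));
        }
        ci->methods[ci->count].name = name;
        ci->methods[ci->count].fn = fn;
        ci->count++;
    }

    /* Go-style interface cast: build the vtable by name lookup once;
     * calls through it afterwards cost about a C++ virtual call. */
    typedef struct { void (**slots)(void *); } VTable;

    VTable *interface_cast(ClassInfo *ci, const char **wanted, size_t n) {
        VTable *vt = malloc(sizeof *vt);
        vt->slots = calloc(n, sizeof vt->slots[0]);
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < ci->count; j++)
                if (strcmp(ci->methods[j].name, wanted[i]) == 0)
                    vt->slots[i] = ci->methods[j].fn;
        return vt; /* a NULL slot means the interface is not (yet) satisfied */
    }
    ```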


  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ “magic” in my hearing. I leapt to LINQ’s defense immediately. Turns out some people don’t realize “magic” can be a pejorative term. I thought LINQ needed demystification. Here’s your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won’t repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let’s tell the story of a LINQ query.

    Compile time

    It begins with some code:

    ```csharp
    IQueryable<Product> products = ...;
    var query = from p in products
                where p.Name == "Widget"
                select p.ProductID;
    foreach (int id in query)
    {
        ...
    ```

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

    ```csharp
    var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
    ```

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., “an object structure that represents the structure of the anonymous function itself” (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

    ```csharp
    var prm1 = Expression.Parameter(typeof(Product), "p");
    var prm2 = Expression.Parameter(typeof(Product), "p");
    var query = Queryable.Select<Product, int>(
        Queryable.Where<Product>(
            products,
            Expression.Lambda<Func<Product, bool>>(
                Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")), prm1)),
        Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
    ```

    If the “products” expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we’ve reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I’m using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

    ```csharp
    public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
    {
        return source.Provider.CreateQuery<T>(
            Expression.Call(/* this method */, source.Expression, Expression.Quote(predicate)));
    }
    ```

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we’re constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree.

    The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent: the provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the “products” collection. Maybe it is a queryable in-memory collection of products:

    ```csharp
    var products = new[]
        { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();
    ```

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a “leaf” expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as “type irritation” because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let’s consider the behavior of an alternative LINQ provider. If “products” is a collection created by a LINQ to SQL provider:

    ```csharp
    var products = new NorthwindDataContext().Products;
    ```

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the “leaf” expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a “process” in StreamInsight 2.1, what is a leaf? To StreamInsight a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

    ```csharp
    using (var server = Server.Connect(...))
    {
        var app = server.Applications["my application"];
        var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
        var query = from i in source where i % 2 == 0 select i;
    }
    ```

    Example 1: defining a source and composing a query

    Let’s look in more detail at what’s happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method.

    Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

    ```csharp
    var local = Observable.Range(0, 10, Scheduler.NewThread);
    var source = app.DefineObservable(() => local); // can’t serialize ‘local’!
    ```

    Example 2: error referencing unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The “define” methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you’ve Defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using “bridge” method overloads (ToEnumerable, ToObservable and To*Streamable).

    Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (which depends on the Interactive Extensions):

    ```csharp
    var source = app.DefineEnumerable((int id) =>
        EnumerableEx.Using(() =>
            new NorthwindDataContext(), context =>
                from p in context.Products
                where p.ProductID == id
                select p.ProductName));
    ```

    Within the definition, StreamInsight has no reason to suspect that it ‘owns’ the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let’s use this source in the context of a StreamInsight process:

    ```csharp
    var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));

    var query = from name in source(1).ToObservable()
                where name == "Widget"
                select name;

    using (query.Bind(sink).Run("process"))
    {
        ...
    }
    ```

    When we run the binding, the source portion which filters on product ID and projects the product name is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It’s incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team


  • Earmarks of a Professional PHP Programmer

    - by Scotty C.
    I'm a 19-year-old student who really REALLY enjoys programming, and I'm hoping to glean from your years of experience here. At present, I'm studying PHP every chance I get, and have been for about 3 years, although I've never taken any formal classes. I'd love to some day be a programmer full time and make a good career of it. My question to you is this: what do you consider to be the earmarks or traits of a professional programmer? Mainly in the field of PHP, but other, more generalized qualifications are also more than welcome, as I think PHP is more of a hobbyist language and may not be the language of choice in the eyes of potential employers. Please correct me if I'm wrong. Above all, I don't want to waste time on something that isn't worthwhile. I'm currently feeling pretty confident in my knowledge of PHP as a language, and I know that I could build just about anything I need and have it "work", but I feel sorely lacking in design concepts and code structure. I can even write object-oriented code, but in my personal opinion, that isn't worth a hill of beans if it isn't organized well. For this reason, I bought Matt Zandstra's book "PHP Objects, Patterns, and Practice" and have been reading that a little every day. Anyway, I'm starting to digress a little here, so back to the original question: what advice would you give to an aspiring programmer who wants to make an impact in this field? Also, on a side note, I've been working on a project with a friend of mine that would give a fairly good idea of where I'm at coding-wise. I'm going to give a link; I don't want anyone to feel as though I'm pushing or spamming here, so don't click it if you don't want to. But if you are interested in giving some feedback there as well, you can see the code on GitHub. I'm known as The Craw there. https://github.com/PureChat/PureChat--Beta-/tree/


  • When and why are you planning to upgrade to Python 3.0?

    - by Tomislav Mutak
    Python 3.0 (a.k.a. Python 3000, Py3k, etc.) is now available. When and why are you planning on porting your project or code to the new Python? Edit: I'm particularly interested in any features that don't exist in 2.6 that make porting worth it. Right now it seems like there are a lot of negatives (x hasn't been ported yet), but I don't know what people see as the positives. Regarding "when", I'm interested in people's thoughts on the idea that the first step to porting is to have "excellent test coverage", which seems a bit optimistic for some projects.


  • How to implement the database for event conditions and item bonuses for a browser-based game

    - by Saifis
    I am currently creating a browser-based game, and was wondering what the standard approach is, database-wise, for modeling diverse conditions and status bonuses. I'm currently considering two cases.

    Event conditions:
    - Needs min 1000 gold
    - Needs min Lv 10
    - Needs a certain item
    - Needs fulfillment of another event

    Status bonuses:
    - Reduces damage by 20%
    - +100 attack points
    - Deflects a certain type of attack

    I wish to be able to continually change these parameters during production and operation, so having them hard-coded isn't the best way. All I could come up with are the following two methods.

    Method 1: Create a table that contains each condition with the attributes it needs, i.e. a model named conditions with all the attributes it would require:

    conditions
    - condition_type (level, money_min, money_max, item, event_aquired)
    - condition_amount
    - prerequisite_condition_id
    - prerequisite_item_id

    Method 2: Write the conditions in a DSL that can be interpreted later in the code. Perhaps something like YAML: have a text area in the settings form and have the code interpret it.

    ```yaml
    condition_foo:
      - condition_type: level
        min_level: 10
      - condition_type: item
        item_id: 2
    ```

    At present Method 2 looks more practical and flexible for future changes, the trade-off being that all the flexibility must be handled on the code side. I'm not too sure how that part is supposed to be done: is it supposed to be hard-coded? In a separate config file? Any help would be appreciated.

    Added: For additional info, it will be implemented with Ruby on Rails.
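    A rough Ruby sketch of the code side of Method 2 (hypothetical names throughout: the CHECKERS table, Player#level and Player#has_item? are mine, and the YAML shape is the one sketched above):

    ```ruby
    require 'yaml'

    # Map each condition_type to a checker; new types are added here
    # without any schema change. (YAML.safe_load is preferable on newer
    # Rubies; plain load is shown for brevity.)
    CHECKERS = {
      'level' => ->(player, c) { player.level >= c['min_level'] },
      'item'  => ->(player, c) { player.has_item?(c['item_id']) }
    }

    def conditions_met?(player, yaml_text, key)
      (YAML.load(yaml_text)[key] || []).all? do |c|
        checker = CHECKERS[c['condition_type']]
        checker ? checker.call(player, c) : false  # unknown type: fail closed
      end
    end
    ```

    The design choice this illustrates: the database (or text area) only stores data, and each condition type's meaning lives in exactly one place in code, so adding a type is one new entry in the table.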


  • How does the event dispatch thread work?

    - by Roman
    With the help of people on Stack Overflow I was able to get the following working code for the simplest GUI countdown (it just displays a window counting down seconds). My main problem with this code is the invokeLater stuff. As far as I understand, invokeLater sends a task to the event dispatch thread (EDT), and then the EDT executes this task whenever it "can" (whatever that means). Is that right? To my understanding the code works like this: In the main method we use invokeLater to show the window (the showGUI method). In other words, the code displaying the window will be executed in the EDT. In the main method we also start the counter, and the counter (by construction) is executed in another thread (so it is not in the event dispatch thread). Right? The counter is executed in a separate thread and periodically calls updateGUI. updateGUI is supposed to update the GUI, and the GUI is working in the EDT, so updateGUI should also be executed in the EDT. That is why the code for updateGUI is enclosed in invokeLater. Is that right? What is not clear to me is why we call the counter from the EDT. It is not executed in the EDT anyway: it immediately starts a new thread and the counter is executed there. So why can't we call the counter in the main method after the invokeLater block?

    ```java
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class CountdownNew {
        static JLabel label;

        // Method which defines the appearance of the window.
        public static void showGUI() {
            JFrame frame = new JFrame("Simple Countdown");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            label = new JLabel("Some Text");
            frame.add(label);
            frame.pack();
            frame.setVisible(true);
        }

        // Define a new thread in which the countdown is counting down.
        public static Thread counter = new Thread() {
            public void run() {
                for (int i = 10; i > 0; i = i - 1) {
                    updateGUI(i, label);
                    try { Thread.sleep(1000); } catch (InterruptedException e) {}
                }
            }
        };

        // A method which updates GUI (sets a new value of JLabel).
        private static void updateGUI(final int i, final JLabel label) {
            SwingUtilities.invokeLater(
                new Runnable() {
                    public void run() {
                        label.setText("You have " + i + " seconds.");
                    }
                }
            );
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    showGUI();
                    counter.start();
                }
            });
        }
    }
    ```
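    For comparison, javax.swing.Timer is the idiomatic way to drive a Swing countdown: its callback already runs on the EDT, so no invokeLater is needed for the label update. A minimal sketch (an alternative to the thread above, to be run after showGUI() has created the label; not the asker's code):

    ```java
    import javax.swing.Timer;

    // Fires on the EDT once per second; stops itself at zero.
    Timer timer = new Timer(1000, new java.awt.event.ActionListener() {
        int i = 10;
        public void actionPerformed(java.awt.event.ActionEvent e) {
            label.setText("You have " + i + " seconds.");
            if (--i < 0) ((Timer) e.getSource()).stop();
        }
    });
    timer.start();
    ```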


  • TDD: Write a separate test for object initialization, or rely on other tests to exercise it?

    - by DXM
    This seems to be a common pattern emerging in some of the tests I've worked on lately. We have a class, quite often legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad", and all we put in there is the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviors, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you drop it and let the other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems that in the case of loading, this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++, and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
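    One middle ground, sketched here in CppUnit (hypothetical names, assuming a legacy `Loader` class whose `load()` reports success): keep one explicit, nearly empty initialization test, and push the shared loading into the fixture's setUp() so the other tests state their real precondition without duplicating the exercise.

    ```cpp
    #include <cppunit/extensions/HelperMacros.h>

    class LoaderTest : public CppUnit::TestFixture {
        CPPUNIT_TEST_SUITE(LoaderTest);
        CPPUNIT_TEST(testLoad);          // explicit: localizes initialization breakage
        CPPUNIT_TEST(testOtherBehavior);
        CPPUNIT_TEST_SUITE_END();

        Loader loader;   // hypothetical legacy class

    public:
        void setUp() {
            // Shared precondition: every test below runs against a loaded object.
            CPPUNIT_ASSERT(loader.load("test_data.cfg"));
        }

        void testLoad() {
            // Intentionally thin: if setUp() fails, this is the first red
            // test you look at, which points you straight at initialization.
            CPPUNIT_ASSERT(loader.isLoaded());
        }

        void testOtherBehavior() {
            CPPUNIT_ASSERT_EQUAL(42, loader.someValue());   // hypothetical
        }
    };

    CPPUNIT_TEST_SUITE_REGISTRATION(LoaderTest);
    ```

    If loading breaks, the whole suite still fails, but the failures all occur in setUp(), which makes the cause obvious at a glance.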


  • Help with converting an XML into a 2D level (Actionscript 3.0)

    - by inzombiak
    I'm making a little platformer and wanted to use Ogmo to create my level. I've gotten everything to work except that the level my code generates is not the same as what I see in Ogmo. I've checked the array and it fits the level in Ogmo, but when I loop through it with my code I get the wrong thing. I've included my code for creating the level below. EDIT: I tried to add images of what I get and what I'm supposed to get, but I couldn't get them to display properly. Also, if any of you know of better level editors, please let me know.

    ```actionscript
    xmlLoader.addEventListener(Event.COMPLETE, LoadXML);
    xmlLoader.load(new URLRequest("Level1.oel"));

    function LoadXML(e:Event):void {
        levelXML = new XML(e.target.data);
        xmlFilter = levelXML.*;
        for each (var levelTest:XML in levelXML.*) {
            crack = levelTest;
        }
        levelArray = crack.split('');
        trace(levelArray);
        count = 0;
        for (i = 0; i <= 23; i++) {
            for (j = 0; j <= 35; j++) {
                if (levelArray[i*36 + j] == 1) {
                    block = new Platform;
                    s.addChild(block);
                    block.x = j*20;
                    block.y = i*20;
                    count++;
                    trace(i);
                    trace(block.x);
                    trace(j);
                    trace(block.y);
                }
            }
        }
        trace(count);
    }
    ```


  • How to scroll hex tiles?

    - by Chris Evans
    I don't seem to be able to find an answer to this one. I have a map of hex tiles and wish to implement scrolling. Code at present:

    ```javascript
    drawTilemap = function() {
        actualX = Math.floor(viewportX / hexWidth);
        actualY = Math.floor(viewportY / hexHeight);
        offsetX = -(viewportX - (actualX * hexWidth));
        offsetY = -(viewportY - (actualY * hexHeight));

        for (i = 0; i < 10; i++) {
            for (j = 0; j < 10; j++) {
                if (i % 2 == 0) {
                    x = (hexOffsetX * i) + offsetX;
                    y = j * sourceHeight;
                } else {
                    x = (hexOffsetX * i) + offsetX;
                    y = hexOffsetY + (j * sourceHeight);
                }
                var tileselected = mapone[actualX + i][j];
                drawTile(x, y, tileselected);
            }
        }
    }
    ```

    The code I've written so far only handles X movement, and it doesn't yet work the way it should. If you look at my example on jsfiddle.net below, you will see that when moving to the right, once you get to the next hex tile along, there is a problem with the X position and the calculations that depend on it. It seems a simple bit of maths is missing, but unfortunately I've been unable to find an example that includes scrolling. http://jsfiddle.net/hd87E/1/ - make sure there is no horizontal scroll bar, then try moving right using the right arrow on the keyboard. You will see the problem as you reach the end of the first tile. Apologies for the horrid code, I'm learning! Cheers


  • How do you communicate improvements in tools and process to the development team?

    - by birryree
    Hi everyone, my team does a lot of internal tooling and infrastructure work. You can think of us as a small-scale version of the teams at Facebook, Etsy, Netflix, etc. who build all the infrastructure for scaling their services up to thousands or tens of thousands of servers and supporting millions of users. Lately we've been running full steam ahead improving many of the tools we use internally, like tools for automatically creating new servers, setting up new application instances, etc. An end result of this has been decreased developer frustration, but increased 'ignorance' among most of the developer team about how to use our tools correctly and effectively. More often than not, my team will be asked by other teams to help them use the tools.

    Solutions we've thought up or things already in place:

    - All our code is relatively simple and self-explanatory, with good comments where necessary, so developers could read the scripts. Counterargument: you can guess this isn't a particularly good idea, having people read our tools' code to figure out how to use them.
    - All our code is committed to Subversion with very detailed commit messages about changes; developers could read the commit emails. Counterargument: expect the developers to read all our commits? Ludicrous.
    - Wiki: we have an internal company wiki that we try to maintain with up-to-date information, but as we are moving so fast, the wiki has to keep pace as well. Counterargument: as mentioned, we move fast, with more improvements to our tools landing daily; this still relies on people reading something that might change constantly.
    - Email the team? We could email the team when we have a glut of improvements to communicate.

    So as you can see, we are trying to find new ideas and explore options we haven't thought of yet. Has anyone else been in a similar situation and have some guidance?


  • Using Telerik MVC with your own custom jQuery and or other plug-ins

    - by Steve Clements
    If you are using MVC it might be worth checking out the Telerik controls (http://demos.telerik.com/aspnet-mvc); they are free if you are doing an internal or "not for profit" application. If you do choose to use them, however, you could come up against a little problem I had: using the Telerik controls alongside your own custom jQuery and/or other plug-ins. In my case I was using the jQuery UI dialog. It kept throwing an error where I was setting my div to a dialog:

    ```javascript
    $("#textdialog").dialog({
    ```

    The problem is that when you use the Telerik MVC stuff you need to call ScriptRegistrar:

    ```
    @Html.Telerik().ScriptRegistrar()
    ```

    in order to set up the JavaScript for the controls. By default this adds a reference to jQuery, and if you have already added a reference to jQuery because you are using it elsewhere, this causes a problem. I found the solution here, and it was to change the above ScriptRegistrar call to this:

    ```
    @Html.Telerik().ScriptRegistrar().jQuery(false).DefaultGroup(g => g.Combined(true).Compress(true));
    ```

    If you come across this one on Stack Overflow, it won't work. In my case the HtmlEditor would render no problem, but was unusable. Which is the same as someone else found when using the tab control: they went to the bother of re-writing the ScriptRegistrar. Not for me, that one!!


  • GAC and SharePoint

    - by Sahil Malik
    GAC and SharePoint have had a funny love & hate relationship. In 2007, SharePoint and the GAC fell in love. The GAC was about the only practical alternative for deploying custom code. Here is a dirty secret: based on completely unscientific and unfounded research, I can tell you that 99% of SharePoint code written ended up in the GAC. Yes, I know GAC rhymes with hack, crack, smack, and even mac, but still it was the better alternative. You could write CAS policies and put your DLLs in the bin folder, but it was mighty inconvenient to both write and maintain. Most of us never did it. There are still some SharePoint developers out there insisting on the bin approach; well, get over it, you're not winning the fight. CAS is about as outdated as Samantha Fox anyway. It was hot at one point though. So all that code that ended up in the GAC caused lots and lots of headache. Clearly, Microsoft had to get us off the crack, uhh .. I mean the GAC.


  • Implementing Circle Physics in Java

    - by Shijima
    I am working on a simple physics-based game where 2 balls bounce off each other. I am following a tutorial, "2-Dimensional Elastic Collisions Without Trigonometry", for the collision reactions, and I'm using Vector2 from the libGDX library to handle vectors. I am a bit confused about how to implement step 6 of the tutorial in Java. Below is my current code. Please note that the code strictly follows the tutorial and there are redundant pieces which I plan to refactor later. Note: references to `this` refer to ball 1, and `ball` refers to ball 2.

    ```java
    /*
     * Step 1
     *
     * Find the Normal, Unit Normal and Unit Tangential vectors
     */
    Vector2 n = new Vector2(this.position[0] - ball.position[0], this.position[1] - ball.position[1]);
    Vector2 un = n.cpy().nor();   // libGDX spells this nor(); cpy() avoids mutating n in place
    Vector2 ut = new Vector2(-un.y, un.x);

    /*
     * Step 2
     *
     * Create the initial (before collision) velocity vectors
     */
    Vector2 v1 = this.velocity;
    Vector2 v2 = ball.velocity;

    /*
     * Step 3
     *
     * Resolve the velocity vectors into normal and tangential components
     */
    float v1n = un.dot(v1);
    float v1t = ut.dot(v1);
    float v2n = un.dot(v2);
    float v2t = ut.dot(v2);

    /*
     * Step 4
     *
     * Find the new tangential velocities after collision
     * (unchanged for a frictionless collision)
     */
    float v1tPrime = v1t;
    float v2tPrime = v2t;

    /*
     * Step 5
     *
     * Find the new normal velocities
     * (note the parentheses: the whole numerator is divided by the mass sum)
     */
    float v1nPrime = (v1n * (this.mass - ball.mass) + 2 * ball.mass * v2n) / (this.mass + ball.mass);
    float v2nPrime = (v2n * (ball.mass - this.mass) + 2 * this.mass * v1n) / (this.mass + ball.mass);

    /*
     * Step 6
     *
     * Convert the scalar normal and tangential velocities into vectors???
     */
    ```
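    Step 6 of that tutorial is just scaling the unit vectors by the new scalar components: v1' = v1nPrime * un + v1tPrime * ut, and likewise for ball 2. A sketch of the missing step, using the variables above (note that Vector2.scl() mutates in place, hence the cpy() calls; on very old libGDX versions the method was named mul() instead of scl()):

    ```java
    /*
     * Step 6 (sketch)
     *
     * Convert the scalar normal and tangential velocities back into
     * vectors by scaling the unit normal and unit tangent.
     */
    Vector2 v1nVec = un.cpy().scl(v1nPrime);
    Vector2 v1tVec = ut.cpy().scl(v1tPrime);
    Vector2 v2nVec = un.cpy().scl(v2nPrime);
    Vector2 v2tVec = ut.cpy().scl(v2tPrime);

    // Step 7 in the tutorial: the final velocities are the vector sums.
    this.velocity = v1nVec.add(v1tVec);
    ball.velocity = v2nVec.add(v2tVec);
    ```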


  • How to create a copy of an instance without having access to private variables

    - by Jamie
    I'm having a bit of a problem. Let me show you the code first:

    ```java
    public class Direction {
        private CircularList xSpeed, zSpeed;
        private int[] dirSquare = {-1, 0, 1, 0};

        public Direction(int xSpeed, int zSpeed) {
            this.xSpeed = new CircularList(dirSquare, xSpeed);
            this.zSpeed = new CircularList(dirSquare, zSpeed);
        }

        public Direction(Point dirs) {
            this(dirs.x, dirs.y);
        }

        public void shiftLeft() {
            xSpeed.shiftLeft();
            zSpeed.shiftRight();
        }

        public void shiftRight() {
            xSpeed.shiftRight();
            zSpeed.shiftLeft();
        }

        public int getXSpeed() {
            return this.xSpeed.currentValue();
        }

        public int getZSpeed() {
            return this.zSpeed.currentValue();
        }
    }
    ```

    Now let's say I have an instance of Direction:

    ```java
    Direction dir = new Direction(0, 0);
    ```

    As you can see in the code of Direction, the arguments fed to the constructor are passed directly to some other class. One cannot be sure they stay the same, because the methods shiftRight() and shiftLeft() could have been called, which change those numbers. My question is: how do I create a completely new instance of Direction that is basically a copy (not by reference) of dir? The only way I see is to create public methods in both CircularList (I can post the code of this class, but it's not relevant) and Direction that return the variables needed to create a copy of the instance, but this solution seems really dirty, since those numbers are not supposed to be touched after being fed to the constructor, and therefore they are private.
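    One thing worth knowing here: in Java, private access is per class, not per instance, so Direction itself can read another Direction's private fields. A copy constructor therefore avoids exposing anything publicly. A sketch, to be added inside Direction (it assumes CircularList is given a matching copy constructor, which is not shown in the question):

    ```java
    // Inside Direction: copies another instance's private state directly.
    public Direction(Direction other) {
        this.xSpeed = new CircularList(other.xSpeed); // assumes a CircularList copy ctor
        this.zSpeed = new CircularList(other.zSpeed);
        this.dirSquare = other.dirSquare.clone();
    }
    ```

    Usage: `Direction copy = new Direction(dir);`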


  • User-defined copy ctor, and copy ctors further down the chain: compiler bug, or programmer's brain bug?

    - by J.Colmsee
    Hi. I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct:

    ```cpp
    struct BulletFXData {
        int time_next_fx_counter;
        int next_fx_steps;
        Particle particles[2]; // this is the interesting one
        ParticleManager::ParticleId particle_id[2];
    };
    ```

    The member `Particle particles[2]` has a self-made kind of smart pointer in it (a resource-counted texture class). This smart pointer has a default constructor that initializes the pointer to 0 (but that is not important). I also have another struct, containing the BulletFXData struct:

    ```cpp
    struct BulletFX {
        BulletFXData data;
        BulletFXRenderFunPtr render_fun_ptr;
        BulletFXUpdateFunPtr update_fun_ptr;
        BulletFXExplosionFunPtr explode_fun_ptr;
        BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr;

        BulletFX(
            BulletFXData data,
            BulletFXRenderFunPtr render_fun_ptr,
            BulletFXUpdateFunPtr update_fun_ptr,
            BulletFXExplosionFunPtr explode_fun_ptr,
            BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr)
            : data(data),
              render_fun_ptr(render_fun_ptr),
              update_fun_ptr(update_fun_ptr),
              explode_fun_ptr(explode_fun_ptr),
              lifetime_over_fun_ptr(lifetime_over_fun_ptr)
        {
        }

        /*
        // USER-DEFINED copy ctor. If it's defined, things go crazy:
        BulletFX(const BulletFX& rhs)
            : data(data), // this line seems to do a plain memory copy without calling the right ctors
              render_fun_ptr(render_fun_ptr),
              update_fun_ptr(update_fun_ptr),
              explode_fun_ptr(explode_fun_ptr),
              lifetime_over_fun_ptr(lifetime_over_fun_ptr)
        {
        }
        */
    };
    ```

    If I use the user-defined copy ctor, my smart-pointer class goes crazy, and it seems that the copy ctor / assignment operator aren't called as they should be. So, does this all make sense? It seems as if my own copy ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code; a small example could have illustrated it too, but you guys often ask for the real code, so here it is :D

    EDIT, more info: `typedef unsigned int ParticleId;` Particle has no user-defined copy ctor, but has a member of type:

    ```cpp
    struct Particle {
        // ...
        Resource<Texture> tex_res;
        // ...
    };
    ```

    Resource is a smart-pointer class and has all ctors defined (also the assignment operator), and it seems that Resource is being copied bitwise.

    EDIT: Henrik solved it. data(data) is stupid, of course; it should be rhs.data!!! Sorry for the huge amount of code with a very little bug in it!!! (Guess you shouldn't code at 1 in the morning :D)
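    For the record, the fixed copy ctor simply names the source object in every initializer (data(data) initializes the member from itself, i.e. from uninitialized memory, which is why it looked like a bitwise copy):

    ```cpp
    BulletFX(const BulletFX& rhs)
        : data(rhs.data), // now invokes BulletFXData's copy ctor, which copies each Particle properly
          render_fun_ptr(rhs.render_fun_ptr),
          update_fun_ptr(rhs.update_fun_ptr),
          explode_fun_ptr(rhs.explode_fun_ptr),
          lifetime_over_fun_ptr(rhs.lifetime_over_fun_ptr)
    {
    }
    ```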


  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are directly accessed; that is, there is no API. For example, assume the following:

    ```c
    typedef struct {
        int x;
        int y;
        struct {
            int a;
            int b;
        } z;
    } myStruct;

    myStruct s;
    ```

    Then a process might access this structure as:

    ```c
    myStruct *s = mapGlobalMem();
    ```

    And use it as:

    ```c
    int tmpX = s->x;
    ```

    The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access it. The problem is, these references are sprinkled throughout the code. I need a way to parse the code and identify global structure references that will need to be refactored. I've looked into using ANTLR to create a parser that will identify references to a small set of structures and enter them into a custom symbol table. I could then use this symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? Of course, I'm looking for a programmatic approach; there are far too many source files to examine each one visually. This is all ordinary ANSI C. Nothing else.


  • MSBuild Starter Kits - i.e. Copy and just start modding...

    - by vdh_ant
    Hi guys, just wondering if anyone knows of any MSBuild starter kits out there. What I mean by starter kits is that, from the looks of it, most builds do kinda the same sort of steps with minor changes here and there (i.e. most builds would run tests, measure coverage, zip up the results, produce a report, deploy, etc.). Also, what most people in general want from a CI build, test build or release build is mostly the same with minor changes here and there. Now don't get me wrong, I think most scripts end up fairly different in the end, but I can't help thinking that most start out life being fairly similar. Hence, does anyone know of any "starter kits" that have a dev/CI/test/release build with the common tasks that most people would want, which you can just start changing and modifying? Cheers, Anthony
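    I don't know of a canonical kit, but the common skeleton is small enough to sketch. A hypothetical starting point (target and property names are invented, and the Exec commands are placeholders for whatever test/zip tools you use):

    ```xml
    <!-- build.proj: skeletal dev/CI layout; invoke as msbuild build.proj /t:CI -->
    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
        <ArtifactsDir>$(MSBuildProjectDirectory)\artifacts</ArtifactsDir>
      </PropertyGroup>

      <Target Name="Compile">
        <MSBuild Projects="MySolution.sln" Properties="Configuration=$(Configuration)" />
      </Target>

      <Target Name="Test" DependsOnTargets="Compile">
        <!-- placeholder test runner -->
        <Exec Command="nunit-console.exe MyTests.dll" />
      </Target>

      <Target Name="Package" DependsOnTargets="Test">
        <MakeDir Directories="$(ArtifactsDir)" />
        <!-- placeholder archiver -->
        <Exec Command="zip -r $(ArtifactsDir)\build.zip bin" />
      </Target>

      <!-- The dev/CI/release flavors just chain the shared targets. -->
      <Target Name="Dev" DependsOnTargets="Compile;Test" />
      <Target Name="CI"  DependsOnTargets="Compile;Test;Package" />
    </Project>
    ```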


  • How do I do high quality scaling of an image?

    - by pbhogan
    I'm writing some code to scale a 32-bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and, most importantly, the quality of the scaled image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and by my routine, and they're miles apart in quality. I've searched Google Code Search and scoured the source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.), but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :) I have seen a similar question asked on Stack Overflow before, but it wasn't really answered in this way. I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo code... anything to help me learn and do.

    Here's what I'm looking for:

    1. No assembler (I'm writing very portable code for multiple processor types).
    2. No dependencies on external libraries.
    3. I am primarily concerned with scaling DOWN, but will also need to write a scale-up routine later.
    4. Quality of the result and clarity of the algorithm are most important (I can optimize later).

    My routine essentially takes the following form:

    ```cpp
    DrawScaled(uint32 *src, uint32 *dst,
               src_x, src_y, src_w, src_h,
               dst_x, dst_y, dst_w, dst_h);
    ```

    Thanks!

    UPDATE: To clarify, I need something more advanced than a box resample for downscaling, which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse of a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels, combined with a weighting algorithm that keeps things sharp).

    EXAMPLE: Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want, on a 256x256 bitmap scaled to 55x55. (And finally: the original 256x256 image.)
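    Since the question asks for the algorithm rather than a library, here is a compact sketch of the standard answer to the UPDATE: a separable resampler with a Catmull-Rom cubic kernel whose support is widened by the scale factor when minifying, so each destination pixel is a weighted sum of all contributing source pixels. All names are mine, it assumes tightly packed RGBA8 buffers, and it is a clarity-first starting point (no fixed-point optimizations, no gamma handling):

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Catmull-Rom cubic kernel, support [-2, 2].
    static float cubic(float x) {
        x = std::fabs(x);
        if (x < 1.0f) return 1.5f * x * x * x - 2.5f * x * x + 1.0f;
        if (x < 2.0f) return -0.5f * x * x * x + 2.5f * x * x - 4.0f * x + 2.0f;
        return 0.0f;
    }

    // Resample one scanline (row or column) of interleaved RGBA8 data.
    // 'stride' is the distance in bytes between successive pixels, which
    // lets the same routine walk rows (stride 4) or columns (stride dw*4).
    static void resample1D(const uint8_t* src, uint8_t* dst,
                           int srcLen, int dstLen, int stride) {
        float scale = (float)srcLen / (float)dstLen;
        float filterScale = std::max(scale, 1.0f); // widen kernel when minifying
        float support = 2.0f * filterScale;
        for (int d = 0; d < dstLen; ++d) {
            float center = (d + 0.5f) * scale - 0.5f;
            int lo = (int)std::floor(center - support);
            int hi = (int)std::ceil(center + support);
            float sum[4] = {0, 0, 0, 0}, wsum = 0.0f;
            for (int s = lo; s <= hi; ++s) {
                float w = cubic((s - center) / filterScale);
                if (w == 0.0f) continue;
                int sc = std::clamp(s, 0, srcLen - 1); // clamp at the edges
                wsum += w;
                for (int c = 0; c < 4; ++c)
                    sum[c] += w * src[sc * stride + c];
            }
            for (int c = 0; c < 4; ++c)
                dst[d * stride + c] = (uint8_t)std::clamp(
                    (int)std::lround(sum[c] / wsum), 0, 255);
        }
    }

    // Separable scale of a tightly packed RGBA8 image.
    void ScaleRGBA(const uint8_t* src, int sw, int sh,
                   uint8_t* dst, int dw, int dh) {
        std::vector<uint8_t> tmp((size_t)dw * sh * 4);
        for (int y = 0; y < sh; ++y)  // horizontal pass, one row at a time
            resample1D(src + (size_t)y * sw * 4, tmp.data() + (size_t)y * dw * 4,
                       sw, dw, 4);
        for (int x = 0; x < dw; ++x)  // vertical pass, one column at a time
            resample1D(tmp.data() + (size_t)x * 4, dst + (size_t)x * 4,
                       sh, dh, dw * 4);
    }
    ```

    The negative lobes of the cubic kernel are what keep edges sharp relative to a box filter; the division by wsum renormalizes the kernel where it is clamped at the image borders.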


  • Computer graphics: programatically create duotone (or separations)

    - by TarGz
    There is a special kind of image called "duotone" which has just two channels. It is mostly used when you want to achieve higher-quality reproduction, e.g. on a printing press with two inks (black and gray). My question is: I have a normal grayscale image; how do I convert it to duotone? I know I can tweak the curves in Photoshop, but that is not what I'm asking; rather, how do I do it programmatically? Perhaps there is a library which can do just that? What about "dot gain compensation"? "Total ink coverage"?
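    The core operation is small: each ink gets its own transfer curve applied to the inverted gray value (ink coverage), producing one plate per ink. A minimal C sketch (the curves below are invented placeholders; real duotone curves, dot-gain compensation and ink limits would be tuned per press):

    ```c
    #include <math.h>
    #include <stddef.h>

    /* Convert a grayscale image (0 = black, 255 = white) into two ink
     * plates (0 = no ink, 255 = full ink) via per-ink transfer curves. */
    void gray_to_duotone(const unsigned char *gray, size_t n,
                         unsigned char *black_plate, unsigned char *gray_plate) {
        for (size_t i = 0; i < n; i++) {
            double coverage = 1.0 - gray[i] / 255.0;   /* total darkness wanted */

            /* Placeholder curves: black ink carries the shadows (gamma > 1
             * pushes it toward the dark end); gray ink carries the midtones. */
            double black = pow(coverage, 2.0);
            double grayk = coverage * (1.0 - black);   /* what black didn't print */

            /* A crude "total ink coverage" cap, e.g. 170% of one plate: */
            if (black + grayk > 1.7) grayk = 1.7 - black;

            black_plate[i] = (unsigned char)(black * 255.0 + 0.5);
            gray_plate[i]  = (unsigned char)(grayk * 255.0 + 0.5);
        }
    }
    ```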


  • How are DX and DY coordinates calculated in flash?

    - by Meganlee1984
    I'm trying to update a client's site and the original developer left almost no instructions. The content is all updated through XML. Here is a sample of the XML:

    ```xml
    <FOLDER NAME="COMMERCIAL">
      <GALLERY NAME="LOCANDA VERDE: New York">
        <IMG HEIGHT="500" CAPTION="Some photo" WIDTH="393" SRC="locanda1.jpg" DX="60" DY="40" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
        <IMG HEIGHT="300" CAPTION="Some photo" WIDTH="450" SRC="locanda2.jpg" DX="160" DY="80" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
        <IMG HEIGHT="500" CAPTION="Some photo" WIDTH="393" SRC="locanda3.jpg" DX="80" DY="260" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
        <IMG HEIGHT="500" CAPTION="Some photo" WIDTH="393" SRC="locanda4.jpg" DX="120" DY="60" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
        <IMG HEIGHT="393" CAPTION="Some photo" WIDTH="500" SRC="locanda5.jpg" DX="180" DY="100" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
        <IMG HEIGHT="500" CAPTION="Some photo" WIDTH="433" SRC="locanda6.jpg" DX="60" DY="140" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
        <IMG HEIGHT="500" CAPTION="Some photo" WIDTH="393" SRC="locanda7.jpg" DX="100" DY="200" LINKTEXT="" LINKURL="" INFOTEXT="SOHO, NEW YORK"/>
      </GALLERY>
    ```

    It relates to this page: http://meyerdavis.com/ (click Commercial, then Locanda Verde New York). The XML file pulls a JPG from two places: one is a thumbnail (all 60x60) and one is the bigger-sized image. The issue I'm having is that I can't figure out how the DX and DY coordinates are generated for each item. Any help would be much appreciated.

    Edit: Here's the code from the comment below.

    ```actionscript
    platformblock.expandspeed = 0.02;
    platformblock.h = 450 - platformblock.dy1;
    //platformblock.h = 402;
    platformblock.dy2 = 0;
    platformblock.onResize();
    /**/
    platformblock.onEnterFrame = function() {
        this.dy1 += (48 - this.dy1) * this.expandspeed;
        this.h = 450 - this.dy1;
        if (this.expandspeed < 0.3) {
            this.expandspeed += 0.02;
        }
        if (Math.abs(this.dy1 - 48) < 0.2) {
            this.dy1 = 48;
        }
        if (this.platform._height == 402 && this.dy1 == 48) {
            this.h = null;
            this.onResize();
            this.onEnterFrame = null;
        }
    }
    platformblock._resizeto(800, 402, _root.play, _root, 0.08);
    titleblockcontainer.play();
    /**/
    stop();
    ```


  • Find non-ascii characters from a UTF-8 string

    - by user10607
    I need to find the non-ASCII characters in a UTF-8 string. My understanding: UTF-8 is a superset of ASCII in which the values 0-127 are ASCII characters. So if, in a UTF-8 string, a character's value is not between 0 and 127, then it is not an ASCII character, right? Please correct me if I'm wrong here. On that understanding I have written the following code in C (note: I'm using the Ubuntu gcc compiler; the UTF-8 string is "xvab c", where the second character is a non-ASCII character that does not survive copy-paste here):

    ```c
    #include <stdio.h>
    #include <ctype.h>

    int main(void) {
        long i;
        char arr[] = "xvab c";   /* the second character is non-ASCII */
        printf("length : %lu \n", sizeof(arr));
        for (i = 0; i < sizeof(arr); i++) {
            char ch = arr[i];
            if (isascii(ch))
                printf("Ascii character %c\n", ch);
            else
                printf("Not ascii character %c\n", ch);
        }
        return 0;
    }
    ```

    Which prints output like:

    ```
    length : 9
    Ascii character x
    Not ascii character
    Not ascii character ?
    Not ascii character ?
    Ascii character a
    Ascii character b
    Ascii character
    Ascii character c
    Ascii character
    ```

    To the naked eye the length of "xvab c" seems to be 6, but in the code it comes out as 9. The correct answer for "xvab c" is 1, i.e. it has only 1 non-ASCII character, but in the above output it comes out as 3 ("Not ascii character" three times). How can I find the non-ASCII characters in a UTF-8 string correctly? Please guide me on the subject.
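    What the output shows: sizeof counts bytes, not characters (including the trailing NUL), and the one non-ASCII character evidently occupies three UTF-8 bytes, hence 1 + 3 + 4 + 1 = 9 bytes and three "not ASCII" hits. To count characters rather than bytes, use the fact that UTF-8 continuation bytes always have the form 10xxxxxx, so only count bytes that start a sequence. A sketch:

    ```c
    #include <stddef.h>

    /* Count non-ASCII characters (not bytes) in a NUL-terminated UTF-8
     * string. Lead bytes of multi-byte sequences have their top two bits
     * set to 11; continuation bytes (0x80..0xBF) are skipped so that each
     * multi-byte character is counted exactly once. */
    size_t count_non_ascii(const char *s) {
        size_t count = 0;
        for (const unsigned char *p = (const unsigned char *)s; *p; p++) {
            if (*p >= 0x80 && (*p & 0xC0) != 0x80)
                count++;
        }
        return count;
    }
    ```

    For the string above this returns 1, matching the expected answer. (The cast to unsigned char also sidesteps passing a negative value to isascii.)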


  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th at 12 noon EDT (GMT-4) for a presentation called "Troubleshooting SQL Server with PowerShell". It will be in English, so please make allowances for this. I'm sure you're aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides. Just code; pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation, "Troubleshooting SQL Server with PowerShell - The Next Level": It is normal for us to have to face poorly performing queries or even complete failures in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly configured systems and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as DBAs. This will include a variety of activities, including gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code for multiple servers and run the data collection in asynchronous mode.
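    A taste of the kind of thing the session describes: sampling a performance counter from several servers at once with background jobs. (A rough sketch only; the counter path and server names are placeholders, not material from the session.)

    ```powershell
    # Start one background job per server, each sampling a counter.
    $servers = 'SQL01', 'SQL02', 'SQL03'
    $jobs = foreach ($s in $servers) {
        Start-Job -ArgumentList $s -ScriptBlock {
            param($server)
            Get-Counter -ComputerName $server `
                        -Counter '\Processor(_Total)\% Processor Time' `
                        -SampleInterval 5 -MaxSamples 12
        }
    }

    # Collect the results once all jobs finish (asynchronous collection).
    $results = $jobs | Wait-Job | Receive-Job
    $results.CounterSamples | Select-Object Path, CookedValue
    ```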


  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In "When to use C over C++, and C++ over C?" there is a statement with regard to code size and C++ exceptions. Jerry answers (among other points): "(...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)", to which I asked why that would be. Jerry responded: "the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)" I do not really doubt this on a technical, real-world level. Therefore I'm interested (purely out of curiosity) in hearing of real-world cases where a project chose C++ as its language and then chose to disable exceptions. (Not merely "not using" exceptions in user code, but disabling them in the compiler, so that you can't throw or catch exceptions.) Why did such a project choose to do so (still using C++ and not C, but with no exceptions)? What are/were the (technical) reasons?

    Addendum: For those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled:

    - STL collections (vector, ...) do not work properly (allocation failure cannot be reported)
    - new can't throw
    - Constructors cannot fail
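    As an illustration of the second bullet: with exceptions disabled (e.g. compiled with something like g++ -fno-exceptions), a failed plain new has no way to report the error to code that cannot catch, so such codebases typically standardize on nothrow new and checked returns. A minimal sketch (names are mine):

    ```cpp
    #include <new>      // std::nothrow
    #include <cstdio>

    struct Widget { int x; };

    Widget *make_widget() {
        // With exceptions disabled, allocation failure must be a checkable
        // return value rather than a thrown std::bad_alloc.
        Widget *w = new (std::nothrow) Widget;
        if (!w) {
            std::fputs("allocation failed\n", stderr);
            return 0;   // every caller now has to check
        }
        return w;
    }
    ```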


  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT-based application framework. I have some ideas about them, but being new to the field of web development I feel they are far from ideal. I'd appreciate a few comments and suggestions. Here are my questions:

    1. I am developing a framework which will allow third parties to embed GWT applications into our website and communicate with them using simple iframe postMessage. All these third-party modules are going to use our SDK, which is also GWT-based. The problem is that even though all the modules will use the same codebase, there is going to be a massive overhead in the amount of duplicate JavaScript code (i.e. our common SDK codebase, which is quite large) being downloaded onto the client's machine. This is highly redundant and problematic, not only due to the sheer size of the duplicate code but also because subsequent updates of the SDK would require the modules to be recompiled, which is going to create a DLL-hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way to have some static GWT code (i.e. the SDK) which the dynamic GWT module refers to (even if it lies on a different domain) and have it all work happily?

    2. The other part of the problem lies in providing a robust scripting front end to the SDK. At first it appears trivial, since JavaScript itself is a scripting language. However, I do not know how to load and call a piece of pure JavaScript code at runtime. I am willing to put restrictions on the target JavaScript (i.e. requiring a function main and a unique namespace or something). Furthermore, the JavaScript will come as a string from a database and not as a full URL. If it's doable in JavaScript, how does one get this right in GWT, i.e. forcing the compiler to emit a certain function in the generated JavaScript? This, I believe, can be made less of a problem by having a stub JavaScript file with all the right requirements which just loads up the GWT-generated JavaScript.

    Is any of this possible at all? I generally hate to be this verbose, but I hope to find a quick solution to the problem, as it's holding up my progress. I'd highly appreciate any comments, suggestions and experiences.
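    On the embedding side of question 1, the iframe postMessage handshake itself is standard and small. A sketch of the host/module plumbing (the origins, URL and message shapes are invented for illustration):

    ```javascript
    // Host page: embed a third-party module and send it a message.
    var frame = document.createElement('iframe');
    frame.src = 'https://modules.example.com/some-module/';  // hypothetical URL
    document.body.appendChild(frame);

    frame.onload = function () {
        frame.contentWindow.postMessage(
            JSON.stringify({ type: 'init', user: 'abc' }),
            'https://modules.example.com');  // restrict the target origin
    };

    // Module (inside the iframe): listen, validate the sender, reply.
    window.addEventListener('message', function (e) {
        if (e.origin !== 'https://host.example.com') return;  // never skip this check
        var msg = JSON.parse(e.data);
        if (msg.type === 'init') {
            e.source.postMessage(JSON.stringify({ type: 'ready' }), e.origin);
        }
    }, false);
    ```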

