Search Results

Search found 346 results on 14 pages for 'conversions'.


  • Is there an algorithm for converting quaternion rotations to Euler angle rotations?

    - by Will Baker
    Is there an existing algorithm for converting a quaternion representation of a rotation to an Euler angle representation? The rotation order for the Euler representation is known and can be any of the six permutations (i.e. xyz, xzy, yxz, yzx, zxy, zyx). I've seen algorithms for a fixed rotation order (usually the NASA heading, bank, roll convention) but not for arbitrary rotation order. Furthermore, because there are multiple Euler angle representations of a single orientation, this result is going to be ambiguous. This is acceptable (because the orientation is still valid, it just may not be the one the user is expecting to see), however it would be even better if there was an algorithm which took rotation limits (i.e. the number of degrees of freedom and the limits on each degree of freedom) into account and yielded the 'most sensible' Euler representation given those constraints. I have a feeling this problem (or something similar) may exist in the IK or rigid body dynamics domains. Solved: I just realised that it might not be clear that I solved this problem by following Ken Shoemake's algorithms from Graphics Gems. I did answer my own question at the time, but it occurs to me it may not be clear that I did so. See the answer, below, for more detail. Just to clarify - I know how to convert from a quaternion to the so-called 'Tait-Bryan' representation - what I was calling the 'NASA' convention. This is a rotation order (assuming the convention that the 'Z' axis is up) of zxy. I need an algorithm for all rotation orders. Possibly the solution, then, is to take the zxy order conversion and derive from it five other conversions for the other rotation orders. I guess I was hoping there was a more 'overarching' solution. In any case, I am surprised that I haven't been able to find existing solutions out there. In addition, and this perhaps should be a separate question altogether, any conversion (assuming a known rotation order, of course) is going to select one Euler representation, but there are in fact many. For example, given a rotation order of yxz, the two representations (0,0,180) and (180,180,0) are equivalent (and would yield the same quaternion). Is there a way to constrain the solution using limits on the degrees of freedom? Like you do in IK and rigid body dynamics? i.e. in the example above if there were only one degree of freedom about the Z axis then the second representation can be disregarded. I have tracked down one paper which could be an algorithm in this pdf but I must confess I find the logic and math a little hard to follow. Surely there are other solutions out there? Is arbitrary rotation order really so rare? Surely every major 3D package that allows skeletal animation together with quaternion interpolation (i.e. Maya, Max, Blender, etc) must have solved exactly this problem?
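    A minimal sketch of the fixed-order case mentioned above, assuming a normalized quaternion and the ZYX (yaw-pitch-roll) Tait-Bryan order; the quaternion struct and method names are made up for illustration, and the other five orders follow the same pattern with the axes permuted. It does nothing about choosing the 'most sensible' of the multiple equivalent Euler solutions:

        using System;

        struct Quat { public double W, X, Y, Z; }

        static class EulerConversion
        {
            // Converts a unit quaternion to ZYX (yaw, pitch, roll) Euler angles in radians.
            // Assumes the quaternion is normalized; pitch is clamped to avoid NaN at the poles.
            public static void ToEulerZyx(Quat q, out double yaw, out double pitch, out double roll)
            {
                // roll (rotation about X)
                roll = Math.Atan2(2.0 * (q.W * q.X + q.Y * q.Z),
                                  1.0 - 2.0 * (q.X * q.X + q.Y * q.Y));

                // pitch (rotation about Y) - clamp to handle numerical drift near +/-90 degrees
                double sinp = 2.0 * (q.W * q.Y - q.Z * q.X);
                pitch = Math.Abs(sinp) >= 1.0 ? Math.Sign(sinp) * (Math.PI / 2.0) : Math.Asin(sinp);

                // yaw (rotation about Z)
                yaw = Math.Atan2(2.0 * (q.W * q.Z + q.X * q.Y),
                                 1.0 - 2.0 * (q.Y * q.Y + q.Z * q.Z));
            }
        }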

    Read the article

  • Does weak typing offer any advantages?

    - by sub
    Don't confuse this with static vs. dynamic typing! You all know JavaScript's/PHP's infamous type systems. PHP example: echo "123abc"+2; // 125 - the reason for this is explained // in the PHP docs but still: This hurts echo "4"+1; // 5 - Oh please echo "ABC"*5; // 0 - WTF // That's too much, seriously now. // This here might actually be a use for weak typing, but no - // it has to output garbage. JavaScript example: // Ah, good old JavaScript, maybe you'll do better? alert("4"+1); // 51 - Oh come on. alert("abc"*3); // NaN - What the... // Have your creators ever heard of the word "consistency"? Python example: # Python's type system is actually a mix # It spits errors on senseless things like the first example below AND # allows intelligent actions like the second example. >>> print("abc"+1) Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> print("abc"+1) TypeError: Can't convert 'int' object to str implicitly >>> print("abc"*5) abcabcabcabcabc Ruby example: puts 4+"1" # Type error - as expected puts "abc"*4 # abcabcabcabc - makes sense After these examples it should be clear that PHP/JavaScript probably have the most inconsistent type systems out there. This is a fact and really not subjective. Looking more closely at the type systems of Ruby and Python, they appear much more intelligent and consistent. These examples probably weren't necessary, as we all know that PHP/JavaScript have a weak type system and Python/Ruby a strong one; I just wanted to show why I'm asking. Now I have two questions: When looking at those examples, what are the advantages of PHP's and JavaScript's type systems? I can only find downsides: they are inconsistent (and I think we know that this is not good), type conversions are hard to control, and bugs are more likely to happen and much harder to spot. Do you prefer one of the two systems? Why? Personally I have worked with PHP, JavaScript and Python so far and must say that Python's type system really only has advantages over PHP's and JavaScript's. Does anybody here not think so? Why then?

    Read the article

  • Best method for converting several sets of numbers with several different ratios

    - by C Patton
    I'm working on an open-source harm reduction application for opioid addicts. One of the features in this application is the conversion (in mg/mcg) between common opioids, so people don't overdose by accident. If you're morally against opioid addiction and wouldn't respond because of your morals, please consider that this application is for HARM REDUCTION.. so people don't end up dead. I have this data: 3mg morphine IV = 10mcg fentanyl IV; 2mg morphine oral = 1mg oxycodone oral; 3mg morphine oral = 1mg oxymorphone oral; 7.0mg morphine oral = 1mg hydromorphone oral; 1mg morphine IV = .10mg oxymorphone IV; 1mg morphine oral = 1mg hydrocodone oral; 1mg morphine oral = 6.67mg codeine oral; 1mg morphine oral = .10mg methadone oral. And I have a textbox that takes the source dosage in mg (a double) that the user can enter. Underneath this, I have radio boxes for the source substance (i.e. morphine) and the destination substance (i.e. oxycodone) for the conversion. I've been trying to think of the most efficient way to do this, but nearly every approach seems sloppy. If I were to do something like public static double MorphinetoOxycodone(string morphineValue) { double morphine = Double.Parse(morphineValue); return (morphine / 2 ); } I would also have to make a function for OxycodonetoMorphine, OxycodonetoCodeine, etc., and would end up with dozens of functions. There must be an easier way that I'm missing. If you'll notice, all of my conversions use morphine as the base value.. what might be the easiest way to use the morphine value to convert one opioid to another? For example, if 1mg morphine oral is equal to 1mg hydrocodone and 1mg morphine oral is equal to .10mg methadone, wouldn't I just multiply 1*.10 to get the hydrocodone-methadone value? Implementing this idea is what I'm having the most trouble with. Any help would be GREATLY appreciated, and if you'd like, I would add your name/nickname to the credits in this program. It's possible that many, many people around the world will use this (I'm translating it into several languages as well), and to know that your work could've kept an addict from dying.. I think that's a great thing :) -cory
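    The 'morphine as base value' idea described above can be kept to a single lookup table instead of dozens of pairwise methods. A rough C# sketch with hypothetical names, using the oral figures quoted in the question purely as illustrative numbers (they are the poster's data, not verified dosing guidance):

        using System;
        using System.Collections.Generic;

        static class OpioidConverter
        {
            // mg of oral morphine that 1 mg of each drug is listed as equivalent to,
            // per the ratios quoted in the question (illustrative only).
            static readonly Dictionary<string, double> MorphinePerMg = new Dictionary<string, double>
            {
                { "morphine",      1.0  },
                { "oxycodone",     2.0  },   // 2 mg morphine = 1 mg oxycodone
                { "oxymorphone",   3.0  },   // 3 mg morphine = 1 mg oxymorphone
                { "hydromorphone", 7.0  },   // 7 mg morphine = 1 mg hydromorphone
                { "hydrocodone",   1.0  },   // 1 mg morphine = 1 mg hydrocodone
                { "codeine",       0.15 },   // 1 mg morphine = 6.67 mg codeine
                { "methadone",     10.0 },   // 1 mg morphine = 0.10 mg methadone
            };

            // Convert a dose of 'from' into the listed equivalent dose of 'to'
            // by going through the morphine base value.
            public static double Convert(string from, string to, double dose)
            {
                double asMorphine = dose * MorphinePerMg[from];
                return asMorphine / MorphinePerMg[to];
            }
        }

    With this, the hypothetical call OpioidConverter.Convert("hydrocodone", "methadone", 1.0) returns 0.1, matching the poster's own 1 * .10 reasoning.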

    Read the article

  • Specializing a template function that takes a universal reference parameter

    - by David Stone
    How do I specialize a template function that takes a universal reference parameter? foo.hpp: template<typename T> void foo(T && t) // universal reference parameter foo.cpp template<> void foo<Class>(Class && class) { // do something complicated } Here, Class is no longer a deduced type and thus is Class exactly; it cannot possibly be Class &, so reference collapsing rules will not help me here. I could perhaps create another specialization that takes a Class & parameter (I'm not sure), but that implies duplicating all of the code contained within foo for every possible combination of rvalue / lvalue references for all parameters, which is what universal references are supposed to avoid. Is there some way to accomplish this? To be more specific about my problem in case there is a better way to solve it: I have a program that can connect to multiple game servers, and each server, for the most part, calls everything by the same name. However, they have slightly different versions for a few things. There are a few different categories that these things can be: a move, an item, etc. I have written a generic sort of "move string to move enum" set of functions for internal code to call, and my server interface code has similar functions. However, some servers have their own internal ID that they communicate with, some use strings, and some use both in different situations. Now what I want to do is make this a little more generic. I want to be able to call something like ServerNamespace::server_cast<Destination>(source). This would allow me to cast from a Move to a std::string or ServerMoveID. Internally, I may need to make a copy (or move from) because some servers require that I keep a history of messages sent. Universal references seem to be the obvious solution to this problem. The header file I'm thinking of right now would expose simply this: namespace ServerNamespace { template<typename Destination, typename Source> Destination server_cast(Source && source); } And the implementation file would define all legal conversions as template specializations.

    Read the article

  • Regular expression from font to span (size and colour) and back (VB.NET)

    - by chapmanio
    Hi, I am looking for a regular expression that can convert my font tags (only with size and colour attributes) into span tags with the relevant inline CSS. This will be done in VB.NET if that helps at all. I also need a regular expression to go the other way. To elaborate, below is an example of the conversion I am looking for: <font size="10">some text</font> to then become: <span style="font-size:10px;">some text</span> So converting the tag and putting a "px" at the end of whatever the font size is (I don't need to change/convert the font size, just stick px at the end). The regular expression needs to cope with a font tag that only has a size attribute, only a color attribute, or both: <font size="10">some text</font> <font color="#000000">some text</font> <font size="10" color="#000000">some text</font> <font color="#000000" size="10">some text</font> I also need another regular expression to do the opposite conversion. So for example: <span style="font-size:10px;">some text</span> will become: <font size="10">some text</font> As before, converting the tag but this time removing the "px"; I don't need to worry about changing the font size. Again this will also need to cope with the size styling, the colour styling, and a combination of both: <span style="font-size:10px;">some text</span> <span style="color:#000000;">some text</span> <span style="font-size:10px; color:#000000;">some text</span> <span style="color:#000000; font-size:10px;">some text</span> I appreciate this is a lot to ask; I am hopeless with regular expressions and need to find a way of making these conversions in my code. Thanks so much to anyone that can/is willing to help me!
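    As a starting point, a hedged sketch of the size-only case using the .NET Regex class (shown in C# here; Regex.Replace behaves the same when called from VB.NET). Handling the color attribute, both attributes in either order, and spans carrying extra style rules would need additional patterns, or an HTML parser rather than regex alone:

        using System.Text.RegularExpressions;

        static class FontSpanConverter
        {
            // <font size="10"> ... </font>  ->  <span style="font-size:10px;"> ... </span>
            public static string FontToSpan(string html)
            {
                return Regex.Replace(html,
                    "<font size=\"(\\d+)\">(.*?)</font>",
                    "<span style=\"font-size:$1px;\">$2</span>",
                    RegexOptions.IgnoreCase | RegexOptions.Singleline);
            }

            // <span style="font-size:10px;"> ... </span>  ->  <font size="10"> ... </font>
            public static string SpanToFont(string html)
            {
                return Regex.Replace(html,
                    "<span style=\"font-size:(\\d+)px;\">(.*?)</span>",
                    "<font size=\"$1\">$2</font>",
                    RegexOptions.IgnoreCase | RegexOptions.Singleline);
            }
        }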

    Read the article

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after all!) but calling the conversion explicitly bloats your code with mostly meaningless function calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmer's" point of view. Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries. Now I see two options on how to make implicit conversion work in user code: The first would be to provide a proxy type that implements conversion operators and conversion constructors (and assignments) for all the involved types, and always use that. The second requires a minimal change to the libraries, but allows great flexibility: add a conversion constructor for each involved type that can optionally be enabled externally. For example, for a type A add a constructor: template <class T> A( const T& src, typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 ) { *this = convert(src); } and a template template <class X, class Y> struct conversion_enabled : public boost::mpl::false_ {}; that disables the implicit conversion by default. Then to enable conversion between two types, specialize the template: template <> struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {}; and implement a convert function that can be found through ADL. I would personally prefer to use the second variant, unless there are strong arguments against it. Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in general supply the second method when it's likely that their type will be replicated in software it is most likely being used with (I'm thinking of 3D-rendering middleware here, where most of those packages implement a 3D vector)?
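    The question itself is about C++, but the shape of the first option (a proxy type that knows both libraries, so neither library type has to know about the other) can be sketched briefly in C# with implicit conversion operators; the LibA/LibB namespaces and field layout below are hypothetical stand-ins:

        // Hypothetical stand-ins for two libraries that each define an equivalent vector type.
        namespace LibA { public struct Vector3 { public float X, Y, Z; } }
        namespace LibB { public struct Vector3 { public float X, Y, Z; } }

        // The proxy type knows about both libraries and converts to and from each of them.
        public struct Vec3Proxy
        {
            public float X, Y, Z;

            public static implicit operator Vec3Proxy(LibA.Vector3 v) => new Vec3Proxy { X = v.X, Y = v.Y, Z = v.Z };
            public static implicit operator Vec3Proxy(LibB.Vector3 v) => new Vec3Proxy { X = v.X, Y = v.Y, Z = v.Z };
            public static implicit operator LibA.Vector3(Vec3Proxy p) => new LibA.Vector3 { X = p.X, Y = p.Y, Z = p.Z };
            public static implicit operator LibB.Vector3(Vec3Proxy p) => new LibB.Vector3 { X = p.X, Y = p.Y, Z = p.Z };
        }

    Call sites still have to route through the proxy explicitly (e.g. LibB.Vector3 b = (Vec3Proxy)a;), since C#, like C++, will not chain two user-defined conversions automatically - which is essentially the call-site noise trade-off described above for option one.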

    Read the article

  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice-versa. In spite of increasing the volume of data, the data transfer time remains the same as if no data transfer is actually taking place. I am posting the code. #include <stdio.h> /* Core input/output operations */ #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */ #include <math.h> /* Common mathematical functions */ #include <time.h> /* Converting between various date/time formats */ #include <cuda.h> /* CUDA related stuff */ #include <sys/time.h> __global__ void device_volume(float *x_d,float *y_d) { int index = blockIdx.x * blockDim.x + threadIdx.x; } int main(void) { float *x_h,*y_h,*x_d,*y_d,*z_h,*z_d; long long size=9999999; long long nbytes=size*sizeof(float); timeval t1,t2; double et; x_h=(float*)malloc(nbytes); y_h=(float*)malloc(nbytes); z_h=(float*)malloc(nbytes); cudaMalloc((void **)&x_d,size*sizeof(float)); cudaMalloc((void **)&y_d,size*sizeof(float)); cudaMalloc((void **)&z_d,size*sizeof(float)); gettimeofday(&t1,NULL); cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("\n %ld\t\t%f\t\t",nbytes,et); et=0.0; //printf("%f %d\n",seconds,CLOCKS_PER_SEC); // launch a kernel with a single thread to greet from the device //device_volume<<<1,1>>>(x_d,y_d); gettimeofday(&t1,NULL); cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("%f\n",et); cudaFree(x_d); cudaFree(y_d); cudaFree(z_d); return 0; } Can anybody help me with this issue? Thanks

    Read the article

  • Dynamically creating a Generic Type at Runtime

    - by Rick Strahl
    I learned something new today. Not uncommon, but it's a core .NET runtime feature I simply did not know, although I know I've run into this issue a few times and worked around it in other ways. Today there was no working around it, and a few folks on Twitter pointed me in the right direction. The question I ran into is: how do I create an instance of a generic type when I have dynamically acquired the type at runtime? Yup, it's not something that you do every day, but when you're writing code that parses objects dynamically at runtime it comes up from time to time. In my case it's in the bowels of a custom JSON parser. After some thought triggered by a comment today I realized it would be fairly easy to implement two-way Dictionary parsing for most concrete dictionary types. I could use a custom Dictionary serialization format that serializes as an array of key/value objects. Basically I can use a custom type (that matches the JSON signature) to hold my parsed dictionary data and then add it to the actual dictionary when parsing is complete. Generic Types at Runtime: One issue that came up in the process was how to figure out what type the Dictionary<K,V> generic parameters take. Reflection actually makes it fairly easy to figure out generic types at runtime with code like this: if (arrayType.GetInterface("IDictionary") != null) { if (arrayType.IsGenericType) { var keyType = arrayType.GetGenericArguments()[0]; var valueType = arrayType.GetGenericArguments()[1]; … } } The GetArrayType method gets passed a type instance that is the array or array-like object that is rendered in JSON as an array (which includes IList, IDictionary, IDataReader and a few others). In my case the type passed would be something like Dictionary<string, CustomerEntity>, so I know what the parent container class type is. Based on the container type it's then possible to use GetGenericArguments() to retrieve all the generic types in sequential order of definition (i.e. string, CustomerEntity). That's the easy part. Creating a Generic Type and Providing Generic Parameters at Runtime: The next problem is how do I get a concrete type instance for the generic type? I know what the type name is and I have a type instance, but it's generic, so how do I get a type reference to keyvalue<K,V> that is specific to the keyType and valueType above? Here are a couple of things that come to mind but that don't work (and yes, I tried them unsuccessfully first): Type elementType = typeof(keyvalue<keyType, valueType>); Type elementType = typeof(keyvalue<typeof(keyType), typeof(valueType)>); The problem is that this explicit syntax expects a type literal, not some dynamic runtime value, so both of the above won't even compile. It turns out the way to create a generic type at runtime is using a fancy bit of syntax that until today I was completely unaware of: Type elementType = typeof(keyvalue<,>).MakeGenericType(keyType, valueType); The key is the typeof(keyvalue<,>) bit, which looks weird at best. It works however: it yields an open generic type reference, which MakeGenericType then closes with the supplied type arguments. You can see the difference between the full generic type and the open generic type in the debugger: the nonGenericType doesn't show any type specialization, while the elementType type shows the string, CustomerEntity (truncated above) in the type name. Once the full type reference exists (elementType) it's then easy to create an instance. 
In my case the parser parses through the JSON and, when it completes parsing the value/object, it creates a new keyvalue<K,V> instance. Now that I know the element type, that's pretty trivial with: // Objects start out null until we find the opening tag resultObject = Activator.CreateInstance(elementType); Here the result object is picked up by the JSON array parser, which creates an instance of the child object (keyvalue<K,V>) and then parses and assigns values from the JSON document using the type's key/value property signature. Internally the parser then takes each individually parsed item and adds it to a List<keyvalue<K,V>> of items. Parsing through a Generic Type when you only have Runtime Type Information: When parsing of the JSON array is done, the List needs to be turned into a de facto Dictionary<K,V>. This should be easy since I know that I'm dealing with an IDictionary, and I know the generic types for the key and value. The problem, again, is that this needs to happen at runtime, which would mean using several Convert.ChangeType() calls in the code to dynamically cast at runtime. Yuk. In the end I decided the easier and probably only slightly slower way to do this is to use the dynamic type to collect the items and assign them, avoiding all the dynamic casting madness: else if (IsIDictionary) { IDictionary dict = Activator.CreateInstance(arrayType) as IDictionary; foreach (dynamic item in items) { dict.Add(item.key, item.value); } return dict; } This code creates an instance of the generic dictionary type first, then loops through all of my custom keyvalue<K,V> items and assigns them to the actual dictionary. By using dynamic here I can sidestep all the explicit type conversions that would be required in the three highlighted areas (not to mention that this nested method doesn't have access to the dictionary item generic types here). Static <-> Dynamic: Dynamic casting in a static language like C# is a bitch, to say the least. This is one of the few times when I've cursed static typing and the arcane syntax that's required to coax types into the right format. It works, but it's pretty nasty code. If it weren't for dynamic, that last bit of code would have been pretty ugly as well, with a bunch of Convert.ChangeType() calls to litter the code. Fortunately this type of type convulsion is rather rare and reserved for system-level code. It's not every day that you create a string to object parser after all :-) © Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, CSharp.

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar, they taste so nice and sweet, but they'll rot your teeth if you eat them too much.   I can't deny that they aren't useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But, I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.   So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.   So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.   A good extension method should:     Apply to any possible instance of the type it extends.     Simplify logic and improve readability/maintainability.     Apply to the most specific type or interface applicable.     Be isolated in a namespace so that it does not pollute IntelliSense.     So let's look at a few examples in relation to these rules.   The first rule, to me, is the most important of all. Once again, it bears repeating, a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.    Take this nifty little int extension, I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:       public static class IntExtensinos     {         public static int Seconds(int num)         {             return num * 1000;         }           public static int Minutes(int num)         {             return num * 60000;         }     }     This is so you could do things like:       ...     Thread.Sleep(5.Seconds());     ...     proxy.Timeout = 1.Minutes();     ...     Awww, you say, that's cute! Well, that's the problem, it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeStamp.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:       total += numberOfTodaysOrders.Seconds();     which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.   Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.   
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and requires the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems. Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently… In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate. public bool CommercialPrices { get { return this.Retrieve(p => p.CommercialPrices); } set { this.Store(p => p.CommercialPrices, value); } } This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need. This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation: public double Price { get { return Retrieve(r => r.Price); } set { Store(r => r.Price, value); } } This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data in both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required. 
This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual: public double Price { get { return Record.Price; } set { Record.Price = value; } } As your users browse the site, it will get faster and faster as Select N+1 issues will optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you do already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system, you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries. There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example: public string[] SearchedFields { get { return (Retrieve<string>("SearchedFields") ?? "") .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries); } set { Store("SearchedFields", String.Join(", ", value)); } } The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store. You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up-to-date. It’s not very complicated, but definitely something to keep in mind. Here is what a product record looks like in Nwazet.Commerce for example: And here is the same data in the infoset: The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id. Here is the detailed XML document for this product: <Data> <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage="" AllowBackOrder="false" Weight="0.2" Size="" ShippingCost="null" IsDigital="false" /> <ProductAttributesPart Attributes="" /> <AutoroutePart DisplayAlias="camera-box" /> <TitlePart Title="Nwazet Pi Camera Box" /> <BodyPart Text="[...]" /> <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" /> </Data> The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record. In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably. So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources. 
Expect this code to make its way into the 1.8 version of Orchard when that’s available.

    Read the article

  • Problem Implementing Texture on Libgdx Mesh of Randomized Terrain

    - by BrotherJack
    I'm having problems understanding how to apply a texture to a non-rectangular object. The following code creates textures such as this: from the debug renderer I think I've got the physical shape of the "earth" correct. However, I don't know how to apply a texture to it. I have a 50x50 pixel image (in the environment constructor as "dirt.png"), that I want to apply to the hills. I have a vague idea that this seems to involve the mesh class and possibly a ShapeRenderer, but the little i'm finding online is just confusing me. Bellow is code from the class that makes and regulates the terrain and the code in a separate file that is supposed to render it (but crashes on the mesh.render() call). Any pointers would be appreciated. public class Environment extends Actor{ Pixmap sky; public Texture groundTexture; Texture skyTexture; double tankypos; //TODO delete, temp public Tank etank; //TODO delete, temp int destructionRes; // how wide is a static pixel private final float viewWidth; private final float viewHeight; private ChainShape terrain; public Texture dirtTexture; private World world; public Mesh terrainMesh; private static final String LOG = Environment.class.getSimpleName(); // Constructor public Environment(Tank tank, FileHandle sfileHandle, float w, float h, int destructionRes) { world = new World(new Vector2(0, -10), true); this.destructionRes = destructionRes; sky = new Pixmap(sfileHandle); viewWidth = w; viewHeight = h; skyTexture = new Texture(sky); terrain = new ChainShape(); genTerrain((int)w, (int)h, 6); Texture tankSprite = new Texture(Gdx.files.internal("TankSpriteBase.png")); Texture turretSprite = new Texture(Gdx.files.internal("TankSpriteTurret.png")); tank = new Tank(0, true, tankSprite, turretSprite); Rectangle tankrect = new Rectangle(300, (int)tankypos, 44, 45); tank.setRect(tankrect); BodyDef terrainDef = new BodyDef(); terrainDef.type = BodyType.StaticBody; terrainDef.position.set(0, 0); Body terrainBody = world.createBody(terrainDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = terrain; terrainBody.createFixture(fixtureDef); BodyDef tankDef = new BodyDef(); Rectangle rect = tank.getRect(); tankDef.type = BodyType.DynamicBody; tankDef.position.set(0,0); tankDef.position.x = rect.x; tankDef.position.y = rect.y; Body tankBody = world.createBody(tankDef); FixtureDef tankFixture = new FixtureDef(); PolygonShape shape = new PolygonShape(); shape.setAsBox(rect.width*WORLD_TO_BOX, rect.height*WORLD_TO_BOX); fixtureDef.shape = shape; dirtTexture = new Texture(Gdx.files.internal("dirt.png")); etank = tank; } private void genTerrain(int w, int h, int hillnessFactor){ int width = w; int height = h; Random rand = new Random(); //min and max bracket the freq's of the sin/cos series //The higher the max the hillier the environment int min = 1; //allocating horizon for screen width Vector2[] horizon = new Vector2[width+2]; horizon[0] = new Vector2(0,0); double[] skyline = new double[width]; //TODO skyline necessary as an array? 
//ratio of amplitude of screen height to landscape variation double r = (int) 2.0/5.0; //number of terms to be used in sine/cosine series int n = 4; int[] f = new int[n*2]; //calculating omegas for sine series for(int i = 0; i < n*2 ; i ++){ f[i] = rand.nextInt(hillnessFactor - min + 1) + min; } //amp is the amplitude of the series int amp = (int) (r*height); double lastPoint = 0.0; for(int i = 0 ; i < width; i ++){ skyline[i] = 0; for(int j = 0; j < n; j++){ skyline[i] += ( Math.sin( (f[j]*Math.PI*i/height) ) + Math.cos(f[j+n]*Math.PI*i/height) ); } skyline[i] *= amp/(n*2); skyline[i] += (height/2); skyline[i] = (int)skyline[i]; //TODO Possible un-necessary float to int to float conversions tankypos = skyline[i]; horizon[i+1] = new Vector2((float)i, (float)skyline[i]); if(i == width) lastPoint = skyline[i]; } horizon[width+1] = new Vector2(800, (float)lastPoint); terrain.createChain(horizon); terrain.createLoop(horizon); //I have no idea if the following does anything useful :( terrainMesh = new Mesh(true, (width+2)*2, (width+2)*2, new VertexAttribute(Usage.Position, (width+2)*2, "a_position")); float[] vertices = new float[(width+2)*2]; short[] indices = new short[(width+2)*2]; for(int i=0; i < (width+2); i+=2){ vertices[i] = horizon[i].x; vertices[i+1] = horizon[i].y; indices[i] = (short)i; indices[i+1] = (short)(i+1); } terrainMesh.setVertices(vertices); terrainMesh.setIndices(indices); } Here is the code that is (supposed to) render the terrain. @Override public void render(float delta) { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); // tell the camera to update its matrices. camera.update(); // tell the SpriteBatch to render in the // coordinate system specified by the camera. backgroundStage.draw(); backgroundStage.act(delta); uistage.draw(); uistage.act(delta); batch.begin(); debugRenderer.render(this.ground.getWorld(), camera.combined); batch.end(); //Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D); ground.dirtTexture.bind(); ground.terrainMesh.render(GL10.GL_TRIANGLE_FAN); //I'm particularly lost on this ground.step(); }

    Read the article

  • Android-Java: Constructing a triangle based on Coordinates on a map and your bearing

    - by Aidan
    Hi Guys, I'm constructing a geolocation based application and I'm trying to figure out a way to make my application realise when a user is facing the direction of the given location (a particular long / lat co-ord). I've got the math figured, I just have the triangle to construct. //UPDATE So I've figured out a good bit of this... Below is a method which takes in a long / lat value and attempts to compute a triangle finding a point 700 meters away and one to its left + right. It'd then use these to construct the triangle. It computes the correct longitude but the latitude ends up somewhere off the coast of east Africa. (I'm in Ireland!). public void drawtri(double currlng,double currlat, double bearing){ bearing = (bearing < 0 ? -bearing : bearing); System.out.println("RUNNING THE DRAW TRIANGLE METHOD!!!!!"); System.out.println("CURRENT LNG" + currlng); System.out.println("CURRENT LAT" + currlat); System.out.println("CURRENT BEARING" + bearing); //Find point X(x,y) double distance = 0.7; //700 meters. double R = 6371.0; //The radius of the earth. //Finding X's y value. Math.toRadians(currlng); Math.toRadians(currlat); Math.toRadians(bearing); distance = distance/R; Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)+ Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing)); System.out.println("CURRENT ALAT!!: " + Global.Alat); //Finding X's x value. Global.Alng = currlng + Math.atan2(Math.sin(bearing)*Math.sin(distance) *Math.cos(currlat), Math.cos(distance)-Math.sin(currlat)*Math.sin(Global.Alat)); Math.toDegrees(Global.Alat); Math.toDegrees(Global.Alng); //Co-ord of Point B(x,y) // Note: Lng = X axis, Lat = Y axis. Global.Blat = Global.Alat+ 00.007931; Global.Blng = Global.Alng; //Co-ord of Point C(x,y) Global.Clat = Global.Alat - 00.007931; Global.Clng = Global.Alng; } From debugging I've determined the problem lies with the computation of the latitude done here.. Global.Alat = Math.asin(Math.sin(currlat)*Math.cos(distance)+ Math.cos(currlat)*Math.sin(distance)*Math.cos(bearing)); I have no idea why though and don't know how to fix it. I got the formula from this site.. http://www.movable-type.co.uk/scripts/latlong.html It appears correct and I've tested multiple things... I've tried converting to Radians then post computations back to degrees, etc. etc. Anyone got any ideas how to fix this method so that it will map the triangle ONLY 700 meters in from my current location in the direction that I am facing? Thanks, EDIT/// Converting the outcome to radians gives me a lat of 5.6xxxxxxxxxxxxxx .I have a feeling this bug has something to do with conversions but its not THAT simple. The equation is correct, it just.. outputs wrong..
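    One thing worth noting is that Java's Math.toRadians returns the converted value rather than modifying its argument, so bare calls like Math.toRadians(currlng); have no effect on their own. For comparison, here is a self-contained sketch of the same destination-point formula from the movable-type page, written in C# only for consistency with the other examples on this page, with the degree/radian handling made explicit:

        using System;

        static class GeoMath
        {
            const double EarthRadiusKm = 6371.0;

            // Destination point given a start (degrees), a bearing (degrees clockwise from north)
            // and a distance (km), using the formula from movable-type.co.uk.
            public static void Destination(double latDeg, double lonDeg, double bearingDeg, double distanceKm,
                                           out double destLatDeg, out double destLonDeg)
            {
                double lat = latDeg * Math.PI / 180.0;       // work in radians throughout
                double lon = lonDeg * Math.PI / 180.0;
                double brg = bearingDeg * Math.PI / 180.0;
                double d = distanceKm / EarthRadiusKm;       // angular distance

                double destLat = Math.Asin(Math.Sin(lat) * Math.Cos(d) +
                                           Math.Cos(lat) * Math.Sin(d) * Math.Cos(brg));
                double destLon = lon + Math.Atan2(Math.Sin(brg) * Math.Sin(d) * Math.Cos(lat),
                                                  Math.Cos(d) - Math.Sin(lat) * Math.Sin(destLat));

                destLatDeg = destLat * 180.0 / Math.PI;      // convert back to degrees at the end
                destLonDeg = destLon * 180.0 / Math.PI;
            }
        }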

    Read the article

  • Visual C++ 2010 Winform Errors C2238, C2059, C1075.

    - by tracelez
    It errors when I try compile. I cut the code out of the program and the program works and complies correctly. Not sure why it doesn't like this. This part of the code does single digit math with numeric strings that are converted into a char arrays. ** Error 2 error C2238: unexpected token(s) preceding ';' C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 10 1 Win32 Form c++ ** Error 1 error C2059: syntax error : 'namespace' C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 10 1 Win32 Form c++ ** Error 3 error C1075: end of file found before the left brace '{' at 'c:\users\alpha\documents\visual studio 2010\projects\win32 form c++\win32 form c++\Form1.h(40)' was matched C:\Users\Alpha\documents\visual studio 2010\Projects\Win32 Form c++\Win32 Form c++\Win32 Form c++.cpp 23 1 Win32 Form c++ ////// ////// // chX[] and chY[] are char arrays from functional part of program std::reverse( chX, &chX[ strlen( chX ) ] ); std::reverse( chY, &chY[ strlen( chY ) ] ); // makes sure x is larger or equal to y...makes looping logic easier if (strlen(chX) < strlen(chY)) { char *chZ = chX; chX = chY; chY = chZ; } //Variables for this part of the program char chX2; char chY2; std::string strSum; int sum = 0; bool carryth1 = false; int x=0; int y=0; for (int i = 0; i <= (strlen(chX)-1); i++) { if (i <= strlen(chY)-1) { chX2= chX[i]; chY2= chY[i]; x = atoi(chX2); y = atoi(chY2); //x = atoi(chX[i]); sum = x+y+(int)carryth1; carryth1 = false; if (sum > 9) { if(i == 0) { sum -=10; strSum = itoa(sum); carryth1 = true; } else { sum -=10; strSum += itoa(sum); carryth1 = true; } } else { if(i == 0) { strSum = itoa(sum); } else { strSum += itoa(sum); } } else { y = 0; chX2= chX[i]; x = atoi(chX2); sum = x+y+(int)carryth1; if((i == strlen(chX)-1)&& (carryth1 == true) && (x == 9)) { strSum += "10"; } else { strSum = itoa(sum); } } std::reverse( strSum, &strSum[ strlen( strSum ) ] ); //Creates new string for txtDisplay this simplifies conversions String^ strDisplay = "X is " + gcnew String((strX1.c_str())) + " Y is " +gcnew String((strY1.c_str())) + " \r\n " ; strDisplay += "The sum of the X + Y = "; txtDisplay->Text = gcnew String((strSum.c_str())) ;

    Read the article

  • Strange WCF Error - IIS hosted - context being aborted

    - by RandomNoob
    I have a WCF service that does some document conversions and returns the document to the caller. When developing locally on my local dev server, the service is hosted on ASP.NET Development server, a console application invokes the operation and executes within seconds. When I host the service in IIS via a .svc file, two of the documents work correctly, the third one bombs out, it begins to construct the word document using the OpenXml Sdk, but then just dies. I think this has something to do with IIS, but I cannot put my finger on it. There are a total of three types of documents I generate. In a nutshell this is how it works SQL 2005 DB/IBM DB2 - WCF Service written by other developer to expose data. This service only has one endpoint using basicHttpBinding My Service invokes his service, gets the relevant data, uses the Open Xml Sdk to generate a Microsoft Word Document, saves it on a server and returns the path to the user. The word documents are no bigger than 100KB. I am also using basicHttpBinding although I have tried wsHttpBinding with the same results. What is amazing is how fast it is locally, and even more that two of the documents generate just fine, its the third document type that refuses to work. To the error message: An error occured while receiving the HTTP Response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP Protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details. I have spent the last 2 days trying to figure out what is going on, I have tried everything, including changing the maxReceivedMessageSize, maxBufferSize, maxBufferPoolSize, etc etc to large values, I even included: <httpRuntime maxRequestLength="2097151" executionTimeout="120"/> To see maybe if IIS was choking because of that. Programatically the service does nothing special, it just constructs the word documents from the data using the Open Xml Sdk and like I said, locally all 3 documents work when invoked via a console app running locally on the asp.net dev server, i.e. http://localhost:3332/myService.svc When I host it on IIS and I try to get a Windows Forms application to invoke it, I get the error. I know you will ask for logs, so yes I have logging enabled on my Host. And there is no error in the logs, I am logging everything. Basically I invoke two service operations written by another developer. MyOperation calls - HisOperation1 and then HisOperation2, both of those calls give me complex types. I am going to look at his code tomorrow, because he is using LINQ2SQL and there may be some funny business going on there. He is using a variety of collections etc, but the fact that I can run the exact same document, lets call it "Document 3" within seconds when the service is being hosted locally on ASP WebDev Server is what is most odd, why would it run on scaled down Cassini and blow up on IIS? From the log it seems, after calling HisOperation1 and HisOperation2 the service just goes into la-la land dies, there is a application pool (w3wp.exe) error in the Windows Event Log. Faulting application w3wp.exe, version 6.0.3790.1830, stamp 42435be1, faulting module kernel32.dll, version 5.2.3790.3311, stamp 49c5225e, debug? 0, fault address 0x00015dfa. It's classified as .NET 2.0 Runtime error. Any help is appreciated, the lack of sleep is getting to me. Help me Obi-Wan Kenobi, you're my only hope.

    Read the article

  • Python - help on custom wx.Python (pyDev) class

    - by Wallter
    I have been hitting a dead end with this program. I am trying to build a class that will let me control the BIP's of a button when it is in use. so far this is what i have (see following.) It keeps running this weird error TypeError: 'module' object is not callable - I, coming from C++ and C# (for some reason the #include... is so much easier) , have no idea what that means, Google is of no help so... I know I need some real help with sintax and such - anything woudl be helpful. Note: The base code found here was used to create a skeleton for this 'custom button class' Custom Button import wx from wxPython.wx import * class Custom_Button(wx.PyControl): # The BMP's # AM I DOING THIS RIGHT? - I am trying to get empty 'global' # variables within the class Mouse_over_bmp = None #wxEmptyBitmap(1,1,1) # When the mouse is over Norm_bmp = None #wxEmptyBitmap(1,1,1) # The normal BMP Push_bmp = None #wxEmptyBitmap(1,1,1) # The down BMP Pos_bmp = wx.Point(0,0) # The posisition of the button def __init__(self, parent, NORM_BMP, PUSH_BMP, MOUSE_OVER_BMP, pos, size, text="", id=-1, **kwargs): wx.PyControl.__init__(self,parent, id, **kwargs) # The conversions, hereafter, were to solve another but. I don't know if it is # necessary to do this since the source being given to the class (in this case) # is a BMP - is there a better way to prevent an error that i have not # stumbled accost? # Set the BMP's to the ones given in the constructor self.Mouse_over_bmp = wx.Bitmap(wx.Image(MOUSE_OVER_BMP, wx.BITMAP_TYPE_ANY).ConvertToBitmap()) self.Norm_bmp = wx.Bitmap(wx.Image(NORM_BMP, wx.BITMAP_TYPE_ANY).ConvertToBitmap()) self.Push_bmp = wx.Bitmap(wx.Image(PUSH_BMP, wx.BITMAP_TYPE_ANY).ConvertToBitmap()) self.Pos_bmp = self.pos self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown) self.Bind(wx.EVT_LEFT_UP, self._onMouseUp) self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave) self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter) self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground) self.Bind(wx.EVT_PAINT,self._onPaint) self._mouseIn = self._mouseDown = False def _onMouseEnter(self, event): self._mouseIn = True def _onMouseLeave(self, event): self._mouseIn = False def _onMouseDown(self, event): self._mouseDown = True def _onMouseUp(self, event): self._mouseDown = False self.sendButtonEvent() def sendButtonEvent(self): event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId()) event.SetInt(0) event.SetEventObject(self) self.GetEventHandler().ProcessEvent(event) def _onEraseBackground(self,event): # reduce flicker pass def _onPaint(self, event): dc = wx.BufferedPaintDC(self) dc.SetFont(self.GetFont()) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) dc.Clear() dc.DrawBitmap(self.Norm_bmp) # draw whatever you want to draw # draw glossy bitmaps e.g. 
dc.DrawBitmap if self._mouseIn: # If the Mouse is over the button dc.DrawBitmap(self, self.Mouse_over_bmp, self.Pos_bmp, useMask=False) if self._mouseDown: # If the Mouse clicks the button dc.DrawBitmap(self, self.Push_bmp, self.Pos_bmp, useMask=False) Main.py import wx import Custom_Button from wxPython.wx import * ID_ABOUT = 101 ID_EXIT = 102 class MyFrame(wx.Frame): def __init__(self, parent, ID, title): wxFrame.__init__(self, parent, ID, title, wxDefaultPosition, wxSize(400, 400)) self.CreateStatusBar() self.SetStatusText("Program testing custom button overlays") menu = wxMenu() menu.Append(ID_ABOUT, "&About", "More information about this program") menu.AppendSeparator() menu.Append(ID_EXIT, "E&xit", "Terminate the program") menuBar = wxMenuBar() menuBar.Append(menu, "&File"); self.SetMenuBar(menuBar) self.Button1 = Custom_Button(self, parent, -1, "D:/Documents/Python/Normal.bmp", "D:/Documents/Python/Clicked.bmp", "D:/Documents/Python/Over.bmp", wx.Point(200,200), wx.Size(300,100)) EVT_MENU(self, ID_ABOUT, self.OnAbout) EVT_MENU(self, ID_EXIT, self.TimeToQuit) def OnAbout(self, event): dlg = wxMessageDialog(self, "Testing the functions of custom " "buttons using pyDev and wxPython", "About", wxOK | wxICON_INFORMATION) dlg.ShowModal() dlg.Destroy() def TimeToQuit(self, event): self.Close(true) class MyApp(wx.App): def OnInit(self): frame = MyFrame(NULL, -1, "wxPython | Buttons") frame.Show(true) self.SetTopWindow(frame) return true app = MyApp(0) app.MainLoop() Errors (and traceback) /home/wallter/python/Custom Button overlay/src/Custom_Button.py:8: DeprecationWarning: The wxPython compatibility package is no longer automatically generated or actively maintained. Please switch to the wx package as soon as possible. I have never been able to get this to go away whenever using wxPython any help? from wxPython.wx import * Traceback (most recent call last): File "/home/wallter/python/Custom Button overlay/src/Main.py", line 57, in <module> app = MyApp(0) File "/usr/lib/python2.6/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7978, in __init__ self._BootstrapApp() File "/usr/lib/python2.6/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 7552, in _BootstrapApp return _core_.PyApp__BootstrapApp(*args, **kwargs) File "/home/wallter/python/Custom Button overlay/src/Main.py", line 52, in OnInit frame = MyFrame(NULL, -1, "wxPython | Buttons") File "/home/wallter/python/Custom Button overlay/src/Main.py", line 32, in __init__ wx.Point(200,200), wx.Size(300,100)) TypeError: 'module' object is not callable I have tried removing the "wx.Point(200,200), wx.Size(300,100))" just to have the error move up to the line above. Have I declared it right? help?

    Read the article

  • Common Mercator Projection formulas for Google Maps not working correctly

    - by Tom Halladay
    I am building a Tile Overlay server for Google maps in C#, and have found a few different code examples for calculating Y from Latitude. After getting them to work in general, I started to notice certain cases where the overlays were not lining up properly. To test this, I made a test harness to compare Google Map's Mercator LatToY conversion against the formulas I found online. As you can see below, they do not match in certain cases. Case #1 Zoomed Out: The problem is most evident when zoomed out. Up close, the problem is barely visible. Case #2 Point Proximity to Top & Bottom of viewing bounds: The problem is worse in the middle of the viewing bounds, and gets better towards the edges. This behavior can negate the behavior of Case #1 The Test: I created a google maps page to display red lines using the Google Map API's built in Mercator conversion, and overlay this with an image using the reference code for doing Mercator conversion. These conversions are represented as black lines. Compare the difference. The Results: Check out the top-most and bottom-most lines: The problem gets visually larger but numerically smaller as you zoom in: And it all but disappears at closer zoom levels, regardless of screen orientation. The Code: Google Maps Client Side Code: var lat = 0; for (lat = -80; lat <= 80; lat += 5) { map.addOverlay(new GPolyline([new GLatLng(lat, -180), new GLatLng(lat, 0)], "#FF0033", 2)); map.addOverlay(new GPolyline([new GLatLng(lat, 0), new GLatLng(lat, 180)], "#FF0033", 2)); } Server Side Code: Tile Cutter : http://mapki.com/wiki/Tile_Cutter OpenStreetMap Wiki : http://wiki.openstreetmap.org/wiki/Mercator protected override void ImageOverlay_ComposeImage(ref Bitmap ZipCodeBitMap) { Graphics LinesGraphic = Graphics.FromImage(ZipCodeBitMap); Int32 MapWidth = Convert.ToInt32(Math.Pow(2, zoom) * 255); Point Offset = Cartographer.Mercator2.toZoomedPixelCoords(North, West, zoom); TrimPoint(ref Offset, MapWidth); for (Double lat = -80; lat <= 80; lat += 5) { Point StartPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -179, zoom); Point EndPoint = Cartographer.Mercator2.toZoomedPixelCoords(lat, -1, zoom); TrimPoint(ref StartPoint, MapWidth); TrimPoint(ref EndPoint, MapWidth); StartPoint.X = StartPoint.X - Offset.X; EndPoint.X = EndPoint.X - Offset.X; StartPoint.Y = StartPoint.Y - Offset.Y; EndPoint.Y = EndPoint.Y - Offset.Y; LinesGraphic.DrawLine(new Pen(Color.Black, 2), StartPoint.X, StartPoint.Y, EndPoint.X, EndPoint.Y); LinesGraphic.DrawString( lat.ToString(), new Font("Verdana", 10), new SolidBrush(Color.Black), new Point( Convert.ToInt32((width / 3.0) * 2.0), StartPoint.Y)); } } protected void TrimPoint(ref Point point, Int32 MapWidth) { point.X = Math.Max(point.X, 0); point.X = Math.Min(point.X, MapWidth - 1); point.Y = Math.Max(point.Y, 0); point.Y = Math.Min(point.Y, MapWidth - 1); } So, Anyone ever experienced this? Dare I ask, resolved this? Or simply have a better C# implementation of Mercator Project coordinate conversion? Thanks!
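As a reference point for anyone comparing implementations, the usual spherical Web Mercator pixel formula is sketched below in Python (a language-neutral restatement, not the poster's C#; constant and function names are illustrative). Two details in the C# above are worth checking against it: MapWidth is computed as Math.Pow(2, zoom) * 255, whereas the standard world size is 256 * 2^zoom pixels, and the TrimPoint clamping will move any point that falls outside the world square, which matters most for the lines nearest the top and bottom of the view.

import math

TILE_SIZE = 256  # standard Web Mercator world size is TILE_SIZE * 2**zoom pixels

def lat_lng_to_pixel(lat, lng, zoom):
    """Spherical Web Mercator: degrees -> global pixel coordinates at a zoom level."""
    siny = math.sin(math.radians(lat))
    # Clamp so the projection stays finite near the poles
    # (Google clamps latitude to roughly +/-85.05113 degrees).
    siny = min(max(siny, -0.9999), 0.9999)
    world = TILE_SIZE * (2 ** zoom)
    x = (lng / 360.0 + 0.5) * world
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * world
    return x, y

# Example: global pixel position of lat 45, lng -90 at zoom 3.
print(lat_lng_to_pixel(45.0, -90.0, 3))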

    Read the article

  • Where are classpath, path and pathelement documented in Ant version 1.8.0?

    - by Robert Menteer
    I'm looking over the documentation that comes with Apache Ant version 1.8.0 and can't find where classpath, path and pathelement are documented. I've found a page that describes path like structures but it doesn't list the valid attributes or nested elements for these. Another thing I can't find in the documentation is a description of the relationships between filelist, fileset, patternset and path and how to convert them back and forth. For instance there has to be an easier way to compile only those classes in one package while removing all class dependencies on the package classes and update documentation. <!-- Get list of files in which we're interested. --> <fileset id = "java.source.set" dir = "${src}"> <include name = "**/Package/*.java" /> </fileset> <!-- Get a COMMA separated list of classes to compile. --> <pathconvert property = "java.source.list" refid = "java.source.set" pathsep = ","> <globmapper from = "${src}/*.@{src.extent}" to = "*.class" /> </pathconvert> <!-- Remove ALL dependencies on package classes. --> <depend srcdir = "${src}" destdir = "${build}" includes = "${java.source.list}" closure = "yes" /> <!-- Get a list of up to date classes. --> <fileset id = "class.uptodate.set" dir = "${build}"> <include name = "**/*.class" /> </fileset> <!-- Get list of source files for up to date classes. --> <pathconvert property = "java.uptodate.list" refid = "class.uptodate.set" pathsep = ","> <globmapper from="${build}/*.class" to="*.java" /> </pathconvert> <!-- Compile only those classes in package that are not up to date. --> <javac srcdir = "${src}" destdir = "${build}" classpathref = "compile.classpath" includes = "${java.source.list}" excludes = "${java.uptodate.list}"/> <!-- Get list of directories of class files for package. --: <pathconvert property = "class.dir.list" refid = "java.source.set" pathsep = ","> <globmapper from = "${src}/*.java" to = "${build}*" /> </pathconvert> <!-- Convert directory list to path. --> <path id = "class.dirs.path"> <dirset dir = "${build}" includes = "class.dir.list" /> </path> <!-- Update package documentation. --> <jdepend outputfile = "${docs}/jdepend-report.txt"> <classpath refid = "compile.classpath" /> <classpath location = "${build}" /> <classespath> <path refid = "class.dirs.path" /> </classespath> <exclude name = "java.*" /> <exclude name = "javax.*" /> </jdepend> Notice there's a number of conversions between filesets, paths and comma separated list just to get the proper 'type' required for the different ant tasks. Is there a way to simplify this while still processing the fewest files in a complex directory structure?

    Read the article

  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time. The C standard has precious few words to say about NULL and null pointer constants. There's only two relevant sections that I can find. First: 3.2.2.3 Pointers An integral constant expression with the value 0, or such an expression cast to type void * , is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. and second: 4.1.5 Common definitions <stddef.h> The macros are NULL which expands to an implementation-defined null pointer constant; The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as: #define NULL __builtin_magic_null_pointer Or even: #define NULL ((void*)-1) My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void* must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that #define NULL (-1) is not a legal definition, because in if (NULL) do_stuff(); do_stuff() must not be called, whereas with if (-1) do_stuff(); do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice-versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well. So what do other people think? I'd ask for everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side-effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side-effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it. Either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work. 
Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constants in 3.2.2.3, which is to stringize the constant:
#define stringize_helper(x) #x
#define stringize(x) stringize_helper(x)
Using this macro, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!

    Read the article

  • Problem exporting NSOpenGLView pixel data to some image file formats using ImageKit & CGImageDestina

    - by walkytalky
    I am developing an application to visualise some experimental data. One of its functions is to render the data in an NSOpenGLView subclass, and allow the resulting image to be exported to a file or copied to the clipboard. The view exports the data as an NSImage, generated like this: - (NSImage*) image { NSBitmapImageRep* imageRep; NSImage* image; NSSize viewSize = [self bounds].size; int width = viewSize.width; int height = viewSize.height; [self lockFocus]; [self drawRect:[self bounds]]; [self unlockFocus]; imageRep=[[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide:width pixelsHigh:height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bytesPerRow:width*4 bitsPerPixel:32] autorelease]; [[self openGLContext] makeCurrentContext]; glReadPixels(0,0,width,height,GL_RGBA,GL_UNSIGNED_BYTE,[imageRep bitmapData]); image=[[[NSImage alloc] initWithSize:NSMakeSize(width,height)] autorelease]; [image addRepresentation:imageRep]; [image setFlipped:YES]; // this is deprecated in 10.6 [image lockFocusOnRepresentation:imageRep]; // this will flip the rep [image unlockFocus]; return image; } Copying uses this image very simply, like this: - (IBAction) copy:(id) sender { NSImage* img = [self image]; NSPasteboard* pb = [NSPasteboard generalPasteboard]; [pb clearContents]; NSArray* copied = [NSArray arrayWithObject:img]; [pb writeObjects:copied]; } For file writing, I use the ImageKit IKSaveOptions accessory panel to set the output file type and associated options, then use the following code to do the writing: NSImage* glImage = [glView image]; NSRect rect = [glView bounds]; rect.origin.x = rect.origin.y = 0; img = [glImage CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:nil]; if (img) { NSURL* url = [NSURL fileURLWithPath: path]; CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url, (CFStringRef)newUTType, 1, NULL); if (dest) { CGImageDestinationAddImage(dest, img, (CFDictionaryRef)[imgSaveOptions imageProperties]); CGImageDestinationFinalize(dest); CFRelease(dest); } } (I've trimmed a bit of extraneous code here, but nothing that would affect the outcome as far as I can see. The newUTType comes from the IKSaveOptions panel.) This works fine when the file is exported as GIF, JPEG, PNG, PSD or TIFF, but exporting to PDF, BMP, TGA, ICNS and JPEG-2000 produces a red colour artefact on part of the image. Example images are below, the first exported as JPG, the second as PDF. Copy to clipboard does not exhibit this red stripe with the current implementation of image, but it did with the original implementation, which generated the imageRep using NSCalibratedRGBColorSpace rather than NSDeviceRGBColorSpace. So I'm guessing there's some issue with the colour representation in the pixels I get from OpenGL that doesn't get through the subsequent conversions properly, but I'm at a loss as to what to do about it. So, can anyone tell me (i) what is causing this, and (ii) how can I make it go away? I don't care so much about all of the formats but I'd really like at least PDF to work.

    Read the article

  • MVC using ODP.NET getting ORA-01840

    - by sse
    I am writing a simple MVC Application using ODP.NET. I am trying to call a Pl/Sql proc that inserts a record. Here is the simple Pl/Sql: procedure spAddCountry(pGisRecid in country.GISRECID%type, pCountryCode in country.COUNTRYCODE%type, pCountryName in country.COUNTRYNAME%type, pCurrencyCode in country.CURRENCYCODE%type, pEUTerritory in country.EUTERRITORY%type, pFatCAStatus in country.FATCASTATUS%type, pFATF in country.FATF%type, pFSCountryCode in country.COUNTRYCODE%type, pInsertedBy in country.INSERTEDBY%type, pInsertedOn in country.INSERTEDON%type, pLanguages in country.LANGUAGES%type, pNCCT in country.NCCT%type) is PRAGMA AUTONOMOUS_TRANSACTION; begin INSERT INTO COUNTRY (GISRECID, COUNTRYCODE, COUNTRYNAME, CURRENCYCODE, EUTERRITORY, FATCASTATUS, FATF, FSCOUNTRYCODE, INSERTEDBY, INSERTEDON, LANGUAGES, NCCT) VALUES(pGISRECID, pCOUNTRYCODE, pCOUNTRYNAME, pCURRENCYCODE, pEUTERRITORY, pFATCASTATUS, pFATF, pFSCOUNTRYCODE, pINSERTEDBY, pINSERTEDON, pLANGUAGES, pNCCT); Commit; end; I am having difficulty passing the date parameter, pInsertedOn, to the Stored Proc. I have verified that the web form retrieves the form data successfully and calls the AddCountry method below, which in turns calls the stored proc, spAddCountry, after populating all of the parms. Here is a snippet of the MVC C# code. I get the following exception: "ORA-01840 input value not long enough for date format". public void AddCountry(Country aCountry) //because the country object field names match the form field names they automatically get bound!! { string oradb = "Data Source=XYZ;User Id=XYZ;Password=xyz;"; OracleConnection conn = new OracleConnection(oradb); OracleCommand cmd = conn.CreateCommand(); cmd.CommandText = "tstpack.spAddCountry"; cmd.CommandType = CommandType.StoredProcedure; ... OracleParameter paramInsertedBy = new OracleParameter(); paramInsertedBy.ParameterName = "pInsertedBy"; paramInsertedBy.Value = aCountry.InsertedBy; cmd.Parameters.Add(paramInsertedBy); // CultureInfo ci = new CultureInfo("en-US"); OracleParameter paramInsertedOn = new OracleParameter(); paramInsertedOn.ParameterName = "pInsertedOn"; // paramInsertedOn.Value = DateTime.Now; //just testing to see if it's WebForm issue // paramInsertedOn.Value = Convert.ToDateTime(DateTime.Now.ToString(), ci); //flail! paramInsertedOn.Value = aCountry.InsertedOn; cmd.Parameters.Add(paramInsertedOn); ... conn.Open(); cmd.ExecuteNonQuery(); //CRASH! ORA-01840 conn.Close(); } Just to verify that the flow of the program is working, I tried removing the date parm "pInsertedOn" from the pl/sql and from the parm list above, and everything worked fine. I know I am going off of the rails with the date. Can someone tell me how to pass a date to Oracle from an MVC WebForm? Is there some sort of type cast needed? I would really appreciate an example too. Thanks so much! ps, I did try changing the parm type to Varchar2 in the Pl/Sql and doing some conversions myself in the Pl/Sql, the automatic MVC binder was getting in my way, forcing the property of paramInsertedOn.OracleType to DateTime. I tried forcing it to Varchar2, but no luck there either...

    Read the article

  • What's the best way to refactor this Rails controller?

    - by Robert DiNicolas
    I'd like some advice on how to best refactor this controller. The controller builds a page of zones and modules. Page has_many zones, zone has_many modules. So zones are just a cluster of modules wrapped in a container. The problem I'm having is that some modules may have some specific queries that I don't want executed on every page, so I've had to add conditions. The conditions just test if the module is on the page, if it is the query is executed. One of the problems with this is if I add a hundred special module queries, the controller has to iterate through each one. I think I would like to see these module condition moved out of the controller as well as all the additional custom actions. I can keep everything in this one controller, but I plan to have many apps using this controller so it could get messy. class PagesController < ApplicationController # GET /pages/1 # GET /pages/1.xml # Show is the main page rendering action, page routes are aliased in routes.rb def show #-+-+-+-+-Core Page Queries-+-+-+-+- @page = Page.find(params[:id]) @zones = @page.zones.find(:all, :order => 'zones.list_order ASC') @mods = @page.mods.find(:all) @columns = Page.columns # restful params to influence page rendering, see routes.rb @fragment = params[:fragment] # render single module @cluster = params[:cluster] # render single zone @head = params[:head] # render html, body and head #-+-+-+-+-Page Level Json Conversions-+-+-+-+- @metas = @page.metas ? ActiveSupport::JSON.decode(@page.metas) : nil @javascripts = @page.javascripts ? ActiveSupport::JSON.decode(@page.javascripts) : nil #-+-+-+-+-Module Specific Queries-+-+-+-+- # would like to refactor this process @mods.each do |mod| # Reps Module Custom Queries if mod.name == "reps" @reps = User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } }) end # Listing-poc Module Custom Queries if mod.name == "listing-poc" limit = params[:limit].to_i < 1 ? 10 : params[:limit] PropertyEntry.update_from_listing(mod.service_url) @properties = PropertyEntry.all(:limit => limit, :order => "city desc") end # Talents-index Module Custom Queries if mod.name == "talents-index" @talent = params[:type] @reps = User.find(:all, :joins => :talents, :conditions => { :talents => { :name => @talent } }) end end respond_to do |format| format.html # show.html.erb format.xml { render :xml => @page.to_xml( :include => { :zones => { :include => :mods } } ) } format.json { render :json => @page.to_json } format.css # show.css.erb, CSS dependency manager template end end # for property listing ajax request def update_properties limit = params[:limit].to_i < 1 ? 10 : params[:limit] offset = params[:offset] @properties = PropertyEntry.all(:limit => limit, :offset => offset, :order => "city desc") #render :nothing => true end end So imagine a site with a hundred modules and scores of additional controller actions. I think most would agree that it would be much cleaner if I could move that code out and refactor it to behave more like a configuration.
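One common way to get the per-module queries out of the controller, whatever the final Rails shape (helpers, presenters, or per-module query objects), is to replace the if mod.name == "..." chain with a registry that maps module names to small loaders, so only the modules actually on the page trigger their queries and new modules don't grow the controller. Below is a language-neutral sketch of that idea (shown in Python); the loader bodies are placeholders standing in for the real ActiveRecord calls.

# Registry of per-module loaders; the dict lookup replaces the growing if/elif chain.
# The lambdas below only return labels - in the real app they would run the queries.
MODULE_LOADERS = {
    "reps":          lambda params: {"reps": "users with role 'rep'"},
    "listing-poc":   lambda params: {"properties": "latest %s properties" % params.get("limit", 10)},
    "talents-index": lambda params: {"reps": "users with talent %s" % params.get("type")},
}

def load_module_data(module_names, params):
    data = {}
    for name in module_names:
        loader = MODULE_LOADERS.get(name)
        if loader:                      # modules with no special queries cost nothing
            data.update(loader(params))
    return data

# Usage: only the two modules present on this page run their loaders.
print(load_module_data(["reps", "listing-poc"], {"limit": 5}))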

    Read the article

  • Defend PHP; convince me it isn't horrible

    - by Jason L
    I made a tongue-in-cheek comment in another question thread calling PHP a terrible language and it got down-voted like crazy. Apparently there are lots of people here who love PHP. So I'm genuinely curious. What am I missing? What makes PHP a good language? Here are my reasons for disliking it: PHP has inconsistent naming of built-in and library functions. Predictable naming patterns are important in any design. PHP has inconsistent parameter ordering of built-in functions, eg array_map vs. array_filter which is annoying in the simple cases and raises all sorts of unexpected behaviour or worse. The PHP developers constantly deprecate built-in functions and lower-level functionality. A good example is when they deprecated pass-by-reference for functions. This created a nightmare for anyone doing, say, function callbacks. A lack of consideration in redesign. The above deprecation eliminated the ability to, in many cases, provide default keyword values for functions. They fixed this in PHP 5, but they deprecated the pass-by-reference in PHP 4! Poor execution of name spaces (formerly no name spaces at all). Now that name spaces exist, what do we use as the dereference character? Backslash! The character used universally for escaping, even in PHP! Overly-broad implicit type conversion leads to bugs. I have no problem with implicit conversions of, say, float to integer or back again. But PHP (last I checked) will happily attempt to magically convert an array to an integer. Poor recursion performance. Recursion is a fundamentally important tool for writing in any language; it can make complex algorithms far simpler. Poor support is inexcusable. Functions are case insensitive. I have no idea what they were thinking on this one. A programming language is a way to specify behavior to both a computer and a reader of the code without ambiguity. Case insensitivity introduces much ambiguity. PHP encourages (practically requires) a coupling of processing with presentation. Yes, you can write PHP that doesn't do so, but it's actually easier to write code in the incorrect (from a sound design perspective) manner. PHP performance is abysmal without caching. Does anyone sell a commercial caching product for PHP? Oh, look, the designers of PHP do. Worst of all, PHP convinces people that designing web applications is easy. And it does indeed make much of the effort involved much easier. But the fact is, designing a web application that is both secure and efficient is a very difficult task. By convincing so many to take up programming, PHP has taught an entire subgroup of programmers bad habits and bad design. It's given them access to capabilities that they lack the understanding to use safely. This has led to PHP's reputation as being insecure. (However, I will readily admit that PHP is no more or less secure than any other web programming language.) What is it that I'm missing about PHP? I'm seeing an organically-grown, poorly-managed mess of a language that's spawning poor programmers. So convince me otherwise!

    Read the article

  • Magento Onepage Success Conversion Tracking Design Pattern

    - by user1734954
    My intent is to track conversions through multiple channels by inserting third-party javascript (for example Google Analytics, Optimizely, PriceGrabber, etc.) into the footer of onepage success. I've accomplished this by adding a block to the footer reference inside of the checkout success node within local.xml, and everything works appropriately. My questions are more about efficiency and extensibility. It occurred to me that it would be better to combine all of the blocks into a single block reference and then use various methods, acting on a single call to the related models, to provide the data needed for insertion into the javascript for each of the conversion tracking scripts. Some examples of the common data that conversion tracking may rely on (pseudo): Order ID, Order Total, Order.LineItem.Name (foreach), and so on. Currently, for each of the scripts, I've made a call to the appropriate model passing the customer's last order id as the load value, then called a get() and assigned the return value to a variable, and then iterated through the data to match the values with the expectations of the given third-party service. All of the data should be pulled once when checkout is complete; each third-party service may expect different data in different formats. Here is an example of one of the conversion tracking template files, which loads in the footer of checkout success.
$order = Mage::getModel('sales/order')->loadByIncrementId(Mage::getSingleton('checkout/session')->getLastRealOrderId());
$amount = number_format($order->getGrandTotal(),2);
$customer = Mage::helper('customer')->getCustomer()->getData();
?>
<script type="text/javascript">
popup_email = '<?php echo($customer['email']);?>';
popup_order_number = '<?php echo $this->getOrderId() ?>';
</script>
<!-- PriceGrabber Merchant Evaluation Code -->
<script type="text/javascript" charset="UTF-8" src="https://www.pricegrabber.com/rating_merchrevpopjs.php?retid=<something>"></script>
<noscript><a href="http://www.pricegrabber.com/rating_merchrev.php?retid=<something>" target=_blank>
<img src="https://images.pricegrabber.com/images/mr_noprize.jpg" border="0" width="272" height="238" alt="Merchant Evaluation"></a></noscript>
<!-- End PriceGrabber Code -->
Having just a single piece of code like this is not that big of a deal, but we are doing similar things with a number of different third-party services. PriceGrabber is one of the simpler examples. A more sophisticated tracking service expects a comma separated list of all of the product names, ids, prices, categories, order id, etc. I would like to make it all more manageable, so my idea is to do the following: combine all of the template files into a single file, and develop a helper class or library to deliver the data to the conversion template. Goals include extensibility, minimal model calls, and minimal method calls. The questions:
1. Is a Mage helper the best route to take?
2. Is there any design pattern you would recommend for the "helper" class?
3. Why would the design pattern you've chosen be best for this instance?
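Whichever Magento mechanism ends up hosting it (a Mage helper is one reasonable home), the underlying pattern is usually: build one plain snapshot of the order exactly once on the success page, then give each third-party service a small formatter that reads from that same snapshot. A language-neutral sketch of that idea follows (shown in Python rather than PHP; field and function names are purely illustrative).

# One snapshot of the order, built once; each tracker formats it independently.
def order_snapshot(order):
    return {
        "order_id":    order["increment_id"],
        "grand_total": round(order["grand_total"], 2),
        "email":       order["customer_email"],
        "items":       [(i["name"], i["sku"], i["price"]) for i in order["items"]],
    }

def pricegrabber_vars(snap):
    # The simple case: a couple of scalar values.
    return {"popup_email": snap["email"], "popup_order_number": snap["order_id"]}

def analytics_item_list(snap):
    # The "sophisticated" case: a comma separated list derived from the same snapshot.
    return ",".join(name for name, _sku, _price in snap["items"])

order = {"increment_id": "100000123", "grand_total": 59.994,
         "customer_email": "buyer@example.com",
         "items": [{"name": "Widget", "sku": "W-1", "price": 19.99},
                   {"name": "Gadget", "sku": "G-2", "price": 39.99}]}
snap = order_snapshot(order)
print(pricegrabber_vars(snap))
print(analytics_item_list(snap))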

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well-known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering:
Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible?
Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically-allocated arrays?
I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)?
How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops?
Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of?
One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving method calls like strlen() out of the termination conditions of loops. Are there any optimizations like this one that I should look out for because they can't be done by the compiler and must be done by hand? On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code?
Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/jsonContent-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model object for inputs coming back from the client and mapping them into these models. But it can be  kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. Not always do you have to pass structured data, but sometimes there are just a couple of simple response values that need to be sent back. If all you need is to pass a couple operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no further  purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handle POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) and then read the values out individually by querying each. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such as setup as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use and especially with string parameters is easy to deal with. 
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's fully functional and a viable choice for collecting POST values.
What about [FromBody]?
Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how FromBody works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. Not really sure where the Web API team thought [FromBody] would really be a good fit other than a down and dirty way to send a full string buffer.
Extending Web API to make multiple POST Vars work? Don't think so
Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it.
Do we really need multi-value POST mapping?
I think that POST value mapping is a feature one would expect of any API tool. If you look at common APIs out there like Flickr and Google Maps, they all work with POST data. POST data is very prominent, much more so than JSON inputs, so supporting as many input options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer to this question.
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding.
Related Articles
Passing multiple POST parameters to Web API Controller Methods
Mike Stall's post: How Web API does Parameter Binding
Where does ASP.NET Web API Fit?
© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

< Previous Page | 8 9 10 11 12 13 14  | Next Page >