Search Results

  • What are web runtime environments and programming languages?

    - by Bradly Spicer
    I've been looking into the details behind these two different categories: web runtime environments and web application programming languages. I believe I have the correct information and have phrased it correctly, but I am unsure. I have been searching for a while but only find snippets of information. Here are my descriptions so far:

    Web runtime environments - A run-time environment implements part of the core behaviour of a computer language and allows it to be modified via an API or embedded domain-specific language. A web runtime environment is similar, except that it uses web-based languages such as JavaScript, which utilise the core behaviour of a computer language. Another example of a web runtime environment is jslibs, which is a standalone JavaScript development runtime environment for using JavaScript as a general all-round scripting language. JavaScript is often used to create responsive interfaces which improve the user experience and provide dynamic functionality without having to wait for the server to react and redirect to another page.

    Web application programming languages - A web application programming language is something that mimics a traditional desktop application within a web page. For example, using PHP you can create forms and tables which use a database, similar to Microsoft Excel. Some of the other languages used for web application programming are: Ajax, Perl, Ruby.

    Here are some of the resources used: http://en.wikipedia.org/wiki/Web_application_development http://code.google.com/p/jslibs/

    I would like some confirmation that the descriptions I have created are correct, as I am still slightly unsure whether I have hit the nail on the head.

  • Apache2's recursive directory permission requirement

    - by Sn3akyP3t3
    The experience I've had thus far is with Ubuntu 10.04 and 12.04 64-bit, so if there are differences on other OSes I'd like to know whether this is an OS-specific problem or not. The issue I've experienced is mostly confusion. Once the cause of the problem is identified and corrected, there are no further related problems. The symptom is Error 403 Forbidden. Typically the cause is attempting to use a directory other than /var/www/ for content. The cause is simply permissions, but it is puzzling why the required permissions must persist from at least one level below the root onward to the current working directory where the content is stored. For example:

        Alias /example/ "/home/user/permissions/can/be/confusing/with/apache/"
        <Directory /home/user/permissions/can/be/confusing/with/apache/>
          Options FollowSymLinks MultiViews
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

    Here www-data is the user that spawned Apache and "user" is a member of the www-data group. Thus, if ownership of /home/user/* is user:user, then all that should be necessary to display content with Apache is read and execute permission. So d---r-x--- should suffice, but for practical purposes I'm using drwxr-x--- for most. However, if all directories under /home/user/ have permissions of drwxr-x--- and /home/user/ itself has permissions of drwx------, then content will always fail with Error 403. This is strange because it doesn't follow what I would consider the traditional logic of permissions, which should only be applicable to the current working directory or a particular file in that directory, not to any directory further back in the chain. Is this by design or is it a bug?
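
    This traversal requirement can be checked directly. Below is a minimal Python sketch (the path is the hypothetical one from the question, and the group check is a simplification) that walks every ancestor of a content directory and reports which component is missing the execute (search) bit - the usual culprit behind these 403s:

        import os
        import stat

        def find_blocking_component(path):
            # A process needs execute (search) permission on EVERY ancestor
            # directory to open files below it, which is why one drwx------
            # high up the chain causes a 403 even if the leaf itself is readable.
            parts = os.path.abspath(path).split(os.sep)
            current = os.sep
            for part in parts[1:]:
                current = os.path.join(current, part)
                mode = os.stat(current).st_mode
                print(stat.filemode(mode), current)
                if not (mode & stat.S_IXGRP or mode & stat.S_IXOTH):
                    print("  ^ likely blocks traversal for the www-data group")

        # Hypothetical content directory from the question:
        find_blocking_component("/home/user/permissions/can/be/confusing/with/apache")

    In other words, the behaviour described is by design: directory execute permission governs path lookup, not just the final directory.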

  • What is "Open" anyway?

    - by EmbeddedInsider
    This term is often used with many meanings. For example, some people consider Flash 'open' and 'multi-platform'. But Flash is a product of Adobe Systems: locked down, copy protected and distribution restricted. And versions for anything other than standard PC, home use may carry licence fees. Check it out:

        3.1 Adobe Runtime Restrictions. You will not use any Adobe Runtime on any non-PC device or with any embedded or device version of any operating system. For the avoidance of doubt, and by example only, you may not use an Adobe Runtime on any (a) mobile device, set top box (STB), handheld, phone, web pad, tablet and Tablet PC (other than with Windows XP Tablet PC Edition and its successors), game console, TV, DVD player, media center (other than with Windows XP Media Center Edition and its successors), electronic billboard or other digital signage, Internet appliance or other Internet-connected device, PDA, medical device, ATM, telematic device, gaming machine, home automation system, kiosk, remote control device, or any other consumer electronics device, (b) operator-based mobile, cable, satellite, or television system or (c) other closed system device. For information on licensing Adobe Runtimes for use on such systems please visit http://www.adobe.com/go/licensing.

    You will notice that, for its embedded operating systems, Microsoft buys and includes a fully paid license for Adobe. Do you get this with Linux? Unix? QNX? So, what is 'open'? Lawrence Ricci www.EmbeddedInsider.com

  • deWiTTERS game loop in libGDX (Android)

    - by jaysingh
    I am a beginner and I want a complete example in libGDX for Android of a fixed-timestep game loop: how to limit the frame rate to 50 or 60, and how to manage interpolation between game states, with simple example code, e.g. the deWiTTERS game loop. What I have now is simply:

        @Override
        public void render() {
            float deltaTime = Gdx.graphics.getDeltaTime();
            Update(deltaTime);
            Render(deltaTime);
        }

    libGDX comments: There is a Gdx.graphics.setVsync() method (generic = backend-independent), but it is not present in 0.9.1, only in the nightlies. "Relying on vsync for fixed time steps is a REALLY bad idea. It will break on almost all hardware out there. See LwjglApplicationConfiguration, there's a flag in there that lets you toggle gpu/software vsynching. Play around with it." (Mario) Note that none of these limit the framerate to a specific value... if you REALLY need to limit the framerate for some reason, you'll have to handle it yourself by returning from render calls if xxx ms haven't passed since the last render call.
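
    For reference, the structure the deWiTTERS article describes is a fixed update rate with interpolated rendering. Here is a minimal, language-neutral sketch in Python (update_game and display are placeholder callbacks, not libGDX API):

        import time

        TICKS_PER_SECOND = 25            # fixed simulation rate
        SKIP_TICKS = 1.0 / TICKS_PER_SECOND
        MAX_FRAMESKIP = 5                # cap catch-up updates per frame

        def game_loop(update_game, display, is_running):
            next_tick = time.monotonic()
            while is_running():
                loops = 0
                # Catch the simulation up in fixed-size steps.
                while time.monotonic() > next_tick and loops < MAX_FRAMESKIP:
                    update_game(SKIP_TICKS)      # dt is always the same
                    next_tick += SKIP_TICKS
                    loops += 1
                # How far we are between the last and the next update (0..1);
                # the renderer uses this to interpolate positions.
                interpolation = (time.monotonic() + SKIP_TICKS - next_tick) / SKIP_TICKS
                display(interpolation)

    In render() you would then draw at previous_pos + (current_pos - previous_pos) * interpolation rather than at the raw simulation position, so rendering stays smooth even though updates happen at a fixed rate.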

  • Why is HTML/Javascript minification beneficial

    - by Channel72
    Why is HTML/JavaScript minification beneficial when the HTTP protocol already supports gzip data compression? I realize that JavaScript/HTML minification has the potential to significantly reduce the size of JavaScript/HTML files by removing unnecessary whitespace, and perhaps renaming variables to a few letters each, but doesn't the DEFLATE algorithm used by gzip do especially well when there are many repeated characters (e.g. lots of whitespace)? I realize that some JavaScript minification tools do more than just reduce size. Google's Closure Compiler, for example, also tries to improve code performance by inlining functions and doing other analyses. But the primary purpose of JavaScript minification is usually to reduce file size. I also realize there are other reasons you might want to minify aside from performance, such as code obfuscation. But again, that reason is not usually emphasized as much as performance gain and file size reduction. For example, Closure Compiler is not advertised as an obfuscation tool, but as a code size reducer and download-speed enhancer. So, how much performance do you really gain from JavaScript/HTML minification when you're already significantly reducing file size with gzip compression?
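
    This is straightforward to measure. A quick sketch in Python (the two snippets are made-up stand-ins for real pre- and post-minification sources) comparing gzipped sizes:

        import gzip

        verbose = b"""
        function addNumbers(firstNumber, secondNumber) {
            var resultOfAddition = firstNumber + secondNumber;
            return resultOfAddition;
        }
        """ * 100   # repeat to approximate a realistic file size

        minified = b"function a(b,c){var d=b+c;return d}" * 100

        for label, data in (("verbose", verbose), ("minified", minified)):
            packed = gzip.compress(data)
            print(label, "raw =", len(data), "gzipped =", len(packed))

    gzip does squeeze out most of the whitespace redundancy, but in runs like this the minified-then-gzipped output is still noticeably smaller than verbose-then-gzipped, because shorter identifiers reduce the post-compression entropy as well.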

  • First-time application: where to start?

    - by Nazariy
    After many years of searching and copy-pasting, I'm still looking for a simple solution that can transliterate text input on the fly from one key set to another. There are quite a few online services that provide this feature, but it is still quite annoying to go online all the time. Unfortunately there are not that many applications left which are capable of doing this, and none of them is supported to this day. I decided to make my own, and at the same time to learn something new for myself. The idea is quite simple: the application should sit in the system tray and wait until the input language is changed, for example to Russian. If Russian is activated, the application should start listening for the user's keystroke combinations and replace them based on a custom dictionary, for example R = Р, SH = Ш, etc. I should be able to bind the application to any installed language (Russian, Ukrainian, Bulgarian, Belarusian, etc.) and customise the dictionary for any of them. So my questions are:

    Which language should I choose for this task: C++, C#, or maybe something hardcore like assembler? The application should work natively with Windows XP/Vista/7 or possibly Mac (cross-platform support is good, but my main target is Windows).

    Due to the nature of the application's behaviour, how can I tell anti-virus software that it is not a key logger and basically not a virus?

    Where should I start and what should I be aware of?

    P.S. My current programming knowledge is quite basic: PHP and JavaScript with an object-oriented approach.
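
    The dictionary-replacement core itself is simple; the hard part is the low-level keyboard hook. A minimal sketch of just the mapping logic in Python (the mapping shown is an illustrative fragment, not a full layout):

        # Longest-match-first transliteration so that "SH" wins over "S".
        TABLE = {
            "SH": "Ш", "CH": "Ч", "YA": "Я",
            "R": "Р", "S": "С", "H": "Х", "A": "А",
        }
        MAX_KEY = max(len(k) for k in TABLE)

        def transliterate(text):
            out, i = [], 0
            while i < len(text):
                for size in range(MAX_KEY, 0, -1):   # try the longest chunk first
                    chunk = text[i:i + size].upper()
                    if chunk in TABLE:
                        out.append(TABLE[chunk])
                        i += size
                        break
                else:
                    out.append(text[i])              # pass unknown characters through
                    i += 1
            return "".join(out)

        print(transliterate("SHAR"))   # -> ШАР

    In a real tray application this function would be fed by a platform keyboard hook (e.g. a low-level hook on Windows), which is also the part that tends to trigger anti-virus heuristics.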

  • Mexico leading in Business Transformation Strategies

    - by [email protected]
    By John Burke, Group Vice President, Oracle Applications Business Unit, April 15, 2010.

    I recently completed a business tour in Mexico and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met. An example of the economic vibrancy of the country: across the street from my hotel were the local Bentley dealership, a Coach store, Yves Saint Laurent and of course a Starbucks. I only made it to Starbucks. Both the Coach store and YSL had a line of folks waiting to get in...

    As for thought leadership, there were several illustrations on the first day alone. I had the opportunity to meet with a branch of the Mexican federal government. Their questions were not about clerical task automation, far from it! We discussed citizen online access to fees and services - for example, looking up the duty on an international goods shipment, or tracking that my taxes have been received, or the status of my request for a certain service. Eligibility, policies and status: having an integrated rules or policy automation system that would allow businesses and citizens to access accurate information and ensure the proper collection of fees and payment for third-party provided services.

    Then in the afternoon I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement). This CEO discussed how he wanted to transform his business from a cement products company to a service company and market 5-10-15 year service contracts which would guarantee the structural integrity of the roof and, of course, that the roof would remain waterproof. Although his products were guaranteed, they required an annual inspection, and most home owners never schedule that inspection until it is too late and water damage has occurred. These emergency calls reduce his margin and reduce customer satisfaction. This led to a discussion of business models in general and why long-term differentiation can only come from service - not just for the music or news industries, but also for roofing companies! I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.

  • Adding dynamic business logic/business process checks to a system

    - by Jordan Reiter
    I'm wondering if there is a good existing pattern (the language here is Python/Django, but I'm also interested at the more abstract level) for creating a business logic layer whose rules can be created without coding. For example, suppose that a house rental should only be available during a specific time. A coder might create the following class:

        from bizlogic import rules, LogicRule
        from orders.models import Order

        class BeachHouseAvailable(LogicRule):
            def check(self, reservation):
                house = reservation.house_reserved
                if not (house.earliest_available < reservation.starts < house.latest_available):
                    raise RuleViolationWhen("Beach house is available only between %s and %s"
                                            % (house.earliest_available, house.latest_available))
                return True

        rules.add(Order, BeachHouseAvailable, name="BeachHouse Available")

    This is fine, but I don't want to have to code something like this each time a new rule is needed. I'd like to create something dynamic, ideally something that can be stored in a database. The thing is, it would have to be flexible enough to encompass a wide variety of rules:

    avoiding duplicates/overlaps (to continue the example: "You already have a reservation for this time/location")

    logic rules ("You can't rent a house to yourself", "This house is in a different place from your chosen destination")

    sanity tests ("You've set a rental price that's 10x the normal rate. Are you sure this is the right price?")

    Things like that. Before I reinvent the wheel, I'm wondering if there are already methods out there for doing something like this.
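
    One common approach is a table-driven rule engine: store each rule as a row (a condition expression plus an error message) and evaluate it against a context. A minimal sketch in plain Python (the rule rows and the objects are made up for illustration; in Django the rules would live in a model, and a production system would need a safe expression language rather than raw eval):

        # Each "row" as it might come back from a rules table.
        RULES = [
            {
                "name": "BeachHouse Available",
                "condition": "house.earliest_available < reservation.starts < house.latest_available",
                "message": "Beach house is available only between {house.earliest_available} and {house.latest_available}",
            },
            {
                "name": "No self-rental",
                "condition": "reservation.renter != house.owner",
                "message": "You can't rent a house to yourself",
            },
        ]

        def check_rules(reservation, house):
            context = {"reservation": reservation, "house": house}
            violations = []
            for rule in RULES:
                # eval() is only acceptable in a sketch; real systems use a
                # restricted DSL or an AST whitelist to sandbox rule authors.
                if not eval(rule["condition"], {"__builtins__": {}}, context):
                    violations.append(rule["message"].format(**context))
            return violations

    The trade-off is exactly the one the question hints at: the more expressive the stored rule language, the closer you get to storing code in the database, so most implementations settle on a constrained vocabulary of fields and operators.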

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain. This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types.

    What is a Tuple? The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate. Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order. That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int. The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.

    Some people tend to love tuples because they give you a quick way to combine multiple values into one result. This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of. They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine. In these cases, you do not need to define a class: simply create a tuple containing the types you wish to return, and you are ready to go!

    On the other hand, there are some people who see tuples as a crutch in object-oriented design. They may view the tuple as a very watered-down class with very little inherent semantic meaning. As an example, what if you saw this in a piece of code:

        var x = new Tuple<int, int>(2, 5);

    What are the contents of this tuple? If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question. The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as:

        public sealed class RetrySettings
        {
            public int TimeoutSeconds { get; set; }
            public int MaxRetries { get; set; }
        }

    Here, the meaning of each int in the class is much more clear, but it's a bit more work to create the class, and it can clutter a solution with extra classes.

    So, what's the correct way to go? That's a tough call. You will have people who will argue quite well for one or the other. For me, I consider the Tuple to be a tool to make it easy to collect values together easily. There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived and the meaning isn't easily lost, and I feel this is a good compromise. If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class.

    Finally, it should be noted that tuples are immutable. That means they are assigned a value at construction, and that value cannot be changed. Of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to.

    Tuples from 1 to N: Tuples come in all sizes; you can have as few as one element in your tuple, or as many as you like. However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly.
    So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments:

        // a singleton tuple of integer
        Tuple<int> x;

        // or more
        Tuple<int, double> y;

        // up to seven
        Tuple<int, double, char, double, int, string, uint> z;

    Anything eight and above, and we have to nest tuples inside of tuples. The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks at runtime to make sure the type is a Tuple. This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property.

        // an 8-tuple
        Tuple<int, int, int, int, int, double, char, Tuple<string>> t8;

        // a 9-tuple
        Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9;

        // a 16-tuple
        Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int, int>>> t16;

    Notice that for the 16-tuple we had to have a nested tuple in the nested tuple. Since the tuple can only support up to seven items, and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.

    Constructing tuples: Constructing tuples is just as straightforward as declaring them. That said, you have two distinct ways to do it. The first is to construct the tuple explicitly yourself:

        var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927);

    This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively. Make sure the order of the arguments supplied matches the order of the types! Also notice that we can't half-assign a tuple or create a default tuple. Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time.

    Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods. These methods (much like C++'s std::make_pair function) will infer the types from the method call, so you don't have to type them in. This can dramatically reduce the amount of typing required, especially for complex tuples!

        // this 4-tuple is typed Tuple<int, double, string, char>
        var t4 = Tuple.Create(42, 3.1415927, "Love", 'X');

    Notice how much easier it is to use the factory methods and infer the types? This can cut down on typing quite a bit when constructing tuples. The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton, as we described before for nested tuples.

    Accessing tuple members: Accessing a tuple's members is simplicity itself... mostly. The properties for accessing up to the first seven items are Item1, Item2, ..., Item7. If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner. Once again, keep in mind that these are read-only properties and cannot be changed.
        // for septuples and below, use the Item properties
        var t1 = Tuple.Create(42, 3.14);

        Console.WriteLine("First item is {0} and second is {1}",
            t1.Item1, t1.Item2);

        // for octuples and above, use Rest to retrieve the nested tuple
        var t9 = new Tuple<int, int, int, int, int, int, int,
            Tuple<int, int>>(1, 2, 3, 4, 5, 6, 7, Tuple.Create(8, 9));

        Console.WriteLine("The 8th item is {0}", t9.Rest.Item1);

    Tuples are IStructuralComparable and IStructuralEquatable: Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples. These IStructuralComparable and IStructuralEquatable interfaces make it easy to compare two tuples for equality and ordering. This is invaluable for sorting, and makes it easy to use tuples as a compound key to a dictionary (one of my favorite uses)!

    Why is this so important? Remember when we said that some folks think tuples are too generic and you should define a custom class? This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task! Thankfully the tuple does this all for you through the explicit implementations of these interfaces.

    For equality, two tuples are equal if all elements are equal between the two tuples; that is, if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on. For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1, and sees which one has a smaller Item1. If one has a smaller Item1, it is the smaller tuple. However, if both Item1 values are the same, it compares Item2, and so on. For example:

        var t1 = Tuple.Create(1, 3.14, "Hi");
        var t2 = Tuple.Create(1, 3.14, "Hi");
        var t3 = Tuple.Create(2, 2.72, "Bye");

        // true, t1 == t2 because all items are ==
        Console.WriteLine("t1 == t2 : " + t1.Equals(t2));

        // false, t2 != t3 because at least one item is different
        Console.WriteLine("t2 == t3 : " + t2.Equals(t3));

    The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface:

        // true because t1.Item1 < t3.Item1; had they been the same it would check Item2 and so on
        Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0));

    So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList:

        var tupleDict = new Dictionary<Tuple<int, double, string>, string>();

        tupleDict.Add(t1, "First tuple");
        tupleDict.Add(t3, "Third tuple");

        // note: adding t2 here would throw an ArgumentException,
        // because t2 is structurally equal to t1 and is therefore the same key

    Because IStructuralEquatable defines GetHashCode(), and Tuple's implementation creates this hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!
    For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1-day chart, 1-week chart, etc.):

        // the account number (string) and number of days (int) are the key to the cached chart
        var chartCache = new Dictionary<Tuple<string, int>, IChart>();

    Summary: The System.Tuple, like any tool, is best used where it will achieve a greater benefit. I wouldn't advise overusing tuples on objects with a large scope, or they can become difficult to maintain. However, when used properly in a well-defined scope, they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful for defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter.

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2, but it resurfaced the next day. We definitely resolved it today, though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References: We referenced many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution:

    Zoiner Tejada - JavaScript and other material: http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3

    WebDAV - where I learned about "Accept": http://www-jo.se/f.pfleger/cors-and-iis

    IT Hit - tells about NOT using '*': http://www.webdavsystem.com/ajax/programming/cross_origin_requests

    Carlos Figueira - sample back-end code (newer): http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version): http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a

    Background: As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed, by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. Notice the images below: IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox). Further, the Chrome developer console shows an XmlHttpRequest (XHR) error. [Screen captures from IE (left) and Chrome (right); Chrome does not display data and its console shows an XHR error.]

    Why does this happen? The Web browser submits these requests and processes the responses, and each browser is different. Okay, so IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin" or Cross-Origin Resource Sharing (CORS) request. So basically the sequence is, if I understand correctly: 1) page loads; 2) JavaScript request processed by browser; 3) browser prepares to submit request; 4) [Chrome] browser submits pre-flight request; 5) server responds with HTTP 200; 6) browser submits request; 7) server responds with data; 8) page shows data.

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted. Instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data; therefore, more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form. How to fix it:
    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information; I list those references above. There are basically three tasks needed to make this work. Assumptions: you are using Visual Studio, Web API, JavaScript, Cross-Origin Resource Sharing, and several browsers.

    1. Configure the client. Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET.

    The POST CORS JavaScript function (credit to Zoiner Tejada):

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada):

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple... except you're not done yet.

    2. Change Web API controllers to allow CORS. There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access. So you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated. NOTE: you'll see a lot of references to using '*' in the header value; for security reasons, Chrome does NOT recognize this as valid.
        [HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")]
        [HttpHeader("Access-Control-Allow-Credentials", "true")]
        [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")]
        [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")]
        [HttpHeader("Access-Control-Max-Age", "3600")]
        public abstract class BaseApiController : ApiController
        {
            [HttpGet]
            [HttpOptions]
            public IEnumerable<foo> GetFooItems(int id)
            {
                return foo.AsEnumerable();
            }

            [HttpPost]
            [HttpOptions]
            public void UpdateFooItem(FooItem fooItem)
            {
                // NOTE: The fooItem object may or may not
                // (probably NOT) be set with actual data.
                // If not, you need to extract the data from
                // the posted form manually.
                if (fooItem.Id == 0) // However you check for default...
                {
                    // We use Newtonsoft.Json.
                    string jsonString = context.Request.Form.GetValues(0)[0].ToString();
                    Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();
                    fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));
                }

                // Update the fooItem object.
            }
        }

    Please note a few specific additions here:

    * The header attributes at the class level are required. Not all of those methods and headers may need to be specified, but we find it works this way, so we aren't touching it.
    * Web API will actually deserialize the posted data into the object parameter of the called method on occasion, but so far we don't know why it does and doesn't.
    * [HttpOptions] is, again, required for the pre-flight check.
    * The "Access-Control-Allow-Origin" response header should NOT NOT NOT contain an '*'.

    3. Headers and methods and such. We had most of this code in place but found that Chrome and Firefox still did not render the data. Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data was returned properly. We learned that among the headers set at the class level, we needed to add "ACCEPT". Note that I accidentally added it both to methods and to headers. Adding it to methods worked, but I don't know why; we added it to headers also for good measure.

        [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA...
        [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin...

    Next steps: That should do it. If it doesn't, let us know. What to do next? Don't hardcode the allowed domains. Note that port numbers and other domain name specifics will cause problems and must be specified. If this changes, do you really want to deploy updated software? Consider Carlos Figueira's approach in the following link of writing a custom HttpHeaderAttribute class that allows you to specify the domain names dynamically. There are, of course, other ways to do it dynamically, but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

  • How can I uniquely record every new command I use, and possibly timestamp it?

    - by Nirmik
    I've been on Linux for more than 6 months now but never went very deep into the CLI (command-line interface, terminal, or shell). Now, as I ask questions here and get answers or help from other sites, I learn new commands. How can I store every new command in a text file? Only new/unique commands, not repetitions of the same command. Here's an example. In the terminal, I enter commands like this:

        ubuntu@ubuntu:~$ command1
        ubuntu@ubuntu:~$ command2
        ubuntu@ubuntu:~$ command3
        ubuntu@ubuntu:~$ command4
        ubuntu@ubuntu:~$ command1

    Now, these commands should get saved in a text file, say commandrec, like this:

        command1
        command2
        command3
        command4

    NOTE: The last command in the terminal, which was command1 again, is not recorded/saved again in the text file. And the next time I open the terminal and enter a new command, command5, it should get appended to the list in commandrec (but if the command was used earlier on some other date, it should still be ignored; for example, command1 entered again along with command5 on a new day/time, but command1 not recorded as it was already used). The commandrec file would look something like this:

        31/05/12 12:00:00
        command1
        command2
        command3
        command4
        01/06/12 13:00:00
        command5

    (The time and date would be great if possible, but okay even if they aren't there.) This way, I can have a record of all commands used by me to date. How can this be done?
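
    A small script run periodically (or from PROMPT_COMMAND) can do this by diffing the shell history against the record file. A minimal Python sketch, assuming bash history lives in ~/.bash_history and the record in ~/commandrec:

        #!/usr/bin/env python3
        import os
        from datetime import datetime

        HISTORY = os.path.expanduser("~/.bash_history")
        RECORD = os.path.expanduser("~/commandrec")

        def record_new_commands():
            seen = set()
            if os.path.exists(RECORD):
                with open(RECORD) as f:
                    seen = {line.strip() for line in f}
            with open(HISTORY) as f:
                history = [line.strip() for line in f if line.strip()]
            # keep only commands never recorded before, preserving order
            new = []
            for cmd in history:
                if cmd not in seen:
                    seen.add(cmd)
                    new.append(cmd)
            if new:
                with open(RECORD, "a") as f:
                    f.write(datetime.now().strftime("%d/%m/%y %H:%M:%S") + "\n")
                    f.write("\n".join(new) + "\n")

        if __name__ == "__main__":
            record_new_commands()

    One wrinkle: the date headers themselves end up in the seen set, which is harmless here since they don't look like commands; also note that bash only flushes history on exit unless you set something like PROMPT_COMMAND='history -a'.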

  • Value of the HTML5 lang attribute

    - by user359650
    I'm working on a website which will offer localized content following the language+region approach as described on this W3.org page (e.g. fr-CA for Canadian French content, and fr-FR for "French French" content). As we consider content for each language+region to be unique, it is crucial to us that search engines properly identify and serve the content accordingly.

    By looking around the Internet (e.g. this question), it appears that most people recommend the use of an ISO 639 language code in the HTML lang attribute to describe the content language. Following this recommendation, we would end up using <html lang="fr">, which wouldn't enable differentiation between the aforementioned language+region combinations.

    When reviewing the HTML4 specification, it seems that using language+region as a language code would be perfectly OK, as the en-US example is given as one possible value. However, I couldn't find any confirmation of this in the HTML5 specification, which doesn't seem to provide any examples of the possible allowed values.

    From there I tried to get a de facto answer by looking at what the web giants are doing. I looked at what Facebook is doing: they offer Canadian French and French French versions of their website with (slightly) different content, whilst the HTML lang value remains the same:

    fr-CA: URL http://fr-ca.facebook.com, HTML lang attribute <html lang="fr">, translation of the word 'email': courriel

    fr-FR: URL http://fr-fr.facebook.com/, HTML lang attribute <html lang="fr">, translation of the word 'email': Adresse électronique

    Q: What is the recommended/standard way of describing content that was localized using the language+region approach in HTML5?

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that has 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a front-end web site and then view, modify and download their company's invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web page invoice rendering performance. The 1 million invoice database covers just 90 days of sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats, so for example it is easy for us to convert to and from SQL to XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure.

    How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip code into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer's invoices in each customer's home directory in XML format? And secondly, what would you suggest as the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism not to be used. By way of example, first let's create two tables with a simple parent and child (one-to-one) relationship, and then populate them with 1,000,000 rows.

        DROP TABLE Parent
        DROP TABLE Child
        GO
        CREATE TABLE Parent (id INTEGER IDENTITY PRIMARY KEY, data1 CHAR(255))
        CREATE TABLE Child (id INTEGER PRIMARY KEY)
        GO
        INSERT INTO Parent (data1)
        SELECT TOP 1000000 NULL
          FROM sys.columns a CROSS JOIN sys.columns b
        INSERT INTO Child
        SELECT id FROM Parent
        GO

    If we then execute

        UPDATE Parent
           SET data1 = ''
          FROM Parent
          JOIN Child ON Parent.Id = Child.Id
         WHERE Parent.Id % 100 = 1 AND Child.Id % 100 = 1

    we should see an execution plan using parallelism. However, if the OUTPUT clause is now used:

        UPDATE Parent
           SET data1 = ''
        OUTPUT inserted.id
          FROM Parent
          JOIN Child ON Parent.Id = Child.Id
         WHERE Parent.Id % 100 = 1 AND Child.Id % 100 = 1

    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome.

    Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, parallelism is used. Naturally, if you use a table variable then there is still no parallelism.

  • Making a Statement: How to retrieve the T-SQL statement that caused an event

    - by extended_events
    If you've done any troubleshooting of T-SQL, you know that sooner or later, probably sooner, you're going to want to take a look at the actual statements you're dealing with. In Extended Events we offer an action (see the BOL topic that covers Extended Events Objects for a description of actions) named sql_text that seems like it is just the ticket. Well... not always - sounds like a good reason for a blog post.

    When is a statement not THE statement? The sql_text action returns the same information that is returned from DBCC INPUTBUFFER, which may or may not be what you want. For example, if you execute a stored procedure, the sql_text action will return something along the lines of "EXEC sp_notwhatiwanted", assuming that is the statement you sent from the client. Often folks would like something more specific, like the actual statements that are being run from within the stored procedure or batch.

    Enter the stack: Extended Events offers another action, this one with the descriptive name of tsql_stack, that includes the sql_handle and offset information about the statements being run when an event occurs. With the sql_handle and offset values you can retrieve the specific statement you seek using the DMV dm_exec_sql_text. The BOL topic for dm_exec_sql_text provides an example for how to extract this information, so I'll cover the gymnastics required to get the sql_handle and offset values out of the tsql_stack data collected by the action. I'm the first to admit that this isn't pretty, but this is what we have in SQL Server 2008 and 2008 R2. We will be making it easier to get statement-level information in the next major release of SQL Server.

    The sample code: For this example I have a stored procedure that includes multiple statements, and I need to differentiate between those two statements in my tracing. I'm going to track two events: module_end tracks the completion of the stored procedure execution, and sp_statement_completed tracks the execution of each statement within a stored procedure. I'm adding the tsql_stack action (since that's the topic of this post) and the sql_text action for comparison's sake. (If you have questions about creating event sessions, check out Pedro's post Introduction to Extended Events.)

        USE AdventureWorks2008
        GO

        -- Test SP
        CREATE PROCEDURE sp_multiple_statements
        AS
        SELECT 'This is the first statement'
        SELECT 'this is the second statement'
        GO

        -- Create a session to look at the sp
        CREATE EVENT SESSION track_sprocs ON SERVER
        ADD EVENT sqlserver.module_end
            (ACTION (sqlserver.tsql_stack, sqlserver.sql_text)),
        ADD EVENT sqlserver.sp_statement_completed
            (ACTION (sqlserver.tsql_stack, sqlserver.sql_text))
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
        GO

        -- Start the session
        ALTER EVENT SESSION track_sprocs ON SERVER
        STATE = START
        GO

        -- Run the test procedure
        EXEC sp_multiple_statements
        GO

        -- Stop collection of events but maintain ring buffer
        ALTER EVENT SESSION track_sprocs ON SERVER
        DROP EVENT sqlserver.module_end,
        DROP EVENT sqlserver.sp_statement_completed
        GO

    Aside: Altering the session to drop the events is a neat little trick that allows me to stop collection of events while keeping in-memory targets such as the ring buffer available for use. If you stop the session, the in-memory target data is lost. Now that we've collected some events related to running the stored procedure, we need to do some processing of the data.
    I'm going to do this in multiple steps using temporary tables so you can see what's going on - kind of like having to "show your work" on a math test. The first step is to just cast the target data into XML so I can work with it. After that you can pull out the interesting columns; for our purposes I'm going to limit the output to just the event name, object name, stack and sql text. You can see that I've done a second CAST, this time of the tsql_stack column, so that I can further process this data.

        -- Store the XML data to a temp table
        SELECT CAST(t.target_data AS XML) xml_data
        INTO #xml_event_data
        FROM sys.dm_xe_sessions s
        INNER JOIN sys.dm_xe_session_targets t
            ON s.address = t.event_session_address
        WHERE s.name = 'track_sprocs'

        SELECT * FROM #xml_event_data

        -- Parse the column data out of the XML block
        SELECT
            event_xml.value('(./@name)', 'varchar(100)') as [event_name],
            event_xml.value('(./data[@name="object_name"]/value)[1]', 'varchar(255)') as [object_name],
            CAST(event_xml.value('(./action[@name="tsql_stack"]/value)[1]', 'varchar(MAX)') as XML) as [stack_xml],
            event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(max)') as [sql_text]
        INTO #event_data
        FROM #xml_event_data
            CROSS APPLY xml_data.nodes('//event') n (event_xml)

        SELECT * FROM #event_data

    This returns rows like the following (event_name | object_name | stack_xml | sql_text):

        sp_statement_completed | NULL | <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="4" offsetStart="94" offsetEnd="172" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" /> | EXEC sp_multiple_statements
        sp_statement_completed | NULL | <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="6" offsetStart="174" offsetEnd="-1" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" /> | EXEC sp_multiple_statements
        module_end | sp_multiple_statements | <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="0" offsetStart="0" offsetEnd="0" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" /> | EXEC sp_multiple_statements

    After parsing the columns it's easier to see what is recorded. You can see that I got back two sp_statement_completed events, which makes sense given the test procedure I'm running, and I got back a single module_end for the entire statement. As described, the sql_text isn't telling me what I really want to know for the first two events, so a little extra effort is required.
        -- Parse the tsql stack information into columns
        SELECT
            event_name,
            object_name,
            frame_xml.value('(./@level)', 'int') as [frame_level],
            frame_xml.value('(./@handle)', 'varchar(MAX)') as [sql_handle],
            frame_xml.value('(./@offsetStart)', 'int') as [offset_start],
            frame_xml.value('(./@offsetEnd)', 'int') as [offset_end]
        INTO #stack_data
        FROM #event_data
            CROSS APPLY stack_xml.nodes('//frame') n (frame_xml)

        SELECT * FROM #stack_data

    This gives (event_name | object_name | frame_level | sql_handle | offset_start | offset_end):

        sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 94 | 172
        sp_statement_completed | NULL | 2 | 0x01000500CF3F0331B05EC084000000000000000000000000 | 0 | -1
        sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 174 | -1
        sp_statement_completed | NULL | 2 | 0x01000500CF3F0331B05EC084000000000000000000000000 | 0 | -1
        module_end | sp_multiple_statements | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 0 | 0
        module_end | sp_multiple_statements | 2 | 0x01000500CF3F0331B05EC084000000000000000000000000 | 0 | -1

    Parsing out the stack information doubles the fun, and I get two rows for each event. If you examine the stack from the previous table, you can see that each stack has two frames and my query is parsing each event into frames, so this is expected. There is nothing magic about the two frames; that's just how many I get for this example - it could be fewer or more depending on your statements. The key point here is that I now have a sql_handle and the offset values for those handles, so I can use dm_exec_sql_text to get the actual statement. Just a reminder: this DMV can only return what is in the cache - if you have old data, it's possible your statements have been ejected from the cache. "Old" is a relative term when talking about caches and can be impacted by server load and how often your statement is actually used. As with most things in life, your mileage may vary.

        SELECT
            qs.*,
            SUBSTRING(st.text, (qs.offset_start/2) + 1,
                ((CASE qs.offset_end
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.offset_end
                  END - qs.offset_start)/2) + 1) AS statement_text
        FROM #stack_data AS qs
        CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max), sql_handle, 1)) AS st

    This gives (event_name | object_name | frame_level | sql_handle | offset_start | offset_end | statement_text):

        sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 94 | 172 | SELECT 'This is the first statement'
        sp_statement_completed | NULL | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 174 | -1 | SELECT 'this is the second statement'
        module_end | sp_multiple_statements | 1 | 0x03000500D0057C1403B79600669D00000100000000000000 | 0 | 0 | C

    Now that looks more like what we were after: the statement_text field is showing the actual statement being run when the sp_statement_completed event occurs. You'll notice that it's back down to one row per event - what happened to frame 2? The short answer is, "I don't know." In SQL Server 2008 nothing is returned from dm_exec_sql_text for the second frame, and I believe this to be a bug; this behavior has changed in the next major release, where I see the actual statement run from the client in frame 2. (In other words, I see the same statement that is returned by the sql_text action or DBCC INPUTBUFFER.) There is also something odd going on with frame 1 returned from the module_end event: you can see that the offset values are both 0 and only the first letter of the statement is returned.
    It seems like the offset_end should actually be -1 in this case, and I'm not sure why it's not returned correctly. This behavior is being investigated and will hopefully be corrected in the next major version. You can work around this final oddity by ignoring the offsets and just returning the entire cached statement.

        SELECT
            event_name,
            sql_handle,
            ts.text
        FROM #stack_data
            CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max), sql_handle, 1)) as ts

    This gives (event_name | sql_handle | text):

        sp_statement_completed | 0x0300070025999F11776BAF006F9D00000100000000000000 | CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
        sp_statement_completed | 0x0300070025999F11776BAF006F9D00000100000000000000 | CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
        module_end | 0x0300070025999F11776BAF006F9D00000100000000000000 | CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'

    Obviously this gives more than you want for the sp_statement_completed events, but it's the right information for module_end. I leave it to you to determine when this information is needed and to use the workaround when appropriate.

    Aside: You might think it's odd that I'm showing apparent bugs with my samples, but you're going to see this behavior if you use this method, so you need to know about it. I'm all about transparency.

    Happy Eventing - Mike

  • What is the advantage of currying?

    - by Mad Scientist
    I just learned about currying, and while I think I understand the concept, I'm not seeing any big advantage in using it. As a trivial example, I use a function that adds two values (written in ML). The version without currying would be

        fun add(x, y) = x + y

    and would be called as

        add(3, 5)

    while the curried version is

        fun add x y = x + y    (* short for val add = fn x => fn y => x + y *)

    and would be called as

        add 3 5

    It seems to me to be just syntactic sugar that removes one set of parentheses from defining and calling the function. I've seen currying listed as one of the important features of functional languages, and I'm a bit underwhelmed by it at the moment. The concept of creating a chain of functions that each consume a single parameter, instead of a function that takes a tuple, seems rather complicated to use for a simple change of syntax. Is the slightly simpler syntax the only motivation for currying, or am I missing some other advantages that are not obvious in my very simple example? Is currying just syntactic sugar?
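
    The usual answer involves partial application: a curried function applied to fewer arguments than it takes yields a new function, which composes nicely with higher-order functions. A rough illustration in Python (which isn't curried by default, so the nesting is explicit):

        def add(x):
            return lambda y: x + y   # curried: add(x) returns a new function

        add5 = add(5)                # partially applied - no special syntax needed
        print(add5(3))               # 8
        print(list(map(add(10), [1, 2, 3])))   # [11, 12, 13]

    In ML the same thing falls out for free: map (add 10) [1, 2, 3] works because add 10 is already a value of type int -> int, whereas the tupled version would force you to write a wrapper lambda.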

  • Interpolate air drag for my game?

    - by Valentin Krummenacher
    So I have a little game which works in small steps; however, those steps vary in time, so for example I sometimes have 10 steps/second and then 20 steps/second. This changes automatically depending on how many steps the user's computer can take. To avoid inaccurate positioning of the game's player object I use y = v0*dt + g*dt^2/2 to determine my object's y-position, where dt is the time since the last step, v0 is the velocity of my object at the beginning of the step, and g is the gravity. To calculate the velocity at the end of a step I use v = v0 + g*dt, which also gives me correct results, independent of whether I use two steps with a dt of, for example, 20 ms or one step with a dt of 40 ms.

    Now I would like to introduce air drag. For simplicity's sake I use a = k*v^2, where a is the air drag's acceleration (I am aware that it would usually result in a force, but since I assume 1 kg for my object's mass the acceleration equals the force), k is a constant (in this case I'm using 0.001) and v is the speed. Now, in an infinitely small time interval, a is k multiplied by the velocity in this small time interval squared. The problem is that v in the next time interval would depend on the drag of the last, which again depends on the v of the last interval, and so on... In other words: if I use a = k*v^2 I get different results for my position/velocity when I use two steps of 20 ms than when I use one step of 40 ms.

    I used to have this problem for my position too, but adding +g*dt^2/2 to the formula for my position fixed the problem, since it takes into account that the position depends on the velocity, which changes slightly in every infinitely small time interval. Does something like that exist for air drag too? And no, I don't mean anything like "Adding air drag to a golf ball trajectory equation" or similar, for that kind of method only gives correct results when all my steps are the same. (I hope you can understand my intermediate English; it's not my main language, so I apologize for any silly mistakes I might have made in my question.)
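
    For the drag-only case the question actually has a closed form: dv/dt = -k*v^2 integrates to v(t) = v0 / (1 + k*v0*t), which, like the gravity formulas above, is independent of how you slice the time steps. A small Python sketch contrasting it with naive fixed-step integration:

        K = 0.001   # drag constant from the question

        def euler_drag(v0, dt, steps):
            # naive per-step update: v += a*dt with a = -k*v^2
            v = v0
            for _ in range(steps):
                v += -K * v * v * dt
            return v

        def exact_drag(v0, t):
            # closed-form solution of dv/dt = -k*v^2
            return v0 / (1 + K * v0 * t)

        v0 = 100.0
        print(euler_drag(v0, 0.040, 1))   # one 40 ms step
        print(euler_drag(v0, 0.020, 2))   # two 20 ms steps - differs slightly
        print(exact_drag(v0, 0.040))      # step-size independent

    With gravity and drag combined there is no equally simple closed form in general, which is why the standard advice is to decouple simulation from rendering with a fixed timestep (as in the deWiTTERS loop discussed earlier) rather than hunting for an exact integrator.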

  • DIVIDE vs division operator in #dax

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn't have other features/fixes introduced by subsequent cumulative updates...). The idea is that instead of writing:

        IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

    you can write:

        DIVIDE ( Sales[Amount], Sales[Quantity] )

    There is a third, optional argument in DIVIDE that defines the result in case the denominator (second argument) is zero; by default its value is BLANK, so I omitted the third argument in my example.

    Using DIVIDE is very important, especially when you use a measure in MDX (for example in an Excel PivotTable), because it raises the chance that the non-empty evaluation for the result is done in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it's better to use the standard division operator, removing the IF statement. I suggest you read Alberto's article, because you will find that an expression applying a filter using FILTER is faster than using CALCULATE, which is against any rule of thumb you might have read until now! Again, this is not always true and depends on many conditions. Trying to simplify, we might say that for a simple calculation the query plan generated by FILTER could be more efficient; but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your first concern!
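
    For readers outside DAX, the semantics of DIVIDE are easy to pin down with a language-neutral sketch in Python (the names are mine, not a DAX API):

        def divide(numerator, denominator, alternate_result=None):
            # Mirrors DAX DIVIDE: return the alternate result (default BLANK,
            # modeled here as None) instead of failing when dividing by zero.
            if denominator == 0:
                return alternate_result
            return numerator / denominator

        print(divide(10, 4))        # 2.5
        print(divide(10, 0))        # None, i.e. BLANK
        print(divide(10, 0, 0.0))   # 0.0, via the explicit third argument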

    Read the article

  • SQL SERVER – Introduction to PERCENT_RANK() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENT_RANK(). This function returns the relative standing of a value within a query result set or partition. It is difficult to explain this in words alone, so let me attempt to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use it for experimenting. Now let's run the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty,
            RANK() OVER (ORDER BY SalesOrderID) Rnk,
            PERCENT_RANK() OVER (ORDER BY SalesOrderID) AS PctDist
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY PctDist DESC
        GO

    The above query gives us a result set containing both ranks. You will notice that I have also included the RANK() function in this query. The reason for including RANK() is that PERCENT_RANK() in fact builds on RANK to find the relative standing of each row. The formula for PERCENT_RANK() is as follows: PERCENT_RANK() = (RANK() – 1) / (Total Rows – 1). If you want to read more about this function, read here. Now let us attempt the same example with the PARTITION BY clause:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
            RANK() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) Rnk,
            PERCENT_RANK() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS PctDist
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY PctDist DESC
        GO

    You will notice that the same logic is followed within each partition of the result set. Now I have a quick question for you: how many of you knew the logic/formula of PERCENT_RANK() before this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
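    As a hedged illustration of that formula (not how SQL Server computes it internally), a few lines of Python reproduce the PctDist column for the four order IDs above:

        def percent_rank(values):
            # PERCENT_RANK() = (RANK() - 1) / (total rows - 1), where RANK() is
            # 1 + the number of strictly smaller values (ties share a rank).
            n = len(values)
            if n == 1:
                return [0.0]
            return [sum(1 for w in values if w < v) / (n - 1) for v in values]

        print(percent_rank([43663, 43667, 43669, 43670]))
        # [0.0, 0.333..., 0.666..., 1.0], matching PctDist in ascending order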

    Read the article

  • Designing javascript chart library

    - by coolscitist
    I started coding a chart library on top of d3js: My chart library. I read "Javascript API reusability" and "Towards reusable charts"; however, I am NOT really following their suggestions because I am not convinced by them. This is how my library can be used to create a bubble chart:

        var chart = new XYBubbleChart();
        chart.data = [{"xValue":200,"yValue":300},{"xValue":400,"yValue":200},{"xValue":100,"yValue":310}]; // set data
        chart.dataKey.x = "xValue";
        chart.dataKey.y = "yValue";
        chart.elementId = "#chart";
        chart.createChart();

    Here are my questions: It does not use chaining. Is that a big issue? Every property and function is exposed publicly (example: width and height are exposed in Chart.js). OOP is all about abstraction and hiding, but I don't really see the point right now. I think exposing everything gives me the flexibility to change properties and functionality inside subclasses and objects without writing a lot of code. What could be the pitfalls of such exposure? I have implemented features like zooming and "showing info boxes when a data point is clicked" as "abilities" (example: XYZoomingAbility.js). Basically, such "abilities" accept a "chart" object and play with its public variables to add functionality. This allows me to add an ability by writing: activateZoomAbility(chartObject); My goal is to separate "visualization" from "interactivity": I want interactivity like zooming to be plugged into the chart rather than built into it. I don't want my bubble chart to know anything about "zooming"; however, I do want a zoomable bubble chart. What is the best way to do this? Also, how should I test, and what should I test? So far I have written mixed tests: Jasmine specs plus actual HTML files, so that I can also test manually in the browser.
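    The decoupling described above, with interactivity attached from the outside rather than baked into the chart, can be sketched language-agnostically. A minimal Python illustration of the idea (all names hypothetical, not taken from the library):

        class XYBubbleChart:
            """Chart that knows nothing about zooming; abilities attach themselves."""
            def __init__(self):
                self.data = []
                self.listeners = {}  # event name -> list of callbacks

            def on(self, event, callback):
                self.listeners.setdefault(event, []).append(callback)

            def emit(self, event, *args):
                for callback in self.listeners.get(event, []):
                    callback(*args)

        def activate_zoom_ability(chart):
            # The ability wires itself to the chart's public event surface.
            chart.on("wheel", lambda delta: print("zoom by", delta))

        chart = XYBubbleChart()
        activate_zoom_ability(chart)  # zoomable chart; the chart stays zoom-agnostic
        chart.emit("wheel", 1.5)      # prints "zoom by 1.5"

    Exposing a small event surface instead of every property keeps abilities decoupled without requiring the chart to make all of its internals public.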

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Install dependencies
        4. Run tests

    In a Dockerized workflow, it would be something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Build Docker image
        4. Run Docker image as container
        5. Run tests
        6. Kill Docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, that only works well in certain situations (a webapp, for example). When testing a different kind of service (for example, a background service that only communicates with a database), a different approach is required. What is the best way to approach this problem? Is it a problem with my application's design? Should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation and do something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Build Docker image
        4. Run Docker image as container
        5. Open SSH session into container
        6. Run tests
        7. Kill Docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
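    One commonly used middle ground (not mentioned in the question) is docker exec, which runs a command inside the already running container without an SSH daemon, keeping the image free of CI-specific plumbing. A hedged orchestration sketch in Python, where the image name and the test command are assumptions:

        import subprocess

        def run(*cmd):
            """Run a command, raising if it fails (so CI marks the build red)."""
            subprocess.run(cmd, check=True)

        run("docker", "build", "-t", "myapp-ci", ".")          # build image from repo
        run("docker", "run", "-d", "--name", "ci-run", "myapp-ci")
        try:
            # Execute the test suite inside the container; no SSH session needed.
            run("docker", "exec", "ci-run", "pytest", "-q")
        finally:
            run("docker", "rm", "-f", "ci-run")                # kill and remove container

    Unlike SSH, docker exec needs no credentials or daemon inside the image, although it still assumes you know which test command the container understands.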

    Read the article

  • Ubuntu One synchronizing problems

    - by user72249
    I am a user of Ubuntu and Ubuntu One. First: I have a serious problem with Ubuntu One. I have a Windows machine and several Ubuntu machines, and they all sync one account. My first problem is that I can't really delete files. Whenever I delete something, Ubuntu One resyncs it, so it appears again in my folder. Furthermore, when I move something to another directory, it syncs the new location and resyncs the old one, so all my files get doubled. I can't reorganize my files. So I tried to delete the duplicated files through the web dashboard, but I can't reorder files or select multiple files by extension there. For example, I wanted to move all PDFs to another location... Second: I can't configure the location of the main One folder in the Ubuntu One app. For example, I want my One folder to be on a different partition than my HOME folder. This is a major problem on both Linux and Windows. I tried to move that folder on Windows with the hard-link method: I uninstalled U1, created a folder link to another drive, then reinstalled U1. THEN I lost all my files in my U1. Luckily I have a backup, but I thought U1 would be a stable solution for me, and I planned to extend my space; these bugs are major problems! It would be worth paying for if it worked perfectly. I think that is more important than releasing a new version every month.

    Read the article

  • Creating PDF Documents with ASP.NET and iTextSharp

    The Portable Document Format (PDF) is a popular file format for documents. Due to their ubiquity and layout capabilities, it's not uncommon for websites to use PDF technology. For example, an eCommerce store may offer a "printable receipt" option that, when selected, displays a PDF file within the browser. Last week's article, Filling in PDF Forms with ASP.NET and iTextSharp, looked at how to work with a special kind of PDF document, namely one that has one or more fields defined. A PDF document can contain various types of user interface elements, which are referred to as fields. For instance, there is a text field, a checkbox field, a combobox field, and more. Typically, the person viewing the PDF on her computer interacts with the document's fields; however, it is possible to enumerate and fill a PDF's fields programmatically, as we saw in last week's article. This article continues our investigation into iTextSharp, a .NET open source library for PDF generation, showing how to use iTextSharp to create PDF documents from scratch. We start with an example of how to programmatically define and piece together paragraphs, tables, and images into a single PDF file. Following that, we explore how to use iTextSharp's built-in capabilities to convert HTML into PDF. Read on to learn more!
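    iTextSharp itself is a C# library, but the compose-a-document flow the article describes (paragraphs and tables pieced into one PDF) can be sketched in Python with the reportlab library; this is a different library shown only to illustrate the idea, and the file name and contents are invented:

        from reportlab.lib.pagesizes import letter
        from reportlab.lib.styles import getSampleStyleSheet
        from reportlab.platypus import SimpleDocTemplate, Paragraph, Table

        # Assemble a "story" of flowable elements, then lay them out into a PDF.
        doc = SimpleDocTemplate("receipt.pdf", pagesize=letter)
        styles = getSampleStyleSheet()
        story = [
            Paragraph("Printable receipt", styles["Title"]),
            Table([["Item", "Qty", "Price"],
                   ["Widget", "2", "$9.00"]]),
        ]
        doc.build(story)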

    Read the article

  • Changing the connection factory JNDI dynamically in Ftp Adapter

    - by [email protected]
    Consider a use case where you need to send the same file to five different ftp servers. The first thought that might come to mind is to create five FtpAdapter references, one for each connection-factory location. However, this is not the most optimal approach, and this is exactly where "Dynamic Partner Links" come into play in 11g. If you're running the adapter in managed mode, you are required to configure the connection factory JNDI in the appserver console for the FtpAdapter; in my sample, I mapped the connection-factory JNDI location "eis/Ftp/FtpAdapter" to the ftp server running on localhost. After you've configured the connection factory on your appserver, you will need to refer to the connection-factory JNDI in the jca artifact of your SCA process; in my example, I instructed the FTPOut reference to use the ftp server corresponding to "eis/Ftp/FtpAdapter". The good news is that you can change this connection-factory location dynamically using jca header properties in both the BPEL and Mediator service engines. To do so, the business scenario involving BPEL or Mediator uses the reserved jca header property "jca.jndi"; for Mediator, the same property is set in the mplan. Things to remember while using dynamic partner links:

    1) The connection factories must be pre-configured on the SOA server. In the BPEL example, both "eis/Ftp/FtpAdapter1" and "eis/Ftp/FtpAdapter2" must be configured in the WebLogic deployment descriptor for the FtpAdapter prior to deploying the scenario.
    2) Dynamic Partner Links are applicable to outbound invocations only.

    Read the article

  • Order of operations to render VBO to FBO texture and then rendering FBO texture full quad

    - by cyberdemon
    I've just started using OpenGL with C# via the OpenTK library. I've managed to successfully render my game world using VBOs. I now want to create a pixelated effect by rendering the frame to an offscreen FBO at half my GameWindow size and then rendering that FBO to a fullscreen quad. I've been looking at the OpenTK example here: http://www.opentk.com/doc/graphics/frame-buffer-objects ...but the result is a black form. I'm not sure which parts of the example code belong in the OnLoad event and which in OnRenderFrame. Can someone please tell me if the outline below shows the correct order of operations?

        OnLoad
        {
            // VBO.
            // DataArrayBuffer: GenBuffers/BindBuffer/BufferData
            // ElementArrayBuffer: GenBuffers/BindBuffer/BufferData
            // ColourArrayBuffer: GenBuffers/BindBuffer/BufferData

            // FBO.
            // ColourTexture: GenTextures/BindTexture/TexParameter x4/TexImage2D
            // Create FBO.
            // Textures: Ext.GenFramebuffers/Ext.BindFramebuffer/
            //           Ext.FramebufferTexture2D/Ext.FramebufferRenderbuffer
        }

        OnRenderFrame
        {
            // Use FBO buffer.
            Ext.BindFramebuffer(FBO)
            GL.Clear
            // Set viewport to FBO dimensions.
            GL.DrawBuffer((DrawBufferMode)FramebufferAttachment.ColorAttachment0Ext)

            // Bind VBO arrays.
            GL.BindBuffer(ColourArrayBuffer)
            GL.ColorPointer
            GL.EnableClientState(ColorArray)
            GL.BindBuffer(DataArrayBuffer)
            GL.BufferData(DataArrayBuffer)   // if world changed
            GL.VertexPointer
            GL.EnableClientState(VertexArray)
            GL.BindBuffer(ElementArrayBuffer)

            // Render VBO.
            GL.DrawElements

            // Bind visible buffer.
            GL.Ext.BindFramebuffer(0)
            GL.DrawBuffer(Back)
            GL.Clear
            // Set camera to view texture.
            GL.BindTexture(ColourTexture)

            // Render FBO texture on a fullscreen quad.
            GL.Begin(Quads)
            // TexCoord2/Vertex2 for each corner
            GL.End

            SwapBuffers
        }

    Read the article
