Search Results

Search found 346 results on 14 pages for 'conversions'.

Page 10/14 | < Previous Page | 6 7 8 9 10 11 12 13 14  | Next Page >

  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together. Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev) This is a great example of poking holes in the theory of "if it works, it's not broken." Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

      CREATE TABLE [dbo].[Page](
          [PageId] [int] IDENTITY(1,1) NOT NULL,
          [Title] [varchar](75) NOT NULL,
          [Sequence] [int] NOT NULL,
          [ThemeId] [int] NOT NULL,
          [CustomCss] [text] NOT NULL,
          [CustomScript] [text] NOT NULL,
          [PageGroupId] [int] NOT NULL
      );

      CREATE PROCEDURE PageSelectBySequence
          (@sequenceMin smallint, @sequenceMax smallint)
      AS
      BEGIN
          SELECT [PageId], [Title], [Sequence], [ThemeId],
                 [CustomCss], [CustomScript], [PageGroupId]
          FROM [CMS].[dbo].[Page]
          WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
      END

    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join. (Dave_Levy / Plamen Ratchev) There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins, but more likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

      SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
             PageEffectParam.Name, PageEffectParam.Value,
             (SELECT EffectName FROM dbo.Effect
              WHERE EffectId = PageEffect.EffectId) AS EffectName
      FROM Page
      INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
      INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId

    This can and should be written as:

      SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
             PageEffectParam.Name, PageEffectParam.Value,
             Effect.EffectName
      FROM Page
      INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
      INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId
      INNER JOIN dbo.Effect ON Effect.EffectId = PageEffect.EffectId

    The correlated query may just as easily show up in the WHERE clause. It's not a good idea in the SELECT clause or the WHERE clause.

    Few or no comments. This one is a bit more complicated and controversial. Not all comments are created equal. Some comments are helpful and need to be included; other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, and comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written; if they are needed, you need to consider reworking the SQL or simplifying your data model.

    Use of functions in a WHERE clause. (Anil Das) Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered, which will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance that it will. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
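    To make that last remedy concrete, here is a minimal T-SQL sketch (not from the article; the table and column names are hypothetical) using a persisted computed column so the function is evaluated once per row instead of once per row per query:

      -- Before: the function hides OrderDate from any index, forcing a scan
      SELECT OrderId FROM dbo.[Order] WHERE YEAR(OrderDate) = 2010;

      -- After: persist the derived value and index it
      ALTER TABLE dbo.[Order] ADD OrderYear AS YEAR(OrderDate) PERSISTED;
      CREATE INDEX IX_Order_OrderYear ON dbo.[Order] (OrderYear);
      SELECT OrderId FROM dbo.[Order] WHERE OrderYear = 2010;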

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers and photographers worldwide. Photoshop CS5 has made photo editing much more refined, and the composition process has become easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5, we have written this comparison article from both a designer's and a photographer's perspective. Hopefully it will help you in your buying/upgrade decision.

    Photoshop CS5
    Photoshop CS5 refines powerful photography tools. It makes the editing process easier, as fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It has quick image correction and color and tone control for professional use. Intelligent image editing and enhancement and extraordinarily advanced compositing make it a better tool for photographers than earlier versions. It lets users accelerate their workflow with fast performance on 64-bit Windows® and Mac hardware and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art processing with Adobe Photoshop Camera Raw 6 to help maximize creative impact. It provides tremendous precision and freedom: users can easily select intricate image elements, such as hair, create realistic painting effects, or remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow, a flexible work environment, and creative tools and content.

    Photoshop CS5 Extended
    Photoshop CS5 Extended is quite innovative: it incorporates 3D elements into 2D artwork directly within the digital imaging application, giving users an easy on-ramp to 3D image creation, and it also provides 3D editing. It has intelligent image editing and enhancement, offers advanced compositing, and has an extraordinary painting and drawing toolset. It provides video and animation design tools and helps with specialized images for architecture, manufacturing, engineering, science, and medicine.

    Where CS5 Extended scores over CS5
    CS5 Extended has many features that were not included in CS5:
    Technology for creating 3D extrusions
    3D material library and picker
    Depth of field for 3D
    3D merging and scene composition improvements
    3D workflow improvements
    Customization of 3D features
    Image-based light sources
    Shadow catcher for shadow creation
    Enhanced ray tracer
    Context-sensitive widgets that allow easy control of objects, lights, and cameras
    Overlays for materials and mesh boundaries
    Photoshop CS5 Extended incorporates all the features of CS5 plus these more advanced ones: it allows 3D creation and editing and has other advanced tools that make it the more capable product.

    Redefining the Image-Editing Experience: A Photographer's Point of View
    Photoshop CS5 delivers amazing features and creative options so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings, and seamlessly fills in the space left behind; lighting, tone, and noise of the surrounding area can be matched. The new Refine Edge makes nearly impossible image selections possible. Masking was never easier: the toughest types of edges, such as hair and foliage, seem easier to fix.

    To sum up, the following are a few advantages of CS5 Extended over previous versions:
    64-bit processing
    Content-Aware Fill
    Refine Edge, which makes nearly impossible image selections possible
    HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure
    New brush options
    Improved image management with enhanced Adobe Bridge
    Lens corrections
    Improved black-and-white conversions
    Puppet Warp: precisely reposition or warp any image element
    Adobe Camera Raw 6

    Pricing and Availability
    Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. The estimated street price for Adobe Photoshop CS5 is US$699 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available.

    Read the article

  • Get the Picture: Pinterest for Marketers

    - by Mike Stiles
    When trying to determine on which networks to conduct social marketing, the usual suspects immediately rise to the top: Facebook & Twitter, then LinkedIn (especially if you're B2B), then maybe some Google Plus to hedge SEO bets. So at what juncture do brands get excited about Pinterest? Pinterest has been easy for marketers to de-prioritize thanks to the perception its usage is so dominated by women. Um, what's wrong with that? Women make an estimated 85% of all consumer purchases. So if there are indeed over 30 million US women active on it monthly, and they do 92% of the pinning, and 84% are still active on it after 4 years, when did an audience of highly engaged, very likely converting buyers become low priority? Okay, if you're a tech B2B SaaS product like the Oracle Social Cloud, Pinterest may not be where you focus. But if you operate in the top Pinterest categories, which are truly far-reaching, it's time to take note of Pinterest's performance to date:
    40.1 million monthly users in the US. (eMarketer)
    Over 30 billion pins, half of which were pinned in the last 6 months. (Big momentum)
    75% of usage is on their mobile app. (In solid shape for the mobile migration)
    Pinterest sharing grew 58% in 2013, beating Facebook, Twitter, and LinkedIn. (ShareThis)
    Pinterest is the 3rd most popular sharing platform overall (ahead of email), with 48% of all sharing on tablets.
    Users referred by Pinterest are 10% more likely to buy on e-commerce sites and tend to spend twice as much as users coming from Facebook. (Shopify)
    To be fair, brands haven't had any paid marketing opportunities on that platform…until recently. Users are seeing Promoted Pins in both category and search feeds from rollout brands like Gap, ABC Family, Ziploc, and Nestle. Are the paid pins annoying users? It seems that, more so than on other social networks, they're fitting right into the intended user experience and being accepted, getting almost as many click-throughs as user pins. New York Magazine's Kevin Roose laid it out succinctly: Pinterest offers a place that's image-centric, search-friendly, makes things easy to purchase, makes things easy to share, and puts users in an aspirational mood to buy. Pinterest is very confident in the value of that combo and that audience, with CPM rates 5x that of the most expensive Facebook ad, plus (at least for now) required spending commitments and required pin review by Pinterest for quality. The latest developments: a continued move toward search and discovery with enhancements like Guided Search to help you hone in on what interests you, Custom Categories, and the rumored Visual Search that stands to be a liberation from text. And most recently, Pinterest has opened up its API so brands can get access to deeper insights into the best search terms and categories in which to play ball, as well as what kinds of pins stand to perform best in those areas. As we learned in our rundown this week of Social Media Examiner's Social Media Marketing Industry Report, around 50% of marketers specifically intend on upping their use of Pinterest. If you're a big believer in fishing where the fish are, that's probably an efficient position to take. @mikestiles @oraclesocial Photo: Adam Lambert_Gorwyn, freeimages.com

    Read the article

  • How to Global onRouteRequest directly to onBadRequest?

    - by virtualeyes
    EDIT: Came up with this to sanitize URI date params prior to passing off to the Play router:

      val ymdMatcher = "\\d{8}".r // matcher for yyyyMMdd URI param
      val ymdFormat = org.joda.time.format.DateTimeFormat.forPattern("yyyyMMdd")
      def ymd2Date(ymd: String) = ymdFormat.parseDateTime(ymd)

      override def onRouteRequest(r: RequestHeader): Option[Handler] = {
        import play.api.i18n.Messages
        ymdMatcher.findFirstIn(r.uri) map { ymd =>
          try { ymd2Date(ymd); super.onRouteRequest(r) }
          catch { // kick to "bad" action handler on invalid date
            case e: Exception => Some(controllers.Application.bad(Messages("bad.date.format")))
          }
        } getOrElse (super.onRouteRequest(r))
      }

    ORIGINAL: Let's say I want to return a BadRequest result type for all /foo URIs:

      override def onBadRequest(r: RequestHeader, error: String) = {
        BadRequest("Bad Request: " + error)
      }

      override def onRouteRequest(r: RequestHeader): Option[Handler] = {
        if (r.uri.startsWith("/foo")) onBadRequest("go away")
        else super.onRouteRequest(r)
      }

    Of course this does not work, since the expected return type is Option[play.api.mvc.Handler]. What's the idiomatic way to deal with this? Create a default Application controller method to handle filtered bad requests? Ideally, since I know in onRouteRequest that /foo is in fact a BadRequest, I'd like to call onBadRequest directly. Should note that this is a contrived example; am actually verifying a URI yyyyMMdd date param and BadRequest-ing if it does not parse to a JodaTime instance, basically a catch-all filter to sanitize a given date param rather than handling it on every single controller method call, not to mention avoiding cluttering up the application log with useless stack traces re: invalid date parse conversions (have several MBs of these junk trace entries accruing daily due to users pointlessly manipulating the URI date in attempts to get at paid subscriber content).

    Read the article

  • XML Schema Migration

    - by Corwin Joy
    I am working on a project where we need to save data in an XML format. The problem is that, over time, we expect the format/schema for our data to change, and we want to be able to produce scripts to migrate our data across different schema versions. We distribute our product to thousands of customers, so we need to be able to run/apply these scripts at customer sites (we can't just do the conversions by hand). I think what we are looking for is some kind of XML data migration tool. In my mind the ideal tool could: Do an "XML diff" of two schemas to identify added/deleted/changed nodes. Allow us to specify transformation functions, so that, for example, we might add a new element to our schema that is a function of the old elements (e.g. a new element C where C = A + B, and A and B are old elements). So I think I am looking for a kind of XML diff and patch tool which can also apply transformation functions. One tool I am looking at for this is Altova's MapForce. I'm sure others here have had to deal with XML data format migration. How did you handle it? Edit: One point of clarification. The "diff" I plan to do is on the schema or .xsd files. The actual changes will be made to particular data sets that follow a given schema; these data sets will be .xml files. So it's a "diff" of the schemas to help figure out what changes need to be made to data sets to migrate them from one schema to another.
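    For what it's worth, the kind of transformation function described above is a natural fit for XSLT. A minimal sketch (not from the original question; the record and element names are hypothetical) that copies a data set through unchanged and adds a derived element C = A + B to each record:

      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <!-- identity template: copy everything through unchanged by default -->
        <xsl:template match="@*|node()">
          <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
        </xsl:template>
        <!-- migration rule: append the new derived element to each record -->
        <xsl:template match="record">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
            <C><xsl:value-of select="A + B"/></C>
          </xsl:copy>
        </xsl:template>
      </xsl:stylesheet>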

    Read the article

  • Possible to create an implicit cast for an anonymous type to a dictionary?

    - by Ralph
    I wrote a method like this:

      using AttrDict = System.Collections.Generic.Dictionary<string, object>;
      using IAttrDict = System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string, object>>;

      static string HtmlTag(string tagName, string content = null, IAttrDict attrs = null)
      {
          var sb = new StringBuilder("<");
          sb.Append(tagName);
          if (attrs != null)
              foreach (var attr in attrs)
                  sb.AppendFormat(" {0}=\"{1}\"", attr.Key, attr.Value.ToString().EscapeQuotes());
          if (content != null)
              sb.AppendFormat(">{0}</{1}>", content, tagName);
          else
              sb.Append(" />");
          return sb.ToString();
      }

    Which you can call like:

      HtmlTag("div", "hello world", new AttrDict { { "class", "green" } });

    Not too bad. But what if I wanted to allow users to pass an anonymous type in place of the dict?

      HtmlTag("div", "hello world", new { @class = "green" });

    Even better! I could write the overload easily, but the problem is I'm going to have about 50 functions like this, and I don't want to overload each one of them. I was hoping I could just write an implicit cast to do the work for me...

      public class AttrDict : Dictionary<string, object>
      {
          public static implicit operator AttrDict(object obj)
          {
              // conversion from anonymous type to AttrDict here
          }
      }

    But C# simply won't allow it: user-defined conversions to or from a base class are not allowed. So what can I do?
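    One direction worth sketching (an aside, not from the original post): since the language rules out the implicit conversion, a single reflection-based extension method can play the same role without touching the 50 method signatures. A minimal sketch:

      public static class AttrDictExtensions
      {
          // copy an anonymous object's properties into a string/object dictionary
          public static Dictionary<string, object> ToAttrDict(this object obj)
          {
              var dict = new Dictionary<string, object>();
              foreach (var prop in obj.GetType().GetProperties())
                  dict[prop.Name] = prop.GetValue(obj, null);
              return dict;
          }
      }

      // usage: HtmlTag("div", "hello world", new { @class = "green" }.ToAttrDict());

    The call site grows by one method call, but every HtmlTag-style function keeps its single IAttrDict overload.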

    Read the article

  • using ghostscript in server mode to convert pdfs to pngs

    - by emh
    While I am able to convert a specific page of a PDF to a PNG like so:

      gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -dGraphicsAlphaBits=4 \
         -sOutputFile=gymnastics-20.png -dFirstPage=20 -dLastPage=20 gymnastics.pdf

    I am wondering if I can somehow use Ghostscript's JOBSERVER mode to process several conversions without having to incur the cost of starting up Ghostscript each time. From http://pages.cs.wisc.edu/~ghost/doc/svn/Use.htm:

      -dJOBSERVER
      Define \004 (^D) to start a new encapsulated job used for compatibility with Adobe PS Interpreters that ordinarily run under a job server. The -dNOOUTERSAVE switch is ignored if -dJOBSERVER is specified since job servers always execute the input PostScript under a save level, although the exitserver operator can be used to escape from the encapsulated job and execute as if the -dNOOUTERSAVE was specified. This also requires that the input be from stdin, otherwise an error will result (Error: /invalidrestore in --restore--). Example usage is:

        gs ... -dJOBSERVER - < inputfile.ps
        -or-
        cat inputfile.ps | gs ... -dJOBSERVER -

      Note: The ^D does not result in an end-of-file action on stdin as it may on some PostScript printers that rely on TBCP (Tagged Binary Communication Protocol) to cause an out-of-band ^D to signal EOF in a stream input data. This means that direct file actions on stdin such as flushfile and closefile will affect processing of data beyond the ^D in the stream.

    The idea is to run Ghostscript in-process. The script would receive a request for a particular page of a PDF and would use Ghostscript to generate the specified image. I'd rather not start up a new Ghostscript process every time.
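    A rough sketch of the plumbing (untested, and not from the original post; whether per-job device settings such as the output file can be re-targeted from inside each PostScript job is something to verify against the Ghostscript docs): the man-page usage above generalizes to streaming several ^D-separated jobs through one long-lived process:

      # one interpreter, many jobs: each job ends with a ^D (octal 004)
      { cat job1.ps
        printf '\004'
        cat job2.ps
        printf '\004'
      } | gs -dSAFER -dNOPAUSE -sDEVICE=png16m -dGraphicsAlphaBits=4 \
             -sOutputFile=out-%d.png -dJOBSERVER -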

    Read the article

  • Not able to use 7-Zip to compress stdin and output with stdout?

    - by acidzombie24
    I get the error "Not implemented". I want to compress a file using 7-Zip via stdin then take the data via stdout and do more conversions with my application. In the man page it shows this example: % echo foo | 7z a dummy -tgzip -si -so /dev/null I am using Windows and C#. Results: 7-Zip 4.65 Copyright (c) 1999-2009 Igor Pavlov 2009-02-03 Creating archive StdOut System error: Not implemented Code: public static byte[] a7zipBuf(byte[] b) { string line; var p = new Process(); line = string.Format("a dummy -t7z -si -so "); p.StartInfo.Arguments = line; p.StartInfo.FileName = @"C:\Program Files\7-Zip\7z.exe"; p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; p.StartInfo.CreateNoWindow = true; p.StartInfo.UseShellExecute = false; p.StartInfo.RedirectStandardOutput = true; p.StartInfo.RedirectStandardError = true; p.StartInfo.RedirectStandardInput = true; p.Start(); p.StandardInput.BaseStream.Write(b, 0, b.Length); p.StandardInput.Close(); Console.Write(p.StandardError.ReadToEnd()); //Console.Write(p.StandardOutput.ReadToEnd()); return p.StandardOutput.BaseStream.ReadFully(); } Is there another simple way to read the file into memory? Right now I can 1) write to a temporary file and read (easy and can copy/paste some code) 2) use a file pipe (medium? I have never done it) 3) Something else.

    Read the article

  • Converting a size_t into an integer (c++)

    - by JeanOTF
    Hello, I've been trying to make a for loop that will iterate based off of the length of a network packet. In the API there exists a size_t variable, event.packet->dataLength. I want to iterate from 0 to event.packet->dataLength - 7, increasing i by 10 each time it iterates, but I am having a world of trouble. I looked for solutions but have been unable to find anything useful. I tried converting the size_t to an unsigned int and doing the arithmetic with that, but unfortunately it didn't work. Basically all I want is this:

      for (int i = 0; i < event.packet->dataLength - 7; i += 10)
      {
      }

    Though every time I do something like this or attempt my conversions, the i < # part is a huge number. They gave a printf statement in a tutorial for the API which used "%u" to print the actual number, however when I convert it to an unsigned int it is still incorrect. I'm not sure where to go from here. Any help would be greatly appreciated :)
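    A note on the likely cause, not from the original question: the "huge number" is consistent with unsigned wrap-around. size_t is unsigned, so when dataLength is less than 7 the expression dataLength - 7 wraps to a value near SIZE_MAX instead of going negative. A minimal sketch of the usual fix, using the names from the snippet above, is to do the subtraction in a signed type:

      // compute the bound as a signed value so it cannot wrap below zero
      const int limit = static_cast<int>(event.packet->dataLength) - 7;
      for (int i = 0; i < limit; i += 10)
      {
          // ...
      }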

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior? Original code:

      for (int i = 0; i < ct; ++i)
      {
          // do some stuff...
          int iFreq = getFreq(i);
          double dFreq = iFreq;
          if (iFreq != 0)
          {
              // do some stuff with iFreq...
              // do some calculations with dFreq...
          }
      }

    While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or if any at all, a negligible improvement. However, the performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int to double conversion would only occur when iFreq != 0. Changed code:

      for (int i = 0; i < ct; ++i)
      {
          // do some stuff...
          int iFreq = getFreq(i);
          if (iFreq != 0)
          {
              // do some stuff with iFreq...
              double dFreq = iFreq;
              // do some stuff with dFreq...
          }
      }

    Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • Cucumber could not find table; but it's there. What is going on?

    - by JZ
    I'm working with cucumber and I'm running into difficulties. When I run "cucumber features", I am met with errors: cucumber is unable to find my requests table. What obvious mistake am I making? Thank you in advance! Bash:

      justin-zollarss-mac-pro:conversion justinz$ cucumber features
      Using the default profile...
      /Users/justinz/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
      F--

      (::) failed steps (::)

      Could not find table 'requests' (ActiveRecord::StatementInvalid)
      ./features/article_steps.rb:3
      ./features/article_steps.rb:2:in `each'
      ./features/article_steps.rb:2:in `/^I have requests named (.+)$/'
      features/manage_articles.feature:7:in `Given I have requests named Foo, Bar'

      Failing Scenarios:
      cucumber features/manage_articles.feature:6 # Scenario: Conversion

      1 scenario (1 failed)
      3 steps (1 failed, 2 skipped)
      0m0.154s
      justin-zollarss-mac-pro:conversion justinz$

    Manage_articles.feature:

      Feature: Manage Articles
        In order to make sales
        As a customer
        I want to make conversions

        Scenario: Conversion
          Given I have requests named Foo, Bar
          When I go to the list of customers
          Then I should see a new "customer"

    Article_steps.rb:

      Given /^I have requests named (.+)$/ do |firsts|
        firsts.split(', ').each do |first|
          Request.create!(:first => first)
          pending # express the regexp above with the code you wish you had
        end
      end

      Then /^I should see a new "([^"]*)"$/ do |arg1|
        pending # express the regexp above with the code you wish you had
      end

    DB schema:

      ActiveRecord::Schema.define(:version => 20100528011731) do
        create_table "requests", :force => true do |t|
          t.string   "institution"
          t.string   "website"
          t.string   "type"
          t.string   "users"
          t.string   "first"
          t.string   "last"
          t.string   "jobtitle"
          t.string   "phone"
          t.string   "email"
          t.datetime "created_at"
          t.datetime "updated_at"
        end
      end
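    A hedged guess, not part of the original question: since the schema clearly defines the table, this error pattern usually means the test database (which Cucumber runs against) simply hasn't been migrated yet. In a Rails 2.3 project the standard remedy is:

      rake db:test:prepare    # rebuild the test DB from schema.rb, then re-run cucumber features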

    Read the article

  • [C#] Improving method to read signed 8-bit integers from hexadecimal.

    - by JYelton
    Scenario: I have a string of hexadecimal characters which encode 8-bit signed integers. Each two characters represent a byte which employs the leftmost (MSB) bit as the sign (rather than two's complement). I am converting these to signed ints within a loop and wondered if there's a better way to do it. There are too many conversions, and I am sure there's a more efficient method that I am missing. Current code:

      string strData = "FFC000407F"; // example input data, encodes: -127, -64, 0, 64, 127
      int v;
      for (int x = 0; x < strData.Length / 2; x++)
      {
          v = HexToInt(strData.Substring(x * 2, 2));
          Console.WriteLine(v); // do stuff with v
      }

      private int HexToInt(string _hexData)
      {
          string strBinary = Convert.ToString(Convert.ToInt32(_hexData, 16), 2)
                                    .PadLeft(_hexData.Length * 4, '0');
          int i = Convert.ToInt32(strBinary.Substring(1, 7), 2);
          i = (strBinary.Substring(0, 1) == "0" ? i : -i);
          return i;
      }

    Question: Is there a more streamlined and direct approach to reading two hex characters and converting them to an int when they represent a signed int (-127 to 127) using the leftmost bit as the sign?
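    A sketch of a more direct route (not from the article): skip the binary-string round trip entirely and work on the bits of the parsed byte, masking off the sign bit and negating when it is set:

      int b = Convert.ToInt32(strData.Substring(x * 2, 2), 16); // 0..255
      int v = (b & 0x80) != 0 ? -(b & 0x7F) : b;                // sign-magnitude decode

    For "FFC000407F" this yields -127, -64, 0, 64, 127, matching the loop above without any intermediate strings.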

    Read the article

  • JS encodeURIComponent result different from the one created by FORM

    - by Marco Demaio
    I thought values entered in forms were properly encoded by browsers, but this simple test shows it's not true:

      <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
      <html><head>
      <meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
      <title></title>
      </head><body>
      <form id="test" action="test_get_vs_encodeuri.html" method="GET"
            onsubmit="alert(encodeURIComponent(this.one.value));">
        <input name="one" type="text" value="Euro-€">
        <input type="submit" value="SUBMIT">
      </form>
      </body></html>

    When hitting the submit button, encodeURIComponent encodes the input value into "Euro-%E2%82%AC", while the browser writes only a simple "Euro-%80" into the GET query. Could someone explain? Or is encodeURIComponent doing unnecessary conversions?
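    A note on the mechanics, not from the original question: both results are internally consistent. The browser encodes submitted form values using the page's character set, and in windows-1252 the euro sign is the single byte 0x80, hence Euro-%80. encodeURIComponent, by specification, always percent-encodes the UTF-8 bytes of the string, hence Euro-%E2%82%AC. The two agree once the form is served (or explicitly submitted) as UTF-8, for example:

      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      <!-- or, per form: -->
      <form id="test" action="test_get_vs_encodeuri.html" method="GET" accept-charset="UTF-8">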

    Read the article

  • How to get rid of void-pointers.

    - by Patrick
    I inherited a big application that was originally written in C (but in the meantime a lot of C++ was also added to it). Because of historical reasons, the application contains a lot of void-pointers. Before you start to choke, let me explain why this was done. The application contains many different data structures, but they are stored in 'generic' containers. Nowadays I would use templated STL containers for it, or I would give all data structures a common base class so that the container could store pointers to the base class, but in the [good?] old C days, the only solution was to cast the struct-pointer to a void-pointer. Additionally, there is a lot of code that works on these void-pointers and uses very strange C constructions to emulate polymorphism in C. I am now reworking the application and trying to get rid of the void-pointers. Adding a common base class to all the data structures isn't that hard (a few days of work), but the problem is that the code is full of constructions like shown below. This is an example of how data is stored:

      void storeData(int datatype, void *data); // function prototype
      ...
      Customer *myCustomer = ...;
      storeData(TYPE_CUSTOMER, myCustomer);

    This is an example of how data is fetched again:

      Customer *myCustomer = (Customer *) fetchData(TYPE_CUSTOMER, key);

    I actually want to replace all the void-pointers with some smart pointer (reference-counted), but I can't find a trick to automate (or at least help with) getting rid of all the casts to and from void-pointers. Any tips on how to find, replace, or interact in any possible way with these conversions?
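    One transitional trick worth sketching (not from the original post): funnel every cast through a pair of type-safe template wrappers, so the void-pointer conversions live in exactly one place while the underlying C API stays untouched. The compiler then checks the pointer type at every call site, and the wrappers become the single choke point where a reference-counted smart pointer could later be introduced:

      // hypothetical wrappers over the existing C-style API
      template <typename T>
      void StoreData(int datatype, T *data)
      {
          storeData(datatype, static_cast<void *>(data));
      }

      template <typename T>
      T *FetchData(int datatype, char *key)
      {
          return static_cast<T *>(fetchData(datatype, key));
      }

      // usage:
      StoreData(TYPE_CUSTOMER, myCustomer);
      Customer *c = FetchData<Customer>(TYPE_CUSTOMER, key);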

    Read the article

  • Enumerating a string

    - by JamesB
    I have a status which is stored as a string of a set length, either in a file or a database, and I'm looking to enumerate the possible statuses. I have the following type to define them:

      type
        TStatus = (fsNormal = Ord('N'), fsEditedOnScreen = Ord('O'),
                   fsMissing = Ord('M'), fsEstimated = Ord('E'),
                   fsSuspect = Ord('s'), fsSuspectFromOnScreen = Ord('o'),
                   fsSuspectMissing = Ord('m'), fsSuspectEstimated = Ord('e'));

    Firstly, is this really a good idea? Or should I have a separate const array storing the char conversions? That would mean more than one place to update. Now, to convert a string to a status array I have the following, but how can I check whether a char is valid without looping through the enumeration?

      function StrToStatus(Value: string): TStatusArray;
      var
        i: Integer;
      begin
        if Trim(Value) = '' then
        begin
          SetLength(Result, 0);
          Exit;
        end;
        SetLength(Result, Length(Value));
        for i := 1 to Length(Value) do
        begin
          // dynamic arrays are zero-based, hence i - 1
          Result[i - 1] := TStatus(Value[i]); // I don't think this line is safe.
        end;
      end;

    AFAIK this should be fine for converting back again:

      function StatusToStr(Value: TStatusArray): string;
      var
        i: Integer;
      begin
        Result := '';
        for i := 0 to Length(Value) - 1 do
          Result := Result + Chr(Ord(Value[i]));
      end;

    I'm using Delphi 2007.
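    On the validity question, a minimal sketch (not from the original post; it assumes single-byte Char as in Delphi 2007): a set of the legal characters answers it in one membership test, with no loop over the enumeration:

      const
        ValidStatusChars: set of AnsiChar = ['N', 'O', 'M', 'E', 's', 'o', 'm', 'e'];

      function IsValidStatusChar(C: AnsiChar): Boolean;
      begin
        Result := C in ValidStatusChars;
      end;

    StrToStatus could then raise a conversion error (or skip the character) whenever IsValidStatusChar returns False, instead of blindly casting.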

    Read the article

  • What is the most efficient way to display decoded video frames in Qt?

    - by Jason
    What is the fastest way to display images to a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames. I am currently using a QGraphicsView with a QGraphicsScene object containing a QGraphicsPixmapItem. I am currently getting the frame data into a QPixmap by using the QImage constructor from a memory buffer and converting it to QPixmap using QPixmap::fromImage(). I like the results of this and it seems relatively fast, but I can't help but think that there must be a more efficient way. I've also heard that the QImage to QPixmap conversion is expensive. I have implemented a solution that uses an SDL overlay on a widget, but I'd like to stay with just Qt since I am able to easily capture clicks and other user interaction with the video display using the QGraphicsView. I am doing any required video scaling or colorspace conversions with libswscale so I would just like to know if anyone has a more efficient way to display the image data after all processing has been performed. Thanks.
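    For what it's worth, one copy can be shaved on the QImage side (a sketch, not from the original question; it assumes packed 24-bit RGB frames from libswscale, and the decoder's buffer must stay alive as long as the QImage does):

      // wrap the decoded frame without copying it; frameData is the uchar*
      // from the decoder and bytesPerLine is the AVFrame's linesize[0]
      QImage img(frameData, width, height, bytesPerLine, QImage::Format_RGB888);
      pixmapItem->setPixmap(QPixmap::fromImage(img)); // the fromImage() conversion still copies

    The QPixmap::fromImage() step remains the expensive part, since pixmaps are held in the window system's native representation.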

    Read the article

  • Error while compiling Hello world program for CUDA

    - by footy
    I am using Ubuntu 12.10 and have successfully installed CUDA 5.0 and its sample kits too. I have also run sudo apt-get install nvidia-cuda-toolkit. Below is my hello world program for CUDA:

      #include <stdio.h>  /* Core input/output operations */
      #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */
      #include <math.h>   /* Common mathematical functions */
      #include <time.h>   /* Converting between various date/time formats */
      #include <cuda.h>   /* CUDA related stuff */

      __global__ void kernel(void)
      {
      }

      /* MAIN PROGRAM BEGINS */
      int main(void)
      {
          /* Dg = 1; Db = 1; Ns = 0; S = 0 */
          kernel<<<1,1>>>();

          /* PRINT 'HELLO, WORLD!' TO THE SCREEN */
          printf("\n Hello, World!\n\n");

          /* INDICATE THE TERMINATION OF THE PROGRAM */
          return 0;
      }
      /* MAIN PROGRAM ENDS */

    The following error occurs when I compile it with nvcc -g hello_world_cuda.cu -o hello_world_cuda.x:

      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `main':
      /home/adarshakb/Documents/hello_world_cuda.cu:16: undefined reference to `cudaConfigureCall'
      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__cudaUnregisterBinaryUtil':
      /usr/include/crt/host_runtime.h:172: undefined reference to `__cudaUnregisterFatBinary'
      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__sti____cudaRegisterAll_51_tmpxft_000033f1_00000000_4_hello_world_cuda_cpp1_ii_b81a68a1':
      /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFatBinary'
      /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFunction'
      /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `cudaError cudaLaunch<char>(char*)':
      /usr/lib/nvidia-cuda-toolkit/include/cuda_runtime.h:958: undefined reference to `cudaLaunch'
      collect2: ld returned 1 exit status

    I am also making sure that I use gcc and g++ version 4.4 (as with 4.7 there is some problem with CUDA).

    Read the article

  • How to make UISlider output nice rounded numbers exponentially?

    - by RickiG
    Hi, I am implementing a UISlider a user can manipulate to set a distance. I have never used the CocoaTouch UISlider, but in other frameworks' sliders there is usually a variable for setting the "step" and other "helper" properties. The documentation for UISlider deals only with a max and min value, and the output is always a six-decimal float with a linear relation to the position of the slider knob. I guess I will have to implement the desired functionality step by step. To the user, the min/max values range from 10 m to 999 km, and I am trying to implement this in an exponential way that will feel natural, i.e. the user experiences a feeling of control over the values, big or small. Also, the "output" should have reasonable values, like 10 m, 200 m, 2.5 km, 150 km, etc., instead of 1.2342356 m or 108.93837756 km. I would like the step size to increase by 10 m for the first 200 m, then maybe by 50 m up to 500 m; then, when passing the 1000 m value, it starts to deal with kilometers, so the step size = 1 km up until 50 km, then maybe 25 km steps, etc. Any way I go about this I end up doing a lot of rounding and a lot of calculations wrapped in a forest of if statements and NSString/NSNumber conversions, each time the user moves the slider just a little. I was hoping someone could lend me a bit of inspiration/math help or make me aware of a leaner approach to solving this problem. My last idea is to populate an array with 100 string values and have the slider's int value correspond to a string; this is not very flexible, but doable. Thank you in advance for any help given :)
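    A bit of math that collapses the forest of if statements (a sketch, not from the original question; the snapping thresholds are arbitrary and worth tuning): map the slider's 0–1 value exponentially between the two endpoints, then snap the mantissa to 1, 2.5, or 5 times a power of ten, which produces values like 10 m, 250 m, 2.5 km, 50 km:

      #include <math.h>

      /* slider value t in [0,1] -> "nice" distance in meters, 10 m .. ~999 km */
      double sliderToMeters(double t)
      {
          const double minM = 10.0, maxM = 999000.0;
          double raw = minM * pow(maxM / minM, t);   /* exponential interpolation */

          double mag = pow(10.0, floor(log10(raw))); /* power of ten below raw */
          double m = raw / mag;                      /* mantissa in [1,10) */
          double nice = (m < 1.75) ? 1.0
                      : (m < 3.75) ? 2.5
                      : (m < 7.5)  ? 5.0
                      : 10.0;
          return nice * mag;
      }

    Formatting then reduces to a single check: print the value in meters below 1000 and in kilometers above.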

    Read the article

  • ODBC and NLS_LANG

    - by Michael S.
    Let's say that I've created two different program executables, e.g. in C++. For some reason, the two programs' internal representations of text are different from each other. Let's say the first program is using text representation A and the other text representation B. Each could be a specific 8-bit ANSI codepage, Unicode/UTF-8, Unicode/UTF-16, or whatever. Now each program wants to communicate text (add/retrieve data) to/from the same database table on a (database) server. Each program communicates with the database through ODBC, so the programs do not know what database system they are communicating with. In this specific case the database is actually an Oracle RDBMS, and the database server administrator has set up the database to use UTF-8. On the system on which the programs are running, an appropriate ODBC driver is available so that the programs can connect through ODBC. Each program will treat and convert from the ODBC data type SQL_C_CHAR to its internal text representation appropriately. I assume that the programs cannot do anything other than assume a specific encoding returned for SQL_C_CHAR text; if not, the programs have to be told which encoding that is. For Oracle, I know that the NLS_LANG environment variable can be used on the client. I assume it affects the ODBC driver (related to SQL_C_CHAR) to convert from a specific encoding (as given by NLS_LANG) to the internal encoding of the database (in this example UTF-8) and vice versa. If the machine running my programs has an NLS_LANG setting, it will affect the byte sequences returned for SQL_C_CHAR, so my programs cannot suddenly assume a specific encoding for the text returned via SQL_C_CHAR. Is it possible to set up the ODBC connection (preferably programmatically at runtime) so that it takes care of text conversions appropriately for the two programs, i.e. from/to representation A to/from UTF-8 and from/to representation B to/from UTF-8? Regards, /Michael PS. As the programs are connecting through ODBC, I don't think it would be nice for them to know anything about NLS_LANG, as this is an Oracle-specific environment variable.
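    One avenue worth checking (not from the original post): binding text as SQL_C_WCHAR instead of SQL_C_CHAR sidesteps the client-charset question, since the driver then delivers wide (UTF-16) data regardless of the NLS_LANG character set, and each program can convert UTF-16 to its own internal representation A or B. A minimal sketch:

      SQLWCHAR name[256];
      SQLLEN ind = 0;
      /* request Unicode from the driver, independent of the client codepage */
      SQLBindCol(hstmt, 1, SQL_C_WCHAR, name, sizeof(name), &ind);

    The same applies to parameters via SQLBindParameter. Whether a particular Oracle ODBC driver honors this fully is worth verifying, but it is the ODBC-idiomatic way to keep encodings out of the connection setup.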

    Read the article

  • Programmatically choosing image conversion format to JPEG or PNG for Silverlight display

    - by Otaku
    I have a project where I need to convert a large number of image types to be displayable in a Silverlight app: TIFF, GIF, WMF, EMF, BMP, DIB, etc. I can do these conversions on the server before hydrating the Silverlight app. However, I'm not sure when I should choose to convert to which format, JPEG or PNG. Is there some kind of standard out there, like "TIFF should always become a JPEG and GIF should always become a PNG"? Or "if a BMP is 24-bit, it should be converted to a JPEG; any lower and it can be a PNG"? Or should everything be a PNG, and why? What I usually see, or see in response to this type of question, is "Well, if the picture is a photograph, go with JPEG" or "If it has straight lines, PNG is better." Unfortunately, I won't have the luxury of viewing any of the image files at all and would like a standard way to do this via code, even if that is a zillion if/then statements. Are there any standards or best practices around this subject? P.S. Please don't move to close this subject - it actually has no duplicate on SO because I'm not looking for subjectivity.

    Read the article

  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE 754, Microsoft defined a bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the two bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment. Changing the structure of the file is out of scope for this project. Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and C++ code by using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit manipulation algorithm if necessary. Thanks.
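    Assuming the format in question is the old Microsoft Binary Format (MBF), which matches the description but is worth confirming against the file: for the 4-byte single, the exponent byte comes last (bias 0x80), the preceding byte holds the sign bit plus the top 7 mantissa bits, and subtracting 2 from the exponent accounts for both the bias difference and the implicit-bit convention. The widely circulated fmsbintoieee transform then ports to C# in a few lines:

      // sketch: 4-byte MBF single -> IEEE 754 float
      // assumed layout: [mantissa lo][mantissa mid][sign + mantissa hi][exponent]
      static float MbfToIeee(byte[] mbf)
      {
          if (mbf[3] == 0) return 0.0f;           // MBF encodes zero with a zero exponent byte
          var ieee = new byte[4];
          byte sign = (byte)(mbf[2] & 0x80);
          byte exp = (byte)(mbf[3] - 2);          // rebias 0x80 -> 0x7F, shift for the implicit bit
          ieee[3] = (byte)(sign | (exp >> 1));
          ieee[2] = (byte)((exp << 7) | (mbf[2] & 0x7F));
          ieee[1] = mbf[1];
          ieee[0] = mbf[0];
          return BitConverter.ToSingle(ieee, 0);  // assumes a little-endian host
      }

    The reverse direction shuffles the same bits back (adding 2 to the exponent); IEEE values whose exponent falls outside MBF's range simply cannot be represented and need an overflow policy.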

    Read the article

  • linearRGB conversion to/from HSL

    - by Otaku
    Does anyone know of a way to get HSL from a linear RGB color (not an sRGB color)? I've seen a lot of sRGB<->HSL conversions, but nothing for linearRGB<->HSL. Not sure if it is fundamentally the same conversion with minor tweaks, but I'd appreciate any insight someone may have on this. Linear RGB is not the same as linearizing sRGB (which is taking [0,255] and making it [0,1]). The linear RGB transformation from/to sRGB is at http://en.wikipedia.org/wiki/SRGB. In VBA, this would be expressed (taking in linearized sRGB values [0,1]):

      Public Function sRGB_to_linearRGB(value As Double)
          If value < 0# Then
              sRGB_to_linearRGB = 0#
              Exit Function
          End If
          If value <= 0.04045 Then
              sRGB_to_linearRGB = value / 12.92
              Exit Function
          End If
          If value <= 1# Then
              sRGB_to_linearRGB = ((value + 0.055) / 1.055) ^ 2.4
              Exit Function
          End If
          sRGB_to_linearRGB = 1#
      End Function

      Public Function linearRGB_to_sRGB(value As Double)
          If value < 0# Then
              linearRGB_to_sRGB = 0#
              Exit Function
          End If
          If value <= 0.0031308 Then
              linearRGB_to_sRGB = value * 12.92
              Exit Function
          End If
          If value < 1# Then
              linearRGB_to_sRGB = 1.055 * (value ^ (1# / 2.4)) - 0.055
              Exit Function
          End If
          linearRGB_to_sRGB = 1#
      End Function

    I have tried sending linear RGB values into standard RGB_to_HSL routines and back out through HSL_to_RGB, but it does not work. I have seen almost no references that this can be done, except for two: a reference on http://en.wikipedia.org/wiki/HSL_and_HSV#cite_note-9 (numbered item 10), and a reference on the open source project Grafx2 at http://code.google.com/p/grafx2/issues/detail?id=63#c22, in which the contributor states that he has done linear RGB <-> HSL conversion and provides some C code in an attachment to his comment in a .diff file (which I can't really read :( ). My intent is to go from sRGB (for example, FF99FF (R=255, G=153, B=255)) to linear RGB (R=1.0, G=0.318546778125092, B=1.0) using the code above (for example, the G=153 would be obtained in linear RGB from sRGB_to_linearRGB(153 / 255)), then to HSL, modify the saturation by 350%, and go back from HSL to linear RGB to sRGB, so that the result would be FF19FF (R=255, G=25, B=255). Using available functions from .NET, such as .getHue on a System.Drawing.Color, does not work in any sRGB space above 100% modulation of any HSL value, hence the need for linear RGB to be sent in instead of sRGB.

    Read the article

  • SSIS - How do I use a resultset as input in a SQL task and get data types right?

    - by thursdaysgeek
    I am trying to merge records from an Oracle database table into my local SQL table. I have a package variable that is an Object, called OWell. I have a data flow task that gets the Oracle data with a SQL statement (select well_id, well_name from OWell order by well_id), then a conversion task to convert well_id from a DT_STR of length 15 to a DT_WSTR, and well_name from a DT_STR of length 15 to a DT_WSTR of length 50; the result is stored in the recordset OWell. The reason for the conversions is that the table I want to add records to has an identity field, and SSIS shows well_id as a DT_WSTR of length 15 and well_name as a DT_WSTR of length 50. I then have a SQL task that connects to the local database and attempts to add records that are not there yet. I've tried various things, such as using OWell as a result set and referring to it in my SQL statement. Currently, I have the ResultSet set to None and the following SQL statement:

      Insert into WELL (WELL_ID, WELL_NAME)
      Select OWELL_ID, OWELL_NAME from OWell
      where OWELL_ID not in (select WELL.WELL_ID from WELL)

    For Parameter Mapping, I have Parameter 0, called OWell_ID, from my variable User::OWell, and Parameter 1, called OWell_Name, from the same variable. Both are set to VARCHAR, although I've also tried NVARCHAR. I do not have a Result set. I am getting the following error:

      Error: 0xC002F210 at Insert records to FLEDG, Execute SQL Task: Executing the query "Insert into WELL (WELL_ID, WELL_NAME) Select OWELL..." failed with the following error: "An error occurred while extracting the result into a variable of type (DBTYPE_STR)". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    I don't think it's a data type issue, but rather that I am somehow not using the result set properly. How, exactly, am I supposed to refer to that recordset in my SQL task, so that I can use the two recordset fields and add records that are missing?

    Read the article

  • [php] Cookies only changing value every two page refreshes?

    - by Gazillion
    Hello, I'm trying to implement some pixel tracking where I will save certain values in a cookie and then forward users to another page. If users purchase a product after being forwarded to the online store by us, the store adds an image tag to the page with our PHP script included; with the values set in the cookie, we would like to track conversions. I understand this tracking technique has some limitations (like if a user has cookies turned off, or if they do not load images), but that's the direction my client wanted to go in. The problem I'm having is that the cookie's behaviour is extremely... random. I've been trying to track the values (with a var_dump so I don't have to wait for a page reload to view the cookie's value), but it seems the value for one field only gets refreshed every two page reloads:

      setcookie("tracking[cn]", $cn, time() + 3600 * 24 * 7, '/', 'mydomain.com');
      setcookie("tracking[t]", $t, time() + 3600 * 24 * 7, '/', 'mydomain.com');
      setcookie("tracking[kid]", $kid, time() + 3600 * 24 * 7, '/', 'mydomain.com');
      redirectTo($redirect_url);

    The values of cn and t are fine, but for some reason kid is always wrong (having taken the value of the previous kid). Any help would be extremely appreciated; I've been at this all evening! :)
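    A hedged guess at the mechanics, not from the original post: setcookie() only queues a Set-Cookie header on the outgoing response, while $_COOKIE holds whatever the browser sent with the current request, so any value inspected via var_dump right after setting it will lag one request behind; with a redirect in between, that looks exactly like "refreshed every two page reloads". If the rest of the script needs to read the fresh value, mirroring it into the superglobal is a cheap workaround:

      setcookie("tracking[kid]", $kid, time() + 3600 * 24 * 7, '/', 'mydomain.com');
      $_COOKIE['tracking']['kid'] = $kid; // make the new value visible to THIS request as well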

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14  | Next Page >