Search Results

Search found 10947 results on 438 pages for 'product comparison'.

Page 287/438 | < Previous Page | 283 284 285 286 287 288 289 290 291 292 293 294  | Next Page >

  • cast operator to base class within a thin wrapper derived class

    - by miked
    I have a derived class that's a very thin wrapper around a base class. Basically, I have a class that can be compared in two ways depending on how you interpret it, so I created a new class that derives from the base class and only has new constructors (which just delegate to the base class) and a new operator==. What I'd like to do is overload operator Base&() in the Derived class for the cases where I need to interpret it as the Base. For example:

        class Base {
            Base(stuff);
            Base(const Base& that);
            bool operator==(Base& rhs); // typical equality test
        };

        class Derived : public Base {
            Derived(stuff) : Base(stuff) {};
            Derived(const Base& that) : Base(that) {};
            Derived(const Derived& that) : Base(that) {};
            bool operator==(Derived& rhs); // special-case equality test
            operator Base&() {
                return (Base&)*this; // Is this OK? It seems wrong to me.
            }
        };

    If you want a simple example of what I'm trying to do, pretend I had a String class where String==String is the typical character-by-character comparison, but I created a new class CaseInsensitiveString that does a case-insensitive compare on CaseInsensitiveString==CaseInsensitiveString and in all other cases just behaves like a String. It doesn't even have any new data members, just an overloaded operator==. (Please don't tell me to use std::string; this is just an example!) Am I going about this right? Something seems fishy, but I can't put my finger on it.
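
    For what it's worth, here is a minimal compilable sketch of that String example (all names invented for illustration). Note that it omits the conversion operator entirely: C++ never uses a user-defined conversion function to convert to a base class, because derived-to-base binding is already built into the language, so the operator Base&() above is effectively dead code and a plain cast at the call site already selects the base semantics.

        #include <algorithm>
        #include <cassert>
        #include <cctype>
        #include <string>
        #include <utility>

        class String {
        public:
            explicit String(std::string s) : data_(std::move(s)) {}
            bool operator==(const String& rhs) const { return data_ == rhs.data_; }
        protected:
            std::string data_;
        };

        class CaseInsensitiveString : public String {
        public:
            using String::String;  // inherit the constructors (C++11)
            bool operator==(const CaseInsensitiveString& rhs) const {
                return std::equal(data_.begin(), data_.end(),
                                  rhs.data_.begin(), rhs.data_.end(),
                                  [](unsigned char a, unsigned char b) {
                                      return std::tolower(a) == std::tolower(b);
                                  });
            }
        };

        int main() {
            CaseInsensitiveString a("Foo"), b("FOO");
            assert(a == b);                           // case-insensitive overload
            assert(!(static_cast<String&>(a) == b));  // base overload: exact compare
        }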

    Read the article

  • How can I compare market data feed sources for quality and latency improvement?

    - by yves Baumes
    I am in the very first stages of implementing a tool to compare 2 market data feed sources, in order to prove to my boss the quality of newly developed sources (meaning there are no regressions and no missed or wrong updates) and to prove latency improvements. So the tool I need must be able to check differences between updates as well as to tell which source is the best (in terms of latency). Concretely, the reference source could be Reuters, while the other one is a feed handler we develop internally. People warned me that updates might not arrive in the same order, as the Reuters implementation could differ totally from ours. Therefore a simple algorithm based on the assumption that updates arrive in the same order is unlikely to work. My very first idea was to use fingerprinting to compare the feed sources, the way the Shazam application identifies the title of a tune you submit; Google tells me it is based on FFT. And I was wondering whether signal processing theory could work well with market access applications. I wanted to know your own experience in this field: is it possible to develop a reasonably accurate algorithm to meet these needs? What would your own idea be? What do you think about fingerprint-based comparison?
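
    By way of illustration, here is a rough order-insensitive sketch in plain Python (every name is invented for the example, not taken from any real feed-handler API): fingerprint each update by its content, match fingerprints across the two feeds, and collect arrival-time deltas plus unmatched counts.

        from collections import defaultdict, deque

        def compare_feeds(reference, candidate):
            """Each feed is an iterable of (arrival_time, symbol, payload) tuples.
            Updates are matched by content fingerprint, so order does not matter."""
            pending = defaultdict(deque)   # fingerprint -> reference arrival times
            latencies, spurious = [], 0
            for t, symbol, payload in reference:
                pending[(symbol, hash(payload))].append(t)
            for t, symbol, payload in candidate:
                key = (symbol, hash(payload))
                if pending[key]:
                    latencies.append(t - pending[key].popleft())
                else:
                    spurious += 1          # update the reference never carried
            dropped = sum(len(q) for q in pending.values())  # reference updates we missed
            return latencies, spurious, dropped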

    Read the article

  • Float addition promoted to double?

    - by Andreas Brinck
    I had a small WTF moment this morning. The WTF can be summarized with this:

        float x = 0.2f;
        float y = 0.1f;
        float z = x + y;
        assert(z == x + y); // This assert is triggered! (At least with Visual Studio 2008)

    The reason seems to be that the expression x + y is promoted to double and compared with the truncated version in z. (If I change z to double, the assert isn't triggered.) I can see that for precision reasons it would make sense to perform all floating-point arithmetic in double precision before converting the result to single precision. I found the following paragraph in the standard (which I guess I sort of already knew, but not in this context): 4.6.1: "An rvalue of type float can be converted to an rvalue of type double. The value is unchanged." My question is: is x + y guaranteed to be promoted to double, or is it at the compiler's discretion? UPDATE: Since many people have claimed that one shouldn't use == for floating point, I just want to state that in the specific case I'm working with, an exact comparison is justified. Floating-point comparison is tricky; here's an interesting link on the subject which I think hasn't been mentioned.
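
    For completeness, a small probe one can run. FLT_EVAL_METHOD is the C99/C++11 name for the implementation's evaluation policy, and casting the expression back to float is the standard-blessed way (per C99, and in practice on MSVC too) to discard any excess precision before comparing:

        #include <cassert>
        #include <cfloat>
        #include <cstdio>

        // FLT_EVAL_METHOD: 0 = evaluate in the type's own precision,
        // 1 = evaluate float/double in double, 2 = evaluate in long double
        // (typical of x87 code generation), -1 = indeterminable.
        int main() {
            std::printf("FLT_EVAL_METHOD = %d\n", static_cast<int>(FLT_EVAL_METHOD));
            float x = 0.2f, y = 0.1f;
            float z = x + y;
            // The cast forces truncation to float, so this comparison holds
            // regardless of the intermediate evaluation policy.
            assert(z == static_cast<float>(x + y));
            return 0;
        }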

    Read the article

  • Python: circular imports needed for type checking

    - by phild
    First of all: I do know that there are already many questions and answers on the topic of circular imports. The answer is more or less: "Design your module/class structure properly and you will not need circular imports." That is true. I tried very hard to make a proper design for my current project, and in my opinion I was successful. But my specific problem is the following: I need a type check in a module that is already imported by the module containing the class to check against. But this throws an import error. Like so:

        # foo.py
        from bar import Bar

        class Foo(object):
            def __init__(self):
                self.__bar = Bar(self)

        # bar.py
        from foo import Foo

        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()

    Solution 1: If I modify it to check the type by string comparison, it works. But I don't really like this solution (string comparison is rather expensive for a simple type check, and could become a problem when it comes to refactoring).

        # bar_modified.py
        from foo import Foo

        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                if not arg_instance_of_foo.__class__.__name__ == "Foo":
                    raise TypeError()

    Solution 2: I could also pack the two classes into one module. But my project has lots of different classes like the "Bar" example, and I want to separate them into different module files. Since my own two solutions are not an option for me: does anyone have a nicer solution for this problem?
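
    One common workaround, sketched below on the bar.py from the question: defer the import until the type check actually runs, so the module-level circular dependency disappears while isinstance() still works.

        # bar.py
        class Bar(object):
            def __init__(self, arg_instance_of_foo):
                # resolved at call time, not at import time; after the first
                # call this is just a cheap lookup in sys.modules
                from foo import Foo
                if not isinstance(arg_instance_of_foo, Foo):
                    raise TypeError()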

    Read the article

  • C++ find largest BST in a binary tree

    - by fonjibe
    What is your approach to finding the largest BST in a binary tree? I refer to this post, where a very good implementation for finding whether a tree is a BST or not is:

        bool isBinarySearchTree(BinaryTree* n,
                                int min = std::numeric_limits<int>::min(),
                                int max = std::numeric_limits<int>::max())
        {
            return !n || (min < n->value && n->value < max
                          && isBinarySearchTree(n->l, min, n->value)
                          && isBinarySearchTree(n->r, n->value, max));
        }

    It is quite easy to implement a solution to find whether a tree contains a binary search tree. I think the following method does it:

        bool includeSomeBST(BinaryTree* n) {
            if (!isBinarySearchTree(n)) {
                if (!isBinarySearchTree(n->left))
                    return isBinarySearchTree(n->right);
                else
                    return true;
            }
            else
                return true;
        }

    But what if I want the largest BST? This is my first idea:

        BinaryTree* largestBST(BinaryTree* n) {
            if (isBinarySearchTree(n))
                return n;
            if (!isBinarySearchTree(n->left)) {
                if (!isBinarySearchTree(n->right))
                    if (includeSomeBST(n->right))
                        return largestBST(n->right);
                    else if (includeSomeBST(n->left))
                        return largestBST(n->left);
                    else
                        return NULL;
                else
                    return n->right;
            }
            else
                return n->left;
        }

    But it's not actually returning the largest. I struggle to make the comparison. How should it take place? Thanks.
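
    For comparison, here is a sketch of the usual bottom-up answer (node fields assumed to be value/l/r as in the first snippet; sentinel-value edge cases glossed over): one post-order pass computes, for each subtree, its min, max, size and BST-ness, and tracks the root of the biggest BST seen so far. That gives O(n) overall instead of re-running the top-down check from every node.

        #include <algorithm>
        #include <climits>

        struct BinaryTree { int value; BinaryTree *l, *r; };

        struct Info { int min, max, size; bool isBst; };

        static Info scan(BinaryTree* n, BinaryTree*& best, int& bestSize) {
            if (!n) { Info e = {INT_MAX, INT_MIN, 0, true}; return e; }
            Info lhs = scan(n->l, best, bestSize);
            Info rhs = scan(n->r, best, bestSize);
            Info me;
            me.isBst = lhs.isBst && rhs.isBst
                       && lhs.max < n->value && n->value < rhs.min;
            me.size  = lhs.size + rhs.size + 1;
            me.min   = std::min(std::min(lhs.min, rhs.min), n->value);
            me.max   = std::max(std::max(lhs.max, rhs.max), n->value);
            if (me.isBst && me.size > bestSize) { best = n; bestSize = me.size; }
            return me;
        }

        BinaryTree* largestBST(BinaryTree* root) {
            BinaryTree* best = 0;
            int bestSize = 0;
            scan(root, best, bestSize);
            return best;  // null only for an empty tree (a leaf is a BST of size 1)
        }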

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I currently support MS SQL, but I'd like to change RDBMS (because I use the Express Editions, and in my scenario they are quite limiting from a performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. The bold sentence above would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add information? And where does MS SQL sit? Is anyone able to "rank" the roundtrip performance of the main RDBMSs, or at least of MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle, since it is not free, and if I have to change I would change to a free RDBMS. In any case MySQL, as mentioned several times on Stack Overflow, has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird.) Additional info: if it helps, you can answer my question with a simple list like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • Oracle Date Format Convert Hour-Minute to Interval and Disregard Year-Month-Day

    - by dlite922
    I need to compare an event's halfway midpoint against a start and stop time of day. Right now I'm converting the dates you see on the right to HH:MM, and the comparison works until midnight. The query says WHERE half BETWEEN pStart AND pStop. As you can see below, pStart and pStop have January 1st 2000 dates; this is because the year, month and day are not important to me. Valid data:

        +-------+--------+-------+---------------------+---------------------+---------------------+
        | half  | pStart | pStop | half2               | pStart2             | pStop2              |
        +-------+--------+-------+---------------------+---------------------+---------------------+
        | 19:00 | 19:00  | 23:00 | 2012-11-04 19:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 20:00 | 19:00  | 23:00 | 2012-11-04 20:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 21:00 | 19:00  | 23:00 | 2012-11-04 21:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 23:00 | 20:00  | 23:00 | 2012-11-05 23:00:00 | 2000-01-01 20:00:00 | 2000-01-01 23:00:00 |
        +-------+--------+-------+---------------------+---------------------+---------------------+

    Now observe what happens when pStop is midnight or later. Valid data that breaks it:

        +-------+--------+-------+---------------------+---------------------+---------------------+
        | half  | pStart | pStop | half2               | pStart2             | pStop2              |
        +-------+--------+-------+---------------------+---------------------+---------------------+
        | 23:00 | 22:00  | 00:00 | 2012-11-04 23:00:00 | 2000-01-01 22:00:00 | 2000-01-01 00:00:00 |
        | 23:30 | 23:00  | 02:00 | 2012-11-05 23:30:00 | 2000-01-01 23:00:00 | 2000-01-01 02:00:00 |
        +-------+--------+-------+---------------------+---------------------+---------------------+

    Thus my WHERE clause translates to WHERE 23:00 BETWEEN 22:00 AND 00:00, which returns false, and I miss those two correct rows above. Question: is there a way to show those dates as an integer interval so that saying half BETWEEN pStart AND pStop is correct? I thought about adding 24 when pStop is less than pStart, to turn 00:00 into 24:00, but I don't know an easy way to do that without long string concatenations and number conversions. This would solve the problem, because the pStart-pStop difference will never be longer than 6 hours. Note: the query is much more complex and has other, irrelevant date calculations, but the results are shown above. DATE_FORMAT(%H:%i) is applied to the first three columns and no formatting to the last three. Thanks for your help.
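
    For the wraparound itself, one common predicate (a sketch using the column names above) avoids any 24-hour arithmetic by splitting the test in two when the window crosses midnight:

        -- normal window: plain BETWEEN; wrapping window (pStop "before" pStart):
        -- the midpoint is inside if it falls after the start OR before the stop
        WHERE (pStart <= pStop AND half BETWEEN pStart AND pStop)
           OR (pStart >  pStop AND (half >= pStart OR half <= pStop))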

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost on any query against the table. For comparison, I have another table which grows in size at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even past 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements. Some specifics: both tables contain a datetime field called SampleTime. A statement like (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(SampleTime) SampleTime FROM dbo.MyRecords WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns, depending on which drive I put the database on. At the moment it sits on a separate spindle (not C:) and takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into this, thanks! Sam
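
    If the slowdown really comes from the XML values widening every row the aggregate has to scan (a plausible but unconfirmed diagnosis), one standard check, sketched here with the column names from the question, is whether a narrow covering index fixes it, since the seek then never touches the XML blobs:

        -- narrow nonclustered index serving MAX(SampleTime) per PlacementID
        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);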

    Read the article

  • Python Ephem / Datetime calculation

    - by dassouki
    The output should process the first date as "day" and the second as "night". I've been playing with this for a few hours now and can't figure out what I'm doing wrong. Any ideas? Edit: I assume the problem is due to my date comparison implementation. Output:

        $ python time_of_day.py
        * should be day:
        event date:   2010/4/6 16:00:59
        prev rising:  2010/4/6 09:24:24
        prev setting: 2010/4/5 23:33:03
        next rise:    2010/4/7 09:22:27
        next set:     2010/4/6 23:34:27
        day
        * should be night:
        event date:   2010/4/6 00:01:00
        prev rising:  2010/4/5 09:26:22
        prev setting: 2010/4/5 23:33:03
        next rise:    2010/4/6 09:24:24
        next set:     2010/4/6 23:34:27
        day

    time_of_day.py:

        import datetime
        import ephem  # install from http://pypi.python.org/pypi/pyephem/

        # event_time is just a datetime corresponding to an SQL timestamp
        def type_of_light(latitude, longitude, event_time, utc_time, horizon):
            o = ephem.Observer()
            o.lat, o.long, o.date, o.horizon = latitude, longitude, event_time, horizon
            print "event date ", o.date
            print "prev rising: ", o.previous_rising(ephem.Sun())
            print "prev setting: ", o.previous_setting(ephem.Sun())
            print "next rise: ", o.next_rising(ephem.Sun())
            print "next set: ", o.next_setting(ephem.Sun())
            if o.previous_rising(ephem.Sun()) <= o.date <= o.next_setting(ephem.Sun()):
                return "day"
            elif o.previous_setting(ephem.Sun()) <= o.date <= o.next_rising(ephem.Sun()):
                return "night"
            else:
                return "error"

        print "should be day: ", type_of_light('45.959', '-66.6405', '2010/4/6 16:01', '-4', '-6')
        print "should be night: ", type_of_light('45.959', '-66.6405', '2010/4/6 00:01', '-4', '-6')
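    For what it's worth, the output above suggests why both runs print "day": the first condition, previous_rising <= date <= next_setting, spans from yesterday morning's sunrise right through the intervening night to tonight's sunset, so 00:01 still falls inside it. A sketch of a simpler test that sidesteps previous events entirely: it is daytime exactly when the next sunset comes before the next sunrise.

        import ephem

        def is_daytime(observer):
            # daytime iff the sun will set before it next rises
            sun = ephem.Sun()
            return observer.next_setting(sun) < observer.next_rising(sun)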

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs). When using the "-server" option with Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode. I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit-manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (which feeds the stream to the test, excluding disk I/O from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only there it's 15% instead of 12%). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, and was usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine. Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?
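
    On the last question, HotSpot does have switches for exactly this kind of peeking (flag names as in Sun's HotSpot VM; worth double-checking against your exact build): -XX:+PrintCompilation logs each method as it gets compiled, and -XX:+PrintInlining, behind -XX:+UnlockDiagnosticVMOptions, shows the inlining decisions, e.g.:

        java -server -XX:+PrintCompilation ...
        java -server -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining ...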

    Read the article

  • visualising piano performance evaluation

    - by Dolphin
    I need to develop a performance evaluator for piano playing. Based on a MIDI file generated from sheet music, I need to evaluate the MIDI of the actual playing (from a MIDI keyboard). I'm planning to evaluate the playing based on note pitch, duration and loudness. The evaluation is, I suppose, a comparison between the notes of the sheet music and of the playing, in MIDI form. But I have no idea how I can visualise this evaluation process, i.e. show where the person has gone wrong; maybe show the notation and highlight which notes were wrong. But how can I show any of this in some graphical form, or, more precisely, on a stave (a music score) itself? I have note details (pitch, duration) and score details (key and time signature) stored in a table, and I'm using Java. But I have no clue how I can put all this into graphical form. Any insight is most gratefully appreciated. Thanks in advance.
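
    As a starting point, here is a very rough Java2D sketch of the roll-your-own option: five staff lines, note heads placed by pitch, mismatched notes drawn in red. The Note class and the pitch-to-line mapping are deliberately crude placeholders (real engraving maps diatonic steps, not semitones, to staff positions).

        import java.awt.*;
        import java.util.*;
        import java.util.List;
        import javax.swing.*;

        public class StaveSketch extends JPanel {
            static class Note {
                final int midiPitch; final boolean correct;
                Note(int p, boolean c) { midiPitch = p; correct = c; }
            }
            private final List<Note> notes;
            StaveSketch(List<Note> notes) { this.notes = notes; }

            @Override protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                int top = 40, gap = 10;
                for (int i = 0; i < 5; i++)              // the five staff lines
                    g.drawLine(20, top + i * gap, 580, top + i * gap);
                int x = 40;
                for (Note n : notes) {
                    // crude placement: one semitone = half a line gap below F5 (MIDI 77)
                    int y = top + (77 - n.midiPitch) * gap / 2;
                    g.setColor(n.correct ? Color.BLACK : Color.RED);
                    g.fillOval(x, y - 4, 9, 8);          // note head
                    x += 30;
                }
            }

            public static void main(String[] args) {
                JFrame f = new JFrame("Stave sketch");
                f.add(new StaveSketch(Arrays.asList(new Note(64, true), new Note(67, false))));
                f.setSize(620, 160);
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.setVisible(true);
            }
        }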

    Read the article

  • Character Set Issues when Upgrading from Symfony 2.0.* to Symfony 2.1.*?

    - by Adam Stacey
    I have recently upgraded my staging test site to the latest version of Symfony and updated all the vendors using Composer, as instructed in the upgrade document that comes with the download. Everything has updated fine, but I have noticed that some bits of HTML are not displaying in the Twig templates. I did a comparison with the current live site, and it appears to be a character-set issue. As an example, I had a drop-down list that had the following value in it: Kitchen Ducting > Ducting Kits > Ducting Kit 4” / 100mm. On the updated site the drop-down list item just appeared blank. When I used Twig's raw function it displayed the item again, but with the dreaded question mark in a black diamond: Kitchen Ducting > Ducting Kits > Ducting Kit 4? / 100mm. Things you should know that may help:
    - The staging test site and the live site are both on the same server.
    - In my httpd.conf file I have 'AddDefaultCharset utf-8'.
    - In my php.ini file I have 'default_charset = "utf-8"'.
    - The HTML file served has the Content-Type meta tag 'content="text/html; charset=utf-8"'.
    - My database is InnoDB and uses 'utf8' as the default character set and 'utf8_general_ci' as the default collation.
    - All tables in the database also use the defaults.
    I looked into BOM issues with UTF-8, but could not work out whether that was the problem.
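
    One thing worth ruling out after such an upgrade (a guess, not a confirmed diagnosis; the snippet assumes the Symfony Standard Edition layout): that the Doctrine DBAL connection is still talking UTF-8 to MySQL, since everything else in the list already is. In app/config/config.yml:

        # assumed location and keys; adjust to your actual connection config
        doctrine:
            dbal:
                driver:  pdo_mysql
                charset: UTF8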

    Read the article

  • ArrayList.Sort should be a stable sort with an IComparer but is not?

    - by Kaleb Pederson
    A stable sort is a sort that maintains the relative ordering of elements with the same value. The docs on ArrayList.Sort say, regarding the case when an IComparer is provided: "If comparer is set to null, this method performs a comparison sort (also called an unstable sort); that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal. To perform a stable sort, you must implement a custom IComparer interface." Unless I'm missing something, the following test case shows that ArrayList.Sort is not using a stable sort:

        internal class DisplayOrdered {
            public int ID { get; set; }
            public int DisplayOrder { get; set; }
            public override string ToString() {
                return string.Format("ID: {0}, DisplayOrder: {1}", ID, DisplayOrder);
            }
        }

        internal class DisplayOrderedComparer : IComparer {
            public int Compare(object x, object y) {
                return ((DisplayOrdered)x).DisplayOrder - ((DisplayOrdered)y).DisplayOrder;
            }
        }

        [TestFixture]
        public class ArrayListStableSortTest {
            [Test]
            public void TestWeblinkCallArrayListIsSortedUsingStableSort() {
                var call1 = new DisplayOrdered { ID = 1, DisplayOrder = 0 };
                var call2 = new DisplayOrdered { ID = 2, DisplayOrder = 0 };
                var call3 = new DisplayOrdered { ID = 3, DisplayOrder = 2 };
                var list = new ArrayList { call1, call2, call3 };
                list.Sort(new DisplayOrderedComparer());
                // expected order (by ID): 1, 2, 3 (because the DisplayOrder is
                // equal for IDs 1 and 2, their ordering should be maintained
                // by a stable sort)
                Assert.AreEqual(call1, list[0]); // Actual: ID=2 ** FAILS
                Assert.AreEqual(call2, list[1]); // Actual: ID=1
                Assert.AreEqual(call3, list[2]); // Actual: ID=3
            }
        }

    Am I missing something? If not, would this be a documentation bug or a library bug? Apparently using OrderBy in LINQ gives a stable sort.
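
    A sketch of the LINQ route mentioned at the end: Enumerable.OrderBy is documented to perform a stable sort, so the tie between IDs 1 and 2 keeps its original order (the documentation sentence quoted above is usually read as "you must build stability into the comparer yourself", e.g. by tie-breaking on an original index, rather than as a promise that any IComparer makes Sort stable).

        using System;
        using System.Collections;
        using System.Linq;

        class DisplayOrdered { public int ID; public int DisplayOrder; }

        class StableDemo
        {
            static void Main()
            {
                var list = new ArrayList {
                    new DisplayOrdered { ID = 1, DisplayOrder = 0 },
                    new DisplayOrdered { ID = 2, DisplayOrder = 0 },
                    new DisplayOrdered { ID = 3, DisplayOrder = 2 },
                };
                // OrderBy is documented as a stable sort
                var sorted = list.Cast<DisplayOrdered>()
                                 .OrderBy(d => d.DisplayOrder)
                                 .ToList();
                Console.WriteLine(string.Join(", ", sorted.Select(d => d.ID))); // 1, 2, 3
            }
        }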

    Read the article

  • Storing unique values into an array and comparing against a loop - PHP

    - by Aphex22
    I'm writing a PHP report which is designed to be exported purely as a CSV file, using comma delimiters. There are three columns relating to product_id; these three columns are as follows:

        SKU      Parent / Child  Parent SKU
        12345    parent          12345
        12345_1  child           12345
        12345_2  child           12345
        12345_3  child           12345
        12345_4  child           12345
        18099    parent          18099
        18099_1  child           18099

    Here's a link to the full CSV file: http://i.imgur.com/XELufRd.png

    At the moment the code looks like this:

        $sql = "select * from product WHERE on_amazon = 'on' AND active = 'on'";
        $result = mysql_query($sql) or die(mysql_error());
        ?>
        <?
        // set headers
        echo "Type, SKU, Parent / Child, Parent SKU, Product name, Manufacturer name,
              Gender, Product_description, Product price, Discount price, Quantity, Category,
              Photo 1, Photo 2, Photo 3, Photo 4, Photo 5, Photo 6, Photo 7, Photo 8,
              Color id, Color name, Size name <br>";

        // load all stock
        while ($line = mysql_fetch_assoc($result)) {
        ?>
        <?php
            // Loop through each possible size variation to see whether any of
            // the quantity columns has stock > 0
            $con_size = array(35,355,36,37,375,38,385,39,395,40,405,41,415,42,
                              425,43,435,44,445,45,455,46,465,47,475,48,485);
            $arrlength = count($con_size);
            for ($x = 0; $x < $arrlength; $x++) {
                // check if size is available
                if ($line['quantity_c_size_'.$con_size[$x].'_chain'] > 0) {
        ?>
            <? echo 'Shoes'; ?>,
            <?=$line['product_id']?>, , ,
            <?=$line['title']?>,
            <? $brand = $line['jys_brand']; echo ucfirst($brand); ?>,
            <? $gender = $line['category']; if ($gender == 'Mens') { echo 'H'; } else { echo 'F'; } ?>,
            <?=preg_replace('/[^\da-z]/i', ' ', $line['amazon_desc'])?>,
            <?=$line['price']?>,
            <?=$line['price']?>,
            <?=$line['quantity_c_size_'.$con_size[$x].'_chain']?>,
            <?
                $category = $line['style1'];
                switch ($category) {
                    case "ankle-boots":               echo "10013"; break;
                    case "knee-high-boots":           echo "10011"; break;
                    case "high-heel-boots":           echo "10033"; break;
                    case "low-heel-boots":            echo "10014"; break;
                    case "wedge-boots":               echo "10014"; break;
                    case "western-boots":             echo "10032"; break;
                    case "flat-shoes":                echo "10034"; break;
                    case "high-heel-shoes":           echo "10039"; break;
                    case "low-heel-shoes":            echo "10039"; break;
                    case "wedge-shoes":               echo "10035"; break;
                    case "ballerina-shoes":           echo "10008"; break;
                    case "boat-shoes":                echo "10018"; break;
                    case "loafer-shoes":              echo "10037"; break;
                    case "work-shoes":                echo "10039"; break;
                    case "flat-sandals":              echo "10041"; break;
                    case "low-heel-sandals":          echo "10042"; break;
                    case "high-heel-sandals":         echo "10042"; break;
                    case "wedge-sandals":             echo "10042"; break;
                    case "mule-sandals":              echo "10038"; break;
                    case "mary-jane-shoes":           echo "10039"; break;
                    case "sports-shoes":              echo "10026"; break;
                    case "court-shoes":               echo "10035"; break;
                    case "peep-toe-shoes":            echo "10035"; break;
                    case "flat-boots":                echo "10609"; break;
                    case "mid-calf-boots":            echo "10014"; break;
                    case "trainer-shoes":             echo "10009"; break;
                    case "wellington-boots":          echo "10012"; break;
                    case "lace-up-boots":             echo "10609"; break;
                    case "chelsea-and-jodphur-boots": echo "10609"; break;
                    case "desert-and-chukka-boots":   echo "10032"; break;
                    case "lace-up-shoes":             echo "10034"; break;
                    case "slip-on-shoes":             echo "10043"; break;
                    case "gibson-and-derby-shoes":    echo "10039"; break;
                    case "oxford-shoes":              echo "10039"; break;
                    case "brogue-shoes":              echo "10039"; break;
                    case "winter-boots":              echo "10021"; break;
                    case "slipper-shoes":             echo "10016"; break;
                    case "mid-heel-shoes":            echo "10039"; break;
                    case "sandals-and-beach-shoes":   echo "10044"; break;
                    case "mid-heel-sandals":          echo "10042"; break;
                    case "mid-heel-boots":            echo "10014"; break;
                    default:                          echo "";
                }
            ?>,
            http://www.getashoe.co.uk/full/<?=$line['product_id']?>_1.jpg,
            http://www.getashoe.co.uk/full/<?=$line['product_id']?>_2.jpg,
            http://www.getashoe.co.uk/full/<?=$line['product_id']?>_3.jpg,
            http://www.getashoe.co.uk/full/<?=$line['product_id']?>_4.jpg,
            , , , ,
            <?
                $colour = preg_replace('/[^\da-z]/i', ' ', $line['colour']);
                if     (preg_match('/white.*/i',  $colour)) { echo '1'; }
                elseif (preg_match('/yellow.*/i', $colour)) { echo '4'; }
                elseif (preg_match('/orange.*/i', $colour)) { echo '7'; }
                elseif (preg_match('/red.*/i',    $colour)) { echo '8'; }
                elseif (preg_match('/pink.*/i',   $colour)) { echo '13'; }
                elseif (preg_match('/purple.*/i', $colour)) { echo '15'; }
                elseif (preg_match('/blue.*/i',   $colour)) { echo '19'; }
                elseif (preg_match('/green.*/i',  $colour)) { echo '25'; }
                elseif (preg_match('/brown.*/i',  $colour)) { echo '28'; }
                elseif (preg_match('/grey.*/i',   $colour)) { echo '35'; }
                elseif (preg_match('/black.*/i',  $colour)) { echo '38'; }
                elseif (preg_match('/gold.*/i',   $colour)) { echo '41'; }
                elseif (preg_match('/silver.*/i', $colour)) { echo '46'; }
                elseif (preg_match('/multi.*/i',  $colour)) { echo '594'; }
                elseif (preg_match('/beige.*/i',  $colour)) { echo '6887'; }
                elseif (preg_match('/nude.*/i',   $colour)) { echo '6887'; }
                else                                        { echo '534'; }
            ?>,
            <?=$line['colour']?>,
            <?=$con_size[$x]?> <br>
        <?
                // finish checking if size is available
                }
            }
        ?>

    So at the moment this simply echoes the product_id into the SKU column. The code needs to enter the product_id into an array and check whether it is unique. If the product_id is unique to the array, then the product_id is echoed out unaltered, parent is echoed to the 'Parent / Child' column, and the product_id is repeated in the 'Parent SKU' column. However, if the array is checked and the product_id already exists in it, then the product_id is echoed to the 'SKU' column with a suffix, i.e. _1; child is echoed to the 'Parent / Child' column; and the original parent product_id is echoed to the 'Parent SKU' column. HOWEVER, the same SKU cannot be repeated with the same suffix, i.e. 12345_1, 12345_1, so presumably there would need to be another array for the suffixed SKUs to be checked against. If anybody could help, it would be great. Thanks

    --- UPDATE: ANSWER ---

    I managed to solve this myself and thought I would share my solution for future reference.

        /*
         * Array to collect product_ids and check whether each is unique.
         * If unique, the product_id becomes a parent SKU.
         * If not, the product_id becomes a child of the previous parent
         * and is suffixed with _1, _2, etc.
         */
        if (!in_array($line['product_id'], $SKU)) {
            $SKU[] = $line['product_id'];
            $parent = $line['product_id'];
            $a = 0;
        ?>
            <? echo 'Shoes'; ?>,
            <? echo $parent; ?>,
            <? echo "Parent"; ?>,
            <? echo $parent; ?>,
        <?
        } else {
            $child = $line['product_id'] . "_" . $a;
        ?>
            <? echo 'Shoes'; ?>,
            <? echo $child; ?>,
            <? echo "Child"; ?>,
            <? echo $child;
            // increment suffix value for child SKU
            $a++;
            ?>

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and everything up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning, depending on which operator you apply and what the current types of the values passed to the operator are. For example, comparing two variants for equality, one containing the integer 1 and another containing the string "1", will return True. So, assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi, i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules:
    - Variants that hash unequally should compare as unequal, both in sorting and in direct equality.
    - Variants that compare as equal for both sorting and direct equality should hash as equal.
    It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I currently work is that I normalize the variants to strings (if they fit) and treat them as strings; otherwise I work with the variant data as if it were an opaque blob, and hash and compare its raw bytes. That has some limitations, of course: the numbers 1..10 sort as [1, 10, 2, ... 9], etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
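
    For concreteness, a rough Win32 sketch of the "normalize to a canonical type, then hash" scheme described above (link with oleaut32; error handling simplified, and the fallback path is a placeholder): variants that coerce to the same BSTR hash alike, matching a string-based comparison rule.

        #include <windows.h>
        #include <oleauto.h>
        #include <functional>
        #include <string>

        size_t HashVariant(const VARIANT& v)
        {
            VARIANT canon;
            VariantInit(&canon);
            // coerce to a canonical representation (BSTR) before hashing
            if (SUCCEEDED(VariantChangeType(&canon, const_cast<VARIANT*>(&v), 0, VT_BSTR)))
            {
                std::wstring s(canon.bstrVal, SysStringLen(canon.bstrVal));
                VariantClear(&canon);
                return std::hash<std::wstring>()(s);
            }
            VariantClear(&canon);
            return 0;  // fallback bucket for values that cannot be coerced to string
        }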

    Read the article

  • Validating Login / Changing User settings / Php Mysql

    - by Marcelo
    Hi everyone, my questions are about login and about changing already-saved data. (Q1) Until now I've only saved input in the tables of the database (the registration steps); now I need to check whether the input (the login steps) matches my tables in the database. In fact I have 3 types of users, so I'll have to check 3 tables. If the input data matches one of those 3 tables, I will redirect the user to his specific area. I'm thinking about taking the submitted data, $login = $_REQUEST['login']; and $password = $_REQUEST['password'];, and comparing the login with the login column in the database. Then, if the login matches, I'll compare the submitted password with the one in that row, not in the whole column. But I don't know how to do this search and comparison, or what to use. If both match I'll redirect the user; otherwise I'll send a login error message (this part I know how to do). (Q2) What if I need to change an already-saved user? For example, to change an email address. My page for changing a user's data is exactly the same as the registration page. Can I load the already-saved options and values from registration (the user table, for example)? Then the user would change whatever he thinks is necessary, and when he submits the new information it would not create a new row in my table, but just overwrite the old information. How can I do this? Sorry for any mistakes in English, and thanks for your attention.
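
    A minimal sketch of both parts, assuming a mysqli connection in $db and an illustrative users(id, login, password, email) table (not your actual schema). It also assumes passwords are stored hashed with password_hash(), which is how this should be done rather than comparing plain text:

        <?php
        // Q1: fetch the row by login, then verify the password server-side
        $stmt = $db->prepare("SELECT id, password FROM users WHERE login = ?");
        $stmt->bind_param("s", $_REQUEST['login']);
        $stmt->execute();
        $row = $stmt->get_result()->fetch_assoc();  // get_result() needs mysqlnd

        if ($row && password_verify($_REQUEST['password'], $row['password'])) {
            header("Location: user_area.php");      // redirect on success
        } else {
            echo "Invalid login or password";
        }

        // Q2: an UPDATE keyed on the user's id overwrites the saved row
        // instead of inserting a new one
        $stmt = $db->prepare("UPDATE users SET email = ? WHERE id = ?");
        $stmt->bind_param("si", $newEmail, $userId);
        $stmt->execute();
        ?>

    Running the same check against your three user tables would just repeat the Q1 lookup once per table, redirecting to the matching area on the first hit.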

    Read the article

  • c# "==" operator : compiler behaviour with different structs

    - by Moe Sisko
    Code to illustrate:

        public struct MyStruct {
            public int SomeNumber;
        }

        public string DoSomethingWithMyStruct(MyStruct s) {
            if (s == null) return "this can't happen";
            else return "ok";
        }

        private string DoSomethingWithDateTime(DateTime s) {
            if (s == null) return "this can't happen"; // XX
            else return "ok";
        }

    Now, DoSomethingWithMyStruct fails to compile with "Operator '==' cannot be applied to operands of type 'MyStruct' and '<null>'". This makes sense, since it doesn't make sense to attempt a reference comparison with a struct, which is a value type. OTOH, DoSomethingWithDateTime compiles, but with the compiler warning "Unreachable code detected" at the line marked XX. Now, I'm assuming there is no compiler error here because the DateTime struct overloads the == operator. But how does the compiler know that the code is unreachable? E.g., does it look inside the code which overloads the == operator? (This is using Visual Studio 2005, in case that makes a difference.) Note: I'm more curious than anything about the above. I don't usually try to use == on structs and nulls.
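
    The usual explanation, sketched below: the compiler never looks inside the operator body. Because DateTime has a user-defined operator==, C# also provides its "lifted" form over Nullable<DateTime>; the null literal converts to DateTime?, a plain DateTime is known to be non-null, and a lifted equality with one definitely-null and one definitely-non-null operand constant-folds to false, which makes the branch unreachable.

        using System;

        class LiftedOperatorDemo
        {
            static void Main()
            {
                DateTime s = DateTime.Now;

                // the same lifted operator, spelled out explicitly:
                DateTime? never = null;
                Console.WriteLine(s == never);  // False, no peeking into operator==
            }
        }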

    Read the article

  • xcode project-/target-settings-syntax for linker flag force_load on iPhone

    - by Kaiserludi
    Hi all. I am confronted with a double bind: on the one hand, for one of the 3rd-party static libraries my iPhone application uses, the linker flag -all_load has to be set in the application's project or target settings, otherwise the app crashes at runtime, not finding some symbols called internally from the lib; on the other hand, for another 3rd-party static lib, -all_load must not be set at the application level, or the app won't build, thanks to a "duplicate symbols" linker error. To solve this issue I now want to use -force_load instead of -all_load, as according to the documentation it does the same as -all_load, but only for the passed path or lib file instead of all libs. The problem with -force_load is that I don't have a clue how to pass a path or file as a parameter when setting it via the Xcode project or target settings. Every syntax possibility that comes to my mind either leads to Xcode treating the parameter as another linker flag instead of as an argument to the previous one, or to the linker throwing syntax-related errors, or to the flag simply doing nothing at all compared to not being set. I also opened the .pbxproj file in a text editor to edit it to the correct command-line syntax manually, but when reloading the project, Xcode auto-changes the syntax, interpreting the parameter to -force_load as a separate flag. Anyone have an idea on this issue? Thx, Kaiserludi.
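
    For the record, the commonly cited form (worth verifying against your Xcode version; the path below is illustrative) is to put the flag and the full path to the archive as the entry in Build Settings > Other Linker Flags, letting a build-setting variable supply the absolute path:

        -force_load $(PROJECT_DIR)/Libraries/libThirdParty.a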

    Read the article

  • URL equals and checking Internet access

    - by James P.
    The documentation at http://java.sun.com/j2se/1.5.0/docs/api/java/net/URL.html states: "Compares this URL for equality with another object. If the given object is not a URL then this method immediately returns false. Two URL objects are equal if they have the same protocol, reference equivalent hosts, have the same port number on the host, and the same file and fragment of the file. Two hosts are considered equivalent if both host names can be resolved into the same IP addresses; else if either host name can't be resolved, the host names must be equal without regard to case; or both host names equal to null. Since hosts comparison requires name resolution, this operation is a blocking operation. Note: The defined behavior for equals is known to be inconsistent with virtual hosting in HTTP." According to this, equals will only work reliably if name resolution is possible. Since I can't be sure that a computer has internet access at a given time, should I just use Strings to store addresses instead? Also, how do I go about testing whether access is available when requested?
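
    A sketch of the usual middle ground between URL and raw Strings: java.net.URI compares purely textually (no name resolution and no blocking), which is why it is the standard stand-in for URL in equality checks and as a map key.

        import java.net.URI;

        public class UriEqualsDemo {
            public static void main(String[] args) {
                URI a = URI.create("http://example.com/index.html");
                URI b = URI.create("http://example.com/index.html");
                System.out.println(a.equals(b)); // true, and no network access involved
            }
        }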

    Read the article

  • Resolving the WMI DNS Host Name

    - by Stephen Murby
    I am trying to make a comparison between a machine name I have retrieved from AD and the DNS host name I want to get from the machine using WMI. I currently have:

        foreach (SearchResult oneMachine in allMachinesCollected)
        {
            pcName = oneMachine.Properties["name"][0].ToString();

            ConnectionOptions setupConnection = new ConnectionOptions();
            setupConnection.Username = USERNAME;
            setupConnection.Password = PASSWORD;
            setupConnection.Authority = "ntlmdomain:DOMAIN";

            ManagementScope setupScope = new ManagementScope("\\\\" + pcName + "\\root\\cimv2", setupConnection);
            setupScope.Connect();

            ObjectQuery dnsNameQuery = new ObjectQuery("SELECT * FROM Win32_ComputerSystem");
            ManagementObjectSearcher dnsNameSearch = new ManagementObjectSearcher(setupScope, dnsNameQuery);
            ManagementObjectCollection allDNSNames = dnsNameSearch.Get();

            string dnsHostName;
            foreach (ManagementObject oneName in allDNSNames)
            {
                dnsHostName = oneName.Properties["DNSHostName"].ToString();   // << here
                if (dnsHostName == pcName)
                {
                    shutdownMethods.ShutdownMachine(pcName, USERNAME, PASSWORD);
                    MessageBox.Show(pcName + " has been sent the reboot command");
                }
            }
        }

    But I get a ManagementException at the line marked above, saying "Not found". Any ideas?
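
    One guess worth checking before anything else: Properties["DNSHostName"] returns a PropertyData wrapper, not the value itself, so calling ToString() on it never yields the host name; the actual value lives in .Value. A sketch of the fixed lines (with a case-insensitive compare, since AD names and DNS host names often differ only in case):

        // read the value, not the PropertyData wrapper
        string dnsHostName = oneName.Properties["DNSHostName"].Value as string;
        // equivalent shorthand via the indexer:
        // string dnsHostName = (string)oneName["DNSHostName"];
        if (string.Equals(dnsHostName, pcName, StringComparison.OrdinalIgnoreCase))
        {
            shutdownMethods.ShutdownMachine(pcName, USERNAME, PASSWORD);
        }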

    Read the article

  • Vector [] vs copying

    - by sak
    What is faster and/or generally better?

        vector<myType> myVec;
        int i;
        myType current;

        for (i = 0; i < 1000000; i++) {
            current = myVec[i];
            doSomethingWith(current);
            doAlotMoreWith(current);
            messAroundWith(current);
            checkSomeValuesOf(current);
        }

    or

        vector<myType> myVec;
        int i;

        for (i = 0; i < 1000000; i++) {
            doSomethingWith(myVec[i]);
            doAlotMoreWith(myVec[i]);
            messAroundWith(myVec[i]);
            checkSomeValuesOf(myVec[i]);
        }

    I'm currently using the first solution. There really are millions of calls per second, and every single bit comparison or move is a performance concern.
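
    A sketch of the usual third option (assuming the helper functions only need to read, or at most modify, the element in place): bind a reference instead of copying, which keeps the single name of version 1 without the per-iteration copy, and indexes only once per iteration unlike version 2.

        for (std::vector<myType>::size_type i = 0; i < myVec.size(); ++i) {
            const myType& current = myVec[i];  // no copy; drop 'const' if the
            doSomethingWith(current);          // helpers need to mutate it
            doAlotMoreWith(current);
            messAroundWith(current);
            checkSomeValuesOf(current);
        }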

    Read the article

  • Comparing two images from the clipboard class

    - by VeXe
    In a C# WinForms app, I'm writing a clipboard log manager that logs text to a log file (every time Ctrl+C/X is pressed, the copied/cut text gets appended to the file). I also did the same for images; that is, if you press PrtScn, the screenshot you took also goes to a folder. I do that by using a timer; inside, I have something which looks like this:

        if (Clipboard.ContainsImage())
        {
            if (IsClipboardUpdated())
            {
                LogData();
                UpdateLastClipboardData();
            }
        }

    This is what the rest of the methods look like:

        public void UpdateLastClipboardData()
        {
            // ... other updates
            LastClipboardImage = Clipboard.GetImage();
        }

        // This is how I determine if there's a new image in the clipboard...
        public bool IsClipboardUpdated()
        {
            return (LastClipboardImage != Clipboard.GetImage());
        }

        public void LogData()
        {
            Clipboard.GetImage().Save(ImagesLogFolder + "\\Image" + now_hours + "_" + now_mins + "_" + now_secs + ".jpg");
        }

    The problem is: inside the update method, LastClipboardImage != Clipboard.GetImage() always returns true! I even did the following inside the update method:

        Image img1 = Clipboard.GetImage();
        Image img2 = Clipboard.GetImage();
        Image img3 = img2;

        bool b1 = img1 == img2; // this returned false. WHY??
        bool b2 = img3 == img2; // this returned true. Makes sense.

    Please help, the comparison isn't working... why?
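
    The snippet at the end shows the cause: Image defines no value-based equality, so == compares object references, and every Clipboard.GetImage() call returns a fresh object (hence b1 == false). One blunt but workable value comparison, sketched here, serializes both images and compares the bytes; IsClipboardUpdated could then call !ImagesEqual(LastClipboardImage, Clipboard.GetImage()).

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Linq;

        static class ImageCompare
        {
            public static bool ImagesEqual(Image a, Image b)
            {
                if (a == null || b == null) return ReferenceEquals(a, b);
                using (var ma = new MemoryStream())
                using (var mb = new MemoryStream())
                {
                    // serialize to a lossless format and compare raw bytes
                    a.Save(ma, ImageFormat.Png);
                    b.Save(mb, ImageFormat.Png);
                    return ma.ToArray().SequenceEqual(mb.ToArray());
                }
            }
        }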

    Read the article

  • Is it a good idea to use an integer column for storing US ZIP codes in a database?

    - by Yadyn
    At first glance, it would appear I have two basic choices for storing US ZIP codes in a database table:
    - Text (probably most common), i.e. char(5), or varchar(9) to support the +4 extension
    - Numeric, i.e. a 32-bit integer
    Both would satisfy the requirements of the data, if we assume there are no international concerns. In the past we've generally just gone the text route, but I was wondering if anyone does the opposite. Just from a brief comparison, it looks like the integer method has two clear advantages:
    - It is, by its nature, automatically limited to numerics only (whereas without validation the text style could store letters and such, which are, to my knowledge, never valid in a ZIP code). This doesn't mean we could/would/should forgo validating user input as normal, though!
    - It takes less space, being 4 bytes (which should be plenty even for 9-digit ZIP codes) instead of 5 or 9 bytes.
    Also, it seems like it wouldn't hurt display output much. It is trivial to slap a ToString() on a numeric value, use simple string manipulation to insert a hyphen or space or whatever for the +4 extension, and use string formatting to restore leading zeroes. Is there anything that would discourage using int as a datatype for US-only ZIP codes?
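
    For the display side, a tiny sketch of the round-trip the last paragraph describes (C# assumed since ToString() is mentioned; values illustrative, and "D5"/"D9" are the standard .NET zero-pad format strings). The leading zero of New England ZIPs like 02134 is exactly what the int column silently drops, so every output path must remember to re-pad:

        using System;

        class ZipDemo
        {
            static void Main()
            {
                int stored = 2134;                                  // column value for ZIP "02134"
                string zip5 = stored.ToString("D5");                // "02134"
                int plus4 = 21341234;                               // 9-digit ZIP+4 fits a 32-bit int
                string zip9 = plus4.ToString("D9").Insert(5, "-");  // "02134-1234"
                Console.WriteLine(zip5 + " / " + zip9);
            }
        }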

    Read the article

  • Good way to identify similar images?

    - by Nick
    I've developed a simple and fast algorithm in PHP to compare images for similarity. It's fast to hash (~40 per second for 800x600 images), and an unoptimised search algorithm can go through 3,000 images in 22 minutes, comparing each one against the others (3/sec). The basic overview: you take an image, rescale it to 8x8 and then convert those pixels to HSV. The hue, saturation and value are then truncated to 4 bits each, and the whole thing becomes one big hex string. Comparing images basically walks along the two strings and adds up the differences it finds. If the total is below 64, it's the same image. Different images usually score around 600-800; below 20 is extremely similar. Are there any improvements I can make to this model? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but the others? To speed up searches I could probably split the 4 bits from each part in half and put the most significant bits first, so if they fail the check then the LSBs don't need to be checked at all. I don't know an efficient way to store bits like that yet that still allows them to be searched and compared easily. I've been using a dataset of 3,000 photos (mostly unique) and there haven't been any false positives. It's completely immune to resizes and fairly resistant to brightness and contrast changes.
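
    For readers who want to experiment, here is a condensed sketch of the hashing scheme as described, using PHP's GD extension. The rgbToHsv() helper is assumed (GD only exposes RGB; the sketch expects h in 0-360 and s, v in 0-1), and JPEG input is assumed for brevity:

        <?php
        function imageFingerprint($path) {
            $src  = imagecreatefromjpeg($path);
            $tiny = imagecreatetruecolor(8, 8);
            // rescale to 8x8, which is what makes the hash resize-immune
            imagecopyresampled($tiny, $src, 0, 0, 0, 0, 8, 8, imagesx($src), imagesy($src));
            $hash = '';
            for ($y = 0; $y < 8; $y++) {
                for ($x = 0; $x < 8; $x++) {
                    $rgb = imagecolorat($tiny, $x, $y);
                    list($h, $s, $v) = rgbToHsv(($rgb >> 16) & 0xFF,
                                                ($rgb >> 8) & 0xFF,
                                                $rgb & 0xFF);
                    // truncate each component to 4 bits (0-15), append as hex
                    $hash .= dechex((int)($h / 360 * 15))
                           . dechex((int)($s * 15))
                           . dechex((int)($v * 15));
                }
            }
            return $hash;  // 192 hex chars: 3 components x 64 pixels
        }
        ?>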

    Read the article
