Search Results

Search found 2242 results on 90 pages for 'fuzzy comparison'.

Page 34/90

  • QT vs. Net - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people argue about the dumbest crap. Trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is to make a comparison that highlights these so people can make more informed decisions before embarking on a project, in the spirit of R.A.D.

    Event Handling: In Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots, e.g. // run some calculations, then emit valueChanged(30, false, 20.2); and then catching it, any object can make a slot to receive that message easily: void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to "block" an event or "disconnect" when needed, and it works seamlessly across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e)). And I'm not just talking about syntax, because in the end the .NET anonymous delegates are the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system ever, IMO.

    Plugins and such: I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of .dll's (there are ways to combine everything into a single exe though). That is a big bonus for modular projects, where importing stuff in C++ is a PITA as far as RAD is concerned.

    Database Ease of Doing Crap: Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure.

    Threading/Concurrency: What do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. QtConcurrent is the simplest threading mechanism I've ever played with.

    Memory Usage: Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? Doesn't the just-in-time compiler produce native code that is pretty good, and that only happens the first time the program is run? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
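
    For readers less familiar with the .NET pattern the poster is contrasting, here is a minimal C# sketch of the standard event idiom (all names are hypothetical, invented for illustration):

        using System;

        // Hypothetical event-args type carrying the same payload as the
        // poster's Qt signal valueChanged(30, false, 20.2).
        public class ValueChangedEventArgs : EventArgs
        {
            public int Percent;
            public bool Ok;
            public float TimeRemaining;
        }

        public class Calculator
        {
            public event EventHandler<ValueChangedEventArgs> ValueChanged;

            public void Run()
            {
                // Run some calculations, then raise the event.
                EventHandler<ValueChangedEventArgs> handler = ValueChanged;
                if (handler != null)
                {
                    handler(this, new ValueChangedEventArgs
                    {
                        Percent = 30, Ok = false, TimeRemaining = 20.2f
                    });
                }
            }
        }

        class Program
        {
            static void Main()
            {
                var calc = new Calculator();
                // An anonymous delegate plays the role of the Qt slot.
                calc.ValueChanged += (sender, e) =>
                    Console.WriteLine(e.Percent + "% done, " + e.TimeRemaining + "s left");
                calc.Run();
            }
        }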

    Read the article

  • What are the best tools for Sql Server version control

    - by Mendy
    After reading this post, and the suggestion to use Team Edition for Database Professionals, I want to know if there is any equivalent to this for SQL Server 2008 / Visual Studio 2010 Ultimate. I'm looking for a tool that does all the things Jeff mentions in his article: Create test data. Schema comparison. Data comparison. Database unit testing. Refactoring. Integrated T-SQL editor, a first-class language construct in the IDE, just like C# and VB.NET.

    Read the article

  • Case insensitive string compare in LINQ-to-SQL

    - by BlueMonkMN
    I've read that it's unwise to use ToUpper and ToLower to perform case-insensitive string comparisons, but I see no alternative when it comes to LINQ-to-SQL. The ignoreCase and CompareOptions arguments of String.Compare are ignored by LINQ-to-SQL (if you're using a case-sensitive database, you get a case-sensitive comparison even if you ask for a case-insensitive one). Is ToLower or ToUpper the best option here? Is one better than the other? I thought I read somewhere that ToUpper was better, but I don't know if that applies here. (I'm doing a lot of code reviews and everyone is using ToLower.)

        Dim s = From row In context.Table Where String.Compare(row.Name, "test", StringComparison.InvariantCultureIgnoreCase) = 0

    This translates to an SQL query that simply compares row.Name with "test" and will not return "Test" or "TEST" on a case-sensitive database.
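
    The usual workaround, sketched below in C#: push ToUpper onto the column so the provider translates it to UPPER() in the generated T-SQL, making the comparison case-insensitive even on a case-sensitive database:

        // A minimal sketch, not compilable on its own: it assumes the
        // question's LINQ-to-SQL DataContext ("context") with a Table of rows
        // exposing a string Name column. ToUpper is translated to UPPER() in
        // the generated T-SQL, so the comparison happens server-side without
        // regard to case.
        var matches = from row in context.Table
                      where row.Name.ToUpper() == "TEST"
                      select row;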

    Read the article

  • Problem evaluating NULL in an IIF statement (Access)

    - by Mohgeroth
    Item in the recordset rstImportData("Flat Size") is Null. With that, given the following statement:

        IIF(IsNull(rstImportData("Flat Size")), Null, CStr(rstImportData("Flat Size")))

    Result: throws error 94, "Invalid use of Null". If I change the statement by removing the type conversion upon a false comparison:

        IIF(IsNull(rstImportData("Flat Size")), Null, 0)

    Result: Null. It returns Null, as it should have the first time. It appears that I cannot do a type conversion in an IIF if the value passed in is ever Null: even when the value passes the IIF test, the function still evaluates both the true and the false answer. The only reason I'm using IIF like this is because I have a 25-line comparison to compare data from an import against a matching record in a database, to see if I need to append the prior record to history. Any thoughts? The way data is imported there will be null dates, and where the spreadsheet import is in a string format I must convert either side to the other to compare the values properly, but if either side is null this exception occurs :(
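
    The underlying rule is that IIf is an ordinary function, so both branch arguments are evaluated before the call ever happens. The same trap can be reproduced in C# with a hand-rolled IIf, shown here purely as an illustration (the ternary operator, by contrast, only evaluates the branch it selects):

        using System;

        class IIfDemo
        {
            // A user-defined IIf mirrors VBA's: both truePart and falsePart
            // are evaluated BEFORE the call, regardless of the condition.
            static object IIf(bool condition, object truePart, object falsePart)
            {
                return condition ? truePart : falsePart;
            }

            static void Main()
            {
                string flatSize = null;

                try
                {
                    // Throws NullReferenceException: flatSize.Trim() runs even
                    // though the condition is true, just like CStr(Null) in VBA.
                    object result = IIf(flatSize == null, null, flatSize.Trim());
                }
                catch (NullReferenceException)
                {
                    Console.WriteLine("Both branches were evaluated eagerly.");
                }

                // The ternary operator short-circuits, so this is safe.
                object safe = (flatSize == null) ? null : (object)flatSize.Trim();
                Console.WriteLine(safe ?? "null handled safely");
            }
        }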

    Read the article

  • Returning references while using shared_ptrs

    - by Goose Bumper
    Suppose I have a rather large class Matrix, and I've overloaded operator== to check for equality like so:

        bool operator==(Matrix &a, Matrix &b);

    Of course I'm passing the Matrix objects by reference because they are so large. Now I have a method Matrix::inverse() that returns a new Matrix object, and I want to use the inverse directly in a comparison, like so:

        if (a.inverse() == b) { ... }

    The problem is, this means the inverse method needs to return a reference to a Matrix object. Two questions: First, since I'm just using that reference in this one comparison, is this a memory leak? Second, what happens if the object-to-be-returned in the inverse() method belongs to a boost::shared_ptr? As soon as the method exits, the shared_ptr is destroyed and the object is no longer valid. Is there a way to return a reference to an object that belongs to a shared_ptr?

    Read the article

  • Is there any way to modify a column before it is ordered in MySQL?

    - by George Edison
    I have a table with a field value which is a varchar(255). The contents of the field can be quite varied:

        $1.20
        $2994
        $56 + tax (this one can be ignored or truncated to $56 if necessary)

    I have a query constructed:

        SELECT value FROM unnamed_table ORDER BY value

    However, this of course uses ASCII string comparison to order the results and does not use any numerical type of comparison. Is there a way to truly order by value without changing the field type to DECIMAL or something else? In other words, can the value field be modified ('$' removed, value converted to decimal) on the fly before the results are sorted?
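
    One plausible fix, sketched below, is to strip the '$' and cast inside the query itself so MySQL sorts numerically. This is a hedged sketch, not a tested answer: REPLACE and CAST ... AS DECIMAL are standard MySQL functions, and CAST stops at the first non-numeric character, which should truncate '$56 + tax' to 56 as the poster allows.

        using System;

        class Program
        {
            static void Main()
            {
                // Table and column names come from the question; the ORDER BY
                // expression converts the varchar on the fly before sorting.
                string sql =
                    "SELECT value FROM unnamed_table " +
                    "ORDER BY CAST(REPLACE(value, '$', '') AS DECIMAL(10,2))";

                // Execute with your MySQL client of choice; printed here only
                // to keep the sketch self-contained.
                Console.WriteLine(sql);
            }
        }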

    Read the article

  • Direct comparator in Java out of the box

    - by KARASZI István
    I have a method which needs a Comparator for one of its parameters. I would like to pass a Comparator which does a normal comparison, and a reverse comparator which does it in reverse. java.util.Collections provides reverseOrder(), which is good for the reverse comparison, but I could not find any normal Comparator. The only solution that came to my mind is Collections.reverseOrder(Collections.reverseOrder()), but I don't like it because of the double method call inside. Of course I could write a NormalComparator like this:

        public class NormalComparator<T extends Comparable> implements Comparator<T> {
            public int compare(T o1, T o2) {
                return o1.compareTo(o2);
            }
        }

    But I'm really surprised that Java doesn't have a solution for this out of the box.
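
    As a cross-language aside (hedged, since the question is about Java): the .NET analogue of the missing "normal" comparator is Comparer<T>.Default, which simply wraps IComparable<T>.CompareTo; a reverse comparer then just flips its arguments:

        using System;
        using System.Collections.Generic;

        class ReverseComparer<T> : IComparer<T>
        {
            public int Compare(T x, T y)
            {
                // Flipping the arguments reverses the natural order.
                return Comparer<T>.Default.Compare(y, x);
            }
        }

        class Program
        {
            static void Main()
            {
                var values = new List<int> { 3, 1, 2 };
                values.Sort(Comparer<int>.Default);      // natural: 1, 2, 3
                values.Sort(new ReverseComparer<int>()); // reversed: 3, 2, 1
                Console.WriteLine(string.Join(", ", values));
            }
        }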

    Read the article

  • Merging MySQL Structure and Data

    - by Shahid
    I have a MySQL database running on a deployment machine which also contains data. Then I have another MySQL database which has evolved in terms of STRUCTURE + DATA for some time. I need a way to merge the changes (ONLY) for both structure and data into the DB on the deployment machine without disturbing the existing data. Does anyone know of a tool that can do this safely? I have had a look at a few comparison tools, but I need a tool which can automate the merge operation. Note also that most of the data in the tables is in BINARY, so I can't use many file comparison tools. Does anyone know of a solution to this? Thanks

    Read the article

  • Interesting AS3 hash situation. Is it really using strict equality as the documentation says?

    - by Triynko
    AS3 Code:

        import flash.utils.Dictionary;

        var num1:Number = Number.NaN;
        var num2:Number = Math.sqrt(-1);
        var dic:Dictionary = new Dictionary( true );

        trace(num1); //NaN
        trace(num2); //NaN

        dic[num1] = "A";

        trace( num1 == num2 );  //false
        trace( num1 === num2 ); //false
        trace( dic[num1] ); //A
        trace( dic[num2] ); //A

    Concerning the key comparison method... "The Dictionary class lets you create a dynamic collection of properties, which uses strict equality (===) for key comparison. When an object is used as a key, the object's identity is used to look up the object, and not the value returned from calling toString() on it." If Dictionary uses strict equality, as the documentation states, then how is it that num1 === num2 is false, and yet dic[num1] resolves to the same hash slot as dic[num2]?
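
    The same surprise exists in .NET, which may help show what "identity" means for NaN keys: == follows IEEE 754 (NaN is never equal to NaN), while the hash container uses Equals/GetHashCode, which treat NaN as identical to itself. A minimal C# sketch of the parallel behaviour (not the poster's code):

        using System;
        using System.Collections.Generic;

        class NaNKeyDemo
        {
            static void Main()
            {
                double num1 = double.NaN;
                double num2 = Math.Sqrt(-1); // also NaN

                Console.WriteLine(num1 == num2);      // False: IEEE 754 comparison
                Console.WriteLine(num1.Equals(num2)); // True: identity semantics

                // Dictionary looks keys up via Equals/GetHashCode, not ==,
                // so both NaN values land in the same slot.
                var dic = new Dictionary<double, string>();
                dic[num1] = "A";
                Console.WriteLine(dic[num2]); // A
            }
        }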

    Read the article

  • What is the 'noreq' Filter Type an Alias for?

    - by Alan Storm
    I'm looking into Magento's filtering options (ecommerce system and PHP framework with an expansive ORM system), specifically the addFieldToFilter method. In this method, you specify a SQL-ish filter by passing in a single-element array, with the key indicating the type of filter. For example,

        array('eq'=>'bar')  //eq means equal
        array('neq'=>'bar') //neq means not equal

    would each give you a where clause that looks like

        where field = 'bar';
        where field != 'bar';

    So, deep in the bowels of the source, I found a comparison type named 'moreq' that maps to a >= comparison operator:

        array('moreq'=>'27')

        where field >= 27

    The weird thing is, there's already a 'gteq' comparison type:

        array('gteq'=>'27')

        where field >= 27

    So, my question is, what does moreq stand for? Is it some special SQL concept that's supported in other databases that the Magento guys wanted to map to MySQL, or is it just "more required" and an example of what happens when you're doing rapid agile and trying to maintain backwards compatibility?

    Read the article

  • How to store and compare time-zone sensitive times

    - by Chad Moran
    I have a data structure where an entity has times stored as an int (minutes into the day) for fast comparison. The entity also has a foreign key reference back to a TimeZone table which contains the .NET CLR ID name and its Standard Time/Daylight Time acronyms. Since this information is stored as time-zone insensitive, I was wondering how, in LINQ to SQL, I could convert this into a UTC DateTime for comparison against other times that will be in UTC. Just to be clear, this conversion has to be done server-side so that I can execute filtering on the SQL Server and not the client. I am using .NET 3.5 SP1 and SQL Server 2008.
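
    For reference, the conversion itself looks like the sketch below. Hedged caveat: TimeZoneInfo runs on the client, so this illustrates the conversion but does not by itself satisfy the server-side filtering requirement; the values are hypothetical.

        using System;

        class Program
        {
            static void Main()
            {
                // Hypothetical row values: 570 minutes into the day (9:30 AM)
                // plus the .NET CLR id stored in the TimeZone table.
                int minutesIntoDay = 570;
                string timeZoneId = "Eastern Standard Time";

                TimeZoneInfo zone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);

                // Build an unspecified-kind local time, then let the zone's
                // rules (including DST) produce the UTC equivalent.
                DateTime local = DateTime.SpecifyKind(
                    DateTime.Today.AddMinutes(minutesIntoDay), DateTimeKind.Unspecified);
                DateTime utc = TimeZoneInfo.ConvertTimeToUtc(local, zone);

                Console.WriteLine(utc);
            }
        }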

    Read the article

  • Check for state change in objects using serialization?

    - by sev7n
    I have a form that is bound to an object, and when the user tries to leave the form, I want to warn them if anything on the form has been changed and prompt them to save. My question is: is there any way to achieve this without implementing IComparable for all my classes in the bound object? I was thinking there might be a way to serialize my object when loading the form and do a simple comparison against the changed object, which also gets serialized. Something like a string comparison. Hope that makes sense. Thanks
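
    A minimal sketch of the idea, assuming the bound object is serializable (the types and fields here are hypothetical): snapshot the object to a string on load, snapshot again on close, and compare the two strings.

        using System;
        using System.IO;
        using System.Xml.Serialization;

        public class Customer
        {
            public string Name { get; set; }
            public string Email { get; set; }
        }

        class DirtyCheckDemo
        {
            // Serialize the object to a string so two states can be compared
            // with a plain string comparison.
            static string Snapshot<T>(T obj)
            {
                var serializer = new XmlSerializer(typeof(T));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, obj);
                    return writer.ToString();
                }
            }

            static void Main()
            {
                var customer = new Customer { Name = "Ann", Email = "ann@example.com" };
                string original = Snapshot(customer); // taken on form load

                customer.Email = "ann@contoso.com";   // user edits the bound object

                bool isDirty = original != Snapshot(customer); // check on close
                Console.WriteLine(isDirty); // True
            }
        }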

    Read the article

  • Is a safe accumulator really this complicated?

    - by Martin
    I'm trying to write an accumulator that is well behaved given unconstrained inputs. This seems to not be trivial and requires some pretty strict planning. Is it really this hard?

        int naive_accumulator(unsigned int max, unsigned int *accumulator,
                              unsigned int amount) {
            if(*accumulator + amount >= max) return 1; // could overflow
            *accumulator += amount; // could overflow
            return 0;
        }

        int safe_accumulator(unsigned int max, unsigned int *accumulator,
                             unsigned int amount) {
            // if amount >= max, then certainly *accumulator + amount >= max
            if(amount >= max) {
                return 1;
            }
            // based on the comparison above, max - amount is defined
            // but *accumulator + amount might not be
            if(*accumulator >= max - amount) {
                return 1;
            }
            // based on the comparison above, *accumulator + amount is defined
            // and *accumulator + amount < max
            *accumulator += amount;
            return 0;
        }

    Read the article

  • Optimality of Binary Search

    - by templatetypedef
    Hello all- This may be a silly question, but does anyone know of a proof that binary search is asymptotically optimal? That is, if we are given a sorted list of elements where the only permitted operation on those objects is a comparison, how do you prove that the search can't be done in o(lg n)? (That's little-o of lg n, by the way.) Note that I'm restricting this to elements where the only permitted operation is a comparison, since there are well-known algorithms that can beat O(lg n) on expectation if you're allowed to do more complex operations on the data (see, for example, interpolation search). Thanks so much! This has really been bugging me since it seems like it should be simple but has managed to resist all my best efforts. :-)
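
    One standard answer, sketched here (this is the classical decision-tree argument, stated in LaTeX notation, not anything from the original post):

        Model any comparison-based search as a binary decision tree: each internal
        node is one comparison, each leaf one outcome. Searching among $n$ elements
        has at least $n$ distinguishable outcomes, so the tree has at least $n$
        leaves. A binary tree with $\ell$ leaves has height
        $h \ge \lceil \log_2 \ell \rceil$, so some input forces
        $h \ge \lceil \log_2 n \rceil$ comparisons. Hence any comparison-based
        search takes $\Omega(\lg n)$ time in the worst case, and no such
        algorithm can run in $o(\lg n)$.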

    Read the article

  • Is the "==" operator required to be defined to use std::find

    - by user144182
    Let's say I have:

        class myClass
        std::list<myClass> myList

    where myClass does not define the == operator and only consists of public fields. In both VS2010 and VS2005 the following does not compile:

        myClass myClassVal = myList.front();
        std::find( myList.begin(), myList.end(), myClassVal )

    complaining about the lack of an == operator. I naively assumed it would do a value comparison of the myClass object's public members, but I am almost positive this is not correct. I assume if I define an == operator, or perhaps use a functor instead, it will solve the problem. Alternatively, if my list held pointers instead of values, the comparison would work. Is this right, or should I be doing something else?
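
    As a cross-language aside (hedged, since the question is about C++): .NET has the same split. Without an overridden Equals, List<T>.Contains compares references for class types, much as std::find needs operator==; passing a predicate sidesteps the requirement, like std::find_if with a functor. A minimal C# sketch with hypothetical fields:

        using System;
        using System.Collections.Generic;

        class MyClass
        {
            public int Id;
            public string Name;
        }

        class FindDemo
        {
            static void Main()
            {
                var myList = new List<MyClass>
                {
                    new MyClass { Id = 1, Name = "a" },
                    new MyClass { Id = 2, Name = "b" }
                };

                // A predicate spells out the member-by-member comparison the
                // container will not invent for you.
                MyClass match = myList.Find(m => m.Id == 2 && m.Name == "b");
                Console.WriteLine(match != null); // True
            }
        }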

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

        // Add the Data Flow Task
        package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = package.Executables[0] as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add OLE-DB source component - ** This is where we need the creation name **
        IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "OLEDBSource";
        componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference

        ADO NET Destination: Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        ADO NET Source: Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Aggregate: DTSTransform.Aggregate.2
        Audit: DTSTransform.Lineage.2
        Cache Transform: DTSTransform.Cache.1
        Character Map: DTSTransform.CharacterMap.2
        Checksum: Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Conditional Split: DTSTransform.ConditionalSplit.2
        Copy Column: DTSTransform.CopyMap.2
        Data Conversion: DTSTransform.DataConvert.2
        Data Mining Model Training: MSMDPP.PXPipelineProcessDM.2
        Data Mining Query: MSMDPP.PXPipelineDMQuery.2
        DataReader Destination: Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Derived Column: DTSTransform.DerivedColumn.2
        Dimension Processing: MSMDPP.PXPipelineProcessDimension.2
        Excel Destination: DTSAdapter.ExcelDestination.2
        Excel Source: DTSAdapter.ExcelSource.2
        Export Column: TxFileExtractor.Extractor.2
        Flat File Destination: DTSAdapter.FlatFileDestination.2
        Flat File Source: DTSAdapter.FlatFileSource.2
        Fuzzy Grouping: DTSTransform.GroupDups.2
        Fuzzy Lookup: DTSTransform.BestMatch.2
        Import Column: TxFileInserter.Inserter.2
        Lookup: DTSTransform.Lookup.2
        Merge: DTSTransform.Merge.2
        Merge Join: DTSTransform.MergeJoin.2
        Multicast: DTSTransform.Multicast.2
        OLE DB Command: DTSTransform.OLEDBCommand.2
        OLE DB Destination: DTSAdapter.OLEDBDestination.2
        OLE DB Source: DTSAdapter.OLEDBSource.2
        Partition Processing: MSMDPP.PXPipelineProcessPartition.2
        Percentage Sampling: DTSTransform.PctSampling.2
        Performance Counters Source: DataCollectorTransform.TxPerfCounters.1
        Pivot: DTSTransform.Pivot.2
        Raw File Destination: DTSAdapter.RawDestination.2
        Raw File Source: DTSAdapter.RawSource.2
        Recordset Destination: DTSAdapter.RecordsetDestination.2
        RegexClean: Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
        Row Count: DTSTransform.RowCount.2
        Row Count Plus: Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Number: Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
        Row Sampling: DTSTransform.RowSampling.2
        Script Component: Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        Slowly Changing Dimension: DTSTransform.SCD.2
        Sort: DTSTransform.Sort.2
        SQL Server Compact Destination: Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
        SQL Server Destination: DTSAdapter.SQLServerDestination.2
        Term Extraction: DTSTransform.TermExtraction.2
        Term Lookup: DTSTransform.TermLookup.2
        Trash Destination: Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
        TxTopQueries: DataCollectorTransform.TxTopQueries.1
        Union All: DTSTransform.UnionAll.2
        Unpivot: DTSTransform.UnPivot.2
        XML Source: Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dump out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

        using System;
        using System.Diagnostics;
        using Microsoft.SqlServer.Dts.Runtime;

        public class Program
        {
            static void Main(string[] args)
            {
                Application application = new Application();
                PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
                foreach (PipelineComponentInfo componentInfo in componentInfos)
                {
                    Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
                }
                Console.Read();
            }
        }

    Read the article

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI – "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem and then everything they think they're going to possibly need in the future. Since we can't predict the future this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear but the concept never sank in. I was one of those devs who'd happily write a ton of code in the anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code.

    If you are anything like me you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple, code that allows for or meets all future requirements. Since we can't see the future the best we can do is write code that can easily adapt to future requirements, this means writing flexible code. Flexible code is: Fast to understand. Fast to add to. Fast to modify. To be flexible code has to be simple, this means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI!

    The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C Martin explains it very nicely, he says a good architecture allows you to defer decisions, because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey, we have multiple clients, so I should build a web service for these items, that way we can access them from other clients", so I went to work and this is what I created. I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful!

    Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following.

    This design gives us future flexibility with no extra work; it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!

    Read the article

  • Lucene stop words not removed during searching need a substitute for AnalyzingQueryParser

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer.

        public class DocSpecAnalyzer extends Analyzer {
            private static CharArraySet stopSet;// = new HashSet<String>(Arrays.asList());//STOP_WORDS_SET;
            static {
                stopSet = new CharArraySet(FDConstants.stopwords, true);
                // uncommenting this displays all the stop words
                // for (String s: FDConstants.stopwords) {
                //     System.out.println(s);
                // }
            }

            /**
             * Specifies whether deprecated acronyms should be replaced with HOST type.
             * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068}
             */
            private final boolean enableStopPositionIncrements;
            private final Version matchVersion;

            public DocSpecAnalyzer(Version matchVersion) {
                this.matchVersion = matchVersion;
                enableStopPositionIncrements = StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion);
            }

            public TokenStream tokenStream(String fieldName, Reader reader) {
                StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader);
                tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH);
                TokenStream result = new StandardFilter(tokenStream);
                result = new LowerCaseFilter(result);
                result = new StopFilter(enableStopPositionIncrements, result, stopSet);
                result = new PorterStemFilter(result);
                return result;
            }

            /** Default maximum allowed token length */
            public static final int DEFAULT_MAX_TOKEN_LENGTH = 255;
        }

    Now when I search for documents for a query containing stop words, I get hits for the stop words as well. This is because http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html does not handle stop words. Is there a substitute?

    Update: forgot to mention that I need to do a fuzzy search; that is why I am using an AnalyzingQueryParser.

    Update: portion of code that invokes AnalyzingQueryParser:

        AnalyzingQueryParser parser = new AnalyzingQueryParser(Version.LUCENE_CURRENT, "description", analyzer);

        // fuzzy matching preparation
        String fuzzyStr = TextQuery.prepareFuzzy(tq.text, fuzzyDist);
        Query query = parser.parse(fuzzyStr);

        TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, true);
        searcher.search(query, collector);

    Read the article

  • New Cumulative Updates for SQL Server 2005 & SQL Server 2008 R2

    - by AaronBertrand
    Early this morning, the SQL Server Release Services team pushed out three new cumulative updates for SQL Server.

        KB #2489375 - SQL Server 2005 SP3 CU #14 (9.00.4317)
        KB #2489409 - SQL Server 2005 SP4 CU #2 (9.00.5259)
        KB #2489376 - SQL Server 2008 R2 CU #6 (10.50.1765)

    There are a lot more fixes in the 2008 R2 update - 43, by my count. In comparison, only 9 fixes for 2005 SP4, and only 2 fixes for 2005 SP3. You can draw your own conclusions from that data, particularly if you are still on SQL Server...(read more)

    Read the article
