Search Results

Search found 13889 results on 556 pages for 'results'.


  • Keyword to SQL search

    - by jdelator
    Use Case: When a user goes to my website, they will be confronted with a search box much like SO's. They can search for results using plain text: ".net questions", "closed questions", ".net and java", etc. The search will work a bit differently than SO's, in that it will try to use as much of the database schema as possible rather than doing a straight full-text search. So ".net questions" will only search for .net questions as opposed to .net answers (probably not applicable to the SO case, just an example here), "closed questions" will return questions that are closed, and ".net and java" will return questions that relate to .net and java and nothing else.

    Problem: I'm not too familiar with the terminology, but I basically want to do a keyword-to-SQL driven search. I know the schema of the database and I can also data-mine it. Before I try to implement this, I want to know about any approaches that already exist. I guess this question boils down to: what is a good design for the stated problem?

    Proposed: My proposed solution so far looks something like this:
    1. Clean the input: just remove any special characters.
    2. Parse the input into chunks of data: break an input of "c# java" into c# and java, and also handle special cases like "'c# java' questions", which becomes 'c# java' and "questions".
    3. Build a tree out of the input.
    4. Bind the data to metadata: convert things like "closed questions" and relate them to the isclosed column of a table.
    5. Convert the tree into a SQL query.

    Thoughts/suggestions/links?
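
    A minimal sketch of what step 4 (binding keywords to schema metadata) could look like, assuming a hypothetical Questions/QuestionTags schema and a hand-maintained keyword map; these table, column and method names are illustrations only, and anything the map does not recognize falls back to a full-text predicate:

        using System;
        using System.Collections.Generic;

        static class KeywordToSql
        {
            // Hypothetical keyword -> predicate map; table and column names are assumptions.
            static readonly Dictionary<string, string> Predicates =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
            {
                { "closed",    "q.IsClosed = 1" },
                { "questions", "q.PostType = 'question'" },
                { ".net",      "EXISTS (SELECT 1 FROM QuestionTags t WHERE t.QuestionId = q.Id AND t.Tag = '.net')" },
                { "java",      "EXISTS (SELECT 1 FROM QuestionTags t WHERE t.QuestionId = q.Id AND t.Tag = 'java')" },
            };

            public static string BuildQuery(IEnumerable<string> tokens)
            {
                var clauses = new List<string>();
                foreach (string token in tokens)
                {
                    string predicate;
                    if (Predicates.TryGetValue(token, out predicate))
                        clauses.Add(predicate);                                   // recognized keyword -> schema predicate
                    else
                        clauses.Add("CONTAINS(q.Body, '\"" + token.Replace("'", "''") + "\"')"); // full-text fallback
                }
                return "SELECT q.* FROM Questions q WHERE " + string.Join(" AND ", clauses);
            }
        }

    For example, BuildQuery(new[] { ".net", "questions" }) would AND the tag predicate with the post-type predicate; in a real implementation the fallback clause would be parameterized rather than concatenated.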

    Read the article

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizable data structure to disk (edit: think dozens of MBs). Being an optimist, I thought that there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:
    - .NET 2.0 support, preferably with a FOSS implementation
    - Version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
    - Ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
    - Space and time efficient (XML has been excluded as an option given this requirement)

    Options considered so far:
    - Protocol Buffers: turned down by the verdict of the documentation about Large Data Sets - since this comment suggested adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
    - HDF5, EXI: do not seem to have .NET implementations
    - SQLite/SQL Server Compact Edition: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use
    - BSON: does not appear to support requirement 3
    - Fast Infoset: only seems to have paid .NET implementations

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
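
    Not an endorsement of any particular library, just a minimal sketch of the kind of tag-length-value layout that makes the "version friendly" requirement cheap to satisfy by hand: every field is written as (tag, byte length, payload), so an old reader can skip tags it does not know and a new reader can treat missing tags as defaults. The tag numbers and names here are assumptions, and this sketch does not address the random-access requirement by itself.

        using System.IO;

        static class TlvFormat
        {
            // Hypothetical field tags; new versions add tags, they never reuse them.
            const ushort TagName = 1, TagCreated = 2, TagPayload = 3;

            public delegate void FieldHandler(ushort tag, byte[] payload); // .NET 2.0-friendly callback

            public static void WriteField(BinaryWriter w, ushort tag, byte[] payload)
            {
                w.Write(tag);
                w.Write(payload.Length);
                w.Write(payload);
            }

            // Readers skip or ignore unknown tags, which is what keeps old files
            // readable after fields are added and new files readable after fields
            // are dropped.
            public static void ReadFields(BinaryReader r, FieldHandler onField)
            {
                while (r.BaseStream.Position < r.BaseStream.Length)
                {
                    ushort tag = r.ReadUInt16();
                    int length = r.ReadInt32();
                    byte[] payload = r.ReadBytes(length);
                    onField(tag, payload);
                }
            }
        }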

    Read the article

  • EF 4.0 : Save Changes Retry Logic

    - by BGR
    Hi, I would like to implement an application-wide retry system for all entity SaveChanges method calls. Technologies: Entity Framework 4.0, .NET 4.0.

        namespace Sample.Data.Store.Entities
        {
            public partial class StoreDB
            {
                public override int SaveChanges(System.Data.Objects.SaveOptions options)
                {
                    for (Int32 attempt = 1; ; )
                    {
                        try
                        {
                            return base.SaveChanges(options);
                        }
                        catch (SqlException sqlException)
                        {
                            // Increment tries
                            attempt++;

                            // Maximum number of tries
                            Int32 maxRetryCount = 5;

                            // Throw if we have reached the maximum number of retries
                            if (attempt == maxRetryCount)
                                throw;

                            // Determine if we should retry or abort
                            if (!RetryLitmus(sqlException))
                                throw;
                            else
                                Thread.Sleep(ConnectionRetryWaitSeconds(attempt));
                        }
                    }
                }

                static Int32 ConnectionRetryWaitSeconds(Int32 attempt)
                {
                    Int32 connectionRetryWaitSeconds = 2000;

                    // Backoff throttling
                    connectionRetryWaitSeconds = connectionRetryWaitSeconds * (Int32)Math.Pow(2, attempt);

                    return (connectionRetryWaitSeconds);
                }

                /// <summary>
                /// Determine from the exception whether the execution
                /// of the connection should be attempted again
                /// </summary>
                /// <param name="sqlException">The SqlException that was thrown</param>
                /// <returns>True if a retry is needed, false if not</returns>
                static Boolean RetryLitmus(SqlException sqlException)
                {
                    switch (sqlException.Number)
                    {
                        // The service has encountered an error
                        // processing your request. Please try again.
                        // Error code %d.
                        case 40197:
                        // The service is currently busy. Retry
                        // the request after 10 seconds. Code: %d.
                        case 40501:
                        // A transport-level error has occurred when
                        // receiving results from the server. (provider:
                        // TCP Provider, error: 0 - An established connection
                        // was aborted by the software in your host machine.)
                        case 10053:
                            return (true);
                    }

                    return (false);
                }
            }
        }

    The problem: how can I get StoreDB.SaveChanges to retry on a new DB context after an error occurs? Something similar to Detach/Attach might come in handy. Thanks in advance! Bart
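
    One hedged way to approach the "new context per attempt" part is to move the retry loop outside the context, so every attempt re-applies the pending changes to a freshly constructed StoreDB. The applyChanges delegate, the parameterless StoreDB constructor, and making RetryLitmus accessible from outside the class are assumptions of this sketch, not part of the original code:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static class RetryingSave
        {
            // applyChanges must be able to rebuild its inserts/updates against the
            // context it is handed, since a brand new context has no tracked entities.
            public static int SaveWithFreshContext(Action<StoreDB> applyChanges, int maxRetryCount = 5)
            {
                for (int attempt = 1; ; attempt++)
                {
                    using (var context = new StoreDB())
                    {
                        try
                        {
                            applyChanges(context);
                            return context.SaveChanges();
                        }
                        catch (SqlException ex)
                        {
                            // RetryLitmus is the check from the question, assumed made internal/public.
                            if (attempt >= maxRetryCount || !StoreDB.RetryLitmus(ex))
                                throw;
                            Thread.Sleep(2000 * (int)Math.Pow(2, attempt)); // same backoff idea as above
                        }
                    }
                }
            }
        }

    If the SaveChanges override shown in the question is kept, its internal loop would be removed so the retries are not nested.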

    Read the article

  • zxing project on android

    - by Aisthesis Cronos
    Hello everybody. Many weeks ago I started a mini project on Android that requires ZXing. I followed several tutorials on this website and elsewhere (for example tuto1, and many tags and tutorials here: tuto2, tuto3...), but I failed each time. I can't import the ZXing Android project into the Eclipse IDE to compile it with my own code (rather than going through an Intent to the ZXing APK), and my program looks like this example:

        private Button.OnClickListener btScanListener = new Button.OnClickListener() {
            public void onClick(View v) {
                Intent intent = new Intent("com.google.zxing.client.android.SCAN");
                intent.putExtra("SCAN_MODE", "QR_CODE_MODE");
                try {
                    startActivityForResult(intent, REQUEST_SCAN);
                } catch (ActivityNotFoundException e) {
                    Toast.makeText(Main.this, "Barcode Scanner not installed", 2000).show();
                }
            }
        };

        public void onActivityResult(int reqCode, int resCode, Intent intent) {
            if (REQUEST_SCAN == reqCode) {
                if (RESULT_OK == resCode) {
                    String contents = intent.getStringExtra("SCAN_RESULT");
                    Toast.makeText(this, "Succès : " + contents, 2000).show();
                } else if (RESULT_CANCELED == resCode) {
                    Toast.makeText(this, "Scan annulé", 2000).show();
                }
            }
        }

    I'm quite frustrated at this point: I still have errors after importing the project. I tried both ZXing 1.5 and 1.6, importing the project from c:\ZXing-1.6\android as well as a new project from c:\ZXing-1.6\zxing-1.6\android, and I also checked out SVN http://zxing.googlecode.com/svn/trunk/ (zxing-read-only) with TortoiseSVN and reproduced the same steps, but unfortunately without results. Please help me solve this problem: how can I import the project and compile it correctly with my own project?

    1 - I use Windows 7 64-bit Home Premium
    2 - Eclipse IDE for Java EE Web Developers, Version: Helios Service Release 2, Build id: 20110218-0911

    What is a sure, effective way to get this running? If there is a video, a detailed guide, or someone who has already done it previously, I would really appreciate the help.

    Read the article

  • Flash video slooow in AIR 2 HTMLLoader component

    - by shane
    I am working on a full-screen kiosk application in Flex 4/AIR 2 using Flash Builder 4. We have a company training website which staff can access via the kiosk, and the main content is interactive Flash training videos. Our target machines are by no means beefy: they are Atom N270s @ 1.6GHz with 1GB RAM. As it stands, the videos are all but unusable when played from within the AIR application; the application becomes completely unresponsive (100% CPU usage, click events take approx 5-10 seconds to register). So far I have tried:
    - Increasing the default frame rate from 24fps to 60 (nativeWindow.stage.frameRate = 60;). No improvement.
    - Running the videos in a stripped-down version of my app, just a full-screen HTMLLoader component pointed at the training website. No better than before.
    - Disabling hyper-threading. The Atom CPU is split into two virtual cores, and the AIR app was only able to use one thread, so it maxed out at 50% CPU usage. Since the kiosk will only run the AIR app, I am happy to lose hyper-threading to increase the performance of the AIR app. Marginal improvement.

    The same website with the same videos is responsive if viewed in IE7 on the same machine, although Internet Explorer does take advantage of the CPU's hyper-threading. The Flash videos are built with Adobe Captivate and, from what I understand, employ JavaScript to relay results back to the server. I will add more information about the video content as soon as possible, as the training guru is back in the office later this week.

    Read the article

  • GAE AttributeError

    - by awegawef
    My GAE app runs fine from my computer, but when I upload it, I start getting an AttributeError, specifically:

        AttributeError: 'dict' object has no attribute 'item'

    I am using the pylast interface (an API for last.fm). Specifically, I am accessing a list of variables of this type:

        SimilarItem = _namedtuple("SimilarItem", ["item", "match"])

    I have a variable of this type, call it sim, and I am trying to access sim.item when I get the AttributeError. I should note that I am using Python 2.6 on my computer, and I understand that GAE runs on Python 2.5. Would that make a difference here? I thought they were backwards-compatible. Lastly, I think it could be a problem with the modules that pylast imports -- maybe they don't work with GAE or something? I did some research but I didn't get any results. Here are the imports:

        import hashlib
        import httplib
        import urllib
        import threading
        from xml.dom import minidom
        import xml.dom
        import time
        import shelve
        import tempfile
        import sys
        import htmlentitydefs

    I would appreciate any help with this frustrating issue. Thanks in advance.

    Read the article

  • StackOverflowException - but obviously NO recursion/endless loop

    - by user567706
    Hi there, I've been blocked by this problem the entire day. I've read thousands of Google results, but nothing seems to reflect my problem or even come near to it... I hope one of you has a push in the right direction for me.

    I wrote a client-server application (so more like two applications): the client collects data about its system, as well as a screenshot, serializes all this into an XML stream (the picture as a byte[] array) and sends it to the server at regular intervals. The server receives the stream (via TCP), deserializes the XML into an information object and shows the information on a Windows form.

    This process runs stably for about 20-25 minutes at a submission interval of 3 seconds. When observing the memory usage there's nothing significant to see; it is also fairly stable. But after these 20-25 minutes the server throws a StackOverflowException at the point where it deserializes the TCP stream, specifically when setting the Image property from the byte[] array. I thoroughly searched for recursion or endless loops, and given that the exception occurs after thousands of successful intervals, I can hardly imagine that is the cause.

        public byte[] ImageBase
        {
            get
            {
                MemoryStream ms = new MemoryStream();
                _screen.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                return ms.GetBuffer();
            }
            set
            {
                if (_screen != null)
                    _screen.Dispose(); // preventing well-known image memory leak

                MemoryStream ms = new MemoryStream(value);
                try
                {
                    _screen = Image.FromStream(ms); // << EXCEPTION THROWN HERE
                }
                catch (StackOverflowException ex) // thx to new CLR management this won't work anymore -.-
                {
                    Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
                }
                ms.Dispose();
                ms = null;
            }
        }

    I hope more code is unnecessary, as it could get very complex... Please help, I have no clue at all anymore. Thx, Chris
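
    Not necessarily the cause of the StackOverflowException, but one thing worth checking in the setter above: GDI+ requires the stream handed to Image.FromStream to stay open for the lifetime of the Image, and here the MemoryStream is disposed immediately afterwards. A hedged sketch (ScreenHolder is a hypothetical wrapper class) that copies the decoded image so the stream can be released safely:

        using System.Drawing;
        using System.IO;

        class ScreenHolder
        {
            private Image _screen;

            public byte[] ImageBase
            {
                set
                {
                    if (_screen != null)
                        _screen.Dispose();

                    // Copy the decoded frame into a standalone Bitmap; after the copy
                    // the MemoryStream is no longer referenced by the Image and can be
                    // disposed safely (GDI+ otherwise needs the stream kept open).
                    using (MemoryStream ms = new MemoryStream(value))
                    using (Image decoded = Image.FromStream(ms))
                    {
                        _screen = new Bitmap(decoded);
                    }
                }
            }
        }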

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained-mode drawing application in GDI+. The application can draw simple shapes to a canvas and perform basic editing. The math that does this is optimized to the last byte and is not an issue. I'm drawing on a panel that is using the built-in ControlStyles.DoubleBuffer.

    Now, my problem arises if I run my app maximized on a big monitor (HD in my case). If I try to draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage goes way up. Each graphical object in my app has a bounding box. Thus, when I invalidate the bounding box of a line that goes from one corner of the maximized app to the opposite diagonal one, that bounding box is virtually as big as the canvas. When a user is drawing a line, this invalidation of the bounding box happens on the mousemove event, and there is a clear, visible lag. This lag also exists if the line is the only object on the canvas.

    I've tried to optimize this in many ways:
    - If I draw a shorter line, the CPU usage and the lag go down.
    - If I remove the Invalidate() and keep all other code, the app is quick.
    - If I use a Region (that only spans the figure) to invalidate instead of the bounding box, it is just as slow.
    - If I split the bounding box into a range of smaller boxes that lie back to back, thus reducing the invalidation area, no visible performance gain can be seen.

    Thus I'm at a loss here. How can I speed up the invalidation? On a side note, both Paint.NET and MSPaint suffer from the same shortcomings. Word and PowerPoint, however, seem to be able to paint a line as described above with no lag and no CPU load at all. So it is possible to achieve the desired result; the question is how.
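
    One common approach, offered only as a sketch and not a drop-in fix (canvas, DrawAllShapes, dragStart and dragCurrent are hypothetical names): cache the already-committed shapes in an offscreen bitmap so that each mousemove repaint only blits the cache and draws the single rubber-band line, no matter how large the invalidated rectangle is.

        using System.Drawing;
        using System.Windows.Forms;

        partial class CanvasForm : Form
        {
            private Bitmap sceneCache;            // all committed shapes, pre-rendered
            private Point dragStart, dragCurrent; // the line currently being drawn

            private void RebuildCache(Panel canvas)
            {
                if (sceneCache != null) sceneCache.Dispose();
                sceneCache = new Bitmap(canvas.ClientSize.Width, canvas.ClientSize.Height);
                using (Graphics g = Graphics.FromImage(sceneCache))
                    DrawAllShapes(g);             // expensive pass, done only when shapes change
            }

            private void Canvas_Paint(object sender, PaintEventArgs e)
            {
                e.Graphics.DrawImageUnscaled(sceneCache, 0, 0);          // cheap blit of the cache
                e.Graphics.DrawLine(Pens.Black, dragStart, dragCurrent); // only the live line is re-rendered
            }

            private void DrawAllShapes(Graphics g) { /* hypothetical retained-mode render pass */ }
        }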

    Read the article

  • Reliable strtotime() result for different languages

    - by Maksee
    There has always been a strange bug in Joomla when adding a new article with the back-end displayed in a language other than English (for me it's Russian): the field "Finish Publishing" starts out as the current date instead of the Russian equivalent of "Never". For a site on PHP 4 I finally found that the strtotime function returns different results for arbitrary words. For "Never" it is always -1, and Joomla relies on this result in the JDate implementation. But in other cases it sometimes returns a valid date. For the Russian translation of "Never" ("никогда") that is what happens, but also for a single "N", so if someone decided to change the string to something else they would face the same issue. So the code below

        <?php
        echo "Res:".strtotime("N")."<br>";
        echo "Res:".strtotime("Nev")."<br>";
        echo "Res:".strtotime("Neve")."<br>";
        echo "Res:".strtotime("Never")."<br>";
        ?>

    outputs:

        Res:1271120400
        Res:-1
        Res:-1
        Res:-1

    So what would the solution be in this case? I would prefer not to write a language-specific date.php handler, but to modify the date method of the JDate class. What language-neutral changes would be needed to detect an invalid string? Thank you

    Read the article

  • Access denied error on select into outfile using Zend

    - by Peter
    Hi, I'm trying to make a dump of a MySQL table on the server and I'm trying to do this in Zend. I have a model/mapper/dbtable structure for all my connections to my tables, and I'm adding the following code to the mappers:

        public function dumpTable()
        {
            $db = $this->getDbTable()->getAdapter();
            $name = $this->getDbTable()->info('name');
            $backupFile = APPLICATION_PATH . '/backup/' . date('U') . '_' . $name . '.sql';
            $query = "SELECT * INTO OUTFILE '$backupFile' FROM $name";
            $db->query( $query );
        }

    This should work peachy, I thought, but instead I get:

        Message: Mysqli prepare error: Access denied for user 'someUser'@'localhost' (using password: YES)

    I checked the rights for someUser and he has all the rights to the database and table in question. I've been looking around here and on the net in general, and usually turning on "all" the rights for the user seems to be the solution, but not in my case (unless I'm overlooking something right now with my tired eyes; also, I don't want to turn on "all" on my production server). What am I doing wrong here? Or does anybody know a more elegant way to get this done in Zend?

    Read the article

  • How can I write reusable Javascript?

    - by RenderIn
    I've started to wrap my functions inside of objects, e.g.:

        var Search = {
            carSearch: function(color) {
            },
            peopleSearch: function(name) {
            },
            ...
        }

    This helps a lot with readability, but I continue to have issues with reusability. To be more specific, the difficulty is in two areas:

    1. Receiving parameters. A lot of the time I will have a search screen with multiple input fields and a button that calls the JavaScript search function. I have to either put a bunch of code in the onclick of the button to retrieve and then marshal the values from the input fields into the function call, or I have to hardcode the HTML input field names/IDs so that I can subsequently retrieve them with JavaScript. The solution I've settled on is to pass the field names/IDs into the function, which it then uses to retrieve the values from the input fields. This is simple but really seems improper.

    2. Returning values. The effect of most JavaScript calls tends to be one in which some visual on the screen changes directly, or as a result of another action performed in the call. Reusability is toast when I put these screen-altering effects at the end of a function. For example, after a search is completed I need to display the results on the screen.

    How do others handle these issues? Putting my thinking cap on leads me to believe that I need a page-specific layer of JavaScript between each use in my application and the generic methods I create for application-wide use. Using the previous example, I would have a search button whose onclick calls myPageSpecificSearchFunction, in which the search field IDs/names are hardcoded; it marshals the parameters and calls the generic search function. The generic function would return data/objects/variables only, and would not directly read from or make any changes to the DOM. The page-specific search function would then receive this data back and alter the DOM appropriately. Am I on the right path, or is there a better pattern for handling the reuse of JavaScript objects/methods?

    Read the article

  • Mathematical annotations in a PDF file

    - by kvaruni
    I like to annotate the papers I read digitally. Numerous programs exist to help in this process; for example, on OS X one can use programs such as Skim or even Preview. However, making annotations is dreadful when one wishes to add mathematical annotations, such as formulas or Greek letters. A cumbersome "solution" is to select the desired symbols one by one using the Special Characters palette, though this considerably slows down the annotation process.

    Is there any way to add mathematical annotations to a PDF? The only two limitations I would impose on a solution are that 1) the mathematical text needs to be selectable, i.e. it must be text, and 2) I want to limit the number of programs I need, to make the process as painless as possible.

    Some of the more promising approaches I have tried include generating LaTeX with LaTeXiT, but it seems to be impossible to add a PDF on top of another PDF. Another attempt was to use jsMath to generate the symbols and copy-paste these as an annotation using one of the jsMath fonts. This results in unreadable, incorrect characters.

    Read the article

  • LINQ - is SkipWhile broken?

    - by Judah Himango
    I'm a bit surprised by the results of the following code, where I simply want to remove all 3s from a sequence of ints:

        var sequence = new [] { 1, 1, 2, 3 };
        var result = sequence.SkipWhile(i => i == 3); // Oh noes! Returns { 1, 1, 2, 3 }

    Why isn't 3 skipped? My next thought was, OK, the Except operator will do the trick:

        var sequence = new [] { 1, 1, 2, 3 };
        var result = sequence.Except(i => i == 3); // Oh noes! Returns { 1, 2 }

    In summary: Except removes the 3, but also removes non-distinct elements. Grr. SkipWhile doesn't skip the last element, even if it matches the condition. Grr.

    Can someone explain why SkipWhile doesn't skip the last element? And can anyone suggest which LINQ operator I can use to remove the '3' from the sequence above?
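
    For what it's worth, a short note on the behaviour shown above: SkipWhile only skips a leading prefix of the sequence while the predicate holds, and the first element (1) already fails i == 3, so nothing is skipped at all. Filtering every element is what Where does; a minimal sketch:

        using System;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                var sequence = new[] { 1, 1, 2, 3 };
                // Where filters every element, so duplicates survive and every 3 is dropped.
                var withoutThrees = sequence.Where(i => i != 3);
                Console.WriteLine(string.Join(", ", withoutThrees)); // 1, 1, 2
            }
        }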

    Read the article

  • How do I pass an array of structs (containing std:string or BSTR) from ATL to C#. SafeArray? Varian

    - by Andrew
    Hi, I have an ATL COM object that I am using from C#. The interface currently looks like:

        interface ICHASCom : IDispatch {
            [id(1), helpstring("method Start")] HRESULT Start([in] BSTR name, [out,retval] VARIANT_BOOL* result);
            ...
            [id(4), helpstring("method GetCount")] HRESULT GetCount([out,retval] LONG* numPorts);
            ...
            [id(7), helpstring("method EnableLogging")] HRESULT EnableLogging([in] VARIANT_BOOL enableLogging);
        };

    That is, it's a very simple interface. I also have some events that I send back too. Now, I would like to add something to the interface. In the ATL code I have some results, which are currently structs that look like:

        struct REPORT_LINE
        {
            string creationDate;
            string Id;
            string summary;
        };

    All the members of the struct are std::string. I have an array of these that I need to get back to the C# side. What's the best way to do this? I suspect someone is going to say, "Hey, you can't just send std::string over COM like that." If so, fine, but what's the best way to modify the struct? Change the std::string members to BSTR? And then:

    1) How do I set up the IDL to pass an array of structs (structs with BSTR or std::string)?
    2) If I must use SAFEARRAYs, how do I fill the SAFEARRAYs with the structs? I'm not familiar with COM except for use with simple types.

    Thanks, Dave

    Read the article

  • P6 Architecture - Register renaming aside, does the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts.

    As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, but the internal register count has grown tremendously; these internal registers are not generally available, however, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that while these optimizations help overall throughput and parallel algorithms, the limited architectural register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of?

    Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?

    Read the article

  • Problem installing RMagick rubygem on Centos 5

    - by Keith Pitty
    I'm having problems installing the RMagick rubygem on CentOS 5. I've followed the steps detailed in http://rmagick.rubyforge.org/install2-linux.html but when I try:

        sudo gem install rmagick

    the result is:

        Building native extensions.  This could take a while...
        ERROR:  Error installing rmagick:
            ERROR: Failed to build gem native extension.

        /usr/local/bin/ruby extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... no
        Can't install RMagick 2.11.0. Can't find Magick-config in /usr/bin:/bin

        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/local/bin/ruby

        Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/rmagick-2.11.0 for inspection.
        Results logged to /usr/local/lib/ruby/gems/1.8/gems/rmagick-2.11.0/ext/RMagick/gem_make.out

    The directory /usr/local/bin contains Magick-config, but I haven't been able to get rubygems to look there. I tried the following but the result was the same:

        sudo gem install rmagick -- --with-opt-dir=/usr/local/bin

    Any suggestions would be appreciated.

    Read the article

  • Catching MediaPlayer Exceptions from WPF MediaElement Control

    - by ScottCate
    I'm playing video in a MediaElement in WPF. It works thousands of times, over and over again. But once in a blue moon (like once a week), I get a Windows exception (you know, the Dr. Watson crash dialog). The MediaElement doesn't expose an error; it just crashes and sits there with an ugly crash report on the screen. If you "view this report" you can see it is in fact MediaPlayer that has crashed.

    I know I can disable the crash reports from popping up, but I'm more interested in finding out what's going wrong. I'm not sure how to capture the contents of the Dr. Watson report, but I have the dialog open now if someone has advice on a better way to capture it. Here is the opening line of data, which points to my application and then to wmvdecod.dll:

        AppName: ScottApp.exe  AppVer: 2.2009.2291.805  AppStamp: 4a36c812
        ModName: wmvdecod.dll  ModVer: 11.0.5721.5145   ModStamp: 453711a3
        fDebug: 0              Offset: 000cbc88

    And from the Windows event log (same information):

        Event Type: Error
        Event Source: .NET Runtime 2.0 Error Reporting
        Event Category: None
        Event ID: 1000
        Date: 7/13/2009
        Time: 10:20:27 AM
        User: N/A
        Computer: 28022
        Description: Faulting application ScottApp.exe, version 2.2009.2291.805, stamp 4a36c812,
        faulting module wmvdecod.dll, version 11.0.5721.5145, stamp 453711a3, debug? 0,
        fault address 0x000cbc88.

    Read the article

  • Bitbucket API authentication with Python's HTTPBasicAuthHandler

    - by jbochi
    I'm trying to get the list of issues on a private repository using Bitbucket's API. I have confirmed that HTTP Basic authentication works with hurl, but I am unable to authenticate in Python. Adapting the code from this tutorial, I have written the following script:

        import cookielib
        import urllib2

        class API():
            api_url = 'http://api.bitbucket.org/1.0/'

            def __init__(self, username, password):
                self._opener = self._create_opener(username, password)

            def _create_opener(self, username, password):
                cj = cookielib.LWPCookieJar()
                cookie_handler = urllib2.HTTPCookieProcessor(cj)
                password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
                password_manager.add_password(None, self.api_url, username, password)
                auth_handler = urllib2.HTTPBasicAuthHandler(password_manager)
                opener = urllib2.build_opener(cookie_handler, auth_handler)
                return opener

            def get_issues(self, username, repository):
                query_url = self.api_url + 'repositories/%s/%s/issues/' % (username, repository)
                try:
                    handler = self._opener.open(query_url)
                except urllib2.HTTPError, e:
                    print e.headers
                    raise e
                return handler.read()

        api = API(username='my_username', password='XXXXXXXX')
        api.get_issues('my_username', 'my_repository')

    This results in:

        Server: nginx/0.7.62
        Date: Mon, 19 Apr 2010 16:15:06 GMT
        Content-Type: text/plain
        Connection: close
        Vary: Authorization,Cookie
        Content-Length: 9

        Traceback (most recent call last):
          File "C:/USERS/personal/bitbucket-burndown/bitbucket-api.py", line 29, in <module>
            print api.get_issues('my_username', 'my_repository')
          File "C:/USERS/personal/bitbucket-burndown/bitbucket-api.py", line 25, in get_issues
            raise e
        HTTPError: HTTP Error 401: UNAUTHORIZED

    api.get_issues('jespern', 'bitbucket') works like a charm. What's wrong with my code?

    Read the article

  • How do you efficiently implement a document similarity search system?

    - by Björn Lindqvist
    How do you implement a "similar items" system for items described by a set of tags?

    In my database, I have three tables: Article, ArticleTag and Tag. Each Article is related to a number of Tags via a many-to-many relationship. For each Article I want to find the five most similar articles, to implement an "if you like this article you will like these too" system.

    I am familiar with cosine similarity, and using that algorithm works very well. But it is way too slow: for each article, I need to iterate over all articles, calculate the cosine similarity for the article pair and then select the five articles with the highest similarity rating. With 200k articles and 30k tags, it takes me half a minute to calculate the similar articles for a single article.

    So I need another algorithm that produces roughly as good results as cosine similarity but that can be run in real time and which does not require me to iterate over the whole document corpus each time. Maybe someone can suggest an off-the-shelf solution for this? Most of the search engines I looked at do not enable document-similarity searching.
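
    A hedged sketch of one way to avoid scanning the whole corpus (the in-memory index shapes below are assumptions, not the poster's actual schema): build an inverted index from tag to articles, then score only the articles that share at least one tag with the article in question. With binary tag vectors, cosine similarity reduces to shared / sqrt(|tagsA| * |tagsB|).

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class SimilarArticles
        {
            // Hypothetical indexes, assumed to be loaded once from Article/ArticleTag/Tag:
            // tagId -> articleIds carrying that tag, and articleId -> its tagIds.
            public static Dictionary<int, List<int>> ArticlesByTag = new Dictionary<int, List<int>>();
            public static Dictionary<int, int[]> TagsByArticle = new Dictionary<int, int[]>();

            public static IEnumerable<int> TopSimilar(int articleId, int take)
            {
                int[] myTags = TagsByArticle[articleId];

                // Count shared tags only for articles reachable through the inverted index.
                var shared = new Dictionary<int, int>();
                foreach (int tag in myTags)
                    foreach (int other in ArticlesByTag[tag])
                        if (other != articleId)
                        {
                            int n;
                            shared.TryGetValue(other, out n);
                            shared[other] = n + 1;
                        }

                // Binary-vector cosine: shared / sqrt(|tagsA| * |tagsB|).
                return shared
                    .OrderByDescending(kv => kv.Value / Math.Sqrt(myTags.Length * (double)TagsByArticle[kv.Key].Length))
                    .Take(take)
                    .Select(kv => kv.Key);
            }
        }

    Since articles that share no tags have similarity zero anyway, skipping them loses nothing, and each query only touches the handful of articles adjacent through the tag index.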

    Read the article

  • UITableView and SearchBar problem

    - by dododedodonl
    Hi all, I'm trying to add a search bar to my UITableView. I followed this tutorial: http://clingingtoideas.blogspot.com/2010/02/uitableview-how-to-part-2-search.html. I'm getting this error if I type a letter in the search box:

        Rooster(10787,0xa05ed4e0) malloc: *** error for object 0x3b5f160: double free
        *** set a breakpoint in malloc_error_break to debug

    This error occurs here:

        - (BOOL)searchDisplayController:(UISearchDisplayController *)controller shouldReloadTableForSearchString:(NSString *)searchString {
            [self handleSearchForTerm:searchString];
            return YES;
        }

    (on the second line)

        - (void)handleSearchForTerm:(NSString *)searchTerm {
            [self setSavedSearchTerm:searchTerm];
            if ([self searchResults] == nil) {
                NSMutableArray *array = [[NSMutableArray alloc] init];
                [self setSearchResults:array];
                [array release];
            }

            //Empty the searchResults array
            [[self searchResults] removeAllObjects];

            //Check if the searchTerm doesn't equal zero...
            if ([[self savedSearchTerm] length] != 0) {
                //Search the whole tableList (datasource)
                for (NSString *currentString in tableList) {
                    NSString *klasString = [[NSString alloc] init];
                    NSInteger i = [[leerlingNaarKlasList objectAtIndex:[tableList indexOfObject:currentString]] integerValue];
                    if (i != -1) {
                        klasString = [klassenList objectAtIndex:(i - 1)];
                    }

                    //Check if the string matched or the klas (group of school)
                    if ([currentString rangeOfString:searchTerm options:NSCaseInsensitiveSearch].location != NSNotFound ||
                        [klasString rangeOfString:searchTerm options:NSCaseInsensitiveSearch].location != NSNotFound) {
                        //Add to results
                        [[self searchResults] addObject:currentString];

                        //Save the klas (group of school). It has the same index as the result (lastname)
                        NSString *strI = [[NSString alloc] initWithFormat:@"%i", i];
                        [[self searchResultsLeerlingNaarKlas] addObject:strI];
                        [strI release];
                    }
                    [klasString release];
                }
            }
        }

    Can someone help me out? Regards, Dodo

    Read the article

  • Making dtSearch highlight one hit per phrase, rather than one hit per word-in-a-phrase

    - by Chris
    I'm using dtSearch to highlight text search matches within a document. The code to do this, minus some details and cleanup, is roughly along these lines:

        SearchJob sj = new SearchJob();
        sj.Request = "\"audit trail\""; // the user query
        sj.FoldersToSearch.Add(path_to_src_document);
        sj.Execute();

        FileConverter fileConverter = new FileConverter();
        fileConverter.SetInputItem(sj.Results, 0);
        fileConverter.BeforeHit = "<a name=\"HH_%%ThisHit%%\"/><b>";
        fileConverter.AfterHit = "</b>";
        fileConverter.Execute();
        string myHighlightedDoc = fileConverter.OutputString;

    If I give dtSearch a quoted phrase query like "audit trail", then dtSearch will do hit highlighting like this:

        An <a name="HH_0"/><b>audit</b> <a name="HH_1"/><b>trail</b> is a fun thing to have an
        <a name="HH_2"/><b>audit</b> <a name="HH_last"/><b>trail</b> about!

    Note that each word of the phrase is highlighted separately. Instead, I would like phrases to be highlighted as whole units, like this:

        An <a name="HH_0"/><b>audit trail</b> is a fun thing to have an
        <a name="HH_last"/><b>audit trail</b> about!

    This would A) make the highlighting look better, B) improve the behavior of my JavaScript that helps users navigate from hit to hit, and C) give more accurate counts of the total number of hits. Is there a good way to make dtSearch highlight phrases this way?

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL 2005: one that holds a large amount of static data (SQL Database 1, never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements; assume for the following problem that combining them is not practical.

    There are places in SQLDB2 where PKs from SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that's 84% of the cost of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie

    Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • Better way of looping to detect change

    - by Dremation
    As of now I'm using a while(true) loop to detect changes in memory. The problem is that it's killing the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without a huge performance cost. Does anyone have ideas on this?

        memScan = new Thread(ScanMem);

        public static void ScanMem()
        {
            int i = addy.Length;
            while (true)
            {
                Thread.Sleep(30000); // I do this to cut down on cpu usage
                for (int j = 0; j < i; j++)
                {
                    string[] values = addy[j].Split(new char[] { Convert.ToChar(",") });
                    //MessageBox.Show(values[2]);
                    try
                    {
                        if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString())
                        {
                            //Ok, it changed, let's do our work
                            if (Globals.Working) return;
                            SomeFunction("Results: " + values[2].ToString(), "Memory");
                            Globals.Working = true;
                        } //end if
                    } //end try
                    catch { }
                } //end for
            } //end while
        } //end void
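
    Purely as a hedged sketch, assumed to live in the same class as the original ScanMem so that hwnd, addy, Globals, Memory.Scanner and SomeFunction resolve exactly as they do above (WatchedAddress is a hypothetical helper, and System.Collections.Generic is assumed to be imported): parse the comma-separated "address,expectedValue,label" strings once up front, so each pass only does the memory reads and the comparison, and the poll interval can then be shortened without the per-iteration string-splitting cost.

        class WatchedAddress
        {
            public IntPtr Address;
            public string Expected;
            public string Label;
        }

        public static void ScanMem()
        {
            // Parse the "address,expectedValue,label" strings once, not on every pass.
            var watches = new List<WatchedAddress>();
            foreach (string entry in addy)
            {
                string[] parts = entry.Split(',');
                watches.Add(new WatchedAddress
                {
                    Address = (IntPtr)Convert.ToInt32(parts[0], 16),
                    Expected = parts[1],
                    Label = parts[2]
                });
            }

            while (!Globals.Working)
            {
                foreach (WatchedAddress w in watches)
                {
                    string current = Memory.Scanner.getIntFromMem(hwnd, w.Address, 32).ToString();
                    if (current != w.Expected)
                    {
                        SomeFunction("Results: " + w.Label, "Memory");
                        Globals.Working = true;
                        return;
                    }
                }
                Thread.Sleep(500); // a much shorter poll is affordable once each pass is cheap
            }
        }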

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer.

    I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE.

    What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help.

    Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses.

    The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.

    Read the article

  • How can I improve my real-time behavior in multi-threaded app using pthreads and condition variables

    - by WilliamKF
    I have a multi-threaded application that is using pthreads. I have a mutex and condition variables. There are two threads: one thread is producing data for the second thread, a worker, which is trying to process the produced data in a real-time fashion such that one chunk is processed as close to the elapsing of a fixed time period as possible.

    This works pretty well; however, occasionally when the producer thread releases the condition upon which the worker is waiting, a delay of up to almost a whole second is seen before the worker thread gets control and executes again. I know this because right before the producer releases the condition upon which the worker is waiting, it does a chunk of processing for the worker if it is time to process another chunk; then, immediately upon receiving the condition in the worker thread, the worker also does a chunk of processing if it is time to process another chunk. In this latter case, I am seeing that I am late processing the chunk many times.

    I'd like to eliminate this lost efficiency and do what I can to keep the chunks ticking away as close as possible to the desired frequency. Is there anything I can do to reduce the delay between the producer releasing the condition and the worker detecting that it has been released, so that the worker resumes processing sooner? For example, would it help for the producer to call something to force itself to be context-switched out?

    Bottom line: the worker has to wait each time it asks the producer to create work for it, so that the producer can muck with the worker's data structures before telling the worker it is ready to run in parallel again. This period of exclusive access by the producer is meant to be short, but during this period I am also checking for real-time work to be done by the producer on behalf of the worker while the producer has exclusive access. Somehow my hand-off back to running in parallel again occasionally results in a significant delay that I would like to avoid. Please suggest how this might best be accomplished.

    Read the article
