Search Results


  • sending SMS with VBA

    - by I__
    Does anyone know if this is possible? I was able to successfully run HyperTerminal and use it to send texts through my phone, which is attached by USB. Does anyone know how to do it in VBA?
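
    One possible direction, sketched below: drive the phone's modem over its virtual COM port with the same GSM AT commands HyperTerminal would send by hand. The port name, baud settings, and the phone's support for AT+CMGS are all assumptions, and real code must read the modem's responses and wait for its ">" prompt between commands.

        ' Untested sketch: send one SMS via the phone's modem port.
        Sub SendSms()
            Dim f As Integer
            f = FreeFile
            Open "COM3:9600,N,8,1" For Output As #f   ' port and settings are guesses
            Print #f, "AT+CMGF=1"                     ' switch the modem to text mode
            Print #f, "AT+CMGS=""+15551234567"""      ' recipient (placeholder number)
            Print #f, "Hello from VBA" & Chr(26)      ' Ctrl+Z terminates the message
            Close #f
        End Sub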

  • Any recommended iPhone script/code editor apps out there?

    - by Unreality
    I'd like to write code even when I'm not at my desktop machine. Are there any recommended iPhone script/code editor apps out there? I don't mean desktop applications like Xcode for writing iPhone apps; I mean apps that run on the iPhone itself, because I want to write code on the phone (in any language, like Java, C, or Ruby, and not limited to code for the iPhone). It would be great if you could recommend both free and paid apps. Many thanks!

  • How do I use Notepad++ (or other) with msysgit?

    - by PHLAK
    How do I use Notepad++ (or any other editor besides Vim) with msysgit? I tried all of the following, to no avail:

        git config --global core.editor C:\Program Files\Notepad++\notepad++.exe
        git config --global core.editor "C:\Program Files\Notepad++\notepad++.exe"
        git config --global core.editor C:/Program Files/Notepad++/notepad++.exe
        git config --global core.editor C:\\Program Files\\Notepad++\\notepad++.exe
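
    A commonly cited fix, worth verifying on your setup: msysgit launches the editor through a shell, so the config value itself must contain a quoted path, and Notepad++ behaves better with flags that open a fresh, plugin-free instance that blocks until closed:

        git config --global core.editor "\"C:/Program Files/Notepad++/notepad++.exe\" -multiInst -notabbar -nosession -noPlugin"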

  • How to index a string like "aaa.bbb.ddd-fff" in Lucene?

    - by user46703
    Hi, I have to index a lot of documents that contain reference numbers like "aaa.bbb.ddd-fff". The structure can change, but it's always some arbitrary numbers or characters combined with "/", "-", "_" or some other delimiter. The users want to be able to search for any of the substrings, like "aaa" or "ddd", and also for combinations like "aaa.bbb" or "ddd-fff". The best I have been able to come up with is to create my own token filter, modeled after the synonym filter in "Lucene in Action", which emits multiple terms for each input. In my case I return "aaa.bbb", "bbb.ddd", "bbb.ddd-fff" and all other combinations of the substrings. This works pretty well, but when I index large documents (100 MB) that contain lots of such strings, I tend to get out-of-memory exceptions because my filter returns multiple terms for each input string. Is there a better way to index these strings?
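
    One alternative worth sketching (class names as in recent Lucene versions; the poster's own filter is not shown here): tokenize on the delimiters so each fragment becomes its own term, then let a ShingleFilter generate the adjacent combinations incrementally instead of materializing every combination per input string. The same analyzer must be applied at query time so that "aaa.bbb" analyzes to the matching shingle.

        import java.util.regex.Pattern;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.Tokenizer;
        import org.apache.lucene.analysis.pattern.PatternTokenizer;
        import org.apache.lucene.analysis.shingle.ShingleFilter;

        class RefNumberAnalyzer extends Analyzer {
            @Override
            protected TokenStreamComponents createComponents(String fieldName) {
                // Split on runs of delimiters: "aaa.bbb.ddd-fff" -> aaa, bbb, ddd, fff
                Tokenizer src = new PatternTokenizer(Pattern.compile("[/._-]+"), -1);
                // Emit the fragments plus 2- and 3-fragment shingles ("aaa bbb", ...)
                TokenStream out = new ShingleFilter(src, 2, 3);
                return new TokenStreamComponents(src, out);
            }
        }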

  • How do I keep a scanner from throwing exceptions when the wrong type is entered? (java)

    - by David
    Here's some sample code:

        import java.util.Scanner;

        class In {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.println("how many are invading?");
                int a = in.nextInt();
                System.out.println(a);
            }
        }

    If I run the program and give it an int like 4, then everything goes fine. If, on the other hand, I answer "too many", it doesn't laugh at my funny joke. Instead I get this (as expected):

        Exception in thread "main" java.util.InputMismatchException
            at java.util.Scanner.throwFor(Scanner.java:819)
            at java.util.Scanner.next(Scanner.java:1431)
            at java.util.Scanner.nextInt(Scanner.java:2040)
            at java.util.Scanner.nextInt(Scanner.java:2000)
            at In.main(In.java:9)

    Is there a way to make it either ignore entries that aren't ints, or re-prompt with "how many are invading?"? I'd like to know how to do both.
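
    A standard pattern (a sketch, not from the post) covers both requests: Scanner.hasNextInt() tests the next token without consuming it, so you can discard non-integer input and re-prompt until an int arrives.

        import java.util.Scanner;

        class In {
            public static void main(String[] args) {
                Scanner in = new Scanner(System.in);
                System.out.println("how many are invading?");
                while (!in.hasNextInt()) {
                    in.next();  // consume and ignore the offending token
                    System.out.println("how many are invading?");  // re-prompt
                }
                int a = in.nextInt();  // guaranteed not to throw here
                System.out.println(a);
            }
        }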

  • FORMSOF Thesaurus in SQL Server

    - by Coolcoder
    Has anyone done any performance measurements with this, in terms of speed, where there is a high number of substitutes for any given word? For instance, I want to use this to store common misspellings, expecting 4-10 variations of a word:

        <expansion>
          <sub>administration</sub>
          <sub>administraton</sub>
          <sub>aministraton</sub>
        </expansion>

    When you run a full-text search, how does performance degrade with that number of variations? For instance, I assume it has to do a separate full-text search for each variation, combined with an OR? Also, does having say 20-30K entries in the thesaurus XML file impact performance?
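
    For reference, a query against such a thesaurus has this shape (a sketch; the table and column names are invented):

        SELECT ProductID, Description
        FROM   Products
        WHERE  CONTAINS(Description, 'FORMSOF(THESAURUS, administration)');

    Each <sub> entry effectively widens the CONTAINS predicate by one more term, which is why the per-word expansion count matters.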

  • Undo continually in vi

    - by wowrt
    Hi, I am using vi (not Vim) and I would like to undo changes continually. u works for a single-command undo and U for a single-line undo, but is there a way to undo continuously in vi, like in Vim? (I recall Vim even has a command to undo changes by time!) Thanks in advance.

  • Problems when going from SQL 2005 to SQL 2008

    - by Nezdet
    Hi! I moved from SQL Server 2005 to 2008, and doing so gave me some problems with full-text search, which this site is based on. It hits more deadlocks, the search is slower, and sometimes it returns empty lists; I don't know why. A lot of people have written about having this problem with 2008, but I haven't found any explanation of why 2005 worked better for my program. Please help me out!

  • How can I make keyword order more relevant in my search?

    - by Atomiton
    In my database, I have a keywords field that stores a comma-delimited list of keywords. For example, a Shrek doll might have the following keywords:

        ogre, green, plush, hero, boys' toys

    A "Beanie Baby" doll (that happens to be an ogre) might have:

        beanie baby, kids toys, beanbag toys, soft, infant, ogre

    (That's a completely contrived example.) What I'd like is that if the consumer searches for "ogre", the Shrek doll comes up higher in the search results. My content administrator feels that if the keyword is earlier in the list, it should get a higher ranking. (This makes sense to me, and it makes it easy for me to let them control the search-result relevance.) Here's a simplified query:

        SELECT p.ProductID       AS ContentID
             , p.ProductName     AS Title
             , p.ProductCode     AS Subtitle
             , 100               AS Rank
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )

    I'm thinking of replacing the Rank with something along the lines of:

        , 200 - INDEXOF(@SearchTerm) AS Rank

    This "should" rank the keyword results by their relevance. I know INDEXOF isn't a SQL command, but it's something like that I would like to accomplish. Am I approaching this the right way? Is it possible to do something like this? Does this make sense?
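
    In T-SQL, the position lookup the poster is reaching for is CHARINDEX, so the rank expression can be sketched like this (same 200 offset and names as the question; the CASE guards the not-found case, where CHARINDEX returns 0 and would otherwise float to the top):

        SELECT p.ProductID   AS ContentID
             , p.ProductName AS Title
             , CASE WHEN CHARINDEX(@SearchTerm, p.ProductKeywords) > 0
                    THEN 200 - CHARINDEX(@SearchTerm, p.ProductKeywords)
                    ELSE 0
               END AS Rank
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )
        ORDER BY Rank DESC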

  • Javascript: Make Rows Draggable Through Input Field Handles

    - by Corey O.
    I have created a table with draggable rows. Unfortunately, most of each row is covered by a large textbox input element, so to drag a row you have to grab it at the very edge, just outside the textbox. Is there a way to let the rows be grabbed through the textboxes without destroying the textbox functionality? (I.e., relay the mouse drag events but not the click events?)
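
    If the rows use jQuery UI (an assumption; the question doesn't name a library), the usual culprit is the cancel option, which by default refuses to start a drag from form fields. Clearing it, plus a small drag-start distance so plain clicks still focus the textbox for typing, gives roughly the requested behavior:

        // Sketch assuming jQuery UI sortable; the selector is hypothetical.
        $('#myTable tbody').sortable({
            cancel: '',    // default is "input,textarea,button,select,option"
            distance: 5    // require 5px of movement before a drag begins
        });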

  • How well does Solr scale over large number of facet values?

    - by Continuation
    I'm using Solr and I want to facet over a field "group". Since "group" is created by users, there can potentially be a huge number of values for it. Would Solr be able to handle a use case like this, or is Solr not really appropriate for facet fields with a large number of values? I understand that I can set facet.limit to restrict the number of values returned for a facet field. Would this help in my case? Say there are 100,000 matching values for "group" in a search and I set facet.limit to 50: would that speed up the query, or would the query still be slow because Solr still needs to process and sort through all the facet values to return the top 50? Any tips on how to tune Solr for a large number of facet values? Thanks.
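
    For reference, a faceted request of this shape would carry parameters like the following (a sketch; facet.method=fc selects the field-cache algorithm, generally the documented choice for fields with many distinct values):

        q=*:*
        &facet=true
        &facet.field=group
        &facet.limit=50
        &facet.mincount=1
        &facet.method=fc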

  • Recommended way to perform Lucene search without limit

    - by Thomas
    The Lucene docs tell me that "Hits" will be removed from the API in Lucene 3.0: "Deprecated. Hits will be removed in Lucene 3.0. Use search(Query, Filter, int) instead." The proposed overload limits the number of documents returned to the value of the int. So my question is: what is the recommended way to perform a search in Lucene with no limit on the number of documents to be returned?
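
    A commonly suggested workaround (a sketch; check it against your Lucene version) is to treat "no limit" as "at most every document in the index" by passing the reader's document count, or to supply a custom Collector when you truly need to visit every hit:

        // "Unlimited" search: the index can't return more than maxDoc() hits.
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs top = searcher.search(query, reader.maxDoc());
        for (ScoreDoc sd : top.scoreDocs) {
            Document doc = searcher.doc(sd.doc);
            // ... process each hit
        }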

  • Why are Vi and Emacs popular?

    - by Teifion
    I've never learned to use Vi or Emacs, yet people still use them, despite there being other editors out there that are free and useful. What is it about these two, and any others like them, that means they hold appeal in the face of the newer editors?

  • Search filenames in MySQL database table restricted by filetype?

    - by ju
    Hello. I have a MySQL database that I replicate from another server. The database contains a table with the columns ID, FileName and FileSize; the table has more than 4,000,000 records. I want to make searches on the FileName (varchar) column fast, and I found that I can use the Sphinx search engine for this. The problem is that I want to restrict searches by file type. Do I have to extract the file extensions for all rows, and how (triggers?)? Maybe I have to create another table (because this one is replicated) and join them in a 1:1 relation? Can you give me some advice please? :)
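
    A sketch of the extraction step (the table name is invented; keeping the column current would fall to a trigger, or to the SELECT that feeds the Sphinx indexer): MySQL's SUBSTRING_INDEX pulls everything after the last dot, which an ordinary index can then filter on.

        ALTER TABLE files
            ADD COLUMN FileExt VARCHAR(16),
            ADD INDEX idx_fileext (FileExt);

        -- One-off backfill; LOWER() folds ".JPG" and ".jpg" into one value.
        -- Names with no dot come back whole; guard with LOCATE() if that matters.
        UPDATE files
        SET    FileExt = LOWER(SUBSTRING_INDEX(FileName, '.', -1));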

  • Correct way to write /* and */

    - by billpg
    Hi everyone. I'd like to know, please, the correct way to write the symbols that the C family of languages uses to begin and end comments. Before you all respond "a slash followed by an asterisk", I mean: what's the correct way to write them on paper? (I.e., how many points should the asterisk have? What angle should the slash be at? Etc.) Everything I need so I can sit down and draw correct comment start and end symbols. Please note, I'm looking for the correct standard way. If there is no industry standard, please respond with "there is no standard" and I will accept that answer.

  • What's the fastest way to strip and replace a document of high unicode characters using Python?

    - by Rhubarb
    I am looking to replace, throughout a large document, all high Unicode characters, such as accented Es and left and right quotes, with "normal" counterparts in the low range, such as a regular E and straight quotes. I need to perform this on a very large document rather often. I see an example of this in what I think might be Perl here: http://www.designmeme.com/mtplugins/lowdown.txt Is there a fast way of doing this in Python without using s.replace(...).replace(...).replace(...)...? I've tried this with just a few characters to replace, and the document stripping became really slow.
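
    A sketch of one fast approach (Python 3 shown; the original post predates it): unicodedata.normalize("NFKD") splits accented letters into base letter plus combining mark, so an ASCII encode keeps the base letter, while a single str.translate pass handles characters with no decomposition, like curly quotes.

        import unicodedata

        # Explicit mappings for characters NFKD can't reduce (a partial table).
        QUOTES = {
            0x2018: "'", 0x2019: "'",   # left/right single quotation marks
            0x201C: '"', 0x201D: '"',   # left/right double quotation marks
        }

        def to_low_ascii(text):
            text = text.translate(QUOTES)               # one pass, not chained replaces
            text = unicodedata.normalize("NFKD", text)  # e-acute -> "e" + combining accent
            return text.encode("ascii", "ignore").decode("ascii")

        print(to_low_ascii("\u201cR\u00e9sum\u00e9\u201d"))  # -> "Resume" in straight quotes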

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi all. I have a question regarding the implementation of smart-search features, for example something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field the query will be built from, you present different options to the end user. For the moment, assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs:    contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs:  ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store, and later retrieve and present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply combine "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand, not necessarily in the query creation (which is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I should simply allow the user to select AND or OR for each entry independently, instead of making it an all-or-nothing smart search (i.e., instead of MATCH ALL or MATCH ANY, simply let them select; I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
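
    One way to keep the Hydra caged (a hypothetical schema; every name here is invented) is to fix the nesting at exactly two levels, i.e. disjunctive normal form: criteria inside a group AND together, and the groups OR together. "Between, OR'ed with something else" then becomes two groups, one holding the pair of date criteria.

        CREATE TABLE smart_search (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );

        -- Groups OR together within a search.
        CREATE TABLE criteria_group (
            id        INTEGER PRIMARY KEY,
            search_id INTEGER NOT NULL REFERENCES smart_search(id)
        );

        -- Criteria AND together within a group; one Subject-Verb-Object row each.
        CREATE TABLE criterion (
            id       INTEGER PRIMARY KEY,
            group_id INTEGER NOT NULL REFERENCES criteria_group(id),
            subject  TEXT NOT NULL,  -- e.g. 'date_received'
            verb     TEXT NOT NULL,  -- e.g. 'greater_than'
            object   TEXT NOT NULL   -- comparison value, stored as text
        );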

  • zlib gzgets extremely slow?

    - by monkeyking
    I'm doing stuff related to parsing huge globs of text files, and I was testing which input method to use. There is not much of a difference between using C++ std::ifstream and C FILE. According to the documentation of zlib, it supports uncompressed files and will read the file without decompression. I'm seeing a difference from 12 seconds using non-zlib to more than 4 minutes using zlib.h. I've tested this over multiple runs, so it's not a disk cache issue. Am I using zlib in some wrong way? Thanks.

        #include <zlib.h>
        #include <cstdio>
        #include <cstdlib>
        #include <fstream>

        #define LENS 1000000

        size_t fg(const char *fname){
          fprintf(stderr,"\t-> using fgets\n");
          FILE *fp = fopen(fname,"r");
          size_t nLines = 0;
          char *buffer = new char[LENS];
          while(NULL!=fgets(buffer,LENS,fp))
            nLines++;
          fprintf(stderr,"%lu\n",nLines);
          return nLines;
        }

        size_t is(const char *fname){
          fprintf(stderr,"\t-> using ifstream\n");
          std::ifstream is(fname,std::ios::in);
          size_t nLines = 0;
          char *buffer = new char[LENS];
          while(is.getline(buffer,LENS))
            nLines++;
          fprintf(stderr,"%lu\n",nLines);
          return nLines;
        }

        size_t iz(const char *fname){
          fprintf(stderr,"\t-> using zlib\n");
          gzFile fp = gzopen(fname,"r");
          size_t nLines = 0;
          char *buffer = new char[LENS];
          while(0!=gzgets(fp,buffer,LENS))
            nLines++;
          fprintf(stderr,"%lu\n",nLines);
          return nLines;
        }

        int main(int argc,char**argv){
          if(atoi(argv[2])==0) fg(argv[1]);
          if(atoi(argv[2])==1) is(argv[1]);
          if(atoi(argv[2])==2) iz(argv[1]);
        }
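
    A hedged note on the likely cause: older zlib versions implement gzgets as byte-at-a-time reads through a small internal buffer, which matches the observed slowdown. zlib 1.2.4+ exposes gzbuffer() to enlarge that buffer; the size below is an arbitrary choice.

        // Sketch: enlarge zlib's internal buffer before the first read.
        gzFile fp = gzopen(fname, "r");
        gzbuffer(fp, 128 * 1024);  // available in zlib >= 1.2.4
        // Alternatively, pull large blocks with gzread() and split lines yourself.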
