Search Results


  • Good development themes/environments for Gnome/kde/whatever?

    - by EvanAlm
    I've searched for a long time for good themes or customized desktop environments designed for development, meaning web production and programming of all kinds. I'm after features such as simplified workspace overviews, good tabbing support, and so on. For multimedia there is UbuntuStudio; I'm looking for something like that, but for programming. I know I could customize a desktop myself, but I don't have the skills to make it all happen, and I also don't know everything that would make it good for me in the first place. I've looked into GNOME Shell, which in my opinion has superb workspace overview functions but is lacking in other areas. Any help in finding a good solution is appreciated. If some of you have had this problem and found a setup that works for you, please tell me how you did it :) I would love to solve this once and for all!

    Read the article

  • Code Profiling in the Windows Sidebar Environment

    - by Matt
    Does anyone know of a way I can profile the code in my Windows Sidebar gadget? I've played around with the code-profiling tool in IE8's Developer Tools and the profiler included in Visual Studio 2010, but I can't find a way to include the System.* API that my gadget relies on (it is standard in the Sidebar environment). The gadget also relies on cross-domain AJAX requests, which are normally permitted in the Sidebar environment. By code profiling I primarily mean: function call count and function execution time. Any ideas would be much appreciated. Regards, Matt
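
    In case it helps to see the shape of a do-it-yourself fallback: when no profiler can attach to the host, you can wrap the functions you care about and collect call counts and cumulative execution time yourself. A minimal sketch of that idea in Python (the gadget itself would be JScript; names like fetch_feed are made up for illustration):

        import functools
        import time
        from collections import defaultdict

        # name -> [call_count, total_seconds]; illustrative only
        stats = defaultdict(lambda: [0, 0.0])

        def profiled(func):
            """Wrap a function to record how often it runs and how long it takes."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return func(*args, **kwargs)
                finally:
                    entry = stats[func.__name__]
                    entry[0] += 1
                    entry[1] += time.perf_counter() - start
            return wrapper

        @profiled
        def fetch_feed():          # hypothetical gadget-like workload
            time.sleep(0.01)

        for _ in range(5):
            fetch_feed()

        for name, (calls, total) in stats.items():
            print(f"{name}: {calls} calls, {total:.4f}s total")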

    Read the article

  • Copied App.config to (Assembly).exe.config but the application doesn't run from the debug folder

    - by uugan
    Gives error: "The specified named connection is either not found in the configuration, not intended to be used with the EntityClient provider, or not valid" App.config looks like and it's same as (Assembly.exe.config) config file for output: <?xml version="1.0" encoding="utf-8"?> <configuration> <connectionStrings> <add name="Entities1" connectionString="metadata=res://*/;provider=System.Data.SqlClient;provider connection string='data source=localhost;initial catalog=DatabaseName;integrated security=True;multipleactiveresultsets=True;App=EntityFramework'" providerName="System.Data.EntityClient" /> </connectionStrings> </configuration> How to run exe with it's configuration file? I tried to change '' to quot but nothing has changed.

    Read the article

  • How can I store data in a table as a trie? (SQL Server)

    - by Matt
    Hi, to make things easier, assume the table contains all the words in the English dictionary. What I would like to do is store the data as a trie, so that I can traverse the different branches of the trie and return the most relevant result. First, how do I store the data in the table as a trie? Second, how do I traverse that tree? If it helps at all, the suggestion in this previous question is where this question was sparked from. Please make sure it's SQL we're talking about; I understood Mike Dunlavey's C implementation because of pointers, but I can't see how this part (the trie itself) works in SQL. Thanks, Matt
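
    A common way to flatten a trie into a relational table is an adjacency list: one row per node holding (node_id, parent_id, char, is_word), with traversal done by repeated parent/child lookups. A minimal in-memory sketch in Python of that row-per-node shape (not SQL, and all names are illustrative):

        # Each "row" mirrors a hypothetical table: (node_id, parent_id, char, is_word).
        rows = [{"node_id": 0, "parent_id": None, "char": None, "is_word": False}]  # root
        children = {}  # (parent_id, char) -> node_id, like a unique index on the table

        def insert(word):
            node = 0
            for ch in word:
                key = (node, ch)
                if key not in children:
                    new_id = len(rows)
                    rows.append({"node_id": new_id, "parent_id": node,
                                 "char": ch, "is_word": False})
                    children[key] = new_id
                node = children[key]
            rows[node]["is_word"] = True

        def walk(prefix):
            """Return all stored words starting with prefix (depth-first traversal)."""
            node = 0
            for ch in prefix:
                node = children.get((node, ch))
                if node is None:
                    return []
            out, stack = [], [(node, prefix)]
            while stack:
                nid, word = stack.pop()
                if rows[nid]["is_word"]:
                    out.append(word)
                for (pid, ch), cid in children.items():
                    if pid == nid:
                        stack.append((cid, word + ch))
            return out

        for w in ["cat", "car", "cart", "dog"]:
            insert(w)
        print(walk("ca"))   # ['car', 'cart', 'cat'] in some order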

    Read the article

  • Searching Techniques/Algorithms for Resources over a given area

    - by Raydon
    I have a flat area with nodes randomly placed on it. I need techniques that can take a starting point, move in a certain pattern (the algorithm), find nodes and continue searching. I do not have an overall view of the surface (I cannot see everything), only a limited view (4 cells in any direction). Ideally, these methods would cover the area efficiently. Any pointers in the right direction would be greatly appreciated.
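
    One simple baseline under these constraints is a lawnmower (boustrophedon) sweep whose stripe spacing matches the view range, so adjacent stripes' views tile the surface without overlapping. A small sketch in Python, assuming a rectangular width x height grid, a 4-cell view radius and a set of (x, y) node positions (all names are assumptions for illustration):

        def lawnmower_path(width, height, view_radius):
            """Yield the cells visited when sweeping in horizontal stripes spaced
            2*view_radius + 1 apart, alternating direction on each stripe."""
            stripe = 2 * view_radius + 1
            left_to_right = True
            for y in range(view_radius, height, stripe):
                xs = range(width) if left_to_right else range(width - 1, -1, -1)
                for x in xs:
                    yield (x, y)
                left_to_right = not left_to_right

        def search(nodes, width, height, view_radius=4):
            """Follow the sweep and record every node within view of the current cell."""
            found = set()
            for cx, cy in lawnmower_path(width, height, view_radius):
                for nx, ny in nodes:
                    if abs(nx - cx) <= view_radius and abs(ny - cy) <= view_radius:
                        found.add((nx, ny))
            return found

        nodes = {(3, 2), (17, 9), (40, 25)}
        print(search(nodes, width=50, height=30))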

    Read the article

  • What open source database platform is most easily transferred from my personal machine into a window

    - by Tom
    I would like eventual interaction with MS Dynamics SL and/or MindTouch Core (running on VMware) for eventual intranet and/or internet display. I guess I am asking for front-end and back-end recommendations for a database I am constructing, but since this is my first major project I would greatly appreciate any help and advice. I would also love an opportunity to learn a new language, so the code base could be in any language. I do have a few more related questions for discussion: What is the viability of using Google hosting to provide the service to the public for free? Should I implement Plone or another CMS if I have a large amount of output? Is there a structuring questionnaire or standards publication I could reference? Does UML diagramming provide additional options for portability? Thank you.

    Read the article

  • Vista gadget: variable is undefined in the flyout

    - by Jiinn
    Hello! I have a problem with a variable used by the flyout:

        var friendsUser = "";
        var friendsMdp = "";

        System.Gadget.Settings.write("variableName", variableName);
        System.Gadget.settingsUI = "Settings.html";
        System.Gadget.onSettingsClosed = SettingsClosed;
        System.Gadget.Flyout.visible = SettingsClosed;

        function SettingsClosed() {
            variableName = System.Gadget.Settings.read("variableName");
            friendsUser = System.Gadget.Settings.read("friendUser");
            friendsMdp = System.Gadget.Settings.read("friendMdp");
            setContentText();
        }

        function flyFriends() {
            System.Gadget.Flyout.file = 'friends.htm';
            System.Gadget.Flyout.show = true;
            var flyoutDiv = System.Gadget.Flyout.document.parentWindow;
            flyoutDiv.gMyVar = friendsUser;
            flyoutDiv.gMyVar2 = friendsMdp;
        }

    With this code the variable is undefined in the flyout. If I write var friendsUser = "test"; I get "test" in the variable, but after using the settings I get nothing. If I assign the variables to flyoutDiv before System.Gadget.Flyout.show = true; the gadget breaks. My settings are not the problem; the refresh of the variable is. Do you have an idea? Thank you all!

    Read the article

  • What is the MySQL 5.5 equivalent of sys.dm_fts_index_keywords_by_document in SQL Server 2008?

    - by djsurge
    I'm making a web application that uses the data in sys.dm_fts_index_keywords_by_document. I'm interested in how many times a given term occurs in each string that is indexed. For example, I have a table with a column called comments, which holds various strings. When I make that column full-text searchable, dm_fts_index_keywords_by_document is populated and I can see the per-document word counts. Can I do the same thing in MySQL?

    Read the article

  • MSSQL 2008 FTS CONTAINSTABLE Not Returning More Than Five Rows

    - by Elijah Glover
    I have a single table called "Indexes"; it contains one nvarchar and three ntext columns (all full-text indexed), and the index is up to date.

        CONTAINSTABLE(Indexes, *, 'test', 5) //5 results

    No matter what I change the above keyword to, it only returns the first 3-5 results. It should return roughly 90-120 results for the above query.

        SELECT count(*) FROM Indexes WHERE [Description] like '%test%' //122 results

    How would I start to troubleshoot this problem?

    Read the article

  • Best way to handle SQL Server fulltext index updates

    - by tlianza
    Hi all, I have a full-text index that doesn't need to be immediately up to date. I'd like to spare myself the I/O (when I do bulk updates, I see a ton of I/O related to the index) and do the index updates during low-usage times (nightly, perhaps even weekly). It seems there are two ways to go about this: (1) turn off change tracking (SET CHANGE_TRACKING OFF) and add a timestamp field to the indexed table, so that you can run ALTER FULLTEXT INDEX ON <table> START INCREMENTAL POPULATION, or (2) enable change tracking but set it to MANUAL, so that you can run ALTER FULLTEXT INDEX ON <table> START UPDATE POPULATION when you need it updated. Is there a preferred method? I couldn't tell from this overview whether there was a performance benefit one way or the other. Tom

    Read the article

  • BST insert operation: don't insert a node if a duplicate already exists

    - by jeev
    The following code reads an input array and constructs a BST from it. If the current arr[i] is a duplicate of a node in the tree, then arr[i] is discarded. count in the struct node refers to the number of times a number appears in the array; fi refers to the first index at which the element is found in the array. After the insertion I do a post-order traversal of the tree and print the data, count and index (in that order). The output I am getting when I run this code is:

        0 0 7
        0 0 6

    Thank you for your help. Jeev

        struct node{
            int data;
            struct node *left;
            struct node *right;
            int fi;
            int count;
        };

        struct node* binSearchTree(int arr[], int size);
        int setdata(struct node **node, int data, int index);
        void insert(int data, struct node **root, int index);
        void sortOnCount(struct node *root);

        void main(){
            int arr[] = {2,5,2,8,5,6,8,8};
            int size = sizeof(arr)/sizeof(arr[0]);
            struct node* temp = binSearchTree(arr, size);
            sortOnCount(temp);
        }

        struct node* binSearchTree(int arr[], int size){
            struct node* root = (struct node*)malloc(sizeof(struct node));
            if(!setdata(&root, arr[0], 0))
                fprintf(stderr, "root couldn't be initialized");
            int i = 1;
            for(; i < size; i++){
                insert(arr[i], &root, i);
            }
            return root;
        }

        int setdata(struct node** nod, int data, int index){
            if(*nod != NULL){
                (*nod)->fi = index;
                (*nod)->left = NULL;
                (*nod)->right = NULL;
                return 1;
            }
            return 0;
        }

        void insert(int data, struct node **root, int index){
            struct node* new = (struct node*)malloc(sizeof(struct node));
            setdata(&new, data, index);
            struct node** temp = root;
            while(1){
                if(data <= (*temp)->data){
                    if((*temp)->left != NULL)
                        *temp = (*temp)->left;
                    else{
                        (*temp)->left = new;
                        break;
                    }
                }
                else if(data > (*temp)->data){
                    if((*temp)->right != NULL)
                        *temp = (*temp)->right;
                    else{
                        (*temp)->right = new;
                        break;
                    }
                }
                else{
                    (*temp)->count++;
                    free(new);
                    break;
                }
            }
        }

        void sortOnCount(struct node* root){
            if(root != NULL){
                sortOnCount(root->left);
                sortOnCount(root->right);
                printf("%d %d %d\n", (root)->data, (root)->count, (root)->fi);
            }
        }
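
    For reference, the intended behaviour (insert, bump a counter on duplicates, then a post-order print of data, count and first index) can be sketched compactly; this is a Python illustration of the algorithm, not a fix for the C above:

        class Node:
            def __init__(self, data, index):
                self.data, self.fi = data, index   # fi = first index seen in the array
                self.count = 1
                self.left = self.right = None

        def insert(root, data, index):
            if root is None:
                return Node(data, index)
            if data < root.data:
                root.left = insert(root.left, data, index)
            elif data > root.data:
                root.right = insert(root.right, data, index)
            else:
                root.count += 1                    # duplicate: count it, don't insert
            return root

        def post_order(root):
            if root:
                post_order(root.left)
                post_order(root.right)
                print(root.data, root.count, root.fi)

        arr = [2, 5, 2, 8, 5, 6, 8, 8]
        root = None
        for i, v in enumerate(arr):
            root = insert(root, v, i)
        post_order(root)   # 6 1 5 / 8 3 3 / 5 2 1 / 2 2 0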

    Read the article

  • Best way to store list of numbers and to retrieve them

    - by bingoNumbers
    Hi. What is the best way to store a list of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store in a database a number of rows, where each row contains 5-10 numbers ranging from 0 to 90, and I will store a large number of those rows. What I'd like is to retrieve the rows that have at least X numbers in common with a newly generated row. Example, these are in the DB:

        [3,4,33,67,85,99]
        [55,56,77,89,98,99]
        [3,4,23,47,85,91]

    I will generate this:

        [1,2,11,45,47,88]

    and now I want to get the rows that have at least 1 number in common with it. The easiest (and dumbest?) way is to make 6 selects and check for similar results. I also thought of storing the numbers as a long binary string like 000000000000000000000100000000010010110000000000000000000000000, with 99 positions where each position represents a number from 1 to 99, so if I have a 1 at the 44th position it means that row contains 44. This method probably just shifts the hard work to the DB and is again not very smart. Any suggestions?
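
    The bit-per-number idea maps naturally onto integer bitmasks: encode each row as an integer with bit n set for number n, and then "numbers in common" is just the population count of a bitwise AND. A minimal in-memory sketch in Python using the rows from the example (how the mask column would be stored and scanned in a particular database engine is left open):

        def to_mask(numbers):
            """Encode a draw as an integer with bit n set for each number n."""
            mask = 0
            for n in numbers:
                mask |= 1 << n
            return mask

        rows = [
            [3, 4, 33, 67, 85, 99],
            [55, 56, 77, 89, 98, 99],
            [3, 4, 23, 47, 85, 91],
        ]
        masks = [to_mask(r) for r in rows]

        new_draw = to_mask([1, 2, 11, 45, 47, 88])

        min_common = 1                                 # "at least X numbers in common"
        for row, mask in zip(rows, masks):
            common = bin(mask & new_draw).count("1")   # how many numbers are shared
            if common >= min_common:
                print(row, "shares", common)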

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows: 1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and 2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) forcing a HASH JOIN to be used. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.

    Read the article

  • Add domain to relative URLs

    - by Rick
    How can I add http://facebook.com to the relative URLs contained within #facebook_urls? E.g. <a href="/test.html"> becomes <a href="http://facebook.com/test.html">. #facebook_urls also contains absolute URLs, so I want to make sure I don't touch those. Thanks!
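
    For what it's worth, the decision itself is small: leave hrefs that already carry a scheme alone and prefix the rest. A sketch of that check in Python (in the page this would sit inside whatever loop walks the anchors under #facebook_urls, which is not shown here):

        from urllib.parse import urljoin, urlparse

        BASE = "http://facebook.com"

        def absolutize(href):
            """Prefix relative URLs with BASE; leave absolute ones untouched."""
            if urlparse(href).scheme:          # already absolute, e.g. http://...
                return href
            return urljoin(BASE, href)

        print(absolutize("/test.html"))            # http://facebook.com/test.html
        print(absolutize("http://example.com/x"))  # unchanged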

    Read the article

  • SQL Server full-text index: CONTAINS returns an empty result set

    - by max
    Hi all, I have an issue with a full-text index; can anybody help me with this? I set up the full-text index:

        CREATE FULLTEXT INDEX ON dbo.Companies   -- my table name
        (
            CompanyName   -- column of my table
            Language 0X0
        )
        KEY INDEX IX_Companies_CompanyAlias
        ON QuestionsDB
        WITH CHANGE_TRACKING AUTO
        GO

    Then I use CONTAINS to find the matching rows:

        SELECT CompanyId, CompanyName
        FROM dbo.Companies
        WHERE CONTAINS(CompanyName, 'Micro')

    Everything runs without errors, it just returns an empty result set, and I am sure there is a company with CompanyName "Microsoft" in the Companies table. Much appreciated if anybody can do me a favor on this.

    Read the article

  • RegularExpression-esque search matching Objects in List

    - by Pindatjuh
    I'm currently working on an implementation of the following idea, and I was wondering if there is any literature on the subject. I'm working in Java, but the principle applies in any language with a decent type system: matching objects from a List using a RegularExpression-esque search. So let's say I have a list containing:

        List<Object> x = new ArrayList<Object>();
        x.add(new Object());
        x.add("Hello World");
        x.add("Second String");
        x.add(5); // Integer (auto-boxing)
        x.add(6); // Integer

    Then I create a "regular expression" that works not on a stream of characters but on a stream of objects, and instead of character classes I use type-system properties. The expression [String][Integer] would match one sublist: {Match["Second String", 5]}. The expression [String:length()<15] would match two sublists (each of length 1) containing a String whose instance passes instance.length() < 15: {Match["Hello World"], Match["Second String"]}. The expression [Object][Object] matches any pair in the list, {Match[Object, "Hello World"], Match["Second String", 5]}, in a streamed manner (no overlapping matches). Of course, my implementation will have grouping, lookahead/lookbehind, will be hierarchical (i.e. matching n elements from Lists within Lists), etc.; the above merely illustrates the concept. Is there a name for this principle, and is there literature available on it?
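
    The core of the idea, stripped of grouping and lookaround, is a matcher whose "character classes" are type-plus-predicate tests applied positionally to a list, scanning left to right without overlaps. A minimal sketch in Python using the same example data (pattern elements are (type, predicate) pairs; all names are illustrative):

        def matches(pattern, items, start):
            """True if every (type, predicate) pair matches items[start:] positionally."""
            if start + len(pattern) > len(items):
                return False
            for offset, (cls, pred) in enumerate(pattern):
                item = items[start + offset]
                if not isinstance(item, cls) or not pred(item):
                    return False
            return True

        def find_all(pattern, items):
            """Scan left to right, returning non-overlapping matches as sublists."""
            found, i = [], 0
            while i <= len(items) - len(pattern):
                if matches(pattern, items, i):
                    found.append(items[i:i + len(pattern)])
                    i += len(pattern)          # skip past the match (no overlaps)
                else:
                    i += 1
            return found

        anything = lambda _: True
        x = [object(), "Hello World", "Second String", 5, 6]

        print(find_all([(str, anything), (int, anything)], x))        # [['Second String', 5]]
        print(find_all([(str, lambda s: len(s) < 15)], x))            # [['Hello World'], ['Second String']]
        print(find_all([(object, anything), (object, anything)], x))  # two non-overlapping pairs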

    Read the article

  • Finding key Solr performance metrics

    - by Mike Malloy
    To improve the performance of Solr, find your slowest searches and monitor query results and the caches (query result, document and filter caches, their hit rates and sizes); to find problems with Solr update handlers, track index operations and document operations. There is a tool from New Relic which may help: http://www.newrelic.com/solr.html

    Read the article

  • CakePHP: filter index pages according to foreign keys

    - by Marki
    Hi there, I'm pretty new to CakePHP and am missing a crucial feature that isn't generated by the scaffolding: filtering. What do I have to do to provide dropdowns or multi-selects on the index pages for each field that is a (foreign) key, thereby allowing the table to be filtered ("OR" inside a multi-select, "AND" between different multi-selects, if any)? From what my web search has shown, many more people are trying to accomplish the same thing, but I couldn't find anything that works for me: either they have text fields and do wildcard filtering, or the plugins they propose only work for 1.2, whereas I started with 1.3, etc. Can someone clear up the confusion and maybe present some working code, or direct me to the definitive guide[tm] where this matter has been solved? Thanks

    Read the article

  • Lucene (.NET) document structure and performance suggestions

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deeply into NumericField, but I'm not convinced it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] plus N DataField:[someNumber], and I then query it with something like DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199))). Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that...

    Read the article

  • Counting Alphabetic Characters That Are Contained in an Array with C

    - by Craig
    Hello everyone, I am having trouble with a homework question that I've been working at for quite some time. I don't know exactly what the question is asking and need some clarification on that, and also a push in the right direction. Here is the question: (2) Solve this problem using one single-subscripted array of counters. The program uses an array of characters defined using the C initialization feature. The program counts the number of each of the alphabetic characters a to z (only lower case characters are counted) and prints a report (in a neat table) of the number of occurrences of each lower case character found. Only print the counts for the letters that occur at least once; that is, do not print a count if it is zero. DO NOT use a switch statement in your solution. NOTE: if x is of type char, x-'a' is the difference between the ASCII codes for the character in x and the character 'a'. For example if x holds the character 'c' then x-'a' has the value 2, while if x holds the character 'd', then x-'a' has the value 3. Provide test results using the following string: "This is an example of text for exercise (2)." And here is my source code so far:

        #include <stdio.h>

        int main()
        {
            char c[] = "This is an example of text for exercise (2).";
            char d[26];
            int i;
            int j = 0;
            int k;
            j = 0;
            //char s = 97;

            for(i = 0; i < sizeof(c); i++)
            {
                for(s = 'a'; s < 'z'; s++){
                    if( c[i] == s){
                        k++;
                        printf("%c,%d\n", s, k);
                        k = 0;
                    }
                }
            }
            return 0;
        }

    As you can see, my current solution is a little anemic. Thanks for the help, and I know everyone on the net doesn't necessarily like helping with other people's homework. ;P
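
    The hint in the assignment is the whole trick: x - 'a' turns a lower-case letter into an index from 0 to 25 into a single array of counters, so one pass over the string fills the table and a second pass prints the non-zero entries. A sketch of that shape in Python (the assignment itself requires C, so this only illustrates the indexing idea):

        text = "This is an example of text for exercise (2)."

        counts = [0] * 26                         # one counter per letter a..z
        for ch in text:
            if "a" <= ch <= "z":                  # only lower-case letters are counted
                counts[ord(ch) - ord("a")] += 1   # the x - 'a' trick from the hint

        for i, n in enumerate(counts):
            if n > 0:                             # skip letters that never occur
                print(chr(ord("a") + i), n)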

    Read the article

  • Get highest frequency terms from Lucene index

    - by Julia
    Hello! I need to extract the terms with the highest frequencies from several Lucene indexes, to use them for some semantic analysis. So I want to get maybe the top 30 most-occurring terms (I still have not decided on a threshold; I will analyze the results) and their per-index counts. I am aware that I might lose some precision because of potentially dropped duplicates, but for now let's say I am OK with that. For the proposed solutions, needless to say, speed is not important since this is static analysis; I would put the accent on simplicity of implementation, because I'm not so skilled with Lucene (not a programming guru either :/ ) and can't wrap my mind around many of its concepts. I cannot find any code samples for something similar, so all concrete advice (code, pseudocode, links to code samples...) will be much appreciated!!! Thank you!
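
    However the per-term document frequencies are pulled out of the index, the "top 30" step itself is just a bounded selection. A small sketch in Python of that final step, assuming the (term, count) pairs have already been extracted from Lucene by other means (the extraction is not shown, and the sample data is made up):

        import heapq

        def top_terms(term_counts, n=30):
            """term_counts: iterable of (term, count); returns the n most frequent."""
            return heapq.nlargest(n, term_counts, key=lambda tc: tc[1])

        # Hypothetical per-index counts, e.g. produced by walking one index's terms.
        index_a = [("lucene", 420), ("search", 310), ("the", 9000), ("index", 120)]
        print(top_terms(index_a, n=3))   # [('the', 9000), ('lucene', 420), ('search', 310)]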

    Read the article

  • Can Sphinx be used over Cassandra?

    - by Mickey Shine
    I am planning to build a Cassandra-based store, and I also need a full-text (Chinese) search system. Can Sphinx be used on top of Cassandra? (Sphinx supports an XML data source, but I am not going to use it because it is slow and much of the time is spent on XML parsing.) Or, if you have ever built a full-text search system over Cassandra, please share your experiences. Thank you

    Read the article
