Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 36/45

  • 302 vs 301 redirect in this specific case

    - by Binder
    We have a website that displays information in a location-based manner, i.e. it detects the visiting user's IP and redirects him/her to an appropriate landing page; e.g. a user coming from Egypt will be redirected to http://www.mysite.com/egypt/cairo and a user visiting from Dubai will be redirected to http://www.mysite.com/uae/dubai, and so on; we cater to multiple locations in the Middle East. Now, we have been advised by our SEO consultant that we should put a 301 (permanent redirect) on http://www.mysite.com to point to http://www.mysite.com/ksa/riyadh. I would like to know the negative implications this would have on Google indexing or otherwise, as I fundamentally disagree with this suggestion and believe that in a situation like this a 302 redirect would be more appropriate.
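
    A minimal sketch of the behaviour the poster argues for, in Python with Flask purely as an illustrative framework (neither is named in the post; lookup_region is a hypothetical stub):

        from flask import Flask, redirect, request

        app = Flask(__name__)

        def lookup_region(ip):
            # Hypothetical geo-IP lookup; a real one would query a geo database.
            return "uae/dubai"

        @app.route("/")
        def geo_redirect():
            # A 302 signals that the target depends on this particular request,
            # so crawlers keep the root URL itself in the index.
            region = lookup_region(request.remote_addr)
            return redirect("/" + region, code=302)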

    Read the article

  • Background thread in C#

    - by Xodarap
    When the user saves some data, I want to spin off a background thread to update my indexes and do some other random stuff. Even if there is an error in this indexing, the user can't do anything about it, so there is no point in forcing the main thread to wait until the background thread finishes. I'm doing this from an ASP.NET process, so I think I should be able to do this (as the main thread exiting won't kill the process). When I set a breakpoint in the background thread's method, though, the main thread also appears to stop. Is this just an artifact of Visual Studio's debugger, or is the main thread really not going to return until the background thread stops?

    Read the article

  • Tips for improving performance of a DB above 40 GB (SQL Server 2005) and growing monthly

    - by HotTester
    The current DB of our project has crossed 40 GB this month, and on average it is growing by around 3 GB per month. All the tables are well normalized and proper indexing is in place, but as the size grows it is taking more time to run even basic queries like 'select count(1) from table'. So can you share some more points that will help on this front? The database is SQL Server 2005. Further, if we implement partitioning, wouldn't it create overhead? Thanks in advance.

    Read the article

  • Make Sphinx quiet (non-verbose)

    - by J. Pablo Fernández
    I'm using Sphinx through Thinking Sphinx in a Ruby on Rails project. When I create seed data, and at other times too, it's quite verbose, printing this for every record that is created:

        using config file '/Users/pupeno/projectx/config/development.sphinx.conf'...
        indexing index 'user_delta'...
        collected 7 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, 100.0% done
        sorted 0.0 Mhits, 99.6% done
        total 7 docs, 159 bytes
        total 0.042 sec, 3749.29 bytes/sec, 165.06 docs/sec
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff

    Is there a way to suppress that output?

    Read the article

  • Python dictionary with constant value-type

    - by s.kap
    Hi there, I bumped into a case where I need a big (= huge) Python dictionary, which turned out to be quite memory-consuming. However, since all of the values are of a single type (long), as are the keys, I figured I can use a Python (or numpy, doesn't really matter) array for the values, and wrap the needed interface (in: x; out: d[x]) with an object which actually uses these arrays for the key and value storage. I can use an index-conversion object (input -> index in 1..n, where n is the count of distinct values) and return array[index]. I can elaborate on some techniques for implementing such an indexing method with reasonable memory requirements; it works, and even pretty well. However, I wonder whether such a data-structure object already exists (in Python, or wrapped for Python from C/C++) in any package (I checked collections, and did some Google searches). Any comment will be welcome, thanks.
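
    A rough sketch of the kind of wrapper the poster describes, assuming numpy for the parallel arrays (the class name and construction are illustrative, not an existing package):

        import numpy as np

        class IntIntMap:
            # Parallel int64 arrays; keys are sorted once so lookups can use
            # np.searchsorted (binary search) instead of a hash table.
            def __init__(self, keys, values):
                order = np.argsort(keys)
                self._keys = np.asarray(keys, dtype=np.int64)[order]
                self._vals = np.asarray(values, dtype=np.int64)[order]

            def __getitem__(self, key):
                i = np.searchsorted(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    return int(self._vals[i])
                raise KeyError(key)

        m = IntIntMap([10, 3, 7], [100, 30, 70])
        print(m[7])  # -> 70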

    Read the article

  • Different analyzers for each field

    - by user72185
    Hi, how can I enable different analyzers for each field in a document I'm indexing with Lucene? Example:

        RAMDirectory dir = new RAMDirectory();
        IndexWriter iw = new IndexWriter(dir,
            new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_CURRENT),
            true, IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        Field field1 = new Field("field1", someText1, Field.Store.YES,
            Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
        Field field2 = new Field("field2", someText2, Field.Store.YES,
            Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS);
        doc.Add(field1);
        doc.Add(field2);
        iw.AddDocument(doc);
        iw.Commit();

    The analyzer is an argument to the IndexWriter, but I want to use StandardAnalyzer for field1 and SimpleAnalyzer for field2. How can I do that? The same applies when searching, of course: the correct analyzer must be applied for each field.

    Read the article

  • Idiomatic way to take groups of n items from a list in Python?

    - by Wang
    Given a list

        A = [1, 2, 3, 4, 5, 6]

    is there any idiomatic (Pythonic) way to iterate over it as though it were

        B = [(1, 2), (3, 4), (5, 6)]

    other than indexing? That feels like a holdover from C:

        for a1, a2 in [(A[i], A[i+1]) for i in range(0, len(A), 2)]:

    I can't help but feel there should be some clever hack using itertools or slicing or something. (Of course, two at a time is just an example; I'd like a solution that works for any n.) Edit: related: http://stackoverflow.com/questions/1162592/iterate-over-a-string-2-or-n-characters-at-a-time-in-python but even the cleanest solution (accepted, using zip) doesn't generalize well to higher n without a list comprehension and *-notation.
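
    One minimal sketch of the usual idiom, relying on zip pulling n times from a single shared iterator:

        def grouped(iterable, n):
            # Each tuple draws n consecutive items from the same iterator.
            it = iter(iterable)
            return zip(*[it] * n)

        A = [1, 2, 3, 4, 5, 6]
        for a1, a2 in grouped(A, 2):
            print(a1, a2)   # 1 2 / 3 4 / 5 6

    Note that zip silently drops an incomplete trailing group; itertools.zip_longest(*[it] * n) pads it instead.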

    Read the article

  • Begin Viewing Query Results Before Query Ends

    - by Frank Developer
    OK, so say I have a table with 500K rows and I run an ad-hoc query with no supporting index, which requires a full table scan. I would like to view the first rows returned immediately, while the full table scan continues, and then scroll through the next results. In the meantime, I would like to display the progress of the table scan, for example: "SEARCHING.. FOUND 23 OF 500,000 ROWS SO FAR". If I scroll too far ahead, I want to display a message like: "REACHED LAST ROW IN LOOK-AHEAD BUFFER.. QUERY HAS NOT COMPLETED". Can this be done? Maybe like: spawn/exec, declare scroll cursor, open, fetch, etc.?
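
    A sketch of the streaming half of this, assuming PostgreSQL with psycopg2 (the post names no database; display is a hypothetical UI hook). A named, server-side cursor returns rows in batches, so the first batch is viewable long before the scan finishes; overall scan progress, though, is not reported this way:

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")   # hypothetical DSN
        cur = conn.cursor(name="scan")           # named => server-side cursor
        cur.execute("SELECT * FROM big_table WHERE unindexed_col = %s", ("x",))

        seen = 0
        while True:
            batch = cur.fetchmany(100)           # streams while the scan runs
            if not batch:
                break
            seen += len(batch)
            print("FETCHED %d ROWS SO FAR" % seen)
            display(batch)                       # hypothetical UI hook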

    Read the article

  • How can I test my DB speed? (Learning)

    - by acidzombie24
    I have designed a database. There are no indexed columns and no optimization code yet. I am positive I should index certain columns, since I search them a lot. My question is: HOW do I test whether any part of my database will be slow? At the moment I am using SQLite, and I will be switching to either MS SQL or MySQL based on my host provider. Will creating 100,000 records in each table be enough? Or will that always be fast in SQLite and I need to do 1 million? Do I need 10 million before a database becomes slow? Also, how do I time it? I am using C#, so should I use StopWatch, or is there an ADO.NET/SQLite function I should use?
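
    A sketch of the measure-before-and-after pattern, in Python with sqlite3 rather than the poster's C#, purely to keep the example self-contained:

        import sqlite3, time

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.executemany("INSERT INTO users (name) VALUES (?)",
                         ((f"user{i}",) for i in range(100_000)))
        conn.commit()

        def timed_lookup(label):
            start = time.perf_counter()
            conn.execute("SELECT id FROM users WHERE name = ?",
                         ("user99999",)).fetchone()
            print(f"{label}: {(time.perf_counter() - start) * 1000:.2f} ms")

        timed_lookup("no index (full scan)")
        conn.execute("CREATE INDEX idx_users_name ON users (name)")
        timed_lookup("with index")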

    Read the article

  • JS DOM: Get elements by text content.

    - by hristo
    Hello! I am looking for a way to perform full-text search on the DOM tree with JS. In two words, I would like to retrieve the list of text nodes which contain a given string. I've tried MooTools' Element.getElements(':contains[string]') but I can't get it to work with strings containing whitespace. I'm thinking about simply indexing all text nodes and checking each node for the string being searched for, but in my project there's no way of telling when the DOM updates in order to keep such an index up to date. Any better ideas? Thanks

    Read the article

  • A tool to determine jar dependencies based on existing code?

    - by geoffeg
    Is there a tool that can determine .jar dependencies given a directory of .jar files and a separate directory of Java source code? I need to generate Eclipse .classpath files based on an existing code base that doesn't have any dependencies defined. To be more specific, I've been given a large codebase consisting of a dozen or so J2EE-style projects and a single directory of jar files. My client uses a custom development and build framework that is just too arcane for me to use and get any real work done. The projects do not have any information about their dependencies, either between projects or on jar libraries. I would expect this tool to spin through each jar file, indexing the classes available in that file, and then go through each file in the project source tree and match up the dependencies, possibly writing out a .classpath file with the required jar files. I realize this is a rather simplistic view of the operation, as duplicate classes among the jar files and such might make things more difficult.
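
    A sketch of the jar-scanning half of that operation, in Python for brevity (matching the index against source files is omitted):

        import zipfile
        from pathlib import Path

        def index_jars(jar_dir):
            # Map fully-qualified class name -> jar file that provides it.
            class_to_jar = {}
            for jar in sorted(Path(jar_dir).glob("*.jar")):
                with zipfile.ZipFile(jar) as zf:
                    for name in zf.namelist():
                        if name.endswith(".class"):
                            fqcn = name[:-len(".class")].replace("/", ".")
                            # First jar wins on duplicates; a real tool would
                            # need an explicit resolution policy.
                            class_to_jar.setdefault(fqcn, jar.name)
            return class_to_jar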

    Read the article

  • using NEWSEQUENTIALID() with UPDATE Trigger

    - by Ram
    I am adding a new GUID/uniqueidentifier column to my table:

        ALTER TABLE table_name
        ADD VersionNumber UNIQUEIDENTIFIER UNIQUE NOT NULL DEFAULT NEWSEQUENTIALID()
        GO

    And whenever a record is updated in the table, I want to update this "VersionNumber" column, so I create a new trigger:

        CREATE TRIGGER [DBO].[TR_TABLE_NAME] ON [DBO].[TABLE_NAME]
        AFTER UPDATE
        AS
        BEGIN
            UPDATE TABLE_NAME
            SET VERSIONNUMBER = NEWSEQUENTIALID()
            FROM TABLE_NAME D
            JOIN INSERTED I ON D.ID = I.ID /* some ID which is used to join */
        END
        GO

    But I just realized that NEWSEQUENTIALID() can only be used with CREATE TABLE or ALTER TABLE. I got this error:

        The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression.

    Is there a workaround for this? Edit 1: Changing NEWSEQUENTIALID() to NEWID() in the trigger solves this, but I am indexing this column and using NEWID() would be sub-optimal.

    Read the article

  • C#: I'm getting an error

    - by vj4u
        double dval = 1;
        for (int i = 0; i < Cols; i++)
        {
            k = 0;
            dval = 1;
            for (int j = Cols - 1; j >= 0; j--)
            {
                colIndex = (i + j) % 3;
                val *= dval[colIndex, k];
                k++;
            }
            det -= dval;
        }

    I'm getting the error "Cannot apply indexing with [] to an expression of type 'double'" for dval. Help, it's urgent.

    Read the article

  • Strange difference between optimized/non-optimized Microsoft C++ code

    - by Anders Forsgren
    I have a C++ program with a method that looks something like this:

        int myMethod(int* arr1, int* arr2, int* index)
        {
            arr1--;
            arr2--;
            int val = arr1[*index];
            int val2 = arr2[val];
            doMoreThings(val);
        }

    With optimizations enabled (/O2), the first line, where the first pointer is decremented, is not executed. I assume the compiler believes that the arr1 array is not used, so it thinks it can remove the decrement. Am I violating some convention in the above code? What could cause this behavior? It is a very old piece of f2c-translated code; the pointer decrement is due to the 1-based indexing of the original code.

    Read the article

  • Architecture for analysing search result impressions/clicks to improve future searches

    - by Hais
    We have a large database of items (10m+) stored in MySQL and intend to implement search on the metadata of these items, taking advantage of something like Sphinx. The dataset will be changing slightly on a daily basis, so Sphinx will be re-indexing daily. However, we want the algorithm to self-learn and improve search results by analysing impression and click data, so that we provide better results for our customers on that search term, and possibly other similar search terms too. I've been reading up on Hadoop and it seems like it has the potential to crunch all this data, although I'm still unsure how to approach it. Amazon has tutorials for compiling impression vs click data using MapReduce, but I can't see how to get this data into a usable format. My idea is that when a search term comes in, I query Sphinx to get all the matching items from the dataset, then query the analytics (compiled on an hourly basis or similar) so that we know the most popular items for that search term, then cache the final results using something like Memcached, Membase or similar. Am I along the right lines here?
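
    A toy sketch of the re-ranking step described, with hypothetical inputs (Sphinx result IDs and an hourly click-count table):

        def rerank(results, click_counts):
            # Order hits by historical clicks for this search term, best first.
            return sorted(results,
                          key=lambda item_id: click_counts.get(item_id, 0),
                          reverse=True)

        results = ["item42", "item7", "item99"]   # hypothetical Sphinx hits
        clicks = {"item7": 130, "item99": 12}     # hypothetical analytics output
        print(rerank(results, clicks))            # ['item7', 'item99', 'item42']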

    Read the article

  • No optimization causes wrong search result

    - by KailZhang
    I just took over our Solr/Lucene setup from my ex-colleague, but there is a weird bug: if there is no optimization after a data import, i.e. if there are multiple segment files, the search results are wrong. We are using a customized Solr searchComponent. As far as I know about Lucene, optimization merely merges index segments to improve performance and should not affect search results. I suspect this may be related to multithreading, or an unclosed searcher/reader, or something similar. Can anybody help? Thank you.

    Read the article

  • Best way to reuse a Runnable

    - by Gandalf
    I have a class that implements Runnable and am currently using an Executor as my thread pool to run tasks (indexing documents into Lucene):

        executor.execute(new LuceneDocIndexer(doc, writer));

    My issue is that my Runnable class creates many Lucene Field objects, and I would rather reuse them than create new ones on every call. What's the best way to reuse these objects? (Field objects are not thread-safe, so I cannot simply make them static.) Should I create my own ThreadFactory? I notice that after a while the program starts to degrade drastically, and the only thing I can think of is GC overhead. I am currently trying to profile the project to be sure this is even an issue, but for now let's just assume it is.

    Read the article

  • Constructing a logical expression which will count bits in a byte

    - by danatel
    When interviewing new candidates, we usually ask them to write a piece of C code to count the number of bits with value 1 in a given byte variable (e.g. the byte 3 has two 1-bits). I know all the common answers, such as right-shifting eight times, or indexing into a constant table of 256 precomputed results. But is there a smarter way that avoids the precomputed table? What is the shortest combination of byte operations (AND, OR, XOR, +, -, binary negation, left and right shift) which computes the number of bits?
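
    One well-known table-free answer, sketched here in Python for brevity: Kernighan's trick clears the lowest set bit each iteration, so it loops once per 1-bit rather than once per bit position. (A fully branch-free alternative is the parallel-add "SWAR" sequence.)

        def count_bits(b):
            count = 0
            while b:
                b &= b - 1   # clears the lowest set bit
                count += 1
            return count

        print(count_bits(3))     # 2
        print(count_bits(0xFF))  # 8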

    Read the article

  • MySQL indexes - what are the best practices?

    - by Haroldo
    I've been using indexes on my MySQL databases for a while now but have never properly learnt about them. Generally I put an index on any fields that I will be searching or selecting using a WHERE clause, but sometimes it doesn't seem so black and white. What are the best practices for MySQL indexes? Example situations/dilemmas: If a table has six columns and all of them are searchable, should I index all of them or none of them? What are the negative performance impacts of indexing? If I have a VARCHAR 2500 column which is searchable from parts of my site, should I index it?

    Read the article

  • Scala, make my loop more functional

    - by Pengin
    I'm trying to reduce the extent to which I write Scala (2.8) like Java. Here's a simplification of a problem I came across. Can you suggest improvements on my solutions that are "more functional"? Transform the map

        val inputMap = mutable.LinkedHashMap(1 -> 'a', 2 -> 'a', 3 -> 'b', 4 -> 'z', 5 -> 'c')

    by discarding any entries with value 'z' and indexing the characters as they are encountered. First try:

        var outputMap = new mutable.HashMap[Char, Int]()
        var counter = 0
        for (kvp <- inputMap) {
          val character = kvp._2
          if (character != 'z' && !outputMap.contains(character)) {
            outputMap += (character -> counter)
            counter += 1
          }
        }

    Second try (not much better, but uses an immutable map and a foreach):

        var outputMap = new immutable.HashMap[Char, Int]()
        var counter = 0
        inputMap.foreach {
          case (number, character) =>
            if (character != 'z' && !outputMap.contains(character)) {
              outputMap += (character -> counter)
              counter += 1
            }
        }

    Read the article

  • Manipulating 15+ million records in MySQL with PHP?

    - by Nithish
    Hey, I've got a user table containing 15+ million records, and in the registration function I wish to check whether the username already exists. I added an index on the username column, but when I run the query

        select count(uid) from users where username = 'webdev'

    it keeps loading a blank screen and finally hangs. I'm doing this on my localhost with PHP 5 and MySQL 5. So suggest me some technique to handle this situation. Is MongoDB a good alternative for handling this process on our local machine? Thanks, Nithish.
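
    A sketch of the cheaper existence check (LIMIT 1 instead of COUNT), shown with Python's sqlite3 only because it is in the standard library; the SQL shape is what matters:

        import sqlite3

        conn = sqlite3.connect("users.db")   # hypothetical path

        def username_taken(name):
            # With an index on username this touches at most one row,
            # instead of counting every match the way COUNT(uid) does.
            row = conn.execute(
                "SELECT 1 FROM users WHERE username = ? LIMIT 1", (name,)
            ).fetchone()
            return row is not None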

    Read the article

  • Generic Database table design

    - by Gazeth
    Just trying to figure out the best way to design my table for the following scenario: I have several areas in my system (documents, projects, groups and clients) and each of these can have comments logged against them. My question is: should I have one table like this, where only one of the IDs will have data and the rest will be NULL:

        CommentID
        DocumentID
        ProjectID
        GroupID
        ClientID
        etc.

    or should I have a separate CommentType table and have my comments table like this:

        CommentID
        CommentTypeID
        ResourceID (this being the ID of the project/doc/client)
        etc.

    My thoughts are that option 2 would be more efficient from an indexing point of view.

    Read the article

  • How to change the text of multiple asp:Label controls using a for loop in C# ASP.NET

    - by Minelava
    I want to change several asp:Label controls at once. Here is the ASP.NET markup:

        <asp:Label ID="lbl_Text1" runat="server" Text="">
        <asp:Label ID="lbl_Text2" runat="server" Text="">
        <asp:Label ID="lbl_Text3" runat="server" Text="">
        <asp:Label ID="lbl_Text4" runat="server" Text="">

    Instead of using this C# code:

        lbl_Text1.Text = "hello";
        lbl_Text2.Text = "hello";
        lbl_Text3.Text = "hello";
        lbl_Text4.Text = "hello";

    I tried to use a for loop:

        for (int i = 1; i <= 4; i++)
        {
            lbl_Text[i].Text = "hello";
        }

    And I get this error:

        Cannot apply indexing with [] to an expression of type 'System.Web.UI.WebControls.Label'

    Can anyone help me with that?

    Read the article

  • Fastest method of merging the two: dicts vs lists

    - by tipu
    I'm doing some indexing, and memory is sufficient but CPU isn't. So I have one huge dictionary and then a smaller dictionary I'm merging into the bigger one:

        big_dict = {"the": {"1": 1, "2": 1, "3": 1, "4": 1, "5": 1}}
        smaller_dict = {"the": {"6": 1, "7": 1}}
        # after merging
        resulting_dict = {"the": {"1": 1, "2": 1, "3": 1, "4": 1, "5": 1, "6": 1, "7": 1}}

    My question is: for the values in both dicts, should I use a dict (as displayed above) or a list (as displayed below), when my priority is to use as much memory as possible to get the most out of my CPU? For clarification, using a list would look like:

        big_dict = {"the": [1, 2, 3, 4, 5]}
        smaller_dict = {"the": [6, 7]}
        # after merging
        resulting_dict = {"the": [1, 2, 3, 4, 5, 6, 7]}

    Side note: the reason I'm using a dict nested in a dict rather than a set nested in a dict is that JSON won't let me do json.dumps, because a set isn't key/value pairs; it's (as far as the JSON library is concerned) {"a", "series", "of", "keys"}. Also, after choosing between a dict and a list, how would I go about implementing the most efficient, in terms of CPU, method of merging them? I appreciate the help.
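
    A minimal sketch of the merge in both representations, using the post's own sample data:

        big_dict = {"the": {"1": 1, "2": 1, "3": 1, "4": 1, "5": 1}}
        smaller_dict = {"the": {"6": 1, "7": 1}}
        # dict-of-dicts: update() merges each postings dict in place
        for term, postings in smaller_dict.items():
            big_dict.setdefault(term, {}).update(postings)
        print(big_dict)     # {'the': {'1': 1, ..., '7': 1}}

        big_list = {"the": [1, 2, 3, 4, 5]}
        smaller_list = {"the": [6, 7]}
        # dict-of-lists: extend() appends, but does not deduplicate
        for term, postings in smaller_list.items():
            big_list.setdefault(term, []).extend(postings)
        print(big_list)     # {'the': [1, 2, 3, 4, 5, 6, 7]}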

    Read the article
