Search Results

Search found 1650 results on 66 pages for 'indexes'.


  • "Did you mean" feature on a dictionary database

    - by Hazar
    I have a table of roughly 300,000 rows of technical terms, queried from PHP via MySQL with FULLTEXT indexes. But when I search for a mistyped term, for example "hyperpext", I naturally get no results. I need to compensate for small typing errors and return the nearest record from the database. How can I accomplish such a feature? I know (actually, learned today) about the Levenshtein distance, Soundex and Metaphone algorithms, but I don't yet have a solid idea of how to apply them when querying the database. Best regards. (Sorry about my poor English, I'm trying to do my best)
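
    A sketch of one common approach, assuming a hypothetical terms table with a term column and a PDO handle in $pdo: narrow candidates with MySQL's built-in SOUNDEX(), then rank them with PHP's levenshtein():

        <?php
        // a sketch: SOUNDEX narrows candidates, levenshtein() picks the closest
        $input = 'hyperpext';
        $stmt = $pdo->prepare(
            "SELECT term FROM terms WHERE SOUNDEX(term) = SOUNDEX(?)");
        $stmt->execute(array($input));

        $best = null;
        $bestDist = PHP_INT_MAX;
        foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $candidate) {
            $d = levenshtein($input, $candidate);
            if ($d < $bestDist) {
                $bestDist = $d;
                $best = $candidate;   // e.g. "hypertext"
            }
        }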

    Read the article

  • Determining the order of a list of numbers (possibly without sorting)

    - by Victor Liu
    I have an array of unique integers (e.g. val[i]), in arbitrary order, and I would like to populate another array (ord[i]) with the sorted indexes of the integers. In other words, val[ord[i]] is in sorted order for increasing i. Right now, I just fill ord with 0, ..., N-1 and then sort it based on the value array, but I am wondering if we can be more efficient about it, since ord is not populated to begin with. This is more a question out of curiosity; I don't really care about the extra overhead of having to prepopulate a list and then sort it (it's small, and I use insertion sort). This may be a silly question with an obvious answer, but I couldn't find anything online.
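
    For reference, a minimal sketch of the fill-then-sort approach in Java (ord is boxed so a comparator can order it by val):

        // a sketch: fill ord with 0..N-1, then sort it by val[ord[i]]
        import java.util.Arrays;
        import java.util.Comparator;

        public class ArgSort {
            public static void main(String[] args) {
                final int[] val = {30, 10, 20};
                Integer[] ord = new Integer[val.length];
                for (int i = 0; i < ord.length; i++) ord[i] = i;
                Arrays.sort(ord, new Comparator<Integer>() {
                    public int compare(Integer a, Integer b) {
                        return Integer.compare(val[a], val[b]);
                    }
                });
                System.out.println(Arrays.toString(ord)); // [1, 2, 0]
            }
        }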

    Read the article

  • htaccess: how to prevent infinite subdirectories?

    - by Vincent Isles
    My .htaccess at the root directory contains this:

        Options -MultiViews +FollowSymlinks -Indexes
        ErrorDocument 404 /404.htm
        RewriteEngine On
        RewriteBase /

    followed by a bunch of RewriteRules. The .htaccess in the /forum subdirectory contains this:

        RewriteEngine On
        RewriteBase /forum
        RewriteRule ^index.html$ index.php [L,NE]
        RewriteRule ^(.*)-t-([0-9]+).html(.*)$ showthread.php?tid=$2$3 [QSA,L]
        RewriteRule ^(.*)-t-([0-9]+)-([0-9]+).html$ showthread.php?tid=$2&page=$3 [QSA,L]

    followed by other rules mapping SEO-friendly URLs to the real URLs. mydomain.com/forum/a-thread-t-1.html returns the page showthread.php?tid=1, but so do mydomain.com/forum/forum/a-thread-t-1.html, mydomain.com/forum/forum/forum/a-thread-t-1.html and so on. I don't want this behavior; I want pages accessed as /forum/forum/* to return a 404. Any hint on where I went wrong?
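
    One hedged guess: the leading (.*) in those patterns also matches slashes, so per-directory rewriting lets "forum/a-thread" satisfy the pattern. Restricting the capture to non-slash characters should make the duplicated paths fall through to the 404:

        # a guess at a fix: a thread-title segment may not contain slashes,
        # so /forum/forum/... no longer matches these rules
        RewriteRule ^([^/]+)-t-([0-9]+)\.html(.*)$ showthread.php?tid=$2$3 [QSA,L]
        RewriteRule ^([^/]+)-t-([0-9]+)-([0-9]+)\.html$ showthread.php?tid=$2&page=$3 [QSA,L]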

    Read the article

  • Disable Primary Key and Re-Enable After SQL Bulk Insert

    - by Jon
    I am about to run a massive data insert into my DB. I have managed to work out how to disable and rebuild non-clustered indexes on my tables, but I also want to disable/enable primary keys. You can't disable the clustered index behind the primary key, as the table is inaccessible when that is done, and my attempt to do an ALTER TABLE for constraints does not work, as I think that is only for foreign keys. Do you know of a way to disable the primary key and re-enable it after a SQL bulk insert? NOTE: This is over numerous tables, so I don't know the exact primary key specifications (e.g. name).
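
    A hedged sketch for the "I don't know the names" part: the PK constraint names can be pulled from the catalog views and turned into DROP statements (re-creating them afterwards still needs the key columns, which sys.index_columns can supply):

        -- a sketch, assuming SQL Server 2005+: generate DROP statements
        -- for every primary key; run the generated output before the load
        SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
             + ' DROP CONSTRAINT ' + QUOTENAME(kc.name) + ';'
        FROM sys.key_constraints AS kc
        JOIN sys.tables  AS t ON kc.parent_object_id = t.object_id
        JOIN sys.schemas AS s ON t.schema_id = s.schema_id
        WHERE kc.type = 'PK';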

    Read the article

  • Background thread in .NET

    - by Xodarap
    When the user saves some data, I want to spin off a background thread to update my indexes and do some other random stuff. Even if there is an error in this indexing, the user can't do anything about it, so there is no point in forcing the main thread to wait until the background thread finishes. I'm doing this from an ASP.NET process, so I think I should be able to (as the main thread exiting won't kill the process). When I set a breakpoint in the background thread's method, though, the main thread also appears to stop. Is this just an artifact of Visual Studio's debugger, or is the main thread really not going to return until the background thread stops?
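
    Worth noting: hitting a breakpoint suspends all threads in the debuggee, which would explain the observation. A minimal sketch of the fire-and-forget pattern described, where UpdateIndexes() is a hypothetical stand-in for the real work:

        // a sketch: queue the indexing on a pool thread; swallow errors the
        // user cannot act on (UpdateIndexes() is hypothetical)
        System.Threading.ThreadPool.QueueUserWorkItem(delegate
        {
            try
            {
                UpdateIndexes();
            }
            catch (System.Exception)
            {
                // log only: nothing the original request can do about it
            }
        });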

    Read the article

  • Function calls in virtual machine killing performance

    - by GenTiradentes
    I wrote a virtual machine in C, which has a call table populated with pointers to the functions that implement the VM's opcodes. When the virtual machine runs, it first interprets the program, building an array of indexes into the call table, one per opcode encountered. It then loops through the array, calling each function until it reaches the end. Each instruction is extremely small, typically one line: perfect for inlining. The problem is that the compiler doesn't know when any of the virtual machine's instructions are going to be called, since that's decided at runtime, so it can't inline them. The overhead of the function calls and argument passing is killing the performance of my VM. Any ideas on how to get around this?
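
    A common way around this is to merge dispatch and opcode bodies into a single function, for example with GCC's computed-goto extension, so each "call" becomes an indexed jump and the VM state can stay in registers. A toy sketch (the opcodes and program are made up):

        /* a sketch of direct-threaded dispatch using GCC's computed goto;
           opcode bodies are inlined into one function, no calls at all */
        #include <stdio.h>

        int run(const int *code) {
            static const void *dispatch[] = { &&op_push1, &&op_add, &&op_halt };
            int acc = 0;
            goto *dispatch[*code++];          /* jump to the first opcode */

        op_push1: acc += 1;                   /* opcode 0 */
                  goto *dispatch[*code++];
        op_add:   acc += acc;                 /* opcode 1 */
                  goto *dispatch[*code++];
        op_halt:  return acc;                 /* opcode 2 */
        }

        int main(void) {
            int program[] = { 0, 0, 1, 2 };   /* push1, push1, add, halt */
            printf("%d\n", run(program));     /* prints 4 */
            return 0;
        }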

    Read the article

  • How to enable indexing of pages with dynamic data?

    - by mithunb
    I have a site where certain URLs point to pages with permanent data and others point to dynamic pages. Google indexes both regularly. By the time a user finds one of the dynamic-content URLs, the data on the page has already changed and the user does not find what he was looking for. Further, the dynamic pages contain links to the permanent URLs (which I do want Google, or any crawler, to index). Google's crawler controls (Webmaster Tools) cannot be told to read URLs from a page but not index the page itself. Any solutions, whether crawling strategies or system architecture?

    Read the article

  • How do you find the last element of an array while iterating using a foreach loop in PHP?

    - by Vaibhav Kamble
    I am writing a SQL query creator using some parameters, and while doing that I came across this problem. In Java, it's very easy to detect the last element of an array from inside a for loop by comparing the current position with the array length:

        for (int i = 0; i < arr.length; i++) {
            boolean isLastElem = (i == arr.length - 1);
        }

    PHP works differently: arrays can have non-integer keys, so you usually iterate over them with a foreach loop. That becomes problematic when you need to make a decision (in my case, whether to append an OR/AND parameter while building the query). I am sure there must be some standard way of doing this. How do you normally solve this in PHP?
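
    A minimal sketch of one standard approach: keep a counter and compare it against count(), which works regardless of what the keys are:

        <?php
        // a sketch: a counter against count() flags the last element
        $params = array('a' => 1, 'b' => 2, 'c' => 3);
        $i = 0;
        $n = count($params);
        foreach ($params as $key => $value) {
            $isLast = (++$i === $n);
            echo $key . ($isLast ? "\n" : " AND ");  // prints: a AND b AND c
        }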

    Read the article

  • My mod_rewrite won't work, what's wrong?

    - by Tim Rogers
    I have the following rewrite rules, but nothing is happening at all when I try to use them. I have the files in the directory server.blahblahblah.com/todo and the following is my .htaccess file:

        Options +FollowSymLinks
        Options +Indexes
        RewriteEngine On
        RewriteBase /
        RewriteRule ^tasks/view/([0-9]+)?/$ controller.php?task=view&id=$1
        RewriteRule ^tasks/view/([0-9]+).xml$ controller.php?task=viewxml&id=$1
        RewriteRule ^tasks/new?/$ controller.php?task=new
        RewriteRule ^tasks/delete/([0-9]+)?/$ controller.php?task=delete&id=$1
        RewriteRule ^tasks/completed/([0-9]+)?/$ controller.php?task=complete&id=$1
        RewriteRule ^tasks?/$ controller.php?task=home

    Does anyone know why this won't work at all? Thanks, Tim

    Read the article

  • Python: Time a code segment for testing performance (with timeit)

    - by Mestika
    Hi, I have a Python script which works just as it should, but I need to record the execution time. I've googled that I should use timeit, but I can't seem to get it to work. My Python script looks like this:

        import sys
        import getopt
        import timeit
        import random
        import os
        import re
        import ibm_db
        import time
        from string import maketrans

        myfile = open("results_update.txt", "a")

        for r in range(100):
            rannumber = random.randint(0, 100)
            update = "update TABLE set val = %i where MyCount >= '2010' and MyCount < '2012' and number = '250'" % rannumber
            #print rannumber
            conn = ibm_db.pconnect("dsn=myDB", "usrname", "secretPWD")
            query_stmt = ibm_db.prepare(conn, update)
            for r in range(5):
                print "Run %s\n" % r
                ibm_db.execute(query_stmt)

        myfile.close()
        ibm_db.close(conn)

    What I need is the time it takes to execute the query, written to the file "results_update.txt". The purpose is to test an update statement for my database with different indexes and tuning mechanisms. Sincerely, Mestika
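
    A minimal sketch of the timing itself, using time.time() around the execute and appending to the already-open file (timeit is better suited to micro-benchmarks of self-contained statements):

        # a sketch: wall-clock each execute and log it to the results file
        start = time.time()
        ibm_db.execute(query_stmt)
        elapsed = time.time() - start
        myfile.write("Run %s: %f seconds\n" % (r, elapsed))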

    Read the article

  • Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?

    - by Jay
    I've just started learning Python Django and have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Django's overly optimistic assumption that you will never need to write custom SQL, and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL, which frequently needs to be told which indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django's models file to create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
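
    A hedged sketch of both directions; db_constraint arrived in Django releases later than the 2010-era ones, so treat it as illustrative, and the Player model is hypothetical:

        # a sketch: keep the ORM-level relation, skip the DB-level FK
        from django.db import models

        class Game(models.Model):
            white_player = models.ForeignKey(
                'Player',
                on_delete=models.DO_NOTHING,  # no DB-side cascades either
                db_constraint=False,          # join in the ORM, no FK in schema
            )

        # database-first instead: generate models from an existing schema
        #   python manage.py inspectdb > models.py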

    Read the article

  • MySQL ORDER BY returns things in (apparently) random order !?

    - by Luke H
    The following query:

        SELECT DISTINCT ClassName FROM SiteTree ORDER BY ClassName

    is returning things in no apparent order! I get the same result whether or not I quote column/table names or use DISTINCT, and whether I add ASC or DESC. I assumed the indexes might be broken or something like that, so I tried dropping and recreating them; I also tried REPAIR TABLE and CHECK TABLE. The table collation is set to latin1_swedish_ci, while all the textual columns are set to UTF-8 with collation utf8_general_ci. What could be causing this?

    Read the article

  • Lucene search on specific field name?

    - by Rachel
    I have been playing around with an installation of Solr that indexes some data from my database. I am able to index data and query it back, but I was wondering how field-name queries work. For certain fields I can specify their name plus the search text and the results return as expected; for other fields, when I specify their name and search text, no results are returned.

        q=type:book                          (this works)
        q=type:book AND title:"The Title"    (no results returned)

    In this example, type is a required field and title is not. For the title search, I can see a document with the given title in the results of the first query, so I know a matching document exists. Is making a field 'required' the only way to be able to search it by name? [edit] I'm using the default installation and the 'example' folder inside of Solr, editing the XML files and using the interface available through start.jar to run, index and query.
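
    One hedged guess, since 'required' should not affect searchability: whether a field can be queried by name is governed by indexed="true" on its definition in schema.xml, along the lines of:

        <!-- a sketch: the field must be indexed="true" in schema.xml
             for q=title:... to match anything -->
        <field name="title" type="text" indexed="true" stored="true"/>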

    Read the article

  • Composite primary keys in N-M relation or not?

    - by BerggreenDK
    Let's say we have three tables (actually I have two at the moment, but this example might illustrate the thought better):

        [Person]
            ID: int, primary key
            Name: nvarchar(xx)

        [Group]
            ID: int, primary key
            Name: nvarchar(xx)

        [Role]
            ID: int, primary key
            Name: nvarchar(xx)

        [PersonGroupRole]
            Person_ID: int, PRIMARY COMPOSITE OR NOT?
            Group_ID: int, PRIMARY COMPOSITE OR NOT?
            Role_ID: int, PRIMARY COMPOSITE OR NOT?

    Should any of the three IDs in the relation PersonGroupRole be marked as the primary key, or should all three be combined into one composite key? What is the real benefit of doing it or not? I can join either way as far as I know, so Person JOIN PersonGroupRole JOIN Group tells me which persons are in which groups, etc. I will be using LINQ/C#/.NET on top of SQL Express and SQL Server, so if there are any language/SQL reasons that make the choice clearer, that's the platform I'm asking about. Looking forward to seeing what answers pop up, as I have thought about these primary keys/indexes many times when making combined ones.
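
    For reference, a sketch of the composite-key variant in T-SQL. The three-column key also acts as an index led by Person_ID; lookups by Group_ID or Role_ID would want their own indexes:

        -- a sketch: composite primary key across all three FK columns
        CREATE TABLE PersonGroupRole (
            Person_ID int NOT NULL REFERENCES Person(ID),
            Group_ID  int NOT NULL REFERENCES [Group](ID),
            Role_ID   int NOT NULL REFERENCES Role(ID),
            PRIMARY KEY (Person_ID, Group_ID, Role_ID)
        );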

    Read the article

  • SQL Server DB deployment script ignoring constraints etc. until commit

    - by Daniel
    Hi all, I am planning a database deployment script for SQL Server 2005. Currently we have a tool that runs all of the tables, foreign keys, indexes and then data, each of which is located in a separate file with a certain extension (e.g. .tab, .kci, .fky), and the tool just runs *.tab, *.kci, *.fky into the DB, etc. Could I possibly combine all of these into one file and have them run ignoring referential integrity until they are all complete? I would turn it back on before we started inserting test data. It is just unmanageable having to maintain 4 or 5 different types of scripts for one table. Are there any issues I should be aware of? Cheers
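
    One hedged option within a single script: load with constraint checking switched off per table, then re-validate at the end (dbo.MyTable is a placeholder):

        -- a sketch: suspend FK checking for the load, re-validate after
        ALTER TABLE dbo.MyTable NOCHECK CONSTRAINT ALL;
        -- ... create objects / insert data here ...
        ALTER TABLE dbo.MyTable WITH CHECK CHECK CONSTRAINT ALL;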

    Read the article

  • Should I really use integer primary IDs? [sql]

    - by arthurprs
    For example, I always generate an auto-increment ID field for the users table, but I also specify a UNIQUE index on the username. There are situations where I first need to get the userId for a given username and then execute the desired query, or else use a JOIN in the desired query. So it's two trips to the database, or a JOIN, versus a plain varchar index. (The above is just an example.) Is there a real performance benefit of INT indexes over small VARCHAR indexes? Thanks in advance!
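
    For concreteness, the two shapes being compared (table and column names are made up):

        -- two round trips through the surrogate key:
        SELECT id FROM users WHERE username = 'arthur';
        SELECT * FROM orders WHERE user_id = 42;

        -- one trip, joining through the UNIQUE username index:
        SELECT o.*
        FROM orders AS o
        JOIN users  AS u ON u.id = o.user_id
        WHERE u.username = 'arthur';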

    Read the article

  • SEO: Is it recommended to put a beta / unfinished version of a web site live?

    - by Jonathan
    I'm working on a big website and I want to put it online before it's fully finished. I'm working locally and the database is getting really big, so I wanted to upload the website and continue working on it on the server, while letting people in so I can test. The question is whether this is good for SEO; I mean, there are a lot of SEO-related things that are incomplete. For example: there are no friendly URLs, no sitemap, no .htaccess file, and lots of 'under construction' sections. Will Google penalize me forever? How does it work: does Google index and learn the structure of the site just once, or does it constantly update and check for changes? What should I do? What do you recommend?

    Read the article

  • DB2: increased bufferpool size and compressed tables don't equal better performance. Why?

    - by Mestika
    Hi, I’m working on tuning and increasing the performance of my IBM DB2 version 9.7 database. I’ve been searching around the net for the last couple of days and learned that if I created my tables in COMPRESS mode and created one more bufferpool and set both of them to access 1024mb, then the performance in my queries should increase because of the less I/Os to the disks. However, when I run my time analysis, the performance Decrease. I added the new additions to my regular database with the indexes I’ve used all the time. Each time I search google I come up with the statement that: Increased bufferpool size and several bufferpools AND a table compression SHOULD prove to get better performance. I’m very puzzled about the total unexpected result. Are there some tuning mechanisms I’ve forgot or does anyone have a explanation for this odd behavior? Sincerely Mestika

    Read the article

  • Drupal: FileFields... internal server error

    - by Patrick
    Hi, I've moved my Drupal installation to a new server and the website works. However, I get an "internal server error" for each file field in my nodes; in other words, images and videos are not loaded. In Firebug's Net tab I can see they cannot be retrieved from the server because of the "internal server error". The only thing I changed with respect to the previous installation is commenting out the following two lines in .htaccess:

        # Don't show directory listings for URLs which map to a directory.
        #Options -Indexes
        # Follow symbolic links in this directory.
        #Options +FollowSymLinks

    Is this the issue? How can I solve it? Thanks

    Read the article

  • MySQL search design

    - by neil
    I'm designing a MySQL database, and I'd like some input on an efficient way to store blog/article data for searching. Right now, I've made a separate column that stores the content to be searched: no duplicate words, no words shorter than four letters, and no words that are too common. Essentially, it's a list of keywords from the original article. The search would also cover a list of tags and the title field. I'm not quite sure how MySQL indexes fulltext columns, so would storing the data like that be ineffective, or somehow redundant? A lot of the articles are on the same topic, so would the relevance score be hurt by so many of the rows having similar keywords? Also, for this project, solutions like Sphinx, Lucene or Google Custom Search can't be used; only PHP and MySQL. Thanks!
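
    For reference, a sketch of leaning on MySQL's own FULLTEXT machinery instead of a hand-maintained keyword column; it already skips stopwords and, by default, words under four letters (names are made up):

        -- a sketch: index the original text and rank by relevance
        ALTER TABLE articles ADD FULLTEXT idx_search (title, body, tags);

        SELECT id, title,
               MATCH(title, body, tags) AGAINST ('mysql indexing') AS score
        FROM articles
        WHERE MATCH(title, body, tags) AGAINST ('mysql indexing')
        ORDER BY score DESC;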

    Read the article

  • .htaccess url rewrite with ssl redirection

    - by Stuart McAlpine
    I'm having trouble combining a URL query-parameter rewrite (fancy URLs) with a .htaccess SSL redirection. My .htaccess file is currently:

        Options +FollowSymLinks
        Options -Indexes
        ServerSignature Off
        RewriteEngine on
        RewriteBase /

        # in https: process secure.html in https
        RewriteCond %{server_port} =443
        RewriteCond $1 ^secure$ [NC]
        RewriteRule ^(.+).html$ index.php?page=$1 [QSA,L]

        # in https: force all other pages to http
        RewriteCond %{server_port} =443
        RewriteCond $1 !^secure$ [NC]
        RewriteRule ^(.+).html$ http://%{HTTP_HOST}%{REQUEST_URI} [QSA,N]

        # in http: force secure.html to https
        RewriteCond %{server_port} !=443
        RewriteCond $1 ^secure$ [NC]
        RewriteRule ^(.+).html$ https://%{HTTP_HOST}%{REQUEST_URI} [QSA,N]

        # in http: process other pages as http
        RewriteCond %{server_port} !=443
        RewriteCond $1 !^secure$ [NC]
        RewriteRule ^(.+).html$ index.php?page=$1 [QSA,L]

    The fancy-url rewriting is working fine but the redirection to/from https isn't working at all. If I replace the two lines of the form

        RewriteRule ^(.+).html$ https://%{HTTP_HOST}%{REQUEST_URI} [QSA,N]

    with

        RewriteRule ^(.+).html$ https://%{HTTP_HOST}/index.php?page=$1 [QSA,L]

    then the https redirection works fine but the fancy-url rewriting doesn't. Is it possible to combine these two?
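
    A hedged guess at the failing part: the [N] flag restarts rewriting internally on the same request, so the scheme and port never actually change for the browser. An external redirect usually looks like this, leaving the fancy-url rules to fire on the follow-up request:

        # a sketch: send a real redirect to the client, then stop this pass
        RewriteCond %{server_port} !=443
        RewriteCond $1 ^secure$ [NC]
        RewriteRule ^(.+)\.html$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]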

    Read the article

  • Scala create xhtml elements dynamically

    - by portoalet
    Given a list of strings

        val www = List("http://bloomberg.com", "http://marketwatch.com");

    I want to dynamically generate

        <span id="span1">http://bloomberg.com</span>
        <span id="span2">http://marketwatch.com</span>

    My current attempt hardcodes the id:

        def genSpan(web: String) = <span id="span1">{web}</span>;
        www.map(genSpan); // How can I pass the loop index?

    How can I use Scala's map function to generate the ids (span1, span2), where 1 and 2 are the loop indexes? Or is using a for-comprehension the only way?
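
    A minimal sketch using zipWithIndex, which pairs each element with its position so the id can be computed inside map:

        // a sketch: the index comes along for free via zipWithIndex
        val spans = www.zipWithIndex.map { case (web, i) =>
          <span id={"span" + (i + 1)}>{web}</span>
        }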

    Read the article

  • How to uniquely identify a monitor?

    - by Jerry Dodge
    I'm working on a project where I take screenshots of individual monitors (TMonitor) and stream the images over the network (remote desktop viewing). Suppose a monitor is added or removed (which I can already detect); I need to work out which monitor this happened to. Say there are three monitors with indexes 0, 1, 2, and monitor 1 is removed. I don't want index 2 to silently become index 1; I want each monitor to keep a stable ID at all times. Is there any property in the TMonitor class (Screen.Monitors[i]) I can use to uniquely identify it?

    Read the article

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this script, deployed and run via cron on a host, that indexes all the records in a database table; the index is later used both for the front end of the site and for backend operations. After the operation, the index is about 3-4 MB. The problem is that it takes a lot of resources (a CPU load of 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below. First, a select query is built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I use to bound the number of items being indexed at a time rather than iterating over too many items at once. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, then fetches the items for the next page and starts again. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.
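
    One hedged avenue: Zend_Search_Lucene exposes the same batching knobs as Java Lucene, so flushing and merging segments less often during the big run may cut the CPU churn (the index path is a placeholder):

        // a sketch: buffer more docs per segment and merge less eagerly
        $index = Zend_Search_Lucene::open('/path/to/index');
        $index->setMaxBufferedDocs(500);
        $index->setMergeFactor(50);
        // ... loop over paginator pages, calling $index->addDocument($doc) ...
        $index->optimize(); // one final merge instead of many small ones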

    Read the article

  • Partition from Programming Pearls

    - by davit-datuashvili
    Hi, suppose I have the following array:

        int a[] = new int[]{55, 41, 59, 26, 53, 58, 97, 93};

    I want to partition it around 55, so the new array will be

        {41, 26, 53, 55, 59, 58, 97, 93}

    I have done these kinds of problems myself, but this one is from Programming Pearls, where the code is like this: given some array[a..b] and a value t, we write

        int m = a - 1;
        for i = [a..b]
            if (array[i] < t)
                swap(++m, i);

    where the swap function exchanges the two elements at indexes ++m and i. I have run this program and it threw a java.lang.NullPointerException. Can anybody help me?
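
    For reference, a self-contained Java version of that pseudocode; a NullPointerException here usually points at an array reference that was never initialized before use:

        // a sketch: Pearls-style partition around t, with swap written inline
        import java.util.Arrays;

        public class Partition {
            public static void main(String[] args) {
                int[] a = {55, 41, 59, 26, 53, 58, 97, 93};
                int t = 55;
                int m = -1;
                for (int i = 0; i < a.length; i++) {
                    if (a[i] < t) {
                        ++m;
                        int tmp = a[m]; a[m] = a[i]; a[i] = tmp; // swap(++m, i)
                    }
                }
                // prints [41, 26, 53, 55, 59, 58, 97, 93]
                System.out.println(Arrays.toString(a));
            }
        }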

    Read the article
