Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.


  • How should I set up these tables for searching?

    - by thewebguy
    My PHP site is an online store with about 5k products. Products belong to a vendor, a category, and possibly a subcategory. Each of those items has a name, and the products have descriptions. The search queries we've set up work wonderfully but tend to run slowly, ranging between 0.20s and 30s (yes, 30 seconds). We've optimized like crazy, and I'm starting to think we're out of room to improve on that front, so we're caching the results and that's making life a lot easier. But when the queries do run they still kill the server, apparently because of all the table locking that comes with MyISAM. So on to my question: is there a way for us to use InnoDB (row-level locking) and still keep FULLTEXT search? Should we move our DB offsite and use a service like DB2? Is there some other search-engine-type software we should use instead? Any help is greatly appreciated :)

    Read the article
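
    A minimal sketch of the InnoDB route for the question above, assuming a products table with name and description columns (names are illustrative): InnoDB gained FULLTEXT support in MySQL 5.6, so on 5.6+ the engine switch and the index can coexist. On older servers, a common workaround is a separate search table (MyISAM, or an external engine such as Sphinx or Lucene) that mirrors just the searchable text.

        -- Convert to InnoDB (row-level locking), then add a full-text
        -- index; InnoDB supports FULLTEXT as of MySQL 5.6.
        ALTER TABLE products ENGINE = InnoDB;
        ALTER TABLE products ADD FULLTEXT INDEX ft_name_desc (name, description);

        -- Searching works with MATCH ... AGAINST as before:
        SELECT id, name
        FROM products
        WHERE MATCH(name, description) AGAINST('blue widget' IN NATURAL LANGUAGE MODE);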

  • Which index is used in select and why?

    - by Lukasz Lysik
    I have a table of ZIP codes with the following columns:

        id   - PRIMARY KEY
        code - NONCLUSTERED INDEX
        city

    When I execute the query:

        SELECT TOP 10 * FROM ZIPCodes

    I get the results sorted by the id column. But when I change the query to:

        SELECT TOP 10 id FROM ZIPCodes

    I get the results sorted by the code column. Again, when I change the query to:

        SELECT TOP 10 code FROM ZIPCodes

    I get the results sorted by the code column. And finally, when I change it to:

        SELECT TOP 10 id, code FROM ZIPCodes

    I get the results sorted by the id column. My question is in the title: I know which indexes are used in these queries, but why are those indexes used? In the second query (SELECT TOP 10 id FROM ZIPCodes), wouldn't it be faster if the clustered index were used? How does the query engine choose which index to use?

    Read the article
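
    The usual explanation, sketched against the ZIPCodes table above: every nonclustered index carries the clustering key in its leaf rows, so the index on code covers a query that selects only id, and being narrower than the clustered index it needs fewer page reads; the optimizer therefore scans it, and the rows come back in code order purely as a side effect. Order is only ever guaranteed by an explicit ORDER BY. The choice can be confirmed from the plan:

        -- Show the plan without executing (SQL Server):
        SET SHOWPLAN_TEXT ON;
        GO
        SELECT TOP 10 id FROM ZIPCodes;   -- plan shows a scan of the index on code
        GO
        SET SHOWPLAN_TEXT OFF;
        GO

        -- Deterministic ordering requires ORDER BY:
        SELECT TOP 10 id FROM ZIPCodes ORDER BY id;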

  • Connecting to an Oracle database from a C# ASP.NET MVC website

    - by ooo
    I am trying to connect to an Oracle database. I can connect to it from a local SQL Developer install by adding an entry to the tnsnames.ora file. The catch is that I will be deploying this website to a number of places, so a few questions: What is the simplest way to connect to this database and run very basic queries? Some examples I see have me referencing Oracle client DLLs, others don't; is there a best practice here? And am I going to have to update the tnsnames.ora file on every one of the machines I deploy to, or is there a simpler way?

    Read the article

  • MySQL inner join different results

    - by Darryl at NetHosted
    I am trying to work out why the following two queries return different results:

        SELECT DISTINCT i.id, i.date
        FROM `tblinvoices` i
        INNER JOIN `tblinvoiceitems` it ON it.userid = i.userid
        INNER JOIN `tblcustomfieldsvalues` cf ON it.relid = cf.relid
        WHERE i.`tax` = 0
          AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'

    and

        SELECT DISTINCT i.id, i.date
        FROM `tblinvoices` i
        WHERE i.`tax` = 0
          AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'

    Obviously the difference is the inner join, but I don't understand why the query with the inner join returns fewer results than the one without it; since I don't reference the joined tables anywhere else, I would have thought they should return the same results. The final query I am working towards is:

        SELECT DISTINCT i.id, i.date
        FROM `tblinvoices` i
        INNER JOIN `tblinvoiceitems` it ON it.userid = i.userid
        INNER JOIN `tblcustomfieldsvalues` cf ON it.relid = cf.relid
        WHERE cf.`fieldid` = 5
          AND cf.`value` REGEXP '[A-Za-z]'
          AND i.`tax` = 0
          AND i.`date` BETWEEN '2012-07-01' AND '2012-09-31'

    But because the inner join removes results that should be valid, it's not working at present. Thanks.

    Read the article
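
    A diagnostic sketch for the question above, using the tables quoted there: an INNER JOIN silently drops every invoice whose userid has no row in tblinvoiceitems (and multiplies rows when there are several matches, which the DISTINCT then hides). The dropped invoices can be listed with a LEFT JOIN; note too that September has only 30 days, so the range below ends at '2012-09-30'.

        SELECT i.id, i.date
        FROM `tblinvoices` i
        LEFT JOIN `tblinvoiceitems` it ON it.userid = i.userid
        WHERE it.userid IS NULL     -- invoices with no matching items:
                                    -- exactly the rows the INNER JOIN removes
          AND i.`tax` = 0
          AND i.`date` BETWEEN '2012-07-01' AND '2012-09-30';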

  • Data storage advice needed: Best way to store location + time data?

    - by sobedai
    I have a project in mind where the majority of queries will be keyed off of lat/long as well as date + time. Initially I was thinking of a standard RDBMS where lat, long, and the datetime field are properly indexed. Then I began thinking of a document-based system where the document is essentially a timestamp and each document has lat/long within it, with n objects associated with each document. I'm looking for advice on the best type of storage engine for this sort of thing: which of the above ideas would be better, or is there something else entirely that is the ideal solution? Thanks

    Read the article
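
    For the relational variant, a minimal sketch, assuming MySQL and illustrative table/column names: one composite index leading with the timestamp and one leading with the coordinates keep both kinds of predicate cheap.

        CREATE TABLE readings (
          id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          lat         DECIMAL(9,6)    NOT NULL,
          lng         DECIMAL(9,6)    NOT NULL,
          recorded_at DATETIME        NOT NULL,
          payload     TEXT,
          INDEX idx_time_loc (recorded_at, lat, lng),  -- time-window queries
          INDEX idx_loc      (lat, lng)                -- bounding-box queries
        );

        -- Bounding box plus time window:
        SELECT *
        FROM readings
        WHERE lat BETWEEN 40.0 AND 41.0
          AND lng BETWEEN -74.5 AND -73.5
          AND recorded_at BETWEEN '2010-05-01 00:00:00' AND '2010-05-02 00:00:00';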

  • Is directly executing SQL bad app design?

    - by Michael Lowman
    I'm developing an iOS application that's a manager/viewer for another project. The idea is that the app will be able to process the data stored in a database into a number of visualizations, the overall effect being similar to Cacti. I'm making the visualizations fully user-configurable: the user defines what she wants to see and adds restrictions. She might specify, for instance, a graph of a metric over the last three weeks for user accounts that are currently active and aren't based in the United States. My problem is that the only design I can think of is more or less passing direct SQL from the iOS app to the backend server to be executed against the database. I know that's bad practice and that everything should be written in terms of stored procedures. But how else do I keep enough flexibility for fully user-defined queries? While the application does compose the SQL, the SQL itself is never visible to, or injectable by, the user; that's all abstracted away in UIDateTimeChoosers, UIPickerViews, and the like.

    Read the article
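
    One common compromise, sketched here with hypothetical names in MySQL-flavored SQL: the client never ships SQL text at all; it ships parameter values for a fixed template (or stored procedure) that the server owns, so every user choice arrives as data. The example models the "active accounts not based in the United States over a date range" case from the question.

        -- The client posts only parameter values; the server binds them
        -- into this fixed, parameterized routine:
        CREATE PROCEDURE metric_report(
          IN p_start   DATETIME,
          IN p_end     DATETIME,
          IN p_country CHAR(2),   -- country to exclude; NULL = no restriction
          IN p_active  TINYINT
        )
        SELECT DATE(m.recorded_at) AS day, AVG(m.metric_value) AS value
        FROM metrics m
        JOIN accounts a ON a.id = m.account_id
        WHERE m.recorded_at BETWEEN p_start AND p_end
          AND a.is_active = p_active
          AND (p_country IS NULL OR a.country <> p_country)
        GROUP BY DATE(m.recorded_at);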

  • Some help needed with a SQL query

    - by Psyche
    Hello, I need some help with a MySQL query. I have two tables, one with offers and one with statuses. An offer can have one or more statuses. What I would like to do is get all the offers along with their latest status. Each status has a field named added which can be used for sorting. I know this can easily be done with two queries, but I need to do it with only one because I also have to apply some filters later in the project. Here's my setup:

        CREATE TABLE `test`.`offers` (
          `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          `client` TEXT NOT NULL,
          `products` TEXT NOT NULL,
          `contact` TEXT NOT NULL
        ) ENGINE = MYISAM;

        CREATE TABLE `statuses` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `offer_id` int(11) NOT NULL,
          `options` text NOT NULL,
          `deadline` date NOT NULL,
          `added` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    Read the article
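
    A single-query sketch against the schema above, using the classic greatest-per-group self-join: keep a status row only if the same offer has no newer one.

        SELECT o.id, o.client, s.options, s.deadline, s.added
        FROM offers o
        JOIN statuses s   ON s.offer_id = o.id
        LEFT JOIN statuses s2             -- any *newer* status for the same offer
               ON s2.offer_id = o.id
              AND s2.added    > s.added
        WHERE s2.id IS NULL;              -- keep s only if nothing newer exists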

  • What will happen if I change the type of a column from int to year?

    - by MachinationX
    I have a table in MySQL 4.0 which currently stores a year field as smallint(6). What will happen if I convert it directly to a YEAR type with a query like the following:

        ALTER TABLE t MODIFY y YEAR(4) NOT NULL DEFAULT CURRENT_TIMESTAMP;

    when the current values of column y are things like 2010? I assume that because the YEAR type is technically values from 1-255, values above that will be truncated or broken. So if MySQL isn't smart enough to realize that 2010 (int) = 110 (year), what would be the simplest query or queries to convert the values? Thanks for your help!

    Read the article
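
    A cautious sketch: YEAR(4) actually accepts full four-digit values in the range 1901-2155 (the 1-255 figure describes only the internal one-byte encoding), so 2010 should convert cleanly; DEFAULT CURRENT_TIMESTAMP, on the other hand, is not valid for a YEAR column. Rehearsing the change on a throwaway copy costs little; the id column used for the comparison below is an assumption.

        -- MySQL 4.0-compatible structural copy with data:
        CREATE TABLE t_copy SELECT * FROM t;
        ALTER TABLE t_copy MODIFY y YEAR(4) NOT NULL;

        -- Verify no value was truncated or wrapped by the conversion:
        SELECT COUNT(*)
        FROM t, t_copy
        WHERE t.id = t_copy.id AND t.y <> t_copy.y;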

  • Passing GPS speed via the Eclipse Emulator Control

    - by Nick
    Hi, my app queries the GPS speed using .getSpeed() on the Location objects delivered to a LocationListener. Is there a way to set this speed using the Eclipse Emulator Control or the command line? I tried feeding multiple sets of coordinates to the emulator via the manual GPS control, but it didn't derive a speed from them. Also, playing back a predefined GPX file doesn't work for me. I would like to test my app without having to take it on a test drive in my car every time ;)! Thanks!

    Read the article

  • SSAS Reporting Services - Set specific language / translation

    - by Chris
    Hi all, in the data warehouse there's a default language for the measures, and I added a translation with German captions. In a Visual Studio Report Server project, when creating a query on my German OS, the cube and its measures are displayed in German. When dragging measures into the MDX query window, the default measure names are used. That's what I want and expect, since I'd like to write MDX queries against the default measure names. But when the query is executed, the column created for each measure is translated to German again. This results in German column names within my dataset, which I don't want; I'd like the English column names. I already tried changing the connection string to:

        Data Source=server;Initial Catalog=DataWarehouse;LocaleIdentifier=1033

    But that doesn't help; I still see German translations. Does anyone know how to force a specific translation?

    Read the article

  • Oracle: Finding Columns with only null values

    - by Jorge Valois
    I have a table with a lot of columns and a type column. Some columns seem to be always empty for a specific type. I want to create a view for each type and only show the relevant columns for each type. Working under the assumption that if a column has ONLY null values for a specific type, then that column should not be part of the view, how can you find that out with queries? Is there something like:

        SELECT [columnName] FROM [table] WHERE [columnValues] ARE ALL [null]

    I know I COMPLETELY made that up; I'm just trying to get the idea across. Thanks in advance!

    Read the article
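
    There is no built-in ARE ALL NULL predicate, but COUNT(column) skips NULLs, so a zero count per type means the column is entirely NULL for that type. A sketch with illustrative column names:

        -- COUNT(col) counts only non-NULL values; 0 means all-NULL for that type:
        SELECT type,
               COUNT(*)     AS total_rows,
               COUNT(col_a) AS col_a_non_null,
               COUNT(col_b) AS col_b_non_null,
               COUNT(col_c) AS col_c_non_null
        FROM my_table
        GROUP BY type;

        -- The SELECT list itself can be generated from the data dictionary:
        SELECT column_name FROM all_tab_columns WHERE table_name = 'MY_TABLE';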

  • Using a custom URL parameter in Wordpress (with permalinks)?

    - by kiko
    Hello - Is there a way, perhaps by editing .htaccess, to add a custom URL parameter to Wordpress so that the parameter is not stripped out by permalinks? I have a Wordpress site with a page that queries a separate (non-Wordpress) database. I pass the URL parameter "pubID" to display individual books, and it is working OK. Example:

        http://www.uglyducklingpresse.org/catalog/browse/item/?pubID=63

    But the individual books are not showing up properly in Google - maybe because they all have the same auto-generated "canonical" URL meta tag, one with the "pubID" parameter stripped out. Thank you for any help.

    Read the article

  • I want to exchange the value of a column in two different rows in Microsoft SQL Server

    - by Silmaril89
    Hi, I want to run the following two SQL queries in Microsoft SQL Server:

        UPDATE Partnerships SET sortOrder = 2 WHERE sortOrder = 1;
        UPDATE Partnerships SET sortOrder = 1 WHERE sortOrder = 2;

    The only problem is that I don't allow sortOrder to contain the same value twice; it is a unique key. How could I get around this, given that the first query violates the unique key rule and terminates? Or will I have to get rid of the unique key rule? Thanks!

    Read the article
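
    One way around it, as a sketch: SQL Server validates a unique constraint at the end of the statement rather than row by row, so a single UPDATE with a CASE swaps the two values without ever exposing a duplicate.

        UPDATE Partnerships
        SET sortOrder = CASE sortOrder WHEN 1 THEN 2
                                       WHEN 2 THEN 1 END
        WHERE sortOrder IN (1, 2);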

  • Why Does TFS Allow Orphaned Content and How Do I Get Rid of It?

    - by Chad
    My TfsVersionControl database has grown to 40+ GB in size. We recently did a TFS Destroy on a folder tree that should have cleared up at least 10 GB, but instead it seemed to have no effect. When I look at the tables in TfsVersionControl, I am first shocked to see that there are no foreign keys at all in the database. Running a few queries, I see that there is some orphaning going on:

        - tbl_Content has 13.9 GB of records that don't have a related tbl_File record
        - tbl_File and tbl_Content have 2.4 GB that don't have a related tbl_Namespace record

    The cleanup job (prc_DeleteUnusedContent) seems to be running nightly, and running it against the database manually doesn't remove any orphans. I see in the log for the cleanup job that it failed on 3/16, the morning after I destroyed the large amount of data; the error was due to a full transaction log. Could that error be the reason I'm left with all this orphaned data that can't be deleted? How can I permanently destroy this unneeded content?

    Read the article
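
    A read-only diagnostic sketch; the tbl_* key columns below are guesses, since the TFS schema is undocumented, so measure first and let prc_DeleteUnusedContent do any actual deleting. Because a full transaction log is what killed the job, backing up (or growing) the log and re-running the job is the low-risk first step; the backup path is illustrative.

        -- Hypothetical key names; adjust to the actual tbl_* definitions:
        SELECT COUNT(*) AS orphaned_content
        FROM tbl_Content c
        WHERE NOT EXISTS (SELECT 1 FROM tbl_File f WHERE f.FileId = c.FileId);

        -- Free the log so the nightly cleanup job can finish:
        BACKUP LOG TfsVersionControl TO DISK = 'D:\backup\tfsvc.trn';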

  • MySQLi String comparisons using keys

    - by asdasd
    I have a table with, let's say, 2 columns: id (number) and value (varchar). Let's say I have a number x and a list of numbers a1, a2, a3, a4, a5..., where x is not in the list. All of these numbers correspond to unique rows in the table. I want to know if the string value for x in the table is contained in one of the string values for any of a1, a2, a3, a4... Let's say I have these rows:

        x,  aaa
        a1, bbb
        a2, ccc
        a3, ddd
        a4, aaabbbcc

    Then I want some confirmation that yes, the value for x is included in one of the values in my list of numbers (a4 contains x). I know I can do this with a couple of queries and shove it through some PHP to get my answer. But can I do it with one query?

    Read the article
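
    A one-query sketch over the two-column table described above (the table name and the ids are illustrative): self-join the row for x against the candidate rows and test containment with LIKE.

        SELECT a.id, a.value
        FROM t x
        JOIN t a ON a.id IN (1, 2, 3, 4)               -- the a1..an list
        WHERE x.id = 42                                 -- the row for x
          AND a.value LIKE CONCAT('%', x.value, '%');  -- a's value contains x's
          -- (if x.value may contain % or _, those wildcards need escaping)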

  • Database design: one huge table or separate tables?

    - by littlegreen
    Currently I am designing a database for use in our company. We are using SQL Server 2008. The database will hold data gathered from several customers. The goal of the database is to acquire aggregate benchmark numbers over several customers. Recently, I have become worried about the fact that one table in particular will be getting very big. Each customer has approximately 20,000,000 rows of data, and there will soon be 30 customers in the database (if not more). A lot of queries will be done on this table. I am already noticing performance issues and users being temporarily locked out. My question: will we be able to handle this table in the future, or is it better to split it up into smaller tables, one per customer?

    Read the article
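
    SQL Server 2008 offers a middle road worth considering before physically splitting the table: partition the one logical table on the customer key, so each customer's rows are stored, scanned and maintained separately. A minimal sketch with illustrative names and boundaries:

        CREATE PARTITION FUNCTION pfCustomer (int)
            AS RANGE LEFT FOR VALUES (1, 2, 3 /* ... one boundary per customer */);

        CREATE PARTITION SCHEME psCustomer
            AS PARTITION pfCustomer ALL TO ([PRIMARY]);

        CREATE TABLE BenchmarkData (
            customer_id int           NOT NULL,
            measured_at datetime      NOT NULL,
            metric      decimal(18,4) NOT NULL
        ) ON psCustomer (customer_id);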

  • What is the most efficient approach to fetch a category tree, products, brands, and counts by subcategory

    - by alex227
    Symfony 1.4 + Doctrine 1.2. What is the best way to minimize the number of queries needed to retrieve products, the subcategories of the current category, and product counts by subcategory and brand for the result set of the query below? Categories are a nested set. Here is my query:

        $q = Doctrine_Query::create()
            ->select('c.*, p.product, p.price, b.brand')
            ->from('Category c')
            ->leftJoin('c.Product p')
            ->leftJoin('p.Brand b')
            ->where('c.root_id = ?', $this->category->getRootId())
            ->andWhere('c.lft >= ?', $this->category->getLft())
            ->andWhere('c.rgt <= ?', $this->category->getRgt())
            ->setHydrationMode(Doctrine_Core::HYDRATE_ARRAY);

        $treeObject = Doctrine::getTable('Category')->getTree();
        $treeObject->setBaseQuery($q);
        $this->treeObject = $treeObject;
        $treeObject->resetBaseQuery();
        $this->products = $q->execute();

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database and need to work on a subset, ~1% of the data, to dump into an Excel spreadsheet and make a graph. Ideally, I could select out the subset of data once and then run multiple SELECT queries on that, UNION'ed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
          SELECT (
            SELECT ( long list of requirements )
            UNION
            SELECT ( slightly different long list of requirements )
          )
        )

    and it would be nice if I could factor out the commonalities of the two long requirement lists and keep only the simple differences between the two SELECT statements being UNION'ed.

    Read the article
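
    Oracle supports exactly this through subquery factoring, the WITH clause (available since 9i): select the ~1% subset once, then UNION the variants against the factored result. A sketch in the shape of the query above; the MATERIALIZE hint, undocumented but widely used, asks the optimizer to evaluate the subset a single time:

        WITH base AS (
            SELECT /*+ MATERIALIZE */ *
            FROM   big_table
            WHERE  common_requirements = 1     -- the shared ~1% filter
        )
        SELECT * FROM base WHERE extra_requirement_a = 1
        UNION
        SELECT * FROM base WHERE extra_requirement_b = 1;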

  • DB optimization - have a total field or query the table?

    - by Dorian Fife
    I have an app where users get points for actions they perform: either 1 point for an easy action or 2 for a difficult one. I wish to display to the user the total number of points he has earned in my app as well as the points obtained this week (since Monday at midnight). I have a table that records all actions, along with their time and number of points. I have two alternatives and I'm not sure which is better:

        1. Every time the user views the report, run a query and sum the points.
        2. Add two fields to each user that record the number of points obtained
           so far (total and weekly). The weekly value would be reset to 0 every
           Monday at midnight.

    The first option is easier, but I'm afraid that as I get many users and actions, the queries will take a long time. The second option bears the risk of inconsistency between the table of actions and the summary values. I'm very interested in what you think is the best alternative here. Thanks, Dorian

    Read the article
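
    Before committing to option 2, it may be worth noting that option 1 stays fast for a long time if the actions table carries a composite index, since both sums become pure index range scans. A sketch with illustrative names; the Monday-midnight boundary is computed by the application:

        -- Lets both sums be resolved from the index alone:
        CREATE INDEX idx_user_time ON actions (user_id, performed_at, points);

        -- Total points for a user:
        SELECT COALESCE(SUM(points), 0) FROM actions WHERE user_id = 42;

        -- Points since Monday at midnight:
        SELECT COALESCE(SUM(points), 0)
        FROM actions
        WHERE user_id = 42 AND performed_at >= '2010-05-03 00:00:00';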

  • What approach would be suitable for user authentication in a simple client/server app

    - by TerryS
    My previous question was closed, so I will be more specific. I need to create a desktop application, written in C#, that asks for user credentials and after verification opens a GUI for working with a DB (a black box for users). It should be usable from everywhere, not just a LAN or SQL domain. I assume I would need to do the following:

        1. Create a client and a server application that handle authentication.
           That would mean a lot of socket work.
        2. Once the user is verified, the client's queries would be sent to the
           database (client -> server -> DB).
        3. The server would need to send the DB result sets back to the client.

    As you can see, this is just my guess, and I have no idea whether it's too complicated or completely wrong. The main constraints are that it must be a desktop app (not a web-based one) and accessible from everywhere. I am interested in the main points of how to design the system and would be extremely grateful for any guidance.

    Read the article

  • Single logical SQL Server possible from multiple physical servers?

    - by TuffyIsHere
    Hi, with Microsoft SQL Server 2005, is it possible to combine the processing power of multiple physical servers into a single logical SQL Server? Is it possible with SQL Server 2008? I'm thinking that if the database files were located on a SAN and one of the SQL Servers somehow acted as a kind of master, then processing could be spread out over multiple physical servers: for instance, allowing simultaneous updates where there is no overlap, and no limit at all for read-only queries on unlocked tables. We have an application that is limited by the speed of our SQL Server, and we are probably stuck with SQL Server 2005 for now. Is the only option to get a single more powerful physical server? Sorry, I'm not an expert; I'm not sure if the question is a stupid one. TIA

    Read the article

  • Getting age in years in a SQL query

    - by Earlz
    Hello, I've been tasked with doing a few queries on a large SQL Server 2000 database. The query I'm having trouble with is "find the number of people between ages 20 and 40". How would I do this? My previous query to get a count of everyone looks like this:

        select count(rid) from people where ...

    (with the ... being irrelevant conditions). I've googled some, but the only things I've found for calculating age are either so large that I don't see how to embed them into a query, or are stored procedures, which I do not have the permissions to create. Can someone help me with this?

    Read the article
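
    A compact expression that fits inline in the WHERE clause and uses only SQL Server 2000 built-ins; the birthdate column name is an assumption. DATEDIFF(year, ...) counts year boundaries crossed, so it is reduced by one whenever this year's birthday is still in the future:

        SELECT COUNT(rid)
        FROM people
        WHERE DATEDIFF(year, birthdate, GETDATE())
              - CASE WHEN DATEADD(year,
                                  DATEDIFF(year, birthdate, GETDATE()),
                                  birthdate) > GETDATE()
                     THEN 1 ELSE 0 END   -- birthday not yet reached this year
              BETWEEN 20 AND 40;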

  • How to improve the speed of a loop containing a SQLAlchemy query as its conditional

    - by LtPinback
    This loop checks whether a record is in the SQLite database, builds a list of dictionaries for the records that are missing, and then executes a multi-row insert with the list. This works, but it is very slow (at least I think it is slow), taking 5 minutes to loop over 3500 queries. I am a complete newbie in Python, SQLite and SQLAlchemy, so I wonder if there is a faster way of doing this.

        list_dict = []
        session = Session()
        for data in data_list:
            if session.query(Class_object)\
                      .filter(Class_object.column_name_01 == data[2])\
                      .filter(Class_object.column_name_00 == an_id)\
                      .count() == 0:
                list_dict.append({'column_name_00': a_id,
                                  'column_name_01': data[2]})
        conn = engine.connect()
        conn.execute(prices.insert(), list_dict)
        conn.close()
        session.close()

    Edit: I moved session = Session() outside the loop. It did not make a difference.

    Read the article
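
    Most of those 5 minutes are per-row round trips. One way to remove the loop entirely, sketched as the SQL it would be replaced with (SQLite syntax; the prices table and column names are taken from the snippet above): stage the candidate rows, then let a single INSERT ... SELECT skip the duplicates.

        -- Stage the candidates, then insert only the missing ones in one pass:
        CREATE TEMP TABLE candidates (column_name_00 INTEGER, column_name_01 TEXT);
        -- (bulk-insert data_list into candidates here, one executemany call)

        INSERT INTO prices (column_name_00, column_name_01)
        SELECT c.column_name_00, c.column_name_01
        FROM candidates c
        WHERE NOT EXISTS (
            SELECT 1 FROM prices p
            WHERE p.column_name_00 = c.column_name_00
              AND p.column_name_01 = c.column_name_01
        );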

  • SQL Server GROUP BY troubles!

    - by Lucas311
    I'm getting a frustrating error in one of my SQL Server 2008 queries. It parses fine but crashes when I try to execute it. The error I get is the following:

        Msg 8120, Level 16, State 1, Line 4
        Column 'customertraffic_return.company' is invalid in the select list
        because it is not contained in either an aggregate function or the
        GROUP BY clause.

        SELECT *
        FROM (SELECT ctr.sp_id AS spid,
                     Substring(ctr.company, 1, 20) AS company,
                     cci.email_address AS tech_email,
                     CASE WHEN rating IS NULL THEN 'unknown' ELSE rating END AS rating
              FROM customer_contactinfo cci
              INNER JOIN customertraffic_return ctr ON ctr.sp_id = cci.sp_id
              WHERE cci.email_address <> ''
                AND cci.email_address NOT LIKE '%hotmail%'
                AND cci.email_address IS NOT NULL
                AND ( region LIKE 'Europe%' OR region LIKE 'Asia%' )
                AND SERVICE IN ( '1', '2' )
                AND ( rating IN ( 'Premiere', 'Standard', 'unknown' ) OR rating IS NULL )
                AND msgcount >= 5000
              GROUP BY ctr.sp_id, cci.email_address) AS a
        WHERE spid NOT IN (SELECT spid FROM customer_exclude)
        GROUP BY spid, tech_email

    Read the article
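
    The rule behind Msg 8120: every expression in the SELECT list must either be inside an aggregate or appear in the GROUP BY. The inner query above selects company and the rating CASE but groups only by ctr.sp_id and cci.email_address. A minimal sketch of the failure and the two standard fixes, using columns from the query above:

        -- Fails with Msg 8120: company is selected but neither grouped nor aggregated.
        SELECT sp_id, company FROM customertraffic_return GROUP BY sp_id;

        -- Fix 1: every non-aggregated column appears in the GROUP BY ...
        SELECT sp_id, company FROM customertraffic_return GROUP BY sp_id, company;

        -- Fix 2: ... or is wrapped in an aggregate:
        SELECT sp_id, MIN(company) AS company
        FROM customertraffic_return
        GROUP BY sp_id;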

  • MySQL stored routines vs. a MySQL alternative?

    - by user522962
    We are using a MySQL database with about 150,000 records (names) total. Our searches on the names field go through an autocomplete function in PHP. We have the table indexed but still feel that the searching is a bit sluggish (a few full seconds vs. something like Google Finance with near-instant responses). We came up with two possibilities but wanted more insight:

        1. Can we create a bunch (many thousands or more) of stored procedures to
           speed up searches, or will creating that many stored procedures bog
           down the DB?
        2. Is there a faster alternative to MySQL for SELECT statements? (Speed on
           inserting and updating rows isn't too important, so we can sacrifice
           that if necessary.) I've vaguely heard of BigTable and others that
           don't support JOIN statements, but we need JOINs for some of our other
           queries.

    thx

    Read the article
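
    Thousands of stored procedures would not help here, as the cost is in the scan rather than in call overhead. For autocomplete, the usual fix is an anchored prefix search, which a plain BTREE index on the name column can satisfy; a sketch:

        -- An anchored prefix (no leading wildcard) lets MySQL use the index:
        CREATE INDEX idx_name ON people (name);

        SELECT name
        FROM people
        WHERE name LIKE 'smi%'      -- index range scan
        ORDER BY name
        LIMIT 10;

        -- A leading wildcard defeats the index and forces a full scan:
        --   WHERE name LIKE '%smi%'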
