Search Results

Search found 20224 results on 809 pages for 'query optimization'.


  • Find out which row caused the error

    - by Felipe Fiali
    I have a big fat query that's written dynamically to integrate some data. Basically what it does is query some tables, join some other ones, treat some data, and then insert it into a final table. The problem is that there's too much data and we can't really trust the sources, because there could be erroneous or inconsistent data. For example, I've spent almost an hour looking for an error while developing against a customer's database, because somewhere in the middle of my big fat query there was an error converting some varchar to datetime. It turned out that they had some sales dated '2009-02-29', an out-of-range date. And yes, I know: why was that stored as varchar? Well, the source database has 3 columns for dates, 'Month', 'Day' and 'Year'. I have no idea why it's like that, but still, it is. But how on earth would I treat that, if the source can't be trusted? I can't just swallow exceptions; I really need the error to come up to another level with the original message, but I wanted to provide some more info, so that the user could at least try to solve it before calling us. So I thought about displaying to the user the row number, or some ID that would at least give him some idea of which record he'd have to correct. That's also a hard job, because there will be times when the integration runs up to 80,000 records, and in an 80,000-record integration a single generic error message, 'The conversion of a varchar data type to a datetime data type resulted in an out-of-range datetime value', means nothing at all. So any idea would be appreciated. Oh, and I'm using SQL Server 2005 with Service Pack 3.
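
    One direction, sketched below as SQL (hedged: the table and column names are placeholders, since the real source schema isn't shown): run a pre-validation pass with ISDATE() over the assembled varchar date before the big insert, so the offending rows can be reported with their IDs up front instead of surfacing as an anonymous conversion error mid-query.

      -- Hedged sketch: list source rows whose Year/Month/Day parts do not form a valid
      -- date (e.g. 2009-02-29), assembled in the unambiguous YYYYMMDD format.
      -- SourceSales, SaleID, SaleYear, SaleMonth and SaleDay are placeholder names.
      SELECT SaleID, SaleYear, SaleMonth, SaleDay
      FROM   SourceSales
      WHERE  ISDATE(SaleYear + RIGHT('0' + SaleMonth, 2) + RIGHT('0' + SaleDay, 2)) = 0;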

    Read the article

  • Error using to_char // to_timestamp

    - by pepersview
    Hello, I have a database in PostgreSQL and I'm developing an application in PHP using this database. The problem is that when I execute the following query I get a nice result in phpPgAdmin, but in my PHP application I get an error. The query: SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('2010-4-12', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '2010-4-12') ORDER BY t.t_name ASC And this is the error in PHP: operator does not exist: text = integer (to_timestamp('', 'YYYY-MM-DD'),'FMD') = (courset... ^ HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts. The point of solving this error is to be able to use the original query in PHP with: $date = "2010"."-".$selected_month."-".$selected_day; SELECT t.t_name, t.t_firstname FROM teachers AS t WHERE t.id_teacher IN (SELECT id_teacher FROM teacher_course AS tcourse JOIN course_timetable AS coursetime ON tcourse.course = coursetime.course AND to_char(to_timestamp('$date', 'YYYY-MM-DD'),'FMD') = (coursetime.day +1)) AND t.id_teacher NOT IN (SELECT id_teacher FROM teachers_fill WHERE date = '$date') ORDER BY t.t_name ASC
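
    A hedged reading of the error text quoted above: to_char() returns text while (coursetime.day + 1) is an integer, so the server refuses the text = integer comparison without an explicit cast. A sketch of the same query with a single cast added and nothing else changed:

      SELECT t.t_name, t.t_firstname
      FROM teachers AS t
      WHERE t.id_teacher IN (
              SELECT id_teacher
              FROM teacher_course AS tcourse
              JOIN course_timetable AS coursetime
                ON tcourse.course = coursetime.course
               -- cast the text day-of-week produced by to_char() to integer before comparing
               AND to_char(to_timestamp('2010-4-12', 'YYYY-MM-DD'), 'FMD')::integer = (coursetime.day + 1))
        AND t.id_teacher NOT IN (
              SELECT id_teacher FROM teachers_fill WHERE date = '2010-4-12')
      ORDER BY t.t_name ASC;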

    Read the article

  • SQL - Multiple join conditions using OR?

    - by Brandi
    I have a query that is using multiple joins. The goal is to say "Out of table A, give me all the customer numbers in which you can match table A's EmailAddress with either email_to or email_from of table B. Ignore nulls, internal emails, etc.". It seems like it would be better to use an or condition in the join than multiple joins since it is the same table. When I try to use AND/OR it does not give the behaviour I expect... AND finishes in a reasonable time, but yields no results (I know that there are matches, so it must be some flaw in my logic) and OR never finishes (I have to kill it). Here is example code to illustrate the question: --my original query SELECT DISTINCT a.CustomerNo FROM A a WITH (NOLOCK) LEFT JOIN B e WITH (NOLOCK) ON a.EmailAddress = e.email_from RIGHT JOIN B f WITH (NOLOCK) ON a.EmailAddress = f.email_to WHERE a.EmailAddress NOT LIKE '%@mydomain.___' AND a.EmailAddress IS NOT NULL AND (e.email_from IS NOT NULL OR f.email_to IS NOT NULL) Here is what I tried, (I am attempting logical equivalence): SELECT DISTINCT a.CustomerNo FROM A a WITH (NOLOCK) LEFT JOIN B e WITH (NOLOCK) ON a.EmailAddress = e.email_from OR a.EmailAddress = e.email_to WHERE a.EmailAddress NOT LIKE '%@mydomain.___' AND a.EmailAddress IS NOT NULL AND (e.email_from IS NOT NULL OR e.email_to IS NOT NULL) So my question is two-fold: Why does having AND in the above query work in a few seconds and OR goes for minutes and never completes? What am I missing to make a logically equivalent statement that has only one join?
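
    One way to probe question 1, sketched below (hedged, not tested against this schema): an OR inside a join condition often prevents the optimizer from using an index on B, so splitting it into two indexable lookups and combining them with UNION keeps the single-table intent while staying fast. The NOLOCK hints are omitted only for brevity.

      -- Hedged sketch: customers whose address appears in either column of B,
      -- written as two sargable probes instead of one OR-ed join condition.
      SELECT a.CustomerNo
      FROM   A a
      WHERE  a.EmailAddress IS NOT NULL
        AND  a.EmailAddress NOT LIKE '%@mydomain.___'
        AND  EXISTS (SELECT 1 FROM B e WHERE e.email_from = a.EmailAddress)
      UNION   -- UNION also removes duplicates, like the original DISTINCT
      SELECT a.CustomerNo
      FROM   A a
      WHERE  a.EmailAddress IS NOT NULL
        AND  a.EmailAddress NOT LIKE '%@mydomain.___'
        AND  EXISTS (SELECT 1 FROM B f WHERE f.email_to = a.EmailAddress);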

    Read the article

  • sql query --need some suggestions

    - by benjamin button
    I have a table with a list of cycle codes, CYCLE_DEFINITION. Each cycle_code has 12 month entries in another table (PM1_CYCLE_STATE), and each month has a cycle_start_date and a cycle_close_date. I check against a particular date (let's say sysdate) to find the current month of every cycle; additionally I also get the next 3 months of that particular cycle. The query I have written is as below:

      SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,LTRIM(pcs.cycle_month,'0')+0 CM, pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+1,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+1) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+2,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+2) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B') UNION SELECT cycd,cm,sd,ed,ld FROM (SELECT pcs.cycle_code CYCD,DECODE(LTRIM(pcs.cycle_month,'0')+3,13,1,14,2,15,3,LTRIM(pcs.cycle_month,'0')+3) CM ,pcs.cycle_start_date SD,pcs.cycle_close_date ED,ld.logical_date LD FROM pm1_cycle_state pcs,logical_date ld WHERE ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date and ld.logical_date_type='B')

    This query runs perfectly fine and returns, for every cycle_code, exactly 4 rows: the current month and the next 3 months. Now the requirement is: if any of those months is missing, how can I show it? For example, the output of the above query is:

      cycd  cm
      102   1
      102   10
      102   11
      102   12
      103   1
      103   10
      103   11
      103   12
      104   1
      104   10
      104   11
      104   12

    Now let's say the row with cycd=104 and cm=11 is not present in the table; then the above query will not return the row 104 11. I want to display only those missing rows. How can I do it?
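
    A sketch of one way to get just the missing (cycle, month) pairs (hedged: Oracle syntax is assumed because of DECODE and sysdate, and the next-three-months logic is compressed with MOD instead of DECODE):

      WITH current_month AS (
        SELECT pcs.cycle_code,
               LTRIM(pcs.cycle_month, '0') + 0 AS cm
        FROM   pm1_cycle_state pcs, logical_date ld
        WHERE  ld.logical_date BETWEEN pcs.cycle_start_date AND pcs.cycle_close_date
        AND    ld.logical_date_type = 'B'
      ),
      expected AS (
        -- the current month plus the next three, wrapping 12 back to 1 like the DECODEs do
        SELECT c.cycle_code,
               MOD(c.cm - 1 + (lvl.n - 1), 12) + 1 AS cm
        FROM   current_month c,
               (SELECT LEVEL AS n FROM dual CONNECT BY LEVEL <= 4) lvl
      )
      SELECT e.cycle_code, e.cm        -- expected combinations that the table does not have
      FROM   expected e
      MINUS
      SELECT pcs.cycle_code, LTRIM(pcs.cycle_month, '0') + 0
      FROM   pm1_cycle_state pcs;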

    Read the article

  • SQL Server - stored procedure suddenly become slow

    - by Barguast
    I have written a stored procedure that, yesterday, typically completed in under a second. Today, it takes about 18 seconds. I ran into the problem yesterday as well, and it seemed to be solved by DROPing and re-CREATEing the stored procedure. Today, that trick doesn't appear to be working. :( Interestingly, if I copy the body of the stored procedure and execute it as a straightforward query, it completes quickly. It seems to be the fact that it's a stored procedure that's slowing it down...! Does anyone know what the problem might be? I've searched for answers, but often they recommend running it through Query Analyser, and I don't have it; I'm using SQL Server 2008 Express for now. The stored procedure is as follows; ALTER PROCEDURE [dbo].[spGetPOIs] @lat1 float, @lon1 float, @lat2 float, @lon2 float, @minLOD tinyint, @maxLOD tinyint, @exact bit AS BEGIN -- Create the query rectangle as a polygon DECLARE @bounds geography; SET @bounds = dbo.fnGetRectangleGeographyFromLatLons(@lat1, @lon1, @lat2, @lon2); -- Perform the selection if (@exact = 0) BEGIN SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD], [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID] FROM [POIs] WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD])) AND (@bounds.Filter([Location]) = 1) END ELSE BEGIN SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD], [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID] FROM [POIs] WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD])) AND (@bounds.STIntersects([Location]) = 1) END END The POIs table has an index on MinLOD, MaxLOD, and a spatial index on Location.
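
    A hedged guess at the cause, given that the same text runs fast as an ad-hoc batch: a cached plan compiled for unrepresentative parameter values (parameter sniffing). Two common mitigations, sketched:

      -- 1) Flag the procedure so its next execution compiles a fresh plan:
      EXEC sp_recompile N'dbo.spGetPOIs';
      -- 2) Or ask for a per-execution plan inside the procedure by appending a hint
      --    to each SELECT, e.g.
      --      ... AND (@bounds.STIntersects([Location]) = 1)
      --      OPTION (RECOMPILE);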

    Read the article

  • postgresql help with php loop....

    - by KnockKnockWhosThere
    I keep getting an "Notice: Undefined index: did" error with this query, and I'm not understanding why... I'm much more used to mysql, so, maybe the syntax is wrong? This is the php query code: function get_demos() { global $session; $demo = array(); $result = pg_query("SELECT DISTINCT(did,vid,iid,value) FROM dv"); if(pg_num_rows($result) > 0) { while($r = pg_fetch_array($result)) { switch($r['did']) { case 1: $demo['a'][$r['vid']] = $r['value']; break; case 2: $demo['b'][$r['vid']] = $r['value']; break; case 3: $demo['c'][$r['vid']] = $r['value']; break; } } } else { $session->session_setMessage(2); } return $demo; } When I run that query at the pg prompt, I get results: "(1,1,1,"A")" "(1,2,2,"B")" "(1,3,3,"C")" "(1,4,4,"D")" "(1,5,5,"E")" "(1,6,6,"F")" "(1,7,7,"G")" "(1,8,8,"H")" "(1,9,9,"I")" "(1,10,A,"J")" "(1,11,B,"K")" "(1,12,C,"L")" "(1,13,D,"M")" "(2,14,1,"A")" "(2,15,2,"B")" "(2,16,0,"C")" "(3,17,1,"A")" "(3,18,2,"B")" "(3,19,3,"C")" "(3,20,4,"D")" "(3,21,5,"E")" "(3,22,6,"F")" "(3,23,7,"G")"

    Read the article

  • Django: Paginator + raw SQL query

    - by Silver Light
    Hello! I'm using the Django Paginator everywhere on my website and even wrote a special template tag to make it more convenient. But now I've got to a state where I need to make a complex custom raw SQL query that, without a LIMIT, will return about 100K records. How can I use the Django Paginator with a custom query? Simplified example of my problem: My model: class PersonManager(models.Manager): def complicated_list(self): from django.db import connection #Real query is much more complex cursor.execute("""SELECT * FROM `myapp_person`"""); result_list = [] for row in cursor.fetchall(): result_list.append(row[0]); return result_list class Person(models.Model): name = models.CharField(max_length=255); surname = models.CharField(max_length=255); age = models.IntegerField(); objects = PersonManager(); The way I use pagination with the Django ORM: all_objects = Person.objects.all(); paginator = Paginator(all_objects, 10); try: page = int(request.GET.get('page', '1')) except ValueError: page = 1 try: persons = paginator.page(page) except (EmptyPage, InvalidPage): persons = paginator.page(paginator.num_pages) This way, Django gets very smart and adds a LIMIT to the query when executing it. But when I use my custom manager: all_objects = Person.objects.complicated_list(); all the data is selected, and only then is the Python list sliced, which is VERY slow. How can I make my custom manager behave similarly to the built-in one?

    Read the article

  • Simple neon optimization in Android

    - by Peter
    In my Android application, I use a bunch of open source libraries such as libyuv, libvpx, libcrypto, libssl, etc. Some of them come with an Android.mk; for others, I hand-crafted the Android.mk. The code is built only for ARM for now. Here is my Application.mk: APP_ABI := armeabi-v7a APP_OPTIM := release APP_STL := gnustl_static APP_CPPFLAGS := -frtti I am looking for a way to generate binaries that are optimized for NEON. Browsing the net, I found the following setting that someone is using in his Android.mk: LOCAL_CFLAGS += -mfloat-abi=softfp -mfpu=neon -march=armv7 I wonder, if I simply put this setting in Application.mk, will it automatically get applied across all the libraries? A step before each library is built is the following: include $(CLEAR_VARS) Is it better to include the LOCAL_CFLAGS directive after this line (instead of including it in Application.mk)? Finally, why doesn't ndk-build automatically optimize for NEON when it sees armeabi-v7a in Application.mk? Regards.

    Read the article

  • How to store data in mysql, to get the fastest performance?

    - by Oden
    Hey, I'm trying to work out which of the following two query types would give me the fastest performance for a user messaging module inside my site. The first one I thought about is a multi-table setup, which has a connection table and a main table. The connection table holds the connection between accounts and the messaging table. In this case a query would look like the following, to get some data about the author and the messages he has sent: SELECT m.*, a.username FROM messages AS m LEFT JOIN connection_table ON (message_id = m.id) LEFT JOIN accounts AS a ON (account_id = a.id) WHERE m.id = '32341' Inserting into it is a little bit more "complicated". My other idea, and in my mind the better solution to this problem, is to store the data I would keep in a connection table in the same table where I store the data of the mail. Sounds like I would get lots of duplicated entries, but no, because I have a field of text type that holds user ids like this: *24*32*249* If I want to query them, I use the MySQL LIKE method. Deleting is another problem, but for this I have one more field where I store who has deleted the post. Sadly, I don't know how to join on this. So what would you recommend? Are there other ways?
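
    For comparison, a sketch of the normalized form of the second idea (hedged: the table and column names are illustrative, and an author_id column on messages is assumed): one row per recipient instead of a packed text field like *24*32*249*, which joins and indexes cleanly and gives each recipient their own delete flag.

      CREATE TABLE message_recipients (
          message_id INT NOT NULL,
          account_id INT NOT NULL,
          deleted    TINYINT NOT NULL DEFAULT 0,   -- per-recipient "deleted" marker
          PRIMARY KEY (message_id, account_id),
          KEY idx_recipient (account_id)
      );

      -- messages visible to user 24, with the author's username
      SELECT m.*, a.username
      FROM   messages AS m
      JOIN   message_recipients AS r ON r.message_id = m.id AND r.account_id = 24 AND r.deleted = 0
      JOIN   accounts AS a ON a.id = m.author_id;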

    Read the article

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I ran some speed tests to compare MongoDB and CouchDB. Only inserts were performed while testing, and I found MongoDB to be 15x faster than CouchDB. I know that it is because of sockets vs. HTTP, but it is very interesting to me: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB and the MongoDB C# driver, and the latest version of the CouchDB installation package for Windows. Thanks!

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young; it worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory. int nextRowId(string query, int currentRowId) { array allRowIds = mysql_query(query); // Takes up a lot of memory! int currentIndex = (index of currentRowId in allRowIds); // Takes time! return allRowIds[currentIndex+1]; } While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
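
    A sketch of keyset ("seek") pagination, one common answer to this shape of problem (hedged: the table and column names are placeholders, and it assumes the page already knows the current row's sort value alongside its ID, as the question suggests): instead of materializing every ID, ask for the single row that sorts immediately after the current one. An index on (sort_col, id) makes this cheap, and flipping the comparisons plus ORDER BY ... DESC gives "Previous".

      -- next row after the one being viewed; :current_sort_val and :current_id stand
      -- for values the application already has about the current row
      SELECT id
      FROM   pictures
      WHERE  sort_col > :current_sort_val
         OR (sort_col = :current_sort_val AND id > :current_id)
      ORDER BY sort_col, id
      LIMIT 1;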

    Read the article

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for searching across multiple tables was OR MATCH plus the extra table join; you can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database. SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`, c.`image`, c.`swatch`, e.`name` AS industry, MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance FROM `products` AS a LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`) LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images` WHERE `sequence` = 0) AS c ON (a.`product_id` = c.`product_id`) LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`) INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`) WHERE b.`website_id` = %d AND b.`status` = %d AND b.`active` = %d AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) GROUP BY a.`product_id` ORDER BY relevance DESC LIMIT 0, 9 Any help would be greatly appreciated. EDIT: All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT output:

      id  select_type  table           type    possible_keys  key         key_len  ref                     rows   Extra
      1   PRIMARY      a               ALL     NULL           NULL        NULL     NULL                    16076  Using temporary; Using filesort
      1   PRIMARY      b               ref     product_id     product_id  4        database.a.product_id   2
      1   PRIMARY      e               eq_ref  PRIMARY        PRIMARY     4        database.a.industry_id  1
      1   PRIMARY      <derived2>      ALL     NULL           NULL        NULL     NULL                    23261
      1   PRIMARY      d               eq_ref  PRIMARY        PRIMARY     4        database.a.brand_id     1      Using where
      2   DERIVED      product_images  ALL     NULL           NULL        NULL     NULL                    25933  Using where

    UPDATE: the query now returns (I think correctly) after 196 seconds. The query without multiple tables takes about .56 seconds (which I know is really slow; we plan on changing to Solr or Sphinx soon), but 196 seconds?? If we could add a number to the relevance when the keyword is in the brand name (d.name), that would also work.
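
    A sketch of the workaround that usually comes up for this pattern (hedged, and abbreviated to the two MATCH branches so the shape is visible; the website_products filters and remaining joins would need to be added back): an OR between MATCH() expressions on different tables tends to defeat both FULLTEXT indexes, so run each match as its own indexable query and UNION the results, keeping the relevance column for the final ORDER BY.

      SELECT a.product_id,
             MATCH(a.name, a.sku, a.description) AGAINST ('keyword' IN BOOLEAN MODE) AS relevance
      FROM   products AS a
      WHERE  MATCH(a.name, a.sku, a.description) AGAINST ('keyword' IN BOOLEAN MODE)
      UNION
      SELECT a.product_id,
             MATCH(d.name) AGAINST ('keyword' IN BOOLEAN MODE) AS relevance
      FROM   products AS a
      INNER JOIN brands AS d ON a.brand_id = d.brand_id
      WHERE  MATCH(d.name) AGAINST ('keyword' IN BOOLEAN MODE)
      ORDER BY relevance DESC
      LIMIT 0, 9;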

    Read the article

  • LINQ: How to sort query results by most similarity/equality

    - by aNui
    I want to do a search for music instruments, each of which has the information Name, Category and Origin, as I asked in my earlier post. But now I want to sort/group the results by similarity/equality to the keyword. For example, if I have the list { Harp, Piano, Drum, Guitar, Guitarrón } and I query "p", the result should be { Piano, Harp }, but it shows Harp first because of the list's sequence; and if I add { Grand Piano } to the list and query "piano", the result should be like { Piano, Grand Piano }. Here's my code: static IEnumerable<MInstrument> InstrumentsSearch(IEnumerable<MInstrument> InstrumentsList, string query, MInstrument.Category[] SelectedCategories, MInstrument.Origin[] SelectedOrigins) { var result = InstrumentsList .Where(item => SelectedCategories.Contains(item.category)) .Where(item => SelectedOrigins.Contains(item.origin)) .Where(item => { if ( (" " + item.Name.ToLower()).Contains(" " + query.ToLower()) || item.Name.IndexOf(query) != -1 ) { return true; } return false; } ) .Take(30); return result.ToList<MInstrument>(); } Or the result may be like my old self-invented algorithm that I called "by order of occurrence"; that is just OK to me. If there is any way to do that, please tell me. Thanks in advance.

    Read the article

  • how to use a list containing the query result with hibernate and display it in jsp

    - by kawtousse
    Hi everyone, in my servlet I construct the query like the following: net.sf.hibernate.Session s = null; net.sf.hibernate.Transaction tx; try { s= HibernateUtil.currentSession(); tx=s.beginTransaction(); Query query = s.createQuery("select opcemployees.Nom,opcemployees.Prenom,dailytimesheet.TrackingDate,dailytimesheet.Activity," + "dailytimesheet.ProjectCode,dailytimesheet.WAName,dailytimesheet.TaskCode," + "dailytimesheet.TimeSpent,dailytimesheet.PercentTaskComplete from Opcemployees opcemployees,Dailytimesheet dailytimesheet " + "where opcemployees.Matricule=dailytimesheet.Matricule and dailytimesheet.Etat=3 " + "group by opcemployees.Nom,opcemployees.Prenom" ); List opdts= query.list(); request.setAttribute("items", opdts); request.getRequestDispatcher("EspaceValidation.jsp").forward(request, response); } catch (HibernateException e){ e.printStackTrace();} But in the JSP I can't display the result correctly. I do the following in the JSP: <table> <c:forEach items="${items}" var="item"> <tr> <td>${item}</td> </tr> </c:forEach> </table> Thanks for the help.

    Read the article

  • url optimization in rails?

    - by piemesons
    Hello, I want to create SEO-optimized URLs in Rails, like Stack Overflow does. Right now this is my URL: http://localhost:3000/questions/56 I want to make it something like this: http://localhost:3000/questions/56/this-is-my-optimized-url I am using the RESTful approach. Is there any plug-in available for this?

    Read the article

  • ControlCollection extension method optimization

    - by Johan Leino
    Hi, I've got a question regarding an extension method that I have written, which looks like this: public static IEnumerable<T> FindControlsOfType<T>(this ControlCollection instance) where T : class { T control; foreach (Control ctrl in instance) { if ((control = ctrl as T) != null) { yield return control; } foreach (T child in FindControlsOfType<T>(ctrl.Controls)) { yield return child; } } } public static IEnumerable<T> FindControlsOfType<T>(this ControlCollection instance, Func<T, bool> match) where T : class { return FindControlsOfType<T>(instance).Where(match); } The idea here is to find all controls that match a specific criterion (hence the Func<..>) in the controls collection. My question is: does the second method (the one that takes the Func) first call the first method to find all the controls of type T and then perform the where condition, or does the "runtime" optimize the call to perform the where condition on the "whole" enumeration (if you get what I mean)? Secondly, are there any other optimizations I can make so the code performs better? An example can look like this: var checkbox = this.Controls.FindControlsOfType<MyCustomCheckBox>( ctrl => ctrl.CustomProperty == "Test" ) .FirstOrDefault();

    Read the article

  • SQL won't work? It doesn't come up with errors either

    - by Stefan
    Hey there, I have a PHP function which checks to see whether variables are set and then adds them onto my SQL query. However, I don't seem to be getting any results back!? $where_array = array(); if (array_key_exists("location", $_GET)) { $location = addslashes($_GET['location']); $where_array[] = "`mainID` = '".$location."'"; } if (array_key_exists("gender", $_GET)) { $gender = addslashes($_GET["gender"]); $where_array[] = "`gender` = '".$gender."'"; } if (array_key_exists("hair", $_GET)) { $hair = addslashes($_GET["hair"]); $where_array[] = "`hair` = '".$hair."'"; } if (array_key_exists("area", $_GET)) { $area = addslashes($_GET["area"]); $where_array[] = "`locationID` = '".$area."'"; } $where_expr = ''; if ($where_array) { $where_expr = "WHERE " . implode(" AND ", $where_array); } $sql = "SELECT `postID` FROM `posts` ". $where_expr; $dbi = new db(); $result = $dbi->query($sql); $r = mysql_fetch_row($result); I'm trying to call the data afterwards in a list like so: $dbi = new db(); $offset = ($currentpage - 1) * $rowsperpage; // get the info from the db $sql .= " ORDER BY `time` DESC LIMIT $offset, $rowsperpage"; $result = $dbi->query($sql); // while there are rows to be fetched... while ($row = mysql_fetch_object($result)){ // echo data echo $row['text']; } // end while Anyone got any ideas why I am not retrieving any data? -Stefan

    Read the article

  • magento category tree menu query being called on every page

    - by user1173309
    I have this query being called on every page in Magento CE 1.6.2. To find out where it is being called from, I have disabled all modules and removed the customizations, but it's still being called. This query is slowing down the page loading time and I am at my wit's end trying to find out how I can stop it from being executed. The query is given below; for simplicity, I have removed a lot of category ids to keep the SQL short. It would be great if I could get solutions or hints to stop this query being called. SELECT e.*, IF(at_is_active.value_id > 0, at_is_active.value, at_is_active_default.value) AS is_active, IF(at_include_in_menu.value_id > 0, at_include_in_menu.value, at_include_in_menu_default.value) AS include_in_menu, core_url_rewrite.request_path FROM catalog_category_entity AS e INNER JOIN catalog_category_entity_int AS at_is_active_default ON (at_is_active_default.entity_id = e.entity_id) AND (at_is_active_default.attribute_id = '119') AND at_is_active_default.store_id = 0 LEFT JOIN catalog_category_entity_int AS at_is_active ON (at_is_active.entity_id = e.entity_id) AND (at_is_active.attribute_id = '119') AND (at_is_active.store_id = 1) INNER JOIN catalog_category_entity_int AS at_include_in_menu_default ON (at_include_in_menu_default.entity_id = e.entity_id) AND (at_include_in_menu_default.attribute_id = '934') AND at_include_in_menu_default.store_id = 0 LEFT JOIN catalog_category_entity_int AS at_include_in_menu ON (at_include_in_menu.entity_id = e.entity_id) AND (at_include_in_menu.attribute_id = '934') AND (at_include_in_menu.store_id = 1) LEFT JOIN core_url_rewrite ON (core_url_rewrite.category_id=e.entity_id) AND (core_url_rewrite.is_system=1 AND core_url_rewrite.product_id IS NULL AND core_url_rewrite.store_id='1' AND id_path LIKE 'category/%') WHERE (e.entity_type_id = '9') AND (e.entity_id IN('105', '125', '284', '285', '286', '288', '289', '185', '463', '464', '465', '625')) AND (e.entity_id NOT IN('140', '145', '530', '531', '775')) AND (IF(at_is_active.value_id > 0, at_is_active.value, at_is_active_default.value) = '1') AND (IF(at_include_in_menu.value_id > 0, at_include_in_menu.value, at_include_in_menu_default.value) = '1') Cheers Arjun

    Read the article

  • Stored procedure optimization

    - by George Zacharia
    Hi, I have a stored procedure which takes a lot of time to execute. Can anyone suggest a better approach that achieves the same result set? ALTER PROCEDURE [dbo].[spFavoriteRecipesGET] @USERID INT, @PAGENUMBER INT, @PAGESIZE INT, @SORTDIRECTION VARCHAR(4), @SORTORDER VARCHAR(4),@FILTERBY INT AS BEGIN DECLARE @ROW_START INT DECLARE @ROW_END INT SET @ROW_START = (@PageNumber-1)* @PageSize+1 SET @ROW_END = @PageNumber*@PageSize DECLARE @RecipeCount INT DECLARE @RESULT_SET_TABLE TABLE ( Id INT NOT NULL IDENTITY(1,1), FavoriteRecipeId INT, RecipeId INT, DateAdded DATETIME, Title NVARCHAR(255), UrlFriendlyTitle NVARCHAR(250), [Description] NVARCHAR(MAX), AverageRatingId FLOAT, SubmittedById INT, SubmittedBy VARCHAR(250), RecipeStateId INT, RecipeRatingId INT, ReviewCount INT, TweaksCount INT, PhotoCount INT, ImageName NVARCHAR(50) ) INSERT INTO @RESULT_SET_TABLE SELECT FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded, Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description], Recipes.AverageRatingId, Recipes.SubmittedById, COALESCE(users.DisplayName,users.UserName,Recipes.SubmittedBy) As SubmittedBy, Recipes.RecipeStateId, RecipeReviews.RecipeRatingId, COUNT(RecipeReviews.Review), COUNT(RecipeTweaks.Tweak), COUNT(Photos.PhotoId), dbo.udfGetRecipePhoto(Recipes.RecipeId) AS ImageName FROM FavoriteRecipes INNER JOIN Recipes ON FavoriteRecipes.RecipeId=Recipes.RecipeId AND Recipes.RecipeStateId <> 3 LEFT OUTER JOIN RecipeReviews ON RecipeReviews.RecipeId=Recipes.RecipeId AND RecipeReviews.ReviewedById=@UserId AND RecipeReviews.RecipeRatingId= ( SELECT MAX(RecipeReviews.RecipeRatingId) FROM RecipeReviews WHERE RecipeReviews.ReviewedById=@UserId AND RecipeReviews.RecipeId=FavoriteRecipes.RecipeId ) OR RecipeReviews.RecipeRatingId IS NULL LEFT OUTER JOIN RecipeTweaks ON RecipeTweaks.RecipeId = Recipes.RecipeId AND RecipeTweaks.TweakedById= @UserId LEFT OUTER JOIN Photos ON Photos.RecipeId = Recipes.RecipeId AND Photos.UploadedById = @UserId AND Photos.RecipeId = FavoriteRecipes.RecipeId AND Photos.PhotoTypeId = 1 LEFT OUTER JOIN users ON Recipes.SubmittedById = users.UserId WHERE FavoriteRecipes.UserId=@UserId GROUP BY FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded, Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description], Recipes.AverageRatingId, Recipes.SubmittedById, Recipes.SubmittedBy, Recipes.RecipeStateId, RecipeReviews.RecipeRatingId, users.DisplayName, users.UserName, Recipes.SubmittedBy; WITH SortResults AS ( SELECT ROW_NUMBER() OVER ( ORDER BY CASE WHEN @SORTDIRECTION = 't' AND @SORTORDER='a' THEN TITLE END ASC, CASE WHEN @SORTDIRECTION = 't' AND @SORTORDER='d' THEN TITLE END DESC, CASE WHEN @SORTDIRECTION = 'r' AND @SORTORDER='a' THEN AverageRatingId END ASC, CASE WHEN @SORTDIRECTION = 'r' AND @SORTORDER='d' THEN AverageRatingId END DESC, CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER='a' THEN RecipeRatingId END ASC, CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER='d' THEN RecipeRatingId END DESC, CASE WHEN @SORTDIRECTION = 'd' AND @SORTORDER='a' THEN DateAdded END ASC, CASE WHEN @SORTDIRECTION = 'd' AND @SORTORDER='d' THEN DateAdded END DESC ) RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description], AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId, ReviewCount, TweaksCount, PhotoCount, ImageName FROM @RESULT_SET_TABLE WHERE ((@FILTERBY = 1 AND SubmittedById= @USERID) OR ( @FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL)) OR ( @FILTERBY <> 1 AND @FILTERBY <> 2)) ) SELECT RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description], AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId, ReviewCount, TweaksCount, PhotoCount, ImageName FROM SortResults WHERE RowNumber BETWEEN @ROW_START AND @ROW_END print @ROW_START print @ROW_END SELECT @RecipeCount=dbo.udfGetFavRecipesCount(@UserId) SELECT @RecipeCount AS RecipeCount SELECT COUNT(Id) as FilterCount FROM @RESULT_SET_TABLE WHERE ((@FILTERBY = 1 AND SubmittedById= @USERID) OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL)) OR (@FILTERBY <> 1 AND @FILTERBY <> 2)) END

    Read the article

  • Database maintenance, big binary table optimization

    - by jgemedina
    Hi, I have a huge database, around 1 TB in size. Most of the space is consumed by a table which stores images; that table right now has almost 800k rows. Server response time has increased, and I would like to know which techniques I should use or you would recommend: partitioning, or some way to reorganize the table? Every row is accessed by the image id column, and the table has its clustered index on that column. Every two days I reorganize the index and every 7 days I rebuild it, but that seems not to be working. Any suggestions?
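
    A sketch of the partitioning idea (hedged: it assumes SQL Server, which the reorganize/rebuild routine suggests, and the boundary values, table and column names are placeholders): partition the image table on the clustered id column so that index maintenance and lookups touch one slice at a time.

      -- placeholder boundaries; pick them from the real id distribution
      CREATE PARTITION FUNCTION pf_image_id (INT)
          AS RANGE RIGHT FOR VALUES (200000, 400000, 600000);

      CREATE PARTITION SCHEME ps_image_id
          AS PARTITION pf_image_id ALL TO ([PRIMARY]);

      -- rebuilding the clustered index on the scheme moves existing rows into the
      -- partitions (this assumes the clustered index is a plain index rather than the
      -- primary key constraint; a PK would have to be dropped and recreated instead)
      CREATE UNIQUE CLUSTERED INDEX IX_Images_ImageId ON dbo.Images (image_id)
          WITH (DROP_EXISTING = ON)
          ON ps_image_id (image_id);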

    Read the article

  • mysql query using global variables

    - by Carlos
    I am trying to run a query to activate the user's account. I am not sure if I am having a problem with the query itself or if there's something else that I don't know about. Here is the code: if($_SESSION['lastid']&&$_SESSION['random']) { $check= mysql_query('SELECT * FROM members WHERE id= "$_SESSION[lastid]" AND random = " $_SESSION[random]"'); $checknum = mysql_num_rows($check); //$checknum = mysql_query($check) or die("Error: ". mysql_error(). " with query ". $check); if($checknum != 0) // run query to activate the account { $acti= mysql_query('UPDATE members SET activation = "1" WHERE id= "$_SESSION[lastid]"'); die('Your account has been activated. You may now log in!'); }else{ echo('Invalid id or activation code.') . ' lastid: ' .$_SESSION['lastid'] . ' random: ' .$_SESSION['random'] ; // die ('Invalid id or activation code.'); } }else{ die('Could not either find id or random number!'); } This is the warning I am getting from MySQL: Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /hermes/bosweb26b/b2501/servername/folder/file.php on line 30 But when I echo the variables out, I get the same values that are stored in the database.... Invalid id or activation code. lastid: 2 and random: 36308075 Could someone please give me a hint? Thank you.

    Read the article

  • sequential minimal optimization C++

    - by Anton
    Hello. I want to implement an SVM, but I have run into a problem training it. I originally planned to use SMO, but I did not find a ready-made library for C++. If there is one, please share it. Thank you in advance. The task is finding an object (probably a human) in a picture.

    Read the article

  • Access VBA question: Change the query being referenced by a function, depending on context

    - by Tara Amatista
    I have a custom function in Access2007 that hinges on grabbing data out of a specific query. It opens Outlook, creates a new email and populates the fields with specific addresses and data taken from the query ("DecisionEmail"). Now I want to make a different query ("RequestEmail") and have it populate the email with that data. So all I have to do is change this line: Set MailList = db.OpenRecordset("DecisionEmail") and that's where I get stumped. This is my desired result: If the user is on Form_Decision and clicks the button "Send email", "DecisionEmail" will get plugged into the function and that data will appear in the email. If the user on Form_SendRequest and clicks the button "Send email", "RequestEmail" will instead get plugged in. The reason that these are different queries is because they contain very different information that is smudged about in different ways. However, since it's just one little thing that needs to change in the function code, I don't think a brand new function is a good idea. My last resort would be to make a brand new function and use the Conditions field in the Macro interface to choose between them, but I have a feeling there's a more elegant solution possible. I have a vague notion of setting the query names as variables and using an If statement but I just don't have the mental vocabulary for thinking through this.

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", and then have all of those records loaded into memory as you go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so does anyone have any recommendations for how to do it?

    Read the article

  • please check my MYSQL query & give me advice?

    - by Suba
    select s.s_nric as NRIC,s.s_name as NAME,s.s_psle_eng as PSLE_ENG,s.s_psle_math as PSLE_MATHS,s.s_psle_aggr as PSLE_AGGR, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEEN%' AND re.re_year='2008' AND re.re_semester='2' AND re.re_nric=s.s_nric ) as English_2008, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEMA%' AND re.re_year='2008' AND re.re_semester='2' AND re.re_nric=s.s_nric ) Maths_2008, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEEN%' AND re.re_year='2009' AND re.re_semester='2' AND re.re_nric=s.s_nric ) as English_2009, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEMA%' AND re.re_year='2009' AND re.re_semester='2' AND re.re_nric=s.s_nric ) Maths_2009 ,isc.isc_g_gpa as ISC_GPA from si_student_data as s LEFT JOIN si_isc_gpa as isc ON isc.isc_g_nric=s.s_nric where 1=1 AND s.s_admission_year='2008' GROUP BY s.s_nric ORDER BY s.s_gender,s.s_name asc This is my query; please check the subqueries in particular. They are: (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEEN%' AND re.re_year='2008' AND re.re_semester='2' AND re.re_nric=s.s_nric ) as English_2008, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEMA%' AND re.re_year='2008' AND re.re_semester='2' AND re.re_nric=s.s_nric ) Maths_2008, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEEN%' AND re.re_year='2009' AND re.re_semester='2' AND re.re_nric=s.s_nric ) as English_2009, (SELECT re.re_mark FROM si_results re WHERE re.re_code like 'FEMA%' AND re.re_year='2009' AND re.re_semester='2' AND re.re_nric=s.s_nric ) Maths_2009 When I execute the query, the server takes a long time to run it. How can I make it simpler? Please advise me. Thanks.
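
    A sketch of one common rewrite (hedged: it only covers the four mark columns and keeps the names from the question, so the PSLE and ISC columns would still need to be added back): fetch the marks in a single pass over si_results with conditional aggregation instead of four correlated subqueries per student row. An index on si_results (re_nric, re_semester, re_year, re_code) should help either version.

      SELECT s.s_nric AS NRIC,
             s.s_name AS NAME,
             MAX(CASE WHEN re.re_code LIKE 'FEEN%' AND re.re_year = '2008' THEN re.re_mark END) AS English_2008,
             MAX(CASE WHEN re.re_code LIKE 'FEMA%' AND re.re_year = '2008' THEN re.re_mark END) AS Maths_2008,
             MAX(CASE WHEN re.re_code LIKE 'FEEN%' AND re.re_year = '2009' THEN re.re_mark END) AS English_2009,
             MAX(CASE WHEN re.re_code LIKE 'FEMA%' AND re.re_year = '2009' THEN re.re_mark END) AS Maths_2009
      FROM si_student_data AS s
      LEFT JOIN si_results AS re
             ON re.re_nric = s.s_nric
            AND re.re_semester = '2'
      WHERE s.s_admission_year = '2008'
      GROUP BY s.s_nric, s.s_name
      ORDER BY s.s_name ASC;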

    Read the article
