Search Results

Search found 4685 results on 188 pages for 'queries'.

Page 27/188

  • How to combine these three sql queries into one?

    - by lam3r4370
    How can I combine these three SQL queries into one?

        SELECT DISTINCT * FROM rss WHERE MATCH(content,title) AGAINST ('$filter')

        SELECT COUNT(content) FROM rss WHERE MATCH(content,title) AGAINST ('$filters')

    And, if the count returned by the second query is 0:

        SELECT DISTINCT * FROM rss WHERE content LIKE '%$filters%' OR title LIKE '%$filters%';

    The variables are built like this ($filters may contain more than one keyword):

        $filter .= $row['filter'];
        $filters = $row['filter'];
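
    One possible way to fold all three into a single statement, sketched on the assumption that a FULLTEXT index exists on (content, title) and with 'keyword' standing in for the escaped $filters value: the subquery counts the full-text hits, and the LIKE branch only applies when that count is zero.

        SELECT DISTINCT r.*,
               (SELECT COUNT(*) FROM rss
                 WHERE MATCH(content, title) AGAINST ('keyword')) AS fulltext_hits
        FROM rss AS r
        WHERE MATCH(r.content, r.title) AGAINST ('keyword')
           OR (
                (SELECT COUNT(*) FROM rss
                  WHERE MATCH(content, title) AGAINST ('keyword')) = 0
                AND (r.content LIKE '%keyword%' OR r.title LIKE '%keyword%')
              );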

    Read the article

  • Need help with 2 MySql Queries. Join vs Subqueries.

    - by BugBusterX
    I have two tables:

        user: id, name
        message: sender_id, receiver_id, message, read_at, created_at

    There are two result sets I need to retrieve, and I'm trying to find the best way to get them. The queries I'm currently using are included at the end.

    1. A list of users where, for each user, I know whether there are any unread messages from that user (they are the sender, I am the receiver) and whether there are any messages between us at all (either of us as sender, the other as receiver).
    2. The same as above, but restricted to users with whom there has been some messaging, sorted by unread first, then by last message received.

    Should this be done with joins or subqueries? In the first case I don't need a count; I just need to know whether at least one unread message exists. Here are the schema, some sample data, and my current queries. The first query gives me exactly what I want. My concern is the second one: I would like to order by message.created_at, but I don't think I can do that with this grouping, and I don't know whether this approach is the most optimized and fast.

        CREATE TABLE `user` (
          `id` bigint(20) NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          PRIMARY KEY (`id`)
        );

        INSERT INTO `user` VALUES (1,'User 1'),(2,'User 2'),(3,'User 3'),(4,'User 4'),(5,'User 5');

        CREATE TABLE `message` (
          `id` bigint(20) NOT NULL AUTO_INCREMENT,
          `sender_id` bigint(20) DEFAULT NULL,
          `receiver_id` bigint(20) DEFAULT NULL,
          `message` text,
          `read_at` datetime DEFAULT NULL,
          `created_at` datetime NOT NULL,
          PRIMARY KEY (`id`)
        );

        INSERT INTO `message` VALUES
          (1,3,1,'Messge',NULL,'2010-10-10 10:10:10'),
          (2,1,4,'Hey','2010-10-10 10:10:12','2010-10-10 10:10:11'),
          (3,4,1,'Hello','2010-10-10 10:10:19','2010-10-10 10:10:15'),
          (4,1,4,'Again','2010-10-10 10:10:25','2010-10-10 10:10:21'),
          (5,3,1,'Hiii',NULL,'2010-10-10 10:10:21');

    Query for case 1:

        SELECT u.*, m_new.id AS have_new, m.id AS have_any
        FROM user u
        LEFT JOIN message m_new
          ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL)
        LEFT JOIN message m
          ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1))
        GROUP BY u.id;

    Query for case 2:

        SELECT u.*, m_new.id AS have_new, m.id AS have_any
        FROM user u
        LEFT JOIN message m_new
          ON (u.id = m_new.sender_id AND m_new.receiver_id = 1 AND m_new.read_at IS NULL)
        LEFT JOIN message m
          ON ((u.id = m.sender_id AND m.receiver_id = 1) OR (u.id = m.receiver_id AND m.sender_id = 1))
        WHERE m.id IS NOT NULL
        GROUP BY u.id;
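
    For case 2, one sketch that both restricts the list to users with some messaging and sorts by unread first, then latest message (assumptions: MySQL's relaxed GROUP BY is acceptable here, as in the original queries, and "unread" means sent to user 1 and never read):

        SELECT u.*,
               MAX(m.read_at IS NULL AND m.receiver_id = 1) AS have_new,
               MAX(m.created_at)                            AS last_message_at
        FROM user u
        JOIN message m
          ON (m.sender_id = u.id AND m.receiver_id = 1)
          OR (m.receiver_id = u.id AND m.sender_id = 1)
        GROUP BY u.id
        ORDER BY have_new DESC, last_message_at DESC;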

    Read the article

  • How to translate these 2 queries from MySQL to PostgreSQL?

    - by xRobot
    How can I translate these two statements to PostgreSQL?

        CREATE TABLE `example` (
          `id` int(10) unsigned NOT NULL auto_increment,
          `from` varchar(255) NOT NULL default '0',
          `message` text NOT NULL,
          `lastactivity` timestamp NULL default '0000-00-00 00:00:00',
          `read` int(10) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `from` (`from`)
        ) DEFAULT CHARSET=utf8;

    Query:

        SELECT * FROM table_1
        LEFT OUTER JOIN table_2 ON ( table_1.id = table_2.id )
        WHERE (table_1.lastactivity > NOW()-100);
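
    A possible PostgreSQL rendering, sketched under a few assumptions: PostgreSQL has no unsigned integer type and no zero timestamp, so plain integer and a NULL default are used; "from" is a reserved word and stays quoted; and the MySQL expression NOW()-100 is read here as "roughly 100 seconds ago".

        CREATE TABLE example (
            id            serial PRIMARY KEY,
            "from"        varchar(255) NOT NULL DEFAULT '0',
            message       text NOT NULL,
            lastactivity  timestamp DEFAULT NULL,
            "read"        integer NOT NULL
        );
        CREATE INDEX example_from_idx ON example ("from");

        SELECT *
        FROM table_1
        LEFT OUTER JOIN table_2 ON table_1.id = table_2.id
        WHERE table_1.lastactivity > NOW() - INTERVAL '100 seconds';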

    Read the article

  • Which is more efficient in MySQL, a big join or multiple queries on a single table?

    - by Tom Greenpoint
    I have a MySQL database with two tables:

        Post  – 500,000 rows (Postid, Userid)
        Photo – 200,000 rows (Photoid, Postid)

    About 50,000 posts have photos (four each on average); most posts have none. I need to build a feed of all posts with photos for a given userid, about 50 posts per user. Which approach would be more efficient?

    1: Big join

        select * from post left join photo on post.postid=photo.postid where post.userid=123

    2: Multiple queries

        select * from post where userid=123
        while (loop through rows) {
            select * from photo where postid=row[postid]
        }
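
    If the feed should contain only posts that actually have photos, a sketch of the single-query form is below; it assumes indexes on post(userid) and photo(postid), and uses an inner join so photo-less posts drop out on the database side.

        SELECT p.*, ph.*
        FROM post  AS p
        JOIN photo AS ph ON ph.postid = p.postid
        WHERE p.userid = 123;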

    Read the article

  • How to write native SQL queries in Hibernate without hardcoding table names and fields?

    - by serg555
    Sometimes you have to write some of your queries in native SQL rather than Hibernate HQL. Is there a nice way to avoid hardcoding table and column names and instead pull them from the existing mapping? For example, instead of:

        String sql = "select user_name from tbl_user where user_id = :id";

    something like:

        String sql = "select " + Hibernate.getFieldName("user.name")
                   + " from "  + Hibernate.getTableName(User.class)
                   + " where " + Hibernate.getFieldName("user.id") + " = :id";

    Read the article

  • ASP.NET MVC <OutputCache> SqlDependency (CommandNotification?) with LINQ queries

    - by sinni800
    Hello, I use LINQ queries in my ASP.NET MVC application and want to use OutputCache on some of my actions. I hear this should be possible with command notifications, but those seem to apply only to SqlCommand objects I create myself, or am I wrong? Can I manually tell SQL Server to send SqlDependency notifications when certain tables change? And if so, how can I attach them to the OutputCache? A side question: can you do this with strongly typed views too? Thank you in advance.

    Read the article

  • How can I generate an Expression tree that queries an object with List<T> as a property?

    - by David Robbins
    Forgive my clumsy explanation, but I have a class that contains a List:

        public class Document {
            public int OwnerId { get; set; }
            public List<User> Users { get; set; }
            public Document() { }
        }

        public class User {
            public string UserName { get; set; }
            public string Department { get; set; }
        }

    Currently I use PredicateBuilder to perform dynamic queries on my objects. How can I turn the following LINQ statement into an expression tree?

        var predicate = PredicateBuilder.True<User>();
        predicate = predicate.And<User>(user => user.Department == "HR");
        var deptDocs = documents.AsQueryable()
                                .Where(doc => doc.Users.AsQueryable().Count(predicate) > 0)
                                .ToList();

    In other words:

        var deptDocs = documents.HasUserAttributes("Department", "HR").ToList();

    Read the article

  • Trying to use VB to automate some queries. Running into what looks like a string problem

    - by Jeff
    Hi there. I'm using MS Access 2003 and I'm trying to execute a few queries at once using VB. When I write the query out in SQL it works fine, but when I run it from VB it asks me to "Enter Parameter Value" for DEPA and then DND (the first few letters of two strings I have). Here's the code:

        Option Compare Database

        Public Sub RemoveDupelicateDepartments()
            Dim oldID As String
            Dim newID As String
            Dim sqlStatement As String

            oldID = "DND-01"
            newID = "DEPA-04"
            sqlStatement = "UPDATE [Clean student table] SET [HomeDepartment]=" & newID & _
                           " WHERE [HomeDepartment]=" & oldID & ";"
            DoCmd.RunSQL sqlStatement & ""
        End Sub

    It looks to me as though it takes the string up to the dash and nothing else. What should my code look like?
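
    The prompts appear because the IDs are pasted into the SQL text unquoted, so Access parses DEPA-04 and DND-01 as expressions containing the unknown names DEPA and DND and treats them as parameters. A sketch of the statement the VB string needs to produce (i.e. wrap newID and oldID in single quotes when concatenating):

        UPDATE [Clean student table]
        SET [HomeDepartment] = 'DEPA-04'
        WHERE [HomeDepartment] = 'DND-01';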

    Read the article

  • Trying to Unit Test A Class That Makes DB Queries Using Hibernate And Can't Get Session Created...

    - by Jared Michaels
    I am trying to implement JUnit tests for a class that performs DB queries using Hibernate. When I create the class under test, I get access to the session through the factory by doing the following:

        InitialContext context = new InitialContext();
        sessionFactory = (SessionFactory) context.lookup(hibernateContext);

    This works fine when I deploy to JBoss 5.1, but I am trying to figure out how to get it to work from my JUnit test. I keep getting an exception stating "Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial". I've searched high and low but haven't been able to find what specifically I need to do to get this to work. I am not using Spring or any other framework, just plain old Java and JUnit.

    Read the article

  • Any way to use the JavaScript API via iOS? And a problem with FQL query responses.

    - by Assaf b
    Hi, I'm developing an iPhone application with FB Connect. The JavaScript API includes really powerful methods like wait.on for combining requests. Is there any way to use those API methods from iOS and Xcode? About the FQL responses: I'm using both the request:didReceiveResponse: and request:didLoad: methods. All the FQL queries I send trigger didReceiveResponse, but not all of them trigger the second one (didLoad).

        @"SELECT uid,eid FROM event_member WHERE uid in (select uid2 from friend where uid1=%d limit 100)", userID

    When the limit is 1-2 it triggers both; when it grows to 100 (friends to fetch) it triggers only the first. Does anyone recognize this problem? Thanks!

    Read the article

  • Is it possible to combine these 3 mySQL queries?

    - by Greenie
    I know $downloadfile and I want $user_id. By trial and error I found that the code below does what I want, but it's three separate queries and three while loops, and I have a feeling there is a better way. And yes, I have only a very little idea about what I'm doing :)

        $result = pod_query("SELECT ID FROM wp_posts WHERE guid LIKE '%/$downloadfile'");
        while ($row = mysql_fetch_assoc($result)) {
            $attachment = $row['ID'];
        }

        $result = pod_query("SELECT pod_id FROM wp_pods_rel WHERE tbl_row_id = '$attachment'");
        while ($row = mysql_fetch_assoc($result)) {
            $pod_id = $row['pod_id'];
        }

        $result = pod_query("SELECT tbl_row_id FROM wp_pods_rel WHERE tbl_row_id = '$pod_id' AND field_id = '28'");
        while ($row = mysql_fetch_assoc($result)) {
            $user_id = $row['pod_id'];
        }
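
    A sketch of collapsing the three lookups into one query by joining wp_pods_rel back onto itself. This makes an assumption about the intended relationships, since the last loop selects tbl_row_id but reads $row['pod_id']; the column names below mirror the originals and may need adjusting.

        SELECT r2.pod_id AS user_id
        FROM wp_posts    AS p
        JOIN wp_pods_rel AS r1 ON r1.tbl_row_id = p.ID
        JOIN wp_pods_rel AS r2 ON r2.tbl_row_id = r1.pod_id
                              AND r2.field_id   = 28
        WHERE p.guid LIKE '%/$downloadfile';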

    Read the article

  • Google Suggest - What determines the sort order of suggested queries on google?

    - by John Himmelman
    How is this sort order determined? Is it ranked by popularity, number of results, or a mysterious Google algorithm? Does their algorithm take into account the search popularity of a query (using Google Trends data or something similar)? Edit: I found a news article dating back to when Google Suggest was made public in 2004. Here is an excerpt... How does it work? "Our algorithms use a wide range of information to predict the queries users are most likely to want to see. For example, Google Suggest uses data about the overall popularity of various searches to help rank the refinements it offers." Source: http://www.free-seo-news.com/newsletter138.htm

    Read the article

  • How to join several nearly identical queries into one?

    - by Devyn
    Hi, assume I have an order_dummy table that stores order_dummy_id, order_id, user_id, book_id, and author_id. You may object to the logic of this table, but I need to do it this way. I want to execute the following queries:

        SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 AND book_id = 1 ORDER BY `order_dummy_id` DESC LIMIT 1
        SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 AND book_id = 2 ORDER BY `order_dummy_id` DESC LIMIT 1
        SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 AND book_id = 3 ORDER BY `order_dummy_id` DESC LIMIT 1

    Keep in mind that several copies of the same book can be included in one order, which is why I order by order_dummy_id descending and limit to 1, so only the latest row for a book is shown. My goal is to get that latest row for every book in a single result set. I tried GROUP BY like this:

        SELECT * FROM order_dummy WHERE order_id = 1 AND user_id = 1 GROUP BY book_id

    but it only returns the earliest order_dummy_id for each book. I have no idea anymore; I would appreciate your help!
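
    One common sketch for this "latest row per group" problem, assuming order_dummy_id increases with time as the original queries imply: find the maximum order_dummy_id per book in a subquery, then join back to pull the full rows.

        SELECT od.*
        FROM order_dummy AS od
        JOIN (
            SELECT book_id, MAX(order_dummy_id) AS latest_id
            FROM order_dummy
            WHERE order_id = 1 AND user_id = 1
            GROUP BY book_id
        ) AS latest ON latest.latest_id = od.order_dummy_id;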

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect the sort

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
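
    A sketch of the dirty-read idea in SQL (the table and column names here are hypothetical, and the syntax assumes SQL Server; on MySQL/InnoDB the equivalent would be SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED before the select):

        SELECT TOP 50 e.id, e.title, e.vote_count
        FROM entries AS e WITH (NOLOCK)
        ORDER BY e.vote_count DESC;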

    Read the article

  • Need help on a nested loop of queries in PHP and MySQL?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;

        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while ($r = mysql_fetch_assoc($q)) {
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for ($i = 1; $i <= 20; $i++) {
                $tbl_name = 'data' . $i;

                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");

                while ($r2 = mysql_fetch_assoc($q2)) {
                    $money_spent += $r2['money_spent'];
                }

                if ($money_spent > 1000000) {
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1,000 users it takes forever, let alone 40k. Is there any way to optimise/improve this code? EDIT: By the way, each of the other 20 tables has roughly 20-40k records.
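
    One way to avoid the 40k x 20 round trips is to let MySQL do the whole aggregation, sketched below on the assumption that all twenty tables data1..data20 share the user and money_spent columns (only the first three are written out); the rows returned are the "good customers".

        SELECT u.user, SUM(d.money_spent) AS total_spent
        FROM users AS u
        JOIN (
            SELECT user, money_spent FROM data1
            UNION ALL SELECT user, money_spent FROM data2
            UNION ALL SELECT user, money_spent FROM data3
            -- ...repeat through data20
        ) AS d ON d.user = u.user
        WHERE u.activated = '1'
        GROUP BY u.user
        HAVING total_spent > 1000000;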

    Read the article

  • Efficient way to combine results of two database queries.

    - by ensnare
    I have two tables on different servers, and I'd like some help finding an efficient way to combine and match the data sets. Here's an example. From server 1, which holds our stories, I run a query like:

        query = """SELECT author_id, title, text
                   FROM stories
                   ORDER BY timestamp_created DESC
                   LIMIT 10"""
        results = DB.getAll(query)
        for i in range(len(results)):
            # Build a string of author_ids, e.g. '1314,4134,2624,2342'

    Then I'd like to fetch some info about each author_id from server 2:

        query = """SELECT id, avatar_url
                   FROM members
                   WHERE id IN (%s)"""
        values = (uid_list)
        results = DB.getAll(query, values)

    Now I need some way to combine the two result sets so I end up with a dict that has the story as well as the avatar_url and member id. If the data lived on one server it would be a simple join:

        SELECT * FROM members, stories WHERE members.id = stories.author_id

    But since the data is stored on multiple servers, that is not possible. What is the most efficient way to do this? Thanks.

    Read the article

  • Is there a Firefox or Chrome plugin, or a standalone program, for monitoring site usage and search queries?

    - by Leigh Caldwell
    I'm running some research on how people search the web for specific types of information. I'd like to be able to set them up with a laptop and browser and then record a history of what they search for and what sites they visit. A Firefox or Chrome plugin would be ideal, but a standalone program is fine too. It doesn't need to be free, just quick and reliable. It doesn't need to be a general PC monitoring program (though that would be OK too) - it's only Web usage I need to track. I've found a few on the Web but am not sure which ones to trust. Your recommendations would be much appreciated.

    Read the article

  • SQLAuthority News – We’re sorry… … but your computer or network may be sending automated queries. To protect our users, we can’t process your request right now

    - by pinaldave
    I often use multiple browsers when I am working on several projects simultaneously, and I use Google Reader to read a few feeds. Recently I hit the following error, and it would not go away. I restarted my computer and rebooted my network, and I am confident the machine has no viruses or malware, but I still could not get past it. When I opened Google Reader in another browser it worked fine. Finally I found the solution, and I want to share it with all of you. The error:

        We’re sorry… … but your computer or network may be sending automated queries. To protect our users, we can’t process your request right now.

    I removed the Google Reader cookie named ‘reader_offline’, as displayed in the image below. Once I removed that cookie, I could log in to Google Reader perfectly fine. I think this message from Google was misleading and inaccurate; however, the solution is easy enough. I just wanted to share this quick tip with everyone who is facing such an issue. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Google

    Read the article

  • How do I determine the cause of a sustained spike in mysql queries/activity?

    - by mattmcmanus
    This is more of an "I'm trying to learn how this works" question than a "there is a serious problem I can't figure out!" question. I'm setting up a VPS and have been tweaking and changing things here and there. I installed munin about two days ago, and yesterday I noticed a significant increase in MySQL activity, so now my curiosity is going crazy. How do I set up and read MySQL's query log? I have about five databases on the server, and I want to see which one is getting all the action. Is there anything else I can do to keep a better eye on what's going on? Here are the graphs. As you can tell, it's not much activity at all, but I'm curious about the change. The sites on the server don't get a lot of traffic; it runs a couple of Drupal sites, only one of which is live. The live one hasn't had a jump in traffic, and its last spike was 250 visitors, so it's barely a spike at all.
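
    A sketch of turning on the general query log at runtime, assuming MySQL 5.1 or later (the post doesn't state the server version); the second pair of statements is a lighter-weight way to watch activity without logging every statement.

        SET GLOBAL general_log_file = '/var/log/mysql/general.log';
        SET GLOBAL general_log = 'ON';

        SHOW GLOBAL STATUS LIKE 'Com_%';
        SHOW FULL PROCESSLIST;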

    Read the article

  • Can I configure a DNS cache not to forward AAAA queries?

    - by itsadok
    I'm setting up an internal DNS cache because my firewall is having trouble handling all the sessions created by DNS requests. I tried BIND 9, dnsmasq, and DJB's dnscache; they all help reduce the number of requests leaving my network, but there are still a lot of requests being made. Looking at the log files and at tcpdump and dnstop output, it seems that requests that return SERVFAIL do not get cached at all, and a lot of those failed requests are AAAA queries, which is a shame because I do not have IPv6 enabled on any server. I've looked at several ways to help the situation, and I think that if I could somehow prevent AAAA record requests from being forwarded by the DNS cache, it would reduce the number of requests significantly. The closest thing I found was the filter-aaaa-on-v4 option in BIND 9; however, that only removes the record from the server's response and does not stop the cache from forwarding the query upstream. Any help would be appreciated.

    Read the article

  • What PowerShell/WSMan clients or queries are consuming more than 1000 requests per 2 seconds?

    - by makerofthings7
    The Exchange 2010 remote administration tools are complaining with the following error:

        [txexmb02.ibm.com] Connecting to remote server failed with the following error message : The WS-Management service cannot
        process the request. The system load quota of 1000 requests per 2 seconds has been exceeded. Send future requests at a
        slower rate or raise the system quota. The next request from this user will not be approved for at least 558475776
        milliseconds. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed
        VERBOSE: Connecting to TXEXHC02.ibm.com

    The help document this error refers to says this is a WS-Man error. We're running SCOM 2007 R2, and I am thinking that is what is driving up the query count, but I need to prove it.

    Read the article

  • [Repost-ish] Impossibly slow queries, Tables indexed, How can I speed it up?

    - by colorfulgrayscale
    Hi guys, I posted a little earlier at http://stackoverflow.com/questions/2656837/query-results-taking-too-long-on-200k-database-speed-up-tips asking about slow-executing SQL queries. I was told to index the columns; I did, and it is still slow (slow as in I never see the results; both MySQL and SQLite freeze up on the query). Help would be greatly appreciated. Here is the SQL:

        SELECT equipment.`unitID` AS `equipment_unitID`, equipment.`fleetCode` AS `equipment_fleetCode`,
               equipment.type AS equipment_type, equipment.tiremap AS equipment_tiremap,
               tiremap.`TireID` AS `tiremap_TireID`, tiremap.`WorkMap` AS `tiremap_WorkMap`,
               tiremap.`Position` AS `tiremap_Position`, tiremap.`DepthMap` AS `tiremap_DepthMap`,
               tiremap.timestamp AS tiremap_timestamp,
               workreference.`aMap` AS `workreference_aMap`, workreference.`bMap` AS `workreference_bMap`,
               tirework.`RO` AS `tirework_RO`, tirework.location AS tirework_location,
               tirework.mileage AS tirework_mileage, tirework.`mechanicCode` AS `tirework_mechanicCode`,
               tirework.`partNumber` AS `tirework_partNumber`, tirework.`historyID` AS `tirework_historyID`,
               tirework.workmap AS tirework_workmap, tirework.timestamp AS tirework_timestamp
        FROM equipment, tiremap, workreference, tirework
        WHERE equipment.tiremap = tiremap.`TireID`
          AND tiremap.`WorkMap` = workreference.`aMap`
          AND workreference.`bMap` = tirework.workmap
        LIMIT 5

    and here is the EXPLAIN for it:

        id  select_type  table          type    possible_keys                      key        key_len  ref                      rows
        1   SIMPLE       equipment      ALL     tiremap                                                                          14079
        1   SIMPLE       tiremap        ref     PRIMARY,WorkMap,TireID,WorkMap_2   PRIMARY    52       tire.equipment.tiremap   3
        1   SIMPLE       workreference  ref     aMap,bMap                          aMap       52       tire.tiremap.WorkMap     1
        1   SIMPLE       tirework       eq_ref  NewIndex1                          NewIndex1  52       tire.workreference.bMap  1
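
    Judging from the EXPLAIN, only equipment is read in full (about 14k rows) and each subsequent join hits an index, so the plan itself does not look catastrophic. As a readability check, a sketch of the same query in explicit JOIN form (the select list is shortened here purely for brevity):

        SELECT e.`unitID`, e.`fleetCode`, t.`TireID`, w.`aMap`, w.`bMap`, tw.`RO`
        FROM equipment     AS e
        JOIN tiremap       AS t  ON e.tiremap   = t.`TireID`
        JOIN workreference AS w  ON t.`WorkMap` = w.`aMap`
        JOIN tirework      AS tw ON w.`bMap`    = tw.workmap
        LIMIT 5;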

    Read the article

  • Would this method work to scale out SQL queries?

    - by David
    I have a database containing a single huge table. At the moment a query can take anything from 10 to 20 minutes, and I need that to go down to 10 seconds. I have spent months trying different products such as GridSQL. GridSQL works fine, but it uses its own parser, which does not have all the features I need. I have also optimized my database in various ways without getting the speedup I need. I have a theory on how one could scale out queries, meaning that several nodes are used to run a single query in parallel. The idea is to take an incoming SQL query and simply run it, exactly as it is, on all the nodes. When the results are returned to a coordinator node, the same query is run on the union of the result sets. I realize that an aggregate function like average needs to be rewritten into a count and a sum for the nodes, with the coordinator dividing the sum of the sums by the sum of the counts to get the average. What kinds of problems could not easily be solved using this model? I believe one issue would be COUNT(DISTINCT). Edit: I am getting many nice suggestions, but none have addressed the method itself.
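
    A sketch of the AVG() rewrite described above, with hypothetical table and column names: each node computes partial aggregates for its shard, and the coordinator combines the partial rows it collects.

        -- on every node, against its local shard
        SELECT SUM(amount) AS part_sum, COUNT(amount) AS part_count
        FROM orders
        WHERE region = 'EU';

        -- on the coordinator, over the collected partial rows
        SELECT SUM(part_sum) / SUM(part_count) AS avg_amount
        FROM node_partials;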

    Read the article
