Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

  • Oracle - PL/SQL: selecting from a SYS_REFCURSOR

    - by Einstein
    I have a function that returns a SYS_REFCURSOR containing a single row but multiple columns. What I'm looking to do is write a SQL query with nested sub-queries that use the column values returned in the SYS_REFCURSOR. Alternative ideas, such as types, would be appreciated. The code below was written on the fly and hasn't been validated for syntax.

        -- Oracle function
        CREATE FUNCTION DummyFunction(dummyValue NUMBER) RETURN SYS_REFCURSOR IS
          RETURN_DATA SYS_REFCURSOR;
        BEGIN
          OPEN RETURN_DATA FOR
            SELECT TO_CHAR(dummyValue) || 'A' AS ColumnA
                  ,TO_CHAR(dummyValue) || 'B' AS ColumnB
            FROM DUAL;
          RETURN RETURN_DATA;
        END;

        -- sample query with sub-queries; does not work
        SELECT (SELECT ColumnA FROM DummyFunction(1)) AS ColumnA
              ,(SELECT ColumnB FROM DummyFunction(1)) AS ColumnB
        FROM DUAL;
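
    A common workaround, sketched here rather than taken from the original post: wrap the row in an object type and return a collection of it, so the columns become reachable from plain SQL via TABLE(). The type and function names below are illustrative.

        -- Hypothetical sketch: expose the row as an object type instead
        -- of a ref cursor, so ordinary SQL can select from it.
        CREATE TYPE dummy_row AS OBJECT (ColumnA VARCHAR2(40), ColumnB VARCHAR2(40));
        /
        CREATE TYPE dummy_tab AS TABLE OF dummy_row;
        /
        CREATE FUNCTION DummyTableFunction(dummyValue NUMBER) RETURN dummy_tab IS
        BEGIN
          RETURN dummy_tab(dummy_row(TO_CHAR(dummyValue) || 'A',
                                     TO_CHAR(dummyValue) || 'B'));
        END;
        /
        -- The columns are now reachable from a plain query:
        SELECT t.ColumnA, t.ColumnB
        FROM TABLE(DummyTableFunction(1)) t;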

  • Improving MySQL Update Query Efficiency

    - by Russell C.
    In our database tables we keep a number of counter columns to help reduce the number of simple lookup queries. For example, in our users table we have columns for the number of reviews written, photos uploaded, friends, followers, etc. To make sure these stay in sync, a script runs periodically to check and update them. The problem is that now that our database has grown significantly, the queries we have been using take forever because they are totally inefficient. I would appreciate someone with more MySQL knowledge than myself recommending how we can improve its efficiency:

        update users
        set photos = (select count(*) from photos
                      where photos.status="A" AND photos.user_id=users.id)
        where users.status="A";

    If this were a SELECT statement I would just use a join, but I'm not sure whether that is possible with UPDATE. Thanks in advance for your help!
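
    For what it's worth, MySQL does allow a join in an UPDATE. A hedged sketch, assuming the users/photos schema implied above: compute the counts once in a derived table instead of running a correlated subquery per user.

        UPDATE users
        LEFT JOIN (SELECT user_id, COUNT(*) AS cnt
                   FROM photos
                   WHERE status = "A"
                   GROUP BY user_id) p ON p.user_id = users.id
        SET users.photos = COALESCE(p.cnt, 0)
        WHERE users.status = "A";

    The LEFT JOIN plus COALESCE also resets the counter to 0 for users with no active photos, matching what the correlated form does.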

  • Any tips on how to handle hierarchical trees in a relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction; each node can have any number of child nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I have looked at a few other models, such as the flat table model, the preorder tree traversal algorithm, and so on. Do you have any tips or suggestions on how to implement an efficient tree model? My objective, in the end, is to have one or two queries that return the whole tree. With enough processing I can display the tree in .NET, but that happens on the client machine, so it's not a big deal. Thanks for the attention.
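
    One common answer is the nested set model (one of the flat-tree designs alluded to above); a sketch with illustrative table and column names: each node stores left/right bounds, after which a single query returns any subtree in depth-first order.

        -- Nested set model: each node stores lft/rgt bounds.
        CREATE TABLE nodes (
            id   INT PRIMARY KEY,
            name VARCHAR(100),
            lft  INT NOT NULL,
            rgt  INT NOT NULL
        );

        -- The whole tree in one query, depth-first, with depth computed:
        SELECT child.id, child.name, COUNT(parent.id) - 1 AS depth
        FROM nodes child
        JOIN nodes parent ON child.lft BETWEEN parent.lft AND parent.rgt
        GROUP BY child.id, child.name, child.lft
        ORDER BY child.lft;

    The trade-off is write cost: inserting or moving a node means renumbering bounds, so it suits read-heavy trees.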

  • Many Associations Leading to Slow Query

    - by Joey Cadle
    I currently have a database with a lot of many-to-many associations. I have services, which have many variations, which have many staff who can perform the variation, who in turn have details such as name, role, etc. With 10 services, 3 variations each, and up to 4 of 20 staff attached to each service, even something as simple as fetching all variations and the staff associated with them takes 4 s. Is there a way I can reduce these slow queries? I've cut down the number of queries by doing eager loading in my ORM to avoid N+1 problems, but 4 s is still a long query for what is only a testing stage. Is there a structure out there that would make such nested many-to-many associations much quicker to select? Maybe combining everything below the service level into a single table with a 'TYPE' column? I'm just not knowledgeable enough to know the solution that turns this 4 s query into a 300 ms one... Any suggestions would be helpful.
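
    A hedged sketch (the schema names below are guesses, since the real tables aren't shown): at this data volume, one join across the link table, backed by composite indexes, usually brings such a fetch well under a second, letting the application regroup the rows.

        -- One round trip for the whole variation/staff graph:
        SELECT v.id AS variation_id, v.name AS variation,
               s.id AS staff_id, s.name, s.role
        FROM variations v
        JOIN variation_staff vs ON vs.variation_id = v.id
        JOIN staff s ON s.id = vs.staff_id
        ORDER BY v.id;

        -- Composite indexes on the link table cover both join directions:
        CREATE INDEX idx_vs_variation ON variation_staff (variation_id, staff_id);
        CREATE INDEX idx_vs_staff     ON variation_staff (staff_id, variation_id);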

  • Phantom activity on MySQL

    - by LoveMeSomeCode
    This is probably just my total lack of MySQL expertise, but is it typical to see lots of phantom activity on a MySQL instance via phpMyAdmin? I have a shared hosting plan through Lithium, and when I log in through the phpMyAdmin console and click on the 'Status' tab, it shows crazy high numbers for queries. Within an hour of activating my account I had 1 million queries. At first I thought this was them setting things up, but the number climbs constantly, averaging 170/second. I've got a support ticket in with Lithium, but I thought I'd ask here whether this is a MySQL/shared-host thing, because the same thing happened with a shared hosting plan through Joyent.
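
    One thing worth knowing when reading that tab: MySQL's status counters are global to the server process, so on a shared host they include every tenant's traffic, not just yours. The difference is easy to see:

        -- Server-wide counter since startup (what the Status tab reports):
        SHOW GLOBAL STATUS LIKE 'Questions';

        -- Counter for the current connection only:
        SHOW SESSION STATUS LIKE 'Questions';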

  • Filemaker XSL 20-Second Query Latency

    - by Ian Wetherbee
    I have an ASP frontend that loads data from a FileMaker database, using XSL to perform simple queries. The problem is that the first page load takes 20 seconds +/- 200 ms; the next few page refreshes within a minute of the first request take <200 ms; then the cycle starts over. Each page load makes only 2 XSL queries, and they execute fast after the first page load, so what is causing the delay on the first one? I have caching turned up with a 100% hit rate, and the number of connections at 100. I've tried with XSL database sessions on and off, and session times anywhere from 1 to 60 minutes, without any change. The XSL loads from ASP use a GET request and add a Basic Authorization header to authenticate each time. During fast page requests, the fmserver.exe and fmswpc.exe processes don't even flinch, but during a 20-second holdup I see fmserver jump to 30% CPU with a 3 MB I/O read a few seconds into the request, and occasionally fmswpc jump to 60% CPU.

  • Are indexes good or bad for a large database?

    - by gmemon
    Hello All, I read on the MySQL Performance Blog that when tables are large, it can be better to scan the full table than to use indexes. I have a table with tens of millions of rows. When running queries without indexes, they are 24 times slower than with indexes. I know a lot of things may cause this (e.g., whether rows are stored sequentially), but can you give me some hints about what might be happening, or how I should start examining this issue? I want to understand when the use of indexes is preferred and when it's not. Thanks
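
    A reasonable first diagnostic (table and index names here are placeholders): compare the plan MySQL picks with the index available against a forced full scan, and look at the type and rows columns of the output.

        -- How does the optimizer plan it with the index available?
        EXPLAIN SELECT * FROM big_table WHERE status = 'A';

        -- Force the full scan for comparison:
        EXPLAIN SELECT * FROM big_table IGNORE INDEX (idx_status)
        WHERE status = 'A';

    As a rule of thumb, indexes win when the predicate is selective (matching a small fraction of rows); a scan can win when most rows match, because sequential reads beat millions of random index lookups.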

  • Existing SQL database android help

    - by basketball4567
    Hey guys, I am new to Android programming and I am trying to make an app that will interface with a website I have. It is a movie website with all of its information stored in a SQL database. I know how to write the requests and queries in .asp, but I don't know how to get information from the database in my app. I want the user to be able to enter a movie title and, through a couple of stored procedures that live in my SQL database, get back the info on that movie (actor, budget, genre, ...). I would like to have little information stored on the device: all the queries would be sent to my SQL server and just the results returned. So my question boils down to: how do I link my existing SQL database to an Android app? Any help would be great. Thanks, Basketball4567

  • ASP, MySQL & case sensitivity suddenly broken

    - by user131812
    Hi there, We have an old ASP site that has been working fine for years with a MySQL database. All of a sudden last week lots of SQL queries stopped working. The database has a table called 'members', but the code calls 'Members'. It appears the queries used to work regardless of the case of the table names, but something somewhere has changed recently to enforce case. This has me stumped, as the site has not been touched in years, the server config hasn't changed, and the database provider has not changed anything. Is there any simple way to ignore case for an ASP site (without editing lots of files :)? Thanks, Ben
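
    The behaviour described is governed by MySQL's lower_case_table_names variable (0 = table names are case-sensitive, 1 = stored and compared in lowercase, 2 = stored as given but compared in lowercase). If the host moved the database to a different server or OS, the effective value may have changed; you can check it without touching the site:

        SHOW VARIABLES LIKE 'lower_case_table_names';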

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on it. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connected to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 TB of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server.

    We just upgraded from a machine with only 4 GB RAM but similar hard drives, RAID, etc. It also ran Apache, so it was both our DB server and our application server. It was getting a little slow, so we split the DB onto this new machine and kept the application server on the first one. We also distributed the application load among a few of our slave servers, which also run the application.

    The problem is that on the new DB server mysqld.exe consumes 95-100% of CPU almost all the time, and that is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume our my.ini (MySQL config) file is not properly configured. Most of what I see on the net covers config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: we have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # Generated by the MySQL Server Instance Configuration Wizard

        # CLIENT SECTION
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512
        max_connections=1510
        query_cache_size=168M
        table_cache=3020
        tmp_table_size=30M
        thread_cache_size=64

        # *** MyISAM specific options
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=64M
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=8M
        sort_buffer_size=2M

        # *** INNODB specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"
        skip-innodb
        innodb_additional_mem_pool_size=11M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=6M
        innodb_buffer_pool_size=500M
        innodb_log_file_size=100M
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.
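
    A hedged starting point for a MyISAM-only box like this: before resizing anything, check whether the key cache, table cache, and query cache are actually under pressure. These are standard status counters; the comments note what to look for.

        -- Key cache: Key_reads (from disk) should be a tiny fraction
        -- of Key_read_requests (served from the 3 GB key buffer).
        SHOW GLOBAL STATUS LIKE 'Key_read%';

        -- Table cache: a steadily climbing Opened_tables suggests
        -- table_cache is too small.
        SHOW GLOBAL STATUS LIKE 'Opened_tables';

        -- Query cache: many lowmem prunes mean the 168M cache is
        -- churning rather than helping, which itself costs CPU.
        SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';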

  • Improve SQL strategy - denormalize in object-children-images case

    - by fesja
    Hi, I have a Tour object which has many Place objects. For the tour list, I want to show some info about each tour, the number of places in the tour, and three Place images. Right now my SQL queries are (I'm using Doctrine with Symfony on MySQL):

        get Tour
        get Tour 1 places
        get Tour 2 places
        get Tour 3 places
        ...
        get Tour n places

    If I have a three-tour list it's not so bad, but I'm sure it can get bad with a 10-20 tour list. So, thinking about how to improve the queries, I've considered several measures:

        1. Keeping a place-count cache.
        2. Storing the URLs of three images in a new tour field.

    The problem with 2 is that if I change an image, I have to check all the tours to swap that image for another one. Which solution do you think is best for scaling the system in the near future? Any other suggestions? Thanks!
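
    An alternative that avoids denormalizing (a sketch; the tours/places schema is assumed from the description): one grouped query returns the count and the first three image URLs per tour, at the cost of splitting a string in PHP.

        -- One query for the whole list: place count plus up to three
        -- image URLs per tour, comma-separated.
        -- (Mind group_concat_max_len if URLs are long.)
        SELECT t.id,
               t.name,
               COUNT(p.id) AS place_count,
               SUBSTRING_INDEX(GROUP_CONCAT(p.image_url), ',', 3) AS first_images
        FROM tours t
        LEFT JOIN places p ON p.tour_id = t.id
        GROUP BY t.id, t.name;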

  • How should I store Dynamically Changing Data in the Server Cache?

    - by Scott
    Hey all,

    EDIT: Purpose of this website: it's called Utopiapimp.com, a third-party utility for a game called utopia-game.com. The site currently has over 12k users and I run it. The game is fully text-based and will always remain so. Users copy full pages of text from the game and paste them into my site. I run a series of regular expressions against the pasted data to break it down, and insert anywhere from 5 to over 30 values into the DB from that one paste. I then take those values and run queries against them to display the information back in a VERY simple and easy-to-understand way. The game is team-based, with 25 users per team, so each team is a group and each row is one user's information. Users can update all 25 rows or just one row at a time. I need to store things in cache because the site is very slow, running over 1,000 queries almost every minute.

    So here is the deal. Imagine an Excel spreadsheet with 100 columns and 5000 rows. Each row has two unique identifiers: one for the row itself and one that groups 25 rows together. About 10 columns in the row will almost never change, and the other 90 will always be changing - some within seconds, depending on how fast the row is updated. Rows can also be added to and removed from a group, but not from the database. The rows are built from about 4 database queries to show the most recently updated data. So every time something in the database is updated, I would also like the row updated. If a row or a group has not been updated in 12 or so hours, it is taken out of the cache; once the user requests the group again via the DB queries, it is placed back into the cache.

    That is what I would like. In reality, I still have all the rows, but the way I store them in the cache is currently broken. I store each row in a class, and the classes are stored in the server cache in a HUGE list. When I update, delete, or insert items in the list, it works most of the time, but sometimes it throws errors because the cache has changed underneath me. I want to be able to lock down the cache, more or less the way a database throws a lock on a row. I have DateTime stamps to remove things after 12 hours, but this almost always breaks, because other users are updating the same 25 rows in the group or the cache has simply changed.

    This is an example of how I add items to the cache; this version pulls only the 10 or so columns that rarely change, and removes rows not updated in the last 12 hours:

        DateTime dt = DateTime.UtcNow;
        if (HttpContext.Current.Cache["GetRows"] != null)
        {
            List<RowIdentifiers> pis = (List<RowIdentifiers>)HttpContext.Current.Cache["GetRows"];
            var ch = (from xx in pis
                      where xx.groupID == groupID
                      where xx.rowID == rowID
                      select xx).ToList();
            if (ch.Count() == 0)
            {
                // Pull the group from the DB
                var ck = GetInGroupNotCached(rowID, groupID, dt);
                for (int i = 0; i < ck.Count(); i++)
                    pis.Add(ck[i]);
                pis.RemoveAll((x) => x.updateDateTime < dt.AddHours(-12));
                HttpContext.Current.Cache["GetRows"] = pis;
                return ck;
            }
            else
                return ch;
        }
        else
        {
            // Pull the group from the DB
            var pis = GetInGroupNotCached(rowID, groupID, dt);
            HttpContext.Current.Cache["GetRows"] = pis;
            return pis;
        }

    On the last point: I do remove items from the cache, so it doesn't actually get huge. To re-state the question: what's a better way of doing this? Maybe, how do I put locks on the cache? Can I do better than this? I just want it to stop breaking when removing or adding rows.

  • Creating An OLEDB Connection From Excel To SQL

    - by Soo
    I am looking to run SQL queries using VBA code in an Excel file. It may sound like a bad way to do things, but the purpose of this is to support legacy functionality on a project I'm working on. I figured out how to create an ODBC connection, but it requires several steps which may be troublesome to implement on many computers, so I'm looking into the possibility of using OLEDB to get the job done. My question is how to go about setting things up so I can run SQL queries in Excel using VBA.

  • SQL Database dilemma : Optimize for Querying or Writing?

    - by Harry
    I'm working on a personal project (a search engine) and have a bit of a dilemma. At the moment it is optimized for writing data to the search index, and significantly slow for search queries. The DTA (Database Engine Tuning Advisor) recommends adding a couple of indexed views in order to speed up search queries, but this is to the detriment of writing new data to the DB. It seems I can't have one without the other! This is obviously not a new problem. What is a good strategy for this issue?
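
    For reference, the kind of object the DTA is proposing, as a hedged T-SQL sketch with made-up table and column names: an indexed view materializes an aggregate, which is exactly why it speeds reads and taxes every write to the base table.

        -- Materialize a precomputed aggregate for the read path.
        CREATE VIEW dbo.TermCounts
        WITH SCHEMABINDING
        AS
        SELECT term, COUNT_BIG(*) AS occurrences
        FROM dbo.IndexEntries
        GROUP BY term;
        GO
        -- The unique clustered index is what makes the view "indexed";
        -- SQL Server then maintains it on every INSERT/UPDATE/DELETE.
        CREATE UNIQUE CLUSTERED INDEX IX_TermCounts ON dbo.TermCounts (term);

    A common middle ground is to batch writes (stage new documents in an unindexed table and merge periodically), so reads keep the indexed views while writes pay the maintenance cost less often.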

  • Design guidelines for writing a typed SQL statement API?

    - by this. __curious_geek
    Last night I came across something interesting while designing my new project that brought me to ask this question here. The project is supposed to follow the table gateway pattern, using traditional ADO.NET datasets for data access. I don't want to write plain queries in my data-access classes, so I came up with the idea of writing a parser-like API that exposes objects and methods to generate queries on the fly, based on my domain objects. Later I want this API to hook into my business objects and provide a typed SQL generator API right on the business object instances. Any ideas or references on how I can do this? This seems very broad to start with, so I'm compelled to ask your opinions here. Does anything already exist that can do this?

  • Return segmented average from SQL Query?

    - by Guillaume Filion
    Hi, I measure the load on DNS servers every minute and store it in an SQL DB. I want to draw a chart of the load for the last 48 hours. That's 69120 (48*24*60) data points, but my chart is only 800 pixels wide, so to make things faster I would like my SQL query to return only ~800 data points. It seems like a pretty standard thing to do, but I've been searching the web and in books for a while now, and the closest thing I was able to find was a rolling average. What I'm looking for is more of a "segmented average": divide the 69120 data points into ~800 segments, then average each segment. My SQL table is:

        CREATE TABLE measurements (
            ip int,
            measurement_time int,
            queries int,
            query_time float
        )

    My query looks like this:

        SELECT ip, queries FROM measurements
        WHERE measurement_time > (time() - 172800)

    Thanks a lot!
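
    A sketch of that segmented average (assuming MySQL-style syntax, as in the query above): assign each row to one of ~800 equal time buckets and GROUP BY the bucket number. Here :start stands for the epoch timestamp of now minus 48 hours, substituted by the application just as time() is above.

        -- 172800 s / 800 buckets = 216 s per bucket.
        SELECT FLOOR((measurement_time - :start) / 216) AS bucket,
               AVG(queries) AS avg_queries
        FROM measurements
        WHERE measurement_time > :start
        GROUP BY bucket
        ORDER BY bucket;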

  • SQL 2008 table locked - can't work out why

    - by Mr. Flibble
    I have two databases on one SQL 2008 server. Database 1 seems to be causing a lock on a table in database 2, yet no queries are running on database 1 that should affect database 2. Is this normal behaviour? When I view the running queries with this command:

        SELECT sqltext.TEXT, req.session_id, req.status, req.command,
               req.cpu_time, req.total_elapsed_time/1000 [seconds]
        FROM sys.dm_exec_requests req
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext

    it tells me as much, and says that the command on database 2 is suspended. I'm at a bit of a loss. What sort of things should I look at to work out why the table in database 2 is locked?
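
    A next step worth trying (standard SQL Server 2008 DMVs, nothing custom): find which session is blocking the suspended request, then inspect the locks that session holds.

        -- Who is blocking whom?
        SELECT session_id, blocking_session_id, wait_type, wait_time
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;

        -- What locks does the blocking session hold?
        -- (Substitute the blocking_session_id found above.)
        SELECT resource_type, resource_database_id, request_mode, request_status
        FROM sys.dm_tran_locks
        WHERE request_session_id = 53;

    An uncommitted transaction left open by a connection that touched both databases is a frequent culprit for exactly this symptom.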

  • One query instead of four - is it possible?

    - by Syom
    I must get data from four tables. I wrote the script with four queries, but I use it in AJAX and I want to do it with one query. Here are the queries:

        $query1 = "SELECT `id`,`name_ar` FROM `tour_type` ORDER BY `order`";
        $query2 = "SELECT `id`,`name_ar` FROM `hotel_type` ORDER BY `order`";
        $query3 = "SELECT `id`,`name_ar` FROM `food_type` ORDER BY `order`";
        $query4 = "SELECT `id`,`name_ar` FROM `cities` WHERE `id_parrent` = '$id_parrent' ORDER BY `name_ar`";

    Is it possible to write this as one query? Thanks
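
    One way, sketched under the assumption that the four result sets can share a shape: tag each row with its source table and UNION ALL the four selects, then split the rows by tag in PHP. The cities query doesn't use `order`, so NULL stands in for it there.

        SELECT 'tour_type'  AS src, `id`, `name_ar`, `order` FROM `tour_type`
        UNION ALL
        SELECT 'hotel_type' AS src, `id`, `name_ar`, `order` FROM `hotel_type`
        UNION ALL
        SELECT 'food_type'  AS src, `id`, `name_ar`, `order` FROM `food_type`
        UNION ALL
        SELECT 'cities'     AS src, `id`, `name_ar`, NULL FROM `cities`
        WHERE `id_parrent` = '$id_parrent'
        ORDER BY src, `order`, `name_ar`;

    Note that a UNION can only order the combined set, so the per-table ORDER BY comes from sorting on (src, order, name_ar) rather than four separate clauses.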

  • Username correct, password incorrect?

    - by jonnnnnnnnnie
    In a login system, how can you tell whether the user has entered the password incorrectly? Do you perform two SQL queries - one to find the username, and one to find the username with the matching (salted and hashed, etc.) password? I'm asking because if the user entered the password incorrectly, I want to update the failed_login_attempts column I have. If you perform two queries, wouldn't that increase overhead? And if you did a query like this, how would you tell whether the password entered was incorrect or the username doesn't exist:

        SELECT * FROM author
        WHERE username = '$username' AND password = '$password'
        LIMIT 1

    (NB: I'm keeping it simple; the real one will use a hash and salt, and will sanitize input.) Something like this:

        $user = perform_Query(); // get username and password?
        if ($user['username'] == $username && $user['password'] == $password) {
            return $user;
        } elseif ($user['username'] == $username && $user['password'] !== $password) {
            // here the password doesn't match
            // update failed_login_attempts += 1
        }
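
    A common pattern, sketched against the author table above (failed_login_attempts is the column the question already mentions): select by username alone, compare the salted hash in application code, and increment the counter only on a mismatch. One lookup distinguishes all three cases: no row means an unknown username, a row with a matching hash means success, a row with a differing hash means a wrong password.

        -- One lookup by username only; the password never appears in SQL.
        SELECT id, username, password, failed_login_attempts
        FROM author
        WHERE username = ?
        LIMIT 1;

        -- Run only when the row exists but the hash comparison fails:
        UPDATE author
        SET failed_login_attempts = failed_login_attempts + 1
        WHERE username = ?;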

  • Hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows, and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try to use the same query again for a new mapper, and since these queries take a considerable amount of time I'd like to prevent that in one of two ways I've thought up:

        1. Limit the job to only run 1 mapper that queries the whole table, and call it good.
        2. Somehow incorporate an offset in the query, so that if Hadoop does try to use a new mapper it won't grab the same stuff.

    Option 1 seems more promising, but I don't know if such a configuration is possible. Option 2 sounds nice in theory, but I have no idea how I would keep track of the mappers being made, or whether it is at all possible to detect that and reconfigure. Help is appreciated; I'm mainly looking for a way to pull all of the DB table data without several copies of the same query running, because that would be a waste of time.

  • Dump Hibernate activity to sql script file

    - by zeven
    Hi, I'm trying to log Hibernate activity (only DML operations) to an SQL script file. My goal is to have a way to reconstruct the database from a given starting point to the current state by executing the generated script. I can get the SQL queries from the log4j logs, but they contain more than the raw SQL statements, and I would need to parse them and extract only the useful parts. So I'm looking for a programmatic way to do it, maybe by listening to the persist/merge/delete operations and accessing the Hibernate-generated SQL statements. I don't want to reinvent the wheel, so if anybody knows a way of doing this I would appreciate it very much. Thanks in advance

  • MySQL query problem

    - by Lost_in_code
    Below is a sample table:

        fruits
        +-------+---------+
        | id    | type    |
        +-------+---------+
        | 1     | apple   |
        | 2     | orange  |
        | 3     | banana  |
        | 4     | apple   |
        | 5     | apple   |
        | 6     | apple   |
        | 7     | orange  |
        | 8     | apple   |
        | 9     | apple   |
        | 10    | banana  |
        +-------+---------+

    Following are the two queries of interest:

        SELECT * FROM fruits WHERE type='apple' LIMIT 2;
        SELECT COUNT(*) AS total FROM fruits WHERE type='apple'; -- output: 6

    I want to combine these two queries so that the result looks like this:

        +-------+---------+---------+
        | id    | type    | total   |
        +-------+---------+---------+
        | 1     | apple   | 6       |
        | 4     | apple   | 6       |
        +-------+---------+---------+

    The output has to be limited to 2 records, but it should also contain the total number of records of type apple. How can this be done with one query?
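
    One way to get exactly that shape (a sketch using the fruits table above): cross-join the limited rows with a one-row derived table holding the count.

        SELECT f.id, f.type, t.total
        FROM fruits f
        CROSS JOIN (SELECT COUNT(*) AS total
                    FROM fruits
                    WHERE type = 'apple') t
        WHERE f.type = 'apple'
        LIMIT 2;

    MySQL also offers SQL_CALC_FOUND_ROWS with FOUND_ROWS(), but that returns the total via a second statement rather than as a column.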

  • Understanding configuration for parallel calling in web app (IIS + MS SQL)

    - by mmcteam.com.ua
    We have an ASP.NET MVC application + IIS 7.5 + SQL Server 2008 R2. We have to load a lot of aggregate counters on each page. We decided to use AJAX, calling with JavaScript for each counter or group of counters and returning the results as JSON. This solves the problem of the user waiting on page load: the page loads fast, and the user waits for the counters while already seeing the rest of the page content. But while we assumed that calls made from JavaScript would run asynchronously on the server, we noticed that they do not: all our JavaScript calls fire immediately, yet the actions they invoke are queued. If we use the Async Controller ability, all counters are calculated simultaneously, but then the user has to wait for the longest counter calculation before the page loads. The question: we want to understand what happens when we use AJAX to call two or more actions simultaneously, and how we can configure this. (Each action also makes some queries to SQL Server.)
