Search Results

Search found 14737 results on 590 pages for 'dynamic tables'.

Page 332/590

  • mysql multi count() in one query

    - by atno
    Hi, I'm trying to count rows from several joined tables, but without any luck: I get the same number in every column (tUsers, tLists, tItems). My query is:

        select COUNT(users.*) as tUsers, COUNT(lists.*) as tLists, COUNT(items.*) as tItems, companyName
        from users as c
        join lists as l on c.userID = l.userID
        join items as i on c.userID = i.userID
        group by companyID

    The result I want to get is:

        # | CompanyName | tUsers | tLists | tItems
        1 | RealCoName  |      5 |      2 |     15

    What modifications do I have to make to my query to get those results? Cheers
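
    One common fix for this kind of fan-out double counting, sketched below, is to count distinct keys from each joined table instead of rows; the column names listID and itemID are assumptions, since the question doesn't show the table definitions, and companyID/companyName are assumed to live on users.

        select c.companyName,
               COUNT(DISTINCT c.userID) as tUsers,
               COUNT(DISTINCT l.listID) as tLists,   -- assumed primary key of lists
               COUNT(DISTINCT i.itemID) as tItems    -- assumed primary key of items
        from users as c
        join lists as l on c.userID = l.userID
        join items as i on c.userID = i.userID
        group by c.companyID, c.companyName;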

    Read the article

  • Mysql query question

    - by brux
    I have 2 tables:

        Customer: customerid - int, primary key, auto-increment
                  fname - varchar
                  sname - varchar
                  housenum - varchar
                  street - varchar

        Items:    itemid - int, primary key, auto-increment
                  type - varchar
                  collectiondate - date
                  releasedate - date
                  customerid - int

    I need a query which will get me all items whose releasedate falls within the 3 days prior to (and including) the current date, i.e. the query should return customerid, fname, sname, street, housenum, type and releasedate for all such items. Thanks in advance.
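
    A minimal sketch of such a query, assuming MySQL's CURDATE() and interval arithmetic:

        SELECT c.customerid, c.fname, c.sname, c.street, c.housenum,
               i.type, i.releasedate
        FROM Customer c
        JOIN Items i ON i.customerid = c.customerid
        -- releasedate within the 3 days before today, today included
        WHERE i.releasedate BETWEEN CURDATE() - INTERVAL 3 DAY AND CURDATE();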

    Read the article

  • How does the performance of pure Nginx compare to cpNginx?

    - by jb510
    There is now a cPanel plugin that makes it fairly easy to set up Nginx as a reverse proxy on a cPanel/Apache server. I've been simultaneously interested in setting up my first unmanaged VPS and my first Nginx server, and as a masochist figured why not combine the two. I'm wondering, however, whether it's worth setting up a pure Nginx server versus trying out cpNginx on Apache. My goal is solely to host WordPress sites, and while what I've read raves about Nginx's exceptional ability to serve static content, at least as a reverse proxy, I'm unclear whether there is a substantial benefit to running pure Nginx with eAccelerator over cpNginx on Apache for dynamic sites. Regardless, I'll be running W3TC on all sites to cache content, but I'm still interested in whether there are big CPU reductions when running PHP scripts under pure Nginx rather than cpNginx.

    Read the article

  • SQL query multi table selection

    - by nemiss
    I have 3 tables:

        - Section table - defines some general item sections.
        - Category table - has a "section" column (foreign key to Section).
        - Product table - has a "category" column (foreign key to Category).

    I want to get all products that belong to section X. How can I do it? A select from a select?
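
    A minimal sketch with two joins rather than nested selects, assuming each table has an id primary key that the foreign-key columns reference (those column names are assumptions):

        SELECT p.*
        FROM Product p
        JOIN Category c ON p.category = c.id   -- assumed primary key names
        JOIN Section s  ON c.section  = s.id
        WHERE s.id = 42;                       -- 42 is a hypothetical section id; filtering on s.name works too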

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of back-end Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of back ends, so they cannot be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg: rewrite map reverse proxy.

    The question is, the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear absolute to the domain; i.e. http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules have to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?

    Read the article

  • How to update a DataSet from a DataGridView (C#)

    - by Paul
    I am a beginner and I have this problem: how can I update a DataSet from a DataGridView? I bind the DataSet to the DataGridView, edit the grid, and when finished I want to push the edits back into the DataSet. Thank you for any advice. Sorry, I should add that I use WinForms. Example:

        // I bind the DataSet to the DataGridView
        dataGridViewCustomers.DataSource = _ds.Tables[0];
        // ... edit the DataGridView ...
        // at this point I want to update the DataSet from the DataGridView

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
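
    One common rewrite for this kind of greatest-per-group lookup, sketched below on the assumption that an index covering (userid, variable, id) exists, is to join against a derived table of per-variable maxima instead of filtering with IN:

        SELECT t.id, t.variable, t.value
        FROM results t
        JOIN (SELECT variable, MAX(id) AS max_id
              FROM results
              WHERE userid = '2111846'
                AND variable IN ('internat', 'veteran', 'athlete')
              GROUP BY variable) AS latest
          ON latest.max_id = t.id;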

    Read the article

  • LINQ to SQL: how to get row-level locking for concurrent writes?

    - by Francisco
    I would like to allow two threads to write to a table at the same time (I know about the problem of updating the same row, but that is a separate story). I need this in order to speed up the operations in my application (one thread could write to row X while another does the same to row X+n, instead of waiting for the first to finish). So, can I lock rows instead of tables with LINQ to SQL? Thanks.

    Read the article

  • Foreign Keys Duplicated in DataGridView

    - by John Doe
    I created a Windows Forms Application to which I added a DataGridView and LINQ to SQL classes from one of my databases. I can successfully bind one of my database's tables to my DataGridView:

        var dataSource = from c in _db.NetworkedEquipments select c;
        dataGridView1.DataSource = dataSource;

    However, the foreign keys get duplicated, that is, the columns appear twice. How can I prevent this?

    Read the article

  • There is a JavaScript error in my rails application!

    - by Small Wolf
    As the title says, I have a problem: I am getting "RJS Error: [object Error]". The code in my application is

        page << "#{hidden_print("#{url_for(:controller => 'tables', :action => 'dispatch', :id => id, :pop => true, :print => true)}")} "

    and the hidden_print method is

        def hidden_print(url)
          "window.parent.headFrame.document.all.iframe_helper.src = '#{url}';"
        end

    Read the article

  • Can I concatenate multiple MySQL rows into one field?

    - by Dean
    Using MySQL, I can do something like:

        select hobbies from peoples_hobbies where person_id = 5;

    and get:

        shopping
        fishing
        coding

    but instead I just want 1 row, 1 column:

        shopping, fishing, coding

    The reason is that I'm selecting multiple values from multiple tables, and after all the joins I've got a lot more rows than I'd like. I've looked in the MySQL docs, and it doesn't look like the CONCAT or CONCAT_WS functions accept result sets, so does anyone here know how to do this?
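
    MySQL's GROUP_CONCAT aggregate does exactly this kind of row-to-string collapsing; a minimal sketch against the table above:

        SELECT GROUP_CONCAT(hobbies SEPARATOR ', ') AS hobby_list
        FROM peoples_hobbies
        WHERE person_id = 5;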

    Read the article

  • Multiple Rails app, single MySQL database

    - by Gaius Parx
    I intend to have multiple Rails apps, one each for site.com, api.site.com and admin.site.com. All apps will access the same tables from one single MySQL database. The apps and the database run on the same server. Are there any settings in Rails, ActiveRecord or MySQL that I need to be concerned about for the above access scenario? Thanks. Running: Rails 2.3.5, MySQL 5.0, Nginx, Passenger, RubyEE.

    Read the article

  • How to use LINQ for CRUD with a simple SQL table?

    - by Rob Ferno
    Every LINQ blog I found seemed to be around 2 years old. I understand the syntax, but I need more direction on creating the SQL mapping and context classes. I just need to use LINQ for 2 SQL tables I have, nothing complicated. Do folks write the SQL mapping classes by hand for such cases, or is there a decent tool for this? Can someone point me in the right direction?

    Read the article

  • Where does Drupal store NODE data?

    - by RD
    This is a follow-up to my previous question: http://stackoverflow.com/questions/1284476/where-does-drupal-store-node-body-content. I tried adding values into node and node-revision, but the node data is still not showing, so obviously more data is stored somewhere else. Basically, I want to know: which tables are affected when you create a new node?
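
    Assuming a Drupal 6 schema (the linked question is from that era), the core node record and its body live in two tables joined on the revision id; a rough sketch of where to look, with 123 as a hypothetical node id:

        -- assumed Drupal 6 core tables: node (metadata) and node_revisions (body/teaser)
        SELECT n.nid, n.vid, n.type, n.title, nr.body, nr.teaser
        FROM node n
        JOIN node_revisions nr ON nr.vid = n.vid
        WHERE n.nid = 123;

    CCK field data, taxonomy terms and URL aliases go into further tables, which is likely why inserting into these two alone is not enough to make a node appear.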

    Read the article

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL that lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup:

        "users"        : user_id, firstname, lastname
        "publications" : publication_id, name
        "link_writers" : user_id, publication_id
        "link_editors" : user_id, publication_id

    Current pseudo SQL:

        SELECT * FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt2.role FROM (
            (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers"       : user_id, publisher_group_id
        "link_publisher_groups" : publisher_group_id, publication_id

    so in that instance the query forming part of my UNION would be:

        SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id
        FROM link_publishers
        JOIN link_publisher_groups ON lpg.group_id = lp.group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I join only on the relevant roles?

    EDIT: (The EXPLAIN output was attached as a screenshot.) The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'" part; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison inside the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.
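
    One way to avoid scanning the full link tables is to push the name filter into each branch of the roles UNION, so MySQL can probe each link table by user_id only for the matched users. A hedged sketch against the example schema (it assumes indexes on the link tables' user_id columns):

        SELECT u.user_id, 'writer' AS role, lw.publication_id
        FROM users u
        JOIN link_writers lw ON lw.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT u.user_id, 'editor' AS role, le.publication_id
        FROM users u
        JOIN link_editors le ON le.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%';

    The more complex roles (publishers in the example) get their own branch with the extra join, filtered the same way.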

    Read the article

  • Writing an SQL query

    - by Praveen Prasad
    I have 2 tables.

    Table 1 - Items (this table holds all the items I have):

        itemId
        ---------
        Item1
        Item2
        Item3
        Item4
        Item5

    Table 2 - users_item relation:

        UserId || ItemId
        1      || Item1
        1      || Item2

    User 1 has stored 2 items, Item1 and Item2. Now I want to write a query on table 1 (the Items table) so that it displays all items which user 1 has NOT chosen.
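
    A minimal sketch of the usual anti-join pattern, assuming the second table is literally named users_item:

        SELECT i.itemId
        FROM Items i
        LEFT JOIN users_item u ON u.ItemId = i.itemId AND u.UserId = 1
        WHERE u.ItemId IS NULL;   -- keeps only items user 1 has not chosen

        -- equivalent NOT IN form:
        -- SELECT itemId FROM Items
        -- WHERE itemId NOT IN (SELECT ItemId FROM users_item WHERE UserId = 1);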

    Read the article

  • How to remove "Standard Error" column from xtable() output of an lm on R/RSweave/LaTeX

    - by Lucas Spangher
    I'm currently doing some data analysis on population data, so reporting the standard errors in the tables of parameter coefficients just doesn't really make statistical sense. I've done a fair bit of searching and can't find any way to customize the xtable output to remove it. Can anyone point me in the right direction? Thanks a lot, I didn't post this lightly; if it's something obvious, I apologize for having wasted time!

    Read the article

  • How to get stream to "in-memory" database created via H2DB?

    - by Reynevan
    I have to create the following mechanism:

        - create an in-memory (H2) database;
        - create tables and fill them with some data;
        - get a stream of that database;
        - send that stream via WebDAV or something else.

    I know how to do everything except "how to get a stream of an in-memory database created with H2". Some explanation: I can't create a file because of some server restrictions, and I need that stream in order to create a file.

    Read the article

  • How do I use a MySQL subquery to count the number of rows in a foreign table?

    - by James Skidmore
    I have two tables, users and reports. Each user has zero, one, or multiple reports associated with it, and the reports table has a user_id field. I have the following query, and I need to add to each row a count of how many reports the user has:

        SELECT * FROM users LIMIT 1, 10

    Do I need to use a subquery, and if so, how can I use it efficiently? The reports table has thousands and thousands of rows.
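
    Two common approaches, sketched here on the assumption that users has an id primary key (an index on reports.user_id helps either way): a correlated subquery in the select list, or a LEFT JOIN with GROUP BY.

        -- correlated subquery
        SELECT u.*,
               (SELECT COUNT(*) FROM reports r WHERE r.user_id = u.id) AS report_count
        FROM users u
        LIMIT 1, 10;

        -- LEFT JOIN + GROUP BY alternative
        -- SELECT u.*, COUNT(r.user_id) AS report_count
        -- FROM users u
        -- LEFT JOIN reports r ON r.user_id = u.id
        -- GROUP BY u.id
        -- LIMIT 1, 10;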

    Read the article

  • SQL - How to retrieve the correct data using PHP

    - by Programatt
    I am new to SQL, so please excuse my question if it is simple. I have a database with a few tables. One is a users table; the others are application tables that contain the users' preferences for receiving notifications about that application, based on the countries they are interested in. What I want to do is retrieve the e-mail address of all users that have an interest in a given country. I am struggling to think about how to do this. I currently have the following code to populate the values:

        function check($string) {
            if (isset($_POST[$string])) {
                $print = implode(', ', $_POST[$string]); // Converts an array into a single string
                $imanageSQLArr = Array();
                if (substr_count($print, 'Benelux') > 0)     { $imanageSQLArr[0] = "checked"; } else { $imanageSQLArr[0] = "off"; }
                if (substr_count($print, 'France') > 0)      { $imanageSQLArr[1] = "checked"; } else { $imanageSQLArr[1] = "off"; }
                if (substr_count($print, 'Germany') > 0)     { $imanageSQLArr[2] = "checked"; } else { $imanageSQLArr[2] = "off"; }
                if (substr_count($print, 'Italy') > 0)       { $imanageSQLArr[3] = "checked"; } else { $imanageSQLArr[3] = "off"; }
                if (substr_count($print, 'Netherlands') > 0) { $imanageSQLArr[4] = "checked"; } else { $imanageSQLArr[4] = "off"; }
                if (substr_count($print, 'Portugal') > 0)    { $imanageSQLArr[5] = "checked"; } else { $imanageSQLArr[5] = "off"; }
                if (substr_count($print, 'Spain') > 0)       { $imanageSQLArr[6] = "checked"; } else { $imanageSQLArr[6] = "off"; }
                if (substr_count($print, 'Sweden') > 0)      { $imanageSQLArr[7] = "checked"; } else { $imanageSQLArr[7] = "off"; }
                if (substr_count($print, 'Switzerland') > 0) { $imanageSQLArr[8] = "checked"; } else { $imanageSQLArr[8] = "off"; }
                if (substr_count($print, 'UK') > 0)          { $imanageSQLArr[9] = "checked"; } else { $imanageSQLArr[9] = "off"; }

    and the query:

        $tocheck = $db->prepare(
            "SELECT users.email FROM users, app
             WHERE users.id = app.userid
             AND BENELUX = :BENELUX AND FRANCE = :FRANCE AND GERMANY = :GERMANY
             AND ITALY = :ITALY AND NETHERLANDS = :NETHERLANDS AND PORTUGAL = :PORTUGAL
             AND SPAIN = :SPAIN AND SWEDEN = :SWEDEN AND SWITZERLAND = :SWITZERLAND
             AND UK = :UK");
        $tocheck->execute($country);
        $row = $tocheck->fetchAll();

    This does retrieve data, but only for people whose preferences match EXACTLY what is submitted (so what they haven't selected is taken into account as much as what they have). Any help would be greatly appreciated.
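
    If the goal is "everyone interested in one particular country", a sketch of a query that checks only that country's column (France here, purely as an example) rather than ANDing all of them, using the users/app columns shown above:

        SELECT users.email
        FROM users
        JOIN app ON app.userid = users.id
        WHERE app.FRANCE = 'checked';   -- swap in whichever country column you are notifying about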

    Read the article

  • Import a text file into a temporary table using 'LOAD DATA INFILE' in a stored procedure - MySQL

    - by Pankaj
    I need to import a text file into a temporary table and from that select portions of it to insert into different tables. I wanted to use LOAD DATA INFILE. Is there any way I can use LOAD DATA INFILE in a stored procedure? I am using MySQL.

        LOAD DATA LOCAL INFILE 'C:\\MyData.txt'
        INTO TABLE tempprod
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\r\n';

        SELECT * FROM product p;
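
    As far as I know, MySQL does not permit LOAD DATA inside a stored routine, so the load itself usually stays outside (run by the client or a script); what can live in a stored procedure is the second step, distributing rows from the temporary table. A rough sketch, with the product column names purely assumed:

        -- run outside the procedure, e.g. from the mysql client or a script:
        -- LOAD DATA LOCAL INFILE 'C:\\MyData.txt' INTO TABLE tempprod
        --   FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n';

        DELIMITER //
        CREATE PROCEDURE distribute_tempprod()
        BEGIN
            -- hypothetical column names; adjust to the real tempprod/product layout
            INSERT INTO product (name, price)
            SELECT name, price FROM tempprod;
        END //
        DELIMITER ;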

    Read the article
