Search Results

Search found 24675 results on 987 pages for 'table'.

  • In SQL, why is "Distinct" not used in a subquery, when looking for some items "not showing up" in the other table?

    - by Jian Lin
    Usually, when looking for items that do not show up in another table, we can use:

        select * from gifts where giftID not in (select giftID from sentgifts);

    or

        select * from gifts where giftID not in (select distinct giftID from sentgifts);

    The second version adds "distinct", so the subquery's result set is smaller, which presumably makes the "not in" check faster too. So wouldn't using "distinct" be desirable? More often than not, I don't see it used in the subquery in this situation. Is there an advantage or disadvantage to using it? Thanks.
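
    A minimal sketch of an alternative worth benchmarking, assuming the gifts/sentgifts tables above: a NOT EXISTS anti-join lets the optimizer stop at the first matching row per giftID, so duplicates in sentgifts add no extra work and an explicit DISTINCT buys nothing.

        -- Anti-join form: duplicates in sentgifts are irrelevant,
        -- because the subquery only has to find one match per giftID.
        SELECT g.*
        FROM gifts AS g
        WHERE NOT EXISTS (
            SELECT 1
            FROM sentgifts AS s
            WHERE s.giftID = g.giftID
        );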

    Read the article

  • MySQL Query still executing after a day..?

    - by Matt Jarvis
    Hi - I'm trying to isolate duplicates in a 500MB database and have tried two ways to do it. One creates a new table and groups:

        CREATE TABLE test_table as SELECT * FROM items WHERE 1 GROUP BY title;

    But it's been running for an hour, and in MySQL Admin it says the status is Locked. The other way I tried was to delete duplicates with this:

        DELETE bad_rows.*
        from items as bad_rows
        inner join (
            select post_title, MIN(id) as min_id
            from items
            group by title
            having count(*) > 1
        ) as good_rows on good_rows.post_title = bad_rows.post_title;

    ...and this has been running for 24 hours now, with Admin telling me it's Sending data... Do you think either of these queries is actually still running? How can I find out if it's hung? (with Apple OS X 10.5.7)
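
    A minimal sketch of one way to make the de-dupe finish in reasonable time (an assumption, since the real schema isn't shown: items has an integer primary key id and the duplicate key is title): index the column you group and join on first, then keep the lowest id per title.

        -- Hypothetical index name; without it the GROUP BY and the self-join
        -- each have to scan the full 500MB table.
        ALTER TABLE items ADD INDEX idx_items_title (title);

        -- Keep the lowest id per title, delete every later duplicate.
        DELETE bad_rows
        FROM items AS bad_rows
        INNER JOIN items AS good_rows
            ON good_rows.title = bad_rows.title
           AND good_rows.id < bad_rows.id;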

    Read the article

  • How to choose which rows to insert with the same id in SQL?

    - by user1429595
    So basically I have a table called "table_1":

        ID  Index  STATUS    TIME  DESCRIPTION
        1   15     pending   1:00  Started Pending
        1   16     pending   1:05  still in request
        1   17     pending   1:10  still in request
        1   18     complete  1:20  Transaction has been completed
        2   19     pending   2:25  request has been started
        2   20     pending   2:30  in progress
        2   21     pending   2:35  in progess still
        2   22     pending   2:40  still pending
        2   23     complete  2:45  Transaction Compeleted

    I need to insert these data into my second table, "table_2", where only the start and complete times are included, so "table_2" should look like this:

        ID  Index  STATUS    TIME  DESCRIPTION
        1   15     pending   1:00  Started Pending
        1   18     complete  1:20  Transaction has been completed
        2   19     pending   2:25  request has been started
        2   23     complete  2:45  Transaction Compeleted

    If anyone can help me write the SQL query for this, I would highly appreciate it. Thanks in advance.
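
    A sketch of one way to do it, assuming MySQL-style syntax and that the first pending row per ID is the one with the lowest Index (Index is quoted because it is a reserved word):

        INSERT INTO table_2 (ID, `Index`, STATUS, TIME, DESCRIPTION)
        SELECT t.ID, t.`Index`, t.STATUS, t.TIME, t.DESCRIPTION
        FROM table_1 AS t
        WHERE t.STATUS = 'complete'                      -- keep every completed row
           OR t.`Index` = (SELECT MIN(t2.`Index`)        -- plus the earliest pending row per ID
                           FROM table_1 AS t2
                           WHERE t2.ID = t.ID
                             AND t2.STATUS = 'pending');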

    Read the article

  • undefined method `events' for #<ActiveRecord::Relation:0x4177518> -rails 3.0.3

    - by brg
    I am getting this unexplained NoMethodError: undefined method `events' for #<ActiveRecord::Relation:0x4177518>. I don't know why, since my model associations are well defined and the event table has the foreign key for the user table. I tried using this fix but it failed: Rails 3 ActiveRecord::Relation random associations behavior.

    event.rb

        class Event < ActiveRecord::Base
          belongs_to :user
          attr_accessible :event_name, :Starts_at, :finish, :tracks
        end

    user.rb

        class User < ActiveRecord::Base
          has_many :events, :dependent => :destroy
          attr_accessible :name, :event_attributes
          accepts_nested_attributes_for :events, :allow_destroy => true
        end

    schema.rb

        ActiveRecord::Schema.define(:version => 20101201180355) do
          create_table "events", :force => true do |t|
            t.string   "event_name"
            t.string   "tracks"
            t.datetime "starts_at"
            t.datetime "finish"
            t.datetime "created_at"
            t.datetime "updated_at"
            t.integer  "user_id"
          end
        end

    error message

        NoMethodError in Users#index
        undefined method `events' for #<ActiveRecord::Relation:0x4177518>

        Extracted source (around line #10):
        7:  <%= sortable "Tracks" %>
        8:
        10: <% @users.events.each do |event| %>
        11: <% debugger %>
        12:
        13: <%= event.starts_at %>

        Trace of template inclusion: app/views/users/index.html.erb

        Rails.root: C:/rails_project1/events_manager

        Application Trace | Framework Trace | Full Trace
        app/views/users/_event_user.html.erb:10:in `_app_views_users__event_user_html_erb__412443848_34308540_1390678'
        app/views/users/index.html.erb:7:in `_app_views_users_index_html_erb___603337143_34316016_0'

    Read the article

  • ORACLE -1401 error

    - by Sachin Chourasiya
    I have a stored procedure in Oracle 9i which inserts records into a table. The table has a primary key built to ensure duplicate rows do not exist. I insert a record by calling this stored procedure, and it works properly the first time. I then try to insert a duplicate record, expecting a unique constraint violation error, but instead I get ORA-01401: inserted value too large for column. I know what the error means, but my question is: if the inserted value is really too large, how did the first attempt succeed?
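
    A small diagnostic sketch (an assumption: you can query the owning schema, and MY_TABLE is a placeholder for the real table name): comparing the declared column widths against the lengths of the values the procedure actually binds often shows that a different column than expected, or a value padded or expanded inside the procedure, is the one overflowing on the second call.

        -- MY_TABLE is a placeholder; substitute the real table name in uppercase.
        SELECT column_name, data_type, data_length, char_length
        FROM   user_tab_columns
        WHERE  table_name = 'MY_TABLE'
        ORDER  BY column_id;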

    Read the article

  • Any way to avoid a filesort when order by is different to where clause?

    - by Julian
    I have an incredibly simple query (table type InnoDB), and EXPLAIN says that MySQL must do an extra pass to find out how to retrieve the rows in sorted order.

        SELECT * FROM `comments`
        WHERE (commentable_id = 1976)
        ORDER BY created_at desc
        LIMIT 0, 5

    Exact EXPLAIN output:

        table     select_type  type  extra                         possible_keys   key             key_len  ref    rows
        comments  simple       ref   using where; using filesort   common_lookups  common_lookups  5        const  89

    commentable_id is indexed. Comments has nothing tricky in it, just a content field. The manual suggests that if the ORDER BY differs from the WHERE clause, there is no way the filesort can be avoided: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html

    I also tried ORDER BY id as well, as it's equivalent, but it makes no difference, even if I add id as an index (which I understand is not required, as id is indexed implicitly in MySQL). Thanks in advance for any ideas!
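
    A minimal sketch of the usual fix, under the assumption (from the EXPLAIN above) that only commentable_id is currently indexed: a composite index whose leading column matches the WHERE equality and whose second column matches the ORDER BY lets InnoDB read the five newest rows straight off the index, so the filesort disappears.

        -- Hypothetical index name; covers WHERE commentable_id = ? ORDER BY created_at DESC
        ALTER TABLE comments
            ADD INDEX idx_commentable_created (commentable_id, created_at);

        -- Re-run EXPLAIN; "Using filesort" should be gone from Extra.
        EXPLAIN SELECT * FROM comments
        WHERE commentable_id = 1976
        ORDER BY created_at DESC
        LIMIT 0, 5;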

    Read the article

  • MySQL: Select pages that are not tagged?

    - by lauthiamkok
    Hi, I have a db with two tables like those below.

    page table:

        pg_id  title
        1      a
        2      b
        3      c
        4      d

    tagged table:

        tagged_id  pg_id
        1          1
        2          4

    I want to select the pages which are not tagged. I tried the query below, but it doesn't work:

        SELECT *
        FROM root_pages
        LEFT JOIN root_tagged ON (root_tagged.pg_id = root_pages.pg_id)
        WHERE root_pages.pg_id != root_tagged.pg_id

    It returns zero - Showing rows 0 - 1 (2 total, Query took 0.0021 sec). But I want it to return:

        pg_id  title
        2      b
        3      c

    My query must have been wrong?? How can I correctly return the pages which are not tagged? Thanks.
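
    A sketch of the usual anti-join pattern, keeping the root_pages/root_tagged names from the query above: the != condition can never describe "no matching row", but a LEFT JOIN leaves NULL on the right side for unmatched pages, and that is what to filter on.

        SELECT p.pg_id, p.title
        FROM root_pages AS p
        LEFT JOIN root_tagged AS t ON t.pg_id = p.pg_id
        WHERE t.pg_id IS NULL;   -- only pages with no row in root_tagged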

    Read the article

  • [JS] How to manipulate a PHP Array in javascript?

    - by rasouza
    I have an array in PHP which contains data from the database, and it is also printed out as a table on the same page, which has an AJAX delete function.

    Trying to explain better: the array contains debt sums related to many people; it is the application's main function. On the same page there is a table containing every debt record related to the array, which can be deleted or edited using AJAX. I have coded the part that deletes the record and removes the TR entry, but it's not enough: I'd also like to update the debt sum, which is a PHP array, using AJAX.

    What I have is the JS function which removes the TR when the delete button is clicked:

        // TR fading when deleted
        $('.delete').click(function() {
            $.ajax({
                type: 'GET',
                url: 'history/delete/id/' + $(this).attr('id')
            });
            $(this).parent().parent().fadeOut();
            return false;
        });

    and I have the PHP array (image).

    Read the article

  • group by, order by, with join

    - by Scarface
    Hey guys, quick question. I have this query, and I am trying to get the latest comment for each topic and then sort those results in descending order (therefore one comment per topic). I have what I think should work, but my join always messes up my results. Somehow it seems to have sorted the end results properly, but it has not taken the latest comment from each topic; instead it seems to have just taken a random comment. If anyone has any ideas, I would really appreciate the advice.

        SELECT *
        FROM comments
        JOIN topic ON topic.topic_id = comments.topic_id
        WHERE topic.creator = 'admin'
        GROUP BY comments.topic_id
        ORDER BY comments.time DESC

    Table comments is structured like: id, time, user, message, topic_id

    Table topic is structured like: topic_id, subject_id, topic_title, creator, timestamp, description
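
    A sketch of the usual greatest-n-per-group fix, using the column names listed above: first find each topic's latest comment time, then join back to pick exactly that row, so GROUP BY never has to guess which comment's columns to return.

        SELECT c.*, t.topic_title
        FROM comments AS c
        JOIN topic AS t ON t.topic_id = c.topic_id
        JOIN (
            SELECT topic_id, MAX(time) AS latest_time
            FROM comments
            GROUP BY topic_id
        ) AS latest ON latest.topic_id = c.topic_id
                   AND latest.latest_time = c.time
        WHERE t.creator = 'admin'
        ORDER BY c.time DESC;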

    Read the article

  • ASP, MySQL & case sensitivity suddenly broken

    - by user131812
    Hi there, we have an old ASP site that has been working fine for years with a MySQL database. All of a sudden last week, lots of SQL queries stopped working. The database has a table called 'members' but the code calls 'Members'. It appears the queries used to work regardless of the case of the table names, but something has changed recently, somewhere, to enforce case. This has me stumped, as the site has not been touched in years, the server config hasn't changed, and the database provider has not changed anything. Is there any simple way to ignore case for an ASP site (without editing lots of files :) Thanks, Ben
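
    A hedged place to look, with a one-line check (table-name case handling lives in the MySQL server, not in ASP): the lower_case_table_names setting differs between Windows and Linux file systems, so a host-side migration or MySQL upgrade can suddenly make 'Members' and 'members' refer to different tables.

        -- Run from any query window:
        -- 1 = table names stored and compared in lowercase (typical Windows default)
        -- 0 = table names are case-sensitive (typical Linux default)
        SHOW VARIABLES LIKE 'lower_case_table_names';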

    Read the article

  • Mapping tables from an existing database to an object -- is Hibernate suited?

    - by Bernhard V
    Hello! I've got some tables in an existing database and I want to map them to Java objects. Actually it's one table that contains the main information, and some other tables that reference such a table entry with a foreign key. I don't want to store objects in the database, I only want to read from it; the program should not be allowed to apply any changes to the underlying database. Currently I read from the database with 5 JDBC SQL queries and then set the results on an object. I'm now looking for a less code-intensive way. Another goal is the learning aspect. Is Hibernate suitable for this task, or is there another ORM framework that better fits my requirements?

    Read the article

  • Is there a SaaS for logging user activity?

    - by JoshL
    In almost every app that I build, I create some kind of user log table to log the various activities that my actual USERS (not visitors, but someone with an account) perform on the site. This is primarily used for customer service issues, to let me pull up a record of the pages and actions that a user has visited. The downside to this is the size of the UserLogs table; it gets immense. I'm not sure whether it is common practice for others to log INDIVIDUAL (not aggregate, like Google Analytics) user behavior to a database, but if it is, I'm wondering whether any form of SaaS exists to help offload this task. I essentially need a RESTful API that lets me store and retrieve individual user activity quickly and securely. Anyone know of any, or am I the only one who has this issue?

    Read the article

  • Wrong data retrieved from database

    - by holyredbeard
    So, I want to retrieve the order of the elements of a list. The order is set beforehand by the user and is stored in the table below. Because I also want to retrieve the name and description of the list elements, I need to combine two tables (see below). However, what is actually retrieved is an array containing 16 elements (it should be four, because only four elements exist for now). The array is too long to post here, but I put it in a phpFiddle to be found here if you're interested. Well, I have really tried to find what's wrong (probably something easy, as always), but with no luck. Thanks a lot for your time and help!

    listModel.php:

        public function GetOrderedElements($userId, $listId) {
            // $userId = 46
            // $listId = 1
            $query = "SELECT le.listElemId, le.listElemName, le.listElemDesc, lo.listElemOrderPlace
                      FROM listElement AS le
                      INNER JOIN listElemOrder AS lo ON le.listId = lo.listId
                      WHERE lo.userId = ? AND lo.listId = ?
                      ORDER BY listElemId";
            $stmt = $this->m_db->Prepare($query);
            $stmt->bind_param("ii", $userId, $listId);
            $listElements = $this->m_db->GetOrderedElements($stmt);
            return $listElements;
        }

    database.php:

        public function GetOrderedElements(\mysqli_stmt $stmt) {
            if ($stmt === FALSE) {
                throw new \Exception($this->mysqli->error);
            }
            if ($stmt->execute() == FALSE) {
                throw new \Exception($this->mysqli->error);
            }
            if ($stmt->bind_result($listElemId, $listElemName, $listElemDesc, $listElemOrderPlace) == FALSE) {
                throw new \Exception($this->mysqli->error);
            }
            $listElements = array();
            while ($stmt->fetch()) {
                $listElements[] = array('listElemId' => $listElemId,
                                        'listElemName' => $listElemName,
                                        'listElemDesc' => $listElemDesc,
                                        'listElemOrderPlace' => $listElemOrderPlace);
            }
            var_dump($listElements);
            $stmt->Close();
            return $listElements;
        }

    From the database:

    listElemOrder:

        listElemOrderId | listId | listElemId | userId | listElemOrderPlace
        1               | 1      | 1          | 46     | 1
        2               | 1      | 2          | 46     | 4
        3               | 1      | 3          | 46     | 2
        4               | 1      | 4          | 46     | 3

    listElement:

        listElemId | listElemName | listId | listElemDesc | listElemOrderPlace
        1          | Elem A       | 1      | Derp         | NULL
        2          | Elem B       | 1      | Herp         | NULL
        3          | Elem C       | 1      | Lorum        | NULL
        4          | Elem D       | 1      | Ipsum        | NULL

    Note: 'listElemOrderPlace' in the table listElement is the final order of the elements (the average across all users), not to be mixed up with the column of the same name in the other table, which is only a specific user's order of the list elements (and is the interesting one in this case).
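
    A hedged observation with a corrected-query sketch (based only on the code and tables shown above): the join condition le.listId = lo.listId matches on a value every row shares, so each of the 4 elements pairs with all 4 order rows and you get 4 x 4 = 16. Joining on listElemId should bring it back to 4:

        SELECT le.listElemId, le.listElemName, le.listElemDesc, lo.listElemOrderPlace
        FROM listElement AS le
        INNER JOIN listElemOrder AS lo
            ON lo.listElemId = le.listElemId   -- one order row per element, not per list
        WHERE lo.userId = ? AND lo.listId = ?
        ORDER BY le.listElemId;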

    Read the article

  • Duplicate a database record with linq

    - by holz
    Is there a way to duplicate a db record with LINQ to SQL in C#?

        Id [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
        [Foo] [nvarchar](255) NOT NULL,
        [Bar] [numeric](28,12) NOT NULL,
        ...

    Given the table above, I would like to duplicate a record (but give it a different id), in a way that new fields added to the DB and to the LINQ dbml file at a later date will still get duplicated without having to change the code that duplicates the record. I.e. I don't want to write newRecord.Foo = currentRecord.Foo; for all of the fields on the table.

    Read the article

  • How can I prevent a page break in CFDocument from occurring in the middle of content?

    - by Dan Roberts
    I am generating a PDF file dynamically from HTML/CSS using the cfdocument tag. There are blocks of content that I don't want to span multiple pages. After some searching I found that the style "page-break-inside" is supported according to the docs. However, in my testing the declaration "page-break-inside: avoid" does no good. Any suggestions on getting this style declaration to work, or alternative suggestions? Here is an example. I would expect the content in the div tag not to span a page break, but it does; the style "page-break-inside: avoid" is not being honored.

        <cfdocument format="flashpaper">
            <cfloop from="1" to="10" index="i">
                <div style="page-break-inside: avoid">
                    <h1>Table Label</h1>
                    <table>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                        <tr><td>label</td><td>data</td></tr>
                    </table>
                </div>
            </cfloop>
        </cfdocument>

    Read the article

  • How to write an XSD for the following XML?

    - by Venkats
        <?xml version="1.0"?>
        <datatype xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
                  xs:noNamespaceSchemaLocation="sampletype.xsd">
            <table name="emp">
                <columns>
                    <column>
                        <name>emp_id</name>
                        <data_type>int(200) </data_type>
                    </column>
                </columns>
            </table>
        </datatype>

    I generated the XSD for the above XML here, but it was not correct. Can you help me generate the XSD for this XML? Thanks in advance.

    Read the article

  • Click event of a button which is a subview of an image view in Xcode

    - by Christina
    In my app I have an image view, and above that I added a button. In the button's click event I am calling a method, but I am unable to click the button: whenever I try to click it, the row of the table view in which my image view is present gets selected instead. Is there any method by which I can get the click event of the button which is placed on the image view? I do not want to use UIButtons for setting the images in place of the image view. Also, if I add the button as a subview of the table view, the click event works, but when I change the subview to the image view I get unsatisfactory results. Please help. Thanks, Christy

    Read the article

  • PHP foreach looping twice

    - by Jack
    Hi, I am trying to loop through some data from my database but it is outputting it twice.

        $fields = 'field1, field2, field3, field4';
        $idFields = 'id_field1, id_field2, id_field3, id_field4';
        $tables = 'table1, table2, table3, table4';
        $table = explode(', ', $tables);
        $field = explode(', ', $fields);
        $id = explode(', ', $idFields);
        $str = 'Egg';
        $i = 1;
        while ($i < 4) {
            $f = $field[$i];
            $idd = $id[$i];
            $sql = $writeConn->select()->from($table[$i], array($f, $idd))->where($f . " LIKE ?", '%' . $str . '%');
            $string = '<a title="' . $str . '" href="' . $currentProductUrl . '">' . $str . '</a>';
            $result = $writeConn->fetchAssoc($sql);
            foreach ($result as $row) {
                echo 'Success! Found ' . $str . ' in ' . $f . '. ID: ' . $row[$idd] . '.<br>';
            }
            $i++;
        }

    Output:

        Success! Found Egg in field3. ID: 5.
        Success! Found Egg in field3. ID: 5.

    Could someone please explain why it is looping through both the indexed and associative values?

    UPDATE: I did some more playing around and tried the following.

        $fields = 'field1, field2, field3, field4';
        $idFields = 'id_field1, id_field2, id_field3, id_field4';
        $tables = 'table1, table2, table3, table4';
        $table = explode(', ', $tables);
        $field = explode(', ', $fields);
        $id = explode(', ', $idFields);
        $str = 'Egg';
        $i = 1;
        while ($i < 4) {
            $f = $field[$i];
            $idd = $id[$i];
            $sql = $writeConn->select()->from($table[$i], array($f, $idd))->where($f . " LIKE ?", '%' . $str . '%');
            $string = '<a title="' . $str . '" href="' . $currentProductUrl . '">' . $str . '</a>';
            $sth = $writeConn->prepare($sql);
            $sth->execute();
            $result = $sth->fetch(PDO::FETCH_ASSOC);
            foreach ($result as $row) {
                echo 'Success! Found ' . $str . ' in ' . $f . '. ID: ' . $row[$idd] . '.<br>';
            }
            $i++;
        }

    The interesting thing is that this outputs the below:

        Success! Found Egg in field3. ID: E.
        Success! Found Egg in field3. ID: E.
        Success! Found Egg in field3. ID: 5.
        Success! Found Egg in field3. ID: 5.
        Success! Found Egg in field3. ID: E.
        Success! Found Egg in field3. ID: E.
        Success! Found Egg in field3. ID: 5.
        Success! Found Egg in field3. ID: 5.

    I have also tried adding $i to the output and this outputs 2 as expected. If I change fetch(PDO::FETCH_BOTH) to fetch(PDO::FETCH_ASSOC) the output is as follows:

        Success! Found Egg in field3. ID: E.
        Success! Found Egg in field3. ID: E.
        Success! Found Egg in field3. ID: 5.
        Success! Found Egg in field3. ID: 5.

    This has been bugging me for too long, so if anyone could help I would be very appreciative!

    Read the article

  • Passing operator as a parameter

    - by nacho4d
    Hi, I want to have a function that evaluates 2 bool vars (like a truth table). For example, since T | F : T, then

        myfunc('t', 'f', ||);   /* defined as: bool myfunc(char lv, char rv, ????) */

    should return true. How can I pass the third parameter? (I know it is possible to pass it as a char*, but then I will have to have another table to compare the operator string and then do the operation, which is something I would like to avoid.) Is it possible to pass an operator like ^ (XOR) or || (OR) or && (AND), etc., in a function/method? Thanks in advance

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable = 'internat' OR t.variable = 'veteran' OR t.variable = 'athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable = 'internat' OR t.variable = 'veteran' OR t.variable = 'athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
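
    A sketch of one alternative worth benchmarking (assumptions: SQL Server 2005 or later, and a supporting index roughly like the hypothetical one commented below): a ROW_NUMBER() window over each variable for the user picks the latest response without the correlated MAX/IN subquery.

        -- Hypothetical supporting index:
        -- CREATE INDEX IX_results_user_var_id
        --     ON [dbo].[results] (userid, variable, id) INCLUDE (value);

        SELECT id, variable, value
        FROM (
            SELECT t.id, t.variable, t.value,
                   ROW_NUMBER() OVER (PARTITION BY t.variable
                                      ORDER BY t.id DESC) AS rn
            FROM [dbo].[results] t WITH (NOLOCK)
            WHERE t.userid = 2111846
              AND t.variable IN ('internat', 'veteran', 'athlete')
        ) AS latest
        WHERE rn = 1;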

    Read the article

  • ASP.NET MVC - how to make users confirm the delete

    - by H4mm3rHead
    Hey, I have this page where I have checkboxes next to every item in a table, and I want to allow the user to select some of them and press my delete button. I just can't come up with the jQuery for showing the confirm window and submitting only if 'yes' is pressed. This is my page:

        <% Html.BeginForm(); %>
        <% List<ShopLandCore.Model.SLGroup> groups = (List<ShopLandCore.Model.SLGroup>)Model; %>
        <% Html.RenderPartial("AdminWorkHeader"); %>
        <table width="100%" id="ListTable" cellpadding="0" cellspacing="0">
            <tr>
                <td colspan="5" class="heading">
                    <input type="submit" name="closeall" value="Close selected" />
                    <input type="submit" name="deleteall" value="Delete selected" />
                </td>
            </tr>
            <tr>
                <th width="20px"> </th>
                <th>Name</th>
                <th>Description</th>
                <th width="150px">Created</th>
                <th width="150px">Closed</th>
            </tr>
            <% foreach (ShopLandCore.Model.SLGroup g in groups) { %>
            <tr>
                <td><%= Html.CheckBox(g.Id.ToString()) %></td>
                <td><%= g.Name %></td>
                <td><%= g.Description %></td>
                <td><%= g.Created %></td>
                <td><%= g.Closed %></td>
            </tr>
            <% } %>
        </table>
        <% Html.EndForm(); %>

    Please note that it should only ask for confirmation on the delete button, not necessarily on the close button.

    Read the article

  • Query to update rowNum

    - by BrokeMyLegBiking
    Can anyone help me write this query more efficiently? I have a table that captures TCP traffic, and I'd like to update a column called RowNumInFlow, which is simply the sequential number of the IP packet within its flow. The code below works fine, but it is slow.

        declare @FlowID int
        declare @LastRowNumInFlow int
        declare @counter1 int
        set @counter1 = 0

        while (@counter1 < 1)
        BEGIN
            set @counter1 = @counter1 + 1

            -- 1)
            select top 1 @FlowID = t.FlowID
            from Traffic t
            where t.RowNumInFlow is null

            if (@FlowID is null) break

            -- 2)
            set @LastRowNumInFlow = null
            select top 1 @LastRowNumInFlow = RowNumInFlow
            from Traffic
            where FlowID = @FlowID and RowNumInFlow is not null
            order by ID desc

            if @LastRowNumInFlow is null
                set @LastRowNumInFlow = 1
            else
                set @LastRowNumInFlow = @LastRowNumInFlow + 1

            update Traffic
            set RowNumInFlow = @LastRowNumInFlow
            where ID = (select top 1 ID from Traffic where flowid = @FlowID and RowNumInFlow is null)
        END

    Example table values after the query has run:

        ID      FlowID  RowNumInFlow
        448923  44      1
        448924  44      2
        448988  44      3
        448989  44      4
        448990  44      5
        448991  44      6
        448992  44      7
        448993  44      8
        448995  44      9
        448996  44      10
        449065  44      11
        449063  45      1
        449170  45      2
        449171  45      3
        449172  45      4
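
    A sketch of a set-based alternative (assuming SQL Server 2005 or later, which the TOP/variable style above suggests): ROW_NUMBER() numbers every packet within its flow in one statement, so the row-at-a-time loop isn't needed.

        -- Renumbers every flow from 1 in ID order, in a single pass.
        WITH numbered AS (
            SELECT ID, FlowID, RowNumInFlow,
                   ROW_NUMBER() OVER (PARTITION BY FlowID ORDER BY ID) AS rn
            FROM Traffic
        )
        UPDATE numbered
        SET RowNumInFlow = rn;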

    Read the article

  • How can I mysqldump a huge database?

    - by meder
    SELECT count(*) from the table gives me 3296869 rows. The table only contains 4 columns, storing dropped domains. I tried to dump the SQL through:

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";

    However, this just dumps an empty 20 KB gzipped file. My client is using shared hosting, so the server specs and resource usage aren't top of the line. I'm not even given SSH access or direct access to the database, so I have to make queries through PHP scripts I upload via FTP (SFTP isn't an option, again). Is there some way I can perhaps sequentially download portions of it, or pass an argument to mysqldump that will optimize it? I came across http://jeremy.zawodny.com/blog/archives/000690.html which mentions the -q flag and tried that, but it didn't seem to do anything differently.
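
    A sketch of the "download it in portions" idea in plain SQL (the table and column names below are hypothetical stand-ins, since the real schema isn't shown): page through the table by primary key and have the PHP script write each chunk out, so no single query or response is large enough to hit shared-hosting limits.

        -- dropped_domains, id, and domain are placeholder names for illustration.
        -- Fetch 50,000 rows at a time, keyed on the primary key rather than OFFSET
        -- so later chunks don't get progressively slower.
        SELECT id, domain
        FROM dropped_domains
        WHERE id > 0          -- replace 0 with the last id seen in the previous chunk
        ORDER BY id
        LIMIT 50000;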

    Read the article
