Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • Why does InnoDB keep on growing for every update?

    - by Akash Kava
    I have a table which consists of heavy blobs, and I wanted to conduct some tests on it. I know deleted space is not reclaimed by InnoDB, so I decided to reuse existing records by updating their own values instead of creating new records. But I noticed that whether I delete and insert a new entry, or UPDATE an existing row, InnoDB keeps on growing. Assuming I have 100 rows, each storing 500KB of information, and my InnoDB size is 10MB: when I call UPDATE on all rows (no insert, no delete), InnoDB grows by ~8MB for every run I do. All I am doing is storing exactly 500KB of data in each row, with little modification, and the size of the blob is fixed. What can I do to prevent this? I know about OPTIMIZE TABLE, but I can't use it because in regular usage the table is going to be 60-100GB, and running OPTIMIZE would just stall the entire server.
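
    A minimal sketch of the kind of update test described; the table and column names here are hypothetical:

        -- hypothetical table matching the description: one fixed-size blob per row
        CREATE TABLE blob_test (
            id      INT PRIMARY KEY,
            payload MEDIUMBLOB NOT NULL
        ) ENGINE=InnoDB;

        -- rewrite every row in place with a same-size payload; the question
        -- observes the tablespace growing by ~8MB on every such run
        UPDATE blob_test SET payload = REPEAT('x', 500 * 1024);

    One mitigating setting worth knowing: with innodb_file_per_table enabled, each table gets its own .ibd file, so OPTIMIZE TABLE can return reclaimed space to the filesystem per table instead of the shared ibdata file growing indefinitely.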

  • How to run a stored procedure 1000 times

    - by subt13
    I have a stored procedure that I'm using to populate a table with about 60 columns. I have generated 1000 exec statements that look like this:

        exec PopulateCVCSTAdvancement 174, 213, 1, 0, 7365
        exec PopulateCVCSTAdvancement 174, 214, 1, 0, 7365
        exec PopulateCVCSTAdvancement 175, 213, 0, 0, 7365

    Each time, the stored procedure inserts anywhere from 1 to 3,000 records (usually around 2,000). The "server" is running desktop hardware with 4GB of available memory on a server OS. The problem I have is that the first 10-15 executes average 1-2 seconds each, but the next 10-15 seem to never finish. Am I doing this correctly? How should I do this? Thanks! Top 10 waiters:

        LAZYWRITER_SLEEP
        SQLTRACE_INCREMENTAL_FLUSH_SLEEP
        REQUEST_FOR_DEADLOCK_SEARCH
        XE_TIMER_EVENT
        FT_IFTS_SCHEDULER_IDLE_WAIT
        CHECKPOINT_QUEUE
        LOGMGR_QUEUE
        SLEEP_TASK
        BROKER_TO_FLUSH
        BROKER_TASK_STOP
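
    If the 1000 generated exec lines themselves become unwieldy, one alternative is to drive the calls from a parameter table. A sketch, assuming a hypothetical staging table dbo.AdvancementParams holding the five values per call:

        DECLARE @p1 INT, @p2 INT, @p3 INT, @p4 INT, @p5 INT;

        -- dbo.AdvancementParams is a hypothetical staging table, one row per call
        DECLARE param_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT p1, p2, p3, p4, p5 FROM dbo.AdvancementParams;

        OPEN param_cursor;
        FETCH NEXT FROM param_cursor INTO @p1, @p2, @p3, @p4, @p5;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC PopulateCVCSTAdvancement @p1, @p2, @p3, @p4, @p5;
            FETCH NEXT FROM param_cursor INTO @p1, @p2, @p3, @p4, @p5;
        END
        CLOSE param_cursor;
        DEALLOCATE param_cursor;

    This doesn't change what each call does, but it avoids compiling a 1000-statement script in one batch and makes the run easy to regenerate and restart.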

  • comparing two tables

    - by sza
    I have two identical tables, i.e. all the columns are identical: one column's datatype is Text, one is varchar(255), and the rest are int. Let's say the first table's name is 'AAAAA'. Table AAAAA was processed and backed up earlier this month. Both tables were storing data, but now only the second table (BBBBB) is storing data. I need to find the unmatched records in the second table and add those records to table AAAAA. Your help will be highly appreciated. I tried to use EXCEPT, but it does not support the text datatype. I'm using SQL Server 2005.
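
    A sketch of the usual workaround when EXCEPT rejects a text column: compare with NOT EXISTS and CAST the text side to varchar(max), which SQL Server 2005 supports. The column names here are hypothetical:

        INSERT INTO AAAAA (id, intcol, textcol)
        SELECT b.id, b.intcol, b.textcol
        FROM BBBBB b
        WHERE NOT EXISTS (
            SELECT 1
            FROM AAAAA a
            WHERE a.id = b.id
              AND CAST(a.textcol AS varchar(max)) = CAST(b.textcol AS varchar(max))
        );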

  • Model association changes in production environment, specifically converting a model to polymorphic?

    - by dustmoo
    Hi everyone, I was hoping I could get feedback on making major changes to how a model works in an app that is already in production. In my case I have a model Record that has_many PhoneNumbers. Currently it is a typical has_many / belongs_to association, with a Record having many PhoneNumbers. I now have a feature for adding temporary, user-generated records, and these records will have PhoneNumbers too. I 'could' just add a user_record_id to the PhoneNumber model, but wouldn't it be better for this to be a polymorphic association? And if so, when you change how a model associates, how in the heck do you update the production database without breaking everything? .< Anyway, just looking for best practices in a situation like this. Thanks!

  • Data manipulation without server side

    - by monczek
    Hi, I have to create a very simple webpage to show, filter and add data from a not-yet-defined source (probably txt/xml/csv). Records should be visible as a table, filtered using 3 criteria fields. There should also be a possibility to add new records. My first thought was: XHTML + jQuery + csv2table + PicNet Table Filter. It does exactly what I want except adding rows, that is, saving changes back to the source file (probably blocked as a security risk). My question is: is there any possibility to do this without involving a server side like asp.net, jee, php, sql? The source file is located on the server. Thanks for your ideas :-)

  • How to remove rows which have one or more empty or null cells?

    - by Harikrishna
    I have a DataGridView on my WinForm, and I am displaying records in it. After displaying the records, I want to remove from the DataGridView every row which has one or more empty cells, that is, no value in some cell of that row. So I check each cell of every row, and if any cell is empty or null I remove that row using the RemoveAt() function. My code is:

        // Iterate backwards: removing row i while counting up would skip the
        // row that slides into position i. Starting at Count - 2 also skips
        // the new-row placeholder, as the original upper bound did.
        for (int i = dataGridView1.Rows.Count - 2; i >= 0; i--)
        {
            for (int j = 0; j < dataGridView1.Columns.Count; j++)
            {
                // Convert.ToString returns "" for null, avoiding the
                // NullReferenceException that .Value.ToString() throws on null
                if (string.IsNullOrEmpty(Convert.ToString(dataGridView1.Rows[i].Cells[j].Value)))
                {
                    dataGridView1.Rows.RemoveAt(i);
                    break;
                }
            }
        }

  • Does a DataSet column's Expression do the same work in the background as manual calculation?

    - by Harikrishna
    I have one DataTable which is not data-bound; records come from a file that is parsed into the DataTable dynamically each time. There are three columns in the DataTable: Marks1, Marks2 and FinalMarks, all of type decimal. To add the Marks1 and Marks2 values and store the result in the FinalMarks column, what I do is:

        datatableResult.Columns["FinalMarks"].Expression = "Marks1+Marks2";

    It works properly. It can also be done another way:

        foreach (DataRow r in datatableResult.Rows)
        {
            r["FinalMarks"] = Convert.ToDecimal(r["Marks1"]) + Convert.ToDecimal(r["Marks2"]);
        }

    Is the first approach the same as the second behind the scenes, i.e. do both approaches do the same work? EDIT: I want to know whether the first approach works in the background the same way as the second.

  • DataSet.GetChanges - Save the updated record in a different table than the source one

    - by John Dev
    I'm doing operations on a DataSet containing data from a SQL table named Test_1, and then getting the updated records using the DataSet.GetChanges(DataRowState.Modified) function. Then I try to save the DataSet containing the updated records to a different table than the source one (the table is named Test and has the same structure as Test_1) using the following statement:

        sqlDataAdapter.Update(changesDataSet, "Test");

    I'm getting the following error:

        Update unable to find TableMapping['Test'] or DataTable 'Test'

    I'm new to ADO.NET and don't even know if it's possible. Any advice is welcome. Just to provide a bit of context: ETL jobs are importing data into temp tables with the same structure as the original but with a _jobid suffix. Right after, a rule engine does validation before updating the original table.

  • SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day and is currently at 15MM records. The table looks something like:

        id         (PK, int, not null)
        member_id  (int, not null)
        another_id (int, not null)
        date       (datetime, not null)

    SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express
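
    Two hedged first checks, since the schema above lists no secondary indexes (the index name below is hypothetical):

        -- COUNT(*) lets the optimizer scan the narrowest available index
        -- instead of the wider clustered index
        SELECT COUNT(*) FROM tbl;

        -- without an index on the join key, a join on another_id has to
        -- scan all 15MM rows
        CREATE NONCLUSTERED INDEX IX_tbl_another_id ON tbl (another_id);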

  • How can I query a table that got split into 2 smaller tables? UNION? A view?

    - by danfromisrael
    Hello friends, I have a very big table (nearly 2,000,000 records) that got split into 2 smaller tables. One table contains only records from the last week and the other contains all the rest (which is a lot...). Now I have some stored procedures / functions that used to query the big table before it got split. I still need them to query the union of both tables; however, it seems that creating a view which uses a union between the two tables takes forever... That's my view:

        CREATE VIEW `united_tables_view` AS
        SELECT * FROM table1
        UNION
        SELECT * FROM table2;

    Then I'd like to switch the stored procedures everywhere from selecting from 'oldBigTable' to selecting from 'united_tables_view'... I've tried adding indexes to make it faster but nothing helps... Any ideas? PS: the view and the union are my idea, but any other creative idea would be perfect! Bring it on! Thanks!
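
    One detail that matters here: plain UNION sorts and de-duplicates the combined result, while UNION ALL just streams both tables through. Since the two tables partition the data by construction (last week vs. everything else), a sketch of the cheaper view:

        CREATE VIEW `united_tables_view` AS
        SELECT * FROM table1
        UNION ALL   -- no duplicate-elimination pass over ~2,000,000 rows
        SELECT * FROM table2;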

  • Which one is a faster/better SQL practice?

    - by artsince
    Suppose I have a 2-column table (id, flag) where id is sequential. I expect this table to contain a lot of records. I want to periodically select the first row not flagged and update it. Some of the records along the way may have already been flagged, so I want to skip them. Does it make more sense if I store the last id I flagged and use it in my select statement, like:

        select * from mytable where id > my_last_id order by id asc limit 1

    or simply get the first unflagged row, like:

        select * from mytable where flagged = 'F' order by id asc limit 1

    Thank you!
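
    Either form benefits from an index that matches its predicate; a sketch (the index name is hypothetical):

        -- the id > my_last_id form walks the primary key directly; the
        -- flagged = 'F' form wants a composite index so the first unflagged
        -- row in id order is found without scanning already-flagged rows
        CREATE INDEX idx_flagged_id ON mytable (flagged, id);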

  • Working with sets of rows in (My)SQL and comparing values

    - by Pep.
    Hello, I am trying to figure out the SQL for doing some relatively simple operations on sets of records in a table, but I am stuck. Consider a table with multiple rows per item, all identified by a common key. For example:

        serial  model  color
        XX1     A      blue
        XX2     A      blue
        XX3     A      green
        XX5     B      red
        XX6     B      blue
        XX1     B      blue

    What I would, for example, want to do is:

    1. Assuming that all model A rows must have the same color, find the rows which don't (for example, XX3 is green).
    2. Assuming that a given serial number can only point to a single model, find the rows where that does not hold (for example, XX1 points to both A and B).

    These are all logically simple things to do. To abstract it, I want to know how to group records by a single key (or combination of keys) and then compare the values of those records. Should I use a join on the same table? Should I use some sort of array or similar? Thanks for your help
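
    Both checks fit the group-and-compare shape described. A sketch, assuming the table is named parts:

        -- check 1: models whose rows disagree on color
        SELECT model
        FROM parts
        GROUP BY model
        HAVING COUNT(DISTINCT color) > 1;

        -- check 2: serials that point to more than one model (e.g. XX1)
        SELECT serial
        FROM parts
        GROUP BY serial
        HAVING COUNT(DISTINCT model) > 1;

    Joining either result back to the table then returns the offending rows themselves.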

  • SQL Server - how to execute the second half of an OR only when the first one fails

    - by fn79
    Suppose I have a table with the following records:

        value              text
        company/about      about Us
        company            company
        company/contactus  company contact

    I have a very simple query in SQL Server, below. I am having a problem with the 'or' condition. In the query below, I am trying to find the text for the value 'company/about'. Only if it is not found do I want to run the other side of the 'or'. The query below returns two records:

        value              text
        company/about      about Us
        company            company

    Query:

        select * from tbl
        where value='company/about'
           or value=substring('company/about',0,charindex('/','company/about'))

    How can I modify the query so the result set looks like:

        value              text
        company/about      about Us
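
    One way to get the "only if the first match fails" behavior is to guard the second branch with NOT EXISTS; a sketch under that assumption:

        select * from tbl
        where value = 'company/about'
        union all
        select * from tbl
        where value = substring('company/about', 0, charindex('/', 'company/about'))
          -- this branch contributes rows only when the exact match found nothing
          and not exists (select 1 from tbl where value = 'company/about');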

  • MS Access: can't move to a different record?

    - by user986706
    OK, I'm really baffled. Here's the rundown: I've created a form with a subform. The main form is called FacilityInfo, and the subform is called BillingInfo. The forms are linked via 3 fields: AffiliateID, ClientID, and FacilityID. The subform shows one record at a time, set to show Continuous Forms, with AllowAdditions = No. I can see that there are 4 records in the nav bar, but Access won't let me move off the first record. I've tried setting it to Single Form. I've tried AllowAdditions = Yes. I do have a vertical scroll bar on the subform; it will allow me to scroll through the records, but I can only see them. I cannot move to one of the controls. Any ideas? I'm freakin' out! Thanks in advance!

  • MySQL: DISTINCT on some columns

    - by Adam
    I have the following query that works well:

        SELECT DISTINCT city, region1, region2
        FROM static_geo_world
        WHERE country='AU'
          AND (city LIKE '%geel%' OR region1 LIKE '%geel%' OR region2 LIKE '%geel%'
               OR region3 LIKE '%geel%' OR zip LIKE 'geel%')
        ORDER BY city;

    I need to also extract a column named 'id', but this messes up the DISTINCT, as each id is different. How can I get the same unique set of records as above but also get an 'id' for each record? Note: sometimes this returns a few thousand records, so a query for each record isn't possible. Any ideas would be very welcome...
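
    A sketch of the usual workaround: GROUP BY the columns that must be unique and pick one representative id per group (MIN is an arbitrary but deterministic choice):

        SELECT MIN(id) AS id, city, region1, region2
        FROM static_geo_world
        WHERE country='AU'
          AND (city LIKE '%geel%' OR region1 LIKE '%geel%' OR region2 LIKE '%geel%'
               OR region3 LIKE '%geel%' OR zip LIKE 'geel%')
        GROUP BY city, region1, region2
        ORDER BY city;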

  • Why isn't INT more efficient than UNIQUEIDENTIFIER (according to the execution plan)?

    - by ck
    I have a parent table and a child table where the columns that join them are of type UNIQUEIDENTIFIER. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both tables but changed the relationship columns to INTs instead, and have rebuilt the indexes so the copies are essentially the same structure and can be queried the same way. When I query for a known 20 records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split for the batch. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation? EDIT: The question is not about which is more efficient, but why the query execution plan shows both queries as having the same cost.
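
    Plan cost percentages are estimates, not measurements, so one hedged check is to compare actual I/O and timing for the two queries instead of the estimated cost split:

        SET STATISTICS IO ON;    -- reports logical reads per table
        SET STATISTICS TIME ON;  -- reports CPU and elapsed time

        -- run the UNIQUEIDENTIFIER-keyed query and the INT-keyed query here
        -- and compare logical reads: narrower INT keys pack more rows per
        -- page, which shows up in reads even when estimated costs are 50/50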

  • How to exclude an array of ids from query in Rails (using ActiveRecord)?

    - by CuriousYogurt
    I would like to perform an ActiveRecord query that returns all records except those that have certain ids. The ids I would like excluded are stored in an array. So:

        ids_to_exclude = [1,2,3]
        array_without_excluded_ids = Item. ???

    I'm not sure how to complete the second line. Background (what I've already tried): I'm not sure background is necessary, but I've already tried various combinations of .find and .where. For example:

        array_without_excluded_ids = Item.find(:all, :conditions => { "id not IN (?)", ids_to_exclude })
        array_without_excluded_ids = Item.where( "items.id not IN ?", ids_to_exclude)

    These fail. This tip might be on the right track, but I have not succeeded in adapting it. Any help would be greatly appreciated.

  • More efficient SQL than using "A UNION (B in A)"?

    - by machinatus
    Edit 1 (clarification): Thank you for the answers so far! The response is gratifying. I want to clarify the question a little, because based on the answers I think I did not describe one aspect of the problem correctly (and I'm sure that's my fault, as I was having a difficult time defining it even for myself). Here's the rub: The result set should contain ONLY the records with tstamp BETWEEN '2010-01-03' AND '2010-01-09', AND the one record where the tstamp IS NULL for each order_num in the first set (there will always be one with a null tstamp for each order_num). The answers given so far appear to include all records for a certain order_num if there are any with tstamp BETWEEN '2010-01-03' AND '2010-01-09'. For example, if there were another record with order_num = 2 and tstamp = 2010-01-12 00:00:00, it should not be included in the result.

    Original question: Consider an orders table containing id (unique), order_num, tstamp (a timestamp), and item_id (the single item included in an order). tstamp is null unless the order has been modified, in which case there is another record with identical order_num, and tstamp then contains the timestamp of when the change occurred. Example...

        id  order_num  tstamp               item_id
        --  ---------  -------------------  -------
        0   1                               100
        1   2                               101
        2   2          2010-01-05 12:34:56  102
        3   3                               113
        4   4                               124
        5   5                               135
        6   5          2010-01-07 01:23:45  136
        7   5          2010-01-07 02:46:00  137
        8   6                               100
        9   6          2010-01-13 08:33:55  105

    What is the most efficient SQL statement to retrieve all of the orders (based on order_num) which have been modified one or more times during a certain date range? In other words, for each order we need all of the records with the same order_num (including the one with NULL tstamp), for each order_num WHERE at least one of the order_num's records has tstamp NOT NULL AND tstamp BETWEEN '2010-01-03' AND '2010-01-09'. It's the "WHERE at least one of the order_num's records has tstamp NOT NULL" that I'm having difficulty with. The result set should look like this:

        id  order_num  tstamp               item_id
        --  ---------  -------------------  -------
        1   2                               101
        2   2          2010-01-05 12:34:56  102
        5   5                               135
        6   5          2010-01-07 01:23:45  136
        7   5          2010-01-07 02:46:00  137

    The SQL that I came up with is this, which is essentially "A UNION (B in A)", but it executes slowly and I hope there is a more efficient solution:

        SELECT history_orders.order_id, history_orders.tstamp, history_orders.item_id
        FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
              FROM orders
              WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09') AS history_orders
        UNION
        SELECT current_orders.order_id, current_orders.tstamp, current_orders.item_id
        FROM (SELECT orders.order_id, orders.tstamp, orders.item_id
              FROM orders
              WHERE orders.tstamp IS NULL) AS current_orders
        WHERE current_orders.order_id IN
              (SELECT orders.order_id
               FROM orders
               WHERE orders.tstamp BETWEEN '2010-01-03' AND '2010-01-09');
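
    A sketch of a single-pass alternative that matches the clarified requirement (only the in-range rows plus each matching order's NULL-tstamp row); it assumes the order_id in the query above corresponds to the order_num column shown in the sample data:

        SELECT o.id, o.order_num, o.tstamp, o.item_id
        FROM orders o
        JOIN (SELECT DISTINCT order_num
              FROM orders
              WHERE tstamp BETWEEN '2010-01-03' AND '2010-01-09') AS modified
          ON o.order_num = modified.order_num
        WHERE o.tstamp IS NULL
           OR o.tstamp BETWEEN '2010-01-03' AND '2010-01-09';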

  • INSERT INTO ... SELECT from another table

    - by user3815079
    I need to add multiple records based on data from another table where the event is the same. I've found on this forum:

        insert into table2(id,name) select "001",first_name from table1 where table1.id="001"

    as a possible solution for my question. So I thought this should be the following syntax:

        insert into reservations(event,seat) select "99",id from seats where seats.id>0

    to add all seats to event 99. However, when I run this query MySQL gives the message 'MySQL returned an empty resultset (0 rows). (query 0.0028 sec)' and no records were added. I translated the message, so it could be slightly different. When I only run the "select "99",id from seats where seats.id>0" query, it returns 1080 rows.

  • Complex data requirement

    - by Abulalia
    Here is my query:

        select Table1.a, Table1.b, Table1.c, Table1.d, Table2.e, Table3.f, Table4.g, Table5.h
        from Table1
        left join Table6 on Table1.b = Table6.b
        left join Table3 on Table6.j = Table3.j
        left join Table7 on Table1.b = Table7.b
        left join Table5 on Table7.h = Table5.h
        inner join Table4 on Table1.k = Table4.k
        inner join Table2 on Table1.m = Table2.m
        where Table2.e <= x
          and Table2.n = y
          and Table3.f in ('r', 's')
          and Table1.d = z
        group by Table1.a, Table1.b, Table1.c, Table1.d, Table2.e, Table3.f, Table4.g, Table5.h
        order by Table1.a, Table1.b, Table1.c

    I am looking for records (a,b,c,d,e,f,g,h) for every a where the very first b record (there are multiple b records for each a) is either 'r' or 's'. Can someone help?
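
    If the dialect supports window functions, "the very first b record per a" is usually expressed with ROW_NUMBER(); a hedged sketch, assuming "first" means lowest b and that the 'r'/'s' test refers to Table3.f:

        with ranked as (
            select Table1.a, Table1.b, Table3.f,
                   row_number() over (partition by Table1.a order by Table1.b) as rn
            from Table1
            left join Table6 on Table1.b = Table6.b
            left join Table3 on Table6.j = Table3.j
        )
        select a
        from ranked
        where rn = 1 and f in ('r', 's')

    The a values this returns can then restrict the main query above.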

  • R statistics, change ranked tables to paired

    - by cousin_pete
    I have data for many tables like:

        event_id  player  finish
        1         a       1
        1         b       2
        1         c       3
        1         d       4
        2         b       1
        2         e       2
        2         f       3
        2         a       3
        2         g       5

    There are many event_ids, each with 5 to 20 players, and finishes may be tied. In order to use conditional logistic regression in R, I would like to reformat the tables to be like:

        event_id  player1  player2  result
        1         a        b        1
        1         a        c        1
        1         a        d        1
        1         b        c        1
        1         b        d        1
        1         c        d        1
        2         b        e        1
        2         b        f        1
        2         b        a        1
        2         b        g        1
        2         e        f        1
        2         e        a        1
        2         e        g        1
        2         f        a        0.5
        2         f        g        1
        2         a        g        1

    An event_id with 4 players will have 4*3/2 = 6 records in the new table, 5 players will have 5*4/2 = 10 records, and so on. If player "a" has a finish less than player "b", the result is 1. If the finishes are equal, the result is 0.5. If player "a" has a finish greater than player "b", the result would be 0. Any help appreciated!
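
    For reference, the pairwise expansion itself is a plain self-join when the data sits in a database; a SQL sketch (the table name rankings is hypothetical), with the orientation of each pair left arbitrary since result already encodes the direction:

        select t1.event_id,
               t1.player as player1,
               t2.player as player2,
               case when t1.finish < t2.finish then 1
                    when t1.finish = t2.finish then 0.5
                    else 0 end as result
        from rankings t1
        join rankings t2
          on t1.event_id = t2.event_id
         and t1.player < t2.player   -- emit each unordered pair exactly once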

  • Determining smallest number of samples for 99% accuracy

    - by test
    I'm trying to compare 100,000 records on a local database (L) with 100,000 records on a remote database (R). Basically I want to know if an element in L exists in R. To determine that, I have to make a request against R for each L, which takes a long time (I know, there should be a better way; there isn't, that's the API I've got). So I would like to test a small sample of L against R, and then infer with some level of confidence how many are present in the whole of R. How many do I have to test to have a 99% confidence level?
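
    A hedged sketch of the standard sample-size arithmetic, assuming "99% confidence" means a 99% confidence interval of half-width E on the estimated proportion, with the worst case p = 0.5:

        n  = \frac{z^2 \, p(1-p)}{E^2} = \frac{2.576^2 \cdot 0.25}{0.05^2} \approx 664

        n' = \frac{n}{1 + (n-1)/N} = \frac{664}{1 + 663/100000} \approx 660

    So roughly 660 randomly chosen records for a ±5% margin at 99% confidence (the finite-population correction for N = 100,000 barely helps); halving E roughly quadruples n.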

  • Macports: port install jpeg fails

    - by Philipp Keller
    History:

    - installed MacPorts on Leopard
    - upgraded to Snow Leopard
    - uninstalled all ports
    - reinstalled XCode

        sudo port uninstall jpeg
        sudo port install jpeg

        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port jpeg.
        DEBUG: Requested variant i386 is not provided by port jpeg.
        DEBUG: Requested variant macosx is not provided by port jpeg.
        --- Computing dependencies for jpeg
        DEBUG: Executing org.macports.main (jpeg)
        DEBUG: Skipping completed org.macports.fetch (jpeg)
        DEBUG: Skipping completed org.macports.checksum (jpeg)
        DEBUG: Skipping completed org.macports.extract (jpeg)
        DEBUG: Skipping completed org.macports.patch (jpeg)
        --- Configuring jpeg
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (jpeg)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-arch x86_64' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local'
        sh: line 0: cd: /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a: No such file or directory
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1 while executing "$procedure $targetname"
        Warning: the following items did not execute (for jpeg): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.
        To report a bug, see http://guide.macports.org/#project.tickets

  • Unlimited and multi-computer online storage solutions with automatic backup

    - by JRL
    As the title says, what are the existing online storage solutions that provide: unlimited storage, automatic backup, and use on an unlimited number of computers (not tied to a single computer)? There are several existing questions on this site related to online storage solutions, but none specifically targeted at what I want, so I thought I'd ask the question. This wikipedia article lists some of them; are there others? How do they compare in terms of price, feature set and ease of use? Update: Kinda disappointed no one has any answers to this so far. JungleDisk looks promising, anyone have experience with it? Update 2: To answer the comments, what I'm looking for definitely DOES exist. These solutions all seem to fit the bill:

        BackMii
        CrashPlan
        DataPreserve
        Humyo
        JungleDisk
        KeepVault
        SpiderOak

    And some of them are quite cheap (CrashPlan is $100 a year). For unlimited space and computers, I'd say that's pretty good. Does anyone have experience with CrashPlan or any other of the above solutions?
