Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 104/293 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • How to find foreign-key dependencies pointing to one record in Oracle?

    - by daveslab
    Hi folks, I have a very large Oracle database, with many tables and millions of rows. I need to delete one of those rows, but I want to make sure that deleting it will not break any other dependent rows that point to it as a foreign key record. Is there a way to get a list of all the other records, or at least the table schemas, that point to this row? I know that I could just try to delete it myself and catch the exception, but I won't be running the script myself and need it to run cleanly the first time through. I have SQL Developer from Oracle and PL/SQL Developer from AllRoundAutomations at my disposal. Thanks in advance!
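
    One possible starting point, sketched under the assumption that the parent table is called PARENT_TABLE: Oracle records foreign keys in the data dictionary as constraints of type 'R', so the referencing tables can be listed before any delete is attempted.

        -- List every table (and constraint) whose foreign key points at PARENT_TABLE.
        SELECT c.owner, c.table_name, c.constraint_name
        FROM   all_constraints c
        JOIN   all_constraints p
               ON  c.r_constraint_name = p.constraint_name
               AND c.r_owner = p.owner
        WHERE  c.constraint_type = 'R'
        AND    p.table_name = 'PARENT_TABLE';

    From that list, a per-table SELECT against the referencing columns (see ALL_CONS_COLUMNS) can confirm whether any child rows actually point at the specific record.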

  • C# gridview nested in repeater update

    - by Padawan
    My current task is to display an unknown number of plan types, and within each plan type a list of participants (also an unknown number). Each participant row has a textbox and a dropdown that are editable (you can't edit the individual rows; there is one update that does a bit of validation and then updates all rows). I currently have a GridView nested within a Repeater: the Repeater displays the plan in a Label, and on ItemDataBound I call a method that populates the GridViews. It looks great, but I can't figure out how to save all the data at once. I'm not opposed to handling this a different way, as in losing the GridView and/or the Repeater, if someone has a better idea. This is C# and framework 2.0; there is no sorting or paging on the GridViews, just some links and the fields to update. Thanks in advance, Padawan
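
    If it helps, the usual pattern for a single "update everything" button with this layout is to walk the Repeater's items and pull each nested GridView out with FindControl. The control IDs below (rptPlans, gvParticipants, txtAmount, ddlStatus) are hypothetical; this is a sketch only:

        protected void btnUpdateAll_Click(object sender, EventArgs e)
        {
            foreach (RepeaterItem planItem in rptPlans.Items)
            {
                // Only data rows carry a nested grid.
                if (planItem.ItemType != ListItemType.Item &&
                    planItem.ItemType != ListItemType.AlternatingItem)
                    continue;

                GridView gv = (GridView)planItem.FindControl("gvParticipants");
                foreach (GridViewRow row in gv.Rows)
                {
                    TextBox txt = (TextBox)row.FindControl("txtAmount");
                    DropDownList ddl = (DropDownList)row.FindControl("ddlStatus");
                    // Validate, then persist txt.Text and ddl.SelectedValue for the
                    // participant key kept in gv.DataKeys[row.RowIndex].Value.
                }
            }
        }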

  • Insert values into dataset

    - by sudha.s
    I am using VB.NET and have a DataSet with a column named PHONE that contains a set of phone numbers. I want to prepend a 0 to each phone number and store the result in another DataSet. My code:

        cmd = New OracleCommand("select substr(PHONE,-10)as PHONE from reports.renewal_contact_t where run_date=to_date('" + TextBox1.Text + "','mm/dd/yyyy') and EXP_DATE =to_date('" + TextBox2.Text + "','mm/dd/yyyy') and region not in('TNP')", cn)
        ada = New OracleDataAdapter(cmd)
        ada.Fill(ds, "reports.renewal_contact_t ")
        Dim ds1 As New DataSet
        ds1 = ds.Clone()
        For Each q In ds.Tables(0).Rows
            phone = z + q("PHONE").ToString
            For Each q1 In ds1.Tables(0).Rows
                q1("PHONE") = phone
            Next
        Next

    My problem is that I am not getting values in ds1. Please help me correct it.
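
    For what it's worth, DataSet.Clone copies only the schema, so ds1 starts with zero rows and the inner loop above never runs. A minimal sketch of one way to restructure the loop (assuming "0" is the prefix to add):

        Dim ds1 As DataSet = ds.Clone()   ' same structure, no rows yet
        For Each q As DataRow In ds.Tables(0).Rows
            Dim newRow As DataRow = ds1.Tables(0).NewRow()
            newRow("PHONE") = "0" & q("PHONE").ToString()
            ds1.Tables(0).Rows.Add(newRow)
        Next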

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually I need to retrieve data from a table in some range; for example, a separate page for each search result. In MySQL I would use the LIMIT keyword, but in DB2 I don't know the equivalent. For now I retrieve a range of data with this query:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER (ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So how should I query for the highest performance?
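
    For comparison, the common DB2 pagination idiom numbers the rows once with ROW_NUMBER() and filters on the range, so only one sort is involved. A sketch for page 2 with a page size of 10:

        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RN BETWEEN 11 AND 20
        ORDER BY RN

    An index on NAME lets DB2 satisfy the ORDER BY without an extra sort step.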

  • MySQL Prepared Statements vs Stored Procedures Performance

    - by amardilo
    Hi there, I have an old MySQL 4.1 database with a table that has a few million rows, and an old Java application that connects to this database and returns several thousand rows from this table on a frequent basis via a simple SQL query (i.e. SELECT * FROM people WHERE first_name = 'Bob'; I think the Java application uses client-side prepared statements, but I am looking at switching this to the server side, and in the example mentioned the value for first_name will vary depending on what the user enters). I would like to speed up performance on the select query and was wondering if I should switch to prepared statements or stored procedures. Is there a general rule of thumb for which is quicker/less resource intensive (or whether a combination of both is better)?
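
    For reference, this is roughly what moving the statement to the server looks like in MySQL 4.1 and later; it is only a sketch, and whether it beats a stored procedure still needs measuring against the real workload (an index on first_name likely matters more than either):

        PREPARE find_people FROM 'SELECT * FROM people WHERE first_name = ?';
        SET @name = 'Bob';
        EXECUTE find_people USING @name;
        DEALLOCATE PREPARE find_people;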

  • Walking through an SQLite Table

    - by galford13x
    I would like to implement or use functionality that allows stepping through a table in SQLite. If I have a table Products that has 100k rows, I would like to retrieve perhaps 10k rows at a time, something similar to how a webpage would list data and have < Previous .. Next > links to walk through it. Are there SELECT statements that can make this simple? I have tried using ROWID in conjunction with LIMIT, which seems OK if I am not ordering the data:

        -- This seems to work if not ordering.
        SELECT * FROM Products WHERE ROWID BETWEEN x AND y;
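
    Two common patterns, sketched assuming a Name column to order by: plain LIMIT/OFFSET, which is simple but rescans the skipped rows, and keyset paging, which remembers the last key of the previous page and scales better on large tables.

        -- Offset paging: page 3 with 10k rows per page.
        SELECT * FROM Products ORDER BY Name LIMIT 10000 OFFSET 20000;

        -- Keyset paging: pass in the last Name seen on the previous page.
        SELECT * FROM Products WHERE Name > :last_name ORDER BY Name LIMIT 10000;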

  • Undefined method 'total_entries' after upgrading Rails 2.2.2 to 2.3.5

    - by Trevor
    I am upgrading a Rails application from 2.2.2 to 2.3.5. The only remaining error comes when I invoke total_entries while building a jqGrid. Error:

        NoMethodError (undefined method `total_entries' for #<Array:0xbbe9ab0>)

    Code snippet:

        @route = Route.find( :all, :conditions => "id in (#{params[:id]})" ) {
          if params[:page].present? then
            paginate :page => params[:page], :per_page => params[:rows]
            order_by "#{params[:sidx]} #{params[:sord]}"
          end
        }
        respond_to do |format|
          format.html # show.html.erb
          format.xml   { render :xml => @route }
          format.json  { render :json => @route }
          format.jgrid { render :json => @route.to_jqgrid_json( [ :id, :name ],
                         params[:page], params[:rows], @route.total_entries ) }
        end

    Any ideas? Thanks!
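
    A possible explanation: Route.find(:all, ...) returns a plain Array, which has no total_entries; that method lives on will_paginate's collection. A sketch of the paginate-based equivalent, assuming will_paginate 2.x (the interpolated params are kept from the question for brevity, though they are injection-prone and worth parameterizing):

        @route = Route.paginate(
          :conditions => "id in (#{params[:id]})",
          :page       => params[:page] || 1,
          :per_page   => params[:rows] || 20,
          :order      => "#{params[:sidx]} #{params[:sord]}"
        )

    The result still behaves like an Array but also responds to total_entries.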

  • mysql date format with changing string value

    - by hacket
    I have a field called Timestamp that stores its values as text as opposed to an actual timestamp. The logging application is unchangeable, unfortunately. So table.Timestamp is a text field with the format "Wed Mar 02 13:28:59 CDT 2011". I have been developing a query to purge all but the most recent row, using this as my Timestamp selector, which also converts the string into a date:

        MAX( STR_TO_DATE( table.Timestamp, '%a %b %d %H:%i:%s CDT %Y' ) )

    My query works perfectly. However, what I've found is that the string value 'CDT' changes between 'CDT' and 'CST' depending on whether the current time is daylight saving time or not: during daylight saving time it logs as 'CDT', and vice versa. So all the rows that contain 'CST' get ignored when I run the query with the 'CDT' pattern, and all the rows that contain 'CDT' get ignored when I run it with the 'CST' pattern. Is there a way to make it run against both string formats?
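
    One workaround, as a sketch: strip the zone token before parsing, so a single format string covers both cases. (This ignores the one-hour difference between CST and CDT, which should not matter when only ordering by recency within one zone.)

        MAX( STR_TO_DATE(
               REPLACE(REPLACE(table.Timestamp, ' CST ', ' '), ' CDT ', ' '),
               '%a %b %d %H:%i:%s %Y' ) )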

  • Storing the records from a DataTable in a CSV file

    - by Harikrishna
    I have a DataTable and I am displaying its values in a DataGridView with the help of this code:

        dataGridView1.ColumnCount = TableWithOnlyFixedColumns.Columns.Count;
        dataGridView1.RowCount = TableWithOnlyFixedColumns.Rows.Count;
        for (int i = 0; i < dataGridView1.RowCount; i++)
        {
            for (int j = 0; j < dataGridView1.ColumnCount; j++)
            {
                dataGridView1[j, i].Value = TableWithOnlyFixedColumns.Rows[i][j].ToString();
            }
        }
        TableExtractedFromFile.Clear();
        TableWithOnlyFixedColumns.Clear();

    Now I want to save the records in the DataTable to a CSV file. How can I do that?
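
    A minimal sketch of writing a DataTable to CSV (no quoting of embedded commas or newlines; add escaping if the data can contain them):

        using System.Data;
        using System.IO;

        static void WriteCsv(DataTable table, string path)
        {
            using (StreamWriter writer = new StreamWriter(path))
            {
                string[] fields = new string[table.Columns.Count];

                // Header row from the column names.
                for (int i = 0; i < table.Columns.Count; i++)
                    fields[i] = table.Columns[i].ColumnName;
                writer.WriteLine(string.Join(",", fields));

                // One line per data row.
                foreach (DataRow row in table.Rows)
                {
                    for (int i = 0; i < table.Columns.Count; i++)
                        fields[i] = row[i].ToString();
                    writer.WriteLine(string.Join(",", fields));
                }
            }
        }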

  • Length of an HTMLObjectCollection is incorrect in Internet Explorer

    - by Mayank Gupta
    I have three cells in different rows of a table, all having the same name, e.g. <td name = "x"> is present in 3 different rows. I am using document.getElementsByName() to obtain a collection of these cells and trying to read the length of that collection, e.g.:

        var obj = document.getElementsByName("X");
        var length = obj.length;

    This code works fine in Google Chrome, but in IE the length is returned as 0 (zero). Can anyone tell me how to solve this problem in IE?
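
    A likely cause: older IE implements getElementsByName only for elements where name is a standard HTML attribute (form controls, anchors, and so on), so a <td> never matches. A sketch of a cross-browser fallback that scans the cells and filters on the attribute by hand (note the value is matched exactly, so "x" here must match the case used in the markup):

        var matches = [];
        var tds = document.getElementsByTagName('td');
        for (var i = 0; i < tds.length; i++) {
            if (tds[i].getAttribute('name') === 'x') {
                matches.push(tds[i]);
            }
        }
        var length = matches.length;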

  • How to add indexes to MySQL tables?

    - by Michael
    I've got a very large MySQL table with about 150,000 rows of data. Currently, when I run

        SELECT * FROM table WHERE id = '1';

    the query runs fine, as the ID field is the primary index. However, recently, for a development in the project, I have to search the database by another field. For example:

        SELECT * FROM table WHERE product_id = '1';

    This field was not previously indexed. I've since added an index on it, but when I run the above query the result is very slow. An EXPLAIN query reveals that no index is used for the product_id field, even though I've added one, and as a result the query takes anywhere from 20 to 30 minutes to return a single row. EDIT: My full EXPLAIN results are:

        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | table | ALL  | NULL          | NULL | NULL    | NULL | 157211 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
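
    The EXPLAIN output shows possible_keys = NULL, meaning the optimizer does not even see a usable index, so the first things to verify are that the index really exists and that the statistics are current. A sketch:

        SHOW INDEX FROM `table`;                                    -- is product_id listed?
        ALTER TABLE `table` ADD INDEX idx_product_id (product_id);  -- if not, add it
        ANALYZE TABLE `table`;                                      -- refresh statistics
        EXPLAIN SELECT * FROM `table` WHERE product_id = 1;         -- re-check the plan

    One more thing worth checking: if product_id is a character column, quote the literal; the reverse mismatch (a string column compared against a bare number) prevents index use and forces a full scan.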

  • importing data using get or create - identity error 1062

    - by hamackey
    I am importing data from an MSSQL database into MySQL. It works, except when it encounters the id of a previous entry. id is unique. I need to get entries that already exist so that they can be placed in the work for that day. The error is:

        IntegrityError: (1062, "Duplicate entry '001355338' for key 2")

    This entry is already in the database. I need it entered for that day, but cannot have it added to the table again; it is already there.

        def handle(self, *args, **options):
            #patients_local = Patient.objects.all()
            #attendings_local = Attending.objects.all()
            connection = pyodbc.connect("XXXXXXXXXXX")
            cursor = connection.cursor()
            cursor.execute(COMMAND)
            rows = cursor.fetchall()
            for row in rows:
                # get_or_create returns (object, boolean)
                p, created = Patient.objects.get_or_create(
                    first_name = row.Firstname,
                    middle_name = '',
                    last_name = row.Lastname,
                    id = row.id,
                )
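
    One possible fix, as a sketch: get_or_create uses every keyword argument in the lookup, so if any of the name fields differ from what is stored, the lookup misses and the subsequent create collides with the existing id. Matching on the unique key alone and passing the rest in defaults avoids that:

        # defaults are applied only when a new row actually has to be created
        p, created = Patient.objects.get_or_create(
            id=row.id,
            defaults={
                'first_name': row.Firstname,
                'middle_name': '',
                'last_name': row.Lastname,
            },
        )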

  • Improved way to build nested array of unique values in javascript

    - by dualmon
    The setup: I have a nested HTML table structure that displays hierarchical data, and the individual rows can be hidden or shown by the user. Each row has a DOM id comprised of the level number plus the primary key for the record type on that level. I have to have both, because each level is from a different database table, so the primary key alone is not unique in the DOM. Example: id="level-1-row-216". I am storing the levels and rows of the visible elements in a cookie, so that when the page reloads, the rows the user had open can be shown again automatically. I don't store the full map of DOM ids, because I'm concerned about it getting too verbose and I want to keep my cookie under 4Kb. So I convert the DOM ids to a compact JSON object like this, with one property per level and a unique array of primary keys under each level:

        { 1:[231,432,7656], 2:[234,121], 3:[234,2], 4:[222,423], 5:[222] }

    With this structure stored in a cookie, I feed it to my show function and restore the user's previous disclosure state when the page loads. The area for improvement: I'm looking for a better option for reducing my map of id selectors down to this compact format. Here is my function:

        function getVisibleIds(){
            // example dom id: level-1-row-216-sub
            var ids = $("tr#[id^=level]:visible").map(function() {
                return this.id;
            });
            var levels = {};
            for (var i in ids) {
                var id = ids[i];
                if (typeof id == 'string') {
                    if (id.match(/^level/)) {
                        // here we extract the number for level and row
                        var level = id.replace(/.*(level-)(\d*)(.*)/, '$2');
                        var row = id.replace(/.*(row-)(\d*)(.*)/, '$2');
                        // *** Improvement here? ***
                        // This works, but it seems klugy. In PHP it's one line (see below):
                        if (levels.hasOwnProperty(level)) {
                            if ($.inArray(parseInt(row, 10), levels[level]) == -1) {
                                levels[level].push(parseInt(row, 10));
                            }
                        } else {
                            levels[level] = [parseInt(row, 10)];
                        }
                    }
                }
            }
            return levels;
        }

    If I were doing it in PHP, I'd build the compact array like this, but I can't figure it out in JavaScript:

        foreach ($ids as $id) {
            if (/* the criteria */) {
                $level = /* extract it from $id */;
                $row = /* extract it from $id */;
                $levels[$level][$row];
            }
        }
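
    A tighter accumulator, as a sketch: create each level's array on first sight and push only unseen rows, which replaces the hasOwnProperty/inArray branching above:

        function addToLevels(levels, level, row) {
            // Reuse the existing array, or create it on first sight.
            var list = levels[level] || (levels[level] = []);
            if ($.inArray(row, list) === -1) {
                list.push(row);
            }
        }

        // inside the loop:
        // addToLevels(levels, level, parseInt(row, 10));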

  • How to recalculate primary index?

    - by JohnM2
    I have a table in a MySQL database with an auto-increment PRIMARY KEY. On a regular basis, rows in this table are deleted and added, so the result is that the PK value of the latest row grows very fast even though there are not that many rows in the table. What I want to do is to "recalculate" the PK so that the first row has PK = 1, the second PK = 2, and so on. There are no external dependencies on the PK of this table, so it would be "safe". Is there any way it can be done using only MySQL queries/tools, or do I have to do it from my code?
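
    It can be done in plain MySQL; a sketch (my_table is a placeholder, and this is only safe because, as stated, nothing references the PK). Renumbering in ascending id order never collides, since each new id is less than or equal to the old one:

        SET @n := 0;
        UPDATE my_table SET id = (@n := @n + 1) ORDER BY id;
        ALTER TABLE my_table AUTO_INCREMENT = 1;   -- next insert continues from the new top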

  • BIRT PDF report: fixed table height?

    - by kiwifrog
    Hi, I am trying to display a table of products (rows) on a single fixed-layout A4 page. I managed to add a table with header/detail/footer sections, but I cannot set a minimum height for the detail section (150mm, for example). If I set a 150mm height on the detail row, then each row will have that 150mm height, whereas I would like each row to have a minimal height (it could run to several lines if the content of some columns wraps).

                +---------+--------+--------------+
        Tbl Hdr | col1    | col2   | col3         |
                +---------+--------+--------------+
        Tbl Dtl | [val1]  | [val2] | [val3]       |
                +---------+--------+--------------+
                |         |        |              | <- should have a variable height
                +---------+--------+--------------+
        Tbl Ftr |         |        | Total        |
                +---------+--------+--------------+

    If I set no height on the detail row, then the footer comes right beneath the detail rows instead of sticking to the bottom of the page :-( I hope this makes sense (if not, I can provide more details). Any help would be greatly appreciated. Thanks in advance.

  • Simplest way to create a wrapper class around some strings for a WPF DataGrid?

    - by Joel
    I'm building a simple hex editor in C#, and I've decided to use each cell in a DataGrid to display a byte*. I know that the DataGrid will take a list and display each object in the list as a row, and each of that object's properties as columns. I want to display rows of 16 bytes each, which would require a wrapper with 16 string properties. While doable, that's not the most elegant solution. Is there an easier way? I've already tried creating a wrapper around a public string array of size 16, but that doesn't seem to work. Thanks

    *The rationale for this is that I can have spaces between each byte without having to strip them all out when I want to save my edited file. Also it seems like it'll be easier to label the rows and columns.
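
    One avenue worth trying, sketched as an assumption rather than a verified recipe: WPF bindings understand indexer paths like "[0]", so a thin wrapper exposing an indexer over the 16-slot array can stand in for 16 named properties, with the columns generated in a loop instead of auto-generated:

        public class HexRow
        {
            private readonly string[] bytes = new string[16];
            public string this[int i]
            {
                get { return bytes[i]; }
                set { bytes[i] = value; }
            }
        }

        // In the window's code-behind, instead of AutoGenerateColumns:
        for (int i = 0; i < 16; i++)
        {
            grid.Columns.Add(new DataGridTextColumn
            {
                Header = i.ToString("X"),
                Binding = new System.Windows.Data.Binding("[" + i + "]")
            });
        }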

  • In PHP + MySQL, How do I join many tables with conditions

    - by Moe
    Hi, I'm trying to get a user's full activity throughout the website. I need to join many tables throughout the database, with the condition that it is one user. What I currently have written is:

        SELECT *
        FROM comments AS c
        JOIN rphotos AS r
          ON c.userID = r.userID
         AND c.userID = '$defineUserID';

    But what it returns is everything about the user, with repeated rows. For instance, one user has 6 photos and 5 comments, so I expect the join to return 11 rows. Instead it returns 30 results, like so:

        PhotoID = 1; CommentID = 1;
        PhotoID = 1; CommentID = 2;
        PhotoID = 1; CommentID = 3;

    and so on... What am I doing wrong?
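
    The 30 rows are the cross product: joining two independent child tables on userID pairs every photo with every comment (6 x 5 = 30). To get 11 stacked rows instead, one option is UNION ALL; a sketch, where the commentID/photoID column names are assumptions based on the output shown:

        SELECT 'comment' AS activity, commentID AS itemID, userID
        FROM comments
        WHERE userID = '$defineUserID'
        UNION ALL
        SELECT 'photo' AS activity, photoID AS itemID, userID
        FROM rphotos
        WHERE userID = '$defineUserID';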

  • MySQL query optimization.

    - by PiKey
    I'm so bad at writing good MySQL queries. I've created this one: http://pastebin.com/GtDfgky8 The products table has about 17k rows, the allegro table about 3k rows. The idea of the query is: select all products where stock_quanity 3, where there is a photo, and where the product id is not in the allegro table. Right now the query takes about 10 seconds. I have no idea how to optimize it. Please help me, I'll be thankful! :) Sorry for my bad English as well.
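
    Without the pastebin contents, only a guess is possible, but "all products not present in another table" is usually fastest as a LEFT JOIN anti-join with an index on the joined column. A sketch with assumed column names (id, product_id, photo), and with an assumed comparison, since the operator on the stock condition did not survive above:

        SELECT p.*
        FROM products p
        LEFT JOIN allegro a ON a.product_id = p.id   -- an index on allegro(product_id) helps
        WHERE a.product_id IS NULL                   -- not listed in allegro
          AND p.photo <> ''                          -- has a photo
          AND p.stock_quanity > 3;                   -- stock condition (operator assumed)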

  • Python/Numpy - Save Array with Column AND Row Titles

    - by Scott B
    I want to save a 2D array to a CSV file with row and column "header" information (like a table). I know that I could use the header argument to numpy.savetxt to save the column names, but is there any easy way to also include some other array (or list) as the first column of data (like row titles)? Below is an example of how I currently do it. Is there a better way to include those row titles, perhaps some trick with savetxt I'm unaware of?

        import csv
        import numpy as np

        data = np.arange(12).reshape(3, 4)
        # Add a '' for the first column because the row titles go there...
        cols = ['', 'col1', 'col2', 'col3', 'col4']
        rows = ['row1', 'row2', 'row3']

        with open('test.csv', 'wb') as f:
            writer = csv.writer(f)
            writer.writerow(cols)
            for row_title, data_row in zip(rows, data):
                writer.writerow([row_title] + data_row.tolist())
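
    One savetxt-only variant, as a sketch: prepend the row titles as a string column and write the whole block with fmt='%s' (the header argument needs NumPy 1.7 or newer):

        import numpy as np

        data = np.arange(12).reshape(3, 4)
        rows = np.array(['row1', 'row2', 'row3'])

        # Stack titles and stringified data into one (3, 5) array of strings.
        block = np.column_stack([rows, data.astype(str)])
        np.savetxt('test.csv', block, fmt='%s', delimiter=',',
                   header=',col1,col2,col3,col4', comments='')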

  • Any way to avoid a filesort when order by is different to where clause?

    - by Julian
    I have an incredibly simple query (table type InnoDB), and EXPLAIN says that MySQL must do an extra pass to find out how to retrieve the rows in sorted order.

        SELECT * FROM `comments`
        WHERE (commentable_id = 1976)
        ORDER BY created_at DESC
        LIMIT 0, 5

    Exact EXPLAIN output:

        table     select_type  type  extra                        possible_keys   key             key_len  ref    rows
        comments  simple       ref   using where; using filesort  common_lookups  common_lookups  5        const  89

    commentable_id is indexed. Comments has nothing tricky in it, just a content field. The manual suggests that if the ORDER BY differs from the WHERE, there is no way the filesort can be avoided: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html I also tried ORDER BY id as well as its equivalent, but it makes no difference, even if I add id as an index (which I understand is not required, as id is indexed implicitly in MySQL). Thanks in advance for any ideas!
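
    One thing that can avoid the filesort here, sketched: a composite index whose leading column matches the WHERE equality and whose second column matches the ORDER BY, so InnoDB can read the five rows already in created_at order:

        ALTER TABLE comments
          ADD INDEX idx_commentable_created (commentable_id, created_at);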

  • Eliminate duplicates in SQL query

    - by ewdef
    I have a table with 6 fields; the columns are ID, New_ID, Price, Title, Img, Active. I have data which is duplicated across the New_ID column. When I do a select, I want to show only distinct rows where New_ID is not repeated. E.g.:

        ID  New_ID  Price  Title   Img     Active
        1   1       20.00  PA-1    0X4...  1
        2   1       10.00  PA-10   0X4...  1
        3   3       20.00  PA-11   0X4...  1
        4   4       30.00  PA-5    0X4...  1
        5   9       20.00  PA-99A  0X4...  1
        6   3       50.00  PA-55   0X4...  1

    When the select statement runs, only the rows with ID (1, 4, 9, 6) should show, the reason being that the New_ID with the higher price should show up. How can I do this?
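
    This is the classic greatest-value-per-group problem; a sketch (my_table is a placeholder, and a tie on the top price within one New_ID would return both rows):

        SELECT t.*
        FROM my_table t
        JOIN (
            SELECT New_ID, MAX(Price) AS max_price
            FROM my_table
            GROUP BY New_ID
        ) m ON m.New_ID = t.New_ID AND m.max_price = t.Price;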

  • Inserting asynchronously into Oracle, any benefits?

    - by Karl Trumstedt
    I am using ODP.NET for loading data into Oracle, batching my inserts into groups of 1000 rows per call. Is there any performance benefit in calling my load method asynchronously? Say I want to insert 10,000 rows; instead of making 10 calls synchronously, I make 10 calls asynchronously. My database is using ASSM right now, but otherwise plenty of freelists would be used, of course. The database server has several cores as well. My initial tests seem to point to a performance increase, but maybe there is something I cannot see? Potential deadlock or contention issues? Of course, there is added complexity in handling transactions and such when doing the load this way.

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources, and binding/exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows is not inconceivable. The application and the benchmark code are written in C# against Oracle and SQL Server databases. I still have some data sources to test, but would like to know how to get around this just in case I don't find a better solution.

  • How to retrieve large data from oracle database using vbscript

    - by allenzzzxd
    Hi guys, I'm working in VBScript on some tests. I want to retrieve a large amount of data from an Oracle database, so I wrote code like this:

        sql = "Select * from CORE_DB where MC = '" & mstr & "' "
        Set myrs = db_execute_query(curConnection, sql)

    Then I count the rows in myrs; there are 248 rows. So then I run a For loop to retrieve some fields of each row:

        For k = 0 To db_get_rows_count(myrs)

    But then I found that the content of row k when k 133 was always equal to that of k = 133. So this causes an error. As I see it, there may be a size limit on myrs? Could anyone enlighten me about this? Thanks a lot in advance
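
    Since db_execute_query and db_get_rows_count are local helpers not shown here, one way to rule them out, as a sketch, is to walk the recordset with plain ADO and its own EOF flag rather than a pre-computed count (note also that For k = 0 To count runs count + 1 iterations if the helper returns the row count):

        Dim rs
        Set rs = CreateObject("ADODB.Recordset")
        rs.Open sql, curConnection
        Do While Not rs.EOF
            ' Read the fields of the current row, e.g. rs.Fields("MC").Value
            rs.MoveNext
        Loop
        rs.Close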

  • How can i use listDictionary?

    - by Phsika
    I can fill my ListDictionary, but at runtime an InvalidOperationException is thrown at the line foreach (string ky in ld.Keys). Error detail: the collection was modified after the enumerator was created. C#:

        ListDictionary ld = new ListDictionary();
        foreach (DataColumn dc in dTable.Columns)
        {
            MessageBox.Show(dTable.Rows[0][dc].ToString());
            ld.Add(dc.ColumnName, dTable.Rows[0][dc].ToString());
        }
        foreach (string ky in ld.Keys)
            if (int.TryParse(ld[ky].ToString(), out QuantityInt))
                ld[ky] = "integer";
            else if (double.TryParse(ld[ky].ToString(), out QuantityDouble))
                ld[ky] = "double";
            else
                ld[ky] = "nvarchar";
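
    The enumerator over ld.Keys is invalidated by the ld[ky] = ... assignments inside the loop. A sketch of the usual fix: snapshot the keys first, then iterate over the snapshot:

        // ListDictionary.Keys supports CopyTo, so snapshot the keys up front.
        string[] keys = new string[ld.Count];
        ld.Keys.CopyTo(keys, 0);
        foreach (string ky in keys)
        {
            if (int.TryParse(ld[ky].ToString(), out QuantityInt))
                ld[ky] = "integer";
            else if (double.TryParse(ld[ky].ToString(), out QuantityDouble))
                ld[ky] = "double";
            else
                ld[ky] = "nvarchar";
        }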
