Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Hibernate deletion issue

    - by muffytyrone
    I'm trying to write a Java app that imports a data file. The process is as follows:

        1. Create a transaction
        2. Delete all rows from the data table
        3. Load the data file into the data table
        4. Commit, OR rollback if any errors were encountered

    The data loaded in step 3 is mostly the same as the data deleted in step 2. The deletion is performed using the following:

        DetachedCriteria criteria = DetachedCriteria.forClass(myObject.class);
        List<myObject> myObjects = hibernateTemplate.findByCriteria(criteria);
        hibernateTemplate.deleteAll(myObjects);

    When I then load the data file, I get the following exception:

        nested exception is org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session

    The whole process needs to take place in one transaction, and I don't really want to have to compare the import file against the data table and then perform an insert/update/delete to get them into sync. Any help would be appreciated.
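    One common workaround (a sketch, not a confirmed fix for this exact setup) is to flush and clear the session right after the bulk delete, so the deleted instances are no longer associated with the session when rows with the same identifiers are re-imported:

        hibernateTemplate.deleteAll(myObjects);
        hibernateTemplate.flush();  // push the DELETEs to the database now
        hibernateTemplate.clear();  // detach every instance cached in the session
        // loading the data file afterwards saves rows with the same ids
        // without colliding with session-cached copies; the surrounding
        // transaction still commits or rolls back as one unit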

    Read the article

  • Fulltext and composite indexes and how they affect the query

    - by Brett
    Just say I had a query as below:

        SELECT name, category, address, city, state
        FROM table
        WHERE MATCH(name, subcategory, category, tag1) AGAINST('education')
          AND city = 'Oakland'
          AND state = 'CA'
        LIMIT 0, 10;

    ...and I had a fulltext index on (name, subcategory, category, tag1) and a composite index on (city, state); is this good enough for this query? Just wondering if something extra is needed when mixing additional ANDs with MATCH/AGAINST. Edit: What I am trying to understand is what happens with the columns that appear in the query but are not part of the chosen index (the fulltext index); in the example above, city and state. How does MySQL find the matching rows for these, since it can't use two indexes (or can it?). So, basically, I'm trying to understand how MySQL optimally finds the data for the columns NOT in the chosen fulltext index, and whether there is anything I can or should do to optimize the query.
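    As far as I know, MySQL will pick the fulltext index for the MATCH clause and then filter those hits against city and state row by row, so the composite index goes unused in this query. The quickest way to confirm what the optimizer actually chose (a diagnostic step, not a fix) is:

        EXPLAIN SELECT name, category, address, city, state
        FROM table
        WHERE MATCH(name, subcategory, category, tag1) AGAINST('education')
          AND city = 'Oakland' AND state = 'CA'
        LIMIT 0, 10;
        -- the "key" column shows which single index was selected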

    Read the article

  • Including associations optimization in Rails

    - by Vitaly
    Hey, I'm looking for help with Ruby optimization regarding loading of associations on demand. This is a simplified example. I have 3 models: Post, Comment, User. The references are: Post has many Comments, and Comment has a reference to User (:author). Now when I go to the post page, I expect to see the post body plus all comments (and their respective authors' names). This requires the following 2 queries:

        select * from Post -- to get post data (1 row)
        select * from Comment inner join User -- to get comments + usernames (N rows)

    In the code I have:

        Post.find(params[:id], :include => { :comments => [:author] })

    But it doesn't work as expected: as I see in the back end, there are still N+1 hits (some of them are cached though). How can I optimize that?
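    One thing worth checking (an assumption about the cause, since the view code isn't shown): eager loading only avoids the N+1 queries if the view walks exactly the association that was preloaded.

        # preloads the comments and their authors in a handful of queries
        @post = Post.find(params[:id], :include => { :comments => :author })
        # using the preloaded association triggers no further SQL...
        @post.comments.each { |c| puts c.author.name }
        # ...but calling a different association (say, c.user) or re-querying
        # with post.comments.find(...) hits the database once per comment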

    Read the article

  • Updating a table takes a very long time

    - by rrejc
    Hi all, I have a table in MS SQL Server 2008 (SP2) containing 30 million rows, table size 150GB. There are a couple of int columns and two nvarchar(max) columns: one containing text (1-30000 characters) and one containing XML (up to 100000 characters). The table doesn't have any primary keys or indexes (it is a staging table). At the moment I am running this query:

        UPDATE [dbo].[stage_table]
        SET [column2] = SUBSTRING([column1], 1, CHARINDEX('.', [column1]) - 1);

    The query has been running for 3 hours (and is still not complete), which I think is too long. Is it? I can see a constant read rate of 5MB/s and a write rate of 10MB/s to the .mdf file. How can I find out why the query is running so long? The "server" is an i7 with 24GB of RAM and SATA disks in RAID 10. Many thanks!
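    A single statement like this updates all 30 million rows in one transaction, so the log and undo work grow with the whole table. A common mitigation (a sketch; it assumes column2 starts out NULL and that every column1 contains a '.') is to batch the update:

        WHILE 1 = 1
        BEGIN
            UPDATE TOP (10000) [dbo].[stage_table]
            SET [column2] = SUBSTRING([column1], 1, CHARINDEX('.', [column1]) - 1)
            WHERE [column2] IS NULL;
            IF @@ROWCOUNT = 0 BREAK;   -- stop once no rows remain to update
        END

    Each batch commits on its own, which keeps the transaction log small and makes progress observable. Querying sys.dm_exec_requests while the statement runs also shows whether it is waiting on I/O or on locks.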

    Read the article

  • Any way to optimize this MySQL query?

    - by manyxcxi
    My table looks like this:

        `MyDB`.`Details` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL
        );

    I have the following SELECT statement in a stored procedure:

        SELECT RULE, TITLE,
               SUM(IF(t.PASSED = 'Y', 1, 0)) AS PASS,
               SUM(IF(t.PASSED = 'N', 1, 0)) AS FAIL
        FROM (
            SELECT a.line_order,
                   MAX(CASE WHEN a.element_name = 'PASSED' THEN a.`value` END) AS PASSED,
                   MAX(CASE WHEN a.element_name = 'RULE' THEN a.`value` END) AS RULE,
                   MAX(CASE WHEN a.element_name = 'TITLE' THEN a.`value` END) AS TITLE
            FROM Details a
            WHERE run_id = runId
            GROUP BY line_order
        ) t
        GROUP BY RULE, TITLE;

    (runId is an input parameter to the stored procedure.) This query takes about 14 seconds to run. The table has 214856 rows, and the particular run_id I am filtering on has 162204 records. It's not a super high powered machine, but I feel like I could be doing this more efficiently. My main goal is to summarize by RULE and TITLE and show PASS and FAIL count columns.
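    Since the table definition shows no indexes, the inner query has to scan everything and sort for the GROUP BY. A covering index matching the filter and grouping order is the usual first step (a suggestion to verify with EXPLAIN, not a guaranteed fix):

        ALTER TABLE `MyDB`.`Details`
            ADD INDEX `idx_run_line_elem` (`run_id`, `line_order`, `element_name`);

    Because `value` is a TEXT column it can't be part of the index, so MySQL will still touch the rows, but the run_id filter and the GROUP BY line_order should no longer require a full scan plus filesort.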

    Read the article

  • Fetch Max from a date column grouped by a particular field

    - by vamyip
    Hi, I have a table similar to this:

        LogId  RefId  Entered
        ==========================
        1      1      2010-12-01
        2      1      2010-12-04
        3      2      2010-12-01
        4      2      2010-12-06
        5      3      2010-12-01
        6      1      2010-12-10
        7      3      2010-12-05
        8      4      2010-12-01

    Here, LogId is unique; for each RefId there are multiple entries, each with a timestamp. What I want to extract is the LogId of the latest entry for each RefId. I tried the solutions from this link: http://stackoverflow.com/questions/121387/sql-fetch-the-row-which-has-the-max-value-for-a-column, but they return multiple rows with the same RefId. Can someone help me with this? Thanks Vamyip
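    Multiple rows per RefId come back whenever two entries share the same maximum date, so the tie has to be broken explicitly. A sketch of the usual greatest-n-per-group join with a tie-breaker on LogId (the table name is assumed):

        SELECT MAX(t.LogId) AS LogId, t.RefId, t.Entered
        FROM LogTable t
        JOIN (SELECT RefId, MAX(Entered) AS MaxEntered
              FROM LogTable
              GROUP BY RefId) m
          ON m.RefId = t.RefId AND t.Entered = m.MaxEntered
        GROUP BY t.RefId, t.Entered;
        -- MAX(t.LogId) picks one deterministic row when the dates tie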

    Read the article

  • Powershell Select-Object from array not working

    - by Andrew
    I am trying to separate values in an array so I can pass them to another function. I am using Select-Object within a for loop to go through each line and separate the timestamp and value fields. However, no matter what I do, the code below only displays the first Select-Object variable for each line. The second Select-Object command doesn't seem to work, as my output is a blank line for each of the 6 rows. Any ideas on how to get both values?

        $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) )

        ### Allocate a new encoder and turn the byte array into a string
        $ASCII = New-Object -TypeName System.Text.ASCIIEncoding
        $csvdata = $ASCII.GetString($ReportData[0].statistic_data)
        $csv2 = ConvertFrom-Csv $csvdata

        $newarray = $csv2 | Where-Object { $_.utilization -ne "0.0000000000e+00" -and $_.utilization -ne "nan" }

        for ($n = 0; $n -lt $newarray.Length; $n++) {
            $nTime = $newarray[$n]
            $nUtil = $newarray[$n]
            $util = $nUtil | Select-Object Utilization
            $util
            $tstamp = $nTime | Select-Object timestamp
            $tstamp
        }
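    Select-Object returns a wrapper object with the named property rather than the raw value, and when objects with different property sets are written to the same output stream, PowerShell's formatter keeps the first object's columns, which is why the second value renders as a blank line. A sketch of the fix (assuming the CSV columns are named utilization and timestamp): unwrap the values with -ExpandProperty or plain property access.

        foreach ($row in $newarray) {
            $util   = $row | Select-Object -ExpandProperty utilization
            $tstamp = $row.timestamp          # direct property access works too
            "$tstamp : $util"                 # both values, one line per row
        }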

    Read the article

  • HTML table with fixed cells size

    - by misha-moroshko
    I have an HTML table with equally divided rows and columns. I would like each cell to have a fixed size, say 40px wide and 30px high. When I change the size of the browser window, the cell size changes too. How can I prevent that? I would expect to see scroll bars if the browser window becomes too small. Is it right to set the height and width of the cells in pixels? Thanks!
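    Pixel sizes on the cells are fine, but the table also has to be told not to renegotiate its layout. A minimal sketch (the class name is assumed):

        table.fixed { table-layout: fixed; border-collapse: collapse; width: 320px; }
        table.fixed td { width: 40px; height: 30px; overflow: hidden; }

    With table-layout: fixed and an explicit table width (8 columns x 40px here), the browser stops shrinking cells to fit the window, and the page scrolls instead.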

    Read the article

  • How can we block the user from unchecking a DataGridView checkbox?

    - by hawbsl
    We have a DataGridViewCheckBox column bound to a boolean property in our class. The property setter has some logic which says that under certain conditions a True flag cannot be changed, i.e. it stays checked forever. This is on a per-record basis, so the entire column can't be read-only, only certain rows. Pseudo code:

        Public Property Foo() As Boolean
            Get
                Return _Foo
            End Get
            Set(ByVal value As Boolean)
                If _Foo And Bar And value = False Then
                    'do nothing; in this scenario, once you're true, you stay true
                Else
                    _Foo = value
                End If
            End Set
        End Property

    Databinding handles all of this fine, except that the checkbox is visibly cleared when it's clicked. Then, of course, when the binding / setter fires (as you move off that cell) it is restored to its checked status per the underlying logic. Ultimately it doesn't matter too much, but it's a clumsy bit of UI. How can we intercept the user's click and keep it checked?
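    One way to stop the visible flicker (a sketch; the grid, column, and type names are assumed) is to cancel the edit before it starts, so the checkbox never toggles on screen:

        Private Sub grid_CellBeginEdit(ByVal sender As Object, _
                ByVal e As DataGridViewCellCancelEventArgs) Handles grid.CellBeginEdit
            If grid.Columns(e.ColumnIndex).Name = "FooColumn" Then
                Dim item = CType(grid.Rows(e.RowIndex).DataBoundItem, MyRecord)
                ' rows whose flag is locked never enter edit mode at all
                If item.Foo AndAlso item.Bar Then e.Cancel = True
            End If
        End Sub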

    Read the article

  • Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

    - by Tom
    From a script I sent a query like this thousands of times to my local database:

        update some_table set some_column = some_value

    I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times. The column was indexed, so the corresponding index was probably updated lots of times too. I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table: simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:

        Lock wait timeout exceeded; try restarting transaction

    It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
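    The usual diagnosis (standard MySQL commands, though the output takes some reading) is to find which connection still holds the lock and kill it:

        SHOW ENGINE INNODB STATUS;  -- the TRANSACTIONS section lists lock holders
        SHOW PROCESSLIST;           -- note the id of the offending connection
        KILL 12345;                 -- replace 12345 with that id

    If the lock survives even that, restarting mysqld forces InnoDB crash recovery, which rolls back any uncommitted transaction left over from the killed script.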

    Read the article

  • Design ideas for a versioned db schema with related tables also versioned

    - by vfilby
    Here is the drill: I want to version a database. I have done this before using multiple rows, where the table's primary key becomes a combination of the row id and either a datestamp or a version #. Now I want to version a table that depends on many other small tables. Versioning each table will be a giant PITA, so I am looking for good options to version a schema where the data to be versioned spreads over multiple tables. All related tables are properly keyed with foreign key relationships. The database is currently on SQL Server 2005.
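    One pattern that avoids per-table version bookkeeping (a sketch of one option, not the only design) is a single revision table that every versioned row points at, so a "version" becomes a consistent slice across all the related tables:

        CREATE TABLE Revision (
            RevisionId INT IDENTITY(1,1) PRIMARY KEY,
            CreatedAt  DATETIME NOT NULL DEFAULT GETDATE()
        );
        -- each versioned table then keys on (RowId, RevisionId), and its
        -- foreign keys reference the matching (RowId, RevisionId) pair

    Reading "the schema as of revision N" is then one predicate applied uniformly, instead of a different datestamp rule per table.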

    Read the article

  • How to create the following table using MDX scripting in SQL Server 2005?

    - by Itsgkiran
    Hi! I have the following database table:

        BatchID   BatchName   Chemical   Value
        --------------------------------------
        BI-1      BN-1        CH-1       1
        BI-2      BN-2        CH-2       2

    I need to display the following:

                 BI-1   BI-2
                 BN-1   BN-2
        ------------------------
        CH-1     1      null
        CH-2     null   2

    Here BI-1 and BN-1 are two header rows in a single column, and I need to display the chemical's value in the matching row. Could you please help me solve this problem? I tried a pivot table but was unable to get this, so is there any chance in Reporting Services / MDX? This is high priority for me. Thank you in advance.
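    If the data lives in a relational table rather than a cube, plain T-SQL PIVOT (available in SQL Server 2005) already produces this shape; a sketch, with the table name assumed:

        SELECT Chemical, [BI-1], [BI-2]
        FROM (SELECT BatchID, Chemical, Value FROM BatchChemical) src
        PIVOT (MAX(Value) FOR BatchID IN ([BI-1], [BI-2])) p;
        -- missing combinations come back as NULL, matching the desired layout

    The two-line header (batch id over batch name) would then be a formatting concern for the report rather than the query.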

    Read the article

  • How to facet multiple columns in Google Refine

    - by banjanxed
    I have a data set with 30 columns and multiple rows (some cells have no data). I would like to be able to facet the columns in groups.

             1   2   3   4 ...
        Row1 A   B   C   D
        Row2 E   A   D   F
        Row3 Q   A   B   H

    Given the above data, I would like the facet to return the number of instances of each value across a group of columns. For the first three columns I need the facet to return:

        A - 3
        B - 2
        C - 1
        D - 1
        E - 1
        Q - 1

    I tried combining the columns when I loaded the data, but then the values were grouped as whole strings, which is not the desired outcome. For example:

        ABC - 1
        EAD - 1
        QAB - 1

    Thanks in advance.
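    A custom text facet whose expression returns an array counts each element as its own facet choice (the same behaviour people rely on with value.split(...) for multi-valued cells). A sketch for the first three columns, assuming they are literally named "1", "2" and "3" and that no value itself contains a comma:

        (cells["1"].value + "," + cells["2"].value + "," + cells["3"].value).split(",")

    Entered under Facet > Custom text facet on any column, this should give the per-value counts (A - 3, B - 2, ...) without merging each row's values into one string.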

    Read the article

  • Oracle, slow performance when using sub select

    - by Wyass
    I have a view that is very slow if you fetch all rows, but if I select a subset (providing an ID in the WHERE clause) the performance is very good. I cannot hardcode the ID, so I create a sub-select to get the ID from another table. The sub-select only returns one ID. Now the performance is very slow, and it seems like Oracle is evaluating the whole view before applying the WHERE clause. Can I somehow help Oracle so that queries 2 and 3 have the same performance? I'm using Oracle 10g.

        -- 1: slow
        select * from ci.my_slow_view;
        -- 2: fast
        select * from ci.my_slow_view where id = 1;
        -- 3: slow
        select * from ci.my_slow_view where id in (select id from active_ids);
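    With an IN subquery the optimizer no longer sees a literal it can push into the view, so it may evaluate the whole view first. One rewrite worth trying (a sketch; whether the predicate actually gets pushed depends on what the view contains) is turning the subquery into a join:

        select v.*
        from ci.my_slow_view v
        join active_ids a on a.id = v.id;

    If that still runs slow, comparing the explain plans of queries 2 and 3, and looking at Oracle's PUSH_PRED hint, would be the next steps.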

    Read the article

  • Writing OLAP SQL query

    - by user1859596
    I have a project I am working on that requires the following:

        1. Create a normalized sample RDBMS (5 tables).
        2. Using Java, enter 1 million rows of data into each table.
        3. Run two OLTP and two OLAP queries on the normalized tables.
        4. Denormalize the tables, run the same OLTP and OLAP queries on them, and compare the times.

    What does an OLAP query mean? I've searched the internet, and all I can find is that I have to make a cube and apply queries to it. How can I write an OLAP query on an RDBMS? Here is my sample normalized schema (orders, product, customer, branch, sales):

        sales    : order_id, product_id, quantity
        product  : product_id, name, description, price, sales_tax
        customer : customer_id, f_name, l_name, tel_no, addr, nic, city
        branch   : branch_id, name, tel_no, addr, city
        orders   : order_id, customer_id, order_date, branch_id

    I want to write an OLAP query on the above tables. I am using Oracle Express with SQL Developer.
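    In this context an "OLAP query" usually just means an analytical aggregate that scans and summarizes many rows, as opposed to an OLTP query that touches a few rows by key. A sketch against the schema above, using ROLLUP (supported by Oracle) to get subtotals per city and per branch plus a grand total:

        SELECT b.city, b.name AS branch, SUM(s.quantity * p.price) AS revenue
        FROM sales s
        JOIN orders o  ON o.order_id = s.order_id
        JOIN branch b  ON b.branch_id = o.branch_id
        JOIN product p ON p.product_id = s.product_id
        GROUP BY ROLLUP (b.city, b.name);

    By contrast, an OLTP query on the same schema would be something like fetching one customer's orders by customer_id.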

    Read the article

  • Oracle: delete suddenly taking a long time

    - by Damo
    Hi, we have a feed process which runs every day of the year. As part of it, every day we delete every row from a table (approx 1 million rows), repopulate it using 5 different stored procedures, and then commit the transaction. This is the only commit statement that we call. All of a sudden the delete has started taking about 2 hours to complete. The delete is also very simple:

        delete from T_PROFILE_WORK

    This has worked perfectly well for the past year, but in the past week I have noticed this issue. Any help on this is greatly appreciated. Thanks, Damien
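    Things that commonly explain a DELETE suddenly slowing down: another session holding locks on the table, a newly added unindexed foreign key referencing it (forcing a child-table scan per deleted row), or a new trigger. If the "delete everything" step does not actually need to be rollback-able, there is also a much faster option (note the changed semantics: TRUNCATE is DDL and commits implicitly, so it breaks the single-transaction design described above):

        TRUNCATE TABLE T_PROFILE_WORK;
        -- deallocates the data without generating undo for a million rows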

    Read the article

  • R webscraping: interrogating for date and importance

    - by adam.888
    I am able to webscrape a table from a webpage containing news:

        library(XML)
        webpage <- "http://www.tradingeconomics.com/calendar"
        tables <- readHTMLTable(webpage)
        n.rows <- unlist(lapply(tables, function(t) dim(t)[1]))
        dfcal <- as.data.frame(tables$calendar)

    However, I do not know how to query by date or by importance. For example, how could I webscrape news from Jan 2014? I am able to do this on the webpage by altering button settings, but how can I do it from within R? I was also not able to collect the importance column data. Also, are there better ways of collecting economic news from within R? I have looked on http://www.rseek.org/ but could not find anything. Thank you for your help.
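    The buttons on the page submit a form, so the same filtering can in principle be reproduced by posting the form fields from R. A sketch only: the field names below are placeholders, and the real ones have to be read out of the page's HTML (or from the request the browser sends, visible in its developer tools):

        library(RCurl)
        library(XML)
        # placeholder field names; inspect the actual form before using
        page <- postForm("http://www.tradingeconomics.com/calendar",
                         startDate = "2014-01-01",
                         endDate   = "2014-01-31")
        tables <- readHTMLTable(htmlParse(page, asText = TRUE))

    The importance rating is likely carried in a CSS class or icon rather than in table text, which would explain why readHTMLTable drops it; parsing with htmlParse and extracting the attribute with xpathSApply would be the way to recover it.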

    Read the article

  • How can I generate a random human-readable colour from a seed? C#

    - by SLC
    Got a logfile, and it has all kinds of text in it. Currently it is just displayed in one colour, and each entry says something like:

        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 17: Some text here
        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 1: Some text here
        Log from section 17: Some text here

    The logfile is displayed in real time, and it would be nice to make the rows with the same section number the same colour. However, there could potentially be quite a large range of numbers. What I want to do is create a method that takes a number and deterministically generates a unique colour. The colour must be readable against a black background, so #000000 is no good, nor is #101010 or anything too dark to read. Ideally two similar numbers will not produce similar colours, because in the above examples the numbers 1 and 17 might come out too close, and some numbers might be in the 10,000 range. Any ideas on this?
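    One approach (a sketch): hash the section number onto the hue wheel with the golden ratio, which scatters nearby numbers far apart, and keep brightness fixed high so every colour reads against black.

        using System;
        using System.Drawing;

        static class SectionColors
        {
            // golden-ratio multiplication spreads consecutive seeds across hues,
            // so sections 1 and 17 land far apart on the wheel
            public static Color FromSection(int section)
            {
                double hue = (section * 0.61803398875) % 1.0;
                return FromHsv(hue * 360.0, 0.65, 0.95); // bright, readable on black
            }

            // standard HSV-to-RGB conversion
            static Color FromHsv(double h, double s, double v)
            {
                double f = h / 60.0 - Math.Floor(h / 60.0);
                double p = v * (1 - s), q = v * (1 - f * s), t = v * (1 - (1 - f) * s);
                double r, g, b;
                switch ((int)(h / 60.0) % 6)
                {
                    case 0: r = v; g = t; b = p; break;
                    case 1: r = q; g = v; b = p; break;
                    case 2: r = p; g = v; b = t; break;
                    case 3: r = p; g = q; b = v; break;
                    case 4: r = t; g = p; b = v; break;
                    default: r = v; g = p; b = q; break;
                }
                return Color.FromArgb((int)(r * 255), (int)(g * 255), (int)(b * 255));
            }
        }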

    Read the article

  • Manipulate Excel workbooks programmatically

    - by Tom
    I have an Excel workbook that I want to use as a template. It has several worksheets set up, one of which produces the pretty graphs and summarizes the numbers. Sheet1 needs to be populated with data that is generated by another program; the data comes in a tab-delimited file. Currently the user imports the tab-delimited file into a new workbook, selects all, copies, then goes to the template and pastes the data into Sheet1. This is a large amount of data: 269 columns and over 135,000 rows. It's a cumbersome process, and the users are not experienced Excel users; all they really want is the pretty graphs. I would like to add a step after the program that generates the data to programmatically automate the process the user currently does manually. Can anyone suggest the best method/programming language to accomplish this?
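    One low-friction option (a sketch; the file path is a placeholder) is a macro inside the template itself that pulls the tab-delimited file straight into Sheet1 with a QueryTable, so no separate program or copy/paste is needed:

        Sub ImportData()
            With Sheet1.QueryTables.Add( _
                    Connection:="TEXT;C:\data\export.txt", _
                    Destination:=Sheet1.Range("A1"))
                .TextFileParseType = xlDelimited
                .TextFileTabDelimiter = True
                .Refresh BackgroundQuery:=False   ' import synchronously
            End With
        End Sub

    The same import can be driven from outside Excel via COM automation (for example from C#), but keeping it in the workbook means users just open the template and click one button. Note that 135,000+ rows only fits in Excel 2007 or later; Excel 2003 worksheets top out at 65,536 rows.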

    Read the article

  • Processing variable number of form fields

    - by a_m0d
    I am working on a form which displays information about orders. Each order has a unique id, but the ids are not necessarily sequential on the form. Also, the number of fields can vary (one field per row on the form). The input into the form will not be mapped straight into the database; it will be added to the current value in the database and then saved. An example of the form is in the picture below; the callout on the right shows the id for each row. I know how to generate the form like this, but I can't work out how I can easily and reliably process each of these rows. I also know how to give each of the fields a unique identifier, like name="row-23", but how can I translate that name so that I can update the related record in the database?
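    Since each field name already carries the id, the handler can simply scan the posted fields and parse the id back out of the name. A sketch (Python/Flask assumed, as the question names no framework; apply_delta is a hypothetical helper that adds the posted amount to the stored value and saves):

        from flask import Flask, request

        app = Flask(__name__)

        @app.route("/orders", methods=["POST"])
        def update_orders():
            for name, value in request.form.items():
                if name.startswith("row-") and value.strip():
                    order_id = int(name.split("-", 1)[1])  # "row-23" -> 23
                    apply_delta(order_id, int(value))      # hypothetical helper
            return "ok"

    This copes with any number of rows and any set of ids, because the server never assumes the fields arrive in order or form a contiguous range.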

    Read the article

  • Access Relationship Table in Grails

    - by WaZ
    Hi, I have the following domain classes:

        class Posts {
            String name
            String country
            static hasMany = [tags: Tags]
            static constraints = {}
        }

        class Tags {
            String name
            static belongsTo = Posts
            static hasMany = [posts: Posts]
            static constraints = {}
            String toString() { "${name}" }
        }

    Grails creates another table in the database, Posts_Tags. My requirement is: e.g. 1 post has 3 tags, so there are 3 rows in the Posts_Tags table. How can I access the Posts_Tags table directly in my code, so that I can manipulate the data or add some more fields to it?
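    GORM doesn't expose the implicit join table as a domain object, so the usual approach is to model the relationship explicitly; a sketch:

        // replaces the implicit Posts_Tags join table with a real domain
        // class, so extra fields can live on the relationship itself
        class PostTag {
            Posts post
            Tags tag
            Date taggedOn = new Date()   // an example extra field
            static constraints = {}
        }

    Posts and Tags then drop their direct hasMany on each other in favour of hasMany = [postTags: PostTag], and each link row can be queried and updated like any other domain object.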

    Read the article

  • k-means clustering in R on very large, sparse matrix?

    - by movingabout
    Hello, I am trying to do some k-means clustering on a very large matrix. The matrix is approximately 500000 rows x 4000 columns, yet very sparse (only a couple of "1" values per row). The whole thing does not fit into memory, so I converted it into a sparse ARFF file, but R apparently can't read the sparse ARFF file format. I also have the data as a plain CSV file. Is there any package available in R for loading such sparse matrices efficiently? I'd then use the regular k-means algorithm from the cluster package to proceed. Many thanks
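    The Matrix package stores this shape of data without densifying it; the catch is that stats::kmeans and the cluster package expect dense input, so a sparse-aware clustering routine is needed afterwards. One option is the skmeans package (note it implements spherical k-means, i.e. cosine rather than Euclidean distance, which may or may not suit the data). A sketch, assuming the CSV lists the coordinates of the non-zero cells:

        library(Matrix)
        nz <- read.csv("cells.csv")              # assumed columns: row, col
        m  <- sparseMatrix(i = nz$row, j = nz$col, x = 1,
                           dims = c(500000, 4000))
        library(skmeans)
        cl <- skmeans(as(m, "dgTMatrix"), k = 100)   # k chosen arbitrarily here

    For scale: the dense version would need roughly 16GB as doubles (500000 x 4000 x 8 bytes), which is exactly why the conversion has to be avoided.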

    Read the article

  • MEMORY (HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (less than 10000 rows) that will be under heavy read (scan) and write (update and some insert/delete) load. I am really speaking of 10000 updates or selects per second. These statements will be executed on only a few (less than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts are: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs? (innodb_flush_log_at_trx_commit?) My main concern is the locking behavior: InnoDB row locks vs. the MEMORY engine's table locks. Will this be the bottleneck in the MEMORY implementation? Thanks for your thoughts!
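    If InnoDB is chosen, the log flush policy dominates update throughput; a sketch of the relevant settings, with the durability trade-off spelled out:

        [mysqld]
        # flush the log to disk about once per second instead of at every
        # commit; up to ~1s of committed updates can be lost on a crash
        innodb_flush_log_at_trx_commit = 2
        # comfortably hold the whole 10000-row table in the buffer pool
        innodb_buffer_pool_size = 256M

    With fewer than 10 connections, the MEMORY engine's table-level lock may actually be serviceable, since at most a handful of writers ever queue; the row-level locking argument for InnoDB gets stronger as concurrency grows.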

    Read the article

  • How to add a sequence column to an existing table with records

    - by user1888543
    I had created a new table named USERLOG with two fields from a previous view, weblog_views. The table already contains about 9000 records. The two fields taken from the view are C_IP (the IP address) and WEB_LINK (the URL). This is the code I used:

        CREATE TABLE USERLOG AS
        SELECT C_IP, WEB_LINK FROM weblog_views;

    I want to add another column to this table called USER_ID, consisting of a sequence from 1 to 9000, to create a unique id for each existing row. I need help with this part. I'm using Oracle SQL Developer: ODMiner version 3.0.04. I tried using the AUTO-INCREMENT option:

        ALTER TABLE USERLOG ADD USER_ID INT UNSIGNED NOT NULL AUTO_INCREMENT;

    But I get an error with this:

        Error report:
        SQL Error: ORA-01735: invalid ALTER TABLE option
        01735. 00000 - "invalid ALTER TABLE option"

    So I would really appreciate any help that I can get!
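    AUTO_INCREMENT (and INT UNSIGNED) is MySQL syntax, which is why Oracle answers with ORA-01735. A sketch of the Oracle way: add a plain numeric column, backfill the existing rows, and create a sequence for future inserts:

        ALTER TABLE USERLOG ADD (USER_ID NUMBER);
        UPDATE USERLOG SET USER_ID = ROWNUM;         -- numbers existing rows 1..9000
        CREATE SEQUENCE USERLOG_SEQ START WITH 9001;
        -- new rows then insert USERLOG_SEQ.NEXTVAL as their USER_ID
        -- (or a BEFORE INSERT trigger can fill it in automatically)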

    Read the article

  • How to double the size of 8x8 Grid whilst keeping the relative position of certain tiles intact?

    - by ke3pup
    Hi guys, I have a grid of size 8x8, a total of 64 tiles. I'm using this grid to implement Java search algorithms such as BFS and DFS. The grid has given forbidden tiles (meaning they can't be traversed or be a neighbour of any other tile), plus a goal and a start tile. For example, tiles 19, 20, 21, 22, 35 and 39 are forbidden, and 14 and 43 are the goal and start node when the program runs. My question is: how can I double the size of the grid to 16x16 whilst keeping the relative positions of the forbidden tiles, as well as the relative positions of the start and goal tiles, intact? On paper I know I can do it by adding 4 rows and columns on each side, but in coding terms I don't know how to make it work. Can someone please give any sort of hints?
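    Both readings of "keep the relative position" reduce to small index arithmetic. A sketch for 0-based tile indices (if the tiles are numbered from 1, subtract 1 before mapping and add 1 after):

        // centre the old 8x8 grid in the new 16x16 one by adding a
        // 4-tile border on every side, as described in the question
        static int centreIndex(int t) {
            int row = t / 8 + 4, col = t % 8 + 4;
            return row * 16 + col;
        }

        // alternative: scale positions so each old tile lands at twice its
        // row/column, stretching the layout to fill the whole new grid
        static int scaleIndex(int t) {
            int row = (t / 8) * 2, col = (t % 8) * 2;
            return row * 16 + col;
        }

    Mapping every forbidden, start and goal tile through one of these before building the 16x16 grid preserves the original geometry.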

    Read the article
