Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 206/293

  • Updating a table takes a very long time

    - by rrejc
    Hi all, I have a table in MS SQL Server 2008 (SP2) containing 30 million rows; the table size is 150 GB. There are a couple of int columns and two nvarchar(max) columns: one containing text (from 1 to 30,000 characters) and one containing XML (up to 100,000 characters). The table doesn't have any primary keys or indexes (it is a staging table). At the moment I am running this query: UPDATE [dbo].[stage_table] SET [column2] = SUBSTRING([column1], 1, CHARINDEX('.', [column1])-1); The query has been running for 3 hours (and it is still not complete), which I think is too long. Is it? I can see a constant read rate of 5 MB/s and a write rate of 10 MB/s to the .mdf file. How can I find out why the query is running so long? The "server" is an i7 with 24 GB of RAM and SATA disks in RAID 10. Many thanks!
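    One way to see what the statement is actually waiting on (a minimal sketch, assuming you can open a second connection while the UPDATE runs; the DMV columns below are standard in SQL Server 2008) is to query the dynamic management views:

      SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
             r.total_elapsed_time, r.cpu_time, r.reads, r.writes
      FROM   sys.dm_exec_requests AS r
      WHERE  r.command = 'UPDATE';  -- narrow the output to the long-running UPDATE

    A large wait_time with a wait_type such as PAGEIOLATCH_* or WRITELOG would point at the disks rather than the query text itself.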

  • Writing OLAP SQL query

    - by user1859596
    I have a project I am working on that requires the following: create a normalized sample RDBMS (5 tables) using Java, enter 1 million rows of data into each table, run two OLTP and two OLAP queries on the normalized tables, then denormalize the tables, run the same OLTP and OLAP queries on them, and compare the times. What does an OLAP query mean? I've searched the internet and all that I can find is that I have to build a cube and run queries against it. How can I write an OLAP query against an RDBMS? Here is my sample of normalized tables (orders, product, customer, branch, sales):
    sales: order_id, product_id, quantity
    product: product_id, name, description, price, sales_tax
    customer: customer_id, f_name, l_name, tel_no, addr, nic, city
    branch: branch_id, name, tel_no, addr, city
    orders: order_id, customer_id, order_date, branch_id
    I want to write an OLAP query on the above tables. I am using Oracle Express with SQL Developer.
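    Broadly, an OLTP query touches a handful of rows (e.g. look up one order), while an OLAP query aggregates large numbers of rows across dimensions. A minimal sketch of an OLAP-style query against the schema above (assuming the join keys and column names are exactly as listed) could roll up revenue by branch city and order month:

      SELECT b.city,
             TRUNC(o.order_date, 'MM') AS order_month,
             SUM(s.quantity * p.price) AS revenue
      FROM   sales s
             JOIN orders  o ON o.order_id   = s.order_id
             JOIN product p ON p.product_id = s.product_id
             JOIN branch  b ON b.branch_id  = o.branch_id
      GROUP  BY ROLLUP (b.city, TRUNC(o.order_date, 'MM'));

    The ROLLUP adds per-city and grand-total rows, which is the kind of multi-level aggregation OLAP workloads are usually measured on.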

  • Including associations optimization in Rails

    - by Vitaly
    Hey, I'm looking for help with a Rails optimization regarding loading of associations on demand. This is a simplified example. I have 3 models: Post, Comment, User. The references are: a Post has many Comments, and a Comment has a reference to a User (:author). Now when I go to the post page, I expect to see the post body plus all comments (and their respective authors' names). This requires the following 2 queries: select * from Post -- to get the post data (1 row); select * from Comment inner join User -- to get the comments plus usernames (N rows). In the code I have: Post.find(params[:id], :include => { :comments => [:author] }) But it doesn't work as expected: as I see in the back end, there are still N+1 hits (some of them are cached, though). How can I optimize that?

  • Matlab Question - Principal Component Analysis

    - by Jack
    I have a set of 100 observations where each observation has 45 characteristics, and each one of those observations has a label attached which I want to predict based on those 45 characteristics. So it's an input matrix with dimensions 45 x 100 and a target matrix with dimensions 1 x 100. The thing is that I want to know how many of those 45 characteristics are relevant in my set of data (basically principal component analysis), and I understand that I can do this with the Matlab function processpca. Could you please tell me how I can do this? Suppose that the input matrix is x, with 45 rows and 100 columns, and y is a vector with 100 elements.

  • How can I generate a random human-readable colour from a seed? C#

    - by SLC
    Got a logfile, and it has all kinds of text in it. Currently it is just displayed in one colour, and each entry says something like:
    Log from section 1: Some text here
    Log from section 125: Some text here
    Log from section 17: Some text here
    Log from section 1: Some text here
    Log from section 125: Some text here
    Log from section 1: Some text here
    Log from section 17: Some text here
    Now the logfile is displayed in real time, and it would be nice to make the rows with the same section number the same colour. However, there could potentially be quite a large range of numbers. What I want to do is create a method that will take a number and randomly generate a unique colour from it. The colour must be readable against a black background, so #000000 is no good, nor is #101010 or anything too dark to read. Ideally, two similar numbers should not produce similar colours, because in the above example the numbers 1 and 17 might otherwise end up looking too close, and some numbers might be in the 10,000 range. Any ideas on this?

  • How can we block the user from unchecking a DataGridView checkbox?

    - by hawbsl
    We have a DataGridViewCheckBox column bound to a boolean property in our class. The property setter has some logic which says that under certain conditions a True flag cannot be changed, i.e. it stays checked forever. This is on a per-record basis, so the entire column can't be read-only, only certain rows. Pseudo code:
    Public Property Foo() As Boolean
        Get
            Return _Foo
        End Get
        Set(ByVal value As Boolean)
            If _Foo And Bar And value = False Then
                ' do nothing; in this scenario, once you're True you stay True
            Else
                _Foo = value
            End If
        End Set
    End Property
    Databinding handles all of this fine, except that the checkbox is visibly cleared when it's clicked. Then, of course, when the binding / setter is fired (as you move off that cell) it is restored to its checked status per the underlying logic. Ultimately it doesn't matter too much, but it's a clumsy bit of UI. How can we intercept the user's click and keep it checked?

  • Refining Solr searches, getting exact matches?

    - by thebluefox
    Afternoon chaps, Right, I'm constructing a fairly complex (to me anyway) search system for a website using Solr, although this question is quite simple I think... I have two search criteria, location and type. I want to return results that are exact matches on type (letter for letter, no exceptions) and like matches on location. My current search query is as follows: ../select/?q=location:N1 type:blue&rows=100&fl=*,score&debugQuery=true This first returns all the type 'blue' records that match N1, but then also returns any type that matches N1, which is the opposite of what I'm after. Both fields are set as textgen in the Solr schema. Any pointers? Cheers gang

  • How to facet multiple columns in Google Refine

    - by banjanxed
    I have a data set with 30 columns and multiple rows (some cells have no data). I would like to be able to facet the columns in groups.
           1  2  3  4 .....
    Row1   A  B  C  D
    Row2   E  A  D  F
    Row3   Q  A  B  H
    Given the above data, I would like the facet to return the number of instances in a group of columns. For the first three columns I need the facet to return:
    A - 3
    B - 2
    C - 1
    D - 1
    E - 1
    Q - 1
    I have tried combining columns when I loaded the data, but then the individual values were grouped as well, which is not the desired outcome. For example:
    ABC - 1
    EAD - 1
    QAB - 1
    Thanks in advance.

  • Powershell Select-Object from array not working

    - by Andrew
    I am trying to separate values in an array so I can pass them to another function. I am using Select-Object within a for loop to go through each line and separate the timestamp and value fields. However, no matter what I do, the code below only displays the first Select-Object variable for each line. The second Select-Object command doesn't seem to work, as my output is a blank line for each of the 6 rows. Any ideas on how to get both values?
    $ReportData = $SystemStats.get_performance_graph_csv_statistics( (,$Query) )
    ### Allocate a new encoder and turn the byte array into a string
    $ASCII = New-Object -TypeName System.Text.ASCIIEncoding
    $csvdata = $ASCII.GetString($ReportData[0].statistic_data)
    $csv2 = convertFrom-CSV $csvdata
    $newarray = $csv2 | Where-Object {$_.utilization -ne "0.0000000000e+00" -and $_.utilization -ne "nan" }
    for ( $n = 0; $n -lt $newarray.Length; $n++) {
        $nTime = $newarray[$n]
        $nUtil = $newarray[$n]
        $util = $nUtil | Select-Object Utilization
        $util
        $tstamp = $nTime | Select-Object timestamp
        $tstamp
    }

  • What techniques are available for filtering collections of objects when using zodb?

    - by Omega
    As the title says: what techniques are available for filtering objects when using ZODB? The equivalent in SQL terms would be something like filtering results by a date range, or only returning rows with a particular value set in a column. If I had a series of blog posts and only wanted the ones written in the past month, what would I have to do? Is there any way to optimize these kinds of "queries"? My gut tells me that iterating over all the objects in a relationship simply to perform a test is less than optimal.

  • SQL query to print mirror labels

    - by Eric
    I want to print labels in Word from the results of a SQL query, laid out like this:
    1 2 3
    4 5 6
    When I want to print the mirror image of those labels, I have to print them as follows:
    3 2 1
    6 5 4
    In my real case I have 5 columns by 2 rows. How can I formulate my query so that my records are ordered like the second layout? The normal ordering is handled by Word, so my query is just: SELECT * FROM Products ORDER BY Products.id  I'm using MS Access =(
    EDIT: Just to make it clear, I'd like my records to be ordered like this:
    3 2 1
    6 5 4
    9 8 7
    12 11 10
    EDIT 2: My table looks like this:
    ID  ProductName
    1   Product1
    2   Product2
    3   Product3
    n   Product[n]
    I want the IDs to be returned as I mentioned above.
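    A sketch of one way to get that mirrored order in Access SQL, assuming the IDs are consecutive starting at 1 and there are 3 labels per row as in the example (use 5 instead of 3 for the real 5-column sheet):

      SELECT *
      FROM Products
      ORDER BY ((Products.id - 1) \ 3) ASC, ((Products.id - 1) Mod 3) DESC;

    The integer division groups the IDs into rows of 3, and the Mod expression reverses the order within each row, giving 3 2 1, 6 5 4, 9 8 7, and so on.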

  • Getting a table's values into a tree

    - by Jason
    So, I have a table like this:
    id | root | kw1 | kw2 | kw3 | kw4 | kw5 | name
     1 |  A   |  B  |  C  |  D  |  E  |  F  | fileA
     2 |  A   |  B  |     |     |     |     | fileB
     3 |  B   |  C  |  D  |  E  |     |     | fileC
     4 |  A   |  B  |     |     |     |     | fileD
    (several hundred rows...)
    And I need to get it into a tree like the following:
    * A
      * B
        - fileB
        - fileD
        * C
          * D
            * E
              * F
                - fileA
    * B
      * C
        * D
          * E
            - fileC
    I'm pretty sure the table is laid out poorly, but it's what I have to live with. I've read a little about the Adjacency List Model and Modified Preorder Tree Traversal, but I don't think my data is laid out correctly for those. I think this requires a recursive function, but I'm not at all sure how to go about that. I'm open to any ideas of how to get this done, even if it means extracting the data into a new table just to process it. Are there any good options available to me, or any good ways to do this? (Examples are a bonus, of course.)

  • Handling errors on php contact form

    - by topSearchDesign
    The code below works great for handling errors for text fields in my contact form, but how do I get this same approach to work for dropdown select boxes and textareas?
    <input type="text" name="name" value="<?php if($errors){echo $name;} ?>" id="name" size="30" />
    For example:
    <textarea name="message" value="<?php if($errors){echo $message;} ?>" id="message" rows="10" cols="40"></textarea>
    does not work.

  • Processing variable number of form fields

    - by a_m0d
    I am working on a form which displays information about orders. Each order has a unique id, but they are not necessarily sequential on the form. Also, the number of fields can vary (one field per row on the form). The input into the form will not be mapped straight into the database, but will be added to the current value in the database, and then saved. An example of the form is in the picture below - the callout on the right shows the id for each row. I know how to generate the form like this, but I can't work out how I can easily process each of these rows reliably. I also know how to give each of the fields a unique identifier, like name="row-23", but how can I translate that name so that I can update the related record in the database?

  • Fetch Max from a date column grouped by a particular field

    - by vamyip
    Hi, I have a table similar to this:
    LogId  RefId  Entered
    ==============================
        1      1  2010-12-01
        2      1  2010-12-04
        3      2  2010-12-01
        4      2  2010-12-06
        5      3  2010-12-01
        6      1  2010-12-10
        7      3  2010-12-05
        8      4  2010-12-01
    Here, LogId is unique; for each RefId there are multiple entries, each with a timestamp. What I want to extract is the LogId of the latest entry for each RefId. I tried the solutions from this link: http://stackoverflow.com/questions/121387/sql-fetch-the-row-which-has-the-max-value-for-a-column, but they return multiple rows with the same RefId. Can someone help me with this? Thanks, Vamyip
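    A sketch of the usual greatest-n-per-group pattern, assuming the table is called Log (the real table name isn't given) and that a tie on Entered may be resolved by taking the highest LogId:

      SELECT t.RefId, MAX(t.LogId) AS LogId
      FROM   Log t
             JOIN (SELECT RefId, MAX(Entered) AS MaxEntered
                   FROM   Log
                   GROUP  BY RefId) m
               ON m.RefId = t.RefId AND m.MaxEntered = t.Entered
      GROUP  BY t.RefId;

    The inner query finds the latest Entered per RefId, and the outer GROUP BY collapses any ties so exactly one LogId comes back per RefId.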

  • Any way to optimize this MySQL query?

    - by manyxcxi
    My table looks like this:
    `MyDB`.`Details` (
      `id` bigint(20) NOT NULL,
      `run_id` int(11) NOT NULL,
      `element_name` varchar(255) NOT NULL,
      `value` text,
      `line_order` int(11) default NULL,
      `column_order` int(11) default NULL
    );
    I have the following SELECT statement in a stored procedure:
    SELECT RULE
          ,TITLE
          ,SUM(IF(t.PASSED='Y',1,0)) AS PASS
          ,SUM(IF(t.PASSED='N',1,0)) AS FAIL
    FROM (
      SELECT a.line_order
            ,MAX(CASE WHEN a.element_name = 'PASSED' THEN a.`value` END) AS PASSED
            ,MAX(CASE WHEN a.element_name = 'RULE' THEN a.`value` END) AS RULE
            ,MAX(CASE WHEN a.element_name = 'TITLE' THEN a.`value` END) AS TITLE
      FROM Details a
      WHERE run_id = runId
      GROUP BY line_order
    ) t
    GROUP BY RULE, TITLE;
    *runId is an input parameter to the stored procedure.
    This query takes about 14 seconds to run. The table has 214856 rows, and the particular run_id I am filtering on has 162204 records. It's not a particularly high-powered machine, but I feel like I could be doing this more efficiently. My main goal is to summarize by Rule and Title and show Pass and Fail count columns.
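    One thing that often helps a pivot query like this is a composite index covering the WHERE filter and the inner GROUP BY; a sketch, assuming you are free to add indexes to this table:

      CREATE INDEX idx_details_run_line_elem
          ON `MyDB`.`Details` (run_id, line_order, element_name);

    With that in place, the inner query can read the matching run_id rows already grouped by line_order instead of scanning and sorting the whole table; how much of the 14 seconds it removes depends on how much time is spent reading the TEXT `value` column.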

  • SQLite (C/C++ interface) - How to commit a transaction

    - by AJ
    I am using the SQLite C/C++ interface. Here is my scenario: I have 3 related tables, say A, B, C. There is a function called Set which, based on its inputs, inserts rows into these three tables (sometimes it is an update in one of the tables). Now I need two things. First, I don't want the autocommit behaviour; basically I would like to commit after every 1000 calls to the Set function. Secondly, within the Set function itself, if I find that after inserting into two tables the third insert fails, then I have to revert those particular changes made in that Set call. Now I don't see any sqlite3_commit function exposed; I only see a function called sqlite3_commit_hook(), which according to the documentation does something slightly different. Are there any functions exposed for this purpose, or what is the way to achieve this behaviour? Can you help me with the best approach? Regards, Arjun
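    For what it's worth, there is no dedicated commit call in the C API; transactions are controlled by executing plain SQL (for example through sqlite3_exec). A minimal sketch of the statement sequence, with the savepoint name as a placeholder rather than anything from the question:

      BEGIN;                    -- open an explicit transaction; autocommit is off until COMMIT
      SAVEPOINT set_call;       -- one savepoint at the start of each Set() call
      -- ... the three INSERT/UPDATE statements for tables A, B and C go here ...
      RELEASE set_call;         -- all three succeeded: keep this call's changes (still uncommitted)
      -- ROLLBACK TO set_call;  -- if the third statement failed, undo just this Set() call
      COMMIT;                   -- issue this after every 1000 Set() calls

    SAVEPOINT / RELEASE / ROLLBACK TO give the per-call undo without abandoning the outer 1000-call transaction.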

  • Dirty Reads in Postgres

    - by User1
    I have a long-running function that should be inserting new rows. How do I check the progress of this function? I was thinking dirty reads would work, so I read http://www.postgresql.org/docs/8.4/interactive/sql-set-transaction.html and came up with the following code, which I ran in a new session: SET SESSION CHARACTERISTICS AS SERIALIZABLE; SELECT * FROM MyTable; Postgres gives me a syntax error. What am I doing wrong? If I do it right, will I see the inserted records while that long function is still running? Thanks
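    For reference, the statement is missing the TRANSACTION ISOLATION LEVEL keywords, which is the likely cause of the syntax error; a corrected sketch:

      SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
      SELECT * FROM MyTable;

    Note, though, that PostgreSQL treats READ UNCOMMITTED as READ COMMITTED, so even with the syntax fixed this will not show rows from the other session's still-uncommitted transaction.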

  • [struts 2] How do you iterate through a list of objects?

    - by Kevin
    I have a User class that has a String username in it. I have a list of users that I'm trying to display in a table using:
    <s:iterator value="users" id="list">
      <tr>
        <td><s:property value="#list.username" /></td>
        <td></td>
        <td></td>
        <td></td>
      </tr>
    </s:iterator>
    The rows are being displayed the right number of times, so it's iterating through my list properly. However, I don't know how to access the username property to display it. Obviously what I have above isn't correct... Any ideas?

  • UIPickerView and empty core data array

    - by Mark
    I have a view controller showing items from a Core Data entity. I also have a table view listing records from the same entity. The table is editable, and the user could remove all of the records. When this happens, the view controller holding the picker view crashes because it's looking for records in an empty array. How do I prevent this? I'm assuming I need to do something different at objectAtIndex:row...
    # pragma mark PickerView Section
    - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView {
        return 1; // returns the number of columns to display.
    }
    - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
        return [profiles count]; // returns the number of rows
    }
    - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
        // Display the profiles we've fetched on the picker
        Profiles *prof = [profiles objectAtIndex:row];
        return prof.profilename;
    }
    // If the user chooses from the pickerview
    - (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
        selectedProfile = [[profiles objectAtIndex:row] valueForKey:@"profilename"];
    }

  • Arranging image thumbnails

    - by Adi Mathur
    I am using the jQuery hoverZoom plugin (http://jmar.github.com/jquery-hoverZoom/) and it's working fine. The thumbnails I have are of different sizes, so I started using the Masonry plugin for arranging them. Both of them work fine in isolation, but together the Masonry plugin doesn't work as intended: pictures start to overlap each other. What I feel is that both Masonry and the jQuery hover zoom interact with the same div element, which causes the problem; both are adding their attributes to it. How can I fix this? Is there some way I can arrange the rows without Masonry, so that the conflict won't occur?

  • Design ideas for a versioned db schema with related tables also versioned

    - by vfilby
    Here is the drill: I want to version a database. I have done this before using multiple rows, where the table's primary key becomes a combination of the row id and either a datestamp or a version #. Now I want to version a table that depends on many other small tables. Versioning each table will be a giant PITA, so I am looking for good options to version a schema where the data to be versioned is spread over multiple tables. All related tables are properly keyed with foreign key relationships. The database is currently on SQL Server 2005.
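    One common shape for this, sketched here purely as an illustration (the table and column names are made up, not taken from the question), is to version only the root entity and have the dependent tables reference a specific (id, version) pair, so a new version of the root snapshots its children without giving every table its own history:

      CREATE TABLE product_version (
          product_id  INT           NOT NULL,
          version_no  INT           NOT NULL,
          valid_from  DATETIME      NOT NULL,
          name        NVARCHAR(100) NOT NULL,
          CONSTRAINT pk_product_version PRIMARY KEY (product_id, version_no)
      );

      CREATE TABLE product_attribute (
          product_id  INT           NOT NULL,
          version_no  INT           NOT NULL,
          attr_name   NVARCHAR(50)  NOT NULL,
          attr_value  NVARCHAR(255) NULL,
          CONSTRAINT pk_product_attribute PRIMARY KEY (product_id, version_no, attr_name),
          CONSTRAINT fk_attribute_version FOREIGN KEY (product_id, version_no)
              REFERENCES product_version (product_id, version_no)
      );

    The trade-off is that child rows are copied forward on every new version, in exchange for never having to version the child tables independently.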

  • How do I create a user history?

    - by ggfan
    I want to create a user history function that shows users what they have done, e.g. commented on an ad, posted an ad, voted on an ad, etc. How exactly do I do this? I was thinking about this: on my site, when they log in it stores their user_id ($_SESSION['user_id']), so I guess whenever a user posts an ad (postad.php) or comments (comment.php), I would just store what they did in a database table "userhistory", based on whether their user_id is set. When they comment, I store the user_id in the comment table anyway, so I'll also store it in the "userhistory" table. Then I would just query all the rows in that table for the user and display them. Any steps/improvements I can make? :)
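    A minimal sketch of what such a table and its queries could look like (MySQL-style syntax; all names and values here are illustrative assumptions, not taken from the existing schema):

      CREATE TABLE userhistory (
          history_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          user_id     INT UNSIGNED NOT NULL,
          action      VARCHAR(50)  NOT NULL,   -- e.g. 'posted_ad', 'commented', 'voted'
          target_id   INT UNSIGNED NULL,       -- id of the ad or comment the action refers to
          created_at  DATETIME     NOT NULL
      );

      -- one extra insert alongside the normal insert in postad.php / comment.php
      INSERT INTO userhistory (user_id, action, target_id, created_at)
      VALUES (123, 'commented', 456, NOW());

      -- later, to show a user's history
      SELECT action, target_id, created_at
      FROM userhistory
      WHERE user_id = 123
      ORDER BY created_at DESC;

    Keeping one row per action like this avoids having to scan every content table just to rebuild the timeline.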

  • How to correctly handle click events on Widget

    - by www.liveinternet.ru/users/ilya_bog
    The task is to make something like a todo list on a widget (with a dynamic number of elements); how do I organize this list so that clicks on these elements are supported? I have only found how to add a click event to a single widget layout element (with setOnClickPendingIntent), and how to send text to a widget TextView element. But it's unclear how to handle click events for sub-elements, or how to get the coordinates (or item) of the click. I have seen the "Agenda widget", and it works fine with clicking on different calendar rows. Any help would be much appreciated.

  • Oracle, slow performance when using sub select

    - by Wyass
    I have a view that is very slow if you fetch all rows, but if I select a subset (providing an ID in the where clause) the performance is very good. I cannot hardcode the ID, so I create a sub-select to get the ID from another table. The sub-select only returns one ID. Now the performance is very slow, and it seems like Oracle is evaluating the whole view before applying the where clause. Can I somehow help Oracle so that queries 2 and 3 have the same performance? I'm using Oracle 10g.
    1 (slow): select * from ci.my_slow_view
    2 (fast): select * from ci.my_slow_view where id = 1;
    3 (slow): select * from ci.my_slow_view where id in (select id from active_ids)
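    Since the sub-select returns exactly one ID, one thing worth trying (a sketch only; whether the optimizer changes the plan depends on the view) is an equality against a scalar subquery, or a plain join, instead of IN, which gives Oracle a better chance of pushing the predicate into the view:

      -- equality against a scalar subquery
      select * from ci.my_slow_view v where v.id = (select id from active_ids);

      -- or a join
      select v.* from ci.my_slow_view v join active_ids a on a.id = v.id;

    If neither helps, comparing the execution plans of queries 2 and 3 (EXPLAIN PLAN) should show where the predicate stops being pushed.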
