Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 69/293

  • 'checked' is null or not an object - only IE

    - by Srinath
    Hi, this problem exists only in IE; in Mozilla it works fine. I have to delete a row when its checkbox is clicked. Here is my code: <table width="100%" id="seller"> <tr id="row_1" class="row"> <td><input type="checkbox" name="delete" id="delete" value="1" /></td> </tr> <tr id="row_2" class="row"> <td><input type="checkbox" name="delete" id="delete" value="2" /></td> </tr> </table> <input type="button" id="delete" name="delete" value="Delete" onclick="delete_row('seller')"/> var delete_ids = []; function delete_row(tableID){ var rows = document.getElementsByClassName('row'); var delItems = document.getElementsByName('delete'); for(var i=0;i < rows.length;i++){ alert(delItems[i].checked); if(delItems[i].checked == true){ delete_ids.push(rows[i].id); jQuery(rows[i]).remove(); i = i-1; } } } The page shows the error: 'checked' is null or not an object. Can anyone please tell me the fix? Thanks in advance, sri.


  • How to declare a 2D array of 2D array pointers and access them?

    - by vikramtheone
    Hi Guys, How can I declare an 2D array of 2D Pointers? And later access the individual array elements of the 2D arrays. Is my approach correct? void alloc_2D(int ***memory, unsigned int rows, unsigned int cols); int main() { int i, j; int **ptr; int **array[10][10]; for(i=0;i<10;i++) { for(j=0;j<10;j++) { alloc_2D(&ptr, 10, 10); array[i][j] = ptr; } } //After I do this, how can I access the 10 individual 2D arrays? return 0; } void alloc_2D(int ***memory, unsigned int rows, unsigned int cols) { int **ptr; *memory = NULL; ptr = malloc(rows * sizeof(int*)); if(ptr == NULL) { printf("\nERROR: Memory allocation failed!"); } else { int i; for(i = 0; i< rows; i++) { ptr[i] = malloc(cols * sizeof(float)); if(ptr[i]==NULL) { printf("\nERROR: Memory allocation failed!"); } } } *memory = ptr; }


  • Out of memory while iterating through rowset

    - by Phliplip
    Hi all, I have a "small" table of 60400 rows with zipcode data. I want to iterate through them all, update a column value, and then save each row. The following is part of my Zipcodes model, which extends My_Db_Table and has a totalRows function that - you guessed it - returns the total number of rows in the table (60400 rows): public function normalizeTable() { $this->getAdapter()->setProfiler(false); $totalRows = $this->totalRows(); $rowsPerQuery = 5; for($i = 0; $i < $totalRows; $i = $i + $rowsPerQuery) { $select = $this->select()->limit($i, $rowsPerQuery); $rowset = $this->fetchAll($select); foreach ($rowset as $row) { $row->{self::$normalCityColumn} = $row->normalize($row->{self::$cityColumn}); $row->save(); } unset($rowset); } } My rowClass contains a normalize function (basically a metaphone wrapper doing some extra magic). At first I tried a plain old $this->fetchAll(), but got an out-of-memory error (128MB) right away. Then I tried splitting the rowset into chunks as above; the only difference is that some rows actually get updated before memory runs out. Any ideas on how I can accomplish this, or should I fall back to ye olde mysql_query()?
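    One way to keep each chunk cheap on the database side is keyset pagination: fetching the next batch by primary key instead of an ever-growing offset. This is only a sketch, and the table and column names (zipcodes, id, city) are assumptions rather than the asker's real schema:

    ```sql
    -- Keyset pagination: remember the last id processed and start the next chunk after it,
    -- so MySQL never has to scan and discard the rows already covered by earlier chunks.
    SELECT id, city
    FROM zipcodes
    WHERE id > 500      -- highest id handled by the previous chunk
    ORDER BY id
    LIMIT 500;          -- chunk size
    ```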


  • My QFileSystemModel doesn't work as expected in PyQt

    - by Skilldrick
    I'm learning the Qt Model/View architecture at the moment, and I've found something that doesn't work as I'd expect it to. I've got the following code (adapted from Qt Model Classes): from PyQt4 import QtCore, QtGui model = QtGui.QFileSystemModel() parentIndex = model.index(QtCore.QDir.currentPath()) print model.isDir(parentIndex) #prints True print model.data(parentIndex).toString() #prints name of current directory childIndex = model.index(0, 0, parentIndex) print model.data(childIndex).toString() rows = model.rowCount(parentIndex) print rows #prints 0 (even though the current directory has directory and file children) The question: Is this a problem with PyQt, have I just done something wrong, or am I completely misunderstanding QFileSystemModel? According to the documentation, model.rowCount(parentIndex) should return the number of children in the current directory. The QFileSystemModel docs say that it needs an instance of a Gui application, so I've also placed the above code in a QWidget as follows, but with the same result: import sys from PyQt4 import QtCore, QtGui class Widget(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) model = QtGui.QFileSystemModel() parentIndex = model.index(QtCore.QDir.currentPath()) print model.isDir(parentIndex) print model.data(parentIndex).toString() childIndex = model.index(0, 0, parentIndex) print model.data(childIndex).toString() rows = model.rowCount(parentIndex) print rows def main(): app = QtGui.QApplication(sys.argv) widget = Widget() widget.show() sys.exit(app.exec_()) if __name__ == '__main__': main()


  • Will this class cause memory leaks, and does anything need disposing of? (asp.net vb)

    - by Phil
    Here is the class to export a gridview to an excel sheet: Imports System Imports System.Data Imports System.Configuration Imports System.IO Imports System.Web Imports System.Web.Security Imports System.Web.UI Imports System.Web.UI.WebControls Imports System.Web.UI.WebControls.WebParts Imports System.Web.UI.HtmlControls Namespace ExcelExport Public NotInheritable Class GVExportUtil Private Sub New() End Sub Public Shared Sub Export(ByVal fileName As String, ByVal gv As GridView) HttpContext.Current.Response.Clear() HttpContext.Current.Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fileName)) HttpContext.Current.Response.ContentType = "application/ms-excel" Dim sw As StringWriter = New StringWriter Dim htw As HtmlTextWriter = New HtmlTextWriter(sw) Dim table As Table = New Table table.GridLines = GridLines.Vertical If (Not (gv.HeaderRow) Is Nothing) Then GVExportUtil.PrepareControlForExport(gv.HeaderRow) table.Rows.Add(gv.HeaderRow) End If For Each row As GridViewRow In gv.Rows GVExportUtil.PrepareControlForExport(row) table.Rows.Add(row) Next If (Not (gv.FooterRow) Is Nothing) Then GVExportUtil.PrepareControlForExport(gv.FooterRow) table.Rows.Add(gv.FooterRow) End If table.RenderControl(htw) HttpContext.Current.Response.Write(sw.ToString) HttpContext.Current.Response.End() End Sub Private Shared Sub PrepareControlForExport(ByVal control As Control) Dim i As Integer = 0 Do While (i < control.Controls.Count) Dim current As Control = control.Controls(i) If (TypeOf current Is LinkButton) Then control.Controls.Remove(current) control.Controls.AddAt(i, New LiteralControl(CType(current, LinkButton).Text)) ElseIf (TypeOf current Is ImageButton) Then control.Controls.Remove(current) control.Controls.AddAt(i, New LiteralControl(CType(current, ImageButton).AlternateText)) ElseIf (TypeOf current Is HyperLink) Then control.Controls.Remove(current) control.Controls.AddAt(i, New LiteralControl(CType(current, HyperLink).Text)) ElseIf (TypeOf current Is DropDownList) Then control.Controls.Remove(current) control.Controls.AddAt(i, New LiteralControl(CType(current, DropDownList).SelectedItem.Text)) ElseIf (TypeOf current Is CheckBox) Then control.Controls.Remove(current) control.Controls.AddAt(i, New LiteralControl(CType(current, CheckBox).Checked)) End If If current.HasControls Then GVExportUtil.PrepareControlForExport(current) End If i = (i + 1) Loop End Sub End Class End Namespace Will this class cause memory leaks? And does anything here need to be disposed of? The code is working but I am getting the app pool falling over frequently when it is in use. Thanks.


  • Inserting variables into a query string - it won't work!

    - by Jonesy
    Basically, I have a connection string that is fine when I hardcode the catalog value, but when I try adding the catalog via a variable it just doesn't get picked up. This works: Dim WaspConnection As New SqlConnection("Data Source=JURA;Initial Catalog=WaspTrackAsset_NROI;User id=" & ConfigurationManager.AppSettings("WASPDBUserName") & ";Password='" & ConfigurationManager.AppSettings("WASPDBPassword").ToString & "';") This doesn't: Public Sub GetWASPAcr() connection.Open() Dim dt As New DataTable() Dim username As String = HttpContext.Current.User.Identity.Name Dim sqlCmd As New SqlCommand("SELECT WASPDatabase FROM dbo.aspnet_Users WHERE UserName = '" & username & "'", connection) Dim sqlDa As New SqlDataAdapter(sqlCmd) sqlDa.Fill(dt) If dt.Rows.Count > 0 Then For i As Integer = 0 To dt.Rows.Count - 1 If dt.Rows(i)("WASPDatabase") Is DBNull.Value Then WASP = "" Else WASP = "WaspTrackAsset_" + dt.Rows(i)("WASPDatabase") End If Next End If connection.Close() End Sub Dim WaspConnection As New SqlConnection("Data Source=JURA;Initial Catalog=" & WASP & ";User id=" & ConfigurationManager.AppSettings("WASPDBUserName") & ";Password='" & ConfigurationManager.AppSettings("WASPDBPassword").ToString & "';") When I debug, the Initial Catalog is empty in the connection string, but the WASP variable holds the value "WaspTrackAsset_NROI". Any ideas why? Cheers, jonesy


  • Question about MySQL transactions and triggers

    - by WilliamLou
    I quickly browsed the MySQL manual but didn't find exact information about my question. Here is my question: suppose I have an InnoDB table A with two triggers, fired by 'AFTER INSERT ON A' and 'AFTER UPDATE ON A'. More specifically, for example, one trigger is defined as: CREATE TRIGGER test_trigger AFTER INSERT ON A FOR EACH ROW BEGIN INSERT INTO B SELECT * FROM A WHERE A.col1 = NEW.col1 END; You can ignore the query between BEGIN and END; basically I mean this trigger will insert several rows into table B, which is also an InnoDB table. Now, suppose I start a transaction and then insert many rows, say 10K rows, into table A. If there is no trigger associated with table A, all these inserts are atomic - that's for sure. Now, if table A is associated with several insert/update triggers which insert/update many rows in table B and/or table C etc., will all of these inserts and/or updates still be atomic? I think they are still atomic, but it's kind of difficult to test and I can't find any explanation in the manual. Can anyone confirm this? Thanks a lot!
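    A quick way to confirm the behaviour is a rollback test. The sketch below uses throwaway stand-ins for A and B (both InnoDB) and a simplified single-statement trigger body, so treat the names as assumptions rather than the asker's real schema:

    ```sql
    CREATE TABLE A (col1 INT) ENGINE=InnoDB;
    CREATE TABLE B (col1 INT) ENGINE=InnoDB;

    -- Simplified trigger body; the principle is the same as the question's trigger.
    CREATE TRIGGER test_trigger AFTER INSERT ON A
    FOR EACH ROW
        INSERT INTO B VALUES (NEW.col1);

    START TRANSACTION;
    INSERT INTO A (col1) VALUES (1);   -- fires the trigger, which writes a row into B
    ROLLBACK;                          -- undoes the insert into A *and* the trigger's insert into B

    SELECT COUNT(*) FROM A;            -- 0
    SELECT COUNT(*) FROM B;            -- 0: the trigger's work lives in the same transaction
    ```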


  • MySQL FOR UPDATE

    - by shantanuo
    MySQL supports the FOR UPDATE clause. Here is how I tested that it works as expected. I opened two browser tabs and executed the following commands in one window: mysql> start transaction; Query OK, 0 rows affected (0.00 sec) mysql> select * from myxml where id = 2 for update; .... mysql> update myxml set id = 3 where id = 2 limit 1; Query OK, 1 row affected, 1 warning (0.00 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> commit; Query OK, 0 rows affected (0.08 sec) In the other window, I started a transaction and tried to take an update lock on the same record: mysql> start transaction; Query OK, 0 rows affected (0.00 sec) mysql> select * from myxml where id = 2 for update; Empty set (43.81 sec) As you can see from the above example, I could not select the record for 43 seconds because the transaction was still being processed by the other session in window 1. Once that transaction finished I got to select the record, but since id 2 had been changed to id 3 by the transaction that executed first, no record was returned. My question is: what are the disadvantages of using the FOR UPDATE syntax? If I do not commit the transaction that is running in window 1, will the record be locked forever?
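    To make the locking points explicit, here is the same two-session experiment as a sketch. The 43-second wait in window 2 is simply window 1 holding the row lock until COMMIT; a waiter that is never released fails with a lock-wait-timeout error once innodb_lock_wait_timeout (50 seconds by default) expires, and the lock itself is held until the owning transaction commits, rolls back, or its connection is closed:

    ```sql
    -- Window 1
    START TRANSACTION;
    SELECT * FROM myxml WHERE id = 2 FOR UPDATE;   -- takes an exclusive lock on the row
    UPDATE myxml SET id = 3 WHERE id = 2 LIMIT 1;
    COMMIT;                                        -- the lock is released only here

    -- Window 2, started while window 1 still holds the lock
    START TRANSACTION;
    SELECT * FROM myxml WHERE id = 2 FOR UPDATE;   -- blocks until window 1 finishes, or fails
                                                   -- with ERROR 1205 (lock wait timeout) after
                                                   -- innodb_lock_wait_timeout seconds
    ```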


  • How to maintain ComboBox.SelectedItem reference when DataSource is resorted?

    - by Dave
    This really seems like a bug to me, but perhaps some databinding gurus can enlighten me? (My WinForms databinding knowledge is quite limited.) I have a ComboBox bound to a sorted DataView. When the properties of the items in the DataView change such that items are resorted, the SelectedItem in my ComboBox does not keep in-sync. It seems to point to someplace completely random. Is this a bug, or am I missing something in my databinding? Here is a sample application that reproduces the problem. All you need is a Button and a ComboBox: public partial class Form1 : Form { private DataTable myData; public Form1() { this.InitializeComponent(); this.myData = new DataTable(); this.myData.Columns.Add("ID", typeof(int)); this.myData.Columns.Add("Name", typeof(string)); this.myData.Columns.Add("LastModified", typeof(DateTime)); this.myData.Rows.Add(1, "first", DateTime.Now.AddMinutes(-2)); this.myData.Rows.Add(2, "second", DateTime.Now.AddMinutes(-1)); this.myData.Rows.Add(3, "third", DateTime.Now); this.myData.DefaultView.Sort = "LastModified DESC"; this.comboBox1.DataSource = this.myData.DefaultView; this.comboBox1.ValueMember = "ID"; this.comboBox1.DisplayMember = "Name"; } private void saveStuffButton_Click(object sender, EventArgs e) { DataRowView preUpdateSelectedItem = (DataRowView)this.comboBox1.SelectedItem; // OUTPUT: SelectedIndex = 0; SelectedItem.Name = third Debug.WriteLine(string.Format("SelectedIndex = {0:N0}; SelectedItem.Name = {1}", this.comboBox1.SelectedIndex, preUpdateSelectedItem["Name"])); this.myData.Rows[0]["LastModified"] = DateTime.Now; DataRowView postUpdateSelectedItem = (DataRowView)this.comboBox1.SelectedItem; // OUTPUT: SelectedIndex = 2; SelectedItem.Name = second Debug.WriteLine(string.Format("SelectedIndex = {0:N0}; SelectedItem.Name = {1}", this.comboBox1.SelectedIndex, postUpdateSelectedItem["Name"])); // FAIL! Debug.Assert(object.ReferenceEquals(preUpdateSelectedItem, postUpdateSelectedItem)); } }


  • Calling NSFetchedResultsController & CoreData experts

    - by JK
    I am having a few nagging issues with NSFetchedResultsController and CoreData, any of which I would be very grateful to get help on. Issue 1 - Updates: I update my store on a background thread which results in certain rows being delete, inserted or updated. The changes are merged into the context on the main thread using the "mergeChangesFromContextDidSaveNotification:" method. Inserts and deletes are updated properly, but updates are not (e.g. the cell label is not updated with the change) although I have confirmed the updates to come through the contextDidSaveNotifcation, exactly like the inserts and deleted. My current workaround is to temporarily change the staleness interval of the context to 0, but this does not seem like the ideal solution. Issue 2 - Deleting objects: My fetch batch size is 20. If an object is deleted by the background thread which is in the first 20 rows, everything works fine. But if the object is after the first 20 rows and the table is scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried resaving the context and reperforming the frc fetch - all to no avail. Note: In this scenario, the frc delegate method "didChangeObject...." is not called for the delete - I assume this is because the object in question had not been faulted at that time (as it is was outside the initial fetch range). But for some reason, the context still thinks the object is around, although is has been deleted from the store. Issue 3 - Deleting sections : When the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section???" error. I have worked around this by removing the "reloadSection" line from the NSFetchedResultsChangeMove: section and replacing it with "[tableView insertRowsAtIndexPaths...." This seems to work, but once again, I am not sure if this is the best solution. Any help would be greatly appreciated. Thank you!


  • How to clean up (destructor) a dynamic array of pointers?

    - by Ahmed Sharara
    Is that Destructor is enough or do I have to iterate to delete the new nodes?? #include "stdafx.h" #include<iostream> using namespace std; struct node{ int row; int col; int value; node* next_in_row; node* next_in_col; }; class MultiLinkedListSparseArray { private: char *logfile; node** rowPtr; node** colPtr; // used in constructor node* find_node(node* out); node* ins_node(node* ins,int col); node* in_node(node* ins,node* z); node* get(node* in,int row,int col); bool exist(node* so,int row,int col); //add anything you need public: MultiLinkedListSparseArray(int rows, int cols); ~MultiLinkedListSparseArray(); void setCell(int row, int col, int value); int getCell(int row, int col); void display(); void log(char *s); void dump(); }; MultiLinkedListSparseArray::MultiLinkedListSparseArray(int rows,int cols){ rowPtr=new node* [rows+1]; colPtr=new node* [cols+1]; for(int n=0;n<=rows;n++) rowPtr[n]=NULL; for(int i=0;i<=cols;i++) colPtr[i]=NULL; } MultiLinkedListSparseArray::~MultiLinkedListSparseArray(){ // is that destructor enough?? cout<<"array is deleted"<<endl; delete [] rowPtr; delete [] colPtr; }


  • How to effectively execute this cron job?

    - by Lost_in_code
    I have a table with 200 rows. I'm running a cron job every 10 minutes to perform some kind of insert/update operation on the table. The operation needs to be performed on only 5 rows at a time each time the cron job runs. So in the first 10 minutes records 1-5 are updated, records 6-10 in the 20th minute, and so on. When the cron job runs for the 40th time, every record in the table will have been updated exactly once. That, at least, is what needs to be achieved, and the next cron job should then repeat the process. The problem is that every time a cron job runs, the insert/update operation should be performed on N rows (not just 5 rows). So if N is 100, all records would have been updated by just 2 cron jobs, and the next cron job would repeat the process again. Here's an example. This is the table I currently have (200 records). Every time a cron job executes, it needs to pick N records (which I set as a variable in PHP) and update the time_md5 field with the MD5 value of the current time.
    +---------+----------------------------------+
    | id      | time_md5                         |
    +---------+----------------------------------+
    | 10      | 971324428e62dd6832a2778582559977 |
    | 72      | 1bd58291594543a8cc239d99843a846c |
    | 3       | 9300278bc5f114a290f6ed917ee93736 |
    | 40      | 915bf1c5a1f13404add6612ec452e644 |
    | 599     | 799671e31d5350ff405c8016a38c74eb |
    | 56      | 56302bb119f1d03db3c9093caf98c735 |
    | 798     | 47889aa559636b5512436776afd6ba56 |
    | 8       | 85fdc72d3b51f0b8b356eceac710df14 |
    | ..      | .......                          |
    | ..      | .......                          |
    | ..      | .......                          |
    | ..      | .......                          |
    | 340     | 9217eab5adcc47b365b2e00bbdcc011a |  <-- 200th record
    +---------+----------------------------------+
    So the first record (id 10) should not be updated more than once until all 200 records have been updated once; the process should start over once all the records have been updated. I have some idea of how this could be achieved, but I'm sure there are more efficient ways of doing it. Any suggestions?
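    One way to make the batch self-selecting in SQL is to let MySQL pick the N least-recently-stamped rows in a single statement. This is only a sketch: it assumes an extra batch_updated_at column (not in the asker's table) and a placeholder table name, with N baked into the statement because UPDATE ... LIMIT takes a literal:

    ```sql
    -- Assumed one-off schema change: remember when each row was last processed.
    ALTER TABLE my_table ADD COLUMN batch_updated_at DATETIME NULL;

    -- Run by the cron job: stamp the N rows processed longest ago. Never-processed rows
    -- (NULL) sort first in ascending order, so every row is hit once per full cycle.
    UPDATE my_table
    SET time_md5 = MD5(NOW()),
        batch_updated_at = NOW()
    ORDER BY batch_updated_at, id
    LIMIT 100;   -- N, generated into the statement by the PHP cron script
    ```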


  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns: ID, Store_ID, Feature_ID, Order_ID, Viewed_Date, Deal_ID, IsTrial. The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs to see the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of: select count(viewed_date) from theTable where viewed_date between '2009-12-01' and '2010-12-31' and store_id = '2' and feature_id = '12' and Istrial = 0 In MSSQL you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a bigger improvement than this. Right now I have more than 4 million rows. To answer a query like the one above, it looks at 3.5 million rows in order to give me a count of 500k rows. P.S. I forgot to add the viewed_date filter to the query originally; I have done this now.
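    MySQL (at least in the versions current when this was asked) has no filtered indexes, but a composite index whose leading columns match the equality predicates and whose last column covers the date range lets this query be resolved almost entirely from the index. A sketch using the question's column names (the index name is arbitrary):

    ```sql
    -- Equality columns first (store_id, feature_id, Istrial), range column last (viewed_date).
    ALTER TABLE theTable
        ADD INDEX ix_store_feature_trial_viewed (store_id, feature_id, Istrial, viewed_date);

    -- The reported query can then seek directly to the matching index range:
    SELECT COUNT(viewed_date)
    FROM theTable
    WHERE store_id = '2'
      AND feature_id = '12'
      AND Istrial = 0
      AND viewed_date BETWEEN '2009-12-01' AND '2010-12-31';
    ```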


  • MySQL PHP | "SELECT FROM table" using "alphanumeric"-UUID. Speed vs. Indexed Integer / Indexed Char

    - by dropson
    At the moment, I select rows from 'table01' using: SELECT * FROM table01 WHERE UUID = 'whatever'; The UUID column has a unique index. I know this isn't the fastest way to select data from the database, but the UUID is the only row identifier available to the front-end. Since I have to select by UUID and not by ID, I need to know which of these two options I should go for if, say, the table consists of 100,000 rows. What speed differences would I be looking at, and would the index on the UUID grow too large and slow down the DB? Option 1: get the ID first, then do the "big" select: 1. $id = "SELECT ID FROM table01 WHERE UUID = '{alphanumeric character}'"; 2. $rows = SELECT * FROM table01 WHERE ID = $id; Option 2: keep it the way it is now, using the UUID: 1. SELECT * FROM table01 WHERE UUID = '{alphanumeric character}'; Side note: all new rows are created by checking whether the system-generated unique id already exists before trying to insert a new row, keeping the column always unique. The example table: CREATE TABLE Table01 ( ID int NOT NULL PRIMARY KEY, UUID char(15), name varchar(100), url varchar(255), `date` datetime ) ENGINE = InnoDB; CREATE UNIQUE INDEX UUID ON Table01 (UUID);
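    Whichever option is chosen, the lookup stays a single-row read through a unique index, so the main difference between the two is the extra round trip in the two-step version; comparing the plans makes that visible. A sketch against the example table above (the literal values are made up):

    ```sql
    -- Both should show a one-row lookup via a unique index (UUID vs PRIMARY).
    -- The UUID index is larger on disk (char(15) vs a 4-byte int), but the
    -- per-lookup cost is comparable at this table size.
    EXPLAIN SELECT * FROM Table01 WHERE UUID = 'abc123def4567gh';
    EXPLAIN SELECT * FROM Table01 WHERE ID = 42;
    ```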


  • How to store sorted records in a CSV file?

    - by Harikrishna
    I sort the records of the datatable datewise with the column TradingDate which is type of datetime. TableWithOnlyFixedColumns.DefaultView.Sort = "TradingDate asc"; Now I want to display these sorted records into csv file but it does not display records sorted by date. TableWithOnlyFixedColumns.DefaultView.Sort = "TradingDate asc";TableWithOnlyFixedColumns.Columns["TradingDate"].ColumnName + "] asc"; DataTable newTable = TableWithOnlyFixedColumns.Clone(); newTable.DefaultView.Sort = TableWithOnlyFixedColumns.DefaultView.Sort; foreach (DataRow oldRow in TableWithOnlyFixedColumns.Rows) { newTable.ImportRow(oldRow); } // we'll use these to check for rows with nulls var columns = newTable.DefaultView.Table.Columns.Cast<DataColumn>(); using (var writer = new StreamWriter(@"C:\Documents and Settings\Administrator\Desktop\New Text Document (3).csv")) { for (int i = 0; i < newTable.DefaultView.Table.Rows.Count; i++) { DataRow row = newTable.DefaultView.Table.Rows[i]; // check for any null cells if (columns.Any(column => row.IsNull(column))) continue; string[] textCells = row.ItemArray .Select(cell => cell.ToString()) // may need to pick a text qualifier here .ToArray(); // check for non-null but EMPTY cells if (textCells.Any(text => string.IsNullOrEmpty(text))) continue; writer.WriteLine(string.Join(",", textCells)); } } So how to store sorted records in csv file ?


  • How can I get the search parameters from jqgrid in the server side?

    - by Jack
    I've been visiting this forum a lot without registering for several months, and I really like it. So, thanks in advance to all the members. Now I'd like to ask my first question. I've been using jqGrid for a little while, and I've managed to have it display rows and buttons, but now I need to do a search - a complex one - and I thought that jqGrid would "automatically" send the search parameters to the server, I mean: sField, searchField, sOper, searchOper, sValue, searchString, sFilter and/or filters. I'm not sure which ones it has to send, and I thought it would work just the same way it sends 'page', 'rows' and 'sord'. But I'm missing something, because, for example, I can get 'page', 'rows' and 'sord' using: $limit = $this->getRequest()->getParam('rows', 10); but I get nothing by using: $params = $_REQUEST['filters'] or $params = $this->getRequest()->getParam('sFilter'); I'm using PHP, Zend and JSON. I didn't post any code because my question is fairly generic, but I will post it if needed. I've searched a lot and read the documentation, but I just don't see it. I will appreciate your help. Thanks!



  • How to remove rows for empty fields in a grouped table view on iPhone?

    - by Pugal Devan
    Hi friends, I display data in a grouped table view; the data comes from XML parsing. I have two sections in the table view: section one has three rows and section two has two rows. section 1 -> 3 rows, section 2 -> 2 rows. Now I want to check whether any of the strings is empty, and if so remove the empty cells, but I have run into some problems: if I remove an empty cell, the index numbers change. So how can I check whether any of the fields is empty? Sometimes a larger number of empty fields will come through, so the index positions will change. Could you send me any sample code or a link for this? How can I achieve it? Sample code: - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (section == 0) { if([userEmail isEqualToString:@" "] || [phoneNumber isEqualToString:@" "] || [firstName isEqualToString:@" "]) { return 2; } else { return 3; } } if (section == 1) { if(![gradYear isEqualToString:@" "] || ![graduate isEqualToString:@" "]) { return 1; } else { return 2; } return 0; } Please help me out! Thanks.


  • Updating Cells in a DataTable

    - by Maxim Z.
    I'm writing a small app to do a little processing on some cells in a CSV file I have. I've figured out how to read and write CSV files with a library I found online, but I'm having trouble: the library parses CSV files into a DataTable, but, when I try to change a cell of the table, it isn't saving the change in the table! Below is the code in question. I've separated the process into multiple variables and renamed some of the things to make it easier to debug for this question. Code Inside the loop: string debug1 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString(); string debug2 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString().Trim(); string debug3 = readIn.Rows[i].ItemArray[numColumnToCopyFrom].ToString().Trim(); string towrite = debug2 + ", " + debug3; readIn.Rows[i].ItemArray[numColumnToCopyTo] = (object)towrite; After the loop: readIn.AcceptChanges(); When I debug my code, I see that towrite is being formed correctly and everything's OK, except that the row isn't updated: why isn't it working? I have a feeling that I'm making a simple mistake here: the last time I worked with DataTables (quite a long time ago), I had similar problems. If you're wondering why I'm adding another comma in towrite, it's because I'm combining a street address field with a zip code field - I hope that's not messing anything up. My code is kind of messy, as I'm only trying to edit one file to make a small fix, so sorry.


  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I can provide an example, but I don't think it's needed to make my point. Anyone experienced with SQL will immediately know what I'm talking about. Usually we use joins to minimize the number of records due to matching the left and right rows. However, under certain conditions, joining tables cause a multiplication of results where the result is all permutations of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder. I have two questions. The first is, is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second is, is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part of the query, using something like: WITH t1 AS ( SELECT DISTINCT... [or GROUP BY] ), t2 AS ( SELECT DISTINCT... ), t3 AS ( SELECT DISTINCT... ) SELECT... I have often seen the use of DISTINCT applied to subqueries. There is obviously a reason for doing this. However, I'm talking about something a little different and perhaps more subtle and tricky.
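    For concreteness, the pattern being asked about has this shape; the tables and columns below are invented purely for illustration, with each CTE collapsed to distinct rows before the join can multiply anything:

    ```sql
    WITH t1 AS (
        SELECT DISTINCT OrderID, CustomerID        -- or GROUP BY, as in the question
        FROM dbo.OrderLines
    ),
    t2 AS (
        SELECT CustomerID, SUM(CreditLimit) AS TotalCredit
        FROM dbo.CustomerAccounts
        GROUP BY CustomerID
    )
    SELECT t1.OrderID, t2.TotalCredit
    FROM t1
    JOIN t2 ON t2.CustomerID = t1.CustomerID;
    ```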


  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that get user records. The user records are that he as src ou dst. The src is in usermuralentry directly, the dst list are in usermuralentry_user. So, a entry can have one src and many dst. I have those tables: mysql> desc usermuralentry ; +-----------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | user_src_id | int(11) | NO | MUL | NULL | | | private | tinyint(1) | NO | | NULL | | | content | longtext | NO | | NULL | | | date | datetime | NO | | NULL | | | last_update | datetime | NO | | NULL | | +-----------------+------------------+------+-----+---------+----------------+ 10 rows in set (0.10 sec) mysql> desc usermuralentry_user ; +-------------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------------+---------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | usermuralentry_id | int(11) | NO | MUL | NULL | | | userinfo_id | int(11) | NO | MUL | NULL | | +-------------------+---------+------+-----+---------+----------------+ 3 rows in set (0.00 sec) And the following query to retrieve information from two users. mysql> explain SELECT * FROM usermuralentry AS a , usermuralentry_user AS b WHERE a.user_src_id IN ( 1, 2 ) OR ( a.id = b.usermuralentry_id AND b.userinfo_id IN ( 1, 2 ) ); +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ | 1 | SIMPLE | b | ALL | usermuralentry_id,usermuralentry_user_bcd7114e,usermuralentry_user_6b192ca7 | NULL | NULL | NULL | 147188 | | | 1 | SIMPLE | a | ALL | PRIMARY | NULL | NULL | NULL | 1371289 | Range checked for each record (index map: 0x1) | +----+-------------+-------+------+-------------------------------------------------------------------------------------------+------+---------+------+---------+------------------------------------------------+ 2 rows in set (0.00 sec) but it is taking A LOT of time... Some tips to optimize? Can the table schema be better in my application?
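    One thing the EXPLAIN output already hints at ("Range checked for each record") is that the OR mixes a filter on a with the join condition to b, so no single index can satisfy the whole predicate. A common rewrite is to split the two cases into a UNION so each branch can use its own index; this is only a sketch of that idea, assuming the intent is "entries where the user is the source, plus entries where the user is a destination":

    ```sql
    SELECT a.*
    FROM usermuralentry AS a
    WHERE a.user_src_id IN (1, 2)

    UNION                               -- UNION rather than UNION ALL, to drop duplicates

    SELECT a.*
    FROM usermuralentry AS a
    JOIN usermuralentry_user AS b
      ON b.usermuralentry_id = a.id
    WHERE b.userinfo_id IN (1, 2);
    ```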


  • JQuery Control Update Not Happening

    - by Mad Halfling
    Hi, I've got a script that disables a button input control, empties a table body and then (after an AJAX call) re-populates it. It all works fine, but sometimes there are a lot of rows in the table, so the browser (IE) takes a while to empty and refill it. The strange thing is that while the rows are being emptied, the button still appears to be enabled; however, if I put an alert between the button being disabled and the tbody being emptied, the button works properly, disabling visibly before the alert comes up. Is there any way I can get the button to update before the resource-consuming table-emptying process/command commences? Thx, MH. Code sample, as requested (it's not complex, so I didn't initially include it): $('#Search').attr('disabled', true); $('#StatusSpan').empty(); $('#DisplayTBody').empty(); Then I perform my AJAX call, re-enable the button and repopulate the table. As I mentioned, normally this is really quick and isn't a problem, but if there are, say, 1500 rows in the table it takes a while to clear down, and the 'Search' button doesn't update on screen. If I put an alert after the .attr('disabled', true) line, the button visibly updates while the alert box is up; without it, the button doesn't visibly disable until after the table clears (about 3 or 4 seconds with 1500 rows), and just stays in its down/"mid-press" state. I don't have a problem with the time the browser takes to render the table changes - that's just life - but I need the users to see visible feedback so they know the search has started.


  • How can I add data to the last row/column of dataGridView1?

    - by user3681442
    At the top of Form1 I have: private System.Timers.Timer _refreshTimer; private int _thisProcess; Then in the Form1 Load event: _thisProcess = Process.GetCurrentProcess().Id; InitializeRefreshTimer(); PopulateApplications(); Then the timer init method: void InitializeRefreshTimer() { _refreshTimer = new System.Timers.Timer(5000); _refreshTimer.SynchronizingObject = this; _refreshTimer.Elapsed += new System.Timers.ElapsedEventHandler(TimerToUpdate_Elapsed); _refreshTimer.Start(); } Then the timer elapsed event: void TimerToUpdate_Elapsed(object sender, System.Timers.ElapsedEventArgs e) { PopulateApplications(); } Finally, the Populate method: void PopulateApplications() { dataGridView1.Rows.Clear(); foreach (Process p in Process.GetProcesses(".")) { if (p.Id != _thisProcess) { try { if (p.MainWindowTitle.Length > 0) { String status = p.Responding ? "Running" : "Not Responding"; dataGridView1.Rows.Add( p.MainWindowTitle, status); } } catch { } } } } The status variable shows up in column 2, but let's say I want the status to be displayed for each process/app in column 5 instead. How can I move it? EDIT: I tried this: void PopulateApplications() { dataGridView1.Rows.Clear(); foreach (Process p in Process.GetProcesses(".")) { if (p.Id != _thisProcess) { try { if (p.MainWindowTitle.Length > 0) { var icon = Icon.ExtractAssociatedIcon(p.MainModule.FileName); Image ima = icon.ToBitmap(); img.Image = ima; img.HeaderText = "Image"; img.Name = "img"; String status = p.Responding ? "Running" : "Not Responding"; dataGridView1.Rows.Add(img, p.MainWindowTitle, status); } } catch { } } } } I moved the img variable to the top of the form. The problem is that in each row I see this: DataGridViewImageColumn { Name=img, Index=-1 } and I don't see the icon itself. Why?


  • Reporting Services - It's a Wrap!

    - by smisner
    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download. First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or output to page-based renderer like PDF or Word) will be rendered such that the first page with the row group headers and the inital set of columns, as shown in Figure 1. The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2.This pattern continues until all columns are rendered. The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders of the tablix report item and changing its value to True. The second (and subsequent pages) of the matrix now look like the image shown in Figure 3. The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another. For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables: SELECT TOP (100) e.LastName + ', ' + e.FirstName AS EmployeeName, d.FullDateAlternateKey, f.SalesOrderNumber, p.EnglishProductName, sum(SalesAmount) as SalesAmount FROM FactResellerSales AS f INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey GROUP BY p.EnglishProductName, d.FullDateAlternateKey, e.LastName + ', ' + e.FirstName, f.SalesOrderNumber ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName To start the report: Add a matrix to the report body and drag Employee Name to the row header, which also creates a group. 
Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4. Now for some trickiness. Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right. Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data. Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber]. Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5. Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps: Click on the Rows label which then displays RowGroup in the Row Groups pane below the report body. Right-click on RowGroup,click Delete Group, and select the option to delete associated rows and columns. As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue: Drag EnglishProductName to the data textbox (below the line). Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox. Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box. Now you're ready for the fit and finish phase: Resize the column containing the embedded matrix so that it fits completely. Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear. Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName]. Replace the expression with the following: =RowNumber("SalesOrderNumber" ). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix. The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber. 
Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3) The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns.


  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but before, it needs more information than the mere existence of a new set of values, it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us a Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and that it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
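    If you want to see the two physical aggregates side by side in T-SQL before comparing them with the SSIS component, a pair of queries along these lines usually does it (AdventureWorks-style names, so treat them as an assumption, and the optimizer is of course free to choose differently):

    ```sql
    -- Grouping on a column with a supporting nonclustered index: ordered input is
    -- available, so a (non-blocking) Stream Aggregate is the likely plan.
    SELECT ProductID, COUNT(*) AS OrderLines
    FROM Sales.SalesOrderDetail
    GROUP BY ProductID;

    -- Grouping on a column with no helpful index: expect the blocking Hash Match
    -- (Aggregate), or a Sort feeding a Stream Aggregate if the optimizer prefers that.
    SELECT CarrierTrackingNumber, COUNT(*) AS OrderLines
    FROM Sales.SalesOrderDetail
    GROUP BY CarrierTrackingNumber;
    ```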

