Search Results

Search found 7311 results on 293 pages for 'rows'.


  • UpdatePanel Javascript Error on Adding Clientside Listbox items After postback

    - by DBMaster
    Hi, I have a dynamic list control (Metabuilder.Webcontrol) inside an UpdatePanel, and I am adding and removing items in the list control using JavaScript. That works fine inside the UpdatePanel. I also have another control, a GridView with checkboxes, which requires a postback to get populated. Once it is populated successfully inside the UpdatePanel (a partial postback), I check a few rows and want to add them to the list control using JavaScript, but it says "object doesn't support this property or method".

        function addItmList(idv,valItem) {
            var list = document.getElementById('ctl00_ContentPlaceHolder1_MyList');
            //var generatedName = "newItem" + ( list.options.length + 1 );
            list.Add(idv,valItem);
        }
        function checkitemvalues() {
            var gvET = document.getElementById("ctl00_ContentPlaceHolder1_grd");
            var target = document.getElementById('ctl00_ContentPlaceHolder1_lstIControl');
            var newOption = window.document.createElement('OPTION');
            var rCount = gvET.rows.length;
            var rowIdx = 0;
            var tcount = 1;
            for (rowIdx; rowIdx<=rCount-1; rowIdx++) {
                var rowElement = gvET.rows[rowIdx];
                var chkBox = rowElement.cells[0].firstChild;
                var cod = rowElement.cells[1].innerText;
                var desc = rowElement.cells[2].innerText;
                if (chkBox.checked == true){
                    addItmList(rowElement.cells[1].innerText,rowElement.cells[2].innerText);
                }
            }
        }

    Code behind:

        ScriptManager.RegisterClientScriptBlock( Me.Page, Me.GetType(), MyList.ClientID, "" & vbCr & vbLf & "window.mylistid='" + MyList.ClientID + "';" & vbCr & vbLf & "", True )

    Remember, my code itself works fine; it just cannot maintain the state of the list control after the partial postback, which is why it says "Object required". Can anyone help me out? After the UpdatePanel refresh, why doesn't my JavaScript add items into the ListBox? Thanks in advance.
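
    A possible direction, sketched here as an illustration rather than taken from the original post: a plain rendered select element has no Add(value, text) method, and a DOM reference cached before the UpdatePanel refresh goes stale afterwards. The sketch below uses the standard options API and re-resolves the element on every call (the element ID and the window.mylistid variable come from the question; everything else is an assumption).

        // Illustrative sketch only: use the options collection of the rendered <select>
        // and look the element up on every call, so a fresh reference is used after
        // each UpdatePanel refresh.
        function addItmList(idv, valItem) {
            var list = document.getElementById(window.mylistid || 'ctl00_ContentPlaceHolder1_MyList');
            if (!list) { return; }                       // control not rendered yet
            var opt = document.createElement('option');  // build the new entry
            opt.value = idv;                             // value = code column
            opt.text = valItem;                          // text  = description column
            list.options.add(opt);                       // widely supported, including IE
        }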

  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload and watch videos. Last week I asked what is the best way to track video views so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing a query with which I get the videos from the database. The relevant tables are these:

    video (~239371 rows): VID(int), UID(int), title(varchar), status(enum), type(varchar), is_duplicate(enum), is_adult(enum), channel_id(tinyint)
    signup (~115440 rows): UID(int), username(varchar)
    videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly): videos_id(int), views_date(date), num_of_views(int)

    The table video holds the videos, signup holds users and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but it takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size.

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber, v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM video v
        LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id
        LEFT JOIN signup s on v.UID = s.UID
        WHERE v.status = 'Converted' AND v.type = 'public' AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10 AND vvt.views_date >= '2001-05-11'
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8

    Here is a screenshot of the EXPLAIN result (not reproduced here). So, how can I optimize this?
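
    A possible direction, offered as an illustrative sketch rather than a confirmed answer: because the WHERE clause filters on vvt.views_date, the LEFT JOIN to videos_views already behaves like an INNER JOIN, and a composite index on videos_views covering the join key, the date filter and the summed column lets MySQL aggregate per video without touching the base table. The index name below is invented; the effect should be verified with EXPLAIN on real data.

        -- Illustrative sketch only; verify with EXPLAIN before relying on it.
        ALTER TABLE videos_views
          ADD INDEX idx_vv_video_date_views (videos_id, views_date, num_of_views);

        -- Writing the effective INNER JOIN explicitly gives the optimizer more freedom
        -- to start from the (smaller, date-filtered) videos_views side.
        SELECT v.VID, v.title, s.username, SUM(vvt.num_of_views) AS tmp_num
        FROM videos_views vvt
        INNER JOIN video v ON v.VID = vvt.videos_id
        LEFT JOIN signup s ON s.UID = v.UID
        WHERE vvt.views_date >= '2001-05-11'
          AND v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10
        GROUP BY vvt.videos_id
        ORDER BY tmp_num DESC
        LIMIT 8;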

  • adding header row to gridview won't allow you to save the last item row

    - by Lex
    I've modified my GridView to have an extra Header row, however that extra row has caused my grid view row count to be incorrect. Basically, when I want to save the Gridview now, it doesn't recognize the last item line. In my current test I have 5 Item Lines, however only 4 of them are being saved. My code for creating the additional header Line: protected void gvStatusReport_RowDataBound(object sender, GridViewRowEventArgs e) { // This grid has multiple rows, fake the top row. if (e.Row.RowType == DataControlRowType.Header) { SortedList FormatCells = new SortedList(); FormatCells.Add("1", ",6,1"); FormatCells.Add("2", "Time Spent,7,1"); FormatCells.Add("3", "Total,2,1"); FormatCells.Add("4", ",6,1"); GetMultiRowHeader(e, FormatCells); } } The function to create a new row: public void GetMultiRowHeader(GridViewRowEventArgs e, SortedList GetCels) { GridViewRow row; IDictionaryEnumerator enumCels = GetCels.GetEnumerator(); row = new GridViewRow(-1, -1, DataControlRowType.Header, DataControlRowState.Normal); while (enumCels.MoveNext()) { string[] count = enumCels.Value.ToString().Split(Convert.ToChar(",")); TableCell cell = new TableCell(); cell.RowSpan = Convert.ToInt16(count[2].ToString()); cell.ColumnSpan = Convert.ToInt16(count[1].ToString()); cell.Controls.Add(new LiteralControl(count[0].ToString())); cell.HorizontalAlign = HorizontalAlign.Center; row.Cells.Add(cell); } e.Row.Parent.Controls.AddAt(0, row); } Then when I'm going to save, I loop through the rows: int totalRows = gvStatusReport.Rows.Count; for (int rowNumber = 0; rowNumber < totalRows; rowNumber++) { However the first line doesn't seem to have any of the columns that the item row has, and the last line doesn't even show up. My problem is that I do need the extra header line, but what is the best way to fix this?

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and write the result into a CSV file; the table has around 90000 rows of data. My question is: if I use PDOStatement->fetch as I am using it here:

        // First, prepare the statement, using placeholders
        $query = "SELECT * FROM tableName";
        $stmt = $this->connection->prepare($query);
        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    will the result of every fetch be kept in memory? Meaning, when I do the second fetch, does memory still hold the data of the first fetch as well as the second? So if I have 90000 rows of data and fetch them one at a time, does memory keep accumulating each new fetch result without releasing the previous ones, so that by the last fetch it already holds 89999 rows? Is this how PDOStatement::fetch works? Performance-wise, how does it stack up against PDOStatement::fetchAll?
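
    For what it is worth, a hedged sketch of the streaming case, assuming the pdo_mysql driver: by default pdo_mysql runs buffered queries, so the whole result set is copied into PHP memory at execute() time regardless of whether you then call fetch() or fetchAll(); fetch() only walks that buffer one row at a time, while fetchAll() additionally builds one big PHP array on top of it. Turning buffering off keeps only the current row in PHP, which suits a 90000-row CSV export. The connection details and file name below are placeholders.

        <?php
        // Illustrative sketch, assuming the pdo_mysql driver.
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        // Unbuffered: the server keeps the result set, PHP holds one row at a time.
        $pdo->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

        $stmt = $pdo->prepare('SELECT * FROM tableName');
        $stmt->execute();

        $out = fopen('export.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {   // one row in memory at a time
            fputcsv($out, $row);
        }
        fclose($out);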

  • SQL Server Compact timed out waiting for a lock

    - by jankhana
    Hi all, I have an application in which I use SQL Compact 3.5 with VS2008. I run multiple threads that contact the Compact database and access rows: each thread selects five rows, hands them to the application, and then deletes those rows from the table. It works great with a single thread, but if 3 or more threads are running I very often get a timeout error. I have increased the timeout property in the connection string, but it didn't give the expected result. The error log is as follows:

        SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops.
        The default lock timeout can be increased in the connection string using the ssce: default lock timeout property.
        [ Session id = 5,Thread id = 4204,Process id = 4808,Table name = XXX,Conflict type = x lock (s blocks),Resource = TAB ]

    The query that I use to retrieve is as follows:

        select Top(5) * from TableName order by id;
        delete from TableName where id in(select top(5) id from TableName order by id);

    Is there any way we can avoid this timeout exception? I run the above as a transaction in VS2008, one part using SqlCeCommand and the other using SqlCeDataAdapter. Any idea?
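
    One hedged avenue, based only on the property named in the error message: raise the lock timeout in the connection string (some samples spell it "ssce:default lock timeout", others just "default lock timeout"), and keep each thread's select + delete inside one short transaction so the table lock is released quickly. The database path and the 10-second value below are illustrative.

        // Illustrative sketch: property name taken from the error message,
        // path and value are examples only.
        using System.Data.SqlServerCe;

        var connStr = "Data Source=|DataDirectory|\\MyDb.sdf;" +
                      "default lock timeout=10000;";   // milliseconds; desktop default is 5000

        using (var conn = new SqlCeConnection(connStr))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // keep the select + delete as short as possible so the TAB lock is
                // held briefly and competing threads time out less often
                // ... SqlCeCommand / SqlCeDataAdapter work here ...
                tx.Commit();
            }
        }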

  • XML Parsing Error when serving a PDF

    - by Andy
    I'm trying to serve a pdf file from a database in ASP.NET using an Http Handler, but every time I go to the page I get an error XML Parsing Error: no element found Location: https://ucc489/rc/NoteFileHandler.ashx?noteId=1,msdsId=3 Line Number 1, Column 1: ^ Here is my HttpHandler code: public class NoteFileHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { if (context.Request.QueryString.HasKeys()) { if (context.Request.QueryString["noteId"] != null && context.Request.QueryString["msdsId"] != null) { string nId = context.Request.QueryString["noteId"]; string mId = context.Request.QueryString["msdsId"]; DataTable noteFileDt = App_Models.Notes.GetNoteFile(nId, mId); if (noteFileDt.Rows.Count > 0) { try { context.Response.Clear(); context.Response.AddHeader("content-disposition", "attachment;filename=" + noteFileDt.Rows[0][0] + ".pdf"); context.Response.ContentType = "application/pdf"; byte[] file = (byte[])noteFileDt.Rows[0][1]; context.Response.BinaryWrite(file); context.Response.End(); } catch { context.Response.ContentType = "text/plain"; context.Response.Write("File Not Found"); context.Response.StatusCode = 404; } } } } } public bool IsReusable { get { return false; } } } Is there anything else I need to do (server configuration/whatever) to get my pdf file to load?

  • Flipping View transition becomes clunky when using UIPickerViews with a large number of elements

    - by user302246
    I have a very simple app created with the Utility Application project template in Xcode. My MainView has two UIPickerView components and two buttons. The FlipSideView has another UIPickerView. The pickers on the main view each have 4 segments and each segment has 8 rows. The picker on the flip side has just 1 segment with 8 rows. All rows on all pickers are just text. With just this setup, pressing the button to flip the view back and forth displays a noticeable delay before the animation actually starts, and then the animation actually seems to go faster than it should, like it's trying to make up for the lost time. I removed the pickers in Interface Builder and loaded the app on the phone, and the animation now seems natural. I also tried just one picker (the flipside one) and things still seem normal. So my current theory is that the number of objects involved in the main view is the cause. The thing is that I don't think it's that many (4 x 8 x 2 = 64), but I could be completely wrong. This is pretty much my first app, so maybe I'm just doing something grossly wrong, or maybe the phone has a lot more limited processing than I thought. I am thinking of creating the picker views with pickerView:viewForRow:forComponent:reusingView: to see if this hopefully performs better, but I'm not sure if this is just a waste of time. Any suggestions? Thanks, Ruy. P.S.: Testing on a 3G phone on 3.1.2

  • MySQL: order by and limit gives wrong result

    - by Larry K
    MySQL ver 5.1.26. I'm getting the wrong result with a select that has WHERE, ORDER BY and LIMIT clauses. It's only a problem when the ORDER BY uses the id column. I saw the MySQL manual for LIMIT Optimization. My guess from reading the manual is that there is some problem with the index on the primary key, id. But I don't know where I should go from here... Question: what should I do to best solve the problem?

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC ;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.00 sec)

    WRONG result when limit added! Should be the first row, id = 1336:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        1 row in set (0.00 sec)

    Works correctly:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC ;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        | 1334 | 2010-05-06 08:05:25 |
        | 1331 | 2010-05-05 23:18:11 |
        +------+---------------------+
        3 rows in set (0.01 sec)

    Works correctly with limit:

        mysql> SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY created_at DESC limit 1;
        +------+---------------------+
        | id   | created_at          |
        +------+---------------------+
        | 1336 | 2010-05-14 08:05:25 |
        +------+---------------------+
        1 row in set (0.01 sec)

    Additional info:

        explain SELECT id, created_at FROM billing_invoices WHERE (billing_invoices.account_id = 5) ORDER BY id DESC limit 1;
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        | id | select_type | table            | type  | possible_keys                        | key                                  | key_len | ref  | rows | Extra       |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
        |  1 | SIMPLE      | billing_invoices | range | index_billing_invoices_on_account_id | index_billing_invoices_on_account_id | 4       | NULL | 3    | Using where |
        +----+-------------+------------------+-------+--------------------------------------+--------------------------------------+---------+------+------+-------------+
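
    For illustration only: this behaviour matches known wrong-result bugs in some 5.1.x releases where ORDER BY ... LIMIT combined with a range index lets the optimizer stop too early, so the first thing to try is a newer 5.1.x server. As a stop-gap you can take the index choice out of the optimizer's hands; whether this helps on 5.1.26 specifically would need to be verified.

        -- Illustrative workaround, not a root-cause fix (upgrading is):
        -- force the primary key so the DESC scan really walks ids in order.
        SELECT id, created_at
        FROM billing_invoices FORCE INDEX (PRIMARY)
        WHERE account_id = 5
        ORDER BY id DESC
        LIMIT 1;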

  • When are SQL views appropriate in ASP.net MVC?

    - by sslepian
    I've got a table called Protocol, a table called Eligibility, and a Protocol_Eligibilty table that maps the two together (a many to many relationship). If I wanted to make a perfect copy of an entry in the Protocol table, and create all the needed mappings in the Protocol_Eligibility table, would using an SQL view be helpful, from a performance standpoint? Protocol will have around 1000 rows, Eligibility will have about 200, and I expect each Protocol to map to about 10 Eligibility rows and each Eligibility to map to over 100 rows in Protocol. Here's how I'm doing this with the view: var pel_original = (from pel in _documentDataModel.Protocol_Eligibility_View where pel.pid == id select pel); Protocol_Eligibility newEligibility; foreach (var pel_item in pel_original) { newEligibility = new Protocol_Eligibility(); newEligibility.Eligibility = (from pel in _documentDataModel.Eligibility where pel.ID == pel_item.eid select pel).First(); newEligibility.Protocol = newProtocol; newEligibility.ordering = pel_item.ordering; _documentDataModel.AddToProtocol_Eligibility(newEligibility); } And this is without the view: var pel_original = (from pel in _documentDataModel.Protocol_Eligibility where pel.Protocol.ID == id select pel); Protocol_Eligibility newEligibility; foreach (var pel_item in pel_original) { pel_item.EligibilityReference.Load(); newEligibility = new Protocol_Eligibility(); newEligibility.Eligibility = pel_item.Eligibility; newEligibility.Protocol = newProtocol; newEligibility.ordering = pel_item.ordering; _documentDataModel.AddToProtocol_Eligibility(newEligibility); }

  • How to extract a 2x2 submatrix from a bigger matrix

    - by ZaZu
    Hello, I am a very basic user and do not know much about the commands used in C, so please bear with me... I can't use very complicated code. I have some knowledge of the stdio.h and ctype.h libraries, but that's about it. I have a matrix in a txt file and I want to load the matrix based on my input of number of rows and columns. For example, I have a 5 by 5 matrix in the file and I want to extract a specific 2 by 2 submatrix; how can I do that? I created a nested loop using:

        FILE *sample
        sample=fopen("randomfile.txt","r");
        for(i=0;i<rows;i++){
            for(j=0;j<cols;j++){
                fscanf(sample,"%f",&matrix[i][j]);
            }
            fscanf(sample,"\n",&matrix[i][j]);
        }
        fclose(sample);

    Sadly the code does not work. If I have this matrix:

        5.00 4.00 5.00   6.00
        5.00 4.00 3.00   25.00
        5.00 3.00 4.00   23.00
        5.00 2.00 352.00 6.00

    and I input 3 for row and 3 for column, I get:

        5.00  4.00  5.00
        6.00  5.00  4.00
        3.00  25.00 5.00

    Not only is this not a 2 by 2 submatrix, but even if I wanted the first 3 rows and first 3 columns, it's not printing them correctly... I need to start at row 3 and col 3, then take the 2 by 2 submatrix! I should have ended up with:

        4.00   23.00
        352.00 6.00

    I heard that I can use fgets and sscanf to accomplish this. Here is my trial code:

        fgets(garbage,1,fin);
        sscanf(garbage,"\n");

    But this doesn't work either :( What am I doing wrong? Please help. Thanks!
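
    A minimal sketch, assuming the file simply holds whitespace-separated numbers and the full matrix size is known (the 4x4 values from the question are hard-coded here): read everything with fscanf, then copy the 2x2 block that starts at the requested row and column, so no fgets/sscanf juggling is needed. The file name and sizes are taken from the question; everything else is illustrative.

        /* Illustrative sketch: read a known-size matrix, then copy a 2x2 block. */
        #include <stdio.h>

        int main(void)
        {
            float m[4][4], sub[2][2];
            int startRow = 2, startCol = 2;   /* 0-based: "row 3, col 3" in the question */
            int i, j;

            FILE *sample = fopen("randomfile.txt", "r");
            if (sample == NULL) { printf("cannot open file\n"); return 1; }

            /* %f already skips spaces and newlines, so no special end-of-line handling */
            for (i = 0; i < 4; i++)
                for (j = 0; j < 4; j++)
                    fscanf(sample, "%f", &m[i][j]);
            fclose(sample);

            for (i = 0; i < 2; i++)
                for (j = 0; j < 2; j++)
                    sub[i][j] = m[startRow + i][startCol + j];

            for (i = 0; i < 2; i++)
                printf("%.2f %.2f\n", sub[i][0], sub[i][1]);
            return 0;
        }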

  • Custom rowsetClass does not return values

    - by Wesley
    Hi, I'm quite new to Zend and the database classes from it. I'm having problems mapping a Zend_Db_Table_Row_Abstract to my rows. The problem is that whenever I try to map it to a class (Job) that extends the Zend_Db_Table_Row_Abstract class, the database data is not receivable anymore. I'm not getting any errors, trying to get data simply returns null. Here is my code so far: Jobs: class Jobs extends Zend_Db_Table_Abstract { protected $_name = 'jobs'; protected $_rowsetClass = "Job"; public function getActiveJobs() { $select = $this->select()->where('jobs.jobDateOpen < UNIX_TIMESTAMP()')->limit(15,0); $rows = $this->fetchAll($select); return $rows; } } Job: class Job extends Zend_Db_Table_Row_Abstract { public function getCompanyName() { //Gets the companyName } } Controller: $oJobs = new Jobs(); $aActiveJobs = $oJobs->getActiveJobs(); foreach ($aActiveJobs as $value) { var_dump($value->jobTitle); } When I remove the "protected $_rowsetClass = "Job";" line, so that the table row is not mapped to my own class, I get all the jobTitles perfectly. What am I doing wrong here? Thanks in advance, Wesley
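
    A likely explanation, offered as a sketch: in Zend_Db_Table, $_rowsetClass names the class used for the whole rowset (which must extend Zend_Db_Table_Rowset_Abstract), while a class extending Zend_Db_Table_Row_Abstract belongs in $_rowClass. Assuming nothing else is involved, the change would look like this:

        // Illustrative sketch: Job extends Zend_Db_Table_Row_Abstract, so it is a
        // row class, not a rowset class.
        class Jobs extends Zend_Db_Table_Abstract
        {
            protected $_name     = 'jobs';
            protected $_rowClass = 'Job';   // was: protected $_rowsetClass = "Job";

            public function getActiveJobs()
            {
                $select = $this->select()
                               ->where('jobs.jobDateOpen < UNIX_TIMESTAMP()')
                               ->limit(15, 0);
                return $this->fetchAll($select);   // rowset of Job instances
            }
        }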

  • ListBox Items Not Visible after DataBinding

    - by SidC
    Good Evening All, I am writing a page that allows users to search a parts table, select quantities and a listbox is to be populated with gridview values. Here's a snippet of my aspx page: <asp:ListBox runat="server" ID="lbItems" Width="155px"> <asp:ListItem></asp:ListItem> <asp:ListItem></asp:ListItem> <asp:ListItem></asp:ListItem> <asp:ListItem></asp:ListItem> <asp:ListItem></asp:ListItem> </asp:ListBox> Here's the relevant contents of codebehind: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load 'Define DataTable Columns as incoming gridview fields Dim dtSelParts As DataTable = New DataTable Dim dr As DataRow = dtSelParts.NewRow() dtSelParts.Columns.Add("PartNumber") dtSelParts.Columns.Add("NSN") dtSelParts.Columns.Add("PartName") dtSelParts.Columns.Add("Qty") 'Select those gridview rows that have txtQty <> 0 For Each row As GridViewRow In MySearch.Rows Dim textboxText As String = _ CType(row.FindControl("txtQty"), TextBox).Text If textboxText <> "0" Then 'Create the row dr = dtSelParts.NewRow() 'Fill the row with data dr("PartNumber") = MySearch.DataKeys(row.RowIndex)("PartNumber") dr("NSN") = MySearch.DataKeys(row.RowIndex)("NSN") dr("PartName") = MySearch.DataKeys(row.RowIndex)("PartName") 'Add the row to the table dtSelParts.Rows.Add(dr) End If Next 'Need to send items to Listbox control lbItems lbItems.DataSource = New DataView(dtSelParts) lbItems.DataValueField = "PartNumber" lbItems.DataValueField = "NSN" lbItems.DataValueField = "PartName" lbItems.DataBind() End Sub The page runs fine in that my search functiobnality is intact, and I can input quantity values into my gridview's textbox. When I click Add to Quote the Listbox receives focus, but no list items are visible. I've created several list items in the aspx page, however I don't know how to populate the contents of my datatable in the listbox. Can someone help a newbie with this, seemingly, easy issue? Thanks, Sid
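
    One thing that stands out, shown here as an illustrative sketch rather than a confirmed fix: DataValueField is assigned three times and DataTextField never, so each bound item ends up with empty text, which looks like an empty list even though the items exist. A ListBox shows one text field and carries one value field per item, so extra columns have to be folded into the display string (the format below is an assumption), and the empty placeholder ListItems can be cleared before binding:

        ' Illustrative sketch: DataTextField drives what the user sees, DataValueField
        ' what is posted back; combining columns into one display string is an assumption.
        For Each r As DataRow In dtSelParts.Rows
            r("PartName") = String.Format("{0} ({1})", r("PartName"), r("PartNumber"))
        Next

        lbItems.Items.Clear()                    ' drop the empty placeholder ListItems
        lbItems.DataSource = New DataView(dtSelParts)
        lbItems.DataTextField = "PartName"       ' shown in the list
        lbItems.DataValueField = "PartNumber"    ' value behind each entry
        lbItems.DataBind()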

  • facebook javascript api

    - by ngreenwood6
    I am trying to get my status from facebook using the javascript api. I have the following code: <div id="fb-root"></div> <div id="data"></div> <script src="http://connect.facebook.net/en_US/all.js"></script> <script type="text/javascript"> (function(){ FB.init({ appId : 'SOME ID', status : true, // check login status cookie : true, // enable cookies to allow the server to access the session xfbml : true // parse XFBML }); }); getData(); function getData(){ var query = FB.Data.query('SELECT message FROM status WHERE uid=12345 LIMIT 10'); query.wait(function(rows) { for(i=0;i<rows.length;i++){ document.getElementById('data').innerHTML += 'Your status is ' + rows[i].message + '<br />'; } }); } </script> When i try to get my name it works fine but the status is not working. Can someone please tell me what I am doing wrong because the documentation for this is horrible. And yes I replaced the uid with a fake one and yes i put in my app id because like i said when i try to get my name it works fine. Any help is appreciated.

  • Jquery set tr with empty td lower than with text in td

    - by PavelBY
    I have HTML and jQuery for sorting my table (there is also a non-standard sort, with multiple tbody elements). My code can be found here: http://jsfiddle.net/GuRxj/ As you can see there, the tds with prices (in Russian: ????) are sorted ascending (but tech-as is not!? why? that is a question too)... But as you see, I need to send the trs that have prices to the top of this tbody (now they are at the bottom), while the empty-price trs go to the bottom... How do I do this? Part of the JS:

        $('.prcol').click(function(e) {
            var $sort = this;
            var $table = $('#articles-table');
            var $rows = $('tbody.analogs_art > tr',$table);
            $rows.sort(function(a, b){
                var keyA = $('td:eq(3)',a).text();
                var keyB = $('td:eq(3)',b).text();
                if (keyA.length > 0 && keyB.length > 0) {
                    if($($sort).hasClass('asc')){
                        console.log("bbb");
                        return (keyA > keyB) ? 1 : 0;
                    } else {
                        console.log(keyA+"-"+keyB);
                        return (keyA > keyB) ? 1 : 0;
                    }
                }
            });
            $.each($rows, function(index, row){
                //console.log(row);
                $table.append(row);
                //$("table.123").append(row);
            });
            e.preventDefault();
        });
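
    For illustration, one way the comparator could be rewritten: the current function never returns -1, so the sort cannot actually move anything, and rows with an empty price fall through with an undefined return value. A sketch that compares the prices numerically and always sinks empty-price rows to the bottom (selectors copied from the question; parseFloat is an assumption about the cell contents):

        // Illustrative sketch: proper -1/0/1 comparator, empty prices sink to the bottom.
        $rows.sort(function (a, b) {
            var keyA = $('td:eq(3)', a).text();
            var keyB = $('td:eq(3)', b).text();
            if (keyA.length === 0 && keyB.length === 0) return 0;
            if (keyA.length === 0) return 1;    // a has no price: place it after b
            if (keyB.length === 0) return -1;   // b has no price: place it after a
            var numA = parseFloat(keyA), numB = parseFloat(keyB);
            if ($($sort).hasClass('asc')) return numA - numB;
            return numB - numA;
        });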

  • Dynamic checkboxlist

    - by Steve
    Hi, I am currently building a dynamic URL tab system where the user can say which URLs they want displayed on a tab page control. In the database I have the following columns: userID int, URLName varchar, Enabled bit. I am pulling the data back OK, but what I am trying to do is populate a checkbox list with the URL name and its status on a user options page, so users can say which tabs they want displayed. I have written the following methods to get the URLs and create the checkboxes, however I keep getting the following error: ex = {"InvalidArgument=Value of '1' is not valid for 'index'.\r\nParameter name: index"} It reads the first row OK, but when it hits the 2nd row being returned I get that error. Has anyone got any ideas? Thanks

        private void GetUserURLS()
        {
            db.initiateCommand("[Settings].[LoadAllUserURLS]", CommandType.StoredProcedure);
            sqlp = db.addParameter("@UserID", _UserID, SqlDbType.Int, ParameterDirection.Input);
            sqlp = db.addParameter("@spErrorID", DBNull.Value, SqlDbType.Int, ParameterDirection.InputOutput);
            db.executeCommand();
            CreateCheckBoxes(db.getTable(0).Rows);
            db.releaseCommand();
        }

        private void CreateCheckBoxes(DataRowCollection rows)
        {
            try
            {
                int i = 0;
                foreach (DataRow row in rows)
                {
                    //Gets the url name and path when the status is enabled. The status of
                    //Enabled / Disabled is set up in the user options page
                    string URLName = row["URLName"].ToString();
                    bool enabled = Convert.ToBoolean(row["Enabled"]);
                    CheckedListBox CB = new CheckedListBox();
                    CB.Items.Insert(i, URLName);
                    CB.Tag = "CB" + i.ToString();
                    checkedListBox1.Items.Add(CB);
                    i++;
                }
            }
            catch (Exception ex)
            {
                //Log error
                Functionality func = new Functionality();
                func.LogError(ex);
                //Error message the user will see
                string FriendlyError = "There has been populating checkboxes with the urls - A notification has been sent to development";
                Classes.ShowMessageBox.MsgBox(FriendlyError, "There has been an Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }
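
    A hedged reading of the error: the loop creates a brand-new CheckedListBox for every row and calls Items.Insert(i, URLName) on it, but each new control starts with an empty Items collection, so index 1 is already out of range on the second row. An illustrative sketch that adds the names straight to the existing checkedListBox1 and uses the Enabled bit as the initial check state:

        // Illustrative sketch: populate the existing CheckedListBox instead of
        // creating a new CheckedListBox per row.
        private void CreateCheckBoxes(DataRowCollection rows)
        {
            checkedListBox1.Items.Clear();
            foreach (DataRow row in rows)
            {
                string urlName = row["URLName"].ToString();
                bool enabled = Convert.ToBoolean(row["Enabled"]);
                checkedListBox1.Items.Add(urlName, enabled);   // text + initial check state
            }
        }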

  • Table Row Spacing Problem in IE

    - by Brij
    Viewing the code below in IE displays spacing between the rows. I want to join the rows. In Firefox, It is working fine. <table border="0" cellspacing='0' cellpadding='0' width="720" cols="2"> <tr> <td colspan="2"> <a href="index.html"> <img src="images/banner.gif" border="0"> </a> </td> </tr> <tr valign="top"> <td width="130"> <img name="navigate" src="images/navbar.jpg" border="0"> </td> ..... I have also tried style="margin:0; padding:0;" for tr and td but there is no effect in IE. Let me know what to do to remove spacing between rows. Thanks
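
    A possible fix, sketched under the assumption that the gaps come from default cell spacing plus the inline-image baseline gap IE renders inside table cells: collapse the table and make the images block-level.

        /* Illustrative sketch: collapse cell spacing and remove the inline-image
           descender gap IE shows below images inside table cells. */
        table  { border-collapse: collapse; }
        td     { padding: 0; margin: 0; font-size: 0; line-height: 0; }
        td img { display: block; border: 0; }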

  • Collapsing a data frame by selecting one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column. In other words, the first row from each group. For example, I'd like to convert this > d = data.frame(x=c(1,1,2,4),y=c(10,11,12,13),z=c(20,19,18,17)) > d x y z 1 1 10 20 2 1 11 19 3 2 12 18 4 4 13 17 Into this: x y z 1 1 11 19 2 2 12 18 3 4 13 17 I'm using aggregate to do this currently, but the performance is unacceptable with more data: > d.ordered = d[order(-d$y),] > aggregate(d.ordered,by=list(key=d.ordered$x),FUN=function(x){x[1]}) I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's length vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?
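
    One idiomatic alternative, shown as a sketch: order the frame the same way and keep only the first row per x with duplicated(), which avoids running a function for every group and scales much better than aggregate() here.

        # Illustrative sketch: keep the first row of each x-group after ordering by -y.
        d <- data.frame(x = c(1, 1, 2, 4), y = c(10, 11, 12, 13), z = c(20, 19, 18, 17))
        d.ordered <- d[order(-d$y), ]
        collapsed <- d.ordered[!duplicated(d.ordered$x), ]
        collapsed[order(collapsed$x), ]   # same rows as the aggregate() version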

  • Handling missing/incomplete data in R--is there a function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well; for instance, many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs: mean( c(5,6,12,87,9,NA,43,67), na.rm=T) But if you want to deal with NAs before the function call, you need to do something like this:

        # to remove each 'NA' from a vector:
        vx = vx[!is.na(vx)]
        # to remove each 'NA' from a vector and replace it w/ a '0':
        ifelse(is.na(vx), 0, vx)
        # to remove each entire row that contains 'NA' from a data frame:
        dfx = dfx[complete.cases(dfx),]

    All of these functions permanently remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases' yet that column has no 'NA' values in it). To be as clear as possible about what I'm looking for: python/numpy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?
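
    Base R has no masked-array class, but a hedged equivalent is to keep a logical mask alongside the untouched data and apply it per computation, instead of permanently excising rows; for example:

        # Illustrative sketch: mask NAs per computation instead of removing them.
        vx <- c(5, 6, 12, 87, 9, NA, 43, 67)
        ok <- !is.na(vx)           # the "mask"; vx itself is untouched
        mean(vx[ok])               # same result as mean(vx, na.rm = TRUE)

        dfx <- data.frame(a = c(1, NA, 3), b = c(4, 5, NA))
        cc  <- complete.cases(dfx) # row mask; dfx keeps all its rows
        colMeans(dfx[cc, ])        # statistic over complete rows only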

  • How do I search an NTEXT column for XML attributes and update the values? MS SQL 2005

    - by Alan
    Duplicate: this exact question was asked by the same author in http://stackoverflow.com/questions/1221583/how-do-i-update-a-xml-string-in-an-ntext-column-in-sql-server. Please close this one and answer in the original question. I have a SQL table with 2 columns. ID(int) and Value(ntext) The value rows have all sorts of xml strings in them. ID Value -- ------------------ 1 <ROOT><Type current="TypeA"/></ROOT> 2 <XML><Name current="MyName"/><XML> 3 <TYPE><Colour current="Yellow"/><TYPE> 4 <TYPE><Colour current="Yellow" Size="Large"/><TYPE> 5 <TYPE><Colour current="Blue" Size="Large"/><TYPE> 6 <XML><Name current="Yellow"/><XML> How do I: A. List the rows where <TYPE><Colour current="Yellow", bearing in mind that there is an entry <XML><Name current="Yellow"/><XML> B. Modify the rows that contain <TYPE><Colour current="Yellow" to be <TYPE><Colour current="Purple" Thanks! 4 your help

  • CSS3 transition on table row using height property

    - by Alexis Leclerc
    I have an HTML table displaying some information with a few rows. On each row, the user can click to reveal some additional rows that contains information related to the clicked row. Something like this: While the additional rows are being created with an AJAX request, a loading row is inserted right after the clicked row : Currently, everything works perfectly (drill down style), but I would like to add some animation to the loading row. I found a question ( this one ) that has a JSFiddle showing kind of what I want ( this fiddle ). I tried to implement something similar with CSS3 transitions, but I can't get it to work. Here's my simulated attempt ( fiddle only for deployment of the loading row ) using these transitions : -webkit-transition: height 0.1s ease-in-out; -moz-transition: height 0.1s ease-in-out; -ms-transition: height 0.1s ease-in-out; -o-transition: height 0.1s ease-in-out; transition: height 0.1s ease-in-out; Any thought on why my fiddle doesn't do any animation? Any other methods you would propose? Thanks!
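
    One possible reason, offered as a sketch: browsers generally refuse to animate the height of a tr, because table rows are sized by their content, so the transition declared on the row never fires. The usual workaround is to put a wrapper element inside the loading cell and transition its max-height, adding the "open" class on the next tick after inserting the row so the start state is registered first. Class names below are assumptions:

        /* Illustrative sketch: animate a wrapper inside the td, not the tr itself. */
        tr.loading td .inner {
            max-height: 0;
            overflow: hidden;
            transition: max-height 0.3s ease-in-out;
        }
        tr.loading.open td .inner {
            max-height: 50px;   /* roughly the loading row's final height */
        }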

  • SQL Server search filter and order by performance issues

    - by John Leidegren
    We have a table-valued function that returns a list of people you may access, and we have a relation between a search and a person called search result. What we want to do is select all people from the search and present them. The query looks like this:

        SELECT qm.PersonID, p.FullName
        FROM QueryMembership qm
        INNER JOIN dbo.GetPersonAccess(1) ON GetPersonAccess.PersonID = qm.PersonID
        INNER JOIN Person p ON p.PersonID = qm.PersonID
        WHERE qm.QueryID = 1234

    There are only 25 rows with QueryID = 1234, but there are almost 5 million rows total in the QueryMembership table. The Person table has about 40K people in it. QueryID is not a PK, but it is an index. The query plan tells me 97% of the total cost is spent doing a "Key Lookup" with the seek predicate QueryMembershipID = Scalar Operator (QueryMembership.QueryMembershipID as QM.QueryMembershipID). Why is the PK in there when it's not used in the query at all, and why is it taking so long? The total number of people is 25; with the index, this should just be a scan over the QueryMembership rows that have QueryID = 1234 and then a JOIN against the 25 people that exist in the table-valued function, which, by the way, only has to be evaluated once and completes in less than 1 second.
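
    A hedged explanation of the plan: the nonclustered index on QueryID stores only QueryID plus the clustered key (QueryMembershipID), so for each matching row SQL Server has to go back to the clustered index to fetch PersonID; that round trip is the key lookup, and it is costed against the optimizer's row estimate rather than the 25 actual rows. Covering the query removes it. The index name below is invented:

        -- Illustrative sketch: cover the query so PersonID comes straight from the
        -- nonclustered index and the key lookup disappears.
        CREATE NONCLUSTERED INDEX IX_QueryMembership_QueryID_PersonID
            ON QueryMembership (QueryID)
            INCLUDE (PersonID);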

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderate sized SQL Server 2008 database (around 120 tables, backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of thing automatically, that's great. If not, are there any tools to help? Update NHibernate's increment identifier generator works entirely in-memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit of work pattern. The hi-lo algorithm sits inbetween these - generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit of work pattern.
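
    On the specific questions, as far as I can tell: unless you let hbm2ddl regenerate the schema, NHibernate will not create or backfill the hi-lo table against an existing database, and the hilo generator does not look at existing key values when it reads the next hi number, so the table is normally scripted and seeded by hand above the current maximum keys. A sketch under those assumptions, using what are believed to be NHibernate's default names (table hibernate_unique_key, column next_hi, with max_lo = 99 in the mapping); the exact id formula varies slightly between versions, so leave headroom:

        -- Illustrative sketch: create and seed the default hi-lo table manually.
        CREATE TABLE hibernate_unique_key (next_hi INT NOT NULL);
        -- With max_lo = 99, generated ids are roughly next_hi * 100 + lo, so seed
        -- next_hi comfortably above max(existing id) / 100 for the largest table.
        -- 'Person' below stands in for whichever table holds the largest key.
        INSERT INTO hibernate_unique_key (next_hi)
        SELECT (MAX(Id) / 100) + 2 FROM Person;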

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

  • How to do binding programmatically?

    - by user175908
    Hello, Could anyone identify the problem in this code? (I'm kinda newbie in WPF bindings.) This code executes after chart is loaded when I click a button: I get this error: Update: I dont get that error anymore. Thanks to Tomas. Now no error occur but chart looks completely blank (no columns) Update: Code now looks like this: // create a very simple DataSet var dataSet = new DataSet("MyDataSet"); var table = dataSet.Tables.Add("MyTable"); table.Columns.Add("Name"); table.Columns.Add("Price"); table.Rows.Add("Brick", 1.5d); table.Rows.Add("Soap", 4.99d); table.Rows.Add("Comic Book", 0.99d); // chart series var series = new ColumnSeries() { IndependentValueBinding = new Binding("[Name]"), // How to deal with DependentValueBinding = new Binding("[Price]"), // these two? ItemsSource = dataSet.Tables[0].DefaultView // or maybe I do mistake here? }; // ---------- set additional binding as adviced ------------------ series.SetBinding(ColumnSeries.ItemsSourceProperty, new Binding()); // chart stuff MyChart.Series.Add(series); MyChart.Title = "Names 'n Prices"; // some code to remove legend var style = new Style(typeof(Control)); style.Setters.Add(new Setter(LegendItem.TemplateProperty, null)); MyChart.LegendStyle = style; XAML: <Window x:Class="BindingzTest.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="606" Width="988" xmlns:charting="clr-namespace:System.Windows.Controls.DataVisualization.Charting;assembly=System.Windows.Controls.DataVisualization.Toolkit"> <Grid Name="LayoutRoot"> <charting:Chart Name="MyChart" Margin="0,0,573,0" Height="289" VerticalAlignment="Top" /> <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="272,361,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="chart1_Loaded" /> </Grid> Thanks for help in advance once more.

  • Postgres database ignoring created index?!

    - by drasto
    I have a Postgres database and a table called my_table. There are 4 columns in that table (id, column1, column2, column3). The id column is the primary key; there are no other constraints or indexes on the columns. The table has about 200000 rows. I want to print out all rows whose column2 value is equal (case-insensitively) to 'value12'. I use this:

        SELECT * FROM my_table WHERE column2 = lower('value12')

    Here is the execution plan for this statement (result of set enable_seqscan=on; EXPLAIN SELECT * FROM my_table WHERE column2 = lower('value12')):

        Seq Scan on my_table  (cost=0.00..4676.00 rows=10000 width=55)
          Filter: ((column2)::text = 'value12'::text)

    I consider this too slow, so I create an index on column2 for better search performance:

        CREATE INDEX my_index ON my_table (lower(column2))

    Now I run the same select: SELECT * FROM my_table WHERE column2 = lower('value12') and I expect it to be much faster because it can use the index. However it is not faster; it is as slow as before. So I check the execution plan and it is the same as before (see above). So it still uses a sequential scan and ignores the index! Where is the problem?
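
    A likely cause, sketched for illustration: the index is built on the expression lower(column2), but the query filters on the bare column2 (only the literal is lowered), so the planner has no index matching the predicate. The WHERE clause needs the same expression the index was created on:

        -- Illustrative sketch: match the indexed expression lower(column2) in the WHERE clause.
        SELECT *
        FROM my_table
        WHERE lower(column2) = lower('value12');

        -- Alternatively, if the search should really be case-sensitive, index the bare column:
        -- CREATE INDEX my_index2 ON my_table (column2);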
