Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 87/293 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • Anchor tags are blank

    - by ryanday
    I'm having a problem where my anchor tags sometimes don't display their link text. This is happening in Mobile Safari on multiple iPhones, and in the iPhone simulator. I'm using jQTouch r147, PhoneGap, and jQuery 1.4.2. I'm generating the data from a database call, and adding anchor tags to a list like this:

        for(var i=0;i<data.rows.length;i++) {
            var item = $('<li></li>');
            var name = data.rows.item(i).name;
            var anchor = $('<a href="#lpage">'+name+'</a>');
            item.addClass('arrow');
            // This line always displays the name, even when I can't see
            // the name in the browser
            debug.log('The name: ' + name);
            (function(info) {
                anchor.bind('tap', function(e) {
                    debug.log('Touch start ' + info.id);
                });
            })(data.rows.item(i));
            item.append(anchor);
            if( anchor.html() == null ) {
                debug.log('html is blank');
            }
            $('#myUL').append(item);
        }

    Sometimes my list of names shows fine (http://imagebin.org/101462), and sometimes it is just blank (http://imagebin.org/101464). When the list is blank, the debug.log() line shows me 'html is blank', and it also shows me that the variable 'name' does, in fact, contain a valid name. When anchor.html() == null, I've also tried to .remove() the anchor tag and re-create it, but it always comes back without the name displayed. This happens on the mobile device and in the simulator, but I've never seen it happen in desktop Safari or in Chrome. Has anyone seen something like this? I can't find the cause, and I can't get it to stop. Thank you for any ideas or suggestions!
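
    A minimal sketch of one thing worth trying (an assumption, not a confirmed fix): attach the label with .text() instead of concatenating it into the HTML string, so the name becomes a DOM text node rather than something Mobile Safari has to re-parse out of markup:

        for (var i = 0; i < data.rows.length; i++) {
            var row = data.rows.item(i);
            var item = $('<li></li>').addClass('arrow');
            // .text() creates the label as a text node directly
            var anchor = $('<a href="#lpage"></a>').text(row.name);
            (function (info) {
                anchor.bind('tap', function (e) {
                    debug.log('Touch start ' + info.id);
                });
            })(row);
            $('#myUL').append(item.append(anchor));
        }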

    Read the article

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005). Typical output is 3K rows, with a 5.5-minute execution time:

        SELECT dbo.Point.PointDriverID, dbo.Point.AssetID, dbo.Point.PointID,
               dbo.Point.PointTypeID, dbo.Point.PointName, dbo.Point.ForeignID,
               dbo.PointType.TrendInterval, coalesce(dbo.Point.trendpts,5) AS TrendPts,
               LastTimeStamp = PointDTTM, LastValue = PointValue, Timezone
        FROM dbo.Point
        LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID
        LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID
            AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData
                             WHERE PointData.PointID = Point.PointID)
        LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID
        LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID
        WHERE onlinetrended = 1 and WantTrend = 1

    PointData is the big one, but I thought its definition should allow me to pick up what I want easily enough:

        CREATE TABLE [dbo].[PointData](
            [PointID] [int] NOT NULL,
            [PointDTTM] [datetime] NOT NULL,
            [PointValue] [real] NULL,
            [DataQuality] [tinyint] NULL,
            CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED
                ( [PointID] ASC, [PointDTTM] ASC )
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData]
            ( [PointID] ASC, [PointDTTM] DESC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                  IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    PointData is 550M rows, and Point (the source of PointID) is only 28K rows. I tried making an indexed view, but I can't figure out how to get the last timestamp/value out of it in a compatible way (no Max, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointIDs that I selected. I thought about adding LastTime/LastValue columns directly to Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
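
    A hedged sketch of one common rewrite for this "latest row per key" pattern on SQL Server 2005 (a suggestion, not the poster's code): replace the correlated MAX subquery with OUTER APPLY ... TOP 1, which the IX_PointDataDesc index can answer with a single descending seek per PointID:

        SELECT p.PointDriverID, p.AssetID, p.PointID, p.PointTypeID, p.PointName,
               p.ForeignID, pt.TrendInterval, coalesce(p.trendpts, 5) AS TrendPts,
               LastTimeStamp = pd.PointDTTM, LastValue = pd.PointValue, Timezone
        FROM dbo.Point p
        LEFT JOIN dbo.PointType pt ON pt.PointTypeID = p.PointTypeID
        OUTER APPLY (
            SELECT TOP 1 PointDTTM, PointValue
            FROM dbo.PointData
            WHERE PointID = p.PointID
            ORDER BY PointDTTM DESC       -- matches IX_PointDataDesc
        ) pd
        LEFT JOIN dbo.SiteAsset sa ON sa.AssetID = p.AssetID
        LEFT JOIN dbo.Site s ON s.SiteID = sa.SiteID
        WHERE onlinetrended = 1 AND WantTrend = 1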

    Read the article

  • Create a dictionary property list programmatically

    - by jovany
    I want to programmatically create a dictionary which feeds data to my UITableView, but I'm having a hard time with it. I want to create a dictionary that resembles this property list (image), give or take a couple of items. I've looked at "Property List Programming Guide: Creating Property Lists Programmatically" and I came up with a small sample of my own:

        //keys
        NSArray *Childs = [NSArray arrayWithObjects:@"testerbet", nil];
        NSArray *Children = [NSArray arrayWithObjects:@"Children", nil];
        NSArray *Keys = [NSArray arrayWithObjects:@"Rows", nil];
        NSArray *Title = [NSArray arrayWithObjects:@"Title", nil];

        //strings
        NSString *Titles = @"mmm training";

        //dictionary
        NSDictionary *item1 = [NSDictionary dictionaryWithObject:Childs, Titles forKey:Children , Title];
        NSDictionary *item2 = [NSDictionary dictionaryWithObject:Childs, Titles forKey:Children , Title];
        NSDictionary *item3 = [NSDictionary dictionaryWithObject:Childs, Titles forKey:Children , Title];
        NSArray *Rows = [NSArray arrayWithObjects: item1, item2, item3, nil];
        NSDictionary *Root = [NSDictionary dictionaryWithObject:Rows forKey:Keys];

        // NSDictionary *tempDict = [[NSDictionary alloc]
        //     initWithContentsOfFile:DataPath];
        NSDictionary *tempDict = [[NSDictionary alloc] initWithDictionary: Root];

    I'm trying to use this hierarchy of data for my table views. I'm actually using this article as an example. So I was wondering how I can create my property list (dictionary) programmatically so that I can fill it with my own arrays. I'm still new to iPhone development, so bear with me. ;)
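
    For reference, a hedged sketch of how the same shape is usually assembled (key names follow the question's plist; the exact hierarchy is an assumption). dictionaryWithObject:forKey: takes exactly one object and one key, so multiple pairs need dictionaryWithObjectsAndKeys:, which alternates object, key, ... and ends with nil:

        // one row: a "Title" string plus a "Children" array
        NSDictionary *item1 = [NSDictionary dictionaryWithObjectsAndKeys:
                                  @"mmm training", @"Title",
                                  [NSArray arrayWithObject:@"testerbet"], @"Children",
                                  nil];
        NSArray *rows = [NSArray arrayWithObjects:item1, nil];
        // the root dictionary holds all rows under the "Rows" key
        NSDictionary *root = [NSDictionary dictionaryWithObject:rows forKey:@"Rows"];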

    Read the article

  • MySQL query optimization - distinct, order by and limit

    - by Manuel Darveau
    I am trying to optimize the following query:

        select distinct this_.id as y0_
        from Rental this_
        left outer join RentalRequest rentalrequ1_ on this_.id=rentalrequ1_.rental_id
        left outer join RentalSegment rentalsegm2_ on rentalrequ1_.id=rentalsegm2_.rentalRequest_id
        where this_.DTYPE='B'
          and this_.id<=1848978
          and this_.billingStatus=1
          and rentalsegm2_.endDate between 1273631699529 and 1274927699529
        order by rentalsegm2_.id asc
        limit 0, 100;

    This query is run multiple times in a row for paginated processing of records (with a different limit each time); it returns the ids I need in the processing. My problem is that this query takes more than 3 seconds. I have about 2 million rows in each of the three tables. EXPLAIN gives:

        id: 1  select_type: SIMPLE  table: rentalsegm2_  type: range
          possible_keys: index_endDate,fk_rentalRequest_id_BikeRentalSegment
          key: index_endDate  key_len: 9  ref: NULL  rows: 449904
          Extra: Using where; Using temporary; Using filesort

        id: 1  select_type: SIMPLE  table: rentalrequ1_  type: eq_ref
          possible_keys: PRIMARY,fk_rental_id_BikeRentalRequest
          key: PRIMARY  key_len: 8  ref: solscsm_main.rentalsegm2_.rentalRequest_id  rows: 1
          Extra: Using where

        id: 1  select_type: SIMPLE  table: this_  type: eq_ref
          possible_keys: PRIMARY,index_billingStatus
          key: PRIMARY  key_len: 8  ref: solscsm_main.rentalrequ1_.rental_id  rows: 1
          Extra: Using where

    I tried removing the distinct and the query ran three times faster. EXPLAIN without the distinct gives:

        id: 1  select_type: SIMPLE  table: rentalsegm2_  type: range
          possible_keys: index_endDate,fk_rentalRequest_id_BikeRentalSegment
          key: index_endDate  key_len: 9  ref: NULL  rows: 451972
          Extra: Using where; Using filesort

        id: 1  select_type: SIMPLE  table: rentalrequ1_  type: eq_ref
          possible_keys: PRIMARY,fk_rental_id_BikeRentalRequest
          key: PRIMARY  key_len: 8  ref: solscsm_main.rentalsegm2_.rentalRequest_id  rows: 1
          Extra: Using where

        id: 1  select_type: SIMPLE  table: this_  type: eq_ref
          possible_keys: PRIMARY,index_billingStatus
          key: PRIMARY  key_len: 8  ref: solscsm_main.rentalrequ1_.rental_id  rows: 1
          Extra: Using where

    As you can see, Using temporary is added when using distinct. I already have an index on all fields used in the where clause. Is there anything I can do to optimize this query? Thank you very much!
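
    A hedged sketch of one direction to try (untested, and it assumes paginating by Rental.id instead of RentalSegment.id is acceptable): if the DISTINCT exists only to undo the fan-out from the joins, an EXISTS subquery returns each rental once, so the temporary table disappears along with the DISTINCT:

        select this_.id as y0_
        from Rental this_
        where this_.DTYPE = 'B'
          and this_.id <= 1848978
          and this_.billingStatus = 1
          and exists (select 1
                      from RentalRequest rr
                      join RentalSegment rs on rs.rentalRequest_id = rr.id
                      where rr.rental_id = this_.id
                        and rs.endDate between 1273631699529 and 1274927699529)
        order by this_.id asc
        limit 0, 100;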

    Read the article

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filters table. I am skipping the 3,500 latest files in the database, so as to "trim" the table back to 3,500 + "X" amount of records in the filters table. The filters table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My code:

        $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error());
        while($row = mysql_fetch_array($sql)){
            $id = $row['id'];
            $file = $row['url'];
            $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error());
            if(mysql_num_rows($getId) == 0){
                $IdQue[] = $id;
                $FileQue[] = $file;
            }
        }
        for($i=3500; $i<$x; $i++){
            mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error());
            unlink($FileQue[$i]) or die("file Not deleted");
        }
        echo ($i-3500)." files deleted.";

    Output:

        0 files deleted.

    Database contents: the images table has 10,000 rows and the filters table has 63 rows; all 63 rows in the filters table contain an images table id. The script takes about 4 seconds (+/- 0.5 seconds) to run. Relevant DB structure:

        TABLE: images
            id
            url
            etc...

        TABLE: filter
            id
            img_id (CONTAINS ID FROM images table)
            etc...
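
    One detail stands out: $x is never assigned, so the delete loop never executes, which matches the "0 files deleted" output. A hedged sketch of a single-query version (assuming the intent is to keep the 3,500 newest unfiltered images and delete the rest):

        // LIMIT with a huge second argument means "from row 3500 to the end";
        // the value below is the documented MySQL idiom for "no upper bound"
        $sql = mysql_query(
            "SELECT i.`id`, i.`url` FROM `images` i
             LEFT JOIN `filter` f ON f.`img_id` = i.`id`
             WHERE f.`id` IS NULL
             ORDER BY i.`id` DESC
             LIMIT 3500, 18446744073709551615"
        ) or die(mysql_error());
        $deleted = 0;
        while ($row = mysql_fetch_array($sql)) {
            mysql_query("DELETE FROM `images` WHERE `id` = '" . $row['id'] . "' LIMIT 1") or die(mysql_error());
            unlink($row['url']) or die("file not deleted");
            $deleted++;
        }
        echo $deleted . " files deleted.";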

    Read the article

  • Use Javascript RegEx to extract column names from SQLite Create Table SQL

    - by NimbusSoftware
    I'm trying to extract column names from a SQLite result set, using the sql column of sqlite_master. I get hosed up in the regular expressions in the match() and split() functions:

        t1.executeSql('SELECT name, sql FROM sqlite_master WHERE type="table" and name!="__WebKitDatabaseInfoTable__";', [],
            function(t1, result) {
                for(i = 0; i < result.rows.length; i++){
                    var tbl = result.rows.item(i).name;
                    var dbSchema = result.rows.item(i).sql;
                    // errors out on next line
                    var columns = dbSchema.match(/.*CREATE\s+TABLE\s+(\S+)\s+\((.*)\).*/)[2].split(/\s+[^,]+,?\s*/);
                }
            },
            function(){ console.log('err1'); }
        );

    I want to parse SQL statements like these...

        CREATE TABLE sqlite_sequence(name,seq);
        CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL);
        CREATE TABLE tblIcon (IconID INTEGER NOT NULL PRIMARY KEY,png TEXT NOT NULL,img32 TEXT NOT NULL,img64 TEXT NOT NULL,Version TEXT NOT NULL)

    into strings like these...

        name,seq
        Key,Value
        IconID,png,img32,img64,Version

    Any help with a RegEx would be greatly appreciated.
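
    A hedged sketch of one fix: the posted pattern requires whitespace between the table name and the opening parenthesis, so "CREATE TABLE sqlite_sequence(name,seq)" makes match() return null, and indexing [2] on that null is what errors out. Making the whitespace optional and taking the first word of each comma-separated definition covers all three statements above:

        var m = dbSchema.match(/CREATE\s+TABLE\s+(\S+?)\s*\((.*)\)/i);
        var columns = m
            ? m[2].split(',').map(function (def) {
                  // first word of each definition is the column name
                  return def.replace(/^\s+/, '').split(/\s+/)[0];
              }).join(',')
            : '';
        // "CREATE TABLE tblConfig (Key TEXT NOT NULL,Value TEXT NOT NULL)" -> "Key,Value"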

    Read the article

  • Fast search of a 2-dimensional array

    - by Tim
    I need a method of quickly searching a large 2-dimensional array. I extract the array from Excel, so one dimension represents the rows and the second the columns. I wish to obtain a list of the rows where the columns match certain criteria, and I need to know the row number (or index into the array). For example, if I extract a range from Excel, I may need to find all rows where column A = "dog" and column B = 7 and column J “a”. I only know which columns and which values to find at run time, so I can't hard-code the column index. I could use a simple loop, but is this efficient? I need to run it several thousand times, searching for different criteria each time.

        For r As Integer = 0 To UBound(myArray, 0) - 1
            match = True
            For c = 0 To UBound(myArray, 1) - 1
                If Not doesValueMeetCriteria(myArray(r, c)) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then addRowToMatchedRows(r)
        Next

    The doesValueMeetCriteria function is a simple function that checks the value of the array element against the query requirement, e.g. column A = dog, etc. Is it more efficient to create a DataTable from the array and use the .Select method? Can I use LINQ in some way? Perhaps some form of dictionary or hashtable? Or is the simple loop the most efficient? Your suggestions are most welcome.
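
    A hedged refinement sketch (the criteria dictionary and its contents are hypothetical): since only a few columns matter per query, test just those columns instead of walking every column of every row; a plain loop like this is usually competitive with DataTable.Select or LINQ over an in-memory array:

        ' criteria maps a column index to the value that column must equal
        Dim criteria As New Dictionary(Of Integer, Object)
        criteria(0) = "dog"  ' column A = "dog"
        criteria(1) = 7      ' column B = 7

        Dim matchedRows As New List(Of Integer)
        For r As Integer = 0 To UBound(myArray, 0) - 1
            Dim match As Boolean = True
            For Each crit As KeyValuePair(Of Integer, Object) In criteria
                If Not Object.Equals(myArray(r, crit.Key), crit.Value) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then matchedRows.Add(r)
        Next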

    Read the article

  • incremental way of counting quantiles for large set of data

    - by Gacek
    I need to count the quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile one needs to get all the portions of the data, store them somewhere, then sort the whole thing and compute the quantile:

        List<double> allData = new List<double>();
        // this is only an example; in fact the portions of data are not rows of some matrix
        foreach(var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75 * allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now, I would like to find a way of computing this without storing the data in a separate variable. The best solution would be to compute some mid-result parameters for the first row and then adjust them step by step for the next rows. Notes: these datasets are really big (ca. 5000 elements in each row); Q3 can be estimated, it doesn't have to be an exact value; I call the portions of data "rows", but they can have different lengths! Usually it doesn't vary much (+/- a few hundred samples), but it varies! This question is similar to http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness, but I need to count quantiles. There are also a few articles on this topic, e.g. http://web.cs.wpi.edu/~hofri/medsel.pdf and http://portal.acm.org/citation.cfm?id=347195&dl. But before I try to implement these, I wanted to ask whether there are maybe any other, quicker ways of estimating the 0.25/0.75 quantiles?
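
    A hedged sketch of one simple alternative (an estimator, not the asker's code): keep a fixed-size uniform sample of the stream (reservoir sampling) and read the quantile off the sample; memory stays bounded no matter how many rows arrive:

        // Feed rows in one at a time; Estimate(0.75) approximates Q3.
        class ReservoirQuantile
        {
            private readonly List<double> sample = new List<double>();
            private readonly Random rng = new Random();
            private readonly int capacity;
            private long seen;

            public ReservoirQuantile(int capacity) { this.capacity = capacity; }

            public void AddRange(IEnumerable<double> row)
            {
                foreach (double v in row)
                {
                    seen++;
                    if (sample.Count < capacity)
                        sample.Add(v);
                    else
                    {
                        // classic algorithm R: keep each new value with probability capacity/seen
                        long j = (long)(rng.NextDouble() * seen);
                        if (j < capacity) sample[(int)j] = v;
                    }
                }
            }

            public double Estimate(double q)
            {
                var sorted = new List<double>(sample);
                sorted.Sort();
                int id = (int)Math.Ceiling(q * sorted.Count) - 1;
                return sorted[Math.Max(0, id)];
            }
        }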

    Read the article

  • jQuery.ajax() + empty JSON object = parse error

    - by roosteronacid
    I get a parse error when using jQuery to load some JSON data. Here's a snippet of my code:

        jQuery.ajax({
            dataType: "json",
            success: function (json) {
                jQuery.each(json, function () {
                    alert(this["columnName"]);
                });
            }
        });

    I get no errors when parsing a non-empty JSON object, so my guess is that the problem is with my serializer. The question is: how do I format an empty JSON object which jQuery won't consider malformed? This is what I've tried so far, with no success:

        {[]}
        {[null]}
        {}
        {null}
        {"rows": []}
        {"rows": null}
        {"rows": {}}

    UPDATE: I can understand that I've been somewhat vague--let me try and clarify: parsing of the JSON object is not the issue here--jQuery is, I think. jQuery throws a parse error (invokes the error function). It seems like jQuery's internal JSON validation is not accepting any of the before-mentioned objects, not even the valid ones. Output of the error function is:

        XMLHttpRequest: XMLHttpRequest readyState=4 status=200
        textStatus: parsererror
        errorThrown: undefined

    This goes for all of the before-mentioned objects.
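
    A hedged debugging sketch (the URL is a placeholder): of the candidates above, only {}, {"rows": []}, {"rows": null} and {"rows": {}} are valid JSON, so if even those trigger parsererror it is worth inspecting the raw bytes the server actually sends (a BOM, trailing whitespace, or an empty body all break parsing) before handing them to the JSON parser:

        jQuery.ajax({
            url: "/data.json",  // placeholder endpoint
            dataType: "text",   // bypass jQuery's JSON validation for a moment
            success: function (raw) {
                console.log("raw response: [" + raw + "]"); // stray BOM/whitespace shows up here
                var json = jQuery.parseJSON(raw);           // throws the real parse error, if any
                jQuery.each(json.rows || [], function () {
                    alert(this["columnName"]);
                });
            }
        });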

    Read the article

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. The project I'm involved in has a legacy database with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to that primary key. In LINQ we write something like:

        var result = from t in context.TableB
                     select t.TableA;
        foreach( var r in result )
            Console.WriteLine( r.someFieldInTableA );

    Suppose TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". In our project we want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive? Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am, however, still interested in finding out whether there is any way to change the behaviour of Entity Framework.
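
    A hedged workaround sketch (the property names are hypothetical, and this sidesteps the association rather than configuring it): join the two tables explicitly on normalized casing, which translates to UPPER() comparisons on the server instead of relying on EF's case-sensitive key matching:

        var result = from b in context.TableB
                     join a in context.TableA
                         on b.TableAKey.ToUpper() equals a.Key.ToUpper()  // hypothetical property names
                     select a;
        foreach (var r in result)
            Console.WriteLine(r.someFieldInTableA);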

    Read the article

  • delete row from result set in web sql with javascript

    - by Kaijin
    I understand that the result set from Web SQL isn't quite an array--more of an object? I'm cycling through a result set, and to speed things up I'd like to remove a row once it's been found. I've tried "delete" and "splice"; the former does nothing and the latter throws an error. Here's a piece of what I'm trying to do--notice the delete in processSelectFromReverse():

        function selectFromReverse(reverseRay, suggRay){
            var reverseString = reverseRay.toString();
            db.transaction(function (tx) {
                tx.executeSql('SELECT votecount, comboid FROM counterCombos WHERE comboid IN ('+reverseString+') AND votecount>0', [],
                    function(tx, results){
                        processSelectFromReverse(results, suggRay);
                    });
            }, function(){onError});
        }

        function processSelectFromReverse(results, suggRay){
            var i = suggRay.length;
            while(i--){
                var j = results.rows.length;
                while(j--){
                    console.log('searching');
                    var found = 0;
                    if(suggRay[i].reverse == results.rows.item(j).comboid){
                        delete results.rows.item(j);   // <-- the delete in question
                        console.log('found');
                        found++;
                        break;
                    }
                }
                if(found == 0){
                    console.log('lost');
                }
            }
        }
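
    For context: the rows collection of a Web SQL result set (SQLResultSetRowList) is read-only, which is why delete silently does nothing and splice doesn't exist on it. A hedged sketch of one way around that--copy the rows into a real array once, then splice matches out of the copy:

        function processSelectFromReverse(results, suggRay) {
            var rows = [];
            for (var k = 0; k < results.rows.length; k++) {
                rows.push(results.rows.item(k));   // copy into a mutable array
            }
            var i = suggRay.length;
            while (i--) {
                var found = false;
                for (var j = rows.length - 1; j >= 0; j--) {
                    if (suggRay[i].reverse == rows[j].comboid) {
                        rows.splice(j, 1);         // removing from the copy is allowed
                        found = true;
                        break;
                    }
                }
                console.log(found ? 'found' : 'lost');
            }
        }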

    Read the article

  • C# DynamicPDF Merging causing "Index out of bounds" error

    - by Dining Philanderer
    Greetings. We use DynamicPDF to merge multiple PDF documents stored in an MSSQL database. The vast majority of the time it works wonderfully, but occasionally one of these documents fails to merge, generating the exception message "Index was outside the bounds of the array." I think I have isolated the problem to PDF files that are greater than 8.5 x 11.0. Does anyone know if this is a known issue with DynamicPDF? The merging code is posted here. What would be ideal is a way to resize the PDF files to the correct size so this is not a concern at all...

        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
            int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);
            MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);
            ceTe.DynamicPDF.Merger.PdfDocument pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
            ceTe.DynamicPDF.Merger.MergeDocument mergedoc = new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc);
            docCombinedPDF.Append(mergedoc);
        }

    Thanks....
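
    Until the root cause is confirmed, a hedged sketch that at least isolates the failing document instead of losing the whole batch (it uses only the calls already shown above; failedDocs is a hypothetical list for follow-up):

        var failedDocs = new List<int>();
        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            try
            {
                byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
                int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);
                MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);
                var pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
                docCombinedPDF.Append(new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc));
            }
            catch (IndexOutOfRangeException)
            {
                failedDocs.Add(docs);   // skip the offending document, merge the rest
            }
        }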

    Read the article

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction: A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) from some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me thinking about the following snippet.

    Code sample:

        lock (ResultTable)
        {
            newRow = ResultTable.NewRow();
        }
        newRow["Key"] = currentKey;
        foreach (KeyValuePair<string, object> output in outputs)
        {
            object resultValue = output.Value;
            newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value;
        }
        lock (ResultTable)
        {
            ResultTable.Rows.Add(newRow);
        }

    (No guarantees that that compiles; it was hand-edited to mask proprietary information.)

    Explanation: We have this cascading type of locking code in other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO.NET. As we all know, members of framework objects are usually not thread-safe (which is the case in this situation), but the cascading locking should ensure that we are not reading and writing to ResultTable.Rows concurrently. We are safe, right?

    Hypothesis: Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO.NET uses some kind of buffer for assigning column values that is not thread-safe--even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at StackOverflow before beating my head against this for hours on end :)

    Conclusion: The consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full-lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!
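
    For reference, a hedged sketch of the full-lock version the conclusion describes (note that KeyValuePair exposes .Key rather than .Name; the original was hand-edited): row creation, population, and insertion all happen inside one critical section, so no DataRow work interleaves with other threads' access to the table:

        lock (ResultTable)
        {
            DataRow newRow = ResultTable.NewRow();
            newRow["Key"] = currentKey;
            foreach (KeyValuePair<string, object> output in outputs)
            {
                // DBNull.Value stands in for nulls, as in the original
                newRow[output.Key] = output.Value ?? DBNull.Value;
            }
            ResultTable.Rows.Add(newRow);
        }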

    Read the article

  • Adding dynamic data to a table

    - by user559780
    I have the following table in my application, and an ajax request that fetches the results to be shown in it. How do I add these results to the table without overwriting the header every time?

        <table id="itemList">
            <td>Name</td>
            <td>Price</td>
            <td>Quantity</td>
            <td>Total</td>
        </table>

    The ajax data is as shown below:

        var items = [
            { Name: "Apple",  Price: "80",  Quantity: "3",  Total: "240"  },
            { Name: "Orance", Price: "50",  Quantity: "4",  Total: "200"  },
            { Name: "Banana", Price: "20",  Quantity: "8",  Total: "160"  },
            { Name: "Cherry", Price: "250", Quantity: "10", Total: "2500" }
        ];

    Now I'm trying something like this, but it is not working:

        var rows = "";
        $.each(items, function(){
            rows += "<tr><td>" + this.Name + "</td><td>" + this.Price + "</td><td>" +
                    this.Quantity + "</td><td>" + this.Total + "</td></tr>";
        });
        $( "#itemList" ).text('<tr><td>Name</td><td>Price</td><td>Quantity-</td><td>Total</td></tr>' + rows );
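
    A hedged sketch of the likely fix: .text() inserts the string as escaped text, so the markup never becomes table rows; .html() parses it instead, and because the header is part of the string it is rewritten (not duplicated) on every refresh:

        var rows = "<tr><td>Name</td><td>Price</td><td>Quantity</td><td>Total</td></tr>";
        $.each(items, function () {
            rows += "<tr><td>" + this.Name + "</td><td>" + this.Price + "</td><td>" +
                    this.Quantity + "</td><td>" + this.Total + "</td></tr>";
        });
        $("#itemList").html(rows);   // .html() parses the markup; .text() only escapes it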

    Read the article

  • Looping through my table, how do I know if the checkbox is checked?

    - by radbyx
    How I build my table:

        for (var i = 0; i < result.length; i++) {
            var item = result[i];
            // Firma, BygningselementNavn, BrugerNavn, EmailAdresse, Telefon
            tbody = tbody + '<tr class="modtagerRow"><td>' + item.FirmaNavn + '</td>' +
                    '<td>' + item.BygningselementNavn + '</td>' +
                    '<td>' + item.BrugerNavn + '</td>' +
                    '<td>' + item.EmailAdresse + '</td>' +
                    '<td>' + item.Telefon + '</td>';
            // Medtag
            tbody = tbody + '<td style="text-align:center"><input type="checkbox" value="' +
                    item.BygningselementId + '_' + item.BrugerId +
                    '" name="BygningsElementBrugerComboIds"></td>' + '</tr>';
        }
        $('#ModtagereTable tbody').append(tbody)

    This is how I am trying to loop through the rows and add a CSS class to the rows whose checkbox is checked. 1) I get the indexes in the console, but I can't make the if condition match the checked checkboxes. 2) Also, I am not sure whether I can use $( this ) or should use something else when adding the class .hideForSendMailConfirm:

        // Looping rows in table
        $( ".modtagerRow" ).each(function(index, element) {
            console.log('index: ' + index);
            if (element.checked) {
                $( this ).addClass(".hideForSendMailConfirm");
            }
        });
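
    A hedged sketch of the two likely fixes: element here is the <tr>, which has no checked property, so look for a checked checkbox inside the row; and addClass() takes a bare class name rather than a selector, so the leading dot has to go:

        $(".modtagerRow").each(function (index) {
            var $row = $(this);   // $(this) is fine here; it is the current <tr>
            if ($row.find("input[name='BygningsElementBrugerComboIds']").is(":checked")) {
                $row.addClass("hideForSendMailConfirm");   // no leading dot
            }
        });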

    Read the article

  • php oop and mysql

    - by gloris
    I need to get data, check it, and send it to the db. I'm programming with PHP OOP. Could you tell me if my class structure is good, and how to display all the data? Thanks

        <?php
        class Database{
            private $DBhost = 'localhost';
            private $DBuser = 'root';
            private $DBpass = 'root';
            private $DBname = 'blog';

            public function connect(){
                //Connect to mysql db
            }
            public function select($rows){
                //select data from db
            }
            public function insert($rows){
                //Insert data to db
            }
            public function delete($rows){
                //Delete data from db
            }
        }

        class CheckData{
            public $number1;
            public $number2;

            public function __construct(){
                $this->number1 = $_POST['number1'];
                $this->number2 = $_POST['number2'];
            }

            function ISempty(){
                if(!empty($this->$number1)){
                    echo "Not Empty";
                    $data = new Database();
                    $data->insert($this->$number1);
                }
                else{
                    echo "Empty1";
                }
                if(!empty($this->$number2)){
                    echo "Not Empty";
                    $data = new Database();
                    $data->insert($this->$number2);
                }
                else{
                    echo "Empty2";
                }
            }
        }

        class DisplayData{
            //How print all data?
            function DisplayNumber(){
                $data = new Database();
                $data->select();
            }
        }

        $check = new CheckData();
        $check->ISempty();

        $display = new DisplayData()
        $display->DisplayNumber();
        ?>
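
    Two hedged observations, with a sketch (all table and column names below are placeholders): $this->$number1 is a variable-variable lookup and is probably meant to be $this->number1; and DisplayData can print rows once select() actually returns them, for example:

        // in Database: return the selected rows as an array
        public function select($table) {
            $result = mysql_query("SELECT * FROM `" . $table . "`");
            $rows = array();
            while ($row = mysql_fetch_assoc($result)) {
                $rows[] = $row;
            }
            return $rows;
        }

        // in DisplayData: loop over the returned rows and print them
        function DisplayNumber() {
            $data = new Database();
            $data->connect();
            foreach ($data->select('numbers') as $row) {
                echo $row['number1'] . ' - ' . $row['number2'] . "<br />";
            }
        }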

    Read the article

  • Optimizing T-SQL where an array would be nice

    - by Polatrite
    Alright, first you'll need to grab a barf bag. I've been tasked with optimizing several old stored procedures in our database. This SP does the following: 1) a cursor loops through a series of "buildings"; 2) a cursor loops through a week, Sunday-Saturday; 3) a huge set of IF blocks is responsible for counting how many objects of what types are present in a given building. Essentially, what you'll see in this code block is that if there are 5 objects of type #2, it will increment @Type_2_Objects_5 by 1:

        IF @Number_Type_1_Objects = 0
        BEGIN
            SET @Type_1_Objects_0 = @Type_1_Objects_0 + 1
        END
        IF @Number_Type_1_Objects = 1
        BEGIN
            SET @Type_1_Objects_1 = @Type_1_Objects_1 + 1
        END
        IF @Number_Type_1_Objects = 2
        BEGIN
            SET @Type_1_Objects_2 = @Type_1_Objects_2 + 1
        END
        IF @Number_Type_1_Objects = 3
        BEGIN
            SET @Type_1_Objects_3 = @Type_1_Objects_3 + 1
        END
        [... Objects_4 through Objects_20 for Type_1]

        IF @Number_Type_2_Objects = 0
        BEGIN
            SET @Type_2_Objects_0 = @Type_2_Objects_0 + 1
        END
        IF @Number_Type_2_Objects = 1
        BEGIN
            SET @Type_2_Objects_1 = @Type_2_Objects_1 + 1
        END
        IF @Number_Type_2_Objects = 2
        BEGIN
            SET @Type_2_Objects_2 = @Type_2_Objects_2 + 1
        END
        IF @Number_Type_2_Objects = 3
        BEGIN
            SET @Type_2_Objects_3 = @Type_2_Objects_3 + 1
        END
        [... Objects_4 through Objects_20 for Type_2]

    In addition to being extremely hacky (and limited to a quantity of 20 objects), this seems like a terrible way of handling it. In a traditional language, it could easily be solved with a 2-dimensional array: objects[type][quantity] += 1. I'm a T-SQL novice, but since stored procedures often use temporary tables (which could essentially serve as a 2-dimensional array), I was wondering if someone could illuminate a better way of handling a situation like this, with two dynamic pieces of data to store. See the sketch after this question.

    Requested in comments: the columns are simply Number_Type_1_Objects through Number_Type_5_Objects plus CurrentDateTime, and each row in the table represents 5 minutes. The expected output is to figure out what percentage of time a given quantity of objects is present throughout each day, e.g.:

        Sunday - Object Type 1
        0 objects - 69 rows, 5:45, 34.85%
        1 object  - 85 rows, 7:05, 42.93%
        2 objects - 44 rows, 3:40, 22.22%

    On Sunday, there were 0 objects of type 1 for 34.85% of the day, 1 object for 42.93% of the day, and 2 objects for 22.22% of the day. Repeat for each object type.
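
    A hedged set-based sketch for object type 1 (the table name is a placeholder; windowed aggregates like this work on SQL Server 2005 and later): grouping the 5-minute rows by weekday and object count replaces the whole ladder of IF blocks, and the percentage falls out of a windowed SUM over each day's total:

        SELECT DATENAME(weekday, CurrentDateTime) AS DayOfWeek,
               Number_Type_1_Objects              AS Objects,
               COUNT(*)                           AS Rows5Min,
               100.0 * COUNT(*)
                   / SUM(COUNT(*)) OVER (PARTITION BY DATENAME(weekday, CurrentDateTime))
                                                  AS PctOfDay
        FROM dbo.FiveMinuteSamples   -- placeholder table name
        GROUP BY DATENAME(weekday, CurrentDateTime), Number_Type_1_Objects
        ORDER BY DayOfWeek, Objects;

    The same statement repeated per Number_Type_N_Objects column (or an unpivot of the five columns) covers the other object types.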

    Read the article

  • Dynamically add data stored in php to nested json

    - by HoGo
    I am trying to dynamically generate the JSON data for a jQuery gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format:

        var data = [{
            name: "Sprint 0",
            desc: "Analysis",
            values: [{
                from: "/Date(1320192000000)/",
                to: "/Date(1322401600000)/",
                label: "Requirement Gathering",
                customClass: "ganttRed"
            }]
        },{
            name: " ",
            desc: "Scoping",
            values: [{
                from: "/Date(1322611200000)/",
                to: "/Date(1323302400000)/",
                label: "Scoping",
                customClass: "ganttRed"
            }]
        // ... some more data ...
        }];

    Now I have all the data in a PHP db result:

        $rows = $db->fetchAllRows($result);
        $rowsNum = count($rows);

    And this is how I wanted to create the JSON out of it:

        var data = '';
        <?php foreach ($rows as $row){ ?>
            data['name'] = "<?php echo $row['name'];?>";
            data['desc'] = "<?php echo $row['desc'];?>";
            data['values'] = {"from" : "/Date(<?php echo $row['from'];?>)/",
                              "to" : "/Date(<?php echo $row['to'];?>)/",
                              "label" : "<?php echo $row['label'];?>",
                              "customClass" : "ganttOrange"};
        <?php } ?>

    However, this does not work. I have tried without the loop and replacing the PHP variables with plain text just to check, but that did not work either: the chart displays without the added items. If I add a new item to the list of values by hand, it works, so there is no problem with the gantt itself or with paths. Based on all of the above, I assume the problem is with adding plain data to the JSON. Can anyone please help me fix it?
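
    A hedged sketch of the usual approach (assuming PHP 5.2+ for json_encode): build the structure as nested PHP arrays and let json_encode emit the JavaScript literal, instead of assembling it property by property on the client side:

        <?php
        $data = array();
        foreach ($rows as $row) {
            $data[] = array(
                'name'   => $row['name'],
                'desc'   => $row['desc'],
                'values' => array(array(   // one segment per row
                    'from'        => '/Date(' . $row['from'] . ')/',
                    'to'          => '/Date(' . $row['to'] . ')/',
                    'label'       => $row['label'],
                    'customClass' => 'ganttOrange',
                )),
            );
        }
        ?>
        var data = <?php echo json_encode($data); ?>;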

    Read the article

  • notify url is not called

    - by Jahangeer Ahmed
        Dim redirecturl As String = ""
        redirecturl = ConfigurationManager.AppSettings("papalUrl").ToString() & "us/cgi-bin/webscr?cmd=_cart&upload=1&business=" & ConfigurationManager.AppSettings("paypalemail").ToString()
        Dim j As Integer = 0
        Dim dr1 As DataRow
        If ds.Tables("ReviewOrder").Rows.Count > 0 Then
            Dim requestsFile As String = Server.MapPath("~/App_Data/PaymentRequests.xml")
            ' ds.Tables("ReviewOrder").WriteXml(requestsFile)
            For j = 0 To ds.Tables("ReviewOrder").Rows.Count - 1
                dr1 = ds.Tables("ReviewOrder").Rows(j)
                redirecturl += "&item_name_" & j + 1 & "=" & dr1("varTitle")
                redirecturl += "&amount_" & j + 1 & "=" & dr1("flRate")
                redirecturl += "&image_url_" & j + 1 & "=" & ConfigurationManager.AppSettings("RSSurl").ToString() & dr1("imgImage")
                redirecturl += "&quantity_" & j + 1 & "=" & Convert.ToInt64(dr1("flQuantity"))
                ''redirecturl += "&item_name_2=Sample_testing2&amount_2=9.50"
                ''redirecturl += "&quantity_2=2"
                ''redirecturl += "&item_name_3=Sample_testing3"
                ''redirecturl += "&amount_3=8.50"
                ''redirecturl += "&quantity_3=3"
                redirecturl += "&custom_" & j + 1 & "=" & dr1("BasketID")
            Next
        End If
        redirecturl += "&currency=" & ConfigurationManager.AppSettings("CurrencyCode").ToString()
        redirecturl += "&first_name=" & firstName
        redirecturl += "&last_name=" & lastName
        redirecturl += "&city=" & city
        redirecturl += "&state=" & state
        redirecturl += "&zip=" & zip
        redirecturl += "&address1=" & address1
        redirecturl += "&address2=" & address2
        redirecturl += "&notify_url=" & Server.UrlEncode(ConfigurationManager.AppSettings("NotifyUrl").ToString() & "&rm=2")
        redirecturl += "&return=" & ConfigurationManager.AppSettings("SuccessURL").ToString()
        'Failed return page url
        redirecturl += "&cancel_return=" & ConfigurationManager.AppSettings("FailedURL").ToString()
        Page.ClientScript.RegisterClientScriptBlock(Me.GetType(), "Redirect", "window.parent.location='" & redirecturl & "';", True)
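
    A hedged observation (worth checking against PayPal's cart-upload documentation): Server.UrlEncode is applied to NotifyUrl & "&rm=2" together, so "&rm=2" is encoded into the notify_url value and PayPal never sees rm as its own parameter. Splitting them would look like this ("currency" may also need to be "currency_code"):

        redirecturl += "&notify_url=" & Server.UrlEncode(ConfigurationManager.AppSettings("NotifyUrl").ToString())
        redirecturl += "&rm=2"   ' rm is a separate PayPal parameter, not part of notify_url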

    Read the article

  • How to declare array of 2D array pointers and access them?

    - by vikramtheone
    Hi guys, how can I declare a 2D array of 2D pointers, and later access the individual array elements of the 2D arrays? Is my approach correct?

        main()
        {
            int i, j;
            int **array[10][10];
            int **ptr = NULL;

            for(i=0; i<10; i++)
            {
                for(j=0; j<10; j++)
                {
                    alloc_2D(&ptr, 10, 10);
                    array[i][j] = ptr;
                }
            }

            //After I do this, how can I access the individual 2D array
            //and then the individual elements of the 2D arrays?
        }

        void alloc_2D(float ***memory, unsigned int rows, unsigned int cols)
        {
            float **ptr;
            *memory = NULL;
            ptr = malloc(rows * sizeof(float*));
            if(ptr == NULL)
            {
                status = ERROR;
                printf("\nERROR: Memory allocation failed!");
            }
            else
            {
                int i;
                for(i = 0; i < rows; i++)
                {
                    ptr[i] = malloc(cols * sizeof(float));
                    if(ptr[i] == NULL)
                    {
                        status = ERROR;
                        printf("\nERROR: Memory allocation failed!");
                    }
                }
            }
            *memory = ptr;
        }
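
    A hedged sketch of the access pattern being asked about (note the question declares array as int ** while alloc_2D fills in float ** blocks, so the element types need to agree; float is used below to match alloc_2D): array[i][j] is a pointer to one 2D block, so element (r,c) of block (i,j) is simply array[i][j][r][c]:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* allocate one 10x10 block the same way alloc_2D does */
            float **block = malloc(10 * sizeof(float *));
            for (int r = 0; r < 10; r++)
                block[r] = malloc(10 * sizeof(float));

            float **array[10][10];      /* 10x10 grid of 2D blocks */
            array[0][0] = block;

            array[0][0][3][4] = 1.5f;   /* element (3,4) of block (0,0) */
            printf("%f\n", array[0][0][3][4]);
            return 0;
        }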

    Read the article

  • PHP and MySQL echoing out a Table

    - by user1631702
    Okay, so I've done this before, and it worked. I am trying to echo out specific rows of my database in a table. Here is my code:

        <?php
        $connect = mysql_connect("localhost", "xxx", "xxx")
            or die ("Hey loser, check your server connection.");
        mysql_select_db("xxx");
        $quey1 = "select * from `Ad Requests`";
        $result = mysql_query($quey1) or die(mysql_error());
        ?>
        <table border=1 style="background-color:#F0F8FF;">
        <caption><EM>Student Record</EM></caption>
        <tr>
            <th>Student ID</th>
            <th>Student Name</th>
            <th>Class</th>
        </tr>
        <?php
        while($row = mysql_fetch_array($result)){
            echo "</td><td>";
            echo $row['id'];
            echo "</td><td>";
            echo $row['twitter'];
            echo "</td><td>";
            echo $row['why'];
            echo "</td></tr>";
        }
        echo "</table>";
        ?>

    It gives me no errors, but it just shows a blank table with none of these rows. My question: why won't this show any rows in the table--what am I doing wrong?
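
    A hedged sketch of one visible problem: each row of output starts with "</td><td>" instead of "<tr><td>", so the cells never open a row. If the query itself returns rows, the loop should look more like this:

        while ($row = mysql_fetch_array($result)) {
            echo "<tr><td>";      // open the row and its first cell
            echo $row['id'];
            echo "</td><td>";
            echo $row['twitter'];
            echo "</td><td>";
            echo $row['why'];
            echo "</td></tr>";
        }
        echo "</table>";

    If the table is still empty after that, it is worth checking mysql_num_rows($result) to confirm the query matches anything at all.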

    Read the article

  • Mysql slow query: INNER JOIN + ORDER BY causes filesort

    - by Alexander
    Hello! I'm trying to optimize this query:

        SELECT `posts`.* FROM `posts`
        INNER JOIN `posts_tags` ON `posts`.id = `posts_tags`.post_id
        WHERE (((`posts_tags`.tag_id = 1)))
        ORDER BY posts.created_at DESC;

    The tables have 38k and 31k rows respectively, and MySQL uses "filesort", so the query gets pretty slow. I tried different indexes, with no luck.

        CREATE TABLE `posts` (
          `id` int(11) NOT NULL auto_increment,
          `created_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_on_created_at` (`created_at`),
          KEY `for_tags` (`trashed`,`published`,`clan_private`,`created_at`)
        ) ENGINE=InnoDB AUTO_INCREMENT=44390 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        CREATE TABLE `posts_tags` (
          `id` int(11) NOT NULL auto_increment,
          `post_id` int(11) default NULL,
          `tag_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_posts_tags_on_post_id_and_tag_id` (`post_id`,`tag_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=63175 DEFAULT CHARSET=utf8

    EXPLAIN gives:

        id: 1  select_type: SIMPLE  table: posts_tags  type: index
          possible_keys: index_post_id_and_tag_id
          key: index_post_id_and_tag_id  key_len: 10  ref: NULL  rows: 24159
          Extra: Using where; Using index; Using temporary; Using filesort

        id: 1  select_type: SIMPLE  table: posts  type: eq_ref
          possible_keys: PRIMARY
          key: PRIMARY  key_len: 4  ref: .posts_tags.post_id  rows: 1

        2 rows in set (0.00 sec)

    What kind of index do I need to define to stop MySQL from using filesort? Is that possible when the order field is not in the where clause?
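
    A hedged sketch of one thing to try (a suggestion, not a tested fix): the plan starts from posts_tags with a full index scan because no index leads with tag_id; an index that starts with tag_id narrows the scan to the matching tag before the join. The filesort on posts.created_at may remain, but over far fewer rows:

        ALTER TABLE posts_tags
            ADD KEY index_posts_tags_on_tag_id_and_post_id (tag_id, post_id);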

    Read the article

  • "Thread was being aborted" on large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a dataset (populated from an Oracle database). I am hitting an error at around the 600 mark: "Thread was being aborted". All I am doing is reading the dataset, and I still hit the issue. Is this too much data for a dataset? It seems to load into the dataset fine, though. I welcome any better ways to process this amount of data.

        rootTermsTable = entKw.GetRootKeywordsByCategory(catID);
        for (int k = 0; k < rootTermsTable.Rows.Count; k++)
        {
            string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString();
            ...
        }

        public DataTable GetKeywordsByCategory(string categoryID)
        {
            DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
            DbConnection con = provider.CreateConnection();
            con.ConnectionString = connectionString;
            DbCommand com = provider.CreateCommand();
            com.Connection = con;
            com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
            com.CommandType = CommandType.Text;
            DataSet ds = new DataSet();
            DbDataAdapter ad = provider.CreateDataAdapter();
            ad.SelectCommand = com;
            con.Open();
            ad.Fill(ds);
            con.Close();
            DataTable dt = new DataTable();
            dt = ds.Tables[0];
            return dt;
            //return ds.Tables[0].DefaultView;
        }
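
    A hedged alternative sketch (it assumes the per-row work can happen while reading): stream the rows with a DbDataReader instead of buffering all 114,000 into a DataSet, so far less work happens up front before processing begins:

        using (DbConnection con = provider.CreateConnection())
        {
            con.ConnectionString = connectionString;
            using (DbCommand com = con.CreateCommand())
            {
                com.CommandText = string.Format(
                    "Select IK_DBKEY From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
                con.Open();
                using (DbDataReader reader = com.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string keywordID = reader["IK_DBKEY"].ToString();
                        // ... process one keyword at a time ...
                    }
                }
            }
        }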

    Read the article

  • Excel 2010 VBA code is stuck when UserForm is shown

    - by Denis
    I've created a UserForm as a progress indicator while a web query (using an InternetExplorer object) runs in the background. The code gets triggered as shown below; the progress indicator form is called 'Progress'.

        Private Sub Worksheet_Change(ByVal Target As Range)
            If Target.Row = Range("B2").Row And Target.Column = Range("B2").Column Then
                Progress.Show vbModeless
                Range("A4:A65535").ClearContents
                GetWebData (Range("B2").Value)
                Progress.Hide
            End If
        End Sub

    What I see with this code is that the progress indicator form pops up when cell B2 changes. I also see that the range of cells in column A gets cleared, which tells me that the vbModeless is doing what I want. But then, somewhere within the GetWebData() procedure, things get hung up. As soon as I manually destroy the progress indicator form, the GetWebData() routine finishes and I see the correct results. But if I leave the progress indicator visible, things just get stuck indefinitely. The code below shows what GetWebData() is doing:

        Private Sub GetWebData(ByVal Symbol As String)
            Dim IE As New InternetExplorer
            'IE.Visible = True
            IE.navigate MyURL
            Do
                DoEvents
            Loop Until IE.readyState = READYSTATE_COMPLETE

            Dim Doc As HTMLDocument
            Set Doc = IE.document
            Dim Rows As IHTMLElementCollection
            Set Rows = Doc.getElementsByClassName("financialTable").Item(0).all.tags("tr")
            Dim r As Long
            r = 0
            For Each Row In Rows
                Sheet1.Range("A4").Offset(r, 0).Value = Row.Children.Item(0).innerText
                r = r + 1
            Next
        End Sub

    Any thoughts?
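
    A hedged tweak to try (an assumption, not a verified fix): also wait on IE.Busy, and repaint the modeless form inside the wait loop so its drawing keeps up while the query runs:

        Do While IE.Busy Or IE.readyState <> READYSTATE_COMPLETE
            DoEvents
            Progress.Repaint   ' keep the modeless form's display current
        Loop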

    Read the article

  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple MySQL select query. Currently (as shown in the code below) I am simply creating one table row per record--so, nothing complicated. However, I want to divide the list into more than one column, depending on the number of returned results. I've been wrapping my brain around how to count this in the PHP, and I'm not getting the results I need.

        <table>
        <?php
        $query = mysql_query("SELECT * FROM `sometable`");
        while($rows = mysql_fetch_array($query)){
        ?>
            <tr>
                <td><?php echo $rows['someRecord']; ?></td>
            </tr>
        <?php } ?>
        </table>

    Obviously there's one column generated. So if the records returned reach 10, I want to create a new column. In other words, with 12 results I'd have 2 columns, with 22 results I'd have 3 columns, and so on.
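
    A hedged sketch of one way to get that layout (the "one sub-table per column" markup is just one choice): collect the records first, then array_chunk() splits them into groups of 10, which yields 2 columns for 12 results, 3 for 22, and so on:

        <?php
        $records = array();
        $query = mysql_query("SELECT * FROM `sometable`");
        while ($row = mysql_fetch_array($query)) {
            $records[] = $row['someRecord'];
        }

        echo '<table><tr>';
        foreach (array_chunk($records, 10) as $column) {   // one chunk = one column
            echo '<td valign="top"><table>';
            foreach ($column as $record) {
                echo '<tr><td>' . $record . '</td></tr>';
            }
            echo '</table></td>';
        }
        echo '</tr></table>';
        ?>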

    Read the article
