Search Results

Search found 7311 results on 293 pages for 'rows'.

Page 67/293 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • Ruby on Rails export to csv - maintain mysql select statement order

    - by zekial
    Exporting some data from mysql to a csv file using FasterCSV. I'd like the columns in the outputted CSV to be in the same order as the select statement in my query. Example: rows = Data.find( :all, :select=>'name, age, height, weight' ) headers = rows[0].attributes.keys FasterCSV.generate do |csv| csv << headers rows.each do |r| csv << r.attributes.values end end CSV Output: height,weight,name,age 74,212,bob,23 70,201,fred,24 . . . I want the CSV columns in the same order as my select statement. Obviously the attributes method is not going to work. Any ideas on the best way to ensure that the columns in my csv file will be in the same order as the select statement? Got a lot of data and performance is an issue. The select statement is not static. I realize I could loop through column names within the rows.each loop but it seems kinda dirty.
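    A minimal sketch of one way around this (not from the question itself): keep the column list in an array and index into each record's attributes in that order, so the CSV columns follow the SELECT clause.

        # Sketch, assuming the column list is available as an array in SELECT order.
        columns = %w[name age height weight]                    # same order as the :select clause
        rows = Data.find(:all, :select => columns.join(', '))
        output = FasterCSV.generate do |csv|
          csv << columns
          rows.each { |r| csv << columns.map { |c| r.attributes[c] } }
        end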

    Read the article

  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar Create a table without columns. Yes, I am asking about something that will strike most as pointlessly academic. It is easy to produce a SELECT result with 0 rows (but with columns), e.g. SELECT a = 1 WHERE 1 = 0. Is it possible to produce a SELECT result with 0 columns (but with rows)? e.g. something like SELECT NO COLUMNS FROM Foo. (This is not valid T-SQL.) I came across this because I wanted to insert several rows without specifying any column data for any of them. e.g. (SQL Server 2005) CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY) INSERT INTO Bar SELECT NO COLUMNS FROM Foo -- Invalid column name 'NO'. -- An explicit value for the identity column in table 'Bar' can only be specified when a column list is used and IDENTITY_INSERT is ON. One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.) I tried things like INSERT INTO Bar () SELECT * FROM Foo -- Parameters supplied for object 'Bar' which is not a function. -- If the parameters are intended as a table hint, a WITH keyword is required. and INSERT INTO Bar DEFAULT VALUES SELECT * FROM Foo -- which is a standalone INSERT statement followed by a standalone SELECT statement. I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything. I was surprised to come up with nothing via Google either.
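    A hedged sketch of the workaround hinted at above (a loop, not a single set-based statement): since INSERT ... DEFAULT VALUES cannot take a SELECT source, one identity-only row can be inserted per row counted in Foo.

        -- Sketch only: insert as many default rows into Bar as there are rows in Foo.
        DECLARE @n INT
        SELECT @n = COUNT(*) FROM Foo
        WHILE @n > 0
        BEGIN
            INSERT INTO Bar DEFAULT VALUES   -- no column list; the identity fills itself
            SET @n = @n - 1
        END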

    Read the article

  • improve my code for collapsing a list of data.frames

    - by romunov
    Dear StackOverFlowers (flowers in short), I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (adding another column) which rows have came from which element of the list. This is what I've got so far. This is the data.frame that needs to be collapsed/stacked. > walk.sample [[1]] walker x y 1073 3 228.8756 -726.9198 1086 3 226.7393 -722.5561 1081 3 219.8005 -728.3990 1089 3 225.2239 -727.7422 1032 3 233.1753 -731.5526 [[2]] walker x y 1008 3 205.9104 -775.7488 1022 3 208.3638 -723.8616 1072 3 233.8807 -718.0974 1064 3 217.0028 -689.7917 1026 3 234.1824 -723.7423 [[3]] [1] 3 [[4]] walker x y 546 2 629.9041 831.0852 524 2 627.8698 873.3774 578 2 572.3312 838.7587 513 2 633.0598 871.7559 538 2 636.3088 836.6325 1079 3 206.3683 -729.6257 1095 3 239.9884 -748.2637 1005 3 197.2960 -780.4704 1045 3 245.1900 -694.3566 1026 3 234.1824 -723.7423 I have written a function to add a column that denote from which element the rows came followed by appending it to an existing data.frame. collapseToDataFrame <- function(x) { # collapse list to a dataframe with a twist walk.df <- data.frame() for (i in 1:length(x)) { n.rows <- nrow(x[[i]]) if (length(x[[i]])>1) { temp.df <- cbind(x[[i]], rep(i, n.rows)) names(temp.df) <- c("walker", "x", "y", "session") walk.df <- rbind(walk.df, temp.df) } else { cat("Empty list", "\n") } } return(walk.df) } > collapseToDataFrame(walk.sample) Empty list Empty list walker x y session 3 1 -604.5055 -123.18759 1 60 1 -562.0078 -61.24912 1 84 1 -594.4661 -57.20730 1 9 1 -604.2893 -110.09168 1 43 1 -632.2491 -54.52548 1 1028 3 240.3905 -724.67284 1 1040 3 232.5545 -681.61225 1 1073 3 228.8756 -726.91980 1 1091 3 209.0373 -740.96173 1 1036 3 248.7123 -694.47380 1 I'm curious whether this can be done more elegantly, with perhaps do.call() or some other more generic function?
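    One possible do.call()-based version (a sketch, assuming non-data.frame elements should simply be skipped):

        # Tag each surviving data.frame with its original list index, then stack once.
        keep   <- sapply(walk.sample, is.data.frame)
        tagged <- mapply(function(df, i) cbind(df, session = i),
                         walk.sample[keep], which(keep), SIMPLIFY = FALSE)
        walk.df <- do.call(rbind, tagged)   # one data.frame, with a 'session' column per list element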

    Read the article

  • dataset - set parent child relations

    - by Night Walker
    Hello all. I am trying to set the relations of rows in a DataSet and then show that relation in an XtraTreeList as a tree, with each child nested under its parent, but instead I get all of the nodes flat on one level. I am running this code but I get a view without any relations. Any idea what I am doing wrong? this.treeList1.BeginUpdate(); this.dataTable1.Clear(); DataRow dr = this.dataTable1.NewRow(); dr[0] = "father"; dr[1] = true; dr[2] = "ddd"; this.dataTable1.Rows.Add(dr); DataRow dr1 = this.dataTable1.NewRow(); dr1[0] = "son"; dr1[1] = true; dr1[2] = "ddd"; dr1.SetParentRow(dr); this.dataTable1.Rows.Add(dr1); DataRow dr2 = this.dataTable1.NewRow(); this.dataTable1.ParentRelations() dr2[0] = "grand son"; dr2[1] = true; dr2[2] = "ddd"; dr2.SetParentRow(dr1); this.dataTable1.Rows.Add(dr2); this.treeList1.EndUpdate();
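    For what it's worth, SetParentRow only records a hierarchy when the table has a self-referencing DataRelation (key column to parent-key column). A minimal sketch of that setup, with illustrative column and relation names:

        // Assumes: using System.Data; the names "Id"/"ParentId"/"NodeHierarchy" are made up.
        var ds = new DataSet();
        var table = ds.Tables.Add("Nodes");
        DataColumn id = table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        DataColumn parentId = table.Columns.Add("ParentId", typeof(int));
        ds.Relations.Add("NodeHierarchy", id, parentId);        // self-referencing relation

        DataRow father = table.Rows.Add(1, "father", DBNull.Value);
        DataRow son = table.Rows.Add(2, "son", DBNull.Value);
        son.SetParentRow(father, ds.Relations["NodeHierarchy"]);  // fills ParentId from the relation
        // A tree control bound to Id/ParentId (e.g. KeyFieldName/ParentFieldName) can then nest the rows.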

    Read the article

  • Trying to parse OpenCV YAML output with yaml-cpp

    - by Kenn Sebesta
    I've got a series of OpenCv generated YAML files and would like to parse them with yaml-cpp I'm doing okay on simple stuff, but the matrix representation is proving difficult. # Center of table tableCenter: !!opencv-matrix rows: 1 cols: 2 dt: f data: [ 240, 240] This should map into the vector 240 240 with type float. My code looks like: #include "yaml.h" #include <fstream> #include <string> struct Matrix { int x; }; void operator >> (const YAML::Node& node, Matrix& matrix) { unsigned rows; node["rows"] >> rows; } int main() { std::ifstream fin("monsters.yaml"); YAML::Parser parser(fin); YAML::Node doc; Matrix m; doc["tableCenter"] >> m; return 0; } But I get terminate called after throwing an instance of 'YAML::BadDereference' what(): yaml-cpp: error at line 0, column 0: bad dereference Abort trap I searched around for some documentation for yaml-cpp, but there doesn't seem to be any, aside from a short introductory example on parsing and emitting. Unfortunately, neither of these two help in this particular circumstance. As I understand, the !! indicate that this is a user-defined type, but I don't see with yaml-cpp how to parse that.
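    One likely culprit (an educated guess, not a confirmed diagnosis): the document is never loaded, so doc is still empty when it is dereferenced. A sketch against the old yaml-cpp 0.2.x API, assuming a plain document (the %YAML:1.0 directive and !!opencv-matrix tag in raw OpenCV files may still need separate handling):

        #include "yaml.h"
        #include <fstream>
        #include <string>
        #include <vector>

        struct Matrix {
            unsigned rows, cols;
            std::string dt;
            std::vector<float> data;
        };

        void operator >> (const YAML::Node& node, Matrix& m) {
            node["rows"] >> m.rows;
            node["cols"] >> m.cols;
            node["dt"]   >> m.dt;
            const YAML::Node& d = node["data"];
            for (unsigned i = 0; i < d.size(); i++) {
                float v;
                d[i] >> v;
                m.data.push_back(v);
            }
        }

        int main() {
            std::ifstream fin("monsters.yaml");
            YAML::Parser parser(fin);
            YAML::Node doc;
            parser.GetNextDocument(doc);   // without this, doc stays empty -> BadDereference
            Matrix m;
            doc["tableCenter"] >> m;
            return 0;
        }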

    Read the article

  • Best way to change jqGrid rowNum from ALL to -1 before passing it to a web service

    - by Billyhole
    I'm looking to find the best way to allow users to choose to show ALL records in a jqGrid. I know that a -1 value passed for the rows parameter denotes ALL, but I want the word "ALL" not a -1 to appear in the rowList select element, ie. rowList: [15, 50, 100, 'ALL']. I'm passing the grid request to a web service which accepts an int for "rows", and I'm trying find how and when I should change the user selected value of "ALL" to a -1 before it gets sent to the web service. Below is my cleaned up grid code. I tried some various code blocks before my $.ajax in the datatype function. But most attempts just seemed like I have to be doing this the most convoluted way I possibly could. For example, datatype: function(postdata) { if ($("#gridTableAssets").jqGrid('getGridParam', 'rowNum') == 'ALL') { $("#gridTableAssets").appendPostData({ "rows": -1, "page": 1 }); } $.ajax({... But doing that seemed to cause the actual "page" GridParam to be nulled out on subsequent grid actions, forcing me handle that in other places. There just seems like this is something that would be frequently done out there and have clean way of doing it. Cleaned grid code: $("#gridTableAssets").jqGrid({ datatype: function(postdata) { $.ajax({ url: "/Service/Repository.asmx/GetAssets", data: JSON.stringify(postdata), type: 'POST', contentType: "application/json; charset=utf-8", error: function(XMLHttpRequest, textStatus, errorThrown) { alert('error'); }, success: function(msg) { var assetsGrid = $("#gridTableAssets")[0]; assetsGrid.addJSONData(JSON.parse(msg)); ... } }); }, ... pager: $('#pagerAssets'), rowNum: 15, rowList: [15, 50, 100, 'ALL'], ... onPaging: function(index, colindex, sortorder) { SessionKeepAlive(); } }); And here is the web service [WebMethod] public string GetAssetsOfAssetStructure(bool _search, int rows, int page, string sidx, string sord, string filters)
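    A minimal sketch (illustrative only): since datatype is already a function, the 'ALL' to -1 mapping can be done on a copy of postdata right before the $.ajax call, leaving the grid's own rowNum/page state alone.

        datatype: function (postdata) {
            var req = $.extend({}, postdata);          // copy, so jqGrid's own paging state is untouched
            if (req.rows === 'ALL') { req.rows = -1; req.page = 1; }
            $.ajax({
                url: "/Service/Repository.asmx/GetAssets",
                data: JSON.stringify(req),
                type: 'POST',
                contentType: "application/json; charset=utf-8",
                success: function (msg) { /* addJSONData as before */ }
            });
        }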

    Read the article

  • Selecting a multi-dimensional array in LINQ

    - by mckhendry
    I have a task where I need to translate a DataTable to a two-dimensional array. That's easy enough to do by just looping over the rows and columns (see example below). private static string[,] ToArray(DataTable table) { var array = new string[table.Rows.Count,table.Columns.Count]; for (int i = 0; i < table.Rows.Count; ++i) for (int j = 0; j < table.Columns.Count; ++j) array[i, j] = table.Rows[i][j].ToString(); return array; } What I'd really like to do is use a select statement in LINQ to generate that 2D array. Unfortunately it looks like there is no way in LINQ to select a multidimensional array. Yes, I'm aware that I can use LINQ to select a jagged array, but that's not what I want. Is my assumption correct, or is there a way to use LINQ to select a multi-dimensional array?
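    For the record, a hedged sketch of about as close as LINQ gets: the query still only produces a sequence, so the rectangular array has to be filled by indexing; Enumerable.Range merely replaces the explicit nested loops.

        // Assumes: using System.Data; using System.Linq;
        private static string[,] ToArray(DataTable table)
        {
            var array = new string[table.Rows.Count, table.Columns.Count];
            var cells = from i in Enumerable.Range(0, table.Rows.Count)
                        from j in Enumerable.Range(0, table.Columns.Count)
                        select new { i, j };
            foreach (var c in cells)
                array[c.i, c.j] = table.Rows[c.i][c.j].ToString();
            return array;
        }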

    Read the article

  • PHP/MySQL Printing Duplicate Labels

    - by Michael
    Using an addon of FPDF, I am printing labels using PHP/MySQL (http://www.fpdf.de/downloads/addons/29/). I'd like to be able to have the user select how many labels to print. For example, if the query puts out 10 records and the user wants to print 3 labels for each record, it prints them all in one set: 1,1,1,2,2,2,3,3,3...etc. Any ideas? <?php require_once('auth.php'); require_once('../config.php'); require_once('../connect.php'); require('pdf/PDF_Label.php'); $sql="SELECT $tbl_members.lastname, $tbl_members.firstname, $tbl_members.username, $tbl_items.username, $tbl_items.itemname FROM $tbl_members, $tbl_items WHERE $tbl_members.username = $tbl_items.username"; $result=mysql_query($sql); if(mysql_num_rows($result) == 0){ echo "Your search criteria does not return any results, please try again."; exit(); } $pdf = new PDF_Label("5160"); $pdf->AddPage(); // Print labels while($rows=mysql_fetch_array($result)){ $name = $rows['lastname'].', '.$rows['firstname']; $item= $rows['itemname']; $text = sprintf(" * %s *\n %s\n", $name, $item); $pdf->Add_Label($text); } $pdf->Output('labels.pdf', 'D'); ?>
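    One possible shape of the loop (a sketch; the $_POST['copies'] source is hypothetical): wrap Add_Label in an inner loop driven by a user-supplied copy count.

        // $copies would come from the user, e.g. a form field; defaults to 1.
        $copies = isset($_POST['copies']) ? max(1, (int)$_POST['copies']) : 1;
        while($rows = mysql_fetch_array($result)){
            $name = $rows['lastname'].', '.$rows['firstname'];
            $item = $rows['itemname'];
            $text = sprintf(" * %s *\n %s\n", $name, $item);
            for ($i = 0; $i < $copies; $i++) {
                $pdf->Add_Label($text);      // same label emitted $copies times in a row
            }
        }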

    Read the article

  • MySQL returns a deadlock when inserting a row whose FK parent is locked 'for update'

    - by constantin-slednev
    Hello developers! I get deadlock error in my mysql transaction. The simple example of my situation: Thread1 > set autocommit=0; Query OK, 0 rows affected (0.00 sec) Thread1 > SET TRANSACTION ISOLATION LEVEL READ COMMITTED; Query OK, 0 rows affected (0.00 sec) Thread1 > SELECT * FROM A WHERE ID=1000 FOR UPDATE; 1 row in set (0.00 sec) Thread2 > set autocommit=0; Query OK, 0 rows affected (0.00 sec) Thread2 > SET TRANSACTION ISOLATION LEVEL READ COMMITTED; Query OK, 0 rows affected (0.00 sec) Thread2 > INSERT INTO B (AID, NAME) VALUES (1000, 'Hello world'); SLEEP Query OK, 1 row affected (4.99 sec) Thread1 > INSERT INTO B (AID, NAME) VALUES (1000, 'Hello world2'); ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction B.AID -> FK -> A.ID I see three solutions: catch deadlock error in code and retry query. use innodb_locks_unsafe_for_binlog in my.cnf lock (for update) table A in Thread2 before insert Can you give me more solutions ? Current solutions do not fit me.
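    A sketch of option 3 from the list above: Thread2 takes the same parent-row lock before the child insert, so the two transactions serialize on A.ID instead of deadlocking on the FK's shared lock.

        SET autocommit = 0;
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
        SELECT * FROM A WHERE ID = 1000 FOR UPDATE;   -- Thread2 now waits here, not mid-insert
        INSERT INTO B (AID, NAME) VALUES (1000, 'Hello world');
        COMMIT;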

    Read the article

  • Is there added overhead to looking up a column in a DataTable by name rather than by index?

    - by Ben McCormack
    In a DataTable object, is there added overhead to looking up a column value by name thisRow("ColumnA") rather than by the column index thisRow(0)? In which scenarios might this be an issue? I work on a team that has lots of experience writing VB6 code and I noticed that they didn't do column lookups by name for DataTable objects or data grids. Even in .NET code, we use a set of integer constants to reference column names in these types of objects. I asked our team lead why this was so, and he mentioned that in VB6, there was a lot of overhead in looking up data by column name rather than by index. Is this still true for .NET? Example code (in VB.NET, but same applies to C#): Public Sub TestADOData() Dim dt As New DataTable 'Set up the columns in the DataTable ' dt.Columns.Add(New DataColumn("ID", GetType(Integer))) dt.Columns.Add(New DataColumn("Name", GetType(String))) dt.Columns.Add(New DataColumn("Description", GetType(String))) 'Add some data to the data table ' dt.Rows.Add(1, "Fred", "Pitcher") dt.Rows.Add(3, "Hank", "Center Field") 'Method 1: By Column Name ' For Each r As DataRow In dt.Rows Console.WriteLine( _ "{0,-2} {1,-10} {2,-30}", r("ID"), r("Name"), r("Description")) Next Console.WriteLine() 'Method 2: By Column Index ' For Each r As DataRow In dt.Rows Console.WriteLine("{0,-2} {1,-10} {2,-30}", r(0), r(1), r(2)) Next End Sub Is there a case where method 2 provides a performance advantage over method 1?
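    A middle-ground sketch (not from the question itself): resolve each column name to its ordinal once, outside the loop, which keeps readable names while paying the name lookup once per table rather than once per row.

        ' Sketch: name lookup happens three times total, not three times per row.
        Dim idCol As Integer = dt.Columns("ID").Ordinal
        Dim nameCol As Integer = dt.Columns("Name").Ordinal
        Dim descCol As Integer = dt.Columns("Description").Ordinal
        For Each r As DataRow In dt.Rows
            Console.WriteLine("{0,-2} {1,-10} {2,-30}", r(idCol), r(nameCol), r(descCol))
        Next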

    Read the article

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. Index.php has a file upload form that posts back to index.php. I can access the $_FILES no problem on index.php after submitting the form. My issue is that I want (after the form submit and the page loads) to use .ajax (jQuery) to call another .php file so that file can open and process some of the rows and return the results to ajax. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process (put in the DB etc) the csv in chunks and display it for the user in between chunks. Im doing it this way because the files are 400,000+ rows and the user doesnt want to wait the 10+ min for them all to be processed. I dont want to move this file (save it) because I just need to process it and throw it away and if a user closes the page while its processing the file wont be thrown away. I could cron script it but I dont want to. What I would really like to do is pass the (single) $_FILES through .ajax OR Save it in a $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Heres the ajax code if that helps: function processCSV(startIndex, length) { $.ajax({ url: "ajax-targets/process-csv.php", dataType: "json", type: "POST", data: { startIndex: startIndex, length: length }, timeout: 60000, // 1000 = 1 sec success: function(data) { // JQuery to display the rows from the CSV var newStart = startIndex+length; if(newStart <= data['csvNumRows']) { processCSV(newStart, length); } } }); } processCSV(1, 2); }); P.S. I did try this Passing $_FILES or $_POST to a new page with PHP but its not working for me :( SOS.
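    For reference, a sketch of the usual pattern (the 'csv' field name and session key are assumptions): the uploaded temp file only exists for the request that received it, so index.php has to persist it somewhere process-csv.php can reach, then clean it up after the last chunk.

        session_start();
        if (isset($_FILES['csv']) && is_uploaded_file($_FILES['csv']['tmp_name'])) {
            $path = sys_get_temp_dir() . '/upload_' . session_id() . '.csv';
            move_uploaded_file($_FILES['csv']['tmp_name'], $path);
            $_SESSION['csv_path'] = $path;   // process-csv.php reads this path, then unlink()s it after the final chunk
        }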

    Read the article

  • Using multiple aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats() I want to create a domain specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' comprises of algebraic expressions (or more specifically SQL like criteria) which I use to generate (ANSI) SQL statements which are sent to the db engine. The following lines are examples of what the language statements will look like, and hopefully, it will help further clarify the concept: **Example 1** DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42 Translation: fetch all rows in an underlying table where computed values for aggregate function foobar() is between 1 and 3 AND computed value for AGG FUNC fredstats() is greater than 42 **Example 2** DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42 Translation: Fetch all rows where the specified criteria matches **Example 3** DQL statement: foobar('green') / foobar('red') <> 42 Translation: Fetch all rows where the specified criteria matches **Example 4** DQL statement: foobar('green') - foobar('red') >= 42 Translation: Fetch all rows where the specified criteria matches Given the following information: The table upon which the queries above are being executed is called 'tbl' table 'tbl' has the following structure (id int, name varchar(32), weight float) The result set returns only the tbl.id, tbl.name and the names of the aggregate functions as columns in the result set - so for example the foobar() AGG FUNC column will be called foobar in the result set. So for example, the first DQL query will return a result set with the following columns: id, name, foobar, fredstats Given the above, my questions then are: What would be the underlying SQL required for Example1 ? What would be the underlying SQL required for Example3 ? Given an algebraic equation comprising of AGGREGATE functions, Is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)? I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.
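    A hedged sketch of how Example 1 might translate (assuming each custom aggregate is already defined in PostgreSQL and that rows are grouped per tbl.id/tbl.name): criteria over aggregates belong in HAVING rather than WHERE.

        -- Sketch only: grouping keys and aggregate semantics are assumptions.
        SELECT t.id, t.name,
               foobar('yellow')    AS foobar,
               fredstats('weight') AS fredstats
        FROM   tbl t
        GROUP  BY t.id, t.name
        HAVING foobar('yellow') BETWEEN 1 AND 3
           AND fredstats('weight') > 42;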

    Read the article

  • WPF Toolkit: Nullable object must have a value

    - by Via Lactea
    Hi All, I am trying to create some line charts from a dataset and getting an error (WPF + WPF Toolkit + C#): "Nullable object must have a value". Here is the code that I use to add some data points to the chart: ObservableCollection<VelChartPoint> points = new ObservableCollection<VelChartPoint>(); foreach (DataRow dr in dc.Tables[0].Rows) { points.Add(new VelChartPoint() { Label = dr[0].ToString(), Value = double.Parse(dr[1].ToString()) }); } Here is the VelChartPoint class: public class VelChartPoint : VelObject, INotifyPropertyChanged { public DateTime Date { get; set; } public string Label { get; set; } private double _Value; public double Value { get { return _Value; } set { _Value = value; var handler = PropertyChanged; if (null != handler) { handler.Invoke(this, new PropertyChangedEventArgs("Value")); } } } public string FieldName { get; set; } public event PropertyChangedEventHandler PropertyChanged; public VelChartPoint() { } } So the problem occurs in this part of the code: points.Add(new VelChartPoint { Name = dc.Tables[0].Rows[0][0].ToString(), Value = double.Parse(dc.Tables[0].Rows[0][1].ToString()) } ); I've run some tests; here are some results I've found. This part of the code doesn't work for me: string[] labels = new string[] { "label1", "label2", "label3" }; foreach (string label in labels) { points.Add(new VelChartPoint { Name = label, Value = 500.0 } ); } But this one works fine: points.Add(new VelChartPoint { Name = "LabelText", Value = double.Parse(dc.Tables[0].Rows[0][1].ToString()) } ); Please help me solve this error.
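    A defensive sketch (a guess at the trigger, not a confirmed diagnosis): if any row's value cell is NULL or empty, a chart point ends up without a usable value; skipping such rows with TryParse avoids constructing them at all.

        // Sketch: only build points from rows whose value column actually parses.
        foreach (DataRow dr in dc.Tables[0].Rows)
        {
            double value;
            if (dr.IsNull(1) || !double.TryParse(dr[1].ToString(), out value))
                continue;                               // skip rows with no parsable value
            points.Add(new VelChartPoint { Label = dr[0].ToString(), Value = value });
        }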

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling understanding what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure if I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above mentioned MSDN article states: The pages in the data chain and the rows in them are ordered on the value of the clustered index key. And Using Clustered Indexes for example states: For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted. Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record a new data page would be created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-Tree? This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index it can read data pages sequentially (following the links) but these pages are not (necessarily) block wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)

    Read the article

  • EF4 and multiple abstract levels

    - by Cedric
    I need to use inheritance with EF4 and the TPH model created from the DB. I created a new project to test simple classes. Here is my class model: Here is my table in SQL Server 2008: VEHICLE ID : int PK Owner : varchar(50) Consumption : float FirstCirculationDate : date Type : varchar(50) Discriminator : varchar(10) I added a condition in my EDMX on the Discriminator field to differentiate the Scooter, Car, Motorbike and Bike entities. MotorizedVehicle and Vehicle are abstract. But when I compile, this error appears: Error 3032: Problem in mapping fragments starting at lines 78, 85: EntityTypes EF4InheritanceModel.Scooter, EF4InheritanceModel.Motorbike, EF4InheritanceModel.Car, EF4InheritanceModel.Bike are being mapped to the same rows in table Vehicle. Mapping conditions can be used to distinguish the rows that these types are mapped to. Edit (to Ladislav): I tried it and the error changed to this for all of my entities: Error 3034: Problem in mapping fragments starting at lines 72, 86: An entity is mapped to different rows within the same table. Ensure these two mapping fragments do not map two groups of entities with overlapping keys to two distinct groups of rows. To Henk (with Ladislav's suggestion): Here are all of the mapping details: What's wrong? Thanks

    Read the article

  • Can't bind string containing @ char with mysqli_stmt_bind_param

    - by Tirithen
    I have a problem with my database class. I have a method that takes one prepared statement and any number of parameters, binds them to the statement, executes the statement and formats the result into a multidimensional array. Everything works fine until I try to include an email address in one of the parameters. The email contains an @ character and that one seems to break everything. When I supply the parameters: $types = "ss" and $parameters = array("[email protected]", "testtest") I get the error: Warning: Parameter 3 to mysqli_stmt_bind_param() expected to be a reference, value given in ...db/Database.class.php on line 63 Here is the method: private function bindAndExecutePreparedStatement(&$statement, $parameters, $types) { if(!empty($parameters)) { call_user_func_array('mysqli_stmt_bind_param', array_merge(array($statement, $types), &$parameters)); /*foreach($parameters as $key => $value) { mysqli_stmt_bind_param($statement, 's', $value); }*/ } $result = array(); $statement->execute() or debugLog("Database error: ".$statement->error); $rows = array(); if($this->stmt_bind_assoc($statement, $row)) { while($statement->fetch()) { $copied_row = array(); foreach($row as $key => $value) { if($value !== null && mb_substr($value, 0, 1, "UTF-8") == NESTED) { // If value has a nested result inside $value = mb_substr($value, 1, mb_strlen($value, "UTF-8") - 1, "UTF-8"); $value = $this->parse_nested_result_value($value); } $copied_row[$key] = $value; } $rows[] = $copied_row; } } // Generate result $result['rows'] = $rows; $result['insert_id'] = $statement->insert_id; $result['affected_rows'] = $statement->affected_rows; $result['error'] = $statement->error; return $result; } I have gotten one suggestion that the array_merge is casting the parameter to a string in the merge, and to change it to &$parameters so it remains a reference. So I tried that (3rd line of the method), but it did not make any difference. What should I do? Is there a better way to do this without call_user_func_array?
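    The usual fix for that warning (a sketch) is to hand call_user_func_array an argument array whose parameter entries are explicit references, rather than relying on array_merge, which copies values.

        // Build the bind_param argument list with references to each parameter.
        $args = array($statement, $types);
        foreach ($parameters as $key => $value) {
            $args[] = &$parameters[$key];    // bind_param requires references, not copies
        }
        call_user_func_array('mysqli_stmt_bind_param', $args);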

    Read the article

  • CS 50- Pset 1 Mario Program

    - by boametaphysica
    The problem set asks us to create a half pyramid using hashes. Here is a link to an image of how it should look- I get the idea and have written the program until printing the spaces (which I have replaced by "_" just so that I can test the first half of it). However, when I try to run my program, it doesn't go beyond the do-while loop. In other words, it keeps asking me for the height of the pyramid and does not seem to run the for loop at all. I've tried multiple approaches but this problem seems to persist. Any help would be appreciated! Below is my code- # include <cs50.h> # include <stdio.h> int main(void) { int height; do { printf("Enter the height of the pyramid: "); height = GetInt(); } while (height > 0 || height < 24); for (int rows = 1; rows <= height; rows++) { for (int spaces = height - rows; spaces > 0; spaces--) { printf("_"); } } return 0; } Running this program yields the following output- Enter the height of the pyramid: 11 Enter the height of the pyramid: 1231 Enter the height of the pyramid: aawfaf Retry: 12 Enter the height of the pyramid:
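    The loop condition looks inverted: it should repeat while the input is invalid (outside 1 to 23), not while it is valid. A sketch of that change:

        /* Re-prompt only for out-of-range input; any height in 1..23 exits the loop. */
        do
        {
            printf("Enter the height of the pyramid: ");
            height = GetInt();
        }
        while (height < 1 || height > 23);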

    Read the article

  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issue with my datatables. In this particular case I am using it as holder of data, it is never used in GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main user was set_Item of DataTable. protected myclass(DataTable dataTable, DataColumn idColumn) { this.dataTable = dataTable; IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); int id = 1; foreach (DataRow r in this.dataTable.Rows) { r[JobIdColumn] = id++; r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0; } Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no usage for in my scenario. So my solution to this was to open editing on all of the rows and never close them again. _dt.BeginLoadData(); foreach (DataRow row in _dt.Rows) row.BeginEdit(); Is there a better solution? This feels too much like a big giant hack that will eventually come and bite me. You might suggest that I dont use DataTable at all, but I have already considered that and rejected it due to the amount of effort that would be required to reimplement with a custom class. The main reason it is a datatable is that it is ancient code (.net 1.1 time) and I dont want to spend that much time changing it, and it is also because the original table comes out of a third party component.
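    A more balanced sketch of the same idea (a BeginEdit with a matching EndEdit per row, inside a load-data window), so edits are not left open indefinitely:

        // Sketch: one edit window per row instead of one implicit commit per column assignment.
        this.dataTable.BeginLoadData();
        int id = 1;
        foreach (DataRow r in this.dataTable.Rows)
        {
            r.BeginEdit();
            r[JobIdColumn] = id++;
            r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
            r.EndEdit();
        }
        this.dataTable.EndLoadData();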

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL) TABLE Foo col1 int ,col2 int ,col3 int ,col4 int PRIMARY KEY (col1,col2) INDEX IX_1 col3 INCLUDE col4 Now, if I issue the statement DELETE FROM Foo WHERE col1=12 AND col2 > 34 I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1 and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?
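    As for estimating the impact without dropping anything, one hedged starting point on SQL Server 2005 is the index usage DMV, which contrasts how often each index is read versus maintained:

        -- Sketch: reads vs. maintenance writes per index on Foo for the current database.
        SELECT i.name,
               s.user_seeks + s.user_scans + s.user_lookups AS reads,
               s.user_updates                               AS maintenance_writes
        FROM   sys.indexes i
        LEFT JOIN sys.dm_db_index_usage_stats s
               ON s.object_id = i.object_id AND s.index_id = i.index_id AND s.database_id = DB_ID()
        WHERE  i.object_id = OBJECT_ID('dbo.Foo');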

    Read the article

  • Why is MySQL with InnoDB doing a table scan when the key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from explain:

    mysql> explain select * from table1 where tdcol between '2010-04-13:00:00' and '2010-04-14 03:16';
    +----+-------------+--------------------+-------+---------------+--------------+---------+------+---------+-------------+
    | id | select_type | table              | type  | possible_keys | key          | key_len | ref  | rows    | Extra       |
    +----+-------------+--------------------+-------+---------------+--------------+---------+------+---------+-------------+
    |  1 | SIMPLE      | table1             | range | tdcol         | tdcol        | 8       | NULL | 5437848 | Using where |
    +----+-------------+--------------------+-------+---------------+--------------+---------+------+---------+-------------+
    1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected from this query. However, if I query for just one more minute of data, we get this query plan:

    mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
    +----+-------------+--------------------+------+---------------+------+---------+------+-----------+-------------+
    | id | select_type | table              | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
    +----+-------------+--------------------+------+---------------+------+---------+------+-----------+-------------+
    |  1 | SIMPLE      | table1             | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
    +----+-------------+--------------------+------+---------------+------+---------+------+-----------+-------------+
    1 row in set (0.00 sec)

    The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help, and I'm more than happy to provide more info/answer questions.
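    One hedged workaround while the underlying costing question stays open: USE INDEX is only a hint the optimizer may ignore, whereas FORCE INDEX makes a table scan be costed as very expensive, so the range plan on tdcol is kept.

        SELECT *
        FROM   table1 FORCE INDEX (tdcol)
        WHERE  tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17';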

    Read the article

  • Translate SQL to OCL ?

    - by Roland Bengtsson
    I have a piece of SQL that I want to translate to OCL. I'm not good at SQL, so I want to increase maintainability this way. We are using Interbase 2009 and Delphi 2007 with Bold and model-driven development. Now my hope is that someone here speaks both good SQL and OCL :-) The original SQL: Select Bold_Id, MessageId, ScaniaId, MessageType, MessageTime, Cancellation, ChassieNumber, UserFriendlyFormat, ReceivingOwner, Invalidated, InvalidationReason, (Select Parcel.MCurrentStates From Parcel Where ScaniaEdiSolMessage.ReceivingOwner = Parcel.Bold_Id) as ParcelState From ScaniaEdiSolMessage Where MessageType = 'IFTMBP' and not Exists (Select * From ScaniaEdiSolMessage EdiSolMsg Where EdiSolMsg.ChassieNumber = ScaniaEdiSolMessage.ChassieNumber and EdiSolMsg.ShipFromFinland = ScaniaEdiSolMessage.ShipFromFinland and EdiSolMsg.MessageType = 'IFTMBF') and invalidated = 0 Order By MessageTime desc After a small simplification: Select Bold_Id, (Select Parcel.MCurrentStates From Parcel where ScaniaEdiSolMessage.ReceivingOwner = Parcel.Bold_Id) From ScaniaEdiSolMessage Where MessageType = 'IFTMBP' and not Exists (Select * From ScaniaEdiSolMessage EdiSolMsg Where EdiSolMsg.ChassieNumber = ScaniaEdiSolMessage.ChassieNumber and EdiSolMsg.ShipFromFinland = ScaniaEdiSolMessage.ShipFromFinland and EdiSolMsg.MessageType = 'IFTMBF') and invalidated = 0 NOTE: There are 2 cases for MessageType, 'IFTMBP' and 'IFTMBF'. So the table to be listed is ScaniaEdiSolMessage. It has attributes like: MessageType: String ChassiNumber: String ShipFromFinland: Boolean Invalidated: Boolean It also has a link to table Parcel named ReceivingOwner with BoldId as key. So it seems like it lists all rows of ScaniaEdiSolMessage and then has a subquery that also lists all rows of ScaniaEdiSolMessage, named EdiSolMsg. It then excludes almost all rows. In fact, the query above gives one hit from 28000 records. In OCL it is easy to list all instances: ScaniaEdiSolMessage.allinstances It is also easy to filter rows by select, for example: ScaniaEdiSolMessage.allinstances->select(shipFromFinland and not invalidated) But I do not understand how I should write the OCL to match the SQL above.

    Read the article

  • MySQL efficiency as it relates to the database/table size

    - by mlissner
    I'm building a system using django, Sphinx and MySQL that's very quickly becoming quite large. The database currently has about 2000 rows, and I've written a program that's going to populate it with another 40,000 rows in a couple days. Since the database is live right now, and since I've never had a database with this much information in it, I'm worried about some things: Is adding all these rows going to seriously degrade the efficiency of my django app? Will I need to go back through it and optimize all my database calls so they're doing things more cleverly? Or will this make the database slow all around to the extent that I can't do anything about it at all? If you scoff at my 40k rows, then, my next question is, at what point SHOULD I be concerned? I will likely be adding another couple hundred thousand soon, so I worry, and I fret. How is sphinx going to feel about all this? Is it going to freak out when it realizes it has to index all this data? Or will it be fine? Is this normal for it? If it is, at what point should I be concerned that it's too much data for Sphinx? Thanks for any thoughts.

    Read the article

  • Best way to store list of numbers and to retrieve them

    - by bingoNumbers
    Hi. What is the best way to store a list of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store on a Database a number of rows, where each row contains 5-10 numbers ranging from 0 to 90. I will store a big number of those rows. What I'd like to be able is to retrieve the rows that have at least X number in common to a newly generated row. Example: [3,4,33,67,85,99] [55,56,77,89,98,99] [3,4,23,47,85,91] Those are on the DB I will generate this: [1,2,11,45,47,88] and now I want to get the rows that have at least 1 number in common with this one. The easiest (and dumbest?) way is to make 6 select and check for similar results. I thought to store numbers with a large binary string like 000000000000000000000100000000010010110000000000000000000000000 with 99 numbers where each number represent a number from 1 to 99, so if I have 1 at the 44th position, it means that I have 44 on that row. This method is probably shifting the difficult tasks to the Db but it's again not very smart. Any suggestion?
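    A sketch of the usual relational shape for this (table and column names are made up): one row per (draw, number), indexed on number, turns "at least X in common" into a single GROUP BY query.

        CREATE TABLE draw_numbers (
            draw_id INT NOT NULL,
            number  TINYINT NOT NULL,
            PRIMARY KEY (draw_id, number),
            KEY idx_number (number)
        );

        -- Draws sharing at least 1 number with the new row [1,2,11,45,47,88]:
        SELECT draw_id, COUNT(*) AS matches
        FROM   draw_numbers
        WHERE  number IN (1, 2, 11, 45, 47, 88)
        GROUP  BY draw_id
        HAVING COUNT(*) >= 1;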

    Read the article

  • Modifying NSMutableDictionary from a single index format into nested (array within array)

    - by Michael Robinson
    I need to take the member ID off the top of this and create an array inside that contains the rest of the JSON return. Here is my JSON return. [{"F_Name_VC":"Frank", "L_Name_VC":"Johnson", "userid":"18", "age":"23", },] After it is json-deserialized it looks like this: I need convert it to be transformed into this, with children sub-array: Someone else posted a similar question but without the fact that it was coming from a JSON deserialization. There were two answers but no conclusion to the question. (Answer 1): NSDictionary *item1 = [NSDictionary dictionaryWithObjects:[NSArray arrayWithObjects:@" member",[NSNumber numberWithInt:3],nil] forKeys:[NSArray arrayWithObjects:@"Title",@"View",nil]]; Answer (2): NSMutableArray *Rows = [NSMutableArray arrayWithCapacity: 1]; for (int i = 0; i < 4; ++i) { NSMutableArray *theChildren = [NSMutableArray arrayWithCapacity: 1]; [theChildren addObject: [NSString stringWithFormat: @"tester %d", i]]; NSString *aTitle = [NSString stringWithFormat: @"Item %d", i]; NSDictionary *anItem = [NSDictionary dictionaryWithObjectsAndKeys: aTitle, @"Title", theChildren, @"Children"]; [Rows addObject: anItem]; } NSDictionary *Root = [NSDictionary withObject: Rows andKey: @"Rows"]; Here is my JSON return and save code: NSData *jsonData = [jsonreturn dataUsingEncoding:NSUTF32BigEndianStringEncoding]; NSError *error = nil; NSDictionary * dict = [[CJSONDeserializer deserializer] deserializeAsDictionary:jsonData error:&error]; if (dict) { rowsArray = [dict objectForKey:@"member"]; [rowsArray retain]; } Thanks in advance. Every tutorial on JSON only shows a simple array being returned, never with children..It's driving me crazy trying to figure this out.
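    A rough sketch of the reshaping step (assuming the existing rowsArray really is an array of member dictionaries; the key names are taken from the JSON shown above):

        // Pull the member id out of each dictionary, wrap the remaining fields as
        // a one-element "Children" array, and collect everything under "Rows".
        NSMutableArray *rows = [NSMutableArray array];
        for (NSDictionary *member in rowsArray) {
            NSMutableDictionary *item = [NSMutableDictionary dictionaryWithDictionary:member];
            NSString *memberId = [item objectForKey:@"userid"];
            [item removeObjectForKey:@"userid"];
            NSDictionary *row = [NSDictionary dictionaryWithObjectsAndKeys:
                                     memberId, @"Title",
                                     [NSArray arrayWithObject:item], @"Children", nil];
            [rows addObject:row];
        }
        NSDictionary *root = [NSDictionary dictionaryWithObject:rows forKey:@"Rows"];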

    Read the article

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP! I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently, I have 28,300 rows, but this may change daily. Example of data format as follows: Case Number Branch Driver 1342 NYC Bob 4532 PHL Jim 7391 CIN John 8251 SAN John 7211 SAN Mary 9121 CLE John 7424 CIN John Example of finished table: Driver NYC PHL CIN SAN CLE Bob 1 0 0 0 0 Jim 0 1 0 0 0 John 0 0 2 1 1 Mary 0 0 0 1 0 Code as follows: Sub CreateSummaryReportUsingPivot() ' Use a Pivot Table to create a static summary report ' with model going down the rows and regions across Dim WSD As Worksheet Dim PTCache As PivotCache Dim PT As PivotTable Dim PRange As Range Dim FinalRow As Long Dim FinalCol As Long Set WSD = Worksheets("PivotTable") 'Name active worksheet as "PivotTable" ActiveSheet.Name = "PivotTable" ' Delete any prior pivot tables For Each PT In WSD.PivotTables PT.TableRange2.Clear Next PT ' Define input area and set up a Pivot Cache FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row FinalCol = WSD.Cells(1, Application.Columns.Count). _ End(xlToLeft).Column Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol) Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:= _ xlDatabase, SourceData:=PRange) ' Create the Pivot Table from the Pivot Cache Set PT = PTCache.CreatePivotTable(TableDestination:=WSD. _ Cells(2, FinalCol + 2), TableName:="PivotTable1") ' Turn off updating while building the table PT.ManualUpdate = True ' Set up the row fields PT.AddFields RowFields:="Driver", ColumnFields:="Branch" ' Set up the data fields With PT.PivotFields("Case Number") .Orientation = xlDataField .Function = xlCount .Position = 1 End With With PT .ColumnGrand = False .RowGrand = False .NullString = "0" End With ' Calc the pivot table PT.ManualUpdate = False PT.ManualUpdate = True End Sub

    Read the article
