Search Results

Search found 7127 results on 286 pages for 'calculated columns'.


  • How to differentiate between two similar fields in Linq Join tables

    - by Azhar
    How do you differentiate between two fields with the same name in a "select new" projection, e.g. c.Description and lt.Description?

        DataTable lDt = new DataTable();
        try
        {
            lDt.Columns.Add(new DataColumn("AreaTypeID", typeof(Int32)));
            lDt.Columns.Add(new DataColumn("CategoryRef", typeof(Int32)));
            lDt.Columns.Add(new DataColumn("Description", typeof(String)));
            lDt.Columns.Add(new DataColumn("CatDescription", typeof(String)));
            EzEagleDBDataContext lDc = new EzEagleDBDataContext();
            var lAreaType = (from lt in lDc.tbl_AreaTypes
                             join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                             where lt.AreaTypeID == pTypeId
                             select new { lt.AreaTypeID, lt.Description, lt.CategoryRef, c.Description }).ToArray();
            for (int j = 0; j < lAreaType.Count; j++)
            {
                DataRow dr = lDt.NewRow();
                dr["AreaTypeID"] = lAreaType[j].LandmarkTypeID;
                dr["CategoryRef"] = lAreaType[j].CategoryRef;
                dr["Description"] = lAreaType[j].Description;
                dr["CatDescription"] = lAreaType[j].;
                lDt.Rows.Add(dr);
            }
        }
        catch (Exception ex) { }
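
    One way to resolve the name collision (not shown in the question) is to give at least one of the conflicting members an explicit alias in the anonymous type; a minimal C# sketch, assuming the same DataContext and tables as above:

        // Alias the second Description so both values are reachable without ambiguity.
        var lAreaType = (from lt in lDc.tbl_AreaTypes
                         join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                         where lt.AreaTypeID == pTypeId
                         select new
                         {
                             lt.AreaTypeID,
                             lt.CategoryRef,
                             Description = lt.Description,      // area type description
                             CatDescription = c.Description     // category description
                         }).ToArray();

        // The DataRow can then be filled without the dangling member access:
        // dr["Description"]    = lAreaType[j].Description;
        // dr["CatDescription"] = lAreaType[j].CatDescription;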


  • How to insert and call by row and column into sqlite3 python

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do 3 things: insert or update a value at a specific row and column, and select a value for each row and column?

        import sqlite3
        con = sqlite3.connect('simple.db')
        c = con.cursor()
        c.execute('''create table simple (links text)''')
        con.commit()

        dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}}
        ucols = {}

        ## my current thoughts are to collect all row values and all column values from dic and
        ## populate the table rows and columns accordingly; how to call by row and column I
        ## haven't figured out yet

        ## populate rows in first column
        for row in dic:
            print row
            c.execute("""insert into simple ('links') values ('%s')""" % row)
            con.commit()

        ## unique columns
        for row in dic:
            print row
            for col in dic[row]:
                print col
                ucols[col] = dic[row][col]

        ## populate columns
        for col in ucols:
            print col
            c.execute("alter table simple add column '%s' 'float'" % col)
            con.commit()

        # functions needed

        ## insert values into sql by row x and column y? how to do this, e.g. x1 and y2 should put in 0.0
        ## I tried as follows, didn't work
        for row in dic:
            for col in dic[row]:
                val = dic[row][col]
                c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'""" % (col, val, row))
                con.commit()

        ## update value at a specific row x and column y?
        ## select a value at a specific row x and column y?


  • Convert a nested array into a flat array with PHP

    - by Ben Fransen
    Hello all, I'm trying to create a generic database mapping class with PHP. Collecting the data through my functions is going well, but as expected I'm retrieving a nested set. A print_r of the received array looks like:

        Array
        (
            [table] => Session
            [columns] => Array
                (
                    [0] => `Session`.`ID` AS `Session_ID`
                    [1] => `Session`.`User` AS `Session_User`
                    [2] => `Session`.`SessionID` AS `Session_SessionID`
                    [3] => `Session`.`ExpiresAt` AS `Session_ExpiresAt`
                    [4] => `Session`.`CreatedAt` AS `Session_CreatedAt`
                    [5] => `Session`.`LastActivity` AS `Session_LastActivity`
                    [6] => `Session`.`ClientIP` AS `Session_ClientIP`
                )
            [0] => Array
                (
                    [table] => User
                    [columns] => Array
                        (
                            [0] => `User`.`ID` AS `User_ID`
                            [1] => `User`.`UserName` AS `User_UserName`
                            [2] => `User`.`Password` AS `User_Password`
                            [3] => `User`.`FullName` AS `User_FullName`
                            [4] => `User`.`Address` AS `User_Address`
                        )
                    [0] => Array
                        (
                            [table] => Address
                            [columns] => Array
                                (
                                    [0] => `Address`.`ID` AS `Address_ID`
                                    [1] => `Address`.`UserID` AS `Address_UserID`
                                    [2] => `Address`.`Street` AS `Address_Street`
                                    [3] => `Address`.`City` AS `Address_City`
                                )
                        )
                )
        )

    To simplify things I want to turn this nested array into a flat array, so I can easily loop through it and use the 'columns' key to create my SELECT query. I've been struggling with this for a while now and figured maybe some users at SO can help me out. I've tried multiple things with recursion, all without luck so far... Any help is much appreciated! Thanks in advance, Ben Fransen


  • How can I add the column data type after adding the column headers to my datatable?

    - by Kevin
    Using the code below (from a console app I've cobbled together), I add seven columns to my datatable. Once this is done, how can I set the data type for each column? For instance, column 1 of the datatable will have the header "ItemNum" and I want to set it to be an Int. I've looked at some examples on the 'net, but most all of them show creating the column header and column data type at once, like this:

        loadDT.Columns.Add("ItemNum", typeof(Int));

    At this point in my program, the column already has a name. I just want to do something like this (not actual code):

        loadDT.Column[1].ChangeType(typeof(int));

    Here's my code so far (that gives the columns their names):

        // get column headings for datatable by reading first line of csv file.
        StreamReader sr = new StreamReader(@"c:\load_forecast.csv");
        headers = sr.ReadLine().Split(',');
        foreach (string header in headers)
        {
            loadDT.Columns.Add(header);
        }

    Obviously, I'm pretty new at this, but trying very hard to learn. Can someone point me in the right direction? Thanks!
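
    One common approach (not from the original post): a DataColumn's DataType can be assigned directly as long as the table holds no rows yet, so the types can be set right after the headers are added. A minimal C# sketch; any column name other than the CSV headers is an assumption:

        // While loadDT is still empty, the column types can simply be changed in place.
        loadDT.Columns["ItemNum"].DataType = typeof(int);
        // or by position:
        loadDT.Columns[1].DataType = typeof(int);

        // If rows had already been loaded, one workaround is to clone the schema,
        // retype the clone's columns, and import the rows (values must be convertible).
        DataTable typedDT = loadDT.Clone();
        typedDT.Columns["ItemNum"].DataType = typeof(int);
        foreach (DataRow row in loadDT.Rows)
        {
            typedDT.ImportRow(row);
        }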


  • .NET: How to know when serialization is completed?

    - by Ian Boyd
    When I construct my control (which inherits DataGrid), I add specific rows and columns. This works great at design time. Unfortunately, at runtime I add my rows and columns in the same constructor, but then the DataGrid is serialized (after the constructor runs), adding more rows and columns. After serialization is complete, I need to clear everything and re-initialize the rows and columns. Is there a protected method that I can override to know when the control is done serializing? Of course, I'd prefer not to have to do the work in the constructor, throw it away, and do it again after (potential) serialization. Is there a preferred event that is the equivalent of "set yourself up now", so that it is called once whether I'm serialized or not?

    The serialization I speak of comes from the InitializeComponent() method in the form's code-behind file:

        #region Windows Form Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            ...
        }

    It would have been perfect if InitializeComponent were a virtual method defined by Control; then I could just override it and perform my processing after calling base:

        protected override void InitializeComponent()
        {
            base.InitializeComponent();
            InitializeMe();
        }

    But it's not an ancestor method; it's declared only in the code-behind file. I notice that InitializeComponent calls SuspendLayout and ResumeLayout on various Controls. I thought I could override ResumeLayout and perform my initialization then:

        public override void ResumeLayout()
        {
            base.ResumeLayout();
            InitializeMe();
        }

    But ResumeLayout is not virtual, so that's out. Any more ideas? I can't be the first person to create a custom control.
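
    One candidate hook (not from the original question, and a suggestion rather than the definitive answer): Control.OnCreateControl is protected virtual and runs after the designer-generated InitializeComponent has applied its properties, when the control is first created, so it can act as a "set yourself up now" point. A minimal C# sketch; InitializeMe() stands in for the poster's own setup method:

        public class MyDataGrid : DataGrid
        {
            private bool initialized;

            protected override void OnCreateControl()
            {
                base.OnCreateControl();
                if (!initialized)
                {
                    initialized = true;
                    InitializeMe();   // rebuild the rows and columns once, after any designer setup
                }
            }

            private void InitializeMe()
            {
                // re-create rows and columns here
            }
        }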


  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB. Now the vendor has recommended doing the following "to improve performance":

        1. Drop the PK and clustered index
        2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
        3. Recreate the PK, with a NON-CLUSTERED index
        4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do, and I have a number of concerns. By dropping the PK and indexes, you will be creating a heap with 478 million rows of data, and then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach? I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled? I am simply not able to understand what kind of "performance improvement" this change will provide. I think that this will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj


  • Binding a Button to a GridView

    - by Tyler
    This is a pretty simple question; I'm just not sure how to do it exactly. I would like to bind a Button or perhaps ImageButton to a GridView in ASP.NET/C#. Currently, the GridView has two columns and is bound to a DataTable with two columns. I want to add a third column to the GridView, which will include the Button. I know GridView has ButtonField, but I'm not too sure how to go about using it to do what I want. I want to dynamically generate these Buttons and add them to the GridView. Here is how my GridView looks right now:

        <asp:GridView ID="GridView1" Runat="server">
            <Columns>
                <asp:HyperLinkField HeaderText="Display Name"
                    DataNavigateUrlFields="DISPNAME"
                    DataNavigateUrlFormatString="ViewItem.aspx"
                    DataTextField="DISPNAME">
                    <ItemStyle Width="70%" />
                </asp:HyperLinkField>
                <asp:BoundField DataField="TypeDisp" HeaderText="Type">
                    <ItemStyle Width="20%" />
                </asp:BoundField>
            </Columns>
        </asp:GridView>
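
    A minimal C# sketch of one way to add the ButtonField from code-behind (not from the original post; the command name "DoAction" and the Page_Init placement are assumptions, and a dynamically added column has to be re-added on every request):

        protected void Page_Init(object sender, EventArgs e)
        {
            ButtonField actionField = new ButtonField
            {
                ButtonType = ButtonType.Button,
                Text = "Go",
                CommandName = "DoAction",
                HeaderText = ""
            };
            GridView1.Columns.Add(actionField);
            GridView1.RowCommand += GridView1_RowCommand;
        }

        protected void GridView1_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName == "DoAction")
            {
                // For a ButtonField, CommandArgument carries the row index of the clicked row.
                int rowIndex = Convert.ToInt32(e.CommandArgument);
                GridViewRow row = GridView1.Rows[rowIndex];
                // act on the clicked row here
            }
        }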


  • R selecting duplicate rows

    - by Matt
    Okay, I'm fairly new to R and I've tried to search the documentation for what I need to do, but here is the problem. I have a data.frame called heeds.data in the following form (some columns omitted for simplicity):

        eval.num, eval.count, ... fitness, fitness.mean,
        green.h.0, green.v.0, offset.0, green.h.1, green.v.1, ... green.h.7, green.v.7, offset.7 ...

    And I have selected a row meeting the following criteria:

        best.fitness <- min(heeds.data$fitness.mean[heeds.data$eval.count = 10])
        best.row <- heeds.data[heeds.data$fitness.mean == best.fitness]

    Now, what I want are all of the other rows that have the columns green.h.0 to offset.7 (a contiguous section of columns) equal to the best.row. Basically I'm looking for rows that have some of the conditions the same as the "best" row. I thought I could just do this:

        heeds.best <- heeds.data$fitness[ heeds.data$green.h.0 == best.row$green.h.0 & ... ]

    But with 24 columns it seems like a stupid method. Looking for something a bit simpler with less manual typing. Thanks!


  • Need help building a stored procedure that takes rows from one table into another

    - by MyHeadHurts
    Alright, I built this stored procedure to take the columns from a staging table and copy them into my other table; if these four columns are duplicates it won't insert the rows, and that works fine. However, what I want to do is: if only the tour, taskname and deptdate are the same, then update the rest of the information, and if all four columns are the same, don't insert.

        INSERT INTO dashboardtasks1
        SELECT [tour], [taskname], [deptdate], [tasktype], [desc], [duedate],
               [compdate], [comments], [agent], [compby], [graceperiod]
        FROM staggingtasks
        WHERE NOT EXISTS (SELECT *
                          FROM dashboardtasks1
                          WHERE (staggingtasks.tour = dashboardtasks1.tour and
                                 staggingtasks.taskname = dashboardtasks1.taskname and
                                 staggingtasks.deptdate = dashboardtasks1.deptdate and
                                 staggingtasks.duedate = dashboardtasks1.duedate))

    I saw something like this:

        INSERT INTO table (a,b,c) VALUES (1,2,3)
        ON DUPLICATE KEY UPDATE c=c+1;

        UPDATE table SET c=c+1 WHERE a=1;

    But how could I do it so that if my stated 3 columns are the same, it updates? Or is there a way to do this with an if statement and use 2 different queries? How would my if statement work: would it check whether the row exists in the table I am uploading to and then run the insert statement? Or what if I did something like this:

        IF EXISTS (SELECT * FROM dashboardtasks
                   WHERE staggingtasks.tour = dashboardtasks.tour and
                         staggingtasks.taskname = dashboardtasks.taskname and
                         staggingtasks.deptdate = dashboardtasks.deptdate)
        begin
            UPDATE [dashboardtasks]
            SET [tour] = staggingtasks.tour,
                [taskname] = staggingtasks.taskname,
                [deptdate] = staggingtasks.deptdate,
                [tasktype] = staggingtasks.tasktype,
                [desc] = staggingtasks.desc,
                [duedate] = staggingtasks.duedate,
                [compdate] = staggingtasks.compdate,
                [comments] = staggingtasks.comments,
                [agent] = staggingtasks.agent,
                [compby] = staggingtasks.compby,
                [graceperiod] = staggingtasks.graceperiod
        end
        else EXISTS (SELECT * FROM dashboardtasks
                     WHERE staggingtasks.tour = dashboardtasks.tour and
                           staggingtasks.taskname = dashboardtasks.taskname and
                           staggingtasks.deptdate = dashboardtasks.deptdate and
                           staggingtasks.duedate = dashboardtasks.duedate)
        begin
            INSERT INTO dashboardtasks1
            SELECT [tour], [taskname], [deptdate], [tasktype], [desc], [duedate],
                   [compdate], [comments], [agent], [compby], [graceperiod]
            FROM staggingtasks
            WHERE NOT EXISTS (SELECT *
                              FROM dashboardtasks1
                              WHERE (staggingtasks.tour = dashboardtasks1.tour and
                                     staggingtasks.taskname = dashboardtasks1.taskname and
                                     staggingtasks.deptdate = dashboardtasks1.deptdate and
                                     staggingtasks.duedate = dashboardtasks1.duedate))
        end
        end


  • Dependency Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCOs to represent the tables I need, but I'm starting to think the mapping will get overly complicated, as I only need some of the columns in some of the tables (thanks to Steven for the clarification in the comments). So I thought I'd give Massive ORM a try. I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use Dependency Injection. This is part of what I have for Massive:

        public interface ISession
        {
            DynamicModel CreateTable<T>() where T : DynamicModel, new();
            dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new();
            dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new();
            // Some more methods supported by Massive here
        }

    And here's my implementation of the above interface:

        public class MassiveSession : ISession
        {
            public DynamicModel CreateTable<T>() where T : DynamicModel, new()
            {
                return new T();
            }

            public dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(where, args);
            }

            public dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new()
            {
                var table = CreateTable<T>();
                return table.Single(key, columns);
            }
        }

    The problem comes with the First(), Last() and FindBy() methods. Massive is built around a dynamic object called DynamicModel and doesn't define any of those methods; it handles them through a TryInvokeMember() implementation overridden from DynamicObject instead:

        public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
        {
        }

    I'm at a loss on how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
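
    A minimal C# sketch of one possible compromise (not from the original question, and not Massive's documented API surface): have the session hand back the model typed as dynamic, so First(), Last() and FindBy...() calls are bound at runtime through TryInvokeMember while the rest of the code still depends only on ISession:

        public interface ISession
        {
            dynamic Table<T>() where T : DynamicModel, new();
        }

        public class MassiveSession : ISession
        {
            // Returning 'dynamic' lets callers invoke the members that only exist
            // through DynamicModel.TryInvokeMember (First, Last, FindBy..., etc.).
            public dynamic Table<T>() where T : DynamicModel, new()
            {
                return new T();
            }
        }

        // Usage sketch (argument names are hypothetical and depend on Massive's conventions):
        //   dynamic products = session.Table<Products>();
        //   var first = products.First();

    The trade-off is that those particular calls lose compile-time checking; the decoupling from data access is kept, but the dynamic surface leaks through the interface.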


  • Latex multicols. Can I group content so it won't split over cols and/or suggest colbreaks?

    - by valadil
    Hi. I'm trying to learn LaTeX. I've been googling this one for a couple of days, but I don't speak enough LaTeX to be able to search for it effectively, and what documentation I have found is either too simple or goes way over my head (http://www.uoregon.edu/~dspivak/files/multicol.pdf).

    I have a document using the multicol package. (I'm actually using multicols* so that the first column fills before the second begins instead of trying to balance them, but I don't think that's relevant here.) The columns output nicely, but I want to be able to indicate that some content won't be broken up into different columns. For instance:

        aaaaaaaa bbbbbbb
        aaaaaaaa bbbbbbb
        aaaaaaaa ccccccc
        bbbbbbbb ccccccc

    That poor attempt at ASCII-art columns is what's happening. I'd like to indicate that the b block is a whole unit that shouldn't be broken up into different columns. Since it doesn't fit under the a block, the entirety of the b block should be moved to the second column. Should b be wrapped in something? Is there a block/float/section/box/minipage/paragraph structure I can use? Something specific to multicol? Alternatively, is there a way that I can suggest a column break? I'm thinking of something like \- that suggests a hyphenated line break if it's convenient, but this would go between blocks. Thanks!


  • Printing a DataTable to textbox/textfile in .NET

    - by neodymium
    Is there a predefined or "easy" method of writing a DataTable to a text file or TextBox control (with a monospace font), such as DataTable.Print()?

        Column1| Column2|
        --------|--------|
             v1|      v2|
             v3|      v4|
             v5|      v6|

    Edit: Here's an initial version (VB.NET), in case anyone is interested or wants to build their own:

        Public Function BuildTable(ByVal dt As DataTable) As String
            Dim result As New StringBuilder
            Dim widths As New List(Of Integer)
            Const ColumnSeparator As Char = "|"c
            Const HeadingUnderline As Char = "-"c

            ' determine width of each column based on widest of either column heading or values in that column
            For Each col As DataColumn In dt.Columns
                Dim colWidth As Integer = Integer.MinValue
                For Each row As DataRow In dt.Rows
                    Dim len As Integer = row(col.ColumnName).ToString.Length
                    If len > colWidth Then
                        colWidth = len
                    End If
                Next
                widths.Add(CInt(IIf(colWidth < col.ColumnName.Length, col.ColumnName.Length + 1, colWidth + 1)))
            Next

            ' write column headers
            For Each col As DataColumn In dt.Columns
                result.Append(col.ColumnName.PadLeft(widths(col.Ordinal)))
                result.Append(ColumnSeparator)
            Next
            result.AppendLine()

            ' write heading underline
            For Each col As DataColumn In dt.Columns
                Dim horizontal As String = New String(HeadingUnderline, widths(col.Ordinal))
                result.Append(horizontal.PadLeft(widths(col.Ordinal)))
                result.Append(ColumnSeparator)
            Next
            result.AppendLine()

            ' write each row
            For Each row As DataRow In dt.Rows
                For Each col As DataColumn In dt.Columns
                    result.Append(row(col.ColumnName).ToString.PadLeft(widths(col.Ordinal)))
                    result.Append(ColumnSeparator)
                Next
                result.AppendLine()
            Next

            Return result.ToString()
        End Function


  • Vertically Merge Multiple Tables in MySQL by Joint Primary Key

    - by world
    Hello, I'll attempt to make my question as clear as possible. I'm fairly inexperienced with SQL and only know the really basic queries. In order to have a better idea I'd been reading the MySQL manual for the past few days, but I couldn't really find a concrete solution, and my needs are quite specific.

    I've got 3 MySQL MyISAM tables: table1, table2 and table3. Each table has an ID column (ID, ID2, ID3 respectively) and different data columns. For example, table1 has [ID, Name, Birthday, Status, ...] columns, table2 has [ID2, Country, Zip, ...], table3 has [ID3, Source, Phone, ...]; you get the idea. The ID, ID2, ID3 columns are common to all three tables: if there's an ID value in table1 it will also appear in table2 and table3. The number of rows in these tables is identical, about 10m rows in each table.

    What I'd like to do is create a new table that contains (most of) the columns of all three tables and merge them into it. The dates, for instance, must be converted, because right now they're in VARCHAR YYYYMMDD format. Reading the MySQL manual I figured STR_TO_DATE() would do the job, but I don't know how to write the query itself in the first place, so I have no idea how to integrate the date conversion. So basically, after I create the new table (which I do know how to do), how can I merge the three tables into it, integrating the date conversion into the query?


  • Sorting and pagination do not work after I build a custom keyword search that is built using relations

    - by Roland
    I recently started to build a custom keyword search using Yii 1.1.x. The search works 100%, but as soon as I sort the columns and use pagination in the admin view, the search gets lost and all results are shown. In other words, it's not filtering so that only the search results show; somehow it resets. In my controller my code looks as follows:

        $builder=Messages::model()->getCommandBuilder();

        //Table1 Columns
        $columns1=array('0'=>'id','1'=>'to','2'=>'from','3'=>'message','4'=>'error_code','5'=>'date_send');
        //Table 2 Columns
        $columns2=array('0'=>'username');

        //building the Keywords
        $keywords = explode(' ',$_REQUEST['search']);
        $count=0;
        foreach($keywords as $key){
            $kw[$count]=$key;
            ++$count;
        }
        $keywords=$kw;

        $condition1=$builder->createSearchCondition(Messages::model()->tableName(),$columns1,$keywords,$prefix='t.');
        $condition2=$builder->createSearchCondition(Users::model()->tableName(),$columns2,$keywords);
        $condition = substr($condition1,0,-1) . " OR ".substr($condition2,1);
        $condition = str_replace('AND','OR',$condition);

        $dataProvider=new CActiveDataProvider('Messages', array(
            'pagination'=>array(
                'pageSize'=>self::PAGE_SIZE,
            ),
            'criteria'=>array(
                'with'=>'users',
                'together'=>true,
                'joinType'=>'LEFT JOIN',
                'condition'=>$condition,
            ),
            'sort'=>$sort,
        ));

        $this->render('admin',array(
            'dataProvider'=>$dataProvider,'keywords'=>implode(' ',$keywords),'sort'=>$sort
        ));

    and my view looks like this:

        $this->widget('zii.widgets.grid.CGridView', array(
            'dataProvider'=>$dataProvider,
            'columns'=>array(
                'id',
                array(
                    'name'=>'user_id',
                    'value'=>'CHtml::encode(Users::model()->getReseller($data->user_id))',
                    'visible'=>Yii::app()->user->checkAccess('poweradministrator')
                ),
                'to',
                'from',
                'message',
                /* 'date_send', */
                array(
                    'name'=>'error_code',
                    'value'=>'CHtml::encode($data->status($data->error_code))',
                ),
                array(
                    'class'=>'CButtonColumn',
                    'template'=>'{view} {delete}',
                ),
            ),
        ));

    I really do not know what to do anymore since I'm terribly lost; any help will be highly appreciated.


  • Pro ASP.Net MVC 3 Entity Framework Sports Store tutorial

    - by gary7
    Following the tutorial in the book "Pro ASP.Net MVC 3 Entity Framework", Chapter 9, Image Uploads section: it asks that the Product class be updated with two new columns, public byte ImageData and public string ImageType. It also directs that the database be updated with these two columns via the Server Explorer. After these updates, the discussion directs that the Entity Framework conceptual model be updated via the SportsStore.EDMX file. This file does not exist in the source code for the project and was not used in the project to begin with: obvious errata for the book. Adding the ADO.NET Entity Data Model to the project then overrides the EFProduct repository (the conceptual model used throughout the project), which inherits from the interface IProductsRepository, and results in errors within the mapping. If the project is debugged after the columns are added, an error is thrown related to the newly added columns. Has anyone resolved this issue in the project? I haven't found any solutions so far. Thanks!


  • How can I check if a TTTabItem gets selected?

    - by schoash
    I have several TTTabGrids in my view and I am now stuck with the problem that I can't figure out how to detect when a TTTabItem gets touched. If someone selects an item in one of the TTTabGrids, I should update some labels. Can someone tell me a way to detect when the selection in a TTTabGrid gets changed? My code looks like this:

        @interface MyTTTabGrid : TTTabGrid
        @end

        @implementation MyTTTabGrid

        - (id)initWithFrame:(CGRect)frame columns:(NSInteger)columns {
            if (self = [super initWithFrame:frame]) {
                self.style = TTSTYLE(tabGrid);
                _columnCount = columns;
            }
            return self;
        }

        @end

        - (void)viewDidLoad {
            [super viewDidLoad];
            _tabBarHours = [[MyTTTabGrid alloc] initWithFrame:CGRectMake(10, _tabBarZone.bottom+10, 300, 0) columns:7];
            _tabBarHours.backgroundColor = [UIColor clearColor];
            NSMutableArray *tmpHours = [[NSMutableArray alloc] init];
            int i = 6;
            while (i < 20) {
                [tmpHours addObject:[[[TTTabItem alloc] initWithTitle:[NSString stringWithFormat:@"%d", i]] autorelease]];
                i++;
            }
            _tabBarHours.tabItems = tmpHours;
            [_tabBarHours sizeToFit];
            [self.view addSubview:_tabBarHours];
            [tmpHours release];
        }


  • Populating and Using Dynamic Classes in C#/.NET 4.0

    - by Bob
    In our application we're considering using dynamically generated classes to hold a lot of our data. The reason for doing this is that we have customers with tables that have different structures. So you could have a customer table called "DOG" (just making this up) that contains the columns "DOGID", "DOGNAME", "DOGTYPE", etc. Customer #2 could have the same table "DOG" with the columns "DOGID", "DOG_FIRST_NAME", "DOG_LAST_NAME", "DOG_BREED", and so on. We can't create classes for these at compile time, as the customer can change the table schema at any time.

    At the moment I have code that can generate a "DOG" class at run time using reflection. What I'm trying to figure out is how to populate this class from a DataTable (or some other .NET mechanism) without extreme performance penalties. We have one table that contains ~20 columns and ~50k rows. Doing a foreach over all of the rows and columns to create the collection takes about 1 minute, which is a little too long.

    Am I trying to come up with a solution that's too complex, or am I on the right track? Has anyone else experienced a problem like this? Creating dynamic classes was the solution that a developer at Microsoft proposed. If we can just populate this collection and use it efficiently, I think it could work.
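
    A hedged C# sketch of one common way to cut the per-cell cost (not from the original post): resolve each column's PropertyInfo once and reuse those handles for every row, reading cells by ordinal, instead of doing a reflection lookup 20 x 50k times. The shape of the generated type is an assumption (one writable property per column, with matching types):

        // using System; using System.Collections; using System.Collections.Generic;
        // using System.Data; using System.Reflection;
        static IList PopulateFast(Type generatedType, DataTable table)
        {
            // Resolve reflection metadata once per column, not once per cell.
            var setters = new PropertyInfo[table.Columns.Count];
            for (int c = 0; c < table.Columns.Count; c++)
            {
                setters[c] = generatedType.GetProperty(table.Columns[c].ColumnName);
            }

            var result = new List<object>(table.Rows.Count);
            foreach (DataRow row in table.Rows)
            {
                object item = Activator.CreateInstance(generatedType);
                for (int c = 0; c < setters.Length; c++)
                {
                    object value = row[c];               // ordinal access avoids per-cell name lookups
                    if (setters[c] != null && value != DBNull.Value)
                    {
                        setters[c].SetValue(item, value, null);
                    }
                }
                result.Add(item);
            }
            return result;
        }

    If PropertyInfo.SetValue is still too slow at this volume, the next step in the same direction is to compile setter delegates (for example with expression trees) once per column and reuse them for every row.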


  • Dataset holds a table called "Table", not the table I pass in?

    - by dotnetdev
    Hi, I have the code below:

        string SQL = "select * from " + TableName;

        using (DS = new DataSet())
        using (SqlDataAdapter adapter = new SqlDataAdapter())
        using (SqlConnection sqlconn = new SqlConnection(connectionStringBuilder.ToString()))
        using (SqlCommand objCommand = new SqlCommand(SQL, sqlconn))
        {
            sqlconn.Open();
            adapter.SelectCommand = objCommand;
            adapter.Fill(DS);
        }
        System.Windows.Forms.MessageBox.Show(DS.Tables[0].TableName);
        return DS;

    However, every time I run this code, the dataset (DS) is filled with one table called "Table". It does not take on the table name I pass in as the parameter TableName, and this parameter does not get mutated, so I don't know where the name "Table" comes from. I'd expect the table name to be the same as the TableName parameter I pass in. Any idea why this is not so?

    EDIT: Important fact: this code needs to return a DataSet because I use the DataRelation object in another method, which is dependent on this, and without using a DataSet that method throws an exception. The code for that method is:

        DataRelation PartToIntersection = new DataRelation("XYZ",
            this.LoadDataToTable(tableName).Tables[tableName].Columns[0],          // Treating the PartStat table as the parent - .N
            this.LoadDataToTable("PartProducts").Tables["PartProducts"].Columns[0]); // 1

        // PartsProducts (intersection) to ProductMaterial
        DataRelation ProductMaterialToIntersection = new DataRelation("",
            ds.Tables["ProductMaterial"].Columns[0],
            ds.Tables["PartsProducts"].Columns[1]);

    Thanks
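
    A short note on the likely cause (not from the original post): DataAdapter.Fill names the first table "Table" by default unless a source table name is supplied. A minimal C# sketch of the two usual fixes:

        // Option 1: use the Fill overload that takes a table name,
        // so the DataSet's table is created with the expected name.
        adapter.Fill(DS, TableName);

        // Option 2: rename the auto-named table after filling.
        adapter.Fill(DS);
        DS.Tables[0].TableName = TableName;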


  • Symfony generating database from model

    - by Sergej Jevsejev
    Hello, I am having trouble generating a simple database from the model. I am using Doctrine on Symfony 1.4.4 and MySQL Workbench 5.2.16 with Doctrine Export 0.4.2dev. My ER model is here: http://img708.imageshack.us/img708/1716/tmg.png

    The generated YAML file:

        ---
        detect_relations: true
        options:
          collate: utf8_unicode_ci
          charset: utf8
          type: InnoDB

        Course:
          columns:
            id:
              type: integer(4)
              primary: true
              notnull: true
              autoincrement: true
            name:
              type: string(255)
              notnull: true
            keywords:
              type: string(255)
              notnull: true
            summary:
              type: clob(65535)
              notnull: true

        Lecture:
          columns:
            id:
              type: integer(4)
              primary: true
              notnull: true
              autoincrement: true
            course_id:
              type: integer(4)
              primary: true
              notnull: true
            name:
              type: string(255)
              notnull: true
            description:
              type: string(255)
              notnull: true
            url:
              type: string(255)
          relations:
            Course:
              class: Course
              local: course_id
              foreign: id
              foreignAlias: Lectures
              foreignType: many
              owningSide: true

        User:
          columns:
            id:
              type: integer(4)
              primary: true
              unique: true
              notnull: true
              autoincrement: true
            firstName:
              type: string(255)
              notnull: true
            lastName:
              type: string(255)
              notnull: true
            email:
              type: string(255)
              unique: true
              notnull: true
            designation:
              type: string(1024)
            personalHeadline:
              type: string(1024)
            shortBio:
              type: clob(65535)

        UserCourse:
          tableName: user_has_course
          columns:
            user_id:
              type: integer(4)
              primary: true
              notnull: true
            course_id:
              type: integer(4)
              primary: true
              notnull: true
          relations:
            User:
              class: User
              local: user_id
              foreign: id
              foreignAlias: UserCourses
              foreignType: many
              owningSide: true
            Course:
              class: Course
              local: course_id
              foreign: id
              foreignAlias: UserCourses
              foreignType: many
              owningSide: true

    And no matter what I try, this error occurs after symfony doctrine:build --all --no-confirmation:

        SQLSTATE[42000]: Syntax error or access violation: 1072 Key column 'user_userid' doesn't exist in table.
        Failing Query: "ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id
        FOREIGN KEY (user_userid) REFERENCES user(id)".
        Failing Query: ALTER TABLE user_has_course ADD CONSTRAINT user_has_course_user_userid_user_id
        FOREIGN KEY (user_userid) REFERENCES user(id)

    Currently I am studying Symfony and am stuck with this error. Please help.


  • How to insert and call by row and column into sqlite3 python, great tutorial problem.

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do 3 things: insert or update a value at a specific row and column, and select a value for each row and column?

        import sqlite3
        con = sqlite3.connect('simple.db')
        c = con.cursor()
        c.execute('''create table simple (links text)''')
        con.commit()

        dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}}
        ucols = {}

        ## my current thoughts are to collect all row values and all column values from dic and
        ## populate the table rows and columns accordingly; how to call by row and column I
        ## haven't figured out yet

        ## populate rows in first column
        for row in dic:
            print row
            c.execute("""insert into simple ('links') values ('%s')""" % row)
            con.commit()

        ## unique columns
        for row in dic:
            print row
            for col in dic[row]:
                print col
                ucols[col] = dic[row][col]

        ## populate columns
        for col in ucols:
            print col
            c.execute("alter table simple add column '%s' 'float'" % col)
            con.commit()

        # functions needed

        ## insert values into sql by row x and column y? how to do this, e.g. x1 and y2 should put in 0.0
        ## I tried as follows, didn't work
        for row in dic:
            for col in dic[row]:
                val = dic[row][col]
                c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'""" % (col, val, row))
                con.commit()

        ## update value at a specific row x and column y?
        ## select a value at a specific row x and column y?


  • I read 3 pages of a jQuery book and here's my reaction and question

    - by George
    My jQuery reaction to the language's flexible "selectors" is probably rooted in this experience. I once managed a project where a developer constructed a web page that let users provide very flexible search parameters for a search screen, using dynamic SQL string building based on the user's specified search parameters. The resulting queries were usually very complicated and involved joins to many tables.

    One of the options the user had was to choose from one of 3 options. Depending on the user's choice for this option, the resulting SQL would need to query a different set of database columns. For example, if option "A" were selected, the database columns queried would be prefixed with "A_"; if option "B" were selected, the database columns queried would be prefixed with "B_"; and so on. The developer chose to write the complete SQL assuming that the user selected, for example, option "A", and therefore first constructed SQL of this type:

        SQL = "SELECT A_COL1, A_COL2, A_COL3 FROM TABLE ..."

    and then, after constructing one of a million possible variations on the Query From Hell, did something like this:

        If UserOption = "B" then
            SQL = SQL.Replace("A_", "B_")  'replace everywhere
        End if

    He insisted that this was the easiest way to code it, and while I understood that, I was concerned about maintenance of this code. You see, this worked for a while, but as the search options grew and the database columns evolved, the various "replace small substring with another small substring" operations had unexpected consequences when applied to an evolving database and new search options.

    My feeling is that code should be written as much as possible such that you can add to it without fear of breaking what is already there. I feel a better approach, though a bit more work, would have been to write a function to return the appropriate target column based on a common set name and the user-selected option.

    OK, so what does this have to do with jQuery selectors? Are the ultra-flexible jQuery selectors kind of like performing a "replace all" on a SQL string? Handy as hell, but potentially creating a maintenance nightmare?
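
    The alternative is only described in prose above; a minimal C# sketch of the idea, with hypothetical names (the post only names the "A_" and "B_" prefixes, so the third option is an assumption):

        // Map a logical column name plus the user's option to a concrete column,
        // instead of string-replacing an already-built query.
        static string TargetColumn(string logicalName, string userOption)
        {
            string prefix;
            switch (userOption)
            {
                case "A": prefix = "A_"; break;
                case "B": prefix = "B_"; break;
                case "C": prefix = "C_"; break;   // hypothetical third option
                default: throw new ArgumentException("Unknown option: " + userOption);
            }
            return prefix + logicalName;
        }

        // Usage: "SELECT " + TargetColumn("COL1", userOption) + ", " + TargetColumn("COL2", userOption) + " FROM ..."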


  • Adding a Way To preserve A Comma In A CSV To DataTable Function

    - by Nick LaMarca
    I have a function that converts a .csv file to a DataTable. One of the columns I am converting is a field of names that have a comma in them, i.e. "Doe, John"; when converting, the function treats this as 2 separate fields because of the comma. I need the DataTable to hold this as one field, Doe, John.

        Function CSV2DataTable(ByVal filename As String, ByVal sepChar As String) As DataTable
            Dim reader As System.IO.StreamReader
            Dim table As New DataTable
            Dim colAdded As Boolean = False
            Try
                ''# open a reader for the input file, and read line by line
                reader = New System.IO.StreamReader(filename)
                Do While reader.Peek() >= 0
                    ''# read a line and split it into tokens, divided by the specified
                    ''# separators
                    Dim tokens As String() = System.Text.RegularExpressions.Regex.Split _
                        (reader.ReadLine(), sepChar)
                    ''# add the columns if this is the first line
                    If Not colAdded Then
                        For Each token As String In tokens
                            table.Columns.Add(token)
                        Next
                        colAdded = True
                    Else
                        ''# create a new empty row
                        Dim row As DataRow = table.NewRow()
                        ''# fill the new row with the token extracted from the current
                        ''# line
                        For i As Integer = 0 To table.Columns.Count - 1
                            row(i) = tokens(i)
                        Next
                        ''# add the row to the DataTable
                        table.Rows.Add(row)
                    End If
                Loop
                Return table
            Finally
                If Not reader Is Nothing Then
                    reader.Close()
                End If
            End Try
        End Function
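
    One common way to keep a quoted "Doe, John" in a single field (not from the original post) is to parse with TextFieldParser, which understands quoted fields; a C# sketch of the same function, assuming a reference to the Microsoft.VisualBasic assembly:

        // using System.Data;
        // using Microsoft.VisualBasic.FileIO;   (Microsoft.VisualBasic assembly)
        static DataTable Csv2DataTable(string filename, string sepChar)
        {
            var table = new DataTable();
            using (var parser = new TextFieldParser(filename))
            {
                parser.TextFieldType = FieldType.Delimited;
                parser.SetDelimiters(sepChar);
                parser.HasFieldsEnclosedInQuotes = true;   // "Doe, John" stays one field

                bool colsAdded = false;
                while (!parser.EndOfData)
                {
                    string[] fields = parser.ReadFields();
                    if (!colsAdded)
                    {
                        foreach (string field in fields)
                            table.Columns.Add(field);
                        colsAdded = true;
                    }
                    else
                    {
                        table.Rows.Add(fields);
                    }
                }
            }
            return table;
        }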


  • A better way of getting a data table with various column types into a string array

    - by Vlad
    This should be an easy one; looks like I got myself too confused. I get a table from a database; the data ranges from varchar to int to Null values. The cheap and dirty way of converting this into a tab-delimited file that I already have is this (shrunken to preserve space, ugliness kept on par with the original):

        da.Fill(dt)  ' da - DataAdapter
                     ' dt - DataTable
        Dim lColumns As Long = dt.Columns.Count
        Dim arrColumns(dt.Columns.Count) As String
        Dim arrData(dt.Columns.Count) As Object
        Dim j As Long = 0

        For i = 0 To dt.Rows.Count - 1
            arrData = dt.Rows(i).ItemArray()
            For j = 0 To arrData.GetUpperBound(0) - 1
                arrColumns(j) = arrData(j).ToString
            Next
            wrtOutput.WriteLine(String.Join(strFieldDelimiter, arrColumns))
            Array.Clear(arrColumns, 0, arrColumns.GetLength(0))
            Array.Clear(arrData, 0, arrData.GetLength(0))
        Next

    Not only is this ugly and inefficient, it is also getting on my nerves. Besides, I want, if possible, to avoid the infamous double loop through the table. I would really appreciate a clean and safe way of rewriting this piece. I like the approach that is used here, especially as it is trying to solve the same problem that I have, but it crashes on me when I apply it to my case directly.
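
    A hedged C# sketch of one tidier way to do the same thing (not from the original post): project each row's ItemArray through LINQ, converting DBNull to an empty string, and join with the delimiter:

        // using System; using System.Collections.Generic; using System.Data;
        // using System.IO; using System.Linq;
        static void WriteTabDelimited(DataTable dt, TextWriter writer, string delimiter = "\t")
        {
            foreach (DataRow row in dt.Rows)
            {
                IEnumerable<string> cells = row.ItemArray
                    .Select(v => v == DBNull.Value ? string.Empty : v.ToString());
                writer.WriteLine(string.Join(delimiter, cells));
            }
        }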


  • How to store sorted records in a CSV file?

    - by Harikrishna
    I sort the records of the datatable by date using the column TradingDate, which is of type datetime:

        TableWithOnlyFixedColumns.DefaultView.Sort = "TradingDate asc";

    Now I want to write these sorted records to a CSV file, but it does not write the records sorted by date:

        TableWithOnlyFixedColumns.DefaultView.Sort = "TradingDate asc";
        TableWithOnlyFixedColumns.DefaultView.Sort =
            "[" + TableWithOnlyFixedColumns.Columns["TradingDate"].ColumnName + "] asc";

        DataTable newTable = TableWithOnlyFixedColumns.Clone();
        newTable.DefaultView.Sort = TableWithOnlyFixedColumns.DefaultView.Sort;
        foreach (DataRow oldRow in TableWithOnlyFixedColumns.Rows)
        {
            newTable.ImportRow(oldRow);
        }

        // we'll use these to check for rows with nulls
        var columns = newTable.DefaultView.Table.Columns.Cast<DataColumn>();

        using (var writer = new StreamWriter(@"C:\Documents and Settings\Administrator\Desktop\New Text Document (3).csv"))
        {
            for (int i = 0; i < newTable.DefaultView.Table.Rows.Count; i++)
            {
                DataRow row = newTable.DefaultView.Table.Rows[i];
                // check for any null cells
                if (columns.Any(column => row.IsNull(column)))
                    continue;
                string[] textCells = row.ItemArray
                    .Select(cell => cell.ToString())
                    // may need to pick a text qualifier here
                    .ToArray();
                // check for non-null but EMPTY cells
                if (textCells.Any(text => string.IsNullOrEmpty(text)))
                    continue;
                writer.WriteLine(string.Join(",", textCells));
            }
        }

    So how do I store the sorted records in the CSV file?
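
    A hedged sketch of the usual fix (not from the original post): a DataView's sort only affects enumeration of the view, not Table.Rows, so either iterate the DefaultView or materialize it with ToTable() before writing; everything else in the loop can stay the same:

        // Option 1: enumerate the sorted view directly.
        foreach (DataRowView rowView in newTable.DefaultView)
        {
            DataRow row = rowView.Row;
            // build and write the CSV line from 'row' as before
        }

        // Option 2: materialize the sorted view into a new table and loop over that.
        DataTable sorted = newTable.DefaultView.ToTable();
        foreach (DataRow row in sorted.Rows)
        {
            // build and write the CSV line from 'row' as before
        }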


  • How to calculate how many lines (or records) can fit in the given page size

    - by devcoder
    My problem is to calculate how much data (string content) can fit into a given page size (in inches). I have an application which creates a plain-vanilla HTML report without using any reporting controls. Now I have to provide paging support in this report. The report is dynamic in nature, i.e. the columns are decided at run time. Depending upon the page width, I want to wrap columns onto multiple lines. For example, if the page width is 8", I want to fit only the first n columns on the first line, and the rest of the columns can be displayed on a second line (or more lines if required). For this I need to calculate how much data can fit on an 8"-wide line. Similarly, I want to calculate the height of the data that can fit into the given height of the page.

    To summarize: how can I calculate how much data can fit into a given page size in inches? Note: the calculation should also consider the font, as it is decided at run time.
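
    The question doesn't include code; a hedged C# sketch of one way to estimate this with System.Drawing, assuming the report's font is known at run time and that device DPI is an acceptable basis for the inch-to-pixel conversion:

        // using System; using System.Drawing;
        static void EstimateCapacity(string sample, Font font, float pageWidthInches, float pageHeightInches)
        {
            using (var bmp = new Bitmap(1, 1))
            using (Graphics g = Graphics.FromImage(bmp))
            {
                float pageWidthPx = pageWidthInches * g.DpiX;
                float pageHeightPx = pageHeightInches * g.DpiY;

                // Width of the sample text in pixels; compare against the page width
                // to decide how many columns fit on one line.
                SizeF textSize = g.MeasureString(sample, font);
                bool fitsOnOneLine = textSize.Width <= pageWidthPx;

                // Line height for the font gives the number of text lines per page.
                float lineHeightPx = font.GetHeight(g);
                int linesPerPage = (int)(pageHeightPx / lineHeightPx);

                Console.WriteLine("Fits on one line: {0}, lines per page: {1}", fitsOnOneLine, linesPerPage);
            }
        }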

