Search Results

Search found 6407 results on 257 pages for 'reorder columns'.


  • IN SQL operator in R-Shiny

    - by Piyush
    I am taking a multiple selection for a component as per the code below:

        selectInput("cmpnt", "Choose Component:",
                    choices = as.character(levels(Material_Data()$CMPNT_NM)),
                    multiple = TRUE)

    But when I try to write a SQL statement as given below, it does not work, and it does not throw any error message either. When I was selecting one option at a time (without multiple = TRUE) it worked, since I was using the "=" operator. After setting multiple = TRUE I need to use the IN operator, which is not working:

        Input_Data2 <- fn$sqldf(paste0(
          "select * from Input_Data1 where MTRL_NBR = '$mtrl1' and CMPNT_NM in ('$cmpnt1')"))

    Thanks in advance for any help on this. Thanks jdharrison! Please find the detailed code:

        # server.R
        library(RODBC)
        library(shiny)
        library(sqldf)

        Input_Data <- readRDS("InputSource.rds")
        Mtrl <- factor(Input_Data$MTRL_NBR)
        Mtrl_List <- levels(Mtrl)

        shinyServer(function(input, output) {

          # First UI input (Service column) filter clientData
          output$Choose_Material <- renderUI({
            if (is.null(clientData()))
              return("No client selected")
            selectInput("mtrl", "Choose Material:",
                        choices = as.character(levels(clientData()$MTRL_NBR)),
                        selected = input$mtrl)
          })

          # Second UI input (Rounds column) filter service-filtered clientData
          output$Choose_Component <- renderUI({
            if (is.null(input$mtrl)) return()
            if (is.null(Material_Data()))
              return("No service selected")
            selectInput("cmpnt", "Choose Component:",
                        choices = as.character(levels(Material_Data()$CMPNT_NM)),
                        multiple = TRUE)
          })

          # First data load (client data)
          clientData <- reactive({
            # get(input$Input_Data)
            return(Input_Data)
          })

          # Second data load (filter by service column)
          Material_Data <- reactive({
            dat <- clientData()
            if (is.null(dat)) return(NULL)
            if (!is.null(input$mtrl))
              dat <- dat[dat$MTRL_NBR %in% input$mtrl, ]
            dat <- droplevels(dat)
            return(dat)
          })

          output$Choose_Columns <- renderUI({
            if (is.null(input$mtrl)) return()
            if (is.null(input$cmpnt)) return()
            colnames <- names(Input_Data)
            checkboxGroupInput("columns", "Choose Columns To Display The Data:",
                               choices = colnames, selected = colnames)
          })

          output$text <- renderText({
            print(input$cmpnt)
          })

          output$data_table <- renderTable({
            if (is.null(input$mtrl)) return()
            if (is.null(input$columns) || !(input$columns %in% names(Input_Data)))
              return()
            Input_Data1 <- Input_Data[, input$columns, drop = FALSE]
            cmpnt1 <- input$cmpnt
            mtrl1 <- input$mtrl
            Input_Data2 <- fn$sqldf(paste0(
              "select * from Input_Data1 where MTRL_NBR = '$mtrl1' and CMPNT_NM in ('$cmpnt1')"))
            head(Input_Data2, 10)
          })
        })
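    A likely cause: with fn$sqldf, '$cmpnt1' interpolates the R vector as a single quoted string, so the IN list never contains separate values. A minimal sketch of building the IN list by hand and passing the finished string to sqldf (variable names taken from the question):

        # cmpnt1 is a character vector from input$cmpnt, e.g. c("A", "B")
        cmpnt_list <- paste0("'", paste(cmpnt1, collapse = "','"), "'")  # -> 'A','B'
        sql <- paste0("select * from Input_Data1 where MTRL_NBR = '", mtrl1,
                      "' and CMPNT_NM in (", cmpnt_list, ")")
        Input_Data2 <- sqldf(sql)

    Building the string yourself sidesteps fn$ substitution entirely, which is what collapses the vector in the first place.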


  • drawing graphs in php

    - by user434885
    I am using PHP to generate graphs from arrays. I wish to create multiple graphs on the same page, as I need to design a summary report from answers extracted from a database. Currently I am using this code and am only able to get one single graph. What additions to the code do I need to make to get multiple graphs?

        <?php
        function draw_graph($values)
        {
            // Get the total number of columns we are going to plot
            $columns = count($values);
            // Get the height and width of the final image
            $width = 300;
            $height = 200;
            // Set the amount of space between each column
            $padding = 5;
            // Get the width of 1 column
            $column_width = $width / $columns;
            // Generate the image variables
            $im = imagecreate($width, $height);
            $gray      = imagecolorallocate($im, 0xcc, 0xcc, 0xcc);
            $gray_lite = imagecolorallocate($im, 0xee, 0xee, 0xee);
            $gray_dark = imagecolorallocate($im, 0x7f, 0x7f, 0x7f);
            $white     = imagecolorallocate($im, 0xff, 0xff, 0xff);
            // Fill in the background of the image
            imagefilledrectangle($im, 0, 0, $width, $height, $white);
            $maxv = 0;
            // Calculate the maximum value we are going to plot
            for ($i = 0; $i < $columns; $i++)
                $maxv = max($values[$i], $maxv);
            // Now plot each column
            for ($i = 0; $i < $columns; $i++) {
                $column_height = ($height / 100) * (($values[$i] / $maxv) * 100);
                $x1 = $i * $column_width;
                $y1 = $height - $column_height;
                $x2 = (($i + 1) * $column_width) - $padding;
                $y2 = $height;
                imagefilledrectangle($im, $x1, $y1, $x2, $y2, $gray_dark);
            }
            header("Content-type: image/png");
            imagepng($im);
            imagedestroy($im);
        }

        $values  = array("23","32","35","57","12");
        $values2 = array("123","232","335","157","102");
        draw_graph($values2);
        draw_graph($values);  // no output is coming
        draw_graph($values2); // no output is coming
        draw_graph($values);  // no output is coming
        ?>
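    The root issue: the script sends a PNG stream behind an image/png header, and one HTTP response can carry exactly one image, so every draw_graph() call after the first writes into an already-finished stream. A common fix is to render each chart to its own file and reference the files from an HTML page; a minimal sketch, assuming draw_graph() is changed to take a file name:

        <?php
        // Assumes draw_graph() now accepts a target file and calls
        // imagepng($im, $file) instead of sending the image/png header.
        draw_graph($values,  'chart1.png');
        draw_graph($values2, 'chart2.png');
        echo '<img src="chart1.png"> <img src="chart2.png">';
        ?>

    The other common pattern is a separate image-serving script referenced as <img src="graph.php?id=1">, one request per chart.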


  • How to define schema for an ActiveRecord model?

    - by Eric Stanton
    I can find how to define columns only when doing migrations. However, I do not need to migrate my model; I want to work with it "virtually". Does AR read column data only from the db? Is there any way to define columns like in DataMapper?

        class Post
          include DataMapper::Resource
          property :id,        Serial
          property :title,     String
          property :published, Boolean
        end

    Now I can play with my model without migrations/connections.
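    ActiveRecord does read its column list from the database schema, but you can satisfy it without a real database by declaring the schema against an in-memory SQLite connection; a minimal sketch, assuming the sqlite3 gem is available:

        require "active_record"

        # Throwaway in-memory database; nothing is migrated or persisted.
        ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")

        ActiveRecord::Schema.define do
          create_table :posts do |t|
            t.string  :title
            t.boolean :published
          end
        end

        class Post < ActiveRecord::Base
        end

        Post.new(title: "Hello").title  # => "Hello"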


  • query to return three records for each customer application based on the options declared in the pre

    - by kumarreddy
    The tables look like this:

        table1 -- customer application
            columns: application id (primary key), name, ssn, ...

        table2 -- balance (actually it's a view)
            columns: balance amount, application id, ...

        table3 -- options
            columns: option id, option value (1, 2, 3, 4), ...

        table4 -- ratios
            columns: ratio id, option value, ratio value, application id (have to think about it), ...

    table4 (detail):

        option value | ratio
        1            | 30
        1            | 40
        1            | 30
        2            | 100
        2            | 0
        2            | 0
        3            | 60
        3            | 30
        3            | 10
        4            | 50
        4            | 30
        4            | 20

    As is the case, I now need to get three records for each customer application, with varying balances, in proportion to the ratios declared in table4 corresponding to the option values. Please let me know where I was unclear about returning the records. Thanks in advance.
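    One way to read the requirement, sketched as SQL: join each application to the three ratio rows for its chosen option and split the balance by each ratio. All table and column names below are assumptions, since the question does not show how an application is linked to its option:

        SELECT a.application_id,
               r.ratio_value,
               b.balance * r.ratio_value / 100.0 AS split_balance
        FROM   customer_application a
        JOIN   balance b ON b.application_id = a.application_id
        JOIN   options o ON o.application_id = a.application_id
        JOIN   ratios  r ON r.option_value   = o.option_value
        ORDER  BY a.application_id, r.ratio_value DESC;

    Because each option value has exactly three ratio rows, this returns three records per application.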


  • HTML Table: How to resize data cells when already specified "colspan"?

    - by Jenny
    I have tabular data to display, which has meaning in both rows and columns. Columns are time blocks, rows are days. A particular data cell is confined to a single day, but can span multiple time blocks. To show this, I am using the colspan attribute:

        <div id="GuideTable">
          <table>
            <tr>
              <td colspan="3"></td>
            </tr>
          </table>
        </div>

    Or whatever. I'm trying to apply CSS formatting to the entire table, and for changing colors, etc., things are fine, but getting a consistent width is where I am running into problems. Right now, each data cell's width seems tied to the maximum width in its column (everything auto lines up). Some columns are itty bitty, others are huge. I'm trying to make columns consistently sized (even if that means every column is as big as the biggest column needs to be), but setting an individual data cell's width (either via CSS or in the tag itself) is getting me nowhere. I'm thinking maybe the colspan attribute is overriding my manual width? If that's the case, how can I change the width of a column as a whole, especially since columns aren't explicitly defined? (CAN you explicitly define columns?) Example of the CSS I'm using:

        #GuideTable td {
            background: #ffffff;
            border-style: solid;
            border-width: 1px;
            width: 100px;
        }
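    Columns can be declared explicitly with <colgroup>/<col>, and the stretching described here comes from the browser's automatic table layout; switching to table-layout: fixed makes declared column widths win over content and colspan. A minimal sketch:

        <div id="GuideTable">
          <table style="table-layout: fixed; width: 300px;">
            <colgroup>
              <col style="width: 100px;">
              <col style="width: 100px;">
              <col style="width: 100px;">
            </colgroup>
            <tr><td colspan="3">spans three time blocks</td></tr>
          </table>
        </div>

    With fixed layout the widths come from the <col> elements (or the first row), so a colspan cell no longer inflates any single column.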


  • Grid View Button Passing Data Via On Click

    - by flyersun
    Hi, I'm pretty new to C# and ASP.NET so apologies if this is a really stupid question. I'm using a GridView to display a number of records from a database. Each row has an Edit button. When the button is clicked I want an ID to be passed back to a function in my .cs file. How do I bind the row ID to the button field? I've tried using a hyperlink instead but this doesn't seem to work because I'm posting back to the same page, which already has a parameter on the URL.

    ASP.NET:

        <asp:GridView ID="gvAddresses" runat="server" onrowcommand="Edit_Row">
          <Columns>
            <asp:ButtonField ButtonType="Button" Text="Edit" />
          </Columns>
        </asp:GridView>

    C#:

        int ImplantID = Convert.ToInt32(Request.QueryString["ImplantID"]);
        Session.Add("ImplantID", ImplantID);
        List<GetImplantDetails> DataObject = ImplantDetails(ImplantID);

        System.Data.DataSet DSImplant = new DataSet();
        System.Data.DataTable DTImplant = new DataTable("Implant");
        DSImplant.Tables.Add(DTImplant);

        DataColumn ColPostCode = new DataColumn();
        ColPostCode.ColumnName = "PostCode";
        ColPostCode.DataType = typeof(string);
        DTImplant.Columns.Add(ColPostCode);

        DataColumn ColConsigneeName = new DataColumn();
        ColConsigneeName.ColumnName = "Consignee Name";
        ColConsigneeName.DataType = typeof(string);
        DTImplant.Columns.Add(ColConsigneeName);

        DataColumn ColIsPrimaryAddress = new DataColumn();
        ColIsPrimaryAddress.ColumnName = "Primary";
        ColIsPrimaryAddress.DataType = typeof(int);
        DTImplant.Columns.Add(ColIsPrimaryAddress);

        DataColumn ColImplantCustomerDetailsID = new DataColumn();
        ColImplantCustomerDetailsID.ColumnName = "Implant ID";
        ColImplantCustomerDetailsID.DataType = typeof(int);
        DTImplant.Columns.Add(ColImplantCustomerDetailsID);

        foreach (GetImplantDetails Object in DataObject)
        {
            DataRow DRImplant = DTImplant.NewRow();
            DRImplant["PostCode"] = Object.GetPostCode();
            DRImplant["Consignee Name"] = Object.GetConsigneeName();
            DRImplant["Primary"] = Object.GetIsPrimaryAddress();
            DRImplant["Implant ID"] = Object.GeTImplantCustomerDetailsID();
            DTImplant.Rows.Add(DRImplant); // <-- this is what I need to be added to the button
        }

        gvAddresses.DataSource = DTImplant;
        gvAddresses.DataBind();
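    A common pattern, sketched here under the question's names: set DataKeyNames to the ID column so the grid stores each row's ID, and read it back in the RowCommand handler via the row index that a ButtonField passes in CommandArgument:

        <asp:GridView ID="gvAddresses" runat="server"
                      DataKeyNames="Implant ID" OnRowCommand="Edit_Row">
          <Columns>
            <asp:ButtonField ButtonType="Button" CommandName="EditRow" Text="Edit" />
          </Columns>
        </asp:GridView>

        // Code-behind
        protected void Edit_Row(object sender, GridViewCommandEventArgs e)
        {
            if (e.CommandName == "EditRow")
            {
                // ButtonField supplies the row index as the CommandArgument.
                int rowIndex = Convert.ToInt32(e.CommandArgument);
                int implantId = Convert.ToInt32(gvAddresses.DataKeys[rowIndex].Value);
                // ... use implantId here ...
            }
        }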


  • Finding center of fingerprints.

    - by an_ant
    If we suppose that every fingerprint is made of concentric curves (ellipses or circles) - and I'm aware of the fact that not every fingerprint is - how can I find the center of those concentric curves? Let's take an "ideal" fingerprint and try to find its center. My approaches were:

    Find the spectrum through columns/rows of the image and try to find the columns/rows that maximize a particular band of the spectrum. I thought that the column going through the center would have the most regular pattern of changing amplitudes - therefore, the most recognizable harmonic.

    My second approach was to count the black-and-white transitions through the columns and rows, and to find the row and column that maximize that count.

    While these methods work to some extent, with some additional filtering, they fail when the fingerprint is not ideal. Can you think of any different approach? Are there standard ways to do it?
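    A minimal sketch of the second (transition-counting) approach on a binarized image, assuming NumPy and a 0/1 ridge image; the standard alternative in the fingerprint literature is to compute an orientation field and locate the core via the Poincaré index:

        import numpy as np

        def center_by_transitions(binary_img):
            """binary_img: 2D array of 0/1 pixels (1 = ridge). Returns (x, y)."""
            img = binary_img.astype(int)
            # Count black/white changes down each column and across each row.
            col_changes = np.abs(np.diff(img, axis=0)).sum(axis=0)
            row_changes = np.abs(np.diff(img, axis=1)).sum(axis=1)
            # Take the center where the transition counts peak.
            return int(np.argmax(col_changes)), int(np.argmax(row_changes))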


  • Import data from multiple CSV files to an Excel sheet

    - by Chetan
    I need to import data from 50 similar CSV files into a single Excel sheet. Is there any way to get only selected columns from each file and put them together in one sheet? Structure of my CSV files: a few columns are exactly the same in all the files (I want them in Excel once); then there is one column with the same column name but different data in each file, which I want to place side by side, named after the source file. I do not want any of the remaining columns. In short: read all CSV files; take the columns common to all of them and put them in the sheet; then take the one column that has the same header but different data in each file and put those next to each other, named after the CSV file names; leave the rest of the columns out; write the result to an Excel file. Initially I thought it could be done easily, but considering that my programming skill is still at the learning stage, it is way too difficult for me. Please help.
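    A minimal sketch with pandas; the folder name, the shared key columns, and the name of the varying column are assumptions standing in for the real files:

        from pathlib import Path
        import pandas as pd

        COMMON = ["id", "name"]   # columns identical in every file (assumed)
        VARYING = "value"         # column whose data differs per file (assumed)

        merged = None
        for path in sorted(Path("data").glob("*.csv")):
            df = pd.read_csv(path)[COMMON + [VARYING]]
            df = df.rename(columns={VARYING: path.stem})  # name it after the file
            merged = df if merged is None else merged.merge(df, on=COMMON)

        merged.to_excel("combined.xlsx", index=False)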


  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: Does the number of columns in an SQL table affect the speed in which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyways, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables; one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
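    Column count matters much less than row width and what the hot queries actually touch, and the split being considered is the usual remedy: keep the frequently updated game stats in a narrow table of their own. A sketch under assumed column names:

        CREATE TABLE members (
            member_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            username  VARCHAR(32)  NOT NULL,
            email     VARCHAR(128) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE member_stats (
            member_id INT UNSIGNED PRIMARY KEY,  -- 1:1 with members
            gold      INT NOT NULL DEFAULT 0,
            army_size INT NOT NULL DEFAULT 0,
            turns     INT NOT NULL DEFAULT 0,
            FOREIGN KEY (member_id) REFERENCES members (member_id)
        ) ENGINE=InnoDB;

        -- Join only when a page needs both halves:
        SELECT m.username, s.gold, s.turns
        FROM members m JOIN member_stats s USING (member_id);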


  • C# DataTable to Json?

    - by AliRiza Adiyahsi
    I want to get a DataTable in JSON format to show it on a chart.

        public JsonResult GetDataTable()
        {
            DataTable dt = new DataTable();
            dt.Columns.Add("Jan");
            dt.Columns.Add("Feb");
            dt.Columns.Add("Mar");
            dt.Columns.Add("Apr");

            for (int i = 0; i < 10; i++)
            {
                dt.Rows.Add(i * 5, i * 10, i * 15, i * 11);
            }

            // JsonDataTable = dt to Json
            return new JsonResult
            {
                Data = new { success = true, chartData = JsonDataTable },
                JsonRequestBehavior = JsonRequestBehavior.AllowGet
            };
        }

    How can I convert the DataTable to JSON? Thanks.
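    A DataTable cannot be serialized directly by the default MVC serializer (circular references), so a common workaround is to project the rows into a list of dictionaries first. A minimal sketch of what could replace the // JsonDataTable = dt to Json placeholder (requires System.Collections.Generic):

        // Project each DataRow into a column-name -> value dictionary.
        var rows = new List<Dictionary<string, object>>();
        foreach (DataRow dr in dt.Rows)
        {
            var row = new Dictionary<string, object>();
            foreach (DataColumn col in dt.Columns)
                row[col.ColumnName] = dr[col];
            rows.Add(row);
        }

        // The generic list serializes without trouble.
        return new JsonResult
        {
            Data = new { success = true, chartData = rows },
            JsonRequestBehavior = JsonRequestBehavior.AllowGet
        };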


  • Greatest not null column

    - by Álvaro G. Vicario
    I need to update a row with a formula based on the largest value of two DATETIME columns. I would normally do this:

        GREATEST(date_one, date_two)

    However, both columns are allowed to be NULL. I need the greatest date even when the other is NULL (of course, I expect NULL when both are NULL), and GREATEST() returns NULL when one of the columns is NULL. This seems to work:

        GREATEST(COALESCE(date_one, date_two), COALESCE(date_two, date_one))

    But I wonder... am I missing a more straightforward method?


  • SQL Server column level security

    - by user46372
    I think I need some pointers on security in SQL Server. I'm trying to restrict some of our end users from getting access to certain columns (i.e. SSN) on a table. I thought I could just use column level security to restrict access to the columns. That successfully prevented users from accessing the table directly, but I was surprised that they could still get to those columns through a view that accessed that table. I followed the tips here: http://www.mssqltips.com/sqlservertip/2124/filtering-sql-server-columns-using-column-level-permissions/ Those were very helpful, but when I created a view at the end, the intern was able to access that column by default I've read that views are the best way to accomplish this, but I really don't want to go through and change all of the views and the legacy front-end application. I would rather just restrict it once on the table and if a view tries to access that column it would just fail. Is that possible or am I misunderstanding how security works in SQL Server?
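    The view behaviour is ownership chaining: when a view and its base table have the same owner, SQL Server checks permissions only on the view, so a column-level DENY on the table is never consulted. If rewriting the views is off the table, the column DENY still has to be applied to the views that expose the column. A minimal sketch with assumed object and role names:

        -- Block the column on the base table:
        DENY SELECT ON dbo.Employees (SSN) TO DataEntryRole;

        -- Ownership chaining lets same-owner views bypass that,
        -- so deny the column on each exposing view as well:
        DENY SELECT ON dbo.vwEmployees (SSN) TO DataEntryRole;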


  • Splitting values into groups evenly

    - by Paul Knopf
    Let me try to explain the situation the best I can. Let's say I have 3 values:

        1, 2, 3

    I tell an algorithm to split these values into x columns. Let's say x = 2 for clarification. The algorithm determines that the group of values is best put into two columns the following way:

        1st column    2nd column
        ------------------------
        1             3
        2

    Each column has an even value (totals, not literals). Now let's say I have the following values:

        7, 8, 3, 1, 4

    I tell the algorithm that I want the values split into 3 columns. The algorithm now tells me that the following is the best fit:

        1st column    2nd column    3rd column
        --------------------------------------
        8             7             3
                                    1
                                    4

    Notice how the columns aren't quite even, but it is as close as it can get. A little over and a little under is considered OK, as long as the list is AS CLOSE TO EVEN AS IT CAN BE. Anybody got any suggestions? Know any good methods of doing this?
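    This is the number-partitioning problem (NP-hard to solve exactly), but the greedy "largest first" heuristic gets very close: sort the values in descending order and drop each one into the currently lightest column. A minimal sketch:

        def split_even(values, x):
            """Greedy LPT: place each value (largest first) into the lightest column."""
            columns = [[] for _ in range(x)]
            totals = [0] * x
            for v in sorted(values, reverse=True):
                i = totals.index(min(totals))  # lightest column so far
                columns[i].append(v)
                totals[i] += v
            return columns

        print(split_even([7, 8, 3, 1, 4], 3))  # -> [[8], [7, 1], [4, 3]]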


  • How can I set default seed for all identities within a SQL Server database?

    - by Brandon DuRette
    Is there a way to tell SQL Server to use a specific default seed value for IDENTITY columns - assuming the (to be run) CREATE TABLE statements do not specify one? I don't really care about altering existing data or altering the seed values for specific IDENTITY columns. I want the same seed for all newly created identity columns. Assume I cannot modify the individual CREATE TABLE statements in any way.


  • How to get selected items from a Drag & Drop component?

    - by Thirst for Excellence
    How do I get all of the selected items from selectedContsList in the code below?

        <mx:DataGrid id="dg" dataProvider="{cNumbersList}"
                     allowMultipleSelection="true" dropEnabled="true" dragMoveEnabled="true">
          <mx:columns>
            <mx:DataGridColumn dataField="contactName" headerText="Name"/>
            <mx:DataGridColumn dataField="contactNo" headerText="ContactNo"/>
          </mx:columns>
        </mx:DataGrid>

        <mx:Label text="Selected Contacts :" width="122" color="#C90855" height="16"/>

        <mx:DataGrid id="selectedContsList" allowMultipleSelection="true"
                     dragMoveEnabled="true" dragEnabled="true">
          <mx:columns>
            <mx:DataGridColumn dataField="contactName" headerText="Name"/>
            <mx:DataGridColumn dataField="contactNo" headerText="ContactNo"/>
          </mx:columns>
        </mx:DataGrid>
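    If the goal is every contact that was dropped into the second grid (not only the highlighted rows), read its dataProvider; the highlighted rows live in selectedItems. A short ActionScript sketch:

        // Every contact dropped into the grid:
        for each (var contact:Object in selectedContsList.dataProvider)
            trace(contact.contactName, contact.contactNo);

        // Only the rows currently highlighted:
        for each (var picked:Object in selectedContsList.selectedItems)
            trace(picked.contactName);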


  • What type of structure should be here?

    - by Harikrishna
    I need a suggestion on how the XML structure should look for my requirement. I want to store data for more than one table in a single XML file, that is, I want to define more than one table in a single XML file. And I want to set more than one attribute for each column definition. I have three tables: PersonalInfo, OfficeDetail and OtherInfo. Columns for each table:

        PersonalInfo:
            Columns: Name, Address
            Attributes on Name: optional="true" IsInSameColumn="true" Pattern="abc"

        OfficeDetail:
            Columns: Pid, Work
            Attributes on Pid: optional="true" IsInSameColumn="true" Pattern="798"

        OtherInfo:
            Columns: PhoneNo

    How should the XML structure be arranged so that I can read it as easily as if there were only a single table in the file?
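    One workable shape, sketched below: one <Table> element per table, with each column as a <Column> element carrying its attributes, so every table is read through the same code path. Element and attribute names are illustrative:

        <Tables>
          <Table name="PersonalInfo">
            <Column name="Name" optional="true" IsInSameColumn="true" Pattern="abc"/>
            <Column name="Address"/>
          </Table>
          <Table name="OfficeDetail">
            <Column name="Pid" optional="true" IsInSameColumn="true" Pattern="798"/>
            <Column name="Work"/>
          </Table>
          <Table name="OtherInfo">
            <Column name="PhoneNo"/>
          </Table>
        </Tables>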


  • GridView with multiple DataSources

    - by mike1973
    Hello, I have a GridView that will display columns from multiple data sources (3). The data sources are stored procedures that contain a variety of columns that are not identical. How can I select the columns from these data sources and bind them to the grid programmatically (from code-behind)? Thanks
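    A GridView binds to a single data source, so the usual approach is to combine the three result sets into one DataTable in code-behind first. A minimal sketch, assuming each procedure's result arrives as a DataTable, the rows line up side by side, and column names do not collide (the grid name is assumed):

        // dt1, dt2, dt3 hold the results of the three stored procedures.
        DataTable combined = dt1.Copy();

        foreach (DataTable extra in new[] { dt2, dt3 })
        {
            // Append the extra columns, then copy the values row by row.
            foreach (DataColumn c in extra.Columns)
                combined.Columns.Add(c.ColumnName, c.DataType);
            for (int i = 0; i < combined.Rows.Count && i < extra.Rows.Count; i++)
                foreach (DataColumn c in extra.Columns)
                    combined.Rows[i][c.ColumnName] = extra.Rows[i][c];
        }

        GridView1.DataSource = combined;  // GridView1: assumed control name
        GridView1.DataBind();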


  • Uploading and Importing CSV file to SQL Server in ASP.NET WebForms

    - by Vincent Maverick Durano
    A few weeks ago I was working on a small internal project that involved importing a CSV file to a SQL Server database, and I thought I'd share the simple implementation that I did on the project. In this post I will demonstrate how to upload and import a CSV file to a SQL Server database. As some may already know, importing a CSV file to SQL Server is easy and simple, but difficulties arise when the CSV file contains many columns with different data types. Basically, the provider cannot differentiate data types between the columns or the rows; it blindly assigns a data type based on the first few rows and drops all the data which does not match that data type. To overcome this problem, I used a schema.ini file to define the data types of the CSV file and allow the provider to read it and recognize the exact data type of each column.

    Now what is schema.ini? Taken from the documentation: the schema.ini is an information file, used to define the data structure and format of each column that contains data in the CSV file. If a schema.ini file exists in the directory, the Microsoft.Jet.OLEDB provider automatically reads it and recognizes the data type information of each column in the CSV file. Thus, the provider intelligently avoids the misinterpretation of data types before inserting the data into the database. For more information see: http://msdn.microsoft.com/en-us/library/ms709353%28VS.85%29.aspx

    Points to remember before creating schema.ini:

      1. The schema information file must always be named 'schema.ini'.
      2. The schema.ini file must be kept in the same directory where the CSV file exists.
      3. The schema.ini file must be created before reading the CSV file.
      4. The first line of the schema.ini must be the name of the CSV file, followed by the properties of the CSV file, and then the properties of each column in the CSV file.

    Here's an example of what the schema looks like:

        [Employee.csv]
        ColNameHeader=False
        Format=CSVDelimited
        DateTimeFormat=dd-MMM-yyyy
        Col1=EmployeeID Long
        Col2=EmployeeFirstName Text Width 100
        Col3=EmployeeLastName Text Width 50
        Col4=EmployeeEmailAddress Text Width 50

    To get started, let's go ahead and create a simple blank database; just for the purpose of this demo I created a database called TestDB. After creating the database, fire up Visual Studio and create a new WebApplication project. Under the root application create a folder called UploadedCSVFiles and place the schema.ini in that folder. The uploaded CSV files will be stored in this folder after the user imports the file. Now add a WebForm to the project and set up the HTML markup with one (1) FileUpload control, one (1) Button and three (3) Label controls. After that we can proceed with the code for uploading and importing the CSV file to the SQL Server database.
    Here are the full code blocks below:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Data.OleDb;
        using System.IO;
        using System.Text;

        namespace WebApplication1
        {
            public partial class CSVToSQLImporting : System.Web.UI.Page
            {
                private string GetConnectionString()
                {
                    return System.Configuration.ConfigurationManager.ConnectionStrings["DBConnectionString"].ConnectionString;
                }

                private void CreateDatabaseTable(DataTable dt, string tableName)
                {
                    string sqlQuery = string.Empty;
                    string sqlDBType = string.Empty;
                    string dataType = string.Empty;
                    int maxLength = 0;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("CREATE TABLE {0} (", tableName));

                    for (int i = 0; i < dt.Columns.Count; i++)
                    {
                        dataType = dt.Columns[i].DataType.ToString();
                        if (dataType == "System.Int32")
                        {
                            sqlDBType = "INT";
                        }
                        else if (dataType == "System.String")
                        {
                            sqlDBType = "NVARCHAR";
                            maxLength = dt.Columns[i].MaxLength;
                        }

                        if (maxLength > 0)
                        {
                            sb.AppendFormat(string.Format(" {0} {1} ({2}), ", dt.Columns[i].ColumnName, sqlDBType, maxLength));
                        }
                        else
                        {
                            sb.AppendFormat(string.Format(" {0} {1}, ", dt.Columns[i].ColumnName, sqlDBType));
                        }
                    }

                    sqlQuery = sb.ToString();
                    sqlQuery = sqlQuery.Trim().TrimEnd(',');
                    sqlQuery = sqlQuery + " )";

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                private void LoadDataToDatabase(string tableName, string fileFullPath, string delimeter)
                {
                    string sqlQuery = string.Empty;
                    StringBuilder sb = new StringBuilder();

                    sb.AppendFormat(string.Format("BULK INSERT {0} ", tableName));
                    sb.AppendFormat(string.Format(" FROM '{0}'", fileFullPath));
                    sb.AppendFormat(string.Format(" WITH ( FIELDTERMINATOR = '{0}' , ROWTERMINATOR = '\n' )", delimeter));

                    sqlQuery = sb.ToString();

                    using (SqlConnection sqlConn = new SqlConnection(GetConnectionString()))
                    {
                        sqlConn.Open();
                        SqlCommand sqlCmd = new SqlCommand(sqlQuery, sqlConn);
                        sqlCmd.ExecuteNonQuery();
                        sqlConn.Close();
                    }
                }

                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void BTNImport_Click(object sender, EventArgs e)
                {
                    if (FileUpload1.HasFile)
                    {
                        FileInfo fileInfo = new FileInfo(FileUpload1.PostedFile.FileName);
                        if (fileInfo.Name.Contains(".csv"))
                        {
                            string fileName = fileInfo.Name.Replace(".csv", "").ToString();
                            string csvFilePath = Server.MapPath("UploadedCSVFiles") + "\\" + fileInfo.Name;

                            // Save the CSV file in the Server inside 'MyCSVFolder'
                            FileUpload1.SaveAs(csvFilePath);

                            // Fetch the location of CSV file
                            string filePath = Server.MapPath("UploadedCSVFiles") + "\\";
                            string strSql = "SELECT * FROM [" + fileInfo.Name + "]";
                            string strCSVConnString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + filePath + ";" + "Extended Properties='text;HDR=YES;'";

                            // Load the data from CSV to DataTable
                            OleDbDataAdapter adapter = new OleDbDataAdapter(strSql, strCSVConnString);
                            DataTable dtCSV = new DataTable();
                            DataTable dtSchema = new DataTable();

                            adapter.FillSchema(dtCSV, SchemaType.Mapped);
                            adapter.Fill(dtCSV);

                            if (dtCSV.Rows.Count > 0)
                            {
                                CreateDatabaseTable(dtCSV, fileName);
                                Label2.Text = string.Format("The table ({0}) has been successfully created to the database.", fileName);

                                string fileFullPath = filePath + fileInfo.Name;
                                LoadDataToDatabase(fileName, fileFullPath, ",");

                                Label1.Text = string.Format("({0}) records has been loaded to the table {1}.", dtCSV.Rows.Count, fileName);
                            }
                            else
                            {
                                LBLError.Text = "File is empty.";
                            }
                        }
                        else
                        {
                            LBLError.Text = "Unable to recognize file.";
                        }
                    }
                }
            }
        }

    The code above consists of three (3) private methods: GetConnectionString(), CreateDatabaseTable() and LoadDataToDatabase(). GetConnectionString() returns a string; it basically gets the connection string that is configured in the web.config file. CreateDatabaseTable() accepts two (2) parameters, a DataTable and the file name; as the method name suggests, it automatically creates a table in the database based on the source DataTable and the file name of the CSV file. LoadDataToDatabase() accepts three (3) parameters, the tableName, fileFullPath and delimeter value; this method is where the actual saving or importing of data from CSV to SQL Server happens. The code in the BTNImport_Click event handles the uploading of the CSV file to the specified location, and this is also where CreateDatabaseTable() and LoadDataToDatabase() are called. If you notice, I also added some basic trappings and validations within that event.

    Now, to test the importing utility, let's create some simple data in CSV format. Just for the simplicity of this demo, let's create a CSV file named "Employee" and add some data to it. Here's an example below:

        1,VMS,Durano,[email protected]
        2,Jennifer,Cortes,[email protected]
        3,Xhaiden,Durano,[email protected]
        4,Angel,Santos,[email protected]
        5,Kier,Binks,[email protected]
        6,Erika,Bird,[email protected]
        7,Vianne,Durano,[email protected]
        8,Lilibeth,Tree,[email protected]
        9,Bon,Bolger,[email protected]
        10,Brian,Jones,[email protected]

    Now save the newly created CSV file to some location on your hard drive. Okay, let's run the application and browse to the CSV file that we have just created. (Screenshots omitted: after browsing the CSV file; after clicking the Import button.) Now if we look at the database that we created earlier, you'll notice that the Employee table was created with the imported data in it.

    That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET,CSV,SQL,C#,ADO.NET


  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    CASE Statement in ORDER BY Clause – ORDER BY using Variable
    This article is as per a request from the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP) and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but still not with the best performance.

    SSMS – View/Send Query Results to Text/Grid/Files
    Results to Text – CTRL + T
    Results to Grid – CTRL + D
    Results to File – CTRL + SHIFT + F

    2008

    Introduction to SPARSE Columns Part 2
    I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail; I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns.

    Deferred Name Resolution
    How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used?

    2009

    Backup Timeline and Understanding of Database Restore Process in Full Recovery Model
    In general, database backups in full recovery mode are taken as three different kinds of backup files: Full Database Backup, Differential Database Backup and Log Backup.

    Restore Sequence and Understanding NORECOVERY and RECOVERY
    While doing a RESTORE operation, if more backup files remain to be restored, always use the NORECOVERY option, as that keeps the database in a state where further backup files can be restored. It also keeps the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and operational.

    Four Different Ways to Find Recovery Model for Database
    Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know the various methods of performing a single task.

    Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database
    When the Information Schema is used, we cannot discern between primary key and foreign key; we get both keys together. In the case of the sys schema, we can query the data in our preferred way and can join it to other tables to retrieve additional data.

    Get Last Running Query Based on SPID
    SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID.

    2010

    SELECT * FROM dual – Dual Equivalent
    Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named "dummy" and one record, whose value is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual

    Identifying Statistics Used by Query
    Someone asked this question in my training class on query optimization and performance tuning: "Can I know which statistics were used by my query?"

    2011

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31
    What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio. Explain IntelliSense for Query Editing. Explain MultiServer Query. Explain Query Editor Regions. Explain Object Explorer Enhancements. Explain Activity Monitors.

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31
    What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31
    What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31
    How will you Handle Error in SQL SERVER 2008? What is RAISEERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31
    How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31
    How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management?

    SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31
    What are Policy Management Terms? What is the 'FILLFACTOR'? Where in MS SQL Server is '100' equal to '0'? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor?

    2012

    Example of Width Sensitive and Width Insensitive Collation
    Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) compare as not equal, the collation is width sensitive. In this example we have one table with two columns; one column has a width-sensitive collation and the second column a width-insensitive collation.

    Find Column Used in Stored Procedure – Search Stored Procedure for Column Name
    A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, and both have a conversation about how to find the column in the stored procedure. Here is the two-part story: Part 1 | Part 2

    SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage

    Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video
    In simple words, in many cases a database moves from one place to another place. It is not always possible to back up and restore databases. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for schema and for data and move it from one server to another.

    INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1
    I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 and a large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Permission issue when deleting folders

    - by nate
    I can't delete some folders, and some actions are being denied because I do not have administrator access, yet in my user account it says I am an administrator. I also just upgraded from Windows Vista 64-bit to Windows 7 64-bit; I had this problem before upgrading, but now I am being denied more things. I also can't reorder the way files are sorted in folders: I can sort them through the default means, but I literally can't drag and drop them if I want to reorder them. I don't know if this is another problem or if it is related. Let me know if you need more information. Thanks!


  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept of SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing with this blog post:

        Installing Data Quality Services (DQS) on SQL Server 2012
        Connecting Error to Data Quality Services (DQS) on SQL Server 2012

    This article is an introduction to Data Quality Services for beginners. We will be using an Excel file as the knowledge source. In the first article we learned how to install DQS. In this article we will see how we can build a Knowledge Base and use it to help us identify the quality of the data, as well as help correct bad-quality data. Here are the two very important steps we will be learning in this tutorial:

        1. Building a New Knowledge Base
        2. Creating a New Data Quality Project

    Let us start with building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge base. Here is the Excel file we will be using: there are two columns, one is Colors and the other is Shade. They are independent columns, not related to each other. The point I am trying to show is that Column A contains unique data and Column B contains duplicate records.

    Clicking on New Knowledge Base will bring up a screen where you enter the name of the new knowledge base. Clicking NEXT will bring up a screen which allows you to select the Excel file and also lets you select the source columns. I have selected both Colors and Shade as source columns.

    Creating a domain is very important. Here you can create a unique domain, or a domain which is built compositely from Colors and Shade. As this is the first example, I will create a unique domain for each column: for Colors I will create the domain Colors, and for Shade I will create the domain Shade.

    Clicking NEXT will bring you to a screen where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data whereas Shade has non-unique data, with only two unique data rows. On the next screen you can add more rows as well as see the frequency of the data, as the values are listed uniquely. Clicking NEXT will publish the knowledge base which was just created.

    Now the knowledge base is created. We will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity; in reality you can just as easily use a SQL Server table.

    Click on New Data Quality Project to start a DQS project. The next screen asks which knowledge base to use; we will be using our Colors knowledge base, which we have just created. In the Colors knowledge base we had two columns, 1) Colors and 2) Shade, and we will be using both of the mappings here. A user can select one or multiple column mappings.

    Now the most important phase of the whole project: click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It showed that in the Colors column it did not correct any value by itself, but for the Shade column it has a suggestion.

    We can train DQS to correct values, but let us keep that subject for future blog posts. Now click NEXT and keep the domain Colors selected on the left side. It will show that there are two incorrect values which need to be corrected. This is the place where, once a value is corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked the Approve radio buttons. As soon as I clicked the Approve buttons, the rows disappeared from this tab and moved to the Corrected tab. If I had clicked Reject, it would have moved the rows to the Invalid tab instead. On this screen you can see the two corrected rows, and you can click on the Corrected tab to see the previously validated six rows which passed the DQS process.

    Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also shows that Dark is the correct answer. I clicked Approve and the row moved to the Corrected tab.

    On the next screen DQS shows the summary of all the activities. It also shows how the correction of the quality of the data was performed. The user can export their data to a SQL Server table, CSV file or Excel. The user also has the option to export either the data with all the associated cleansing info, or data only. I will select Data only for demonstration purposes. Clicking Export will generate the file; opening the generated file shows data that is complete and corrected.

    Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In future posts we will go over advanced concepts.

    Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be indeed interesting to see where it is implemented.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS


  • AutoAudit 1.10c

    - by Paul Nielsen
    AutoAudit is a free SQL Server (2005, 2008) code-gen utility that creates audit trail triggers with:

        · Created, Modified, and RowVersion (incrementing INT) columns added to the table
        · A view to reconstruct deleted rows
        · A UDF to reconstruct row history
        · A schema audit trigger to track schema changes
        · Re-code-gen of triggers when ALTER TABLE changes the table

    Version 1.10c adds CreatedBy and ModifiedBy columns. Pass the user to the column and AutoAudit records that username instead of the Suser_Sname...(read more)

