Search Results

Search found 1303 results on 53 pages for 'dr stonyhills'.

  • SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log

    - by pinaldave
    One of the biggest reasons I go to SQLPASS is that my friends go there too. There are so many friends with whom I often talk on Facebook and Twitter, but I rarely get time to meet them and talk with them in person. One thing I can usually count on is that many of them will be attending SQLPASS. This is one event every SQL Server enthusiast should attend. Just like everybody else, I had a pleasant time meeting many of my SQL friends. There were many friends I met without taking a photo, and many friends who took photos on their own cameras that I do not have. Here is just 1% of the photos which I have. If you are not in a photo, it does not mean I have any less respect for our friendship. Please post a link to our photo together :) I was very fortunate to snap a quick photograph with Dr. David DeWitt. I stood outside the hall waiting for him to show up, and when he was heading down from the convention center I asked if I could have one photo for my memory lane; very politely, he agreed. It indeed made my day! Pinal Dave with Dr. David DeWitt Every single time I meet Steve, I make sure I get one photo for my memory. Steve is so kind every single time. If you know SQL and do not know Steve Jones, you do not know SQL (IMHO). Following is a photograph with Michael McLean. More details about this photo in a future blog post! Pinal Dave, Michael McLean, and Rick Morelan Arnie always shares his wisdom with me. I still remember the very first time I visited the USA: I was standing alone in a corner, and Arnie walked over and introduced me to every single person he knew. Talking to Arnie is always a pleasure and always inspiring. Arnie Rowland and Pinal Dave I am now a published author and have written two books so far. I am fortunate to have Rick Morelan as the co-author of both of my books. He is a great guy and very easy to become friends with. I was very much impressed by him and his kindness during the co-authoring of the books. Here is the very first photograph of us together at SQLPASS. Rick Morelan and Pinal Dave Diego Nogare and I have been talking for a long time on Twitter and various social media channels. I finally got the chance to meet my friend from Brazil. It was an excellent experience to meet a friend I had wanted to meet for a long time but never had the chance to before. Buck Woody – who does not know Buck? He is funny, kind and, most importantly, a friend to everyone. Buck is so kind that he does not hesitate to approach people, even though he is famous and well known in the community. Every time I meet him I learn something. He is always smiling and approachable. Pinal Dave and Buck Woody Rushabh Mehta is the current SQL PASS president and a personal friend. He always has a smiling face and tremendous love for the SQL community. I often wonder where he finds the time for all the effort he puts into the community. I never miss a chance to meet and greet him. Even though he is a renowned SQL guru and an extremely busy person, every single time I meet him he asks me, "How are Nupur and Shaivi?" He even remembers my wife's and daughter's names. I am touched. Rushabh Mehta and Pinal Dave Nigel Sammy has an excellent sense of humor and a passion for the community. We have excellent synergy while we are together. The attached photo was taken while I was talking to him about SQL on the Seattle shoreline. Pinal Dave and Nigel Sammy Rick Morelan wanted this trip of mine to be memorable. I am vegetarian and I told him that I do not like seafood. Well, to prove a point, he took me to a fantastic seafood restaurant in Seattle and treated me to mouth-watering vegetarian dishes. I think when I go to Seattle next time, I am going to make him take me to the same place again. Rick, Rushabh, Pinal and Paras Well, this is a short summary of a few of the friends I met in Seattle. What is life without friends, eh? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case). All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
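
    For illustration only (this is not from the original post), a minimal C# sketch of the kind of manual or automatic failure-detection process described above. The endpoint URLs and the SwitchTrafficTo routine are placeholder assumptions - in practice you would flip a DNS record, reconfigure a load balancer, or use a service such as Traffic Manager.

        // Minimal failover-probe sketch: checks a primary endpoint and routes to a
        // secondary one when the primary stops answering. All names are hypothetical.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class FailoverProbe
        {
            static readonly string Primary = "https://primary.example.com/health";
            static readonly string Secondary = "https://secondary.example.com/health";

            static async Task Main()
            {
                using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
                while (true)
                {
                    bool primaryUp = await IsHealthyAsync(client, Primary);
                    SwitchTrafficTo(primaryUp ? Primary : Secondary);
                    await Task.Delay(TimeSpan.FromSeconds(30));
                }
            }

            static async Task<bool> IsHealthyAsync(HttpClient client, string url)
            {
                try
                {
                    var response = await client.GetAsync(url);
                    return response.IsSuccessStatusCode;
                }
                catch (HttpRequestException) { return false; }  // endpoint unreachable
                catch (TaskCanceledException) { return false; } // request timed out
            }

            static void SwitchTrafficTo(string endpoint)
            {
                // Placeholder: update DNS, a load-balancer pool, or an application alias here.
                Console.WriteLine("Routing traffic to " + endpoint);
            }
        }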

    Read the article

  • How to Transform a user's search string into a MS SQL Full-Text Search Phrase

    - by Atomiton
    I've searched for answers to this and I can't seem to find an answer to what should be somewhat simple. This is related to another question I asked, but it's different. What's the best way to take a user's search phrase and throw it into a CONTAINSTABLE(table, column, @phrase, topN) phrase? Say, for example, the user inputs: Books by "Dr. Seuss" What's the best way to turn that into something that will return results in my ContainsTable() phrase? I was previously parsing the search phrase and writing something like ISABOUT("Books" WEIGHT(1.0), "by" WEIGHT(0.9), "Dr. Seuss" WEIGHT(0.8)) as my @phrase, but ISABOUT seems to be returning odd results... especially when one-word searches are entered. Any ideas?
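
    For illustration (not part of the question), a rough C# sketch of one way to turn the user's input into a CONTAINSTABLE condition: quoted chunks are kept as phrases, the remaining words are AND-ed together, and the result is passed as the @phrase parameter. Depending on the server's stoplist settings, stopwords such as "by" may still need filtering.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class FtsPhraseBuilder
        {
            // Build("Books by \"Dr. Seuss\"") -> "\"Dr. Seuss\" AND \"Books\" AND \"by\""
            public static string Build(string userInput)
            {
                var terms = new List<string>();

                // Keep "quoted phrases" intact...
                foreach (Match m in Regex.Matches(userInput, "\"([^\"]+)\""))
                    terms.Add("\"" + m.Groups[1].Value + "\"");

                // ...then treat whatever is left as individual words.
                string remainder = Regex.Replace(userInput, "\"[^\"]+\"", " ");
                foreach (string word in remainder.Split(new[] { ' ', '\t' },
                             StringSplitOptions.RemoveEmptyEntries))
                    terms.Add("\"" + word + "\"");

                // AND keeps results precise; OR is an option if recall matters more.
                return string.Join(" AND ", terms);
            }
        }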

    Read the article

  • display data from json file in datagrid

    - by kayn
    I want to display data from a json files in a data grid using dojo ver 1.0.0. I am able to diplay the data when i declare it on my code but when i store the same data in a json format so i can reference it in my script,i get an empty grid. This is my json file; { data: [ ['10''myfile','Css', 'CS Degree','Dr. Bottoman','This is mine'], ['10'myfile2','CS716', 'CS Degree','Prof Frank', 'This is course'], ['10'myfile3 ','CS714', 'CS Degree', 'Dr. Ree', 'Welcome'], ['14', 'myfile4','CS772', 'CS Degree', 'Mr. Boss', 'This will display content' ], ['18', 'myfile5','CS774', 'CS Degree','Ms. Kirk', 'This is networks.' ] ] } and below is my code; @import "../../../dojo/resources/dojo.css"; @import "../_grid/Grid.css"; body { font-size: 1.0em; } #grid { height: 400px; border: 1px solid silver; } .text-oneline { white-space: nowrap; overflow: hidden; text-overflow: ellipsis; } .text-scrolling { height: 4em; overflow: auto; } .text-scrolling { width: 21.5em; } dojo.require("dojox.grid.Grid"); dojo.require("dojox.grid._data.model"); dojo.require("dojo.parser"); <script type="text/javascript"> /*<span dojoType="dojo.data.ItemFileWriteStore" jsId="myStore" url="course.json"> </span>*/ data = [ ['10''myfile','Css', 'CS Degree','Dr. Bottoman','This is mine'], ['10'myfile2','CS716', 'CS Degree','Prof Frank', 'This is course'], ['10'myfile3 ','CS714', 'CS Degree', 'Dr. Ree', 'Welcome'], ['14', 'myfile4','CS772', 'CS Degree', 'Mr. Boss', 'This will display content' ], ['18', 'myfile5','CS774', 'CS Degree','Ms. Kirk', 'This is networks.' ] ]; getDetailData = function(inRowIndex) { var row = data[this.grid.dataRow % data.length ]; switch (this.index) { case 0: return row[5]; case 1: return row[2]; case 2: return row[0]; case 3: return row[1]; case 4: return row[3]; case 5: return row[4]; default: return row[this.index]; } } getName = function(inRowIndex) { var row = data[inRowIndex % data.length]; return row[1]; } // Main grid structure var gridCells = [ { type: 'dojox.GridRowView', width: '20px' }, { onBeforeRow: function(inDataIndex, inSubRows) { inSubRows[1].hidden = !detailRows[inDataIndex]; }, cells: [[ { name: 'Master', width: 3, get: getCheck, styles: 'text-align: center;' }, { name: 'Detail', get: getName, width: 60 }, ], [ { name: '', get: getDetail, colSpan: 2, styles: 'padding: 0; margin: 0;'} ]] } ]; // html for the +/- cell function getCheck(inRowIndex) { var image = (detailRows[inRowIndex] ? 'open.gif' : 'closed.gif'); var show = (detailRows[inRowIndex] ? 'false' : 'true') return ''; } // provide html for the Detail cell in the master grid function getDetail(inRowIndex) { var cell = this; // we can affect styles and content here, but we have to wait to access actual nodes setTimeout(function() { buildDetailgrid(inRowIndex, cell); }, 1); // look for a Detailgrid var Detailgrid = dijit.byId(makeDetailgridId(inRowIndex)); var h = (Detailgrid ? 
Detailgrid.cacheHeight : "120") + "px"; // insert a placeholder return ''; } // the Detail cell contains a Detailgrid which we set up below var DetailgridCells = [{ noscroll: true, cells: [ [ {name: "Brief Course Description",width: "auto"}, {name: "Course Code" }, {name: "Credits" }, {name: "Subject" }, {name: "Prerequisite" }, {name: "Lecturer"}], [] ]}]; var DetailgridProps = { structure: DetailgridCells, rowCount: 1, autoHeight: true, autoRender: false, "get": getDetailData }; // identify Detailgrids by their row indices function makeDetailgridId(inRowIndex) { return grid.widgetId + "Detailgrid"/+ inRowIndex/; } // if a Detailgrid exists at inRowIndex, detach it from the DOM function detachDetailgrid(inRowIndex) { var Detailgrid = dijit.byId(makeDetailgridId(inRowIndex)); if (Detailgrid) dojox.grid.removeNode(Detailgrid.domNode); } // render a Detailgrid into inCell at inRowIndex function buildDetailgrid(inRowIndex, inCell) { var n = inCell.getNode(inRowIndex).firstChild; var id = makeDetailgridId(inRowIndex); var Detailgrid = dijit.byId(id); if (Detailgrid) { n.appendChild(Detailgrid.domNode); } else { DetailgridProps.dataRow = inRowIndex; DetailgridProps.widgetId = id; Detailgrid = new dojox.VirtualGrid(DetailgridProps, n); } if (Detailgrid) { Detailgrid.render(); Detailgrid.cacheHeight = Detailgrid.domNode.offsetHeight; inCell.grid.rowHeightChanged(inRowIndex); } } // destroy Detailgrid at inRowIndex function destroyDetailgrid(inRowIndex) { var Detailgrid = dijit.byId(makeDetailgridId(inRowIndex)); if (Detailgrid) Detailgrid.destroy(); } // when user clicks the +/- detailRows = []; function toggleDetail(inIndex, inShow) { if (!inShow) detachDetailgrid(inIndex); detailRows[inIndex] = inShow; grid.updateRow(inIndex); } dojo.addOnLoad(function() { window["grid"] = dijit.byId("grid"); dojo.connect(grid, 'rowRemoved', destroyDetailgrid); }); Test grid

    Read the article

  • Dropdownlist and Datareader

    - by salvationishere
    After trying many solutions listed on the internet I am very confused now. I have a C#/SQL web application for which I am simply trying to bind an ExecuteReader command to a Dropdownlist so the user can select a value. This is a VS2008 project on an XP OS. How it works is after the user selects a table, I use this selection as an input parameter to a method from my Datamatch.aspx.cs file. Then this Datamatch.aspx.cs file calls a method from my ADONET.cs class file. Finally this method executes a SQL procedure to return the list of columns from that table. (These are all tables in Adventureworks DB). I know that this method returns successfully the list of columns if I execute this SP in SSMS. However, I'm not sure how to tell if it works in VS or not. This should be simple. How can I do this? Here is some of my code. The T-sQL stored proc: CREATE PROCEDURE [dbo].[getColumnNames] @TableName VarChar(50) AS BEGIN SET NOCOUNT ON; SELECT col.name 'COLUMN_NAME' FROM sysobjects obj INNER JOIN syscolumns col ON obj.id = col.id WHERE obj.name = @TableName END It gives me desired output when I execute following from SSMS: exec getColumnNames 'AddressType' And the code from Datamatch.aspx.cs file currently is: // Add DropDownList Control to Placeholder private void CreateDropDownLists() { SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); int NumControls = targettable.Length; DropDownList ddl = new DropDownList(); } Where ADONET_methods.DisplayTableColumns(targettable) is: public static SqlDataReader DisplayTableColumns(string tt) { SqlDataReader dr = null; string TableName = tt; string connString = "Server=(local);Database=AdventureWorks;Integrated Security = SSPI"; string errorMsg; SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = new SqlCommand("getColumnNames"); //conn2.CreateCommand(); try { cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar); parm.Value = "Person." + TableName.Trim(); parm.Direction = ParameterDirection.Input; cmd.Parameters.Add(parm); conn2.Open(); dr = cmd.ExecuteReader(); } catch (Exception ex) { errorMsg = ex.Message; } return dr; }
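
    For illustration, one possible way (a sketch, not the poster's code) to finish CreateDropDownLists(): bind the reader to the DropDownList via the COLUMN_NAME alias the stored procedure returns, then add the control to a PlaceHolder. The control name PlaceHolder1 is an assumption; note also that DisplayTableColumns leaves its connection open, so using CommandBehavior.CloseConnection there would let disposing the reader close it.

        private void CreateDropDownLists()
        {
            DropDownList ddl = new DropDownList();
            ddl.ID = "ddlColumns";

            using (SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable))
            {
                if (dr2 != null)
                {
                    ddl.DataSource = dr2;
                    ddl.DataTextField = "COLUMN_NAME";   // alias returned by getColumnNames
                    ddl.DataValueField = "COLUMN_NAME";
                    ddl.DataBind();
                }
            }

            PlaceHolder1.Controls.Add(ddl);              // PlaceHolder1 is an assumed control name
        }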

    Read the article

  • ColdFusion structs Direct Assignment vs object literal notation.

    - by Tom Hubbard
    The newer versions of ColdFusion (I believe CF 8 and 9) allow you to create structs with object literal notation similar to JSON. My question is, are there specific benefits (execution efficiency maybe) to using object literal notation over individual assignments for data that is essentially static? For example: With individual assignments you would do something like this: var user = {}; user.Fname = "MyFirstnam"; user.Lname = "MyLastName"; user.titles = []; ArrayAppend(user.titles,'Mr'); ArrayAppend(user.titles,'Dr.'); Whereas with object literals you would do something like. var user = {Fname = "MyFirstnam", Lname = "MyLastName", titles = ['Mr','Dr']}; Now this limited example is admittedly simple, but if titles was an array of structures (Say an array of addresses), the literal notation becomes awkward to work with.

    Read the article

  • named_scope and substrings

    - by Philb28
    I have a named_scope in Rails that finds episodes by their director's given name: named_scope :director_given, lambda { |dr| {:joins => :director, :conditions => ['given = ?', dr]} } It works great, but I would like it to also work on substrings of the name, e.g. instead of having to search for 'Lucy' you could just search for 'Lu'. P.S. I also have another named_scope which does exactly the same thing but on the director's last name. Is there a way to combine the two? Thanks,

    Read the article

  • datatable works in C# winform but not ASP.NET

    - by Charles Gargent
    Hi, I have created a class that returns a DataTable. When I use the class in a C# WinForm, the DataGridView is populated correctly using the following code: dataGridView1.DataSource = dbLib.GetData(); However, when I try the same thing with ASP.NET I get an "Object reference not set to an instance of an object" using the following code: GridView1.DataSource = dbLib.GetData(); GridView1.DataBind(); What am I doing wrong / missing? Thanks. EDIT: for the curious, here is the dbLib class: public static DataTable GetData() { SQLiteConnection cnn = new SQLiteConnection("Data Source=c:\\test.db"); SQLiteCommand cmd = new SQLiteCommand("SELECT count(Message) AS Occurances, Message FROM evtlog GROUP BY Message ORDER BY Occurances DESC LIMIT 25", cnn); cnn.Open(); SQLiteDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection); DataTable dt = new DataTable(); dt.Load(dr); return dt; }

    Read the article

  • cannot read multiple rows from sqldatareader

    - by amby
    Hi, when I query for only one record/row, SqlDataReader gives the correct result, but when I query for multiple rows it gives an error on the client side. Below is my code; please tell me what the problem is here. string query = "select * from Customer_Order where orderNumber = " + order;//+" OR orderNumber = 17"; DataTable dt = new DataTable(); Hashtable sendData = new Hashtable(); try { using (SqlConnection conn = new SqlConnection(ConnectionString)) { using (SqlCommand cmd = new SqlCommand(query, conn)) { conn.Open(); SqlDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection); dt.Load(dr); } }
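
    For illustration, a parameterized sketch of the same query (table and column names taken from the post). Concatenating the order number into the SQL string invites both syntax errors and SQL injection; passing it as a parameter and loading the reader into the DataTable handles one row or many rows the same way.

        DataTable dt = new DataTable();
        using (SqlConnection conn = new SqlConnection(ConnectionString))
        using (SqlCommand cmd = new SqlCommand(
            "select * from Customer_Order where orderNumber = @order", conn))
        {
            cmd.Parameters.AddWithValue("@order", order);
            conn.Open();
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                dt.Load(dr);   // Load() consumes every row the reader returns
            }
        }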

    Read the article

  • SQL Invalid Object Name 'AddressType'

    - by salvationishere
    I am getting the above error in my VS 2008 C# method when I try to invoke the SQL getColumnNames stored procedure from VS. This SP accepts one input parameter, the table name, and works successfully from SSMS. Currently I am selecting the AdventureWorks AddressType table for it to pull the column names from this table. I can see teh AdventureWorks table available in VS from my Server Explorer / Data Connection. And I see both the AddressType table and getColumnNames SP showing in Server Explorer. But I am still getting this error listed above. Here is the C# code snippet I use to execute this: public static DataTable DisplayTableColumns(string tt) { SqlDataReader dr = null; string TableName = tt; string connString = "Data Source=.;AttachDbFilename=\"C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\AdventureWorks_Data.mdf\";Initial Catalog=AdventureWorks;Integrated Security=True;Connect Timeout=30;User Instance=False"; string errorMsg; SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = conn2.CreateCommand(); try { cmd.CommandText = "dbo.getColumnNames"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar); parm.Value = TableName; parm.Direction = ParameterDirection.Input; cmd.Parameters.Add(parm); conn2.Open(); dr = cmd.ExecuteReader(); } catch (Exception ex) { errorMsg = ex.Message; } And when I examine the errorMsg it says the following: " at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)\r\n at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)\r\n at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)\r\n at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)\r\n at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()\r\n at System.Data.SqlClient.SqlDataReader.get_MetaData()\r\n at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)\r\n at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)\r\n at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)\r\n at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)\r\n at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)\r\n at System.Data.SqlClient.SqlCommand.ExecuteReader()\r\n at ADONET_namespace.ADONET_methods.DisplayTableColumns(String tt) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL\AddFileToSQL\ADONET methods.cs:line 35" Where line 35 is dr = cmd.ExecuteReader();
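
    As an illustration of one thing worth checking (a guess, not a confirmed diagnosis): the connection string shown is an ordinary string literal with unescaped backslashes and embedded quotes, which a verbatim @"..." literal avoids; and since getColumnNames matches on sysobjects.name, the parameter should be the bare table name exactly as it appears there. The path below is simply the one from the post.

        string connString =
            @"Data Source=.;AttachDbFilename=C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\AdventureWorks_Data.mdf;" +
            @"Initial Catalog=AdventureWorks;Integrated Security=True;Connect Timeout=30;User Instance=False";

        using (SqlConnection conn2 = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("dbo.getColumnNames", conn2))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // Pass the table name as it appears in sysobjects.name, e.g. "AddressType".
            cmd.Parameters.Add("@TableName", SqlDbType.VarChar).Value = TableName;
            conn2.Open();
            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                    Console.WriteLine(dr.GetString(0));
            }
        }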

    Read the article

  • Trying to get JQuery Autocomplete working on Asp.Net page.

    - by JasonMHirst
    Can someone shed some light on the problem please: I have the following: $(document).ready(function () { $("#txtFirstContact").autocomplete({url:'http://localhost:7970/Home/FindSurname' }); }); On my Asp.Net page. The http request is a function on an MVC Controller and that code is here: Function FindSurname(ByVal surname As String, ByVal count As Integer) Dim sqlConnection As New SqlClient.SqlConnection sqlConnection.ConnectionString = My.Settings.sqlConnection Dim sqlCommand As New SqlClient.SqlCommand sqlCommand.CommandText = "SELECT ConSName FROM tblContact WHERE ConSName LIKE '" & surname & "%'" sqlCommand.Connection = sqlConnection Dim ds As New DataSet Dim da As New SqlClient.SqlDataAdapter(sqlCommand) da.Fill(ds, "Contact") sqlConnection.Close() Dim contactsArray As New List(Of String) For Each dr As DataRow In ds.Tables("Contact").Rows contactsArray.Add(dr.Item("ConSName")) Next Return Json(contactsArray, JsonRequestBehavior.AllowGet) End Function As far as I'm aware, the Controller is returning JSON data, however I don't know if the Function Parameters are correct, or indeed if the format returned is interprettable by the AutoComplete plugin. If anyone can assist in the matter I'd really appreciate it.

    Read the article

  • Make Directory.GetFiles() ignore protected folders

    - by Kryptic
    Hello Everyone, I'm using the Directory.GetFiles() method to get a list of files to operate on. This method throws an UnauthorizedAccessException for example when trying to access a protected folder. I would like it to simply skip over such folders and continue. How can I accomplish this with either Directory.GetFiles (preferably) or another method? Update: Here is the code that throws the exception. I am asking the user to select a directory and then retrieving the list of files. I commented out the code (so this is now whole method) that iterates through the files and the problem still occurs. The exception is thrown on the Directory.GetFiles() line. FolderBrowserDialog fbd = new FolderBrowserDialog(); DialogResult dr = fbd.ShowDialog(); if (dr == System.Windows.Forms.DialogResult.Cancel) return; string directory = fbd.SelectedPath; string[] files = Directory.GetFiles(directory, "*.html", SearchOption.AllDirectories);
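
    For illustration, a common workaround (a sketch, not the only approach): recurse manually instead of using SearchOption.AllDirectories, so an UnauthorizedAccessException in one folder only skips that folder instead of aborting the whole call.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        static IEnumerable<string> GetFilesSafe(string root, string pattern)
        {
            var pending = new Stack<string>();
            pending.Push(root);

            while (pending.Count > 0)
            {
                string dir = pending.Pop();

                string[] files = null;
                try
                {
                    files = Directory.GetFiles(dir, pattern);
                    foreach (string sub in Directory.GetDirectories(dir))
                        pending.Push(sub);
                }
                catch (UnauthorizedAccessException) { /* skip protected folder */ }
                catch (DirectoryNotFoundException) { /* skip folder removed mid-scan */ }

                if (files != null)
                    foreach (string f in files)
                        yield return f;
            }
        }

        // With the dialog code from the question:
        // string[] files = GetFilesSafe(fbd.SelectedPath, "*.html").ToArray();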

    Read the article

  • C# - Getting record from a row using DataRow

    - by pinkcupcake
    I'm trying to get the record of a row using DataRow. Here's what I've done so far: uID = int.Parse(Request.QueryString["id"]); PhotoDataSetTableAdapters.MembersTableAdapter mem = new PhotoDataSetTableAdapters.MembersTableAdapter(); PhotoDataSet.MembersDataTable memTable = mem.GetMemberByID(uID); DataRow[] dr = memTable.Select("userID = uID"); string uName = dr["username"].ToString(); Then I got the error: Cannot implicitly convert type 'string' to 'int' The error points to "username". I don't know what's wrong, because I'm just trying to assign a string value to a string variable. Can anyone figure out the reason for the error? Please help, and thanks.
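
    For illustration, a sketch of two likely culprits in the snippet above (based only on the posted code): the filter string passes the literal text "uID" instead of its value, and Select() returns an array of rows, so the row itself is dr[0] - indexing the array with a string is what produces the 'string' to 'int' conversion error.

        DataRow[] rows = memTable.Select("userID = " + uID);  // build the filter from the value
        if (rows.Length > 0)
        {
            string uName = rows[0]["username"].ToString();    // first row, then the column
        }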

    Read the article

  • invalid postback event instead of dropdown to datagrid

    - by rima
    I am faced with a strange situation. I created a page which has some values; I set these values and I control my postback events as well. The problem happens when I change a component's index (e.g. reselect a combobox which is not inside my datagrid): then, I don't know why, without my page calling Page_Load it goes to the create-a-new-row-in-grid function and all of my parameters are null! I just get a null exception. So, in other words, to explain the situation: when I load my page I initialize some parameters, and then everything works fine. When I change the selected item of my combo box, the page is supposed to run the function related to that combo box and call Page_Load, but it does not go there; it goes to the RowCreated function instead. I am illustrating part of my page below. Please help me, because I am not receiving any error except the null exception, and it triggers the wrong event, which seems very complicated to me. public partial class W_CM_FRM_02 : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (Page.IsPostBack && !loginFail) return; InitializeItems(); } } private void InitializeItems() { cols = new string[] { "v_classification_code", "v_classification_name" }; arrlstCMMM_CLASSIFICATION = (ArrayList)db.Select(cols, "CMMM_CLASSIFICATION", "v_classification_code <> 'N'", " ORDER BY v_classification_name"); } } protected void DGV_RFA_DETAILS_RowCreated(object sender, GridViewRowEventArgs e) { //db = (Database)Session["oCon"]; foreach (DataRow dr in arrlstCMMM_CLASSIFICATION) ((DropDownList)DGV_RFA_DETAILS.Rows[index].Cells[4].FindControl("OV_RFA_CLASSIFICATION")).Items.Add(new ListItem(dr["v_classification_name"].ToString(), dr["v_classification_code"].ToString())); } protected void V_CUSTOMER_SelectedIndexChanged(object sender, EventArgs e) { if (V_CUSTOMER.SelectedValue == "xxx" || V_CUSTOMER.SelectedValue == "ddd") V_IMPACTED_FUNCTIONS.Enabled = true; } } my form: <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="W_CM_FRM_02.aspx.cs" Inherits="W_CM_FRM_02" Title="W_CM_FRM_02" enableeventvalidation="false" EnableViewState="true"%> <td>Project name*</td> <td><asp:DropDownList ID="V_CUSTOMER" runat="server" AutoPostBack="True" onselectedindexchanged="V_CUSTOMER_SelectedIndexChanged" /></td> <td colspan = "8"> <asp:GridView ID="DGV_RFA_DETAILS" runat="server" ShowFooter="True" AutoGenerateColumns="False" CellPadding="1" ForeColor="#333333" GridLines="None" OnRowDeleting="grvRFADetails_RowDeleting" Width="100%" Style="text-align: left" onrowcreated="DGV_RFA_DETAILS_RowCreated"> <RowStyle BackColor="#FFFBD6" ForeColor="#333333" /> <Columns> <asp:BoundField DataField="ON_RowNumber" HeaderText="SNo" /> <asp:TemplateField HeaderText="RFA/RAD/Ticket No*"> <ItemTemplate> <asp:TextBox ID="OV_RFA_NO" runat="server" Width="120"></asp:TextBox> </ItemTemplate> </asp:TemplateField>

    Read the article

  • Problem with skipping empty cells while importing data from .xlsx file in asp.net c# application

    - by Eedoh
    Hi to all. I have a problem with reading .xlsx files in asp.net mvc2.0 application, using c#. Problem occurs when reading empty cell from .xlsx file. My code simply skips this cell and reads the next one. For example, if the contents of .xlsx file are: FirstName LastName Age John 36 They will be read as: FirstName LastName Age John 36 Here's the code that does the reading. private string GetValue(Cell cell, SharedStringTablePart stringTablePart) { if (cell.ChildElements.Count == 0) return string.Empty; //get cell value string value = cell.ElementAt(0).InnerText;//CellValue.InnerText; //Look up real value from shared string table if ((cell.DataType != null) && (cell.DataType == CellValues.SharedString)) value = stringTablePart.SharedStringTable.ChildElements[Int32.Parse(value)].InnerText; return value; } private DataTable ExtractExcelSheetValuesToDataTable(string xlsxFilePath, string sheetName) { DataTable dt = new DataTable(); using (SpreadsheetDocument myWorkbook = SpreadsheetDocument.Open(xlsxFilePath, true)) { //Access the main Workbook part, which contains data WorkbookPart workbookPart = myWorkbook.WorkbookPart; WorksheetPart worksheetPart = null; if (!string.IsNullOrEmpty(sheetName)) { Sheet ss = workbookPart.Workbook.Descendants<Sheet>().Where(s => s.Name == sheetName).SingleOrDefault<Sheet>(); worksheetPart = (WorksheetPart)workbookPart.GetPartById(ss.Id); } else { worksheetPart = workbookPart.WorksheetParts.FirstOrDefault(); } SharedStringTablePart stringTablePart = workbookPart.SharedStringTablePart; if (worksheetPart != null) { Row lastRow = worksheetPart.Worksheet.Descendants<Row>().LastOrDefault(); Row firstRow = worksheetPart.Worksheet.Descendants<Row>().FirstOrDefault(); if (firstRow != null) { foreach (Cell c in firstRow.ChildElements) { string value = GetValue(c, stringTablePart); dt.Columns.Add(value); } } if (lastRow != null) { for (int i = 2; i <= lastRow.RowIndex; i++) { DataRow dr = dt.NewRow(); bool empty = true; Row row = worksheetPart.Worksheet.Descendants<Row>().Where(r => i == r.RowIndex).FirstOrDefault(); int j = 0; if (row != null) { foreach (Cell c in row.ChildElements) { //Get cell value string value = GetValue(c, stringTablePart); if (!string.IsNullOrEmpty(value) && value != "") empty = false; dr[j] = value; j++; if (j == dt.Columns.Count) break; } if (empty) break; dt.Rows.Add(dr); } } } } } return dt; }
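
    For illustration, one common fix (a sketch, not the poster's final code): in the .xlsx format an empty cell is simply absent from its row, so instead of filling dr[j] with a running counter, derive the target column from each cell's CellReference (requires System.Linq and the OpenXml Spreadsheet namespace already used above).

        // "B5" -> 1, "AB12" -> 27 ... zero-based column index from a cell reference.
        private static int GetColumnIndex(Cell cell)
        {
            string letters = new string(cell.CellReference.Value
                                            .TakeWhile(char.IsLetter).ToArray());
            int index = 0;
            foreach (char c in letters.ToUpperInvariant())
                index = index * 26 + (c - 'A' + 1);
            return index - 1;
        }

        // Inside the row loop, instead of "dr[j] = value; j++;":
        //   int col = GetColumnIndex(c);
        //   if (col < dt.Columns.Count) dr[col] = GetValue(c, stringTablePart);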

    Read the article

  • Building a DataTable in C# with one column at a time

    - by Awaken
    I am trying to build a Retirement Calculator as a chance to make something useful and learn C# better. Currently, I am trying to build a DataTable with a dynamic amount of rows/columns. For context, I ask the user for some inputs regarding salary, % of salary being invested, and expected ROI. I have some other stuff too, but it isn't really part of the issue I am having. Because the number of columns I need for the DataTable is unknown until the formula is run (while+for loop completes), I am having trouble doing things as DataColumns. Hopefully the code below will help. I am really trying to build the table one column at a time. I realize I could reverse it to build it one row at a time because the Years before retirement (yearsRetire) is known, but I would prefer not to and I want to learn more. Sorry for the indentation and commenting. I tried to leave some of my commented coding attempts in there. Thanks for any help. public double calcROI() { double testROI = 0.00; double tempRetireAmount = 0; double adjustRetire = goalAmount * (1 + (Math.Pow(inflation,yearsRetire))); // Loop through ROI values until the calculated retire amount with the test ROI // is greater than the target amount adjusted for inflation while (tempRetireAmount < adjustRetire) { //Increment ROI by 1% per while iteration testROI += .01; //Make a new Column to hold the values for ROI for this while iteration //dtMain.Columns.Add(Convert.ToString(testROI)); //DataColumn tempdc = new DataColumn(Convert.ToString(testROI)); //Loop through the number of years entered by user and see the amount //at Retirement with current ROI for (int i = 0; i < yearsRetire; i++) { //Main formula to calculate amount after i years tempRetireAmount = (tempRetireAmount + salary*savingsPct) * (1 + testROI); // Add value for this year/ROI to table/column //DataRow dr = .NewRow(); //dr tempRetireAmount; //tempdc[i] = tempRetireAmount; } //Need to add column of data to my Main DataTable //dtMain.Rows.Add(dr); //dtMain.Columns.Add(tempdc); } return testROI; }
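
    For illustration, a sketch of how the column-at-a-time idea could be wired into calcROI(): seed dtMain with one row per year up front, then let each while-iteration add a column and each inner for-iteration fill one cell. The "Year" seed column and the formatting are assumptions added for the example; adding a DataColumn to a table that already has rows is allowed, and the existing rows simply hold empty values for the new column until they are filled.

        // Once, before the while loop:
        if (dtMain.Columns.Count == 0)
        {
            dtMain.Columns.Add("Year", typeof(int));
            for (int y = 1; y <= yearsRetire; y++)
                dtMain.Rows.Add(y);                       // rows exist before any ROI column
        }

        // Inside the while loop, once per candidate ROI:
        DataColumn roiCol = dtMain.Columns.Add(testROI.ToString("0%"), typeof(double));

        // Inside the for loop, once per year i:
        //   dtMain.Rows[i][roiCol] = tempRetireAmount;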

    Read the article

  • how to change the field while showing it on the grid?

    - by rockers
    Hello friends, I am new to ASP.NET MVC. I have a column called Indicator in the database that holds Y or N. This is the code I am using to get the field from the database: Indicator= !dr.IsDBNull(8) ? dr.GetString(8) : null, In my entity class I have the field public string Indicator { get; set; } and to assign the grid I am doing something like this: var result = (from e in A.List.AsQueryable() select new { Indicator= e.Indicator, }); return gridModel.ListGrid.DataBind(result); But on the grid I am seeing Y and N. Instead of that, if it returns Y I need to show YES, and if it is N I need to show NO. Where do I need to change this? Thanks
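
    For illustration, a small sketch (using the names from the post) of doing the mapping in the projection itself, so the grid receives the display text rather than the raw flag:

        var result = (from e in A.List.AsQueryable()
                      select new
                      {
                          Indicator = e.Indicator == "Y" ? "YES"
                                    : e.Indicator == "N" ? "NO"
                                    : e.Indicator          // fall back to the raw value
                      });
        return gridModel.ListGrid.DataBind(result);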

    Read the article

  • Table character encoding - exception in application

    - by zgnilec
    I have a code: CREATE TABLE IF NOT EXISTS Person ( name varchar(24) ... ) CHARACTER SET utf8 COLLATE utf8_polish_ci; This works OK in my application, but I read that if someone puts a string containing a character whose code is greater than 127 into the name field, the database will use 2 bytes (or more) to store that character. So I thought I would change the character set to utf16: CHARACTER SET utf16 COLLATE utf16_polish_ci; But now when I run my application, an exception appears: KeyNotFoundException. It appears exactly at these instructions: MySqlCommand komenda = baza.Polaczenie.CreateCommand (); komenda.CommandText = zapytanie; MySqlDataReader dr = komenda.ExecuteReader (); // HERE, at the ExecuteReader method if (dr.Read ()) ... 1) Has anyone had a similar problem? 2) Any idea how to always use 2 bytes/char in a database field?

    Read the article

  • Populating a WPF listbox with items from an SQL (SDF) database

    - by xplinux557
    I have been searching on how to do this for a very long time, and I have not managed to get a straight answer on the subject, so hopefully one of you StackOverflow users will be able to help me here. I have a WPF ListBox named CategoryList and a SDF database called ProgramsList.sdf (with two tables called CategoryList and ProgramsList). What I wish my program to do is get the category names from the CategoryList table and list them in the ListBox control called CategoryList. Here's the code that I tried, but it only caused my program to crash. SqlConnection myConnection = new SqlConnection("Data Source=" + AppDomain.CurrentDomain.BaseDirectory + "ProgramsList.sdf"); SqlDataReader myReader = null; myConnection.Open(); CategoryList.Items.Clear(); SqlDataReader dr = new SqlCommand("SELECT Name FROM CategoryList ORDER BY Name DESC", myConnection).ExecuteReader(); while (myReader.Read()) { CategoryList.Items.Add(dr.GetInt32(0)); } myConnection.Close(); Can anyone help me? Thanks in advance!
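
    For illustration, a sketch of the same idea with two likely fixes (based on the posted code, not a confirmed answer): an .sdf file is a SQL Server Compact database, so it needs the SqlCe* classes from System.Data.SqlServerCe rather than SqlConnection, and the loop should read from the reader that was actually opened, fetching Name as a string.

        using (var conn = new SqlCeConnection("Data Source=" +
                   AppDomain.CurrentDomain.BaseDirectory + "ProgramsList.sdf"))
        using (var cmd = new SqlCeCommand(
                   "SELECT Name FROM CategoryList ORDER BY Name DESC", conn))
        {
            conn.Open();
            CategoryList.Items.Clear();
            using (SqlCeDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                    CategoryList.Items.Add(dr.GetString(0));   // Name is text, not an int
            }
        }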

    Read the article

  • Why does GC.GetTotalMemory() report huge memory allocations?

    - by Seventh Element
    I have been playing around with GC.GetTotalMemory(). When I create a local variable of type Titles in the example below, the consumed amount of memory increases by 6276 bytes. What's going on here? class Program { enum Titles { Mr, Ms, Mrs, Dr }; static void Main(string[] args) { GetTotalMemory(); Titles t = Titles.Dr; GetTotalMemory(); } static void GetTotalMemory() { long bytes = GC.GetTotalMemory(true); Console.WriteLine("{0}", bytes); } }

    Read the article

  • Geek City: SQL Server 2014 In-Memory OLTP (“Hekaton”) Whitepaper for CTP2

    - by Kalen Delaney
    Last week at the PASS Summit in Charlotte, NC, the update of my whitepaper for CTP2 was released. The manager supervising the paper at Microsoft told me that David DeWitt himself said some very nice things about the technical quality of the paper, which was one of the most ego enhancing compliments I have ever gotten! Unfortunately, Dr. DeWitt said those things at his “After-the-keynote” session, not in the keynote that was recorded, so I only have my manager’s word for it. But I’ll take what I can...(read more)

    Read the article
