Search Results

Search found 9132 results on 366 pages for 'convert'.


  • What is best practice for converting XML to a Java object?

    - by newbie
    I need to convert XML data to Java objects. The idea is to fetch data via a web service (it doesn't use WSDL, just HTTP GET queries, so I cannot use a WSDL-based framework), and the answers come back as XML. What would be best practice for converting this XML data to objects?
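
    A minimal sketch of one common approach, assuming JAXB (in javax.xml.bind, bundled with the JDK up to Java 10); the Customer class and the XML shape are hypothetical stand-ins for the real payload:

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Unmarshaller;
        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement(name = "customer")
        class Customer {
            public String name;
            public int id;
        }

        public class XmlToObject {
            public static void main(String[] args) throws Exception {
                // In practice the XML string would come from the HTTP GET response.
                String xml = "<customer><name>Ann</name><id>7</id></customer>";
                Unmarshaller u = JAXBContext.newInstance(Customer.class).createUnmarshaller();
                Customer c = (Customer) u.unmarshal(new StringReader(xml));
                System.out.println(c.name + " / " + c.id);
            }
        }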

    Read the article

  • Converting upper-case string into title-case using Ruby

    - by wsb3383
    Hi, all. I'm trying to convert an all-uppercase string in Ruby into a lowercase one, but with each word's first character in upper case. Example: convert "MY STRING HERE" to "My String Here". I know I can use the .downcase method, but that would make everything lowercase ("my string here"). I'm scanning all lines in a file and applying this change, so is there a regular expression I can use in Ruby to achieve this? Thanks!
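
    A minimal sketch of one common approach (a regex with a block, as asked): downcase the whole line first, then upcase the first letter of each word:

        line = "MY STRING HERE"
        title = line.downcase.gsub(/\b\w/) { |ch| ch.upcase }
        puts title   # => "My String Here"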

    Read the article

  • Converting a string into a double

    - by Koning Baard
    I am trying to convert a string (const char* argv[]) to a double-precision floating-point number: int main(const int argc, const char *argv[]) { int i; double numbers[argc - 1]; for(i = 1; i < argc; i += 1) { /* -- Convert each argv into a double and put it in `numbers` */ } /* ... */ return 0; } Can anyone help me? Thanks
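
    A minimal sketch using strtod from <stdlib.h>, with only basic error handling (the ternary guards against a zero-length array when no arguments are given):

        #include <stdio.h>
        #include <stdlib.h>

        int main(const int argc, const char *argv[])
        {
            double numbers[argc > 1 ? argc - 1 : 1];
            for (int i = 1; i < argc; i += 1) {
                char *end;
                numbers[i - 1] = strtod(argv[i], &end);
                if (end == argv[i]) {
                    fprintf(stderr, "not a number: %s\n", argv[i]);
                    return 1;
                }
                printf("numbers[%d] = %f\n", i - 1, numbers[i - 1]);
            }
            return 0;
        }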

    Read the article

  • SQL 2008 CASE statement aggravation...

    - by Brad
    Why does this fail: DECLARE @DATE VARCHAR(50) = 'dasf' SELECT CASE WHEN ISDATE(@DATE) = 1 THEN CONVERT(date,@DATE) ELSE @DATE END Msg 241, Level 16, State 1, Line 2 Conversion failed when converting date and/or time from character string. Why is it trying to convert 'dasf' to date when ISDATE(@DATE) = 1 clearly evaluates to false? If I do: SELECT ISDATE(@DATE) the return value is 0.
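
    A sketch of one common workaround, with the likely explanation: a CASE expression must return a single type, and date outranks varchar in SQL Server's data type precedence, so the value from the ELSE branch is converted to date as well, even when that is the branch actually chosen. Returning varchar from both branches avoids the implicit conversion:

        DECLARE @DATE VARCHAR(50) = 'dasf'
        SELECT CASE WHEN ISDATE(@DATE) = 1
                    THEN CONVERT(VARCHAR(50), CONVERT(date, @DATE), 120)
                    ELSE @DATE
               END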

    Read the article

  • SQL Server varchar to datetime

    - by Dezigo
    I have a varchar(14) field = 20090226115644. I need to convert it to 2009-02-26 11:56:44 (datetime format). My idea was to use CAST and CONVERT, but I always get the error "Conversion failed when converting datetime from character string." I made this, but I don't like it: SELECT SUBSTRING(move,1,4) + '-' + SUBSTRING(move,5,2) + '-' + SUBSTRING(move,7,2) + ' ' + SUBSTRING(move,9,2) + ':' + SUBSTRING(move,11,2) + ':' + SUBSTRING(move,13,2) as new FROM [Test].[dbo].[container_events] where move IS not null Result: 2009-02-26 11:56:44
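
    A sketch of one common alternative, using the same table and column: the yyyymmdd prefix casts to datetime directly, and the time part only needs two colons inserted before it, too, can be cast and added on:

        SELECT CAST(LEFT(move, 8) AS datetime)
             + CAST(STUFF(STUFF(RIGHT(move, 6), 3, 0, ':'), 6, 0, ':') AS datetime) AS new
        FROM [Test].[dbo].[container_events]
        WHERE move IS NOT NULL
        -- Result: 2009-02-26 11:56:44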

    Read the article

  • ResultSet to List

    - by Gnaniyar Zubair
    I want to convert my ResultSet to a List in my JSP page and display all the values. This is my query: SELECT userId, userName FROM user; I have executed it using a PreparedStatement and got the ResultSet. But how do I convert it to a List and display the result like this: userID userName ------------------ 1001 user-X 1006 user-Y 1007 user-Z
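
    A minimal sketch of one common approach, assuming an already-open Connection: copy each row into a Map while the ResultSet is open, then iterate the returned List in the JSP:

        import java.sql.*;
        import java.util.*;

        public class UserDao {
            public static List<Map<String, Object>> loadUsers(Connection conn) throws SQLException {
                List<Map<String, Object>> users = new ArrayList<Map<String, Object>>();
                try (PreparedStatement ps = conn.prepareStatement("SELECT userId, userName FROM user");
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        Map<String, Object> row = new HashMap<String, Object>();
                        row.put("userID", rs.getInt("userId"));
                        row.put("userName", rs.getString("userName"));
                        users.add(row);
                    }
                }
                return users;
            }
        }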

    Read the article

  • C# float to decimal conversion

    - by Adrian4B
    Any smart way to convert a float like this: float f = 711989.98f; into a decimal (or double) without losing precision? I've tried: decimal d = (decimal)f; decimal d1 = (decimal)(Math.Round(f,2)); decimal d2 = Convert.ToDecimal(f);
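
    A short sketch illustrating the underlying issue: float carries only about seven significant digits, so 711989.98f is already approximate before any conversion takes place; declaring the literal as decimal (or double) keeps the value exact:

        using System;

        class Program
        {
            static void Main()
            {
                float f = 711989.98f;
                Console.WriteLine((decimal)f);   // precision was already lost at assignment
                decimal d = 711989.98m;          // exact when declared as decimal
                Console.WriteLine(d);            // 711989.98
            }
        }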

    Read the article

  • ActionScript Negating a Number

    - by TheDarkIn1978
    I'd like to negate a number, and I'd like to know if there's a built-in method that will convert a negative number to a positive OR a positive into a negative, depending on the number. I know about Math.abs(), but that only seems to convert negative into positive. Is there a method that will do both?
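
    For reference, a minimal sketch: the unary minus operator flips the sign in either direction, so no library call is needed:

        var a:Number = 5;
        var b:Number = -3;
        trace(-a); // -5
        trace(-b); // 3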

    Read the article

  • Converting an existing Flash-with-CakePHP website into a mobile version

    - by jpramod4u
    How can we convert an existing Flash site, built on the CakePHP framework, into a mobile version? We think that HTML, CSS, PHP, and JavaScript may work on all mobiles, but we don't know exactly. Please tell us the possible ways to develop the existing site into a mobile version. We also need to detect which browser the request is coming from, whether a mobile browser or a PC browser. The existing site link is: This site convert into mobile version
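
    A sketch of one common heuristic for the browser-detection part (the /mobile/ entry point is a hypothetical name): inspect the User-Agent header for typical mobile tokens:

        <?php
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $isMobile = preg_match('/Mobile|Android|iPhone|iPad|Opera Mini|BlackBerry/i', $ua);
        if ($isMobile) {
            header('Location: /mobile/');  // redirect to the mobile entry point
            exit;
        }
        ?>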

    Read the article

  • Retrieving the first digit of a number

    - by Michoel
    Hi, I am just learning Java and am trying to get my program to retrieve the first digit of a number - for example, 543 should return 5, etc. I thought to convert it to a string, but I am not sure how I can convert it back. Thanks for any help. int number = 534; String numberString = Integer.toString(number); char firstLetterChar = numberString.charAt(0); int firstDigit = ????
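
    A minimal sketch completing the post's approach, plus an arithmetic alternative that avoids strings entirely:

        public class FirstDigit {
            public static void main(String[] args) {
                int number = 534;
                String numberString = Integer.toString(number);
                int firstDigit = Character.getNumericValue(numberString.charAt(0));
                System.out.println(firstDigit); // 5

                // Arithmetic alternative: strip digits until one remains.
                int n = Math.abs(number);
                while (n >= 10) {
                    n /= 10;
                }
                System.out.println(n); // 5
            }
        }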

    Read the article

  • Date Time conversion

    - by Zerotoinfinite
    Hi all, I am getting a datetime such as 1/2/2010 11:29:30, which I am displaying in a GridView, and I want to convert it to "Feb 1, 2010 at 11:29". Please let me know how to convert it like this. Thanks in advance. Note: I am using ASP.NET with C#.
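
    A minimal sketch, assuming the source string is day-first (the post reads 1/2/2010 as Feb 1); adjust the parse pattern if it is month-first:

        using System;
        using System.Globalization;

        class Program
        {
            static void Main()
            {
                DateTime dt = DateTime.ParseExact("1/2/2010 11:29:30",
                                                  "d/M/yyyy H:mm:ss",
                                                  CultureInfo.InvariantCulture);
                Console.WriteLine(dt.ToString("MMM d, yyyy 'at' HH:mm",
                                              CultureInfo.InvariantCulture));
                // Feb 1, 2010 at 11:29
            }
        }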

    Read the article

  • Is there any way that we can get a label value into a SQL statement?

    - by Pradeep
    SELECT COUNT(*) AS Expr1 FROM Book INNER JOIN Temp_Order ON Book.Book_ID = Temp_Order.Book_ID WHERE (Temp_Order.User_ID = 25) AND (CONVERT (nvarchar, Temp_Order.OrderDate, 111) = CONVERT (nvarchar, GETDATE(), 111)) Here I want the User_ID to come from a label.Text. This SQL statement is in a DataView, so the wizard does not accept text box values or anything similar. Can someone please help me solve this?
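
    A sketch of one common approach outside the wizard, assuming System.Data.SqlClient, an open SqlConnection named connection, and a hypothetical label lblUserId: pass the label's text as a query parameter instead of hard-coding the User_ID:

        using (SqlCommand cmd = new SqlCommand(
            "SELECT COUNT(*) AS Expr1 FROM Book " +
            "INNER JOIN Temp_Order ON Book.Book_ID = Temp_Order.Book_ID " +
            "WHERE Temp_Order.User_ID = @UserId " +
            "AND CONVERT(nvarchar, Temp_Order.OrderDate, 111) = CONVERT(nvarchar, GETDATE(), 111)",
            connection))
        {
            cmd.Parameters.AddWithValue("@UserId", int.Parse(lblUserId.Text));
            int count = (int)cmd.ExecuteScalar();
        }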

    Read the article

  • Please help translate this into LINQ to EF

    - by user3487644
    StringBuilder sb = new StringBuilder(); sb.AppendLine("SELECT"); sb.AppendLine(String.Format(" (SELECT TOP 1 CAST(ProspectID AS VARCHAR(5)) FROM Lead_Import_Fail Where ProspectID < {0} AND ProspectFullName = '{1}')", Convert.ToInt64(lead.LeadID), lead.Name)); sb.AppendLine(String.Format(", (SELECT TOP 1 CAST(ProspectID AS VARCHAR(5)) FROM Lead_Import_Fail Where ProspectID < {0} AND ProspectNRICPassport = '{1}')", Convert.ToInt64(lead.LeadID), lead.NRIC)); Thanks in advance.
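
    A hedged sketch of an equivalent LINQ-to-Entities query (the entity set and property names are inferred from the SQL; db stands for the context). EF6 translates ToString() on numeric columns; older versions need SqlFunctions.StringConvert instead:

        long leadId = Convert.ToInt64(lead.LeadID);

        string byName = (from f in db.Lead_Import_Fail
                         where f.ProspectID < leadId && f.ProspectFullName == lead.Name
                         select f.ProspectID.ToString()).FirstOrDefault();

        string byNric = (from f in db.Lead_Import_Fail
                         where f.ProspectID < leadId && f.ProspectNRICPassport == lead.NRIC
                         select f.ProspectID.ToString()).FirstOrDefault();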

    Read the article

  • Converting a timestamp to nanoseconds

    - by kuki
    I have a certain date and time value, say 28-3-2012 (date) - 10:36:45 (time). I wish to convert this whole timestamp to nanoseconds, with nanosecond precision. The user would input the time and date as shown, but internally I have to be accurate up to nanoseconds and convert the whole thing to nanoseconds to form a unique key assigned to a specific object created at that particular time. Could someone please help me with this?
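
    A minimal sketch (the post names no language; Java shown). A wall-clock input only carries second precision, so a nanoTime-based component is mixed in to keep keys created within the same second distinct:

        import java.time.LocalDateTime;
        import java.time.ZoneOffset;

        public class NanoKey {
            public static void main(String[] args) {
                LocalDateTime t = LocalDateTime.of(2012, 3, 28, 10, 36, 45);
                long epochNanos = t.toEpochSecond(ZoneOffset.UTC) * 1_000_000_000L;
                long key = epochNanos + (System.nanoTime() % 1_000_000_000L); // per-second disambiguator
                System.out.println(key);
            }
        }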

    Read the article

  • Get XML from Server for Use on Windows Phone

    - by psheriff
    When working with mobile devices you always need to take into account bandwidth usage and power consumption. If you are constantly connecting to a server to retrieve data for an input screen, then you might think about moving some of that data down to the phone and cache the data on the phone. An example would be a static list of US State Codes that you are asking the user to select from. Since this is data that does not change very often, this is one set of data that would be great to cache on the phone. Since the Windows Phone does not have an embedded database, you can just use an XML string stored in Isolated Storage. Of course, then you need to figure out how to get data down to the phone. You can either ship it with the application, or connect and retrieve the data from your server one time and thereafter cache it and retrieve it from the cache. In this blog post you will see how to create a WCF service to retrieve data from a Product table in a database and send that data as XML to the phone and store it in Isolated Storage. You will then read that data from Isolated Storage using LINQ to XML and display it in a ListBox. Step 1: Create a Windows Phone Application The first step is to create a Windows Phone application called WP_GetXmlFromDataSet (or whatever you want to call it). On the MainPage.xaml add the following XAML within the “ContentPanel” grid: <StackPanel>  <Button Name="btnGetXml"          Content="Get XML"          Click="btnGetXml_Click" />  <Button Name="btnRead"          Content="Read XML"          IsEnabled="False"          Click="btnRead_Click" />  <ListBox Name="lstData"            Height="430"            ItemsSource="{Binding}"            DisplayMemberPath="ProductName" /></StackPanel> Now it is time to create the WCF Service Application that you will call to get the XML from a table in a SQL Server database. Step 2: Create a WCF Service Application Add a new project to your solution called WP_GetXmlFromDataSet.Services. Delete the IService1.* and Service1.* files and the App_Data folder, as you don’t generally need these items. Add a new WCF Service class called ProductService. In the IProductService class modify the void DoWork() method with the following code: [OperationContract]string GetProductXml(); Open the code behind in the ProductService.svc and create the GetProductXml() method. This method (shown below) will connect up to a database and retrieve data from a Product table. public string GetProductXml(){  string ret = string.Empty;  string sql = string.Empty;  SqlDataAdapter da;  DataSet ds = new DataSet();   sql = "SELECT ProductId, ProductName,";  sql += " IntroductionDate, Price";  sql += " FROM Product";   da = new SqlDataAdapter(sql,    ConfigurationManager.ConnectionStrings["Sandbox"].ConnectionString);   da.Fill(ds);   // Create Attribute based XML  foreach (DataColumn col in ds.Tables[0].Columns)  {    col.ColumnMapping = MappingType.Attribute;  }   ds.DataSetName = "Products";  ds.Tables[0].TableName = "Product";  ret = ds.GetXml();   return ret;} After retrieving the data from the Product table using a DataSet, you will want to set each column’s ColumnMapping property to Attribute. Using attribute based XML will make the data transferred across the wire a little smaller. You then set the DataSetName property to the top-level element name you want to assign to the XML. You then set the TableName property on the DataTable to the name you want each element to be in your XML. 
The last thing you need to do is to call the GetXml() method on the DataSet object which will return an XML string of the data in your DataSet object. This is the value that you will return from the service call. The XML that is returned from the above call looks like the following: <Products>  <Product ProductId="1"           ProductName="PDSA .NET Productivity Framework"           IntroductionDate="9/3/2010"           Price="5000" />  <Product ProductId="3"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="7/1/2010"           Price="599.00" />  ...  ...  ... </Products> The GetProductXml() method uses a connection string from the Web.Config file, so add a <connectionStrings> element to the Web.Config file in your WCF Service application. Modify the settings shown below as needed for your server and database name. <connectionStrings>  <add name="Sandbox"        connectionString="Server=Localhost;Database=Sandbox;                         Integrated Security=Yes"/></connectionStrings> The Product Table You will need a Product table that you can read data from. I used the following structure for my product table. Add any data you want to this table after you create it in your database. CREATE TABLE Product(  ProductId int PRIMARY KEY IDENTITY(1,1) NOT NULL,  ProductName varchar(50) NOT NULL,  IntroductionDate datetime NULL,  Price money NULL) Step 3: Connect to WCF Service from Windows Phone Application Back in your Windows Phone application you will now need to add a Service Reference to the WCF Service application you just created. Right-mouse click on the Windows Phone Project and choose Add Service Reference… from the context menu. Click on the Discover button. In the Namespace text box enter “ProductServiceRefrence”, then click the OK button. If you entered everything correctly, Visual Studio will generate some code that allows you to connect to your Product service. On the MainPage.xaml designer window double click on the Get XML button to generate the Click event procedure for this button. In the Click event procedure make a call to a GetXmlFromServer() method. This method will also need a “Completed” event procedure to be written since all communication with a WCF Service from Windows Phone must be asynchronous.  Write these two methods as follows: private const string KEY_NAME = "ProductData"; private void GetXmlFromServer(){  ProductServiceClient client = new ProductServiceClient();   client.GetProductXmlCompleted += new     EventHandler<GetProductXmlCompletedEventArgs>      (client_GetProductXmlCompleted);   client.GetProductXmlAsync();  client.CloseAsync();} void client_GetProductXmlCompleted(object sender,                                   GetProductXmlCompletedEventArgs e){  // Store XML data in Isolated Storage  IsolatedStorageSettings.ApplicationSettings[KEY_NAME] = e.Result;   btnRead.IsEnabled = true;} As you can see, this is a fairly standard call to a WCF Service. In the Completed event you get the Result from the event argument, which is the XML, and store it into Isolated Storage using the IsolatedStorageSettings.ApplicationSettings class. Notice the constant that I added to specify the name of the key. You will use this constant later to read the data from Isolated Storage. Step 4: Create a Product Class Even though you stored XML data into Isolated Storage when you read that data out you will want to convert each element in the XML file into an actual Product object. 
This means that you need to create a Product class in your Windows Phone application. Add a Product class to your project that looks like the code below: public class Product{  public string ProductName{ get; set; }  public int ProductId{ get; set; }  public DateTime IntroductionDate{ get; set; }  public decimal Price{ get; set; }} Step 5: Read Settings from Isolated Storage Now that you have the XML data stored in Isolated Storage, it is time to use it. Go back to the MainPage.xaml design view and double click on the Read XML button to generate the Click event procedure. From the Click event procedure call a method named ReadProductXml().Create this method as shown below: private void ReadProductXml(){  XElement xElem = null;   if (IsolatedStorageSettings.ApplicationSettings.Contains(KEY_NAME))  {    xElem = XElement.Parse(     IsolatedStorageSettings.ApplicationSettings[KEY_NAME].ToString());     // Create a list of Product objects    var products =         from prod in xElem.Descendants("Product")        orderby prod.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(prod.Attribute("ProductId").Value),          ProductName = prod.Attribute("ProductName").Value,          IntroductionDate =             Convert.ToDateTime(prod.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(prod.Attribute("Price").Value)        };     lstData.DataContext = products;  }} The ReadProductXml() method checks to make sure that the key name that you saved your XML as exists in Isolated Storage prior to trying to open it. If the key name exists, then you retrieve the value as a string. Use the XElement’s Parse method to convert the XML string to a XElement object. LINQ to XML is used to iterate over each element in the XElement object and create a new Product object from each attribute in your XML file. The LINQ to XML code also orders the XML data by the ProductName. After the LINQ to XML code runs you end up with an IEnumerable collection of Product objects in the variable named “products”. You assign this collection of product data to the DataContext of the ListBox you created in XAML. The DisplayMemberPath property of the ListBox is set to “ProductName” so it will now display the product name for each row in your products collection. Summary In this article you learned how to retrieve an XML string from a table in a database, return that string across a WCF Service and store it into Isolated Storage on your Windows Phone. You then used LINQ to XML to create a collection of Product objects from the data stored and display that data in a Windows Phone list box. This same technique can be used in Silverlight or WPF applications too. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get XML From Server for Use on Windows Phone" from the drop-down. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.  

    Read the article

  • How to edit an item in a ListBox populated from a .csv file?

    - by Shuvo
    I am working in a project where my application can open a .csv file and read data from it. The .csv file contains the latitude, longitude of places. The application reads data from the file shows it in a static map and display icon on the right places. The application can open multiple file at a time and it opens with a new tab every time. But I am having trouble in couple of cases When I am trying to add a new point to the .csv file opened. I am able to write new point on the same file instead adding a new point data to the existing its replacing others and writing the new point only. I cannot use selectedIndexChange event to perform edit option on the listbox and then save the file. Any direction would be great. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; namespace CourseworkExample { public partial class Form1 : Form { public GPSDataPoint gpsdp; List<GPSDataPoint> data; List<PictureBox> pictures; List<TabPage> tabs; public static int pn = 0; private TabPage currentComponent; private Bitmap bmp1; string[] symbols = { "hospital", "university" }; Image[] symbolImages; ListBox lb = new ListBox(); string name = ""; string path = ""; public Form1() { InitializeComponent(); data = new List<GPSDataPoint>(); pictures = new List<PictureBox>(); tabs = new List<TabPage>(); symbolImages = new Image[symbols.Length]; for (int i = 0; i < 2; i++) { string location = "data/" + symbols[i] + ".png"; symbolImages[i] = Image.FromFile(location); } } private void openToolStripMenuItem_Click(object sender, EventArgs e) { FileDialog ofd = new OpenFileDialog(); string filter = "CSV File (*.csv)|*.csv"; ofd.Filter = filter; DialogResult dr = ofd.ShowDialog(); if (dr.Equals(DialogResult.OK)) { int i = ofd.FileName.LastIndexOf("\\"); name = ofd.FileName; path = ofd.FileName; if (i > 0) { name = ofd.FileName.Substring(i + 1); path = ofd.FileName.Substring(0, i + 1); } TextReader input = new StreamReader(ofd.FileName); string mapName = input.ReadLine(); GPSDataPoint gpsD = new GPSDataPoint(); gpsD.setBounds(input.ReadLine()); string s; while ((s = input.ReadLine()) != null) { gpsD.addWaypoint(s); } input.Close(); TabPage tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; lb.Width = 300; int selectedindex = lb.SelectedIndex; lb.Items.Add(mapName); lb.Items.Add("Bounds"); lb.Items.Add(gpsD.Bounds[0] + " " + gpsD.Bounds[1] + " " + gpsD.Bounds[2] + " " + gpsD.Bounds[3]); lb.Items.Add("Waypoint"); foreach (WayPoint wp in gpsD.DataList) { lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); } tabPage.Controls.Add(lb); pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = name; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); tabControl1.Controls.Add(tabPage); tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = mapName; string location = path + mapName; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); PictureBox pb = new PictureBox(); pb.Name = "pictureBox" + pn; pb.Image = Image.FromFile(location); tabControl2.Controls.Add(tabPage); pb.Width = pb.Image.Width; 
pb.Height = pb.Image.Height; tabPage.Controls.Add(pb); currentComponent = tabPage; tabPage.Width = pb.Width; tabPage.Height = pb.Height; pn++; tabControl2.Width = pb.Width; tabControl2.Height = pb.Height; bmp1 = (Bitmap)pb.Image; int lx, ly; float realWidth = gpsD.Bounds[1] - gpsD.Bounds[3]; float imageW = pb.Image.Width; float dx = imageW * (gpsD.Bounds[1] - gpsD.getWayPoint(0).Longitude) / realWidth; float realHeight = gpsD.Bounds[0] - gpsD.Bounds[2]; float imageH = pb.Image.Height; float dy = imageH * (gpsD.Bounds[0] - gpsD.getWayPoint(0).Latitude) / realHeight; lx = (int)dx; ly = (int)dy; using (Graphics g = Graphics.FromImage(bmp1)) { Rectangle rect = new Rectangle(lx, ly, 20, 20); if (gpsD.getWayPoint(0).Sym.Equals("")) { g.DrawRectangle(new Pen(Color.Red), rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("hospital")) { g.DrawImage(symbolImages[0], rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("university")) { g.DrawImage(symbolImages[1], rect); } } } } pb.Image = bmp1; pb.Invalidate(); } } private void openToolStripMenuItem_Click_1(object sender, EventArgs e) { FileDialog ofd = new OpenFileDialog(); string filter = "CSV File (*.csv)|*.csv"; ofd.Filter = filter; DialogResult dr = ofd.ShowDialog(); if (dr.Equals(DialogResult.OK)) { int i = ofd.FileName.LastIndexOf("\\"); name = ofd.FileName; path = ofd.FileName; if (i > 0) { name = ofd.FileName.Substring(i + 1); path = ofd.FileName.Substring(0, i + 1); } TextReader input = new StreamReader(ofd.FileName); string mapName = input.ReadLine(); GPSDataPoint gpsD = new GPSDataPoint(); gpsD.setBounds(input.ReadLine()); string s; while ((s = input.ReadLine()) != null) { gpsD.addWaypoint(s); } input.Close(); TabPage tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; ListBox lb = new ListBox(); lb.Width = 300; lb.Items.Add(mapName); lb.Items.Add("Bounds"); lb.Items.Add(gpsD.Bounds[0] + " " + gpsD.Bounds[1] + " " + gpsD.Bounds[2] + " " + gpsD.Bounds[3]); lb.Items.Add("Waypoint"); foreach (WayPoint wp in gpsD.DataList) { lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); } tabPage.Controls.Add(lb); pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = name; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); tabControl1.Controls.Add(tabPage); tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = mapName; string location = path + mapName; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); PictureBox pb = new PictureBox(); pb.Name = "pictureBox" + pn; pb.Image = Image.FromFile(location); tabControl2.Controls.Add(tabPage); pb.Width = pb.Image.Width; pb.Height = pb.Image.Height; tabPage.Controls.Add(pb); currentComponent = tabPage; tabPage.Width = pb.Width; tabPage.Height = pb.Height; pn++; tabControl2.Width = pb.Width; tabControl2.Height = pb.Height; bmp1 = (Bitmap)pb.Image; int lx, ly; float realWidth = gpsD.Bounds[1] - gpsD.Bounds[3]; float imageW = pb.Image.Width; float dx = imageW * (gpsD.Bounds[1] - gpsD.getWayPoint(0).Longitude) / realWidth; float realHeight = gpsD.Bounds[0] - gpsD.Bounds[2]; float imageH = pb.Image.Height; float dy = imageH * (gpsD.Bounds[0] - gpsD.getWayPoint(0).Latitude) / realHeight; lx = (int)dx; ly = (int)dy; 
using (Graphics g = Graphics.FromImage(bmp1)) { Rectangle rect = new Rectangle(lx, ly, 20, 20); if (gpsD.getWayPoint(0).Sym.Equals("")) { g.DrawRectangle(new Pen(Color.Red), rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("hospital")) { g.DrawImage(symbolImages[0], rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("university")) { g.DrawImage(symbolImages[1], rect); } } } } pb.Image = bmp1; pb.Invalidate(); MessageBox.Show(data.ToString()); } } private void exitToolStripMenuItem_Click(object sender, EventArgs e) { this.Close(); } private void addBtn_Click(object sender, EventArgs e) { string wayName = nameTxtBox.Text; float wayLat = Convert.ToSingle(latTxtBox.Text); float wayLong = Convert.ToSingle(longTxtBox.Text); float wayEle = Convert.ToSingle(elevTxtBox.Text); WayPoint wp = new WayPoint(wayName, wayLat, wayLong, wayEle); GPSDataPoint gdp = new GPSDataPoint(); data = new List<GPSDataPoint>(); gdp.Add(wp); lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); lb.Refresh(); StreamWriter sr = new StreamWriter(name); sr.Write(lb); sr.Close(); DialogResult result = MessageBox.Show("Save in New File?","Save", MessageBoxButtons.YesNo); if (result == DialogResult.Yes) { SaveFileDialog saveDialog = new SaveFileDialog(); saveDialog.FileName = "default.csv"; DialogResult saveResult = saveDialog.ShowDialog(); if (saveResult == DialogResult.OK) { sr = new StreamWriter(saveDialog.FileName, true); sr.WriteLine(wayName + "," + wayLat + "," + wayLong + "," + wayEle); sr.Close(); } } else { // sr = new StreamWriter(name, true); // sr.WriteLine(wayName + "," + wayLat + "," + wayLong + "," + wayEle); sr.Close(); } MessageBox.Show(name + path); } } } GPSDataPoint.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace CourseworkExample { public class GPSDataPoint { private float[] bounds; private List<WayPoint> dataList; public GPSDataPoint() { dataList = new List<WayPoint>(); } internal void setBounds(string p) { string[] b = p.Split(','); bounds = new float[b.Length]; for (int i = 0; i < b.Length; i++) { bounds[i] = Convert.ToSingle(b[i]); } } public float[] Bounds { get { return bounds; } } internal void addWaypoint(string s) { WayPoint wp = new WayPoint(s); dataList.Add(wp); } public WayPoint getWayPoint(int i) { if (i < dataList.Count) { return dataList[i]; } else return null; } public List<WayPoint> DataList { get { return dataList; } } internal void Add(WayPoint wp) { dataList.Add(wp); } } } WayPoint.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace CourseworkExample { public class WayPoint { private string name; private float ele; private float latitude; private float longitude; private string sym; public WayPoint(string name, float latitude, float longitude, float elevation) { this.name = name; this.latitude = latitude; this.longitude = longitude; this.ele = elevation; } public WayPoint() { name = "no name"; ele = 3.5F; latitude = 3.5F; longitude = 0.0F; sym = "no symbol"; } public WayPoint(string s) { string[] bits = s.Split(','); name = bits[0]; longitude = Convert.ToSingle(bits[2]); latitude = Convert.ToSingle(bits[1]); if (bits.Length > 4) sym = bits[4]; else sym = ""; try { ele = Convert.ToSingle(bits[3]); } catch (Exception e) { ele = 0.0f; } } public float Longitude { get { return longitude; } set { longitude = value; } } public float Latitude { get { return latitude; } set { latitude = value; } } public float Ele { get { return ele; } set 
{ ele = value; } } public string Name { get { return name; } set { name = value; } } public string Sym { get { return sym; } set { sym = value; } } } } .csv file data birthplace.png 51.483788,-0.351906,51.460745,-0.302982 Born Here,51.473805,-0.32532,-,hospital Danced here,51,483805,-0.32532,-,hospital

    Read the article

  • Moving the swapfiles to a dedicated partition in Snow Leopard

    - by e.James
    I have been able to move Apple's virtual memory swapfiles to a dedicated partition on my hard drive up until now. The technique I have been using is described in a thread on forums.macosxhints.com. However, with the developer preview of Snow Leopard, this method no longer works. Does anyone know how it could be done with the new OS? Update: I have marked dblu's answer as accepted even though it didn't quite work because he gave excellent, detailed instructions and because his suggestion to use plutil ultimately pointed me in the right direction. The complete, working solution is posted here in the question because I don't have enough reputation to edit the accepted answer. Complete solution: 1. Open Terminal and make a backup copy of Apple's default dynamic_pager.plist: $ cd /System/Library/LaunchDaemons $ sudo cp com.apple.dynamic_pager.plist{,_bak} 2. Convert the plist from binary to plain XML: $ sudo plutil -convert xml1 com.apple.dynamic_pager.plist 3. Open the converted plist with your text editor of choice. (I use pico, see dblu's answer for an example using vim): $ sudo pico -w com.apple.dynamic_pager.plist It should look as follows: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs$ <plist version="1.0"> <dict> <key>EnableTransactions</key> <true/> <key>HopefullyExitsLast</key> <true/> <key>Label</key> <string>com.apple.dynamic_pager</string> <key>OnDemand</key> <false/> <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager</string> <string>-F</string> <string>/private/var/vm/swapfile</string> </array> </dict> </plist> 4. Change the ProgramArguments array (lines 13 through 18) so that it launches an intermediate shell script instead of launching dynamic_pager directly. See note #1 for details on why this is necessary. <key>ProgramArguments</key> <array> <string>/sbin/dynamic_pager_init</string> </array> 5. Save the plist, and return to the terminal prompt. Using pico, the commands would be: <ctrl+o> to save the file <enter> to accept the same filename (com.apple.dynamic_pager.plist) <ctrl+x> to exit 6. Convert the modified plist back to binary: $ sudo plutil -convert binary1 com.apple.dynamic_pager.plist 7. Create the intermediate shell script: $ cd /sbin $ sudo pico -w dynamic_pager_init The script should look as follows (my partition is called 'Swap', and I chose to put the swapfiles in a hidden directory on that partition, called '.vm' be sure that the directory you specify actually exists): Update: This version of the script makes use of wait4path as suggested by ZILjr: #!/bin/bash #launch Apple's dynamic_pager only when the swap volume is mounted echo "Waiting for Swap volume to mount"; wait4path /Volumes/Swap; echo "Launching dynamic pager on volume Swap"; /sbin/dynamic_pager -F /Volumes/Swap/.vm/swapfile; 8. Save and close dynamic_pager_init (same commands as step 5) 9. Modify permissions and ownership for dynamic_pager_init: $ sudo chmod a+x-w /sbin/dynamic_pager_init $ sudo chown root:wheel /sbin/dynamic_pager_init 10. Verify the permissions on dynamic_pager_init: $ ls -l dynamic_pager_init -r-xr-xr-x 1 root wheel 6 18 Sep 15:11 dynamic_pager_init 11. Restart your Mac. If you run into trouble, switch to verbose startup mode by holding down Command-v immediately after the startup chime. This will let you see all of the startup messages that appear during startup. If you run into even worse trouble (i.e. you never see the login screen), hold down Command-s instead. 
This will boot the computer in single-user mode (no graphical UI, just a command prompt) and allow you to restore the backup copy of com.apple.dynamic_pager.plist that you made in step 1. 12. Once the computer boots, fire up Terminal and verify that the swap files have actually been moved: $ cd /Volumes/Swap/.vm $ ls -l You should see something like this: -rw------- 1 someUser staff 67108864 18 Sep 12:02 swapfile0 13. Delete the old swapfiles: $ cd /private/var/vm $ sudo rm swapfile* 14. Profit! Note 1 Simply modifying the arguments to dynamic_pager in the plist does not always work, and when it fails, it does so in a spectacularly silent way. The problem stems from the fact that dynamic_pager is launched very early in the startup process. If your swap partition has not yet been mounted when dynamic_pager is first loaded (in my experience, this happens 99% of the time), then the system will fake its way through. It will create a symbolic link in your /Volumes directory which has the same name as your swap partition, but points back to the default swapfile location (/private/var/vm). Then, when your actual swap partition mounts, it will be given the name Swap 1 (or YourDriveName 1). You can see the problem by opening up Terminal and listing the contents of your /Volumes directory: $ cd /Volumes $ ls -l You will see something like this: drwxrwxrwx 11 yourUser staff 442 16 Sep 12:13 Swap -> private/var/vm drwxrwxrwx 14 yourUser staff 5 16 Sep 12:13 Swap 1 lrwxr-xr-x 1 root admin 1 17 Sep 12:01 System -> / Note that this failure can be very hard to spot. If you were to check for the swapfiles as I show in step 12, you would still see them! The symbolic link would make it seem as though your swapfiles had been moved, even though they were actually being stored in the default location. Note 2 I was originally unable to get this to work in Snow Leopard because com.apple.dynamic_pager.plist was stored in binary format. I made a copy of the original file and opened it with Apple's Property List Editor (available with Xcode) in order to make changes, but this process added some extended attributes to the plist file which caused the system to ignore it and just use the defaults. As dblu pointed out, using plutil to convert the file to plain XML works like a charm. Note 3 You can check the Console application to see any messages that dynamic_pager_init echos to the screen. If you see the following lines repeated over and over again, there is a problem with the setup. I ran into these messages because I forgot to create the '.vm' directory that I specified in dynamic_pager_init. com.apple.launchd[1] (com.apple.dynamic_pager[176]) Exited with exit code: 1 com.apple.launchd[1] (com.apple.dynamic_pager) Throttling respawn: Will start in 10 seconds When everything is working properly, you may see the above message a couple of times, but you should also see the following message, and then no more of the "Throttling respawn" messages afterwards. com.apple.dynamic_pager[???] Launching dynamic pager on volume Swap This means that the script did have to wait for the partition to load, but in the end it was successful.

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP aka project “Hekaton” which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-memory OLTP helps us create memory optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory optimized table is to ensure that highly transactional tables could live in memory and remain in memory forever without even losing out a single record. The most significant part is that it still supports majority of our Transact-SQL statement. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes. Points to remember Memory-optimized tables refer to tables using the new data structures and key words added as part of In-Memory OLTP. Disk-based tables refer to your normal tables which we used to create in SQL Server since its inception. These tables use a fixed size 8 KB pages that need to be read from and written to disk as a unit. Natively compiled stored procedures refer to an object Type which is new and is supported by in-memory OLTP engine which convert it into machine code, which can further improve the data access performance for memory –optimized tables. Natively compiled stored procedures can only reference memory-optimized tables, they can’t be used to reference any disk –based table. Interpreted Transact-SQL stored procedures, which is what SQL Server has always used. Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables. Interop refers to interpreted Transact-SQL that references memory-optimized tables. Using In-Memory OLTP In-Memory OLTP engine has been available as part of SQL Server 2014 since June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014 hence they are not available with 32-bit editions. Creating Databases Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. 
Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables: CREATE DATABASE InMemoryDB ON PRIMARY(NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', size=500MB), FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'), (NAME = [InMemoryDB_mod_dir], FILENAME = 'R:\data\InMemoryDB_mod_dir') LOG ON (name = [SampleDB_log], Filename='L:\log\InMemoryDB_log.ldf', size=500MB) COLLATE Latin1_General_100_BIN2; Above example code creates files on three different drives (D:  S: and R:) for the data files and in memory storage so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that binary collation was specified as Windows (non-SQL). BIN2 collation is the only collation support at this point for any indexes on memory optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA file group to an existing database, use the below command to achieve the same. ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA; GO ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod; GO Creating Tables There is no major syntactical difference between creating a disk based table or a memory –optimized table but yes there are a few restrictions and a few new essential extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause as shown in the Create Table query example. DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY) Memory-optimized table should always be defined with a DURABILITY value which can be either SCHEMA_AND_DATA or  SCHEMA_ONLY the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist the data to disk which means the data durability is compromised whereas DURABILITY= SCHEMA_AND_DATA ensures that data is also persisted along with the schema. Indexing Memory Optimized Table A memory-optimized table must always have an index for all tables created with DURABILITY= SCHEMA_AND_DATA and this can be achieved by declaring a PRIMARY KEY Constraint at the time of creating a table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified. CREATE TABLE Mem_Table ( [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000), [City] VARCHAR(32) NULL, [State_Province] VARCHAR(32) NULL, [LastModified] DATETIME NOT NULL, ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); Now as you can see in the above query example we have used the clause MEMORY_OPTIMIZED = ON to make sure that it is considered as a memory optimized table and not just a normal table and also used the DURABILITY Clause= SCHEMA_AND_DATA which means it will persist data along with metadata and also you can notice this table has a PRIMARY KEY mentioned upfront which is also a mandatory clause for memory-optimized tables. We will talk more about HASH Indexes and BUCKET_COUNT in later articles on this topic which will be focusing more on Row and Index storage on Memory-Optimized tables. So stay tuned for that as well. Now as we covered the basics of Memory Optimized tables and understood the key things to remember while using memory optimized tables, let’s explore more using examples to understand the Performance gains using memory-optimized tables. 
I will be using the database which i created earlier in this article i.e. InMemoryDB in the below Demo Exercise. USE InMemoryDB GO -- Creating a disk based table CREATE TABLE dbo.Disktable ( Id INT IDENTITY, Name CHAR(40) ) GO CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id) GO -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA CREATE TABLE dbo.Memorytable_durable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO -- Creating an another memory optimized table with similar structure but DURABILITY = SCHEMA_Only CREATE TABLE dbo.Memorytable_nondurable ( Id INT NOT NULL PRIMARY KEY NONCLUSTERED Hash WITH (bucket_count =1000000), Name CHAR(40) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_only) GO -- Now insert 100000 records in dbo.Disktable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Do the same inserts for Memory table dbo.Memorytable_durable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END -- Now finally do the same inserts for Memory table dbo.Memorytable_nondurable and observe the Time Taken DECLARE @i_t bigint SET @i_t =1 WHILE @i_t<= 100000 BEGIN INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t)) SET @i_t+=1 END The above 3 Inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 Gb RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business table and memory- optimized tables with Durability SCHEMA_ONLY is even faster as it does not bother persisting its data to disk which makes it supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory_Optimized tables on you, I hope you like this article and it helped you understand  the fundamentals of IN-Memory OLTP . Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Koenig

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk Environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from original mapping. The idea of redoing these maps did not appeal and due to the current work load was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.  Note WCF-SAP data formats (on the binding tab of the configuration dialog box is the ‘RecieiveIdocFormat’ field): Typed:  Returns a XML document with the hierarchy represented in XML and all fields being represented by XML tags. RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format. String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line. I started with the invoice document and it was quite straight forward to add the mapping but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After lot of head scratching I discovered the problem was due to the addition of Namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code. I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown. The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.   Our guess is that when the WCF-SAP adaptor tries to down load the data it retrieves a data schema from SAP. For some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx Reproduction of Mustansir blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. 
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like: <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc> (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Messge Body", select the "Path" radio button, and: (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData'] (b) Choose "String" for the Node Encoding. What we've done is, used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.   This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast… Note: When I followed Mustansir’s blog the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions. There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.". Although the new flat file looked the same as the old one there was a differences. In the original file all lines in the document were exactly 1064 character long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines. 
Execute method of the custom pipeline component: public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg) { //Convert Stream to a string Stream s = null; IBaseMessagePart bodyPart = inmsg.BodyPart;   // NOTE inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and a //getter and setter for the file adapter. Use GetOriginalDataStream to get data instead. if (bodyPart != null) s = bodyPart.GetOriginalDataStream();   string newMsg = string.Empty; string strLine; try { StreamReader sr = new StreamReader(s); strLine = sr.ReadLine(); while (strLine != null) { //Execute padding code if (strLine != null) strLine = strLine.PadRight(1064, ' ') + "\r\n"; newMsg += strLine; strLine = sr.ReadLine(); } sr.Close(); } catch (IOException ex) { throw new Exception("Error occured trying to pad the message to 1064 charactors"); }   //Convert back to stream and set to Data property inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg)); ; //reset the position of the stream to zero inmsg.BodyPart.Data.Position = 0; return inmsg; }

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar, they taste so nice and sweet, but they'll rot your teeth if you eat them too much.   I can't deny that they aren't useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But, I also can't deny that they tend to be overused and abused to willy-nilly extend every living type.   So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit.   So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth. You just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules-of-thumb around them.   A good extension method should:     Apply to any possible instance of the type it extends.     Simplify logic and improve readability/maintainability.     Apply to the most specific type or interface applicable.     Be isolated in a namespace so that it does not pollute IntelliSense.     So let's look at a few examples in relation to these rules.   The first rule, to me, is the most important of all. Once again, it bears repeating, a good extension method should apply to all possible instances of the type it extends. It should feel like the long lost relative that should have been included in the original class but somehow was missing from the family tree.    Take this nifty little int extension, I saw this once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:       public static class IntExtensinos     {         public static int Seconds(int num)         {             return num * 1000;         }           public static int Minutes(int num)         {             return num * 60000;         }     }     This is so you could do things like:       ...     Thread.Sleep(5.Seconds());     ...     proxy.Timeout = 1.Minutes();     ...     Awww, you say, that's cute! Well, that's the problem, it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeStamp.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:       total += numberOfTodaysOrders.Seconds();     which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.   Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.   
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should include only extension methods and the subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.

Also, just as a personal preference, for extension methods that aren't simply syntactical shortcuts, I like to put the logic in a static utility class and then provide extension methods as syntactical candy. For instance, I think it imaginable that any object could be converted to XML:

    namespace Shared.Core
    {
        // A collection of XML utility methods
        public static class XmlUtility
        {
            ...
            // Serialize an object into an XML string
            public static string ToXml(object input)
            {
                var xs = new XmlSerializer(input.GetType());

                // use new UTF8Encoding() here, not Encoding.UTF8. The latter includes
                // the BOM, which screws up subsequent reads; the former does not.
                using (var memoryStream = new MemoryStream())
                using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))
                {
                    xs.Serialize(xmlTextWriter, input);
                    return Encoding.UTF8.GetString(memoryStream.ToArray());
                }
            }
            ...
        }
    }

I also wanted to be able to call this from an object like:

    value.ToXml();

But here's the problem: if I made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects, which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of using it as just a class without polluting IntelliSense; then, in my extensions namespace, I add the syntactical candy:

    namespace Shared.Core.Extensions
    {
        public static class XmlExtensions
        {
            public static string ToXml(this object value)
            {
                return XmlUtility.ToXml(value);
            }
        }
    }

So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense; on the other hand, they can include the Extensions namespace and use it as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle: XmlUtility is responsible for converting objects to XML, and XmlExtensions is responsible for extending object's interface with ToXml().
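A short usage sketch of both forms, assuming the namespaces above (the Employee type here is hypothetical, standing in for any XML-serializable class):

    using Shared.Core;              // brings in XmlUtility only
    using Shared.Core.Extensions;   // opt in to the fluent form

    public class Employee
    {
        public string Name { get; set; }
    }

    public static class XmlDemo
    {
        public static void Run()
        {
            var emp = new Employee { Name = "Jane" };

            // explicit utility call -- nothing added to object's IntelliSense
            string xml1 = XmlUtility.ToXml(emp);

            // extension form -- same logic, fluent call site
            string xml2 = emp.ToXml();
        }
    }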

    Read the article

  • Time to stop using “Execute Package Task” – a way to execute packages in the SSIS catalog, taking advantage of the new project deployment model and the logging and reporting features

    - by Kevin Shyr
I set out to find a way to dynamically call a package in SSIS 2012. The following are two excellent blogs I found and used heavily. The code below adds some parameter types and message types, but is essentially derived from them. http://sqlblog.com/blogs/jamie_thomson/archive/2011/07/16/ssis-logging-in-denali.aspx http://www.ssistalk.com/2012/07/24/quick-tip-run-ssis-2012-packages-synchronously-and-other-execution-options/

The code: every package will be called by a PackageController package. The PackageController is initialized with some information on which package to run and what information to pass in.

The following is the stored procedure called from the “Execute SQL Task”. Here are the highlights of the stored procedure:

- It takes in a package name, project name, and folder name (the folder the SSIS project was deployed to in the SSIS catalog).
- It sets the package variables of the upcoming package execution.
- It executes the package in the SSIS catalog.
- It gets the status of the execution and, if error messages exist, stores their message_ids in the management database.
- It returns a value to the “Execute SQL Task” so failures can be handled properly.

    CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
           @PackageName NVARCHAR(255)
           , @ProjectFolder NVARCHAR(255)
           , @ProjectName NVARCHAR(255)
           , @AuditKey INT
           , @DisableNotification BIT
           , @PackageExecutionLogID INT
    AS
    BEGIN TRY
           DECLARE @execution_id BIGINT = 0;

           -- Create a package execution
           EXEC [SSISDB].[catalog].[create_execution]
                  @package_name = @PackageName,
                  @execution_id = @execution_id OUTPUT,
                  @folder_name = @ProjectFolder,
                  @project_name = @ProjectName,
                  @use32bitruntime = False;

           UPDATE [AUDIT].[PackageInstanceExecutionLog] WITH(ROWLOCK)
           SET [SSISCatalogExecutionID] = @execution_id
           WHERE [PackageInstanceExecutionLogID] = @PackageExecutionLogID

           -- this is to set the execution synchronized so that I can check the result in the end
           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'SYNCHRONIZED',
                  @parameter_value = 1; -- true

           /********************************************************
                Section: setting parameters
                       Source table: SSISDB.internal.object_parameters
                object_type list:
                       20: project level variables
                       30: package level variables
                       50: execution parameter
           ********************************************************/
           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 30,
                  @parameter_name = N'FromParent_AuditKey',
                  @parameter_value = @AuditKey;

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 30,
                  @parameter_name = N'FromParent_DisableNotification',
                  @parameter_value = @DisableNotification;

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 30,
                  @parameter_name = N'FromParent_PackageInstanceExecutionID',
                  @parameter_value = @PackageExecutionLogID;
           /********************************************************
                Section: setting variables END
           ********************************************************/

           /* This section is carried over from the example code;
              I don't see a reason to change it yet. */
           -- Set our package parameters
           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'DUMP_ON_EVENT',
                  @parameter_value = 1; -- true

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'DUMP_EVENT_CODE',
                  @parameter_value = N'0x80040E4D;0x80004005';

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'LOGGING_LEVEL',
                  @parameter_value = 1; -- Basic

           EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                  @execution_id,
                  @object_type = 50,
                  @parameter_name = N'DUMP_ON_ERROR',
                  @parameter_value = 1; -- true

           /********************************************************
                Section: EXECUTING
           ********************************************************/
           EXEC [SSISDB].[catalog].[start_execution]
                  @execution_id;
           /********************************************************
                Section: EXECUTING END
           ********************************************************/

           /********************************************************
                Section: checking execution result
                       Source table: [SSISDB].[catalog].[executions]
                status:
                       1: created
                       2: running
                       3: cancelled
                       4: failed
                       5: pending
                       6: ended unexpectedly
                       7: succeeded
                       8: stopping
                       9: completed
           ********************************************************/
           IF EXISTS(SELECT TOP 1 1
                     FROM [SSISDB].[catalog].[executions] WITH(NOLOCK)
                     WHERE [execution_id] = @execution_id
                           AND [status] NOT IN (2, 7, 9))
           BEGIN
                  /********************************************************
                       Section: logging error messages
                              Source table: [SSISDB].[internal].[operation_messages]
                       message type:
                              10:  OnPreValidate
                              20:  OnPostValidate
                              30:  OnPreExecute
                              40:  OnPostExecute
                              60:  OnProgress
                              70:  OnInformation
                              90:  Diagnostic
                              110: OnWarning
                              120: OnError
                              130: Failure
                              140: DiagnosticEx
                              200: Custom events
                              400: OnPipeline
                       message source type:
                              10:  Messages logged by the entry APIs (e.g. T-SQL, CLR stored procedures)
                              20:  Messages logged by the external process used to run the package (ISServerExec)
                              30:  Messages logged by the package-level objects
                              40:  Messages logged by tasks in the control flow
                              50:  Messages logged by containers (For, ForEach, Sequence) in the control flow
                              60:  Messages logged by the Data Flow Task
                  ********************************************************/
                  INSERT INTO AUDIT.PackageInstanceExecutionOperationErrorLink
                         SELECT @PackageExecutionLogID
                                , [operation_message_id]
                         FROM [SSISDB].[internal].[operation_messages] WITH(NOLOCK)
                         WHERE operation_id = @execution_id
                               AND message_type IN (120, 130)

                  EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, 'SSISDB Internal operation_messages found'

                  GOTO ReturnTrueAsErrorFlag
                  /********************************************************
                       Section: checking messages END
                  ********************************************************/

                  /* This part is not really working, so now using rowcount to pass status
                  --DECLARE @PackageErrorMessage NVARCHAR(4000)
                  --SET @PackageErrorMessage = @PackageName + ' failed with executionID: ' + CONVERT(VARCHAR(20), @execution_id)

                  --RAISERROR (@PackageErrorMessage -- Message text.
                  --     , 18 -- Severity,
                  --     , 1 -- State,
                  --     , N'check table AUDIT.PackageInstanceExecutionErrorMessages' -- First argument.
                  --     );
                  */
           END
           ELSE BEGIN
                  GOTO ReturnFalseAsErrorFlagToSignalSuccess
           END
           /********************************************************
                Section: checking execution result END
           ********************************************************/
    END TRY
    BEGIN CATCH
           DECLARE @SSISCatalogCallError NVARCHAR(MAX)
           SELECT @SSISCatalogCallError = ERROR_MESSAGE()

           EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, @SSISCatalogCallError

           GOTO ReturnTrueAsErrorFlag
    END CATCH;

    /********************************************************
         Section: end result
    ********************************************************/
    ReturnTrueAsErrorFlag:
           SELECT CONVERT(BIT, 1) AS PackageExecutionErrorExists
           RETURN -- without this RETURN, execution would fall through into the success label below
    ReturnFalseAsErrorFlagToSignalSuccess:
           SELECT CONVERT(BIT, 0) AS PackageExecutionErrorExists

    GO
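As a usage sketch, the controller's “Execute SQL Task” would invoke the procedure along these lines. The package, folder, project names and ID values below are placeholders; the single-row result set would be mapped back to an SSIS package variable to drive the failure path:

    -- hypothetical call from the PackageController's "Execute SQL Task";
    -- all names and IDs are placeholders
    EXEC [AUDIT].[LaunchPackageExecutionInSSISCatalog]
           @PackageName           = N'LoadCustomerDim.dtsx'
         , @ProjectFolder         = N'ETL'
         , @ProjectName           = N'DataWarehouseLoad'
         , @AuditKey              = 1001
         , @DisableNotification   = 0
         , @PackageExecutionLogID = 5005;
    -- returns a single-row result set, PackageExecutionErrorExists (BIT),
    -- which the task maps to a package variable (ResultSet = "Single row")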

    Read the article
