Search Results

Search found 8332 results on 334 pages for 'console cowboy'.


  • Embedding generic SQL queries into a C# program

    - by Pooja Balkundi
    Okay referring to my first question code in the main, I want the user to enter employee name at runtime and then i take this name which user has entered and compare it with the e_name of my emp table , if it exists i want to display all information of that employee , how can I achieve this ? using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using MySql.Data.MySqlClient; namespace ConnectCsharppToMySQL { public class DBConnect { private MySqlConnection connection; private string server; private string database; private string uid; private string password; string name; //Constructor public DBConnect() { Initialize(); } //Initialize values private void Initialize() { server = "localhost"; database = "test"; uid = "root"; password = ""; string connectionString; connectionString = "SERVER=" + server + ";" + "DATABASE=" + database + ";" + "UID=" + uid + ";" + "PASSWORD=" + password + ";"; connection = new MySqlConnection(connectionString); } //open connection to database private bool OpenConnection() { try { connection.Open(); return true; } catch (MySqlException ex) { //When handling errors, you can your application's response based //on the error number. //The two most common error numbers when connecting are as follows: //0: Cannot connect to server. //1045: Invalid user name and/or password. switch (ex.Number) { case 0: MessageBox.Show("Cannot connect to server. Contact administrator"); break; case 1045: MessageBox.Show("Invalid username/password, please try again"); break; } return false; } } //Close connection private bool CloseConnection() { try { connection.Close(); return true; } catch (MySqlException ex) { MessageBox.Show(ex.Message); return false; } } //Insert statement public void Insert() { string query = "INSERT INTO emp (e_name, age) VALUES('Pooja R', '21')"; //open connection if (this.OpenConnection() == true) { //create command and assign the query and connection from the constructor MySqlCommand cmd = new MySqlCommand(query, connection); //Execute command cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Update statement public void Update() { string query = "UPDATE emp SET e_name='Peachy', age='22' WHERE e_name='Pooja R'"; //Open connection if (this.OpenConnection() == true) { //create mysql command MySqlCommand cmd = new MySqlCommand(); //Assign the query using CommandText cmd.CommandText = query; //Assign the connection using Connection cmd.Connection = connection; //Execute query cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Select statement public List<string>[] Select() { string query = "SELECT * FROM emp where e_name=(/*I WANT USER ENTERED NAME TO GET INSERTED HERE*/)"; //Create a list to store the result List<string>[] list = new List<string>[3]; list[0] = new List<string>(); list[1] = new List<string>(); list[2] = new List<string>(); //Open connection if (this.OpenConnection() == true) { //Create Command MySqlCommand cmd = new MySqlCommand(query, connection); //Create a data reader and Execute the command MySqlDataReader dataReader = cmd.ExecuteReader(); //Read the data and store them in the list while (dataReader.Read()) { list[0].Add(dataReader["e_id"] + ""); list[1].Add(dataReader["e_name"] + ""); list[2].Add(dataReader["age"] + ""); } //close Data Reader dataReader.Close(); //close Connection this.CloseConnection(); //return list to be displayed return list; } else { return list; } } public static void Main(String[] args) { DBConnect db1 = new DBConnect(); 
Console.WriteLine("Initializing"); db1.Initialize(); Console.WriteLine("Search :"); Console.WriteLine("Enter the employee name"); db1.name = Console.ReadLine(); db1.Select(); Console.ReadLine(); } } }

    Read the article

  • Passing a parameter so that it cannot be changed – C#

    - by nmarun
    I read this requirement of not allowing a user to change the value of a property passed as a parameter to a method. In C++, as far as I could recall (it’s been over 10 yrs, so I had to refresh memory), you can pass ‘const’ to a function parameter and this ensures that the parameter cannot be changed inside the scope of the function. There’s no such direct way of doing this in C#, but that does not mean it cannot be done!! Ok, so this ‘not-so-direct’ technique depends on the type of the parameter – a simple property or a collection. Parameter as a simple property: This is quite easy (and you might have guessed it already). Bulent Ozkir clearly explains how this can be done here. Parameter as a collection property: Obviously the above does not work if the parameter is a collection of some type. Let’s dig-in. Suppose I need to create a collection of type KeyTitle as defined below. 1: public class KeyTitle 2: { 3: public int Key { get; set; } 4: public string Title { get; set; } 5: } My class is declared as below: 1: public class Class1 2: { 3: public Class1() 4: { 5: MyKeyTitleList = new List<KeyTitle>(); 6: } 7: 8: public List<KeyTitle> MyKeyTitleList { get; set; } 9: public ReadOnlyCollection<KeyTitle> ReadonlyKeyTitleCollection 10: { 11: // .AsReadOnly creates a ReadOnlyCollection<> type 12: get { return MyKeyTitleList.AsReadOnly(); } 13: } 14: } See the .AsReadOnly() method used in the second property? As MSDN says it: “Returns a read-only IList<T> wrapper for the current collection.” Knowing this, I can implement my code as: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8:  9: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 10:  11: Console.ReadLine(); 12: } 13:  14: private static void TryToModifyCollection(ReadOnlyCollection<KeyTitle> readOnlyCollection) 15: { 16: // can only read 17: for (int i = 0; i < readOnlyCollection.Count; i++) 18: { 19: Console.WriteLine("{0} - {1}", readOnlyCollection[i].Key, readOnlyCollection[i].Title); 20: } 21: // Add() - not allowed 22: // even the indexer does not have a setter 23: } The output is as expected: The below image shows two things. In the first line, I’ve tried to access an element in my read-only collection through an indexer. It shows that the ReadOnlyCollection<> does not have a setter on the indexer. The second line tells that there’s no ‘Add()’ method for this type of collection. The capture below shows there’s no ‘Remove()’ method either, there-by eliminating all ways of modifying a collection. Mission accomplished… right? Now, even if you have a collection of different type, all you need to do is to somehow cast (used loosely) it to a List<> and then do a .AsReadOnly() to get a ReadOnlyCollection of your custom collection type. As an example, if you have an IDictionary<int, string>, you can create a List<T> of this type with a wrapper class (KeyTitle in our case). 1: public IDictionary<int, string> MyDictionary { get; set; } 2:  3: public ReadOnlyCollection<KeyTitle> ReadonlyDictionary 4: { 5: get 6: { 7: return (from item in MyDictionary 8: select new KeyTitle 9: { 10: Key = item.Key, 11: Title = item.Value, 12: }).ToList().AsReadOnly(); 13: } 14: } Cool huh? 
Just one thing you need to know about the .AsReadOnly() method: the only way to modify your ReadOnlyCollection<> is to modify the original collection. So doing: 1: public static void Main() 2: { 3: Class1 class1 = new Class1(); 4: class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" }); 5: class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" }); 6: class1.MyKeyTitleList.Add(new KeyTitle { Key = 3, Title = "ghi" }); 7: class1.MyKeyTitleList.Add(new KeyTitle { Key = 4, Title = "jkl" }); 8: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 9:  10: Console.WriteLine(); 11:  12: class1.MyKeyTitleList.Add(new KeyTitle { Key = 5, Title = "mno" }); 13: class1.MyKeyTitleList[2] = new KeyTitle{Key = 3, Title = "GHI"}; 14: TryToModifyCollection(class1.MyKeyTitleList.AsReadOnly()); 15:  16: Console.ReadLine(); 17: } Gives me the output of: See that the third element's Title (Key 3) is changed to upper-case and the fifth element also gets displayed, even though we are again enumerating a ReadOnlyCollection<KeyTitle>. Verdict: Now you know of a way to implement ‘Method(const param1)’ in your code!
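A compact sketch of the behaviour described above, reusing the KeyTitle and Class1 types from the post: the ReadOnlyCollection<T> wrapper (from System.Collections.ObjectModel) exposes no Add() and no indexer setter, yet edits made to the underlying list remain visible through it:

    // Sketch: ReadOnlyCollection<T> is a live, read-only view over the original list.
    Class1 class1 = new Class1();
    class1.MyKeyTitleList.Add(new KeyTitle { Key = 1, Title = "abc" });

    ReadOnlyCollection<KeyTitle> view = class1.MyKeyTitleList.AsReadOnly();
    // view.Add(new KeyTitle());   // does not compile - no public Add() on the wrapper
    // view[0] = new KeyTitle();   // does not compile - the indexer has no setter

    class1.MyKeyTitleList.Add(new KeyTitle { Key = 2, Title = "def" });
    Console.WriteLine(view.Count); // prints 2 - the change to the source list shows through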

    Read the article

  • C++ and C# speed compared

    - by Mack
    I was worried about C#'s speed when it deals with heavy calculations, when you need to use raw CPU power. I always thought that C++ is much faster than C# when it comes to calculations. So I did some quick tests. The first test computes prime numbers < an integer n, the second test computes some pandigital numbers. The idea for second test comes from here: Pandigital Numbers C# prime computation: using System; using System.Diagnostics; class Program { static int primes(int n) { uint i, j; int countprimes = 0; for (i = 1; i <= n; i++) { bool isprime = true; for (j = 2; j <= Math.Sqrt(i); j++) if ((i % j) == 0) { isprime = false; break; } if (isprime) countprimes++; } return countprimes; } static void Main(string[] args) { int n = int.Parse(Console.ReadLine()); Stopwatch sw = new Stopwatch(); sw.Start(); int res = primes(n); sw.Stop(); Console.WriteLine("I found {0} prime numbers between 0 and {1} in {2} msecs.", res, n, sw.ElapsedMilliseconds); Console.ReadKey(); } } C++ variant: #include <iostream> #include <ctime> int primes(unsigned long n) { unsigned long i, j; int countprimes = 0; for(i = 1; i <= n; i++) { int isprime = 1; for(j = 2; j < (i^(1/2)); j++) if(!(i%j)) { isprime = 0; break; } countprimes+= isprime; } return countprimes; } int main() { int n, res; cin>>n; unsigned int start = clock(); res = primes(n); int tprime = clock() - start; cout<<"\nI found "<<res<<" prime numbers between 1 and "<<n<<" in "<<tprime<<" msecs."; return 0; } When I ran the test trying to find primes < than 100,000, C# variant finished in 0.409 seconds and C++ variant in 5.553 seconds. When I ran them for 1,000,000 C# finished in 6.039 seconds and C++ in about 337 seconds. Pandigital test in C#: using System; using System.Diagnostics; class Program { static bool IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return false; } return digits == (1 << count) - 1; } static void Main() { int pans = 0; Stopwatch sw = new Stopwatch(); sw.Start(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } sw.Stop(); Console.WriteLine("{0}pcs, {1}ms", pans, sw.ElapsedMilliseconds); Console.ReadKey(); } } Pandigital test in C++: #include <iostream> #include <ctime> using namespace std; int IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return 0; } return digits == (1 << count) - 1; } int main() { int pans = 0; unsigned int start = clock(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } int ptime = clock() - start; cout<<"\nPans:"<<pans<<" time:"<<ptime; return 0; } C# variant runs in 29.906 seconds and C++ in about 36.298 seconds. I didn't touch any compiler switches and bot C# and C++ programs were compiled with debug options. Before I attempted to run the test I was worried that C# will lag well behind C++, but now it seems that there is a pretty big speed difference in C# favor. Can anybody explain this? C# is jitted and C++ is compiled native so it's normal that a C++ will be faster than a C# variant. Thanks for the answers!
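One likely explanation, offered here as a guess rather than something stated in the original post: in the C++ version the loop condition j < (i^(1/2)) uses ^, which is bitwise XOR in C++ rather than exponentiation, so the inner loop does not stop at the square root at all; combined with an unoptimized debug build, that alone can account for the gap. A corrected sketch of the C++ prime loop, mirroring the C# version's Math.Sqrt(i) bound:

    #include <cmath>

    // Sketch: test divisors only up to sqrt(i), as the C# version does.
    int primes(unsigned long n)
    {
        int countprimes = 0;
        for (unsigned long i = 1; i <= n; i++)
        {
            bool isprime = true;
            unsigned long limit = static_cast<unsigned long>(std::sqrt(static_cast<double>(i)));
            for (unsigned long j = 2; j <= limit; j++)
            {
                if (i % j == 0) { isprime = false; break; }
            }
            if (isprime) countprimes++;
        }
        return countprimes;
    }

Rebuilding both programs with optimizations enabled (for example /O2 with the Microsoft compiler or -O2 with g++, and a Release build for the C# test) would also make the timings far more comparable than the debug settings used above.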

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here’s the description about this method: “Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type.” Let’s play with some code: 1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 2: // int[] numbersCopy = numbers; 3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 4:  5: Console.WriteLine(numbers.SequenceEqual(numbersCopy)); This gives an output of ‘True’ – basically compares each of the elements in the two arrays and returns true in this case. The result is same even if you uncomment line 2 and comment line 3 (I didn’t need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition: 1: class Product 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8: } 9:  10: public enum Status 11: { 12: Active = 1, 13: InActive = 2, 14: OffShelf = 3, 15: } In my calling code, I’m just adding a few product items: 1: private static List<Product> GetProducts() 2: { 3: return new List<Product> 4: { 5: new Product 6: { 7: ProductId = 1, 8: Name = "Laptop", 9: Category = "Computer", 10: MfgDate = new DateTime(2003, 4, 3), 11: Status = Status.Active, 12: }, 13: new Product 14: { 15: ProductId = 2, 16: Name = "Compact Disc", 17: Category = "Water Sport", 18: MfgDate = new DateTime(2009, 12, 3), 19: Status = Status.InActive, 20: }, 21: new Product 22: { 23: ProductId = 3, 24: Name = "Floppy", 25: Category = "Computer", 26: MfgDate = new DateTime(1993, 3, 7), 27: Status = Status.OffShelf, 28: }, 29: }; 30: } Now for the actual check: 1: List<Product> products1 = GetProducts(); 2: List<Product> products2 = GetProducts(); 3:  4: Console.WriteLine(products1.SequenceEqual(products2)); This one returns ‘False’ and the reason is simple – this one checks for reference equality and the products in the both the lists get different ‘memory addresses’ (sounds like I’m talking in ‘C’). In order to modify this behavior and return a ‘True’ result, we need to modify the Product class as follows: 1: class Product : IEquatable<Product> 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8:  9: public override bool Equals(object obj) 10: { 11: return Equals(obj as Product); 12: } 13:  14: public bool Equals(Product other) 15: { 16: //Check whether the compared object is null. 17: if (ReferenceEquals(other, null)) return false; 18:  19: //Check whether the compared object references the same data. 20: if (ReferenceEquals(this, other)) return true; 21:  22: //Check whether the products' properties are equal. 23: return ProductId.Equals(other.ProductId) 24: && Name.Equals(other.Name) 25: && Category.Equals(other.Category) 26: && MfgDate.Equals(other.MfgDate) 27: && Status.Equals(other.Status); 28: } 29:  30: // If Equals() returns true for a pair of objects 31: // then GetHashCode() must return the same value for these objects. 
32: // read why in the following articles: 33: // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx 34: // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c 35: public override int GetHashCode() 36: { 37: //Get hash code for the ProductId field. 38: int hashProductId = ProductId.GetHashCode(); 39:  40: //Get hash code for the Name field if it is not null. 41: int hashName = Name == null ? 0 : Name.GetHashCode(); 42:  43: //Get hash code for the ProductId field. 44: int hashCategory = Category.GetHashCode(); 45:  46: //Get hash code for the ProductId field. 47: int hashMfgDate = MfgDate.GetHashCode(); 48:  49: //Get hash code for the ProductId field. 50: int hashStatus = Status.GetHashCode(); 51: //Calculate the hash code for the product. 52: return hashProductId ^ hashName ^ hashCategory & hashMfgDate & hashStatus; 53: } 54:  55: public static bool operator ==(Product a, Product b) 56: { 57: // Enable a == b for null references to return the right value 58: if (ReferenceEquals(a, b)) 59: { 60: return true; 61: } 62: // If one is null and the other not. Remember a==null will lead to Stackoverflow! 63: if (ReferenceEquals(a, null)) 64: { 65: return false; 66: } 67: return a.Equals((object)b); 68: } 69:  70: public static bool operator !=(Product a, Product b) 71: { 72: return !(a == b); 73: } 74: } Now THAT kinda looks overwhelming. But lets take one simple step at a time. Ok first thing you’ve noticed is that the class implements IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an ‘Equals’ method to perform the test for equality with another Product object, in this case. This method is called in the following situations: when you do a ProductInstance.Equals(AnotherProductInstance) and when you perform actions like Contains<T>, IndexOf() or Remove() on your collection Coming to the Equals method defined line 14 onwards. The two ‘if’ blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. Line 23 is where I’m doing the actual check on the properties of the Product instances. This is what returns the ‘True’ for us when we run the application. I have also overridden the Object.Equals() method which calls the Equals() method of the interface. One thing to remember is that anytime you override the Equals() method, its’ a good practice to override the GetHashCode() method and overload the ‘==’ and the ‘!=’ operators. For detailed information on this, please read this and this. Since we’ve overloaded the operators as well, we get ‘True’ when we do actions like: 1: Console.WriteLine(products1.Contains(products2[0])); 2: Console.WriteLine(products1[0] == products2[0]); This completes the full circle on the SequenceEqual() method. See the code used in the article here.
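As an alternative to implementing IEquatable<Product> on the type itself, SequenceEqual() also has an overload that accepts an IEqualityComparer<T>, which keeps the comparison rules outside the Product class; a minimal sketch (the ProductComparer name is illustrative):

    // Sketch: supply the equality rules to SequenceEqual() instead of changing Product.
    class ProductComparer : IEqualityComparer<Product>
    {
        public bool Equals(Product x, Product y)
        {
            if (ReferenceEquals(x, y)) return true;
            if (ReferenceEquals(x, null) || ReferenceEquals(y, null)) return false;
            return x.ProductId == y.ProductId
                && x.Name == y.Name
                && x.Category == y.Category
                && x.MfgDate == y.MfgDate
                && x.Status == y.Status;
        }

        public int GetHashCode(Product p)
        {
            // Combine a few stable fields; must agree with Equals().
            return p.ProductId.GetHashCode() ^ (p.Name ?? string.Empty).GetHashCode();
        }
    }

    // Usage:
    Console.WriteLine(products1.SequenceEqual(products2, new ProductComparer()));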

    Read the article

  • Installing Exchange 2013 CU1

    - by marc dekeyser
    Originally posted on: http://geekswithblogs.net/marcde/archive/2013/08/01/installing-exchange-2013-cu1.aspxBefore you begin Download the following software: · UCMA 4.0: http://www.microsoft.com/en-us/download/details.aspx?id=34992 · Office 2010 filter packs 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=17062 · Office 2010 filter packs SP1 64 bit: http://www.microsoft.com/en-us/download/details.aspx?id=26604 Prerequisite installation Step 1 : Open Windows Powershell     Step 2: Enter following string to start prerequisite installation for a multirole server – Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation   Step 3: restart the server   Shutdown.exe /r /t 60     Step 4: Install the UCMA Runtime Setup Navigate to the folder holding the prerequisite downloads and double click the “UCMARunTimeSetup”     Step 5: Accept the Run prompt     Step 6: Click the left click on "Next (button)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 7: Left click on "I have read and accept the license terms. (check box)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 8: Left click on "Install (button)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 9: Left click on "Finish (button)" in "Microsoft Unified Communications Managed API 4.0, Runtime Setup"     Step 10: Start the Office 2010 filter pack installation     Step 11: Left click on "Run (button)" in "Open File - Security Warning"     Step 12: Left click on "Microsoft Filter Pack 2.0 (button)" as it hides in the background by default.     Step 13: Left click on "Next (button)" in "Microsoft Filter Pack 2.0"     Step 14: Left click on "I accept the terms in the License Agreement (check box)" in "Microsoft Filter Pack 2.0"     Step 15: Left click on "Next (button)" in "Microsoft Filter Pack 2.0"     Step 16: Left click on "OK (button)" in "Microsoft Filter Pack 2.0"     Step 17: Start the installation of the Office 2010 Filterpack SP1.     Step 18: Left click on "Run (button)" in "Open File - Security Warning"     Step 19: Left click on "Click here to accept the Microsoft Software License Terms. (check box)" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)"     Step 20: Left click on "Continue (button)" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)"     Step 21: (?21/?06/?2013 11:23:25) User left click on "OK (button)" in "Microsoft Office 2010 Filter Pack Service Pack 1 (SP1)"     Step 22: Left click on "Windows PowerShell (button)"     Step 23: restart the server. 
Shutdown.exe /r /t 60   Step 24: Left click on "Close (button)" in "You're about to be signed off"     Installing Exchange server 2013 Step 1: Navigate to the Exchange 2013 CU1 extracted location and run setup.exe Left click on "next (button)" in "Exchange Server Setup" Step 2: Left click on "next (button)" in "Exchange Server Setup" Step 3: Left click on "Exchange Server Setup (window)" in "Exchange Server Setup" Step 4: Left click on "Exchange Server Setup (window)" in "Exchange Server Setup" a Step 5: User left click on "next (button)" in "Exchange Server Setup" Step 6: Left click on "I accept the terms in the license agreement" in "Exchange Server Setup" Step 7: Left click on "next (button)" in "Exchange Server Setup" Step 8: Left click on "next (button)" in "Exchange Server Setup" Step 9: Select "Mailbox role” in "Exchange Server Setup" Step 10: Select "Client Access role" in "Exchange Server Setup" Step 11: Left click on "next (button)" in "Exchange Server Setup" Step 12: Left click on "next (button)" in "Exchange Server Setup" Step 13: Choose the installation path and left click on "next (button)" in "Exchange Server Setup" Step 14: Leave malware scanning on by making sure the radio button is on “No”and left click on "Exchange Server Setup (window)" in "Exchange Server Setup"                   Step 15: Left click on "finish (button)" in "Exchange Server Setup" Step 16: Restart the server. Shutdown.exe /r /t 60
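For reference, the same Mailbox-plus-Client-Access installation can be scripted instead of clicked through; a hypothetical unattended command line (switch names as commonly used by Exchange 2013 setup, target path illustrative - verify against your CU1 media before relying on it):

    REM Sketch: unattended install of the Mailbox and Client Access roles from the extracted CU1 folder
    Setup.exe /mode:Install /Roles:Mailbox,ClientAccess /IAcceptExchangeServerLicenseTerms /TargetDir:"D:\Program Files\Microsoft\Exchange Server\V15"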

    Read the article

  • Oracle WebCenter Portlet Debugging

    - by Alexander Rudat
    Introduction: This article describes how to debug portlets that are already deployed to a WebLogic server, using Oracle JDeveloper 11g. Overview: These are the basic steps involved in remote debugging WebCenter portlets deployed in WebLogic: configuration of WebLogic to support remote debugging, configuration of the portlet project in JDeveloper, and the actual debugging of the portlet. Configuration of WebLogic: To start the WebLogic server in debugging mode, there are a couple of configuration changes that need to be made to the WebLogic domain where the portlet is deployed. First we need to edit the JVM options of the WebLogic server startup script for the managed server where the portlet is deployed. Normally startManagedWebLogic.cmd is used to start this managed server. This startup script is located in the %MIDDLEWARE_HOME%\user_projects\domains\<domain_name>\bin directory, where %MIDDLEWARE_HOME% is the installation directory of WebLogic. Add the following line before the set JAVA_OPTIONS= line: set REMOTE_DEBUG_JAVA_OPTIONS=-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n Change the set JAVA_OPTIONS= line to read like the one below: set JAVA_OPTIONS=%SAVE_JAVA_OPTIONS% %ADF_JAVA_OPTIONS% %REMOTE_DEBUG_JAVA_OPTIONS% After these changes, save the startup script, start the managed server and make sure you have access to the admin console (for example http://localhost:7001/console). Finally we need to check that HTTP tunneling is enabled on the managed server. To do this, log in to the admin console, select the managed server and select the Protocols tab. Be sure that Enable Tunneling is selected. Configuration of the portlet: First let's create a new run configuration specifically for remote debugging. Double-click the project where your portlets are developed. In the Project Properties select the Run/Debug/Profile page. Click New... to create a new run configuration. In the Create Run Configuration dialog enter Remote Debugging for the name of the run configuration. Leave the Copy Settings From selection at Default and click OK to create the new run configuration. Once the Remote Debugging run configuration is created, select it in the Run Configurations and click Edit... to bring up the Edit Run Configuration dialog. In the Launch Settings page click the Remote Debugging checkbox to enable remote debugging for this run configuration. Finally select the Remote page and verify that the Protocol is set to Attach to JPDA and the port matches the port specified earlier when configuring WebLogic for remote debugging (defaults to 4000). Actual debugging of the portlet: To start the remote debugging profile, right-click your portlet project and select Start Remote Debugger. JDeveloper now asks for the host and port; if your WebLogic server is installed locally, you can apply the default settings. Set a breakpoint in your Java code and run the portal (WebCenter) application where the portlet is used. That's all: now you are able to debug the portlet Java code.
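As a side note, on a Linux installation the same change would go into startManagedWebLogic.sh; a hypothetical sketch using the exact flag values from above (the surrounding variable names mirror the .cmd script and may differ in your domain):

    # Sketch: enable JPDA remote debugging on port 4000 in startManagedWebLogic.sh
    REMOTE_DEBUG_JAVA_OPTIONS="-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n"

    # Append the debug options to whatever JAVA_OPTIONS the script already builds
    JAVA_OPTIONS="${SAVE_JAVA_OPTIONS} ${ADF_JAVA_OPTIONS} ${REMOTE_DEBUG_JAVA_OPTIONS}"
    export JAVA_OPTIONS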
Hope you will find all errors in your portlet :-) References: http://www.oracle.com/technology/products/jdev/howtos/weblogic/remotedebugwls.html

    Read the article

  • Running ODI 11gR1 Standalone Agent as a Windows Service

    - by fx.nicolas
    ODI 11gR1 introduces the capability to use OPMN to start and protect agent processes as services. Setting up the OPMN agent is covered in the following post and extensively in the ODI Installation Guide. Unfortunately, OPMN is not installed along with ODI, and ODI 10g users who are really at ease with the old Java Wrapper are a little bit puzzled by OPMN, and ask: "How can I simply set up the agent as a service?". Well... although the Tanuki Service Wrapper is no longer available for free, and the agentservice.bat script lost, you can switch to another service wrapper for the same result. For example, Yet Another Java Service Wrapper (YAJSW) is a good candidate. To configure a standalone agent with YAJSW: download YAJSW Uncompress the zip to a folder (called %YAJSW% in this example) Configure, start and test your standalone agent. Make sure that this agent is loaded with all the required libraries and drivers, as the service will not load dynamically the drivers added subsequently in the /drivers directory. Retrieve the PID of the agent process: Open Task Manager. Select View Select Columns Select the PID (Process Identifier) column, then click OK In the list of processes, find the java.exe process corresponding to your agent, and note its PID. Open a command line prompt in %YAJSW%/bat and run: genConfig.bat <your_pid> This command generates a wrapper configuration file for the agent. This file is called %YAJSW%/conf/wrapper.conf. Stop your agent. Edit the wrapper.conf file and modify the configuration of your service. For example, modify the display name and description of the service as shown in the example below. Important: Make sure to escape the commas in the ODI encoded passwords with a backslash! In the example below, the ODI_SUPERVISOR_ENCODED_PASS contained a comma character which had to be prefixed with a backslash. # Title to use when running as a console wrapper.console.title=\"AGENT\" #******************************************************************** # Wrapper Windows Service and Posix Daemon Properties #******************************************************************** # Name of the service wrapper.ntservice.name=AGENT_113 # Display name of the service wrapper.ntservice.displayname=ODI Agent # Description of the service wrapper.ntservice.description=Oracle Data Integrator Agent 11gR3 (11.1.1.3.0) ... # Escape the comma in the password with a backslash. wrapper.app.parameter.7 = -ODI_SUPERVISOR_ENCODED_PASS=fJya.vR5kvNcu9TtV\,jVZEt Execute your wrapped agent as console by calling in the command line prompt: runConsole.bat Check that your agent is running, and test it again.This command starts the agent with the configuration but does not install it yet as a service. To Install the agent as service call installService.bat From that point, you can view, start and stop the agent via the windows services. Et voilà ! Two final notes: - To modify the agent configuration, you must uninstall/reinstall the service. For this purpose, run the uninstallService.bat to uninstall it and play again the process above. - To be able to uninstall the agent service, you should keep a backup of the wrapper.conf file. This is particularly important when starting several services with the wrapper.
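Condensed into a single command sequence, the YAJSW steps above look roughly like this (the install path and PID are illustrative; wrapper.conf is still edited by hand in between):

    REM Sketch of the wrapping workflow described above, run from %YAJSW%\bat
    cd /d C:\yajsw\bat
    genConfig.bat 4242          REM 4242 = PID of the running agent's java.exe (from Task Manager)
    REM ...edit ..\conf\wrapper.conf: service name, description, escape commas in encoded passwords...
    runConsole.bat              REM test the wrapped agent in a console first
    installService.bat          REM then register it as a Windows service
    REM uninstallService.bat    REM run this (with the same wrapper.conf) before any reconfiguration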

    Read the article

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In the yesterday’s blog post we have seen that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get you familiar with the how it works and what the important areas of the NuoDB are that you should learn. As we have already installed NuoDB, now we will quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin Section of the NuoDB Console works.  In the next blog post we will learn how the Explorer Section works. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the following screen: On this screen you can see a big Start QuickStart button. Click on the button and it will bring you to following screen. On this screen you will find very important information about Domain and Database Settings. It is our habit that we do not read what is written on the screen and keep on clicking on continue without reading. While we are familiar with most wizards, we can often miss the very important message on the screen. Please note the information of Domain Settings and Database Settings from the following screen before clicking on Create Database. Domain Settings User: quickstart Password: quickstart Database Settings User: dba Password: goalie Database: test Schema: HOCKEY Once you click on the Create Database button it will immediately start creating sample database. First, it will start a Storage Manager and right after that it will start a Transaction Engine. Once the engine is up, it will Create a Schema and Sample Data. On the success of the creating the sample database it will show the following screen. Now is the time where we can explore the NuoDB Admin or NuoDB Explorer. If you click on Admin, it will first show following login screen. Enter for the username “domain” and for the password “bird”. Alternatively you can enter “quickstart”  twice for username and password.  It works as too. Once you enter into the Admin Section, on the left side you can see information about NuoDB and Admin Console and on the right side you can see the domain overview area. From this Administrative section you can do any of the following tasks: Create a view of the entire domain Add and remove databases Start and stop NuoDB Transaction Engines and Storage Managers Monitor transaction across all the NuoDB databases On the right side of the Admin Section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find out information about the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the “1 host” link you will be able to see various processes, CPU usage and other information. In the Processes Section you can see that there are two different types of processes. The first process (where you can see the floppy drive icon) represents a running Storage Manager process and the second process a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature and the user can add different monitor charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases. 
I hope today’s tutorial gives you enough confidence that you can try out NuoDB and checkout various administrative activities with the database. I am personally impressed with their dashboard related to various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor:  In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code.  Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • C# Multiple Property Sort

    - by Ben Griswold
    As you can see in the snippet below, sorting is easy with Linq.  Simply provide your OrderBy criteria and you’re done.  If you want a secondary sort field, add a ThenBy expression to the chain.  Want a third level sort?  Just add ThenBy along with another sort expression. var projects = new List<Project>     {         new Project {Description = "A", ProjectStatusTypeId = 1},         new Project {Description = "B", ProjectStatusTypeId = 3},         new Project {Description = "C", ProjectStatusTypeId = 3},         new Project {Description = "C", ProjectStatusTypeId = 2},         new Project {Description = "E", ProjectStatusTypeId = 1},         new Project {Description = "A", ProjectStatusTypeId = 2},         new Project {Description = "C", ProjectStatusTypeId = 4},         new Project {Description = "A", ProjectStatusTypeId = 3}     };   projects = projects     .OrderBy(x => x.Description)     .ThenBy(x => x.ProjectStatusTypeId)     .ToList();   foreach (var project in projects) {     Console.Out.WriteLine("{0} {1}", project.Description,         project.ProjectStatusTypeId); } Linq offers a great sort solution most of the time, but what if you want or need to do it the old fashioned way? projects.Sort ((x, y) =>         Comparer<String>.Default             .Compare(x.Description, y.Description) != 0 ?         Comparer<String>.Default             .Compare(x.Description, y.Description) :         Comparer<Int32>.Default             .Compare(x.ProjectStatusTypeId, y.ProjectStatusTypeId));   foreach (var project in projects) {     Console.Out.WriteLine("{0} {1}", project.Description,         project.ProjectStatusTypeId); } It’s not that bad, right? Just for fun, let add some additional logic to our sort.  Let’s say we wanted our secondary sort to be based on the name associated with the ProjectStatusTypeId.  projects.Sort((x, y) =>        Comparer<String>.Default             .Compare(x.Description, y.Description) != 0 ?        Comparer<String>.Default             .Compare(x.Description, y.Description) :        Comparer<String>.Default             .Compare(GetProjectStatusTypeName(x.ProjectStatusTypeId),                 GetProjectStatusTypeName(y.ProjectStatusTypeId)));   foreach (var project in projects) {     Console.Out.WriteLine("{0} {1}", project.Description,         GetProjectStatusTypeName(project.ProjectStatusTypeId)); } The comparer will now consider the result of the GetProjectStatusTypeName and order the list accordingly.  Of course, you can take this same approach with Linq as well.
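For completeness, a sketch of the equivalent LINQ form of that last example, assuming the same GetProjectStatusTypeName helper used in the article:

    // Sketch: secondary sort on the resolved status name rather than the raw id.
    projects = projects
        .OrderBy(x => x.Description)
        .ThenBy(x => GetProjectStatusTypeName(x.ProjectStatusTypeId))
        .ToList();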

    Read the article

  • Rules and advice for logging?

    - by Nick Rosencrantz
    In my organization we've put together some rules / guildelines about logging that I would like to know if you can add to or comment. We use Java but you may comment in general about loggin - rules and advice Use the correct logging level ERROR: Something has gone very wrong and need fixing immediately WARNING: The process can continue without fixing. The application should tolerate this level but the warning should always get investigated. INFO: Information that an important process is finished DEBUG. Is only used during development Make sure that you know what you're logging. Avoid that the logging influences the behavior of the application The function of the logging should be to write messages in the log. Log messages should be descriptive, clear, short and concise. There is not much use of a nonsense message when troubleshooting. Put the right properties in log4j Put in that the right method and class is written automatically. Example: Datedfile -web log4j.rootLogger=ERROR, DATEDFILE log4j.logger.org.springframework=INFO log4j.logger.waffle=ERROR log4j.logger.se.prv=INFO log4j.logger.se.prv.common.mvc=INFO log4j.logger.se.prv.omklassning=DEBUG log4j.appender.DATEDFILE=biz.minaret.log4j.DatedFileAppender log4j.appender.DATEDFILE.layout=org.apache.log4j.PatternLayout log4j.appender.DATEDFILE.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%C{1}.%M] - %m%n log4j.appender.DATEDFILE.Prefix=omklassning. log4j.appender.DATEDFILE.Suffix=.log log4j.appender.DATEDFILE.Directory=//localhost/WebSphereLog/omklassning/ Log value. Please log values from the application. Log prefix. State which part of the application it is that the logging is written from, preferably with something for the project agreed prefix e.g. PANDORA_DB The amount of text. Be careful so that there is not too much logging text. It can influence the performance of the app. Loggning format: -There are several variants and methods to use with log4j but we would like a uniform use of the following format, when we log at exceptions: logger.error("PANDORA_DB2: Fel vid hämtning av frist i TP210_RAPPORTFRIST", e); In the example above it is assumed that we have set log4j properties so that it automatically write the class and the method. Always use logger and not the following: System.out.println(), System.err.println(), e.printStackTrace() If the web app uses our framework you can get very detailed error information from EJB, if using try-catch in the handler and logging according to the model above: In our project we use this conversion pattern with which method and class names are written out automatically . Here we use two different pattents for console and for datedfileappender: log4j.appender.CONSOLE.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n log4j.appender.DATEDFILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n In both the examples above method and class wioll be written out. In the console row number will also be written our. toString() Please have a toString() for every object. 
EX: @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append(" DwfInformation [ "); sb.append("cc: ").append(cc); sb.append("pn: ").append(pn); sb.append("kc: ").append(kc); sb.append("numberOfPages: ").append(numberOfPages); sb.append("publicationDate: ").append(publicationDate); sb.append("version: ").append(version); sb.append(" ]"); return sb.toString(); } instead of special method which make these outputs public void printAll() { logger.info("inbet: " + getInbetInput()); logger.info("betdat: " + betdat); logger.info("betid: " + betid); logger.info("send: " + send); logger.info("appr: " + appr); logger.info("rereg: " + rereg); logger.info("NY: " + ny); logger.info("CNT: " + cnt); } So is there anything you can add, comment or find questionable with these ways of using the logging? Feel free to answer or comment even if it is not related to Java, Java and log4j is just an implementation of how this is reasoned.
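A small Java sketch of the conventions above (the class, method and message text are illustrative): log through a class-level logger, prefix messages with the agreed project tag, pass the caught exception so log4j prints the stack trace, and guard DEBUG output so it costs nothing in production:

    import org.apache.log4j.Logger;

    public class RapportFristDao {

        private static final Logger logger = Logger.getLogger(RapportFristDao.class);

        public void hamtaFrist(long id) {
            try {
                // ... database call ...
            } catch (Exception e) {
                // ERROR: something has gone very wrong and needs fixing immediately.
                logger.error("PANDORA_DB: Fel vid hämtning av frist i TP210_RAPPORTFRIST, id=" + id, e);
            }

            // DEBUG is only used during development; the guard avoids building the message otherwise.
            if (logger.isDebugEnabled()) {
                logger.debug("PANDORA_DB: hamtaFrist klar, id=" + id);
            }
        }
    }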

    Read the article

  • LINQ to Twitter Queries with LINQPad

    - by Joe Mayo
    LINQPad is a popular utility for .NET developers who use LINQ a lot.  In addition to standard SQL queries, LINQPad also supports other types of LINQ providers, including LINQ to Twitter.  The following sections explain how to set up LINQPad for making queries with LINQ to Twitter. LINQPad comes in a couple versions and this example uses LINQPad4, which runs on the .NET Framework 4.0. 1. The first thing you'll need to do is set up a reference to the LinqToTwitter.dll. From the Query menu, select query properties. Click the Browse button and find the LinqToTwitter.dll binary. You should see something similar to the Query Properties window below. 2. While you have the query properties window open, add the namespace for the LINQ to Twitter types.  Click the Additional Namespace Imports tab and type in LinqToTwitter. The results are shown below: 3. The default query type, when you first start LINQPad, is C# Expression, but you'll need to change this to support multiple statements.  Change the Language dropdown, on the Main window, to C# Statements. 4. To query LINQ to Twitter, instantiate a TwitterContext, by typing the following into the LINQPad Query window: var ctx = new TwitterContext(); Note: If you're getting syntax errors, go back and make sure you did steps #2 and #3 properly. 5. Next, add a query, but don't materialize it, like this: var tweets = from tweet in ctx.Status where tweet.Type == StatusType.Public select new { tweet.Text, tweet.Geo, tweet.User }; 6. Next, you want the output to be displayed in the LINQPad grid, so do a Dump, like this: tweets.Dump(); The following image shows the final results:   That was an unauthenticated query, but you can also perform authenticated queries with LINQ to Twitter's support of OAuth.  Here's an example that uses the PinAuthorizer (type this into the LINQPad Query window): var auth = new PinAuthorizer { Credentials = new InMemoryCredentials { ConsumerKey = "", ConsumerSecret = "" }, UseCompression = true, GoToTwitterAuthorization = pageLink => Process.Start(pageLink), GetPin = () => { // this executes after user authorizes, which begins with the call to auth.Authorize() below. Console.WriteLine("\nAfter you authorize this application, Twitter will give you a 7-digit PIN Number.\n"); Console.Write("Enter the PIN number here: "); return Console.ReadLine(); } }; // start the authorization process (launches Twitter authorization page). auth.Authorize(); var ctx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"); var tweets = from tweet in ctx.Status where tweet.Type == StatusType.Public select new { tweet.Text, tweet.Geo, tweet.User }; tweets.Dump(); This code is very similar to what you'll find in the LINQ to Twitter downloadable source code solution, in the LinqToTwitterDemo project.  For obvious reasons, I changed the value assigned to ConsumerKey and ConsumerSecret, which you'll have to obtain by visiting http://dev.twitter.com and registering your application. One tip, you'll probably want to make this easier on yourself by creating your own DLL that encapsulates all of the OAuth logic and then call a method or property on you custom class that returns a fully functioning TwitterContext.  This will help avoid adding all this code every time you want to make a query. Now, you know how to set up LINQPad for LINQ to Twitter, perform unauthenticated queries, and perform queries with OAuth. Joe
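A rough sketch of the "wrap the OAuth plumbing in your own DLL" tip from the last paragraph, reusing the same LINQ to Twitter types shown above (the class and method names are illustrative):

    using System;
    using System.Diagnostics;
    using LinqToTwitter;

    public static class TwitterContextFactory
    {
        // Returns a ready-to-use TwitterContext so each LINQPad query stays short.
        public static TwitterContext CreatePinAuthorizedContext(string consumerKey, string consumerSecret)
        {
            var auth = new PinAuthorizer
            {
                Credentials = new InMemoryCredentials
                {
                    ConsumerKey = consumerKey,
                    ConsumerSecret = consumerSecret
                },
                UseCompression = true,
                GoToTwitterAuthorization = pageLink => Process.Start(pageLink),
                GetPin = () =>
                {
                    Console.Write("Enter the PIN number here: ");
                    return Console.ReadLine();
                }
            };

            auth.Authorize();

            return new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/");
        }
    }

With that assembly referenced in LINQPad, a query reduces to var ctx = TwitterContextFactory.CreatePinAuthorizedContext(myKey, mySecret); followed by the LINQ query and Dump().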

    Read the article

  • Ubuntu Server 12 not spawning a serial ttyS0 when running on Xen

    - by segfaultreloaded
    I have this problem on more than one host, so the specific hardware is not an issue. Bare metal Ubuntu 12 is not creating a login process on the only serial port, in the default configuration. The serial port works correctly with the firmware. It works correctly with Grub2. I have even connected the serial line to 2 different external client boxes, so the problem is neither the hardware nor the remote client. When finally booted, the system fails to create the login process. root@xenpro3:~# ps ax | grep tty 1229 tty4 Ss+ 0:00 /sbin/getty -8 38400 tty4 1233 tty5 Ss+ 0:00 /sbin/getty -8 38400 tty5 1239 tty2 Ss+ 0:00 /sbin/getty -8 38400 tty2 1241 tty3 Ss+ 0:00 /sbin/getty -8 38400 tty3 1245 tty6 Ss+ 0:00 /sbin/getty -8 38400 tty6 1403 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1 1996 pts/0 S+ 0:00 grep --color=auto tty root@xenpro3:~# dmesg | grep tty [ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-3.2.0-30-generic root=/dev/mapper/xenpro3-root ro console=ttyS0,115200n8 [ 0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-30-generic root=/dev/mapper/xenpro3-root ro console=ttyS0,115200n8 [ 0.000000] console [ttyS0] enabled [ 2.160986] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.203396] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.263296] 00:08: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.323102] 00:09: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A root@xenpro3:~# uname -a Linux xenpro3 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux root@xenpro3:~# I have tried putting a ttyS0.conf file in /etc/initab, which solves the problem bare metal but I still cannot get the serial port to work when booting Ubuntu on top of Xen, as domain 0. My serial line output looks like this, when booting Xen /dev/ttyS0 at 0x03f8 (irq = 4) is a 16550A * Exporting directories for NFS kernel daemon... [ OK ] * Starting NFS kernel daemon [ OK ] SSL tunnels disabled, see /etc/default/stunnel4 [ 18.654627] XENBUS: Unable to read cpu state [ 18.659631] XENBUS: Unable to read cpu state [ 18.664398] XENBUS: Unable to read cpu state [ 18.669248] XENBUS: Unable to read cpu state * Starting Xen daemons [ OK ] mountall: Disconnected from Plymouth At this point, the serial line is no longer connected to a process. Xen itself is running just fine. Dmesg gives me a long list of [ 120.236841] init: ttyS0 main process ended, respawning [ 120.239717] ttyS0: LSR safety check engaged! [ 130.240265] init: ttyS0 main process (1631) terminated with status 1 [ 130.240294] init: ttyS0 main process ended, respawning [ 130.242970] ttyS0: LSR safety check engaged! which is no surprise because I see root@xenpro3:~# ls -l /dev/ttyS? crw-rw---- 1 root tty 4, 64 Nov 7 14:04 /dev/ttyS0 crw-rw---- 1 root dialout 4, 65 Nov 7 14:04 /dev/ttyS1 crw-rw---- 1 root dialout 4, 66 Nov 7 14:04 /dev/ttyS2 crw-rw---- 1 root dialout 4, 67 Nov 7 14:04 /dev/ttyS3 crw-rw---- 1 root dialout 4, 68 Nov 7 14:04 /dev/ttyS4 crw-rw---- 1 root dialout 4, 69 Nov 7 14:04 /dev/ttyS5 crw-rw---- 1 root dialout 4, 70 Nov 7 14:04 /dev/ttyS6 crw-rw---- 1 root dialout 4, 71 Nov 7 14:04 /dev/ttyS7 crw-rw---- 1 root dialout 4, 72 Nov 7 14:04 /dev/ttyS8 crw-rw---- 1 root dialout 4, 73 Nov 7 14:04 /dev/ttyS9 If I manually change the group of /dev/ttyS0 to dialout, it gets changed back. I have made no changes to the default udev rules, so I cannot see where this problem is coming from. Sincerely, John
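For reference, a hypothetical /etc/init/ttyS0.conf upstart job of the kind the poster mentions trying, with the speed taken from the console=ttyS0,115200n8 kernel argument (this matches the usual Ubuntu 12.04 recipe for a serial getty, but it is not a confirmed fix for the Xen dom0 case described here):

    # /etc/init/ttyS0.conf - spawn a login getty on the serial console (sketch)
    start on runlevel [2345]
    stop on runlevel [!2345]

    respawn
    exec /sbin/getty -L 115200 ttyS0 vt102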

    Read the article

  • Converting a PV vm back into an HVM vm

    - by wim.coekaerts
    I have been doing some Oracle VM benchmark stuff in the last week or 2 in my off hours and yesterday I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel. It took me a little while to figure out the fastest way so now that I have it pretty much down I wanted to share the steps. A PV kernel uses pygrub and a paravirt kernel image that lives on the vm image virtual disk. since this disk image does not have to be bootable it doesn't contain a boot sector and if you just restart the VM in hvm mode the virtual bios will just not do much as it can't start the boot process from disk The first thing I do is make a backup of my vm.cfg file :-) and then edit it as follows : the original file contains : bootloader = '/usr/bin/pygrub' I replace that with : acpi = 1 apic = 1 builder = 'hvm' device_model = '/usr/lib/xen/bin/qemu-dm' kernel = '/usr/lib/xen/boot/hvmloader' then changing the disk files. I change my xvd disks to hd disks and I copy over the iso image of my instal lDVD. In the case of my VM template it was based on OL5U4 So I downloaded Enterprise-R5-U4-Server-x86_64-dvd.iso and added it as a cd device. disk = ['file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/System.img,xvda,w', 'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Oracle11202RAC_x86_64-xvdb.img,xvdb,w', ] to disk = ['file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/System.img,hda,w', 'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Oracle11202RAC_x86_64-xvdb.img,hdb,w', 'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Enterprise-R5-U4-Server-x86_64-dvd.iso, hdc:cdrom,r', ] boot='d' for the network devices (vifs) I change : vif = ['bridge=xenbr2,type=netfront'] to vif = ['bridge=xenbr2,type=ioemu'] That should do it. Next, inside the VM, I copy over the regular kernel rpm that I want to end up running in hvm mode. In this example case it was : kernel-2.6.18-164.0.0.0.1.el5.x8664.rpm. I will use that later on in the process. I put this kernel simply in /root At this point I just start the vm with xm create vm.cfg and start my vnc console to the vm console. Oracle Linux will boot from the iso image, I just go through the install steps and click on UPgrade existing (not re-install). Because the VM is the same as the ISO the install won't actually do anything and it will run through instantly. When the "Reboot" button pops up, don't reboot. Switch to the command prompt console. hi alt-f2 to go to the shell prompt. Now it's easy : umount /mnt/sysimage/boot cd /mnt/sysimage chroot . mount /dev/hda1 (if that was your /boot partition) export PATH=/sbin:$PATH (just to clean that up) edit /etc/modprobe.conf and comment out the xen modules (just put a # in front) Install grub. if your /boot is hda1 then that is (hd0,0) $ grub root (hd0,0) setup (hd0) exit grub now you have a good bootsector, grub installed and you have your grub.conf file Install the new kernel cd root (this is your old /root in your pv image) rpm -ivh remove (or comment out) boot='d' in your vm.cfg restart the VM and you should be good to go, regular grub should start and load your environment. Caveats : this assumes you used labels for your filesystems. if /etc/fstab were to have devices listed then you would have to rename these device before rebooting as well. If you had a /dev/xvda disk then this would be /dev/hda or /dev/sda. All in all it is a relatively short and simple process.

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my Stepping outside the Visual Studio IDE, is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of Linux applications. So, to clarify. You can use Mono to develop .NET applications that will run on Linux, Windows or Mac. It’s basically a IDE that has roots in Linux. Let’s first look at the compatibility: Compatibility If you already have an application written in .Net, you can scan your application with the Mono Migration Analyzer (MoMA) to determine if your application uses anything not supported by Mono. The current release version of Mono is 2.6. (Released December 2009) The easiest way to describe what Mono currently supports is: Everything in .NET 3.5 except WPF and WF, limited WCF. Here is a slightly more detailed view, by .NET framework version: Implemented C# 3.0 System.Core LINQ ASP.Net 3.5 ASP.Net MVC C# 2.0 (generics) Core Libraries 2.0: mscorlib, System, System.Xml ASP.Net 2.0 - except WebParts ADO.Net 2.0 Winforms/System.Drawing 2.0 - does not support right-to-left C# 1.0 Core Libraries 1.1: mscorlib, System, System.Xml ASP.Net 1.1 ADO.Net 1.1 Winforms/System.Drawing 1.1 Partially Implemented LINQ to SQL - Mostly done, but a few features missing WCF - silverlight 2.0 subset completed Not Implemented WPF - no plans to implement WF - Will implement WF 4 instead on future versions of Mono. System.Management - does not map to Linux System.EnterpriseServices - deprecated Links to documentation. The Official Mono FAQ’s Links to binaries. Mono IDE Latest Version is 2.6.4 That's it, nothing more is required except to compile and run .net code in Linux. Installation After landing on the mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with Suse Linux. Once the image is launch, you will see the following: I’m not going to go through each option but its best to start with “Start Here” icon. It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is setup properly. This allows you to know that your Mono is working before you try writing or running a more complex application. To write a "Hello World" program follow these steps: Start Mono Development Environment. Create a new Project: File->New->Solution Select "Console Project" in the category list. Enter a project name into the Project name field, for example, "HW Project". Click "Forward" Click “Packaging” then OK. You should have a screen very simular to a VS Console App. Click the "Run" button in the toolbar (Ctrl-F5). Look in the Application Output and you should have the “Hello World!” Your screen should look like the screen below. That should do it for a simple console app in mono. 
To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx, where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software. Databases You can continue to use a SQL Server database, or use MySQL, Postgres, Sybase, Oracle, IBM's DB2 or SQLite. Conclusion I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.

    Read the article

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC.  Really, that was all about how to install Solaris on a SPARC system.  This week, we'll look at how to do the same thing for an x86-based server. Really, the overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog.  Once I verify that the same approach works for Solaris 10, I will provide another update. Booting Solaris 11 on x86 Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server.  You likely will need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number.  The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot.  Go to the ILOM for the server.  Reset the server and start the console.  When the BIOS loads, select the boot menu, usually with Ctrl-P.  This will give you a menu of devices to boot from, including all of the NICs.  Select the NIC you want to boot from.  Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4).  Pick it and start the boot processes.  When it starts to boot, you will see the MAC address for the interface Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case.  This associates the additional network interface with the server.. Creating an OS Provisioning Plan The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot.  Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface.  Update, in this case, the network boot device to be GB_4/net4, or the device corresponding to your network instance number.  Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile. Booting the System Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console.  When the BIOS loads, select boot from the Boot Menu as above.  Select the network interface from the list as before and start the boot process.  When the grub bootloader loads, the default boot image is the Solaris Text Installer.  On the grub menu, select Automated Installer and Ops Center takes over from there. Lessons The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards.  This is great news for modern datacenters using converged network infrastructures.  The process works for both SPARC and x86 Solaris installations.  And it's easy and repeatable.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. This sort of error can be tough to catch, precisely because it only occurs under load and not during typical development testing. To replicate the error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end, and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you're in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page, Walk through Video, Download link (zip), Install from Chocolatey, Source on GitHub. For a more detailed discussion of the whys and hows and some background, continue reading. How did I get here? When I started out on this path, I wasn't planning on building a tool like this myself, but I got frustrated enough looking at what's out there to think that I can do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg... I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn't work well for our scenario, as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey, presumably because the bandwidth required to test over the open Web can be enormous.
When I asked around on Twitter what people were using, I got mostly... crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses, though, with people asking me to let them know what I found; apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool which is quite powerful, but there are a number of issues with that: First, it's tied to Visual Studio, so it's not very portable; you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$); just for the load testing that's rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill and it only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple, and also somewhat limited, but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft's tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn't work anymore on higher end hardware.
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here's what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP: SSL requests and content from Web pages, AJAX calls, SOAP or REST services; again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler's proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default; for those .NET applications you can explicitly override proxy settings to capture those service call requests. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social links, you typically don't want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file; filters let you provide strings that, if contained in a URL, will cause the request to be ignored. Again this is useful if you don't filter by domain but you want to filter out things like static image, CSS and script files, etc. Often you're not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler, and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
The headers are standard HTTP headers, except that the full URL instead of just the domain relative path is stored as part of the first HTTP header line for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand or under program control by writing out a simple text file. You can see what this format looks like in the Capture window figure above; the raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here also matches what Fiddler produces for exports, so it's easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again, it's just plain HTTP headers, so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I've also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It's actually an easy tool to use for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers, you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie, which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work. Running Load Tests Once you've created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there's a live view of requests displayed in a Console-like window. On the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed and what the requests per second count currently is for all requests.
Note that for tests that run over a thousand requests a second it's a good idea to turn off the console display. While the console display is nice to see that something is happening, and also gives you a slight idea of what's happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there's not much to see anyway, as requests roll by way too fast to follow individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it's easy to see what's breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired, so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart, which can be accessed from the Charts menu or shortcut. Here's what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine, and this engine is plugged into the front end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second, showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures, or perhaps for hitting a specific requests per second count, etc. It's also nice to use this as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
Current Status Currently West Wind WebSurge is still in Beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site; there's an introductory price, which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison... The reason the code is not available yet is, well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex; yeah, that never happens, right? Unless there's a lot of interest I don't foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so; it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home, Download West Wind WebSurge, Getting Started with West Wind WebSurge Video. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • C#/.NET Little Wonders: The Generic Func Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Back in one of my three original "Little Wonders" Trilogy of posts, I had listed generic delegates as one of the Little Wonders of .NET.  Later, someone posted a comment saying that they would love more detail on the generic delegates and their uses, since my original entry just scratched the surface of them. Last week, I began our look at some of the handy generic delegates built into .NET with a description of delegates in general, and the Action family of delegates.  For this week, I'll launch into a look at the Func family of generic delegates and how they can be used to support generic, reusable algorithms and classes. Quick Delegate Recap Delegates are similar to function pointers in C++ in that they allow you to store a reference to a method.  They can store references to either static or instance methods, and can actually be used to chain several methods together in one delegate. Delegates are very type-safe and can be satisfied with any standard method, anonymous method, or a lambda expression.  They can also be null (referring to no method), so care should be taken to make sure that the delegate is not null before you invoke it. Delegates are defined using the keyword delegate, where the delegate's type name is placed where you would typically place the method name: 1: // This delegate matches any method that takes string, returns nothing 2: public delegate void Log(string message); This defines a delegate type named Log that can be used to store references to any method(s) that satisfy its signature (whether instance, static, lambda expression, etc.). Delegate instances can then be assigned zero (null) or more methods using the operator =, which replaces the existing delegate chain, or by using the operator +=, which adds a method to the end of a delegate chain: 1: // creates a delegate instance named currentLogger defaulted to Console.WriteLine (static method) 2: Log currentLogger = Console.Out.WriteLine; 3:  4: // invokes the delegate, which writes to the console out 5: currentLogger("Hi Standard Out!"); 6:  7: // append a delegate to Console.Error.WriteLine to go to std error 8: currentLogger += Console.Error.WriteLine; 9:  10: // invokes the delegate chain and writes message to std out and std err 11: currentLogger("Hi Standard Out and Error!"); While delegates give us a lot of power, it can be cumbersome to re-create fairly standard delegate definitions repeatedly; for this purpose the generic delegates were introduced in various stages in .NET.  These support various method types with particular signatures. Note: a caveat with generic delegates is that while they can support multiple parameters, they do not match methods that contain ref or out parameters. If you want a delegate to represent methods that take ref or out parameters, you will need to create a custom delegate. We've got the Func… delegates Just like its cousin, the Action delegate family, the Func delegate family gives us a lot of power to use generic delegates to make classes and algorithms more generic.  Using them keeps us from having to define a new delegate type when we need to make a class or algorithm generic. Remember that the point of the Action delegate family was to be able to perform an "action" on an item, with no return results.
Thus Action delegates can be used to represent most methods that take 0 to 16 arguments but return void.  You can assign any matching method, anonymous method, or lambda expression to an Action delegate, just as with any other delegate. The Func delegate family was introduced in .NET 3.5 with the advent of LINQ, and gives us the power to define a function that can be called on 0 to 16 arguments and returns a result.  Thus, the main difference between Action and Func, from a delegate perspective, is that Actions return nothing, but Funcs return a result. The Func family of delegates has signatures as follows: Func<TResult> – matches a method that takes no arguments, and returns a value of type TResult. Func<T, TResult> – matches a method that takes an argument of type T, and returns a value of type TResult. Func<T1, T2, TResult> – matches a method that takes arguments of type T1 and T2, and returns a value of type TResult. Func<T1, T2, …, TResult> – and so on up to 16 arguments, returning a value of type TResult. These are handy because they let you quickly specify that a method or class you design will perform a function to produce a result, as long as the method you supply meets the signature. For example, let's say you were designing a generic aggregator, and you wanted to allow the user to define how the values will be aggregated into the result (i.e. Sum, Min, Max, etc…).  To do this, we would ask the user of our class to pass in a method that would take the current total, the next value, and produce a new total.  A class like this could look like: 1: public sealed class Aggregator<TValue, TResult> 2: { 3: // holds method that takes previous result, combines with next value, creates new result 4: private Func<TResult, TValue, TResult> _aggregationMethod; 5:  6: // gets or sets the current result of aggregation 7: public TResult Result { get; private set; } 8:  9: // construct the aggregator given the method to use to aggregate values 10: public Aggregator(Func<TResult, TValue, TResult> aggregationMethod = null) 11: { 12: if (aggregationMethod == null) throw new ArgumentNullException("aggregationMethod"); 13:  14: _aggregationMethod = aggregationMethod; 15: } 16:  17: // method to add next value 18: public void Aggregate(TValue nextValue) 19: { 20: // performs the aggregation method function on the current result and next and sets to current result 21: Result = _aggregationMethod(Result, nextValue); 22: } 23: } Of course, LINQ already has an Aggregate extension method, but that works on a sequence of IEnumerable<T>, whereas this is designed to work more with aggregating single results over time (such as keeping track of a max response time for a service).
We could then use this generic aggregator to find the sum of a series of values over time, or the max of a series of values over time (among other things): 1: // creates an aggregator that adds the next to the total to sum the values 2: var sumAggregator = new Aggregator<int, int>((total, next) => total + next); 3:  4: // creates an aggregator (using static method) that returns the max of previous result and next 5: var maxAggregator = new Aggregator<int, int>(Math.Max); So, if we were timing the response time of a web method every time it was called, we could pass that response time to both of these aggregators to get an idea of the total time spent in that web method, and the max time spent in any one call to the web method: 1: // total will be 13 and max 13 2: int responseTime = 13; 3: sumAggregator.Aggregate(responseTime); 4: maxAggregator.Aggregate(responseTime); 5:  6: // total will be 20 and max still 13 7: responseTime = 7; 8: sumAggregator.Aggregate(responseTime); 9: maxAggregator.Aggregate(responseTime); 10:  11: // total will be 40 and max now 20 12: responseTime = 20; 13: sumAggregator.Aggregate(responseTime); 14: maxAggregator.Aggregate(responseTime); The Func delegate family is useful for making generic algorithms and classes, and in particular allows the caller of the method or user of the class to specify a function to be performed in order to generate a result. What is the result of a Func delegate chain? If you remember, we said earlier that you can assign multiple methods to a delegate by using the += operator to chain them.  So how does this affect delegates such as Func that return a value, when applied to something like the code below? 1: Func<int, int, int> combo = null; 2:  3: // What if we wanted to aggregate the sum and max together? 4: combo += (total, next) => total + next; 5: combo += Math.Max; 6:  7: // what is the result? 8: var comboAggregator = new Aggregator<int, int>(combo); Well, in .NET if you chain multiple methods in a delegate, they will all get invoked, but the result of the delegate is the result of the last method invoked in the chain.  Thus, this aggregator would always result in the Math.Max() result.  The other chained method (the sum) gets executed first, but it’s result is thrown away: 1: // result is 13 2: int responseTime = 13; 3: comboAggregator.Aggregate(responseTime); 4:  5: // result is still 13 6: responseTime = 7; 7: comboAggregator.Aggregate(responseTime); 8:  9: // result is now 20 10: responseTime = 20; 11: comboAggregator.Aggregate(responseTime); So remember, you can chain multiple Func (or other delegates that return values) together, but if you do so you will only get the last executed result. Func delegates and co-variance/contra-variance in .NET 4.0 Just like the Action delegate, as of .NET 4.0, the Func delegate family is contra-variant on its arguments.  In addition, it is co-variant on its return type.  To support this, in .NET 4.0 the signatures of the Func delegates changed to: Func<out TResult> – matches a method that takes no arguments, and returns value of type TResult (or a more derived type). Func<in T, out TResult> – matches a method that takes an argument of type T (or a less derived type), and returns value of type TResult(or a more derived type). Func<in T1, in T2, out TResult> – matches a method that takes arguments of type T1 and T2 (or less derived types), and returns value of type TResult (or a more derived type). 
Func<in T1, in T2, …, out TResult> – and so on up to 16 arguments, and returns a value of type TResult (or a more derived type). Notice the addition of the in and out keywords before each of the generic type placeholders.  As we saw last week, the in keyword is used to specify that a generic type can be contra-variant -- it can match the given type or a type that is less derived.  However, the out keyword is used to specify that a generic type can be co-variant -- it can match the given type or a type that is more derived. On contra-variance, if you are saying you need a function that will accept a string, you can just as easily give it a function that accepts an object.  In other words, if you say "give me a function that will process dogs", I could pass you a method that will process any animal, because all dogs are animals.  On the co-variance side, if you are saying you need a function that returns an object, you can just as easily pass it a function that returns a string, because any string returned from the given method can be accepted by a delegate expecting an object result, since string is more derived.  Once again, in other words, if you say "give me a method that creates an animal", I can pass you a method that will create a dog, because all dogs are animals. It really all makes sense: you can pass a more specific thing to a less specific parameter, and you can return a more specific thing as a less specific result.  In other words, pay attention to the direction the item travels (parameters go in, results come out).  Keeping that in mind, you can always pass more specific things in and return more specific things out. For example, in the code below, we have a method that takes a Func<object> to generate an object, but we can pass it a Func<string> because the return type of object can obviously accept a return value of string as well: 1: // since Func<object> is co-variant, this will accept Func<string>, etc... 2: public static string Sequence(int count, Func<object> generator) 3: { 4: var builder = new StringBuilder(); 5:  6: for (int i=0; i<count; i++) 7: { 8: object value = generator(); 9: builder.Append(value); 10: } 11:  12: return builder.ToString(); 13: } Even though the method above takes a Func<object>, we can pass a Func<string> because the TResult type placeholder is co-variant and accepts types that are more derived as well: 1: // delegate that's typed to return string. 2: Func<string> stringGenerator = () => DateTime.Now.ToString(); 3:  4: // This will work in .NET 4.0, but not in previous versions 5: Sequence(100, stringGenerator); Previous versions of .NET implemented some forms of co-variance and contra-variance before, but .NET 4.0 goes one step further and allows you to pass or assign a Func<A, BResult> to a Func<Y, ZResult> as long as A is less derived than (or the same as) Y, and BResult is more derived than (or the same as) ZResult. Sidebar: The Func and the Predicate A method that takes one argument and returns a bool is generally thought of as a predicate.  Predicates are used to examine an item and determine whether that item satisfies a particular condition.  Predicates are typically unary, but you may also have binary and other predicates as well.
Predicates are often used to filter results, such as in the LINQ Where() extension method: 1: var numbers = new[] { 1, 2, 4, 13, 8, 10, 27 }; 2:  3: // call Where() using a predicate which determines if the number is even 4: var evens = numbers.Where(num => num % 2 == 0); As of .NET 3.5, predicates are typically represented as Func<T, bool> where T is the type of the item to examine.  Previous to .NET 3.5, there was a Predicate<T> type that tended to be used (which we’ll discuss next week) and is still supported, but most developers recommend using Func<T, bool> now, as it prevents confusion with overloads that accept unary predicates and binary predicates, etc.: 1: // this seems more confusing as an overload set, because of Predicate vs Func 2: public static SomeMethod(Predicate<int> unaryPredicate) { } 3: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } 4:  5: // this seems more consistent as an overload set, since just uses Func 6: public static SomeMethod(Func<int, bool> unaryPredicate) { } 7: public static SomeMethod(Func<int, int, bool> binaryPredicate) { } Also, even though Predicate<T> and Func<T, bool> match the same signatures, they are separate types!  Thus you cannot assign a Predicate<T> instance to a Func<T, bool> instance and vice versa: 1: // the same method, lambda expression, etc can be assigned to both 2: Predicate<int> isEven = i => (i % 2) == 0; 3: Func<int, bool> alsoIsEven = i => (i % 2) == 0; 4:  5: // but the delegate instances cannot be directly assigned, strongly typed! 6: // ERROR: cannot convert type... 7: isEven = alsoIsEven; 8:  9: // however, you can assign by wrapping in a new instance: 10: isEven = new Predicate<int>(alsoIsEven); 11: alsoIsEven = new Func<int, bool>(isEven); So, the general advice that seems to come from most developers is that Predicate<T> is still supported, but we should use Func<T, bool> for consistency in .NET 3.5 and above. Sidebar: Func as a Generator for Unit Testing One area of difficulty in unit testing can be unit testing code that is based on time of day.  We’d still want to unit test our code to make sure the logic is accurate, but we don’t want the results of our unit tests to be dependent on the time they are run. One way (of many) around this is to create an internal generator that will produce the “current” time of day.  This would default to returning result from DateTime.Now (or some other method), but we could inject specific times for our unit testing.  Generators are typically methods that return (generate) a value for use in a class/method. For example, say we are creating a CacheItem<T> class that represents an item in the cache, and we want to make sure the item shows as expired if the age is more than 30 seconds.  
Such a class could look like: 1: // responsible for maintaining an item of type T in the cache 2: public sealed class CacheItem<T> 3: { 4: // helper method that returns the current time 5: private static Func<DateTime> _timeGenerator = () => DateTime.Now; 6:  7: // allows internal access to the time generator 8: internal static Func<DateTime> TimeGenerator 9: { 10: get { return _timeGenerator; } 11: set { _timeGenerator = value; } 12: } 13:  14: // time the item was cached 15: public DateTime CachedTime { get; private set; } 16:  17: // the item cached 18: public T Value { get; private set; } 19:  20: // item is expired if older than 30 seconds 21: public bool IsExpired 22: { 23: get { return _timeGenerator() - CachedTime > TimeSpan.FromSeconds(30.0); } 24: } 25:  26: // creates the new cached item, setting cached time to "current" time 27: public CacheItem(T value) 28: { 29: Value = value; 30: CachedTime = _timeGenerator(); 31: } 32: } Then, we can use this construct to unit test our CacheItem<T> without any time dependencies: 1: var baseTime = DateTime.Now; 2:  3: // start with current time stored above (so doesn't drift) 4: CacheItem<int>.TimeGenerator = () => baseTime; 5:  6: var target = new CacheItem<int>(13); 7:  8: // now add 15 seconds, should still be non-expired 9: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(15); 10:  11: Assert.IsFalse(target.IsExpired); 12:  13: // now add 31 seconds, should now be expired 14: CacheItem<int>.TimeGenerator = () => baseTime.AddSeconds(31); 15:  16: Assert.IsTrue(target.IsExpired); Now we can unit test for 1 second before, 1 second after, 1 millisecond before, 1 day after, etc.  Func delegates can be a handy tool for this type of value generation to support more testable code.  Summary Generic delegates give us a lot of power to make truly generic algorithms and classes.  The Func family of delegates is a great way to be able to specify functions to calculate a result based on 0-16 arguments.  Stay tuned in the weeks that follow for other generic delegates in the .NET Framework!   Tweet Technorati Tags: .NET, C#, CSharp, Little Wonders, Generics, Func, Delegates

    Read the article

  • jQuery: How to check if a value exists in an array?

    - by Jannis
    Hello, I am trying to write a simple input field validation plugin at the moment (more of a learning exercise really) and thought this would not be too hard, seeing as all I should have to do is: Get input fields Store them in array with each one's value On submit of form check if array contains any empty strings But I seem to fail at writing something that checks for an empty string (read: input with no text inside) inside my array. Here is the code I have so far: var form = $(this), // passed in form element inputs = form.find('input'), // all of this forms input fields isValid = false; // initially set the form to not be valid function validate() { var fields = inputs.serializeArray(); // make an array out of input fields // start -- this does not work for (var n in fields) { if (fields[n].value == "") { isValid = false; console.log('failed'); } else { isValid = true; console.log('passed'); }; } // end -- this does not work }; // close validate() // TRIGGERS inputs.live('keyup blur', function(event) { validate(); }); Any help with how I can check if one of the fields is blank and if so return a isValid = false would be much appreciated. I also played around with the $.inArray("", fields) but this would never return 0 or 1 even when the console.log showed that the fields had no value. Thanks for reading.

    Read the article

  • Exception on SslStream.AuthenticateAsClient (The message was badly formatted)

    - by Noms
    I have got a weird problem going on. I am trying to connect to an Apple server via TCP/SSL, using a client certificate provided by Apple for push notifications. I installed the certificate on my server (Win2k3) in both the local Trusted Root certificates and local Personal certificates folders. Now I have a class library that deals with that connection; when I call this class library from a console application running on the server it works absolutely fine, but when I call that class library from an asp.net page or asmx web service I get the following exception: A call to SSPI failed, see inner exception. The message received was unexpected or badly formatted. This is my code: X509Certificate cert = new X509Certificate(certificateLocation, certificatePassword); X509CertificateCollection certCollection = new X509CertificateCollection(new X509Certificate[1] { cert }); // OPEN the new SSL Stream SslStream ssl = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null); ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false); ssl.AuthenticateAsClient is where the error gets thrown. This is driving me nuts. If the console application can connect fine, there must be some problem with asp.net network layer security that is failing the authentication... not sure, perhaps I need to add something or some sort of security policy in the web.config. Also, just to point out, I can connect fine on my local development machine both with the console and the website. Anyone got any ideas?
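    One difference between a console host and an ASP.NET host that often matters with client certificates is access to the certificate's private key: under ASP.NET there is no loaded user profile, and the worker process identity may not be able to read a key stored in a user store. The snippet below is a hedged sketch, not a confirmed fix; it reuses the variable names from the question and loads the certificate with MachineKeySet so the key does not depend on a user-profile store. If this route helps, granting the application pool identity read access to the key is usually the companion step.
        // Sketch only: load the client certificate into the machine key set so the
        // private key is reachable from the ASP.NET worker process as well.
        X509Certificate2 cert = new X509Certificate2(
            certificateLocation,
            certificatePassword,
            X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

        X509CertificateCollection certCollection =
            new X509CertificateCollection(new X509Certificate[] { cert });

        using (SslStream ssl = new SslStream(client.GetStream(), false,
            new RemoteCertificateValidationCallback(ValidateServerCertificate), null))
        {
            ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false);
            // ... write the push notification payload to ssl here
        }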

    Read the article

  • Issues integrating NCover with CC.NET, .NET framework 4.0 and MsTest

    - by Nikhil
    I'm implementing continuous integration with CruiseControl.NET, .NET 4.0, NCover and MsTest. On the build server I'm unable to run code coverage from the Ncover explorer or NCover console. When I run where vstesthost.exe from the Ncover console it returns the Visual Studio 9.0 path and does not seem to pick up .net framework 4.0. I've followed instructions from this MSTest: Measuring Test Quality With NCover post with slight modifications for .net framework 4.0, without any success. My CC.NET script looks like this <exec> <executable>C:\Program Files (x86)\NCover\NCover.Console.exe</executable> <baseDirectory>$(project_root)\</baseDirectory> <buildArgs>"C:\Program Files (x86)\**Microsoft Visual Studio 10.0**\Common7\IDE\MSTest.exe" /testcontainer:...\...\UnitTests.dll /resultsfile:TestResults.trx //xml D:\_Projects\....\Temp_Coverage.xml //pm vstesthost.exe</buildArgs> <buildTimeoutSeconds>$(ncover.timeout)</buildTimeoutSeconds> </exec> Has anyone come across similar issue. Any help would be much appreciated.

    Read the article

  • C# performance analysis- how to count CPU cycles?

    - by Lirik
    Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting: class PerformanceTest { static double last = 0.0; static List<object> numericGenericData = new List<object>(); static List<double> numericTypedData = new List<double>(); static void Main(string[] args) { double totalWithCasting = 0.0; double totalWithoutCasting = 0.0; for (double d = 0.0; d < 1000000.0; ++d) { numericGenericData.Add(d); numericTypedData.Add(d); } Stopwatch stopwatch = new Stopwatch(); for (int i = 0; i < 10; ++i) { stopwatch.Start(); testWithTypecasting(); stopwatch.Stop(); totalWithCasting += stopwatch.ElapsedTicks; stopwatch.Start(); testWithoutTypeCasting(); stopwatch.Stop(); totalWithoutCasting += stopwatch.ElapsedTicks; } Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/10)); Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting/10)); Console.ReadKey(); } static void testWithTypecasting() { foreach (object o in numericGenericData) { last = ((double)o*(double)o)/200; } } static void testWithoutTypeCasting() { foreach (double d in numericTypedData) { last = (d * d)/200; } } } The output is: Avg with typecasting = 468872.3 Avg without typecasting = 501157.9 I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?
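    One detail worth noting about the code above: the Stopwatch is never reset between the Start() calls, so ElapsedTicks keeps accumulating across both tests and across loop iterations, which makes the two averages hard to compare. Also, ticks are not nanoseconds; they have to be converted via Stopwatch.Frequency. Below is a minimal sketch of a tighter measurement, an assumption about how one might restructure the loop body, reusing the test methods above and a using System.Diagnostics directive:
        Stopwatch sw = new Stopwatch();

        sw.Start();
        testWithTypecasting();
        sw.Stop();
        // convert ticks to nanoseconds using the timer frequency
        double withCastingNs = sw.ElapsedTicks * (1000000000.0 / Stopwatch.Frequency);

        sw.Reset();   // without Reset, ElapsedTicks keeps growing across measurements
        sw.Start();
        testWithoutTypeCasting();
        sw.Stop();
        double withoutCastingNs = sw.ElapsedTicks * (1000000000.0 / Stopwatch.Frequency);

        Console.WriteLine("With casting:    {0:N0} ns", withCastingNs);
        Console.WriteLine("Without casting: {0:N0} ns", withoutCastingNs);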

    Read the article

  • "Microsoft.ACE.OLEDB.12.0 provider is not registered" [RESOLVED]

    - by Azim
    I have a Visual Studio 2008 solution with two projects (a Word-Template project and a VB.Net console application for testing). Both projects reference a database project which opens a connection to an MS-Access 2007 database file, and both have references to System.Data.OleDb. In the database project I have a function which retrieves a data table, as follows: private class AdminDatabase ' stores the connection string which is set in the New() method dim strAdminConnection as string public sub New() ... adminName = dlgopen.FileName conAdminDB = New OleDbConnection conAdminDB.ConnectionString = "Data Source='" + adminName + "';" + _ "Provider=Microsoft.ACE.OLEDB.12.0" ' store the connection string in strAdminConnection strAdminConnection = conAdminDB.ConnectionString.ToString() My.Settings.SetUserOverride("AdminConnectionString", strAdminConnection) ... End Sub ' retrieves data from the database Public Function getDataTable(ByVal sqlStatement As String) As DataTable Dim ds As New DataSet Dim dt As New DataTable Dim da As New OleDbDataAdapter Dim localCon As New OleDbConnection localCon.ConnectionString = strAdminConnection Using localCon Dim command As OleDbCommand = localCon.CreateCommand() command.CommandText = sqlStatement localCon.Open() da.SelectCommand = command da.Fill(dt) getDataTable = dt End Using End Function End Class When I call this function from my Word 2007 Template project everything works fine; no errors. But when I run it from the console application it throws the following exception: ex = {"The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine."} Both projects have the same reference, and the console application did work when I first wrote it (a while ago) but now it has stopped working. I must be missing something but I don't know what. Any ideas?
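    Not something stated in the post, but a common cause of exactly this symptom is a bitness mismatch: the ACE 12.0 OLE DB provider is registered separately for 32-bit and 64-bit processes, so a console app running as a 64-bit process cannot see a 32-bit-only install (while 32-bit Word keeps working). A hedged diagnostic sketch follows; logging this from both hosts shows whether they differ, and building the console project as x86 (or installing the matching Access Database Engine) is the usual remedy if they do:
        // Prints 32 or 64 depending on the bitness of the current process
        Console.WriteLine("Process is {0}-bit", IntPtr.Size * 8);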

    Read the article

  • Using pinvoke in c# to call sprintf and friends on 64-bit

    - by bde
    I am having an interesting problem with using pinvoke in C# to call _snwprintf. It works for integer types, but not for floating point numbers. This is on 64-bit Windows, it works fine on 32-bit. My code is below, please keep in mind that this is a contrived example to show the behavior I am seeing. class Program { [DllImport("msvcrt.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)] private static extern int _snwprintf([MarshalAs(UnmanagedType.LPWStr)] StringBuilder str, uint length, String format, int p); [DllImport("msvcrt.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)] private static extern int _snwprintf([MarshalAs(UnmanagedType.LPWStr)] StringBuilder str, uint length, String format, double p); static void Main(string[] args) { Double d = 1.0f; Int32 i = 1; Object o = (object)d; StringBuilder str = new StringBuilder(); _snwprintf(str, 32, "%10.1f", (Double)o); Console.WriteLine(str.ToString()); o = (object)i; _snwprintf(str, 32, "%10d", (Int32)o); Console.WriteLine(str.ToString()); Console.ReadKey(); } } The output of this program is 0.0 1 It should print 1.0 on the first line and not 0.0, and so far I am stumped.
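    A detail that fits these symptoms: _snwprintf is a varargs function, and on x64 the Microsoft varargs calling convention expects floating-point arguments to be mirrored into the integer registers, which a P/Invoke declared with a fixed double parameter does not do (integer arguments are unaffected, which matches the output shown). Below is a hedged sketch of one commonly suggested workaround, declaring the import with __arglist so the CLR performs a true vararg call; this is an assumption about the fix, not something verified here:
        [DllImport("msvcrt.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.Cdecl)]
        private static extern int _snwprintf(
            [MarshalAs(UnmanagedType.LPWStr)] StringBuilder str,
            uint length,
            string format,
            __arglist);

        // usage: wrap the variable arguments in __arglist(...)
        _snwprintf(str, 32, "%10.1f", __arglist(1.0));
        _snwprintf(str, 32, "%10d", __arglist(1));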

    Read the article

  • StandardOutput.EndOfStream Hangs

    - by Ashton Halladay
    I'm starting a process within my C# application which runs a console application. I've redirected standard input and output, and am able to read a few lines via StandardOutput.ReadLine(). I'm convinced I have the ProcessStartInfo configured correctly. The console application, when started, outputs a few lines (ending with a "marker" line) and then waits for input. After receiving the input, it again outputs a few lines (ending again with a "marker" line), and so on. My intention is to read lines from it until I receive the "marker" line, at which point I know to send the appropriate input string. My problem is that, after several iterations, the program hangs. Pausing the debugger tends to place the hang within a call to StandardOutput.EndOfStream. This is the case in the following test code: while (!mProcess.StandardOutput.EndOfStream) // Program hangs here. { Console.WriteLine(mProcess.StandardOutput.ReadLine()); } When I'm testing for the "marker" line, I get the same kind of hang if I attempt to access StandardOutput.EndOfStream after reading the line: string line = ""; while (!isMarker(line)) { line = mProcess.StandardOutput.ReadLine(); } bool eos = mProcess.StandardOutput.EndOfStream; // Program hangs here. What might I be doing that causes this property to perform so horribly?
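    For context, StreamReader.EndOfStream has to read ahead to decide whether more data exists, so it blocks whenever the child process is sitting idle waiting for input rather than writing or exiting; that matches the hang described above. Below is a minimal sketch that avoids EndOfStream entirely (it reuses mProcess and isMarker from the question; nextCommand is a hypothetical stand-in for whatever input the console program expects):
        string line;
        // ReadLine() returns null only once the child closes its output stream (i.e. exits)
        while ((line = mProcess.StandardOutput.ReadLine()) != null)
        {
            Console.WriteLine(line);
            if (isMarker(line))
            {
                // the child is now waiting for input, so feed it before reading further
                mProcess.StandardInput.WriteLine(nextCommand);
            }
        }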

    Read the article
