Search Results

Search found 30367 results on 1215 pages for 'service reference'.


  • How to identify each parameter type in a C# method?

    - by user465876
    I have a C# method, say: MyMethod(int num, string name, Color color, MyComplexType complex). Using reflection, how can I distinctly identify each of the parameter types of any method? I want to perform some task based on the parameter type. If the type is a simple int, string or boolean I do one thing; if it is Color, XMLDocument, etc. I do something else; and if it is a user-defined type like MyComplexType or MyCalci I want to perform a different task. I am able to retrieve all the parameters of a method using ParameterInfo and can loop through each parameter and get its type. But how can I identify each data type? foreach (var parameter in parameters) { //identify primitive types?? //identify value types //identify reference types } Edit: this is part of my code to create a property-grid sort of page where I want to show the parameter list with data types for the selected method. If a parameter has a user-defined/reference type, I want to expand it further to show all the elements under it with their data types.
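
    One way to branch on these categories is sketched below (a minimal sketch of my own, not from the question; the "same assembly" test for spotting user-defined types is only a heuristic, and ParameterClassifier is a hypothetical name):

    using System;
    using System.Reflection;

    class ParameterClassifier
    {
        static void Classify(MethodInfo method)
        {
            foreach (ParameterInfo parameter in method.GetParameters())
            {
                Type t = parameter.ParameterType;
                if (t.IsPrimitive || t == typeof(string) || t == typeof(decimal))
                {
                    // int, bool, double, string, ...: simple editable values
                    Console.WriteLine("{0}: simple type {1}", parameter.Name, t.Name);
                }
                else if (t.IsValueType)
                {
                    // structs such as Color or DateTime
                    Console.WriteLine("{0}: value type {1}", parameter.Name, t.Name);
                }
                else if (t.Assembly == method.DeclaringType.Assembly)
                {
                    // heuristic: declared in the same assembly => user-defined reference type,
                    // a candidate for expanding its members in the property grid
                    Console.WriteLine("{0}: user-defined type {1}", parameter.Name, t.Name);
                }
                else
                {
                    // framework reference types such as XMLDocument
                    Console.WriteLine("{0}: framework reference type {1}", parameter.Name, t.Name);
                }
            }
        }
    }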

    Read the article

  • Persistent Logins

    - by Trido
    I am following this article on creating persistent login cookies with my ASP.NET site: http://msdn.microsoft.com/en-us/library/system.web.security.formsauthentication.encrypt.aspx The issue is that when I navigate to the page, I get the following compiler error: Compiler Error Message: CS1061: 'ASP.administration_login_aspx' does not contain a definition for 'Login_Click' and no extension method 'Login_Click' accepting a first argument of type 'ASP.administration_login_aspx' could be found (are you missing a using directive or an assembly reference?) Can anyone tell me why this is? The error message doesn't really say much, and I don't believe I am missing a directive or assembly reference. The code builds without any problems. EDIT: I did not include code because, as I said, I am following that link, which includes the code. I copy/pasted from the example.
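
    CS1061 here usually means the markup wires up an event handler that the page's code-behind class does not declare: the .aspx references OnClick="Login_Click", but the method is missing, misnamed, has the wrong signature, or lives in a class other than the one named by the page's Inherits attribute. A minimal sketch of a matching pair (hypothetical names, not the poster's actual code):

    // Administration/Login.aspx (markup):
    //   <%@ Page Language="C#" CodeFile="Login.aspx.cs" Inherits="Administration_Login" %>
    //   <asp:Button ID="LoginButton" runat="server" OnClick="Login_Click" Text="Log in" />
    //
    // Administration/Login.aspx.cs (code-behind): the class named by Inherits must
    // declare the handler with the standard event signature, or CS1061 is raised.
    using System;

    public partial class Administration_Login : System.Web.UI.Page
    {
        protected void Login_Click(object sender, EventArgs e)
        {
            // issue the persistent forms-authentication cookie here,
            // e.g. via FormsAuthentication.Encrypt as in the linked MSDN example
        }
    }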

    Read the article

  • What to do with "Default Web Site" on a Japanese OS? (E.g. 既定の Web サイト)

    - by Mike Atlas
    I'm currently in the process of upgrading old IIS6 automation scripts that use the iisvdir tool to create/modify/update apps and virtual directories. The iisvdir commands reference metabase paths in IIS6, e.g. "/W3SVC/1/ROOT/MyApp", where 1 is the "Default" website; the command doesn't actually require the display name of the site to make changes to it. With an English OS, the "Default Web Site" site name is fine for use in AppCmd, but if the OS is Japanese, the site will be named "既定の Web サイト". So how can I script AppCmd to refer to sites, vdirs and apps using language-neutral identifiers to reference the "Default Web Site"? (Disclosure: I only have an IIS7 English machine that I am working on currently, but I have both IIS6 English and IIS6 Japanese machines for testing my old scripts - so perhaps it really is just "Default Web Site" still on Win2k8 Japanese?)
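
    One language-neutral approach on IIS7 is to resolve the site's numeric ID (which is not localized) to whatever display name the site actually has, and then feed that name to AppCmd. A minimal sketch of my own using the managed Microsoft.Web.Administration API, assuming the default site kept ID 1:

    using System;
    using System.Linq;
    using Microsoft.Web.Administration; // %windir%\System32\inetsrv\Microsoft.Web.Administration.dll

    class FindDefaultSite
    {
        static void Main()
        {
            using (ServerManager manager = new ServerManager())
            {
                // Site IDs are language-neutral; only the display name is localized.
                Site defaultSite = manager.Sites.FirstOrDefault(s => s.Id == 1);
                if (defaultSite != null)
                {
                    // Use this name when shelling out to AppCmd, whatever the OS language.
                    Console.WriteLine(defaultSite.Name);
                }
            }
        }
    }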

    Read the article

  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
    - JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    - JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    - JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    In this example we will create a BPEL process which will write (enqueue) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects.
    The first two, the Connection Factory and JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example.

    Object Name            Type                JNDI Name
    TestConnectionFactory  Connection Factory  jms/TestConnectionFactory
    TestJMSQueue           JMS Queue           jms/TestJMSQueue
    eis/wls/TestQueue      Connection Pool     eis/wls/TestQueue

    1. Verify Connection Factory and JMS Queue

    As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue TestJMSQueue. As these are prerequisites for this example, let us verify they exist. Log in to the WebLogic Server Administration Console and select Services > JMS Modules > TestJMSModule. You should see both objects there. If not, or if the TestJMSModule is missing, please see the above-mentioned article and create these objects before continuing.

    2. Create a JMS Adapter Connection Pool in WebLogic Server

    The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue. In the WebLogic Server Console, go to Deployments and select (click on) the JmsAdapter. Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This will display the list of connections configured for this adapter, for example eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing: we are expecting to configure a connection pool here, but the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature, we will call our entry eis/wls/TestQueue. This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue! Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and press Next. Enter the JNDI Name eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and select (click on) eis/wls/TestQueue. The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory. (As a reminder, this connection factory is contained in the JMS Module called TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule, which we verified at the beginning of this document.) Enter jms/TestConnectionFactory into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter and then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, you will need to manually activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective.
    This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes. The next step is to redeploy the JmsAdapter. Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the "Summary of Deployments" link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button. On the Update Application Assistant page, select "Redeploy this application using the following deployment files" and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the JMS queue. To summarize: we have created a JMS adapter connection pool connector with the JNDI name eis/wls/TestQueue. This is the JNDI name a process such as a BPEL process will use together with the JMS adapter to access the previously created JMS queue, whose own JNDI name is jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue.

    3. Create a BPEL Composite with a JMS Adapter Partner Link

    This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation. When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. This can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter's input variable is of type base64binary, so the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry. This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element:

    stringPayload.xsd
    <?xml version="1.0" encoding="windows-1252" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns="http://www.example.org"
                targetNamespace="http://www.example.org"
                elementFormDefault="qualified">
      <xsd:element name="exampleElement" type="xsd:string"/>
    </xsd:schema>

    The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests, and, when prompted for a project name and type, call the project JmsAdapterWriteWithXsd and select SOA as the project technology type. If you already have an application, continue below.
    Create a SOA Project

    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.

    Create an XSD File

    An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a single string variable, and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and when the editor opens, select the Source view. Then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree.

    Create a JMS Adapter Partner Link

    We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries:
    - Service Name: JmsAdapterWrite
    - Oracle Enterprise Messaging Service (OEMS): Oracle Weblogic JMS
    - AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the "+" button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5.
    - Adapter Interface > Interface: Define from operation and schema (specified later)
    - Operation Type: Produce Message
    - Operation Name: Produce_message
    - Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created earlier.
    - JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important step in this exercise and the most common source of error. This is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here; if you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
    - Messages URL: We will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.
    Press Next and Finish, which will complete the JMS Adapter configuration.

    Wire the BPEL Component to the JMS Adapter

    In this step, we link the BPEL process/component to the JMS adapter.
    From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter's in-arrow. This completes the steps at the composite level.

    4. Complete the BPEL Process Design

    Invoke the JMS Adapter

    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane; if JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities, then drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes; we only need to define the input variable to use for the JMS adapter. By pressing the green "+" symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. (For some reason, while I was testing this, the JMS adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind-of think of the message coming in on the left and being routed through the right. But you can follow your personal preference here.)

    Assign Variables

    Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, for completeness, so the process has an output to print, copy it again to the process's output variable. Double-click the Assign activity and create two Copy rules:
    - for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement
    - for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result
    Press OK. This completes the BPEL and composite design.

    5. Compile and Deploy the Composite

    We won't go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons, or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier: right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish. You should see the message ---- Deployment finished. ---- in the Deployment frame if the deployment was successful.

    6. Test the Composite

    This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring and note the number of messages under Messages Current.
    In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message, then press Test Web Service. If the instance is successful you should see the same text in the Response message, "Test Message". In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents; they should include the full XML of the entered payload.

    7. Troubleshooting

    If you get an exception similar to the following at runtime:

    BINDING.JCA-12510 JCA Resource Adapter location error.
    Unable to locate the JCA Resource Adapter via .jca binding file element
    The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'.
    The reason for this is most likely that either 1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or 2) the '' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR).
    Please correct this and then restart the Application Server
    at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException.createJndiLookupException(AdapterBindingException.java:130)
    at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory(JCAConnectionManager.java:1387)
    at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAConnectionManager$JCAConnectionPool.newPoolObject(JCAConnectionManager.java:1285)
    ...

    then this is very likely due to an incorrect JNDI name entered for the JMS connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the JNDI name used; in this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue. This concludes this example.

    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature of the Windows Azure platform: it's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use a Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application on a worker role. Below are some benefits of using a worker role.
    - WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode; this reduces performance compared with x64 in some cases.
    - A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000-concurrent-request limitation.
    - WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
    - WACS provides service configuration features whose values can be changed while the role is running.
    - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
    - Since with a WACS worker role we start node ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.

    Run Node.js in a Worker Role

    Node.js can be started by just having its execution file. This means that in Windows Azure, we can have a worker role with "node.exe" and the Node.js source files, then start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute "node.exe" with our application code, we need to add "node.exe" to our project: right-click the worker role project and add an existing item. By default Node.js is installed in the "Program Files\nodejs" folder, so we can navigate there and add "node.exe". Then we need to create the entry code of Node.js. In WAWS the entry file must be named "server.js", because it's hosted by IIS and IISNode and IISNode only accepts "server.js". But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named "index.js" in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu "Add new item"; we have to create a text file and then rename it to the JavaScript extension. After we added these two files we should set their "Copy to Output Directory" property to "Copy Always" or "Copy if Newer", otherwise they will not be included in the package when deployed. Let's paste a very simple Node.js script into "index.js", as below. As you can see, I created a web server listening on port 12345.

    var http = require("http");
    var port = 12345;

    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);

    console.log("Server running at port %d", port);

    Then we need to start "node.exe" with this file when our worker role starts. This can be done in its Run method. I find the Node.js executable and the entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to exit.
    If everything is OK, once our web server is opened the process will stay there listening for incoming requests and should not terminate. The code in the worker role would be like this.

    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("NodejsHost entry point called", "Information");

        // Retrieve the node.exe and the entry node.js source code file name.
        var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
        var js = "index.js";

        // Prepare the process start information for node.exe.
        var info = new ProcessStartInfo(node, js)
        {
            CreateNoWindow = false,
            ErrorDialog = true,
            WindowStyle = ProcessWindowStyle.Normal,
            UseShellExecute = false,
            WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
        };
        Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

        // Start node.exe with the entry code and wait for it to exit.
        var process = Process.Start(info);
        process.WaitForExit();
    }

    Then we can run it locally. In the compute emulator UI the worker role started, it executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role. Next let's deploy it to Azure, but we need some additional steps first. We need to create an input endpoint: by default there's no endpoint defined in a worker role, so we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js always listens on TCP rather than HTTP. Then change "index.js" to let our web server listen on 80.

    var http = require("http");
    var port = 80;

    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);

    console.log("Server running at port %d", port);

    Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. We may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers 80 and maps the 80 endpoint to 81, but Node.js cannot detect this remapping, so when it tries to listen on 80 it will fail, since 80 is already in use.

    Use NPM Modules

    When we are using WAWS to host Node.js, we can simply install the modules we need and then just publish or upload all files to WAWS. But if we are using a WACS worker role, we have to do some extra steps to make the modules work. Assume we plan to use "express" in our application. First of all we should download and install this module through the NPM command. After the install finishes, the files are on disk but not included in the worker role project; if we deploy the worker role right now the module will not be packaged and uploaded to Azure. Hence we need to add them to the project: in the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and in the context menu select "Include In Project". But that's not enough. We also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to Azure with "node.exe" and "index.js". This is a painful step, since there might be many files in a module.
    So I created a small tool which updates a C# project file, marking all of its items as "Copy always". The code is very simple.

    static void Main(string[] args)
    {
        if (args.Length < 1)
        {
            Console.WriteLine("Usage: copyallalways [project file]");
            return;
        }

        var proj = args[0];
        File.Copy(proj, string.Format("{0}.bak", proj));

        var xml = new XmlDocument();
        xml.Load(proj);
        var nsManager = new XmlNamespaceManager(xml.NameTable);
        nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

        // add the output setting to copy always
        var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
        UpdateNodes(contentNodes, xml, nsManager);
        var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
        UpdateNodes(noneNodes, xml, nsManager);
        xml.Save(proj);

        // remove the namespace attributes
        var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
        xml.LoadXml(content);
        xml.Save(proj);
    }

    static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
    {
        foreach (XmlNode node in nodes)
        {
            var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
            if (copyToOutputDirectoryNode == null)
            {
                var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
                n.InnerText = "Always";
                node.AppendChild(n);
            }
            else
            {
                if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
                {
                    copyToOutputDirectoryNode.InnerText = "Always";
                }
            }
        }
    }

    Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command-line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express.

    var express = require("express");
    var app = express();

    var port = 80;

    app.configure(function () {
    });

    app.get("/", function (req, res) {
        res.send("Hello Node.js!");
    });

    app.get("/User/:id", function (req, res) {
        var id = req.params.id;
        res.json({
            "id": id,
            "name": "user " + id,
            "company": "IGT"
        });
    });

    app.listen(port);

    Finally let's publish it and have a look in the browser.

    Use Windows Azure SQL Database

    We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well when hosting on a worker role. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver"; this is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project and run my tool to set them to "Copy always". Finally update "index.js" to use WASD.
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);

    Publish it to Azure, and now we can see our Node.js working with WASD through the x64 version of "node-sqlserver".

    Summary

    In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it's possible to run preparatory jobs before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I mentioned here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that, this way, we cannot retrieve the cloud service configuration information.
    For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. It should be changed so that Node.js can retrieve the endpoint, but I can tell you it won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange, so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem, so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this.

    The problem: I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get: Task 'emailaddress' reported error (0x8004010F): 'The operation failed. An object cannot be found.' (Note I've replaced my actual email address with 'emailaddress'.) I'm using cached Exchange mode; if I turn it off, Outlook hangs completely from the moment I start it.

    Background information: I migrated mailboxes, public store, etc. from a Small Business Server 2003 box with Exchange 2003 to a Server 2008 R2 box with Exchange 2010, based primarily on an Experts Exchange how-to article. The Exchange server is up and running as an internet-facing Exchange server with all of the roles necessary to send and receive mail, and in that capacity it is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge amounts of errors in everything from AD to the Exchange install itself, I removed the reference to the SBS03 server in adsiedit. I've still got access to the old SBS03 box, but as I said, the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem. After research I discovered this is most likely because I failed to run the "update-globaladdresslist" (or get/update) command from the Exchange shell before I removed the Exchange 2003 server from adsiedit (and the network). If I run the command now it gives me:

    WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated.
    WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information - first administrative group" is invalid and couldn't be updated.
    WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated.
    WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated.

    (Note that I've replaced my domain with "domainname.com" and my organization name with "containername".)

    What I've tried: I don't want to use the old OAB or GAL, and I don't care about either; our GAL and distribution lists needed to be organized anyway, so at this point I really just want to get rid of the old reference to the "first administrative group" and move on. I've tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old one, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from adsiedit and see if that gets rid of the ridiculous reference that no longer exists, but I don't want to break something else.

    Commands run to try to create a new GAL and tell Exchange 2010 to use it:

    New-GlobalAddressList -Name NAMEOFNEWGAL
    Set-GlobalAddressList GUID -Name NAMEOFNEWGAL

    This did nothing for me, except that now when I run get-globaladdresslist (or pipe it to | FL) I see two GALs listed: the "Default Global Address List" and the "NAMEOFNEWGAL" I created. After a little more research this morning, it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to remove the default address list via adsiedit and recreate it with a command something like New-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients. This would be acceptable, but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and whether I'd have to remove multiple child references/records.

    Of interest: I'm getting an event ID 9337 in my application log on my Exchange 2010 box - OALGen did not find any recipients in address list '\Global Address List'. This offline address list will not be generated. - \NAMEOFMYOAB - which pretty much confirms my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!

    Read the article

  • Why do I get error "1337 The security ID structure is invalid" when using subinacl?

    - by ralbatross
    I have a standard Windows 7 account 'popuser' to which I'd like to grant start and stop permissions for the OpenVPNService. I've used the following command successfully on other machines, but for some reason on a new Acer Aspire 5830T that I'm setting up it doesn't do the trick for me: subinacl /service OpenVPNService /grant=popuser=TO
    I keep getting the following error message:

    LookupAccountName : OpenVPNService:popuser 1337 The security ID structure is invalid.
    Current object OpenVPNService will not be processed
    Elapsed Time: 00 00:00:00
    Done: 0, Modified 0, Failed 0, Syntax errors 1
    Last Syntax Error: WARNING : /grant=popuser=to : Error when checking arguments - OpenVPNService

    I've tried adding the machine name to the username and the service name, to no avail. I'm running the command prompt as an administrator. Anyone have any ideas what's going on?

    Read the article

  • How to add NT Virtual Machine\Virtual Machines to GPO

    - by Nicola Cassolato
    I have a Windows Server 2012 machine with Hyper-V enabled and a few virtual machines. My current configuration has a few accounts in the "Log on as a service" list in the domain policies, and sometimes this prevents my virtual machines from starting. (I get this error: Error 0x80070569, 'VM_NAME' failed to start worker process: Logon Failure: The user has not been granted the requested logon type at this computer.) As described in this KB, I would like to add NT Virtual Machine\Virtual Machines to my "Log on as a service" list to resolve the problem. My problem is that when I try to add that user to my domain policy I get an error message: "The following account could not be validated". My domain controller obviously doesn't know about that user, since it's not a Hyper-V-enabled server. How can I add that account to my Domain Policies?

    Read the article

  • "Host usb device connections disabled" in VMware???

    - by ZlateWay
    I installed Linux, Windows XP and Chrome OS in VMware Workstation 7, and in every OS the USB host doesn't work. When I start any of the operating systems this message shows up: "host usb device connections disabled", and under that: "The connection to the VMware USB Arbitration Service was unsuccessful. Please check the status of this service in the Microsoft Management Console." So what should I do? What do I need to install to make the USB host work? BTW, I use Windows 7 as the host OS. Thanks

    Read the article

  • "Show me other ways to activate" is disabled?

    - by Simon
    I have installed an instance of Windows Server 2008 R2 on a virtual machine. This machine is purposefully in an isolated environment (no internet) for testing purposes. I need to activate the machine and I have my MSDN key ready to go. I just ran through the same steps with a Windows 7 VM and was able to activate using an automated telephone service from Microsoft. With Windows Server 2008, I can see the same option to activate using this service, but it is greyed out. Google pointed me to people experiencing this issue, but the only workarounds I saw were to use the internet (not an option for me). Does anyone know how to enable this option?

    Read the article

  • SQL SERVER – Summary of Month – Wait Type – Day 28 of 28

    - by pinaldave
    I am glad to announce that the month of Wait Types and Queues was very successful. I am glad that it was very well received and there was a great amount of participation from the community. I am fortunate to have received some excellent comments throughout the series. I want to dedicate this series to all the guest bloggers - Jonathan, Jacob, Glenn, and Feodor - for their kindness in taking part in it. Here is the complete list of the blog posts in this series. I enjoyed writing the series and I plan to continue writing similar series; please offer your opinion.
    - SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28
    - SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28
    - SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28
    - SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28
    - SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28
    - SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28
    - SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28
    - SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28
    - SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28
    - SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28
    - SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28
    - SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28
    - SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28
    - SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28
    - SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28
    - SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28
    - SQL SERVER – WRITELOG – Wait Type – Day 17 of 28
    - SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28
    - SQL SERVER – PREEMPTIVE and Non-PREEMPTIVE – Wait Type – Day 19 of 28
    - SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28
    - SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28
    - SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28
    - SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28
    - SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28
    - SQL SERVER – 2011 – Wait Type – Day 25 of 28
    - SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28
    - SQL SERVER – Best Reference – Wait Type – Day 27 of 28
    - SQL SERVER – Summary of Month – Wait Type – Day 28 of 28
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, SQLServer, T SQL, Technology

    Read the article

  • Exchange 2010 periodically stops responding to SMTP events with error 421 4.4.1 Connection timed out

    - by Michael Shimmins
    I'm after some help diagnosing why Exchange 2010 Enterprise stops responding to SMTP events. I can't find a pattern to it. It doesn't appear to be an actual timeout, as the server responds immediately with the error. To reproduce it I telnet into the server on port 25 and issue an EHLO; the server immediately replies with the 421: 421 4.4.1 Connection timed out. Once this starts happening, I've found that restarting the Exchange box is the only reliable way to get mail flowing again. Sometimes restarting the Transport service or the mailbox attendant service seems to fix it, but this could be coincidental, as it often has no effect.

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***

    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the necessary assemblies, and if we, for example, do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise the code won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all the types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:

    "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll

    This merges three assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It copies the attributes (file version, product version etc.) from ClassLibrary1.dll, using the /attr switch. For more info on the ILMerge command-line tool, see the above link. This approach works, but requires a little too much knowledge from the developers creating the builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The activity is then used as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.

    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.

    /// <summary>
    /// Activity for merging a list of assemblies into one, using ILMerge
    /// </summary>
    public sealed class ILMergeActivity : BaseCodeActivity
    {
        /// <summary>
        /// A list of file paths to the assemblies that should be merged
        /// </summary>
        [RequiredArgument]
        public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

        /// <summary>
        /// Full path to the generated assembly
        /// </summary>
        [RequiredArgument]
        public InArgument<string> OutputFile { get; set; }

        /// <summary>
        /// Which input assembly the attributes for the generated assembly should be copied from.
        /// Optional. If not specified, the first input assembly will be used.
        /// </summary>
        public InArgument<string> AttributeFile { get; set; }

        /// <summary>
        /// Kind of assembly to generate, dll or exe
        /// </summary>
        public InArgument<TargetKindEnum> TargetKind { get; set; }

        // If your activity returns a value, derive from CodeActivity<TResult>
        // and return the value from the Execute method.
        protected override void Execute(CodeActivityContext context)
        {
            string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
            TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

            ILMerge m = new ILMerge();
            m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
            m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
            m.OutputFile = OutputFile.Get(context);
            m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First();
            m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0, 2), RuntimeEnvironment.GetRuntimeDirectory());
            m.Merge();

            TrackMessage(context, "Generated " + m.OutputFile);
        }
    }

    [Browsable(true)]
    public enum TargetKindEnum
    {
        Dll,
        Exe
    }

    NB: The activity inherits from a BaseCodeActivity class, which is an internal helper class containing some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage calls or implement the method yourself, which is not very hard (a minimal sketch appears at the end of this post).

    The custom activity has the following input arguments:
    - InputAssemblies: a list of the (full) paths of the assemblies to merge
    - OutputFile: the name of the resulting merged assembly
    - AttributeFile: which assembly to use as the template for the attributes of the merged assembly. This argument is optional; if left blank, the first assembly in the input list is used
    - TargetKind: decides what type of assembly to create, either a dll or an exe

    Of course, there are more switches on ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see the link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has an Assemblies To Merge process parameter, which is passed into a FindMatchingFiles activity to locate all assemblies in the BinariesDirectory folder after the compilation has been performed by Team Build. The complete sequence of activities that performs the merge operation is located at the end of the Try, Compile, Test and Associate… sequence: it splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges the two assemblies into a new one, and the merged assembly (and its associated pdb file) is copied to the drop location together with the rest of the assemblies.
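
    For reference, here is one possible TrackMessage implementation (a minimal sketch of my own, not from the original post; it assumes the TFS 2010 build tracking types in Microsoft.TeamFoundation.Build.Workflow, and any other names are hypothetical):

    using System.Activities;
    using Microsoft.TeamFoundation.Build.Workflow.Activities;
    using Microsoft.TeamFoundation.Build.Workflow.Tracking;

    public abstract class BaseCodeActivity : CodeActivity
    {
        /// <summary>
        /// Writes an informational message to the build log through the
        /// workflow tracking infrastructure.
        /// </summary>
        protected void TrackMessage(CodeActivityContext context, string message)
        {
            context.Track(new BuildInformationRecord<BuildMessage>
            {
                Value = new BuildMessage
                {
                    Importance = BuildMessageImportance.High,
                    Message = message
                }
            });
        }
    }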

    Read the article

  • Enabling SFTP Access within PLESK

    - by spelley
Hello everyone, I have a client who wants to ensure his upload is secure, so we are trying to enable SFTP for him on our Linux PLESK server. I have enabled SSH access to /bin/bash for FTP accounts and created a new user. When I attempt to SFTP using either the IP address or the domain name, this is the error FileZilla gives me:

Error: Authentication failed.
Error: Critical error
Error: Could not connect to server

Here is some basic information regarding the server:

Operating system: Linux 2.6.24.5-20080421a
Plesk Control Panel version: psa v8.6.0_build86080930.03 os_CentOS 5

I had read in some places that I should restart the SSH service in Server > Services; however, there is no SSH service within the list. I'm not really a server guy, so it's quite possible I'm missing something obvious. Thanks for any help that you guys can provide!
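A generic first sanity check from a shell, in case it helps (a sketch assuming stock CentOS 5 paths; 'ftpuser' is a placeholder account name, and PLESK may manage sshd differently):

    # confirm sshd exposes the sftp subsystem (stock CentOS 5 line shown below)
    grep -i '^Subsystem' /etc/ssh/sshd_config
    # expected: Subsystem sftp /usr/libexec/openssh/sftp-server

    # confirm the account really has a shell, then restart sshd from the shell
    grep ftpuser /etc/passwd
    service sshd restart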

    Read the article

• Visio 2010 with Office 2010 prerelease version install error

    - by antony.trupe
I installed Microsoft Office Professional Plus 2010 Beta. When I attempt to install Visio Premium 2010 Beta, I get the following error:

Setup is unable to proceed because of the following errors: Microsoft Office 2010 does not support upgrading from a prerelease version of Microsoft Office 2010. You must first uninstall any prerelease versions of Microsoft Office 2010 products and associated technologies. Correct the issues listed above and re-run setup.

Here's the list of Microsoft products I currently have installed:

Microsoft Forefront Client Security Antimalware Service (1.5.1981.0)
Microsoft Forefront Client Security State Assessment Service (1.0.1725.0)
Microsoft Office Professional Plus 2010 (14.0.4763.1000)
Microsoft WSE 3.0 Runtime (3.0.5305.0)

    Read the article

  • CodePlex Daily Summary for Wednesday, February 24, 2010

CodePlex Daily Summary for Wednesday, February 24, 2010

New Projects
- ADO.Net DataSets to ExtJs.data.Store: A JavaScript (and C#) based project to reduce the amount of client-side code necessary to consume ADO.Net / ASP.Net web services when using ExtJS.
- AMP.Net Wrapper: AMP is a platform to build on-line marketplaces (http://www.poweredbyamp.com). AMP.Net provided Object-Like interaction with AMP's restful service...
- ArkSwitch: ArkSwitch is an easy to use, finger-friendly task manager for Windows Mobile 6.5.3 (with a WM6.5 compatibility mode). It is developed mainly in C#,...
- Biffen: Cinema-booking project in Computer Science at University College Nordjylland, Denmark.
- Braintree Client Library: Client library for integrating with the Braintree Gateway.
- Business Framework: A framework which helps building business applications. It provides business rules, validation rules and a text-based language for writing rules. I...
- Camp Araminta: This project will be used to coordinate development efforts on the Camp Araminta website.
- ChoServiceHost: Simple and easy way to create and host Windows Service Applications in .NET 3.5/Visual Studio 2008
- Delta College Game Development Project: Project site for cs 16 game development class
- DotNetNuke® Labs: DotNetNuke Labs is a collection of "research & development" type projects for the DotNetNuke platform.
- Generic web part for hosting Silverlight content on SharePoint sites (WSS, MOSS): This is a generic web part for hosting Silverlight content on WSS 3.0 and MOSS 2007 sites. The objective of this web part was to make it easy for us...
- GpTiming: GpTiming is a simple "lab" application related to race events, based on a Domain Model.
- HTML Forms in Windows Forms: As the name suggests, this code library is designed to introduce HTML code (primarily form code) into Windows Forms. It was created because standar...
- imgur uploader - .net open source uploader for image sharing site imgur: Imgur uploader strives to be an easy to use uploader for images you would like to share with friends and family. It is written in C#.
- kuuy static system: kuuy static system is a full static publish website system!
- LaTeX Grapher: The goal of this project is to make a tool that facilitates making high quality two dimensional vector graphic function plots with a minimal amount...
- LightREST: A .NET library to consume REST-based HTTP services.
- Machiavelli: Machiavelli is a Stackoverflow-inspired project that I am working on following Andrew Siemer's article on DotNetSlackers.
- Mover: Mover makes it easier for developers to create programmatic animations in Silverlight. It provides an expressive API to the platform's underlying S...
- MVC Presenter: A presentation viewer built with ASP.NET MVC 2.
- nHibernate Attribute mapping: How to use Attribute mapping with a ManyToMany Relationship with nHibernate
- NIPO Data Processing Component Framework: NIPO is a general purpose component framework for data processing applications (that follow the IPO-principle). Its plugin-based architecture makes...
- PowerShell Remote File Explorer: This project intends to develop a Windows Forms based file explorer to browse/transfer files over the PowerShell 2.0 remoting channel. The file transfe...
- Process Flow Tracking of Biomass Distribution Project (University of Mumbai): At Larsen & Toubro Infotech India Ltd., my team worked on a SCM (Supply Chain Management) based project titled 'Process Flow Tracking of Biomass Di...
- VS2010 Rc1 Fix: Illustrates a fix for working with the ASP.NET Wizard control with VS2010 RC1
- Yicker: A microblog program developed in C#.

New Releases
- ADO.Net DataSets to ExtJs.data.Store: Ext.net: This is the first version of Ext.net. This version contains a single class, Ext.net.Store, which extends the Ext.data.Store class to consume ADO.Ne...
- AMP.Net Wrapper: AMP.Net v1.0: Provides abstraction for all the product search functionality offered by AMP.
- ArkSwitch: ArkSwitch legacy versions: Old versions - no need to download them
- ArkSwitch: ArkSwitch v1.1.0: ArkSwitch v1.1.0
- Braintree Client Library: Braintree 1.0.0: Braintree .NET client library 1.0.0
- Business Framework: BusinessFramework preview: Early preview bits. See Rules for a sample.
- Business Framework: Samples: Samples
- CC.Votd: CC.Votd 1.0.10.224: This is the initial release of CC.Votd. Marking as beta since I'm the only one who has used it up to this point.
- ChoServiceHost: ChoServiceHost.msi: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Installer)
- ChoServiceHost: ChoServiceHost-Src.zip: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Source Files)
- CHS Extranet: Beta 2.4: Beta 2.4 Release. Change Log: Added HTML preview options for XLS, XLSX, DOCX. File Changes: ~/MyComputer.aspx, ~/mycomputer.css, ~/basestyle.css...
- Composure: AvalonDock-55751-VS2010.NET4: This is a "convenience build" of AvalonDock (drop 55751) for Visual Studio 2010 and .NET 4.0. Nothing has been altered in the source code (which ...
- Data Access Component: Version 2.6: Add LINQ support.
- Desktop Google Reader: 1.3 Beta 1: New features: Read it Later included (see http://readitlaterlist.com/); Liking added (working: see number of liking users, see if liking yourself,...
- Explorer Plus: Explorer Plus v0.3: Amazon Locales Added
- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 Released: Hi, today we have released the final version of Visifire v3.0.3, which contains the following major features: DataBinding, IndicatorEn...
- Generic web part for hosting Silverlight content on SharePoint sites (WSS, MOSS): CTP: The objective of this release was to gather feedback from the wider community. I intend to pursue further development and make fixes wherever appro...
- HTML Forms in Windows Forms: HTMLForms 1.0: First Release.
- imgur uploader - .net open source uploader for image sharing site imgur: Release 2010-02-23-01: This is the first codeplex release! Let mayhem commence...
- Jeremi Stadler: Stick Tops 2.5: Sticktops is a very light program that makes it easy to paste stuff on small notes on the screen. All notes you have are saved on a server so you ca...
- kuuy static system: kss_v1.0beta sql: kss_v1.0beta sql scripts source
- MDownloader: MDownloader-0.15.2.55998: Fixed detecting uploading.com dead links; added hiding rss entries without files;
- Mover: MoverLib for Silverlight 3: A first version of MoverLib for Silverlight 3.
- nHibernate Attribute mapping: 1.0: Source Code
- nHibernate Attribute mapping: Download 1: Zip file
- NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.113: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...
- NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.113: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's New: This version inclu...
- OAuthLib: OAuthLib (1.6.0.0): Difference between previous version is as next. 7079 Make it possible to pass factory method of request in ObtainUnauthorizedRequestToken and Reque...
- patterns & practices SharePoint Guidance: SPG2010 Drop 5: SharePoint Guidance Drop Notes, Microsoft patterns and practices...
- PowerShell Remote File Explorer: PSRemoteExplorer 0.1: This release is the initial release of PowerShell remote file explorer. This enables the basic functionality of a remote file explorer. This also p...
- Reusable Library: v1.0.3: A collection of reusable abstractions for enterprise application developers.
- SharePoint Outlook Connector: Version 1.0.2.4: Version 1.0.2.4. Minor bugs have been fixed.
- Silverlight Server File Manager: First production release: This release is in production. Release on changeset 37268.
- SIMD Detector: 2nd Release: Released C++/CLI assembly project for use in CSharp and VB. Tested in CSharp console application. A Windows Forms application coming soon. Projects ma...
- Source Analysis Policy: Source Analysis Policy v1.1 SP1: This release contains the compiled and signed binaries in an installation package. This package also registers the policy with Microsoft Visual St...
- SpecExpress : A Fluent Validation Framework: SpecExpress 1.1: Updates: Added Validation Contexts feature; fixed bug with handling for Bool Types and Required; MessageStore now allows for overriding individual ...
- VCC: Latest build, v2.1.30223.0: Automatic drop of latest build
- VS2010 Rc1 Fix: RC1Fix01: This is a very simple project implementing a Microsoft Walkthrough at http://msdn.microsoft.com/en-us/library/wdb4eb30%28VS.100%29.aspx and the man...
- WPF AutoComplete TextBox Control: version 1.0: Initial release

Most Popular Projects
- ASP.NET Ajax Library
- Managed Extensibility Framework
- Accelerators for Microsoft Dynamics CRM
- Windows 7 USB/DVD Download Tool
- DotNetZip Library
- MDownloader
- Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2
- MFCMAPI
- Droid Explorer
- Useful Sharepoint Designer Custom Workflow Activities

Most Active Projects
- DinnerNow.net
- Rawr
- BlogEngine.NET
- InfoService
- NB_Store - Free DotNetNuke Ecommerce Catalog Module
- Rapid Entity Framework. (ORM). CTP 2
- SharpMap - Geospatial Application Framework for the CLR
- jQuery Library for SharePoint Web Services
- patterns & practices – Enterprise Library
- Xcoordination Application Space

    Read the article

• System Account Logon Failures every 30 seconds

    - by floyd
We have two Windows 2008 R2 SP1 servers running in a SQL failover cluster. On one of them we are getting the following events in the security log every 30 seconds. The parts that are blank are actually blank. Has anyone seen similar issues, or can anyone assist in tracking down the cause of these events? No other event logs show anything relevant that I can tell.

Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 10/17/2012 10:02:04 PM
Event ID: 4625
Task Category: Logon
Level: Information
Keywords: Audit Failure
User: N/A
Computer: SERVERNAME.domainname.local
Description: An account failed to log on.
  Subject:
    Security ID: SYSTEM
    Account Name: SERVERNAME$
    Account Domain: DOMAINNAME
    Logon ID: 0x3e7
  Logon Type: 3
  Account For Which Logon Failed:
    Security ID: NULL SID
    Account Name:
    Account Domain:
  Failure Information:
    Failure Reason: Unknown user name or bad password.
    Status: 0xc000006d
    Sub Status: 0xc0000064
  Process Information:
    Caller Process ID: 0x238
    Caller Process Name: C:\Windows\System32\lsass.exe
  Network Information:
    Workstation Name: SERVERNAME
    Source Network Address: -
    Source Port: -
  Detailed Authentication Information:
    Logon Process: Schannel
    Authentication Package: Kerberos
    Transited Services: -
    Package Name (NTLM only): -
    Key Length: 0

The second event, which follows every one of the above events:

Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 10/17/2012 10:02:04 PM
Event ID: 4625
Task Category: Logon
Level: Information
Keywords: Audit Failure
User: N/A
Computer: SERVERNAME.domainname.local
Description: An account failed to log on.
  Subject:
    Security ID: NULL SID
    Account Name: -
    Account Domain: -
    Logon ID: 0x0
  Logon Type: 3
  Account For Which Logon Failed:
    Security ID: NULL SID
    Account Name:
    Account Domain:
  Failure Information:
    Failure Reason: An Error occurred during Logon.
    Status: 0xc000006d
    Sub Status: 0x80090325
  Process Information:
    Caller Process ID: 0x0
    Caller Process Name: -
  Network Information:
    Workstation Name: -
    Source Network Address: -
    Source Port: -
  Detailed Authentication Information:
    Logon Process: Schannel
    Authentication Package: Microsoft Unified Security Protocol Provider
    Transited Services: -
    Package Name (NTLM only): -
    Key Length: 0

EDIT UPDATE: I have a bit more information to add. I installed Network Monitor on this machine, filtered for Kerberos traffic, and found the following, which corresponds to the timestamps in the security audit log:

Kerberos AS_Request
  Cname: CN=SQLInstanceName
  Realm: domain.local
  Sname: krbtgt/domain.local
Reply from DC: KRB_ERROR: KDC_ERR_C_PRINCIPAL_UNKNOWN

I then checked the security audit logs of the DC which responded and found the following:

A Kerberos authentication ticket (TGT) was requested.
  Account Information:
    Account Name: X509N:<S>CN=SQLInstanceName
    Supplied Realm Name: domain.local
    User ID: NULL SID
  Service Information:
    Service Name: krbtgt/domain.local
    Service ID: NULL SID
  Network Information:
    Client Address: ::ffff:10.240.42.101
    Client Port: 58207
  Additional Information:
    Ticket Options: 0x40810010
    Result Code: 0x6
    Ticket Encryption Type: 0xffffffff
    Pre-Authentication Type: -
  Certificate Information:
    Certificate Issuer Name:
    Certificate Serial Number:
    Certificate Thumbprint:

So this appears to be related to a certificate installed on the SQL machine; I still don't have any clue why, or what's wrong with said certificate. It's not expired, etc.
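For correlating these entries with other activity, one quick way to pull the recent 4625 events from an elevated PowerShell prompt is the sketch below (standard Get-WinEvent usage; the MaxEvents value is arbitrary):

    # list the most recent logon-failure (4625) events from the Security log
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 50 |
        Format-List TimeCreated, Id, Message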

    Read the article

  • Fixing the Model Binding issue of ASP.NET MVC 4 and ASP.NET Web API

    - by imran_ku07
Introduction:

Yesterday, when I was checking the ASP.NET forums, I found an important issue/bug in ASP.NET MVC 4 and ASP.NET Web API. The issue is present in the System.Web.PrefixContainer class, which is used by both the ASP.NET MVC and ASP.NET Web API assemblies. The details of this issue are available in this thread. This bug can be a breaking change for you if you upgraded your application to ASP.NET MVC 4 and your application's model properties use the convention described in the above thread. So, I have created a package which fixes this issue in both ASP.NET MVC and ASP.NET Web API. In this article, I will show you how to use this package.

Description:

Create or open an ASP.NET MVC 4 project and install the ImranB.ModelBindingFix NuGet package. Then, add this using statement to your global.asax.cs file:

    using ImranB.ModelBindingFix;

Then, just add this line in the Application_Start method:

    Fixer.FixModelBindingIssue();

    // For fixing only in MVC, call this instead:
    // Fixer.FixMvcModelBindingIssue();

    // For fixing only in Web API, call this instead:
    // Fixer.FixWebApiModelBindingIssue();

This line will fix the model binding issue. If you are using Html.Action or Html.RenderAction, then you should use Html.FixedAction or Html.FixedRenderAction instead to avoid this bug (make sure to reference the ImranB.ModelBindingFix.SystemWebMvc namespace). If you are using the FormDataCollection.ReadAs extension method, then you should use FormDataCollection.FixedReadAs instead to avoid this bug (make sure to reference the ImranB.ModelBindingFix.SystemWebHttp namespace). The source code of this package is available at github.

Summary:

There is a small but important issue/bug in ASP.NET MVC 4. In this article, I showed you how to fix this issue/bug by using a package. Hopefully you will enjoy this article too.
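As a usage illustration, swapping a child action call in a Razor view would look like the sketch below ("Menu"/"Navigation" is a hypothetical action/controller pair, and I am assuming the Fixed* helpers mirror the standard Html.Action overloads; check the package source on github for the exact signatures):

    @using ImranB.ModelBindingFix.SystemWebMvc

    @* before the fix: @Html.Action("Menu", "Navigation") *@
    @Html.FixedAction("Menu", "Navigation")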

    Read the article

  • SQL SERVER – How to See Active SQL Server Connections For Database

    - by Pinal Dave
Another question received via email: “How do I know which user is connected to my database, and with how many connections?” Here is the script which will give us the answer to the question:

SELECT DB_NAME(dbid) AS DBName,
       COUNT(dbid) AS NumberOfConnections,
       loginame
FROM sys.sysprocesses
GROUP BY dbid, loginame
ORDER BY DB_NAME(dbid)

Here is the resultset:

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
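To see the connections for one specific database only, a small variation of the same query works (a sketch; 'YourDatabaseName' is a placeholder):

SELECT DB_NAME(dbid) AS DBName,
       COUNT(dbid) AS NumberOfConnections,
       loginame
FROM sys.sysprocesses
WHERE dbid = DB_ID('YourDatabaseName')   -- placeholder database name
GROUP BY dbid, loginame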

    Read the article

  • DFS Replication only works in one direction

    - by depthless
SERVER1 - SBS 2011
SERVER2 - Windows Server 2008 R2

I'm using DFS-R. Changes replicate perfectly from SERVER1 to SERVER2, but nothing propagates from SERVER2 to SERVER1. When I attempt to run:

dfsrdiag staticrpc /port:5001 /member:SERVER2

I get the error:

[ERROR] Failed to connect to WMI services on computer: SERVER2.domain

That command works just fine in the other direction. I've completely disabled the firewall on both ends. SERVER1 pings SERVER2 just fine. The RPC service is running on both servers. The WMI service is running on both servers.
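As a quick way to reproduce the WMI failure outside of dfsrdiag, a sketch like this (run from SERVER1; root\MicrosoftDFS is the namespace the DFSR WMI provider registers) should either list the DFSR classes or fail with the same connection error:

    # enumerate DFSR WMI classes on the remote member; a failure here points
    # at WMI/DCOM connectivity rather than at DFSR itself
    Get-WmiObject -ComputerName SERVER2 -Namespace root\MicrosoftDFS -List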

    Read the article

  • SQL SERVER – List All the DMV and DMF on Server

    - by pinaldave
“How many DMVs and DMFs are there in SQL Server 2008?” – this question was asked of me in one of my recent SQL Server trainings. The answer is very simple:

SELECT name, type, type_desc
FROM sys.system_objects
WHERE name LIKE 'dm_%'
ORDER BY name

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
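One hedged refinement: in LIKE patterns the underscore is itself a single-character wildcard, so 'dm_%' would also match names where some other character follows "dm". Escaping it with brackets keeps the match literal, and sorting by type_desc separates the views (DMVs) from the functions (DMFs):

SELECT name, type, type_desc
FROM sys.system_objects
WHERE name LIKE 'dm[_]%'   -- [_] matches a literal underscore
ORDER BY type_desc, name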

    Read the article

  • View Maps and Get Directions in Google Chrome

    - by Asian Angel
Every so often we all need to look at a map for reference purposes or to get directions. If you are looking for a great quick-reference app, then join us as we look at the Mini Google Maps extension for Google Chrome.

Mini Google Maps in Action

While this may look like a rather basic map extension, there is more to it than meets the eye at first glance. Here is the default view when you open Mini Google Maps for the first time. Things that we really liked about this extension were:

- Three different aerial views available (Map, Satellite, & Terrain)
- Three different viewing sizes available (and the extension remembers your chosen size)
- The ability to get directions in combination with a map

We decided to try each of the viewing sizes available… here you can see the “Medium Setting”. Notice that the scale stays the same, but you get more territory to view. Then the “Large Setting”… which we infinitely preferred to the others. Once again, look at the amount of territory included by default… very nice. Switching over to the “Satellite View”… followed by the “Terrain View”.

For our first example we decided to peek at Vancouver, British Columbia. After zooming out a little bit we had a very nice looking map. For the next test we asked for directions from Vancouver to Toronto. Both the directions and the map turned out very well. And just for fun we looked up Paris, France with the “Satellite View”.

Conclusion

If you find yourself needing to view a map or get directions often, then the Mini Google Maps extension will be a very useful tool for you.

Links

Download the Mini Google Maps extension (Google Chrome Extensions)

    Read the article

• Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting

    - by Michael J. Hamilton, Sr.
Enterprise SharePoint 2010 Hosting, SharePoint Foundation 2010 Hosting, SharePoint Standard 2010 Hosting, Michigan. Sclera, a Microsoft Hosted Services Provider Partner, is offering key Service Offerings around the Microsoft SharePoint Server 2010 stack. Specifically, if you're looking for SharePoint Foundation, SharePoint Standard or Enterprise 2010 hosting provisions, check out the Service Offerings from Sclera Hosting (www.sclerahosting.com) and compare with some of the lowest prices available on the web today. I wanted to post this so you could shop around and compare. There are a couple of larger on-demand hosting agencies (247hosting and fpweb hosting) that charge outrageous fees – like $350 a month for SharePoint Foundation 2010 hosting. The most incredible part? This is on a shared domain name – not the client's domain. It's hosting on something like http://<yourSiteName>.sharepointsites.com – or something crazy like that. Sclera Hosting provides you on demand – SharePoint Foundation, SharePoint Server Standard/Enterprise – 2010 RTM bits – within minutes of your order – ON YOUR DOMAIN – and that is a major perk for me. You have complete SharePoint Designer 2010 integration and complete support for custom assemblies, web parts, you name it – this hosting provider gives you more bang for the buck than any provider on the Net today. Now, some teasers: I was in a meeting this week and I heard – SharePoint Foundation, 2010 RTM bits, unlimited users, 10 GB content database quota, full SharePoint Designer 2010 integration/support, all on the client's domain – sit down and soak this up – $175.00 per month – no kidding. Now, I do not know about you, but I have not seen a deal like that EVER on the Net – so get over to www.sclerahosting.com or email the Sales Team at Sclera Design, Inc. today for more details. Have a great weekend!

    Read the article

  • SQL SERVER – Change Database Access to Single User Mode Using SSMS

    - by pinaldave
I have previously written about how, using a T-SQL script, we can convert the database access to single-user mode before a backup. I was recently asked if the same can be done using SQL Server Management Studio. Yes! You can do it from the database properties (right-click on the database and select Properties) and follow the image.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
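For comparison, the T-SQL route looks like this minimal sketch ('YourDatabaseName' is a placeholder; WITH ROLLBACK IMMEDIATE forcibly disconnects any open sessions):

ALTER DATABASE YourDatabaseName
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO

-- take the backup / perform the maintenance here, then restore access:
ALTER DATABASE YourDatabaseName
SET MULTI_USER;
GO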

    Read the article

< Previous Page | 404 405 406 407 408 409 410 411 412 413 414 415  | Next Page >