Search Results


  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what: about 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – after having huge problems getting this working, I read Billy Wang’s post, which showed me the light. The problem here is that even though the test passes locally, it will not pass during an automated build. When you send your tests to the build server, it does not understand that you want to spin up the web site and run tests against it! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the web site not to spin up? Of course not. So, when you send the code to the build server, you need to tell it what to spin up.

    First, the best way to get the parameters you need is to right-click on the method you want to test and select “Create Unit Tests…”. This will detect whether you are running in IIS, the ASP.NET Development Server, or neither, and create the relevant attributes.

    Figure: Right-clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will auto-detect most of the attributes that are required.

    ```csharp
    /// <summary>
    /// A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile
    /// </summary>
    // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example,
    // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server,
    // whether you are testing a page, web service, or a WCF service.
    [TestMethod()]
    [HostType("ASP.NET")]
    [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
    [UrlToTest("http://localhost:3100/")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    public void SaveDefaultProjectFileTest()
    {
        IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value
        string strComputerName = string.Empty; // TODO: Initialize to an appropriate value
        bool expected = false; // TODO: Initialize to an appropriate value
        bool actual;
        actual = target.SaveDefaultProjectFile(strComputerName);
        Assert.AreEqual(expected, actual);
        Assert.Inconclusive("Verify the correctness of this test method.");
    }
    ```

    Figure: Auto-created code that shows the attributes required to run correctly in IIS or, in this case, the ASP.NET Development Server.

    If you are a purist and don’t like creating unit tests like this, then you just need to add the three attributes manually (a sketch follows this list):

    - HostType – This attribute specifies what host to use. It’s an extensibility point, so you could write your own, or you could just use “ASP.NET”.
    - UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page; otherwise your test may not run on the server, but may pass anyway.
    - AspNetDevelopmentServerHost – This is a nasty one. It is only used if you are using the ASP.NET Development Server and is unnecessary if you are using IIS. This sets the host settings, and the first value MUST be the physical path to the root of your web application.
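    As a minimal sketch of that manual route, reusing the host type, URL, and physical path from the auto-generated test above (those values are specific to this project, not universally valid ones):

    ```csharp
    // Sketch: the same three attributes applied by hand. The port and path
    // are the example values from this post, not values that work everywhere.
    [TestMethod]
    [HostType("ASP.NET")]
    [UrlToTest("http://localhost:3100/")]
    [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
    public void SaveDefaultProjectFileTest()
    {
        // ... test body as generated above ...
    }
    ```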
    OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post, and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing web unit tests, but there is a much easier way when doing web services.

    You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use.

    ```csharp
    [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }
    ```

    Figure: The AspNetDevelopmentServer attribute will make sure that the specified web application is launched.

    Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the same address as your dynamically hosted web application.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System.Reflection;
    using System.ServiceModel.Description;
    using System.ServiceModel;

    namespace SSW.SQLDeploy.Test
    {
        class WcfWebServiceHelper
        {
            public static bool TryUrlRedirection(object client, TestContext context, string identifier)
            {
                bool result = true;
                try
                {
                    PropertyInfo property = client.GetType().GetProperty("Endpoint");
                    string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                    Uri webServerUri = new Uri(webServer);
                    ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);
                    EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                    builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                    endpoint.Address = builder.ToEndpointAddress();
                }
                catch (Exception e)
                {
                    context.WriteLine(e.Message);
                    result = false;
                }
                return result;
            }
        }
    }
    ```

    Figure: This fixes a problem with the URL in your app.config not being the same as the dynamically hosted ASP.NET Development Server port.

    We can now add a call to this method after we create the proxy object, and change the endpoint for the service to the correct one. This call is wrapped in an assert, as there is no point in continuing if it fails.

    ```csharp
    [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }
    ```

    Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy.

    As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source?
    What about if you are using a build server? How do you tell MSTest where to look for the files? To the rescue is a property called “%PathToWebRoot%”, which is always right on the build server. It will always point to your build drop folder for your solution’s web sites, which will be “\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\” or whatever your build drop location is. So let’s change the code above to use this.

    ```csharp
    [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
    [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
    [TestMethod]
    public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
    {
        ProfileServiceClient target = new ProfileServiceClient();
        Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
        bool isTrue = target.SaveDefaultProjectFile("Mav");
        Assert.AreEqual(true, isTrue);
    }
    ```

    Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.

    Now we have another problem: this will ONLY run on the build server and will fail locally, as %PathToWebRoot%’s default value is “C:\Users\[profile]\Documents\Visual Studio 2010\Projects”. Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open “Tools | Options | Test Tools | Test Execution” in Visual Studio and you will see a field called “Web application root directory”. This is where you override that default.

    Figure: You can override the default website location for tests.

    In my case I would put in “D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main”, and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a “$/SSW/SqlDeploy/DEV/34567” branch from Main and I want to run tests in there? Well, I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.

    Conclusion

    Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test-anywhere solution: any build server, any developer workstation.

    Resources:

    - http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
    - http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
    - http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
    - http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
    - http://www.5z5.com/News/?543f8bc8b36b174f


  • Installing UCMA 3.0 and Creating a Communications Server "14" Trusted Application Pool

    A lot of setup and administration tasks have gotten a lot easier in Communications Server 14; one of them is building an application server to develop and run your UCMA 3.0 applications on. In this post, I'll walk you through installing the UCMA 3.0 Core SDK and creating a Trusted Application Pool on the server, thus adding it to the Communications Server 14 topology and allowing you to host and run UCMA 3.0 applications on it.

    Note: These instructions will change slightly as the bits get updated for the eventual Beta release; I will update this post as soon as I get a chance to run this setup on a more recent build.

    I'm doing the install on a simple Communications Server 14 topology consisting of the following Windows Server 2008 R2 Hyper-V images:

    - DC – Domain Controller
    - ExchangeUM – Exchange Server 2010
    - CS-SE – Microsoft Communications Server 2010 Standard Edition
    - TS – Development machine

    I'll walk through setting up UCMA 3.0 on the TS VM, which is a fully patched Windows Server 2008 R2 machine that is joined to the Fabrikam domain. I'm also running Visual Studio 2010 on this VM because I intend to use it as a development machine. In a future post, I'll walk through installing just the UCMA 3.0 runtime to build a true production UCMA application server.

    I'm making a couple of assumptions here:

    - You have an existing CS 2010 site and cluster configured (we'll look at this in a future post)
    - You're starting with a fully patched Windows Server 2008 R2 machine
    - The machine is joined to your domain

    This walkthrough was done in my Fabrikam VM environment but can easily be modified for your own environment.

    Installing the UCMA 3.0 SDK

    Let's start by installing the UCMA 3.0 SDK. Run UcmaSdkWebDownload.msi to kick off the SDK installer package extract process. The installed package is extracted to C: >> Program Files >> Microsoft UCMA 3.0 >> SDK Installer Package. Browse there and run setup.exe. Click Install to install the UCMA 3.0 Core SDK and Workflow SDK.

    Install Communications Server Core Components

    UCMA 3.0 introduces a new concept called auto-provisioning, which is most easily explained from the developer point of view. Remember what your app.config looked like in UCMA 2.0? You had to store the application GRUU, the trusted contact SIP URI, the port for your application, and the name of the certificate authority. That's all gone with auto-provisioning; all you need in your app.config is your ApplicationId, e.g. urn:application:MyApplication (a sketch of such a config appears at the end of this step).

    How does CS 2010 do this? All of the application's configuration data is associated with the application's id. UCMA also queries a replicated copy of the Central Management Database to retrieve the application's configuration data and also the configuration data for any endpoints.

    In this step, we'll run Bootstrapper.exe to install the CS Core components. This checks for the following components and installs them if they are not already present:

    - VcRedist
    - Sqlexpress
    - Sqlnativeclient
    - Sqlbackcompat
    - Ucmaredist
    - OcsCore.msi

    Open a command window at C: >> Program Files >> Microsoft Communications Server 2010 >> Deployment and run the following command:

    ```
    Bootstrapper.exe /BootstrapReplica /MinCache /SourceDirectory:"%ProgramFiles%\Microsoft UCMA 3.0\SDK Installer Package\Prereq\BootstrapperCache"
    ```
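    For illustration, a minimal sketch of what such an auto-provisioned app.config might look like. The appSettings key name below is an assumption made for this sketch; only the urn:application:MyApplication id format comes from the post itself:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <appSettings>
        <!-- Hypothetical key name; the urn:application:... id format is from the post. -->
        <add key="ApplicationId" value="urn:application:MyApplication" />
      </appSettings>
    </configuration>
    ```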
    Create a New Trusted Application Pool

    The next step is to create a new trusted application pool for the new server. Fire up the Communications Server Management Shell from Start >> Microsoft Communications Server 2010 >> Communications Server Management Shell and enter the following PowerShell command:

    ```powershell
    New-CsTrustedApplicationPool -Identity <FQDN of Server> -Registrar <FQDN of CS Server> -Site <CS Site Name>
    ```

    Verify that the new server was added to the CS topology by running the following PowerShell command:

    ```powershell
    (Get-CsTopology -AsXml).ToString() > Topology.xml
    ```

    This creates a file called Topology.xml in the directory that you ran the command from. Open the file, find the Clusters section, and look for a node for the new server. The Cluster Fqdn is the name of your server; note the name of the Site that this Cluster is a part of.

    ```xml
    <Cluster Fqdn="appsrv.fabrikam.com" RequiresReplication="true" RequiresSetup="true">
      <ClusterId SiteId="UcMarketing2" Number="5" />
      <Machine OrdinalInCluster="1" Fqdn="appsrv.fabrikam.com">
        <NetInterface InterfaceSide="Primary" InterfaceNumber="1" IPAddress="0.0.0.0" />
      </Machine>
    </Cluster>
    ```

    Configure CS Management Store Replication

    At this point, we have the CS Core components installed and the server configured as a trusted application pool. We now need to set up replication so that the Central Management Store replicates down to the new server. From the Communications Server Management Shell, run the following PowerShell command to enable the Replica service on the new server:

    ```powershell
    Enable-CSReplica
    ```

    The Replica service is enabled, but hasn't done anything yet. This can be verified by running the following PowerShell command to check the replication status for the various servers in the topology:

    ```powershell
    Get-CSManagementStoreReplicationStatus
    ```

    You can see in the screenshot below that the UpToDate property of the new server is still False. Run the following PowerShell command to force the replication to run:

    ```powershell
    Invoke-CSManagementStoreReplication
    ```

    Run Get-CSManagementStoreReplicationStatus again to verify that the new server is now up to date.

    Request and Set a New Certificate

    The last step in the process is to request a new certificate from the certificate authority on the domain and assign it to the new server. From the Communications Server Management Shell, run the following PowerShell command to request a new certificate:

    ```powershell
    Request-CSCertificate -Action new -Type default -CA <Domain Controller FQDN>\<Certificate Authority>
    ```

    Setting the -Verbose switch on the cmdlet creates an XML file with its output. Open the XML file and copy the thumbprint of the generated certificate.
    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <Action Name="Request-CsCertificate" Time="20100512T212258">
      <Action Name="Request-CsCertificate" Time="20100512T212258">
        <Info Title="Connection" Time="20100512T212258">Data Source=(local)\rtclocal;Initial Catalog=xds;Integrated Security=True</Info>
        <Action Time="20100512T212258">
          <Info Title="Certificate use" Time="20100512T212258">urn:certref:default</Info>
          <Info Title="Subject distinguished name" Time="20100512T212258">CN="appsrv2.fabrikam.com"</Info>
          <Info Time="20100512T212259">The certificate request is submitted to the Certification Authority dc.fabrikam.com\FabrikamCA.</Info>
          <Info Time="20100512T212259">The certificate was issued.</Info>
          <Info Time="20100512T212259">The certificate was imported with thumbprint AFC3C46E459C1A39AD06247676F3555826DBF705.</Info>
          <Complete Time="20100512T212259" />
        </Action>
        <Info Title="command status" Time="20100512T212259">Command execution processing completed</Info>
        <Action Name="DeploymentXdsCmdlet.SaveCachedItems" Time="20100512T212259">
          <Info Time="20100512T212259">0 updates</Info>
          <Complete Time="20100512T212259" />
        </Action>
        <Info Title="command status" Time="20100512T212259">Command has completed</Info>
      </Action>
    </Action>
    ```

    Run the following PowerShell command to set the certificate:

    ```powershell
    Set-CsCertificate -Type Default -Thumbprint <Thumbprint>
    ```

    Wrapping Up

    You now have a new UCMA 3.0 application server in your Communications Server 2010 server topology. You can provision trusted applications and trusted application endpoints on the new server using the Communications Server 2010 Management Shell; we'll take a look at how to do that in another post.
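    The post defers that provisioning step to a later article, but for orientation, here is a hedged sketch using the Management Shell. The cmdlet and parameter names below are those of the shipping Lync Server 2010 shell and may differ in the pre-beta build described here; all values are placeholders:

    ```powershell
    # Sketch only: ids, FQDN, port, and SIP address are placeholders.
    New-CsTrustedApplication -ApplicationId urn:application:MyApplication `
        -TrustedApplicationPoolFqdn appsrv.fabrikam.com -Port 6000
    New-CsTrustedApplicationEndpoint -ApplicationId urn:application:MyApplication `
        -TrustedApplicationPoolFqdn appsrv.fabrikam.com `
        -SipAddress sip:myapp@fabrikam.com
    Enable-CsTopology
    ```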


  • September 2011 Release of the Ajax Control Toolkit

    - by Stephen Walther
    I’m happy to announce the release of the September 2011 Ajax Control Toolkit. This release has several important new features, including:

    - Date ranges – When using the Calendar extender, you can specify a start and end date, and a user can pick only those dates which fall within the specified range. This was the fourth top-voted feature request for the Ajax Control Toolkit at CodePlex.
    - Twitter control – You can use the new Twitter control to display recent tweets associated with a particular Twitter user or tweets which match a search query.
    - Gravatar control – You can use the new Gravatar control to display a unique image for each user of your website. Users can upload custom images to the Gravatar.com website, or the Gravatar control can display a unique, auto-generated image for a user.

    You can download this release this very minute by visiting CodePlex: http://AjaxControlToolkit.CodePlex.com

    Alternatively, you can execute the following command from the Visual Studio NuGet console: Install-Package AjaxControlToolkit

    Improvements to the Ajax Control Toolkit Calendar Control

    The Ajax Control Toolkit Calendar extender control is one of the most heavily used controls in the Ajax Control Toolkit. The developers on the Superexpert team spent the last sprint focusing on improving this control. There are three important changes that we made to the Calendar control: we added support for date ranges, we added support for highlighting today’s date, and we fixed several bugs related to time zones and daylight savings.

    Using Calendar Date Ranges

    One of the top-voted feature requests for the Ajax Control Toolkit was a request to add support for date ranges to the Calendar control (this was the fourth most-voted feature request at CodePlex). With the latest release of the Ajax Control Toolkit, the Calendar extender now supports date ranges. For example, the following page illustrates how you can create a popup calendar which allows a user to pick only dates between March 2, 2009 and May 16, 2009.

    ```aspx
    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CalendarDateRange.aspx.cs" Inherits="WebApplication1.CalendarDateRange" %>
    <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <html>
    <head runat="server">
        <title>Calendar Date Range</title>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ToolkitScriptManager ID="tsm" runat="server" />
            <asp:TextBox ID="txtHotelReservationDate" runat="server" />
            <asp:CalendarExtender
                ID="Calendar1"
                TargetControlID="txtHotelReservationDate"
                StartDate="3/2/2009"
                EndDate="5/16/2009"
                SelectedDate="3/2/2009"
                runat="server" />
        </form>
    </body>
    </html>
    ```

    This page contains three controls: an Ajax Control Toolkit ToolkitScriptManager control, a standard ASP.NET TextBox control, and an Ajax Control Toolkit CalendarExtender control. Notice that the Calendar control includes StartDate and EndDate properties which restrict the range of valid dates. The Calendar control shows days, months, and years outside of the valid range as struck out; you cannot select days, months, or years which fall outside of the range. The following video illustrates interacting with the new date range feature:

    If you want to experiment with a live version of the Ajax Control Toolkit Calendar extender control, you can visit the Calendar Sample Page at the Ajax Control Toolkit Sample Site.

    Highlighted Today’s Date

    Another highly requested feature for the Calendar control was support for highlighting today’s date.
    The Calendar control now highlights the user’s current date regardless of the user’s time zone.

    Fixes to Time Zone and Daylight Savings Time Bugs

    We fixed several significant Calendar extender bugs related to time zones and daylight savings time. For example, previously, when you set the Calendar control’s SelectedDate property to the value 1/1/2007, the selected date would appear as 12/31/2006, 1/1/2007, or 1/2/2007 depending on the server time zone. For example, if your server time zone was set to Samoa (UTC-11:00), then setting SelectedDate=”1/1/2007” would result in “12/31/2006” being selected in the Calendar. Users of the Calendar extender control found this behavior confusing.

    After careful consideration, we decided to change the Calendar extender so that it interprets all dates as UTC dates. In other words, if you set StartDate=”1/1/2007”, then the Calendar extender parses the date as 1/1/2007 UTC instead of parsing the date according to the server time zone. By interpreting all dates as UTC dates, we avoid all of the reported issues with the SelectedDate property showing the wrong date. Furthermore, when you set the StartDate and EndDate properties, you know that the same StartDate and EndDate will be selected regardless of the time zone associated with the server or with the browser. The date 1/1/2007 will always be the date 1/1/2007.

    The New Twitter Control

    This release of the Ajax Control Toolkit introduces a new Twitter control. You can use the Twitter control to display recent tweets associated with a particular Twitter user. You also can use this control to show the results of a Twitter search. The following page illustrates how you can use the Twitter control to display recent tweets made by Scott Hanselman:

    ```aspx
    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="TwitterProfile.aspx.cs" Inherits="WebApplication1.TwitterProfile" %>
    <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <html>
    <head runat="server">
        <title>Twitter Profile</title>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ToolkitScriptManager ID="tsm" runat="server" />
            <asp:Twitter ID="Twitter1" ScreenName="shanselman" runat="server" />
        </form>
    </body>
    </html>
    ```

    This page includes two Ajax Control Toolkit controls: the ToolkitScriptManager control and the Twitter control. The Twitter control is set to display tweets from Scott Hanselman (shanselman).

    You also can use the Twitter control to display the results of a search query, for example all recent tweets related to the Ajax Control Toolkit (a hedged sketch of search-mode markup appears at the end of this section).

    Twitter limits the number of times that you can interact with their API in an hour and recommends that you cache results on the server (https://dev.twitter.com/docs/rate-limiting). By default, the Twitter control caches results on the server for a duration of 5 minutes. You can modify the cache duration by assigning a value (in seconds) to the Twitter control's CacheDuration property.

    The Twitter control wraps a standard ASP.NET ListView control. You can customize the appearance of the Twitter control by modifying its LayoutTemplate, StatusTemplate, AlternatingStatusTemplate, and EmptyDataTemplate. To learn more about the new Twitter control, visit the live Twitter Sample Page.
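    The search-mode markup did not survive extraction here, so the following is a sketch only: the Mode and Search property names are assumptions based on the description above, while CacheDuration is described in the post as taking a value in seconds.

    ```aspx
    <%-- Sketch: a Twitter control in search mode with a 10-minute cache.
         Mode/Search are assumed property names, not markup taken from the post. --%>
    <asp:Twitter ID="Twitter2" Mode="Search" Search="Ajax Control Toolkit"
        CacheDuration="600" runat="server" />
    ```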
    The New Gravatar Control

    The September 2011 release of the Ajax Control Toolkit also includes a new Gravatar control. This control makes it easy to display a unique image for each user of your website. A Gravatar is associated with an email address. You can visit Gravatar.com, upload an image, and associate the image with your email address. That way, every website which uses Gravatars (such as the www.ASP.NET website) will display your image next to your name. For example, I visited the Gravatar.com website and associated an image of a koala bear with the email address [email protected]. The following page illustrates how you can use the Gravatar control to display the Gravatar image associated with the [email protected] email address:

    ```aspx
    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GravatarDemo.aspx.cs" Inherits="WebApplication1.GravatarDemo" %>
    <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head id="Head1" runat="server">
        <title>Gravatar Demo</title>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ToolkitScriptManager ID="tsm" runat="server" />
            <asp:Gravatar ID="Gravatar1" Email="[email protected]" runat="server" />
        </form>
    </body>
    </html>
    ```

    The page above simply displays the Gravatar image associated with the [email protected] email address. If a user has not uploaded an image to Gravatar.com, then you can auto-generate a unique image for the user from the user's email address. The Gravatar control supports four types of auto-generated images:

    - Identicon – A different geometric pattern is generated for each unrecognized email.
    - MonsterId – A different image of a monster is generated for each unrecognized email.
    - Wavatar – A different image of a face is generated for each unrecognized email.
    - Retro – A different 8-bit, arcade-style face is generated for each unrecognized email.

    For example, there is no Gravatar image associated with the email address [email protected]. The following page displays an auto-generated MonsterId for this email address:

    ```aspx
    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GravatarMonster.aspx.cs" Inherits="WebApplication1.GravatarMonster" %>
    <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head id="Head1" runat="server">
        <title>Gravatar Monster</title>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ToolkitScriptManager ID="tsm" runat="server" />
            <asp:Gravatar ID="Gravatar1" Email="[email protected]" DefaultImageBehavior="MonsterId" runat="server" />
        </form>
    </body>
    </html>
    ```

    The page above generates the following image automatically from the supplied email address. To learn more about the properties of the new Gravatar control, visit the live Gravatar Sample Page.

    ASP.NET Connections Talk on the Ajax Control Toolkit

    If you are interested in learning more about the changes that we are making to the Ajax Control Toolkit, then please come to my talk on the Ajax Control Toolkit at the upcoming ASP.NET Connections conference. In the talk, I will present a summary of the changes that we have made to the Ajax Control Toolkit over the last several months and discuss our future plans. Do you have ideas for new Ajax Control Toolkit controls? Ideas for improving the toolkit? Come to my talk; I would love to hear from you. You can register for the ASP.NET Connections conference by visiting the following website: Register for ASP.NET Connections

    Summary

    The previous release of the Ajax Control Toolkit, the July 2011 release, has had over 100,000 downloads. That is a huge number of developers who are working with the Ajax Control Toolkit.
    We are really excited about the new features which we added to the Ajax Control Toolkit in the latest September sprint. We hope that you find the updated Calendar control, the new Twitter control, and the new Gravatar control valuable when building your ASP.NET Web Forms applications.


  • `# probe: true` in /etc/rc.d/init.d/* files on a RedHat system

    - by Chen Levy
    Some files (e.g. nfs, nfslock, bind) in my /etc/rc.d/init.d/ directory have a line such as the following in their comment header:

    ```
    # probe: true
    ```

    I found that those particular scripts have the probe verb, i.e.:

    ```
    service nfs probe
    ```

    But this is because the mentioned scripts contain code that deals with the probe verb. I find no mention of the "# probe: true" notation in the chkconfig man page, nor in any related man pages. Googling for it also didn't help. Is there any real significance to that line, or is it pure documentation?
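    For orientation, a sketch of the kind of comment header being discussed: the chkconfig: and description: lines are the standard RedHat init-script metadata convention, and the probe: line is the one in question (the values here are illustrative, not copied from the actual nfs script):

    ```
    #!/bin/sh
    #
    # nfs          Start/stop the NFS services.
    #
    # chkconfig: - 60 20
    # description: NFS is a popular protocol for file sharing across networks.
    # probe: true
    ```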


  • openGL ES - change the render mode from RENDERMODE_WHEN_DIRTY to RENDERMODE_CONTINUOUSLY on touch

    - by Sid
    I want to change the render mode from RENDERMODE_WHEN_DIRTY to RENDERMODE_CONTINUOUSLY when I touch the screen.

    What I need: initially the object should be stationary; after touching the screen, it should move automatically. The motion of my object is a projectile motion and it is working fine.

    What I get: a force close and a NullPointerException.

    My code:

    ```java
    public class BallThrowGLSurfaceView extends GLSurfaceView {
        MyRender _renderObj;
        Context context;
        GLSurfaceView glView;

        public BallThrowGLSurfaceView(Context context) {
            super(context);
            // TODO Auto-generated constructor stub
            _renderObj = new MyRender(context);
            this.setRenderer(_renderObj);
            this.setRenderMode(RENDERMODE_WHEN_DIRTY);
            this.requestFocus();
            this.setFocusableInTouchMode(true);
            glView = new GLSurfaceView(context.getApplicationContext());
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // TODO Auto-generated method stub
            if (event != null) {
                if (event.getAction() == MotionEvent.ACTION_DOWN) {
                    if (_renderObj != null) {
                        Log.i("renderObj", _renderObj + "lll");
                        // Ensure we call switchMode() on the OpenGL thread.
                        // queueEvent() is a method of GLSurfaceView that will do this for us.
                        queueEvent(new Runnable() {
                            public void run() {
                                glView.setRenderMode(RENDERMODE_CONTINUOUSLY);
                            }
                        });
                        return true;
                    }
                }
            }
            return super.onTouchEvent(event);
        }
    }
    ```

    PS: I know that I am making some silly mistakes in this, but I cannot figure out what it really is.


  • How do I run Firefox OS as a standalone application?

    - by JamesTheAwesomeDude
    I got the add-on for the Firefox OS Simulator, and it works great! It even keeps functioning after Firefox is closed, so I can save processing power for other things. I'd like to run it as a standalone application, so that I don't even have to open Firefox in the first place. I've gone to the System Monitor, and it says that the process (I guessed which one by CPU usage and filename) was started via:

    ```
    /home/james/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g/plugin-container 3386 true tab
    ```

    So I tried running that in the Terminal (after I'd closed the simulator, of course), but it gives this:

    ```
    james@james-OptiPlex-GX620:~/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g$ ./plugin-container 3386 true tab
    ./plugin-container: error while loading shared libraries: libxpcom.so: cannot open shared object file: No such file or directory
    ```

    What should I do? Is what I'm attempting even possible? (It should be, since the simulator kept running even after Firefox itself was closed...)

    NOTE: I've tried chmod u+sx plugin-container, but that didn't help.


  • exchange web service C# code send email from home

    - by KK
    Is it possible to write C# code as below and send email using my home network? I have a valid user name and password on that Exchange server. Is there any configuration that I can set to achieve this? BTW, this code below works when I run it within the office network; I want this code to work when run from any network.

    ```csharp
    String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"];
    String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"];
    String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"];
    String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"];

    ExchangeServiceBinding esb = new ExchangeServiceBinding();
    esb.Timeout = 1800000;
    esb.AllowAutoRedirect = true;
    esb.UseDefaultCredentials = false;
    esb.Credentials = new NetworkCredential(cEmail, cPassword);
    esb.Url = cMSExchangeWebServiceURL;

    ServicePointManager.ServerCertificateValidationCallback +=
        delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
        {
            return true;
        };

    // Create a CreateItem request object
    CreateItemType request = new CreateItemType();

    // Setup the request:
    // Indicate that we only want to send the message. No copy will be saved.
    request.MessageDisposition = MessageDispositionType.SendOnly;
    request.MessageDispositionSpecified = true;

    // Create a message object and set its properties
    MessageType message = new MessageType();
    message.Subject = subject;
    message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType();
    message.Body.BodyType1 = BodyTypeType.HTML;
    message.Body.Value = body;
    message.ToRecipients = new EmailAddressType[3];
    message.ToRecipients[0] = new EmailAddressType();
    //message.ToRecipients[1] = new EmailAddressType();
    //message.ToRecipients[2] = new EmailAddressType();
    message.ToRecipients[0].EmailAddress = "[email protected]";
    message.ToRecipients[0].RoutingType = "SMTP";
    //message.CcRecipients = new EmailAddressType[1];
    //message.CcRecipients[0] = new EmailAddressType();
    //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString();
    //message.CcRecipients[0].RoutingType = "SMTP";
    // There are some more properties in the MessageType object;
    // you can set them all according to your requirements.

    // Construct the array of items to send
    request.Items = new NonEmptyArrayOfAllItemsType();
    request.Items.Items = new ItemType[1];
    request.Items.Items[0] = message;

    // Call the CreateItem EWS method.
    CreateItemResponseType response = esb.CreateItem(request);
    ```
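    For reference, a sketch of the app.config entries this code reads. The key names come from the code above; the values are placeholders, not configuration from the question:

    ```xml
    <!-- Sketch only: key names taken from the code above, values are placeholders. -->
    <configuration>
      <appSettings>
        <add key="MSExchangeWebServiceURL" value="https://mail.example.com/EWS/Exchange.asmx" />
        <add key="Cemail" value="user@example.com" />
        <add key="Cpassword" value="password" />
        <add key="CTo" value="recipient@example.com" />
      </appSettings>
    </configuration>
    ```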


  • Unable to start Tomcat6 with HTTPS enabled

    - by ram
    I have the following server.xml settings for my Tomcat 6 server:

    ```xml
    <!-- COMMENTED
    <Connector port="8080" maxThreads="150" enableLookups="false" acceptCount="100" scheme="http" redirectPort="8443"/>
    -->
    <!-- COMMENTED
    <Connector port="80" maxThreads="150" enableLookups="false" acceptCount="100" scheme="http" redirectPort="443"/>
    -->
    <Connector port="443" maxHttpHeaderSize="8192"
        maxThreads="150" enableLookups="false"
        disableUploadTimeout="true" acceptCount="100"
        scheme="https" secure="true" SSLEnabled="true"
        SSLCertificateFile="%SSL_CERT%"
        SSLCertificateKeyFile="%SSL_KEY%"
        SSLCipherSuite="ALL:!ADH:!kEDH:!SSLv2:!EXPORT40:!EXP:!LOW"
        compression="on"
        compressableMimeType="text/html,text/xml,text/plain,application/javascript,application/json,text/javascript"/>
    ```

    The complete server.xml is here. When I try to start the application, I get the following error in the catalina.*.log file:

    ```
    INFO: Initializing Coyote HTTP/1.1 on http-80
    Apr 7, 2013 8:38:38 PM org.apache.coyote.http11.Http11AprProtocol init
    SEVERE: Error initializing endpoint
    java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0))
        at org.apache.tomcat.jni.SSLContext.make(Native Method)
        at org.apache.tomcat.util.net.AprEndpoint.init(AprEndpoint.java:729)
        at org.apache.coyote.http11.Http11AprProtocol.init(Http11AprProtocol.java:107)
        at org.apache.catalina.connector.Connector.initialize(Connector.java:1049)
        at org.apache.catalina.core.StandardService.initialize(StandardService.java:703)
        at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838)
        at org.apache.catalina.startup.Catalina.load(Catalina.java:538)
        at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
    Apr 7, 2013 8:38:38 PM org.apache.catalina.core.StandardService initialize
    SEVERE: Failed to initialize connector [Connector[HTTP/1.1-443]]
    LifecycleException: Protocol handler initialization failed: java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0))
        at org.apache.catalina.connector.Connector.initialize(Connector.java:1051)
        at org.apache.catalina.core.StandardService.initialize(StandardService.java:703)
        at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838)
        at org.apache.catalina.startup.Catalina.load(Catalina.java:538)
        at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
        at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
    ```

    I've checked the following things already:

    - I have given read permissions for everyone on the .crt and .key files.
    - I copied this server.xml to a different, working Tomcat 6 server and it works there; the server.xml from that working server doesn't work here and fails with the same error.
    - It works well with just HTTP enabled.
    - Explicitly specifying the protocol on the Connector, i.e. protocol="org.apache.coyote.http11.Http11AprProtocol", results in the same exception.

    Please help me if I am missing something. Thanks in advance.


  • May 2011 Release of the Ajax Control Toolkit

    - by Stephen Walther
    I’m happy to announce that the Superexpert team has published the May 2011 release of the Ajax Control Toolkit at CodePlex. You can download the new release at the following URL: http://ajaxcontroltoolkit.codeplex.com/releases/view/65800

    This release focused on improving the ModalPopup and AsyncFileUpload controls. Our team closed a total of 34 bugs related to the ModalPopup and AsyncFileUpload controls.

    Enhanced ModalPopup Control

    You can take advantage of the Ajax Control Toolkit ModalPopup control to easily create popup dialogs in your ASP.NET Web Forms applications. When the dialog appears, you cannot interact with any page content which appears behind the modal dialog. For example, the following page contains a standard ASP.NET Button and Panel. When you click the Button, the Panel appears as a popup dialog:

    ```aspx
    <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Simple.aspx.vb" Inherits="ACTSamples.Simple" %>
    <%@ Register TagPrefix="act" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Simple Modal Popup Sample</title>
        <style type="text/css">
            html { background-color: blue; }
            #dialog { border: 2px solid black; width: 500px; background-color: White; }
            #dialogContents { padding: 10px; }
            .modalBackground { background-color: Gray; filter: alpha(opacity=70); opacity: 0.7; }
        </style>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <act:ToolkitScriptManager ID="tsm" runat="server" />
            <asp:Panel ID="dialog" runat="server">
                <div id="dialogContents">
                    Here are the contents of the dialog.
                    <br />
                    <asp:Button ID="btnOK" Text="OK" runat="server" />
                </div>
            </asp:Panel>
            <asp:Button ID="btnShow" Text="Open Dialog" runat="server" />
            <act:ModalPopupExtender
                TargetControlID="btnShow"
                PopupControlID="dialog"
                OkControlID="btnOK"
                DropShadow="true"
                BackgroundCssClass="modalBackground"
                runat="server" />
        </div>
        </form>
    </body>
    </html>
    ```

    Notice that the page includes two controls from the Ajax Control Toolkit: the ToolkitScriptManager control and the ModalPopupExtender control. Any page which uses any of the controls from the Ajax Control Toolkit must include a ToolkitScriptManager. The ModalPopupExtender is used to create the popup. The following properties are set:

    - TargetControlID – The ID of the Button or LinkButton control which causes the modal popup to be displayed.
    - PopupControlID – The ID of the Panel control which contains the content displayed in the modal popup.
    - OKControlID – The ID of a Button or LinkButton which causes the modal popup to close.
    - DropShadow – Displays a drop shadow behind the modal popup.
    - BackgroundCssClass – The name of a Cascading Style Sheet class which is used to gray out the background of the page when the modal popup is displayed.

    The ModalPopup is completely cross-browser compatible. For example, the following screenshots show the same page displayed in Firefox 4, Internet Explorer 9, and Chrome 11:

    The ModalPopup control has lots of nice properties. For example, you can make the ModalPopup draggable. You also can programmatically hide and show a modal popup from either server-side or client-side code.
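    A quick sketch of that last point (shown in C#, although the sample pages above use VB): the extender exposes Show and Hide methods on the server, and its client-side behavior exposes matching show()/hide() functions reachable via $find with the extender's BehaviorID. The control IDs below are placeholders:

    ```csharp
    // Server-side: show or hide the popup from code-behind, for example
    // after handling a postback. ModalPopupExtender1 is the extender's ID.
    protected void btnOpen_Click(object sender, EventArgs e)
    {
        ModalPopupExtender1.Show();   // ModalPopupExtender1.Hide() dismisses it
    }
    ```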
    To learn more about the properties of the ModalPopup control, see the following website: http://www.asp.net/ajax/ajaxcontroltoolkit/Samples/ModalPopup/ModalPopup.aspx

    Animated ModalPopup Control

    In the May 2011 release of the Ajax Control Toolkit, we enhanced the ModalPopup control so that it supports animations. We made this modification in response to a feature request posted at CodePlex which got 65 votes (plenty of people wanted this feature): http://ajaxcontroltoolkit.codeplex.com/workitem/6944

    I want to thank Dani Kenan for posting a patch to this issue, which we used as the basis for adding animation support to the modal popup. Thanks Dani!

    The enhanced ModalPopup in the May 2011 release of the Ajax Control Toolkit supports the following animations:

    - OnShowing – Called before the modal popup is shown.
    - OnShown – Called after the modal popup is shown.
    - OnHiding – Called before the modal popup is hidden.
    - OnHidden – Called after the modal popup is hidden.

    You can use these animations, for example, to fade in a modal popup when it is displayed and fade it out when it is hidden. Here's the code:

    ```aspx
    <act:ModalPopupExtender
        ID="ModalPopupExtender1"
        TargetControlID="btnShow"
        PopupControlID="dialog"
        OkControlID="btnOK"
        DropShadow="true"
        BackgroundCssClass="modalBackground"
        runat="server">
        <Animations>
            <OnShown>
                <FadeIn />
            </OnShown>
            <OnHiding>
                <FadeOut />
            </OnHiding>
        </Animations>
    </act:ModalPopupExtender>
    ```

    So that you can experience the full joy of this animated modal popup, I recorded the following video:

    Of course, you can use any of the animations supported by the Ajax Control Toolkit with the modal popup. The animation reference is located here: http://www.asp.net/ajax/ajaxcontroltoolkit/Samples/Walkthrough/AnimationReference.aspx

    Fixes to the AsyncFileUpload

    In the May 2011 release, we also focused our energies on bug fixes for the AsyncFileUpload control. We fixed several major issues with the AsyncFileUpload, including:

    - It did not work in master pages.
    - It did not work when ClientIDMode="Static".
    - It did not work with Firefox 4.
    - It did not work when multiple AsyncFileUploads were included in the same page.
    - It generated markup which was not HTML5 compatible.

    The AsyncFileUpload control is a super useful control. It enables you to upload files in a form without performing a postback.
    Here’s some sample code which demonstrates how you can use the AsyncFileUpload:

    ```aspx
    <%@ Page Language="vb" AutoEventWireup="false" CodeBehind="Simple.aspx.vb" Inherits="ACTSamples.Simple1" %>
    <%@ Register TagPrefix="act" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head id="Head1" runat="server">
        <title>Simple AsyncFileUpload</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <act:ToolkitScriptManager ID="tsm" runat="server" />
            User Name:
            <br />
            <asp:TextBox ID="txtUserName" runat="server" />
            <asp:RequiredFieldValidator
                EnableClientScript="false"
                ErrorMessage="Required"
                ControlToValidate="txtUserName"
                runat="server" />
            <br /><br />
            Avatar:
            <act:AsyncFileUpload
                ID="async1"
                ThrobberID="throbber"
                UploadingBackColor="yellow"
                ErrorBackColor="red"
                CompleteBackColor="green"
                UploaderStyle="Modern"
                PersistFile="true"
                runat="server" />
            <asp:Image ID="throbber" ImageUrl="uploading.gif" style="display:none" runat="server" />
            <br /><br />
            <asp:Button ID="btnSubmit" Text="Submit" runat="server" />
        </div>
        </form>
    </body>
    </html>
    ```

    And here's the code-behind for the page above:

    ```vb
    Public Class Simple1
        Inherits System.Web.UI.Page

        Private Sub btnSubmit_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnSubmit.Click
            If Page.IsValid Then
                ' Get form fields
                Dim userName As String
                Dim file As Byte()
                userName = txtUserName.Text
                If async1.HasFile Then
                    file = async1.FileBytes
                End If
                ' Save userName, file to database
                ' Redirect to success page
                Response.Redirect("SimpleDone.aspx")
            End If
        End Sub

    End Class
    ```

    The form above sets values for the following AsyncFileUpload properties:

    - ThrobberID – The ID of an element in the page to display while a file is being uploaded.
    - UploadingBackColor – The color to display in the upload field while a file is being uploaded.
    - ErrorBackColor – The color to display in the upload field when there is an error uploading a file.
    - CompleteBackColor – The color to display in the upload field when the upload is complete.
    - UploaderStyle – The user interface style: Traditional or Modern.
    - PersistFile – When true, the uploaded file is persisted in Session state.

    The last property, PersistFile, causes the uploaded file to be stored in Session state. That way, if completing a form requires multiple postbacks, the user needs to upload the file only once. For example, if there is a server validation error, the user is not required to re-upload the file after fixing the validation issue. In the sample code above, this condition is simulated by disabling client-side validation for the RequiredFieldValidator control (its EnableClientScript property is set to false).

    The following video demonstrates how the AsyncFileUpload control works:

    You can learn more about the properties and methods of the AsyncFileUpload control by visiting the following page: http://www.asp.net/ajax/ajaxcontroltoolkit/Samples/AsyncFileUpload/AsyncFileUpload.aspx

    Conclusion

    In the May 2011 release of the Ajax Control Toolkit, we addressed over 30 bugs related to the ModalPopup and AsyncFileUpload controls. Furthermore, by building on code submitted by the community, we enhanced the ModalPopup control so that it supports animation (thanks Dani). In our next sprint, for the June release of the Ajax Control Toolkit, we plan to focus on the HTML Editor control.
Subscribe to this blog to keep updated.


  • Blazing fast performance with RadGridView for Silverlight 4, RadDataPager and WCF RIA Services

    In my previous post I used almost 2 million records to check the grid performance in WPF, and I've decided to do the same for Silverlight 4 using WCF RIA Services. The grid again is bound completely codelessly using DomainDataSource and RadDataPager:

    ```xml
    <Grid x:Name="LayoutRoot">
        <Grid.RowDefinitions>
            <RowDefinition />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <riaControls:DomainDataSource Name="orderDomainDataSource" QueryName="GetOrdersAndOrderDetails">
            <riaControls:DomainDataSource.DomainContext>
                <my:NorthwindDomainContext />
            </riaControls:DomainDataSource.DomainContext>
        </riaControls:DomainDataSource>
        <telerik:RadGridView Name="RadGridView1" IsReadOnly="True" AutoExpandGroups="True"
                             ItemsSource="{Binding Data, ElementName=orderDomainDataSource}" />
        <telerik:RadDataPager Grid.Row="1" PageSize="10"
                              Source="{Binding Data, ElementName=orderDomainDataSource}"
                              DisplayMode="All" />
    </Grid>
    ```

    And the query again will return a join between the Northwind Orders and Order_Details tables:

    ```csharp
    public IQueryable<OrdersAndOrderDetails> GetOrdersAndOrderDetails()
    ...
    ```
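    The body of that query is elided above, so purely as a hypothetical sketch of what such a join might look like in a LINQ-to-Entities domain service (the OrdersAndOrderDetails shape, context, and property names are assumptions, not code from the post):

    ```csharp
    // Hypothetical sketch only: entity shape and names are assumptions.
    public IQueryable<OrdersAndOrderDetails> GetOrdersAndOrderDetails()
    {
        return from o in this.ObjectContext.Orders
               join od in this.ObjectContext.Order_Details
                   on o.OrderID equals od.OrderID
               select new OrdersAndOrderDetails
               {
                   OrderID = o.OrderID,
                   CustomerID = o.CustomerID,
                   ProductID = od.ProductID,
                   Quantity = od.Quantity,
                   UnitPrice = od.UnitPrice
               };
    }
    ```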


  • URL Rewrite is adding HTTPS to my canonical redirects in IIS7

    - by Derek Hunziker
    Hello, I have the following rule defined in my Web.config:

    ```xml
    <rule name="Enforce canonical hostname" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
            <add input="{HTTP_HOST}" negate="true" pattern="^www\.mydomain\.org$" />
        </conditions>
        <action type="Redirect" url="http://www.mydomain.com/" redirectType="Permanent" />
    </rule>
    ```

    What I am experiencing is strange: it appears that I am being redirected to https://www.mydomain.com/, which causes my browser to hang. I do not have SSL encryption turned on, nor do I have any special authorization rules. The web server in question is behind an F5 load balancer. Any ideas?


  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box, somewhat unexpectedly, is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

    ```html
    <form>
        Name: <input type="name" name="name" value="Rick" />
        Value: <input type="value" name="value" value="12" />
        Entered: <input type="entered" name="entered" value="12/01/2011" />
        <input type="button" id="btnSend" value="Send" />
    </form>
    <script type="text/javascript">
        $("#btnSend").click(function () {
            $.post("samples/PostMultipleSimpleValues?action=kazam",
                $("form").serialize(),
                function (result) {
                    alert(result);
                });
        });
    </script>
    ```

    Or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

    ```javascript
    $.post("samples/PostMultipleSimpleValues?action=kazam",
        { name: "Rick", value: 1, entered: "12/01/2012" },
        function (result) {
            alert(result);
        });
    ```

    On the wire this generates a simple POST request with URL-encoded values in the content:

    ```
    POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
    Host: localhost
    User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
    Accept: application/json
    Connection: keep-alive
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8
    X-Requested-With: XMLHttpRequest
    Referer: http://localhost/AspNetWebApi/FormPostTest.html
    Content-Length: 41
    Pragma: no-cache
    Cache-Control: no-cache

    name=Rick&value=12&entered=12%2F10%2F2011
    ```

    Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

    ```csharp
    [HttpPost]
    public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
    {
        return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
    }
    ```

    you'll find that you get an HTTP 404 error and:

    ```json
    { "Message": "No HTTP resource was found that matches the request URI…" }
    ```

    Yes, it's possible to pass multiple POST parameters, of course, but Web API expects you to use model binding for this: mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately, you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values (a sketch of that alternative follows below). If you're using JSON only, using the dynamic JObject/JValue objects might also work.

    ModelBinding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and, maybe more so, a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one. So this is something that I think is fairly important, but unfortunately missing in the base Web API installation.
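    As a minimal sketch of the FormDataCollection alternative mentioned above (FormDataCollection and its Get method live in System.Net.Http.Formatting; the body here is illustrative, not code from the post):

    ```csharp
    [HttpPost]
    public string PostMultipleSimpleValues(FormDataCollection form)
    {
        // Read the raw POSTed name/value pairs; type conversion is up to you.
        string name = form.Get("name");
        int value = int.Parse(form.Get("value"));
        DateTime entered = DateTime.Parse(form.Get("entered"));
        return string.Format("Name: {0}, Value: {1}, Date: {2}", name, value, entered);
    }
    ```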
Creating a Custom Parameter Binder

Luckily Web API is greatly extensible and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup.

Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

public class SimplePostVariableParameterBinding : HttpParameterBinding
{
    private const string MultipleBodyParameters = "MultipleBodyParameters";

    public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    {
    }

    /// <summary>
    /// Check for simple binding parameters in POST data. Bind POST
    /// data as well as query string data
    /// </summary>
    public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                             HttpActionContext actionContext,
                                             CancellationToken cancellationToken)
    {
        // Body can only be read once, so read and cache it
        NameValueCollection col = TryReadBody(actionContext.Request);

        string stringValue = null;
        if (col != null)
            stringValue = col[Descriptor.ParameterName];

        // try reading query string if we have no POST/PUT match
        if (stringValue == null)
        {
            var query = actionContext.Request.GetQueryNameValuePairs();
            if (query != null)
            {
                var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                if (matches.Count() > 0)
                    stringValue = matches.First().Value;
            }
        }

        object value = StringToType(stringValue);

        // Set the binding result here
        SetValue(actionContext, value);

        // now, we can return a completed task with no result
        TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
        tcs.SetResult(default(AsyncVoid));
        return tcs.Task;
    }

    private object StringToType(string stringValue)
    {
        object value = null;

        if (stringValue == null)
            value = null;
        else if (Descriptor.ParameterType == typeof(string))
            value = stringValue;
        else if (Descriptor.ParameterType == typeof(int))
            value = int.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int32))
            value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int64))
            value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(decimal))
            value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(double))
            value = double.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(DateTime))
            value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(bool))
        {
            value = false;
            if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                value = true;
        }
        else
            value = stringValue;

        return value;
    }

    /// <summary>
    /// Read and cache the request body
    /// </summary>
    /// <param name="request"></param>
    /// <returns></returns>
    private NameValueCollection TryReadBody(HttpRequestMessage request)
    {
        object result = null;

        // try to read out of cache first
        if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
        {
            // parses a form-encoded string like firstname=Hongmei&lastname=Ge
            result = request.Content.ReadAsFormDataAsync().Result;
            request.Properties.Add(MultipleBodyParameters, result);
        }

        return result as NameValueCollection;
    }

    private struct AsyncVoid
    {
    }
}

The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so it never fires on complex types. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and caches it; subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

GlobalConfiguration.Configuration.ParameterBindingRules
    .Insert(0, (HttpParameterDescriptor descriptor) =>
    {
        var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

        // Only apply this binder on POST and PUT operations
        if (supportedMethods.Contains(HttpMethod.Post) ||
            supportedMethods.Contains(HttpMethod.Put))
        {
            var supportedTypes = new Type[] { typeof(string), typeof(int),
                                              typeof(decimal), typeof(double),
                                              typeof(bool), typeof(DateTime) };

            if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                return new SimplePostVariableParameterBinding(descriptor);
        }

        // let the default bindings do their work
        return null;
    });

The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.
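As an aside, the long if/else chain in StringToType could be collapsed for the numeric and date types via Convert.ChangeType. This is a hedged alternative sketch, not part of the original implementation:

// Alternative StringToType: relies on Convert.ChangeType for the
// IConvertible types (string, int, long, decimal, double, DateTime, ...).
private object StringToType(string stringValue)
{
    if (stringValue == null)
        return null;

    // bool needs special handling for checkbox-style form values
    if (Descriptor.ParameterType == typeof(bool))
        return stringValue == "true" || stringValue == "on" || stringValue == "1";

    try
    {
        return Convert.ChangeType(stringValue, Descriptor.ParameterType, CultureInfo.CurrentCulture);
    }
    catch (FormatException)
    {
        // mirror the original code's fallback: hand back the raw string
        return stringValue;
    }
}

The explicit chain in the original is arguably clearer about exactly which types are supported, so this is purely a matter of taste.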
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added. This is especially true since this binding doesn't affect existing bindings.

Resources

SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded Post Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api  AJAX

    Read the article

  • Oracle HRMS API – Update Employee Fed Tax Rule

    - by PRajkumar
    API -- pay_federal_tax_rule_api.update_fed_tax_rule

    Example --

    DECLARE
       lb_correction                BOOLEAN;
       lb_update                    BOOLEAN;
       lb_update_override           BOOLEAN;
       lb_update_change_insert      BOOLEAN;
       ld_effective_start_date      DATE;
       ld_effective_end_date        DATE;
       ln_assignment_id             NUMBER        := 33561;
       lc_dt_ud_mode                VARCHAR2(100) := NULL;
       ln_object_version_number     NUMBER        := 0;
       ln_supp_tax_override_rate    PAY_US_EMP_FED_TAX_RULES_F.SUPP_TAX_OVERRIDE_RATE%TYPE;
       ln_emp_fed_tax_rule_id       PAY_US_EMP_FED_TAX_RULES_F.EMP_FED_TAX_RULE_ID%TYPE;
    BEGIN
       -- Find Date Track Mode
       -- -------------------------------
       dt_api.find_dt_upd_modes
       (  -- Input data elements
          -- ------------------------------
          p_effective_date         => TO_DATE('12-JUN-2011'),
          p_base_table_name        => 'PER_ALL_ASSIGNMENTS_F',
          p_base_key_column        => 'ASSIGNMENT_ID',
          p_base_key_value         => ln_assignment_id,
          -- Output data elements
          -- -------------------------------
          p_correction             => lb_correction,
          p_update                 => lb_update,
          p_update_override        => lb_update_override,
          p_update_change_insert   => lb_update_change_insert
       );

       IF ( lb_update_override = TRUE OR lb_update_change_insert = TRUE ) THEN
          -- UPDATE_OVERRIDE
          -- --------------------------------
          lc_dt_ud_mode := 'UPDATE_OVERRIDE';
       END IF;

       IF ( lb_correction = TRUE ) THEN
          -- CORRECTION
          -- ----------------------
          lc_dt_ud_mode := 'CORRECTION';
       END IF;

       IF ( lb_update = TRUE ) THEN
          -- UPDATE
          -- -------------
          lc_dt_ud_mode := 'UPDATE';
       END IF;

       -- Update Employee Fed Tax Rule
       -- ----------------------------------------------
       pay_federal_tax_rule_api.update_fed_tax_rule
       (  -- Input data elements
          -- -----------------------------
          p_effective_date            => TO_DATE('20-JUN-2011'),
          p_datetrack_update_mode     => lc_dt_ud_mode,
          p_emp_fed_tax_rule_id       => 7417,
          p_withholding_allowances    => 100,
          p_fit_additional_tax        => 10,
          p_fit_exempt                => 'N',
          p_supp_tax_override_rate    => 5,
          -- Output data elements
          -- --------------------------------
          p_object_version_number     => ln_object_version_number,
          p_effective_start_date      => ld_effective_start_date,
          p_effective_end_date        => ld_effective_end_date
       );

       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;
          dbms_output.put_line(SQLERRM);
    END;
    /
    SHOW ERR;

    Read the article

  • Loading a Template From a User Control

    - by Ricardo Peres
    What if you wanted to load a template (ITemplate property) from an external user control (.ascx) file? Yes, it is possible; there are a number of ways to do this, and the one I'll talk about here is through a type converter. You need to apply a TypeConverterAttribute to your ITemplate property, where you specify a custom type converter that does the job. This type converter relies on InstanceDescriptor. Here is the code for it:

public class TemplateTypeConverter : TypeConverter
{
    public override Boolean CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return ((sourceType == typeof(String)) || (base.CanConvertFrom(context, sourceType) == true));
    }

    public override Boolean CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        return ((destinationType == typeof(InstanceDescriptor)) || (base.CanConvertTo(context, destinationType) == true));
    }

    public override Object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, Object value, Type destinationType)
    {
        if (destinationType == typeof(InstanceDescriptor))
        {
            Object objectFactory = value.GetType().GetField("_objectFactory", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(value);
            Object builtType = objectFactory.GetType().BaseType.GetField("_builtType", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(objectFactory);
            MethodInfo loadTemplate = typeof(TemplateTypeConverter).GetMethod("LoadTemplate");

            return (new InstanceDescriptor(loadTemplate, new Object[] { "~/" + (builtType as Type).Name.Replace('_', '/').Replace("/ascx", ".ascx") }));
        }

        return base.ConvertTo(context, culture, value, destinationType);
    }

    public static ITemplate LoadTemplate(String virtualPath)
    {
        using (Page page = new Page())
        {
            return (page.LoadTemplate(virtualPath));
        }
    }
}

And, on your control:

public class MyControl : Control
{
    [Browsable(false)]
    [TypeConverter(typeof(TemplateTypeConverter))]
    public ITemplate Template { get; set; }
}

This allows the template to be declared directly in markup. Hope this helps!
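    Since the example declaration markup was lost in publishing, here is a short usage sketch in code instead, assuming a hypothetical template path:

// Load an ITemplate from an external user control and assign it to the
// control; the path "~/Templates/MyTemplate.ascx" is just an example.
ITemplate template = TemplateTypeConverter.LoadTemplate("~/Templates/MyTemplate.ascx");
MyControl control = new MyControl { Template = template };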

    Read the article

  • Getting WCF Services in a Silverlight solution to play nice on deployment

    - by brendonpage
    I have come across two issues with deploying WCF services in a Silverlight solution; admittedly one is more of a hiccup, and only occurs if you take the easy way out and reference your services through Visual Studio.

The First Issue

This occurs when you deploy your WCF services to an IIS server. When you browse to the services using your web browser, you are greeted with "This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.". When you make a call to this service from your Silverlight application, you get the extremely helpful "NotFound" error; this error message can be found in the error property of the event arguments on the complete event handler for that call. As it did with me, this will leave most people scratching their heads, because the very same services work just fine on the ASP.NET Development Web Server and on my local IIS server.

Now I'm no server/hosting/IIS expert, so I did a bit of searching when I first encountered this issue. I found out this happens because IIS supports multiple address bindings per protocol (http/https/ftp … etc) per web site, but WCF only supports binding to one address per protocol. This causes a problem when the WCF service is hosted on a site with multiple address bindings, because IIS provides all of the bindings to the host factory when running the service. While this problem occurs mainly on shared hosting solutions, it is not limited to shared hosting; it just seems like all shared hosting providers set up sites on their servers with multiple address bindings. For interest's sake, I added functionality to the example project attached to this post to dump the addresses given to the WCF service by IIS into a log file. This was the output on the shared hosting solution I use:

http://mydomain.co.za/Services/TestService.svc
http://www.mydomain.co.za/Services/TestService.svc
http://mydomain-co-za.win13.wadns.net/Services/TestService.svc
http://win13/Services/TestService.svc

As you can see, all these addresses are for the http protocol, which is where it all goes wrong for WCF.

Fixes for the First Issue

There are a few ways to get around this. The first is the easiest: target .NET 4! Yes, that's right: in .NET 4, WCF services support multiple addresses per protocol. This functionality is controlled by an option which is on by default if you create a new project, but which you will need to turn on if you are upgrading to .NET 4. To do this, set the multipleSiteBindingsEnabled property of the serviceHostingEnvironment tag in the web.config file to true, as shown below:

<system.serviceModel>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
</system.serviceModel>

Beware: this ONLY works in .NET 4, so if you don't have a server with .NET 4 installed that you can deploy to, you will need to employ one of the other workarounds.

The second option will work for .NET 3.5 and 4. For this option, all you need to do is modify the web.config file and add baseAddressPrefixFilters to the serviceHostingEnvironment tag, as shown below:

<system.serviceModel>
    <serviceHostingEnvironment>
        <baseAddressPrefixFilters>
            <add prefix="http://www.mydomain.co.za"/>
        </baseAddressPrefixFilters>
    </serviceHostingEnvironment>
</system.serviceModel>

These will be used to filter the list of base addresses that IIS provides to the host factory.
When specifying these prefix filters, be sure to specify filters that will only allow one result through; otherwise the entire exercise will be pointless. There is, however, a problem with this workaround: you are only allowed to specify one prefix filter per protocol, which means you can't add filters for all your environments. This therefore adds to the list of things to do before deploying or switching dev machines.

The third option is the one I currently employ. It will work for .NET 3.0, 3.5 and 4, although it is not needed for .NET 4. For this option you create a custom host factory which inherits from the ServiceHostFactory class. In the implementation of the ServiceHostFactory you employ logic to figure out which of the base addresses, given by IIS, to use when creating the service host. The logic you use to do this is completely up to you. I have seen quite a few solutions that simply statically reference an index into the list of base addresses; this works for most situations but falls short in others. For instance, if the order of the base addresses were to change, it might end up returning an address that only resolves on the server's local network, like the last one in the example I gave at the beginning. Another instance: if a request comes in on a different protocol, like https, you would be creating the service host using an address on the incorrect protocol, like http.

To reliably find the correct address to use, I use the address that the service was requested on. To accomplish this I use the HttpContext, which requires the service to operate with AspNetCompatibilityRequirements turned on. If for some reason running your services with AspNetCompatibilityRequirements on isn't an option, you can still use this method; you will just have to come up with your own logic for selecting the correct address.

First you will need to enable ASP.NET compatibility for your hosting environment. To do this, set it to true in the web.config file, as shown below (note the attribute is aspNetCompatibilityEnabled):

<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>

You will then need to mark any services that are going to use the custom host factory to allow AspNetCompatibilityRequirements, as shown below:

[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class TestService
{
}

Now for the custom host factory. This is where the logic lives that selects the correct address to create the service host with. The one I use is shown below:

public class CustomHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        //
        // Compose a prefix filter based on the requested uri
        //
        string prefixFilter = HttpContext.Current.Request.Url.Scheme + "://" +
                              HttpContext.Current.Request.Url.DnsSafeHost;

        if (!HttpContext.Current.Request.Url.IsDefaultPort)
        {
            prefixFilter += ":" + HttpContext.Current.Request.Url.Port.ToString() + "/";
        }

        //
        // Find a base address that matches the prefix filter
        //
        foreach (Uri baseAddress in baseAddresses)
        {
            if (baseAddress.OriginalString.StartsWith(prefixFilter))
            {
                return new ServiceHost(serviceType, baseAddress);
            }
        }

        //
        // Throw exception if no matching base address was found
        //
        throw new Exception("Custom Host Factory: No base address matching '" + prefixFilter + "' was found.");
    }
}

The most important line in the custom host factory is the one that returns a new service host.
This has to return a service host that specifies only one base address per protocol. Since I filter by the address the request came in on, I only need to create the service host with one address, and this address will always be of the correct protocol.

Now that you have a custom host factory, you have to tell your services to use it. To do this you view the markup of the service by right-clicking on it in the solution explorer and choosing "View Markup". Then you add/set the value of the Factory property to the full namespace path of your custom host factory. The original screenshot did not survive publishing, but the directive typically ends up looking something like this, with the names adjusted to your own project:

<%@ ServiceHost Language="C#" Service="MyNamespace.TestService" CodeBehind="TestService.svc.cs" Factory="MyNamespace.CustomHostFactory" %>

And that is it done: the service will now use the specified custom host factory.

The Second Issue

As I mentioned earlier, this issue is more of a hiccup, but I thought it worthy of a mention so I included it. This issue only occurs when you add a service reference to a Silverlight project. Visual Studio will generate a lot of code for you; part of that generated code is the ServiceReferences.ClientConfig file. This file stores the endpoint configuration that is used when accessing your services through the generated proxy classes. Here is what that file looks like:

<configuration>
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding name="CustomBinding_TestService">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="CustomBinding_BrokenService">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="http://localhost:49347/services/TestService.svc"
                binding="customBinding" bindingConfiguration="CustomBinding_TestService"
                contract="TestService.TestService" name="CustomBinding_TestService" />
            <endpoint address="http://localhost:49347/Services/BrokenService.svc"
                binding="customBinding" bindingConfiguration="CustomBinding_BrokenService"
                contract="BrokenService.BrokenService" name="CustomBinding_BrokenService" />
        </client>
    </system.serviceModel>
</configuration>

As you will notice, the addresses for the endpoints are set to the addresses of the services you added the service references from, so unless you are adding the service references from your live services, you will have to change these addresses before you deploy. This is little more than an annoyance really, but it adds to the list of things to do before you can deploy, and if left unchecked that list can get out of control.

Fix for the Second Issue

The way you would usually access a service added this way is to create an instance of the proxy class like so:

BrokenServiceClient proxy = new BrokenServiceClient();

Closer inspection of these generated proxy classes reveals a few overloaded constructors, one of which allows you to specify the endpoint address to use when creating the proxy. From here all you have to do is come up with some logic that will provide you with the relative path to your services.
Since my WCF services are usually hosted in the same project as my Silverlight app, I use the class shown below:

public class ServiceProxyHelper
{
    /// <summary>
    /// Create a broken service proxy
    /// </summary>
    /// <returns>A broken service proxy</returns>
    public static BrokenServiceClient CreateBrokenServiceProxy()
    {
        Uri address = new Uri(Application.Current.Host.Source, "../Services/BrokenService.svc");
        return new BrokenServiceClient("CustomBinding_BrokenService", address.AbsoluteUri);
    }
}

Then I create an instance of the proxy class using my service helper class like so:

BrokenServiceClient proxy = ServiceProxyHelper.CreateBrokenServiceProxy();

The way this works: "Application.Current.Host.Source" returns the URL of the ClientBin folder the Silverlight app is hosted in, and "../Services/BrokenService.svc" is then used as the relative path to the service from the ClientBin folder; combined by the Uri constructor, this gives me the URL to my service. The "CustomBinding_BrokenService" is a reference to the endpoint configuration in the ServiceReferences.ClientConfig file. Yes, this means you still need the ServiceReferences.ClientConfig file. All this is doing is using a different endpoint address than the one specified in the ServiceReferences.ClientConfig file; all the other settings from the ServiceReferences.ClientConfig file are still used when creating the proxy.

I have uploaded an example project which covers the custom host factory solution from the first issue and everything from the second issue. I included the code to write the list of base addresses to a log file in my implementation of the custom host factory; this is not needed for the custom host factory to function and can safely be removed.

Download (WCFServicesDeploymentExample.zip)

    Read the article

  • How To: Filter as you type RadGridView inside RadComboBox for WPF and Silverlight

    I've made a small example of how to place a RadGridView inside an editable RadComboBox and filter the grid items as you type in the combo. The easiest way to place any UI element in RadComboBox is to create a single RadComboBoxItem and define the desired Template:

<telerikInput:RadComboBox Text="{Binding Text, Mode=TwoWay}" IsEditable="True" Height="25" Width="200">
    <telerikInput:RadComboBox.Items>
        <telerikInput:RadComboBoxItem>
            <telerikInput:RadComboBoxItem.Template>
                <ControlTemplate>
                    <telerikGrid:RadGridView x:Name="RadGridView1" ShowGroupPanel="False"
                                             CanUserFreezeColumns="False" RowIndicatorVisibility="Collapsed"
                                             IsReadOnly="True" IsFilteringAllowed="False"
                                             ItemsSource="{Binding Items}" Width="200" Height="150"
                                             SelectedItem="{Binding SelectedItem, Mode=TwoWay}">
                    </telerikGrid:RadGridView>
                </ControlTemplate>
            </telerikInput:RadComboBoxItem.Template>
        </telerikInput:RadComboBoxItem>
    </telerikInput:RadComboBox.Items>
</telerikInput:RadComboBox>

Now you can create a small view model and bind the combo's Text, Items and SelectedItem properties to it - a sketch of one possible view model follows.
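    The original post is truncated at this point, so the view model below is a minimal reconstruction; all names and the prefix-filtering strategy are assumptions, not the author's code:

// A minimal view model: typing into the combo sets Text, which rebuilds
// the Items collection the grid is bound to.
public class ComboFilterViewModel : INotifyPropertyChanged
{
    private readonly List<string> allItems = new List<string> { "Alpha", "Beta", "Gamma" };
    private string text;
    private string selectedItem;

    public ComboFilterViewModel()
    {
        Items = new ObservableCollection<string>(allItems);
    }

    public ObservableCollection<string> Items { get; private set; }

    public string Text
    {
        get { return text; }
        set
        {
            text = value;
            OnPropertyChanged("Text");

            // Rebuild the grid's item list from the typed prefix
            Items.Clear();
            foreach (var item in allItems.Where(i =>
                i.StartsWith(text ?? string.Empty, StringComparison.OrdinalIgnoreCase)))
            {
                Items.Add(item);
            }
        }
    }

    public string SelectedItem
    {
        get { return selectedItem; }
        set { selectedItem = value; OnPropertyChanged("SelectedItem"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

Set an instance of this class as the DataContext of the page (or the combo) and the bindings in the XAML above will pick it up.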

    Read the article

  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible.

For some years I've been trying to conceive a wireless environment that I'd set up anywhere and advise for everyone, from big enterprises to small home networks of one machine. I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering.

I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical sense. In a way, encrypting is for wireless what walling (building walls) is for cables, and adding pass-phrases is adding a door with a key. But cabling without encryption is also insecure: someone just needs to plug in to get your data! So while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data regardless of wires, and passwords should always be added to the users, not to the wifi. For securing files, truly, the best solution is backup. Sure, all that doesn't happen that often, but I won't consider the many situations where people just don't care. I think there are enough situations where people actually do care about using passwords on their OS users, so let's go with that in mind.

To break through the walls or the door, someone needs proper equipment such as a hammer or a master key of some kind. The same is true for breaking the wireless walls in the analogy. But I'd say true data security lies elsewhere.

I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you so choose, and anyone can connect to the internet through that, without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop because of that public usage. That's security, and it's open. And who doesn't want to be able to use the internet freely anywhere you can find wifi spots? I have 3G myself, but that's beside the point here. If I have wifi at home, I want to let people use it freely for internet so as not to be a hypocrite, and even guests can easily access my files, with read-only access, so I don't need to keep setting up encryption and pass-phrases that are not wholly compatible.

I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think? Is there really a need for encryption to have true wireless security?

    Read the article

  • Supervisor sentry-web exit status 1

    - by rockingskier
    I'm having problems getting Sentry (https://www.getsentry.com - not enough rep for a link) running as a service using supervisor. I can run Sentry from the command line and view it correctly in the browser, but when it comes to supervisor I am completely in the dark. I shall try to give all the details I can.

Initial user warning: I am by no means a server admin, just playing/learning in VirtualBox. I literally only just discovered supervisor from reading the Sentry documentation, so I may well be making some obvious mistakes here.

The setup: Ubuntu Server 11.10 (fresh install, VirtualBox); a virtualenv with Sentry and its dependencies; supervisor.

Instructions followed: supervisor with a vanilla ini file; the Sentry/supervisor instructions.

My supervisor ini (Sentry section):

[program:sentry-web]
directory=/root/.virtualenvs/sentry/
command= start http /root/.virtualenvs/sentry/bin/sentry
autostart=true
autorestart=true
redirect_stderr=true

OK, so here we go. When I run supervisord -n I get the following messages rather than a nice web interface to play with:

2012-04-12 23:48:09,024 CRIT Supervisor running as root (no user in config file)
2012-04-12 23:48:09,097 INFO RPC interface 'supervisor' initialized
2012-04-12 23:48:09,099 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2012-04-12 23:48:09,100 INFO supervisord started with pid 17813
2012-04-12 23:48:10,126 INFO spawned: 'sentry-web' with pid 17816
2012-04-12 23:48:10,169 INFO exited: sentry-web (exit status 1; not expected)
2012-04-12 23:48:11,199 INFO spawned: 'sentry-web' with pid 17817
2012-04-12 23:48:11,238 INFO exited: sentry-web (exit status 1; not expected)
2012-04-12 23:48:13,269 INFO spawned: 'sentry-web' with pid 17818
2012-04-12 23:48:13,309 INFO exited: sentry-web (exit status 1; not expected)
2012-04-12 23:48:16,343 INFO spawned: 'sentry-web' with pid 17819
2012-04-12 23:48:16,389 INFO exited: sentry-web (exit status 1; not expected)
2012-04-12 23:48:17,394 INFO gave up: sentry-web entered FATAL state, too many start retries too quickly

CRIT Supervisor running as root (no user in config file) suggests a big problem - I probably shouldn't be running this as root? CRIT Server 'unix_http_server' running without any HTTP authentication checking - surely authentication is optional? INFO exited: sentry-web (exit status 1; not expected) - *sad face* here. Google hasn't been much help yet. Anyway, that is it as far as I know. If anyone can help me, that would be greatly appreciated. Thanks in advance.

    Read the article

  • Error when Sending Emails

    - by dallasclark
    A client of mine keeps receiving the following email when sending mail, even though their emails are sent successfully:

Your outgoing (SMTP) e-mail server has reported an internal error... The server responded: 451 qq read error (#4.3.0)

In the mail log (/usr/local/psa/var/log/maillog) I receive the following error:

/var/qmail/bin/relaylock[3152]: /var/qmail/bin/relaylock

My SMTP service is set up as follows, if this helps:

service smtp
{
    socket_type = stream
    protocol    = tcp
    wait        = no
    disable     = no
    user        = root
    instances   = UNLIMITED
    env         = SMTPAUTH=1
    server      = /var/qmail/bin/tcp-env
    server_args = -Rt0 /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true
}

    Read the article

  • Google Analytics and Whos.amung.us in realtime visitors, why such an enormous discrepancy?

    - by jacouh
    For years I have used both Google Analytics and whos.amung.us in a site; the Google Analytics and whos.amung.us JavaScripts are inserted into the same pages in the tracked part of the site. In real-time visitors, why such an enormous discrepancy? For example, at the moment Google Analytics gives me 9 visitors while whos.amung.us indicates 59 - about six times as many. Why is whos.amung.us six times more optimistic than Google Analytics in terms of real-time visitors?

My questions are: does whos.amung.us not detect robots, while Google does? Does GA ignore visitors from some countries, but whos.amung.us doesn't? Do some robots/bots execute the whos.amung.us tracking JavaScript, while no robots/bots can execute the tracking JavaScript provided by Google Analytics?

To facilitate your analysis, I copy the JS code used below.

Google Analytics:

<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'MyGaAccountNo']);
_gaq.push(['_trackPageview']);
(function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>

Whos.amung.us:

<script>
var _wau = _wau || []; _wau.push(["tab", "MyWAUAccountNo", "c6x", "right-upper"]);
(function() {
    var s = document.createElement("script"); s.async = true;
    s.src = "http://widgets.amung.us/tab.js";
    document.getElementsByTagName("head")[0].appendChild(s);
})();
</script>

I've already signaled this to the WAU staff some time ago (NR); I've not done this with Google, as they don't handle this kind of feedback. Thank you for your explanations.

    Read the article

  • Ubuntu ATI second display as main display

    - by Josh
    How can I make my external second display the main display in Ubuntu? I'm using the ATI Catalyst Control Center (amdcccle), and it seems there is no way to make this switch in the GUI. Here is my xorg.conf:

Section "ServerLayout"
    Identifier "amdcccle Layout"
    Screen 0 "amdcccle-Screen[1]-0" 0 0
EndSection

Section "Files"
EndSection

Section "Module"
    Load "glx"
EndSection

Section "ServerFlags"
    Option "Xinerama" "off"
EndSection

Section "Monitor"
    Identifier "0-LCD"
    Option "VendorName" "ATI Proprietary Driver"
    Option "ModelName" "Generic Autodetecting Monitor"
    Option "DPMS" "true"
    Option "PreferredMode" "1366x768"
    Option "TargetRefresh" "60"
    Option "Position" "1680 0"
    Option "Rotate" "normal"
    Option "Disable" "false"
EndSection

Section "Monitor"
    Identifier "0-CRT1"
    Option "VendorName" "ATI Proprietary Driver"
    Option "ModelName" "Generic Autodetecting Monitor"
    Option "DPMS" "true"
    Option "PreferredMode" "1680x1050"
    Option "TargetRefresh" "60"
    Option "Position" "0 0"
    Option "Rotate" "normal"
    Option "Disable" "false"
EndSection

Section "Device"
    Identifier "Default Device"
    Driver "fglrx"
EndSection

Section "Device"
    Identifier "amdcccle-Device[1]-0"
    Driver "fglrx"
    Option "Monitor-LCD" "0-LCD"
    Option "Monitor-CRT1" "0-CRT1"
    BusID "PCI:1:5:0"
EndSection

Section "Device"
    Identifier "amdcccle-Device[1]-1"
    Driver "fglrx"
    Option "Monitor-LCD" "0-LCD"
    BusID "PCI:1:5:0"
    Screen 1
EndSection

Section "Screen"
    Identifier "Default Screen"
    DefaultDepth 24
EndSection

Section "Screen"
    Identifier "amdcccle-Screen[1]-0"
    Device "amdcccle-Device[1]-0"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Virtual 3046 3046
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "amdcccle-Screen[1]-1"
    Device "amdcccle-Device[1]-1"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

    Read the article

  • How to make software development decisions based on facts

    - by Laila
    We love to hear stories about the many and varied ways our customers use the tools that we develop, but in our earnest search for stories and feedback, we'd rather forgotten that some of our keenest users are fellow RedGaters, in the same building. It was almost by chance that we discovered how the SQL Source Control team were using SmartAssembly.

As it happens, there is a separate account (here on Simple-Talk) of how SmartAssembly was used to support the Early Access program by providing answers to specific questions about how the SQL Source Control product was used. But what really got us all grinning was how valuable the SQL Source Control team found the reports that SmartAssembly was quickly and painlessly providing. So gather round, my friends, and I'll tell you the Tale Of The Framework Upgrade.

<strange mirage effect to denote a flashback. A subtle background string of music starts playing in minor key>

Kevin and his team were undecided. They weren't sure whether they could move their software product from .NET 2 to .NET 3.5, let alone to .NET 4. You see, they were faced with having to guess what version of .NET was already installed on the average user's machine, which I'm sure you'll agree is no easy task. Upgrading their code to .NET 3.5 might put up a barrier to people trying the tool, which was the last thing Kevin wanted: "what if our users have to download X, Y, and Z before being able to open the application?" he asked. That fear of users having to do half an hour of downloads (followed by at least ten minutes of installation, followed by a five-minute restart) meant that Kevin's team couldn't take advantage of WCF (Windows Communication Foundation). This made them sad, because WCF would have allowed them to write their code in a much simpler way, and in hours instead of days (as was the case with .NET 2). Oh sure, they had a gut feeling that the fear was probably unfounded - 3.5 had been out for so many years - but they weren't sure.

<background music switches to major key>

SmartAssembly Feature Usage Reporting gave Kevin and his team exactly what they needed: hard data on their users' systems, both hardware and software. I was there, I saw it happen, and that's not the sort of thing a woman quickly forgets. I'll always remember his last words (before he went to lunch): "You get lots of free information by just checking a box in SmartAssembly" is what he said.

For example, they could see how many CPU cores their customers were using, and found out that they should be making use of parallelism to take advantage of the available cores. But crucially (and this is the moral of my tale, dear reader), Kevin saw that 99% of SQL Source Control's users were on .NET 3.5 or above. So he knew that they could make the switch and that it was safe to do so. With this reassurance, they could use WCF not only to make development easier, but also to give them a really nice way to do inter-process communication between the Source Control and SQL Compare products. To have done that on .NET 2.0 was certainly possible <knowing chuckle>, but Microsoft have made it a lot easier with WCF.

<strange mirage effect to denote end of flashback>

So you see, with Feature Usage Reporting, they finally got the hard evidence they needed to safely make the switch to .NET 3.5, knowing it would not inconvenience their users. And that, my friends, is just the sort of thing we like to hear.

    Read the article

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my group's C# coding standards the other day and there were a couple of legacy items in there that caught my eye.  They had been passed down from committee to committee so many times that no one even thought to second-guess or test them for a long time.  It's yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be, for the sake of squeezing an extra ounce of performance out of our software. So the two standards in question were these, in paraphrase:

Prefer StringBuilder or string.Format() to string concatenation.
Prefer string.Equals() with case-insensitive option to string.ToUpper().Equals().

Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it's always worth repeating and trying these yourself.  So let's dig in. The first test was a pretty standard one.  When concatenating strings, what is the best choice: StringBuilder, string concatenation, or string.Format()?

Before we begin, I read in a number of iterations from the console and a length for each string to generate.  Then I generate that many random strings of the given length and an array to hold the results.  Why am I so keen to keep the results?  Because I want to be able to snapshot the memory and don't want garbage collection to collect the strings, hence the array to keep hold of them.  I also didn't want the random strings to be part of the allocation, so I pre-allocate them and the array up front, before the snapshot.  So in the code snippets below:

num – Number of iterations.
strings – Array of randomly generated strings.
results – Array to hold the results of the concatenation tests.
timer – A System.Diagnostics.Stopwatch() instance to time code execution.
start – Beginning memory size.
stop – Ending memory size.
after – Memory size after final GC.

So first, let's look at the concatenation loop:

// build num strings using concatenation.
for (int i = 0; i < num; i++)
{
    results[i] = "This is test #" + i + " with a result of " + strings[i];
}

Pretty standard, right?  Next for string.Format():

// build strings using string.Format()
for (int i = 0; i < num; i++)
{
    results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]);
}

Finally, StringBuilder:

// build strings using StringBuilder
for (int i = 0; i < num; i++)
{
    var builder = new StringBuilder();
    builder.Append("This is test #");
    builder.Append(i);
    builder.Append(" with a result of ");
    builder.Append(strings[i]);
    results[i] = builder.ToString();
}

So I take each of these loops and time them by using a block like this:

// get the total amount of memory used, true tells it to run GC first.
start = System.GC.GetTotalMemory(true);

// restart the timer
timer.Reset();
timer.Start();

// *** code to time and measure goes here. ***

// get the current amount of memory, stop the timer, then get memory after GC.
stop = System.GC.GetTotalMemory(false);
timer.Stop();
after = System.GC.GetTotalMemory(true);

So let's look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations:

Operator +      - Time: 547, Memory: 56104540/55595960 - 500000
string.Format() - Time: 749, Memory: 57295812/55595960 - 500000
StringBuilder   - Time: 608, Memory: 55312888/55595960 - 500000

Egad!
string.Format() brings up the rear and + triumphs - well, at least in terms of speed.  The concat burns more memory than StringBuilder but less than string.Format().  This shows two main things:

StringBuilder is not always the panacea many think it is.
The difference between any of the three is minuscule!

The second point is extremely important!  You will often hear people grasp at results like these and say, "look, operator + is 10% faster than StringBuilder so always use StringBuilder."  Statements like this are a disservice and often misleading.  For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so:

for (int i = 0; i < num; i++)
{
    // pre-declare StringBuilder to have 100 char buffer.
    var builder = new StringBuilder(100);
    builder.Append("This is test #");
    builder.Append(i);
    builder.Append(" with a result of ");
    builder.Append(strings[i]);
    results[i] = builder.ToString();
}

Now let's look at the times:

Operator +      - Time: 551, Memory: 56104412/55595960 - 500000
string.Format() - Time: 753, Memory: 57296484/55595960 - 500000
StringBuilder   - Time: 525, Memory: 59779156/55595960 - 500000

Whoa!  All of a sudden StringBuilder is back on top again!  But notice, it takes more memory now.  This makes perfect sense if you examine the IL behind the scenes: whenever you concatenate strings with + in your code, the compiler emits a call to string.Concat(), which examines the lengths of the arguments and allocates a result string of exactly the right size for you.

But even IF we know the approximate size of our StringBuilder, look how much less readable it is!  That's why I feel you should always take into account both readability and performance.  After all, consider that all these timings are over 500,000 iterations.  That's at most a 0.0004 ms difference per call, which is negligible.  The key is to pick the best tool for the job.  What do I mean?  Consider these awesome words of wisdom:

Concatenate (+) is best at concatenating.
StringBuilder is best when you need to build.
Format is best at formatting.

Totally Earth-shattering, right!  But if you consider it carefully, there's actually a lot of beauty in its simplicity.  Remember, there is no magic bullet.  If one of these always beat the others, we'd only have one choice and not three.

The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length.  Use it in those times when you are looping until you hit a stop condition and building up a result, and it won't steer you wrong. string.Format seems to be the loser from the stats, but consider which of these is more readable.  (Yes, ignore the fact that you could do this with ToString() on a DateTime.)

// build a date via concatenation
var date1 = (month < 10 ? "0" : string.Empty) + month + '/'
          + (day < 10 ?
"0" : string.Empty) + day + '/' + year;

// build a date via StringBuilder
var builder = new StringBuilder(10);
if (month < 10) builder.Append('0');
builder.Append(month);
builder.Append('/');
if (day < 10) builder.Append('0');
builder.Append(day);
builder.Append('/');
builder.Append(year);
var date2 = builder.ToString();

// build a date via string.Format
var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year);

So the strength of string.Format is that it makes constructing a formatted string easy to read.  Yes, it's slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does.

So my lesson is: don't look for the silver bullet!  Choose the best tool.  Micro-optimization almost always bites you in the end, because you're sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time.

I love the rules of optimization.  They've been stated before in many forms, but here's how I always remember them:

For Beginners: Do not optimize.
For Experts: Do not optimize yet.

It's so true.  Most of the time on today's modern hardware, a micro-second optimization at the expense of readability will net you nothing, because it won't be your bottleneck.  Code for readability and choose the best tool for the job, which will usually be the most readable and maintainable choice as well.  Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.
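If you want to reproduce these numbers yourself, a minimal self-contained harness along the lines described above might look like this. It is a sketch following the article's stated methodology, not the author's original benchmark code, and it simplifies the random-string generation:

using System;
using System.Diagnostics;

class ConcatBenchmark
{
    static void Main()
    {
        const int num = 500000;
        var strings = new string[num];
        var results = new string[num];
        var rand = new Random(42);

        // pre-allocate the random inputs so they don't count against the test
        // (simplified stand-in for the article's random strings of a given length)
        for (int i = 0; i < num; i++)
            strings[i] = rand.Next().ToString();

        var timer = new Stopwatch();
        long start = GC.GetTotalMemory(true);   // force GC, then snapshot
        timer.Start();

        // *** swap this loop body for the string.Format() or StringBuilder
        // *** versions shown earlier to compare the three approaches
        for (int i = 0; i < num; i++)
            results[i] = "This is test #" + i + " with a result of " + strings[i];

        long stop = GC.GetTotalMemory(false);   // snapshot before final GC
        timer.Stop();
        long after = GC.GetTotalMemory(true);   // snapshot after final GC

        Console.WriteLine("Operator + - Time: {0}, Memory: {1}/{2} - {3}",
                          timer.ElapsedMilliseconds, stop - start, after - start, num);
        GC.KeepAlive(results); // keep results alive so the GC can't collect them early
    }
}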

    Read the article

  • Tomcat 6 HTTPS connector: keep alive timeout not being respected

    - by sehugg
    I'm using Tomcat 6.0.24 on Ubuntu (JDK 1.6) with an app that does Comet-style requests on an HTTPS connector (directly against Tomcat, not using APR). I'd like to set the keep-alive to 5 minutes so I don't have to refresh my long-polling connections. Here is my config:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="1000" keepAliveTimeout="330000"
           scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" />

Unfortunately it seems that the server closes the connection after 65 seconds. The pcap from a sample session goes something like this:

T=0    Client sends SYN to server, handshake etc.
T=65   Server sends FIN to client
T=307  Client sends FIN to server

(I'm guessing the 5-minute timeout on the client is due to the HTTP lib not detecting the socket close on the server end, but in any case the server shouldn't be closing the connection that early.)

(edit: this works as expected when using the standard HTTP connector)

    Read the article

  • Using Diskpart in a PowerShell script won't allow script to reuse drive letter

    - by Kyle
    I built a script that mounts (attaches) a VHD using Diskpart, cleans out some system files and then unmounts (detaches) it. It uses a foreach loop and is supposed to clean multiple VHDs using the same drive letter. However, after the first VHD it fails. I also noticed that when I try to manually attach a VHD with diskpart, diskpart succeeds and Disk Manager shows the disk with the correct drive letter, but within the same PoSH instance I cannot connect (Set-Location) to that drive. If I do a manual diskpart when I first open PoSH, I can attach and detach all I want and I get the drive letter every time. Is there something I need to do to reset diskpart in the script? Here's a snippet of the script I'm using:

function Mount-VHD {
    [CmdletBinding()]
    param (
        [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
        [string]$Path,
        [Parameter(Position=1,Mandatory=$false,ValueFromPipeline=$false)]
        [string]$DL,
        [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
        [switch]$Rescan
    )
    begin {
        function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }

        ## Validate Operating System Version ##
        if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {
            throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."
        }
    }
    process {
        ## Diskpart Script Content ## Here-String statement purposefully not indented ##
@"
$(if ($Rescan) {'Rescan'})
Select VDisk File="$Path"`nAttach VDisk
Exit
"@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force

        InvokeDiskpart
        Start-Sleep -Seconds 3

@"
Select VDisk File="$Path"`nSelect partition 1`nAssign Letter="$DL"
Exit
"@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force

        InvokeDiskpart
    }
    end {
        Remove-Item -Path $DiskpartScript -Force ; ""
        Write-Host "The VHD ""$Path"" has been successfully mounted." ; ""
    }
}

function Dismount-VHD {
    [CmdletBinding()]
    param (
        [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
        [string]$Path,
        [switch]$Remove,
        [switch]$NoConfirm,
        [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
        [switch]$Rescan
    )
    begin {
        function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }

        function RemoveVHD {
            switch ($NoConfirm) {
                $false {
                    ## Prompt for confirmation to delete the VHD file ##
                    "" ; Write-Warning "Are you sure you want to delete the file ""$Path""?"
                    $Prompt = Read-Host "Type ""YES"" to continue or anything else to break"
                    if ($Prompt -ceq 'YES') {
                        Remove-Item -Path $Path -Force
                        "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                    } else {
                        "" ; Write-Host "Script terminated without deleting the VHD file." ; ""
                    }
                }
                $true {
                    ## Confirmation prompt suppressed ##
                    Remove-Item -Path $Path -Force
                    "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                }
            }
        }

        ## Validate Operating System Version ##
        if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {
            throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."
        }
    }
    process {
        ## DiskPart Script Content ## Here-String statement purposefully not indented ##
@"
$(if ($Rescan) {'Rescan'})
Select VDisk File="$Path"`nDetach VDisk
Exit
"@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force

        InvokeDiskpart
        Start-Sleep -Seconds 10
    }
    end {
        if ($Remove) {RemoveVHD}
        Remove-Item -Path $DiskpartScript -Force ; ""
    }
}

    Read the article
