Search Results


  • Workaround for PHP SOAP request failure when the WSDL defines the service port binding as https on port 80?

    - by scooterhanson
    I am consuming a SOAP web service using PHP 5's SOAP extension. The service's WSDL was generated using Axis java2wsdl, and whatever options were used during generation result in the port binding URL being listed as https://xxx.xxx.xxx.xxx:80. If I download the WSDL to my server, remove the port 80 specification from the port binding location value, and reference the local file in my SoapClient call, it works fine. However, if I try to reference it remotely (or download it and reference it locally as-is), the call fails with a SOAP fault. I have no input on the service side, so I can't make them change their WSDL-generation process. So, unless there's a way to make the SoapClient ignorant of the port, I'm stuck with using a locally modified copy of someone else's WSDL (which I'd rather not do). Any thoughts on how to make my SoapClient ignore the port 80?
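    One possible workaround (a sketch; the URLs below are placeholders for the real service address, someOperation is a stand-in for a real operation, and it assumes the service actually listens on the standard https port): PHP's SoapClient lets you override the endpoint from the WSDL with the 'location' option, or with the __setLocation() method, so the broken port binding in the WSDL is never used for the actual call.

      <?php
      // Build the client from the remote WSDL, but force calls to go to a
      // corrected endpoint instead of the https://...:80 URL in the WSDL.
      $client = new SoapClient('https://example.com/service?wsdl', array(
          'location'   => 'https://example.com/service', // corrected endpoint (assumption)
          'exceptions' => true,
      ));

      // The endpoint can also be switched after construction:
      $client->__setLocation('https://example.com/service');

      $result = $client->someOperation(array('param' => 'value'));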

    Read the article

  • Why is my WCF Rest Service on IIS7 Authenticating TWICE!?!?

    - by TheAggie
    Ok, if someone could shed some light on this for me, I would greatly appreciate it. So here we go. I had a REST service running fine the other day, but after I accidentally overwrote the web.config all hell broke loose. I've spent the past day and a half trying to sort things out, but I can't seem to figure out what is missing or misplaced. I've designed this service around WCF Rest Contrib (http://wcfrestcontrib.codeplex.com)'s authentication process. Now, I can get this working fine on my localhost with the current web.config (minus the endpoint entry), but once I upload it to discountasp and select "Basic Authentication" in the IIS7 Manager, it appears that I'm getting authenticated twice! Once using my discountasp.net user/pass, and then the next time using the application user/pass. Unfortunately I only provide one set of credentials, and I don't want to hard-code my discountasp account info into the app. Like I said before, this worked fine a few days ago. Anyway, here is my web.config as it is now:

      <?xml version="1.0"?>
      <configuration>
        <connectionStrings>
          <add name="SQL2008_ConnectionString" connectionString="Data Source=sql2k8xx.discountasp.net;Initial Catalog=SQL2008_xx;Persist Security Info=True;User ID=SQL2008_xx_user;Password=myPass" providerName="System.Data.SqlClient" />
        </connectionStrings>
        <system.web>
          <httpRuntime maxRequestLength="204800" executionTimeout="3600"/>
          <compilation debug="true">
            <assemblies>
              <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
              <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </assemblies>
          </compilation>
          <httpModules>
            <add name="ServiceAnonymityModule" type="WcfRestContrib.Web.ServiceAnonymityModule, WcfRestContrib"/>
          </httpModules>
        </system.web>
        <system.codedom>
          <compilers>
            <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <providerOption name="CompilerVersion" value="v3.5"/>
              <providerOption name="WarnAsError" value="false"/>
            </compiler>
          </compilers>
        </system.codedom>
        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules>
            <remove name="ServiceAnonymityModule"/>
            <add name="ServiceAnonymityModule" type="WcfRestContrib.Web.ServiceAnonymityModule, WcfRestContrib"/>
          </modules>
          <handlers>
            <remove name="WebServiceHandlerFactory-Integrated"/>
          </handlers>
        </system.webServer>
        <system.diagnostics>
          <trace autoflush="true" />
        </system.diagnostics>
        <system.serviceModel>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="false">
            <baseAddressPrefixFilters>
              <add prefix="http://www.mydomain.com/myServiceBaseAddress"/>
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
          <extensions>
            <behaviorExtensions>
              <add name="webAuthentication" type="WcfRestContrib.ServiceModel.Configuration.WebAuthentication.ConfigurationBehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
              <add name="errorHandler" type="WcfRestContrib.ServiceModel.Configuration.ErrorHandler.BehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
              <add name="webFormatter" type="WcfRestContrib.ServiceModel.Configuration.WebDispatchFormatter.ConfigurationBehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
              <add name="webErrorHandler" type="WcfRestContrib.ServiceModel.Configuration.WebErrorHandler.ConfigurationBehaviorElement, WcfRestContrib, Version=1.0.5.0, Culture=neutral, PublicKeyToken=89183999a8dc93b5"/>
            </behaviorExtensions>
          </extensions>
          <bindings>
            <customBinding>
              <binding name="HttpStreamedRest">
                <httpTransport maxReceivedMessageSize="209715200" manualAddressing="true" />
              </binding>
              <binding name="HttpsStreamedRest">
                <httpsTransport maxReceivedMessageSize="209715200" manualAddressing="true" />
              </binding>
            </customBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="Rest">
                <webAuthentication requireSecureTransport="false" authenticationHandlerType="WcfRestContrib.ServiceModel.Dispatcher.WebBasicAuthenticationHandler, WcfRestContrib" usernamePasswordValidatorType="MyLibrary.Runtime.SecurityValidator, MyLibrary" source="MyRESTServiceRealm"/>
                <webFormatter>
                  <formatters defaultMimeType="application/xml">
                    <formatter mimeTypes="application/xml,text/xml" type="WcfRestContrib.ServiceModel.Dispatcher.Formatters.PoxDataContract, WcfRestContrib"/>
                    <formatter mimeTypes="application/json" type="WcfRestContrib.ServiceModel.Dispatcher.Formatters.DataContractJson, WcfRestContrib"/>
                    <formatter mimeTypes="application/x-www-form-urlencoded" type="WcfRestContrib.ServiceModel.Dispatcher.Formatters.FormUrlEncoded, WcfRestContrib"/>
                  </formatters>
                </webFormatter>
                <errorHandler errorHandlerType="WcfRestContrib.ServiceModel.Web.WebErrorHandler, WcfRestContrib"/>
                <webErrorHandler returnRawException="true" logHandlerType="MyLibrary.Runtime.LogHandler, MyLibrary" unhandledErrorMessage="An error has occured processing your request. Please contact technical support for further assistance."/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>
      </configuration>

    So, whenever I upload this and change the IIS setting to Basic Authentication, it looks like it is trying to use the default handler for authentication: if I try to enter my web app user/pass, I get an error screen with the following detailed information about the module/handler:

      Module: IIS Web Core
      Notification: AuthenticateRequest
      Handler: svc-ISAPI-2.0
      Error Code: 0x80070005
      Requested URL: http://www.mydomain.com:80/MyService.../MyService.svc
      Physical Path: E:\web\xxxxxx\htdocs\MyServiceBaseAddress\MyService.svc
      Logon Method: Not yet determined
      Logon User: Not yet determined

    Now for the fun stuff... I tried providing my discountasp.net account username/password for kicks, and sure enough it responded properly for any [OperationContract] which doesn't have [OperationAuthentication] defined (which is only one or two of the operations I have). I thought this was strange, so I looked at Fiddler and saw something interesting. Whenever I request an operation with [OperationAuthentication] defined and provide my discountasp.net user/pass, I get two different "WWW-Authenticate" headers back in Fiddler:

      WWW-Authenticate: Basic realm="MyRESTServiceRealm"
      WWW-Authenticate: Basic realm="www.mydomain.com"

    On the other hand, if I try to access the same operations with only my application's user/pass, I only get the site's header:

      WWW-Authenticate: Basic realm="www.mydomain.com"

    My hypothesis is that for some reason I'm having to pass through the default "Basic Authentication" layer set by IIS before I can get to the application's custom "Basic Authorization" layer.
    After verifying this by creating an identical user/pass for my service that I use for my discountasp.net account, I was able to successfully pass both layers of authentication without any issues... so I think I can conclude that this is indeed the issue. Now how do I disable the default one? Do I need to do this in the IIS Manager, or in the web.config? I have absolutely no idea how this is possible or what I need to do to resolve the issue, but I know that something is seriously out of whack. Any suggestions would be greatly appreciated! Thanks.
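    For what it's worth, here is a minimal sketch of the web.config route (an assumption, not a verified fix: it requires the host to unlock/delegate the system.webServer/security sections, which shared hosts do not always allow): turn off IIS's own Basic Authentication for the application and let requests through anonymously, so that only the WcfRestContrib handler issues the Basic challenge.

      <system.webServer>
        <security>
          <authentication>
            <!-- Let requests reach the service anonymously; the WcfRestContrib
                 webAuthentication behavior performs its own Basic challenge
                 against MyRESTServiceRealm. -->
            <anonymousAuthentication enabled="true" />
            <basicAuthentication enabled="false" />
          </authentication>
        </security>
      </system.webServer>

    If the section is locked at the server level, this produces a 500.19 configuration error, and the setting has to be changed through the host's IIS control panel instead.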

    Read the article

  • RRAS won’t start with 8007042a or event ID 7024, aka the “routing remote access unable to load Iprtrmgr.dll”

    - by KCotreau
    History: This error, which has mostly gone unsolved, dates back to Windows 2000.
    Platforms affected: Windows Server 2008 R2, Server 2008, Server 2003 R2, Server 2003, and Server 2000 (both 32-bit and 64-bit installs are affected).
    Error messages:
    Event ID 7024: The Routing and Remote Access service terminated with service-specific error 2 (0x2).
    Event ID 7024: The Routing and Remote Access service terminated with service-specific error 31 (0x1F).
    Event ID 7024: The Routing and Remote Access service terminated with service-specific error 20205 (0x4EED).
    Event ID 7024: The Routing and Remote Access service terminated with service-specific error 193 (0xC1).
    Event ID 20103: Unable to load C:\WINDOWS\System32\iprtrmgr.dll (32-bit installs).
    Event ID 20103: Unable to load C:\WINDOWS\SysWOW64\iprtrmgr.dll (64-bit installs).
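    A first diagnostic step (a sketch; the registry path is the standard RRAS router-manager registration, and the expected values described are assumptions about a healthy install): check whether the IP router manager is still registered and pointing at the right DLL.

      C:\> reg query "HKLM\SYSTEM\CurrentControlSet\Services\RemoteAccess\RouterManagers\Ip"

    On a working server this key should contain, among other values, a DLLPath value expanding to %SystemRoot%\System32\iprtrmgr.dll. If the key or value is missing, or points somewhere else, RRAS cannot load the router manager and fails with the 20103/7024 pair above.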

    Read the article

  • "Error in the Site Data Web Service." when performing crawl

    - by Janis Veinbergs
    Installed SharePoint Services v3 (SP2, October 2009 cumulative updates, language pack), attached to a content database I had previously (everything works). Installed Search Server 2008 Express (with language pack) on top of WSS, and crawl does not work. However, it works for a newly created web application + database. I was playing around with accounts and permissions to try to get it working. Currently I have a WSS_Crawler account with these permissions: Office Search Server runs with the WSS_Crawler account; the config database has read permissions for WSS_Crawler; the content database has read permissions for WSS_Crawler; WSS_Crawler is owner of the search database. Added WSS_Crawler to the SQL Server browser user group and administrator. Yes, I've given more permissions than needed, but it doesn't even work with that, and I don't know if it's a permission problem or what. The crawl log says there is "Error in the Site Data Web Service.", nothing more. There were known issues with a similar error, "Error in the Site Data Web Service. (Value does not fall within the expected range.)", but this is not the case, as that's an old issue and I hope it has been included in SP2... Logs are from oldest to newest (descending order). They don't appear to be very helpful.

    Crawl log:
    http://serveris | Crawled | Local Office SharePoint Server sites | 3/15/2010 9:39 AM
    sts3://serveris | Crawled | Local Office SharePoint Server sites | 3/15/2010 9:39 AM
    sts3://serveris/contentdbid={55180cfa-9d2d-46e4... | Crawled | Local Office SharePoint Server sites | 3/15/2010 9:39 AM
    http://serveris/test | Error in the Site Data Web Service. | Local Office SharePoint Server sites | 3/15/2010 9:39 AM
    http://serveris | Error in the Site Data Web Service. | Local Office SharePoint Server sites | 3/15/2010 9:39 AM

    EventLog:
    No errors in the EventLog, just some Information events that Office Server Search provides:
    The search service started.
    Successfully stored the application configuration registry snapshot in the database. Context: Application 'SharedServices
    Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: Portal_Content. A master merge was started due to an external request.
    Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog Portal_Content.
    Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: AnchorProject. A master merge was started due to an external request.
    Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog AnchorProject.

    ULS log (just information, no exceptions or unexpected errors):
    03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Insert crawl 771 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591
    03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Request Start Crawl 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875
    03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Advise status change 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853
    03/15/2010 09:03:28.28 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 8wn6 Information A full crawl was started on 'Local Office SharePoint Server sites' by BALTICOVO\janis.veinbergs.
03/15/2010 09:03:28.43 mssdmn.exe (0x1750) 0x10F8 ULS Logging Unified Logging Service 8wsv High ULS Init Completed (mssdmn.exe, Microsoft.Office.Server.Native.dll) 03/15/2010 09:03:30.48 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0v Medium Create CCache 03/15/2010 09:03:30.56 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0z Medium Create CUserCatalogCache 03/15/2010 09:03:32.06 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 90ge Medium SQL: dbo.proc_MSS_PropagationGetQueryServers 03/15/2010 09:03:32.09 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 7phq High GetProtocolConfigHelper failed in GetNotesInterface(). 03/15/2010 09:03:34.26 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:35.92 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.32 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.23 mssdmn.exe (0x1750) 0x1850 Search Server Common MS Search Indexing 8z14 Medium Test TRACE (NULL):(null), (NULL)(null), (CrLf): , end 03/15/2010 09:03:39.04 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x0B24 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. error 2147755542, strWebUrl http://serveris 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=test/siteid={390611b2-55f3-4a99-8600-778727177a28}/weburl=/webid={fb0e4bff-65d5-4ded-98d5-fd099456962b}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x1260 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. 
error 2147755542, strWebUrl http://serveris/test 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=/siteid={505443fa-ef12-4f1e-a04b-d5450c939b78}/weburl=/webid={c5a4f8aa-9561-4527-9e1a-b3c23200f11c}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 24, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Remove crawl 771 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Insert crawl 772 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project Portal_Content - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:44.82 mssearch.exe (0x1B2C) 0x1DD0 Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:44.95 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.51 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.39 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 
0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.01 mssearch.exe (0x1B2C) 0x1C6C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.87 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.29 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.53 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.67 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.82 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.84 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.90 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.00 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Remove crawl 772 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Insert crawl 773 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common 
GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Removed start crawl request from Queue 0, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2942 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:56.71 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:56.78 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.40 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Remove crawl 773 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 What could be wrong here - any clues?
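    One frequent culprit when a crawl of local sites fails while newly created web applications work is the Windows loopback check (a sketch, not a confirmed fix for this exact trace; the mechanism is documented in Microsoft KB 896861): when the indexer accesses the server through its own host name or a host header, authentication is rejected and the gatherer surfaces generic Site Data Web Service errors.

      REM Disable the loopback check entirely (lab/dev approach), then reboot:
      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    The more targeted alternative is to list the crawled host names in the BackConnectionHostNames multi-string value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0.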

    Read the article

  • Step by Step: How to use Web Services in ASP.NET AJAX

    - by Yousef_Jadallah
    In my article Preventing Duplicate Data With ASP.NET AJAX I used ASP.NET AJAX with web service technology, so I am adding this topic as an introduction to accessing Web services from client script in AJAX-enabled ASP.NET Web pages. I also wrote it to answer the common questions most developers face while working with ASP.NET AJAX Web services, especially on the official Microsoft ASP.NET forum, http://forums.asp.net/. ASP.NET enables you to create Web services that can be accessed from client script in Web pages, using AJAX technology to make the Web service calls. Data is exchanged asynchronously between client and server, typically in JSON format.

    Let's go ahead with the steps:

    1-Create a new project; if you are using VS 2005 you have to create an ASP.NET AJAX-Enabled Web site.

    2-Add a new item and choose the Web Service file type.

    3-To make your Web service accessible from script, it must be an .asmx Web service whose Web service class is qualified with the ScriptServiceAttribute attribute, and every method that will be called from client script must be qualified with the WebMethodAttribute attribute. Alternatively, you can use your Web page (the .cs or .vb file) to add static methods accessible from client script; just add the WebMethod attribute and set the EnablePageMethods attribute of the ScriptManager control to true (see the sketch after this step).

    The other condition is to register the ScriptHandlerFactory HTTP handler, which processes calls made from script to .asmx Web services:

      <system.web>
        <httpHandlers>
          <remove verb="*" path="*.asmx"/>
          <add verb="*" path="*.asmx" type="System.Web.Script.Services.ScriptHandlerFactory" validate="false"/>
        </httpHandlers>
      </system.web>

    This is already added automatically to the Web.config file of any ASP.NET AJAX-Enabled Web site or project, so you don't need to add it.
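    As an aside, the page-method variant mentioned in step 3 looks roughly like this (a sketch; GetServerTime is my own illustrative name, not part of this article):

      // In the page's code-behind: a static method exposed to client script
      // via [WebMethod], with EnablePageMethods="true" on the ScriptManager.
      [System.Web.Services.WebMethod]
      public static string GetServerTime()
      {
          return DateTime.Now.ToString();
      }

      <asp:ScriptManager ID="sm" runat="server" EnablePageMethods="true" />
      <script type="text/javascript">
          // The PageMethods proxy is generated automatically for the page.
          PageMethods.GetServerTime(function (result) { alert(result); });
      </script>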
    4-Remove the default HelloWorld method, then add your own method to the .asmx file, let's say OurServerOutput. Your Web service will then look like this:

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.Services;

      [WebService(Namespace = "http://tempuri.org/")]
      [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
      [System.Web.Script.Services.ScriptService]
      public class WebService : System.Web.Services.WebService
      {
          [WebMethod]
          public string OurServerOutput()
          {
              return "The Server Date and Time is : " + DateTime.Now.ToString();
          }
      }

    5-Add a ScriptManager control to your .aspx file, then reference the Web service by adding an asp:ServiceReference child element to the ScriptManager control and setting its Path attribute to point to the Web service. This generates a JavaScript proxy class for calling the specified Web service from client script.

      <asp:ScriptManager runat="server" ID="scriptManager">
        <Services>
          <asp:ServiceReference Path="WebService.asmx" />
        </Services>
      </asp:ScriptManager>

    Basically, to enable your application to call Web services (.asmx files) from client script, the server asynchronous communication layer automatically generates JavaScript proxy classes. A proxy class is generated for each Web service for which an <asp:ServiceReference> element is included under the <asp:ScriptManager> control in the page.

    6-Create a new button to call the JavaScript function, and a label to display the returned value.
      <input id="btnCallDateTime" type="button" value="Call Web Service" onclick="CallDateTime()"/>
      <asp:Label ID="lblOutput" runat="server" Text="Label"></asp:Label>

    7-Define the JavaScript code to call the Web service:

      <script language="javascript" type="text/javascript">
          function CallDateTime() {
              WebService.OurServerOutput(OnSucceeded);
          }

          function OnSucceeded(result) {
              var lblOutput = document.getElementById("lblOutput");
              lblOutput.innerHTML = result;
          }
      </script>

    The CallDateTime function calls the Web service method OurServerOutput. The OnSucceeded function is used as the callback that processes the Web service return value; its result parameter contains the server date-and-time value returned from the Web service. Finally, when you complete these steps and run your application, you can press the button and retrieve the server date and time without a postback.

    Conclusion: In this topic I described how to access Web services from client script in AJAX-enabled ASP.NET Web pages, with full .NET Framework JSON serialization and direct integration with the familiar .asmx Web services, using a simple example. You can also return values from a database by creating a WebMethod in your Web service file and following the same steps. Next time I will show a more complex example that returns complex types such as objects.

    Hope this helps.
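    One thing worth adding to this example (a sketch; OnFailed is my own name, not from the article): the generated proxy also accepts a failure callback as its next parameter, so service errors don't fail silently.

      function CallDateTime() {
          WebService.OurServerOutput(OnSucceeded, OnFailed);
      }

      function OnFailed(error) {
          // error is a Sys.Net.WebServiceError describing the server-side failure
          alert(error.get_message());
      }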

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site. A large % of traffic to sites now comes directly from search engines, and improving your site's search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so help search engines drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Measuring the SEO of your website with the Microsoft SEO Toolkit

    A few months ago I blogged about the free SEO Toolkit that we’ve shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post.

    Search Relevancy and URL Splitting

    Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are:
    How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.
    The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.
    One of the things you want to be very careful to avoid when building public-facing sites is allowing different URLs to retrieve the same content within your site. Doing so will hurt in both of the situations above. In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than it would otherwise be if it was just one URL). Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it.

    4 Really Common SEO Problems Your Sites Might Have

    Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens, external sites linking to yours will end up splitting their page links across multiple URLs – and as a result cause you to have a lower page ranking with search engines than you deserve.

    SEO Problem #1: Default Document

    IIS (and other web servers) supports the concept of a “default document”. This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient – but it means that by default this content is available via two different publicly exposed URLs (which is bad). For example:
    http://scottgu.com/
    http://scottgu.com/default.aspx

    SEO Problem #2: Different URL Casings

    Web developers often don’t realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:
    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    SEO Problem #3: Trailing Slashes

    Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:
    http://scottgu.com
    http://scottgu.com/

    SEO Problem #4: Canonical Host Names

    Sometimes a site responds both to URLs with a leading “www” hostname prefix and to the bare hostname itself. This causes search engines to treat the URLs as different and split search ranking:
    http://scottgu.com/albums.aspx/
    http://www.scottgu.com/albums.aspx/

    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

    If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications. You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine. Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool. Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site. Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension). We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

    Scenario 1: Handling Default Document Scenarios

    One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example:
    http://scottgu.com/
    http://scottgu.com/default.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will set up the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. Let’s look at how we can create such a rule. We’ll begin by clicking the “Add Rule” link in the screenshot above.
    This will cause the below dialog to display. We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule. This will display an empty pane like below. Don’t worry – setting up the above rule is easy. The following 4 steps explain how to do so:

    Step 1: Name the Rule

    Our first step will be to name the rule we are creating. Naming it with a descriptive name will make it easier to find and understand later. Let’s name this rule our “Default Document URL Rewrite” rule.

    Step 2: Setup the Regular Expression that Matches this Rule

    Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern. Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site. Below we are going to specify the following regular expression as our pattern rule:

    (.*?)/?Default\.aspx$

    This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx. Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx). Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL. One nice feature built into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring. Above I've added a “products/default.aspx” URL and clicked the “Test” button. This will give me immediate feedback on whether the rule will execute for it.

    Step 3: Setup a Permanent Redirect Action

    We’ll then set up an action to occur when our regular expression pattern matches the incoming URL. In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action. The “Redirect Type” will be a HTTP 301 Permanent redirect – which means search engines will follow it. I’ve also set the “Redirect URL” property to be:

    {R:1}/

    This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it. For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/. The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products". We are going to use this {R:1}/ value to be the URL we redirect users to.
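    If you want to convince yourself what {R:0} and {R:1} will contain before saving the rule, a quick probe with .NET's Regex class mirrors the behavior (a sketch and my own test harness, not part of the URL Rewrite tool; the module's regex engine is compatible for patterns like this one):

      using System;
      using System.Text.RegularExpressions;

      class RegexProbe
      {
          static void Main()
          {
              // Same pattern and "ignore case" setting as the rule above.
              var pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);
              Match m = pattern.Match("products/Default.aspx");
              Console.WriteLine(m.Value);           // {R:0}: "products/Default.aspx"
              Console.WriteLine(m.Groups[1].Value); // {R:1}: "products"
          }
      }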
    Step 4: Apply and Save the Rule

    Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a <system.webServer/rewrite> configuration section):

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Default Document" stopProcessing="true">
                <match url="(.*?)/?Default\.aspx$" />
                <action type="Redirect" url="{R:1}/" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely. This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

    Step 5: Try the Rule Out

    Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:
    http://scottgu.com/
    http://scottgu.com/default.aspx

    Notice that the second URL automatically redirects to the first one. Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

    Scenario 2: Different URL Casing

    Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:
    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one. Like before, we will set up the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again. Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template. When we click the “ok” button we’ll see the following dialog, which asks us if we want to create a rule that enforces the use of lowercase letters in URLs. When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically sends users to a lower-case version of the URL. We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site. Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic. URLs to the WebResource.axd handler will only come from server controls emitted from my pages – and will never be linked to from external sites. While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.
    The good news is that adding a condition that prevents my URL Rewriting rule from running for certain URLs is easy. We simply need to expand the “Conditions” section of the form above. We can then click the “Add” button to add a condition clause. This will bring up the “Add Condition” dialog. Above I’ve entered {URL} as the condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”. This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

    Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters, you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc). Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons. So setting up a condition clause makes sense to add.

    When I click the “ok” button above and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Default Document" stopProcessing="true">
                <match url="(.*?)/?Default\.aspx$" />
                <action type="Redirect" url="{R:1}/" />
              </rule>
              <rule name="Lower Case URLs" stopProcessing="true">
                <match url="[A-Z]" ignoreCase="false" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                  <add input="{URL}" pattern="WebResource.axd" negate="true" />
                </conditions>
                <action type="Redirect" url="{ToLower:{URL}}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Try the Rule Out

    Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:
    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.

    Scenario 3: Trailing Slashes

    Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:
    http://scottgu.com
    http://scottgu.com/

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does. Like before, we will set up the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again. The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.
    When we select it and click the “ok” button we’ll see the following dialog, which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present. Like within our previous lower-casing rewrite rule, we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule. This will avoid an unnecessary redirect from happening for those URLs. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and the URL does not map to either a directory or a file. This will save the following additional rule to our web.config file:

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Default Document" stopProcessing="true">
                <match url="(.*?)/?Default\.aspx$" />
                <action type="Redirect" url="{R:1}/" />
              </rule>
              <rule name="Lower Case URLs" stopProcessing="true">
                <match url="[A-Z]" ignoreCase="false" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                  <add input="{URL}" pattern="WebResource.axd" negate="true" />
                </conditions>
                <action type="Redirect" url="{ToLower:{URL}}" />
              </rule>
              <rule name="Trailing Slash" stopProcessing="true">
                <match url="(.*[^/])$" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{URL}" pattern="WebResource.axd" negate="true" />
                </conditions>
                <action type="Redirect" url="{R:1}/" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Try the Rule Out

    Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:
    http://scottgu.com
    http://scottgu.com/

    Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    Scenario 4: Canonical Host Names

    The final SEO problem I discussed earlier covers scenarios where a site responds both to URLs with a leading “www” hostname prefix and to the bare hostname itself. This causes search engines to treat the URLs as different and split search ranking:
    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL. Like before, we will set up the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again. The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.
    When we select it and click the “ok” button we’ll see the following dialog, which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL. Above I’m entering the primary URL address I want to expose to the web: scottgu.com. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix. This will save the following additional rule to our web.config file:

      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Canonical Hostname">
                <match url="(.*)" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                  <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
                </conditions>
                <action type="Redirect" url="http://scottgu.com/{R:1}" />
              </rule>
              <rule name="Default Document" stopProcessing="true">
                <match url="(.*?)/?Default\.aspx$" />
                <action type="Redirect" url="{R:1}/" />
              </rule>
              <rule name="Lower Case URLs" stopProcessing="true">
                <match url="[A-Z]" ignoreCase="false" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                  <add input="{URL}" pattern="WebResource.axd" negate="true" />
                </conditions>
                <action type="Redirect" url="{ToLower:{URL}}" />
              </rule>
              <rule name="Trailing Slash" stopProcessing="true">
                <match url="(.*[^/])$" />
                <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{URL}" pattern="WebResource.axd" negate="true" />
                </conditions>
                <action type="Redirect" url="{R:1}/" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Try the Rule Out

    Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:
    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL, which does not have the www prefix. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    4 Simple Rules for Improved SEO

    The above 4 rules are pretty easy to set up and should take less than 15 minutes to configure on existing sites you already have. The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site. Users who follow existing links will be automatically redirected to the new URLs you wish to publish. And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it.
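    A quick way to sanity-check any of these rules from the command line (a sketch; curl is assumed to be available, and scottgu.com stands in for your own site) is to look at the status line and Location header of the response:

      curl -I http://scottgu.com/default.aspx

      HTTP/1.1 301 Moved Permanently
      Location: http://scottgu.com/

    Each rule should answer with a 301 and a Location header pointing at the canonical URL; when several rules apply to one request (say, a www prefix plus upper-case characters), the client may be redirected more than once before settling on the final URL.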
Customizing your URL rewriting rules further is easy to do, either by editing the web.config file directly or by double-clicking the URL Rewrite icon within the IIS 7.x admin tool, which will list all the active rules for your web-site or application:

Clicking any of the rules above will open the rule editor back up and allow you to tweak/customize/save them further.

Summary

Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today.

The new URL routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs they publish. Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs published from sites you have already built today – without requiring you to change a lot of code.

The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO – as well. I’ll be covering these additional capabilities more in future blog posts.

Hope this helps,

Scott

    Read the article

  • How to add a local file to the Trusted Zone in IE8?

    - by Raghu Dodda
    I want to add a file on my local drive (C:\something.html) to the Trusted Zone in IE8 (my OS is Windows Server 2003). The Add Sites dialog box does not seem to accept entries for files on the local drive. I have tried: file://C:\something.html and file:\\localhost\c$\something.html. I have seen other solutions (on Super User and elsewhere), such as Mark of the Web, that allow a local file to be treated as if it were part of the Internet zone, but I want to add my file to the Trusted Zone. How can I do this? Thanks.

    Read the article

  • Project Server 2007 install issue - ProjectEventService won't start

    - by Brian Meinertz
    Trying to install PS2007 with SP1 on Server 2003. The install goes fine, but when running the SharePoint Configuration Wizard, it fails at stage 6 of 12 with the error:

    Failed to register SharePoint Services. An exception of type System.InvalidOperationException was thrown. Additional exception information: Cannot start service ProjectEventService on computer '.'.

    From the PSCDiagnostics log:

    Exception: System.InvalidOperationException: Cannot start service ProjectEventService on computer '.'. ---> System.ComponentModel.Win32Exception: The service did not respond to the start or control request in a timely fashion.

    The ProjectEventService (Microsoft Office Project Server Event) won't even start manually using the Network Service account. Starting the service with a domain account works, but subsequently running the Config Wizard causes the service to be removed and re-provisioned to run using the Network Service account, which again fails. Presumably Network Service needs elevated permissions, but even adding it to the local Admin group makes no difference. Anyone come across this sort of issue before?

    Read the article

  • SQLAuthority News – Guest Post – FAULT Contract in WCF with Learning Video

    - by pinaldave
    This is a guest post by one of my very good friends and .NET MVP, Dhananjay Kumar. The very first impression one gets on meeting him is his politeness. He is an extremely nice person, has superlative knowledge in .NET, and is truly helpful to all of us.

    Objective: This article gives a basic introduction to:
    - How to handle exceptions on the service side
    - How to use a fault contract on the service side
    - How to handle service exceptions on the client side

    A Few Points about Exceptions at the Service
    - Exceptions are technology-specific and should not be shared beyond the service boundary.
    - Since an exception is technology-specific, it cannot be propagated to other clients.
    - Exceptions come in many types: CLR exceptions, Win32 exceptions, runtime exceptions at the service, and C++ exceptions.
    - An exception is very much native to the technology in which the service is made, so it must be converted from technology-specific information into neutral information that can be communicated to the client: a SOAP fault.

    FaultException<T>
    - A service should throw FaultException<T> instead of the usual CLR exception.
    - FaultException<T> is a specialization of FaultException; any client that programs against FaultException can handle the exception thrown by FaultException<T>.
    - The type parameter T conveys the error detail. T can be any type that can be serialized – an Exception, a CLR type, or a data contract.

    You can read the complete article at http://dhananjaykumar.net/2010/05/23/fault-contract-in-wcf-with-learning-video/

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
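
    To make the points above concrete, here is a minimal C# sketch of a fault contract. Treat it as an illustration rather than code from the article – the service, the OrderFault type, and the messages are invented for the example:

    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical serializable error detail that is safe to share with clients.
    [DataContract]
    public class OrderFault
    {
        [DataMember]
        public string Reason { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        // Advertise the fault type so clients can program against it.
        [OperationContract]
        [FaultContract(typeof(OrderFault))]
        int GetOrderCount(string customerId);
    }

    public class OrderService : IOrderService
    {
        public int GetOrderCount(string customerId)
        {
            try
            {
                if (string.IsNullOrEmpty(customerId))
                    throw new ArgumentException("customerId is required");
                return 42; // stand-in for a real lookup
            }
            catch (Exception ex)
            {
                // Convert the technology-specific exception into a neutral SOAP fault.
                throw new FaultException<OrderFault>(
                    new OrderFault { Reason = ex.Message },
                    new FaultReason("GetOrderCount failed"));
            }
        }

        // On the client (assuming a generated proxy named client):
        // try { int count = client.GetOrderCount("ALFKI"); }
        // catch (FaultException<OrderFault> fault)
        // {
        //     Console.WriteLine(fault.Detail.Reason);
        // }
    }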

    Read the article

  • How do I add 'www' before a subdomain, like www.subdomain.domain.com?

    - by Snehal Masne
    I want to add 'www' in front of a subdomain, e.g. www.subdomain.domain.com. My blogs are hosted on Blogger and I am using GoDaddy for the custom domains. I have host (@) entries for the domain pointing where specified by Blogger. The following subdomains are configured by adding CNAME aliases as follows:

    subdomain -> ghs.google.com
    www -> ghs.google.com

    For the domain (including www.domain) I have one blog. For the subdomain, I am pointing it to a separate blog using the above entries, and 'subdomain.domain.com' works fine. I read articles on this issue and tried adding the following CNAME entry, but no luck:

    www.subdomain -> subdomain.domain.com

    How do I make 'www.subdomain.domain.com' work?

    Read the article

  • Wacom consumer tablet driver service may crash while opening Bamboo Preferences, often after resuming computer from sleep

    - by DragonLord
    One of the ExpressKeys on my Wacom Bamboo Capture graphics tablet is mapped to Bamboo Preferences, so that I can quickly access the tablet settings and view the battery level (I have the Wireless Accessory Kit installed). However, when I connect the tablet to the computer, in wired or wireless mode, and attempt to open Bamboo Preferences, the Wacom consumer tablet driver service may crash, most often when I try to do so after resuming the computer from sleep. There is usually no direct indication of the crash (although I once did get "Tablet Service for consumer driver stopped working and was closed"), only that the cursor shows the system as busy for a split second. When this happens, the pen no longer tracks on the screen when in proximity of the tablet (even though it is detected by the tablet itself); however, touch continues to function correctly. To recover from this condition, I need to restart the tablet driver services. I got tired of having to go through Task Manager to restart the services every time this happens, so I ended up writing the following command script, with a shortcut on the desktop for running it with elevated privileges:

    net stop TabletServicePen
    net start TabletServicePen
    net stop TouchServicePen
    net start TouchServicePen

    Is there something I can do to prevent these crashes from happening in the first place, or do I have to deal with this issue until the driver is updated? Windows 7 Home Premium 64-bit. Tablet drivers are up to date.

    Technical details: Action Center gives the following details about the crash in Reliability Monitor:

    Source: Tablet Service for consumer driver
    Summary: Stopped working
    Date: 10/15/2012 2:48 PM
    Status: Report sent
    Description: Faulting Application Path: C:\Program Files\Tablet\Pen\Pen_Tablet.exe

    Problem signature:
    Problem Event Name: APPCRASH
    Application Name: Pen_Tablet.exe
    Application Version: 5.2.5.5
    Application Timestamp: 4e694ecd
    Fault Module Name: Pen_Tablet.exe
    Fault Module Version: 5.2.5.5
    Fault Module Timestamp: 4e694ecd
    Exception Code: c0000005
    Exception Offset: 00000000002f6cde
    OS Version: 6.1.7601.2.1.0.768.3
    Locale ID: 1033
    Additional Information 1: 9d4f
    Additional Information 2: 9d4f1c8d2c16a5d47e28521ff719cfba
    Additional Information 3: 375e
    Additional Information 4: 375ebb9963823eb7e450696f2abb66cc
    Bucket ID: 45598085

    Exception code 0xC0000005 means STATUS_ACCESS_VIOLATION. The event log contains essentially the same information.

    Read the article

  • Dynamic meta description and keyword tags for your MasterPages

    - by Aamir Hasan
     Today we're going to look at a technique for dynamically inserting meta tags into your master pages. By taking control of the head tag and inserting your own HtmlMeta controls you can easily customise these tags.

    You might have noticed that when you create a new master page in Visual Studio your <head> tag gets decorated with a runat="server" attribute. ASP.NET doesn't add this kind of decoration to any other HTML tags (although you are free to add it if you want). So what makes the head tag special? By adding runat="server" you're actually converting the control into an HtmlHead control. That doesn't particularly matter for this tutorial, other than to note that given a reference to the head control you get all the extras that come with ASP.NET controls, such as access to its Controls collection.

    The HtmlMeta control lets us wrap up <meta> tags via ASP.NET code. To add a meta description we need to create an instance, set the Name property, set the Content property, and then add it to the head:

    ASP.NET (C#)

    protected void Page_Init(object sender, EventArgs e)
    {
      // Add meta description tag
      HtmlMeta metaDescription = new HtmlMeta();
      metaDescription.Name = "Description";
      metaDescription.Content = "Short, unique and keywords rich page description.";
      Page.Header.Controls.Add(metaDescription);

      // Add meta keywords tag
      HtmlMeta metaKeywords = new HtmlMeta();
      metaKeywords.Name = "Keywords";
      metaKeywords.Content = "selected,page,keywords";
      Page.Header.Controls.Add(metaKeywords);
    }

    ASP.NET (VB.NET)

    Protected Sub Page_Init(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Init
      ' Add meta description tag
      Dim metaDescription As HtmlMeta = New HtmlMeta()
      metaDescription.Name = "Description"
      metaDescription.Content = "Short, unique and keywords rich page description."
      Page.Header.Controls.Add(metaDescription)

      ' Add meta keywords tag
      Dim metaKeywords As HtmlMeta = New HtmlMeta()
      metaKeywords.Name = "Keywords"
      metaKeywords.Content = "selected,page,keywords"
      Page.Header.Controls.Add(metaKeywords)
    End Sub

    Read the article

  • httpd not starting with systemd on F17

    - by malfy
    Title says it all. This is a fresh Fedora 17 system running on a Xen hypervisor. No idea why it won't start.

    [root@box ~]# uname -a
    Linux box.localhost 3.5.4-2.fc17.i686.PAE #1 SMP Wed Sep 26 22:10:23 UTC 2012 i686 i686 i386 GNU/Linux
    [root@box ~]# cat /etc/redhat-release
    Fedora release 17 (Beefy Miracle)
    [root@box ~]# systemctl enable httpd.service
    ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
    [root@box ~]# systemctl start httpd.service
    Job failed. See system journal and 'systemctl status' for details.
    [root@box ~]# systemctl status httpd.service
    httpd.service - The Apache HTTP Server (prefork MPM)
       Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
       Active: failed (Result: exit-code) since Fri, 19 Oct 2012 22:43:37 -0500; 3s ago
      Process: 18225 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=226/NAMESPACE)
       CGroup: name=systemd:/system/httpd.service

    Oct 19 22:43:37 box.localhost systemd[18225]: Failed at step NAMESPACE spawning /usr/sbin/httpd: No such file or directory

    [root@box ~]# ls -al /usr/sbin/httpd
    -rwxr-xr-x 1 root root 343496 Apr 30 04:56 /usr/sbin/httpd

    Read the article

  • Is it a good idea to add robots "noindex" meta tags to deep, low-content pages, e.g. product model data?

    - by Cognize
    I'm considering adding robots "noindex, follow" meta tags to the very numerous product data pages that are linked from the product style pages in our online store. For example, each product style has a page with full text content on the product:

    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE

    Then many pages with technical data for each model code are linked from the product style page:

    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-1
    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-2
    http://www.shop.example/Product/Category/Style/SOME-STYLE-CODE-3

    It is these technical data pages that I intend to add the noindex directive to, as I imagine this might stop them from cannibalizing keyword authority for more important, content-rich pages on the site. Any advice appreciated.
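
    For reference, such a tag can be emitted from ASP.NET code-behind in the same way as the HtmlMeta example earlier on this page. A minimal C# sketch, assuming the data pages share a common base page (an assumption for illustration, not something stated in the question):

    using System;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;

    // Hypothetical base page for all product model-data pages.
    public class ProductDataBasePage : Page
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);

            // Ask search engines not to index this page, but still follow its links.
            // Requires a <head runat="server"> so Page.Header is available.
            HtmlMeta metaRobots = new HtmlMeta();
            metaRobots.Name = "robots";
            metaRobots.Content = "noindex, follow";
            Page.Header.Controls.Add(metaRobots);
        }
    }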

    Read the article

  • No endpoint listening at.........

    - by Michael Stephenson
    I was having some very frustrating behaviour on our build server, and while I found a number of articles online with similar error messages, none of them helped me. I thought I would explain this here in case it helps me or anyone else in the future.

    The error message we were getting is:

    There was no endpoint listening at http://localhost/Stubs.ExternalApplication/SampleService.svc that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

    Our scenario is as follows: we have a solution where a WCF service application hosting the WCF routing service listens to the Windows Azure Service Bus relay. An acceptance-test project in the solution sends a message to the service bus; the message is received by the WCF routing service and routed to SampleService.svc, which is hosted in another IIS application on the same box, and a response is flowed back through to the test. The tests cover five scenarios simulating a successful message and various error conditions. On my developer machine it worked absolutely fine every time, and a clean build there worked fine. On the build server, however, one or more of the tests would fail each time with the above error message, and there didn't seem to be any pattern to which test would fail.

    The solution was building on a Windows 2008 R2 machine with IIS 7 and AppFabric Server installed, with auto-start configured for the IIS application listening to the service bus.

    After lots of searching online and looking at logs, it turned out the simple solution was to restart the WAS service (Windows Process Activation Service) and the services it advised you to restart with it. Hope this helps someone else.
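
    If you hit this repeatedly on a build server, the restart itself is easy to script. A minimal C# sketch using System.ServiceProcess follows – stopping WAS's dependents (such as W3SVC) first is the standard ordering, but the timeouts and the console-program shape are assumptions for illustration:

    using System;
    using System.ServiceProcess;

    class RestartWas
    {
        static void Main()
        {
            TimeSpan timeout = TimeSpan.FromSeconds(30);

            using (var was = new ServiceController("WAS"))
            {
                // WAS cannot stop while services that depend on it are running.
                ServiceController[] dependents = was.DependentServices;
                foreach (ServiceController dependent in dependents)
                {
                    if (dependent.Status != ServiceControllerStatus.Stopped)
                    {
                        dependent.Stop();
                        dependent.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
                    }
                }

                was.Stop();
                was.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
                was.Start();
                was.WaitForStatus(ServiceControllerStatus.Running, timeout);

                // Bring the dependent services back up afterwards.
                foreach (ServiceController dependent in dependents)
                {
                    dependent.Refresh();
                    if (dependent.Status != ServiceControllerStatus.Running)
                    {
                        dependent.Start();
                        dependent.WaitForStatus(ServiceControllerStatus.Running, timeout);
                    }
                }
            }
        }
    }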

    Read the article

  • Service Unavailable - App Pool disabled error...anyone know why?

    - by andy
    Hi guys, a small number of our sites intermittently experience an error that is a mystery to us. Our set-up is:

    - Windows Server 2003
    - IIS 6
    - .NET 3.5

    We have about 35 websites running on IIS. Each site has its own app pool, and each app pool runs under the identity Network Service.

    The symptom is: the application pool for a site stops, and when you browse to the site, you only see: SERVICE UNAVAILABLE.

    In the event logs, we have one error, which looks like this:

    Application pool 'AppPool1' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    And we have several warnings leading up to the error, which look like this (the warnings look the same, except for the process id):

    A process serving application pool 'AppPool1' suffered a fatal communication error with the World Wide Web Publishing Service. The process id was '292'. The data field contains the error number.

    What's causing this? As I said, it doesn't happen very often – the last time was about six months ago – and it usually happens with the same site/app pool. Any ideas? Cheers!
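
    A note for context: the event-log wording ("automatically disabled due to a series of failures") is the signature of IIS's Rapid-Fail Protection, which takes a pool offline after a set number of worker-process crashes within a time window. As a hedged C# sketch only – the pool path is hypothetical and the metabase property names are from the IIS 6.0 metabase schema as best recalled, so verify them before relying on this – the thresholds can be inspected via System.DirectoryServices:

    using System;
    using System.DirectoryServices;

    class InspectRapidFail
    {
        static void Main()
        {
            // Hypothetical pool path; substitute your own app pool name.
            // Requires a reference to the System.DirectoryServices assembly.
            using (var pool = new DirectoryEntry("IIS://localhost/W3SVC/AppPools/AppPool1"))
            {
                // Whether rapid-fail protection is enabled for this pool.
                Console.WriteLine("RapidFailProtection: {0}",
                    pool.Properties["RapidFailProtection"].Value);

                // Crash count and time window (minutes) that trigger the shutdown.
                Console.WriteLine("MaxCrashes: {0}",
                    pool.Properties["RapidFailProtectionMaxCrashes"].Value);
                Console.WriteLine("Interval: {0}",
                    pool.Properties["RapidFailProtectionInterval"].Value);
            }
        }
    }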

    Read the article

  • SharePoint 2010 Hosting - ASPHostPortal :: Installing SSRS 2008 R2 on SharePoint 2010

    - by mbridge
    What do you need first? Please download the SQL Server 2008 R2 November CTP Reporting Services Add-in for Microsoft SharePoint Technologies 2010 and follow these steps:

    1. Install a SharePoint technology instance. (Already done here when installing PowerPivot with SharePoint.)
    2. Install SQL Server 2008 R2 November CTP Reporting Services and specify that the report server use SharePoint Integrated mode.
    3. Configure Reporting Services.
    4. Download the Reporting Services Add-in by clicking the rsSharePoint.msi link later on this page. To start the installation immediately, click Run.

    After installing Reporting Services and the add-in, your report server is ready to be integrated with SharePoint; SharePoint 2010 includes some new admin screens for this. To integrate, go to Central Administration, General Application Settings. When you have successfully installed the add-in, a Reporting Services icon will be there; click Reporting Services Integration. Add the report server web service URL (to get the URL, open the Reporting Services Configuration tool, connect to the report server, and click Web Service URL; click the URL to verify it works, then copy it and paste it into Report Server Web Service URL). Select your authentication mode (Windows authentication is preferred), add the user name and password of your admin account, and click OK to configure and start the integration.

    After the installation you can set the Reporting Services defaults. What has changed in SharePoint 2010 is that there isn't a report library available, so you have to add content types to a default library. Go to a site collection, Site Actions, View All Site Content, and create an asset library.

    Now we have to make sure we can add reports to the library. To do this we have to add content types: open the library, click Library Tools, Library Settings; under Content Types, click Add from existing site content types. In the Select Content Types section, in Select site content types from, click the arrow to select Reporting Services. In the Available Site Content Types list, click Report Builder, Report Data Source and Report, and then click Add to move the selected content types to the Content types to add list.

    Now we are ready to upload reports and execute them from within our web parts.

    Another interesting post: Integrating SharePoint 2010 and SQL 2008 R2

    Read the article

  • How to add Windows Defender to Windows Explorer's right-click menu to scan a particular drive/folder/file on demand?

    - by avirk
    There is no option to scan a particular drive (or file) on demand by right-clicking it in Windows Explorer, as we had in Windows 7 with Microsoft Security Essentials or with other antivirus solutions. I know we can run a custom scan for a particular drive or specific folder, but that process is lengthy and time-consuming. The guide How to Add a "Windows Defender" Cascading Desktop Context Menu in Windows 8 explains how to add Windows Defender to the desktop right-click menu, so I'm curious: is there a way to add it to the Windows Explorer right-click menu to launch a scan whenever I need to?

    Read the article

  • 3D rotation tool. How can I add simple extrusion?

    - by Gerve
    The 3D rotation tool is excellent, but it only lets you rotate 2D objects, which means my object is wafer-thin. Is there any way to add simple extrusion or depth to a symbol? I don't really want to use any 3rd-party libraries like Away3D or Papervision; that is overkill for my simple 2D game. I only want to do this by creating a couple of motion tweens, if possible. More details: below is what my symbol looks like (just with a bit more color). The symbol does a little 3D rotation and then flies away; it's just for something like a scoreboard within the app.

    Read the article

  • Tip on Reusing Classes in Different .NET Project Types

    - by psheriff
    All of us have class libraries that we developed for use in our projects. When you create a .NET Class Library project with many classes, you can use that DLL in ASP.NET, Windows Forms and WPF applications. However, these .NET class libraries cannot be used in Silverlight and Windows Phone projects. The reason is that Silverlight and Windows Phone both use a scaled-down version of .NET and thus do not have access to the full .NET framework class library. Even so, many classes and much functionality will work both in the full .NET framework and in the scaled-down versions that Silverlight and Windows Phone use.

    Let's take an example of a class that you might want to use in all of the above-mentioned project types. The code listing shown below might be something that you have in a Windows Forms or an ASP.NET application:

    using System.Text.RegularExpressions;

    public class StringCommon
    {
      public static bool IsAllLowerCase(string value)
      {
        return new Regex(@"^([^A-Z])+$").IsMatch(value);
      }

      public static bool IsAllUpperCase(string value)
      {
        return new Regex(@"^([^a-z])+$").IsMatch(value);
      }
    }

    The StringCommon class is very simple, with just two methods, but you know that the System.Text.RegularExpressions namespace is available in Silverlight and Windows Phone. Thus, you know that you may reuse this class in your Silverlight and Windows Phone projects. Here is the problem: if you create a Silverlight Class Library project, right-click on that project in Solution Explorer and choose Add | Add Existing Item… from the menu, the class file StringCommon.cs will be copied from its original location and placed into the Silverlight Class Library project. You now have two files with the same code. If you want to change the code, you will need to change it in two places! This is a maintenance nightmare that you have just created. If you then add this to a Windows Phone Class Library project, you now have three places where you need to modify the code!

    Add As Link

    Instead of creating three separate copies of the same class file, you want to leave the original class file in its original location and just create a link to that file from the Silverlight and Windows Phone class libraries. Visual Studio will allow you to do this, but you need to do one additional step in the Add Existing Item dialog (see Figure 1). You will still right-click on the project and choose Add | Add Existing Item… from the menu, and you will still highlight the file you want to add to your project, but DO NOT click the Add button. Instead, click the drop-down portion of the Add button and choose the "Add As Link" menu item. This will create a link to the file on disk and will not copy the file into your new project.

    Figure 1: Add As Link will create a link, not copy the file over.

    When this linked file is added to your project, there will be a different icon next to that file in the Solution Explorer window. This icon signifies that it is a link to a file in another folder on your hard drive.

    Figure 2: The linked file has a different icon to show it is a link.

    Of course, if you have code that will not work in Silverlight or Windows Phone – because the code depends on features of .NET that are not supported on those platforms – you can always wrap conditional compilation directives around the offending code so it is removed when compiled in those class libraries (a short sketch of this appears at the end of this article).

    Summary

    In this short blog entry you learned how to reuse one of your class libraries from ASP.NET, Windows Forms or WPF applications in your Silverlight or Windows Phone class libraries.
 You can do this without creating a maintenance nightmare by using the "Add As Link" feature of the Add Existing Item dialog. Good luck with your coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
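
    As a hedged illustration of the conditional-compilation tip mentioned above – the method and its fallback behaviour are hypothetical, and SILVERLIGHT and WINDOWS_PHONE are the conditional symbols those project types typically define:

    using System.IO;

    public static class FileNameHelper
    {
        public static bool IsValidFileName(string value)
        {
    #if SILVERLIGHT || WINDOWS_PHONE
            // Scaled-down frameworks: settle for a simpler check here.
            return !string.IsNullOrEmpty(value);
    #else
            // Full .NET framework: use the richer API that is not
            // assumed to exist on the other platforms.
            return !string.IsNullOrEmpty(value) &&
                   value.IndexOfAny(Path.GetInvalidFileNameChars()) < 0;
    #endif
        }
    }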

    Read the article

  • Architecture design with MyBatis mappers

    - by Wolf
    I am creating a REST web service for providing data. I am using Spring MVC for handling REST requests and MyBatis for data access. The application should be designed so that it is easy to change the data-access implementation (for example, to Hibernate or something else), and it has to be fast (so I am trying to avoid unnecessary over-complication of the design). Now my question is about the general design of the layers. I would normally use a DAO interface and then different implementations for different data-access strategies, but MyBatis uses interfaces to access the data. So I can think of two possible models, but I am not sure which one is better, or whether there is any other nice way:

    1. The controller layer uses service-layer interfaces; the services are then implemented for each data-access strategy. For example, for MyBatis: the service implementation uses mapper classes to access data, does whatever it needs to do with the results, and sends them to the controller layer.

    2. The controller layer uses the service layer; the service layer uses DAO interfaces; the DAOs are then implemented for each data-access strategy. For example, for MyBatis: the DAO class uses the mapper interface to access data and sends the results to the service layer, which does whatever it needs to do with them and sends them to the controller layer.

    I prefer the first strategy, as it seems less complicated, but then I would have to write all of the service code again for another data-access implementation. What do you think? Thank you.

    Read the article

  • How do I add a network printer in Ubuntu 12.04?

    - by Ricky Robinson
    I know the name and the IP address of a network printer, but I can't seem to search by IP address or name. Ubuntu developers love to move things around to make it difficult for users, so now with Ubuntu 12.04 I can only go to Application -> System Tools -> System Settings -> Printers, click on Network, and a list of printers appears. Too bad the one I want to add isn't there. How do I do it? Here it suggests System -> Administration -> Printing, which simply doesn't exist.

    Read the article

  • Should I add parameters to instance methods that use those instance fields as parameters?

    - by john smith optional
    I have an instance method that uses instance fields in its work. I can leave the method without those parameters, since the fields are available to me, or I can add them to the parameter list, thus making my method more "generic" and less reliant on the class. On the other hand, that means additional entries in the parameter list. Which approach is preferable, and why?

    Edit: at the moment I don't know whether my method will be public or private.
    Edit 2: clarification – both the method and the fields are at the instance level.
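
    A minimal C# illustration of the two options – the Invoice class, its fields and the numbers are invented for the example:

    public class Invoice
    {
        private decimal subtotal = 100m;
        private decimal taxRate = 0.2m;

        // Option 1: rely on instance state; shorter call sites,
        // but the method only works against this object's fields.
        public decimal CalculateTotal()
        {
            return subtotal * (1 + taxRate);
        }

        // Option 2: pass the values explicitly; the logic becomes
        // reusable (it could even be made static), at the cost of
        // a longer parameter list.
        public decimal CalculateTotal(decimal subtotal, decimal taxRate)
        {
            return subtotal * (1 + taxRate);
        }
    }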

    Read the article

  • Check for Apache state in Linux

    - by loulouzekiller
    Hi, I have a Java application that starts/stops/restarts Apache, and it should also check Apache's status. I looked at how to check the status of Apache, and it appears that you have to check the state of the service. The problem is that when I use apachectl start, the httpd service is still reported as not started – is that normal? I check the service state with service httpd status.

    [root@lxrdcpsm ~]# service httpd status
    httpd is stopped
    [root@lxrdcpsm ~]# /apps/apache/2.4.4/bin/apachectl start
    httpd (pid 20502) already running
    [root@lxrdcpsm ~]# service httpd status
    httpd is stopped
    [root@lxrdcpsm ~]# /apps/apache/2.4.4/bin/apachectl stop
    [root@lxrdcpsm ~]# service httpd status
    httpd is stopped
    [root@lxrdcpsm ~]# /apps/apache/2.4.4/bin/apachectl start
    [root@lxrdcpsm ~]# service httpd status
    httpd is stopped
    [root@lxrdcpsm ~]#

    Read the article

< Previous Page | 316 317 318 319 320 321 322 323 324 325 326 327  | Next Page >