Search Results

Search found 5817 results on 233 pages for 'named ranges'.


  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
When designing a service-oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the clients calling the services, in a large distributed system a service may invoke other services. In this case, one service needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but when you have more than 10 services it can be. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

Benefit of a Discovery Service

Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service. Furthermore, it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. Using the discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can use one of those endpoints to send its request. When a service changes its endpoint address, all it needs to do is update its records in the discovery service, and all the other services will learn the new endpoint.

WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services with the discovery behavior enabled receive this message, and only the matching services send their endpoints back to the client. The managed proxy discovery mode works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.

Service Announcement and Probe

The main functionality of a discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to request to the discovery service, and the discovery service finds the endpoint and responds. Sometimes the contract and endpoint are not enough; the metadata can also contain versioning and extension attributes. In this post I will only cover the case involving contract and endpoint. When a client (or a service that needs to invoke another service) needs to connect to a target service, it first calls the discovery service's "Probe" operation with its criteria; basically, the criteria contain the contract type name of the target service. The discovery service then searches its endpoint repository using the criteria. The repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe".
Finally, the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service is also responsible for knowing when a new service becomes available online, as well as when a service goes offline. This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service includes:

1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
3. An endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata.

WCF 4.0 covers all of these features in its infrastructure classes.

Discovery Service in WCF 4.0

WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery-compliant discovery service. It supports both the ad-hoc and managed proxy modes. For the case described in this post, what we need to build is a standalone discovery service: the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all calls into the discovery service happen asynchronously, for better scalability and performance:

1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository here.
2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository here.
3. OnBeginFind, OnEndFind: invoked when a client sends a probe message to find service endpoint information. We look for the proper endpoints by matching the client's criteria against the repository here.
4. OnBeginResolve, OnEndResolve: invoked when a client sends a resolve message. Unlike find, resolve returns exactly one service endpoint metadata entry to the client. In this example we will NOT implement this method.

Let's create our own discovery service, inheriting from System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class: since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel.Discovery;
    using System.ServiceModel;

    namespace Phare.Service
    {
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class ManagedProxyDiscoveryService : DiscoveryProxy
        {
            protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndFind(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndOfflineAnnouncement(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override void OnEndOnlineAnnouncement(IAsyncResult result)
            {
                throw new NotImplementedException();
            }

            protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result)
            {
                throw new NotImplementedException();
            }
        }
    }

Now let's implement the online, offline and find methods one by one. The WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For demo purposes we will use an in-memory dictionary to store the services' endpoint metadata; in the next post we will see how to serialize and store this information in a database. Define a concurrent dictionary inside the service class, since it will be used from multiple threads.

    using System.Collections.Concurrent; // required for ConcurrentDictionary

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class ManagedProxyDiscoveryService : DiscoveryProxy
    {
        private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services;

        public ManagedProxyDiscoveryService()
        {
            _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>();
        }
    }

Then we can simply implement the logic of service online and offline.
    protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
    {
        _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata);
        return new OnOnlineAnnouncementAsyncResult(callback, state);
    }

    protected override void OnEndOnlineAnnouncement(IAsyncResult result)
    {
        OnOnlineAnnouncementAsyncResult.End(result);
    }

    protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state)
    {
        EndpointDiscoveryMetadata endpoint = null;
        _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint);
        return new OnOfflineAnnouncementAsyncResult(callback, state);
    }

    protected override void OnEndOfflineAnnouncement(IAsyncResult result)
    {
        OnOfflineAnnouncementAsyncResult.End(result);
    }

Regarding the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which we can use to evaluate whether a service's metadata satisfies the criteria. So the implementation of the find method looks like this.

    protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state)
    {
        _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value))
                 .Select(s => s.Value)
                 .All(meta =>
                 {
                     findRequestContext.AddMatchingEndpoint(meta);
                     return true;
                 });
        return new OnFindAsyncResult(callback, state);
    }

    protected override void OnEndFind(IAsyncResult result)
    {
        OnFindAsyncResult.End(result);
    }

As you can see, we check every endpoint's metadata in the repository by invoking the IsMatch method, and add all matching endpoint metadata to the request context. Finally, since all these methods are asynchronous, we need some AsyncResult classes as well. Below are the base class and the derived classes used in the previous methods.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading;

    namespace Phare.Service
    {
        abstract internal class AsyncResult : IAsyncResult
        {
            AsyncCallback callback;
            bool completedSynchronously;
            bool endCalled;
            Exception exception;
            bool isCompleted;
            ManualResetEvent manualResetEvent;
            object state;
            object thisLock;

            protected AsyncResult(AsyncCallback callback, object state)
            {
                this.callback = callback;
                this.state = state;
                this.thisLock = new object();
            }

            public object AsyncState
            {
                get { return state; }
            }

            public WaitHandle AsyncWaitHandle
            {
                get
                {
                    if (manualResetEvent != null)
                    {
                        return manualResetEvent;
                    }
                    lock (ThisLock)
                    {
                        if (manualResetEvent == null)
                        {
                            manualResetEvent = new ManualResetEvent(isCompleted);
                        }
                    }
                    return manualResetEvent;
                }
            }

            public bool CompletedSynchronously
            {
                get { return completedSynchronously; }
            }

            public bool IsCompleted
            {
                get { return isCompleted; }
            }

            object ThisLock
            {
                get { return this.thisLock; }
            }

            protected static TAsyncResult End<TAsyncResult>(IAsyncResult result)
                where TAsyncResult : AsyncResult
            {
                if (result == null)
                {
                    throw new ArgumentNullException("result");
                }

                TAsyncResult asyncResult = result as TAsyncResult;

                if (asyncResult == null)
                {
                    throw new ArgumentException("Invalid async result.", "result");
                }

                if (asyncResult.endCalled)
                {
                    throw new InvalidOperationException("Async object already ended.");
                }

                asyncResult.endCalled = true;

                if (!asyncResult.isCompleted)
                {
                    asyncResult.AsyncWaitHandle.WaitOne();
                }

                if (asyncResult.manualResetEvent != null)
                {
                    asyncResult.manualResetEvent.Close();
                }

                if (asyncResult.exception != null)
                {
                    throw asyncResult.exception;
                }

                return asyncResult;
            }

            protected void Complete(bool completedSynchronously)
            {
                if (isCompleted)
                {
                    throw new InvalidOperationException("This async result is already completed.");
                }

                this.completedSynchronously = completedSynchronously;

                if (completedSynchronously)
                {
                    this.isCompleted = true;
                }
                else
                {
                    lock (ThisLock)
                    {
                        this.isCompleted = true;
                        if (this.manualResetEvent != null)
                        {
                            this.manualResetEvent.Set();
                        }
                    }
                }

                if (callback != null)
                {
                    callback(this);
                }
            }

            protected void Complete(bool completedSynchronously, Exception exception)
            {
                this.exception = exception;
                Complete(completedSynchronously);
            }
        }
    }

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.ServiceModel.Discovery;

    namespace Phare.Service
    {
        internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult
        {
            public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result);
            }
        }

        sealed class OnOfflineAnnouncementAsyncResult : AsyncResult
        {
            public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result);
            }
        }

        sealed class OnFindAsyncResult : AsyncResult
        {
            public OnFindAsyncResult(AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.Complete(true);
            }

            public static void End(IAsyncResult result)
            {
                AsyncResult.End<OnFindAsyncResult>(result);
            }
        }

        sealed class OnResolveAsyncResult : AsyncResult
        {
            EndpointDiscoveryMetadata matchingEndpoint;

            public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state)
                : base(callback, state)
            {
                this.matchingEndpoint = matchingEndpoint;
                this.Complete(true);
            }

            public static EndpointDiscoveryMetadata End(IAsyncResult result)
            {
                OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result);
                return thisPtr.matchingEndpoint;
            }
        }
    }

Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service, so we can host it with a ServiceHost in a console application, a Windows service, or in IIS as usual. The following code hosts the discovery service we just created in a console application.

    // requires System.Configuration (ConfigurationManager) and System.ServiceModel.Channels (Binding)
    static void Main(string[] args)
    {
        using (var host = new ServiceHost(new ManagedProxyDiscoveryService()))
        {
            host.Opened += (sender, e) =>
            {
                host.Description.Endpoints.All((ep) =>
                {
                    Console.WriteLine(ep.ListenUri);
                    return true;
                });
            };

            try
            {
                // retrieve the announcement and probe endpoints and the binding from configuration
                var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
                var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
                var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
                var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress);
                var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress);
                probeEndpoint.IsSystemEndpoint = false;
                // append the service endpoints for announcement and probe
                host.AddServiceEndpoint(announcementEndpoint);
                host.AddServiceEndpoint(probeEndpoint);

                host.Open();

                Console.WriteLine("Press any key to exit.");
                Console.ReadKey();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }

        Console.WriteLine("Done.");
        Console.ReadKey();
    }

Note that the discovery service needs two endpoints, one for announcement and one for probe. In this example I retrieve them from the configuration file, and I specify the binding of these two endpoints in the configuration file as well.
    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
      <appSettings>
        <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
        <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
        <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </appSettings>
    </configuration>

And this is the console screen when I ran my discovery service. As you can see there are two endpoints, listening for announcement messages and probe messages.

Discoverable Service and Client

Next, let's create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send an online announcement message to the discovery service, as well as an offline message before it shuts down. Let's create a simple service that converts an incoming string to upper case. The service contract and implementation look like this.

    [ServiceContract]
    public interface IStringService
    {
        [OperationContract]
        string ToUpper(string content);
    }

    public class StringService : IStringService
    {
        public string ToUpper(string content)
        {
            return content.ToUpper();
        }
    }

Then host this service in a console application. To make the discovery service easy to test, the service address changes each time the host starts.

    static void Main(string[] args)
    {
        var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString()));

        using (var host = new ServiceHost(typeof(StringService), baseAddress))
        {
            host.Opened += (sender, e) =>
            {
                Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri);
            };

            host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty);

            host.Open();

            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }

Currently this service is NOT discoverable. We need to add a special service behavior so that it sends the online and offline messages to the discovery service's announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. Once we specify the announcement endpoint address and append the behavior to the service, the service becomes discoverable.

    var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]);
    var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
    var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress);
    var discoveryBehavior = new ServiceDiscoveryBehavior();
    discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint);
    host.Description.Behaviors.Add(discoveryBehavior);

The ServiceDiscoveryBehavior utilizes a service extension and the channel dispatchers to implement the online and offline announcement logic. In short, it hooks into the channel open and close procedures and sends the online and offline messages to the announcement endpoint.

On the client side, once we have the discovery service, a client can invoke a service without knowing its endpoint.
The WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoints by passing criteria. In the code below I initialize the DiscoveryClient with the discovery service's probe endpoint address. Then I create the find criteria, specifying the service contract I want to use, and invoke the Find method. This sends the probe message to the discovery service, which sends the matching endpoints back to me. The discovery service returns all endpoints that match the find criteria, which means the result of the Find method might contain more than one endpoint. In this example I just return the first match. In the next post I will show how to extend our discovery service to make it work like a service load balancer.

    static EndpointAddress FindServiceEndpoint()
    {
        var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]);
        var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding;
        var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress);

        EndpointAddress address = null;
        FindResponse result = null;
        using (var discoveryClient = new DiscoveryClient(discoveryEndpoint))
        {
            result = discoveryClient.Find(new FindCriteria(typeof(IStringService)));
        }

        if (result != null && result.Endpoints.Any())
        {
            var endpointMetadata = result.Endpoints.First();
            address = endpointMetadata.Address;
        }
        return address;
    }

Once we have probed the discovery service and received the endpoint, the client code can create the channel factory from the endpoint and binding and invoke the service. When creating the client-side channel factory we need to make sure that the client-side binding is the same as the service side's. A WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding is not part of the WS-Discovery specification. In the next post I will demonstrate how to add the binding information to the discovery service; at that point the client won't need to create the binding by itself. Instead it will use the binding received from the discovery service.

    static void Main(string[] args)
    {
        Console.WriteLine("Say something...");
        var content = Console.ReadLine();
        while (!string.IsNullOrWhiteSpace(content))
        {
            Console.WriteLine("Finding the service endpoint...");
            var address = FindServiceEndpoint();
            if (address == null)
            {
                Console.WriteLine("There is no endpoint that matches the criteria.");
            }
            else
            {
                Console.WriteLine("Found the endpoint {0}", address.Uri);

                var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address);
                factory.Opened += (sender, e) =>
                {
                    Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri);
                };
                var proxy = factory.CreateChannel();
                using (proxy as IDisposable)
                {
                    Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content));
                }
            }

            Console.WriteLine("Say something...");
            content = Console.ReadLine();
        }
    }

Similarly, the discovery service probe endpoint and binding are defined in the configuration file.
    <?xml version="1.0"?>
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
      <appSettings>
        <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/>
        <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/>
        <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>
      </appSettings>
    </configuration>

OK, now let's have a test. First start the discovery service, and then start our discoverable service. When it starts it announces itself to the discovery service and registers its endpoint in the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asks the discovery service for the endpoint and then establishes the connection to the discoverable service. More interestingly: do NOT close the client console, but terminate the discoverable service by pressing the Enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted at another address. If we enter something in the client we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

Summary

In this post I discussed the benefits of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository, and when finding I just returned the first matching endpoint. I also hard-coded the bindings shared between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • apache deny directive

    - by user12145
    I am using Apache deny to block a country's IP ranges (Turkey in this case). However, from the Apache log I still see IPs from .tr (presumably using DSL connections) accessing the site and getting a valid HTTP 200 response: dslxxx.xxx-xxxxx.ttnet.net.tr. What am I missing?
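    For context, a minimal sketch of the kind of configuration being described, using Apache 2.2-style mod_authz_host directives; the directory path and the two CIDR ranges are hypothetical examples, and a real country list has many entries. Note that Deny matches the connecting client IP, so any range missing from the list (or traffic arriving through a proxy) still gets through:

        <Directory "/var/www/html">
            Order Allow,Deny
            Allow from all
            # hypothetical example ranges; replace with the full country list
            Deny from 78.160.0.0/11
            Deny from 88.224.0.0/11
        </Directory>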

    Read the article

  • Parallel File Copy

    - by Jon
    I have a list of files I need to copy on a Linux system - each file ranges from 10 to 100GB in size. I only want to copy to the local filesystem. Is there a way to do this in parallel - with multiple processes each responsible for copying a file - in a simple manner? I can easily write a multithreaded program to do this, but I'm interested in finding out if there's a low level Linux method for doing this.
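    One low-level approach (a sketch, not from the original thread) is to let xargs fan the work out across multiple cp processes; -P sets the degree of parallelism, and the file list and destination directory here are assumptions:

        # files.txt contains one source path per line; /dest is the target directory
        # -P 4 runs up to four cp processes at a time
        xargs -a files.txt -P 4 -I{} cp {} /dest/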

    Read the article

  • Yahoo Messenger IP range

    - by Adrian
    I use PeerBlock (formerly PeerGuardian) and, as a consequence, Yahoo Messenger (actually Pidgin) fails to connect every once in a while; PeerBlock reports the access as blocked because the destination IP is in one of the block lists. Where can I get a list of all IP ranges belonging to Yahoo Messenger so I can configure an "allow" rule in PeerBlock?

    Read the article

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web-server:

        GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:20:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    Problem is that changes made to the file will not get served; the old (i.e. February of last year) version keeps getting served:

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Content-Length: 29246
        Date: Mon, 07 Mar 2011 14:23:07 GMT
        Content-Type: application/x-javascript
        ETag: "5a0a6178edacb1:1c51"
        Server: Microsoft-IIS/6.0
        Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET
        ...

    The same old file gets served, even though we've:
    - renamed the file
    - deleted the file
    - restarted IIS

    The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\). And this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will:
    - get the current file (file there)
    - 404 (file renamed)
    - 404 (file deleted)

    But if i change the case of the requested resource, i.e.:

        GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1
        Accept: text/html, application/xhtml+xml, */*
        Accept-Language: en-US
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
        Accept-Encoding: gzip, deflate
        Connection: Keep-Alive
        Host: www.stackoverflow.com

    Note: MoDiFyQuOtEArEa.js versus ModifyQuoteArea.js

    Then i do get the proper file (or the 404 i expect if the file has been renamed or deleted). But any subsequent changes to the file will not show up until i change the case of the file i'm asking for. Checking the IIS logs, all the (internet) requests are coming from the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, i conclude that there is a proxy. The requests serviced from this proxy are not logged in the IIS logs. The requests for new files are logged, and from the client IP, not a proxy IP. The proxy is case-sensitive. This does not sound like something Microsoft, or IIS, would do:
    - a transparent proxy
    - case-sensitive
    - unlogged
    - surviving restarts of IIS
    - surviving in a cache for hours

    i can't believe that our customer's IIS is doing these things. i'm assuming there is some other transparent proxy in front of IIS. Or, does IIS have a transparent, unlogged, case-sensitive, memory-based proxy that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged.)

    Read the article

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, Just upgraded an old box to Ubuntu 10.04.2 LTS. Apache will not display images to a browser that are over about 2K. Small images seem to display fine. Static HTML and PHP continue to work fine as well. Installed:

        apache2 2.2.14-5ubuntu8.4
        apache2-mpm-prefork 2.2.14-5ubuntu8.4
        apache2-utils 2.2.14-5ubuntu8.4
        apache2.2-bin 2.2.14-5ubuntu8.4
        apache2.2-common 2.2.14-5ubuntu8.4

    Here is an ngrep capture of an image that doesn't display in the browser:

        T 192.168.0.4:32907 - 192.168.0.54:80 [AP]
        GET /path/path/logo.png HTTP/1.1..Host: 192.168.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Encoding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3....

        T 192.168.0.54:80 - 192.168.0.4:32907 [A]
        HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apache/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Content-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep-Alive..Content-Type: image/png.....PNG........IHDR...!...v.......%.....sRGB.........bKGD..............pHYs.................tIME.. etc...

    This looks ok to me! I have tried Firefox and Chrome; both display small images fine, but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt. It also takes a long time to save, which makes me think the browser cannot see the Content-Length header sent from Apache. Also, when I look at the saved image file it includes the headers from Apache, along with a bit of garbage at the top, like so (vi logo.png):

        ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M
        Date: Wed, 09 Mar 2011 04:47:04 GMT^M
        Server: Apache/2.2.14 (Ubuntu)^M
        Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M
        ETag: "17b6ff-157c-491dd63eb2f40"^M
        Accept-Ranges: bytes^M
        Content-Length: 5500^M
        Keep-Alive: timeout=15, max=94^M
        Connection: Keep-Alive^M
        Content-Type: image/png^M
        ^M
        PNG^M
        etc...

    Any ideas? It's driving me nuts. There is nothing in the Apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this Ubuntu box either. Thanks heaps!! Dave

    ps: just tried on IE from a different computer, same problem :(
    pps: rebooted server, no help.
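    Not part of the original question, but for context: symptoms like these (small files fine, large files corrupted, raw packet data and response headers ending up inside the saved file) are classically associated with Apache's sendfile/mmap optimizations misbehaving on certain kernels, drivers or virtualized NICs, and a commonly tried mitigation is disabling them in the Apache configuration. A sketch, assuming the stock Ubuntu apache2 layout:

        # in apache2.conf or a conf.d snippet; both are real core directives
        EnableSendfile Off
        # if the filesystem or driver is also suspect for memory-mapped reads:
        EnableMMAP Off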

    Read the article

  • LogParser query to grab only external IP addresses from IIS logs?

    - by Josh
    I'm working on a public website that is used by both external visitors and internal employees. I'm after the external visitor hits, but I can't think of a good way to filter out the internal IP ranges. Using LogParser, what is the best way to filter IISW3C logs by IP range? This is all I've come up with so far, which can't possibly be the best or most efficient way. WHERE [c-ip] NOT LIKE (10.10.%, 10.11.%) Any help is appreciated.
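    A sketch of one way to express this in LogParser's SQL dialect (the log file pattern and the internal ranges are hypothetical; LIKE takes a single pattern per comparison, so multiple ranges are combined with AND):

        -- run as: LogParser.exe -i:IISW3C file:ExternalHits.sql
        SELECT c-ip, COUNT(*) AS Hits
        FROM ex*.log
        WHERE c-ip NOT LIKE '10.10.%'
          AND c-ip NOT LIKE '10.11.%'
        GROUP BY c-ip
        ORDER BY Hits DESC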

    Read the article

  • can i use an ip-list include file for iptable blacklisting

    - by rubo77
    I would like to block all countries except mine in iptables, that is a list with about 100,000 entries. How can I define this blacklist file in a script, so iptables blocks all those IP ranges? Maybe I can use http://www.ipdeny.com/ipblocks/data/countries/ which provides lists in the form

        117.55.192.0/20
        117.104.224.0/21
        119.59.80.0/21
        121.100.48.0/21
        ...

    I want to be able to change the blacklist file easily without having to change the iptables script.
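    A minimal sketch of the scripted approach (the file name and chain are assumptions; also note that loading 100,000 individual iptables rules is slow to insert and to evaluate, which is why ipset is often preferred for lists this size):

        #!/bin/sh
        # blacklist.txt: one CIDR range per line, e.g. 117.55.192.0/20
        while read range; do
            iptables -A INPUT -s "$range" -j DROP
        done < blacklist.txt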

    Read the article

  • IF function that refers to another cell if true

    - by geoconfusion
    Can someone please help? I am using the IF function to find cells within certain ranges, but I want the cell to contain the value if it falls within that specific range. For example: =IF (AA3 is between 150 and 400, then AD3 is equal to AA3, and if not leave blank). My current formula below does not work: =IF(AND(AA3>150, AA3<400), AD3=AA3, "" ) where AD3 is the cell I am working in... any suggestions?
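    A sketch of the usual correction: an IF formula cannot assign to another cell, so it goes in AD3 itself and its value argument is just AA3 (whether exactly 150 or 400 should count as "between" is an assumption):

        =IF(AND(AA3>150, AA3<400), AA3, "")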

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
SSIS Package Configurations are very useful for making packages flexible, so that you can change object properties at run-time and thus make a package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run a package you have to pass it the correct Package Configuration. I usually use XML configuration files, and I also require everyone who works with me to make sure that an object used in several packages has the same name in every package where it appears, in order to simplify configuration usage. Connection Managers are a good example of such objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have one configuration file for them, usable by all packages. If a package has some specific configuration needs, we create a specific – or "local" – XML configuration file, or we set the values that need to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx

Now, how can we improve this even more? I'd like to have a package that, when run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table, in which you put the values that you want to be used at runtime by your packages.

The ConfigurationFilter column specifies which package a configuration line applies to. A package will use a line only if the value in the ConfigurationFilter column is equal to its name. In the above sample, only the package named "simple-package" will use line number two. There is one exception: the $$Global value indicates a configuration row that has to be applied to every package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I described before. The ConfigurationValue column contains the value you want applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value that is simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to store DTS Configurations in SQL Server: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx

Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

    DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

The /AC option expects a string with the following format: <database_server>;<database_name>;<table_name>. Only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package.
The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the "global" and "local" configuration. Now, you may wonder why you couldn't avoid all this machinery and simply always pass two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), doing something like this:

    DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

The problem is that this approach doesn't work if one of the two configuration files contains a value that has to be applied to an object that doesn't exist in the loaded package. This situation raises an error that halts package execution. To work around this problem, you could create a configuration file for each package. Unfortunately, that makes deployment and management harder, since you'd have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration to the pre-production and production environments has never been so easy!

To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218

Feedback, as usual, is very welcome!
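For reference, a minimal sketch of what the configuration table described above might look like; the column types, lengths, and schema name are assumptions based on the description in the post and on the standard SSIS configuration table:

    CREATE TABLE [ssiscfg].[configuration] (
        ConfigurationFilter  NVARCHAR(255) NOT NULL, -- package name, or $$Global for all packages
        ConfigurationValue   NVARCHAR(255) NOT NULL, -- value applied at runtime
        PackagePath          NVARCHAR(255) NOT NULL, -- object property the value is applied to
        ConfiguredValueType  NVARCHAR(20)  NOT NULL, -- data type of the value
        -- hash of filter + path, usable as the primary key per the description above
        Checksum AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED PRIMARY KEY
    );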

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC.

Performance Tools

After reading Steve Souders' two (very excellent) books on front-end website performance, High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders' Performance Golden Rule: "Optimize front-end performance first, that's where 80% or more of the end-user response time is spent." You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application.

1. Sprite and Image Optimization Framework

CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing's Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that make it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869.

The Sprite and Image Optimization Framework was written by Morgan McClean, who worked in the office next to mine at Microsoft. Morgan was a scary-smart intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here's an example of what image inlining looks like:

    .Home_StephenWalther_small-jpg {
        width: 75px;
        height: 100px;
        background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAABGdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKLs+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%;
    }

The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas.
I plan to write about these Gotchas and workarounds in a future blog entry.

2. Microsoft Ajax Minifier

Whenever possible you should combine, minify, compress, and cache with a far future header all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don't confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying a JavaScript file after you compress the file. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff.

The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to minify all of your JavaScript and CSS files whenever you do a build within Visual Studio automatically. Read the Ajax Minifier Quick Start to learn how to configure the build task.

3. ySlow

The ySlow tool is a free add-on for Firefox created by Yahoo that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website uses the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery.

Uptime

After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live.

4. ELMAH

ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here's what the elmah page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user).
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex.

5. Pingdom

I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency that your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string "Contact Us" from the website homepage. If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app which looks like this:

6. Host Tracker

If your website does go down then you need some way of determining whether it is a problem with your local network or if your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here's what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA.

Debugging

I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake.

7. HTML Spell Checker

Why doesn't Visual Studio have a built-in spell checker? Don't know – I've always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I'm always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker:

8. IIS SEO Toolkit

If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here's what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur.

9. LinqPad

If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results.
You also can use it to see the resulting SQL that gets executed against the database:

10. .NET Reflector

I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the "Source Code" of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here's part of the disassembled code from the Image helper class:

Summary

In this blog entry, I've discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly alongwith Updating Statistics. With no other alternate options occuring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now."

Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts:

1. Blocking
2. Bad plan in cache
3. Outdated statistics
4. Hardware bottleneck

To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0). If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process. Killing a process will cause the transaction to roll back, so you need to proceed with caution. Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present. You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.

The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure. A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics. The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue. The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%). If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter. If this parameter value is rare, then its execution plan in cache is what we call a bad plan. You want the best plan in cache for the most frequent parameter values. If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure. You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.

To remove a bad plan from cache, you can recompile the stored procedure. An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.
It is better to recompile stored procedures rather than dropping the procedure cache as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again. To determine if there is a hardware bottleneck occurring such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server.  Hopefully you already have a baseline of the server so you know what is normal and what is not.  Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%.  The servers that I support typically are under 30% CPU utilization, but your baseline could be higher and be within a normal range. If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache.  Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above mentioned things.  Proceed with caution when restarting the SQL Server service as all transactions that have not completed will be rolled back at startup.  This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped.  Until the crash recovery process is completed on the database, it is unavailable to your applications. If restarting IIS fixes the problem, then the problem might not have been inside SQL Server.  Prior to taking this step, you should do analysis of the above mentioned things. If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
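A sketch of the first diagnostic step and the targeted fix described above; the procedure and table names are hypothetical, but sysprocesses, sp_recompile, and UPDATE STATISTICS are standard:

    -- Check for blocking (blocked holds the spid of the blocking session)
    SELECT spid, blocked, waittime, lastwaittype
    FROM master..sysprocesses
    WHERE blocked <> 0;

    -- Evict just one suspect plan instead of dropping the whole cache:
    -- marks the procedure for recompilation on its next execution
    EXEC sp_recompile N'dbo.usp_SuspectProcedure';

    -- Refresh statistics after a large bulk import (table name hypothetical)
    UPDATE STATISTICS dbo.ImportedTable WITH FULLSCAN;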

    Read the article

  • Using IP Restrictions with URL Rewrite - Week 25

    - by OWScott
    URL Rewrite offers tremendous flexibility for customizing rules to your environment. One area of functionality that is often desired for URL Rewrite is to allow a large list of approved or denied IP addresses and subnet ranges. IIS’s original IP Restrictions is helpful for fully blocking an IP address, but it doesn’t offer the flexibility that URL Rewrite does. An example where URL Rewrite is helpful is where you want to allow only authorized IPs to access staging.yoursite.com, but where staging.yoursite.com is part of the same site as www.yoursite.com. This requires conditional logic for the user’s IP. This lesson covers this unique situation while also introducing Rewrite Maps, server variables, and pairing rules to add more flexibility. This is week 25 of a 52 week series for the Web Pro. Past and future videos can be found here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week’s video here.
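
As a rough sketch of the kind of configuration the video builds (the host name, rule name, map name, and IP addresses below are illustrative assumptions, not the lesson's exact rules), a Rewrite Map can hold the approved IPs while a rule applies it only to the staging host:

    <rewrite>
      <rules>
        <!-- Block requests to staging.yoursite.com from non-approved IPs -->
        <rule name="RestrictStagingByIP" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^staging\.yoursite\.com$" />
            <!-- {AllowedStagingIPs:{REMOTE_ADDR}} consults the map below;
                 approved IPs return "1", everything else the default "0" -->
            <add input="{AllowedStagingIPs:{REMOTE_ADDR}}" pattern="1" negate="true" />
          </conditions>
          <action type="AbortRequest" />
        </rule>
      </rules>
      <rewriteMaps>
        <rewriteMap name="AllowedStagingIPs" defaultValue="0">
          <add key="203.0.113.10" value="1" />
          <add key="203.0.113.11" value="1" />
        </rewriteMap>
      </rewriteMaps>
    </rewrite>

Because the rule is conditional on {HTTP_HOST}, www.yoursite.com on the same site is untouched, which is exactly the flexibility that the built-in IP Restrictions feature lacks.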

    Read the article

  • Better documentation for tasks waiting on resources

    - by SQLOS Team
    The sys.dm_os_waiting_tasks DMV contains a wealth of useful information about tasks waiting on a resource, but until now detailed information about the resource being consumed - sys.dm_os_waiting_tasks.resource_description - hasn't been documented, apart from a rather self-evident "Description of the resource that is being consumed." Thanks to a recent Connect suggestion this column will get more information added. Here is a summary of the possible values that can appear in this column. Note this information is current for SQL Server 2008 R2 and Denali:

Thread-pool resource owner:
• threadpool id=scheduler<hex-address>

Parallel query resource owner:
• exchangeEvent id={Port|Pipe}<hex-address> WaitType=<exchange-wait-type> nodeId=<exchange-node-id>

<exchange-wait-type> can be one of the following:
• e_waitNone
• e_waitPipeNewRow
• e_waitPipeGetRow
• e_waitSynchronizeConsumerOpen
• e_waitPortOpen
• e_waitPortClose
• e_waitRange

Lock resource owner:
• <type-specific-description> id=lock<lock-hex-address> mode=<mode> associatedObjectId=<associated-obj-id>

<type-specific-description> can be:
• For DATABASE: databaselock subresource=<databaselock-subresource> dbid=<db-id>
• For FILE: filelock fileid=<file-id> subresource=<filelock-subresource> dbid=<db-id>
• For OBJECT: objectlock lockPartition=<lock-partition-id> objid=<obj-id> subresource=<objectlock-subresource> dbid=<db-id>
• For PAGE: pagelock fileid=<file-id> pageid=<page-id> dbid=<db-id> subresource=<pagelock-subresource>
• For KEY: keylock hobtid=<hobt-id> dbid=<db-id>
• For EXTENT: extentlock fileid=<file-id> pageid=<page-id> dbid=<db-id>
• For RID: ridlock fileid=<file-id> pageid=<page-id> dbid=<db-id>
• For APPLICATION: applicationlock hash=<hash> databasePrincipalId=<role-id> dbid=<db-id>
• For METADATA: metadatalock subresource=<metadata-subresource> classid=<metadatalock-description> dbid=<db-id>
• For HOBT: hobtlock hobtid=<hobt-id> subresource=<hobt-subresource> dbid=<db-id>
• For ALLOCATION_UNIT: allocunitlock hobtid=<hobt-id> subresource=<alloc-unit-subresource> dbid=<db-id>

<mode> can be: Sch-S, Sch-M, S, U, X, IS, IU, IX, SIU, SIX, UIX, BU, RangeS-S, RangeS-U, RangeI-N, RangeI-S, RangeI-U, RangeI-X, RangeX-S, RangeX-U, RangeX-X

External resource owner:
• External ExternalResource=<wait-type>

Generic resource owner:
• TransactionMutex TransactionInfo Workspace=<workspace-id>
• Mutex
• CLRTaskJoin
• CLRMonitorEvent
• CLRRWLockEvent
• resourceWait

Latch resource owner:
• <db-id>:<file-id>:<page-in-file>
• <GUID>
• <latch-class> (<latch-address>)

Further Information:
Slava Oks's weblog: sys.dm_os_waiting_tasks
Informit.com: Identifying Blocking Using sys.dm_os_waiting_tasks - Ken Henderson

- Guy
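
A quick way to see these descriptions in practice is to query the DMV directly while a workload is running (a minimal sketch):

    -- List waiting tasks alongside the resource they are waiting on
    SELECT session_id,
           wait_type,
           wait_duration_ms,
           blocking_session_id,
           resource_description   -- the column documented above
    FROM sys.dm_os_waiting_tasks
    WHERE session_id IS NOT NULL
    ORDER BY wait_duration_ms DESC;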

    Read the article

  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer’s enemy. Computers are designed with heat dispersion and ventilation in mind so they don’t overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications. If there’s a problem with your computer’s cooling system, an excess of heat could even physically damage its components.

Is Your Computer Overheating?

When using a typical computer in a typical way, you shouldn’t have to worry about overheating at all. However, if you’re encountering system instability issues like abrupt shut downs, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating. This can happen for several reasons: your computer’s case may be full of dust, a fan may have failed, something may be blocking your computer’s vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end.

Monitoring Your Computer’s Temperature

First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, be sure to check your computer’s documentation — or its CPU or graphics card specifications — and ensure you know the temperature ranges your hardware can handle.

You can monitor your computer’s temperatures in a variety of different ways. First, you may have a way to monitor temperature that is already built into your system. You can often view temperature values in your computer’s BIOS or UEFI settings screen. This allows you to quickly see your computer’s temperature if Windows freezes or blue-screens on you — just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOSes or UEFI screens will display this information, but it is very common.

There are also programs that will display your computer’s temperature. Such programs just read the sensors inside your computer and show you the temperature value they report, so there are a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information. (If you prefer the command line, a small PowerShell sketch appears at the end of this article.)

Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as the temperature of your hard drive, but these components will generally only overheat if it becomes extremely hot in the computer’s case; they shouldn’t generate too much heat on their own.

If you think your computer may be overheating, don’t just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime95, playing a PC game, or running a graphical benchmark. Monitor the computer’s temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while?

Preventing Your Computer From Overheating

If your computer is overheating, here are some things you can do about it:

Dust Out Your Computer’s Case: Dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer’s case occasionally to prevent dust build-up. Unfortunately, it’s often more difficult to dust out overheating laptops.

Ensure Proper Ventilation: Put the computer in a location where it can properly ventilate itself. If it’s a desktop, don’t push the case up against a wall so that the computer’s vents become blocked, and don’t leave it near a radiator or heating vent. If it’s a laptop, be careful not to block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can’t get rid of.

Check if Fans Are Running: If you’re not sure why your computer started overheating, open its case and check that all the fans are running. It’s possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow.

Tune Up Heat Sinks: If your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink, re-apply new thermal paste, and then reseat the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste.

This is all often much more difficult when it comes to laptops, which generally aren’t designed to be user-serviceable. That can lead to trouble if the laptop becomes filled with dust and needs to be cleaned out, especially if the laptop was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop.

Overheating is a definite danger when overclocking your CPU or graphics card. Overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you’ve overclocked your hardware and it has started to overheat — well, throttle back the overclock!

Image Credit: Vinni Malek on Flickr
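
As promised above, here is the command-line alternative to the monitoring tools. Many Windows machines expose an ACPI thermal zone through WMI; this PowerShell sketch reads it (an assumption: not all firmware implements this sensor, and it may report a motherboard zone rather than the CPU cores):

    # Read ACPI thermal zone temperatures (run from an elevated prompt)
    Get-CimInstance -Namespace root/wmi -ClassName MSAcpi_ThermalZoneTemperature |
      ForEach-Object {
        # CurrentTemperature is in tenths of a kelvin; convert to Celsius
        [PSCustomObject]@{
          Zone    = $_.InstanceName
          Celsius = [math]::Round($_.CurrentTemperature / 10 - 273.15, 1)
        }
      }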

    Read the article

  • How do you turn a cube into a sphere?

    - by Tom Dalling
    I'm trying to make a quad sphere based on an article, which shows screenshots of the intended result. I can generate a cube correctly, but when I convert all the points according to this formula (from the page linked above):

x = x * sqrtf(1.0 - (y*y/2.0) - (z*z/2.0) + (y*y*z*z/3.0));
y = y * sqrtf(1.0 - (z*z/2.0) - (x*x/2.0) + (z*z*x*x/3.0));
z = z * sqrtf(1.0 - (x*x/2.0) - (y*y/2.0) + (x*x*y*y/3.0));

the edges of the cube still poke out too far. The cube ranges from -1 to +1 on all axes, like the article says. Any ideas what is wrong?
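
One thing worth double-checking here (a hedged guess, since the surrounding code isn't shown): the three assignments above modify x in place before y and z are computed, so the second and third lines mix mapped and unmapped coordinates. A minimal C++ sketch that computes all three outputs from copies of the original values:

    #include <cmath>

    // Map a point on the surface of the [-1,1] cube onto the unit sphere.
    // Copy the inputs first so every output uses the ORIGINAL coordinates,
    // instead of updating x, y, z in place.
    void cubeToSphere(float& x, float& y, float& z) {
        const float cx = x, cy = y, cz = z;
        x = cx * std::sqrt(1.0f - cy*cy/2.0f - cz*cz/2.0f + cy*cy*cz*cz/3.0f);
        y = cy * std::sqrt(1.0f - cz*cz/2.0f - cx*cx/2.0f + cz*cz*cx*cx/3.0f);
        z = cz * std::sqrt(1.0f - cx*cx/2.0f - cy*cy/2.0f + cx*cx*cy*cy/3.0f);
    }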

    Read the article

  • Resources on how to relate structured and semi- / un-structured information

    - by Fritz Meissner
    I don't have a great background in information organisation / retrieval, but I know of a few ways of dealing with the problem. For structured information, it's possible to go OOish - everything "has-a" or "has-many" something else, and you navigate the graph to find relationships between things. For unstructured information, you have techniques like text search and tagging. What resources - articles or books - are there that summarise the CS theory behind these techniques or could introduce me to others? I'm developing a system that needs to handle capture and retrieval of information that ranges from necessarily unstructured (advice about X) to structured (list of Xs that relate to Ys) to a combination (Ys that relate to the advice about X) and I'd like to get some insight into how to do it properly.

    Read the article

  • Vdpau performance in Precise with Unity 3d

    - by bowser
    vdpau seems to be broken in Precise under Unity 3D. CPU usage ranges around 50-70% for 1080p movies, while the same movies use around 5-10% in Natty with vdpau enabled (under Unity 3D). The card is an Nvidia G105M. It doesn't seem to be an Nvidia driver problem, because in GNOME Shell everything works as expected, and I have tried different versions of the Nvidia drivers (295.20, 295.33, 295.40 and the latest 302.xx from xorg-edgers). The results are all the same: it works in GNOME Shell but not in Unity 3D. Disabling "Sync to VBlank" helps if the movie is not in full-screen mode, but it doesn't help for full screen. I have searched around and haven't found much info. I am wondering if others are experiencing the same problem and if there are some known workarounds that I have missed. Unity 3D is otherwise very nice in Precise, but this is a show-stopping issue for me (literally). Thanks. I have filed a bug here: https://bugs.launchpad.net/unity/+bug/993397
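
For anyone else trying to reproduce this, one way to confirm whether VDPAU is actually being engaged (a sketch; package names and player flags vary by distribution and player):

    # Check that the driver exposes VDPAU decode surfaces at all
    vdpauinfo | head -n 20

    # Play a clip with mplayer, forcing VDPAU output and H.264 GPU decode
    # (the trailing comma lets mplayer fall back for other codecs)
    mplayer -vo vdpau -vc ffh264vdpau, sample-1080p.mkv

    # Watch CPU usage in another terminal while the clip plays
    top -d 1

If CPU usage stays high only under Unity 3D with the same command line, that points at the compositor rather than the decode path.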

    Read the article

  • Can not print after upgrading from 12.x to 14.04

    - by user318889
    After upgrading from V12.04 to V14.04 I am not able to print. I am using an HP LaserJet 400 M451dn, connected via USB. The printer troubleshooter told me that there is no solution to the problem. Below is the relevant part of the advanced diagnostic output (due to limited space I cut the output!). Can anybody tell me what is going wrong?

Page 1 (Scheduler not running?): {'cups_connection_failure': False}
Page 2 (Is local server publishing?): {'local_server_exporting_printers': False}
Page 3 (Choose printer): {'cups_queue': u'HP-LaserJet-400-color-M451dn', 'cups_queue_listed': True}
Page 4 (Check printer sanity): {'cups_device_uri_scheme': u'hp', 'device-uri': u'hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670', 'printer-make-and-model': u'HP LJ 300-400 color M351-M451 Postscript (recommended)', 'printer-state': 4, 'printer-state-reasons': [u'none'], 'cups_printer_remote': False, ...}

hp-info (HPLIP 3.14.6) reports, among the device parameters:

    error-state   101
    status-code   5002

Status History (most recent first):

    08/21/14 00:07:25   5012   Device communication error   richard   0
    08/20/14 13:42:44   500    Started a print job          richard   4214

error: Unable to communicate with device (code=12): hp:/usb/HP_LaserJet_400_color_M451dn?serial=CNFF308670
error: Error opening device (Device not found).

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks how testing isn't enough. E.g., he points out the fact that it would be impossible to test a multiplication function f(x,y) = x*y for any large values of x and y across the entire ranges of x and y. My question concerns his misc. remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens or if there are any statistics on this?

    Read the article

  • Manipulating Perlin Noise

    - by Numeri
    I've been learning about Procedurally Generated Content lately (in particular, Perlin noise). Perlin noise works great for making things like landscapes, height maps, and stuff like that. But now I am trying to generate structures more like mountain ranges (in 2D, as 3D would be way over my head right now) or underground veins of ores. I can't manage to manipulate Perlin noise to do this. Making a cut-off point (i.e. using only the tops of the 'mountains' of a heightmap) wouldn't work, because I would get lumps of mountains/veins. Any suggestions? Thanks, Numeri
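
One standard technique worth illustrating here (as an illustration, not necessarily the answer the poster needs): "ridged" noise, which folds the noise about zero so values form long, connected crests rather than round blobs. A self-contained C++ sketch with a stand-in noise function (a real implementation would substitute true Perlin noise):

    #include <cmath>
    #include <cstdio>

    // Stand-in for a real Perlin noise function returning values in [-1, 1].
    // (Substitute an actual Perlin implementation here.)
    float noise2d(float x, float y) {
        return std::sin(x * 1.7f + std::cos(y * 2.3f)) * std::cos(y * 1.3f);
    }

    // Ridged noise: folding |noise| about zero turns smooth blobs into
    // sharp, connected crests, useful for mountain ranges or ore veins.
    float ridged(float x, float y) {
        return 1.0f - std::fabs(noise2d(x, y));  // peaks where noise crosses zero
    }

    int main() {
        // Print a tiny ASCII map: '#' marks the high crests.
        for (int row = 0; row < 16; ++row) {
            for (int col = 0; col < 48; ++col) {
                float v = ridged(col * 0.15f, row * 0.15f);
                std::putchar(v > 0.85f ? '#' : '.');
            }
            std::putchar('\n');
        }
        return 0;
    }

Because the crests trace the zero-contours of the underlying noise, they naturally form connected ridge lines instead of the isolated lumps a simple height cut-off produces.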

    Read the article

  • Sanity checks vs file sizes

    - by Richard Fabian
    In your game assets do you make room for explicit sanity checks, or do you have some generally expected bounds which you assert? I've been thinking about how we compress data and thought that it's much better to have the former, and less of the latter. If your data can exceed your normal valid ranges, but exceeding them is an error, then surely that implies you're not compressing the data well enough? What do you do to find out if your data is compressed as far as it can be, and what do you use to ensure your data isn't corrupted and to ensure it's an official release?

EDIT: I'm not interested in sanity checking the file size. I'm interested in how you manage your sanity checks, and whether the extra size they cost comes from explicit check data, or from deliberately leaving data members wide enough that out-of-range values are representable, so corruption can be detected merely by looking at the asset in memory after loading.
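
To make the two approaches concrete, here is a small illustrative C++ sketch (the asset layout, magic value, and bounds are all invented for the example):

    #include <cassert>
    #include <cstdint>
    #include <numeric>
    #include <vector>

    // Approach 1: explicit check data carried in the file.
    struct AssetHeader {
        uint32_t magic;     // invented sentinel value, e.g. 0x41535354 ("ASST")
        uint32_t checksum;  // additive checksum of the payload bytes
    };

    uint32_t computeChecksum(const std::vector<uint8_t>& payload) {
        return std::accumulate(payload.begin(), payload.end(), 0u);
    }

    bool explicitCheck(const AssetHeader& h, const std::vector<uint8_t>& payload) {
        return h.magic == 0x41535354u && h.checksum == computeChecksum(payload);
    }

    // Approach 2: no extra bytes; fields are stored wider than their valid
    // range, and corruption shows up as out-of-bounds values after loading.
    struct Tuning {
        float   gravity;     // valid range (invented): [0, 100)
        int32_t maxEnemies;  // valid range (invented): [1, 1024]
    };

    void boundsCheck(const Tuning& t) {
        assert(t.gravity >= 0.0f && t.gravity < 100.0f);
        assert(t.maxEnemies >= 1 && t.maxEnemies <= 1024);
    }

The tension the question describes falls out of the second struct: if maxEnemies can never exceed 1024, storing it in a full int32_t leaves slack that an ideal compressor would have squeezed out, and that slack is exactly what makes the assert able to catch corruption.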

    Read the article
