Search Results

Search found 1302 results on 53 pages for 'async'.

Page 23 of 53

  • jquery.ajax returns content-type "html" on iis while it returns content-type "json" on local host

    - by Sridhar
    Hi, I am using the jQuery.ajax function to make an ajax call to a page method in ASP.NET. I specifically set the content type to "application/json; charset=utf-8". When I look at the response in Firebug it says the content type is "html". Following is the code for my ajax call:

        $.ajax({
            async: asyncVal,
            type: "POST",
            url: url + '/' + webMethod,
            data: dataPackage,
            contentType: "application/json; charset=UTF-8",
            dataType: "json",
            error: errorFunction,
            success: successFunction
        });
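    For context, a sketch of the kind of server-side page method such a call assumes is shown below; the page and method names are hypothetical, since the question does not show the code-behind. Page methods must be declared public static and marked with [WebMethod], and parameter names must match the property names in the posted JSON. When a request like this comes back as an HTML page instead of JSON, a common cause is that the request never reached the page method at all (for example, the ASP.NET AJAX ScriptModule registrations are missing from web.config on the IIS server), so confirming the server side matches this shape is a reasonable first step.

        // Hypothetical code-behind for the page the ajax call posts to.
        // Page methods must be public and static; parameter names must match
        // the keys in the JSON payload sent from the client.
        using System.Web.Services;
        using System.Web.Script.Services;

        public partial class SomePage : System.Web.UI.Page
        {
            [WebMethod]
            // JSON is already the default response format for [ScriptMethod];
            // it is spelled out here only to make the intent explicit.
            [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
            public static string DoWork(string someValue)
            {
                return "ok";
            }
        }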

    Read the article

  • Is this a valid url parameter in jquery.ajax()?

    - by udaya
    Is this a valid url parameter in jquery.ajax()?

        <script type="text/javascript">
            $(document).ready(function() {
                getRecordspage();
            });

            function getRecordspage() {
                $.ajax({
                    type: "POST",
                    url: "http://localhost/codeigniter_cup_myth/index.php/adminController/mainAccount",
                    data: "",
                    contentType: "application/json; charset=utf-8",
                    global: false,
                    async: false,
                    dataType: "json",
                    success: function(jsonObj) { alert(jsonobj); }
                });
            }
        </script>

    The url doesn't seem to go to my controller function...

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling services, in a large distributed system a service may invoke other services, in which case it needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but when you have more than 10 services it becomes one. For example, my current product has around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

    Benefit of a Discovery Service

    Since almost all of my services need to invoke at least one other service, it is difficult to make sure all service endpoints are configured correctly in every service, and it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary that stores the relationship between contracts and endpoints for every service. With a discovery service, when service X wants to invoke service Y, it simply asks the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X uses one of them to send its request. When a service changes its endpoint address, all it needs to do is update its record in the discovery service, and everyone else will learn the new endpoint. WCF 4.0 Discovery supports both a managed proxy discovery mode and an ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria. All services that enabled the discovery behavior receive this message, and only the matching services send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service. For more information about the ad-hoc mode please refer to MSDN.

    Service Announcement and Probe

    The main functionality of the discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to the discovery service, and the discovery service finds the endpoint and responds. Sometimes the contract and endpoint are not enough; the criteria can also contain versioning and extension attributes. In this post I will only cover the case that includes the contract and endpoint. When a client (or a service that needs to invoke another service) needs to connect to a target service, it first calls the discovery service through the "Probe" operation with its criteria. Basically the criteria contains the contract type name of the target service. The discovery service then searches its endpoint repository by that criteria. The repository might be a database, a distributed cache or a flat XML file. If there is a match, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe".
    Finally the client receives the discovery endpoint metadata and uses that endpoint to connect to the target service. Besides the probe, the discovery service is also responsible for knowing when a new service becomes available (goes online) and when a service stops (goes offline). This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service includes:

    1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
    2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
    3. An endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata.

    WCF 4.0 discovery covers all these features in its infrastructure classes.

    Discovery Service in WCF 4.0

    WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports both ad-hoc and managed proxy modes. For the case described in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all calls to the discovery service are made asynchronously, for better scalability and performance.

    1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We add the endpoint information to the repository here.
    2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We remove the endpoint information from the repository here.
    3. OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We look for the proper endpoints by matching the client's criteria against the repository here.
    4. OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Unlike find, resolve returns exactly one service endpoint metadata to the client. In our example we will NOT implement this method.

    Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is available through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime, and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it is simple and easy to learn and deploy, and it utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention, hopefully someone out there can shed some light on this. This question focuses exclusively on NFS4 without Kerberos etc. 1. Exports There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5): Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only. What does "subsequent exports on that line only" mean? 1.2 fsid=0 not required anymore? I was searching for fsid when I found a comment on the linux-nfs list stating fsid=0 is not required anymore. Now I'm just confused, do I need it with nfs4 or not?! 2. Non-exported directory still mountable Say I have the following tree: /exp /exp/users /exp/distr /exp/distr/archlinux /exp/distr/debian And I have the following entries in this fstab entry: /dev/disk/by-label/users /mnt/users ext4 defaults 0 0 /dev/disk/by-label/distr /mnt/distr ext4 defaults 0 0 /mnt/users /exp/users none bind 0 0 /mnt/distr /exp/distr none bind 0 0 And my exports is exactly this: /exp 192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash) /exp/distr 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash) And exportfs -arv shows: exporting 192.168.1.0/24:/exp/distr exporting 192.168.1.0/24:/exp Then why am I able to do this and get no error on a client: mount -t nfs4 server:/exp/users /tmp/test Even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write to there goes to the underlying directory of /exp/users which can be seen when I umount /exp/users; ls /exp/users.. 3. The odd case of showmount -d server As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients, or stale entries in /var/lib/nfs/rmtab, as can be read: The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receivng a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export. (...) Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab. After reading this I surely wonder: Isn't it terribly insecure to just expose this type of client information; Aren't unaware server admins bound to have an rmtab with a lot of stale clients; Is this the reason that clients that mount nfs4 directories with mount -v get to see output like "nothing was mounted" even though something was mounted? I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment.. :)

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine: .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst.

    What does 'Replacement' mean?

    When you install .NET 4.5, your .NET 4.0 assemblies in the \Windows\.NET Framework\V4.0.30319 folder are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example). The following screen shot shows system.dll on my test machine running .NET 4.5 (left) and my production laptop running stock .NET 4.0 (right). Clearly they are different files with a difference in file sizes (interestingly, the 4.5 version is actually smaller). That's not all. If you query the runtime version when .NET 4.5 is installed with Environment.Version you still get: 4.0.30319. If you open the properties of the System.dll assembly in .NET 4.5 you'll also see that the file version is left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 – to the application they appear to be the same runtime version. And that is what Microsoft intends here: .NET 4.5 is intended as an in-place upgrade.

    Compile to 4.5, run on 4.0 – not quite!

    You can compile an application for .NET 4.5 and run it on the 4.0 runtime – that is, until you hit a new feature that doesn't exist on 4.0, at which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0 but has a few of the new features of .NET 4.5, like async/await, buried deep in the bowels of the application where they only fire occasionally. .NET will happily start your application and run all the 4.0 code fine, until it hits that 4.5 code – and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5, of course, and that should work without much fanfare.

    Different than .NET 3.0/3.5

    Note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5, but you are in effect referencing the same set of assemblies for both regardless of which version you use.
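    Returning to the point above about telling 4.0 and 4.5 apart: for released builds, one documented approach is to read the "Release" DWORD that .NET 4.5 setup writes to the registry. The sketch below assumes the final-release registry layout and the RTM threshold value, neither of which necessarily applies to the beta builds this post was written against.

        // A minimal sketch (not from the original post): released 4.x updates write a
        // "Release" DWORD under the v4\Full key. A value of 378389 or higher indicates
        // .NET 4.5 RTM or later; the value is absent on a plain .NET 4.0 install.
        using Microsoft.Win32;

        static bool IsNet45OrNewer()
        {
            using (var ndpKey = RegistryKey
                .OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32)
                .OpenSubKey(@"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
            {
                // GetValue returns null when the key or value is missing (stock .NET 4.0).
                object release = ndpKey == null ? null : ndpKey.GetValue("Release");
                return release != null && (int)release >= 378389;
            }
        }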
What's different is the compiler used to compile and link your code: compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications. Good news – Bad news: Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework. No two updates have been the same. Clearly updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update, and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break, so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that code to work on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade. Resources: http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
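    A minimal sketch of a detection workaround (not from the original article): since Environment.Version reports 4.0.30319 either way, one can probe for a type that only ships with the 4.5 mscorlib. It assumes System.Reflection.ReflectionContext is such a type; treat it as an illustration of the detection problem rather than a documented contract.

    using System;

    static class RuntimeProbe
    {
        // Returns true when the in-place .NET 4.5 update appears to be installed.
        // Environment.Version is 4.0.30319 on both runtimes, so we probe for a type
        // that (as far as we know) only exists in the 4.5 version of mscorlib.
        public static bool LooksLikeNet45()
        {
            return Type.GetType("System.Reflection.ReflectionContext", false) != null;
        }

        static void Main()
        {
            Console.WriteLine("Environment.Version: " + Environment.Version);
            Console.WriteLine("Looks like .NET 4.5: " + LooksLikeNet45());
        }
    }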

    Read the article

  • Consuming the Amazon S3 service from a Win8 Metro Application

    - by cibrax
    As with many of the existing Http APIs for Cloud Services, AWS also provides a set of different platform SDKs for hiding many of the complexities present in the APIs. While there is a platform SDK for .NET, which is open source and available in C#, that SDK does not work in Win8 Metro Applications because of the changes introduced in WinRT. WinRT offers a completely different set of APIs for doing I/O operations such as making http calls or using cryptography for signing or encrypting data, two aspects that are absolutely necessary for consuming AWS. All the I/O APIs available as part of WinRT are asynchronous, and they use the TPL model for .NET applications (HTML and JavaScript Metro applications use a model based on promises, which is a similar concept). In the case of S3, the http Authorization header is used for two purposes: authenticating clients and making sure the messages were not altered while they were in transit. For doing that, it uses a signature or hash of the message content and some of the headers using a symmetric key (that's just one of the available mechanisms). Windows Azure, for example, also uses the same mechanism in many of its APIs. There are three challenges that any developer working for the first time in Metro will have to face to consume S3: the new WinRT APIs, their asynchronous nature, and the complexity introduced by generating the Authorization header. Having said that, I decided to write this post with some of the gotchas I found while trying to consume this Amazon service.

    1. Generating the signature for the Authorization header

    All the cryptography APIs in WinRT are available under the Windows.Security.Cryptography namespace. Many of the operations available in these APIs use the concept of buffers (IBuffer) for representing a chunk of binary data. As you will see in the example below, these buffers are mainly generated with the static methods of the WinRT class CryptographicBuffer, available as part of the namespace previously mentioned.

    private string DeriveAuthToken(string resource, string httpMethod, string timestamp)
    {
        var stringToSign = string.Format("{0}\n" +
            "\n" +
            "\n" +
            "\n" +
            "x-amz-date:{1}\n" +
            "/{2}/",
            httpMethod, timestamp, resource);

        var algorithm = MacAlgorithmProvider.OpenAlgorithm("HMAC_SHA1");
        var keyMaterial = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(this.secret));
        var hmacKey = algorithm.CreateKey(keyMaterial);
        var signature = CryptographicEngine.Sign(
            hmacKey,
            CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(stringToSign)));

        return CryptographicBuffer.EncodeToBase64String(signature);
    }

    The algorithm that determines the information or content you need to use for generating the signature is very well described in the AWS documentation. In this case, this method is generating the signature required for creating a new bucket. An HmacSha1 hash is computed using a secret or symmetric key provided by AWS in the management console.

    2. Sending an Http Request to the S3 service

    WinRT also ships with the System.Net.Http.HttpClient that was first introduced some months ago with ASP.NET Web API. This client provides a rich interface on top of the traditional HttpWebRequest class, and also solves some of the limitations found in it. There are a few things that don't work with a raw HttpWebRequest, such as setting the Host header, which is something absolutely required for consuming S3. Also, HttpClient is friendlier for unit testing, as it receives an HttpMessageHandler in its constructor that can be faked to emulate a real http call. This is how the code for consuming the service with HttpClient looks:

    public async Task<S3Response> CreateBucket(string name, string region = null, params string[] acl)
    {
        var timestamp = string.Format("{0:r}", DateTime.UtcNow);
        var auth = DeriveAuthToken(name, "PUT", timestamp);

        var request = new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/");
        request.Headers.Host = string.Format("{0}.s3.amazonaws.com", name);
        request.Headers.TryAddWithoutValidation("Authorization", "AWS " + this.key + ":" + auth);
        request.Headers.Add("x-amz-date", timestamp);

        var client = new HttpClient();
        var response = await client.SendAsync(request);

        return new S3Response
        {
            Succeed = response.StatusCode == HttpStatusCode.OK,
            Message = (response.Content != null) ? await response.Content.ReadAsStringAsync() : null
        };
    }

    You will notice a few additional things in this code. By default, HttpClient validates the values for some well-known headers, and Authorization is one of them. It won't allow you to set a value with ":" in it, which is something that S3 expects. However, that's not a problem at all, as you can skip the validation by using the TryAddWithoutValidation method. Also, the code relies heavily on the new async and await keywords to consume the asynchronous calls with code that reads sequentially. In case you want to unit test this code and fake the call to the real S3 service, you would have to modify it to inject a custom HttpMessageHandler into the HttpClient. The following implementation illustrates this concept:

    public class FakeHttpMessageHandler : HttpMessageHandler
    {
        HttpResponseMessage response;

        public FakeHttpMessageHandler(HttpResponseMessage response)
        {
            this.response = response;
        }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            var tcs = new TaskCompletionSource<HttpResponseMessage>();
            tcs.SetResult(response);
            return tcs.Task;
        }
    }

    You can use this handler for injecting any response while you are unit testing the code.
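    As a small usage sketch (not part of the original post; the canned response body, the test-method shape and the usual System.Net, System.Net.Http, System.Threading.Tasks and System.Diagnostics usings are assumptions), the fake handler can simply be passed to the HttpClient constructor so that no request ever leaves the machine:

    public async Task SendAsync_returns_canned_response()
    {
        var canned = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("<CreateBucketResponse />") // hypothetical canned body
        };

        // HttpClient accepts an HttpMessageHandler in its constructor, so the fake
        // handler short-circuits the network and hands back the canned response.
        var client = new HttpClient(new FakeHttpMessageHandler(canned));

        var response = await client.SendAsync(
            new HttpRequestMessage(HttpMethod.Put, "http://s3.amazonaws.com/"));

        Debug.Assert(response.StatusCode == HttpStatusCode.OK);
    }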

    Read the article

  • can't install psycopg2 in my env on mac os x lion

    - by Alexander Ovchinnikov
    I tried install psycopg2 via pip in my virtual env, but got this error: ld: library not found for -lpq (full log here: http://pastebin.com/XdmGyJ4u ) I tried install postgres 9.1 from .dmg and via port, (gksks)iMac-Alexander:~ lorddaedra$ locate libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq/libpq-fs.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq-events.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq-fe.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/internal/libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/internal/libpq/pqcomm.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/internal/libpq-int.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/auth.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/be-fsstubs.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/crypt.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/hba.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/ip.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/libpq-be.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/libpq-fs.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/libpq.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/md5.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/pqcomm.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/pqformat.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/pqsignal.h /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.5.3.dylib /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.5.dylib /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.a /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.dylib /Library/PostgreSQL/9.1/doc/postgresql/html/install-windows-libpq.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-async.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-build.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-cancel.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-connect.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-control.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-copy.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-envars.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-events.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-example.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-exec.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-fastpath.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-ldap.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-misc.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-notice-processing.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-notify.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-pgpass.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-pgservice.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-ssl.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-status.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-threading.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq.html /Library/PostgreSQL/9.1/include/libpq /Library/PostgreSQL/9.1/include/libpq/libpq-fs.h /Library/PostgreSQL/9.1/include/libpq-events.h /Library/PostgreSQL/9.1/include/libpq-fe.h /Library/PostgreSQL/9.1/include/postgresql/internal/libpq 
/Library/PostgreSQL/9.1/include/postgresql/internal/libpq/pqcomm.h /Library/PostgreSQL/9.1/include/postgresql/internal/libpq-int.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq /Library/PostgreSQL/9.1/include/postgresql/server/libpq/auth.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/be-fsstubs.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/crypt.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/hba.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/ip.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/libpq-be.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/libpq-fs.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/libpq.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/md5.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/pqcomm.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/pqformat.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/pqsignal.h /Library/PostgreSQL/9.1/lib/libpq.5.4.dylib /Library/PostgreSQL/9.1/lib/libpq.5.dylib /Library/PostgreSQL/9.1/lib/libpq.a /Library/PostgreSQL/9.1/lib/libpq.dylib /Library/PostgreSQL/9.1/lib/postgresql/libpqwalreceiver.so /Library/PostgreSQL/9.1/pgAdmin3.app/Contents/Frameworks/libpq.5.dylib /Library/PostgreSQL/psqlODBC/lib/libpq.5.4.dylib /Library/PostgreSQL/psqlODBC/lib/libpq.5.dylib /Library/PostgreSQL/psqlODBC/lib/libpq.dylib /Library/WebServer/Documents/postgresql/html/install-windows-libpq.html /Library/WebServer/Documents/postgresql/html/libpq-async.html /Library/WebServer/Documents/postgresql/html/libpq-build.html /Library/WebServer/Documents/postgresql/html/libpq-cancel.html /Library/WebServer/Documents/postgresql/html/libpq-connect.html /Library/WebServer/Documents/postgresql/html/libpq-control.html /Library/WebServer/Documents/postgresql/html/libpq-copy.html /Library/WebServer/Documents/postgresql/html/libpq-envars.html /Library/WebServer/Documents/postgresql/html/libpq-events.html /Library/WebServer/Documents/postgresql/html/libpq-example.html /Library/WebServer/Documents/postgresql/html/libpq-exec.html /Library/WebServer/Documents/postgresql/html/libpq-fastpath.html /Library/WebServer/Documents/postgresql/html/libpq-ldap.html /Library/WebServer/Documents/postgresql/html/libpq-misc.html /Library/WebServer/Documents/postgresql/html/libpq-notice-processing.html /Library/WebServer/Documents/postgresql/html/libpq-notify.html /Library/WebServer/Documents/postgresql/html/libpq-pgpass.html /Library/WebServer/Documents/postgresql/html/libpq-pgservice.html /Library/WebServer/Documents/postgresql/html/libpq-ssl.html /Library/WebServer/Documents/postgresql/html/libpq-status.html /Library/WebServer/Documents/postgresql/html/libpq-threading.html /Library/WebServer/Documents/postgresql/html/libpq.html /opt/local/include/postgresql90/internal/libpq /opt/local/include/postgresql90/internal/libpq/pqcomm.h /opt/local/include/postgresql90/internal/libpq-int.h /opt/local/include/postgresql90/libpq /opt/local/include/postgresql90/libpq/libpq-fs.h /opt/local/include/postgresql90/libpq-events.h /opt/local/include/postgresql90/libpq-fe.h /opt/local/include/postgresql90/server/libpq /opt/local/include/postgresql90/server/libpq/auth.h /opt/local/include/postgresql90/server/libpq/be-fsstubs.h /opt/local/include/postgresql90/server/libpq/crypt.h /opt/local/include/postgresql90/server/libpq/hba.h /opt/local/include/postgresql90/server/libpq/ip.h /opt/local/include/postgresql90/server/libpq/libpq-be.h /opt/local/include/postgresql90/server/libpq/libpq-fs.h 
/opt/local/include/postgresql90/server/libpq/libpq.h /opt/local/include/postgresql90/server/libpq/md5.h /opt/local/include/postgresql90/server/libpq/pqcomm.h /opt/local/include/postgresql90/server/libpq/pqformat.h /opt/local/include/postgresql90/server/libpq/pqsignal.h /opt/local/lib/postgresql90/libpq.5.3.dylib /opt/local/lib/postgresql90/libpq.5.dylib /opt/local/lib/postgresql90/libpq.a /opt/local/lib/postgresql90/libpq.dylib /opt/local/lib/postgresql90/libpqwalreceiver.so /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx/Portfile /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx26 /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx26/Portfile /usr/include/libpq /usr/include/libpq/libpq-fs.h /usr/include/libpq-events.h /usr/include/libpq-fe.h /usr/include/postgresql/internal/libpq /usr/include/postgresql/internal/libpq/pqcomm.h /usr/include/postgresql/internal/libpq-int.h /usr/include/postgresql/server/libpq /usr/include/postgresql/server/libpq/auth.h /usr/include/postgresql/server/libpq/be-fsstubs.h /usr/include/postgresql/server/libpq/crypt.h /usr/include/postgresql/server/libpq/hba.h /usr/include/postgresql/server/libpq/ip.h /usr/include/postgresql/server/libpq/libpq-be.h /usr/include/postgresql/server/libpq/libpq-fs.h /usr/include/postgresql/server/libpq/libpq.h /usr/include/postgresql/server/libpq/md5.h /usr/include/postgresql/server/libpq/pqcomm.h /usr/include/postgresql/server/libpq/pqformat.h /usr/include/postgresql/server/libpq/pqsignal.h /usr/lib/libpq.5.3.dylib /usr/lib/libpq.5.dylib /usr/lib/libpq.a /usr/lib/libpq.dylib How to tell pip to use this lib in /Library/PostgreSQL/9.1/lib/ (or may be in /usr/lib)? or may be install this lib again in my env (i try keep my env isolated from mac as possible)

    Read the article

  • Using SSL on slapd

    - by Warren
    I am setting up slapd to use SSL on Fedora 14. I have the following in my /etc/openldap/slapd.d/cn=config.ldif: olcTLSCACertificateFile: /etc/pki/tls/certs/SSL_CA_Bundle.pem olcTLSCertificateFile: /etc/pki/tls/certs/mydomain.crt olcTLSCertificateKeyFile: /etc/pki/tls/private/mydomain.key olcTLSCipherSuite: HIGH:MEDIUM:-SSLv2 olcTLSVerifyClient: demand and the following in my /etc/sysconfig/ldap: SLAPD_LDAP=no SLAPD_LDAPS=yes In my ldap.conf file, I have BASE dc=mydomain,dc=com URI ldaps://localhost TLS_CACERTDIR /etc/pki/tls/certs TLS_REQCERT allow However, when I connect to the localhost, ldapsearch returns the following: ldap_initialize( <DEFAULT> ) ldap_create Enter LDAP Password: ldap_sasl_bind ldap_send_initial_request ldap_new_connection 1 1 0 ldap_int_open_connection ldap_connect_to_host: TCP localhost:636 ldap_new_socket: 3 ldap_prepare_socket: 3 ldap_connect_to_host: Trying 127.0.0.1:636 ldap_pvt_connect: fd: 3 tm: -1 async: 0 TLS: loaded CA certificate file /etc/pki/tls/certs/978601d0.0 from CA certificate directory /etc/pki/tls/certs. TLS: loaded CA certificate file /etc/pki/tls/certs/b69d4130.0 from CA certificate directory /etc/pki/tls/certs. TLS certificate verification: defer TLS: error: connect - force handshake failure: errno 0 - moznss error -12271 TLS: can't connect: . ldap_err2string ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1) What do I have incorrect?

    Read the article

  • Intel P6100 CPU and Mobile Intel® HM55 Express Chipset

    - by Christopher Painter
    I have an Asus K52F-BBR5 notebook that uses an Intel P6100 (2 GHz, 15x multiplier) and the HM55 Express Chipset. I'm looking to replace its 3GB with 8GB. The Crucial database seems to indicate that a PC3-8500 CAS 7 and a PC3-10666 CAS 9 will both work. I'm not up to date on the latest DDR3 nomenclature and I'm wondering which would provide better performance. The price difference is negligible. Drawing on past experience from many, many years ago I could make an argument for either based on sync/async bus speeds and CAS latency differences, but the truth is I don't know enough about the HM55 chipset to know which would be the correct choice. Does anyone know the answer, or can you point me to information that would help me make the choice? I'm pretty sure the performance difference will be somewhat negligible as well, but I'd still like to make the optimal choice.
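    For a rough back-of-the-envelope comparison (a sketch assuming standard JEDEC DDR3 clocks, not anything HM55-specific), the first-word latency of the two options is nearly identical:

    PC3-8500  = DDR3-1066, ~533 MHz bus clock, CAS 7: 7 / 533 MHz ≈ 13.1 ns
    PC3-10666 = DDR3-1333, ~667 MHz bus clock, CAS 9: 9 / 667 MHz ≈ 13.5 ns

    So if the chipset actually clocks the faster modules at DDR3-1333 they offer more bandwidth at essentially the same access latency, and if it runs them at 1066 they will typically fall back to tighter timings anyway.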

    Read the article

  • Split section of video with ffmpeg

    - by Rob
    I've been trying to get this to work and I'm really close, but something still isn't right. I have a 14-second clip I'm trying to cut out of a longer mp4 video. I got the video to cut to the right place with this command: ffmpeg -ss 00:05:13.0 -i ~/videos/trim_me.mp4 -vcodec h264 -acodec copy -t 00:00:14.0 ~/videos/trimmed.mp4 If I didn't specify -vcodec it started from an "I-Frame" (I guess) and wasn't the right place. The audio starts from that spot as well, so I tried setting -acodec the same way: ffmpeg -ss 00:05:13.0 -i ~/videos/trim_me.mp4 -vcodec h264 -acodec aac -ac 2 -ab 225k -ar 48000 -strict -2 -t 00:00:14.0 ~/videos/trimmed.mp4 which doesn't really help much. Setting -async 1 makes it take longer, and then the audio does match up, but not until 4 seconds into the video. :/ Ideally I'd like a command-line solution without installing anything else.

    Read the article

  • How to enable connection security for WMI firewall rules when using VAMT 2.0?

    - by Ondrej Tucny
    I want to use VAMT 2.0 to install product keys and activate software on remote machines. Everything works fine as long as the ASync-In, DCOM-In, and WMI-In Windows Firewall rules are enabled and the action is set to Allow the connection. However, when I try using Allow the connection if it is secure (regardless of the connection security option chosen), VAMT won't connect to the remote machine. I tried using wbemtest and the error is always “The RPC server is unavailable”, error code 0x800706ba. How do I set up at least some level of connection security for remote WMI access so that VAMT works? I googled for the correct VAMT setup and read the Volume Activation 2.0 Step-by-Step guide, but had no luck finding anything about connection security.

    Read the article

  • Windows7 NFS with linux server

    - by Vitaly
    Hi. I have an Ubuntu server and want to access its web folder (/var/www). What I did: installed nfs-kernel-server, nfs-common and portmap (as in the faq). Set up /etc/exports: /var/www 192.168.1.0/255.255.255.0(rw,no_roow_squash,async,subtree_check) Then: sudo exportfs -ra Then: sudo /etc/init.d/nfs-kernle-server restart I checked if it all works on the same machine: sudo 192.168.1.101:/var/www /mnt/test Then I accessed /mnt/test and saw that all the data was present and everything was ok. Next, I tried to connect this folder to windows7 using the NFS client. First, I checked that linux exported the path successfully: showmount -e 192.168.1.101 /var/www 192.168.1.0/255.255.255.0 All ok, so on to mounting: mount -o anon 192.168.1.101:/var/www z: The console said it all succeeded... but I can't access drive Z (the drive exists in the system and points to the right folder). When I try to access drive Z, Explorer just hangs and then says that the timeout expired. Help me please.

    Read the article

  • FFmpeg audio dont work in converted videos

    - by Juddy Swaft
    NOTICE: when i convert videos via terminal and download them from ftp into pc the audio works fine. I use: if($ext == "avi" && $convert_avi == true) { $convert_source = _VIDEOS_DIR_PATH.$new_name; $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4"; $converted_file = _VIDEOS_DIR_PATH.$conv_name; $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file; echo exec($ffmpeg_command); $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1"; $result = @mysql_query($sql); unlink($convert_source); } This code to convert avi to mp4 ffmpeg concole output: root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4 ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers built on Feb 22 2013 07:18:58 with gcc 4.4.5 configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger - s libavutil 50. 43. 0 / 50. 43. 0 libavcodec 52.123. 0 / 52.123. 0 libavformat 52.111. 0 / 52.111. 0 libavdevice 52. 5. 0 / 52. 5. 0 libavfilter 1. 80. 0 / 1. 80. 0 libswscale 0. 14. 1 / 0. 14. 1 libpostproc 51. 2. 0 / 51. 2. 0 [mp3 @ 0x191d4100] Header missing [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected Input #0, avi, from 'sample.avi': Metadata: encoder : VirtualDubMod 1.5.10.2 (build 2540/release) Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr, Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param: [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0 [libx264 @ 0x191ce5a0] Default settings detected, using medium profile [libx264 @ 0x191ce5a0] using SAR=45/44 [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S [libx264 @ 0x191ce5a0] profile High, level 3.1 [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2 6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off 1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l Output #0, mp4, to 'goodsample.mp4': Metadata: encoder : Lavf52.111.0 Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31 Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop, [?] 
for help [mp3 @ 0x191d4100] Header missing Error while decoding stream #0.1 [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected [mp3 @ 0x191d4100] incomplete frame 9467kB time=00:01:00.32 bitrate=1285.5kbits/ Error while decoding stream #0.1 frame= 1852 fps= 20 q=29.0 Lsize= 9652kB time=00:01:01.72 bitrate=1280.9kbits video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688% frame I:11 Avg QP:16.78 size: 51456 [libx264 @ 0x191ce5a0] frame P:784 Avg QP:20.81 size: 8954 [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06 size: 1659 [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4% [libx264 @ 0x191ce5a0] mb I I16..4: 31.1% 59.8% 9.1% [libx264 @ 0x191ce5a0] mb P I16..4: 1.8% 2.6% 0.2% P16..4: 24.3% 7.0% 4.0 [libx264 @ 0x191ce5a0] mb B I16..4: 0.1% 0.1% 0.0% B16..8: 22.7% 0.8% 0.2 [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6% [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5. [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14% 8% 10% [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27% 5% 7% 7% 6 [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14% 6% 10% 9% 7 [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17% 3% [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4% [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5% 0.2% [libx264 @ 0x191ce5a0] ref B L0: 88.1% 5.5% 6.4% [libx264 @ 0x191ce5a0] ref B L1: 95.7% 4.3% [libx264 @ 0x191ce5a0] kb/s:1209.03 I know there are a couple of errors in there, but I don't know how to fix them. Also, I would be very thankful if someone could help me reduce the video size, but that is not the main problem; the video weighs as much as the original avi, but still.

    Read the article

  • Connmand Equivalent

    - by CurtisS
    What's the connmand equivalent on Fedora 15? I'm trying to connect to the Internet with Fedora 15 but I'm having problems. It can't see my network connection and when I try to run nm-connection-editor I get: (nm-connection-editor:9816): WARNING: **get_all_cb: couldn't retrieve system settings properties ... (nm-connection-editor: 9816): WARNING **: fetch_connections_done: error fetching connections: (32) ... (nm-connection-editor: 9816): GVFS-RemoteVolumeMonitor-WARNING ** cannot connect to the session bus: (nm-connection-editor: 9816) GVFS-RemoteVolumeMonitor-WARNING **: cannot connect to the session bus ... g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.

    Read the article

  • How does syslog-ng handles flush_lines(0) ?

    - by Luke404
    I wanted to make sure my syslog-ng was doing async logging. Reading through the documentation I see the flush_lines() option for file() destinations, if unspecified, will use the global default. Then I see that the global setting defaults to 0 but it doesn't explain what that means. Is it going to do synchronous logging when set to 0? is it going to buffer an unlimited number of lines (flushing just every flush_timeout() number of seconds)? is it going to bite me?

    Read the article

  • How to determine the used size of device associated's buffer

    - by dubbaluga
    Hi, when mounting a device without the "sync" option, e.g. by invoking the following: mount -o async /dev/sdc1 /mnt a buffer is associated with the device to optimize (speed up) read/write operations. Is there a way to determine the size of this buffer? Another question that comes to mind is whether it's possible to find out how much of it is currently in use. This would be useful for estimating the time it would take to "sync" or "umount" slow devices, such as flash-based media. Thanks in advance for your answers, Rainer

    Read the article

  • Nginx request forking

    - by Adam
    Hi, I'm wondering if nginx can "fork" a request. Let's imagine this config: upstream backend { server localhost:8080; ... more servers here } server { location /myloc { FORK-REQUEST http://my-other-url:3135/something proxy_pass http://backend; } } I would like nginx to send a copy of the request to the url specified by FORK-REQUEST and after that to load balance it with the backend servers and return the response to the client. As I don't need the response from FORK-REQUEST it would be best if this request were async so normal processing doesn't have to wait. Is a scenario like this possible?

    Read the article

  • NFS share access - Permission denied

    - by rgngl
    I'm trying to share a directory on my NAS device (WD Mybook WE) with NFS to another machine on my local network. The directory on the NAS device looks like this: drwxr-x--- 15 git git 4096 Nov 17 01:05 git/ And the IDs of the user git on the NAS device look like this: [root@myhost DataVolume]# id git uid=505(git) gid=505(git) I played with many different parameters in the /etc/exports file and this is what I have there currently: /DataVolume/git 192.168.0.20(async,rw,no_root_squash,no_subtree_check) On the client side I have the user git and group git with the same IDs to match the ones on the server. user@myclient:~$ id git uid=505(git) gid=505(git) groups=505(git) I mount the directory with: sudo mount myhost:/DataVolume/git -t nfs git/ and the mounted directory looks like: drwxr-x--- 15 git git 4096 Nov 17 01:05 git After these steps I can't seem to cd to that directory with any user, including git and root. I am getting a Permission denied error. Thanks in advance for any help.

    Read the article

  • ffmpeg error while segmenting

    - by Tommy Ng
    I'm using ffmpeg and segmenter on Ubuntu 10.04 to create the transport stream from flv/h264 video files and then segment the ts segments for ipad streaming. Some ts files show an error with segmenter - Output #0, mpegts, to '29': Stream #0.0: Video: 0x0000, yuv420p, 480x360, q=2-31, 90k tbn, 25 tbc Stream #0.1: Audio: 0x0000, 0 channels, s16 [mpegts @ 0x11f4ac0]sample rate not set Could not write mpegts header to first output file my ffmpeg command for creating the ts file - ffmpeg -i 1.flv -f mpegts -acodec libfaac -ar 48000 -ab 64k -s 480x360 -vcodec libx264 -b 192k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 192k -bufsize 192k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 480:360 -g 30 -async 2 -y 1.ts my segmenter command - segmenter 1.ts 10 1 1.m3u8 path/to/streams/

    Read the article

  • NFS of NAS server blocks in cluster environment

    - by Zardoz
    In our department we have an Iomega NAS (px4-300d) connected to a Supermicro cluster with 5 nodes (12 cores per node). Each node mounts a share on that NAS by using NFS. Unfortunately after some time (several minutes) of permanent read/write operations (from all nodes) the NAS starts to block and a bit later freezes completely. We tried several options of the mount command, but nothing helped (async, intr, wsize, rsize). The NAS itself doesn't allow many options (better to say none). Do you have any recommendation how to integrate a NAS using NFS in a cluster environment?

    Read the article

  • App Pool crashes before loading mscorsvr. How to troubleshoot?

    - by codepoke
    I have an app pool that recycles every 29 hours, per default. It recycles smoothly 9 times out of 10, and I'm pretty sure the recycle itself is good for the app. Once every couple weeks the recycle does not work. The old worker process dies cleanly and the new worker process starts, but will not serve up content. Recycling the app pool again manually works like a charm. The failed worker process stops and dies cleanly and a second new worker process fires up and serves content perfectly. I took a crash dump against the failed worker process prior to recycling it, and DebugDiag found nothing to complain about. I tried to dig a little deeper using WinDBG, but mscorsvr/mscorwks is not loaded yet 15 minutes after the new process started. There are 14 threads running (4 async) and 20 pending client connections, but .NET is not even loaded into the process yet. Any suggestions where to poke and prod to find a root cause on this?

    Read the article

  • Good Choice of Memory for Asus K52F-BBR5

    - by Christopher Painter
    I recently purchased an Asus K52F-BBR5 notebook. It's a basic laptop with an Intel P6100 CPU and the Mobile Intel® HM55 Express Chipset. It came with 3GB of DDR3 SODIMM memory and I'd like to expand it to 8GB. I'm a little confused by DDR3 nomenclature and not up to date on my knowledge of chipsets. I'd like to make a good choice when selecting memory for it. Crucial's database suggests using either a PC3-8500 with CAS 7 or a PC3-10600 with a CAS of 9. Is the 8500 better because of its CAS 7, or will my chipset run the memory async at a higher speed and get better performance? Which would be a better choice for my chipset and CPU? The price difference is negligible.

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >