Search Results

Search found 6945 results on 278 pages for 'azure use cases'.


  • Asynchronously returning hierarchical data using the .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, searching for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found ("better" means closer to the root of C:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report).

    Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | notstarted | complete), and I might base it on System.Collections.Concurrent. Another idea I'm considering is the producer/consumer pattern (which the concurrent collections can handle), however I'm not sure what the objects should "look" like.

    Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. If someone were to literally do a DIR /S as described above, they would need to account for more than one matching file per subdirectory.

    More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion, but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node; it is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found) but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want a report of ALL the nodes that contain this RowKey.
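    One possible shape for such an API, sketched under the producer/consumer idea mentioned above (all type and member names here are hypothetical, and the file predicate is a placeholder for the real content search): the traversal publishes matches, each carrying its depth, into a BlockingCollection, calls CompleteAdding to signal that the traversal is finished, and lets the caller consume matches as they arrive while tracking the shallowest one seen so far.

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        // Hypothetical result type: Depth lets the consumer prefer matches closer to the root.
        public sealed class SearchMatch
        {
            public string Path { get; private set; }
            public int Depth { get; private set; }
            public SearchMatch(string path, int depth) { Path = path; Depth = depth; }
        }

        public static class HierarchySearch
        {
            // Producer: walks the tree in parallel and publishes matches as soon as they are found.
            public static BlockingCollection<SearchMatch> FindAll(string root, Func<string, bool> isMatch)
            {
                var results = new BlockingCollection<SearchMatch>();
                Task.Factory.StartNew(() =>
                {
                    try { Walk(root, 0, isMatch, results); }
                    finally { results.CompleteAdding(); }   // tells consumers the traversal is done
                });
                return results;
            }

            private static void Walk(string dir, int depth, Func<string, bool> isMatch,
                                     BlockingCollection<SearchMatch> results)
            {
                // Error handling (access denied and the like) elided for brevity.
                foreach (var file in Directory.EnumerateFiles(dir))
                    if (isMatch(file))
                        results.Add(new SearchMatch(file, depth));

                Parallel.ForEach(Directory.EnumerateDirectories(dir),
                                 sub => Walk(sub, depth + 1, isMatch, results));
            }
        }

    A caller that wants "first result fast, better results later" can then keep the shallowest match while consuming:

        SearchMatch best = null;
        var matches = HierarchySearch.FindAll(@"C:\", path => path.EndsWith(".txt")); // placeholder predicate
        foreach (var match in matches.GetConsumingEnumerable())
            if (best == null || match.Depth < best.Depth)
                best = match;   // "better" = closer to the root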

    Read the article

  • Node.js running under IIS Express Keeps Crashing

    - by PazoozaTest Pazman
    I recently reinstalled Windows 7 on my machine and went back to downloading and installing the tools to help me continue developing Node.js Windows Azure web applications. I followed the instructions given on the Node.js Azure site: http://www.windowsazure.com/en-us/develop/nodejs/ and, using Web Platform Installer 4.0, it says I have successfully installed these tools: Windows Azure PowerShell, Windows Azure SDK for Node.js - June 2012, Windows Azure SDK for .NET (VS 2012 RC) - June 2012, IIS Recommended Configuration. The problem I am experiencing is that when I run the site using PowerShell, e.g. start-azureemulator -launch, it goes ahead and runs IIS Express, and after several minutes IIS Express crashes with the following information:

        Problem Event Name: APPCRASH
        Application Name: iisexpress.exe
        Application Version: 8.0.8298.0
        Application Timestamp: 4f620349
        Fault Module Name: iiscore.dll
        Fault Module Version: 8.0.8298.0
        Fault Module Timestamp: 4f63b65c
        Exception Code: c0000005
        Exception Offset: 00021767
        OS Version: 6.1.7601.2.1.0.256.28
        Locale ID: 1033
        Additional Information 1: f66d
        Additional Information 2: f66d807b515d6b2dc6f28f66db769a01
        Additional Information 3: 7b2f
        Additional Information 4: 7b2f6797d07ebc2c23f2b227e779722e

    I am running 2 instances each time, and both of them crash one after the other. Has anyone experienced something similar and fixed this issue? Is there an upgrade I need to do? I've run Windows Update but it says I've got all the latest updates. Can I tell the PowerShell cmdlet to use IIS 7 instead of IIS Express? I'm guessing it's something to do with IIS Express on my machine. I did some hunting around and found this person who experienced a similar problem: https://github.com/tjanczuk/iisnode/issues/149. I've also got a cron job running every second to check whether any website totals need to be updated. Could this be causing IIS Express to crash? Cheers

    Read the article

  • Oracle and Microsoft Expand Choice and Flexibility in Deploying Oracle Software in the Cloud

    - by Gene Eun
    Oracle and Microsoft have entered into a new partnership that will help customers embrace cloud computing by providing greater choice and flexibility in how to deploy Oracle software. Here are the key elements of the partnership:
    - Effective today, our customers can run supported Oracle software on Windows Server Hyper-V and in Windows Azure
    - Effective today, Oracle provides license mobility for customers who want to run Oracle software on Windows Azure
    - Microsoft will add Infrastructure Services instances with popular configurations of Oracle software, including Java, Oracle Database and Oracle WebLogic Server, to the Windows Azure image gallery
    - Microsoft will offer fully licensed and supported Java in Windows Azure
    - Oracle will offer Oracle Linux, with a variety of Oracle software, as preconfigured instances on Windows Azure
    Oracle's strategy and commitment is to support multiple platforms, and Microsoft Windows has long been an important supported platform. Oracle is now extending that support to Windows Server Hyper-V and Windows Azure by providing certification and support for Oracle applications, middleware, database, Java and Oracle Linux on Windows Server Hyper-V and Windows Azure. As of today, customers can deploy Oracle software on Microsoft private clouds and Windows Azure, as well as Oracle private and public clouds and other supported cloud environments. For information related to software licensing in Windows Azure, see Licensing Oracle Software in the Cloud Computing Environment. Also, Oracle Support policies as they apply to Oracle software running in Windows Azure or on Windows Server Hyper-V are covered in two My Oracle Support (MOS) notes, shown below:
    - MOS Note 1563794.1 Certified Software on Microsoft Windows Server 2012 Hyper-V - NEW
    - MOS Note 417770.1 Oracle Linux Support Policies for Virtualization and Emulation - UPDATED

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

    Introduction: Recently a client I worked for had two major business-critical applications being delivered with very little time budgeted for performance testing. We hit a bottleneck as soon as the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at such short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the size of the production environment. This didn't seem right, so the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure: starting with a single test agent, provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than if it had been hosted on premise by the infrastructure team. Thank you for taking the time to read this blog post; across this series of posts I'll try to cover, start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched.
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?
    Administration, cost, flexibility and scalability are the areas you would want to think about when deciding between your own data centre and Azure!

    How is Microsoft's public cloud offering different from Amazon's? Microsoft's cloud offering is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes it from providers such as Amazon (Amazon offers only IaaS).

    PaaS – Platform as a Service: fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service provider manages the hardware (patching, upgrades and so forth) as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers of infrastructure concerns and allows them to focus on building applications.

    IaaS – Infrastructure as a Service: similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they were on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.

    The biggest advantage of PaaS is that you do not have to worry about maintaining the environment: you can spend all your time solving business problems with your solution rather than maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post here on the difference between SaaS, PaaS and IaaS.

    Now that we are convinced why we should turn to the cloud, and to Azure in particular, let's discuss the test rig.

    The Load Test Rig – Topology: Now the moment of truth. A big part of getting value from cloud computing is identifying the most suitable workloads to take to the cloud, so I've decided to build a load testing rig where the agents run on Windows Azure. I'll talk you through the topology:
    - User: the user kick-starts the load test run from the developer workstation on premise. This passes the request to the Test Controller.
    - Test Controller: the Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it uses the Windows Azure Connect service to orchestrate the test responsibilities across all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well; this allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller collects the performance data from the agents using the traditional WMI mechanisms.
    - Test Agents: the Test Agents are in the Windows Azure public cloud. As soon as the Test Controller issues instructions, the Test Agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the Test Agents, and finally the results are passed back to the Controller.
    - Servers: the web server and DB server are hosted on premise in the datacentre. This is usually the case with business-critical applications; you probably want to manage them yourself.

    Recap and what's next? In this introduction to the series on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, the advantages Windows Azure offers, and the test rig topology that I will be using. I also stumbled upon this [Video] on Azure in a nutshell: a great watch if you are new to Windows Azure. In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the rig. Hope you enjoyed this post. If you have any recommendations on things I should consider, or any questions or feedback, feel free to add them to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • Exporting virtual machine from Windows Azure to Amazon

    - by Somebody
    As the documentation says: You can import VMware ESX and VMware Workstation VMDK images, Citrix Xen VHD images and Microsoft Hyper-V VHD images for Windows Server 2003, Windows Server 2003 R2, Windows Server 2008 and Windows Server 2008 R2. What about a virtual machine with Ubuntu on it? Will the import succeed? Has anyone tried to do it? Also, is it possible to import a VM directly from Windows Azure, without needing to download the VM to my machine and then upload it to Amazon? Thanks.

    Read the article

  • Windows Azure Directory Sync Generic Failure

    - by Armand
    Ok so I have a domain that I want to sync to Office 365, but when I start the Windows Azure Active Directory Sync tool Configuration Wizard I get an error with the following details:

        System.Management.ManagementException: Generic failure
        at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
        at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
        at Microsoft.Online.DirSync.Common.MiisAction.GetTargetMA()
        at Microsoft.Online.DirSync.Common.MiisAction.IsSyncInProgress()
        at Microsoft.Online.DirSync.Common.PrerequisiteChecks.ThrowIfSyncInProgress()
        at Microsoft.Online.DirSync.UI.IntroductionWizardPage.PrerequisiteValidation()
        at Microsoft.Online.DirSync.UI.IntroductionWizardPage.OnLoad(EventArgs e)
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.Control.CreateControl()
        at System.Windows.Forms.Control.WmShowWindow(Message& m)
        at System.Windows.Forms.Control.WndProc(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

    I have searched far and wide to no avail; this happens before I can even enter any details. A few notes:
    - The server is not the domain controller
    - SharePoint 2013 is installed on this server
    - The account I log in with and run the application as is a domain and enterprise admin
    - I right-click and run as administrator when I start the application
    So when I click continue on the error and go through the steps, I get two possible scenarios that alternate at no predictable rate:
    1) I just get a generic failure error.
    2) I get the error "Cannot start service MSOnlineSyncScheduler on computer '.'."
    Any help?

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a web application using an in-memory collection that changes occasionally but is read very often. The collection gets loaded from storage in the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure, keep in mind that more than one instance of the application can be running at any time, and therefore you need a mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is at no cost, a good solution can be maintaining the information in Azure Storage tables, reading its contents on the Application_Start event and propagating changes to all other instances using the internal HTTP port available on Azure web roles. Follow these steps to leverage the internal HTTP endpoint available on Azure web roles to keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    4. Add a method on the new service to receive notifications from other role instances:

        namespace Service
        {
            [ServiceContract]
            public interface INotificationService
            {
                [OperationContract(IsOneWay = true)]
                void Notify(Information info);
            }
        }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the CreateServiceHost method to host the internal endpoint:

        public class InternalServiceFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var internalEndpointAddress = string.Format(
                    "http://{0}/NotificationService.svc",
                    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);

                ServiceHost host = new ServiceHost(
                    typeof(NotificationService), new Uri(internalEndpointAddress));

                BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
                host.AddServiceEndpoint(typeof(INotificationService), binding, internalEndpointAddress);
                return host;
            }
        }

    Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the .svc file and select "View markup") to set the new factory as the one used to create the service:

        <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to the other instances using this code:

        var current = RoleEnvironment.CurrentRoleInstance;
        var endPoints = current.Role.Instances
            .Where(instance => instance != current)
            .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);

        foreach (var ep in endPoints)
        {
            EndpointAddress address = new EndpointAddress(
                String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
            BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
            var factory = new ChannelFactory<INotificationService>(binding);
            INotificationService instance = factory.CreateChannel(address);
            instance.Notify(changedinfo);
        }

    Read the article

  • Which one to select for my future career: Java, C#, Azure or Apex?

    - by user636195
    Hi folks, I am going to start studying for a Masters in Computer Science in the U.S.A. in a week. I did my B.Sc. over the past three years, and after my freshman year I started working on projects (in C# and, rarely, in Java) for the past two years, i.e. while I was a second- and third-year student. Now I am in a college where all of the programming courses will be taught in Java only (using Eclipse); I am going to stay at this college for 8 months on campus and then be fully employed for two years at other companies on CPT. I really love working with Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and to be a cloud-based business owner in the near future. Since the college is going to teach us Java and have us do every project in Java, I am confused about which programming language to focus on to enhance my career, and of course I want to pick the one I enjoy. I have also heard a lot about Azure (Microsoft's) and Apex (Salesforce.com's cloud computing programming language). Would you please give me your advice and recommendations based on my situation? Should I study only Java, or should I study C# or Azure alongside Java on my own? I ask because, since I have no clue how Azure works or how long it will take me to learn, I am really confused about which one to select (Java vs. C#, and Azure vs. Apex, or whichever cloud computing language is most popular and widely used). Do you think I can get a job in cloud computing if I study Azure or Apex on my own, without experience? There is also one short-term issue I need to consider: salary. Since I have to pay my student loan, I need a good job that will let me pay it off within two years. But, as I said, my long-term plan is to get experience in cloud computing (from programming to administration, i.e. every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • sysprep failure on Windows Server 2008

    - by dushyantp
    Before deploying an Azure VM role, we need to run:

        %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

    But in my case sysprep fails, with the log file %windir%\system32\sysprep\Panther\setuperr.txt saying:

        2012-07-05 08:03:57, Error [0x0f0073] SYSPRP RunExternalDlls:Not running DLLs; either the machine is in an invalid state or we couldn't update the recorded state, dwRet = 31
        2012-07-05 08:03:57, Error [0x0f00ae] SYSPRP WinMain:Hit failure while processing sysprep cleanup external providers; hr = 0x8007001f

    I do not always want to create a new image. Is there any workaround? I followed the instructions in MS support here and tried:

        %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:.\unattend.xml

    It did not work. Under certain circumstances I need to tear down the VM image from Azure and redeploy with some more changes, so sysprep has to run almost twice every week.

    Read the article

  • Is a DHCP lease expiring years from now okay?

    - by sharptooth
    I'm reviewing Azure web role logs and there's output from ipconfig /all IPv4 Address. . . . . . . . . . . : 10.61.145.37(Preferred) . Subnet Mask . . . . . . . . . . . : 255.255.254.0. Lease Obtained. . . . . . . . . . : Monday, September 24, 2012 12:26:00 PM. Lease Expires . . . . . . . . . . : Thursday, October 31, 2148 6:55:12 PM. you see, the lease expires in year 2148 but my VM will likely not run for more than one month - when I deploy the new version of my code I first deploy it to new VMs, then switch traffic, then release the new VMs. In general such usage pattern is normal - VMs typically live from several dozen minutes to several weeks on Azure. I suspect the lease that long will cause problems on the internal Azure network sooner or later. Is such long DHCP lease okay or is it likely a misconfiguration?

    Read the article

  • Login authentication vanished from MongoDB install

    - by Robert Oschler
    A few months ago I enabled password protection on my MongoDB install. Today I ran the Mongo client and forgot to use my login details. Instead of rejecting nearly everything I tried to do from the shell, like it should, it gave me complete access to all the databases and collections. Fortunately this instance is only running a few test apps, so I quickly shut down the mongod instance until I figure this out. Has anybody ever seen this kind of behavior before, and do you know what is going on? The mongod instance is running on a Linux VM hosted by Azure. The only thing I can think of is that perhaps Azure restored an old copy of the VM, but I received no e-mails to that effect, and everything else on the server seems to be in order, including new daemon processes that I added after I enabled password protection on mongod.

    Read the article

  • How to fix “The requested service, ‘net.pipe://localhost/SecurityTokenServiceApplication/appsts.svc’ could not be activated.”

    - by ybbest
    Problem: When I tried to publish a SharePoint 2013 workflow, I received the error: The requested service, ‘net.pipe://localhost/SecurityTokenServiceApplication/appsts.svc’ could not be activated. After that, my workflow stopped working, and every time I start a workflow I receive the following error message:

        System.ApplicationException: PreconditionFailed ---> System.ApplicationException: Error in the application.
        --- End of inner exception stack trace ---
        at System.Activities.Statements.Throw.Execute(CodeActivityContext context)
        at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
        at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

    Analysis: I found the underlying error by visiting http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc.

    Solution: The solution is basically to give the server more memory. In a development environment, you can restart noderunner.exe or some other services to free some memory. To verify that you have enough memory, browse to http://localhost:32843/SecurityTokenServiceApplication/securitytoken.svc; it should return the service information page. Then you can republish your workflow and it will work like a charm.

    Read the article

  • How to Fix “Error occurred in deployment step ‘Activate Features’: System.TimeoutException:”

    - by ybbest
    Problem: When deploying a SharePoint 2013 workflow using Visual Studio, I got the following error:

        Error occurred in deployment step ‘Activate Features’: System.TimeoutException: The HTTP request has timed out after 20000 milliseconds. ---> System.Net.WebException: The request was aborted: The request was canceled.
        at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
        at Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.OnGotResponse(IAsyncResult result)
        --- End of inner exception stack trace ---
        at Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
        at Microsoft.Workflow.Client.Ht

    Analysis: After reading AC's blog post, I found out the issue is to do with the Service Bus; the relevant Service Bus services were not started.

    Solution: I started the Service Bus Gateway and Service Bus Message Broker services, and the problem went away.

    References: SharePoint 2013 Workflow – Advanced Workflow Debugging with Fiddler

    Read the article

  • EFMVC Migrated to .NET 4.5, Visual Studio 2012, ASP.NET MVC 4 and EF 5 Code First

    - by shiju
    I have just migrated my EFMVC app from .NET 4.0 and ASP.NET MVC 4 RC to .NET 4.5, ASP.NET MVC 4 RTM and Entity Framework 5 Code First. In this release, the EFMVC solution is built with Visual Studio 2012 RTM. The migration process was very smooth, and I did not make any major changes other than adding simple unit tests with NUnit and Moq. I will add more unit tests later and will also modify the existing solution. Source code: you can download the source code from http://efmvc.codeplex.com/
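    For readers new to NUnit and Moq, a test in that style looks roughly like the sketch below. This is a minimal illustration only: the Category, ICategoryRepository and CategoryController types are hypothetical stand-ins, not EFMVC's actual types.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;
        using Moq;
        using NUnit.Framework;

        // Hypothetical application types standing in for the real ones.
        public class Category { public string Name { get; set; } }

        public interface ICategoryRepository { IEnumerable<Category> GetAll(); }

        public class CategoryController : Controller
        {
            private readonly ICategoryRepository _repository;
            public CategoryController(ICategoryRepository repository) { _repository = repository; }
            public ActionResult Index() { return View(_repository.GetAll().ToList()); }
        }

        [TestFixture]
        public class CategoryControllerTests
        {
            [Test]
            public void Index_ReturnsViewWithAllCategories()
            {
                // Arrange: mock the repository so the test runs without a database.
                var repository = new Mock<ICategoryRepository>();
                repository.Setup(r => r.GetAll()).Returns(new[] { new Category { Name = "Books" } });
                var controller = new CategoryController(repository.Object);

                // Act
                var result = controller.Index() as ViewResult;

                // Assert
                Assert.IsNotNull(result);
                repository.Verify(r => r.GetAll(), Times.Once());
            }
        }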

    Read the article

  • Windows Azure Roles stuck in ‘initializing’, ‘busy’

    - by kaleidoscope
    If you have worked on Windows Azure you are bound to have faced this gnawing and dreaded issue: your web/worker role goes from ‘initializing’ to ‘busy’ to ‘stopping’ but refuses to become ‘ready’. For those of us who have resorted to merciless desktop vandalism over this, there is still hope. In his post, MSFT's Toddy Mladenov summarizes a few plausible reasons for this:
    1. Missing runtime dependencies (DLLs)
    2. Incorrect platform version of a DLL
    3. Incorrect DiagnosticsConnectionString/DataConnectionString
    4. Queues/tables being read during initialization do not exist
    5. Certificate without an exportable private key
    6. Returning from the Run method in a worker role
    For a more detailed and precise account, visit his post: http://blog.toddysm.com/2010/01/windows-azure-deployment-stuck-in-initializing-busy-stopping-why.html - Tinu, O
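    The last cause on that list is worth a concrete illustration. As a rough sketch (mine, not from the original post): a worker role's Run method must block for the lifetime of the role, because returning from it recycles the instance, which shows up as exactly this initializing/busy/stopping loop.

        using System.Threading;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            public override void Run()
            {
                // Run must never return while the role is healthy; returning
                // sends the instance back through the recycle cycle.
                while (true)
                {
                    // ... the role's actual work goes here ...
                    Thread.Sleep(10000);
                }
            }

            public override bool OnStart()
            {
                // Keep OnStart fast and defensive: an unhandled exception here
                // also prevents the role from ever reaching 'ready'.
                return base.OnStart();
            }
        }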

    Read the article

  • SO-Aware at the Atlanta Connected Systems User Group

    - by gsusx
    Today my colleague Don Demsak will be presenting a session about WCF management, testing and governance using SO-Aware and the SO-Aware Test Workbench at the Connected Systems User Group in Atlanta. Don is a very engaging speaker and has prepared some very cool demos based on lessons from real-world WCF solutions. If you are in the ATL area and interested in WCF, AppFabric or BizTalk, you should definitely swing by Don's session. Don't forget to heckle him a bit (you can blame me for it ;))...(read more)

    Read the article

  • Using a service registry that doesn’t suck part I: UDDI is dead

    - by gsusx
    This is the first of a series of posts in which I hope to detail some of the most common SOA governance scenarios in the real world, their challenges, and the approach we've taken to address them in SO-Aware. This series is not intended to be a marketing pitch for SO-Aware. Instead, I would like to use it to foster an honest dialog between SOA governance technologists. For the starting post I decided to focus on the aspect that was once considered the keystone of SOA governance: service discovery...(read more)

    Read the article

  • ASP.NET and WIF: Showing custom profile username as User.Identity.Name

    - by DigiMortal
    I am building an ASP.NET MVC application that uses external services to authenticate users. As far as ASP.NET is concerned, users are fully authenticated when they are redirected back from the external service; logically, in my system, they are authenticated only once they have created a user profile. In this posting I will show you how to make User.Identity.Name return the username from a custom profile.

    Using external authentication sources with AppFabric: Suppose you want to be user-friendly and you don't force users to keep in mind another username/password when they visit your site. You can accept logins from popular services like Windows Live, Facebook, Yahoo, Google and many more. If a user has an account with one of these services, he or she can use it to log in to your site. If you have a community site then you usually have support for user profiles too. Some of these providers give you some information about users and others don't, so the only thing you get in common from all providers is some unique ID that identifies the user in that service. The image above shows how a new user joins the site; existing users who already have a profile are directed to the user's homepage after they are authenticated. You can read more about how to solve the semi-authorized users problem in my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages. The other problem is related to usernames, which we don't get from all identity providers.

    Why is IIdentity.Name sometimes empty? The problem is described more specifically in my blog posting Identifying AppFabric Access Control Service users uniquely. In short, not all providers supply the claim called http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name. The diagram illustrates what happens when a user gets a token from AppFabric ACS and is redirected to your site. When a user is authenticated through Windows Live ID, there is no name claim in the token, and that's why User.Identity.Name is empty. Okay, we can force the nameidentifier to be used as the name (we can do it in the web.config file), but we have user profiles and we want the username from the profile to be shown whenever a username is displayed.

    Modifying the name claim: Now let's force IClaimsIdentity to use the username from our user profiles. You can read more about my profiles topic in the blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages, and you can find some useful extension methods for claims identity in the posting Identifying AppFabric Access Control Service users uniquely. Here is what we do to set User.Identity.Name:
    - check whether the user has a profile,
    - if the user has a profile, check whether User.Identity.Name matches the name given by the profile,
    - if the names do not match, the identity provider probably returned some name for the user, so we remove the name claim and recreate it with the correct username,
    - add the new name claim to the claims collection.
    All this happens in the Application_AuthorizeRequest event of our web application. The code is here.
        protected void Application_AuthorizeRequest()
        {
            if (string.IsNullOrEmpty(User.Identity.Name))
            {
                var identity = User.Identity;
                var profile = identity.GetProfile();
                if (profile != null)
                {
                    if (profile.UserName != identity.Name)
                    {
                        identity.RemoveName();

                        var claim = new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", profile.UserName);
                        var claimsIdentity = (IClaimsIdentity)identity;
                        claimsIdentity.Claims.Add(claim);
                    }
                }
            }
        }

    The RemoveName extension method is simple: it looks for name claims in the IClaimsIdentity claims collection and removes them.

        public static void RemoveName(this IIdentity identity)
        {
            if (identity == null)
                return;

            var claimsIdentity = identity as IClaimsIdentity;
            if (claimsIdentity == null)
                return;

            for (var i = claimsIdentity.Claims.Count - 1; i >= 0; i--)
            {
                var claim = claimsIdentity.Claims[i];
                if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name")
                    claimsIdentity.Claims.RemoveAt(i);
            }
        }

    And we are done. Now User.Identity.Name returns the username from the user profile, and you can use it to show the current user's username everywhere on your site.

    Conclusion: Mixing AppFabric Access Control Service and Windows Identity Foundation with custom authorization logic is not impossible, just a little bit tricky. This posting finishes my little series about AppFabric ACS and WIF for now; hopefully you found some useful tricks, tips, hacks and code pieces you can use in your own applications.

    Read the article

  • Access Control Service: Passive/Active Transition Sample

    - by Your DisplayName here!
    Here you can find my updated ACS2 sample. In addition to the existing front ends (web [WS-Federation], console [SOAP & REST], Silverlight [REST]) and error handling, it now also includes a WPF client that shows the passive/active transition with a SOAP service, as illustrated here. All the ACS interaction is encapsulated in a WPF user control that:
    - retrieves the JSON feed
    - displays a list of supported identity providers
    - triggers the sign-in via a browser control
    - retrieves the token response
    - packages the token as a GenericXmlSecurityToken (to be used directly with the WIF ChannelFactory extension methods)
    All you need to supply is the ACS namespace and the realm. Have fun!

    Read the article

  • Migration & Modernization: Windows/VB6 Apps to ASP.NET HTML5

    - by Visual WebGui
    I would like to invite you to a webinar we are doing in collaboration with Jeffrey S. Hammond, Principal Analyst serving Application Development & Delivery Professionals at Forrester Research. The webinar is free, and it will introduce the substantial changes brought on by the move to web applications and open web architectures, and the challenges this places on application development shops. We'll also introduce how we at Gizmox are helping clients navigate this mobile shift and evolve existing...(read more)

    Read the article

  • Access Control Service: Protocol and Token Transition

    - by Your DisplayName here!
    ACS v2 supports a number of protocols (WS-Federation, WS-Trust, OpenId, OAuth 2 / WRAP) and a number of token types (SWT, SAML 1.1/2.0); see Vittorio's infographic here. Some protocols are designed for active clients (WS-Trust, OAuth / WRAP) and some are designed for passive clients (WS-Federation, OpenID). One of the most obvious advantages of ACS is that it allows transitioning between the various protocols and token types. One example would be using WS-Federation/SAML between your application and ACS to sign in with a Google account. Google uses OpenId and non-SAML tokens, but ACS transitions into WS-Federation and sends back a SAML token. This way your application only needs to understand a single protocol, whereas ACS acts as a protocol bridge (see my ACS2 sample here). Another example would be the transformation of a SAML token to a SWT. This is achieved by using the WRAP endpoint: you send a SAML token (from a registered identity provider) to ACS, and ACS turns it into a SWT token for the requested relying party, e.g. (using the WrapClient from Thinktecture.IdentityModel):

        [TestMethod]
        public void GetClaimsSamlToSwt()
        {
            // get SAML token from idp
            var samlToken = Helper.GetSamlIdentityTokenForAcs();

            // send to ACS for SWT conversion
            var swtToken = Helper.GetSimpleWebToken(samlToken);

            var client = new HttpClient(Constants.BaseUri);
            client.SetAccessToken(swtToken, WebClientTokenSchemes.OAuth);

            // call REST service with SWT
            var response = client.Get("wcf/client");

            Assert.AreEqual<HttpStatusCode>(HttpStatusCode.OK, response.StatusCode);
        }

    There are more protocol transitions possible, but they are not as obvious. A popular example would be how to call a REST/SOAP service using e.g. a LiveId login. In the next post I will show you how to approach that scenario.

    Read the article

  • How to use call web service action in SharePoint2013 workflow

    - by ybbest
    In SharePoint 2013 you can use the call web service action and loops in workflows. In this post I will show you how to achieve this.
    1. Create a list workflow called CallWebService.
    2. Create a variable called listurl and assign it the value http://sp2010/_vti_bin/listdata.svc.
    3. Create a dictionary variable called RequestHeaders and add the required key/value pairs.
    4. Call the web service with the RequestHeaders you just built in the previous step and store the response in the variable ResponseContent.
    5. The ResponseContent variable is of the dynamic-values type (in SharePoint Designer it is called a dictionary), a new feature of SharePoint 2013 workflows. We can use the dictionary actions to count the number of items in the variable.
    6. You can use a loop in the SharePoint 2013 workflow and output each list title, as shown below.
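    For context, steps 2-4 amount to an ordinary REST call against listdata.svc. A minimal C# sketch of the same request is below; the listdata.svc URL comes from the post, while the Accept header value and credentials handling are typical assumptions (the post's screenshot of the RequestHeaders dictionary is not included in this excerpt):

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        class ListDataClient
        {
            static async Task Main()
            {
                // Use the caller's Windows credentials, as a browser would on an intranet.
                var handler = new HttpClientHandler { UseDefaultCredentials = true };
                using var client = new HttpClient(handler);

                // Equivalent of the RequestHeaders dictionary: ask listdata.svc for JSON
                // instead of the default ATOM feed.
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // The workflow's 'listurl' variable from step 2.
                var response = await client.GetAsync("http://sp2010/_vti_bin/listdata.svc");
                response.EnsureSuccessStatusCode();

                // The JSON body is what the workflow stores in ResponseContent.
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }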

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
    If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access content that is for authenticated users only. In this posting I will show you how to extend AuthorizeAttribute so users must also have a user profile filled in.

    Semi-authorized users: When a user is authenticated through an external identity provider, not all identity providers give us the user name or other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. Example: users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider is able to provide the user name and e-mail address if the user agrees to publish this information to you. Both give you the unique ID of the user upon successful authentication in their service. There is a logical shift between ASP.NET and my site when considering a user as authorized: for ASP.NET MVC, a user is authorized when the user has an identity; for my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system and is always identified by this username by other users. My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method, and if the user has no profile then the browser is redirected to the join page.

    Illustrating the problem: Usually we restrict access to a page using AuthorizeAttribute. The code is something like this:

        [Authorize]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    If this page is only for site users and we have user profiles, then all users, both the ones that have a profile and the ones that are merely authenticated, can access the information. That is expected, because all these users have successfully logged in through some service that is supported by AppFabric ACS. In my site, the users with no profile are in a grey zone: they are halfway to being users because they have no username or profile on my site yet. So, looking at the image above again, we need something that adds a profile-existence condition to users-only content:

        [ProfileRequired]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    Now this attribute will solve our problem as soon as we implement it.

    ProfileRequiredAttribute: profiles are required to be fully authorized. Here is my implementation of ProfileRequiredAttribute. It is pretty new; right now it is more of a working draft, but you can already play with it.
        public class ProfileRequiredAttribute : AuthorizeAttribute
        {
            private readonly string _redirectUrl;

            public ProfileRequiredAttribute()
            {
                _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];
                if (string.IsNullOrWhiteSpace(_redirectUrl))
                    _redirectUrl = "~/";
            }

            public override void OnAuthorization(AuthorizationContext filterContext)
            {
                base.OnAuthorization(filterContext);

                var httpContext = filterContext.HttpContext;
                var identity = httpContext.User.Identity;

                if (!identity.IsAuthenticated || identity.GetProfile() == null)
                    if (filterContext.Result == null)
                        httpContext.Response.Redirect(_redirectUrl);
            }
        }

    All methods with this attribute work as follows: if the user is not authenticated, he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile, the user is by default redirected to the main page of the site, but if you have an application setting with the name JoinUrl, the user is redirected to that URL. The first case is handled by AuthorizeAttribute and the second one by the custom logic in the ProfileRequiredAttribute class.

    GetProfile() extension method: To get the user profile with less code in the places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers from claims; based on this information the user profile is read from the database. If you take this code with copy and paste, I am sure it will not work for you, but you get the idea.

        public static User GetProfile(this IIdentity identity)
        {
            if (identity == null)
                return null;

            var context = HttpContext.Current;
            if (context.Items["UserProfile"] != null)
                return context.Items["UserProfile"] as User;

            var provider = identity.GetIdentityProvider();
            var nameId = identity.GetNameIdentifier();

            var rep = ObjectFactory.GetInstance<IUserRepository>();
            var profile = rep.GetUserByProviderAndNameId(provider, nameId);

            context.Items["UserProfile"] = profile;

            return profile;
        }

    To avoid round trips to the database, I cache the user profile in the current request, because the chance that the profile changes meanwhile is minimal. The other reason is perhaps more subtle: profile objects come from an Entity Framework context, and that context also has the HTTP request as its lifecycle.

    Conclusion: This posting gave you some ideas on how to finish the user profiles story when you use AppFabric ACS as an external authentication provider. Although there was a small shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute to fulfill all our requirements. We also wrote an extension method for IIdentity that returns the user profile based on the username and caches it in the HTTP request scope.

    Read the article
