Search Results

Search found 6945 results on 278 pages for 'azure use cases'.


  • Windows Identity Foundation - Local STS on Windows Azure.

    - by joe
    Hello, I am trying to use federated authentication on Azure. I found an example with a local STS hosted outside Azure, which a web role hosted in Azure uses for authentication. This works perfectly. My issue is, I don't want to have an application outside Azure. Instead, I want to host the local STS website in Azure as well, so in effect I will have two web roles (1. my actual website, 2. the STS). I tried this approach by creating a new web role and moving the files from my original STS project, but I am getting compilation errors even though I reference the required DLLs. I have also set "Copy Local" to true. It will be very helpful if somebody can guide me. Thanks
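
    For reference, the hosted service model itself does allow this shape; a two-web-role ServiceDefinition.csdef sketch (role names and ports are placeholders, not taken from the question) might look like:

        <ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WebRole name="MainWebsite">
            <InputEndpoints>
              <InputEndpoint name="HttpIn" protocol="http" port="80" />
            </InputEndpoints>
          </WebRole>
          <WebRole name="LocalSts">
            <InputEndpoints>
              <InputEndpoint name="StsHttpIn" protocol="http" port="8080" />
            </InputEndpoints>
          </WebRole>
        </ServiceDefinition>

    The compilation errors themselves are presumably a reference/packaging issue rather than a service-model one, so the role definition alone would not fix them.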

    Read the article

  • Why is IaaS important in Azure…

    - by Steve Loethen
    Three weeks ago, Microsoft released the next phase of Azure. Several of my clients had been waiting on this release, and now that it is here they are more receptive to looking at the cloud. Customers have expressed fear of the unknown, and a fear of loss of control, even when that loss of control also means a huge degree of flexibility to innovate without concerns about the underlying infrastructure. I think IaaS will be the “gateway drug” that gets hesitant customers to take another look at the cloud. The dialog can change from the cloud being a big scary unknown to the cloud being a resource for workloads. The conversation should always have been, and can now be even more strongly, geared toward the following points:
    1) The cloud is not unicorns and glitter; the cloud is resources: compute, storage, databases, service bus, cache. Like many of the resources we have on-premise, it is not magic, just another resource with advantages and obstacles like any other.
    2) The cloud should be part of the conversation for any new project, with all of the same criteria applied, on-premise or off: cost, security, reliability, scalability, speed to deploy, cost of licenses, need to customize the image, complex workloads. We have been having these discussions for years about on-premise projects, making decisions on operating systems, databases, ESBs, configuration and products based on a myriad of factors. We use the same factors, but now we have an additional set of resources to consider in the process.
    3) The cloud is a great solution looking for some interesting problems. It is our job to recognize the right problems that fit into the cloud, weigh the factors, and decide what to do.
    IaaS makes this discussion easier, offers more choices, and often choices that many enterprises will find better suited to them than PaaS. I am looking forward to helping clients realize the power of the cloud.

    Read the article

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it!!!!! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:
    - Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
    - Read from the shard in parallel or sequentially, and cache resultsets
    - Update or delete a record from the shard
    - Insert records quickly into the shard with round-robin load balancing
    - Reset the cache
    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/
    Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach.
    ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally. So, in essence, the tables need to have the same schema across the databases.
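
    The library's own API may differ from what is shown here, but the parallel fan-out read it describes can be sketched with the Task Parallel Library directly (connection strings and query are placeholders):

        using System.Collections.Concurrent;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        class ShardReader
        {
            // Illustrative only: run the same query against every database in the
            // shard in parallel, tagging each row with the database it came from.
            public static ConcurrentBag<string> ReadAll(string[] connectionStrings, string sql)
            {
                var results = new ConcurrentBag<string>();
                Parallel.ForEach(connectionStrings, cs =>
                {
                    using (var conn = new SqlConnection(cs))
                    using (var cmd = new SqlCommand(sql, conn))
                    {
                        conn.Open();
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                                results.Add(reader[0] + " (from " + conn.Database + ")");
                        }
                    }
                });
                return results;
            }
        }

    A connection GUID like the one described in the post plays the same role as the "(from ...)" tag here: it tells the client which member database produced a row.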

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure. However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation on data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some, due to compliance and security issues, the need for direct control over data, and so on. In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements: as well as SQL Azure Database (SAD), there is Azure storage, the unstructured 'BLOB and Entity-Attribute-Value' NoSQL alternative (which equates more closely with folders and files than a database), and a range of further options including services such as OData. Developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud along with personal and financial data. In the ticketing example, we could port to Azure only those parts of the application that capture and process ticket orders; once an order is captured, the financial side can be processed in our own data center. In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you. Cheers, Tony.

    Read the article

  • Using the ASP.NET Membership API with SQL Server / SQL Azure: The new “System.Web.Providers” namespace

    - by Harish Ranganathan
    The Membership API arrived in .NET 2.0 and was a huge enhancement for building web applications with users, managing roles, permissions etc. The Membership API by default uses SQL Express, and until Visual Studio 2008 it was available only through the ASP.NET Configuration manager screen (Website – ASP.NET Configuration, or Project – ASP.NET Configuration); for every application, one had to manually visit this screen to start using the security and other settings. Upon doing that, the default SQL Express database aspnetdb.mdf is created to store all the user profiles.
    Starting with Visual Studio 2010 and .NET 4.0, the default website template includes the Membership API controls as part of the page. That is, when you create a “File – New – ASP.NET Web Application” or an “ASP.NET MVC Application”, the Login/Register controls are enabled in the MasterPage by default, and they are wired to the “ApplicationServices” connection string in the web.config file, which points to the SQL Express database. In fact, it is when you run the default website, click “LogOn” –> “Register”, enter the details for registration and click “Register” that the aspnetdb.mdf file is created with the tables for Users, Roles, UsersInRoles, Profile etc.
    Now, this uses the default SQL Express database within the App_Data folder. If you want to move your Membership information to some other database, such as SQL Server, SQL CE or SQL Azure, you need to manually run the aspnet_regsql command and specify the destination database name. This creates all the tables, procedures and views required to handle the Membership information. Thereafter you can change the connection string for “ApplicationServices” to point to the database where you ran all the scripts.
    Now, enter “System.Web.Providers” Alpha. This is available as part of the NuGet package library. Scott Hanselman has a neat post describing the steps required to get it up and running, as well as the basic changes, at http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx. Pretty much, it covers what the new System.Web.Providers do. One thing I wanted to clarify is that the new “System.Web.Providers” adds a lot of new settings, which are also marked as the defaults, in the web.config. Even now, they use SQL Express as the default database. But if you change the connection string for “DefaultConnection” under connectionStrings to point to your SQL Server or SQL Azure, the Membership API is now able to create all the tables, procedures and views at the destination specified (i.e. SQL Server or SQL Azure).
    In my case, I modified the DefaultConnection to point to my SQL Azure database. Next, I hit F5 to run the application. The default view loads. I clicked on “LogOn” and then “Register”, since I knew there were no tables/users as of then. One thing to note is that I had put “NewDB” as the database name in the connection string that points to SQL Azure; NewDB didn't exist yet, and I assumed it would be created before the tables/views/procedures for Membership. Once I clicked “Register” to register my first username, it took a while, then registered me and logged me in.
    Also, I went to the SQL Azure Management Portal and verified that “NewDB” had just been created. I could also connect to the SQL Azure database “NewDB” from Management Studio and found that the tables no longer have the aspnet_ prefix; the tables are simply Users, Roles, UsersInRoles, Profiles etc. So, with a few clicks and a configuration change, I could actually set up the user base for my application on SQL Azure, and even have the SessionState, Roles and Profiles stored in the SQL Azure database. The new System.Web.Providers also require MARS (MultipleActiveResultSets=true) in the connection string, since they use Entity Framework for the DAL operations. Also, the “Project – ASP.NET Configuration” screen can still be used to create/manage users/roles etc., although the data is stored in the remote database. With that, a long-pending request from the community, the ability to configure and use remote databases for application user management without having to run scripts from SQL Express, is fulfilled. Cheers !!!
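
    As a rough illustration of the configuration change described above (server name, database and credentials are placeholders, not from the post), the web.config entry would look something like:

        <connectionStrings>
          <add name="DefaultConnection"
               connectionString="Server=tcp:myserver.database.windows.net;Database=NewDB;User ID=user@myserver;Password=...;Encrypt=True;MultipleActiveResultSets=True;"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    Note the MultipleActiveResultSets=True flag, which the post calls out as required by the Entity Framework-based providers.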

    Read the article

  • Resolving incidents (closing cases) in CRM4 through web services?

    - by Rafael D.
    Hello everybody, I'm trying to resolve/close Dynamics CRM4 cases/incidents through web services. A single SetStateIncidentRequest is not enough and returns a "Server was unable to process request" error message. I think it has something to do with active workflows that trigger on changes to the case's attributes, but I don't know if there's anything else preventing the request from working. Since it is possible to close those cases through the GUI, I guess there's a "correct" set of steps to follow in order to achieve it through CrmService; unfortunately, I've been googling for a while without finding what I want. Could anybody help me, please?
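
    For reference, the CRM 4 SDK's documented route for resolving a case is CloseIncidentRequest, which creates an incident resolution activity as part of the call. A minimal sketch, assuming the generated CrmService proxy types are in scope, with "service" an authenticated CrmService instance and "incidentId" the case's GUID:

        // Build the resolution activity that CRM requires when closing a case.
        incidentresolution resolution = new incidentresolution();
        resolution.subject = "Resolved via web service";
        resolution.incidentid = new Lookup();
        resolution.incidentid.type = EntityName.incident.ToString();
        resolution.incidentid.Value = incidentId;

        // CloseIncidentRequest resolves the incident and sets its state in one call.
        CloseIncidentRequest request = new CloseIncidentRequest();
        request.IncidentResolution = resolution;
        request.Status = -1; // -1 lets CRM pick the default status for the Resolved state

        service.Execute(request);

    Whether this gets past the workflow-related failure is another matter, but it avoids setting the incident state directly.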

    Read the article

  • Specification: Use cases for CRUD

    - by Mario Ortegón
    I am writing a product requirements specification. In this document I must describe the ways that the user can interact with the system at a very high level. Several of these operations are "Create-Read-Update-Delete" (CRUD) on some objects. The question is, when writing use cases for these operations, what is the right way to do so? Can I write only one use case called "Manage Object X" and then have these operations as included use cases? Or do I have to create one use case per operation, per object? The problem I see with the latter approach is that I would be writing quite a few pages that I feel do not really contribute to the understanding of the problem. What is the best practice?

    Read the article

  • Proper structure for many test cases in Python with unittest

    - by mellort
    I am looking into the unittest package, and I'm not sure of the proper way to structure my test cases when writing a lot of them for the same method. Say I have a fact function which calculates the factorial of a number; would this testing file be OK?

        import unittest

        class functions_tester(unittest.TestCase):
            def test_fact_1(self):
                self.assertEqual(1, fact(1))
            def test_fact_2(self):
                self.assertEqual(2, fact(2))
            def test_fact_3(self):
                self.assertEqual(6, fact(3))
            def test_fact_4(self):
                self.assertEqual(24, fact(4))
            def test_fact_5(self):
                self.assertFalse(1 == fact(5))
            def test_fact_6(self):
                self.assertRaises(RuntimeError, fact, -1)  # fact(-1)

        if __name__ == "__main__":
            unittest.main()

    It seems sloppy to have so many test methods for one method. I'd like to just have one testing method and put a ton of basic test cases in it (i.e. 4! == 24, 3! == 6, 5! == 120, and so on), but unittest doesn't let you do that. What is the best way to structure a testing file in this scenario? Thanks in advance for the help.

    Read the article

  • TFS 2012 or TFS Azure (Preview)

    - by Fore
    We want to migrate our current TFS 2010 solution, hosted today on one of our own servers, to TFS 2012 hosted somewhere else. We don't want to manage the servers any more, and are therefore looking at alternatives. TFS Preview / Azure is one alternative, hosted in the cloud, but I'm not happy about forcing users to use Live IDs, and we don't have an AD. My second thought was to create an Azure virtual machine and install and host TFS 2012 there. Are there any downsides to this? Compared to the price of buying a VPS this is cheap, and Azure feels reliable. Do you have any other ideas?

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive; in other words, "do this" rather than "don't do that". Sometimes, though, it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups or user input descriptions. In a scale-out pattern like cloud computing on Windows Azure, that's the wrong place to start.
    Start with the Data. Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need, but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or international?), the security requirements, whether it needs ACID compliance, how it will be searched and indexed, and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those, depending on the use-case requirements.
    Next Is Data Management. With the data defined, we can move on to how to store it. Again, the requirements now dictate whether we need full relational calculus or set-based operations, or whether we can choose another method based on the requirements for the data. And, breaking another pattern, it's OK to store data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data.
    Move to Data Transport. How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing.
    Finally, Data Processing. Most RDBMS engines, NoSQL engines, and certainly Big Data engines not only store data but can process and manipulate it as well. It's doubtful that you'll calculate that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used.
    Sure, We Need A Front-End At Some Point. Not all data is entered by human hands; in fact, most data isn't. We don't really need a Graphical User Interface (GUI); we need some way for a GUI to get data into and out of the systems listed earlier. But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "we'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out. Instead, focus on the data, its transport and processing. Create API calls or a message system that allows for resilient transport to the device or interface, and let each interface do what it does best.
    References: Microsoft Architecture Journal: http://msdn.microsoft.com/en-us/architecture/bb410935.aspx | Patterns and Practices: http://msdn.microsoft.com/en-us/library/ff921345.aspx | Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/ | Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/
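
    To make the "start with the data" idea concrete, here is a minimal sketch (hypothetical types and names, not from the article) of capturing one datum's requirements before any storage engine is chosen:

        using System;

        enum Consistency { Acid, Base }

        // One record per datum: the decisions the article says to make first.
        sealed class DatumRequirement
        {
            public string Name;
            public Type ClrType;           // U.S.-only vs. international formats, etc.
            public Consistency Consistency;
            public bool Searchable;        // drives indexing choices
            public bool ContainsPii;       // drives security/encryption choices
        }

        class Program
        {
            static void Main()
            {
                var phone = new DatumRequirement
                {
                    Name = "PhoneNumber",
                    ClrType = typeof(string),
                    Consistency = Consistency.Base, // a contact field rarely needs ACID
                    Searchable = true,
                    ContainsPii = true
                };
                // The requirements, not the screen mockup, now drive the choice:
                // BASE + searchable might fit Table storage; ACID pushes toward SQL Azure.
                Console.WriteLine("{0}: PII={1}, consistency={2}",
                    phone.Name, phone.ContainsPii, phone.Consistency);
            }
        }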

    Read the article

  • Using a subset of GetHashCode() to increase AzureTable performance through partitioning

    - by makerofthings7
    Generally speaking, Azure Table IO performance improves as more partitions are used (with some tradeoffs in continuation tokens and batch updates that I won't go into). Since the partition key is always a string, I am considering a "natural" load-balancing technique based on a subset of the GetHashCode() of the partition key, appending this subset to the partition key itself. This would allow all direct PK/RK queries to be computed with little overhead and with ease; batch updates may just need an intermediate step to group similar PKs together prior to submission. Questions: Should I use GetHashCode() to compute the partition key, or is a better function available? If I use GetHashCode(), does it matter which characters of the hash I use for my PK? Is there an abstraction for Azure Table and Blob storage that does this for me already?
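
    One caveat worth sketching: String.GetHashCode() is not guaranteed to be stable across CLR versions or processes, which matters once the computed subset is persisted inside partition keys. A stable hash avoids that; for example (hypothetical helper, not part of the Azure SDK):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class PartitionKeyHelper
        {
            // Prefixes a natural key with a stable bucket number so rows spread
            // across partitions while point queries stay computable client-side.
            public static string WithBucketPrefix(string naturalKey, int buckets)
            {
                using (MD5 md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(naturalKey));
                    int bucket = BitConverter.ToUInt16(hash, 0) % buckets;
                    return bucket.ToString("D3") + "_" + naturalKey; // e.g. "042_customer17"
                }
            }
        }

    The same function must, of course, be applied on every read path, since the prefix becomes part of the stored PartitionKey.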

    Read the article

  • Concerns on first ASP.NET cloud application

    - by RPK
    I am writing a small ASP.NET web application. My worry is that I want to keep the architecture the same while giving myself the option to install it on an intranet or on a cloud platform. I am not using MVC, but lately read that Azure only supports ASP.NET MVC applications. I want to know whether ASP.NET Web Forms applications work on Azure/AppHarbor or not. Do I need to convert this application to MVC if Web Forms is not supported? Will the same application run on an intranet as well?

    Read the article

  • Do you write common pre-conditions for a large number of unit test cases ?

    - by Vinoth Kumar
    I have heard/read that writing common pre-conditions for a large number of test cases is a bad thing, since this dependency may cause a large number of test cases to fail if something changes. What are your thoughts on it? If this is so, then what exactly is the purpose of the setUp() method in JUnit that runs before each test case? And if the same code inside setUp() runs before each test case, why can't it run only once before running all the test cases together?

    Read the article

  • Licensing approach for a .NET library that might be used in desktop / web-service / cloud environments

    - by Bobrovsky
    I am looking for advice on how to architect licensing for a .NET library. I am not asking for tool/service recommendations or anything like that. My library can be used in a regular desktop application or in an ASP.NET solution, and now Azure services come into play. Currently, for desktop applications the library checks whether the application and company names from the version history are the same as the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems:
    - an Azure-enabled web application can run on different hardware each time (AFAIK)
    - sometimes the hardware ID for the same hardware changes unexpectedly
    - checking the hardware ID or version info might not be allowed in some circumstances (shared hosting, for example)
    So, I am thinking about what approach I can take to architect a licensing scheme that:
    - is friendly to customers (I do not try to fight piracy, but I do want to warn the customer if he uses the library on more servers than he paid for)
    - can be used when there is no internet connection
    - can be used on shared hosting
    What would you recommend?
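
    One pattern that satisfies the offline and shared-hosting constraints is a signed license file: the vendor signs the license fields with a private key, and the library verifies the signature with an embedded public key, so no hardware ID, registry access or network call is needed. A minimal sketch (hypothetical license format, illustrative only):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class LicenseCheck
        {
            // licenseData might read "Customer=Acme;Servers=4;Expires=2014-01-01".
            // Only the public key ships with the library, so licenses cannot be forged.
            public static bool IsValid(string licenseData, byte[] signature, string publicKeyXml)
            {
                using (var rsa = new RSACryptoServiceProvider())
                {
                    rsa.FromXmlString(publicKeyXml); // embedded public key
                    byte[] data = Encoding.UTF8.GetBytes(licenseData);
                    return rsa.VerifyData(data, CryptoConfig.MapNameToOID("SHA256"), signature);
                }
            }
        }

    The "warn rather than enforce" policy is then just a question of how the library reacts when the verified license terms are not met.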

    Read the article

  • Windows Azure Boot Camp – Raleigh, Wednesday June 23, 2010 *FREE*

    - by Jim Duffy
    Yes, I know this is my second blog post about the free one-day Windows Azure boot camp on June 23rd in Raleigh, NC. What can I say? I don't want anyone to miss out on an opportunity to take advantage of some free Windows Azure training. Microsoft Developer Evangelist Brian Hitney and I will be presenting a one-day Windows Azure boot camp on June 23rd in Raleigh, NC at the Microsoft RTP offices. For more information on content, what to bring, directions, etc., just click here to go to the information and registration page for the Raleigh event. To find other dates and locations for the Windows Azure boot camps, head over to the Windows Azure Boot Camp page. Brian and I hope to see you there! Have a day. :-|

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and instant deployments and instant scale for .NET? A while back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages, although some changes would have been needed for long-term use by the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would see wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next?
    Heroku. Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn't sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you start to see its power. In literally seconds you can be looking at your Rails application deployed and online. Then, when you are ready to scale, you can do that. This is power. Some may call this "cloud computing" or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don't count Windows Azure, mostly because it is not simple and I don't believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language's specific needs (solution stack). So I tucked the idea in the back of my head and moved on.
    AppHarbor Enters The Scene. I'm not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn't actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to AppHarbor. From there they build your code, run any unit tests you have, and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI for getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder. After playing with it, send feedback if you want more features, and go vote up the two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it.
    From Zero To Deployed in 15 Minutes (Or Less). Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from zero to deployed in less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that's the challenge: beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize.
    Ground rules:
    - .NET application with a valid database connection
    - Start from zero
    - Deployed with AppHarbor or an alternative
    - A timer displayed in the video that runs during the entire process
    - Video response published on YouTube or an acceptable alternative
    - Video(s) must be published by March 15th at 11:59PM CST; either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th)
    From Zero To Deployed In 15 Minutes (Or Less): Part 1 | Part 2 | Part 3

    Read the article

  • Cloud service and IM protocol advice, for a backend to group chat mobile app

    - by Jonathan
    Overview: I'm going to develop an app on Android and iOS. It will allow users to set up group 'chat rooms' and talk on chat rooms set up by other users. The service needs to be highly scalable, such that it could accommodate a massive increase in users overnight (we can only dream).
    Chat requirements: The chat protocol used should be flexible: it should allow me to determine who can view/post on 'chat rooms' based on certain other factors, as determined by the first poster/creator of the particular 'chat room'. It should also allow users to simply install the app and begin using the service after providing only a simple nickname (which could be changed later).
    Chat protocol plans: Having looked around, I think the XMPP protocol is the best candidate. In particular, the Multi-User Chat extension looks like what I'll need. Would this be most suited to my requirements, or do you know of another potential solution?
    Cloud service: I have been deciding between Amazon Web Services, Google App Engine and Windows Azure. I'm coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, and Azure will soon have toolkits to allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services?
    General project philosophy: I have only recently started looking into this project's feasibility, and am no expert on any of its aspects. So wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented plugin of a proven, reliable group chat protocol, etc.
    My background: I have some programming knowledge from a computer science degree. The main languages I've used are Java and Python, but I don't want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don't mind learning a lot of new skills (my current programming level is relatively basic anyway).
    Thank you for reading; any advice you have about any aspect would be greatly appreciated :-)

    Read the article

  • Wrapping a REST-based Web Service

    - by PaulPerry
    I am designing a system that will run online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) around the REST web services of a business partner, which have to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response and then passing it back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Windows and Mac), mobile apps (iOS), and a web front end. Having a single API which we then map onto our partner's protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format: JSON for the web, and probably XML for the desktop and mobile clients (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably code defensively for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner's API; there probably is not much choice about that, though. But in the back of my mind I wonder if a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they used (I am not attached to this one; these days Azure can host other technologies), and whether anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue only seems to find solutions which focus on connecting to a SOAP web service over a proxy server, which is not what I am referring to here. Note: cross-posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service. Thank you!
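
    For the pass-through part, one way to sidestep the static-typing worry is not to deserialize the partner's payload at all: a Web API action can relay the partner's response as-is, so fields the partner adds or removes flow through untouched. A rough sketch (the partner URL and route are placeholders):

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class BlobProxyController : ApiController
        {
            private static readonly HttpClient client = new HttpClient();

            // Forward to the partner and relay status, headers and body
            // without binding the payload to a typed model.
            public async Task<HttpResponseMessage> Get(string id)
            {
                return await client.GetAsync("https://partner.example.com/blobs/" + id);
            }
        }

    Where typed models are genuinely needed, JSON.NET's default behavior of ignoring unknown members already gives some protection against the partner adding elements.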

    Read the article

  • Q&A: Where does high performance computing fit with Windows Azure?

    - by Eric Nelson
    Answer: I have been asked a couple of times this year about taking compute-intensive operations to Windows Azure, and/or about High Performance Computing on Windows Azure. It is an interesting (if slightly niche) area. The good news is we have a great paper from David Chappell on HPC Server and Windows Azure integration. As a taster: "A SOA application running entirely on Windows Azure runs its WCF services in Azure Worker nodes." Download it now.
    Related links: other Q&A posts on my team blog. Don't forget to connect with the UK team if you stumbled across this post by accident/bing/google.

    Read the article

  • Handle cases where Nhibernate subclass does not exist

    - by kaykayman
    I have a scenario where I am using NHibernate to map records from one table to several different derived classes based on a discriminator.

        public class BaseClass { }
        public class DerivedClass0 : BaseClass { }
        public class DerivedClass1 : BaseClass { }
        public class DerivedClass2 : BaseClass { }

    I then use NHibernate's DiscriminateSubClassesOnColumn() method and alter the configuration to include:

        <subclass name="DerivedClass0" extends="BaseClass" discriminator-value="discriminator0" />
        <subclass name="DerivedClass1" extends="BaseClass" discriminator-value="discriminator1" />
        <subclass name="DerivedClass2" extends="BaseClass" discriminator-value="discriminator2" />

    so that when mapped, these classes are cast to their derived classes and not BaseClass. However, there are some records in my database which have a discriminator that does not have a corresponding subclass. In these cases, NHibernate throws an error: "Object with id: 'xxx' was not of the specified subclass...". Is there some way I can handle this, so that any records which do not have a corresponding subclass are cast to BaseClass rather than an error being thrown? I have simplified the above as much as possible; however, it is worth noting that the XML is edited dynamically, which is why I am referencing Fluent NHibernate [DiscriminateSubClassesOnColumn()] and XML at the same time. The following things (which would help) are not an option: I cannot correct the data to remove the invalid records, and I cannot create subclasses for those records which do not have one. I need to handle cases where NHibernate tries to map on a discriminator and finds that one does not exist.

    Read the article

  • F# and statically checked union cases

    - by Johan Jonasson
    Soon my brother-in-arms Joel and I will release version 0.9 of Wing Beats. It's an internal DSL written in F#. With it you can generate XHTML. One of the sources of inspiration has been the XHTML.M module of the Ocsigen framework. I'm not used to the OCaml syntax, but I do understand that XHTML.M somehow statically checks whether the attributes and children of an element are of valid types. We have not been able to statically check the same thing in F#, and now I wonder if someone has any idea of how to do it? My first naive approach was to represent each element type in XHTML as a union case. But unfortunately you cannot statically restrict which cases are valid as parameter values, as in XHTML.M. Then I tried to use interfaces (each element type implements an interface for each valid parent) and type constraints, but I didn't manage to make it work without explicit casting in a way that made the solution cumbersome to use, and it didn't feel like an elegant solution anyway. Today I've been looking at Code Contracts, but it seems to be incompatible with F# Interactive: when I hit Alt+Enter it freezes.
    Just to make my question clearer, here is a super simple artificial example of the same problem:

        type Letter =
            | Vowel of string
            | Consonant of string

        let writeVowel = function
            | Vowel str -> sprintf "%s is a vowel" str

    I want writeVowel to accept only Vowels statically, rather than checking at runtime as above. How can we accomplish this? Does anyone have any idea? There must be a clever way of doing it. If not with union cases, maybe with interfaces? I've struggled with this, but am trapped in the box and can't think outside of it.

    Read the article

  • Why can't I get my Azure, WCF, REST, SSL project working? What am I doing wrong?

    - by Mark E
    I'm trying to get SSL, WCF and REST working under Azure, but the page won't even load. Here are the steps I followed:
    1) I mapped the www.mydomain.com CNAME to my azuresite.cloudapp.net
    2) I procured an SSL certificate for www.mydomain.com and properly installed it in my azuresite.cloudapp.net hosted service project
    3) I deployed my WCF REST service to Azure and started it
    Below is my web.config configuration. The http (non-https) binding version worked correctly: my service URL, http://www.mydomain.com/service.svc/sessions, worked just fine. When I deployed the project with the web.config below, enabling SSL, https://www.mydomain.com/service.svc/sessions does not load at all. What am I doing wrong?

        <system.serviceModel>
          <services>
            <service name="Service">
              <!-- non-https worked just fine -->
              <!--
              <endpoint address="" binding="webHttpBinding" contract="IService"
                        behaviorConfiguration="RestFriendly">
              </endpoint>
              -->
              <!-- This does not work, what am I doing wrong? -->
              <endpoint address="" binding="webHttpBinding" bindingConfiguration="TransportSecurity"
                        contract="IService" behaviorConfiguration="RestFriendly">
              </endpoint>
            </service>
          </services>
          <behaviors>
            <endpointBehaviors>
              <behavior name="RestFriendly">
                <webHttp></webHttp>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <bindings>
            <webHttpBinding>
              <binding name="TransportSecurity">
                <security mode="Transport">
                  <transport clientCredentialType="None"/>
                </security>
              </binding>
            </webHttpBinding>
          </bindings>
        </system.serviceModel>
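
    One piece not shown in the question that this setup also needs: the hosted service must expose an HTTPS input endpoint, since the web.config binding alone does not open port 443. A sketch of the relevant ServiceDefinition.csdef fragment (role and certificate names are placeholders):

        <WebRole name="ServiceWebRole">
          <InputEndpoints>
            <InputEndpoint name="HttpIn" protocol="http" port="80" />
            <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
          </InputEndpoints>
          <Certificates>
            <!-- The certificate itself is referenced by thumbprint in ServiceConfiguration.cscfg -->
            <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
          </Certificates>
        </WebRole>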

    Read the article

  • Getting a "'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine" error when hosting on Azure servers

    - by user1864888
    I am hosting a web application which uses OLEDB 12 to read an Excel file, but I am getting the error "'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine". I wonder if I should use a lower version of OLEDB, or whether Azure does not support OLEDB services at all. Before rewriting the program, I just wanted to see if anyone knows a better alternative. I am using Visual Studio 2010 for my development. Thanks in advance.
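
    If the file is .xlsx, one alternative that avoids the OLEDB provider entirely is the Open XML SDK, a plain managed library that deploys with the application. A tiny sketch (file name is a placeholder; requires a reference to DocumentFormat.OpenXml):

        using System;
        using DocumentFormat.OpenXml.Packaging;
        using DocumentFormat.OpenXml.Spreadsheet;

        class Program
        {
            static void Main()
            {
                // List the worksheet names without any OLEDB provider involved.
                using (SpreadsheetDocument doc = SpreadsheetDocument.Open("Book1.xlsx", false))
                {
                    foreach (Sheet sheet in doc.WorkbookPart.Workbook.Sheets.Elements<Sheet>())
                        Console.WriteLine(sheet.Name);
                }
            }
        }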

    Read the article

  • Which do I select - Windows Azure or Amazon EC2 - for hosting unmanaged C++ code?

    - by sharptooth
    We have a server solution written entirely in unmanaged Visual C++. It contains complicated methods for really heavy data processing. The whole thing is millions of lines of code, so rewriting it all in some other language is not an option; we could write some extra code or make isolated changes, but rewriting everything is out of the question. Now we'd like to put it on a cloud. Which platform do we choose - Amazon EC2 or Windows Azure - and why?

    Read the article
