Search Results

Search found 20426 results on 818 pages for 'service packs'.


  • Solved::MessageQueue.BeginReceive() null ref error - c#

    - by ltech
    Have a Windows service that listens to an MSMQ queue. In the OnStart method I have this:

      protected override void OnStart(string[] args)
      {
          try
          {
              // this part works, as I had logging before and after this call
              _queue = new MessageQueue(_qPath);
              // add the MSMQ event handler - this part works too
              _queue.ReceiveCompleted += new ReceiveCompletedEventHandler(queue_ReceiveCompleted);
              // this is where it fails - I get a NullReferenceException
              _queue.BeginReceive();
          }
          catch (Exception ex)
          {
              // note: ex.InnerException can itself be null here, which would hide the real error
              EventLogger.LogEvent(EventSource, EventLogType,
                  "OnStart" + _lineFeed + ex.InnerException.ToString() + _lineFeed + ex.Message.ToString());
          }
      }

    where

      private MessageQueue _queue = null;

    This works on my machine, but when deployed to a Windows 2003 server and run under the Network Service account it fails. Exception received:

      Service cannot be started. System.NullReferenceException: Object reference not set to an instance of an object.
      at MYService.Service.OnStart(String[] args)
      at System.ServiceProcess.ServiceBase.ServiceQueuedMainCallback(Object state)

    Solved: It turned out that for the queue I had set up, I had to explicitly grant the Network Service account access to it under the queue's Security tab.
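    If you prefer to grant that access in code rather than through the Security tab, here is a minimal sketch (run once under an administrative account; the queue path is hypothetical):

      using System.Messaging;

      // grant the service account the rights BeginReceive needs on the queue
      var queue = new MessageQueue(@".\private$\MyQueue");
      queue.SetPermissions(@"NT AUTHORITY\NETWORK SERVICE",
          MessageQueueAccessRights.ReceiveMessage
          | MessageQueueAccessRights.PeekMessage
          | MessageQueueAccessRights.GetQueueProperties,
          AccessControlEntryType.Allow);

    Either way, the point is the same: the account the service runs under needs receive rights on the queue before BeginReceive will work.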

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don't have the option of custom templates. I have decided on using a string based field within the template that is not very visible, and which we don't otherwise use, to hold a small set of custom values which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (note: we are also using Visual Studio 2012).

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don't appear to be answers at all given they didn't work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    1. Go here and get the "TFS Service Credential Viewer". Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don't bother trying to do anything else.

    2. In your MVC app, reference the following assemblies from "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0":
       Microsoft.TeamFoundation.Client.dll
       Microsoft.TeamFoundation.Common.dll
       Microsoft.TeamFoundation.VersionControl.Client.dll
       Microsoft.TeamFoundation.VersionControl.Common.dll
       Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
       Microsoft.TeamFoundation.WorkItemTracking.Client.dll
       Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    3. If hosting this in Internet Information Server, enable 32 bit support for the application pool this app runs under.

    4. You also have to allow the TFS client assemblies to store a cache of files on your system. If you don't do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to the <appSettings> element of your web.config:

       <appSettings>
         <!-- Add reference to TFS Client Cache -->
         <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
       </appSettings>

    With all that in place, you can write the following code:

      var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}");
      var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
      var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
      currentCollection.EnsureAuthenticated();

    In the above code, note that the URL contains "defaultcollection" at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account username and password generated in the first step are substituted in here.

    Note: If something is not right, the EnsureAuthenticated() call will throw an exception with a message saying you are not authorised. If you forget the "defaultcollection" on the URL, it will still fail, but with a similar yet different "not authorised" message.

    And that is it. You can then query the collection using something like:

      var service = currentCollection.GetService<WorkItemStore>();
      var proj = service.Projects[0];
      var allQueries = proj.StoredQueries;
      for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
      {
          var query = allQueries[qcnt];
          var queryDesc = string.Format("Query found named: {0}", query.Name);
      }

    You get the idea. If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with that method and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.

    Read the article

  • Try all available WSDL IPs with JAX-WS

    - by Asaf
    I'm using JAX-WS to open a service port. When the DNS exposes two IPs for the DNS entry (of the WSDL), the Service tries only the first one, resulting in a "Failed to access the WSDL at: http://some.url.com/someDocument?wsdl. It failed with: Connection refused: connect" exception. I've found an issue filed against JAX-WS, but with no resolution; this is the comment that describes my problem best. The code is just a one-liner:

      Service service = Service.create("http://some.url.com/someDocument?wsdl", engineQName);

    The clever part is supposed to be exposing those two A records for http://some.url.com/ at the DNS, so I'd like the client to fall back to the second IP when the first refuses the connection. Can anyone help? Thanks.

    Read the article

  • WCF REST based services authentication schemes

    - by FlySwat
    I have a simple authentication scheme for a set of semi-public REST APIs we are building:

      1. The client POSTs its ID/password to an auth service:
         [Client] ----POST----> [Service/Authenticate]
      2. The service checks the credentials and generates a session token in a cookie:
         [Client] <---Session Cookie--- [Service/Authenticate]
      3. The client passes the session cookie with each API request, or gets a 401:
         [Client] ----GET w/ Cookie----> [Service/Something]

    This works well, because the client never needs to do anything except receive a cookie and then pass it along. For browser applications this happens automatically; for non-browser applications it is pretty trivial to save the cookie and send it with each request. However, I have not figured out a good approach for doing the initial handshake from browser applications. For example, if this is all happening using an AJAX technique, what prevents the user from being able to access the ID/password the client is using to handshake with the service? It seems like this is the only stumbling block to this approach and I'm stumped.
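    For what it's worth, the non-browser case mentioned above really is just a few lines; this is a rough sketch (the endpoint URLs and form fields are made up):

      using System.IO;
      using System.Net;

      var cookies = new CookieContainer();

      // 1. handshake: POST credentials, let the container capture the session cookie
      var auth = (HttpWebRequest)WebRequest.Create("https://service.example.com/Service/Authenticate");
      auth.Method = "POST";
      auth.ContentType = "application/x-www-form-urlencoded";
      auth.CookieContainer = cookies;
      using (var writer = new StreamWriter(auth.GetRequestStream()))
          writer.Write("id=someuser&pass=secret");
      auth.GetResponse().Close();

      // 2. subsequent API calls: reuse the same container so the cookie rides along
      var api = (HttpWebRequest)WebRequest.Create("https://service.example.com/Service/Something");
      api.CookieContainer = cookies;
      using (var response = (HttpWebResponse)api.GetResponse())
      {
          // a missing or expired cookie would come back as 401 here
      }

    The browser/AJAX question is really about where those credentials live before step 1; anything embedded in client-side script is visible to the user, which is why schemes like this usually collect the ID/password from the user at runtime (a login form) rather than baking them into the page.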

    Read the article

  • Run Exchange Management Shell cmdlets from Visual Basic/C#/.NET app

    - by nowarninglabel
    Goal: Provide a web service using Visual Basic or C# or .NET that interacts with the Exchange Management Shell, sending it commands to run cmdlets, and return the results as XML. (Note that we could use any language to write the service, but since it is a Windows box and we have Visual Studio 2008, it seemed like the easiest solution would be to just use it to create a VB/.NET web service. Indeed, it was quite easy to do so, just point and click.)

    Problem: How to run an Exchange Management Shell cmdlet from the web service, e.g.,

      Get-DistributionGroupMember "Live Presidents"

    It seems that we should be able to create a PowerShell script that runs the cmdlet, be able to call that from the command line, and thus just call it from within the program. Does this sound correct? If so, how would I go about this? Thanks. The answer can be language agnostic, but Visual Basic would probably be best since that is what I loaded the test web service up in.
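    You can also skip the intermediate script and host PowerShell in-process. The sketch below is C# rather than VB, and it assumes the Exchange 2007-era management snap-in name; treat it as a starting point rather than a drop-in answer:

      using System.Management.Automation;
      using System.Management.Automation.Runspaces;

      // load the Exchange Management Shell snap-in into a hosted runspace
      RunspaceConfiguration config = RunspaceConfiguration.Create();
      PSSnapInException snapInWarning;
      config.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapInWarning);

      using (Runspace runspace = RunspaceFactory.CreateRunspace(config))
      {
          runspace.Open();
          using (Pipeline pipeline = runspace.CreatePipeline())
          {
              Command cmd = new Command("Get-DistributionGroupMember");
              cmd.Parameters.Add("Identity", "Live Presidents");
              pipeline.Commands.Add(cmd);

              foreach (PSObject member in pipeline.Invoke())
              {
                  // shape each result into your XML response here,
                  // e.g. member.Properties["Name"].Value
              }
          }
      }

    The web service method would wrap this and serialize the results; the same calls are available from VB since it is all just the System.Management.Automation assembly.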

    Read the article

  • Authentication for SaaS

    - by josh
    What would be recommended as an authentication solution for a Software-as-a-Service product? Specifically, my product would have clients that would typically have low information technology skills, potentially not even having an IT department within their organization. I would still like to have my application authenticate against their internal directory service (eDirectory, Active Directory, etc.). I don't want them, however, to have to open/forward ports (for instance, opening up port 636 so I can do LDAPS binds directly to their directory service). One idea I had was to have an application installed on a server within their organization's network that would connect back out to my service over a persistent socket. When I need to authenticate a user, I send the credentials via the socket (encrypted); the application then performs a bind (or whatever) to authenticate against the directory service and replies with OK/FAIL. What would you suggest? My goal here is to essentially have the client install an application within their network, with very little configuration or intervention.
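    To make the agent idea concrete, here is a rough sketch of the on-premises side: it holds an outbound TLS connection open to the SaaS service, reads a credential-check request, and answers with the result of an LDAP bind. Every name, the wire format, and the endpoint are invented for illustration:

      using System.DirectoryServices.Protocols;
      using System.IO;
      using System.Net;
      using System.Net.Security;
      using System.Net.Sockets;

      static bool ValidateAgainstDirectory(string host, string userName, string password)
      {
          // the bind succeeds only if the credentials are valid
          try
          {
              var connection = new LdapConnection(new LdapDirectoryIdentifier(host));
              connection.Credential = new NetworkCredential(userName, password);
              connection.Bind();
              return true;
          }
          catch (LdapException)
          {
              return false;
          }
      }

      static void RunAgent()
      {
          // outbound connection, so no inbound firewall ports need opening
          var client = new TcpClient("auth.mysaas.example.com", 443);
          var stream = new SslStream(client.GetStream());
          stream.AuthenticateAsClient("auth.mysaas.example.com");

          using (var reader = new StreamReader(stream))
          using (var writer = new StreamWriter(stream) { AutoFlush = true })
          {
              string line;
              while ((line = reader.ReadLine()) != null)     // assumed format: "user|password"
              {
                  string[] parts = line.Split('|');
                  bool ok = ValidateAgainstDirectory("dc01.customer.local", parts[0], parts[1]);
                  writer.WriteLine(ok ? "OK" : "FAIL");
              }
          }
      }

    In practice you would also want mutual authentication of the tunnel and a smarter protocol, but the shape above is the whole idea: credentials only ever travel down a connection the customer's own agent initiated.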

    Read the article

  • calling webservice in java servlet

    - by Pravin
    I have created a servlet which displays a form having some fields and a submit button, and I have also created a web service having methods which are needed in my servlet. I have deployed the web service on Tomcat 5.5.9/Axis and the servlet web application on Tomcat (the same instance of Tomcat) using Eclipse. Since one is a web service and the other is a web application, they run as separate deployments, and when I run them separately, i.e. the servlet without the call to the web service, and a standalone client that accesses the web service, both work fine. But when I integrate them, I get an error: javax.servlet.ServletException. I would like to call the web service and return the result when I press the submit button. Please advise me on how to implement that.

    Read the article

  • Cross-domain REST proxy with Javascript, HTML5

    - by Bosh
    I'm writing a service (say, service.com) that provides a REST API to external apps running inside iframes. (These apps are hosted on domains outside service.com.) I'm planning a JavaScript client library for the apps to make pure-JavaScript requests to the service.com REST API -- basically using postMessage and some ad-hoc encapsulation of my API calls to get messages back and forth across frames (from the outside-app.com iframe to the service.com REST API, and back to the iframe with a response). My question: is there any robust, general-purpose JavaScript library to accomplish the kind of cross-domain REST request proxying I need, or should I just hack it from scratch?

    Read the article

  • Hosting WCF on IIS5/6

    - by Pinu
    I have developed a web service using a WCF Service Application. This service application is one part of a multi-project solution: we have data access, services (business logic), testing (to test the classes) and the WCF Service Application. The WCF Service Application is just like an interface, and all requests are forwarded to the services project, so all the projects communicate with each other. I am new to hosting WCF applications. Now, to host this on IIS, do I have to put the whole solution in the IIS virtual directory?

    Read the article

  • Open Source alternative to lablife

    - by Yauhen Yakimovich
    I am looking for an open source alternative to the SaaS provided by the lablife.org website. The main purpose of the service is to automate daily tasks for a life science laboratory. The service was free and nice to use, but when the original company developing it was bought by BioData, they decided to kill it and replace it with a new service called labguru. Apparently, the new service has a lot of functionality missing or poorly implemented. That's why I am searching for an alternative solution. So if you are familiar with what this software does, and if there are any known alternatives, I would be very grateful for any tips.

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update about Oracle and clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of cloud computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds

    One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for an external (public) cloud and parts that will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some "IT 'is' the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a Private Cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes, if they want to succeed. So let's have a look at the different cloud models, what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the cloud provider and what is left over to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done

    First of all, the cloud provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot to cover for the end user; that's why the end user targeted by this cloud service is IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for, by providing not only the processor power, storage and OS, but also the database and middleware platform. SaaS is the last mile of the cloud, providing an application ready to use by business users, the remaining part for the end users being configuring and specifying the application for their specific usage.

    In addition to that, there are common challenges encompassing all types of cloud services:

    - Security: covering all aspects, not only user management but also data flows and data privacy
    - Charge back: measuring what is used and by whom
    - Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, the database and middleware for PaaS, to a full business application for SaaS
    - Scalability: the ability to evolve ALL the components of the cloud provider stack as needed
    - Availability: the ability to cover "always on" requirements
    - Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
    - Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
    - Management: providing monitoring, configuration and self-service up to the end users

    Oracle Strategy and Clouds

    For CIOs to succeed in their Private Cloud implementation means covering all those aspects for the life-cycle of each component they selected to build their cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency, and that's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient cloud services solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (at the storage and processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we are using to build clouds, bringing you flexibility, reversibility and a fast path to adoption. With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically, when talking about cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and ability to evolve by design. To give you a feeling of what that brings in terms of just the implementation project timeline, for example with Oracle SPARC SuperCluster, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, database storage (Exadata Storage Cells in this case), application storage, and all network configuration. This strategy enables CIOs to very quickly build cloud services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost. I invite you to discuss all these aspects in regard to your particular context face to face on November 28th.

    Read the article

  • Calling a COM+ Component remotely using C#

    - by Arturo Caballero
    Hello, I have a little COM+ component running as a service on a remote server. I'm trying to execute a method of this component using:

      Type type = Type.GetTypeFromProgID("ComPlusTest.ComTestClass", serverName);
      // Create service object
      object service = Activator.CreateInstance(type);
      // Execute method
      String result = service.GetType().InvokeMember("LaunchPackage", BindingFlags.InvokeMethod, null, service, parameters).ToString();

    The type is returned as null. What is the best way to do this? The server is Windows 2003 Enterprise, and the service is a .NET component wrapped as COM+ (I know that I don't have to do it that way, but the purpose is to integrate a legacy app with a .NET component). The purpose of this is just to test that the .NET COM+ component works. Thanks in advance!
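    One way to get a more useful diagnostic than a null Type is the overload that throws instead. This is only a sketch; it assumes the COM+ application is configured for remote activation (DCOM) and that the calling account has remote launch/activation rights on the server:

      // throwOnError: true surfaces the underlying COM/DCOM error instead of returning null
      Type type = Type.GetTypeFromProgID("ComPlusTest.ComTestClass", serverName, true);
      object service = Activator.CreateInstance(type);
      object result = type.InvokeMember(
          "LaunchPackage",
          System.Reflection.BindingFlags.InvokeMethod,
          null,          // default binder
          service,       // target instance
          parameters);   // method arguments

    GetTypeFromProgID returning null usually just means the ProgID could not be resolved on the machine you pointed it at, so the exception text from the throwing overload is often the quickest route to the real cause.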

    Read the article

  • Mapping java.util.Date to xs:date instead of xs:dateTime in JAX-WS

    - by Larsing
    Hi all, we have an EJB, JWS-annotated as a web service. It has a pretty complex POJO model that generates an equally complex XSD. The POJOs contain numerous java.util.Date fields, and these all map to xs:dateTime. This service is used as a "business service" in Oracle (BEA) OSB (AquaLogic). We also have a "proxy service" which we map to the business service with XQuery (the OSB/AquaLogic way). The proxy service's XSD has xs:date for the corresponding fields. For some reason, Oracle's implementation of XQuery does not support casting from xs:date to xs:dateTime(!). I could solve this by casting to xs:string and concatenating with "T00:00:00", but I would rather get JAX-WS to generate an XSD with xs:date instead. Only, I can't find any info on how to do this (annotations?). Can anyone give me a hint? Kind regards, Lars

    Read the article

  • Login method Customization using GINA [closed]

    - by netseng
    DUPLICATE: http://stackoverflow.com/questions/523912/login-method-customization-using-gina

    Hi all, I know it's not easy to find a GINA expert, but my question is really about interprocess communication (IPC). I wrote my custom GINA in unmanaged C++ and included in it a method that checks the validity of a fingerprint for the user trying to log in. This method calls a method in a running Windows service written in C#. The code follows.

    In the GINA (unmanaged C++):

      if (Fingerprint.Validate(userName, fingerprintTemplate))
      {
          // perform login
      }

    In the Windows service (C#):

      public class Fingerprint
      {
          public static bool Validate(string userName, byte[] fingerprintTemplate)
          {
              // perform some code to validate fingerprintTemplate against userName
              // and return the result
          }
      }

    Does anyone know how to do such communication between GINA and the Windows service, or simply between a C++-written service and a C#-written service? Thanks
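    Named pipes are a common fit here, since both sides can reach them: the C# service exposes a pipe with System.IO.Pipes, and the GINA opens it from C++ with CreateFile on \\.\pipe\... . The sketch below shows only the C# side, and the pipe name and "userName|base64Template" wire format are invented for the example:

      using System;
      using System.IO;
      using System.IO.Pipes;

      static void ServeOneRequest()
      {
          using (var server = new NamedPipeServerStream("FingerprintValidation", PipeDirection.InOut))
          {
              server.WaitForConnection();
              using (var reader = new StreamReader(server))
              using (var writer = new StreamWriter(server) { AutoFlush = true })
              {
                  // assumed request format: "<userName>|<base64 fingerprint template>"
                  string[] parts = reader.ReadLine().Split('|');
                  bool ok = Fingerprint.Validate(parts[0], Convert.FromBase64String(parts[1]));
                  writer.WriteLine(ok ? "OK" : "FAIL");
              }
          }
      }

    On the C++ side the GINA would CreateFile the same pipe, WriteFile the request and ReadFile the one-line reply. Since the GINA runs before anyone is logged on, make sure the pipe's security descriptor allows the winlogon/SYSTEM context to connect.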

    Read the article

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is like a journey which starts well before the customer makes a purchase and lasts long after it. The journey may start with the customer researching a product, lead to the eventual purchase, and continue with support or service needs for the product. A typical customer journey can be represented as shown below.

    As you may notice, customers tend to use multiple channels to interact with a company throughout their journey. They also expect a consistent experience, no matter which interaction channel they choose. Customers do not like to repeat information they have already provided and expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers will not only abandon the purchase and go to a competitor but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and the customer could influence "six" others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges that face B2C companies today include:

    - Delivering consistent experience: Delivering a consistent experience is challenging due to fragmented data, disjointed systems and siloed multichannel interactions. Customers tend to get different service quality if they use the web vs. the phone vs. the store. They get different responses from different service agents, or get inconsistent answers if they call the sales vs. the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.

    - Increasing revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that impact the bottom line. If companies are not able to identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur.

    - Ensuring compliance: Companies must be compliant with ever-changing regulations; these could be about Know Your Customer (KYC), export/import regulations, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs that they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time.

    With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the customer experience as the customer expects it and interacting with the customer at the right time using the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.

    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized in departments like Marketing, Sales, Service. You may hold prospect data in SFA, order information in ERP, customer issues in CRM. However, the experience delivered to the customer must not be constrained by your legacy systems. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems and applications to make sure the right information will be delivered to the right knowledge worker, or to the customer, every single time. Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service) and channels (email, phone, web, social) is the key, and that's what BPM delivers.

    In addition to orchestrating systems and channels for consistency, BPM also provides an ability for analysis and decision management. By using data from historical transactions, social media and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. Working with a real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues get escalated. A BPM system can be designed to listen for certain event patterns, deduce from them customer situations (credit card stolen, baggage lost, change of address) and perform triage before the situation gets out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions.

    Last but not least, one of BPM's key values is to drive continuous improvement. Learning about customers' past experiences, interactions and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the customer experience. You may take these insights as an input to design better, more efficient and customer-friendly sales, contact center or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as a part of your strategy to design, orchestrate and improve your customer-facing processes.

    Read the article

  • Should I use custom exceptions to control the flow of application?

    - by bonefisher
    Is it a good practice to use custom business exceptions (e.g. BusinessRuleViolationException) to control the flow of user errors / incorrect user input?

    The classic approach: I have a web service with 2 methods, one is the 'checker' (UsernameAlreadyExists()) and the other is the 'creator' (CreateUsername()). So if I want to create a username, I have to do 2 round trips to the web service: 1. check, 2. if the check is OK, create.

    What about using a UsernameAlreadyExistsException? Then I call only the second web service method (CreateUsername()), which contains the check and, if it is not successful, throws the UsernameAlreadyExistsException. The end goal is to have only one round trip to the web service, and the checking can be contained in other web service methods as well (so I avoid calling UsernameAlreadyExists() every time). Furthermore, I can use this kind of business error handling with other web service calls, completely avoiding the checking prior to the call.
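    For reference, the single-round-trip shape being described looks roughly like this (the names are the ones from the question; the method body is illustrative):

      using System;

      public class UsernameAlreadyExistsException : Exception
      {
          public UsernameAlreadyExistsException(string username)
              : base("Username '" + username + "' already exists.") { }
      }

      public void CreateUsername(string username)
      {
          // the check now lives inside the same call...
          if (UsernameAlreadyExists(username))
              throw new UsernameAlreadyExistsException(username);

          // ...and only valid requests reach this point
          SaveUsername(username);
      }

    Note that over SOAP/WCF an unhandled exception surfaces to the caller as a fault, so the client still learns why the call failed; the design question is whether an already-taken username is truly "exceptional" or just one of the expected outcomes, in which case a result object (e.g. a status enum) avoids using exceptions for flow control.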

    Read the article

  • Reading the tea leaves from Windows Azure support

    - by jamiet
    A few idle thoughts…

    Three months ago I had an issue regarding Windows Azure where I was unable to login to the management portal. At the time I contacted Azure support, the issue was soon resolved and I thought no more about it. Until today, that is, when I received an email from Azure support providing a detailed analysis of the root cause, the fix, and moreover precise details about when and where things occurred. The email itself is interesting and I have included the entirety of it below. A few things were interesting to me:

    - The level of detail and the diligence in investigating and reporting the issue I found really rather impressive. They even outline the number of users that were affected (127, in case you can't be bothered reading). Compare this to the quite pathetic support that another division within Microsoft, Skype, provided to Greg Low recently: Skype support and dead parrot sketches.

    - This line: "Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification." I also found to be particularly interesting. I have long thought that one of the reasons Microsoft has proved to be such a money-making machine in the enterprise is because they provide the infrastructure and then upsell on top of that – and nothing is more infrastructural than Active Directory. It has struck me of late that they are trying to make the same play in the cloud by tying all their services into Azure Active Directory, and here we see a clear indication of that by making AAD the authentication mechanism for anyone using Windows Azure. I get the feeling that we're going to hear much, much more about AAD in the future; isn't it about time we could log on to Windows Azure SQL Database (née SQL Azure) without resorting to SQL authentication, for example? And why do Microsoft have two identity providers – Microsoft Account (aka Windows Live ID) and AAD – isn't it about time those things were combined?

    As I said, just some idle thoughts. Below is the transcript of the email if you are interested.

    @Jamiet
    This is regarding the support request <redacted> wherein you were not able to login to the Windows Azure management portal with your Live ID. We are providing you with the summary, root cause analysis and information about the permanent fix.

    Incident Title: You were unable to access Windows Azure Portal after Microsoft Account to Azure Active Directory account Migration.
    Service Impacted: Management Portal
    Incident Start Date and Time: 8/24/2012 4:30:00 PM
    Date and Time Service was Restored: 10/17/2012 12:00:00 AM

    Summary: Windows Azure performed a planned change from using the Microsoft account service (formerly Windows Live ID) to the Azure Active Directory (AAD) as its primary authentication mechanism on August 24th. This change was made to enable future innovation in the area of authentication – particularly for organizationally owned identities, identity federation, stronger authentication methods and compliance certification. While this migration was largely transparent to Windows Azure users, a small number of users whose sign-in names were part of a Windows Live Custom Domain were unable to login. This incompatibility was not discovered during the Quality Assurance testing phase prior to the migration.

    Customer Impact: Customers whose sign-in names were part of a Windows Live Custom Domain were unable to sign in to the Management Portal after ~4:00 p.m. PST on August 24th, 2012. We determined that the issue did impact at least 127 users in 98 of these Windows Live Custom Domains and had a maximum potential impact of 1,110 users in total.

    Root Cause: The root cause of the issue was an incompatibility in the AAD authentication service in handling logins from Microsoft accounts whose sign-in names were part of a Windows Live Custom Domain. This issue was not discovered during the Quality Assurance testing phase prior to the migration from Microsoft Account (MSA) to AAD.

    Mitigations: The issue was mitigated for the majority of affected users by 8:20 a.m. PST on August 25th, 2012 by running some internal scripts to correct many known Windows Live Custom Domains. The remaining affected domains fell into two categories:
    - Windows Live Custom Domains that were not corrected by 8/25/2012. An additional 48 Windows Live Custom Domains were fixed in the weeks following the incident, within 2 business days after the AAD team received an escalation from product support regarding those accounts.
    - Windows Live Custom Domains that were also provisioned in Office365. Some of the affected Windows Live Custom Domains had already been provisioned in AAD because their owners signed up for Office365, which is a service that also uses AAD. In these cases the Azure customers had to work around the issue by renaming their Microsoft Account or using a different Microsoft Account to administer their Azure subscription.

    Permanent Fix: The Azure Active Directory team permanently fixed the issue for all customers on 10/17/2012 in an upgraded release of the AAD service.

    Read the article

  • What are "web services"?

    - by Kevin
    I'm reading a book about programming ASP.NET in C#. The book makes the following comment:

    "Previous editions of this book tackled web services, a feature that allows you to create code routines that can be called by other applications over the Internet. Web services are more interesting when considering rich client development (because they allow you to give web features to ordinary desktop applications), and they're in the process of being replaced by a new technology known as WCF (Windows Communication Foundation). For those reasons, web services aren't covered in this book. However, if you want to branch out and explore the web service world, you can download the web service chapters from the previous edition of this book from the book's download page. The information in these chapters still applies to ASP.NET 3.5, because the web service feature hasn't changed."

    Can someone offer, in "layman's terms", what exactly a web service is and whether, indeed, they are being replaced, at least in .NET, with WCF? What would be a practical example of a web service? Are they stand-alone programs that run on a web server and are invoked by a client or clients?
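    For a concrete (if simplified) picture of what the book means, a classic ASP.NET web service is just a class whose methods are exposed over HTTP/SOAP; the example below is illustrative, not from the book:

      using System.Web.Services;

      [WebService(Namespace = "http://example.com/quotes")]
      public class QuoteService : WebService
      {
          // callable over the network by any client that can speak HTTP/SOAP
          [WebMethod]
          public decimal GetQuote(string symbol)
          {
              // a real service would look the price up somewhere
              return 42.00m;
          }
      }

    A desktop app, a web app, or code on another platform entirely can generate a proxy from the service's WSDL and call GetQuote as if it were a local method; WCF covers the same ground (and more transports than just HTTP), which is why the book treats .asmx services as superseded.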

    Read the article

  • "derivative work" and the consumption of web services

    - by yodaj007
    From the Wowhead Terms of Service: "Intellectual Property Rights The Service and any necessary software used in connection with the Service ("Software") contain proprietary and confidential information that is protected by applicable intellectual property and other laws. You agree not to modify, rent, lease, loan, sell, distribute or create derivative works based on the Service or the Software, in whole or in part." Does this mean that I can't write a program to consume a web service being published by the writers of this TOS? I find it kind of scary that I even have to ask this question. The wikipedia article on "derivative works" isn't very conclusive.

    Read the article

  • How should my team decide between 3-tier and 2-tier architectures?

    - by j0rd4n
    My team is discussing the future direction we take our projects. Half the team believes in a pure 3-tier architecture while the other half favors a 2-tier architecture.

    Project assumptions:
    - Enterprise business applications
    - Business logic needed between user and database
    - Data validation necessary
    - Service-oriented (prefer RESTful services)
    - Multi-year maintenance plan
    - Support hundreds of users

    3-tier team favors:
    - Persistence layer <== Domain layer <== UI layer
    - Service boundary between at least the persistence layer and the domain layer; the domain layer might have a service boundary within it
    - Translations between each layer (clean DTO separation)
    - Hand-rolled persistence unless we can find creative yet elegant automation

    2-tier team favors:
    - Entity Framework + WCF Data Services layer <== UI layer
    - Business logic kept in WCF Data Service interceptors
    - Minimal translation between layers - favor faster coding

    So that's the high-level argument. What considerations should we take into account? What experiences have you had with either approach?
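    To make the "clean DTO separation" point concrete, the 3-tier option roughly means boundaries like the sketch below (all names invented for illustration):

      using System;

      // crosses the boundary between layers; no EF/ORM types leak out
      public class CustomerDto
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      // persistence layer contract, hand-rolled or generated
      public interface ICustomerRepository
      {
          CustomerDto GetById(int id);
      }

      // domain layer: business rules sit between UI and persistence
      public class CustomerService
      {
          private readonly ICustomerRepository _repository;
          public CustomerService(ICustomerRepository repository) { _repository = repository; }

          public CustomerDto Find(int id)
          {
              if (id <= 0) throw new ArgumentOutOfRangeException("id");
              return _repository.GetById(id);
          }
      }

    The 2-tier option collapses the repository and domain service: the UI talks (via WCF Data Services) almost directly to the Entity Framework model, with business rules pushed into interceptors. That is the trade the two halves of the team are really debating: less ceremony and faster coding versus an explicit seam you can test, version and swap over a multi-year maintenance plan.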

    Read the article

  • Have a doubt on uritemplates in WCF Services

    - by Debby
    I have a web service with the following operation contract, and my service is hosted at //localhost:9002/Service.svc/ (I did not put http: before the localhost, as a new user can post only one hyperlink):

      [OperationContract]
      [WebGet(UriTemplate = "/Files/{Filepath}")]
      Stream DownloadFile(string Filepath);

    This web service would let users download a file if the proper file path is provided (assuming I somehow find out the proper file path). Now, I can access this service from a browser by typing //localhost:9002/Service.svc/Files/{Filepath}. If {Filepath} is some simple string, it's not a problem, but I want to send the location of the file. Let us say users want to download the file "C:\Test.mp3" on the server. But how can I pass "C:\Test.mp3" as {Filepath}? I get an error when I type http://localhost:9002/Service.svc/Files/C:\Test.mp3 in the browser. I am new to web services and find that the quickest way to clear doubts is in this community.
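    Part of the trouble is that "C:\Test.mp3" contains characters (':' and '\') that can't appear raw in a single URI segment. Two things worth trying, offered as a sketch rather than a guaranteed fix, since request filtering may still need loosening for some characters:

      // a wildcard segment lets the rest of the URL (including '/') flow into one parameter
      [OperationContract]
      [WebGet(UriTemplate = "/Files/{*Filepath}")]
      Stream DownloadFile(string Filepath);

      // on the caller's side, escape the local path before putting it in the URL
      string encoded = Uri.EscapeDataString(@"C:\Test.mp3");   // "C%3A%5CTest.mp3"
      // GET http://localhost:9002/Service.svc/Files/C%3A%5CTest.mp3

    A common alternative is to never put raw server paths in the URL at all: expose an ID or a relative file name and have DownloadFile map it to the real path on the server, which also avoids handing clients the ability to request arbitrary files.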

    Read the article

  • Is there a way to bring an application's GUI to the current desktop?

    - by Davy8
    Background: I started a fair amount of work before realizing that a Windows service cannot start an app whose GUI displays, at least not without potential problems. The proper solution of separating out the GUI of the app to be started is non-trivial, so I'm trying to think of alternative solutions. There is a GUI to manage the service that is a separate executable, but the process to be launched (actually multiple instances of it) has its own GUI that needs to be shown. It doesn't need to be made visible by the service itself, but it needs to at least be able to be made visible by another process with a visible GUI. The Windows user that runs the service and that needs to see the GUI of the launched process is the same and is known at install time. Is there some way to accomplish this, or is it back to the drawing board? Also, the service and the app to launch are both our code and modifiable.

    Read the article

  • weird access denied issue with WMI

    - by stackunderflow1
    I'm seeing a weird access denied issue with WMI. We're trying to create a differencing disk based on a parent VHD in a Windows service app that runs under Network Service (the machine account is an admin). Everything works fine when we create the diff disk on another machine using WMI - we use an admin user account. However, we cannot do this on the local machine, as WMI doesn't take user credentials for the local machine. I thought the Network Service account should already have access for this, but it seems like it doesn't, and even if we run the service under an admin service account, it fails. Any pointers?

    Read the article

  • a completely decoupled OO system ?

    - by shrini1000
    To make an OO system as decoupled as possible, I'm thinking of the following approach:

    1) We run an RMI/directory-like service where objects can register and discover each other. They talk to this service through an interface.
    2) We run a messaging service to which objects can publish messages and register subscription callbacks. Again, this happens through interfaces.
    3) When object A wants to invoke a method on object B, it discovers the target object's unique identity through #1 above, and publishes a message on the message service for object B.
    4) The messaging service invokes B's callback to give it the message.
    5) B processes the request and sends the response for A on the message service.
    6) A's callback is called and it gets the response.

    I feel this system is as decoupled as practically possible, but it has the following problems:

    1) Communication is typically asynchronous.
    2) Hence it's not real-time.
    3) The system as a whole is less efficient.

    Are there any other practical problems where this design obviously won't be applicable? What are your thoughts on this design in general?
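    In interface terms, the two pieces of infrastructure being described boil down to something like this sketch (all names invented; shown in C# purely for illustration):

      using System;

      public interface IDirectoryService
      {
          // objects register under a unique identity and can be looked up by it
          void Register(string objectId, object endpoint);
          object Discover(string objectId);
      }

      public interface IMessageBus
      {
          void Publish(string topic, object message);
          void Subscribe(string topic, Action<object> onMessage);
      }

      // A's call to B, spelled out as steps 3-6 above:
      //   bus.Subscribe("replies/A", HandleReply);      // step 6: A waits for its reply
      //   var b = directory.Discover("B");              // step 3: find B's identity
      //   bus.Publish("requests/B", request);           // steps 3-4; B answers on "replies/A" (step 5)

    Writing it down this way also makes the listed drawbacks visible: every call becomes two hops through the bus, and any request/response pairing (correlation IDs, timeouts) has to be built on top.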

    Read the article

  • coldfusion class path for multiple services - only recognizing first directory

    - by ffc
    Hi, I am trying to use multiple web services on a single CF page. I have entered the stubs directory for each of the web services in the ColdFusion Class Path within Administrator (Server Settings > Java and JVM), with the paths separated by commas: c:\coldfusion9\stubs\ws1,c:\coldfusion9\stubs\ws2. For some reason, only the web service whose stub path is listed first will work. When I try to call the second web service, I get a "web service operation ... cannot be found" error, but if I switch the order of the paths listed in the admin settings and restart the service, the web service now listed first will work. Any ideas on how to manage multiple web services and their stubs? Thanks!

    Read the article
