Search Results

Search found 5741 results on 230 pages for 'azure vm'.

  • Access Control Service: Protocol and Token Transition

    - by Your DisplayName here!
    ACS v2 supports a number of protocols (WS-Federation, WS-Trust, OpenId, OAuth 2 / WRAP) and a number of token types (SWT, SAML 1.1/2.0) – see Vittorio's infographic here. Some protocols are designed for active clients (WS-Trust, OAuth / WRAP) and some are designed for passive clients (WS-Federation, OpenID). One of the most obvious advantages of ACS is that it allows you to transition between the various protocols and token types. One example would be using WS-Federation/SAML between your application and ACS to sign in with a Google account. Google uses OpenId and non-SAML tokens, but ACS transitions into WS-Federation and sends back a SAML token. This way your application only needs to understand a single protocol, while ACS acts as a protocol bridge (see my ACS2 sample here). Another example would be the transformation of a SAML token to a SWT. This is achieved using the WRAP endpoint – you send a SAML token (from a registered identity provider) to ACS, and ACS turns it into a SWT token for the requested relying party, e.g. (using the WrapClient from Thinktecture.IdentityModel):

        [TestMethod]
        public void GetClaimsSamlToSwt()
        {
            // get saml token from idp
            var samlToken = Helper.GetSamlIdentityTokenForAcs();

            // send to ACS for SWT conversion
            var swtToken = Helper.GetSimpleWebToken(samlToken);

            var client = new HttpClient(Constants.BaseUri);
            client.SetAccessToken(swtToken, WebClientTokenSchemes.OAuth);

            // call REST service with SWT
            var response = client.Get("wcf/client");
            Assert.AreEqual<HttpStatusCode>(HttpStatusCode.OK, response.StatusCode);
        }

    There are more protocol transitions possible – but they are not so obvious. A popular example is calling a REST/SOAP service using, e.g., a LiveId login. In the next post I will show you how to approach that scenario.

    Read the article

  • How to use the Call Web Service action in a SharePoint 2013 workflow

    - by ybbest
    In SharePoint 2013 workflows you can use the Call Web Service action together with loops. In this post, I will show you how to achieve this.
    1. Create a list workflow called CallWebService.
    2. Create a variable called listurl and assign it the value http://sp2010/_vti_bin/listdata.svc.
    3. Create a dictionary variable called RequestHeaders and add the following key/value pairs.
    4. Call the web service with the HTTP headers you just built in the previous step and store the response in the variable ResponseContent.
    5. The ResponseContent variable is a Dynamic Value (in SharePoint Designer it is called a dictionary type), which is a new feature of SharePoint 2013 workflows. We can use the following actions to count the number of items in the variable.
    6. You can use a loop in a SharePoint 2013 workflow and output each list title as shown below.
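    As a rough illustration of what the Call Web Service action in step 4 does under the hood, here is a hedged C# sketch of the equivalent REST call. The Accept header value is an assumption (a typical choice for getting JSON back from listdata.svc); the code is not taken from the original post.

        using System;
        using System.Net.Http;

        class ListDataCall
        {
            static void Main()
            {
                // Use the caller's Windows credentials, as the workflow would in a SharePoint context.
                var handler = new HttpClientHandler { UseDefaultCredentials = true };
                using (var client = new HttpClient(handler))
                {
                    // Same idea as the RequestHeaders dictionary from step 3 (assumed header value).
                    client.DefaultRequestHeaders.TryAddWithoutValidation("Accept", "application/json;odata=verbose");

                    // listurl variable from step 2
                    string responseContent = client.GetStringAsync("http://sp2010/_vti_bin/listdata.svc").Result;
                    Console.WriteLine(responseContent);
                }
            }
        }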

    Read the article

  • ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages

    - by DigiMortal
    If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access the content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so that users must also have filled in their user profile.

    Semi-authorized users. When a user is authenticated through an external identity provider, not every provider gives us the user name or the other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user. For example, users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider can give you the user name and e-mail address if the user agrees to share this information with you. Both give you a unique ID for the user when the user is successfully authenticated by their service. There is a logical shift between ASP.NET and my site in what counts as an authorized user. For ASP.NET MVC, a user is authorized when the user has an identity. For my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system and he or she is always identified by this username by other users. My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method; if the user has no profile, the browser is redirected to the join page.

    Illustrating the problem. Usually we restrict access to a page using AuthorizeAttribute. The code is something like this:

        [Authorize]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    If this page is only for site users and we have user profiles, then all users – the ones that have a profile and all the others that are merely authenticated – can access the information. That is okay in the sense that all of these users have successfully logged in to some service that is supported by AppFabric ACS. On my site, the users with no profile are in a grey area. They are halfway to being users because they have no username and profile on my site yet. So, looking at the image above again, we need something that adds a profile-existence condition to user-only content:

        [ProfileRequired]
        public ActionResult Details(string id)
        {
            var profile = _userRepository.GetUserByUserName(id);
            return View(profile);
        }

    This attribute will solve our problem as soon as we implement it. ProfileRequiredAttribute: profiles are required to be fully authorized. Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now it is more of a working draft, but you can already play with it.
        public class ProfileRequiredAttribute : AuthorizeAttribute
        {
            private readonly string _redirectUrl;

            public ProfileRequiredAttribute()
            {
                _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];
                if (string.IsNullOrWhiteSpace(_redirectUrl))
                    _redirectUrl = "~/";
            }

            public override void OnAuthorization(AuthorizationContext filterContext)
            {
                base.OnAuthorization(filterContext);

                var httpContext = filterContext.HttpContext;
                var identity = httpContext.User.Identity;

                if (!identity.IsAuthenticated || identity.GetProfile() == null)
                    if (filterContext.Result == null)
                        httpContext.Response.Redirect(_redirectUrl);
            }
        }

    All methods with this attribute work as follows: if the user is not authenticated, he or she is redirected to the AppFabric ACS identity provider selection page; if the user is authenticated but has no profile, the user is by default redirected to the main page of the site, but if you have an application setting named JoinUrl the user is redirected to that URL instead. The first case is handled by AuthorizeAttribute and the second one by the custom logic in the ProfileRequiredAttribute class.

    GetProfile() extension method. To get the user profile with less code in the places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers from claims, and based on this information the user profile is read from the database. If you just copy and paste this code I am sure it won't work for you, but you get the idea.

        public static User GetProfile(this IIdentity identity)
        {
            if (identity == null)
                return null;

            var context = HttpContext.Current;
            if (context.Items["UserProfile"] != null)
                return context.Items["UserProfile"] as User;

            var provider = identity.GetIdentityProvider();
            var nameId = identity.GetNameIdentifier();

            var rep = ObjectFactory.GetInstance<IUserRepository>();
            var profile = rep.GetUserByProviderAndNameId(provider, nameId);

            context.Items["UserProfile"] = profile;

            return profile;
        }

    To avoid round trips to the database I cache the user profile in the current request, because the chance that the profile changes in the meantime is minimal. The other reason is maybe more tricky – profile objects come from an Entity Framework context, and that context also has the HTTP request as its lifecycle. Conclusion: this posting gave you some ideas on how to finish the user profile story when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of "authorized", we were easily able to solve the problem by extending AuthorizeAttribute so that all our requirements are fulfilled. We also wrote an extension method for IIdentity that returns the user profile and caches it in HTTP request scope.

    Read the article

  • DevIntersection Conference Dec 9th-12th

    - by ScottGu
    I’m excited to be presenting a keynote at the DevIntersection conference this coming Dec 9th->12th in Las Vegas.  This conference has an awesome set of speakers from a variety of backgrounds.  A number of people from my team (including Scott Hanselman, Scott Hunter and Daniel Roth from the ASP.NET team) will be presenting in addition to me.  You can learn more about the conference and check out the schedule here. Attendees who register by November 20th will receive a free Windows 8 Tablet – so if you are interested in attending sign-up soon! Hope to see some of you there, Scott

    Read the article

  • Access Control Service: Home Realm Discovery (HRD) Gotcha

    - by Your DisplayName here!
    I really like ACS2. One feature that is very useful is home realm discovery. ACS provides a Nascar-style list as well as discovery based on email addresses. You can take control of the home realm selection process yourself by downloading the JSON feed or by manually setting the home realm (whr) parameter. Plenty of options – the only option missing is turning it off… In other words, once you set up your ACS namespace and realm and register identity providers, there is no way to keep the list of identity providers secret. An interested "user" can always retrieve all registered identity providers (using the browser or by downloading the JSON feed). This may not be an issue with web identity providers, but when you use ACS to federate with customers or business partners, you may not want to disclose that list to the public (or to other customers). This is an adoption blocker in certain situations. I hope this feature will be added soon. In addition I would also like to see a feature I call "home realm aliases" – some random string that I can use as a whr parameter instead of the real issuer URI.
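    For context, "manually setting the home realm parameter" means appending whr to the WS-Federation sign-in request so ACS skips its selection page. A minimal sketch follows – the namespace, realm and issuer URI are placeholders, not values from the post:

        using System;

        class HomeRealmLink
        {
            static void Main()
            {
                // Hypothetical values – substitute your ACS namespace, your realm and a registered issuer URI.
                string signInUrl =
                    "https://yournamespace.accesscontrol.windows.net/v2/wsfederation" +
                    "?wa=wsignin1.0" +
                    "&wtrealm=" + Uri.EscapeDataString("https://yourapp.example.com/") +
                    "&whr=" + Uri.EscapeDataString("https://partner-idp.example.com/");

                // Redirecting the browser to this URL sends the user straight to that identity
                // provider and skips the ACS home realm discovery (Nascar) list.
                Console.WriteLine(signInUrl);
            }
        }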

    Read the article

  • Access Control Service: Transitioning between Active and Passive Scenarios

    - by Your DisplayName here!
    As I mentioned in my last post, ACS features a number of ways to transition between protocols and token types. One not so widely known transition is between passive sign-ins (browser) and active service consumers. Let's see how this works. We all know the usual WS-Federation handshake via passive redirect. But ACS also lets you drive the sign-in process yourself via specially crafted WS-Federation query strings. So you can use the following URL to sign in using LiveID via ACS. ACS will then redirect back to the registered reply URL in your application:

        GET /login.srf?
          wa=wsignin1.0&
          wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&
          wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&
          wp=MBI_FED_SSL&
          wctx=pr%3dwsfederation%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f

    The wsfederation bit in the wctx parameter indicates that the response to the token request will be transmitted back to the relying party via a POST. So far so good – but how can an active client receive that token? ACS knows an alternative way to send the token request response. Instead of doing the redirect back to the RP, it emits a page that in turn echoes the token response using JavaScript's window.external.notify. The URL would look like this:

        GET /login.srf?
          wa=wsignin1.0&
          wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&
          wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&
          wp=MBI_FED_SSL&
          wctx=pr%3djavascriptnotify%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f

    ACS would then render a page that contains the following script block:

        <script type="text/javascript">
            try {
                window.external.Notify('token_response');
            }
            catch (err) {
                alert("Error ACS50021: windows.external.Notify is not registered.");
            }
        </script>

    where token_response is a JSON-encoded string with the following format:

        {
          "appliesTo":"...",
          "context":null,
          "created":123,
          "expires":123,
          "securityToken":"...",
          "tokenType":"..."
        }

    OK – so how does this all come together? As an active client application (Silverlight, WPF, WP7, WinForms, etc.), you would host a browser control and use the above URL to trigger the right series of redirects. All the browser controls support one way or another of registering a callback for whenever the window.external.notify function is called. This way you get the JSON string from ACS back into the hosting application – and voilà, you have the security token. If you selected the SWT token format in ACS, you can use that token e.g. for REST services. If you selected SAML, you can use the token e.g. for SOAP services. In the next post I will show how to retrieve these URLs from ACS and a practical example using WPF.
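    As a rough, hypothetical illustration of the "host a browser control and register a callback" idea (this is not the WPF example the author refers to – the class and member names below are made up), WPF's WebBrowser routes window.external calls to the object assigned to ObjectForScripting:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows;
        using System.Windows.Controls;

        // COM-visible bridge that receives the window.external.Notify('...') call from the ACS page.
        [ComVisible(true)]
        public class AcsNotifyBridge
        {
            public event Action<string> TokenReceived;

            // The method name must be "Notify" – that is what the ACS script block invokes.
            public void Notify(string tokenResponse)
            {
                // tokenResponse is the JSON string shown above; parse it with your JSON serializer of choice.
                var handler = TokenReceived;
                if (handler != null) handler(tokenResponse);
            }
        }

        public class SignInWindow : Window
        {
            private readonly WebBrowser _browser = new WebBrowser();

            public SignInWindow(Uri signInUrl)
            {
                var bridge = new AcsNotifyBridge();
                bridge.TokenReceived += json => MessageBox.Show("Token response: " + json);

                _browser.ObjectForScripting = bridge;  // becomes window.external in the hosted page
                Content = _browser;
                _browser.Navigate(signInUrl);          // the javascriptnotify URL from above
            }
        }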

    Read the article

  • How can I restore VM on a new Hyper-V server?

    - by jaloplo
    Hi all, I was working happily with my VM on my local Hyper-V server. But after installing some updates on the host, the system only showed the famous blue screen. I couldn't start the host, so I reinstalled it and configured it as a new Hyper-V server. My VM was on another disk precisely to survive this kind of failure, but I don't know how to add it as a new VM on the new server. In addition, this VM has various snapshots, so how can I add this VM to my new Hyper-V server? UPDATE: I can't do Export/Import because my server crashed before I could do it.

    Read the article

  • How stable are Single Page Applications (SPA) built with Microsoft .NET for enterprise applications? [on hold]

    - by Husrat Mehmood
    Imagine a situation where you are loading data into your application via a REST API and building a responsive (Ajax) application for an enterprise. What potential problems might I run into with a single-page application (SPA) built as a Microsoft ASP.NET web application using the MVC template? Are there advantages to designing a multi-page application with ASP.NET MVC 5 instead? Remember that I am using the SPA for an enterprise application where there are role-based views for the users.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Dev'Camps: Microsoft dissects Windows Azure on June 20 in a free session on its Cloud platform for developers

    Dev'Camps: Microsoft dissects Windows Azure on June 20 in a free session on its Cloud platform for developers. Microsoft is organizing a series of events across France around its current platforms and technologies. These Dev'Camps are a new, 100% development-focused event format, run live with Microsoft experts, to help you with your application projects. One of the most anticipated events of the series is the Azure Camp, which will take place on Wednesday, June 20 at Microsoft's headquarters in Issy-Les-Moulineaux. The Azure Camps will let attendees quickly deepen their knowledge of Microsoft's new flagship tool ...

    Read the article

  • Access Control Service: Programmatically Accessing Identity Provider Information and Redirect URLs

    - by Your DisplayName here!
    In my last post I showed you that different redirect URLs trigger different response behaviors in ACS. Where did I actually get these URLs from? The answer is simple – I asked ACS ;) ACS publishes a JSON-encoded feed that contains information about all registered identity providers: their display names, logos and URLs. With that information you can easily write a discovery client which, at its very heart, does this:

        public void GetAsync(string protocol)
        {
            var url = string.Format(
                "https://{0}.{1}/v2/metadata/IdentityProviders.js?protocol={2}&realm={3}&version=1.0",
                AcsNamespace,
                "accesscontrol.windows.net",
                protocol,
                Realm);

            _client.DownloadStringAsync(new Uri(url));
        }

    The protocol can be one of these two values: wsfederation or javascriptnotify. Based on that value, the returned JSON will contain the URLs for either the redirect or the notify method. With the help of some JSON serializer you can then turn that information into CLR objects and display them in some sort of selection dialog. The next post will have a demo and source code.
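    As a rough sketch of the "turn that information into CLR objects" step, the feed can be deserialized with DataContractJsonSerializer. The property names below (Name, LoginUrl, ImageUrl) are assumptions about the feed's shape, not taken from the post – adjust them to the actual JSON:

        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        // Assumed shape of one entry in IdentityProviders.js.
        [DataContract]
        public class IdentityProviderInfo
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public string LoginUrl { get; set; }
            [DataMember] public string ImageUrl { get; set; }
        }

        public static class IdentityProviderFeed
        {
            // Turns the downloaded JSON string into CLR objects for a selection dialog.
            public static List<IdentityProviderInfo> Parse(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(List<IdentityProviderInfo>));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (List<IdentityProviderInfo>)serializer.ReadObject(stream);
                }
            }
        }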

    Read the article

  • Is there any way to optimize my search blob program?

    - by Vicky
    I wrote this code to search blob items (text files) based on their content. For example, if I search for "Good", the names of the files that contain the word "Good" or "good" should appear in the search results. My code works, but I want to optimize it.

        class BlobSearch
        {
            public static int num = 1;

            static void Main(string[] args)
            {
                string accountName = "accountName";
                string accessKey = "accesskey";
                string azureConString = "DefaultEndpointsProtocol=https;AccountName=" + accountName + ";AccountKey=" + accessKey;
                string blob = "MyBlobContainer";
                string searchText = string.Empty;

                Console.WriteLine("Type and enter to search : ");
                searchText = Console.ReadLine();

                CloudStorageAccount account = CloudStorageAccount.Parse(azureConString);
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer blobContainer = blobClient.GetContainerReference(blob);
                blobContainer.FetchAttributes();

                var blobItemList = blobContainer.ListBlobs();
                GetBlobList(searchText, blobContainer, blobItemList);
                Console.ReadLine();
            }

            private static async void GetBlobList(string searchText, CloudBlobContainer blobContainer, IEnumerable<IListBlobItem> blobItemList)
            {
                foreach (var item in blobItemList)
                {
                    string line = string.Empty;
                    CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference(item.Uri.ToString());
                    if (blockBlob.Name.Contains(".txt"))
                    {
                        await Search(searchText, blockBlob);
                    }
                }
            }

            private async static Task Search(string searchText, CloudBlockBlob blockBlob)
            {
                string text = await blockBlob.DownloadTextAsync();
                if (text.ToLower().IndexOf(searchText.ToLower()) != -1)
                {
                    Console.WriteLine("Result : " + num + " => " + blockBlob.Name.Substring(blockBlob.Name.LastIndexOf('/') + 1));
                    num++;
                }
            }
        }

    I think blobContainer.ListBlobs() is blocking, because the search does not proceed until all the blob items are loaded. Is there any way to optimize that call, or anything else in my code? Thanks

    Read the article

  • How to install Gitlab in a VM on a production server?

    - by Michaël Perrin
    I have a production server running Ubuntu 12.04 and I would like to install on it a VM with Gitlab (using Vagrant and VirtualBox). Let's say that the address to access Gitlab is gitlab.mydomain.com. The DNS zone has been configured to point to the IP address of the server. I want users to be able to access Gitlab (either for pushing to a repository or for using the web interface) from the outside. The VM has been configured with its own IP address. This means that when browsing http://gitlab.mydomain.com, for instance, the request has to be forwarded to the VM on the server, i.e. to the VM's IP address. What are the ways to configure this? Can Apache be used as a proxy? In that case, I guess it only works for HTTP requests, but not for pushing to a Git repository on the VM.

    Read the article

  • Microsoft releases version 1.4 of the Windows Azure SDK, fixing numerous bugs from the previous version

    Microsoft releases version 1.4 of the Windows Azure SDK, fixing numerous bugs from the previous version. The Windows Azure development team has just made the new version of its SDK available. Windows Azure SDK 1.4 can now be downloaded and brings a few updates. In particular, it fixes several problems that existed in the previous version (1.3), including: - a fix for the IIS failure when the web.config file is configured as read-only - a fix for errors in IIS packages that caused them to double in size when packaged - a fix for the recycling of an entire IIS web role when the diagnostic sto...

    Read the article

  • New Windows Azure: persistent Linux virtual machines, IaaS, and even more open-source technologies supported

    New Windows Azure: persistent Linux virtual machines, IaaS, and even more open-source technologies supported. Windows Azure, Microsoft's Cloud platform for developers, continues to gain momentum. Since yesterday, several new services have been officially announced. Among them, one of the most anticipated (and the one that fueled the most rumors) is the arrival of persistent virtual machines capable of running Linux distributions (Ubuntu, OpenSuse, CentOS, SUSE Linux Enterprise Server). Azure now combines infrastructure and platform services for "greater flexibility in the way you ...

    Read the article

  • Microsoft adds a new Build service to the hosted version of Team Foundation Server, allowing code to be compiled on Windows Azure

    Microsoft adds a new Build service to the hosted version of Team Foundation Server, allowing source code to be compiled on Windows Azure. Microsoft has updated the hosted version of Team Foundation Server on Windows Azure. Team Foundation Server is a collaborative work solution covering source control, builds, work item tracking, planning and performance analysis. A year ago, Microsoft ported the solution to its Windows Azure Cloud hosting platform. The company has just announced that a new Build service has been added to Team Foundation. This service reduces the...

    Read the article

  • How can I move an existing VM's files to a new directory in the same datastore?

    - by blade
    Hi, I have some VMs deployed on ESX. In vSphere 4, I want to move these VMs into another directory in the datastore. The VM directories are currently under the datastore root, but I want them in root/MyNewFolder. I tried this by turning off a VM, copying the VM's files (VMDK etc.) into the directory I want, deleting the hard drive from the VM's settings, adding a new hard drive and then selecting the new path to the VMDK. When I press OK on the settings dialog box, having made this modification, I get the following error: not found. What I am trying to do also does not seem to be possible when making a new VM – I can only create VMs under root.

    Read the article

  • Azure VS Tools and SDK - systray already running…

    - by Shawn Cicoria
    If you get a "Systray already running…" message when you start the Compute Emulator from within Visual Studio, one fix is to check which image name the process is loaded under. For some reason, on 2 of my machines the image was loading with the 8.3 short-name format. This caused the logic in the VS tools to not find the process. So, to fix it, I just did a little copy/rename magic:

        C:\Program Files\Windows Azure SDK\v1.3\bin>copy csmonitor.exe csmonitor-a.exe
                1 file(s) copied.
        C:\Program Files\Windows Azure SDK\v1.3\bin>del csmonitor.exe
        C:\Program Files\Windows Azure SDK\v1.3\bin>copy csmonitor-a.exe csmonitor.exe
                1 file(s) copied.

    If you bring up Task Manager and see something like CSMON~1.EXE in the Image Name column, you probably have this issue.

    Read the article

  • The new Windows Azure has been available since this morning; the 90-day trial version is still valid

    New Windows Azure: persistent Linux virtual machines, IaaS, and even more open-source technologies supported. Edit of 08/06/12: the new Windows Azure has been available since this morning (0:15 a.m. Paris time). Windows Azure, Microsoft's Cloud platform for developers, continues to gain momentum. Since yesterday, several new services have been officially announced. Among them, one of the most anticipated (and the one that fueled the most rumors) is the arrival of persistent virtual machines capable of running Linux distributions (Ubuntu, OpenSuse, CentOS, SUSE Linux Enterprise Server). Az...

    Read the article

  • Running Hadoop example in pseudo-distributed mode on a VM

    - by manas
    I have set up Hadoop on an OpenSuse 11.2 VM using VirtualBox. I have made the prerequisite configs. I ran this example in standalone mode successfully, but in pseudo-distributed mode I get the following error:

        $ ./bin/hadoop fs -put conf input
        10/04/13 15:56:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:25 INFO hdfs.DFSClient: Abandoning block blk_-8490915989783733314_1003
        10/04/13 15:56:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:31 INFO hdfs.DFSClient: Abandoning block blk_-1740343312313498323_1003
        10/04/13 15:56:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:37 INFO hdfs.DFSClient: Abandoning block blk_-3566235190507929459_1003
        10/04/13 15:56:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
        10/04/13 15:56:43 INFO hdfs.DFSClient: Abandoning block blk_-1746222418910980888_1003
        10/04/13 15:56:49 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
        10/04/13 15:56:49 WARN hdfs.DFSClient: Error Recovery for block blk_-1746222418910980888_1003 bad datanode[0] nodes == null
        10/04/13 15:56:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...
        put: Protocol not available
        10/04/13 15:56:49 ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
        java.net.SocketException: Protocol not available
            at sun.nio.ch.Net.getIntOption0(Native Method)
            at sun.nio.ch.Net.getIntOption(Net.java:178)
            at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
            at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
            at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
            at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
            at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
            at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
            at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

    Any leads will be highly appreciated.

    Read the article

  • Would there be a market for this idea (cross platform VM for iPhone OS)

    - by Tzury Bar Yochay
    For a long time I have wondered whether the following idea is worth a nickel or just a waste of time and energy. I am willing to start a project which will provide a kind of VM for all iPxxx apps – so an app developed once for iPxxx can run on a MacBook, iMac, Linux, Android and Windows (desktop and mobile). You get the idea, right? I want to do to the current iPhone SDK the same thing Mono did to Microsoft .NET, and perhaps provide a more complete implementation. I tend to believe that if overnight all apps on the App Store became available on the Android Market as well, that would be a mini revolution. Think about running iPad apps on every tablet that comes to market in the future. Wouldn't it be fantastic for all the developers, who from now on could write once and sell everywhere? The main question I ask myself repeatedly is: "Is this legal?" – I mean, say I have done this, would Apple's lawyers start sending me all kinds of nasty emails? I would like to hear your opinion about this idea, as well as whether some of you are willing and able to join forces and start this open source project.

    Read the article

  • Bing search API and Azure

    - by Gapton
    I am trying to perform a search on the Microsoft Bing search engine programmatically. Here is my understanding:
    1. There was a Bing Search API 2.0, which will be replaced soon (1st Aug 2012).
    2. The new API is known as Windows Azure Marketplace. You use different URLs for the two.
    3. In the old API (Bing Search API 2.0), you specify a key (Application ID) in the URL, and that key is used to authenticate the request. As long as you have the key as a parameter in the URL, you can obtain the results.
    4. In the new API (Windows Azure Marketplace), you do NOT include the key (Account Key) in the URL. Instead, you put in a query URL and the server then asks for your credentials. When using a browser, there will be a pop-up asking for an account name and password. The instruction was to leave the account name blank and insert your key in the password field.
    Okay, I have done all that and I can see a JSON-formatted result of my search in my browser. How do I do this programmatically in PHP? I tried searching for documentation and sample code in the Microsoft MSDN library, but either I was searching in the wrong place or the resources there are extremely limited. Would anyone be able to tell me how to do the "enter the key in the password field in the pop-up" part in PHP, please? Thanks a lot in advance.
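    For what it's worth, the browser prompt described above is plain HTTP Basic authentication with an empty user name and the Account Key as the password, so the same request can be reproduced by setting an Authorization header yourself (from PHP this would typically be done with curl). A hedged sketch of that idea follows, shown in C# for illustration; the dataset URL and key are placeholder assumptions, not official sample code:

        using System;
        using System.Net;
        using System.Text;

        class BingMarketplaceSearch
        {
            static void Main()
            {
                const string accountKey = "YOUR-ACCOUNT-KEY";  // hypothetical placeholder
                const string url =
                    "https://api.datamarket.azure.com/Bing/Search/Web?Query=%27good%27&$format=json"; // assumed endpoint

                using (var client = new WebClient())
                {
                    // Blank user name + Account Key as password, exactly like the browser pop-up.
                    string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + accountKey));
                    client.Headers[HttpRequestHeader.Authorization] = "Basic " + credentials;

                    string json = client.DownloadString(url);
                    Console.WriteLine(json);
                }
            }
        }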

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking? Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are always and have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies, and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory. Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give BACPAC the bum's rush. Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.

    Read the article

  • Install remote desktop session host remotely

    - by Jorge
    I've removed the Remote Desktop Session Host role from one of my Azure VMs, so I can't RDP into it. I have tried to recreate the endpoints and even the VM, with no luck. I also can't access it remotely with Server Manager; I get this error: "The client cannot connect to the destination specified in the request. Verify that the service on the destination is running." I cannot connect remotely to the registry either: "Make sure this computer is on the network, has remote administration enabled, and that both computers are running the remote registry service." PsPing tells me "The remote computer refuses the network connection." Which workaround can I follow to solve this?

    Read the article
