Search Results

Search found 2630 results on 106 pages for 'sergey p aka azure'.


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
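
    The approach described above can be sketched in a few lines of C#. This is a rough illustration rather than the author's actual test harness: it assumes the SDK 1.x Microsoft.WindowsAzure.StorageClient library (PutBlock/PutBlockList) and a Parallel.For loop, and the helper name UploadFileInBlocks is made up for the example.

        // Sketch only: not the linked test harness. Assumes the SDK 1.x StorageClient library.
        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockUploader
        {
            public static void UploadFileInBlocks(CloudBlobContainer container, string path, int blockSize)
            {
                CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));
                byte[] data = File.ReadAllBytes(path); // fine for a test harness; stream for production use
                int blockCount = (int)Math.Ceiling((double)data.Length / blockSize);
                string[] blockIds = new string[blockCount];

                // Upload the blocks in parallel, sending an MD5 per block so the service
                // can verify that the bits that arrive match what was sent.
                Parallel.For(0, blockCount, i =>
                {
                    int offset = i * blockSize;
                    int length = Math.Min(blockSize, data.Length - offset);
                    byte[] block = new byte[length];
                    Buffer.BlockCopy(data, offset, block, 0, length);

                    string blockId = Convert.ToBase64String(BitConverter.GetBytes(i)); // equal-length ids
                    string md5;
                    using (MD5 hasher = MD5.Create())
                        md5 = Convert.ToBase64String(hasher.ComputeHash(block));

                    using (var ms = new MemoryStream(block))
                        blob.PutBlock(blockId, ms, md5);

                    blockIds[i] = blockId;
                });

                // Commit the block list to assemble the blob from the uploaded blocks.
                blob.PutBlockList(blockIds);
            }
        }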

    Read the article

  • SQL Azure Data Sync

    - by kaleidoscope
    The Microsoft Sync Framework Power Pack for SQL Azure contains a series of components that improve the experience of synchronizing with SQL Azure. This includes runtime components that optimize performance and simplify the process of synchronizing with the cloud. SQL Azure Data Sync allows developers and DBAs to:
    · Link existing on-premises data stores to SQL Azure.
    · Create new applications in Windows Azure without abandoning existing on-premises applications.
    · Extend on-premises data to remote offices, retail stores and mobile workers via the cloud.
    · Take Windows Azure and SQL Azure based web applications offline to provide an “Outlook-like” cached-mode experience.
    The Microsoft Sync Framework Power Pack for SQL Azure is comprised of the following:
    · SqlAzureSyncProvider
    · SQL Azure Offline Visual Studio Plug-In
    · SQL Azure Data Sync Tool for SQL Server
    · New SQL Azure Events Automated Provisioning
    Geeta

    Read the article

  • Azure Florida Association: Modern Architecture for Elastic Azure Applications

    - by Herve Roggero
    Join us on November 28th at 7PM, US Eastern Time, to hear Zachary Gramana talk about elastic scale on Windows Azure. REGISTER HERE: https://www3.gotomeeting.com/register/657038102 Description: Building horizontally scalable applications can be challenging. If you face the need to rapidly scale or adjust to high load variations, then you are left with little choice. Azure provides a fantastic platform for building elastic applications. Combined with recent advances in browser capabilities, some older architectural patterns have become relevant again. We will dust off one of them, the client-server architecture, and show how we can use its modern incarnation to bypass a class of problems normally encountered with distributed ASP.NET applications.

    Read the article

  • Finalists for the Microsoft Accelerator for Windows Azure

    - by ScottGu
    Today, I am pleased to announce the ten finalists for the Microsoft Accelerator for Windows Azure powered by TechStars. These startups are about to launch into a three-month program where they will develop new products and businesses using Windows Azure. The response to the program has been fantastic - we received nearly 600 applications from entrepreneurs in 69 countries around the world, spanning a host of industries including retail, travel, entertainment, banking, real estate and more.  There were so many innovative ideas and amazing teams that it really made the selection process hard.  We finally landed on 10 finalists, based on their experience, qualifications, and innovative business ideas built on the cloud. This fall’s Windows Azure class includes: Advertory – Berlin, Germany. Advertory helps local businesses increase revenue and build customer loyalty. Appetas – Seattle, WA. Appetas' mission is to make restaurants look as beautiful online as they do on the plate! BagsUp – Sydney, Australia. Find great places from people you trust. Embarke – San Diego, CA. Embarke allows developers and companies the ability to integrate with any human communication channel (Facebook, Email, Text Message, Twitter) without having to learn the specifics, write code, or spend time on any of them. Fanzo – Seattle, WA. Fanzo puts sports fans in the spotlight. Find other fans, show off your fanswagger and get rewarded for your passion. MetricsHub – Bellevue, WA. A service providing cloud monitoring with incident detection and prebuilt workflows for remedying common problems. Mobilligy – Bellevue, WA. Mobilligy revolutionizes how people pay their bills by bringing convenient, secure, and instant bill payment support to mobile devices. Realty Mogul – Los Angeles, CA. Realty Mogul is a crowdfunding platform for real estate where accredited investors pool capital and invest in properties that are acquired, managed and eventually resold by professional private real estate companies and their management teams. Staq – San Francisco, CA. Back-end as a service for APIs. Socedo – Bellevue, WA. A simple and effective web application for lead generation and relationship management on Twitter. Each startup will be hosted in Seattle and mentored by entrepreneurs and venture capitalists as well as leaders from Windows Azure and other Microsoft organizations. The teams will spend the first month ideating and refining their business concepts with input and advice from their mentors as well as Microsoft customers, followed by two months of design and development. They will present their results to investors and Microsoft partners at an event in mid-January. We are really looking forward to seeing how their businesses evolve.  These teams have demonstrated incredible energy, passion, and innovative capabilities – and they are ready to show the world what’s possible with Windows Azure. Thanks, Scott P.S. And if you are new to Twitter you can also optionally follow me: @scottgu

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace source and route it to a configurable listener. This gives you a lot of flexibility – you can create a single trace file – or multiple ones. There is even nice tooling around that. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files… blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that the way this works, all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear in your message text at the end. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage Table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and use the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism does not work anymore in 1.3 with Full IIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues... The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now have fun with your multi-instance deployments… </rant>
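
    For context, the basic wiring the post is referring to looked roughly like this in the SDK 1.2/1.3 era. This is a minimal sketch, assuming the Microsoft.WindowsAzure.Diagnostics API of that time and a trace source name ("MyApp") made up for the example; it illustrates the mechanism, not a fix for the lost-source-name problem described above.

        // Minimal sketch of SDK 1.2/1.3-era diagnostics wiring; the trace source name is illustrative.
        using System;
        using System.Diagnostics;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Route a named TraceSource to the Azure listener (normally configured in web.config).
                var source = new TraceSource("MyApp", SourceLevels.All);
                source.Listeners.Add(new DiagnosticMonitorTraceListener());

                // Ship the ETW-captured traces to the storage account once a minute.
                DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
                config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
                config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
                DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

                source.TraceEvent(TraceEventType.Information, 0, "Role starting");
                return base.OnStart();
            }
        }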

    Read the article

  • SharePoint Apps and Windows Azure

    - by ScottGu
    Last Monday I had an opportunity to present as part of the keynote of this year’s SharePoint Conference. My segment of the keynote covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases. This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three core tenets:
    - Simplifying the development model and making it consistent between the on-premises version of SharePoint and SharePoint Online provided with Office 365.
    - Making the execution model loosely coupled – and enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easy to deploy SharePoint apps using Windows Azure, and avoid having to worry about breaking SharePoint and the apps within it when something is upgraded. This new loosely coupled model also enables developers to write SharePoint applications that can leverage the full capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, EF 5, Async, and more.
    - Implementing this loosely coupled model using standard web protocols – like OAuth, JSON, and REST APIs – that enable developers to re-use skills and tools, and easily integrate SharePoint with Web and Mobile application architectures.
    A video of my talk + demos is now available to watch online. In the talk I walked through building an app from scratch – it showed off how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (all built using Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API). The new SharePoint Cloud App Model is something that I think is pretty exciting, and it is going to make it a lot easier to build SharePoint apps using the full power of both Windows Azure and the .NET Framework. Using Windows Azure to easily extend SaaS based solutions like Office 365 is also a really natural fit and one that is going to offer a bunch of great developer opportunities. Hope this helps, Scott
    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
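
    As a rough illustration of the “standard web protocols” point above (not code from the talk): from an app hosted outside SharePoint, calling back into a site boils down to an authenticated REST request. The site URL is made up, and getAccessTokenAsync stands in for whatever OAuth token acquisition the app actually performs.

        // Rough illustration of calling SharePoint's REST API with an OAuth bearer token.
        // The site URL is hypothetical and token acquisition is abstracted away.
        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        static class SharePointRestClient
        {
            public static async Task<string> GetListsJsonAsync(Func<Task<string>> getAccessTokenAsync)
            {
                string accessToken = await getAccessTokenAsync();

                using (var client = new HttpClient())
                {
                    client.BaseAddress = new Uri("https://contoso.sharepoint.com/sites/dev/");
                    client.DefaultRequestHeaders.Authorization =
                        new AuthenticationHeaderValue("Bearer", accessToken);
                    // SharePoint 2013 typically expects the verbose OData JSON format.
                    client.DefaultRequestHeaders.Accept.Add(
                        MediaTypeWithQualityHeaderValue.Parse("application/json;odata=verbose"));

                    // _api/web/lists is the REST endpoint for the site's lists.
                    HttpResponseMessage response = await client.GetAsync("_api/web/lists");
                    response.EnsureSuccessStatusCode();
                    return await response.Content.ReadAsStringAsync();
                }
            }
        }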

    Read the article

  • SharePoint Apps and Windows Azure

    - by Leniel Macaferi
    Last Monday I had the opportunity to give a talk at the SharePoint Conference (in English). My segment of the talk covered the new SharePoint Cloud App Model we are introducing as part of the upcoming SharePoint 2013 and Office 365 releases. This new app model for SharePoint is additive to the full trust solutions developers write today, and is built around three main pillars:
    - Simplifying the development model and making it consistent between the on-premises version of SharePoint and the online version of SharePoint provided with Office 365.
    - Making the execution model loosely coupled – enabling developers to build apps and write code that can run outside of the core SharePoint service. This makes it easier to deploy SharePoint apps using Windows Azure, avoiding the worry of breaking SharePoint and the apps running inside it when something is upgraded. This new loosely coupled model also allows developers to write SharePoint apps that can leverage the capabilities of the .NET Framework – including ASP.NET Web Forms 4.5, ASP.NET MVC 4, ASP.NET Web API, Entity Framework 5, Async, and more.
    - Implementing this loosely coupled model using standard web protocols – such as OAuth, JSON and REST APIs – which allow developers to reuse skills and tools, easily integrating SharePoint with web and mobile application architectures.
    A video of my talk + demos is available to watch online (in English). In the talk I showed how to build an app from scratch – it showed how easy it is to build solutions using the new SharePoint app model, and highlighted a web + workflow + mobile scenario that integrates SharePoint with code hosted on Windows Azure (built entirely with Visual Studio 2012 and ASP.NET 4.5 – including MVC and Web API). The new SharePoint Cloud App Model is something I find extremely exciting, and it will make it much easier to build SharePoint apps using the full power of Windows Azure and the .NET Framework. Using Windows Azure to easily extend SaaS-based solutions like Office 365 is also very natural and will offer a lot of great opportunities for developers. Hope this helps, - Scott
    P.S. Besides the blog, I am also using Twitter for quick updates and to share links. Follow me at: twitter.com/ScottGu
    Text translated from the original post by Leniel Macaferi.

    Read the article

  • Is SQL Azure a newbies springboard?

    - by jamiet
    Earlier today I was considering the various SQL Server platforms that are available today and I wondered aloud, “wonder how long until the majority of #sqlserver newcomers use @sqlazure instead of installing locally”. Let me explain. My first experience of development was way back in the early 90s when I would crank open VBA in Access or Excel and start hammering out some code, usually by recording macros and looking at the code that they produced (sound familiar?). The reason was simple: Office was becoming ubiquitous, so the barrier to entry was incredibly low and, save for a short hiatus at university, I’ve been developing on the Microsoft platform ever since. These days I spend most of my time using SQL Server. When I take a look at SQL Azure today I see a lot of similarities with those early experiences: the barrier to entry is low and getting lower. I don’t have to download some software or actually install anything other than a web browser in order to get myself a fully functioning SQL Server database against which I can ostensibly start hammering out some code, and I believe that to be incredibly empowering. Having said that, there are still a few pretty high barriers, namely: (1) I need to get out my credit card, and (2) it’s pretty useless without some development tools such as SQL Server Management Studio, which I do have to install. The second of those barriers will disappear pretty soon when Project Houston delivers a web-based admin and presentation tool for SQL Azure, so that just leaves the matter of my having to use a credit card. If Microsoft have any sense at all then they will realise the huge potential of opening up a free, throttled version of SQL Azure for newbies to party on; they get to developers early (just like they did with me all those years ago) and it gives potential customers an opportunity to try-before-they-buy. Perhaps in 20 years’ time people will be talking about SQL Azure as being their first foray into the world of coding! @Jamiet

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery”, which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that they are stored in a standard Hyper-V format – with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this then leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system, and when you restart that system it asks for a few details, such as what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, whether for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx
    But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspx
    I’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running. “Surely, this can’t be,” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.

    Read the article

  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL I get errors. One of them is “‘Filegroup reference and partitioning scheme’ is not supported in this version of SQL Server.” It’s partly due to my poor memory (since I’ve succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v 11.0.2100.60). I go to Tasks > Generate Scripts, which brings up the nice Generate and Publish Scripts wizard. When I go into the Advanced button, under Script for Server Version, why don’t I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven’t got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak it for Azure. For example, I take out all the “[dbo].” stuff. Why is it put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there’s information on what a table needs to look like in SQL Azure, but the tools should know this so I don’t have to mess with it.

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure team has just published their new development portal this week, along with SDK 1.3. Within this new release there are a lot of cool features available. The one I’m looking forward to is Remote Desktop Access to your running Windows Azure Virtual Machine.

    Configuring Remote Desktop Access

    It is very simple to enable remote desktop access for an Azure service. First of all, let’s create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window and select “Publish”. Then we select the “Deploy your Windows Azure project to Windows Azure” radio button at the top, and then select the credential, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice that there’s a link named “Configure Remote Desktop connections”. This is where you enable the remote desktop feature for this service. After clicking this link we set the remote desktop access authorization information. There are 4 settings we need to configure for our access:
    - Certificates: We need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API.
    - Username: The remote desktop user name to access the virtual machine.
    - Password: The password for access.
    - Expiration: The access credentials expire after 1 month by default, but we can amend that here.
    After that, we click the OK button to go back to the publish dialog.

    The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to this service. The user name and password used to access the Azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side using the same certificate. This is why we need to upload the certificate file to Azure. We navigate to “Hosted Services, Storage Accounts & CDN” in the left panel, create a new hosted service named “SDK13” and select the “Certificates” node. Then we click the “Add Certificates” button, select the local certificate file and enter the password to install it into this Azure service.

    The final step is back in Visual Studio: in the publish dialog, just click the OK button. Visual Studio will upload our package and configuration to our service with the remote desktop settings.

    Remote Desktop Access to the Azure Virtual Machine

    With all of that done, let’s have a look back at the Windows Azure development portal. If we select the web role that was just published, we can see a section named “Remote Access” on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificates and the expiration date.

    Let’s select the instance node under the web role. In this case I just created one instance for the demo. We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button, an RDP file is downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let’s download it to our local machine and run it. We enter the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog might appear; this is because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe. Finally, the Windows Azure virtual machine appears.

    A Quick Look into the Azure Virtual Machine

    Let’s have a very quick look into our virtual machine. There are 3 disks available to us: C, D and E. Disk C stores the local resources, diagnostics information, etc. Disk D is the system disk, which contains the OS, IIS, the .NET Framework, etc. Disk E stores our application code. We can also see the IIS instance hosting our website on Azure, and the IP configuration of the Azure virtual machine.

    Summary

    In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We set up the access per service, and all of the instances of that service can be accessed through the remote desktop tool. With this feature we can dig into the virtual machines behind our instances to see information such as the system event log, IIS logs, system information, etc. But we should be careful about modifying system settings, for two reasons that I know of for now: 1. If we have more than one instance in our service, we should ensure that any system settings we modify are applied to all instances/virtual machines; otherwise, as the machines sit behind the Azure load balancer, our application may not work correctly due to the different settings between the instances. 2. When the virtual machine encounters a problem and needs to be moved to another physical machine, all the settings we made will disappear.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Azure Platform Training Kit - June Update

    - by guybarrette
    Microsoft released an update to its Azure training kit. Here is what is new in the kit:
    - Introduction to Windows Azure - VS2010 version
    - Introduction to SQL Azure - VS2010 version
    - Introduction to the Windows Azure Platform AppFabric Service Bus - VS2010 version
    - Introduction to Dallas - VS2010 version
    - Introduction to the Windows Azure Platform AppFabric Access Control Service - VS2010 version
    - Web Services and Identity in the Cloud
    - Exploring Windows Azure Storage - VS2010 version + new exercise: “Working with Drives”
    - Windows Azure Deployment - VS2010 version + new exercise: “Securing Windows Azure with SSL”
    - Minor fixes to presentations – mainly timelines, pricing, new features, etc.
    Download it here

    Read the article

  • Windows Azure SDK - ASPProviders example

    - by David
    Hi, since the new SDK 1.1 is missing the tutorial for "ASPProviders", I am currently asking myself how I would implement an "Azure session state provider" (this is the path in the "old" SDK: C:\Program Files\Windows Azure SDK\v1.0\Samples\AspProviders). Related threads: http://stackoverflow.com/questions/1023108/how-does-microsoft-azure-handle-session-state http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/2d1340ed-0ad0-456a-b069-aa6b85672102/ Does anyone have an idea, or even the old example project, and could you post some snippets of the config here?

    Read the article

  • Choosing between .NET Service Bus Queues vs Azure Queue Service

    - by ChrisV
    Just a quick question regarding an Azure application. If I have a number of Web and Worker roles that need to communicate, the documentation says to use the Azure Queue Service. However, I've just read that the new .NET Service Bus now also offers queues. These look to be more powerful, as they appear to offer a much more detailed API. Whilst the .NET Service Bus (.NSB) looks more interesting, it has a couple of issues that make me wary of using it in a distributed application (for example, queue expiration... if I cannot guarantee that a queue will be renewed on time I may lose it all!). Has anyone had any experience using either of these two technologies and could give any advice on when to choose one over the other? I suspect that whilst the Service Bus looks more powerful, as my use case is really just enabling Web/Worker roles to communicate with each other, the Azure Queue Service is what I'm after. But I'm just really looking for confirmation of that before programming myself into a corner :-) Thanks in advance.
    UPDATE: Have read up about the two systems over the break. It definitely looks like the .NET Service Bus is more specifically designed for integrating systems, rather than providing a general-purpose reliable messaging system. Azure Queues are distributed, and therefore reliable and scalable in a way that .NSB queues are not, and so more suitable for code hosted within Azure itself. Thanks for the responses.
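
    For the web/worker communication case described in the question, the Azure Queue Service side looks roughly like the sketch below, using the SDK 1.x Microsoft.WindowsAzure.StorageClient library; the queue name and connection string are made up for illustration.

        // Minimal sketch of web/worker role communication over the Azure Queue Service
        // (SDK 1.x StorageClient). Queue name and connection string are illustrative.
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        static class WorkQueue
        {
            static CloudQueue GetQueue()
            {
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
                CloudQueueClient client = account.CreateCloudQueueClient();
                CloudQueue queue = client.GetQueueReference("workitems");
                queue.CreateIfNotExist(); // named CreateIfNotExists in later SDKs
                return queue;
            }

            // Web role: enqueue a work item.
            public static void Enqueue(string payload)
            {
                GetQueue().AddMessage(new CloudQueueMessage(payload));
            }

            // Worker role: poll, process, then delete so the message is not redelivered
            // when its visibility timeout expires.
            public static void ProcessNext()
            {
                CloudQueue queue = GetQueue();
                CloudQueueMessage msg = queue.GetMessage();
                if (msg != null)
                {
                    // ... handle msg.AsString ...
                    queue.DeleteMessage(msg);
                }
            }
        }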

    Read the article

  • Where to store things like user pictures using Azure? Blob Storage?

    - by n26
    I have just migrated a project of mine to Microsoft's Azure for testing. But for functionality like avatar uploads I need write access to files on the hard drive. This is a cloud, though, so that is not possible. How can I build such functionality instead? Should I use Blob Storage, or is there a better solution? Does it also make sense to store all website images (e.g. layout images) in Blob Storage, so that I would have a cookie-free domain for my static content?
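
    Blob storage is the usual place for this kind of content. As a rough sketch (not a definitive answer), using the SDK 1.x Microsoft.WindowsAzure.StorageClient library and with the container name and public-read setting as assumptions, an avatar upload might look like:

        // Rough sketch: saving an uploaded avatar to blob storage instead of the local disk.
        // Container name and public-read access are assumptions for illustration.
        using System.IO;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        static class AvatarStore
        {
            public static string SaveAvatar(CloudStorageAccount account, string userName, Stream imageStream)
            {
                CloudBlobClient client = account.CreateCloudBlobClient();
                CloudBlobContainer container = client.GetContainerReference("avatars");
                container.CreateIfNotExist();

                // Make blobs publicly readable so <img> tags can reference them directly.
                container.SetPermissions(new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                });

                CloudBlockBlob blob = container.GetBlockBlobReference(userName + ".png");
                blob.Properties.ContentType = "image/png";
                blob.UploadFromStream(imageStream);

                return blob.Uri.AbsoluteUri; // store/serve this URL instead of a local file path
            }
        }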

    Read the article

  • knife azure image list doesn't return User image

    - by TooLSHeD
    I'm trying to create and bootstrap a Windows VM in Azure using knife-azure. I initially tried using a Public Win 2008 r2 image, but quickly found out that winrm needs to be configured before this can work. So, I created a VM from that image, configured winrm as per these instructions and captured the VM. The problem is that the image does not show up when executing knife azure image list. When I try creating the server with the image name from the Azure portal, it complains that it does not exist. I'm running Ubuntu, so I tried the Azure cli tools and it doesn't show there either. I installed Azure PS in a Win 8 VM and then it shows up. Feeling encouraged, I installed Chef and knife-azure in the Win 8 VM, but it doesn't show up there either. How do I get my User image to show in knife azure?

    Read the article

  • Determine Last Modification Datetime for an Azure Table

    - by embeddedprogrammer
    I am developing an application which may be hosted on a Microsoft SQL Server or on SQL Azure, depending upon the end user's wishes. My whole system works fine with the exception of some WCF functions which determine the last modification time of tables using the following technique:
    SELECT OBJECT_NAME(OBJECT_ID) as tableName, last_user_update as lastUpdate
    FROM mydb.sys.dm_db_index_usage_stats
    This query fails in Azure. Is there any analogous way to get table last-modification dates from SQL Azure?
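
    One commonly suggested workaround — an assumption on my part, not something from the question — is to track last-modified times yourself in a small audit table maintained by triggers on each table of interest (DML triggers work on both platforms), and query that instead of the DMV. A minimal sketch of the read side:

        // One possible workaround, not from the question: query a trigger-maintained
        // audit table instead of sys.dm_db_index_usage_stats, which is unavailable in
        // SQL Azure. Table and column names here are made up for illustration.
        using System;
        using System.Data.SqlClient;

        static class TableAudit
        {
            // Assumed schema, created once per database:
            //   CREATE TABLE dbo.TableLastModified (TableName sysname PRIMARY KEY,
            //                                       LastModified datetime2 NOT NULL);
            // plus one trigger per tracked table that updates its row on INSERT/UPDATE/DELETE.
            public static DateTime? GetLastModified(string connectionString, string tableName)
            {
                const string sql =
                    "SELECT LastModified FROM dbo.TableLastModified WHERE TableName = @name";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@name", tableName);
                    conn.Open();
                    object result = cmd.ExecuteScalar();
                    return result == null || result == DBNull.Value
                        ? (DateTime?)null
                        : (DateTime)result;
                }
            }
        }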

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back-end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup. Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported. I wanted to figure out another mechanism to support a "pseudo listener" of some sort.

    First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough. (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)

    To accomplish this, I created identical SQL Agent jobs on Server1 and Server2, with two steps.

    Job Step 1: Determine if this server is hosting the primary replica. This is a T-SQL step using this script:

        declare @agName sysname = 'AGTest'
        set nocount on
        declare @primaryReplica sysname

        select @primaryReplica = agState.primary_replica
        from sys.dm_hadr_availability_group_states agState
            join sys.availability_groups ag on agState.group_id = ag.group_id
        where ag.name = @agName

        if not exists(
            select *
            from sys.dm_hadr_availability_group_states agState
                join sys.availability_groups ag on agState.group_id = ag.group_id
            where @@Servername = agState.primary_replica
                and ag.name = @agName)
        begin
            raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s', 17, 1, @agName, @@Servername, @primaryReplica)
        end

    This script determines if the primary replica of the AG is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @agName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica; otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

    Job Step 2: Update the CNAME entry in DNS with this server's name. I used a PowerShell script to accomplish this:

        $cname = "SPSQL.contoso.com"
        $query = "Select * from MicrosoftDNS_CNAMEType"
        $dns1 = "dc01.contoso.com"
        $dns2 = "dc02.contoso.com"

        if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns1
        }
        elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
        {
            $dnsServer = $dns2
        }
        else
        {
            $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
            Throw $msg
        }

        $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer | ? { $_.Ownername -match $cname }
        $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
        $currentServer = $record.RecordData

        if ($currentServer -eq $thisServer)
        {
            $cname + " CNAME is up to date: " + $currentServer
        }
        else
        {
            $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
            $record.RecordData = $thisServer
            $record.put()
        }

    This script does a few things:
    - finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    - makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    - gets the FQDN of this server (GetHostEntry)
    - checks if the CNAME record is correct and updates it if necessary
    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account. When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership). In SQL Server Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential... Then, under SQL Agent --> Proxies, right click on the PowerShell node and choose New Proxy... Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.

    I created this two-step job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute.

    When the server that is hosting the primary replica is running the job, the job history looks like this:
    The job history on the secondary server looks like this:

    When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten up the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).
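
    Since the whole scheme depends on clients connecting through the alias rather than a physical server name, the SharePoint-side connection strings simply reference the CNAME. A trivial sketch (database name and security settings are made up for illustration):

        // The pseudo-listener only works if clients connect via the CNAME alias, never a
        // physical server name. Database name and security settings are illustrative.
        using System.Data.SqlClient;

        static class ContentDb
        {
            // "SPSQL.contoso.com" is the CNAME maintained by the SQL Agent jobs above.
            const string ConnectionString =
                "Data Source=SPSQL.contoso.com;Initial Catalog=SP_Content;Integrated Security=SSPI;";

            public static SqlConnection Open()
            {
                var conn = new SqlConnection(ConnectionString);
                conn.Open(); // after a failover, resolves to the new primary once the DNS TTL expires
                return conn;
            }
        }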

    Read the article

  • Azure application working on emulator but not on azure cloud

    - by Hisham Riaz
    Firstly, I am developing my MVC3 application in Visual Web Developer 2010 Express, by migrating my MVC3 (cshtml) files onto MVC2. It works great on the local system using the emulator, but once I deploy the application to Azure it gives runtime errors. For example:
    The layout page "~/Views/Shared/test_page.cshtml" could not be found at the following path: "~/Views/Shared/test_page.cshtml".
    Source Error:
    Line 8:  //Layout = "~/Views/Shared/upload.cshtml";
    Line 9:  //Layout = "~/Views/Shared/_Layout2.cshtml";
    Line 10: Layout = "~/Views/Shared/test_page.cshtml";
    Line 11: }
    Line 12: else
    The code is as follows.
    _ViewStart.cshtml file:
    @{
        string AccId = Request.QueryString["AccId"].ToString();
        if (AccId == "0")
        {
            //Layout = "~/Views/Shared/upload.cshtml";
            //Layout = "~/Views/Shared/_Layout2.cshtml";
            Layout = "~/Views/Shared/test_page.cshtml";
        }
        else
        {
            string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath(AccId);
            Layout = LayOutPagePath;
        }
    }
    However, the page exists and works fine in the Azure emulator, but not in the Azure cloud.
    Code for test_page.cshtml:
    @{
        var result = "1234567890";
        var temp_xml = MVCTest.Models.ComponentClass.GetTemplateAndTheme("1"); // returning xml
        string LayOutPagePath = MVCTest.Models.ComponentClass.GetLayOutPagePath("1"); // returning string
    }
    @RenderBody()
    test_page
    @temp_xml
    @result
    @LayOutPagePath

    Read the article

  • Need clarification concerning Windows Azure

    - by SnOrfus
    I basically need some confirmation and clarification concerning Windows Azure with respect to a Silverlight application using RIA Services. In a normal Silverlight app that uses RIA Services you have 2 projects:
    - App
    - App.Web
    ...where App is the default client-side Silverlight project and App.Web is the server-side code where your RIA services go. If you create a Windows Azure app and add a WCF Web Services Role, you get:
    - App (Azure project)
    - App.Services (WCF Services project)
    In App.Services, you add your RIA DomainService(s). You would then add another project to this solution that would be the client-side Silverlight that accesses the RIA Services in the App.Services project. You can then add the entity model to App.Services, or to another project that is referenced by App.Services (if that division is required for unit testing etc.), and connect that entity model to either a SQL Server db or a SQL Azure instance. Is this correct? If not, what is the general 'layout' for building an application with the following tiers:
    - UI (Silverlight 4)
    - Services (RIA Services)
    - Entity/Domain (EF 4)
    - Data (SQL Server)
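
    For what it's worth, the RIA piece that would live in App.Services is just a DomainService class exposed to the Silverlight client. A minimal sketch, assuming WCF RIA Services over an EF 4 model; the class, context and entity names are made up for the example:

        // Minimal sketch of a DomainService in App.Services, assuming WCF RIA Services
        // over an EF 4 model. Class, context and entity names are illustrative.
        using System.Linq;
        using System.ServiceModel.DomainServices.EntityFramework;
        using System.ServiceModel.DomainServices.Hosting;

        [EnableClientAccess]
        public class CustomerDomainService : LinqToEntitiesDomainService<AppEntities>
        {
            // Exposed to the Silverlight client through the generated domain context.
            public IQueryable<Customer> GetCustomers()
            {
                return this.ObjectContext.Customers.OrderBy(c => c.Name);
            }

            public void InsertCustomer(Customer customer)
            {
                this.ObjectContext.Customers.AddObject(customer);
            }
        }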

    Read the article
