Search Results

Search found 14693 results on 588 pages for 'azure storage tables'.

Page 30/588 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Azure Service Bus Scalability

    - by phebbar
    I am trying to understand how I can make an Azure Service Bus Topic scale to handle 10,000 requests/second from more than 50 different clients. I found this article from Microsoft - http://msdn.microsoft.com/en-us/library/windowsazure/hh528527.aspx. It provides a lot of good input on scaling Azure Service Bus, such as creating multiple message factories, sending and receiving asynchronously, and doing batched sends/receives. But all of this input is from the publisher and subscriber client perspective. What if the node running the Topic cannot handle the huge number of transactions? How do I monitor that? How do I have the Topic running on multiple nodes? Any input on that would be helpful. I am also wondering if anyone has done any capacity testing with Topics/Queues; I am eager to see those results... Thanks, Prasanna
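
    For reference, the client-side techniques the linked MSDN article recommends (multiple messaging factories, asynchronous sends, batching) look roughly like the sketch below. This is an illustration only, not code from the article or from the question: the connection string, topic name, factory count and batch size are hypothetical, and the exact method names (for example SendBatchAsync) depend on the version of the Microsoft.ServiceBus.Messaging client library in use.

// Sketch: spread sends across several MessagingFactory instances (each owns its own
// connection), send asynchronously, and batch messages. All values below are hypothetical.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

class TopicSendSketch
{
    static async Task SendAllAsync(IReadOnlyList<string> payloads)
    {
        const string connectionString = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"; // hypothetical
        const string topicPath = "mytopic"; // hypothetical
        const int factoryCount = 4;
        const int batchSize = 100;

        // Several factories/senders spread the load over multiple connections.
        var senders = Enumerable.Range(0, factoryCount)
            .Select(_ => MessagingFactory.CreateFromConnectionString(connectionString)
                                         .CreateMessageSender(topicPath))
            .ToList();

        var sendTasks = new List<Task>();
        for (int i = 0; i < payloads.Count; i += batchSize)
        {
            var batch = payloads.Skip(i).Take(batchSize)
                                .Select(p => new BrokeredMessage(p))
                                .ToList();
            var sender = senders[(i / batchSize) % senders.Count];
            sendTasks.Add(sender.SendBatchAsync(batch)); // asynchronous, batched send
        }
        await Task.WhenAll(sendTasks);
    }
}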

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
RESTORE DATABASE WidgetProduction_Virtual
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8GB production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Simple Scripting for your Exalogic Storage

    - by Trond Strømme
    As part of my job in Oracle ACS (Advanced Customer Services) I'm handling lots of different systems and customers. Among the recent systems I've worked with are Oracle's Exalogic engineered systems. One of the things I'd never had much exposure to as a system developer/architect/middleware guy/Java dude has been storage, outside of consuming it for my photography needs.. Well, I'm always ready for a new challenge... I'd downloaded the 7000 series storage simulator when it was released in the good old Sun days, and found it fun and instructive to play around with, but as I never touched storage in any way (besides consuming it..) I forgot about it. A couple of years ago, when I started working with Exalogic engineered systems, it again came to light as an invaluable learning and testing tool for the embedded storage in an Exalogic: Oracle's Sun ZFS Storage 7320 Appliance. aaaanyway... I've been "booted" into a part-time role as the interim storage/system admin/middleware/Java guy for a client and found I needed to create the occasional report or summary or whatever.. of what's using the storage in the 7320 (as configured by default for an Exalogic: 40T of disk in a mirrored configuration, yielding 18T of actual space). Reading the nice documentation and some articles on the Oracle Technology Network, I saw great possibilities in the embedded ECMAScript3/JavaScript engine of the 7000 series. In my personal opinion, anyone who's dealing with Exalogic administration, or exposed to any of the 7000 series storage appliances and servers that Oracle offers, should have a VirtualBox instance of it kicking around. For development and testing it's a fantastic tool. (It can save you from having to explain to your management (most of) the embarrassing FAILS you can commit if you test something in a production system...) So download, and install. A small sidestep: if, after firing up the 7000 series simulator in VirtualBox, you've forgotten what its IP address is, the following will sort you out if you log in directly via the running VirtualBox VM. So in my case I can ssh to 192.168.56.101 or point a browser to https://192.168.56.101:215 to log into the storage appliance. One simple way of executing a script on the 7320 is to ssh to the device and redirect a file containing the script to ssh:

ssh root@192.168.56.101 < myscript.js

One question I got from my client and the people who will take over the systems was: "how can we see the quotas and allocations for all projects/shares in one easy go, so we don't have to go navigating around in the BUI for all the hundreds of shares the 7320 is hosting just to check if anything is running dry?" Easy! JavaScript time, VirtualBox and emacs!
//NOTE! this script is available 'as is'. It has been run on a couple of 7320's (running 2010.08.17.3.0,1-1.25 &
// 2011.04.24.1.0,1-1.8), a 7420 and the VB image, but I personally
//offer no guarantee whatsoever that it won't make your server topple, catch fire or in any way go pear shaped..
//run at your own risk or learn from my code and or mistakes..
script
run('cd /');
run('shares');
//get all projects:
proj = list();

function spaceToGig(bytes){
    return bytes/1073741824; //convert bytes to GB
}

function fullInPercent(quota, space_data){
    tmp = (space_data/quota)*100;
    return tmp;
}

//print header, slightly good looking
printf(" %s/%-15s %8s(GB) %7s(GB) %5s(GB) %7s(GB) %3s\n","Project", "Share","Quota","Ref", "Snap", "Total","%full");
printf("-------------------------------------------------------------------------------\n")

//for each project, get all shares. check for quota and calculate percentage and human readable figures..
for (i=0;i<proj.length;i++){
    run('select ' + proj[i]);
    //get all shares for a project
    var pshares = list();
    //for each share get quota properties
    for (j=0;j<pshares.length;j++){
        run('select ' + pshares[j]);
        quota = get('quota'); //properties associated with a share or inherited from a project
        spaceData = get('space_data');
        spaceSnap = get('space_snapshots');
        spaceTotal = get('space_total');
        if(quota>0){ //has quota
            printf(" %s/%-15s \t%4.2fGB\t%.2fGB\t%.2fGB\t%.2fGB\t%5.2f%%\n",proj[i], pshares[j],spaceToGig(quota),spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),fullInPercent(quota,spaceTotal));
        }else{ //no quota
            printf(" %s/%-15s \t%8s\t%.2fGB\t%.2fGB\t%.2fGB\t%s\n",proj[i],pshares[j], "N/A", spaceToGig(spaceData),spaceToGig(spaceSnap),spaceToGig(spaceTotal),"N/A");
        }
        run('cd ..');
    }
    run('done');
}

The resulting output should look something like this:

Project/Share              Quota(GB)  Ref(GB)   Snap(GB)  Total(GB)  %full
-------------------------------------------------------------------------------
ACSExalogicSystem/domains     N/A      0.04GB    0.00GB    0.04GB     N/A
ACSExalogicSystem/logs        N/A      0.01GB    0.00GB    0.01GB     N/A
ACSExalogicSystem/nodemgrs    N/A      0.00GB    0.00GB    0.00GB     N/A
ACSExalogicSystem/stores      N/A      0.04GB    0.00GB    0.04GB     N/A
***_dev/FMW_***_1             133GB    4.24GB    0.01GB    4.25GB     3.19%
***_dev/FMW_***_2             N/A      4.25GB    0.01GB    4.26GB     N/A
***_dev/applications          10GB     0.00GB    0.00GB    0.00GB     0.00%
***_dev/domains               50GB     10.75GB   3.55GB    14.30GB    28.61%
***_dev/logs                  20GB     0.32GB    0.01GB    0.33GB     1.66%
***_dev/softwaredepot         20GB     4.15GB    0.00GB    4.15GB     20.73%
***_dev/stores                20GB     0.01GB    0.00GB    0.01GB     0.05%
###_dev/FMW_###_1             400GB    17.63GB   0.12GB    17.75GB    4.44%
###_dev/applications          N/A      0.00GB    0.00GB    0.00GB     N/A
###_dev/domains               120GB    14.21GB   5.53GB    19.74GB    16.45%
###_dev/logs                  15GB     0.00GB    0.00GB    0.00GB     0.00%
###_dev/softwaredepot         250GB    73.55GB   0.02GB    73.57GB    29.43%
…snip

My apologies if the output is a bit misaligned here and there; I only bothered making it look good, not perfect :/ I also removed some of the project names (*, #).

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, We have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think it's a good idea to minimize the actual data needed to be stored in the cache layer, to speed up the application further. All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) applies to. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials. Is it bad to make a temp table that binds the tables together? It would only have the necessary information and would of course be cached.

productId | hasPrice | hasSpecial
----------|----------|-----------
        1 |        1 |          0
        2 |        1 |          1

By doing so, it would be super easy to know if a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented. Are temp tables a common thing for web applications, or is it just bad design?

    Read the article

  • Windows Azure Evolution – TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure and the second part about Windows Azure Web Sites. But it does not just focus on WAWS, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery we can create an empty web site and upload our website through various approaches. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it will be deployed automatically once a new commit or build is available.

Create New Empty Web Site
In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This will create an empty web site with only one shared instance and no database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready. And now we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available on my website even though I didn't upload or deploy anything. This means the website will be charged even before anything is deployed, similar to the cloud service (hosted service). That is because once we create a website, the Windows Azure platform arranges a hosting process (w3wp.exe) on the group of virtual machines.

Create Project in TFS Preview Service and Set Up the Link
Currently Windows Azure Web Sites can integrate with TFS and Git as its deployment source, and it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we had just created and then click "Set up TFS publishing", there will be a dialog helping us connect to this service. If you don't have an account you can click the link shown below to request one. Assuming we already have an account for the TFS service, we first need to create a new project. Go to your TFS service website and create a new project, giving the project name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window I provide my TFS service URL and click the "Authorize now" link. Click the "Accept" button to allow Windows Azure to connect to my TFS service. Then it goes back to the developer portal and lists all projects in my account. Just select the one I had just created and click OK. Then our website is linked to the TFS project I specified, and finally it will show something like the figure below. This means the web site has been linked to TFS successfully.

Work with TFS Preview Service in VS2010
In the figure above there are some links that guide us on how to connect to the TFS server through Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC, you don't need any extension. But if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to my TFS service, just open Visual Studio and, in the Team Explorer, add a new TFS server and paste the URL of the TFS service from the developer portal. Then select the project I had just created, and it will be listed in my Team Explorer. Now let's start to build our website.
Since the website we are going to build will be deployed to WAWS, it's NOT a cloud service and NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click on the solution, select "Add Solution to Source Control", and select the project I had just created. Then check the code in. Once the check-in finishes we can see that there is a build running on the TFS server. And if we go back to the developer portal, we will see on our web site's deployment page that there's a deployment running. In fact, once we link our web site to TFS, it creates a new build definition in our TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so that when our code has been compiled it is published to our web site from the TFS server. Once the build and deployment have finished we can see it's now active in the developer portal. Now we can see the web site that was created from Visual Studio and deployed by TFS.

Continuous Deployment through VS and TFS
A big benefit of using TFS publishing is continuous deployment. Now if I change some code in Visual Studio, for example update some text on the home page, and check in my changes, it will trigger a new build and deploy to my WAWS automatically. And even more, if we want to roll back to a previous version we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

Q&A: Can a Web Site use Storage and work with a Worker Role?
Stacy asked a question in my previous post, which was "can a web site use Windows Azure Storage and furthermore work with a worker role". Since the web site is deployed on Windows Azure virtual machines in the data center, it must be able to use all Windows Azure features such as storage, SQL databases, CDN, etc. But since with a web site we normally have a standard ASP.NET web application, PHP website or NodeJS application, the Windows Azure SDK is not referenced by default. We can add it ourselves. In our sample project let's right-click on the MVC project and click "Manage NuGet Packages". In the dialog I search for Windows Azure packages and select "Windows Azure Storage" to install. Then we will have the assemblies to access Windows Azure storage, such as tables, queues and blobs. Since I have a storage account already, let's have a quick demo, just to list all blobs in a container. The code would be like this.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace WAASTFSDemo.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to Windows Azure!";

            var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
            var account = new CloudStorageAccount(credentials, false);
            var client = account.CreateCloudBlobClient();
            var container = client.GetContainerReference("shared");
            ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

            return View();
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

And the corresponding view:

@{
    ViewBag.Title = "Home Page";
}

<h2>@ViewBag.Message</h2>
<p>
    To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
</p>
<div>
    <ul>
    @foreach (var blob in ViewBag.Blobs)
    {
        <li>@blob</li>
    }
    </ul>
</div>

And then just check in the code; it will be deployed to my web site. Finally we can see the blobs in my storage. This is just an example, but it proves that web sites can connect to storage: tables, blobs and queues as well. So the answer to Stacy should be "yes": the web site can use queue storage to work with a worker role.

Summary
In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. You can see our website can be built, uploaded and deployed automatically by the TFS service. All we need to do is provide the TFS name and select the project. Not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, very similar to what we have shown below. But currently, only the Microsoft TFS Preview Service can be integrated with Windows Azure. I think in the future we will be able to link the TFS in our enterprise, and some 3rd party TFS hosts such as CodePlex, to Windows Azure.

Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...] I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

fails with this text displayed:

wbadmin 1.0 - Backup command-line tool
(C) Copyright 2013 Microsoft Corporation. All rights reserved.
Retrieving volume information...
This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
The backup operation to F: is starting.
Creating a shadow copy of the volumes specified for backup...
Summary of the backup operation:
The backup operation stopped before completing.
The backup operation stopped before completing. Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f) Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

The EFI System Partition is 100 MB
The Recovery Partition is 300 MB
The C partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free
The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free

I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

Shadow Copy Storage association
   For volume: (C:)\\?\Volume{37a0...263}\
   Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
   Used Shadow Copy Storage space: 2.39 GB (0%)
   Allocated Shadow Copy Storage space: 2.81 GB (0%)
   Maximum Shadow Copy Storage space: 531 GB (30%)

Shadow Copy Storage association
   For volume: (F:)\\?\Volume{2a2...94f}\
   Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
   Used Shadow Copy Storage space: 334 GB (17%)
   Allocated Shadow Copy Storage space: 337 GB (18%)
   Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

(Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the new "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?

    Read the article

  • Is it possible to backup an SQL Database hosted on an Azure VM, to our internal DPM2012?

    - by Florent Courtay
    I've got an SQL database on an Azure VM (non-domain) that I'd like to back up to our internal DPM 2012 server. I've installed the DPM agent on the Azure VM, set up DCOM to use only ports 5000 to 5025 on both the VM and the DPM server, and created the 135, 5000-5025, 5718 and 5719 endpoints on Azure and in the VM's firewall. When trying to add this agent to the DPM server, I end up with an error, "Unable to contact the protection agent on server .cloudapp.net". I know there is some sort of connection between them, as using a wrong password gives me an Invalid Credentials error. The error seems to be DCOM related: when trying to connect to the Azure VM from the DPM server using VBEMTest, I get an error "0x800706ba The RPC server is unavailable", but access is denied when using wrong credentials. What am I missing? Has anyone been able to achieve this kind of setup? Thanks for your help!

    Read the article

  • How do you live-migrate Hyper-V to Azure?

    - by TopHat
    I have a new install of Windows Server 2012 with the Hyper-V role set up and a couple of VMs running along fat, dumb, and happy. I want to play with Azure hosting of VMs for a couple of stand-alone boxes. Is there anything special that I need to wire up to be able to live-migrate to Azure? I have the 90-day Azure trial account right now. Any special plumbing required? I have not found a lot of documentation about this yet. Everything I have found points to manually copying the VHDs via the command line and the Azure 2012 SDK.

    Read the article

  • Using Javascript to call the Azure Blob Storage REST API

    - by user350829
    I'm developing a Flash app that saves files to the Azure Blob Storage API. I've learned that you should use the REST API directly rather than a go-between WCF service, as this is the most efficient approach (using a web role is a bottleneck). The problem is that Flash can't do PUT or DELETE methods over HTTP and has to use external JavaScript. This is not an area that I'm familiar with, and I need some advice/links to examples of using JavaScript to work with the Storage API (I've obviously Googled this to no avail). Is this even possible? The JavaScript would be hosted in a web role on the same domain. Many thanks, Ed

    Read the article

  • Azure storage: Uploaded files with size zero bytes

    - by Fabio Milheiro
    When I upload an image file to a blob, the image is uploaded apparently successfully (no errors). When I go to Cloud Storage Studio, the file is there, but with a size of 0 (zero) bytes. The following is the code that I am using:

// These two methods belong to the ContentService class used to upload
// files in the storage.
public void SetContent(HttpPostedFileBase file, string filename, bool overwrite)
{
    CloudBlobContainer blobContainer = GetContainer();
    var blob = blobContainer.GetBlobReference(filename);
    if (file != null)
    {
        blob.Properties.ContentType = file.ContentType;
        blob.UploadFromStream(file.InputStream);
    }
    else
    {
        blob.Properties.ContentType = "application/octet-stream";
        blob.UploadByteArray(new byte[1]);
    }
}

public string UploadFile(HttpPostedFileBase file, string uploadPath)
{
    if (file.ContentLength == 0)
    {
        return null;
    }

    string filename;
    int indexBar = file.FileName.LastIndexOf('\\');
    if (indexBar > -1)
    {
        filename = DateTime.UtcNow.Ticks + file.FileName.Substring(indexBar + 1);
    }
    else
    {
        filename = DateTime.UtcNow.Ticks + file.FileName;
    }
    ContentService.Instance.SetContent(file, Helper.CombinePath(uploadPath, filename), true);
    return filename;
}

// The above code is called by this code.
HttpPostedFileBase newFile = Request.Files["newFile"] as HttpPostedFileBase;
ContentService service = new ContentService();
blog.Image = service.UploadFile(newFile, string.Format("{0}{1}", Constants.Paths.BlogImages, blog.RowKey));

Before the image file is uploaded to the storage, the InputStream property of the HttpPostedFileBase appears to be fine (the size of the image corresponds to what is expected, and no exceptions are thrown). And the really strange thing is that this works perfectly in other cases (uploading PowerPoints or even other images from the worker role). The code that calls the SetContent method seems to be exactly the same, and the file seems to be correct since a new file with zero bytes is created at the correct location. Does anyone have any suggestion please? I debugged this code dozens of times and I cannot see the problem. Any suggestions are welcome! Thanks
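
For illustration only, and offered as a hedged sketch rather than a confirmed fix: UploadFromStream reads from the stream's current position, so if file.InputStream has already been read (for example by validation or image-inspection code), the upload starts at the end of the stream and produces a zero-byte blob. The variation below mirrors the SetContent method from the post; only the CanSeek/Position lines are new, and they are an assumption added for illustration.

// Hedged sketch: same flow as the SetContent method above, with the stream
// position rewound before uploading. Only the CanSeek/Position lines are new.
public void SetContent(HttpPostedFileBase file, string filename, bool overwrite)
{
    CloudBlobContainer blobContainer = GetContainer();
    var blob = blobContainer.GetBlobReference(filename);
    if (file != null)
    {
        blob.Properties.ContentType = file.ContentType;
        // If the stream was consumed earlier, its position may sit at the end,
        // which would upload nothing.
        if (file.InputStream.CanSeek)
        {
            file.InputStream.Position = 0;
        }
        blob.UploadFromStream(file.InputStream);
    }
    else
    {
        blob.Properties.ContentType = "application/octet-stream";
        blob.UploadByteArray(new byte[1]);
    }
}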

    Read the article

  • Silverlight isolated storage: what identifies an "application"?

    - by Joe White
    The docs for Silverlight's IsolatedStorageFile.GetUserStoreForApplication just say that the isolated storage is specific to the "application", and that each different application will have its own storage independent of all other "applications" (but with one quota for the entire domain). That's great, but I haven't found anything yet that explains just what "application" is supposed to mean (either in the Silverlight docs or the regular .NET Framework docs). What information does Silverlight, in particular, use to decide that "this is application A" and "this is application B"? Does it just go off the URI to the .xap file, or what?
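
For reference, the call under discussion is used roughly as in the sketch below. This is an illustration of the call pattern only, assuming the Silverlight System.IO.IsolatedStorage API; the file name and the text being written are hypothetical.

// Illustration only: the store returned by GetUserStoreForApplication is scoped
// to whatever Silverlight considers the "application" (and to the current user).
// GetUserStoreForSite, by contrast, is shared across applications from the same site.
using System.IO;
using System.IO.IsolatedStorage;

public static class IsoStoreSketch
{
    public static void SaveSetting(string text)
    {
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
        using (IsolatedStorageFileStream stream = store.OpenFile("settings.txt", FileMode.Create))
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(text); // isolated per "application", however that is identified
        }
    }
}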

    Read the article

  • Access block level storage via kernel

    - by N 1.1
    How do I access block-level storage via the kernel (without using SCSI libraries)? My intent is to implement a block-level storage protocol over the network for learning purposes, almost the same way SCSI works. Requests will be generated by the initiator and sent to the target (both userspace programs), which makes a call to a kernel module and returns the data to the initiator using the TCP protocol. So far, I have managed to build a simple "Hello" module and run it (I am new at kernel programming), but I am unable to proceed with block access. After searching a lot, I found struct buffer_head * bread(int dev, int block) in linux/fs.h, but the compiler throws an error: error: implicit declaration of function 'bread'. Please help, and also feel free to advise on getting started with kernel programming. Thank you!

    Read the article

  • LaTeX: Listing all figures (tables, algorithms) once again at the end of the document

    - by Zlatko
    Hi all, I have been writing a rather large document with LaTeX. Now I would like to list all the figures / tables / algorithms once again at the end of the file so that I can check whether they all look the same; for example, whether every algorithm uses the same notation. How can I do this? I know about \listofalgorithms and \listoffigures, but they only list the names of the algorithms or figures and the pages where they are. Thanks.

    Read the article

  • Storage for large gridded datasets

    - by nullglob
    I am looking for a good storage format for large, gridded datasets. The application is meteorology, and we would prefer a format that is common within this field (to help exchange data with others). I don't need to deal with special data structures, and there should be a Fortran API. I am currently considering HDF5, GRIB2 and NetCDF4. How do these formats compare in terms of data compression? What are their main limitations? How steep is the learning curve? Are there any other storage formats worth investigating? I have not found a great deal of material outlining the differences and pros/cons of these formats (there is one relevant SO thread, and a presentation comparing GRIB and NetCDF).

    Read the article

  • Mysql insert into 2 tables

    - by Spidfire
    I want to make an insert into 2 tables:

visits:
visit_id int | card_id int

registration:
registration_id int | type enum('in','out') | timestamp int | visit_id int

I want something like:

INSERT INTO `visits` AS v, `registration` AS r
    (v.`visit_id`, v.`card_id`, r.`registration_id`, r.`type`, r.`timestamp`, r.`visit_id`)
VALUES (NULL, 12131141, NULL, UNIX_TIMESTAMP(), v.`visit_id`);

I wonder if it's possible.

    Read the article

  • Azure git deployment - missing references in 2nd assembly

    - by Dan
    I'm trying to set up Bitbucket deployment to an Azure website. I successfully have Bitbucket and Azure linked, but when I push to Bitbucket, I get the following error on the Azure site: If I click on 'View Log', it shows the following compile errors:

D:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

D:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3245: Could not resolve this reference. Could not locate the assembly "WebMatrix.WebData, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

CustomMembershipProvider.cs(5,7): error CS0246: The type or namespace name 'WebMatrix' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

CustomMembershipProvider.cs(9,38): error CS0246: The type or namespace name 'ExtendedMembershipProvider' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

Models\AccountModels.cs(3,18): error CS0234: The type or namespace name 'Mvc' does not exist in the namespace 'System.Web' (are you missing an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

CustomMembershipProvider.cs(198,37): error CS0246: The type or namespace name 'OAuthAccountData' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

Models\AccountModels.cs(40,10): error CS0246: The type or namespace name 'Compare' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

Models\AccountModels.cs(40,10): error CS0246: The type or namespace name 'CompareAttribute' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

Models\AccountModels.cs(73,10): error CS0246: The type or namespace name 'Compare' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

Models\AccountModels.cs(73,10): error CS0246: The type or namespace name 'CompareAttribute' could not be found (are you missing a using directive or an assembly reference?) [C:\DWASFiles\Sites\<projname>\VirtualDirectory0\site\repository\<projname>.Common\<projname>.Common.csproj]

Note that these compile errors are against another assembly in my project (the assembly where I put the business logic). When Googling, the only mention I found was about having to set the "Copy Local" flag to true for those references. I've tried this, but still got the same errors. This all compiles fine locally. Any ideas?

    Read the article

  • Getting code-first Entity Framework to build tables on SQL Azure

    - by NER1808
    I am new to code-first Entity Framework. I have tried a few things now, but can't get EF to construct any tables in my SQL Azure database. Can anyone advise on some steps and settings I should check? The membership provider has no problems creating its tables. I have added PersistSecurityInfo=True to the connection string. The connection string is using the main user account for the server. When I implement the tables in the database using SQL, everything works fine. I have the following in WebRole.cs:

//Initialize the database
Database.SetInitializer<ReykerSCPContext>(new DbInitializer());

My DbInitializer does not get run before I get an "Invalid object name 'dbo.ClientAccountIFAs'." error when I try to access the table for the first time, sometime after startup.

public class DbInitializer : DropCreateDatabaseIfModelChanges<ReykerSCPContext>
{
    protected override void Seed(ReykerSCPContext context)
    {
        using (context)
        {
            //Add Doc Types
            context.DocTypes.Add(new DocType() { DocTypeId = 1, Description = "Statement" });
            context.DocTypes.Add(new DocType() { DocTypeId = 2, Description = "Contract note" });
            context.DocTypes.Add(new DocType() { DocTypeId = 3, Description = "Notification" });
            context.DocTypes.Add(new DocType() { DocTypeId = 4, Description = "Invoice" });
            context.DocTypes.Add(new DocType() { DocTypeId = 5, Description = "Document" });
            context.DocTypes.Add(new DocType() { DocTypeId = 6, Description = "Newsletter" });
            context.DocTypes.Add(new DocType() { DocTypeId = 7, Description = "Terms and Conditions" });

            //Add Reyker account types
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 1, Description = "ISA" });
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 2, Description = "Trading" });
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 3, Description = "SIPP" });
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 4, Description = "CTF" });
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 5, Description = "JISA" });
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 6, Description = "Direct" });
            context.ReykerAccountTypes.Add(new ReykerAccountType() { ReykerAccountTypeID = 7, Description = "ISA & Direct" });

            //Save the changes
            context.SaveChanges();
        }
    }
}

And my DbContext class looks like:

public class ReykerSCPContext : DbContext
{
    //set the connection explicitly
    public ReykerSCPContext() : base("ReykerSCPContext") { }

    //define tables
    public DbSet<ClientAccountIFA> ClientAccountIFAs { get; set; }
    public DbSet<Document> Documents { get; set; }
    public DbSet<DocType> DocTypes { get; set; }
    public DbSet<ReykerAccountType> ReykerAccountTypes { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        //Runs when creating the model. Can use to define special relationships, such as many-to-many.
    }
}

The code used to access it is:

public List<ClientAccountIFA> GetAllClientAccountIFAs()
{
    using (DataContext)
    {
        var caiCollection = from c in DataContext.ClientAccountIFAs
                            select c;
        return caiCollection.ToList();
    }
}

and it errors on the last line. Help!

    Read the article

  • Are there any home/soho NAS devices that will backup/sync to the cloud?

    - by 3rdparty
    Looking for a home office (SOHO) market (priced) network hard drive (NAS) that will sync some or all of its content to a cloud-based backup service. The only option I've been able to find so far is NetGear's ReadyNAS Vault; however, from what I've read it's not as secure as it could be, and the service is quite expensive ($200/yr for 50GB of cloud storage) - it's 'powered' by ElephantDrive. Ideally I would love to see something like Wuala integrated into a Lacie network HDD - conveniently, I suspect this is in the works, as Lacie recently acquired Wuala, however nothing has come of it yet. I know there are options to use rsync with a customizable NAS (such as the very versatile and hackable D-Link DNS-323), but the easier this is to set up and maintain, the better. Thanks! ps. I had many links posted within this question, but was limited to posting only one due to anti-spam restrictions - gotta get my 'reputation' higher!

    Read the article

  • MySQL unstable behavior with external storage

    - by onurozcelik
    In our project we are using a MySQL 5.0.90 (InnoDB engine) server with external storage. We store the MySQL data files on the external storage. When the external storage goes down for some reason, we see unstable behaviour, so we ran some tests.

In Windows Server 2008:
We closed the external storage physically. The MySQL service stopped and we could not reach the server. Then we opened the storage unit, and we could not start the service until we reconfigured MySQL.
We made the storage unit offline from the operating system. After 3-4 minutes and some insert attempts (some insert attempts succeeded), the MySQL service stopped and we could not reach the server. Then we made the storage unit online and we could start the service (not automatically).

In Windows Server 2003:
We made the storage unit offline. After 3-4 minutes and some insert attempts (some insert attempts succeeded), the MySQL service stopped and we could not reach the server. Then we made the storage unit online; we could not start the service until we reinstalled MySQL. Before reinstalling we tried to reconfigure, but it did not work.
We closed the external storage physically. The MySQL service stopped and we could not reach the server. After that we opened the storage unit and we could start the service (not automatically).

We expect the service to autostart after the storage unit is online/open, but these tests show unstable behaviour. Are there any solutions to this?

    Read the article

  • Flexible cloud file storage for a web.py app?

    - by benwad
    I'm creating a web app using web.py (although I may later rewrite it for Tornado) which involves a lot of file manipulation. One example, the app will have a git-style 'commit' operation, in which some files are sent to the server and placed in a new folder along with the unchanged files from the last version. This will involve copying the old folder to the new folder, replacing/adding/deleting the files in the commit to the new folder, then deleting all unchanged files in the old folder (as they are now in the new folder). I've decided on Heroku for the app hosting environment, and I am currently looking at cloud storage options that are built with these kinds of operations in mind. I was thinking of Amazon S3, however I'm not sure if that lets you carry out these kinds of file operations in-place. I was thinking I may have to load these files into the server's RAM and then re-insert them into the bucket, costing me a fortune. I was also thinking of Progstr Filer (http://filer.progstr.com/index.html) but that seems to only integrate with Rails apps. Can anyone help with this? Basically I want file operations to be as cheap as possible.

    Read the article

  • WCF Web Service Gives 404 error in Azure

    - by landyman
    I'm new to using WCF and Azure, but I have a WCF web service that works correctly when debugging in Visual Studio. I set the startup project to Azure, and I get 404 errors for any URL I try that is related to the service. Here is what I think is the relevant code.

From IWebService.cs:

[OperationContract]
[WebGet(UriTemplate = "GetData/Xml?value={value}", ResponseFormat=WebMessageFormat.Xml)]
string GetDataXml(string value);

and from Web.config:

<system.serviceModel>
  <services>
    <service name="WebService" behaviorConfiguration="WebServiceBehavior">
      <!-- Service Endpoints -->
      <endpoint address="" binding="webHttpBinding" contract="IWebService" behaviorConfiguration="WebEndpointBehavior"></endpoint>
      <endpoint address="ws" binding="wsHttpBinding" contract="IWebService"/>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="WebServiceBehavior">
        <serviceMetadata httpGetEnabled="true"/>
        <serviceDebug includeExceptionDetailInFaults="true"/>
      </behavior>
    </serviceBehaviors>
    <endpointBehaviors>
      <behavior name="WebEndpointBehavior">
        <webHttp/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>

I have tried changing the binding to 'basicHttpBinding', but that had no luck. Thanks in advance for any help!

    Read the article
