Search Results

Search found 6879 results on 276 pages for 'azure storage blobs'.


  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx

    Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. When you request a subscription on any channel, the owner of the subscribing endpoint needs to confirm they're happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK.

    Let's say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compared to the SQS class which is just called QueueClient):

        var request = new CreateTopicRequest();
        request.Name = TopicName;
        var response = _snsClient.CreateTopic(request);
        TopicArn = response.TopicArn;

    In the response from AWS (which I'm assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request:

        var response = _snsClient.Subscribe(new SubscribeRequest
        {
            TopicArn = TopicArn,
            Protocol = "sqs",
            Endpoint = _queueClient.QueueArn
        });

    That response will give you an ARN for the subscription, which you'll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn't give you a nice mechanism for doing that, so I've extended my AWS wrapper with a method that encapsulates it:

        internal void AllowSnsToSendMessages(TopicClient topicClient)
        {
            var policy = Policies.AllowSendFormat
                .Replace("%QueueArn%", QueueArn)
                .Replace("%TopicArn%", topicClient.TopicArn);
            var request = new SetQueueAttributesRequest();
            request.Attributes.Add("Policy", policy);
            request.QueueUrl = QueueUrl;
            var response = _sqsClient.SetQueueAttributes(request);
        }

    That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action:

        public const string AllowSendFormat = @"{
          ""Statement"": [
            {
              ""Sid"": ""MySQSPolicy001"",
              ""Effect"": ""Allow"",
              ""Principal"": { ""AWS"": ""*"" },
              ""Action"": ""sqs:SendMessage"",
              ""Resource"": ""%QueueArn%"",
              ""Condition"": {
                ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" }
              }
            }
          ]
        }";

    There's a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2.

    Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use:

        var topicClient = new TopicClient("BigNews", "ImListening");

    And the topic client has a Subscribe() method, which calls into the message pump on the queue client:

        topicClient.Subscribe(x => Log.Debug(x.Body));
        var message = {}; // etc.
        topicClient.Publish(message);

    So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.
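
    Pulling those pieces together without the wrapper classes, a rough end-to-end sketch against the raw v2 SDK might look like the following. The GetQueueAttributes lookup for the queue ARN and the exact response property names are assumptions on my part (the post reads the ARN from its own QueueClient wrapper), so treat this as an outline rather than the article's code:

        // Sketch: create a topic and a queue, subscribe the queue, then grant send permission.
        // Assumes _snsClient and _sqsClient are already constructed with credentials,
        // and Policies.AllowSendFormat is the policy template shown above.
        var topicArn = _snsClient.CreateTopic(new CreateTopicRequest { Name = "BigNews" }).TopicArn;
        var queueUrl = _sqsClient.CreateQueue(new CreateQueueRequest { QueueName = "ImListening" }).QueueUrl;

        // Look up the queue's ARN so the subscription and the policy can reference it.
        var queueArn = _sqsClient.GetQueueAttributes(new GetQueueAttributesRequest
        {
            QueueUrl = queueUrl,
            AttributeNames = new List<string> { "QueueArn" }
        }).Attributes["QueueArn"];

        _snsClient.Subscribe(new SubscribeRequest { TopicArn = topicArn, Protocol = "sqs", Endpoint = queueArn });

        // Attach the policy so SNS is allowed to call sqs:SendMessage on the queue.
        var policy = Policies.AllowSendFormat.Replace("%QueueArn%", queueArn).Replace("%TopicArn%", topicArn);
        var setAttributes = new SetQueueAttributesRequest { QueueUrl = queueUrl };
        setAttributes.Attributes.Add("Policy", policy);
        _sqsClient.SetQueueAttributes(setAttributes);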


  • BizTalk Server 2013 beta on Windows 8 (with Visual Studio 2012, SQL Server 2012 & ESB Toolkit 2.2)

    - by Vishal
    Hello BizTalkers, Microsoft has finally released the beta version of BizTalk Server 2010 R2, and it's now called BizTalk Server 2013. I had tried the BTS 2010 R2 CTP version on a Windows Azure VM, and I was particularly excited about the RESTful services support and ESB being fully integrated into BizTalk. I didn't get the chance to test it much, because of the running costs associated with Azure and the VM. Anyway, I was waiting for this announcement and was very glad that Microsoft finally released the on-premise one. Check out what's new in BizTalk Server 2013. Officially Microsoft says that BizTalk Server 2013 "beta" is not supported on Windows 8, but I was curious to try it out. Below is my installation and configuration experience.

    Virtual machine configuration:
      - VMware Workstation 9.0
      - Windows 8 Enterprise x64
      - SQL Server 2012
      - Visual Studio 2012 Ultimate
      - BizTalk Server 2013 beta
      - Windows 8 machine name: WIN8
      - Local Administrator account name: Admin

    First I installed Windows 8 Enterprise on VMware Workstation 9.0 and updated the OS. Since Windows 8 is a new release, luckily there weren't many updates to perform. Next I installed Visual Studio 2012 Ultimate, which was a straightforward installation. Then I installed SQL Server 2012: select New SQL Server stand-alone installation and follow the steps as shown in the screenshot below. Once the installation is finished, fire up SQL Server Management Studio and try connecting. Initially when Management Studio opened up, I wondered why Visual Studio 2010 had opened when I tried opening SQL Management Studio, but they have made the interface look like VS 2010. Cool, I like it.

    Next is the real deal: download BizTalk Server 2013 and unzip it to a particular folder. Double-click Setup.exe and follow the steps in the screenshots to install Microsoft BizTalk Server 2013 beta. I selected all the normal artifacts and also all the artifacts under Additional Software. So far so good. Next, launch BizTalk Server Configuration; I used Basic configuration as shown in the screenshot below. I didn't expect to see this, but "voila": successful on the first shot. Still, I wasn't sure something hadn't gone wrong, so I fired up the BizTalk Server Administration Console and that too came up just fine. Still not quite believing it, I created a simple messaging application (message in -> message out) and that also worked just fine. Finally I was convinced that BizTalk Server 2013 did work on Windows 8.

    The next step was to install the ESB Toolkit 2.2, which is now integrated with BizTalk Server and does not come as a separate standalone installation file. Again, run the BizTalk Setup.exe from the unzipped folder and install Microsoft ESB Toolkit. Unlike the BizTalk configuration, the ESB Configuration would not open up by itself, so go to the "Windows 8 so-called Start" (I could not resist writing this) and open the ESB Toolkit Configuration wizard. The screenshots below show the configurations I used; you can also find them on MSDN here. Finally, after the ESB configuration, I opened the Admin Console and checked the two ESB applications deployed. Cool.

    This concludes my experience installing and configuring BizTalk Server 2013 Beta and ESB Toolkit 2.2 on Windows 8. I will try to keep writing about BizTalk Server 2013 and its use with RESTful services etc. Thanks, Vishal Mody


  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we've started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it's theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we've made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn't that easy. The money and product don't change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don't know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis.

    What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write-up that we followed to conduct our analysis. Basically, it all boils down to this equation:

        LTV = ARPU x ALC

    To make it a bit less of an alphabet soup and a bit more understandable, we can write it out in full: the lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service. Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it's revenue divided by customers. The problem comes when we realise that we don't know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months' worth of data? The answer lies in the fact that:

        Average Lifetime of a Customer = 1 / Churn Rate

    The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn't have enough data to make an estimate of one month's cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three-month periods customers stay for, and we want our answer to be in months).

    Now these estimates are likely to be fairly unreliable; when there's not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it's super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we're looking for!
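
    As a quick illustration of the three-month adjustment described above, here is a tiny sketch with made-up numbers (nothing here comes from our actual figures):

        // Toy LTV calculation following the formulas above; all figures are invented.
        class LtvSketch
        {
            static void Main()
            {
                double monthlyRevenue = 5000.0;          // average revenue per month
                double customers = 100.0;                // current customer count
                double cancellationsLast3Months = 5.0;   // cancellations over the last three months

                double arpu = monthlyRevenue / customers;                      // $50 per customer per month
                double churnPer3Months = cancellationsLast3Months / customers; // churn over a three-month window
                double lifetimeMonths = (1.0 / churnPer3Months) * 3.0;         // three-month periods -> months

                double ltv = arpu * lifetimeMonths;
                System.Console.WriteLine("ARPU = {0:C}, lifetime = {1} months, LTV = {2:C}",
                    arpu, lifetimeMonths, ltv);
            }
        }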


  • Compress/Decompress NSString in objective-c (iphone) using GZIP or deflate

    - by Steven
    Hi, I have a web service running on Windows Azure which returns JSON that I consume in my iPhone app. Unfortunately, Windows Azure doesn't seem to support the compression of dynamic responses yet (long story), so I decided to get around it by returning an uncompressed JSON package which contains a compressed (using GZIP) string, e.g.:

        {"Error":null,"IsCompressed":true,"Success":true,"Value":"vWsAAB+LCAAAAAAAB..etc.."}

    ... where Value is the compressed string of a complex object represented in JSON. This was really easy to implement on the server, but for the life of me I can't figure out how to decompress a gzipped NSString into an uncompressed NSString; all the examples I can find for zlib etc. are dealing with files. Can anyone give me any clues on how to do this? (I'd also be happy with a solution that used deflate, as I could change the server-side implementation to use deflate too.) Thanks!! Steven

    Edit 1: Aaah, I see that ASIHTTPRequest is using the following function in its source code:

        // uncompress gzipped data with zlib
        + (NSData *)uncompressZippedData:(NSData *)compressedData;

    ... and I'm aware that I can convert NSString to NSData, so I'll see if this leads me anywhere!
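
    For reference, the .NET server side described above ("really easy to implement on the server") presumably does something along these lines; this is only a guess at its shape, shown because the client has to reverse exactly these two steps (base64-decode, then gunzip):

        // Hypothetical sketch of the server side: gzip the JSON for "Value"
        // and return it base64-encoded inside the uncompressed envelope.
        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        public static class PayloadCompressor
        {
            public static string CompressToBase64(string json)
            {
                byte[] raw = Encoding.UTF8.GetBytes(json);
                using (var buffer = new MemoryStream())
                {
                    using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                    {
                        gzip.Write(raw, 0, raw.Length);
                    }
                    return Convert.ToBase64String(buffer.ToArray());
                }
            }

            // The mirror image of what the iPhone client needs to do with the "Value" string.
            public static string DecompressFromBase64(string value)
            {
                byte[] compressed = Convert.FromBase64String(value);
                using (var gzip = new GZipStream(new MemoryStream(compressed), CompressionMode.Decompress))
                using (var reader = new StreamReader(gzip, Encoding.UTF8))
                {
                    return reader.ReadToEnd();
                }
            }
        }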


  • NoneType object has no attribute '__getitem__'

    - by adohertyd
    I am trying to use an API wrapper downloaded from the net to get results from the new Azure Bing API. I'm trying to implement it as per the instructions but getting this runtime error:

        Traceback (most recent call last):
          File "bingwrapper.py", line 4, in <module>
            bingsearch.request("affirmative action")
          File "/usr/local/lib/python2.7/dist-packages/bingsearch-0.1-py2.7.egg/bingsearch.py", line 8, in request
            return r.json['d']['results']
        TypeError: 'NoneType' object has no attribute '__getitem__'

    This is the wrapper code:

        import requests

        URL = 'https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web?Query=%(query)s&$top=50&$format=json'
        API_KEY = 'SECRET_API_KEY'

        def request(query, **params):
            r = requests.get(URL % {'query': query}, auth=('', API_KEY))
            return r.json['d']['results']

    The instructions are:

        >>> import bingsearch
        >>> bingsearch.API_KEY='Your-Api-Key-Here'
        >>> r = bingsearch.request("Python Software Foundation")
        >>> r.status_code
        200
        >>> r[0]['Description']
        u'Python Software Foundation Home Page. The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to ...'
        >>> r[0]['Url']
        u'http://www.python.org/psf/

    This is my code that uses the wrapper (as per the instructions):

        import bingsearch
        bingsearch.API_KEY='abcdefghijklmnopqrstuv'
        r = bingsearch.request("affirmative+action")


  • Great examples of self-paced labs and exercises

    - by Mayo
    It is probably a safe bet that many of us are what they call Tactile / Kinesthetic Learners meaning that we learn best when we are physically doing something as opposed to listening to an online tutorial or reading a book. My goal with this question is to derive a list of books or online resources that serve as superb examples of self-paced programming labs and exercises. For example, I was extremely impressed with the SportsStore exercise in Steven Sanderson's Pro ASP.NET MVC Framework. The exercise spanned multiple chapters and gradually introduced new topics. I was also impressed with the materials associated with the Windows Azure Boot Camp. The demos and lab materials, accessible through the website, allow us to practice and reinforce what we can read about in articles and books. Please list any examples you might have, one per submission, below. The question is language/platform agnostic. Suggestions can be generic or specific to a given technology (PHP, SQL Server, Azure, Flash, Objective C, etc.). I only ask that the answers pertain to labs and exercises that relate to programming. My hope is that the best answers will float to the top allowing developers to review the top answers and find another programming topic that can be learned through example.


  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day. I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X, while the code is on machine Y? Say I build and run this code on my dev machine (localhost):

        servicehost = new ServiceHost(typeof(MyService1));
        servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(),
            "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost.

    What implementation must be done on the datacenter server, so that if I had to point to http://my.datacenter.com/MyApp/MyService1, it would route the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet. It is a possible infrastructure that we are researching to see if we can create a service-bus type architecture, so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance.
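
    One possible shape for the datacenter piece (not from the question, just a sketch) is WCF 4's RoutingService: it listens on the public address and forwards calls to a backend endpoint. The dev-machine address below is a placeholder, and this only covers message routing; the dev machine still has to be reachable from the datacenter over net.tcp, which is a networking question rather than a WCF one.

        // Sketch: a WCF RoutingService in the datacenter that forwards net.tcp calls
        // to the service actually running on the dev machine. Addresses are placeholders.
        using System;
        using System.Collections.Generic;
        using System.ServiceModel;
        using System.ServiceModel.Description;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Routing;

        class Router
        {
            static void Main()
            {
                var host = new ServiceHost(typeof(RoutingService),
                    new Uri("net.tcp://my.datacenter.com/MyApp"));
                host.AddServiceEndpoint(typeof(IRequestReplyRouter), new NetTcpBinding(), "MyService1");

                // Destination endpoint: the real MyService1 on the dev machine.
                var destination = new ServiceEndpoint(
                    ContractDescription.GetContract(typeof(IRequestReplyRouter)),
                    new NetTcpBinding(),
                    new EndpointAddress("net.tcp://dev-machine/MyApp/MyService1"));

                // Route every incoming message to that destination.
                var config = new RoutingConfiguration();
                config.FilterTable.Add(new MatchAllMessageFilter(), new List<ServiceEndpoint> { destination });
                host.Description.Behaviors.Add(new RoutingBehavior(config));

                host.Open();
                Console.WriteLine("Routing net.tcp://my.datacenter.com/MyApp/MyService1 to the dev machine...");
                Console.ReadLine();
            }
        }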


  • ActionScript 3.0 Color Output Error?

    - by TheDarkIn1978
    I'm employing color in a current AS3 project, and have come across what appears to be an error in the Flash Player (version 10). It might also be an error with Apple's DigitalColor Meter (version 3.7.2), which is what I'm using to sample the displayed colors on Mac OS X Snow Leopard (version 10.6.3).

        // Primary, secondary, and tertiary colors of the RGB color wheel
        var red:Number = 0xFF0000;
        var orange:Number = 0xFF7D00;
        var yellow:Number = 0xFFFF00;
        var chartreuse:Number = 0x7DFF00;
        var green:Number = 0x00FF00;
        var spring:Number = 0x00FF7D;
        var cyan:Number = 0x00FFFF;
        var azure:Number = 0x007DFF;   // reads 0x0077FF
        var blue:Number = 0x0000FF;
        var violet:Number = 0x7D00FF;
        var magenta:Number = 0xFF00FF; // reads 0xFF00F8
        var rose:Number = 0xFF007D;    // reads 0xFF0077

    All of these colors display normally except for 3: Azure, Magenta and Rose. They are coded with the appropriate numbers, but when I use the color meter to sample the displayed colors, those 3 return inaccurate results. Does anyone have any insight about this issue? What is causing the error, the Flash runtime or the color sampler? If it's the Flash Player, could this problem be much deeper?

    *Sampling this image will return inaccurate results due to .jpg compression; it's simply for illustration.


  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

        using System;
        using System.Data.Services;
        using System.Linq.Expressions;
        using System.ServiceModel;
        using System.Web;

        namespace Oslo.Worker
        {
            [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
            public class AdminService : DataService<OsloEntities>
            {
                public static void InitializeService(IDataServiceConfiguration config)
                {
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
                }

                [QueryInterceptor("Pairs")]
                public Expression<Func<Pair, bool>> OnQueryPairs()
                {
                    // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                    if (HttpContext.Current.User.Identity.Name != "ADMIN")
                        throw new Exception("Ooops!");

                    return p => true;
                }
            }
        }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

        using System;
        using System.Data.Services;

        namespace Oslo.Worker
        {
            public class AdminHost : DataServiceHost
            {
                public AdminHost(Uri baseAddress)
                    : base(typeof(AdminService), new Uri[] { baseAddress })
                {
                }
            }
        }

    And finally, here's the client code:

        using System;
        using System.Data.Services.Client;
        using System.Net;
        using Oslo.Shared;

        namespace Oslo.ClientTest
        {
            public class AdminContext : DataServiceContext
            {
                public AdminContext(Uri serviceRoot, string userName, string password)
                    : base(serviceRoot)
                {
                    Credentials = new NetworkCredential(userName, password);
                }

                public DataServiceQuery<Order> Orders
                {
                    get { return base.CreateQuery<Order>("Orders"); }
                }
            }
        }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....
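
    Not part of the original question, but one direction that is sometimes suggested for this situation: NetworkCredential is only sent automatically after an HTTP 401 challenge, and HttpContext.Current is null when the service is self-hosted outside ASP.NET, so you can attach a header yourself on the client and read it through the WCF operation context in the interceptor. The header name and token format below are arbitrary assumptions, and a real deployment would want HTTPS plus a proper authentication scheme:

        // Client side: stamp a custom header onto every outgoing request.
        // Requires System.Text for Encoding.
        var ctx = new AdminContext(serviceRoot, "ADMIN", "secret");
        ctx.SendingRequest += (sender, e) =>
        {
            var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("ADMIN:secret"));
            e.RequestHeaders["X-Credentials"] = token;
        };

        // Server side, inside OnQueryPairs(): read the header from the WCF web context
        // (System.ServiceModel.Web) instead of HttpContext.Current.
        var headers = WebOperationContext.Current.IncomingRequest.Headers;
        var token = headers["X-Credentials"];
        // ... decode and validate the token, and throw if it doesn't check out ...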


  • Getting client denied when accessing a wsgi graphite script

    - by Dr BDO Adams
    I'm trying to set up Graphite on my Mac OS X 10.7 Lion machine. I've set up Apache to call the Python Graphite script via WSGI, but when I try to access it, I get a forbidden response from Apache, and this in the error log:

        client denied by server configuration: /opt/graphite/webapp/graphite.wsgi

    I've checked that the script's location is allowed in httpd.conf, and the permissions of the file, but they seem correct. What do I have to do to get access? Below is the httpd.conf, which is nearly the Graphite example.

        <IfModule !wsgi_module.c>
            LoadModule wsgi_module modules/mod_wsgi.so
        </IfModule>

        WSGISocketPrefix /usr/local/apache/run/wigs

        <VirtualHost _default_:*>
            ServerName graphite
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            # XXX You will need to create this file! There is a graphite.wsgi.example
            # file in this directory that you can safely use, just copy it to graphite.wgsi
            WSGIScriptAlias / /opt/graphite/webapp/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            Alias /media/ "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache.
            <Directory "/opt/graphite/webapp/">
                Options +ExecCGI
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    Can you help?


  • Very large database, very small portion most being retrieved in real time

    - by mingyeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size. My memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around, because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.)

    More info: we are on Ruby on Rails. The database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.


  • Cmdlets for AD CS deployment: Install-ADcsCertificationAuthority cmdlet failing when attempting to install an offline policy CA

    - by red888
    I installed an offline root CA without issue using this command:

        Install-ADcsCertificationAuthority `
            -OverwriteExistingKey <# In the case of a re-installation #> `
            -AllowAdministratorInteraction `
            -CACommonName "LAB Corporate Root CA" `
            -CADistinguishedNameSuffix 'O=LAB Inc.,C=US' `
            -CAType StandaloneRootCA `
            -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
            -HashAlgorithmName SHA256 `
            -KeyLength 2048 `
            -ValidityPeriod Years `
            -ValidityPeriodUnits 20 `
            -DatabaseDirectory 'E:\CAData\CertDB' `
            -LogDirectory 'E:\CAData\CertLog' `
            -Verbose

    I installed the root CA's cert and CRL on the policy CA, installed the AD CS binaries, and attempted to run this command to install the policy CA and export a req file:

        Install-ADcsCertificationAuthority `
            -OverwriteExistingKey <# In the case of a re-installation #> `
            -AllowAdministratorInteraction `
            -CACommonName "LAB Corporate Policy Internal CA" `
            -CADistinguishedNameSuffix 'O=LAB Inc.,C=US' `
            -CAType StandaloneSubordinateCA `
            -ParentCA rootca `
            -OutputCertRequestFile 'e:\polca-int.req' `
            -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
            -HashAlgorithmName SHA256 `
            -KeyLength 2048 `
            -ValidityPeriod Years `
            -ValidityPeriodUnits 10 `
            -DatabaseDirectory 'E:\CAData\CertDB' `
            -LogDirectory 'E:\CAData\CertLog' `
            -Verbose

    When doing this I receive the following error:

        VERBOSE: Calling InitializeDefaults method on the setup object.
        Install-ADcsCertificationAuthority :
        At line:1 char:1
        + Install-ADcsCertificationAuthority `
        + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            + CategoryInfo          : InvalidArgument: (:) [Install-AdcsCertificationAuthority], CertificationAuthoritySetupException
            + FullyQualifiedErrorId : ValidateParameters,Microsoft.CertificateServices.Deployment.Commands.CA.InstallADCSCertificationAuthority

    Is there a parameter I am entering incorrectly, or something?


  • Existing open source software for wireless mesh networking?

    - by user70352
    Hello! My goal: build a wireless mesh network with some ALIX 2D2 boards (500 MHz AMD Geode LX800 x86 CPU, 256MB RAM, Atheros wireless card). Aside from working like a normal wireless mesh network, users should be able to read/write data from/to the ALIX boxes, and the ALIX boxes should be able to process data.

    Questions:
      - Should I try to flash dd-wrt x86, Voyage Linux (linux.voyage.hk) or something else?
      - What (open source) software should I look into before I start?
      - Should I use a 'server' for data storage and processing instead of the ALIX boxes?
      - Is it even possible to use the ALIX boxes for routing AND data storage and processing?

    Final notes: data can be anything; for example, I want to set up a wireless mesh using the OLSRD protocol so my whole town gets wifi and can access songs on the network. It's not for that, but that's the idea. I'm not afraid of programming, compiling or working with *nix. This is mostly 'for fun' rather than for practicality. Thanks in advance for any feedback.


  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, which was initially sized at 25GB (too small!). After installing the OS and some applications I ran out of disk space, so I shut down the guest and then used this command to resize the VHD to 80GB:

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd"
        UUID:                 03fb26e7-d8bb-49b5-8cc2-1dc350750e64
        Accessible:           yes
        Logical size:         81920 MBytes
        Current size on disk: 24954 MBytes
        Type:                 normal (base)
        Storage format:       VHD
        Format variant:       dynamic default
        In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd

    So far so good. However, when I started the guest up again I got the dreaded:

        Fatal: No bootable medium found! System halted

    If I boot into GParted it shows a single 80GB drive as "unallocated". The option to scan for and attempt to repair a filesystem doesn't find anything. I also tried cloning the VHD into a VDI file, just in case that magically fixed it:

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
        Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
        UUID:                 baf0c2c4-362f-4f6c-846a-37bb1ffc027b
        Accessible:           yes
        Logical size:         81920 MBytes
        Current size on disk: 24798 MBytes
        Type:                 normal (base)
        Storage format:       VDI
        Format variant:       dynamic default
        In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi

    Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.


  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy, let's run this idea by the group here. I am thinking about using VMware Server in production to host:

      - a 2008 Domain Controller with DHCP and DNS,
      - a 2008 member server with WSUS, some virus software, and other "management" utilities,
      - a second 2008 member server with SQL, IIS, and File Shares,

    for a medium business of 50-100 desktops. The reason I am leaning toward Server vs ESXi is for backup purposes. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage availability to hold a copy of the vmdks. I am wondering if putting this virtual environment on top of a basic 2008 server install will allow for easier backups to both tape and/or to offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host remove too many resources from the actual production machines? This would be a new Dell 410 server with 12 GB RAM, (6) 600 GB 15K drives in a RAID 6, and dual Intel Xeon 2.26GHz procs.


  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server). In our local test environment, there's one particular query that executes with these durations (first we clear the cache):

      - 300ms (the first time it takes longer, but from then on it's cached)
      - 20ms
      - 15ms
      - 17ms

    In the customer's production environment, the SQL Server is more powerful; these are the durations (I didn't have the rights to clear the cache; will try this tomorrow):

      - 2500ms
      - 2600ms
      - 2400ms

    The servers in the customer's production environment are more powerful, but they do have virtual servers (we don't). What could be the cause... not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem?

    EDIT: Some people have asked me if the data set is equal, and it is. I restored their database on our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)


  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up. When that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB) but I'm a bit unsure of how best to make the transition. For a start, I don't have an external monitor. If I need one I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. The only solution I can think of would be to buy two FW800 2.5" cases, boot from the 64 GB SSD and clone it to the 128 GB SSD. Seems a bit excessive, but it might be my best option. I do have more than one SATA port on my server, but they're all currently being used for storage drives. They don't mount by default, so I could unplug them and just have the two SSDs and do the whole thing via SSH. This is another option I'm considering. My main concern with either is how best to make sure everything goes across. I want a carbon copy of the first one onto the second. This is especially important because I have a ZFS volume (my storage) and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the SSD, but that seems like extra trouble I don't need. So any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04. The iMac has 10.8.1.


  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtained a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following:

      1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
      2. Perform a TSM-based incremental backup
      3. Dismount the LUN

    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?


  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers'... one running FreeNAS off a USB stick, using three 500GB hdds in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external drive enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is... most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box here shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte


  • How to improve Windows Server 2008 R2 to handle many connections?

    - by invisal
    I have been trying to figure out how to solve this problem for a few days now. First of all, I am running a website with an average daily page view count of 350,000. Previously, all ads management (tracking the clicks and impressions each ad has served) and content were served by a single server with the following spec:

    Server 1
      - OS: Windows 2008 R2 64-Bit
      - CPU: Intel® Core™ i5 - 4 cores
      - RAM: 8 GB
      - Storage: 2 x 1 TB hard drives
      - Bandwidth: 10 TB per month

    To improve our website speed, I decided to move the ads management script to another dedicated server, because we have 15 to 30 advertisers on each page.

    Server 2
      - OS: Windows 2008 R2 64-Bit
      - CPU: Intel® Core™ i5 - 4 cores
      - RAM: 4 GB
      - Storage: 2 x 300 GB hard drives
      - Bandwidth: 10 TB per month

    The Problem

    The problem is that Server 1 could handle both the content and the ads system, but now that I have moved the ads system to Server 2, Server 2 can barely serve the ads system alone.

    Test

    First, I moved 75% of the ads to Server 2 and then performed a ping to the server: ping -t xxxxx. I ran the ping for 10 minutes and it followed a pattern similar to this:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Reply from xxxxx bytes=32 time=289ms TTL=116
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Reply from xxxxx bytes=32 time=348ms TTL=116
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Then I moved 100% of the ads to Server 2 and performed a ping to the server again. I ran the ping for 10 minutes and it followed a pattern similar to this:

        Reply from xxxxx bytes=32 time=290ms TTL=116
        Request timed out
        Reply from xxxxx bytes=32 time=320ms TTL=116
        Reply from xxxxx bytes=32 time=286ms TTL=116
        Request timed out
        Request timed out
        Reply from xxxxx bytes=32 time=284ms TTL=116

    Attempts

      - Increase MaxUserPort and TcpNumConnection
      - Restart the server
      - Increase IIS Max Instances and Instance MaxRequests

    Server Resources

      - Only 10%-15% of the network connection is used
      - Only 10%-15% of the CPU is used
      - Only 25% of the memory is used


  • Lost partition after restarting

    - by nxhoaf
    I have Windows 7 Professional with the service pack installed on my Lenovo ThinkPad T420 laptop. After formatting the disk and installing Windows 7 (as detailed above), I went to Computer -- Manage -- Storage -- Disk Management to split my 300GB C partition into 2 partitions:

      - C (which is 162GB)
      - E (which is 140GB)

    It worked fine for about 2 days. Today, when I turned on my computer, I was very surprised to find that the E partition had disappeared. I can confirm for sure that I didn't do anything stupid yesterday, and before I shut down my computer, everything was fine. In general, here is what I did over the last two days (from the point that I formatted the disk and installed Windows):

      1. Format the 300GB hard disk
      2. Install Windows 7
      3. Install Eclipse, DB2, .... (I'm a developer)
      4. Install some other tools (OpenOffice, Skype...)
      5. Install PGP (http://www.symantec.com/encryption) <--- I'm forced to use that due to my company policy
      6. Use Computer -- Manage -- Storage -- Disk Management to split my 300GB C partition into 2 partitions as described above

    It worked quite well for the last two days. Until today... Can you please help me recover my lost partition? Thank you! For more info, here is my partition info (you can also see the image here).


  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques are not helping too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind. For instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are currently using two HP DL380s with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious about:

      - how many and how powerful servers are needed (number of processors/cores, size of RAM)
      - what network equipment should be used (what kind of switches, network cards)
      - any other hardware, like particular disc storage solutions, etc., that is needed

    Another thing is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stackoverflow).


  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live motion and other HA features. We have an IBM DS 4300 SAN that we can use for that. But since it has been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example, IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust a 3rd-party reseller? Thanks, Chris


  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2)
    Storage Server: BrightonSAN1 (192.168.9.3)
    Domain: brighton.local

    Last night I moved our users' home directories off of our domain controller onto a storage server using the MS FSMT. I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access the logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address, as opposed to computer name, and manually running the logon.vbs script. The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in said users' home directories and they are indeed the owner, but I'm unable to write, although I can read the contents. I'm stumped. This isn't happening to all clients. I'm considering removing their AD accounts, backing up their folders and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.


  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical that I want to confirm my understanding is correct. We have a large 500 GB database (OrientDB). We will have it mirrored between instances in the same Availability Zone. We believe the database size will grow rapidly. The plan is:

      - Get 4 large instances that are compatible types with Placement Groups (as well as, ideally, Enhanced Networking): 2 for web, 2 for DB.
      - Use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended
      - We can set up ephemeral SSD instance storage as swap space. (But it is lost after even a reboot. I hear it's hard to add ephemeral storage if booting from EBS, but possible.)
      - For offsite backup, we will take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens, to avoid corruption. (Any hints here, aside from shutting down the DB?)
      - If the database gets too big, we need to create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid
      - Static assets on web servers will be stored on S3.

    Is that correct? Or am I missing something?

