Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used in a few places recently. Here is a quick definition of the subject from Books Online: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database, and they always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner.

    If you do not know how Snapshot databases work, here is a quick note on the subject; however, please refer to the official description in Books Online for accuracy. A Snapshot database is a read-only database created from an original database called the "source database". It operates at the page level. When a Snapshot database is created, it is backed by sparse files; initially it occupies no space (or very little space) in the operating system. When a data page is modified in the source database, the original page is first copied to the Snapshot database, which makes the sparse file grow. When an unmodified data page is read through the Snapshot database, the read is actually served from the pages of the original database. In other words, the snapshot stores only the pages that have since changed in the source database, and it continues to present the data exactly as it existed at the moment of its creation.

    Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only; there are considerations of CPU, disk I/O and memory, which will be discussed in future posts.

    1. Create Snapshot
    2. Delete Data from Original DB
    3. Restore Data from Snapshot

    First, let us create the Snapshot database and observe the sparse file details.

        USE master
        GO
        -- Create Regular Database
        CREATE DATABASE RegularDB
        GO
        USE RegularDB
        GO
        -- Populate Regular Database with Sample Table
        CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
        INSERT INTO FirstTable VALUES(1, 'First');
        INSERT INTO FirstTable VALUES(2, 'Second');
        INSERT INTO FirstTable VALUES(3, 'Third');
        GO
        -- Create Snapshot Database
        CREATE DATABASE SnapshotDB ON
        (Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
        AS SNAPSHOT OF RegularDB;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    Both SELECT statements return the same three rows. Now let us delete something from the original DB and check the same details we checked before.

        -- Delete from Regular Database
        DELETE FROM RegularDB.dbo.FirstTable;
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO

    When we check the details of the sparse file created by the Snapshot database, we find something interesting: the details of the Regular DB remain the same, while the snapshot's sparse file has grown. This clearly shows that when we delete data from the Regular/Source DB, the affected data pages are copied to the Snapshot database, which is why the size of the snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the Snapshot DB to the original source DB.

        -- Restore Data from Snapshot Database
        USE master
        GO
        RESTORE DATABASE RegularDB
        FROM DATABASE_SNAPSHOT = 'SnapshotDB';
        GO
        -- Select from Regular and Snapshot Database
        SELECT * FROM RegularDB.dbo.FirstTable;
        SELECT * FROM SnapshotDB.dbo.FirstTable;
        GO
        -- Clean up
        DROP DATABASE [SnapshotDB];
        DROP DATABASE [RegularDB];
        GO

    Checking the results of the SELECT statements, we can see that we have successfully restored the database from the Snapshot database. This is clearly a very useful feature should you encounter a business need for it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can potentially be used to achieve any other tasks. (The scripts above, run in order, reproduce the whole exercise.) Reference: Pinal Dave (http://blog.SQLAuthority.com)
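    The post observes the sparse file growing but never shows how to measure it. As a hedged aside (not part of the original script), the BytesOnDisk column of fn_virtualfilestats reports the actual on-disk size of a snapshot's sparse files; the database name below follows the demo above.

        -- Sketch: check the real on-disk size of the snapshot's sparse files.
        -- BytesOnDisk reports actual bytes used, not the nominal file size.
        SELECT DB_NAME(DbId) AS DatabaseName,
               FileId,
               BytesOnDisk
        FROM fn_virtualfilestats(DB_ID('SnapshotDB'), NULL);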

  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions: I assume that you have never used Windows Azure, and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display a set of records.

    There are five steps that you must complete to host the jQuery application:

    1. Sign up for Windows Azure
    2. Create a Hosted Service
    3. Install the Windows Azure Tools for Visual Studio
    4. Create a Windows Azure Cloud Service
    5. Deploy the Cloud Service

    Sign Up for Windows Azure

    Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free, and I just wanted to experiment with Windows Azure for the purposes of this blog entry. To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign-up process, you will receive an email that explains how to activate your account.

    Accessing the Developer Portal

    After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (in a fit of creativity, I named my project StephenWalther).

    Creating a New Windows Azure Hosted Service

    Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Service. Because we have code that we want to run in the cloud (the WCF service), we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service; this information is used on the developer portal so you can distinguish your services. When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.

    Install the Windows Azure Tools for Visual Studio

    We'll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio work with both Visual Studio 2008 and Visual Studio 2010. Installation is painless: you just need to check some agreement checkboxes and click the Next button a few times, and installation will begin.

    Creating a Windows Azure Application

    After you install the Windows Azure Tools for Visual Studio, you can create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service jQueryApp. Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.
    I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC, or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application, and we will use this project to create our jQuery application.

    Creating the jQuery Application in the Cloud

    We are now ready to create the jQuery application. We'll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Products</title>
            <style type="text/css">
                #productContainer div
                {
                    border:solid 1px black;
                    padding:5px;
                    margin:5px;
                }
            </style>
        </head>
        <body>
            <h1>Product Catalog</h1>
            <div id="productContainer"></div>

            <script id="productTemplate" type="text/html">
                <div>
                    Name: {{= name }}
                    <br />
                    Price: {{= price }}
                </div>
            </script>

            <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
            <script type="text/javascript">
                var products = [
                    {name:"Milk", price:4.55},
                    {name:"Yogurt", price:2.99},
                    {name:"Steak", price:23.44}
                ];

                $("#productTemplate").render(products).appendTo("#productContainer");
            </script>
        </body>
        </html>

    The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx

    You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage, and a dialog reports its progress. If the Default.htm page works as expected, you should see the list of three products.

    Adding an Ajax-Enabled WCF Service

    In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple, because the data is static. In this section, we'll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products.
    The final code for ProductService.svc should look like this:

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        namespace WebRole1
        {
            public class Product
            {
                public string name { get; set; }
                public decimal price { get; set; }
            }

            [ServiceContract(Namespace = "")]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            public class ProductService
            {
                [OperationContract]
                public IList<Product> SelectProducts()
                {
                    var products = new List<Product>();
                    products.Add(new Product { name = "Milk", price = 4.55m });
                    products.Add(new Product { name = "Yogurt", price = 2.99m });
                    products.Add(new Product { name = "Steak", price = 23.44m });
                    return products;
                }
            }
        }

    In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next, you need to modify the Default.htm page to use ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service, and the data retrieved from ProductService.svc is displayed with the client template. (An Ajax-enabled WCF service wraps its JSON response in a property named "d", which is why the callback reads results["d"].)

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Products</title>
            <style type="text/css">
                #productContainer div
                {
                    border:solid 1px black;
                    padding:5px;
                    margin:5px;
                }
            </style>
        </head>
        <body>
            <h1>Product Catalog</h1>
            <div id="productContainer"></div>

            <script id="productTemplate" type="text/html">
                <div>
                    Name: {{= name }}
                    <br />
                    Price: {{= price }}
                </div>
            </script>

            <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
            <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
            <script type="text/javascript">
                $.post("ProductService.svc/SelectProducts", function (results) {
                    var products = results["d"];
                    $("#productTemplate").render(products).appendTo("#productContainer");
                });
            </script>
        </body>
        </html>

    Deploying the jQuery Application to the Cloud

    Now that we have created our jQuery application, we are ready to deploy it to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select Publish, your application and your application configuration information are packaged into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg, and Visual Studio opens the directory that contains them. In order to deploy these files to the Windows Azure cloud, you must upload them yourself. Return to the Windows Azure developer portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application. Next, browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button. While your application is being deployed, you can view a progress bar.

    Running the jQuery Application in the Cloud

    Finally, you can run your jQuery application in the cloud by clicking the Run button. It might take several minutes for your application to initialize (go grab a coffee).
    After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted in the Windows Azure cloud, and the WCF service executes every time you request the page to retrieve the list of products.

    Summary

    Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud: create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser, and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale: if, all of a sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald's. Every time this kind of thing happens (which is FAR too often), it should remind the technical professional to ensure that they secure their systems correctly. If you write software that stores passwords, the passwords should be heavily encrypted and not human-readable in any storage. I advocate a separate store for the login and the password, so that if one is compromised, the other is not. I also advocate setting a bit flag when a user changes their password, and sending out a reminder to change passwords if that bit hasn't changed every three or six months.

    But this post is about the *other* side: what to do to secure your own passwords, especially those you use online, either with a cloud service or at a provider. While you're not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.

    Use Complex Passwords

    This is easily stated, and probably one of the most unheeded pieces of advice. There are three main concepts here:

    - Don't use a dictionary-based word
    - Use mixed case
    - Use punctuation, special characters and so on

    So this:

        password

    isn't nearly as safe as this:

        P@ssw03d

    Of course, this only helps if the site that stores your password encrypts it. Gawker does, so theoretically if you had the second password you're in better shape, at least, than with the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther away from dictionary words you get, the better. Of course, this doesn't help, not even a little, if the site stores the passwords in clear text, or the key to their encryption is broken. In that case...

    Use a Different Password at Every Site

    What? I have hundreds of sites! Are you kidding me? Nope, I'm not. If you use the same password at every site, then when a site gets attacked, the attacker will store your name and password value for attacks at other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you're kind of hosed there. So even though you have hundreds of sites you visit, you need at least a different password at each site. But it's easier than you think, if you use an algorithm. What I'm describing is to pick a "root" password, and then modify that based on the site or purpose. That way, if one site is compromised, you can still use that root password for the other sites. Let's take that second password:

        P@ssw03d

    Now you can append, prepend or intersperse that password with other characters to make it unique to the site. That way you can easily remember the root password, but still make it unique to each site. For instance, perhaps you read a lot of information on Gawker; how about these:

        P@ssw03dRead
        ReadP@ssw03d
        PR@esasdw03d

    If you have lots of sites, tracking even this can be difficult, so I recommend you use password software such as Password Safe or some other tool to keep a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is "encrypted": the encryption office automation packages use is very trivial, and easily broken. A quick web search for tools to do that should show you how bad a choice this is.
    Change Your Password on a Schedule

    I know. It's a real pain. And it doesn't seem worth it... until your account gets hacked. A quick note here: whenever a site gets hacked (and I find out about it), I change the password at that site immediately (or quit doing business with them), and then change the root password on every site, as quickly as I can. If you follow the tip above, it's not as hard. Just add another number, year, month, day, something like that into the mix. It's not unlike making a Primary Key in an RDBMS.

        P@ssw03dRead10242010

    Change the site's password, and then update your password database. I do this about once a month, on the first or last day, during staff meetings. :)

    If you have other tips, post them here. We can all learn from each other on this.
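    Not from the original post, but the "root plus site modifier" scheme above is mechanical enough to sketch in a few lines (the function and argument names are mine):

        // Illustrative only: derive a per-site password from a memorized root.
        // e.g. sitePassword("P@ssw03d", "Read", "10242010") -> "P@ssw03dRead10242010"
        function sitePassword(root, siteTag, dateStamp) {
            return root + siteTag + dateStamp;
        }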

  • Windows Azure: Creation of a file on cloud blob container

    - by veda
    I am writing a program that will be executing in the cloud. The program will generate output that should be written to a file, and the file should be saved to the blob container. I don't have an idea of how to do that. Will this code:

        FileStream fs = new FileStream(file, FileMode.Create);
        StreamWriter sw = new StreamWriter(fs);

    generate a file named "file" on the cloud...
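    For context: FileStream writes to the machine's local file system, not to a blob container. Below is a hedged sketch of writing text to a blob with the Windows Azure StorageClient library of that era; the container and blob names are hypothetical, and the development storage account stands in for a real connection string.

        // Sketch only (SDK 1.x StorageClient API), not from the question.
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class BlobOutput
        {
            public static void Save(string text)
            {
                // For local testing; use CloudStorageAccount.Parse(...) with a
                // real connection string when running in the cloud.
                CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

                CloudBlobClient client = account.CreateCloudBlobClient();
                CloudBlobContainer container = client.GetContainerReference("output"); // hypothetical name
                container.CreateIfNotExist(); // note: no trailing 's' in the 1.x API

                CloudBlob blob = container.GetBlobReference("file.txt"); // hypothetical name
                blob.UploadText(text); // creates or overwrites the blob
            }
        }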

  • Do any clouds support SSD storage?

    - by taw
    I'm using the Amazon cloud right now, and the biggest performance issue is horrible I/O performance. As long as something fits in RAM it's fine; once it's too big, it gets ridiculously slow (in many different scenarios). There are only so many ways one can avoid hitting disk, so the question is: does Amazon or some other cloud provide an SSD option?

  • Strange error from mysql storage engine

    - by zerkms
    I got this error today in the morning:

        General error: 1030 Got error -1 from storage engine

    The storage engine used is InnoDB, and the query that was running when the error occurred was:

        SELECT feeds.* FROM feeds ORDER BY RAND() LIMIT 1

    I know RAND() is bad, but it's a very small table (<500 records) and the project is not heavily loaded. I receive this error approximately once a day, and cannot google anything relevant :-(
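    Tangential to the error itself, but since the post flags ORDER BY RAND(): a common hedged alternative that avoids the sort and its implicit temporary table. It assumes an AUTO_INCREMENT primary key named id (my assumption), and picks rows that follow a gap slightly more often:

        -- Sketch: one quasi-random row without sorting the whole table.
        SELECT f.*
        FROM feeds AS f
        WHERE f.id >= (SELECT FLOOR(RAND() * MAX(id)) FROM feeds)
        ORDER BY f.id
        LIMIT 1;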

  • DOM Storage and locks

    - by user535759
    Since DOM storage and its equivalents persist between tabs and windows, I've thought about using it for message passing. The problem is that fetch and store are separate operations, and therefore not atomic. I have models that rely on UUID generation, conflict resolution, and beaconing to do the small subset of what I need to do, but my real question is this: since local storage is a shared memory resource, what locking mechanisms are available for mutual access?
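    The storage API itself offers no locking primitive; as an illustration of the problem, here is a minimal best-effort sketch over localStorage (the key name is hypothetical). Because read-then-write is not atomic, two tabs can still race between the check and the write; the write-then-verify step narrows the window but cannot close it, which is exactly the gap algorithms such as Lamport's fast mutex are designed around.

        // Illustrative sketch only: a best-effort lock over localStorage.
        var TAB_ID = Math.random().toString(36).slice(2); // this tab's identity

        function tryAcquire(lockKey) {
            if (localStorage.getItem(lockKey) !== null) {
                return false;                                // someone appears to hold it
            }
            localStorage.setItem(lockKey, TAB_ID);           // stake a claim...
            return localStorage.getItem(lockKey) === TAB_ID; // ...and verify it stuck
        }

        function release(lockKey) {
            if (localStorage.getItem(lockKey) === TAB_ID) {
                localStorage.removeItem(lockKey);
            }
        }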

  • tag cloud filter help please

    - by rod
    Hi All, I have a div that contains many spans, and each of those spans contains a single href. Basically it's a tag cloud. What I'd like to do is have a textbox that filters the tag cloud on the keyup event. Any ideas, or is this possible? Thanks, rodchar
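    It is possible; below is a hedged jQuery sketch (the #filter and #tagCloud ids are hypothetical, since the question includes no markup):

        // Sketch: hide tag-cloud spans whose text doesn't contain the typed term.
        // Assumes <input id="filter"> and a <div id="tagCloud"> full of spans.
        $('#filter').keyup(function () {
            var term = $(this).val().toLowerCase();
            $('#tagCloud span').each(function () {
                var text = $(this).text().toLowerCase();
                $(this).toggle(text.indexOf(term) !== -1); // show only matches
            });
        });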

  • Local development using HTML5 storage

    - by jasongonzales
    I am experimenting with HTML5 local storage functionality, but was frustrated to learn that the browser won't allow local storage when the file is local. My guess is that the browser (Chrome in my case, FF too) wants to see a domain rather than a file location. Has anyone here discovered a workaround for developing locally? Perhaps setting up a local domain? That sounds like too much trouble. There should just be a developer option in the browser, grrrrrr.
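    One common workaround (my suggestion, not from the question) is to serve the directory over HTTP on localhost, so the page gets a real origin. Python's built-in module from that era makes it a one-liner, run from the project directory:

        # Serves the current directory at http://localhost:8000/
        # (Python 2.x; in Python 3 the equivalent is: python -m http.server 8000)
        python -m SimpleHTTPServer 8000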

  • Installing Intel Rapid Storage Drivers makes my eSATA Drives act weird

    - by Filip Ekberg
    I have an HP 8530w EliteBook. This laptop has an eSATA port, into which I want to plug my LaCie d2 Quadra V2 1TB hard drive. It all works well on a fresh install of Windows 7 without the Intel chipset drivers installed. However, when I install the Intel Rapid Storage drivers or the Intel Matrix software, my drive seems to "disconnect" when I use it too much. I have a lot of Virtual PCs on the drive, and when I start them the disk somewhat disconnects. What could cause this?

  • Tape Storage - How do I setup a tape backup system for use with my NAS

    - by John Himmelman
    I currently have a QNAP NAS with a RAID 5 configuration (~600 GB of storage) but don't have a reliable backup solution. I've heard great things about tape backup systems (reliability, durability, etc.). How can I go about setting up a tape backup system? The tape drives seem very expensive ($1k+ for a decent one, more than the price of my NAS). What are the important specs to compare, and which features should I take into consideration? Edit: Does anyone have links to some good resources? There are a ton of articles, guides, and sites on this subject; I'm not sure where to start.

  • Setting default version on Azure Blob Storage?

    - by Erik
    What is the easiest way, without having to create your own utility, to set the default service version to the latest in Azure Blob Storage? http://msdn.microsoft.com/en-us/library/windowsazure/dd894041 There is basically nothing to be set in the Azure portal, and I am having a difficult time finding working utilities for Azure. For some reason Azure defaults to the oldest version, which does not send things like the HTTP Range header, for example. Is there any utility that can do this? Thank you.
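    Short of a ready-made utility, the setting is exposed through the blob service properties. A hedged sketch with the .NET storage client library follows; the connection string is a placeholder, and the version string is an assumption (use the newest version listed in the MSDN article above):

        // Sketch: set DefaultServiceVersion via Set Blob Service Properties.
        // Assumes the Microsoft.WindowsAzure.Storage client library (2.x).
        using Microsoft.WindowsAzure.Storage;

        public static class DefaultVersionSetter
        {
            public static void Run(string connectionString)
            {
                var account = CloudStorageAccount.Parse(connectionString);
                var client = account.CreateCloudBlobClient();

                var properties = client.GetServiceProperties();
                properties.DefaultServiceVersion = "2011-08-18"; // assumed newest at the time
                client.SetServiceProperties(properties);
            }
        }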

  • Windows Storage Server 2003 R2 RDP Disconnection

    - by Antitribu
    I am unable to remote desktop to our Windows Storage Server 2003 R2 machine. There are a few people about with similar issues, but no answers around. Symantec Endpoint is installed, but no network protection. Using rdesktop in Linux I receive:

        ERROR: send: Connection reset by peer
        NOT IMPLEMENTED: PDU 12
        ERROR: Connection closed

    where the "PDU XX" number changes on each connection. In Windows, using the latest mstsc, I receive:

        The connection was lost due to a network error. Try connecting again.
        If the problem persists, contact your network Administrator.

    Any ideas?

  • How does fail2ban 0.9 database storage actually works?

    - by Arantir
    Fail2ban 0.9 introduces database storage to preserve bans across restarts, but I can't work out the actual mechanism of how it works. There is a dbpurgeage parameter which controls the lifetime of old bans and defaults to 24 hours. As far as I can see from the code, fail2ban saves a ban to the db with timeofban equal to the moment the ban is saved. Then, every dbpurgeage period, it removes all bans with timeofban < MyTime.time() - self._purgeAge; in other words, it removes all bans that were stored more than 24 hours ago. But what if an IP was banned for a month? Does all this mean that, with dbpurgeage = 86400, after a restart I will lose all bans longer than 24 hours? I just want all my permanent bans to be preserved in any case.
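    For reference, the storage knobs live in fail2ban.conf. A hedged sketch follows; the option names are real 0.9 options, but the raised purge value is illustrative and only postpones the problem for long bans rather than solving it for permanent ones:

        # /etc/fail2ban/fail2ban.conf (0.9.x), sketch
        # Path of the persistent ban database; "None" disables persistence.
        dbfile = /var/lib/fail2ban/fail2ban.sqlite3
        # Lifetime of database entries, in seconds (default 86400 = 24 hours).
        # Raising it above your longest bantime keeps long bans across restarts.
        dbpurgeage = 2592000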

  • How can I configure a Linksys EA4500 + usb printer for network printing (without connect cloud)

    - by Larry Kyrala
    The documentation and classic firmware (2.0.37) for Cisco's Linksys EA4500 are a bit sparse on setup details. They say I can connect a USB printer, but then go on to try to sell the "Connect Cloud" remote management software. I don't want that. I just want to know how to set this up with the existing advanced firmware. Is it possible? AFAIK, to set up an IPP or LPD printer there is usually some kind of queue configuration on the server (i.e. the EA4500 in this case), but I can't find it in the firmware. I have also been unable to find anything advertised from Win7 or Mac OS X via the usual protocols (Windows network share, IPP/LPD, etc.). I'm curious whether I need to have the "Storage" accounts active and connect to my router either via the local IP or the router name. There are a lot of unknowns here; it would help to know how this particular router actually works.

  • Sharing external storage between different operating systems?

    - by CT
    I just received a LaCie 1TB external USB/FireWire/eSATA drive. I have two machines: a MacBook running OS X 10.6 and a desktop running Windows 7. I would like to rip my DVD collection to ISO and store it on the external drive. Right now I use my MacBook's Disk Utility to rip DVDs to ISO, but my desktop is what is connected to my HDTV, and I mainly just use the desktop for media. I'd like to format the external drive with a 200 GB partition for Time Machine backups and have the rest for storage. DVD ISOs are often above 8 GB, so that eliminates the FAT file systems (FAT32 caps individual files at 4 GB), correct? Do I have any options that let my Mac and PC both see the drive?

  • Network architecture when using Rackspace's Cloud

    - by brianz
    I'm planning on launching a web application soon, and have decided on Rackspace's Cloud offering with Debian. I'm not expecting much traffic to start, but would rather get the architecture right now, even with the small VPSs. The thing I'm not quite sure about is how many VPSs I should get. At a minimum, I know I'll want three VPSs:

    - Two Apache webservers
    - One server for MySQL

    I'd also like:

    - Nginx load balancer
    - MySQL replication
    - memcached

    I'm not sure where those last three processes should be running. Can the load balancer run on the same machine as the MySQL slave, or should they each run on their own machine? Does memcached run alongside the webservers or on different machines?

  • VMWare esxi 4.1 storage errors with MD3200

    - by Karl Katzke
    We're seeing some storage errors in the ESXi logs relating to our MD3200. I'm sort of a VMware noob and am not sure where to go from here, because I couldn't find much documentation on the VMware website and the forums didn't seem to have any posts about it with actual answers. Everything is working, but I'm trying to proactively troubleshoot this.

        sfcb-vmware_base|StoragePool Cannot get logical disk data from controller 0
        sfcb-vmware_base|Volume Cannot get logical disk data from controller 0
        sfcb-vmware_base|storelib-GetLDList-ProcessLibCommandCall failed; rval = 0x800E

    The ESXi boxes are connected directly via SAS to the controller on the MD3200. What do these errors actually mean, and what's a good path to start troubleshooting or solving them?

  • What is the easiest way to get MySQL's Archive Storage Engine working on CentOS 5.4

    - by tronda
    The Archive storage engine is not enabled in the default build of MySQL in CentOS/RHEL. I would like to enable it on our CentOS 5.4 server. My initial reaction was to modify the spec file in the SRPM, but this indicates that it might not be that easy. There's always the option of building from MySQL source, but I would prefer, if possible, to stay within the RPM/Yum world. Does anybody have a successful approach to this using RPMs/SRPMs/Yum? Some patches which make this work flawlessly with the SRPMs?
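    Not an RPM-only answer, but worth checking before any rebuild: on MySQL 5.1+ the Archive engine can ship as a loadable plugin, in which case no rebuild is needed at all. This is a hedged sketch; the stock CentOS 5.4 packages (MySQL 5.0.x) predate pluggable engines, so it only applies if you are on a newer package set.

        -- See whether ARCHIVE is already built in or merely disabled:
        SHOW ENGINES;

        -- On 5.1+ builds that include the plugin library (an assumption):
        INSTALL PLUGIN ARCHIVE SONAME 'ha_archive.so';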

  • Triple Boot - With storage partition

    - by art
    I'm new to the multi-boot world, as I used to rely on virtualization for running Linux. Recently I moved to dual-booting Windows 7 and Ubuntu, with a storage partition for all my files where both operating systems can access them. Is it possible to have one partition for 7, another for XP, another for Ubuntu, and a separate partition where the OSes can access my files? That would be four partitions on my hdd. If there's a better way to go about this (or if it's not possible), please let me know! Thanks

  • Personal wiki on usb / the cloud?

    - by drby
    I'm looking for a personal wiki that can be installed on a USB stick and (more importantly) somewhere in the cloud (Dropbox). I've looked at the WikiMatrix, but I really don't care that much about any of the options, so I end up with a choice between ~50 wikis at the end. I tried out TiddlyWiki, but there are some things that really annoy me, like the fact that all pages get opened on the same page; it really looks like it'd turn into a giant mess pretty quickly. I'd like to have something that's pretty close to Wikipedia in terms of appearance and usability. Hierarchical categories for organization would be really nice. And accessible storage (in case I ever want to convert it to something else).

  • NetFlow Storage Calculator

    - by javano
    I am planning to deploy a NetFlow server (using NfSen/NfDump) for harvesting data from Cisco devices. Are there standard calculations or guidelines I can use to estimate my server requirements? Specifically, I need to plan for storage. Is there a way of knowing how much data I will collect per day, given N flows? Let's say one device has 10k flows per day; is this typically XYZ MB, so I can scale it up? If not, how many flows are you guys and girls recording per day, and how much data is this generating? Hopefully we can generate an estimate from everyone else's figures! P.S. If it makes a difference, I'll be collecting from <= 50 devices max (none more than 50 Mbps each).
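    As a hedged back-of-envelope (the ~50 bytes per stored flow record is my assumption about nfdump's binary format before compression, not a measured figure):

        50 devices x 10,000 flows/day x ~50 bytes/flow ≈ 25 MB/day
        25 MB/day x 365 days                           ≈ 9 GB/year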

  • HALEVT troubleshooting: VFAT usb storage device gets mounted with root:root user:group

    - by Nova deViator
    Hi, I've been banging my head for a number of days on this problem. Using halevt for automounting, everything mostly works, but halevt mounts external USB storage devices as root, so as a user I cannot write files on them. halevt is run as the halevt user on boot through an /etc/init.d script. This is Ubuntu Lucid with the Awesome WM; no GDM. Running halevt as my own user seems not to work (halevt runs but doesn't respond on insert). I know HAL is deprecated and removed and I should probably write my own udev rules, but until then it seems there must be a simple hack that enables mounting VFAT/NTFS devices with a specific uid/gid. This question/answer helps a lot, but not specifically with the above.

  • How does the data storage work? [closed]

    - by Andres Adhi
    I am really new to the whole concept of data storage, domains, servers, and everything else related to this. Can someone please explain what a domain is? How are servers part of a domain, and how are databases stored on a server or domain? How is a new server able to connect to an existing database server to get all the data it needs? I tried to find this information on the web, but I am not finding a good resource; it may be because this is really basic information. I will really appreciate it if someone can explain these concepts in plain terms. Thanks in advance.

  • MySQL Server - Got error -1 from storage engine

    - by Bobby
    I am currently trying to restore a MySQL table from its .ibd file. I have been following the instructions in the MySQL reference manual on how to use DISCARD and IMPORT TABLESPACE to replace the .ibd file. Discarding the tablespace returns no error, and the file is deleted; however, importing the replacement .ibd file yields a "Got error -1 from storage engine" error. There doesn't seem to be much information about what exactly an error -1 is. Does anybody have any further insight as to why the import tablespace isn't working?
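    For reference, the sequence from the manual looks like this (the table name is hypothetical); the import step is where the error above appears:

        -- Sketch of the procedure being followed (InnoDB, file-per-table):
        ALTER TABLE mytable DISCARD TABLESPACE;
        -- (copy the saved mytable.ibd back into the database directory)
        ALTER TABLE mytable IMPORT TABLESPACE;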
