Search Results

Search found 6879 results on 276 pages for 'azure storage blobs'.

Page 78/276 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Google App Engine: Which is its RDBMS?

    - by Vimvq1987
    According to this: http://code.google.com/appengine/docs/whatisgoogleappengine.html it seems that GAE only uses the Datastore to store data, which is roughly equivalent to the Table service on the Windows Azure Platform. Does anyone know which RDBMS it uses, or whether such a thing exists at all?

    Read the article

  • What is the best SOHO NAS currently available?

    - by VinceJS
    What is the "best" Small Office Home Office (SOHO) Network Attached Storage (NAS) device available? Best performance vs. cost that is! I am looking for one that I can use at home to safely store my pictures, videos. What features should I look for? There are so many NAS reviews on the web, how do you choose the right one?

    Read the article

  • MySql, InnoDB & Null Values

    - by pws5068
    Formerly I was using the MyISAM storage engine for MySQL, and I had defined the combination of three fields to be unique. Now I have switched to InnoDB, which I assume caused this problem, and now NULL != NULL. So for the following table: ID (Auto) | Field_A | Field_B | Field_C, I can insert (Field_A, Field_B, Field_C) values (1,2,NULL), (1,2,NULL), (1,2,NULL) infinitely many times. How can I prevent this behavior?
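
    For what it's worth, this is standard SQL behavior rather than an InnoDB quirk: MySQL's unique indexes permit any number of NULLs under both MyISAM and InnoDB. The usual fix is to forbid NULL and use a sentinel value so the unique key can see the rows as duplicates; a minimal sketch with hypothetical column types, not the asker's real schema:

        -- A sentinel (here 0) replaces NULL so the composite unique key
        -- rejects repeated (Field_A, Field_B, Field_C) rows.
        CREATE TABLE example (
            ID      INT AUTO_INCREMENT PRIMARY KEY,
            Field_A INT NOT NULL,
            Field_B INT NOT NULL,
            Field_C INT NOT NULL DEFAULT 0,
            UNIQUE KEY uq_abc (Field_A, Field_B, Field_C)
        ) ENGINE=InnoDB;

        INSERT INTO example (Field_A, Field_B, Field_C) VALUES (1, 2, 0);
        INSERT INTO example (Field_A, Field_B, Field_C) VALUES (1, 2, 0);  -- duplicate-key error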

    Read the article

  • Building a community photography site, where can I store my photos online?

    - by thanos panousis
    I am in the process of laying down the requirements for a photography community site. An important feature to investigate would be allowing more photos per account than rival sites on my country's internet. What are the possibilities out there? Should I go for something like Amazon S3, or is there anything that offers more image-related features? I am mostly interested in a low price per GB (storage and transfer out).

    Read the article

  • localStorage not working in IE9 and Firefox

    - by maha
    I am working with localStorage. My code works perfectly in Chrome, but not in IE9 or Firefox. Here is the code:

        document.addEventListener("DOMContentLoaded", restoreContents, false);
        document.getElementsByTagName("body")[0].onclick = function () {
            saveContents('myList', 'contentMain', event, this);
        };

        function amIclicked(e, eleObject) {
            e = e || event;
            var target = e.target || e.srcElement;
            // alert("target = " + target.id);
            return (target.id == 'pageBody' || target.id == 'Save');
        }

        function saveContents(e, d, eveObj, eleObject) {
            if (amIclicked(eveObj, eleObject)) {
                var cacheValue = document.getElementById(e).innerHTML;
                var cacheKey = "key_" + selectedKey;   // assumes a global selectedKey defined elsewhere
                if (typeof Storage !== "undefined") {  // was misspelled "undifined", so the test always passed
                    localStorage.setItem(cacheKey, cacheValue);  // was setItem("cacheKey","cacheValue") - stored the literal strings
                }
                var dd = document.getElementById(d);
                dd.style.display = "none";
            }
        }

        function restoreContents(e, k) {
            // As a DOMContentLoaded handler this receives the event object as "e"
            // and nothing as "k", so k is undefined and k.length throws SCRIPT5007.
            if (!k || k.length < 1) { return; }
            var mySavedList = localStorage["key_" + k];
            if (mySavedList !== undefined) {
                document.getElementById(e).innerHTML = mySavedList;
            }
        }

        <a onclick="ShowContent('contentMain','myList','Sample_1'); return true;" href="#">Sample 1</a><br/><br/>
        <a onclick="ShowContent('contentMain','myList','Sample_2'); return true;" href="#">Sample 2</a><br/><br/>
        <div style="display:none;position:absolute;border-style:solid;background-color:white;padding:5px;" id="contentMain">
            <ol id="myList" contenteditable="true">
                <li>Enter Content here</li>
            </ol>
        </div>

    When I debug in Internet Explorer I get the error: SCRIPT5007: Unable to get value of the property 'length': object is null or undefined - sample_1.html, line 157 character 3. Thanks

    Read the article

  • Making many network shares appear as one

    - by jimbojw
    Givens: disk is cheap, and there's plenty lying around on various computers around the corporate intranet; redundant, contiguous, large storage volumes are expensive. Problem: it would be fantastic to have a single entry point (drive letter, network path) that presents all this space as one contiguous filesystem, effectively abstracting the disk and network architecture from the paths presented to users. Does anyone know how to implement such a solution? I'm open to Windows and non-Windows solutions, free and proprietary.
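
    One commonly cited Windows-side answer is DFS Namespaces, which stitches many shares into a single tree under one path - though note it only unifies the namespace; it does not pool free space or add redundancy. A minimal sketch, assuming domain-joined Windows servers with the DFS Namespaces feature installed (all server and share names below are hypothetical):

        # Requires the DFS Namespaces role / RSAT cmdlets
        New-DfsnRoot   -Path "\\corp.example.com\storage" -TargetPath "\\fs1\storage" -Type DomainV2
        New-DfsnFolder -Path "\\corp.example.com\storage\archive" -TargetPath "\\fs2\archive"
        New-DfsnFolder -Path "\\corp.example.com\storage\scratch" -TargetPath "\\ws17\scratch"

    Users then see one path, \\corp.example.com\storage, regardless of which machine actually holds the disk.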

    Read the article

  • Virtual Machines List from PowerShell vs PowerShell ISE and PowerGUI

    - by slybloty
    I am confused as to why I get different information depending on where I retrieve it from. I have three Windows 2012 servers (G0, G1, and G2) running Hyper-V. The following is captured from one server, which I use to run scripts and control the others. What I'm trying to do is get a list of the virtual machines in existence on these three machines.

    Using PowerGUI and PowerShell ISE:

        PS > Get-VMHost | select name
        Name
        ----
        G0.nothing.com
        G2.nothing.com
        G1.nothing.com

        PS > Get-VMHost | Get-VM | select name
        Name
        ----
        VM1628856-4
        VM1628856-2
        VM1628856-6
        VM1628856-3
        VM1628856-1
        VM1628856-5

    Using PowerShell:

        PS > Get-VMHost | select name
        Name
        ----
        G0

        PS > Get-VM
        Name          State        CPUUsage(%)  MemoryAssigned(M)  Uptime    Status
        ----          -----        -----------  -----------------  ------    ------
        VM1107610-1   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1390728-1   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-1   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-10  OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-2   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-3   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-4   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-5   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-6   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-7   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-8   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1393540-9   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage
        VM1833022-1   OffCritical  0            0                  00:00:00  Cannot connect to virtual machine configuration storage

    My main concern is that I don't have reliable information from the three tools. The Hyper-V Manager application shows the same list as PowerShell does, but if I run my scripts from the other two tools, which is what I mostly do, I don't have the same information available, and therefore can't manipulate the same VMs. I've also noticed that Virtual Machine Manager shows the same list of VMs as the first two tools, PowerGUI and PowerShell ISE. Which information is valid? And how can I retrieve the correct list of VMs?

    EDIT 1: The $env:psmodulepath value:

        PS > $env:psmodulepath
        C:\Users\administrator\Documents\WindowsPowerShell\Modules;
        C:\Windows\system32\WindowsPowerShell\v1.0\Modules\;
        C:\Program Files (x86)\Microsoft SQL Server\110\Tools\PowerShell\Modules\;
        C:\Program Files\Microsoft System Center 2012\Virtual Machine Manager\bin\Configuration Providers\;
        C:\Program Files\Microsoft System Center 2012\Virtual Machine Manager\bin\psModules\;
        C:\Program Files (x86)\QLogic Corporation\QInstaller\Modules

    EDIT 2: PowerShell is using this Hyper-V module:

        C:\Windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.HyperV.PowerShell\v4.0_6.3.0.0__31bf3856ad364e35\Microsoft.HyperV.PowerShell.dll

    And PowerGUI is using this one:

        C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Hyper-V\Hyper-V.psd1

    If I load the module used by PowerShell into PowerGUI, I still get the same differing results. How can I get the correct information listed under Hyper-V using PowerGUI or PowerShell ISE?
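
    When two tools disagree like this, the first thing to pin down is which module each session actually resolved Get-VM from; a hedged diagnostic sketch (nothing here is specific to these hosts except the G0-G2 names from the question):

        # Which module supplies Get-VM in this session?
        Get-Command Get-VM | Format-List Name, ModuleName
        (Get-Command Get-VM).Module | Format-List Name, Path

        # Every module on the machine that exports a Get-VM command:
        Get-Module -ListAvailable | Where-Object { $_.ExportedCommands.Keys -contains 'Get-VM' }

        # Force the built-in Hyper-V module, then call it module-qualified:
        Import-Module Hyper-V -Force
        Hyper-V\Get-VM -ComputerName G0, G1, G2

    If the tools still answer differently after this, the module-qualified call at least rules out command-name collisions as the cause.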

    Read the article

  • HP ACU shows parity initialization failed (with screenshot)

    - by lbanz
    I put in a new drive due to a hard drive failure. When the rebuild got to 100%, the controller failed and I needed to reboot the server to bring it online. I had to do this about three times and it eventually finished rebuilding, but then I found that the parity initialization status says failed. I've left it for a few hours but it didn't seem to reinitialize. Then I ran the Insight online diagnostic tools, which reported that the disk I put in has reached its read/write error threshold. So I'm beginning to think the brand-new disk I put in is faulty. Before I put in the disk, the parity initialization was in a finished state. Should I replace the new disk? I'm very worried, as I think the parity is broken. Or is there a way to kick-start the initialization process?

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspx

    Since my last post on using a CMS for semi-static API content, How about a new platform for your next API… a CMS?, I've been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step by step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing, and the ability to easily roll back to previous settings. It's like Azure Web and Worker Roles, where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet. The series breaks down into four posts:

    1. Deploying Umbraco - the CMS that will store your configurable settings and the current values;
    2. Publishing your config - create a document type that encapsulates your settings and a template to expose them as JSON;
    3. Consuming your config - in .NET, a simple client that uses dynamic objects to access settings;
    4. Config lifecycle management - how to publish, audit, and roll back settings.

    Let's get started.

    Deploying Umbraco. There's an Umbraco package on Azure Websites, so deploying your own instance is easy - but there are a couple of things to watch out for, so this step-by-step will put you in a good place.

    Create From Gallery. The easiest way to get started is with an Azure subscription: navigate to add a new Website and then Create From Gallery. Under CMS, you'll see an Umbraco package (currently at version 7.1.3).

    Configure Your App. For high availability and scale, you'll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I'd create a new SQL Azure database, which Umbraco will use to store all its content. You can use the free 20 MB database option if you don't have demanding NFRs, or if you're just experimenting. You'll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise.

    Specify Database Settings. You can create a new database on an existing server if you have one, or create a new server. If you create a new server, *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL admin account that you can use for managing the db; the previous account was the service account Umbraco uses to connect.

    Make Tea. If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database.

    Install Umbraco. So far we've deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net and enter the credentials I want to use to log in - this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone who knows the URL can browse to your website and complete the installation themselves with their own credentials. A remote possibility, but it's there. From this page *do not* click the big green Install button. If you do, Umbraco will configure itself with a local SQL Server CE database (an .sdf file on the Web server) and ignore the SQL Azure database you've carefully provisioned and may be paying for. Instead, click on the Customize link.

    Configure Your Database. You need to enter your SQL Azure database details here, so you'll have to get the server name from the Azure Management Console. You don't need to explicitly grant your Umbraco website access to the database, though. Click Continue and you'll be offered a "starter" website to install. If you don't know Umbraco at all (but you are familiar with ASP.NET MVC) then a starter website is worthwhile to see how it all hangs together. But after a while you'll have a bunch of artifacts in your CMS that you don't want, and you'll have to work out which you can safely delete. So I'd click "No thanks, I do not want to install a starter website" and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco, which you can access from http://my-app-config.azurewebsites.net/umbraco.

    That's It. Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync, and back up, and it's ready for your content. In the next post, we'll define what our app config looks like, and publish some settings for the dev environment.
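
    For reference, what the Customize step ultimately persists is a connection string in web.config. A sketch of what a working SQL Azure entry looks like (server, database, user, and password are placeholders; umbracoDbDSN is the key name Umbraco 7 expects, but verify against your version):

        <connectionStrings>
          <!-- placeholders only: copy the real server name from the Azure portal -->
          <add name="umbracoDbDSN"
               connectionString="Server=tcp:myserver.database.windows.net,1433;Database=umbraco-cms;User ID=umbracouser@myserver;Password=YOUR-PASSWORD;Encrypt=True;"
               providerName="System.Data.SqlClient" />
        </connectionStrings>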

    Read the article

  • How does a hard drive compare to Flash memory working as a hard drive in terms of speed?

    - by Jian Lin
    In an experiment I did with hard drive read/write speed, I measured 10 MB/s write and 40 MB/s read, while a USB flash drive came in at 5 MB/s write and 10 MB/s read. Also, if I put a virtual hard drive .vhd file on a hard drive or on a USB flash drive and run a virtual machine from it, the one using the hard drive is quite fast, while the one using the USB flash drive is close to unusable. So I wonder: some early netbooks used 4 GB or 8 GB of flash memory as the hard drive, and even the Apple MacBook Air has an option of using flash memory instead of a hard drive. In those situations, will the speed be slower than using a hard drive, as in the case of a USB flash drive?
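
    The gap described here is mostly about small writes, which sequential MB/s figures hide. A rough, hedged way to see both numbers for any drive (paths hypothetical; dd is a blunt instrument, not a proper benchmark):

        # Sequential write, bypassing the page cache:
        dd if=/dev/zero of=/mnt/testdrive/seq.bin bs=1M count=256 oflag=direct

        # Small 4 KB writes with a sync after each one - closer to the pattern a
        # VM's .vhd generates, and the one that cheap flash handles worst:
        dd if=/dev/zero of=/mnt/testdrive/small.bin bs=4k count=2048 oflag=dsync

    Proper SSDs, as used in netbooks and the MacBook Air, score far better on the second test than USB sticks do, which is why they remain usable as system drives.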

    Read the article

  • What is the deal with hard drive technology moving to 4K sectors, vs. 512 bytes? Are 4K sector disk

    - by Chris W. Rea
    I've noticed that some Western Digital hard drives are now sporting 4K sectors; that is, the sectors are larger: 4096 bytes vs. the long-standing de facto standard of 512 bytes. So: What's the big deal with 4K sectors? Is it marketing hype, or a real advantage? Why should somebody building a new PC care, or not, about 4K sectors? Why is this transition taking place now? Why didn't it happen sooner? Are there things to look out for when buying a 4K-sector hard drive, e.g. incompatibility? Anything else we should know about 4K sectors?
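
    One concrete thing to check: partition alignment. 4K drives that emulate 512-byte sectors lose a large fraction of their write speed when a partition does not start on a 4096-byte boundary (Windows XP's traditional 63-sector offset is the classic offender). A quick, hedged check on Windows:

        rem Each StartingOffset should divide evenly by 4096
        rem (the Vista/7 default of 1048576 does; XP's 32256 does not)
        wmic partition get Name, StartingOffset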

    Read the article

  • How to format as FAT32 from Windows 7/Vista

    - by Jack Ukleja
    What is the best way to format a USB drive with FAT32 (for Mac compatibility) from within Windows 7/Vista? I ask because Disk Management only lets you pick exFAT (because the disk is over 32 GB, I believe), and doing it from the command line with diskpart doesn't seem to work either.
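
    For context: the 32 GB ceiling is a policy in Microsoft's tools, not a FAT32 limit (the filesystem itself supports volumes up to 2 TB), so format.com refuses as well. A sketch of the usual workaround (drive letter hypothetical; fat32format is a well-known free third-party tool):

        rem Native tool - on a volume over 32 GB this fails with
        rem "The volume is too big for FAT32":
        format F: /FS:FAT32 /Q

        rem Third-party fat32format, run from the folder holding fat32format.exe:
        fat32format F: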

    Read the article

  • Synology DS210j & online backup: what do people recommend?

    - by Dean
    I've just purchased a Synology DS210j for my home network and would like to back up this NAS online. I noticed that DiskStation Manager v2.3 provides various options, including Amazon S3 and rsync: Does anybody have real usage-versus-cost statistics for Amazon's S3 service? How is sensitive data protected on Amazon S3? Are there any rsync online backup options? If so, what do people recommend? UPDATE: I am still unable to find any decent answers to the above questions; can anybody help me out?
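
    On the rsync option: any ssh-reachable host with rsync installed can act as an offsite target for the DiskStation. A minimal hand-rolled sketch (host, user, and paths are placeholders):

        # Incremental and compressed; --delete mirrors local deletions remotely
        rsync -az --delete -e ssh /volume1/photo/ backupuser@offsite.example.com:/backups/ds210j/photo/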

    Read the article

  • Windows 7 on laptop hard drive does not boot after installing Windows to Go on external hard drive

    - by user1687031
    I just successfully installed and ran Windows 8's Windows To Go on a USB external hard drive. However, after shutting down and removing the USB hard disk, when I try to start my laptop (with only Windows 7 installed), it doesn't boot, and trying to repair it doesn't work. It seems that Windows 8 corrupted the partition table on the laptop's hard drive, causing Windows 7 to fail to boot. How do I fix that and avoid future problems of the same type?
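
    Before anything drastic, the standard repair sequence from the Windows 7 install DVD (System Recovery Options > Command Prompt) is worth a try; a sketch of the usual incantation:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd
        rem If that still fails, rewrite the Windows 7 boot code explicitly:
        bootsect /nt60 SYS /mbr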

    Read the article

  • Increase Samba space on openSUSE 12.1

    - by Kapil Sharma
    I know Linux basics but I'm not an expert. The IT guy here left the job and there is some time before the new hire, so sorry if the question is very basic. We have a local testing server based on openSUSE 12.1, which also acts as a shared drive between the dev/mgmt teams here, using Samba for that. Now we are running out of space on Samba, even though the server's 2x1TB hard disks are nearly 90% free. My question is: what is limiting Samba, and how can I increase its limit? We need at least around 500 GB as a shared drive, but currently it's just 25 GB. I don't need a step-by-step answer; just a link to any helpful article would be sufficient. Probably I'm putting the wrong keywords into Google, so I'm not getting any helpful link.

    EDIT: Output of the commands requested in the first comment. All commands were run as root.

    df -h (getting an error with df -ht):

        Filesystem      Size  Used Avail Use% Mounted on
        rootfs           30G  5.1G   23G  19% /
        devtmpfs        2.0G   36K  2.0G   1% /dev
        tmpfs           2.0G  1.1M  2.0G   1% /dev/shm
        tmpfs           2.0G  676K  2.0G   1% /run
        /dev/sda2        30G  5.1G   23G  19% /
        tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
        tmpfs           2.0G  676K  2.0G   1% /var/run
        tmpfs           2.0G     0  2.0G   0% /media
        tmpfs           2.0G  676K  2.0G   1% /var/lock
        /dev/sda3        36G   31G  3.3G  91% /home

    fdisk -l /dev/[hmsv]d*:

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x2d4a2d49

           Device Boot      Start        End    Blocks  Id System
        /dev/sda1            2048   16771071   8384512  82 Linux swap / Solaris
        /dev/sda2 *      16771072   79681535  31455232  83 Linux
        /dev/sda3        79681536  156301311  38309888  83 Linux

        Disk /dev/sda1: 8585 MB, 8585740288 bytes
        Disk /dev/sda2: 32.2 GB, 32210157568 bytes
        Disk /dev/sda3: 39.2 GB, 39229325312 bytes

    (fdisk prints the same geometry lines for /dev/sda1, /dev/sda2, and /dev/sda3, and reports that none of them contains a valid partition table, which is expected since they are partitions, not disks.)

    vgs and lvs both report: No volume groups found

    Output of /etc/samba/smb.conf:

        # smb.conf is the main Samba configuration file. You find a full commented
        # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the
        # samba-doc package is installed.
        # Date: 2011-11-02
        [global]
            workgroup = WORKGROUP
            passdb backend = tdbsam
            printing = cups
            printcap name = cups
            printcap cache time = 750
            cups options = raw
            map to guest = Bad User
            include = /etc/samba/dhcp.conf
            logon path = \\%L\profiles\.msprofile
            logon home = \\%L\%U\.9xprofile
            logon drive = P:
            usershare allow guests = Yes
        [homes]
            comment = Home Directories
            valid users = %S, %D%w%S
            browseable = No
            read only = No
            inherit acls = Yes
        [profiles]
            comment = Network Profiles Service
            path = %H
            read only = No
            store dos attributes = Yes
            create mask = 0600
            directory mask = 0700
        [users]
            comment = All users
            path = /home
            read only = No
            inherit acls = Yes
            veto files = /aquota.user/groups/shares/
        [groups]
            comment = All groups
            path = /home/groups
            read only = No
            inherit acls = Yes
        [printers]
            comment = All Printers
            path = /var/tmp
            printable = Yes
            create mask = 0600
            browseable = No
        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/drivers
            write list = @ntadmin root
            force group = ntadmin
            create mask = 0664
            directory mask = 0775
        [allusers]
            comment = All Users
            path = /home/shares/allusers
            valid users = @users
            force group = users
            create mask = 0660
            directory mask = 0771
            writable = yes
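
    Nothing in this smb.conf caps share size: Samba's "limit" is simply the free space of the filesystem behind each share's path =. The [allusers] share lives under /home, which is /dev/sda3 - 36 GB and 91% full, matching the symptom. (Note also that the df/fdisk output above shows only a single 80 GB disk, so if 2x1TB drives are installed, they are not yet partitioned or mounted.) A hedged sketch of the usual fix, assuming a larger volume is, or will be, mounted somewhere like /srv (all paths hypothetical):

        # Copy the share onto the larger filesystem, then repoint smb.conf:
        mkdir -p /srv/samba/allusers
        rsync -a /home/shares/allusers/ /srv/samba/allusers/
        # In /etc/samba/smb.conf, under [allusers]:  path = /srv/samba/allusers
        rcsmb restart    # openSUSE's init shortcut for the smb service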

    Read the article

  • Alternative to Amazon's S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at CloudFiles from Rackspace, but the cost is even higher. I don't mind prepaying or having a monthly payment in order to reduce the bandwidth cost greatly. Thank you for any help.

    Read the article

  • Is RAID 0 or JBOD better for home media server?

    - by Donald Hughes
    I have an external two-bay drive enclosure (the OWC Mercury Elite-AL Pro) connected to a Mac Mini (my home media server) over FireWire 800. I'm streaming media to other computers in the house over wired gigabit. I have two 1.5 TB drives that I'm using independently right now: the media is on one, and I'm mirroring the files to the other drive at night as a backup. But as I approach filling up the drive, I want to span the two drives together to give me a total of about 3 TB, and then buy another drive for backups. The external enclosure supports both RAID 0 and JBOD, but I'm not clear on which would be better in this situation. Would RAID 0 provide any performance improvement over JBOD for streaming video (possibly several streams at once)? How does each affect the MTBF of the drives? In general, should I choose RAID 0, JBOD, or keep them independent?

    Read the article

  • Are there no downsides to NetApp SAN solutions other than price?

    - by flashkube
    We have pretty much decided on a NetApp solution for our first SAN. Given that, I've been tasked with finding as much reason not to go with NetApp as I can. We like to do this A) so we know what we're getting into and B) so we aren't clouded by the inevitable post vendor demo euphoria. I have scoured the internet for cons and can only find one: price. Have you had a nightmare experience with NetApp that you just want to get off your chest? Please, only people with NetApp experience. Thank you!

    Read the article

  • KVM guest I/O is much slower than host I/O: is that normal?

    - by Evolver
    I have a QEMU-KVM host system set up on CentOS 6.3, with four 1TB SATA HDDs working in software RAID10. The CentOS 6.3 guest is installed on a separate LVM volume. People say they see guest performance almost equal to host performance, but I don't see that: my I/O tests show 30-70% slower performance on the guest than on the host system. I tried changing the scheduler (elevator=deadline on the host and elevator=noop on the guest), setting blkio.weight to 1000 in the cgroup, and switching I/O to virtio, but none of these changes gave me any significant results. This is the relevant part of the guest .xml config:

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/dev/vgkvmnode/lv2'/>
          <target dev='vda' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>

    Here are my tests.

    Host system, iozone test:

        # iozone -a -i0 -i1 -i2 -s8G -r64k
                                                            random   random
               KB  reclen   write  rewrite    read   reread    read    write
          8388608      64  189930   197436  266786   267254   28644    66642

    Host, dd read test (one process, then four simultaneous processes):

        # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct
        1073741824 bytes (1.1 GB) copied, 4.23044 s, 254 MB/s

        # dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=1024 &
          dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=2048 &
          dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=3072 &
          dd if=/dev/vgkvmnode/lv2 of=/dev/null bs=1M count=1024 iflag=direct skip=4096
        1073741824 bytes (1.1 GB) copied, 14.4528 s, 74.3 MB/s
        1073741824 bytes (1.1 GB) copied, 14.562 s, 73.7 MB/s
        1073741824 bytes (1.1 GB) copied, 14.6341 s, 73.4 MB/s
        1073741824 bytes (1.1 GB) copied, 14.7006 s, 73.0 MB/s

    Host, dd write test (one process, then four simultaneous processes):

        # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 6.2039 s, 173 MB/s

        # dd if=/dev/zero of=test  bs=1M count=1024 oflag=direct &
          dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct &
          dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct &
          dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 32.7173 s, 32.8 MB/s
        1073741824 bytes (1.1 GB) copied, 32.8868 s, 32.6 MB/s
        1073741824 bytes (1.1 GB) copied, 32.9097 s, 32.6 MB/s
        1073741824 bytes (1.1 GB) copied, 32.9688 s, 32.6 MB/s

    Guest system, iozone test:

        # iozone -a -i0 -i1 -i2 -s512M -r64k
                                                            random   random
               KB  reclen   write  rewrite    read   reread    read    write
           524288      64   93374   154596  141193   149865   21394    46264

    Guest, dd read test (one process, then four simultaneous processes):

        # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024
        1073741824 bytes (1.1 GB) copied, 5.04356 s, 213 MB/s

        # dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=1024 &
          dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=2048 &
          dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=3072 &
          dd if=/dev/mapper/VolGroup-lv_home of=/dev/null bs=1M count=1024 iflag=direct skip=4096
        1073741824 bytes (1.1 GB) copied, 24.7348 s, 43.4 MB/s
        1073741824 bytes (1.1 GB) copied, 24.7378 s, 43.4 MB/s
        1073741824 bytes (1.1 GB) copied, 24.7408 s, 43.4 MB/s
        1073741824 bytes (1.1 GB) copied, 24.744 s, 43.4 MB/s

    Guest, dd write test (one process, then four simultaneous processes):

        # dd if=/dev/zero of=test bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 10.415 s, 103 MB/s

        # dd if=/dev/zero of=test  bs=1M count=1024 oflag=direct &
          dd if=/dev/zero of=test2 bs=1M count=1024 oflag=direct &
          dd if=/dev/zero of=test3 bs=1M count=1024 oflag=direct &
          dd if=/dev/zero of=test4 bs=1M count=1024 oflag=direct
        1073741824 bytes (1.1 GB) copied, 49.8874 s, 21.5 MB/s
        1073741824 bytes (1.1 GB) copied, 49.8608 s, 21.5 MB/s
        1073741824 bytes (1.1 GB) copied, 49.8693 s, 21.5 MB/s
        1073741824 bytes (1.1 GB) copied, 49.9427 s, 21.5 MB/s

    I wonder: is this a normal situation, or did I miss something?

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables, and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient: there are too many steps, and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings. EDIT: The answers to "Simplest Automated Analysis and Chart Generation Tool?" reference Spotfire and Tableau, each of which has a free 14- or 30-day trial. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
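
    If a little scripting is acceptable, SQLite plus Python covers the whole pipeline described here - append CSVs into one database, aggregate, chart - with no spreadsheet in the loop. A minimal sketch (file and column names are made up):

        import csv, sqlite3
        import matplotlib.pyplot as plt

        con = sqlite3.connect("measurements.db")
        con.execute("CREATE TABLE IF NOT EXISTS readings (day TEXT, value REAL)")

        # Append each new CSV instead of hand-merging spreadsheets:
        with open("latest.csv", newline="") as f:
            rows = [(r["day"], float(r["value"])) for r in csv.DictReader(f)]
        con.executemany("INSERT INTO readings VALUES (?, ?)", rows)
        con.commit()

        # Pivot-table-style aggregate, then a standard chart:
        days, totals = zip(*con.execute(
            "SELECT day, SUM(value) FROM readings GROUP BY day ORDER BY day"))
        plt.bar(days, totals)
        plt.ylabel("value (units as in the CSV)")
        plt.savefig("readings.png")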

    Read the article

  • Migrating RAID6 array from HP SmartArray P600 to LSI MegaRAID SAS 8888ELP

    - by MLu
    My customer has an external RAID6 array (8x 1TB in an HP MSA60 enclosure) attached to an HP SmartArray P600 controller. They have now decided to replace the server, and the new one comes with an LSI MegaRAID SAS 8888ELP. The HW supplier insists the migration is as simple as pulling the eSATA cable from the P600 and inserting it into the LSI, but I'm not convinced. Unfortunately I have nowhere to test, and will have to do the exercise on the production array :( So the question in brief: is it possible to import the HP SmartArray P600-created RAID6 array to the LSI MegaRAID 8888ELP?

    Read the article

  • Finding a backup and synchronization solution

    - by Andrea Zilio
    I'm having difficulty finding a backup and synchronization solution with the following characteristics:

    - Cross-platform: Windows, Linux, Mac
    - Offsite backup (so Internet backup)
    - Data deduplication
    - Transfers only the new/modified bits of modified files
    - Secure: data encrypted before leaving the computer
    - Maintains multiple versions of files (even deleted files)
    - Folder synchronization integrated with backup and across multiple computers connected to the internet (not necessarily on the same LAN)

    I think the folder-sync feature needs a better explanation. The use case is this: you have a desktop PC and a laptop. The desktop PC contains a folder with some files, and this folder is part of the backup (so it was selected to be backed up). The laptop does not contain that folder or those files at all. Then you're abroad with your laptop and you need that folder. So you want to be able to open the backup program, select that folder from the backup, and download it to your laptop, keeping it synchronized with the backed-up version. When you then come back home and switch on your desktop PC, you want the folder we're talking about to be updated on the desktop PC too. Does anyone know a service with all these features? I've only found SpiderOak to support everything I've mentioned, but I'm not completely satisfied with the time it takes to complete a backup. Sometimes it seems to hang for minutes for no reason at all, and folder synchronization occurs only after all files are backed up (instead, folder sync should have a separate queue independent of other backup operations, and synchronization should occur frequently - for example every 5 minutes or less - independently of the frequency of normal backup operations).

    Read the article
