Search Results

Search found 5904 results on 237 pages for 'hybrid storage'.

Page 40/237 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for guidance on good idea / bad idea strategies; I'm sure I'm not in the "best practices" budget range. Currently I have 3 Dell PowerEdges running XenServer in a pool. Each node has an Ubuntu file server VM serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on each node's local storage disks as an LVM of 3x2TB virtual disks, and the file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows VMs, and the like. Several of those VMs' disks reside on a QNAP NAS to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs).

    This all works fine and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again.

    Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and use two or more NAS units as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat; we've already skinned it a few ways. What makes sense?

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a Hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on-premises. The hybrid configuration for the most part is working; what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works.

    When I access Outlook on public Wi-Fi, I am able to bring up the cloud mailboxes, and the on-premises public folders show up in Outlook. When I access email via Outlook as a cloud user on the same LAN as the on-premises Exchange, the cloud user makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premises public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange it uses a different certificate than when Outlook is launched on a Wi-Fi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for a public folder connection.

    Our on-premises Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, with the latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365.

    My question then is: what do I need to do to make sure that a cloud user launching Outlook on the LAN uses the proper certificate (the wildcard third-party certificate vs. the self-signed certificate, which it looks like it may be using during the connection attempt)?

    Read the article

  • Windows Azure: Storage Client Exception Unhandled

    - by veda
    I am writing code to upload large files into blobs using blocks. When I tested it, it gave me a StorageClientException stating: "One of the request inputs is out of range." I got this exception on this line of the code:

        blob.PutBlock(block, ms, null);

    Here is my code:

        protected void ButUploadBlocks_click(object sender, EventArgs e)
        {
            // store the uploaded file as a block blob
            if (uplFileUpload.HasFile)
            {
                name = uplFileUpload.FileName;
                byte[] byteArray = uplFileUpload.FileBytes;
                Int64 contentLength = byteArray.Length;
                int numBytesPerBlock = 250 * 1024;   // 250 KB per block
                int blocksCount = (int)Math.Ceiling((double)contentLength / numBytesPerBlock);   // number of blocks
                MemoryStream ms;
                List<string> BlockIds = new List<string>();
                string block;
                int offset = 0;

                // get a reference to the cloud blob container
                CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents");

                // set the name for the uploaded file
                string UploadDocName = name;

                // get the blob reference and set the metadata properties
                CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName);
                blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType;

                for (int i = 0; i < blocksCount; i++, offset = offset + numBytesPerBlock)
                {
                    block = Convert.ToBase64String(BitConverter.GetBytes(i));
                    ms = new MemoryStream();
                    ms.Write(byteArray, offset, numBytesPerBlock);
                    blob.PutBlock(block, ms, null);
                    BlockIds.Add(block);
                }

                blob.PutBlockList(BlockIds);
                blob.Metadata["FILETYPE"] = "text";
            }
        }

    Can anyone tell me how to solve it?
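    As a point of comparison only (this is not the library used above, and not necessarily the poster's bug), here is a minimal sketch of the same block-upload pattern in Python with the azure-storage-blob v12 package. It shows two details a loop like this has to get right: the final block contains only the bytes that remain, and the data handed to each upload call starts at the beginning of the chunk rather than at the end of a stream that has just been written to.

        # Sketch of block-wise blob upload with azure-storage-blob (v12); conn_str,
        # container, blob_name, and data (a bytes object) are assumed to exist.
        import base64
        from azure.storage.blob import BlobBlock, BlobClient

        def upload_in_blocks(conn_str, container, blob_name, data, block_size=250 * 1024):
            blob = BlobClient.from_connection_string(
                conn_str, container_name=container, blob_name=blob_name)
            block_list = []
            for i, offset in enumerate(range(0, len(data), block_size)):
                chunk = data[offset:offset + block_size]                    # the last chunk is simply shorter
                block_id = base64.b64encode(f"{i:08d}".encode()).decode()   # equal-length ids for every block
                blob.stage_block(block_id=block_id, data=chunk)             # upload one block
                block_list.append(BlobBlock(block_id=block_id))
            blob.commit_block_list(block_list)                              # commit the assembled blob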

    Read the article

  • problem storing a hash in a DB using Storable::nfreeze in Perl

    - by Sam
    Hello, I want to insert a hash into the database using Storable::nfreeze, but the data is not stored properly. The code is as follows:

        %rec = ();
        $rec{'name'}    = 'my name';
        $rec{'address'} = 'my address';
        my $order1 = new Order();
        $order1->set_session(\%rec);
        $self->createOrder($order1);

        sub createOrder {
            my $self  = $_[0];
            my $order = $_[1];
            # Retrieve the fields to insert into the database.
            my $st = $dbh->prepare("insert into order (session,.......) values(?,........)");
            my $session = %{$order->get_session()};
            $st->execute(&Storable::nfreeze(\%session),.....);
            $st->finish();
        }

        sub getOrder {
            ...
            my $session = &Storable::thaw( $ref->{'session'} );
            .....
        }

    The thaw works fine: I tested it with some rows that had been inserted correctly. But when I try to fetch a row that was inserted by the createOrder subroutine, I get an error saying:

        Storable binary image v36.65 more recent than I am (v2.7) at blib/lib/Storable.pm (autosplit into blib/lib/auto/Storable/thaw.al) line 415

    The error comes from the line with thaw, so nfreeze did not store the hash properly. Can someone point me to what I am doing wrong in the createOrder subroutine? Thanks in advance. I know the module version has nothing to do with the problem.
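    For comparison only (this is Python, not a fix for the Perl above), the same freeze / store-as-a-BLOB / thaw round trip looks like the sketch below. The point it illustrates is that the whole structure is serialized once and the resulting bytes are bound directly as a binary parameter; the table and column names are invented.

        # Minimal sketch of the freeze/store/thaw round trip using pickle and sqlite3.
        import pickle
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, session BLOB)")

        rec = {"name": "my name", "address": "my address"}
        frozen = pickle.dumps(rec)                        # serialize the whole dict once
        conn.execute("INSERT INTO orders (session) VALUES (?)", (frozen,))

        row = conn.execute("SELECT session FROM orders").fetchone()
        thawed = pickle.loads(row[0])                     # round-trips back to the original dict
        assert thawed == rec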

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want you all can help.

    Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination: essentially a key-value pair style look-up. When we query for a file, we don't want to have to load all of it into memory; the files are really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.

    We want to easily add, remove, and export files from storage. We'd like to set up automatic file replication between two servers (we can write a script for this); that is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.

    We also have other, smaller files that have a tree-type relationship with the big files: one file's content points to the next, and so on. It's not a "spoked wheel," it's a full-blown tree.

    We'd prefer a Python, C, or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time. What do you think? Is there something out there like this?
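    The name/date/source/time/artifact lookup described above (return a reference to the file, never its contents) is essentially a small metadata index in front of the filesystem. A minimal sketch of that idea follows; it is an illustration, not a recommendation of a particular tool, and every table, column, and file name in it is invented.

        # Sketch of a metadata index that maps a (name, date, source, time, artifact)
        # key to a file path, so lookups return a reference instead of file contents.
        import sqlite3

        db = sqlite3.connect("catalog.db")
        db.execute("""CREATE TABLE IF NOT EXISTS files (
            name TEXT, date TEXT, source TEXT, time TEXT, artifact TEXT, path TEXT,
            PRIMARY KEY (name, date, source, time, artifact))""")

        def register(name, date, source, time, artifact, path):
            db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?, ?, ?)",
                       (name, date, source, time, artifact, path))
            db.commit()

        def lookup(name, date, source, time, artifact):
            row = db.execute("SELECT path FROM files WHERE name=? AND date=? AND source=? "
                             "AND time=? AND artifact=?",
                             (name, date, source, time, artifact)).fetchone()
            return row[0] if row else None    # caller opens the path with the third-party API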

    Read the article

  • Android - Where to store generated bitmaps?

    - by Josh
    I've got an app which dynamically generates anywhere from 6 to 100 small bitmaps for the user to move around the screen in a given session. I currently generate them in onCreate and store them to the SD card, so that after an orientation change I can grab them out of external storage and display them again. However, the loading takes time, and I'd like to keep the bitmap references around between lifecycle changes for quicker access.

    My question is: is there a better place to store my generated bitmaps? I was thinking about creating a static storage library in my base activity, something that would only need to be reloaded when the app is completely removed from memory (shutdown, other apps needing resources, 30-minute restart, etc.). Ideally, I'd like the user to be able to back out to the title screen, click a "Resume" button, and in onCreate just have access to those resident bitmap references instead of having to load them from storage again. For this reason I don't think Activity.onRetainNonConfigurationInstance is what I need. Alternatively, is there a better way to handle multiple generated bitmaps than what I'm doing or the plan I described?

    Read the article

  • Best choice for a personal "online backup" in Europe

    - by marc_s
    I'm looking for an online backup solution for personal use. Besides all the usual requirements (like not too expensive, since it's for personal use), I'd like to add two requirements:

    the data center should be in Europe (I don't want my personal data stored in the US, when the next crazed president comes along and wants to confiscate and rifle through everybody's files.....)

    the online backup store should be accessible through a drive letter in cmd.exe

    So far, I've looked at a few services, but none have totally convinced me:

    Dropbox is looking OK, but they insist on creating a silly "My Dropbox" directory in my data path, and there's no way I can choose that name. Sorry, "My everything" is for dummies; I don't like that, I like to name my files and folders according to my liking.

    LiveDrive is OK, too; they offer European storage, drive letter and all, but those drive letters are only available in Windows Explorer and not on the cmd.exe command line :-( And since I do 99% of my work on the command line, this is a major drawback.....

    Any other services I haven't looked at worth checking out? Marc

    Read the article

  • How to check the CPU temperature on an HP P2000?

    - by Pavel
    I have an HP StorageWorks MSA Storage P2000 G3 SAS. show sensor-status gives something like:

        # show sensor-status
        Sensor Name                     Value   Status
        ----------------------------------------------------
        On-Board Temperature 1-Ctlr A   53 C    OK
        On-Board Temperature 1-Ctlr B   52 C    OK
        On-Board Temperature 2-Ctlr A   61 C    OK
        On-Board Temperature 2-Ctlr B   63 C    OK
        On-Board Temperature 3-Ctlr A   53 C    OK
        On-Board Temperature 3-Ctlr B   53 C    OK
        Disk Controller Temp-Ctlr A     34 C    OK
        Disk Controller Temp-Ctlr B     32 C    OK
        Memory Controller Temp-Ctlr A   66 C    OK
        Memory Controller Temp-Ctlr B   67 C    OK
        [...]
        Overall Unit Status             OK      OK
        Temperature Loc: upper-IOM A    40 C    OK
        Temperature Loc: lower-IOM B    38 C    OK
        Temperature Loc: left-PSU       36 C    OK
        Temperature Loc: right-PSU      40 C    OK
        [...]

    Is one of these values the CPU/FPGA temperature? If not, how do I get it? Thanks!

    Read the article

  • Single/Multiple LUN for VMware VM hosting

    - by Yucong Sun
    I'm building an iSCSI storage system for hosting about ~500 VMware VMs running concurrently. I have a disk array with 15 disks. I only need moderate write performance, but preferably without a single point of failure, so that leaves me with RAID1 / RAID10. I have a couple of choices:

        1) 3x LUN, 4-disk RAID10 + 3 hot-swap
        2) 1x LUN, 14-disk RAID10 + 1 hot-swap
        3) 7x LUN, 2-disk RAID1 + 1 hot-swap

    Which way is better? Is there a real problem with running 500 VMs on a single LUN? And would it be better to resort to 7 LUNs so that each VM is better isolated from the others?

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

        Sun Blade X6270
        2* LSISAS1068E SAS controllers
        2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
        Fedora Core 12
        2.6.33 release kernel from FC13 (also tried with the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD, so between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. My multipath configuration:

        multipath {
            rr_min_io              100
            uid                    0
            path_grouping_policy   multibus
            failback               manual
            path_selector          "round-robin 0"
            rr_weight              priorities
            alias                  somealias
            no_path_retry          queue
            mode                   0644
            gid                    0
            wwid                   somewwid
        }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out.

    According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

        echo 2 > /proc/irq/24/smp_affinity
        echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

        taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O as one would expect. Here is what top shows:

        Cpu0  :  0.0%us,  1.0%sy,  0.0%ni, 91.2%id,  7.5%wa,  0.0%hi,  0.2%si,  0.0%st
        Cpu1  :  0.0%us,  0.8%sy,  0.0%ni, 93.0%id,  0.2%wa,  0.0%hi,  6.0%si,  0.0%st
        Cpu2  :  0.0%us,  0.6%sy,  0.0%ni, 94.4%id,  0.1%wa,  0.0%hi,  4.8%si,  0.0%st
        Cpu3  :  0.0%us,  7.5%sy,  0.0%ni, 36.3%id, 56.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu4  :  0.0%us,  1.3%sy,  0.0%ni, 85.7%id,  4.9%wa,  0.0%hi,  8.1%si,  0.0%st
        Cpu5  :  0.1%us,  5.5%sy,  0.0%ni, 36.2%id, 58.3%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu6  :  0.0%us,  5.0%sy,  0.0%ni, 36.3%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu7  :  0.0%us,  5.1%sy,  0.0%ni, 36.3%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu8  :  0.1%us,  8.3%sy,  0.0%ni, 27.2%id, 64.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu9  :  0.1%us,  7.9%sy,  0.0%ni, 36.2%id, 55.8%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu10 :  0.0%us,  7.8%sy,  0.0%ni, 36.2%id, 56.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu11 :  0.0%us,  7.3%sy,  0.0%ni, 36.3%id, 56.4%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu12 :  0.0%us,  5.6%sy,  0.0%ni, 33.1%id, 61.2%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu13 :  0.1%us,  5.3%sy,  0.0%ni, 36.1%id, 58.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu14 :  0.0%us,  4.9%sy,  0.0%ni, 36.4%id, 58.7%wa,  0.0%hi,  0.0%si,  0.0%st
        Cpu15 :  0.1%us,  5.4%sy,  0.0%ni, 36.5%id, 58.1%wa,  0.0%hi,  0.0%si,  0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above, I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
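    For reference, the bandwidth budget quoted above reduces to a few lines of arithmetic (the figures are the ones given in the question; this only restates the expectation, it does not explain the gap):

        # Theoretical ceilings from the figures quoted in the question.
        pcie_per_controller = 8 * 250            # 8 lanes * 250 MB/s = 2000 MB/s per SAS controller
        sas_link_mb = 2 * 4 * 3 * 1000 / 8       # 2 PHYs * 4 ports * 3 Gb/s = 24 Gb/s ~= 3000 MB/s raw
        disks_per_jbod_mb = 24 * 80              # 24 disks * 80 MB/s = 1920 MB/s of raw disk throughput

        per_jbod = min(pcie_per_controller, sas_link_mb, disks_per_jbod_mb)   # the disks are the limit here
        print(per_jbod, 2 * per_jbod)            # 1920 MB/s per JBOD, ~3840 MB/s for two, vs ~2300 observed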

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd with my shell script. After that I restarted the Gluster daemon and the volume. Then I checked that all the peers are connected:

        root@GlusterNode1a:~# gluster peer status
        Number of Peers: 3

        Hostname: gluster-1b
        Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
        State: Peer in Cluster (Connected)

        Hostname: gluster-2b
        Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
        State: Peer in Cluster (Connected)

        Hostname: gluster-2a
        Uuid: 72405811-15a0-456b-86bb-1589058ff89b
        State: Peer in Cluster (Connected)

    I can see the mounted volume's size change on all the nodes when I run df, so new data is coming in. But recently I noticed error messages in the application log:

        copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
        readfile(/storage/1438227/dat): failed to open stream: Input/output error
        unlink(/storage/189457/23/dat): No such file or directory

    Finally, I found out that some bricks are offline:

        root@GlusterNode1a:~# gluster volume status
        Status of volume: storage
        Gluster process                                  Port    Online  Pid
        ------------------------------------------------------------------------------
        Brick gluster-1a:/storage/1a                     24009   Y       1326
        Brick gluster-1b:/storage/1b                     24009   N       N/A
        Brick gluster-2a:/storage/2a                     24009   N       N/A
        Brick gluster-2b:/storage/2b                     24009   N       N/A
        Brick gluster-1a:/storage/3a                     24011   Y       1332
        Brick gluster-1b:/storage/3b                     24011   N       N/A
        Brick gluster-2a:/storage/4a                     24011   N       N/A
        Brick gluster-2b:/storage/4b                     24011   N       N/A
        NFS Server on localhost                          38467   Y       24670
        Self-heal Daemon on localhost                    N/A     Y       24676
        NFS Server on gluster-2b                         38467   Y       4339
        Self-heal Daemon on gluster-2b                   N/A     Y       4345
        NFS Server on gluster-2a                         38467   Y       1392
        Self-heal Daemon on gluster-2a                   N/A     Y       1402
        NFS Server on gluster-1b                         38467   Y       2435
        Self-heal Daemon on gluster-1b                   N/A     Y       2441

    What can I do about that? I need to fix it. Note: CPU and network usage on all four nodes is about the same.

    Read the article

  • How do I use an internal SSD as a scratch disk for FCP X?

    - by andrewb
    I'm contemplating setting up my MacBook Air as a video editing machine. If I do this, I'll upgrade to a 256 GB SSD, and I should be able to keep around 100 GB or more free for video editing. The video files would of course be stored externally, but short of purchasing some expensive Thunderbolt RAID device (which I suppose is gradually becoming more of an option), external storage will be slow for reads/writes. How can I set things up so that I take advantage of my SSD's speed as a scratch disk/cache for FCP X, but still have the TB(s) of storage of the externals? I don't want to have to be moving files constantly back and forth; this is about saving time, not wasting it.

    Read the article

  • Is a "failed" RAID 5 disk really no good?

    - by GregH
    This is my first venture into setting up RAID on my home system. After installing 3 x 1TB drives in RAID 5, everything ran well for about 10 days. Then the Intel Rapid Storage Technology software that monitors the disks and RAID on my system told me that I had a failed drive. I marked the drive as good, and the array rebuilt. Then a day or so later I got a notification again that the drive had failed. I'm just wondering if this drive really is no good, or if there is something I can do to get it working again? Or do I just need to return it to the store where I bought it and get a replacement?

    Read the article

  • How to use new disk space after extending an attached SAN disk

    - by Edu Lomeli
    I have extended the space of my SAN vDisk from 1TB to 1.2TB, but Windows Explorer doesn't show the new size. After resizing the vDisk in the SAN Manager, the Disk Management utility shows the 200GB of unallocated space, so I resized the partition to use the unallocated space and got a 1.2TB partition. The process completed successfully, but in Windows File Explorer the disk still shows 1TB of total space. Windows version: Windows Storage Server Enterprise 2007. Do I need to restart the server? How can I use the new extra space without rebooting?

    Read the article

  • Why does StackExchange store images in imgur rather than its own servers? [migrated]

    - by martin's
    I am trying to understand the technical (and business) logic behind taking such an approach. Certainly SE isn't short of server or bandwidth resources. I don't think imgur is a CDN, so that can't be the reason. On the one hand one is giving up local control (meaning your files, your hardware) of the content. On the other, you don't have to use your own bandwidth, storage and resources. Then again, you depend on someone else for the reliability and up-time of your service.

    Read the article

  • Linux Disk Setup for VMs

    - by zjherner
    I've been trying to find the ideal way to set up disks/partitions for Linux guests on ESXi; it seems as though Linux is falling behind when it comes to easily adding disk space. The end goal is to be able to add disk space to a Linux server without rebooting it or taking it offline. Ideally, I would expect adding disk space to a Linux machine to be as easy as it is on a Windows machine: I expand the VMDK file from vSphere, open Disk Management, find the disk, and extend the volume. Having to use command-line tools in Linux is no big deal, but I haven't been able to find a solid way to expand filesystems on the fly. What is everyone else using for disk setups on their Linux guests? Has anyone been able to achieve adding storage space to Linux without downtime? Can it be done without using LVM?
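    If the guest uses LVM with a filesystem that supports online growth (ext3/ext4, XFS), this can be done without downtime. Below is a rough sketch of the usual sequence, assuming the VMDK has already been grown in vSphere and the LVM physical volume sits directly on the disk rather than inside a partition; the device, volume group, and logical volume names are placeholders.

        # Rough sketch of growing a Linux guest's filesystem online after the virtual
        # disk has been extended. Device, VG, and LV names below are placeholders.
        import subprocess

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1. Make the guest kernel notice the larger virtual disk.
        with open("/sys/class/block/sdb/device/rescan", "w") as f:
            f.write("1\n")

        # 2. Grow the physical volume, then the logical volume and its filesystem in one step.
        run(["pvresize", "/dev/sdb"])
        run(["lvextend", "--resizefs", "-l", "+100%FREE", "/dev/vg0/data"])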

    Read the article

  • Is a cluster the most cost effective redundancy method for windows server 2003?

    - by Ryan
    We had a server with bad RAM, which caused a long outage while they figured it out, and our client-facing apps had to go down for a while. We are coming up with a solution for instant fail-over but are not sure what the most cost-effective method would be. Is a Windows Server cluster the best method for this? Also note we are using Parallels Virtuozzo, if that makes any difference here. We found Parallels has a documented method for setting this up, but it said it required a Domain Controller as well as a fiber connection to shared storage; is all that really needed? Thanks.

    Read the article

  • Flushing disk cache for performance benchmarks?

    - by Ido Hadanny
    I'm doing some performance benchmarking of a heavy SQL script running on Postgres 8.4 on an Ubuntu box (Natty). I'm seeing some pretty unstable performance, even though I'm supposed to be the only one running on the machine (the same script on the exact same data might run in 20m and then 40m for no specific reason). So, remembering my distant DBA training, I decided I should flush the Postgres cache, using sudo /etc/init.d/postgresql restart, but it's still shaky! My question: maybe I'm missing some caches in my disk/OS? I'm using a NetApp appliance as my storage. Am I on the right track? Do I even want to make sure I get repeatable performance before I start tuning?
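    Restarting PostgreSQL clears its shared buffers but not the Linux page cache; the NetApp appliance also keeps its own cache, which this approach does not touch. A minimal sketch for dropping the host-side caches between benchmark runs, assuming a Linux host and root privileges:

        # Sketch: flush host-side filesystem caches between benchmark runs (Linux, needs root).
        # It does not touch any cache inside the storage appliance itself.
        import os

        def drop_linux_caches():
            os.sync()                                    # flush dirty pages to storage first
            with open("/proc/sys/vm/drop_caches", "w") as f:
                f.write("3\n")                           # 3 = drop page cache + dentries + inodes

        drop_linux_caches()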

    Read the article

  • Is LiveDrive.com reliable?

    - by Marc
    I'm currently using DropBox (50GB account) which works fine, but at this moment I'm not impressed with its speed. I have a 60down/6up connection and only LiveDrive can use almost the full bandwidth of my connection. Dropbox often is very slow (avg. 100-500Kb/s compared to LD at 6MB/s). If I only look at the speed and storage costs then LD is much better, but I don't have enough experience with LD to be able to say something about reliability. Can anyone comment on this? Thx.

    Read the article

  • Stop RAID 5 from Initializing

    - by Antz
    Hi, I am trying to follow Ictinike's guide on recovering from the Intel RAID "Non-Member Disk" error, found here: Ictinike's RAID recovery Guide. I have recreated my RAID array as per the instructions; however, my RAID array status is then automatically set to INITIALIZE. When I boot back into my Windows XP desktop, the Intel Matrix Storage Utility begins to "initialize" my drives. This is a long, slow process that will take about 20 hours, and I suspect all my data will be lost. I have gone back into my BIOS and disabled my RAID controller to prevent any further initialization and data loss. I have read that initialization will cause data loss; I've also read somewhere that it won't. I am not so confident in the latter. Is there any way to stop this initialization process so I can continue to follow the steps in the recovery guide? Some system specs: ABIT IP35 Pro motherboard, ICH9R on-board RAID controller.

    Read the article

  • What's the best solution for file sharing in my case? DAS or NAS?

    - by jakub
    I want to have a small, cheap, and energy-efficient server in my network which will be fully customizable (GNU/Linux, OpenBSD). What's more, I want to have big, redundant storage in my network and access to it via the server. I already have a small terminal without a hard drive (no SATA/PATA, one drive on USB) which works fine. I don't want to buy a big server, or to use a regular computer for this; it's not cheap. I thought about a small case (ITX?) and a cheap computer in it with SATA ports, but I cannot find anything interesting :( I also thought about having the NAS and the server on the network independently and booting the server from the NAS, but I'm not sure which technologies would be good for that, and I don't know about the performance. Direct access to the NAS over the network from a workstation would be another plus of that approach. What do you think about DAS? Would it be good for this?

    Read the article

  • multiple file systems for mysql

    - by RainDoctor
    Does MySQL support multiple file systems for a single database, with most of the tables being MyISAM? Context: we have a 1.5TB MySQL database, which is growing at a rate of 200GB per month. The storage is directly attached, and its slots are almost full. I can add another DAS and increase the file system, but resizing the volume, resizing the file system, etc. is getting messy. Is there a concept of "tablespace, datafile" (like in Oracle) in the MySQL world? Or how do you guys manage MySQL databases with these kinds of constraints?

    Read the article

  • DD-WRT/openwrt question

    - by Shiki
    Can I squeeze more speed out of my router (when it comes to a USB-attached storage device on it) with OpenWrt/DD-WRT? (Sorry, I don't really know these firmwares.) (I guess it works with ntfs-3g? I don't know.) Feel free to make this a real question. Basically the question: is the change worth it in terms of speed? (My router is a TP-Link WR1043N. I edited it out of the question since it would make it too specific.)

    Read the article

  • What lasts longer: Data stored on non-volatile flash RAM, optical media, or magnetic disk?

    - by Chris W. Rea
    What lasts longer: Data stored on non-volatile flash RAM (USB stick or SD cards?), optical media (CD, DVD, or Blu-Ray?), or magnetic disk (floppies, hard drives?) My gut tells me optical media, but I'm not sure. Furthermore, which of those digital media would be most suitable for long-term data storage where environmental issues are unknown, such as low/high temperature or humidity? For example, what digital media could be stored in a basement, attic, or time capsule, and be expected to survive a reasonably long time? e.g. a lifetime, and then some. Update: Looks like optical media and magnetic tape each have one vote below. Does anybody else have an opinion or know of a study comparing the two?

    Read the article

  • Way to auto-resize photos before they are uploaded to a cloud service?

    - by AndroidHustle
    I love using auto-syncing services to store the photos I take with my smartphone on a cloud storage service. One problem, though, is that the photos are uploaded at high resolution and take up a lot of space on the drive. I wonder if anyone knows of a service or strategy to resize the auto-uploaded photos so they occupy less space once stored? That is, without me having to shoot at lower quality: I still want to take photos at the highest quality, since I may take a photo I really like.
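    One do-it-yourself strategy, sketched below, is to keep the full-quality originals where they are and have a small script write downscaled copies into the folder the sync client watches. This is only an illustration of the idea: the folder paths, the size limit, and the use of the Pillow library are assumptions, not part of any particular service.

        # Sketch: write resized copies of new photos into the folder a sync client watches.
        # Requires the Pillow package; MAX_EDGE and both folder paths are illustrative values.
        from pathlib import Path
        from PIL import Image

        SOURCE = Path("~/Pictures/camera-originals").expanduser()   # full-quality originals
        SYNCED = Path("~/CloudSync/photos").expanduser()            # folder the cloud client uploads
        MAX_EDGE = 1600                                             # longest side of the uploaded copy

        for src in SOURCE.glob("*.jpg"):
            dst = SYNCED / src.name
            if dst.exists():
                continue                            # already processed on an earlier run
            img = Image.open(src)
            img.thumbnail((MAX_EDGE, MAX_EDGE))     # shrink in place, keeping the aspect ratio
            img.save(dst, quality=85)               # smaller JPEG goes to the cloud copy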

    Read the article
