Search Results

Search found 7222 results on 289 pages for 'storage cells'.


  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from a fire/flood disaster in my home. I'm looking for:
      - Affordable: less than $100/yr or as a one-time cost.
      - Reliable: at least a smaller chance of failing than there is of fire or flood.
      - Easy for the initial backup and for adding to it, and at least semi-easy to recover from.
    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.
      - Hard drive: I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to.
      - DVDs: way too much of a hassle for both backup and restore.
      - Tape: no idea on the affordability of this option.
      - Online: given that I have at least 300GB already, ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2Mb at best, this just doesn't sound like a good option for me, but I could be convinced.
      - Other: have I missed something?
    Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and would be destroyed along with it in a disaster. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two drives, to reduce the likelihood of failure?
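
    If the rotate-a-drive-into-the-safe approach wins out, the monthly "add files" step is easy to script. Below is a minimal sketch, in Python, of a copy that verifies each file by checksum before trusting the drive with it; the source and target paths are hypothetical:

        import hashlib
        import shutil
        from pathlib import Path

        SOURCE = Path("/home/user/Photos")          # hypothetical photo library
        TARGET = Path("/media/safe-drive/Photos")   # hypothetical mount of the safe drive

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def mirror() -> None:
            for src in SOURCE.rglob("*"):
                if not src.is_file():
                    continue
                dst = TARGET / src.relative_to(SOURCE)
                if dst.exists() and dst.stat().st_size == src.stat().st_size:
                    continue  # already copied on a previous monthly run
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                if sha256(src) != sha256(dst):
                    raise IOError(f"checksum mismatch after copying {src}")

        if __name__ == "__main__":
            mirror()

    Running two such drives in rotation, as the asker suggests, also halves the odds that a single failed spin-up costs the archive.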

    Read the article

  • Seagate 3TB ST3000DM001 hard drive not recognized by Linux, causes fdisk to hang

    - by MountainX
    I'm running Kubuntu 12.04. I have a brand new, never used Seagate 3TB ST3000DM001 hard drive. It's an internal drive, and I installed it in a USB enclosure. When I connect it to my PC, nothing happens automatically. When I run sudo fdisk -l, fdisk hangs (without reporting this drive) until I disconnect the drive from the USB port. blkid won't report it either. I tried connecting it to both USB 2.0 and USB 3.0 ports on my PC and got the same result either way. I also tried two different USB enclosures, with the same result. If I take the same drive in the same enclosure and connect it to a Windows 7 laptop, it is recognized automatically as a USB mass storage device. I want to format the drive (probably ext4) and copy files to it. I have another drive, also in a USB enclosure, that is connected via USB 3.0 to this PC and works fine; it's a 2TB Samsung HDD. I plan to copy files from the 2TB drive to the 3TB drive once this issue is resolved. My motherboard is an Asus P8B WS LGA1155/ Intel C206/ Quad CrossFireX/ SATA3&USB3.0/ A&2GbE/ ATX. What is the resolution?

    Read the article

  • Solid State Drive Occasionally Freezes For A Minute While OS Is "Beach Balling"

    - by Boris_yo
    Almost a month ago I bought an Intel 330 128GB solid state drive, migrated my data from the old HDD to the new SSD with the Intel-branded, limited-feature Acronis, optimized it with the Intel Toolbox and started using it. Occasionally I get freezes of close to a minute while the operating system "beach balls": animations still work, and I can interact and click on things, but nothing responds and nothing loads. Recently a couple of such freezes occurred in a row within a short span of time. I have noticed that if I stop interacting with the laptop, the freeze lasts less time than if I keep interacting with it. The bigger problem is when the freeze just does not end and the computer stays stuck for more than half an hour, until I run out of patience and feel I need to restart the system because I am not getting anywhere. One such freeze happened while the laptop was cold booting into Windows 7. The system hung and I had to restart, only to be greeted with the Windows recovery screen stating something about a boot sector failure and asking me to insert a Windows repair CD. But after I restarted, Windows booted successfully and all was well. I have filmed a video of the freeze occurring during a cold boot, which you can see here (on the video page, look below for the description): http://www.youtube.com/watch?v=8b7MQlcDTUs As I mentioned at the beginning, the SSD is less than a month old, but here are the S.M.A.R.T. statistics just in case (TRIM is enabled, by the way, according to CrystalDiskInfo). I want to emphasize that this SSD is the only drive I have, yet it is working in RAID mode (it was enabled in the BIOS by the laptop's previous owner) on Intel Rapid Storage drivers. I am contemplating switching to AHCI mode but want to be sure this won't cause data loss. Additionally, the stock firmware is the only firmware currently available, and Intel does not respond to my posts on their community board. If anyone here has this SSD model or generally has experience with SSD drives, I would love to know your thoughts.

    Read the article

  • linux: mount old ATA disk to USB adapter

    - by 130490868091234
    I am trying to recover data from an old Linux install on an ATA hard drive. I found a ScanLogic Corp. SL11R-IDE IDE Bridge (04ce:0002), an ATA-to-USB 1.0 adapter, and after switching it on I plugged it into a laptop running Ubuntu 12.04. I am used to drives being mounted automatically, but this one doesn't show up in /media. After doing a dmesg, all I got is this:

        [215298.671924] usb 2-1.1: new full-speed USB device number 5 using ehci_hcd
        [215298.767330] scsi19 : usb-storage 2-1.1:1.0
        [215299.841701] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
        [215300.017258] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
        [215300.197050] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd
        [215300.372730] usb 2-1.1: reset full-speed USB device number 5 using ehci_hcd

    I tried plugging the adapter into the three different USB ports on my laptop (one of them USB 3.0), but had no luck with any of them. I get different device nodes, for example /dev/bus/usb/003/002 or /dev/bus/usb/002/004, but I never get a /dev/sdbN link. The output of blkid -o list -c /dev/null shows just the laptop's partitions. I have tried taking out the jumper, setting it to master and to CS Enabled, but that didn't change the result. Any ideas?

    Read the article

  • Could hybrid SSD + HDD be made with fixed internal partitions?

    - by Aaron
    I was pretty close to getting Seagate's Momentus XT but have been scared off by the many problems reported on forums and feedback sites, especially in MacBook Pros. So I'm waiting for the mk 2, with some extra flash and better reliability, which I'm assuming will come out this year. What would suit me better, though, is a 32+500 hybrid drive where I have more control over what is on the flash and what is on the disk. So there would be two physical partitions within the one 2.5" hard drive enclosure, using different media internally (32GB for core files and 500GB for data and multimedia). The partitions would be locked so they can't be changed. Or, even better, the disk driver just makes them appear as two disks to the OS that share the same bus... Perhaps it's OK if the BIOS just sees the first drive until the OS is loaded. Is either of these technically possible? It would obviously be difficult to market outside of the enthusiast segment. The SSD memory modules can be pretty small, right, so they could even be made as a card that plugs into a secondary connection on the enclosure. That would be good for computer builders as well as for upgrading and recoverability. Future operating systems could then recognise these system SSD drives and automatically install the OS and swap files on them, while placing document libraries on the larger data drive. While HDDs will probably disappear in the longer term, there will always be a trade-off between speed, storage size and expense.

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping so much as looking for some guidance on good-idea / bad-idea strategies; I'm sure I'm not in the "best practices" budget range. Currently, I have three Dell PowerEdge servers running XenServer in a pool. Each node has an Ubuntu file server, serving about 6TB. One is the primary; the other two are rsync targets for backup. The 6TB is stored on each node's local storage disks as an LVM volume of 3x2TB virtual disks. The file server VM disks are also stored on the node-local disks. Each node also runs a smattering of lightweight VMs for web, development, Windows, and the like. Several of those VMs' disks reside on a QNAP NAS, to play with live migration. These VMs are often clients of the primary file server (all the mail, web content, and user files are stored on the file server, not on the mail, web, and Samba VMs). This all works fine and is a major step up for us. The downside is that the QNAP is a single point of failure, and the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the file server, with its 6+ TB of disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, keep the VM images on a SAN or NAS, and have two or more NAS units act as NFS-serving file appliances? Or a hybrid SAN/NAS that serves iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a file server along with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat, and we've already skinned it a few ways. What makes sense?

    Read the article

  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane. I'm planning on connecting SATA (non SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card to a certain number of SATA cables), how many drives can I connect to this card? The way I see it, there are two limitations: a limitation imposed by the theoretical max number of devices supported on the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more? Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around on the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!

    Read the article

  • How to set up Windows 7 Professional as a NAS

    - by Enyalius
    I searched and didn't find any answers, so please forgive me if this is a repeat. Anyway, I have an older computer that I'm using as an HTPC, and I was hoping that I could use it as a NAS/multimedia server as well. My primary uses would include accessing content on my PS3 (same LAN), accessing content from other computers on my home network and, if I can, accessing content from my Android phone over the internet. I have used Subsonic to stream music to my Android phone and other computers before, but I would really like to find a way to do this natively if possible. I know that I can buy external hard disk cases that plug into the USB port of my router, and that I can get a Drobo or another network storage solution, but I would really rather not spend the money (especially considering that I already have a computer that I should be able to use). Hardware involved:
      - Apple AirPort Extreme base station router (most recent revision)
      - Home theater PC: Core 2 Duo @ 2.4GHz, 8GB DDR2 RAM, ~3.5TB hard drive space
      - Sony PlayStation 3 Slim 120GB
      - HTC Thunderbolt (I have 4G coverage), rooted and running Android 2.2.1
      - Various Apple laptops
      - Various Windows 7 desktops/laptops
    Thanks in advance! Note: I have looked at open-source NAS software, but I would like to preserve the Windows Media Center functionality in Windows 7, so other NAS software is not currently an option for me.

    Read the article

  • One Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a Dropbox folder to my C drive by just running a portable file? Extra background information, because I know you guys hate it when you don't get the entire situation: I go back to university in the fall and I need a new storage solution. I decided to use Dropbox to sync my tiny university files (< 5 MB). I need to access these files from four machines:
      1. Windows 7 home machine
      2. Windows 7 University A machine
      3. Windows 7 University B machine
      4. Android tablet
    Machines 1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3, but those machines are not mine. There is an easy fix: run a portable version of the Dropbox syncer from a USB drive. But I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable Dropbox syncer off the internet. But where will it store the files? In a temporary directory on the C drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit into categories 2 and 3. My portable Dropbox syncer will notice that the temporary directory is empty on each new PC I use and, instead of downloading my Dropbox folder to the machine, it will sync the other way around, i.e. delete my entire Dropbox. The solution is to mirror my Dropbox into the temporary directory before running the Dropbox syncer.
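
    The pre-seeding step described at the end (mirror the Dropbox folder into the temporary directory before the syncer starts) is simple to script. A minimal sketch in Python, assuming hypothetical paths for the carried mirror and for the syncer's temp directory:

        import shutil
        from pathlib import Path

        # Hypothetical locations: a mirror of the Dropbox folder fetched from
        # elsewhere, and the temp directory the portable syncer will use.
        SOURCE = Path(r"D:\DropboxMirror")
        DEST = Path(r"C:\Temp\Dropbox")

        def seed() -> None:
            # One-way mirror: copy everything from SOURCE into DEST so the
            # syncer finds a populated folder and uploads/downloads deltas
            # instead of treating the empty directory as "everything deleted".
            DEST.mkdir(parents=True, exist_ok=True)
            shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)  # Python 3.8+

        if __name__ == "__main__":
            seed()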

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this through the command line. This post suggests running:

        $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However I get an "unknown option -w" error. The slightly modified command:

        $ sudo modprobe -r usb_storage

    fails with the message "FATAL: Module usb_storage is in use". If I try to kill -9 the processes marked [usb-storage] beforehand, they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on serverfault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
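
    A commonly suggested alternative to reloading usb_storage is to issue a USBDEVFS_RESET ioctl against the device node under /dev/bus/usb, which forces a port-level reset much like physically replugging. A minimal sketch in Python; the bus/device path comes from lsusb output and is hypothetical here:

        import fcntl
        import os

        USBDEVFS_RESET = 21780  # _IO('U', 20) from <linux/usbdevice_fs.h>

        # "Bus 002 Device 004" in lsusb -> /dev/bus/usb/002/004 (hypothetical).
        DEVICE = "/dev/bus/usb/002/004"

        def reset(path: str) -> None:
            fd = os.open(path, os.O_WRONLY)
            try:
                # Behaves like an unplug/replug cycle, no physical access needed.
                fcntl.ioctl(fd, USBDEVFS_RESET, 0)
            finally:
                os.close(fd)

        if __name__ == "__main__":
            reset(DEVICE)  # must be run as root

    If a port-level reset isn't enough, unbinding and rebinding the device via /sys/bus/usb/drivers/usb is another frequently cited option.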

    Read the article

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O demand, and tons of unused disk space. I guess it's a universal issue, since the smallest drives vendors now offer are 300 GB but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that takes 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive to response time. The redo logs get their own 300 GB mirror but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team is reminded of the risk of hindering the performance of the main db/app? Or, even better, is protected from that risk? The typical situation that comes to my mind is: "oh, I have this very large zip file, where do I uncompress it? Umm, let's see df -h and we'll figure something out in no time..." I don't put emphasis on strictness of security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
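
    One partial answer to the last question: rather than a special filesystem, Linux can cap the I/O rate of whatever touches the scratch space via cgroup blkio throttling. A minimal sketch, assuming cgroup v1 mounted at the usual path, a hypothetical backing device 8:16, and a deliberately low 10 MB/s cap:

        import os

        DEVICE = "8:16"                # major:minor of the backing disk (hypothetical)
        BPS_LIMIT = 10 * 1024 * 1024   # 10 MB/s
        CGROUP = "/sys/fs/cgroup/blkio/scratch"

        def throttle(pid: int) -> None:
            os.makedirs(CGROUP, exist_ok=True)
            # Cap both read and write bandwidth for the group.
            for knob in ("blkio.throttle.read_bps_device",
                         "blkio.throttle.write_bps_device"):
                with open(os.path.join(CGROUP, knob), "w") as f:
                    f.write(f"{DEVICE} {BPS_LIMIT}")
            # Move the offending process (e.g. the unzip) into the group.
            with open(os.path.join(CGROUP, "tasks"), "w") as f:
                f.write(str(pid))

        if __name__ == "__main__":
            throttle(os.getpid())  # demo: throttle the current process

    Where the CFQ scheduler is in use, ionice -c3 (idle class) is a simpler knob for the same goal.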

    Read the article

  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130GB of photos. The key parameters are:
      - a non-negligible probability that the company selling the storage will still exist, and the data still be accessible, for at least five years
      - data should be considered safe once uploaded
      - reasonable terms of service: Google Drive reserving the right to literally do anything they want with their users' data is not acceptable; the possibility that the CIA might look at those pictures is not considered a threat
      - easy to use from Windows, preferably as a drive
      - no nerve-wracking limitations ("cannot upload 10GB/day" or "files 500MB" etc.) that serve no purpose other than pushing the user to the next-higher price plan
      - some upgrade plan: there are currently 10-30GB of new photos per year, with a tendency to increase, which might bust a 150GB limit next January
      - the ability to somehow sort the pictures: currently they are sorted into folders, but something similar (tags) would be just as good, if easy enough to apply
      - of course, the pricing is important (although there's a reason this is the last bullet; reasonable data safety is considered more important)
    Nice to have, but not necessary, features would be:
      - additional features related to photos (thumbnail generation, album sharing, etc.)
      - access from the web and from platforms other than Windows (smartphones)
    Let me stress this again: the person in need of this is able to copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there's almost no understanding of what happens under the hood.

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking at building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost and ones where the whole filesystem is lost (or whether there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read, the pool should go into a faulted mode, but if the vdev is returned, the pool should recover? Or not? And if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.

    Read the article

  • In OpenOffice Calc, how do I drag and drop cells to insert rather than replace their destination?

    - by joachim
    I want to rearrange rows with the mouse in Calc. In Excel, I select the whole row, then drag and drop it while holding Shift. This causes the drag and drop cursor to turn into a bar rather than cells, and the cells are inserted at the bar's position. Is there a way to accomplish the same sort of thing in Calc without going around the houses inserting columns before the drag operation?

    Read the article

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update Rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows:

        An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format.
        Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
        Parameter name: value
           at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
           at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           --- End of inner exception stack trace ---
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
           at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

    From my reading around, the suggestion seems to be to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either by searching or by browsing the folder named in the log entry. The server returns the following error to the client: "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem." Messages were transferred into Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my questions, finally:
      1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages, since it doesn't seem to be pulling them directly from the MAPI store?
      2. Alternatively, if there is no such database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received.
    Many thanks, Mike

    Read the article

  • Creating and Saving an Excel File

    - by Kris
    I have the following code that creates a new Excel file in my C# code-behind. When I attempt to save the file, I would like the user to select the save location. In Method #1, I can save the file using the workbook's SaveCopyAs without prompting the user for a location; this saves one file to the C:\Temp directory. Method #2 will save the file in my Users\Documents folder, then prompt the user to select the location and save a second copy. How can I eliminate the first copy from being saved in the Users\Documents folder?

        Excel.Application oXL;
        Excel._Workbook oWB;
        Excel._Worksheet oSheet;
        Excel.Range oRng;
        try
        {
            // Start Excel and get Application object.
            oXL = new Excel.Application();
            oXL.Visible = false;

            // Get a new workbook.
            oWB = (Excel._Workbook)(oXL.Workbooks.Add(Missing.Value));
            oSheet = (Excel._Worksheet)oWB.ActiveSheet;

            // *****
            oSheet.Cells[2, 6] = "Ship To:";
            oSheet.get_Range("F2", "F2").Font.Bold = true;
            oSheet.Cells[2, 7] = sShipToName;
            oSheet.Cells[3, 7] = sAddress;
            oSheet.Cells[4, 7] = sCityStateZip;
            oSheet.Cells[5, 7] = sContactName;
            oSheet.Cells[6, 7] = sContactPhone;

            oSheet.Cells[9, 1] = "Shipment No:";
            oSheet.get_Range("A9", "A9").Font.Bold = true;
            oSheet.Cells[9, 2] = sJobNumber;
            oSheet.Cells[9, 6] = "Courier:";
            oSheet.get_Range("F9", "F9").Font.Bold = true;
            oSheet.Cells[9, 7] = sCarrierName;

            oSheet.Cells[11, 1] = "Requested Delivery Date:";
            oSheet.get_Range("A11", "A11").Font.Bold = true;
            oSheet.Cells[11, 2] = sRequestDeliveryDate;
            oSheet.Cells[11, 6] = "Courier Acct No:";
            oSheet.get_Range("F11", "F11").Font.Bold = true;
            oSheet.Cells[11, 7] = sCarrierAcctNum;
            // *****

            // Method #1
            //oWB.SaveCopyAs(@"C:\Temp\" + sJobNumber + ".xls");

            // Method #2
            oXL.SaveWorkspace(sJobNumber + ".xls");
        }
        catch (Exception theException)
        {
            String errorMessage;
            errorMessage = "Error: ";
            errorMessage = String.Concat(errorMessage, theException.Message);
            errorMessage = String.Concat(errorMessage, " Line: ");
            errorMessage = String.Concat(errorMessage, theException.Source);
        }

    Read the article

  • LocalStorage storing multiple div hides

    - by Jesse Toxik
    I am using a simple piece of code to hide any of several divs when the link that hides them is clicked. On one of them I have localStorage set up to remember whether the div is hidden or not. The thing is this: how can I make localStorage remember the hidden state of multiple divs WITHOUT having to call localStorage.setItem for each individual div? Is it possible to store an array of the div ids with their display set to true or false, and use it to decide whether the page should show them? EDITED:

        function ShowHide(id) {
            if (document.getElementById(id).style.display == '') {
                document.getElementById(id).style.display = 'none';
            } else if (document.getElementById(id).style.display == 'none') {
                document.getElementById(id).style.display = '';
            }
        }

    Read the article

  • ORM on iPhone, simpler than Core Data

    - by Alexander Babaev
    The question is rather simple. I know that there is SQLite, and there is also Core Data. But I need something in between: more object-oriented than the SQLite API and simpler than Core Data. The main points are:
      - I need access to stored entities only by id; no queries required.
      - I need to store items of a single type, which means I could use just one table if I chose SQLite.
      - I want automatic object-relational conversion, or object-storage conversion if the storage is not relational.
      - I could use object archiving, but then I have to implement things myself (NSArchiver). I want to write some kind of class and get persistence automatically, as can be done with Hibernate/ActiveRecord/Core Data/etc.
    Thanks.

    Read the article

  • ServiceBus WorkerRole DiagnosticMonitor Error

    - by user1596485
    I have a WebRole and two ServiceBus WorkerRoles running. During the OnStart of the roles I get the following exception:

        [System.ArgumentOutOfRangeException] invalid syntax for container log4net
        Parameter name: initialConfiguration

    Running Azure: ConfigurationManager version 1.7.0.3, ServiceBus version 1.7.0.1, Storage version 1.7.0.0. This occurs both while running locally in the dev Azure environment and in the cloud. All roles have the following configuration setting:

        <LocalStorage name="Log4Net" cleanOnRoleRecycle="true" sizeInMB="2048" />

    All roles have the following code in OnStart:

        try
        {
            // Configure Diagnostics to poll log file to Blob Storage
            var diagnosticsConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
            diagnosticsConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
            diagnosticsConfig.Directories.DataSources.Add(
                new DirectoryConfiguration
                {
                    Path = RoleEnvironment.GetLocalResource("Log4Net").RootPath,
                    DirectoryQuotaInMB = 512,
                    Container = "wad-WebRolelog4net"
                });
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticsConfig);
        }
        catch
        {
            OnStop();
            return false;
        }

    Read the article

  • How to reliably map vSphere disks <-> Linux devices

    - by brianmcgee
    Task at hand: after a virtual disk has been added to a Linux VM on vSphere 5, we need to identify the disk in order to automate LVM storage provisioning. The virtual disks may reside on different datastores (e.g. SAS or flash) and, although they may be of the same size, their speed may vary. So I need a method to map vSphere disks to Linux devices.

    Ideas: through the vSphere API, I am able to get the device info:

        Data Object Type: VirtualDiskFlatVer2BackingInfo
        Parent Managed Object ID: vm-230
        Property Path: config.hardware.device[2000].backing
        Properties (Name / Type / Value):
          ChangeId          string                              Unset
          contentId         string                              "d58ec8c12486ea55c6f6d913642e1801"
          datastore         ManagedObjectReference:Datastore    datastore-216 (W5-CFAS012-Hybrid-CL20-004)
          deltaDiskFormat   string                              "redoLogFormat"
          deltaGrainSize    int                                 Unset
          digestEnabled     boolean                             false
          diskMode          string                              "persistent"
          dynamicProperty   DynamicProperty[]                   Unset
          dynamicType       string                              Unset
          eagerlyScrub      boolean                             Unset
          fileName          string                              "[W5-CFAS012-Hybrid-CL20-004] l****9-000001.vmdk"
          parent            VirtualDiskFlatVer2BackingInfo      parent
          split             boolean                             false
          thinProvisioned   boolean                             false
          uuid              string                              "6000C295-ab45-704e-9497-b25d2ba8dc00"
          writeThrough      boolean                             false

    And on Linux I can read the uuid strings:

        [root@lx***** ~]# lsscsi -t
        [1:0:0:0]    cd/dvd  ata:                      /dev/sr0
        [2:0:0:0]    disk    sas:0x5000c295ab45704e    /dev/sda
        [3:0:0:0]    disk    sas:0x5000c2932dfa693f    /dev/sdb
        [3:0:1:0]    disk    sas:0x5000c29dcd64314a    /dev/sdc

    As you can see, the uuid string of disk /dev/sda looks somehow familiar from the VMware API: only the first hex digit is different (5 vs. 6), and the match only runs to the third hyphen. So this looks promising...

    Alternative idea: select disks by controller. But is it reliable that the ascending SCSI ID also matches the order of the vSphere virtual disks? What happens if I add another DVD-ROM drive or a USB thumb drive? That will probably introduce new SCSI devices in between, which is why I think I will discard this idea.

    Questions:
      1. Does someone know an easier method to map vSphere disks to Linux devices?
      2. Can someone explain the differences in the uuid strings? (I think this has something to do with SAS addressing, initiator and target... WWN-like.)
      3. May I reliably map devices by using those uuid strings?
      4. How about SCSI virtual disks? There is no uuid visible then...

    This task seems so obvious. Why doesn't VMware think about this and simply add a way to query the disk mapping via VMware Tools?
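
    One detail worth adding: when the VM is configured with disk.EnableUUID = "TRUE", udev typically exposes this same identifier under /dev/disk/by-id as a wwn-0x.../scsi-3... symlink, which allows the mapping without parsing lsscsi. A minimal sketch in Python of that lookup, using the uuid string from the question:

        import os
        from typing import Optional

        BY_ID = "/dev/disk/by-id"

        def find_device(vsphere_uuid: str) -> Optional[str]:
            """Map a VirtualDiskFlatVer2BackingInfo uuid to a /dev node."""
            # "6000C295-ab45-704e-9497-b25d2ba8dc00" -> "6000c295ab45704e9497b25d2ba8dc00"
            needle = vsphere_uuid.replace("-", "").lower()
            for name in os.listdir(BY_ID):
                # udev names the link wwn-0x<32 hex digits> for NAA-128 ids.
                if name.startswith("wwn-0x") and name[len("wwn-0x"):] == needle:
                    return os.path.realpath(os.path.join(BY_ID, name))
            return None

        if __name__ == "__main__":
            print(find_device("6000C295-ab45-704e-9497-b25d2ba8dc00"))

    The 5-vs-6 difference noted in the question is consistent with this: the lsscsi -t column shows the emulated 64-bit SAS target address (NAA type 5), while the backing uuid is the disk's 128-bit NAA type 6 identifier, and only the latter appears in by-id.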

    Read the article

  • juju bootstrap fails with a local environment, why?

    - by Braiam
    Each time I try to bootstrap juju using a local environment, it fails starting the juju-db-braiam-local job as follows:

        $ sudo juju --debug --verbose bootstrap
        2013-10-20 02:28:53 INFO juju.provider.local environprovider.go:32 opening environment "local"
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:210 found "10.0.3.1" as address for "lxcbr0"
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:234 checking 10.0.3.1:8040 to see if machine agent running storage listener
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:237 nope, start some
        2013-10-20 02:28:53 DEBUG juju.environs.tools storage.go:87 Uploading tools for [raring precise]
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:109 looking for: juju
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:150 checking: /usr/bin/jujud
        2013-10-20 02:28:53 INFO juju.environs.tools build.go:156 found existing jujud
        2013-10-20 02:28:53 INFO juju.environs.tools build.go:166 target: /tmp/juju-tools243949228/jujud
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:217 forcing version to 1.14.1.1
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"FORCE-VERSION", Mode:420, Uid:0, Gid:0, Size:8, ModTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}}
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"jujud", Mode:493, Uid:0, Gid:0, Size:19179512, ModTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}}
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:106 built 1.14.1.1-raring-amd64 (4196kB)
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 INFO juju.environs.boostrap bootstrap.go:57 bootstrapping environment "local"
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:395 create mongo journal dir: /home/braiam/.juju/local/db/journal
        2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:401 generate server cert
        2013-10-20 02:28:55 INFO juju.provider.local environ.go:421 installing service juju-db-braiam-local to /etc/init
        2013-10-20 02:28:56 ERROR juju.provider.local environ.go:423 could not install mongo service: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
        2013-10-20 02:28:56 ERROR juju supercommand.go:282 command failed: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
        error: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)

    What is the reason for this error and how do I solve it?

    Read the article
