Search Results

Search found 329 results on 14 pages for 'tape'.


  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers; unfortunately, there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously), not SSDs, so they are mounted from the bottom with rubber grommets to reduce vibration. There are NO side holes in any of the 3.5" bays; all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD because the thread is too coarse; 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives, and by moving the grommets into the appropriate holes they line up with the SSD.

    But that's as far as I can get. I don't have, and cannot find, any screws that use the finer M3 pitch, are long enough, and have the wide head. I can't remove the grommets because the screw holes are much too wide without them; the entire head of the screw fits through. And again, there are no side holes, so I can't use the mounting bracket that came with the drive, nor any other 2.5"-to-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case; it's not currently screwed in, if that's not clear).

    Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3 threaded screw, or has any other ideas, please help me out here.

    Read the article

  • How do you verify a restore?

    - by Nic
    What tool(s) would you use to verify that a restored file structure is whole and complete? My environment is a Windows Server 2008 file server. (We use tape for backup, but that is inconsequential.) I am specifically looking for a tool that will:

      - Record the names of all files and folders below a specified directory
      - Optionally calculate checksums of each file encountered
      - Save this index in a human-readable format
      - Compare the index against restored data and show differences

    Some background: I recently had to replace the disks in our file server. The upgrade was scheduled to start 36 hours after the most recent full backup, so I created a differential backup. However, it turns out that one of our applications was clearing the archive bit on files saved to the server, so these were not included in the differential backup. I was unaware of this until my users reported some files as missing. Aside from this, are there any other common methods for validating the integrity of a restore? I am frequently told that testing backups by restoring them is the only way to know that backups are working, but how do you deal with the case where it works 99% correctly and the other 1% silently fails?
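    In case no ready-made tool turns up, the record/checksum/compare workflow above can be sketched in a few lines of plain Python (SHA-256 hashes, CSV as the human-readable index). The paths in the comments are placeholders; this is only a sketch of the idea, not a hardened tool:

        import csv
        import hashlib
        import os

        def index_tree(root):
            """Walk `root` and yield (relative path, sha256 digest) for every file."""
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    digest = hashlib.sha256()
                    with open(full, "rb") as f:
                        for block in iter(lambda: f.read(1024 * 1024), b""):
                            digest.update(block)
                    yield os.path.relpath(full, root), digest.hexdigest()

        def write_index(root, index_path):
            # Save the index as CSV: human-readable and easy to diff later.
            with open(index_path, "w", newline="") as out:
                csv.writer(out).writerows(index_tree(root))

        def compare_index(index_path, restored_root):
            with open(index_path, newline="") as f:
                expected = dict(csv.reader(f))
            actual = dict(index_tree(restored_root))
            missing = sorted(set(expected) - set(actual))
            changed = sorted(p for p in expected if p in actual and expected[p] != actual[p])
            return missing, changed

        # Before the migration:  write_index(r"D:\Shares", r"C:\pre-migration-index.csv")
        # After the restore:     print(compare_index(r"C:\pre-migration-index.csv", r"D:\Shares"))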

    Read the article

  • NTBackup Error: C: is not a valid drive

    - by Chris
    I'm trying to use NTBackup to back up the C: drive on a Microsoft Windows Small Business Server 2003 machine and get the following error in the log file:

        Backup Status
        Operation: Backup
        Active backup destination: 4mm DDS
        Media name: "Media created 04/02/2011 at 21:56"
        Error: The device reported an error on a request to read data from media.
        Error reported: Invalid command.
        There may be a hardware or media problem. Please check the system event log for relevant failures.
        Error: C: is not a valid drive, or you do not have access.
        The operation did not successfully complete.

    I'm using a brand new SATA Quantum DAT-72 drive with a brand new tape (tried a couple of tapes). I carry out the following:

      - Open NTBackup
      - Select the Backup tab
      - Tick the box next to C:
      - Ensure Destination is 4mm DDS
      - Media is set to New
      - Press Start Backup
      - Choose Replace the data on the media and press Start Backup
      - NTBackup tries to mount the media
      - Error message shows: The device reported an error on a request to read data from media. Error reported: Invalid command. There may be a hardware or media problem. Please check the system event log for relevant failures.

    On checking the log I find the following:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8018
        Date: 04/02/2011
        Time: 22:02:02
        User: N/A
        Computer: SERVER
        Description: Begin Operation
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    and then:

        Event Type: Information
        Event Source: NTBackup
        Event Category: None
        Event ID: 8019
        Date: 04/02/2011
        Time: 22:02:59
        User: N/A
        Computer: SERVER
        Description: End Operation: The operation was successfully completed. Consult the backup report for more details.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the NetApp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted.

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running Wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1> options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source from Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
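    For what it's worth, the single-read structure described above (tar pulls the data while a wrapper hashes every block as it flows past) can be sketched with Python's tarfile and hashlib. The performance caveat about interpreted languages still stands, so treat this as an illustration of the structure rather than a drop-in solution; file names are placeholders:

        import hashlib
        import sys
        import tarfile

        class HashingReader:
            """File-like wrapper: hashes every block as tar reads it, so each file is read once."""
            def __init__(self, path):
                self._f = open(path, "rb")
                self.md5 = hashlib.md5()
            def read(self, size=-1):
                data = self._f.read(size)
                self.md5.update(data)
                return data
            def close(self):
                self._f.close()

        def tar_with_checksums(paths, out_stream):
            """Stream a tar archive of regular files to out_stream, returning per-file md5 digests."""
            checksums = {}
            with tarfile.open(fileobj=out_stream, mode="w|") as tar:
                for path in paths:
                    info = tar.gettarinfo(path)
                    reader = HashingReader(path)
                    tar.addfile(info, reader)          # tar reads through the hashing wrapper
                    reader.close()
                    checksums[path] = reader.md5.hexdigest()
            return checksums

        if __name__ == "__main__":
            # usage sketch:  python tarsum.py file1 file2 > archive.tar 2> archive.md5
            for path, digest in tar_with_checksums(sys.argv[1:], sys.stdout.buffer).items():
                sys.stderr.write("%s  %s\n" % (digest, path))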

    Read the article

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised floor/refrigerated approach. My questions:

      - Is the cheap drive/enclosure solution the best one for this situation?
      - Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
      - For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
      - For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive?
      - If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12? (Rough numbers are sketched below.)
      - Are there any Windows 7 volume/drive/capacity limits I should be aware of?

    Thanks
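    As a rough sanity check on the bandwidth-splitting question, the back-of-envelope figures below are assumptions (typical numbers, not measurements). With only one drive active at a time, a single drive's sequential rate fits under either interface; the split only matters when many drives stream at once:

        # All figures are assumptions for illustration, not measurements.
        DRIVES_IN_ENCLOSURE = 12
        SEQ_READ_PER_DRIVE_MB_S = 120    # typical 2 TB SATA drive, large sequential reads
        USB3_PRACTICAL_MB_S = 350        # real-world USB 3.0, well below the headline 5 Gb/s
        ESATA_PRACTICAL_MB_S = 250       # eSATA over a 3 Gb/s link, roughly

        aggregate_demand = DRIVES_IN_ENCLOSURE * SEQ_READ_PER_DRIVE_MB_S
        print("all 12 drives streaming at once would want ~%d MB/s" % aggregate_demand)
        print("per-drive share if all stream over USB 3.0: ~%d MB/s"
              % (USB3_PRACTICAL_MB_S // DRIVES_IN_ENCLOSURE))
        print("per-drive share if all stream over eSATA: ~%d MB/s"
              % (ESATA_PRACTICAL_MB_S // DRIVES_IN_ENCLOSURE))
        print("one drive at a time (the stated access pattern) needs ~%d MB/s, which either link can carry"
              % SEQ_READ_PER_DRIVE_MB_S)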

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which are then backed up to tape nightly.

      - 8pm Mon-Fri - Full backup, followed by log backup
      - 7am-7pm Mon-Fri, at half-hour intervals - Log backup

    Our backups have been working in this manner since we migrated from SQL Server Standard 2000 to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week, we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups. Now onto the issue I need a fix for... All this week, every full backup on our biggest two databases has failed (both backups < 1 GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue is to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required to complete these backups. Changing the backup destination to local storage works fine (and very, very fast in comparison). From the Job History, I can find a few hints as to what the problem could be...

    Code: 0xC002F210 (always this code, but a mix of the following descriptions):

        The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally.

        The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally.

    Please help save my hair and sanity!!

    Read the article

  • Looking for a recommendation on measuring a high availability app that is using a CDN.

    - by T Reddy
    I work for a Fortune 500 company that struggles with accurately measuring performance and availability for high availability applications (i.e., apps that are up 99.5% with 5 seconds page to page navigation). We factor in both scheduled and unscheduled downtime to determine this availability number. However, we recently added a CDN into the mix, which kind of complicates our metrics a bit. The CDN now handles about 75% of our traffic, while sending the remainder to our own servers. We attempt to measure what we call a "true user experience" (i.e., our testing scripts emulate a typical user clicking through the application). These monitoring scripts sit outside of our network, which means we're hitting the CDN about 75% of the time.

    Management has decided that we take the worst case scenario to measure availability. So if our origin servers are having problems, but the CDN is serving content just fine, we still take a hit on availability. The same is true the other way around. My thought is that as long as the "user experience" is successful, we should not unnecessarily punish ourselves. After all, a CDN is there to improve performance and availability!

    I'm just wondering if anyone has any knowledge of how other Fortune 500 companies calculate their availability numbers? I look at apple.com, for instance, as an example of a storefront that uses a CDN and never seems to be down (unless there is about to be a major product announcement). It would be great to have some hard, factual data because I don't believe that we need to unnecessarily hurt ourselves on these metrics. We are making business decisions based on these numbers. I can say, however, given that these metrics are visible to management, issues get addressed and resolved pretty fast (read: we cut through the red tape pretty quickly). Unfortunately, as a developer, I don't want management to think that the application is up or down because some external factor (i.e., CDN) is influencing the numbers. Thoughts? (I mistakenly posted this question on StackOverflow, sorry in advance for the cross-post.)
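    To make the difference concrete, here is a toy calculation (the sample counts are made up): the same set of external monitoring checks yields two very different availability numbers depending on whether a check counts as "up" only when both origin and CDN respond (the worst-case rule) or when either one serves the page (the user-experience view):

        def availability(checks):
            """checks: list of (origin_ok, cdn_ok) samples from the external monitor."""
            worst_case = sum(1 for o, c in checks if o and c) / len(checks)   # both must be up
            user_view = sum(1 for o, c in checks if o or c) / len(checks)     # either serves the request
            return worst_case, user_view

        # 1000 synthetic checks: 25 where only the CDN answered, 5 where nothing did
        samples = [(True, True)] * 970 + [(False, True)] * 25 + [(False, False)] * 5
        print(availability(samples))   # -> (0.97, 0.995): same data, very different headline numbers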

    Read the article

  • Dell 2970 - HP 1/8 G2 autoloader keeps falling off LSI 2032 SCSI chain

    - by middaparka
    I've a somewhat irritating problem with a Dell 2970 that has a HP 1/8 G2 autoloader (the Ultrium LTO 2 model) attached to the Dell/LSI 2032 non-RAID SCSI card. In essence, sometimes the autoloader/drive completely fails to appear on the SCSI chain (i.e.: there's neither a media changer or tape drive present within the device manager) and sometimes it appears but then subsequently disappears at a seemingly random (yet always inconvenient) time, resulting in backup failures. On most occasions, there are simply no errors logged in the system event log, but I did manage to capture a series of LSI_SCSI event ID 11 ("The driver detected a controller error on \Device\RaidPort0") errors followed by an event ID 129, ("Reset to device, \Device\RaidPort0, was issued") error during testing. I've tried two different cables, both with the same effect – sometimes the autoloader appears (for a while), sometimes it's completely absent. There's only one terminator I've tried to use, but as I've since successfully tested the autoloader on multiple occasions (albeit via a Adaptec U160 card on a different machine), my gut feel is that the issue doesn't lie with the terminator, or indeed the autoloader itself. As such, I'm just wondering if anyone has any ideas? It's most likely not relevant, but this is all under Windows SBS 2008, running Backup Exec 12.5 SBS edition (the Dell version), both fully patched. Addidtionally, the autoloader is running the latest firmware. It's been a while since I've dealt with anything SCSI, so all suggestions will be gratefully, gratefully received.

    Read the article

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for:

      - Affordable: less than $100/yr or first-time cost.
      - Reliable: at least a smaller chance of failing than there is of fire or flood.
      - Easy for initial backup and to add to, and at least semi-easy to recover.

    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.

      - Hard drive - I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to.
      - DVDs - Way too much of a hassle for both backup and restore.
      - Tape - No idea on the affordability of this option.
      - Online - Given that I have at least 300GB already and ever-increasing megapixels means ever-bigger files, and my ISP upload is about 2Mb at the best, this just doesn't sound like a good option for me, but I could be convinced.
      - Other - Have I missed something?

    Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor of a choice as I read, or is it a feasible option? Maybe two to reduce the likelihood of failure?

    Read the article

  • DV-AVI in DivX Plus Player and VirtualDub - playback issue

    - by user714965
    I have an AVI video which I had (digitally) transferred from a DV camera to my PC. The video contains errors at the beginning, most likely because the DV tape was pretty old. When playing this video in the DivX Plus Player I get some picture artefacts and some high noise peaks in the sound. These stop after two seconds. When I'm playing this video in VirtualDub (where I want to cut it) I get the same picture artefacts, but the sound errors (those loud high peaks) last ten seconds. These sound errors are also contained in the cut video. Why does the video have more errors when played in VirtualDub? I think because of different codecs being used for decoding the video? How can I change the codec which VirtualDub uses for decoding? I have installed ffdshow for this, but it seems that it is not used, because I don't get the ffdshow icon in the taskbar when playing the video in VirtualDub. When playing in the DivX Plus Player I do get this icon.

    Read the article

  • How do I prevent spawning of zombie-like apache2 processes on Dreamhost VPS?

    - by Jonathan Hayward
    I have had a website on a DreamHost VPS for months or longer, and I have vague memories, from the initial setup, of having to turn off some customized Apache under /dh to get a standard Apache 2.x to work. Things have been going along on an even keel, but when I started making some changes lately I found that when I tried to bounce Apache (/usr/sbin/apachectl restart), it couldn't bind to port 80, and my site had been converted from a big literature site to a small parking site. I tried to see what was listening on 80, and it was a DreamHost-customized Apache that had spawned. I killed all of them, restarted Apache, and changed the parent directory under /dh to mode 000. That was a day or two ago. I was bouncing Apache again in trying to get a new site to load under HTTPS, and I found that once again DreamHost's Apache had spawned, from the directory I set to mode 000, and once again converted my site to a parking page. I renamed the directory, but I am very skeptical of whether I have permanently killed the DreamHost-customized Apache. Besides duct-tape options like a crontab to kill and delete each minute, how can I permanently turn off the Apache processes that are spawning from a location under /dh and interfering with standard Apache? What should I be doing that I am not? Can DreamHost's technical support stop the interference? Thanks,
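    For the crontab-style duct-tape fallback mentioned above (a stopgap, not the real fix), a minimal sketch could look like the following. It assumes the psutil module is available and that the customized Apache binaries live under /dh, which is a guess based on the question:

        # pip install psutil  (assumed available on the VPS)
        import psutil

        DH_PREFIX = "/dh/"   # assumption: DreamHost's customized Apache lives under /dh

        def kill_dh_apache():
            """Kill any process whose executable lives under /dh and looks like Apache."""
            for proc in psutil.process_iter(["pid", "name", "exe"]):
                try:
                    exe = proc.info.get("exe") or ""
                    name = (proc.info.get("name") or "").lower()
                    if exe.startswith(DH_PREFIX) and ("apache" in name or "httpd" in name):
                        proc.kill()
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    continue

        if __name__ == "__main__":
            kill_dh_apache()   # e.g. run from cron every minute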

    Read the article

  • Archive Manager, SQL 2005 and MaxTokenSize high CPU

    - by Tim Alexander
    So, I posted this question a few days ago: Impact of increasing the MaxTokenSize for Kerberos Tickets. Since then the thought was to test our settings on two member servers, one with IIS and one without. I set up two GPOs to configure the MaxTokenSize reg setting to 48000 and MaxFieldLength/MaxRequestBytes to 64200 (based on MS KB2020943, these are set at 4/3 * T + 200). The member server seemed to work OK (a devalued tape backup server). The IIS server, however, has had some strange repercussions. The IIS server hosts Quest Software Archive Manager (AM) 4.5, which communicates with SQL Server 2005 Enterprise on Server 2003 R2. After the changes all looked good until the SQL Server hit 100% CPU. I have removed the GPOs, removed the reg values and even replaced them with defaults (12000 for token size; I can't remember the other one, but it was in a blog post about the issue linked in my other post). No change. Bouncing the IIS server stops the high CPU, and a colleague has looked at the SQL server and it is definitely the AM connection taking up the time/work on the SQL server. I haven't changed the reg values on the SQL server or the DCs, but am reluctant to do so without understanding why this has happened. I am guessing it's to do with the overriding auth and group issue we have, but I am not seeing Kerberos errors in either event log. Has anyone seen something similar or does anyone have some tips? Was definitely blindsided by the Kerberos issue and am swimming against the tide to keep things functioning.
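    As a quick sanity check on the 4/3 * T + 200 sizing rule quoted above (the helper name is just for illustration):

        def http_sys_limit(max_token_size_bytes):
            # KB 2020943 rule of thumb: MaxFieldLength / MaxRequestBytes = 4/3 * T + 200,
            # where the 4/3 factor allows for base64 expansion of the Kerberos token.
            return max_token_size_bytes * 4 // 3 + 200

        print(http_sys_limit(48000))   # 64200 -- matches the value pushed by the GPO
        print(http_sys_limit(12000))   # 16200 -- for the default token size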

    Read the article

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many, many accessories and computer peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the 80's and 90's... Hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, or the audio tape player accessory for a C64 which let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember that IBM PCjr upgrade kit that added a floppy drive, more memory and the AT switch in the back? I'd love to find either a wiki or a list that has already been assembled which contains many of these weird (or common) accessories. I've had so many over the years I suppose I could start a wiki here if such a list doesn't already exist.

    Read the article

  • Looking for suggestions for hosting Windows 2000 Server in the cloud / VPS / etc?

    - by JohnyD
    I have a Windows 2000 Server, currently virtualized in Hyper-V, that I would like to get running off-site as a backup (cloud, VPS, etc). You can't virtualize in EC2 and I'm fairly certain there are no Server 2000 AMIs floating about (correct me if I'm wrong!). If anyone has a recommendation on how I can get a virtualized Windows 2000 Server running in a secure, remote environment I would be grateful. As far as locations go, I'd be interested in North America as well as Australia and Europe. In a nutshell, we're ploughing our way out of a legacy codebase and this server is the last that remains of the legacy apps. However, it is still very much used by our clients. Everything is backed up each night (data, images, etc) to tape, which is then taken offsite. However, in the event of a fire I would love to have a backup legacy server to point DNS records to. So while I am rebuilding from the ashes, our services would already be available. It would save a lot of time and make my managers all the more happy (and that's what it's all about, riighhtt? :D) Thank you all for your suggestions. Please let me know if I've left out any important information. Additional info: the legacy codebase does not function properly on Server 2003.

    Read the article

  • Does leaving the laptop cord plugged in break the cord?

    - by firedrake
    I recently got a new cord for my laptop because the cord I had before broke. I'm not sure how it got messed up, but one moment it charged my laptop and the next moment it didn't. I then got a new cord. The new cord worked perfectly fine until now, and it has only been a month. The cord won't work unless I am pressing it into the socket, which I am currently doing. The only thing I can find in common with the cords is that I leave them plugged in 24/7. My brother says that is the problem, but I do not think it is. Any tips or hints? I'm going to use duct tape to keep the pressure on the cord till I can find a better solution. (I'm also thinking it could be the hole I plug it into on the back of my computer, but I'm focusing on the other idea for now. If I wiggle the plug in the socket of my computer, it will stop charging unless I'm pressing it in or to the side.) Any help or ideas are appreciated. Not sure if this will help, but I use a Gateway with Windows Vista. Thanks.

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
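    One way to at least watch this wait while a job runs is to sample sys.dm_os_wait_stats during a backup to the UNC path and compare with a backup to local disk. The sketch below uses pyodbc; the driver and server names are placeholders to adjust for your environment:

        # pip install pyodbc -- the connection details below are placeholders, not the poster's setup
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=YOURSQLSERVER;DATABASE=master;Trusted_Connection=yes"
        )
        cursor = conn.cursor()
        cursor.execute("""
            SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
            FROM sys.dm_os_wait_stats
            WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'ASYNC_IO_COMPLETION')
            ORDER BY wait_time_ms DESC
        """)
        for wait_type, tasks, wait_ms, signal_ms in cursor.fetchall():
            print(wait_type, tasks, wait_ms, signal_ms)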

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for the last while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, but also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

      - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
      - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer.
      - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... This is where a lot of speed could be gained.
      - Finally, items not already compressed would be compressed on the cloud side of things, and items that are already on the optimizer would not be sent again. Kind of like rsync or proxy servers...

    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape) or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?

    Read the article

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula Version: 5.2.5

    I have configured Bacula to write volumes to disk; however, Bacula stops writing to the volume as soon as it reaches 2 GB. The file system is not an issue, as I have stored files larger than 2 GB on it.

        06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
        06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.

        backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
        -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005

    bacula-dir.conf:

        Pool {
          Name = Full-Monthly
          Pool Type = Backup
          Recycle = yes
          Volume Retention = 5 months
          Volume Use Duration = 1 day
          Maximum Volumes = 5
          Maximum Volume Bytes = 12gb
        }

    bacula-sd.conf:

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /nfs/backup-pool
          LabelMedia = yes             # lets Bacula label unlabeled media
          Random Access = Yes
          RemovableMedia = no
          AlwaysOpen = no
          Label media = yes
          Maximum Volume Size = 12gb
        }

    In my original configuration Maximum Volume Bytes and Maximum Volume Size were not set at all, and so should have defaulted to no maximum, but that did not work either.

    Read the article

  • Would anyone tell me how to fetch the media:thumb element's attribute from a json feed?

    - by ash
    I made a Yahoo pipe that pulls up the atoms in json format; however, I can fetch and display all the elements in my html page except for the media:thumb element's attribute. Would anyone tell me how to fetch the media:thumb element's attribute from a json feed? I am pasting the html page's code with javascript. If you save the html page and then view it in a browser, you will see that all the necessary elements get output to the html page except for the media:thumb, as I cannot display the attribute of media:thumb when the feed is formatted as json. I am also pasting some portion of the json feed so that you can have an idea of what I am talking about. Please tell me how to retrieve the attribute from the media:thumb element of a json feed by using plain javascript but no server side code or javascript library. Thank you.

        function getFeed(feed){
            var newScript = document.createElement('script');
            newScript.type = 'text/javascript';
            newScript.src = 'http://pipes.yahoo.com/pipes/pipe.run?_id=40616620df99780bceb3fe923cecd216&_render=json&_callback=piper';
            document.getElementsByTagName("head")[0].appendChild(newScript);
        }

        function piper(feed){
            var tmp = '';
            // Loop bound restored below; the HTML markup originally concatenated around each
            // field was stripped when the code was pasted, so only the field accesses remain.
            for (var i = 0; i < feed.value.items.length; i++) {
                tmp += feed.value.items[i].title + '';
                tmp += feed.value.items[i].author.name + '';
                tmp += feed.value.items[i].published + '';
                if (feed.value.items[i].description) {
                    tmp += feed.value.items[i].description + '';
                }
                // The thumbnail URL needs bracket notation because of the colon in the key name:
                // tmp += feed.value.items[i]["media:thumbnail"].url;
                tmp += '<hr>';
            }
            document.getElementById('rssLayer').innerHTML = tmp;
        }

    ..............................................................
    Some portion of the json feed that gets generated by the yahoo pipe
    ..............................................................

    piper({"count":2,"value":{"title":"myPipe","description":"Pipes Output","link":"http:\/\/pipes.yahoo.com\/pipes\/pipe.info?_id=f7f4175d493cf1171aecbd3268fea5ee","pubDate":"Fri, 02 Apr 2010 17:59:22 -0700","generator":"http:\/\/pipes.yahoo.com\/pipes\/","callback":"piper", "items": [{ "rights":"Attribution - Noncommercial - No Derivative Works", "link":"http:\/\/vodo.net\/mixtape1", "y:id":{"value":null,"permalink":"true"}, "content":{"content":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'. We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.","type":"text"}, "author": {"name":"Various"}, "description":"We're proud to be releasing this first VODO MIXTAPE. Actual tape might be a thing of the past, but before P2P, mixtapes were the most popular way of sharing popular culture the world had known -- and once called the 'most widely practiced American art form'.
We want to resuscitate the spirit of the mixtape for this VODO MIXTAPE series: compilations of our favourite shorts, the weird, the wild and the wonky, all brought together in a temporary and uncomfortable company.", "media:thumbnail": { "url":"http:\/\/vodo.net\/\/thumbnails\/Mixtape1.jpg" }, "published":"2010-03-08-09:20:20 PM", "format": { "audio_bitrate":null, "width":"608", "xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat", "channels":"2", "samplerate":"44100.0", "duration":"3092.36", "height":"352", "size":"733925376.0", "framerate":"25.0", "audio_codec":"mp3", "video_bitrate":"1898.0", "video_codec":"XVID", "pixel_aspect_ratio":"16:9" }, "y:title":"Mixtape #1: VODO's favourite short films", "title":"Mixtape #1: VODO's favourite short films", "id":null, "pubDate":"2010-03-08-09:20:20 PM", "y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }}, {"rights":"Attribution - Noncommercial - No Derivative Works","link":"http:\/\/vodo.net\/gilbert","y:id":{"value":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","permalink":"true"},"content":{"content":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","type":"text"},"author":{"name":"Nathaniel Hansen"},"description":"A documentary short about Gilbert, the Beacon Hill \"town crier.\" For the last 9 years, since losing his job and becoming homeless, Gilbert has delivered the weather, sports, and breaking headlines from his spot on the Boston Common. Music (used with permission) in this piece is called \"Blue Bicycle\" by Dusseldorf-based pianist \/ composer Volker Bertelmann also known as Hauschka. Artistic Statement: This is the first in a series of profiles of people who I think are interesting, and who I see on almost a daily basis. I don't want to limit the series to people who live \"on the fringe,\" but it would be appropriate to say that most of the people I interview are eclectic, eccentric, and just a little bit unique. 
The art is in the viewing - but I hope to turn my lens on individuals that don't always color in the lines, whether they can help it or not.","media:thumbnail":{"url":"http:\/\/vodo.net\/\/thumbnails\/gilbert.jpeg"},"published":"2010-03-03-10:37:05 AM","format":{"audio_bitrate":null,"width":"624","xmlns":"http:\/\/xmlns.transmission.cc\/FileFormat","channels":"2","samplerate":null,"duration":"373.673","height":"352","size":"123321266.0","framerate":null,"audio_codec":"mp3","video_bitrate":null,"video_codec":"XVID","pixel_aspect_ratio":"16:9"},"y:title":"Gilbert","title":"Gilbert","id":"cd6584e06ea4ce7fcd34172f4bbd919e295f8680","pubDate":"2010-03-03-10:37:05 AM","y:published":{"hour":"3","timezone":"UTC","second":"0","month":"4","minute":"10","utime":"1270264200","day":"3","day_of_week":"6","year":"2010" }} ] }})

    Read the article

  • Major Analyst Report Chooses Oracle As An ECM Leader

    - by brian.dirking(at)oracle.com
    Oracle announced that Gartner, Inc. has named Oracle as a Leader in its latest "Magic Quadrant for Enterprise Content Management" in a press release issued this morning. Gartner's Magic Quadrant reports position vendors within a particular quadrant based on their completeness of vision and ability to execute. According to Gartner, "Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clearly articulated vision. In the context of ECM, they have strong channel partners, presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technology or vertical market. Leaders deliver a suite that addresses market demand for direct delivery of the majority of core components, though these are not necessarily owned by them, tightly integrated, unique or best-of-breed in each area. We place more emphasis this year on demonstrated enterprise deployments; integration with other business applications and content repositories; incorporation of Web 2.0 and XML capabilities; and vertical-process and horizontal-solution focus. Leaders should drive market transformation." "To extend content governance and best practices across the enterprise, organizations need an enterprise content management solution that delivers a broad set of functionality and is tightly integrated with business processes," said Andy MacMillan, vice president, Product Management, Oracle. "We believe that Oracle's position as a Leader in this report is recognition of the industry-leading performance, integration and scalability delivered in Oracle Enterprise Content Management Suite 11g." With Oracle Enterprise Content Management Suite 11g, Oracle offers a comprehensive, integrated and high-performance content management solution that helps organizations increase efficiency, reduce costs and improve content security. In the report, Oracle is grouped among the top three vendors for execution, and is the furthest to the right, placing Oracle as the most visionary vendor. This vision stems from Oracle's integration of content management right into key business processes, delivering content in context as people need it. Using a PeopleSoft Accounts Payable user as an example, as an employee processes an invoice, Oracle ECM Suite brings that invoice up on the screen so the processor can verify the content right in the process, improving speed and accuracy. Oracle integrates content into business processes such as Human Resources, Travel and Expense, and others, in the major enterprise applications such as PeopleSoft, JD Edwards, Siebel, and E-Business Suite. As part of Oracle's Enterprise Application Documents strategy, you can see an example of these integrations in this webinar: Managing Customer Documents and Marketing Assets in Siebel. You can also get a white paper of the ROI Embry Riddle achieved using Oracle Content Management integrated with enterprise applications. Embry Riddle moved from a point solution for content management on accounts payable to an infrastructure investment - they are now using Oracle Content Management for accounts payable with Oracle E-Business Suite, and for student on-boarding with PeopleSoft e-Campus. They continue to expand their use of Oracle Content Management to address further use cases from a core infrastructure. Oracle also shows its vision in the ability to deliver content optimized for online channels. 
Marketers can use Oracle ECM Suite to deliver digital assets and offers as part of an integrated campaign that understands website visitors and ensures that they are given the most pertinent information and offers. Oracle also provides full lifecycle management through its built-in records management. Companies are able to manage the lifecycle of content (both records and non-records) through built-in retention management. And with the integration of Oracle ECM Suite and Sun Storage Archive Manager, content can be routed to the appropriate storage media based upon content type, usage data or other business rules. This ensures that the most accessed content is instantly available, and archived content is stored on a more appropriate medium like tape. You can learn more in this webinar - Oracle Content Management and Sun Tiered Storage. If you are interested in reading more about why Oracle was chosen as a Leader, view the Gartner Magic Quadrant for Enterprise Content Management.

    Read the article

  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners,

    We would like to inform you that the November '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available

    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Manager Ops Center 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Posts New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 - System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems

    Learning & Events: Recently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine

    References: ARTstor Selects Oracle's Sun ZFS Storage 7420 Appliances To Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand; Technology on Demand

    How to: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook

    You find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. Link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, has recently been acquired by Oracle. Following this acquisition, one might ask why it is important to Oracle and how Oracle's storage is going to benefit in the long term from this virtualized infrastructure layer. Well, the first thing to understand is that virtual networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or intranet. It is also, for a large part, about connecting servers to NAS and SAN storage. Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the operating system level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. Here is a typical view of a rack back-end when no virtual networking is used. We consider that today:

      - 75% of users have ten or more Ethernet ports per server
      - 85% of users have two or more SAN ports per server
      - 58% have had to add connectivity to a server specifically for VMs
      - 65% consider cable reduction a priority

    The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, the picture on the left shows what one wants to get for the back-end of customer's racks: just a couple of connections on each physical server to provide a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high performance and agile solution for the data center. Practically speaking, and for the sake of simplicity, imagine that each server just hosts a minimal number of physical InfiniBand HCAs (Host Channel Adapters) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNIC and vHBA) that will be seen by the servers as standard NICs and HBAs, and associate them with networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring and non-disruptive reconfiguration, migration and scalability of networking infrastructure.
    So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage in the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities and optimized integration resulting in higher performance and unique features/functions. Of course, all this is not going to be done overnight, and the future will tell us which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms, root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In the LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:

        root@sun:~# ldm ls-io
        NAME                         TYPE   BUS    DOMAIN   STATUS
        ----                         ----   ---    ------   ------
        pci_0                        BUS    pci_0  primary
        pci_1                        BUS    pci_1  primary
        pci_2                        BUS    pci_2  primary
        pci_3                        BUS    pci_3  primary
        /SYS/MB/PCIE1                PCIE   pci_0  primary  EMP
        /SYS/MB/SASHBA0              PCIE   pci_0  primary  OCC
        /SYS/MB/NET0                 PCIE   pci_0  primary  OCC
        /SYS/MB/PCIE5                PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE6                PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE7                PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE2                PCIE   pci_2  primary  EMP
        /SYS/MB/PCIE3                PCIE   pci_2  primary  OCC
        /SYS/MB/PCIE4                PCIE   pci_2  primary  EMP
        /SYS/MB/PCIE8                PCIE   pci_3  primary  EMP
        /SYS/MB/SASHBA1              PCIE   pci_3  primary  OCC
        /SYS/MB/NET2                 PCIE   pci_3  primary  OCC
        /SYS/MB/NET0/IOVNET.PF0      PF     pci_0  primary
        /SYS/MB/NET0/IOVNET.PF1      PF     pci_0  primary
        /SYS/MB/NET2/IOVNET.PF0      PF     pci_3  primary
        /SYS/MB/NET2/IOVNET.PF1      PF     pci_3  primary

    All of the "PCIE" type devices are available for DIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature. The documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide. There are several important things to note and consider here:

      - The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
      - Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.
      - There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
      - The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
      - This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain) it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in the chapter "Configuring Domain Dependencies".
      - Please consult the Admin Guide in the section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details!

    As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.
    Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.

    Read the article
