Search Results

Search found 11084 results on 444 pages for 'storage media'.


  • Feedback on available mid-to-enterprise level desktop backup solutions [closed]

    - by user85610
    I am involved in the creation of a new backup solution to replace our current Retrospect setup, which has become a significant time sink to administer. We have almost 200 desktop and some laptop clients, both Windows and OS X. We're only interested in products oriented around disk-to-disk backup that would integrate well with our current set of nine NAS devices as target storage. I'd just like some feedback from anyone out there, as it's sometimes difficult otherwise to find objective reviews of software at this level. Both data and time are important enough that we need a reliable solution which won't be prone to self-destruction as often as Retrospect. Bonus points for de-duplication, which might help squeeze more service time out of our NAS setup in terms of capacity. Currently considering Commvault and NetBackup. Many other products I've seen don't have an OS X client. Any thoughts?


  • What is the best filesystem for storing thousands of files in one dictionary-like id-blob structure?

    - by Ivan
    What filesystem best suits my needs?

    - Thousands or even millions of files in one directory.
    - Good (ext4 & NTFS level or close) reliability (incl. fault tolerance) and access speed.
    - No directories actually needed, nor descriptive names - just a dictionary-like structure of id-blob pairs is all I need.
    - No links, attributes, or access control features needed.

    The purpose is a file storage where all the metadata (data describing all the facts about what the file actually contains and who can access it) is stored in a MySQL database. As far as I know, common filesystems like NTFS and ext3/4 can go dead-slow if there are too many files placed in one directory - that's why I ask.
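
    A common workaround, whatever filesystem you end up on, is to fan the ids out over a shallow directory tree so no single directory grows unbounded. A minimal sketch in Python - the store root and the two-level split are illustrative assumptions, not a recommendation for a particular filesystem:

        import hashlib
        import os

        STORE_ROOT = "/var/blobstore"  # hypothetical mount point of the blob store

        def blob_path(blob_id: int) -> str:
            """Map an id to a two-level fan-out path, e.g. ab/cd/123456.

            Hashing keeps the tree balanced even when ids are sequential,
            so no directory ever holds more than a small slice of the files.
            """
            digest = hashlib.md5(str(blob_id).encode()).hexdigest()
            return os.path.join(STORE_ROOT, digest[:2], digest[2:4], str(blob_id))

        def write_blob(blob_id: int, data: bytes) -> None:
            path = blob_path(blob_id)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as f:
                f.write(data)

    With two hex characters per level this gives 65,536 leaf directories, so even ten million files works out to roughly 150 per directory - comfortably below the point where ext3/4 or NTFS directory lookups start to degrade.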


  • Transfer Raid Drives to External Enclosure

    - by dubbeat
    I have 2 RAID disks (a grand total of 360 GB) in my laptop. I'm fast running out of space and want to install new drives. I've a pretty good idea how to do this. My question is: what can I do with the drives that I remove? I've got lots of media files on these drives that I'd like to keep, and maybe transfer back onto my laptop once I have the new drives installed. Bearing in mind that I know next to nothing about hardware, how do you suggest I go about reusing the removed drives? Thanks,


  • What is the value/cost of enabling "spread spectrum clocking" on my hard drives?

    - by Stu Thompson
    I'm building up a biggish NAS box (10x WD RE4 2TB SATA RAID10) and ran into some problems. During the course of my research, debugging, investigations, etc, I discovered a jumper on the physical drives labeled "spread spectrum clocking". After some googling about this on teh internets, it seems to be a feature that some suggest (without reference or explanation) enabling in 'a storage configuration', making the drive less susceptible to EMI. But why? I've got three core questions:

    - Why is this feature not enabled by default?
    - What are the actual benefits?
    - Are there any costs?


  • Swap files in Cloud Infrastructures

    - by ffeldhaus
    At our company we set up an OpenStack cloud and are currently creating internal guidelines for the creation of OS templates/images. One controversial topic was whether we should provide swap inside the VM templates. Therefore I'd like to ask the following questions:

    - From an elastic cloud provider's point of view, does it make sense to offer swap partitions/files in the VM templates, or is swap not needed when a VM can be resized?
    - Which scenarios necessarily demand a swap file to be present?
    - What kind of storage should be used for swap files (e.g. local/central, FC/iSCSI/NFS)?
    - Are there any best practices for offering swap files in a performant way in cloud infrastructures?


  • Bsplayer - load audio tracks from external files

    - by torran
    I have a movie file:

        Video
        ID : 1
        Format : AVC
        Format/Info : Advanced Video Codec
        Format profile : [email protected]
        Format settings, CABAC : Yes
        Format settings, ReFrames : 5 frames
        Muxing mode : Container [email protected]
        Codec ID : V_MPEG4/ISO/AVC
        Duration : 54mn 13s
        Bit rate : 3 380 Kbps
        Nominal bit rate : 3 459 Kbps
        Width : 1 280 pixels
        Height : 720 pixels
        Display aspect ratio : 16:9
        Frame rate : 23.976 fps
        Resolution : 8 bits
        Colorimetry : 4:2:0
        Scan type : Progressive
        Bits/(Pixel*Frame) : 0.153
        Stream size : 1.28 GiB (88%)
        Writing library : x264 core 88 r1471 1144615

        Audio
        ID : 2
        Format : AC-3
        Format/Info : Audio Coding 3
        Codec ID : A_AC3
        Duration : 54mn 16s
        Bit rate mode : Constant
        Bit rate : 384 Kbps
        Channel(s) : 6 channels
        Channel positions : Front: L C R, Side: L R, LFE
        Sampling rate : 48.0 KHz
        Stream size : 149 MiB (10%)

    and additional audio files in the same folder: .mp3 and .ac3. How can I load them with BSplayer? Right click - Audio - Audio streams is empty. If I open the movie with Media Player Classic I can switch audio tracks.


  • Best (physical) DRM free MP3 players [closed]

    - by alex
    I'm looking to purchase an MP3 player soon. It should:

    - Be compatible with Windows Media Player
    - Hold at least 40 GB
    - Be completely DRM free
    - Be reliable and well built. I don't want to repeat my iRiver experience.
    - Be small enough to be comfortably carried in my pocket. I don't care about looks, this can be the ugliest beast ever.

    Knowing this, what should I buy? [I figured this is almost on topic for Super User, if not: vote to close it.]


  • Any recommendations on a NAS for a home-super-user?

    - by marc_s
    Can anyone recommend a good NAS for use in a home-server environment? I would request at least 2, preferably 4 disks, and I am most interested in good to excellent throughput for file-server and backup purposes - don't need any of the fancy media-streaming or -sharing features, that's not of interest to me. For a 4 or more disk solution, support for the various RAID levels (0, 1, 1+0, 5) would be a plus - especially if supported in hardware (rather than just a software emulation). I just need a place to put my collection of data, ISO images, and so forth - and since several external disks (self-built and off-the-shelf) have failed so far, I'm looking into a more reliable solution. Marc


  • Do you lose everything when you have a hard disk failure in a multi-hard disk LVM that does NOT use RAID?

    - by user72630
    I'm debating about using LVM for a media/file server because I would like to combine multiple physical hard disks into one volume. I do not wish to use any RAID in my LVM, so my question is: if one of the multiple hard disks in my volume were to go down, would I lose all my data, or would I just lose the data that was stored on that individual disk? Also, if I were to just lose the data on the individual disk, would it be as simple as replacing that disk and restoring what was on it from a backup? Thanks everyone.


  • Should you archive documents before backing up to the cloud?

    - by gabbsmo
    I'm planning to add cloud storage to my personal backup strategy. But now I wonder if it really is worth the trouble of compressing my documents and photos. The Open XML formats already use zip compression and JPEG is a lossy image format, so there really isn't much benefit in compressing: 20 MB of documents become about 17 MB at the ULTRA preset of 7-Zip. One benefit I can imagine is that you can shorten upload time by archiving the folders, since it minimizes the number of requests that need to be sent to the server on upload and download. So what are your thoughts and experience on this issue? Should you archive your documents before backing them up to the cloud?
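
    One way to settle it for your own data is to measure the ratio directly before committing to an archiving step. A rough sketch in Python - the folder name is a placeholder:

        import os
        import zipfile

        def compression_ratio(folder: str, archive: str = "test.zip") -> float:
            """Zip a folder and return compressed/original size, to check whether
            archiving is worth it for already-compressed formats like JPEG or .docx."""
            original = 0
            with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as zf:
                for root, _dirs, files in os.walk(folder):
                    for name in files:
                        path = os.path.join(root, name)
                        original += os.path.getsize(path)
                        zf.write(path, arcname=os.path.relpath(path, folder))
            return os.path.getsize(archive) / original

        print(compression_ratio("Documents"))  # e.g. ~0.85 for documents, near 1.0 for photos

    If the ratio comes back near 1.0, the only remaining argument for archiving is the per-file request overhead of the upload, which matters most when you have many small files.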


  • Can I run Win7 virtualized for my HTPC?

    - by Daniel Schaffer
    I'm currently running Vista for my HTPC, and am planning on upgrading to Win7 soon. However, I've been considering installing it as a VM so that I can run Windows Server 2008 and/or Windows Home Server. The single requirement is that the HTPC must boot up to Windows Media Center with absolutely no user intervention. I need to be able to hit the power button and have it go. I've got this working currently, so I don't need to keep a keyboard or mouse plugged in - all I use is my remote. If possible, I'd love to be able to do these other things:

    - Use Win2k8 Server as a VM host for Win7 Pro and WHS. This also lets me run IIS7 for doing ASP.NET development.
    - Use WHS for all the wonderful things it does for a home network.

    Are either of the two optional things possible while meeting the WMC requirement?


  • Safest snapshot of a failing harddrive?

    - by ironfroggy
    I have a headless machine that stopped booting, so I pulled it out for diagnostics and got a message that one of the hard drives was about to fail. I pulled them all out, and I need to get everything off before figuring out which I need to get rid of. I wasn't sure which drive was failing, because it only said "Harddrive 1" and I don't know which one it referred to. I'm wondering about the best way to get everything off. I'm worried that if I copy everything, I could get corrupt data and not realize some files are wrong until the drive is completely out of commission. What are my best options to get everything off in a way I can safely move to new storage?
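
    For what it's worth, the standard tool for this situation is GNU ddrescue, which copies the readable sectors first and logs the unreadable ones rather than aborting. The core idea can be sketched in Python - the device and file names here are hypothetical, and a real rescue should use ddrescue itself:

        # Sketch of a rescue copy: read the source device in fixed chunks; when a
        # chunk fails, pad it with zeros and log the offset instead of aborting.
        # Requires root to read a raw device; point it at the failing disk only.
        CHUNK = 1024 * 1024

        def rescue_copy(src="/dev/sdb", dst="failing-disk.img", log="bad-chunks.txt"):
            with open(src, "rb", buffering=0) as s, \
                 open(dst, "wb") as d, open(log, "w") as l:
                offset = 0
                while True:
                    try:
                        s.seek(offset)
                        data = s.read(CHUNK)
                    except OSError:
                        data = b"\x00" * CHUNK  # unreadable chunk: pad and note it
                        l.write(f"{offset}\n")
                    if not data:
                        break  # end of device
                    d.write(data)
                    offset += CHUNK

    The log of bad offsets addresses the "which files are wrong" worry: anything the image couldn't read is recorded, rather than being silently corrupt.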


  • Send chrome tab from w7 laptop to w7 HTPC? (Like with iPad to AppleTV)

    - by Justin
    For the last couple days I've been trying to figure out how to get open tabs syncing between chrome installs on different computers to no avail. (if it's supposed to work the way I think it should, that is.) I have a laptop that I do all my web browsing on. Once in a while I'll come across some video that's worthy of the big-screen and the surround sound and want to open that tab (or media) on the HTPC. It'd be nice if I could just 'Right click Send to HTPC' and it opens up there with no further hassle. But even opening chrome on the HTPC and finding all my current tabs waiting would be fine. Alas, open tabs syncing doesn't seem to actually open tabs on other devices for me. Has anyone come up with a way to accomplish anything similar? Thanks all!


  • Can different drive speeds and sizes be used in a hardware RAID configuration w/o affecting performance?

    - by R. Dill
    Specifically, I have a RAID 1 array configuration with two 500gb 7200rpm SATA drives mirrored as logical drive 1 (a) and two of the same mirrored as logical drive 2 (b). I'd like to add two 1tb 5400rpm drives in the same mirrored fashion as logical drive 3 (c). These drives will only serve as file storage with occasional but necessary access, and therefore, space is more important than speed. In researching whether this configuration is doable, I've been told and have read that the array will only see the smallest drive size and slowest speed. However, my understanding is that as long as the pairs themselves aren't mixed (and in this case, they aren't) that the array should view and use all drives at their actual speed and size. I'd like to be sure before purchasing the additional drives. Insight anyone?


  • Ubuntu USB flash boot drive gets spontaneous "Unhandled sense code" error and causes drive to switch to Write protected

    - by Steve
    What happens is that the system runs fine for several days or even a week and then suddenly the root file-system / goes read-only. Looking at the syslog it shows that there was an 'Unhandled sense code'. This is under Ubuntu 10.04 but I saw the same thing with Ubuntu 9 with different flash media.

        /dev/sdg1 on / type ext4 (rw,errors=remount-ro)

        Jun 26 08:50:04 host1 kernel: [926247.565090] sd 5:0:0:0: [sda] Unhandled sense code
        Jun 26 08:50:04 host1 kernel: [926247.565094] sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 26 08:50:04 host1 kernel: [926247.565098] sd 5:0:0:0: [sda] Sense Key : Data Protect [current]
        Jun 26 08:50:04 host1 kernel: [926247.565103] sd 5:0:0:0: [sda] Add. Sense: Write protected
        Jun 26 08:50:04 host1 kernel: [926247.565108] sd 5:0:0:0: [sda] CDB: Write(10): 2a 00 00 46 29 18 00 00 08 00
        Jun 26 08:50:04 host1 kernel: [926247.565117] end_request: I/O error, dev sda, sector 4598040
        Jun 26 08:50:04 host1 kernel: [926247.569788] Buffer I/O error on device sda1, logical block 574499
        Jun 26 08:50:04 host1 kernel: [926247.574677] lost page write due to I/O error on sda1


  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

        public void AddItemSecurity(int itemId, int[] userIds)
        public int[] GetValidItemIds(int userId)

    Initially I'm thinking storage of the form:

        itemId -> userId, userId, userId
        userId -> itemId, itemId, itemId

    AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item ids are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be sub-second. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list.

    I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework, I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The itemId -> userId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.

    Question: Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

    - Table with two columns (userid, itemid), both Int
    - Clustered index on the two columns
    - Added ~800,000 items for 180 users - a total of 144 million rows
    - Allocated 4 GB RAM for SQL Server
    - Dual core 2.66 GHz laptop
    - SSD disk
    - Use a SqlDataReader to read all itemids into a List
    - Loop over all users

    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still ok. From there on the results worsen: adding a third thread brings a lot of the queries up to 2 seconds, a fourth thread up to 4 seconds, and a fifth spikes some of the queries up to 50 seconds. The CPU is maxed out while this is going on, even on one thread. My test app takes some of it due to the tight loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But this makes it harder to remove items.
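
    For what it's worth, the memory-mapped bitmap idea described above can be sketched in a few lines of Python (the file naming scheme and the ten-million-items-per-year cap are assumptions for illustration; item_seq would be the itemId with the year prefix stripped):

        import mmap
        import os

        MAX_ITEMS_PER_YEAR = 10_000_000
        BITMAP_BYTES = MAX_ITEMS_PER_YEAR // 8 + 1

        def open_user_bitmap(user_id: int, year: int) -> mmap.mmap:
            """One memory-mapped bitmap per (user, year); bit n = item n is visible."""
            path = f"user_{user_id}_{year}.bits"
            if not os.path.exists(path):
                with open(path, "wb") as f:
                    f.truncate(BITMAP_BYTES)  # sparse file until bits get set
            f = open(path, "r+b")
            return mmap.mmap(f.fileno(), BITMAP_BYTES)

        def set_item(bits: mmap.mmap, item_seq: int, allowed: bool) -> None:
            byte, bit = divmod(item_seq, 8)
            if allowed:
                bits[byte] |= 1 << bit
            else:
                bits[byte] &= ~(1 << bit) & 0xFF

        def has_item(bits: mmap.mmap, item_seq: int) -> bool:
            byte, bit = divmod(item_seq, 8)
            return bool(bits[byte] >> bit & 1)

    The OS page cache then does the caching for free, which is exactly the property the question is after; a GetValidItemIds call becomes a linear scan over roughly 1.25 MB per user per year.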


  • Delphi - Using DeviceIoControl passing IOCTL_DISK_GET_LENGTH_INFO to get flash media physical size (Not Partition)

    - by SuicideClutchX2
    Alright, this is the result of a couple of other questions. It appears I was doing something wrong with the suggestions, and at this point I have come up with an error when using the suggested API to get the media size. For those new to my problem: I am working at the physical disk level, not within the confines of a partition or file system. Here is the pastebin code for the main unit (Delphi 2009) - http://clutchx2.pastebin.com/iMnq8kSx Here is the application source and executable, with a form built to output the status of what's going on - http://www.mediafire.com/?js8e6ci8zrjq0de It's probably easier to use the download, unless you're just looking for problems within the code. I will also paste the code here.

        unit Main;

        interface

        uses
          Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls,
          Forms, Dialogs, StdCtrls;

        type
          TfrmMain = class(TForm)
            edtDrive: TEdit;
            lblDrive: TLabel;
            btnMethod1: TButton;
            btnMethod2: TButton;
            lblSpace: TLabel;
            edtSpace: TEdit;
            lblFail: TLabel;
            edtFail: TEdit;
            lblError: TLabel;
            edtError: TEdit;
            procedure btnMethod1Click(Sender: TObject);
          private
            { Private declarations }
          public
            { Public declarations }
          end;

          TDiskExtent = record
            DiskNumber: Cardinal;
            StartingOffset: Int64;
            ExtentLength: Int64;
          end;
          DISK_EXTENT = TDiskExtent;
          PDiskExtent = ^TDiskExtent;

          TVolumeDiskExtents = record
            NumberOfDiskExtents: Cardinal;
            Extents: array[0..0] of TDiskExtent;
          end;
          VOLUME_DISK_EXTENTS = TVolumeDiskExtents;
          PVolumeDiskExtents = ^TVolumeDiskExtents;

        var
          frmMain: TfrmMain;

        const
          FILE_DEVICE_DISK = $00000007;
          METHOD_BUFFERED = 0;
          FILE_ANY_ACCESS = 0;
          IOCTL_DISK_BASE = FILE_DEVICE_DISK;
          IOCTL_VOLUME_BASE = DWORD('V');
          IOCTL_DISK_GET_LENGTH_INFO = $80070017;
          IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS =
            ((IOCTL_VOLUME_BASE shl 16) or (FILE_ANY_ACCESS shl 14) or
             (0 shl 2) or METHOD_BUFFERED);

        implementation

        {$R *.dfm}

        function GetLD(Drive: Char): Cardinal;
        var
          Buffer: String;
        begin
          Buffer := Format('\\.\%s:', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ or GENERIC_WRITE,
            FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          if Result = INVALID_HANDLE_VALUE then
          begin
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ,
              nil, OPEN_EXISTING, 0, 0);
          end;
        end;

        function GetPD(Drive: Byte): Cardinal;
        var
          Buffer: String;
        begin
          if Drive = 0 then
          begin
            Result := INVALID_HANDLE_VALUE;
            Exit;
          end;
          Buffer := Format('\\.\PHYSICALDRIVE%d', [Drive]);
          Result := CreateFile(PChar(Buffer), GENERIC_READ or GENERIC_WRITE,
            FILE_SHARE_READ, nil, OPEN_EXISTING, 0, 0);
          if Result = INVALID_HANDLE_VALUE then
          begin
            Result := CreateFile(PChar(Buffer), GENERIC_READ, FILE_SHARE_READ,
              nil, OPEN_EXISTING, 0, 0);
          end;
        end;

        function GetPhysicalDiskNumber(Drive: Char): Byte;
        var
          LD: DWORD;
          DiskExtents: PVolumeDiskExtents;
          DiskExtent: TDiskExtent;
          BytesReturned: Cardinal;
        begin
          Result := 0;
          LD := GetLD(Drive);
          if LD = INVALID_HANDLE_VALUE then Exit;
          try
            DiskExtents := AllocMem(Max_Path);
            DeviceIOControl(LD, IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS, nil, 0,
              DiskExtents, Max_Path, BytesReturned, nil);
            if DiskExtents^.NumberOfDiskExtents > 0 then
            begin
              DiskExtent := DiskExtents^.Extents[0];
              Result := DiskExtent.DiskNumber;
            end;
          finally
            CloseHandle(LD);
          end;
        end;

        procedure TfrmMain.btnMethod1Click(Sender: TObject);
        var
          PD: DWORD;
          CardSize: Int64;
          BytesReturned: DWORD;
          CallSuccess: Boolean;
        begin
          PD := GetPD(GetPhysicalDiskNumber(edtDrive.Text[1]));
          if PD = INVALID_HANDLE_VALUE then
          begin
            ShowMessage('Invalid Physical Disk Handle');
            Exit;
          end;
          CallSuccess := DeviceIoControl(PD, IOCTL_DISK_GET_LENGTH_INFO, nil, 0,
            @CardSize, SizeOf(CardSize), BytesReturned, nil);
          if not CallSuccess then
          begin
            edtError.Text := IntToStr(GetLastError());
            edtFail.Text := 'True';
          end
          else
            edtFail.Text := 'False';
          CloseHandle(PD);
        end;

        end.

    I placed a second method button on the form so I can write a different set of code into the app if I feel like it. Only minimal error handling and safeguards are there; nothing that wasn't necessary for debugging this via source. I tried this on a Sony Memory Stick using a PSP as the reader, because I can't find the adapter for using a Duo in my machine. The target is an MS, and half of my users use a PSP for a reader, half don't. However, this should work fine on SD cards, and that is a secondary target for my work as well. I tried this on a USB memory card reader and several SD cards.

    Now that I have fixed my attempt, I get an error returned: 50 ERROR_NOT_SUPPORTED (the request is not supported). I have found an application that uses this API, as well as a lot of related functions, for what I am trying to do. I am getting ready to look into it; the application is called DriveImage and its source is here - http://sourceforge.net/projects/diskimage/ The only thing I have really noticed from that application is their use of TFileStream, and using that to get a handle on the physical disk.


  • Silverlight 5 Hosting :: Features in Silverlight 5 and Release Date

    - by mbridge
    Silverlight 5 was finally announced at the Silverlight FireStarter event on December 2, 2010. This new version, earlier labeled 'the future of Microsoft Silverlight', has now come much closer to going live, as the first Silverlight 5 beta is expected to ship during the early months of 2011. For the full-fledged, final release of Silverlight 5, however, we will have to wait several more months: it is likely to be made available in Q3 2011. As expected, this latest edition features many new capabilities, extending developer productivity to a whole new dimension of premium media experiences and feature-rich business applications. It comes with many feature updates as well as new technologies that raise the standard of Silverlight applications, which are now fine-tuned to produce next-generation business and media solutions capable of meeting the requirements of advanced web-based app development.

    Silverlight 5 is set to replace the previous fourth version; it includes more than forty new features while also dropping various deprecated elements that were prevalent earlier. It brings some major performance enhancements and better support for various other tools and technologies. Following are some of the changes registered to be available in the Silverlight 5 beta edition, which is scheduled to launch in Q1 2011.

    Silverlight 5 : Premium Media Experiences

    The media features of Silverlight 5 have seen major enhancements, with many optimizations made to deliver richer solutions. Its capability has been extended to make things easier and faster, and to perform the desired tasks in the most efficient manner. Silverlight media solutions have already been adopted by many companies in recent years, with various on-demand Silverlight services featured, but with the arrival of the next-generation premium media stack of Silverlight 5 it is expected to register new heights of success and global user acclaim in many esteemed web-based projects and media solutions.

    - The most notable element of the new Silverlight 5 is its support for GPU-based hardware acceleration, which is intended to lower CPU load to a significant extent and thereby allow faster rendering of media content without consuming as many resources. This feature is believed to be particularly helpful for low-spec machines, letting them play full HD media content without the lag caused by processor load. It is hence one great feature for bringing high-quality media content to the web more efficiently, with hardware-decoded video playback capabilities.

    - Along with hardware video decoding to minimize processor load, Silverlight 5 also comes with another optimization to reduce power consumption, with new methods for dealing with power-saver settings. With this optimization in effect, the computer is automatically allowed to switch to sleep mode while no video playback is in progress, and screensavers are prevented from popping up and causing annoyances during playback. Other power-saver options will be made available to best suit the user's requirements and purpose.

    - The Silverlight TrickPlay feature is another great way to tweak Silverlight-powered media content, as used by many video tutorial sites or for dealing with presentations. It enables the user to slow down or speed up playback as required without compromising the quality of the output. Normally such manipulation makes the content's audio go off-pitch, but that is not the case with TrickPlay: the audio progresses seamlessly with the video without skipping any part.

    - In addition to all of the above, the new Silverlight 5 features wireless control of media content using remote controllers. With such remote devices it is easier to handle the various media playback controls, providing more freedom while enjoying the premium media services.

    Silverlight 5 : Business Application Development

    The application development standard has been extended with more possibilities by bringing forth new and useful technologies, and also by reviving existing methods to work better than they used to. From UI improvements to advanced technical aspects, Silverlight 5 scores high on all grounds for producing great next-generation business applications, putting more creativity and a resourceful touch into all the apps built with it.

    - The WPF side of Silverlight is made more effective by new databinding standards intended to improve the productivity of the Silverlight application developer. It brings a lot of convenience in debugging databinding components and expressions, making things work flawlessly. Additional databinding features include Ancestor RelativeSource, implicit DataTemplates, and Model View ViewModel (MVVM) support with the DataContextChanged event, among many others.

    - It now comes with refined text and printing services that facilitate clearer text rendering, along with many positive changes applied to the layout pattern. New support has been added for OpenType fonts, multi-column text, linked text containers, and character leading, to name a few of the available features. This also includes some important printing aspects, like the Postscript Vector Printing API, which allows printing tasks to be programmed in a user-defined way, and Pivot functionality for information visualization.

    - Graphics support is a key improvement being incorporated, now enabling three-dimensional graphics using GPU acceleration. It can provide some really cool visualizations within business apps, with support for full HD content at 1080p quality.

    - Silverlight 5 includes support for 64-bit operating systems and the relevant browsers, and is also optimized to provide better performance. It supports background threads for networking, which can reduce network latency to a considerable extent. The out-of-browser functionality adds support for various libraries and also the Win32 API. It also comes with testing support in VS 2010, which is mostly automated, and increases the security of Silverlight 5 applications through an improved version of group policy support.


  • How do I test if storage-conf is being loaded in Cassandra 0.7.3?

    - by user657253
    I have installed Cassandra and gotten it working on two machines, and have followed the instructions to hook them up to each other by configuring the storage-conf.xml files. Both machines respond well to Thrift and to the command-line cassandra client. This is the tutorial I used to set up the storage-conf.xml files. The tutorial says that if I run netstat, I should NOT see Cassandra bound to 127.0.0.1 on my seed node; I should see it bound to my internal IP, which I have configured in the storage-conf.xml file. I have rebooted the servers and relaunched Cassandra. Still, I see the localhost address instead of the correct internal IP address. Is my .yaml file overriding the storage-conf.xml file? If so, how do I delete the appropriate things in the .yaml? Or how do I tell Cassandra to look for my storage-conf.xml file? A few things I have tried: renaming the cassandra.yaml file - what happens is that Cassandra will not load. If I rename the storage-conf.xml, Cassandra does load. When I installed Cassandra, it did not come with a storage-conf.xml file; I had to grab it off the Apache wiki.


  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro using a direct Ethernet cable. Also, an external hard drive (laptop drive) is connected to the Raspberry Pi using the USB port. However, backups are pretty slow. Activity Monitor claims that the network is transferring a very steady 5 Mb/s, whereas my Time Capsule transfers up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi self-reports (top) that its CPU is only half-used, with about equal parts afpd, usb-storage and jbd2/sda1-8. Thus, I think the processing power of the Raspberry Pi is not the problem here. To me, this looks like there is some kind of bottleneck that maxes out at 5 Mb/s, potentially having my backups run at less than their potential speed. To the best of my knowledge, this might be the AFP daemon, the USB bus, or the external hard drive. So, my question is: how can I identify the true culprit, and what can I do about it?
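
    A quick way to narrow it down is to benchmark each leg separately. This sketch times a raw sequential write on the Pi itself, taking afpd and the network out of the loop - the mount point is a placeholder:

        import os
        import time

        def write_throughput(path="/mnt/backup/speedtest.tmp", mb=256):
            """Time a sequential write to the backup disk, bypassing AFP, to see
            whether the USB/disk leg alone already caps out near the observed rate."""
            chunk = b"\x00" * (1024 * 1024)
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(mb):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())  # make sure the data actually hits the disk
            elapsed = time.time() - start
            os.remove(path)
            return mb / elapsed

        print(f"{write_throughput():.1f} MB/s")

    If the raw disk turns out to be much faster than the backup rate, suspect afpd or the network path instead; note that on the Raspberry Pi the Ethernet port itself hangs off the USB controller, so disk and network traffic share that bus.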


  • VMWare Newbie - looking for hardware recommendations and help :) [closed]

    - by Dan
    I am looking for some hardware recommendations on an upcoming virtualization project. We are a small company (80 users - 25 in site 1, 55 in site 2) currently using Windows Server 2003 - no VM servers yet. Our AD is set up so that site 1 is the root domain and site 2 is a subdomain/subnet - connected by T1, with VPN for failover. The current DCs also serve as file servers, print servers, and AntiVirus servers. Email is in the cloud. Additionally, in site 1 we have 3 more member servers - one running IBM WebSphere for a customer-specific app, one running Infor PowerLink (no real heavy load), and another that we use for Visual Studio apps and that also runs DirSync for Exchange Online. No heavy workloads on any of these machines really. We also have an AS400 box that we run ERP/CRM software on, which site 2 connects to over the WAN link. In site 2 we also have a SQL machine that runs on Win2K Server. Database files are not large - less than 5 GB - with a light to medium workload on this machine. File servers in each site store less than 500 GB of data and probably won't grow to more than 1 TB in the next 5 years. I am looking to go to VMware in both sites and virtualize all servers. What recommendations do you have for server and storage hardware? Is it safe to virtualize all of your DCs? Any help or advice would be greatly appreciated. Thanks.


  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.

    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.

    Put another way: as time progresses this store will grow. I want the best balance of current performance and performance as my needs increase - say I double or triple my storage. I need to be able to address both current needs and projected future growth. I need to both plan ahead and not sacrifice too much of current performance.

    What I've come up with: I'm already thinking about using a hash split every so many characters to split things out across multiple directories, keeping the trees even - very similar to what's outlined in the comments on the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined, depending on the initial scale. As far as I can figure, there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
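
    The split length and nesting depth can be reduced to a small formula: with a fan-out of F subdirectories per hash-split level and a target of at most P files per leaf directory, you need roughly ceil(log_F(N / P)) levels for N files. A sketch, where the fan-out of 16 (one hex character per level) and the 1000-files-per-directory target are tunable assumptions:

        import math

        def split_depth(total_files: int, fanout: int = 16, per_dir: int = 1000) -> int:
            """Nesting levels needed so no directory exceeds per_dir files, when
            each hash-split level fans out into `fanout` subdirectories."""
            return max(1, math.ceil(math.log(total_files / per_dir, fanout)))

        for n in (500_000, 10_000_000, 30_000_000):
            print(f"{n:>11,} files -> {split_depth(n)} levels")

    Running it against both the current and the projected file counts tells you the depth to build now, so that doubling or tripling the store later doesn't force a re-shuffle.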


  • Is current SATA 6 gb/s equipment simply unreliable?

    - by korkman
    I have a 45-disk array of Seagate Barracuda 3 TB ST3000DM001 (yes, these are desktop drives - I'm aware of that) in a Supermicro SC847 JBOD, connected via an LSI 9285. I have found a solution for the problem described below by reducing the link speed via

        MegaCli -PhySetLinkSpeed -phy0 2 -a0; for i in $(seq 48); do MegaCli -PhySetLinkSpeed -phy${i} 2 -a0; done

    and rebooting. The question remains: is this typical for current 6 Gb/s equipment? Is this the sad state of SATA storage? Or is some of my equipment (the SFF-8088 cables come to mind) bad?

    The problem was: while synchronizing the HW RAID-6, disks kept offlining. Fetching SMART values revealed that those which offlined did not increase powered-on hours anymore - that is, their firmware (CC4C) seems to crash. Digging into the matter by switching to software RAID-6, with the disks passed through, I got tons of kernel messages scattered across all disks at 6 Gb/s:

        sd 0:0:9:0: [sdb] Sense Key : No Sense [current] Info fld=0x0
        sd 0:0:9:0: [sdb] Add. Sense: No additional sense information

    And finally, when a disk offlines:

        megasas: [ 5]waiting for 160 commands to complete ...
        megasas: [35]waiting for 159 commands to complete ...
        megasas: [155]waiting for 156 commands to complete ...
        megaraid_sas: pending commands remain after waiting, will reset adapter.

    Ugly controller reset here, then minutes later:

        megaraid_sas: Reset successful.
        sd 0:0:28:0: Device offlined - not ready after error recovery
        ...
        sd 0:0:28:0: [sdu] Unhandled error code
        sd 0:0:28:0: [sdu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
        sd 0:0:28:0: [sdu] CDB: Read(10): 28 00 23 21 2f 40 00 00 70 00
        sd 0:0:28:0: [sdu] killing request

    Reduced the speed to 3 Gb/s as described above, and all problems vanished.


  • Malware Defense Shows Up in PlayOn Settings/Logs Although System Has Been Thoroughly Cleaned

    - by nicorellius
    I was hit really hard by some nasty malware: Malware Defense. I was doing something I should not have been doing when I got it (surfing Pirate Bay for TV shows). It locked up my system and I had to reboot in safe mode. I was able to shut down the process and remove it using a malware killer tool. After my machine was cleaned up a bit, I installed Clamwin, Malwarebytes, and another AV tool, and cleaned the heck out of my system. Simultaneously, while this was going on, I was having trouble with my media server, PlayOn. This tool is great, but has some bugs. One in particular is that it will not function well with AV software running. I found a way to allow the new AV software to run while using PlayOn, but it still says I have Malware Defense on. Firstly, Malware Defense is long gone. I cleaned all remnants from my registry and scoured my system with the above tools multiple times. PlayOn is getting some information saying I have this crap installed on my system, but I don't. The system runs OK, but not optimally, and I have a feeling it is causing my streaming to be interrupted sometimes. How is it that I can't find Malware Defense anywhere on my system, yet somehow PlayOn is picking up a fingerprint of it somewhere? I have gone back and forth with MediaMall to no avail, and kind of just gave up, because the streaming works OK. BTW, I also uninstalled/reinstalled PlayOn several times, reverted back to previous versions, etc. The only thing I haven't done is reformat my disk and reinstall Windows. I really don't want to do this if there is another way to remove this little fingerprint. Any ideas?


  • Best way to attach 96 tb to workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical/24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to be able to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:

    - Is the cheap drive/enclosure solution the best one for this situation?
    - Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
    - For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
    - For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive? If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12? (A rough estimate is sketched below.)
    - Are there any Windows 7 volume/drive/capacity limits I should be aware of?

    Thanks
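
    On the bandwidth-splitting question above, a back-of-the-envelope estimate helps. This sketch assumes sequential streaming, an otherwise idle bus, and roughly 20% link overhead (e.g. 8b/10b encoding); the numbers are illustrative, not measured:

        def per_drive_mb_s(bus_gbps: float, efficiency: float, drives: int) -> float:
            """Usable MB/s per drive when `drives` disks share one host link."""
            usable_mbps = bus_gbps * 1000 * efficiency  # megabits/s after link overhead
            return usable_mbps / 8 / drives             # megabytes/s per drive

        print(per_drive_mb_s(5.0, 0.8, 12))  # USB 3.0, 12-bay enclosure: ~42 MB/s each
        print(per_drive_mb_s(6.0, 0.8, 1))   # eSATA, one drive per port: ~600 MB/s

    Since the app reads from only one drive at a time, even a fully shared USB 3.0 link leaves a single active drive with several hundred MB/s of headroom - more than a 2 TB spinning disk can deliver - so for this workload the two interfaces are indeed close to a wash.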

