Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Run disk error check on NTFS file?

    - by paulius_l
    I have a feeling that my system hard drive is dying; a benchmark seems to confirm it. Here is the benchmark of my system hard drive during low system activity, and here is the benchmark of the backup drive: Furthermore, there are some files I just can't touch because I get CRC errors, and the hard drive activity spikes to 100% with transfer speeds under 1 MB/s while working with such files. I haven't yet tried swapping the SATA cable, as I have read this might cause such problems. Anyway, I would like to run some tests on the specific clusters where the files I am interested in are stored. I don't want to run a full chkdsk because it takes a very long time. I would like to either find a utility that runs the disk check directly on the clusters where a file lives, or a pair of utilities where one tells me the cluster locations and the other checks just those locations. How do I check and possibly fix disk errors where the files I am interested in are stored? Edit: S.M.A.R.T. info:

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage them to put files on a file server so that others (with appropriate permissions) can use them and so the files are backed up properly. The result of this is that most users have large hard drives that sit mostly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?
    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, protecting against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages:
    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.
    I can see a few downsides as well:
    - Occasional degradation of user PC performance, if a machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.
    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and that's just a paper dating back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • Can a USB/IDE/SATA adapter be flaky?

    - by Ward
    I use USB/IDE/SATA converters a lot, and on the two that I have now I sometimes get errors copying files to drives. It only happens when I'm copying big files to the drive (big can mean as little as 100MB; I think it happens more often with bigger files, 300MB or more). Basically the copy will fail and I'll get one or more error messages about "Delayed write failed." But if I disconnect the drive and re-connect it, I'll usually be able to continue. (The file that was being copied will be corrupt, but otherwise the drive is fine.) I just noticed a new type of flakiness: the data transfer rate can vary widely. I copied one set of files (5 x 300MB) and it took 10+ minutes, then I copied another set (approximately the same sizes) and it took less than a minute. I haven't done systematic testing, the other things I'm doing on my laptop at the same time might have some impact, and I haven't cross-checked the two adapters I have and the 3 hard drives I'm working with to see if there's a pattern. I'm more just wondering if anyone else has seen anything like this.

    Read the article

  • I need a few minutes of a dedicated server each week, not for hosting, just to convert to ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of Sox to convert about 30 mp3s to ogg files, in various directories, a couple of times a week, to be done automatically in response to the detection of an uploaded mp3. We're probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/ftp the mp3 files, convert them to ogg (something like the sketch below), and ftp the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of Sox (or any mp3-to-ogg converter) for CentOS that I could upload without needing root access, but I've given up asking for that one, though I'm always open to suggestions!
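
    For illustration only, here is a minimal sketch of the conversion step itself, assuming Sox was built with MP3 and Ogg Vorbis support and that the freshly uploaded files sit under a hypothetical incoming/ directory:

        #!/bin/sh
        # convert every uploaded MP3 to Ogg Vorbis alongside the original
        for f in incoming/*.mp3; do
            [ -e "$f" ] || continue        # skip if the glob matched nothing
            sox "$f" "${f%.mp3}.ogg"       # Sox infers formats from the extensions
        done

    The wget/ftp steps would just wrap this loop; the open question in the post is only where such a loop gets to run.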

    Read the article

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:
    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: a change/version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.
    So far, we've reviewed the following, albeit not completely thoroughly:
    - Dropbox: has everything except 1) access control, which is provided via shared folders but isn't adequate, if I understand it correctly, because there's no authentication of the sharing user, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has everything except 1) clients, supporting only Windows 7 and OS X.
    - SpiderOak: has everything except 1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has everything except 1) access control and 2) versioning.
    I'll gladly include any clarifications or additional systems the community provides. Arthur

    Read the article

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an rpm using the rpmbuild tool. I have source code whose build produces around 30 GB of binaries, and the software I am packaging has dozens of executables. When I copy only the binaries of a single executable (e.g. init) into the rpm, it builds successfully. But when I dump the entire build into the rpm, rpmbuild does everything but gives a seg fault at the end. Here is my spec file:

        # This is a sample spec file for wget
        %define _topdir /root/mywget
        %define name source
        %define release 1
        %define version 1.12
        %define _builddir /root/mywget/BUILD/glenlivet
        %define _buildrootdir /root/mywget/BUILDROOT
        %define _buildroot /root/mywget/BUILDROOT
        %define _sourcedir /root/mywget/SOURCES

        BuildRoot: %{_buildroot}
        Summary: GNU source
        License: GPL
        Name: %{name}
        Version: %{version}
        Release: %{release}
        Source: %{name}-%{version}.tar.gz
        Prefix: /usr
        Group: Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*

    If I only copy some of the binaries (say, one utility and its dependent binaries) it works fine, but when I try to copy the entire build I get a seg fault. I get the seg fault after rpmbuild has executed the %prep, %build and %install sections, and rpmbuild also processes my source file:

        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides:......
        Requires:......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault

    Any clue what is going wrong, or where rpmbuild fails? Thanks in advance
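
    One way to narrow down where the crash happens (a sketch only; the spec path below assumes the %_topdir defined in the spec and a spec file named source.spec) is to rerun the build with rpmbuild's verbose output and keep a log, so the last action before the segmentation fault is visible:

        # rebuild with maximum verbosity and capture everything to a log
        rpmbuild -vv -bb /root/mywget/SPECS/source.spec 2>&1 | tee rpmbuild.log
        tail -n 40 rpmbuild.log   # the lines just before the crash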

    Read the article

  • Unable to create installdriver instance

    - by Entity
    When trying to install a program called "AV Grabber", I get the following error message:

        unable to create installdriver instance

    The product name is EZ Grabber; right-clicking the executable shows version 7.1.79.0. I have tried installing InstallShield 7, but have had no luck removing this error message. Any ideas?
    - Machine: Windows XP (Home Edition)
    - User account: Administrator account
    - The InstallShield driver folder is visible at: C:\Program Files\Common Files\InstallShield\Driver\7\Intel 32
    I have tried the following command, but it has not helped:

        "C:\Program Files\Common Files\InstallShield\Driver\7\Intel 32\IDriver.exe" -Embedding

    Read the article

  • Discrepancy in file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped files bigger than 1MB and outputs the files along with their sizes as a report. This is the code:

        myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`

        # Count files that exceed 1MB
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l`

        if [ $oversizeFiles -eq 0 ]; then
            status="PASS"
        else
            status="CHECK FAILED. FOUND FILES GREATER THAN 1MB"
        fi

        echo -e $status"\n"$myReport

    The problem is that the ls command reports the file sizes as 1.0MB, but the status comes out as the failure message because the $oversizeFiles variable's value is 2. I checked the file sizes on disk and 2 files are 1.1MB. Why this discrepancy, and how should I modify the script so that it generates an accurate report? BTW, I'm on a Mac. Here is what the man page for find says on Mac OS X:

        -size n[ckMGTP]
            True if the file's size, rounded up, in 512-byte blocks is n. If n
            is followed by a c, then the primary is true if the file's size is
            n bytes (characters). Similarly if n is followed by a scale
            indicator then the file's size is compared to n scaled as:
                k    kilobytes (1024 bytes)
                M    megabytes (1024 kilobytes)
                G    gigabytes (1024 megabytes)
                T    terabytes (1024 gigabytes)
                P    petabytes (1024 terabytes)
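
    As the quoted man page hints, the likely cause is that -size +1M works in whole rounded-up megabyte units while ls -lh rounds the displayed size, so a file that is only slightly over 1MB can show as "1.0M" in the report yet be counted as "+1M" by find. One hedged adjustment is to do both the test and the report in exact bytes, so they cannot disagree:

        # count .gz files strictly larger than 1 MiB, measured in exact bytes
        # (sketch; assumes no newlines in file names, since wc -l counts lines)
        oversizeFiles=`find "$somePath" -maxdepth 1 -iname "*.gz" -size +1048576c | wc -l`

        # report exact byte counts instead of ls -h's rounded display
        myReport=`ls -ltr "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5,"bytes"}'`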

    Read the article

  • Does moving a file outside NTFS lose data in alternate data streams?

    - by jay
    I have a lot of files on a machine running Windows Server 2008 that I want to move to a Fedora machine. How can I keep the attributes stored in, for example, media files (date taken, rating, length, etc.) while transferring them outside the realm of NTFS's Alternate Data Streams? I'm aware that similar metadata exists in other file systems, but what happens when you move these files, and what's the best way to retain the metadata in other file systems?

    Read the article

  • Domain migration - 301 redirect of all contents of a directory

    - by Trufa
    I would like to know if the following is possible, considering that I would like to migrate domains. I have, let's say:

        one.com/files/one.html
        one.com/files/two.php
        one.com/other/three.html
        one.com/other/four.doc
        one.com/other/subdirectory/five.doc

    I am migrating to two.com, so I would like to make the RESPECTIVE 301 redirects to the following:

        two.com/old/files/one.html
        two.com/old/files/two.php
        two.com/old/other/three.html
        two.com/old/other/four.doc
        two.com/old/other/subdirectory/five.doc

    I've tried with cPanel, and although I come "close" with the redirects option I can't seem to make it happen. The folders are not many (10-12), but the files are, so doing it manually is obviously impossible. How would you proceed? Can this, or should this, be done with a regex in the .htaccess? Can you redirect all the elements of a subdirectory in the manner described above? I hope the question is clear enough; if not, please ask for any clarification needed! Thanks in advance!!
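
    For what it's worth, a minimal sketch of the .htaccess approach (assuming Apache with mod_alias available on one.com, and that two.com already serves the old content under /old/) is a single pattern rule rather than per-file entries:

        RedirectMatch 301 ^/(.*)$ http://two.com/old/$1

    The capture group carries the original path across, so one.com/files/one.html lands on two.com/old/files/one.html; it is worth testing on a couple of URLs first, since the exact matching behaviour can depend on where the rule is placed.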

    Read the article

  • chroot for insecure program execution

    - by attwad
    Hi, I have never set up a chroot-jailed environment before and I'm afraid I need some help to do it well. To explain shortly what this is all about: I have a webserver to which users send Python scripts to process various files that are stored on the server (the system is for research purposes). Every day a cron job starts the execution of the uploaded scripts via a command of this kind:

        /usr/bin/python script_file.py

    All of this is really insecure, and I would like to create a jail into which I would copy the necessary files (uploaded scripts, files to process, the Python binary and its dependencies). I have already looked at various utilities for creating jails, but none of them seemed up to date or they lacked solid documentation (i.e. the links proposed in How can I run an untrusted python script). Could anyone guide me to a viable solution to my problem, like a working example of a script that creates a jail, puts some files in it and executes a Python script (along the lines of the sketch below)? Thank you very much.
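
    Purely as a starting point, here is a rough sketch of building such a jail by hand and running the interpreter inside it. The jail location and interpreter path are assumptions, it needs root to run chroot, and the Python standard library (plus any modules the scripts import) would also have to be copied into the jail for anything non-trivial to work:

        #!/bin/sh
        JAIL=/srv/pyjail
        mkdir -p "$JAIL"/bin "$JAIL"/scripts "$JAIL"/data

        # copy the interpreter plus every shared object ldd reports, including the loader
        cp /usr/bin/python "$JAIL"/bin/
        ldd /usr/bin/python | grep -o '/[^ ]*' | while read -r lib; do
            cp --parents "$lib" "$JAIL"/
        done

        # drop the uploaded script into the jail and run it confined there
        cp script_file.py "$JAIL"/scripts/
        chroot "$JAIL" /bin/python /scripts/script_file.py

    A chroot alone does not restrict CPU, memory or network access, so it is only one layer of the sandbox rather than the whole answer.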

    Read the article

  • What virus renames all images to EXE?

    - by user29373
    I have a virus that renames all JPG files to EXE files and hides the original files in the same folder. I can see the hidden files with FarManager, but I cannot see them in Windows Explorer (even with the "show hidden files" option enabled). How can I restore the files to their original extensions? Is there a tool that can scan the converted files and restore them to their original extensions? What is the virus called, and how can I remove it manually?

    Read the article

  • Can "tar" backup incrementally?

    - by Somebody still uses you MS-DOS
    My home folder holds a few GB. Is it possible to run tar on it to create a home.tar.gz, and then later create a home1.tar.gz containing only the files modified since the previous tar (thus being an incremental backup)? I would also like to checksum the resulting files and export the checksums as well, like home.md5, home1.md5, etc. (I know this could be a separate step, but it's interesting as well).
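
    GNU tar supports exactly this through snapshot files; a hedged sketch, assuming GNU tar and a home directory reachable as $HOME:

        # level-0 (full) backup; home.snar records what was archived and when
        tar --create --gzip --listed-incremental=home.snar --file=home.tar.gz "$HOME"

        # later: an incremental archive containing only files changed since the snapshot
        tar --create --gzip --listed-incremental=home.snar --file=home1.tar.gz "$HOME"

        # checksums for each archive, as asked
        md5sum home.tar.gz  > home.md5
        md5sum home1.tar.gz > home1.md5

    Note that the snapshot file is updated on every run, so each new archive is incremental relative to the previous run; to keep taking increments against the original full backup, copy home.snar aside before reusing it.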

    Read the article

  • Untar with date filter

    - by Don
    Is there any way to untar and extract only those files that are newer than a certain date, including the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including the directory structure) based on a date filter on the files, if possible.
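
    As far as I know GNU tar has no extract-side date filter, so one hedged workaround is to list the archive with timestamps, keep the paths whose stored mtime is on or after the cutoff, and feed those back to tar. The archive name and cutoff date below are placeholders, the awk field position assumes GNU tar's default verbose listing (date in column 4), and file names containing spaces would need extra care:

        # collect paths modified on or after the cutoff date
        tar -tvf backup.tar | awk '$4 >= "2010-05-20" {print $NF}' > newer-files.txt

        # extract just those entries; parent directories are created as needed
        tar -xvf backup.tar --files-from=newer-files.txt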

    Read the article

  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all, I have been searching for a good web hosting company that can offer me all the services I need for hosting my PHP and MySQL based website. This is a community-based website and users will be able to upload pictures, etc. The hosting company that I have in mind currently lets me do everything: use mail(), run cron jobs, and so on. Of course they charge about $6/month. The only problem with this company is that they have a limit of 50,000 files that can exist within the hosting account at any time, which rather contradicts the "UNLIMITED SPACE" ad on their front page. Apart from this, I know of no other reason why I should not go with this hosting company.
    My issue is that the 50,000-file limit is something I cannot live with once the number of users, and the files they upload, grows significantly. Since this is a dynamic website that also involves sensitive things like payments, I am not sure whether I should go ahead with this company now that I am just starting out and then later switch over to a better hosting company that does not limit me to 50,000 files. If I need to switch over after hosting with this company, I will need to take backups of all the files located in my account (jpg, zip, etc.) and then upload them to the new host. I am not aware of any tools that can help me in this process; can you please mention any that you know of?
    I could go with the other companies right now, but their cost is double or triple the current price and they all offer fewer features than my current choice. If I pay more, they are ready to accommodate my higher demands. Unfortunately, the company I am willing to go with now does NOT have any higher or better plans that I can switch to, which is the really bad part. So my questions:
    - Since I am starting out with my website and the initial number of users will be small, should I go ahead with the current choice and then, once demand increases, switch over to a better provider? If yes, how can I transfer my database and especially the jpg files, etc. to the new provider? I don't even know the tools required to back up and restore to another host.
    - (I don't like this idea, but still...) Should I pay more right now and go with the better providers (without knowing whether the website is going to do that well), just to save myself the trouble of having to back up 50,000 files, upload them from the old host to a new one, and start paying double or triple the price without even knowing whether I will see the returns I expect?
    Backup and restore in such bulky numbers is something I have never done before, and hence I am stuck trying to decide what to do. The price per month is also a considerable factor in my decision. All these web hosting companies say one common thing: it is the customer's responsibility to back up and restore data, and they are not liable for any loss. So no matter which hosting company I go with, they ask me to take backups via FTP so that I can restore them whenever I want (and it seems safer to have the files locally with me anyway). Some provide tools for backup and some do not, and I am not sure how much their backup tools can be trusted given the disclaimers they carry.
    I have never backed up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and let me know your suggestions so that I can decide. I have spent 2 days fighting with myself trying to decide what to do, and finally concluded that this is a double-edged sword and I can't arrive at a satisfactory final decision without other people's suggestions. I believe someone out there must have had to make a similarly troublesome decision. All your suggestions to help me decide are appreciated. Thank you all.
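
    On the narrower question of how such a move is usually done, a rough sketch follows. The host names, paths and database name are placeholders, and both hosts are assumed to allow SSH access (many shared hosts only offer FTP, in which case an FTP client or the host's own backup tool takes the place of rsync):

        # pull the whole site tree down from the old host
        rsync -az -e ssh olduser@oldhost.example.com:public_html/ ./site-backup/

        # dump the database on the old host and fetch the dump
        ssh olduser@oldhost.example.com 'mysqldump -u dbuser -p sitedb > sitedb.sql'
        scp olduser@oldhost.example.com:sitedb.sql ./

        # push the files to the new host and load the dump there
        rsync -az -e ssh ./site-backup/ newuser@newhost.example.com:public_html/
        scp ./sitedb.sql newuser@newhost.example.com:
        ssh newuser@newhost.example.com 'mysql -u dbuser -p sitedb < sitedb.sql'

    The 50,000-file count itself is not a problem for rsync; it is the repeated small-file round trips that take time, so running the transfer from a machine with a good connection helps.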

    Read the article

  • (Win7) Gets stuck with ~1% CPU. Especially with multithreading

    - by meow
    Windows 7 32-bit, up to date, Intel i7 860. (For some reason the company runs 32-bit Windows everywhere.) I have tried to update all motherboard drivers etc. as far as possible. I have a performance issue with a machine that appears in connection with multithreading (or so I think). As an example (and where I most often see it, but it appears with other programs as well): ProteoWizard is a file conversion tool for mass spectrometry files. I can add a list of files and it will attempt to process up to 8 files in parallel (quad core x 2 threads/core). If I choose 1 to 6 files, I start the process and it goes straight through. If I queue 7 or more files, conversion goes to ~20%, then gets stuck for 15 seconds, then continues again, always in "chunks" of a few percent before getting stuck again. During the time the process is stuck, CPU usage is at 1%. RAM is not limiting; it is at maybe 70% or so and not going up. I don't get the same problem on other, even slower machines. The computer also gets stuck at 1% CPU doing nothing on other occasions, but it is most frequent with multithreading. Where should I look for the problem?

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DOS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere; I'm not sure until I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated.
    UPDATE Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did.
    We unplugged the server from the net. It was performing (attempting to perform) a denial of service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be moonlighting for some time. However, with SSH access still available, we ran a command to find all files edited or created in the time frame when the attacks started (along the lines of the sketch below). Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded-images folder of a ZenCart website.
    After a short cigarette break we concluded that, given the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability that allowed files to be uploaded within the ZenCart admin panel, as a picture for a record company (a section that was never really even used). Posting this form just uploaded any file: it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any file could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site and removed the offending files. The job was done, and I was home for 2 a.m.
    The moral:
    - Always apply security patches for ZenCart, or any other CMS for that matter; when security updates are released, the whole world is made aware of the vulnerability.
    - Always do backups, and back up your backups.
    - Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault.
    Happy servering!
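
    For reference, the "find everything edited or created around the time the attacks started" step can be done with something along these lines (a sketch; the web root and cutoff timestamp are placeholders, and -newermt needs GNU find):

        # files under the web root created or modified on/after the suspected start time,
        # printed with their modification times so the newest stand out
        find /var/www -type f -newermt "2011-12-24 00:00" -printf '%T+ %p\n' | sort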

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to both Server Fault (system backups) and Stack Overflow (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them:
    - It should be able to handle live files like registry parts, so it may need to be an offline backup tool.
    - It shouldn't silently skip files the current admin user doesn't have permissions on (files with no permission entries or owned by the "system" user).
    - Registry change tracking would be a bonus but is not a requirement.
    I use virtual machines for malware analysis, and there is not even a solution for listing file changes in disk snapshot files (delta VMDK). I currently use Ashampoo for monitoring changes. Though it's the best one among similar tools, it's not a good piece of software and hasn't really evolved across the many "platinum" and "deluxe" versions released over the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk and registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved. So I think the solution should compare files using hashes, or at least file sizes. There is plenty of backup software out there and I'm sure one of them can handle this, offline or online.
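
    Just to illustrate the hash-comparison idea (an illustration only; it assumes the guest's disk, or an export of the directories of interest, is mounted read-only on an analysis machine at a hypothetical /mnt/guest):

        # hash the tree before running the sample, then again afterwards, and diff
        find /mnt/guest -type f -exec sha256sum {} + | sort -k 2 > before.sha256
        # ... detonate the sample in the VM, take a fresh snapshot, remount ...
        find /mnt/guest -type f -exec sha256sum {} + | sort -k 2 > after.sha256
        diff before.sha256 after.sha256    # added, removed and modified files all show up

    This catches content changes even when timestamps are preserved, which is exactly the case the question says Ashampoo misses.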

    Read the article

  • How can I specify multiple rules for a particular log file(s) with logrotate?

    - by Ether
    I have a logrotate.d config file that looks something like this:

        /home/myapp/log/* {
            daily
            compress
            dateext
            ifempty
            delaycompress
            olddir /home/myapp/baklog
        }

    There are a few particular log files where I want to apply additional rules, such as "mail". How can I apply additional rules to just some files? If I add another rule above it that matches the additional files (e.g. /home/myapp/log/warning.log { ... }), I get an error like:

        error: /etc/logrotate.d/myapp:3 duplicate log entry for /home/myapp/log/warning.log

    How can I specify multiple rules that match particular files in an overlapping kind of way?

    Read the article

  • Make symlink on Windows of whole tree without modifying the original folder

    - by DarkGhostHunter
    I'm trying to do the following: make a symlink of a whole directory, "C:/Master", inside different folders like "C:\Projects\Alpha\", "C:\Projects\Beta\" and so on. The "Master" directory's files and data change frequently. I work in "Projects/*", where every project folder uses the "Master" files, but each one also has new files of its own. Let's say I point to the car engine in every project folder, and inside each project I add a different kind of wheels. The problem I'm having, as a Windows 8 user, is that a symlink (junction) acts as a window onto "Master": I'm not allowed to add any file inside it. I'm looking for a way to reference the entire "Master" directory and add new files to the project without editing any of the "Master" ones. It's as described here, but on Windows.

    Read the article

  • Cygwin file and directory user and group

    - by dvanaria
    I use Cygwin as my main development environment on both my home and work computers. To share files between the two computers, I use Dropbox, which is installed in the following folder on both machines:

        c:\cygwin\home\dvanaria\dropbox

    Everything works great except for one thing. When I'm working on my home computer and do an ls -l on any directory, all the files show up as owned by dvanaria of group Users. But when I work from my work computer, ls -l shows all files as owned by Administrators and of group Domain Users. I know Cygwin uses some kind of mapping between Windows users and permissions via the /etc/passwd file, but to be honest I have no idea how this file works or how Cygwin maps to Windows through it. Could anyone help me figure this out? The main problem is that I can't edit any files when using my work computer, only read them.
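
    One thing that might be worth checking (a sketch only, relevant to older Cygwin releases that read static /etc/passwd and /etc/group files) is regenerating those files on the work machine so the domain account gets its own entry instead of falling back to Administrators:

        # run from a Cygwin shell on the work (domain-joined) machine
        mkpasswd -l -d > /etc/passwd    # local and domain users
        mkgroup  -l -d > /etc/group     # local and domain groups

    Whether this fixes the read-only symptom also depends on the NTFS permissions Dropbox applies to the synced files themselves.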

    Read the article

  • Windows Search Indexing SSD

    - by user654628
    I just bought a new SSD (a Vertex 4 from OCZ, 256GB) and installed Windows 8 on my laptop with it. I am not using an external hard drive to keep extra data (paging, temp files, etc.) because it is a laptop and I do not want to carry a drive around with me. My question is: if I disable the Windows indexer (the Windows Search service), does Windows still search files from the search box (the Windows 7 Start menu search / Metro UI search in Windows 8)? Since the indexer was meant to find files by indexing them, does this mean that Windows will not find new files? Thanks in advance.

    Read the article

  • CDN recommendation

    - by michaeld79
    Hey all, I am looking for a CDN service that is able to update the endpoint files on demand via an API within at most 10 minutes, or that supports an expiration time for the files of 10 minutes or less. In addition, the CDN must have an option to upload files via an API (I'm working with PHP in my project). Thanks in advance, michael D

    Read the article

  • Windows XP cannot read DVD burned on Ubuntu

    - by webcrawl
    The story: I burned a DVD disc on Ubuntu using Brasero. When the file burning was complete, I cancelled the burning of the checksum to the DVD. Then I put that DVD into a Windows XP (SP3) machine and copied all the files from the DVD to the hard drive (no errors while copying). When that was done, I discovered that none of the copied files are readable. What's more, all the files on the DVD itself also appear unreadable, even though all the file names and directories are in place.
    What I found out:
    - Windows detects the disc as CDFS (CD-ROM File System).
    - The disc is clean as new, with no scratches.
    - All the files, when opened in Notepad++, look like "NULNULNULNULNUL" on one line.
    - The file sizes are normal.
    - Other discs recognized as CDFS can be read with no problems.
    What I tried:
    - Starting the CDFS service in the Windows registry. Result: a new device in Windows Device Manager (JUBS JGH2ZCT SCSI CdRom Device).
    - Removing my CD/DVD device from Device Manager. Result: Windows restarted the system and reinstalled the driver.
    So... how do I read the DVD, when I have no access to any other PC or any other OS?

    Read the article
