Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • Advantages of multiple SQL Server files with a single RAID array

    - by Dr Giles M
    Originally posted on Stack Overflow, but re-worded. Imagine the scenario: for a database I have RAID arrays R: (MDF) and T: (transaction log), and of course shared transparent usage of X: (tempdb). I've been reading around and get the impression that if you are using RAID, then adding multiple SQL Server NDF files sitting on R: within a filegroup won't yield any further improvement. Of course, adding another RAID array S: and putting an NDF file on that would. However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller MDFs sitting on one RAID array, SQL Server will perform growth and locking operations (for writes) on the MDF, so adding NDFs to the filegroup, even if they sat on R:, would distribute the locking and growth operations, allowing more throughput. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefit of reduced locking? I'm also aware that the behaviour and benefits may differ for tables/indices/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?
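
    For concreteness, spreading a filegroup across several NDFs is one ALTER DATABASE per file; a minimal hedged sketch, assuming a database named MyDb (the names are hypothetical) and the R: array from the question:

        ALTER DATABASE MyDb
        ADD FILE (
            NAME = N'MyDb_Data2',             -- logical name (hypothetical)
            FILENAME = N'R:\MyDb_Data2.ndf',  -- physical file on the existing R: array
            SIZE = 1024MB,
            FILEGROWTH = 512MB                -- pre-sizing generously limits growth operations
        ) TO FILEGROUP [PRIMARY];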

    Read the article

  • "Show hidden files" option is not working

    - by crazygamer
    OK, I know this is a very basic Windows thing, but I am asking it here in search of an answer. I plugged in my pen drive and it autoran. This turned the "show hidden files" option off; I mean I am no longer able to see my hidden files, because the setting will not apply. Which registry value has been modified? I have scanned my computer using 4 antivirus programs. BitDefender found and deleted something in a temporary folder; the rest didn't show anything. I have encountered this problem a few times before, but this time I don't want to format it ;-)
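
    A common trick of autorun pen-drive malware is flipping the SHOWALL value under Explorer's Advanced folder key, and some variants also change its type so the setting will not apply. A hedged sketch of a .reg repair (verify the key against a clean machine before merging, and check that no malicious process is resetting it):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Folder\Hidden\SHOWALL]
        "CheckedValue"=dword:00000001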

    Read the article

  • Win 7 accessing large files uses 100% RAM

    - by user181276
    Running Windows 7 64-bit SP1 with 8 GB RAM. I first noticed this problem when using the GUI to copy some large (5+ GB) files from one disk to another. The physical memory in use rises quite quickly to 100% and the system slows to a crawl. If I just start to access the file in a media player (it is a movie), the memory usage climbs more slowly but eventually reaches 100%. When copying the same files via XCOPY I do not have this problem. Using RAMMap I see most of the memory usage is under "Mapped File" and is allocated under the "Active" column. If I select "Empty System Working Set", the RAM usage drops back down but then starts to climb again. Any ideas on what I can check/test to eliminate this issue?

    Read the article

  • Restore more than 250 files using DPM 2010 and PowerShell

    - by toryan
    I've got what should be a fairly simple task: restore the following files from DPM: D:\inetpub\wwwroot\*\index.* I followed the instructions in this TechNet wiki and pretty much thought I had it. Unfortunately, the New-SearchOption cmdlet can only return 250 results, and this search generates far more than that, so only the first 250 files were actually restored, which is no use to anybody. Does anyone know of a way around the 250-result limit? I guess it would be possible to get the subdirectories of D:\inetpub\wwwroot and loop through them in turn, but I want to keep this fairly simple as it is only for this one task.
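
    Following the subdirectory idea, a hedged PowerShell sketch: one search per wwwroot subfolder, so each stays under the 250-result cap. The $from/$to/$ds/$ro variables are assumed to be set up exactly as in the TechNet wiki article, the directory layout on the protected server is assumed to still match the recovery point, and the New-SearchOption parameters should be checked against your DPM 2010 shell:

        # one search per subdirectory, each under the 250-result cap
        $subdirs = Get-ChildItem "D:\inetpub\wwwroot" | Where-Object { $_.PSIsContainer }
        foreach ($dir in $subdirs) {
            $so = New-SearchOption -FromRecoveryPoint $from -ToRecoveryPoint $to `
                -SearchDetail FilesFolders -SearchType startsWith `
                -SearchString "index." -Location $dir.FullName -Recursive
            $items = @(Get-RecoverableItem -Datasource $ds -SearchOption $so)
            if ($items.Count -gt 0) {
                Recover-RecoverableItem -RecoverableItem $items -RecoveryOption $ro
            }
        }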

    Read the article

  • rsync error: some files/attrs were not transferred

    - by Daniel Ball
    Using rsync (Ubuntu) and a DeltaCopy server on W2K3 to back up some of the data on the file server before I migrate from W2K3 to Ubuntu Server. After it completed, I ran a dry run just in case something had been missed or changed, and got the following:

        sudo rsync -az -n 198.3.9.25::Music /mnt/raid/music
        [sudo] password for daniel:
        file has vanished: "?????\#267????" (in Music)
        file has vanished: "????????" (in Music)
        ...
        rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1526) [generator=3.0.7]

    I just want to make sure I'm reading this right: somehow there are files on the receiving end that aren't on the sending end?
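
    For what it's worth, "vanished" files whose names render as ????? often point at a filename-charset mismatch between the Windows sender and the Linux receiver rather than files that truly disappeared mid-run. A hedged sketch of a charset-converting dry run, assuming rsync 3.x on both ends (DeltaCopy bundles its own rsync, so check its version) and guessing cp1252 for the Windows side:

        sudo rsync -az -n --iconv=utf-8,cp1252 198.3.9.25::Music /mnt/raid/music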

    Read the article

  • Basic video editor for AVCHD Lite files?

    - by davr
    I have a camera (Panasonic Lumix GF-1) that outputs "AVCHD Lite" files: 720p H.264 in an MTS container. I saw this question, which said Movie Maker in Windows 7 supports AVCHD, but I just tried, and unfortunately it does not support AVCHD Lite. Are there any free or inexpensive non-linear video editors (NLEs) that can natively handle AVCHD Lite files, without requiring some 3rd-party driver? If not, are there any 3rd-party drivers that are especially stable? (In my experience they usually have some problems; I got AVCHD Lite loading into VirtualDub using a 3rd-party plugin, but it's very slow, sometimes crashes, and seeking takes ages.)

    Read the article

  • Using ServletOutputStream to write very large files in a Java servlet without memory issues

    - by Martin
    I am using IBM WebSphere Application Server v6 and Java 1.4 and am trying to write large CSV files to the ServletOutputStream for a user to download. Files range from 50 to 750 MB at the moment. The smaller files aren't causing too much of a problem, but with the larger files it appears that the response is being written into the heap, which then causes an OutOfMemory error and brings down the entire server. These files can only be served to authenticated users over HTTPS, which is why I am serving them through a servlet instead of just sticking them in Apache. The code I am using is (some fluff removed around this):

        resp.setHeader("Content-length", "" + fileLength);
        resp.setContentType("application/vnd.ms-excel");
        resp.setHeader("Content-Disposition", "attachment; filename=\"export.csv\"");
        FileInputStream inputStream = null;
        try {
            inputStream = new FileInputStream(path);
            byte[] buffer = new byte[1024];
            int bytesRead;
            // loop until EOF; read() may legitimately return fewer bytes than
            // buffer.length, so testing bytesRead == buffer.length would stop early
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                resp.getOutputStream().write(buffer, 0, bytesRead);
            }
            resp.getOutputStream().flush();
        } finally {
            if (inputStream != null) {
                inputStream.close();
            }
        }

    The FileInputStream doesn't seem to be causing the problem: if I write to another file or just remove the write completely, the memory usage is fine. What I am thinking is that resp.getOutputStream().write is being stored in memory until the data can be sent through to the client. So the entire file might be read and stored in resp.getOutputStream(), causing my memory issues and the crash! I have tried buffering these streams and also tried using channels from java.nio, neither of which made any difference to my memory issues. I have also flushed the output stream once per iteration of the loop and after the loop, which didn't help.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't really have the same disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So what should I do to improve parallelism? My understanding from various Microsoft publications is that if I increase the number of data files, separate threads can act on each file in parallel. Which leads to the question: how many files should I have? One per core? Should I put tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
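
    Before settling on a file count, it may be worth measuring whether the single data file is actually stalling under I/O; a minimal sketch against the virtual file stats DMV that ships with SQL Server 2005 (the database name is hypothetical):

        SELECT DB_NAME(vfs.database_id) AS database_name,
               vfs.file_id,
               vfs.num_of_reads,
               vfs.num_of_writes,
               vfs.io_stall_read_ms,   -- total ms readers have waited on this file
               vfs.io_stall_write_ms   -- total ms writers have waited on this file
        FROM sys.dm_io_virtual_file_stats(DB_ID('MyBigDb'), NULL) AS vfs;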

    Read the article

  • Xcode custom project template creates only references to files

    - by user315374
    Hi, I tried to create a custom project template for setting up unit testing. The problem is that when I create a new project based on this template, it creates references to the template files: when I edit a file, it changes my template files instead of my actual project files! When I delete my template, files in my actual project turn red! The project template is in: /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application/Test-based Application. Reading some questions on Stack Overflow, I tried installing my template project in /Library/Application Support/Developer/Shared/Xcode/Project Templates, but then I cannot see my template when I create a new project. Can anybody help? Thanks, Vincent

    Read the article

  • LAMP/TURNKEY LINUX/VIRTUAL BOX: Manipulating Files on a Virtual Machine

    - by aeid
    I am running Ubuntu 9.10 and I want to install TurnKey Linux's LAMP server on my machine to test out my code. I installed TurnKey LAMP via VirtualBox and it seems to be working, because I can access http://localhost. My question is: how do I manipulate files via VirtualBox? For example, if I had installed LAMP on my machine (not on a virtual machine), I could easily add/edit/delete files in the /var/www folder. Where is the equivalent of the /var/www folder with VirtualBox, and how can I interface with it? Thanks,
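
    A hedged sketch of the usual approach: the guest's /var/www lives inside its virtual disk, so you reach it over the network rather than through the host filesystem. Assuming SSH is enabled on the TurnKey appliance and the VM is reachable (with NAT networking you would first add a port forward or switch to bridged mode; 192.168.1.50 is a stand-in for the VM's address):

        # copy a file from the Ubuntu host into the guest's web root
        scp mypage.php root@192.168.1.50:/var/www/

        # or open an interactive session to add/edit/delete files
        sftp root@192.168.1.50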

    Read the article

  • Best way to rip DVD movies to ISO files

    - by alex
    I'm trying to back up my DVD collection. I have Handbrake, and will eventually experiment to find the best settings to use. For now, I'd like to back up the DVDs to ISO files that I can mount and run through Handbrake later, or burn back onto DVD should the original get damaged. I have a WD TV box that is capable of playing ISO files as well. What's the best program for doing this? I'm not so much concerned with file size.

    Read the article

  • MySQL Backup: Can I copy individual MyISAM table files to another server with a different MySQL version?

    - by Komputer
    By copying individual MyISAM table files I mean: shut down mysqld and copy the .frm, .MYD, and .MYI files from one database folder to another. Questions: (a) can I use this method to back up a MySQL database folder from one server to another server running a different MySQL version? (b) can these backup files be moved to a different OS (example: Debian to CentOS)? Thanks in advance.
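
    For reference, MyISAM data files are binary-portable across operating systems, so (a) and (b) are generally workable as long as the target MySQL is the same version or newer; a hedged sketch with hypothetical names, assuming the default /var/lib/mysql datadir and mysqld stopped on both ends:

        # source server: archive one database folder
        tar czf mydb.tar.gz -C /var/lib/mysql mydb

        # target server: unpack, fix ownership, then verify after restarting mysqld
        tar xzf mydb.tar.gz -C /var/lib/mysql
        chown -R mysql:mysql /var/lib/mysql/mydb
        mysql -e "CHECK TABLE mydb.mytable"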

    Read the article

  • Dreamweaver Files uploaded to Win 2008 server cause login prompt

    - by Lil
    I have a customer who uses a four-year-old version of Dreamweaver to edit her web pages. My hosting reseller account is with a company that uses Windows Server 2008. Every time my customer edits a page and uploads it, I have to manually set the permissions for that file to be readable, from the site's control panel. The customer is furious with me because her files cause the login prompt. I am able to upload files myself that remain readable on the site, with both FileZilla and FrontPage. I am assuming that her Dreamweaver settings are the cause of the problem, but I don't have that program myself and don't know what to advise her. Any suggestions?

    Read the article

  • How to extract corrupted RAR files

    - by Anyname Donotcare
    I am trying to extract a file from a set of .RAR parts, and I always get a "Corrupted" error, so I have tried many programs to extract the corrupted RAR files. I have 10 parts, each one 400 MB. The last program I used was Recovery Toolbox for RAR, which gave me the following message: Severity: High. Message: Recovery error for file XXXX with message set file pointer error. The program extracted 2 GB of data, but it should have extracted 4 GB. Is there any way to extract the content, or am I supposed to download the files again from another server?
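
    Before re-downloading everything, it may be worth trying unrar's keep-broken switch, which writes out whatever it can decode instead of deleting damaged output; a minimal sketch (the archive name is a placeholder):

        unrar x -kb archive.part01.rar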

    Read the article

  • C#: parsing txt files only if the name is in the desired format

    - by jakesankey
    OK, I have txt files that I am parsing and saving into a SQL DB. The names are formatted like R306025COMP_272A4075_20090929_080159.txt. However, there are a select few (out of thousands of files) with names that are formatted differently, particularly files that were generated as tests, for example R306025COMP_SU2_TestBottom_20090915_101441.txt. This causes a problem because I am using Split('_')[1], [2], etc. to extract the R number, the 272A4075 portion, and the 20090929 (date) portion. When the application comes across the oddly named files, it fails: it tries to parse 'TestBottom' as a date and inserts 'SU2' instead of the 272 number. Basically, I want the app to recognize that if a file's name is not formatted like the first example, it should skip it. Any advice?
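
    A hedged sketch of the skip test (the NameFilter/ShouldParse names are made up): a standard name splits on '_' into exactly four parts, with an 8-digit date and a 6-digit time in the last two slots, which TryParseExact can validate in one go.

        using System;
        using System.Globalization;
        using System.IO;

        static class NameFilter
        {
            // R306025COMP_272A4075_20090929_080159.txt -> 4 parts once ".txt" is dropped
            public static bool ShouldParse(string path)
            {
                string[] parts = Path.GetFileNameWithoutExtension(path).Split('_');
                DateTime stamp;
                return parts.Length == 4
                    && DateTime.TryParseExact(parts[2] + parts[3], "yyyyMMddHHmmss",
                           CultureInfo.InvariantCulture, DateTimeStyles.None, out stamp);
            }
        }

    The test file R306025COMP_SU2_TestBottom_20090915_101441.txt splits into five parts, so it is skipped without any date parsing.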

    Read the article

  • Iterating through folders and files in batch file?

    - by Will Marcouiller
    Here's my situation. A project has as its objective to migrate some attachments to another system. These attachments will be located under a parent folder, let's say "Folder 0" (see this question's diagram for a better understanding), and they will be zipped/compressed. I want my batch script to be called like so: BatchScript.bat "c:\temp\usd\Folder 0". I'm using 7za.exe as the command-line extraction tool. What I want my batch script to do is iterate through "Folder 0"'s subfolders and extract all of the contained ZIP files into their respective folders. It is mandatory that the extracted files end up in the same folder as their respective ZIP files, so files contained in "File 1.zip" are needed in "Folder 1", and so forth. I have read about the FOR...DO command in the Windows XP Professional Product Documentation - Using Batch Files. Here's my script (note that inside a batch file, FOR variables need a double percent sign, and FOR /R walks the subfolder tree recursively):

        @ECHO OFF
        REM %~1 is the root folder passed on the command line ("Folder 0")
        FOR /R "%~1" %%Z IN (*.zip) DO (
            REM change into the archive's own folder so the files extract next to it
            PUSHD "%%~dpZ"
            7za.exe e "%%~nxZ"
            POPD
        )

    I also needed to change the current directory before calling 7za.exe for each extraction; the PUSHD/POPD pair (the batch equivalent of cd) takes care of that. Anyone's help is gratefully appreciated.

    Read the article

  • How to create a custom yaml config file in Symfony

    - by Guillaume Flandre
    What I want to do is quite simple: store data in a custom config file that I want to read later on. I created my file something.yml and put it in the global config directory. It looks like this:

        prod:
          test: ok
        dev:
          test: ko
        all:
          foo: bar
          john: doe

    Then I copied config_handlers.yml, also put it in the config directory, and added the following at the top of the file:

        config/something.yml:
          class: sfDefineEnvironmentConfigHandler
          param:
            prefix: something_

    But if I call sfConfig::get("something_foo") I keep getting NULL. What did I do wrong? I just want to read values, so there should be no need to create a custom config handler, right? I've read the doc here: http://www.symfony-project.org/book/1_2/19-Mastering-Symfony-s-Configuration-Files even though I'm running 1.4 (I don't think that part changed since then). Edit: Of course I could use sfYaml::load(), but I'd like to do things in a better way.
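
    One hedged explanation: in symfony 1.x, registering a handler in config_handlers.yml is not enough by itself; the file also has to be pulled through the config cache (and the cache cleared after the change, e.g. with php symfony cc). A minimal sketch of the usual invocation:

        // run the registered handler by requiring the compiled config
        include sfContext::getInstance()->getConfiguration()
            ->getConfigCache()->checkConfig('config/something.yml');

    After that, sfConfig::get('something_foo') should return 'bar' for the values in the all block.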

    Read the article

  • Why does a web server receive files from local computers much slower than from other websites?

    - by T...
    The server at http://any2djvu.djvuzone.org receives the same files much more slowly from local computers than from links on other websites (the same files having been uploaded to those other websites, such as dropbox.com). Uploading a file from a local computer to another website such as Dropbox is also much faster than uploading it to the Any2DjVu site. For example, a 17 MB PDF file needs more than a minute to upload to the Any2DjVu server from a local computer on a normal ISP such as Comcast high-speed internet, but takes less than 3 seconds from a Dropbox link to the Any2DjVu server, and around 10 seconds from the same local computer to Dropbox. Why is there such a big difference in upload speeds to a web server? Thanks!

    Read the article

  • Combine multiple audio files into a single higher-quality audio file

    - by namenlos
    BACKGROUND: My team gave a demo to a large audience, and we recorded the audio of the demo in three locations in the room. The audio was recorded using cheap laptop microphones, and I was not involved in the recording of the audio or the demo. Both audio files are poor in some way: the first one was recorded near the speaker, so it clearly captures his voice, but the audience is muffled, and it is also slightly noisy. The second recording was made in the middle of the audience; it captures the audience's questions clearly but picks up the speaker sometimes well and sometimes poorly (not all the speakers spoke loudly enough to be heard). MY QUESTION: Is there any technique or software that can merge these audio files in such a way that the best qualities of each are preserved? I am NOT asking how to simply merge them into one track; I've already done that in Audacity and it is certainly better. What I am looking for is closer to how HDR images are created: multiple exposures combined into an enhanced new version which is not simply an average of the inputs. NOTE: I am not an "audio" guy, just a normal user.

    Read the article

  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently I have the following spec file, generated by a Maven plugin, to produce an RPM. The question is: will I find the files which are mentioned in the spec file inside the BUILDROOT/SPECS/SOURCES/SRPMS structure after the RPM generation?

        %define _unpackaged_files_terminate_build 0
        Name: rpm-1
        Version: 1.0
        Release: 1
        Summary: rpm-1
        License: 2009 my org
        Distribution: My App
        Vendor: my org
        URL: www.my.org
        Group: Application/Collectors
        Packager: my org
        Provides: project
        Requires: /bin/sh
        Requires: jre >= 1.5
        Requires: BASE_PACKAGE
        PreReq: dependency
        Obsoletes: project
        autoprov: yes
        autoreq: yes
        BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot

        %description

        %install
        if [ -e $RPM_BUILD_ROOT ]; then
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT
        else
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT
        fi
        ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2
        ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2
        ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh

        %files
        %defattr(-,myuser,mygroup,-)
        %dir "/usr/myusr/app"
        "/usr/myusr/app2"
        "/tmp/myapp/somefile"
        "/tmp/myapp/somefile2"
        "/usr/myusr/app/lib"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh"
        %dir "/usr/myusr/app/conf"
        %config "/usr/myusr/app/conf/log4j.xml"
        "/usr/myusr/app/conf/log4j.xml.deliver"

        %prep
        echo "hello from prepare"

        %pre -p /bin/sh
        #!/bin/sh
        if [ -s "/etc/init.d/myapp" ]
        then
          /etc/init.d/myapp stop
          rm /etc/init.d/myapp
        fi

        %post
        #!/bin/sh
        #create soft link script to services directory
        ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp
        chmod 555 /etc/init.d/myapp

        %preun
        #!/bin/sh
        #the argument being passed in indicates how many versions will exist
        #during an upgrade, this value will be 1, in which case we do not want to stop
        #the service since the new version will be running once this script is called
        #during an uninstall, the value will be 0, in which case we do want to stop
        #the service and remove the /etc/init.d script.
        if [ "$1" = "0" ]
        then
          if [ -s "/etc/init.d/myapp" ]
          then
            /etc/init.d/myapp stop
            rm /etc/init.d/myapp
          fi
        fi;

        %triggerin -- dependency, dependency1
        echo "hello from install"

        %changelog
        * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk
        -update BuildPreReq to include rep-gtk and rep-gtkgnome
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk
        -0.27.2
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk
        -added BuildPreReq
        -change name from Sawmill to Sawfish

    The problem I found is that the files (filter.txt in particular) are present after the generation process on an Ubuntu system but not on a SuSE system, which might be caused by different rpm versions? Currently we have an integration test which fails because the file (filter.txt under a buildroot folder) does not exist.
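
    To answer the "where do the files end up" part empirically, the payload of the generated package can be listed directly, which shows whether filter.txt was actually packaged; a minimal sketch (the path under target/ and the arch are assumptions about the rpm-maven-plugin layout):

        rpm -qlp target/rpm/rpm-1/RPMS/noarch/rpm-1-1.0-1.noarch.rpm | grep filter.txt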

    Read the article

  • Converting an Outlook Express CSV address book and DBX files into Thunderbird on Windows 7

    - by PiotrK
    Recently I changed my OS from XP to Windows 7. I made a backup of all my Outlook Express messages (the DBX files, and the address book as CSV). On Windows 7 I want to import that data into Thunderbird. There is an option for importing from Outlook Express, but it looks for live application data (I can't point it at a directory with the actual files myself), and there is no Outlook Express installed on Windows 7, so I can't just import the data back into it and then into Thunderbird. How can I import that data into Thunderbird?

    Read the article
