Search Results

Search found 45804 results on 1833 pages for 'large files'.

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (the Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've got it started, and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it; e.g. if I run dig apache.org 3 times, none of the queries hit the cache. I'm viewing the cache stats with nscd -g to determine whether it's been used. I've also turned the debug log level up to see if I can spot any hits, and the queries don't even reach nscd.

    nsswitch.conf:

        # Begin /etc/nsswitch.conf
        passwd: files
        group: files
        shadow: files
        publickey: files
        hosts: cache files dns
        networks: files
        protocols: files
        services: files
        ethers: files
        rpc: files
        netgroup: files
        # End /etc/nsswitch.conf

    nscd.conf:

        #
        # /etc/nscd.conf
        #
        # An example Name Service Cache config file. This file is needed by nscd.
        #
        # Legal entries are:
        #
        # logfile <file>
        # debug-level <level>
        # threads <initial #threads to use>
        # max-threads <maximum #threads to use>
        # server-user <user to run server as instead of root>
        # server-user is ignored if nscd is started with -S parameters
        # stat-user <user who is allowed to request statistics>
        # reload-count unlimited|<number>
        # paranoia <yes|no>
        # restart-interval <time in seconds>
        #
        # enable-cache <service> <yes|no>
        # positive-time-to-live <service> <time in seconds>
        # negative-time-to-live <service> <time in seconds>
        # suggested-size <service> <prime number>
        # check-files <service> <yes|no>
        # persistent <service> <yes|no>
        # shared <service> <yes|no>
        # max-db-size <service> <number bytes>
        # auto-propagate <service> <yes|no>
        #
        # Currently supported cache names (services): passwd, group, hosts, services
        #
        logfile /var/log/nscd.log
        threads 4
        max-threads 32
        server-user nobody
        # stat-user somebody
        debug-level 9
        # reload-count 5
        paranoia no
        # restart-interval 3600

        enable-cache passwd yes
        positive-time-to-live passwd 600
        negative-time-to-live passwd 20
        suggested-size passwd 211
        check-files passwd yes
        persistent passwd yes
        shared passwd yes
        max-db-size passwd 33554432
        auto-propagate passwd yes

        enable-cache group yes
        positive-time-to-live group 3600
        negative-time-to-live group 60
        suggested-size group 211
        check-files group yes
        persistent group yes
        shared group yes
        max-db-size group 33554432
        auto-propagate group yes

        enable-cache hosts yes
        positive-time-to-live hosts 3600
        negative-time-to-live hosts 20
        suggested-size hosts 211
        check-files hosts yes
        persistent hosts yes
        shared hosts yes
        max-db-size hosts 33554432

        enable-cache services yes
        positive-time-to-live services 28800
        negative-time-to-live services 20
        suggested-size services 211
        check-files services yes
        persistent services yes
        shared services yes
        max-db-size services 33554432

    resolv.conf:

        # Generated by dhcpcd from eth0
        nameserver 127.0.0.1
        domain westell.com
        nameserver 192.168.1.1
        nameserver 208.67.222.222
        nameserver 208.67.220.220

    As kind of a side note, I'm using Arch Linux.
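
    One thing worth checking before touching the configuration (a side note, not from the article): dig, host and nslookup talk to the nameservers in resolv.conf directly and bypass the glibc NSS stack entirely, so they will never show up in nscd's counters no matter how hosts caching is set up. Lookups that go through nsswitch.conf, such as getent or ordinary applications calling getaddrinfo, are the ones that can hit the cache:

        # bypasses NSS, so nscd never sees it
        dig apache.org

        # goes through nsswitch.conf; repeat it and the hosts cache counters should move
        getent hosts apache.org
        getent hosts apache.org

        # compare hit/miss statistics for the hosts cache before and after
        nscd -g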

    Read the article

  • What is the syntax for DSynchronize's "exclude filter" for a file's full path, to exclude bin\* and obj\* of a C# solution?

    - by Nam G. VU
    DSynchronize is a great free tool for syncing two folders. I'm using it to sync two solutions checked out from two different TFS Team Collections. I want to exclude all files in the bin folder and all files in the obj folder. I tried bin\*; obj\* but it doesn't work. How can I do that? P.S. Trying *.g.* and *cache* does exclude the files whose names match those filters, so it seems the filter is applied to the file name only, NOT to the file's full path.

    Read the article

  • How to copy lots of files between two computers, without network?

    - by Steve Bennett
    I want to copy around 50 GB of files from my desktop to my work laptop. For some reason, the laptop won't connect to my home network. I haven't had any luck with a direct ethernet connection either, and I'm not willing to change any of the laptop's network configuration (last time I did that, I couldn't get onto the network at work, making me Not Very Popular). So... what else is there? The obvious route is copying via SD card, but my largest card is 8 GB and I can't find a good workflow. Is there a tool designed for this, where I could just repetitively move the card back and forth without having to select files? I've tried using TeraCopy, but you end up missing a few files. I guess I could zip everything up into multi-volume .rars or something... but is there a more elegant way?
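
    If a Unix-style shell happens to be available on both machines (say via Cygwin; that's an assumption, nothing in the question requires it), the multi-volume idea can be done with tar and split instead of rar, cutting the archive into card-sized pieces that are trivial to shuttle in order. A minimal sketch, with the source path as a placeholder:

        # on the desktop: pack the data and cut it into pieces small enough for the card
        # (kept under FAT32's 4 GB per-file limit)
        tar -czf - /path/to/files | split -b 2000M - backup_part_

        # shuttle backup_part_aa, backup_part_ab, ... across a few card-loads at a time

        # on the laptop: reassemble in name order and unpack
        cat backup_part_* | tar -xzf -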

    Read the article

  • How can I set audit controls on files owned by TrustedInstaller using Powershell?

    - by Drise
    I am trying to set audit controls on a number of files (listed in ACLsWin.txt) located in %Windows%\System32 (for example, aaclient.dll) using the following PowerShell script:

        $FileList = Get-Content ".\ACLsWin.txt"
        $ACL = New-Object System.Security.AccessControl.FileSecurity
        $AccessRule = New-Object System.Security.AccessControl.FileSystemAuditRule("Everyone", "Delete", "Failure")
        $ACL.AddAuditRule($AccessRule)
        foreach($File in $FileList)
        {
            Write-Host "Changing audit on $File"
            $ACL | Set-Acl $File
        }

    Whenever I run the script, I get the error PermissionDenied [Set-Acl] UnauthorizedAccessException. This seems to come from the fact that the owner of these files is TrustedInstaller. I am running the script as Administrator (even though I'm on the built-in Administrator account) and it still fails. I can set these audit controls by hand using the Security tab, but there are at least 200 files, and doing it by hand may lead to human error. How can I get around TrustedInstaller and set these audit controls using PowerShell?

    Read the article

  • Windows 7 won't recognize backup set; can I script extracting the files in some other way?

    - by datatoo
    The Windows 7 Backup/Restore created multiple backup sets, and I was able to restore the oldest version but not the most recent one, which is not seen by the application. I do see all of the zip files, and there are hundreds in the later versions. Is there a way to extract each of these correctly outside of the regular restoration method? Perhaps by scripting an extract of each day, one after another? Further clarification: the backup files were all made to an external drive. The original computer died completely: power supply, drives, everything. I am trying to reconstruct as much as possible, and the only backup set recognized is six months older. This was recovered over a new install, but unzipping thousands of zip files is not really a simple unzip-and-copy project, as the original paths are not a simple thing to reconstruct.
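
    As far as I recall, each of those zip files preserves the folder structure of the files inside it, so if a command-line unzip is available (for example Info-ZIP's unzip under Cygwin; an assumption, since the question doesn't say what tools are on hand), one blunt option is to loop over every archive in the backup set and extract them all into a single tree. The "Backup Files"* folder pattern below is only illustrative and may need adjusting to the actual layout on the external drive:

        # extract every zip in the backup set into one restore tree; archives are
        # processed in plain alphabetical order and later files overwrite earlier ones
        mkdir -p /restore
        for f in "Backup Files"*/*.zip; do
            unzip -o "$f" -d /restore
        done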

    Read the article

  • Is it better to have more small RAM chips or fewer large ones?

    - by Alex Andronov
    I am currently building a new server. I have to choose between 32GB of memory for 2 CPUs, DDR3, 1066MHz (8x4GB dual-ranked RDIMMs) and 36GB of memory for 2 CPUs, DDR3, 1066MHz (18x2GB dual-ranked RDIMMs), both at the same price. Should I go for the higher RAM amount or the fewer chips? This will be for a Dell PowerEdge R710 with two Intel® Xeon® E5530s: 2.4GHz, 8MB cache, 5.86 GT/s QPI, Turbo, HT. Thanks

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured the throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    / is a logical volume on the disk I'm testing with, mounted without DRBD.

    iperf:

        [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec

    According to "Throughput overhead expectations", the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case the network and I/O seem to be pretty evenly matched, so it sounds like I should be able to get around 100 MB/s. With the raw drbd device, I get

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global { usage-count yes; }
        common { protocol C; }
        syncer { al-extents 1801; rate 33M; }

    data_mirror.res:

        resource data_mirror {
            device /dev/drbd1;
            disk /dev/sdb1;
            meta-disk internal;
            on cluster1 {
                address 192.168.33.10:7789;
            }
            on cluster2 {
                address 192.168.33.12:7789;
            }
        }

    For the hardware I have two identical machines: 6 GB RAM, quad-core AMD Phenom 3.2GHz, motherboard SATA controller, 7200 RPM 64MB-cache 1TB WD drive. The network is 1Gb, connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edited: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured the average bandwidth while I ran the dd test 10 times. I got:

        avg ~450 Mbits writing to ext4
        avg ~800 Mbits writing to raw device

    It looks like with ext4, DRBD is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.
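
    One way to narrow this down further (a sketch along the lines of the tests already quoted, not something from the article): repeat the dd run at a few block sizes against both the plain backing filesystem (/data.tmp, as in the first test) and the DRBD-backed ext4 mount (/mnt/data.tmp), to see whether the gap behaves like a fixed per-request cost or a hard bandwidth cap. The paths are the ones already used above:

        # re-run the dd test at several block sizes, writing ~512 MB each time
        for bs in 4M 64M 512M; do
            count=$(( 512 / ${bs%M} ))
            echo "== bs=$bs, count=$count =="
            dd if=/dev/zero of=/data.tmp     bs=$bs count=$count oflag=direct 2>&1 | tail -n1
            dd if=/dev/zero of=/mnt/data.tmp bs=$bs count=$count oflag=direct 2>&1 | tail -n1
            rm -f /data.tmp /mnt/data.tmp
        done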

    Read the article

  • Fastest way to move files from a guest VM to the host?

    - by iTayb
    Hey there. I'm looking for the fastest way to copy files from a VM to physical servers. Setting up a network between them isn't something I'd like to do; I believe it is much more secure not to have one. VMware suggests using the Copy-VMGuestFile cmdlet from their PowerCLI interface, but I find it slow (running at approximately 1.5MB/s). I thought of the following: creating a new virtual hard drive, moving the files in, downloading the .vmdk file from the server, and then extracting it locally. That is possible, but it won't work with running VMs, and I don't want to shut down the VM every time I want to move files. Another option is to use the virtual floppy device and download the .flp file; that works even if the VM is running, but it is limited to 2.8MB. Do I have any other way? I'm using ESXi 4.1. Thanks.

    Read the article

  • How to configure Linux to open files by extension?

    - by Gregory MOUSSAT
    The various Linux desktops open files according to their MIME type. This is a very nice feature, but I also need to open them by extension (as with Windows). For instance, I want to open every xxxxx.vnc file with a specific program when I double-click on it. I use Xfce, but I don't think it differs from GNOME or KDE, because all of them use the same configuration files (defaults.list and mimeapps.list). If possible, the settings should be user-specific, not system-wide. I've found very little information about this, and all of it is system-wide, so it may be wiped out by some update.
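
    For what it's worth, the desktops dispatch purely on MIME type, so the usual per-user workaround (a sketch using the freedesktop.org tools; the type name application/x-vnc and myviewer.desktop are placeholders, not anything the question specifies) is to declare a MIME type whose only rule is the file extension, and then bind a default application to it. First, create ~/.local/share/mime/packages/vnc.xml with content along these lines:

        <?xml version="1.0" encoding="UTF-8"?>
        <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
          <mime-type type="application/x-vnc">
            <comment>VNC connection file</comment>
            <glob pattern="*.vnc"/>
          </mime-type>
        </mime-info>

    Then rebuild the user MIME database and set the handler; this should record the association per user (in the user's mimeapps.list) rather than system-wide:

        update-mime-database ~/.local/share/mime
        xdg-mime default myviewer.desktop application/x-vnc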

    Read the article

  • If using eMule, how to keep current downloading files while adding a hard drive?

    - by the searcher
    If there are still files downloading (rare files that will need an extra two weeks or an unknown amount of time) but I need to use a new hard drive because no space is left on the current one, is there a way to use the new hard drive while keeping the existing downloads going? I ask because if I change the folder in eMule from G: to H:, all existing downloads will disappear too... Update: I can move the completed files over to the new hard drive... but that is going to be a never-ending task (the old hard drive gets full, move some, and repeat).

    Read the article

  • Windows 7 Explorer keyboard shortcut: set focus to files/folders/content area?

    - by Pup
    Is there a Windows 7 Explorer keyboard shortcut to set focus to the files/folders/content area (depicted below)? This has bothered me for so long... I want to set my Explorer window's focus to the files pane (shown below). What's the most efficient way to do that with the keyboard? Here's what I've been doing:
    - Tab / Shift+Tab to move focus through interactive window elements until it looks like a selection rectangle appears over one of the files in my window.
    - Alt+V, Alt+D to change the appearance setting of a folder's content icons. This doesn't always work, depending on what's selected at the time.

    Read the article

  • Transferring files from computer to Android Simulator SD Card?

    - by mgpyone
    I've tried the Android Simulator for Mac and can use it well. I've also set 100 MB of SD storage for that simulator. However, I haven't found a way of transferring files from my Mac to the simulator's SD storage. My current solution is to send files to my e-mail, access it via the simulator, and download them there. That isn't available for all formats, though; something like an image file (.img) is not allowed to be downloaded to the simulator. I've looked for an SD card folder for the simulator within the Android folder I extracted and found nothing. I want to transfer files from my HD to the Android simulator's SD card storage. Is there any effective solution that supports this? I'm on Mac OS X 10.6.2.
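
    If the Android SDK command-line tools are installed (an assumption; the file names below are only examples), adb can copy files straight onto the running emulator's SD card, which avoids the e-mail round trip and works for any file type:

        # confirm the emulator is visible to adb
        adb devices

        # push a file from the Mac onto the emulator's SD card
        adb push ~/Desktop/somefile.img /sdcard/

        # check that it arrived
        adb shell ls /sdcard/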

    Read the article

  • Best way to send large files point-to-point?

    - by Adam S
    I'm looking for a way to send a 10GB file to a friend. I really need to send it over the internet, but e-mail or uploading sites are not really an option. I remember MSN Messenger having a file transfer feature that worked decently well, but my friend doesn't have that software and doesn't want to get it. I know that the professional versions of TeamViewer have such a feature, but are there any free alternatives?

    Read the article

  • Forgot to unmount/eject external hdd, lost moved files. OSX

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 gigs of data to it (via drag and drop while holding Command, to move the files rather than copy them). They moved to the drive alright, but as I was having some issues and Finder crashed after the transfer, I was unable to eject the volume, and later everything froze, so I had to do a hard restart (hold the power button). When I remounted the volume (plugged the external hdd back in), it no longer had any of the files I had moved onto it. How can I recover these files? It was a lot of data! Cheers.

    Read the article

  • How to rename files in a folder using the ls command output as a pipe?

    - by user1179459
    I am using GNU/Linux and the BASH shell. What I want to do on the server is be able to download the files starting with B* and D* and then rename them to ~B* and ~D* (same file name, just a ~ in front). I wrote the following, which works fine for the downloading part; ideally I would like it to use the ls command output as well, but I don't know how to do that.

        cd inbox
        get D*
        get B*
        ls B*|rename $0 ~B.*
        bye

    Any idea? Ideally, what I would like is for the ls command to send the list of files one by one to the get command, and then, once the get command has completed, have the rename command executed, renaming the server files.
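
    For the renaming half, a plain shell loop is usually more robust than piping ls into rename (names with spaces survive, and it doesn't depend on which rename implementation is installed). A minimal sketch, assuming the files are local and the goal really is the same name with a literal ~ in front:

        # prefix every B* and D* file with a literal ~ ; the quotes stop the shell
        # from treating ~ as a home-directory reference
        for f in B* D*; do
            [ -e "$f" ] || continue    # skip unmatched patterns
            mv -- "$f" "~$f"
        done

    If the renames have to happen on the remote server instead, the same loop would need to run there, or each file would need the ftp/sftp client's own rename command.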

    Read the article

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or, in other words: assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
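
    To put rough numbers on it: two directory levels taken from a hash of the file name, with 256 entries per level, gives 65,536 leaf directories, so three million files works out to roughly 45 per directory, comfortably small for ext3. A minimal sketch of that layout (the md5-based path is just one illustrative choice, not something from the question):

        # place a file at ./ab/cd/<name>, where ab and cd are the first two bytes of its md5
        name="abc.ext"
        h=$(printf '%s' "$name" | md5sum | cut -c1-4)
        d1=${h:0:2}; d2=${h:2:2}
        mkdir -p "$d1/$d2"
        mv "$name" "$d1/$d2/$name"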

    Read the article

  • Copying and rotating large table from Excel to Word without turning it into picture/wmf/...

    - by ldigas
    What would be the easiest way of copying and rotating a table made in Excel into Word, without turning it into a picture/enhanced metafile/something similar? I know I can use the Section Break routine, but the problem is that the table needs to go into a company frame (which I cannot turn into landscape), so I literally need to rotate the table by 90 degrees. Any way of doing something like that?

    Read the article

  • File History - Unable to scan user libraries for changes and perform backup of modified files for configuration

    - by azl
    When trying to run the File History tool in Windows 8, it runs for about 2 seconds and then stops. No files are backed up to the selected drive. In the Event Viewer, the only error that appears is: Unable to scan user libraries for changes and perform backup of modified files for configuration C:\Users\win8User\AppData\Local\Microsoft\Windows\FileHistory\Configuration\Config. I've tried deleting both the configuration files and the FileHistory directory on the target drive; setting up File History again results in the same error. Is there a better way to track down what is causing the failure, or to somehow get the File History tool to create a more verbose log file that shows what is causing the problem?

    Read the article

  • Viewing a large field in a query in SQL management studio with ZOOM?

    - by smithym
    Hi there, can anyone help? I am using SQL Management Studio (SQL Server 2008) to run queries, and some of the fields that come back are varchar(max), for example, and contain a lot of information. Is there a zoom feature to open a window and show me the contents with vertical and horizontal scrollbars? I remember there was; I thought it was F2, but I must have been mistaken, as it doesn't work. Now I have to scroll horizontally in the field, and it's really difficult to see everything. Also, some of the fields contain newline codes etc., so it would be great if the zoom feature would display the info using those newline codes. Does anybody know how to do this?

    Read the article

  • Website with large number of users keeps going down due to memory leaks: Tomcat 6 and Java 6

    - by user1766478
    We host many websites on one of our two virtual servers. We use Tomcat 6 and Java 6. It is an MVC model with a Hibernate-like layer. The problem is that one of our biggest clients, the one with the largest number of members, keeps crashing the server every 6-8 hours (precisely in the mornings, when most members log in). We have been having this issue for 4 days now and are trying to figure out the problem, but we suspect memory leaks. Any suggestions?
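
    Not something taken from the article, but with a suspected leak on Java 6 the usual first step is to capture a heap dump around a crash and open it in a heap analyzer (Eclipse MAT, jhat, etc.). The JVM flags and jmap calls below are standard HotSpot options; the PID and dump path are placeholders:

        # have the JVM write a heap dump automatically when it runs out of memory
        # (add to CATALINA_OPTS / JAVA_OPTS for Tomcat)
        -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp

        # or grab a dump on demand from the running Tomcat process
        jmap -dump:live,format=b,file=/var/tmp/tomcat-heap.hprof <tomcat-pid>

        # quick view of which classes dominate the heap
        jmap -histo:live <tomcat-pid> | head -n 30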

    Read the article

  • Do multiple files in SQL Server when using RAID help reduce conflicts in growth and file locking?

    - by Dr Giles M
    I've been reading around and get the impression that if you are using RAID, then using multiple SQL Server files within a filegroup won't yield any further improvements, and the benefits are purely administrative (if you start to run out of space, or want to partition data into manageable chunks for backups or for balancing the data around your big server room). However, being a reasonably savvy software person, it's not unthinkable to hypothesise that, even for smaller databases, SQL Server will perform growth and locking operations (for writes) on a LOGICAL file basis, so even if you are using RAID it seems to make sense to have multiple files in a filegroup to balance I/O. Or does the time taken to reconstruct the data from distributed filegroups outweigh the benefits of reduced locking? I'm also aware that the behaviour and benefits may be different for tables/indexes/logs. Is there a good site that distinguishes the benefits of multiple files when RAID is already in place?

    Read the article

  • Windows 7 x64: how to verify integrity of ALL files on an NTFS disk?

    - by kilves76
    I'm looking for a tool that would reliably verify the integrity of ALL files on a Windows 7 x64 NTFS disk. This is for testing experimental defrag software, so it really needs to be secure and foolproof. I know it will take a long time; there are millions of files on the disk, but safety just cannot be compromised in a situation like this. A freeware solution is much preferred. It can be either Windows software (introducing pitfalls around files changing due to booting Windows) or a stand-alone boot (for example a Linux boot CD plus a USB key for storing checksums/metadata).
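
    For the "Linux boot CD + USB key" variant mentioned at the end, a checksum baseline is about as foolproof as it gets: hash every file once before the defrag run and re-check afterwards from the same environment. A minimal sketch; the device node, mount points and output file are placeholders:

        # before defragmenting: hash every file on the NTFS volume (read-only mount)
        mount -o ro /dev/sda2 /mnt/win
        cd /mnt/win
        find . -type f -print0 | xargs -0 sha1sum > /media/usbkey/baseline.sha1

        # after defragmenting: boot the same environment again and verify
        cd /mnt/win
        sha1sum -c --quiet /media/usbkey/baseline.sha1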

    Read the article

  • Want to install OS from USB instead of CD: how to deal with *.img image files?

    - by claws
    I'm on Windows, and I'm trying the DragonFly BSD operating system. As you can see here: http://www.dragonflybsd.org/download/ there are two kinds of images available for download: CD (.iso) and USB (.img) files. I downloaded the *.iso and am using UNetbootin to make a bootable USB stick, but it's taking a hell of a lot of time: it's been 2 hours and it's only 50% done (9k of 18k files). I'm really fed up now! I used the *.iso because I didn't know how to deal with *.img files. Will a *.img file be quicker? How do I use it to make a bootable USB?
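
    The .img is a raw disk image: it is meant to be written onto the USB stick sector for sector rather than unpacked file by file (which is likely why UNetbootin grinding through thousands of ISO files is so slow). From a Linux or BSD environment the classic tool is dd; on Windows a raw-image writing utility would be needed instead. A sketch, where dfly-usb.img is a placeholder for the downloaded image and /dev/sdX must be replaced with the stick's actual device node (double-check it, since dd overwrites whatever it points at):

        # write the image straight onto the USB stick (destroys the stick's current contents)
        dd if=dfly-usb.img of=/dev/sdX bs=1M
        sync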

    Read the article
