Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • Windows 7 Slow Searching

    - by Guy Thomas
    I have a new Windows 7 machine with twice as much RAM and a faster processor than my old Windows Server 2008 R2 machine. I am disappointed that searching among my 10,000 image files takes twice as long on the new Windows 7 machine. Both machines have their own copy of the same files. In other respects, e.g. opening my huge Outlook files, the new machine is faster. The Windows Search service is running, and I set up indexing on the image folder about three days ago. Any ideas why I get such a poor index/search experience? Other than adding or removing folders, is there anything I can do to tweak indexing?

    Read the article

  • SBS server with 2 NICs and 2 internet connections from different providers not working as it should

    - by erik-van-gorp
    We have the following configuration: an SBS 2003 server in a domain (mydomain.com) with 2 network cards, each connected to a different network (provider), with different gateways, one for web and one for mail and clients. (We do this because the bandwidth we get from a single provider is too small to handle all the mail (+spam) traffic and web services, so we took 2 providers.) DNS is as follows:
      www.mydomain.com  1.2.3.4
      mail.mydomain.com 5.6.7.8
    NIC 1 (192.168.1.3) is connected to the internet through a firewall at 192.168.1.1, with WAN address 1.2.3.4. NIC 2 (10.0.0.3) is connected to the internet through a firewall at 10.0.0.1, with WAN address 5.6.7.8. Both NICs have their default gateway set to their corresponding routers, and the metrics are set equal. (I know this isn't a supported config, but it works more or less.) In this configuration I can use RDP on both WAN addresses, and telnet to port 25 works on both as well. The issue is that for a few weeks now we get regular disconnections and website hiccups (timeouts), several per hour. If we set one router to a higher metric, that route no longer works. In short, I want the mail to route through NIC 2 and the web through NIC 1. Any better configuration (without installing a second mail server)?

    Read the article

  • Only tunnel certain applications via OpenVPN

    - by jinjin
    Hi, I've purchased a VPN solution. It works correctly when I have "redirect-gateway def1" in the configuration file (routing all traffic through the VPN). However, when I remove that line from the configuration file I am still able to ping out of the machine (ping -I tap0), but I cannot ping the IP assigned to the machine (it's a public IP); I get the error: Destination Host Unreachable. I only want certain applications (e.g. ZNC, irssi) to send traffic through the VPN tunnel, and for all of them I can select which IP they use. However, they can't receive any data, making the tunnel essentially useless to me once redirect-gateway is disabled. Any ideas on how to let specific applications use the tunnel, without forcing everything to go through it? My configuration file is as follows:
      dev tap
      remote #.#.#.#
      float #.#.#.#
      port 5129
      comp-lzo
      ifconfig #.#.#.# 255.255.255.128
      route-gateway #.#.#.#
      #redirect-gateway def1
      secret key.txt
      cipher AES-128-CBC
    The output of ifconfig -a when the tunnel is connected:
      tap0  Link encap:Ethernet  HWaddr 00:ff:47:d3:6d:f3
            inet addr:#.#.#.#  Bcast:#.#.#.#  Mask:255.255.255.255
            inet6 addr: <snip> Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:612 errors:0 dropped:0 overruns:0 frame:0
            TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:100
            RX bytes:25704 (25.1 KiB)  TX bytes:6427 (6.2 KiB)
    EDIT: the Bcast:#.#.#.# (ifconfig) is different from the route-gateway #.#.#.# (OpenVPN), if that makes any difference.
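    One approach that seems to fit here (a sketch, not a tested fix) is source-based policy routing: since ZNC and irssi can be bound to a specific local IP, bind them to the tap0 address and route only traffic sourced from that address via the tunnel. The table number and addresses below are placeholders:
      # separate routing table for VPN-bound traffic
      ip route add default via <route-gateway-ip> dev tap0 table 100
      # anything sourced from the tunnel address uses that table
      ip rule add from <tunnel-ip>/32 table 100
      ip route flush cache
    With that in place, redirect-gateway can stay disabled and only the applications bound to the tunnel IP go through the VPN.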

    Read the article

  • Linux: Alternative to rsync? (ie, scp with resume)

    - by Joernsn
    I've been using rsync to automatically send files from one box to another, which is great compared to scp, since it supports resuming. However, when resuming a very large file (10 GB) rsync has to read both files and compare them, which is very slow. I don't need fancy error handling, just "scp with resume", so here's my question: is there an alternative to rsync/scp that supports resuming without having to read both the source and destination files? I've read the manuals without finding anything I can use; please let me know if I've missed something. This is the rsync line I've been using:
      rsync -av --partial --progress --inplace SRC DST
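    One thing that may help, assuming the partially transferred data at the destination is known to be intact: rsync's append mode resumes by simply appending to the existing destination file instead of reading and comparing both copies. A sketch:
      rsync -av --partial --progress --append SRC DST
    Newer rsync releases also have --append-verify, which re-checksums the whole file after the transfer if some extra reading is acceptable.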

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:
      root@ad57rs0b# ls -l 15000PN5AIA3I6_B
      total 232
      drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes
    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
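    One thing worth checking (an assumption, not a diagnosis): ext3 only hashes large directories if the dir_index feature is enabled; without it, lookups in a directory that size are linear scans. A sketch, with /dev/sdXN standing in for the actual device:
      # is dir_index on?
      tune2fs -l /dev/sdXN | grep dir_index
      # if not: enable it, then rebuild the indexes with the filesystem unmounted
      tune2fs -O dir_index /dev/sdXN
      e2fsck -fD /dev/sdXN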

    Read the article

  • How to determine main movie DVD track before ripping via mencoder

    - by Ampp3
    Maybe there's a simple answer for this, but when looking at the files on a DVD (IFOs, VOBs, etc.), is there a way to easily determine the longest/main track? I'm trying to automate the process of finding the main movie track on a DVD and am running into issues. I thought this could be done by finding the biggest track: look through the VTS_XX_N.VOB files, where XX is the track number, and find the track with the largest total file size (summing the sizes of the VOB files for that track). Apparently that isn't correct, though. One DVD had track 7 as the largest track by my method, but mencoder didn't produce the correct output with that track; it worked with track 9 instead. Am I missing something? EDIT: I've heard of the utility 'lsdvd' for getting track information, but I was hoping to avoid compiling it and use a basic method instead (i.e. what I tried above). Does anyone have any idea why my idea didn't work?
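    Since mencoder is installed, mplayer's -identify output is one way to read per-title lengths without compiling lsdvd; a rough sketch (the title range and default DVD device are assumptions):
      # print each title's length in seconds, then keep the longest
      for t in $(seq 1 99); do
          len=$(mplayer -vo null -ao null -frames 0 -identify dvd://$t 2>/dev/null \
                | awk -F= '/^ID_LENGTH/ {print $2}')
          [ -n "$len" ] && echo "$t $len"
      done | sort -k2 -n | tail -n 1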

    Read the article

  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all the important files over, but just to be sure I'd like to keep a disk image of my old disk. To save space I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:
    - The result must be easy to access
    - No need to decompress the whole thing before I can access anything
    - Files should be quick to locate - no TAR/CPIO archive
    - The necessary space should be less than just copying the files over
    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.
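    One read-only, compressed filesystem that covers most of these goals is SquashFS - individual files stay directly accessible and nothing has to be unpacked first, although the image is built once rather than growing automatically. A minimal sketch, assuming the old disk is mounted at /mnt/olddisk:
      # build a compressed image of the old disk's contents
      mksquashfs /mnt/olddisk olddisk.sqsh
      # browse it later without decompressing anything
      mkdir -p /mnt/olddisk-image
      mount -o loop -t squashfs olddisk.sqsh /mnt/olddisk-image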

    Read the article

  • How can I "filter" postfix-generated bounce messages?

    - by Flimzy
    We are using Postfix 2.7 and a custom SMTPD (based on qpsmtpd) in a highly customized configuration for spam filtering. We have a new requirement to filter Postfix-generated bounces through our custom qpsmtpd process (not so much for content filtering, but to process these bounces accordingly). Our current configuration looks (in part) like this - master.cf (only customizations shown):
      2526      inet  n       -       -       -       0       cleanup
      pickup    fifo  n       -       -       60      1       pickup
        -o content_filter=smtp:127.0.0.2
    Our smtpd injects messages into Postfix on port 2526 by speaking directly to the cleanup daemon, and the custom pickup entry instructs Postfix to hand off all locally generated mail (from cron, nagios, or other custom scripts) to our custom smtpd. The problem is that this configuration does not affect Postfix-generated bounce messages, since they do not go through the pickup daemon. I have tried adding the same content_filter option to the bounce daemon commands, but it does not seem to have any effect:
      bounce    unix  -       -       -       -       0       bounce
        -o content_filter=smtp:127.0.0.2
      defer     unix  -       -       -       -       0       bounce
        -o content_filter=smtp:127.0.0.2
      trace     unix  -       -       -       -       0       bounce
        -o content_filter=smtp:127.0.0.2
    For reference, here is my main.cf as well:
      biff = no
      # TLS parameters
      smtpd_tls_loglevel = 0
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${queue_directory}/smtp_scache
      smtp_tls_security_level = may
      mydestination = $myhostname
      alias_maps = proxy:pgsql:/etc/postfix/dc-aliases.cf
      transport_maps = proxy:pgsql:/etc/postfix/dc-transport.cf
      # This is enforced on incoming mail by QPSMTPD, so this is simply
      # the upper possible bound (also enforced in defaults.pl)
      message_size_limit = 262144000
      mailbox_size_limit = 0
      # We do our own message expiration, but if we set this to 0, then postfix
      # will try each mail delivery only once, so instead we set it to 100 days
      # (which is the max postfix seems to support)
      maximal_queue_lifetime = 100d
      hash_queue_depth = 1
      hash_queue_names = deferred, defer, hold
    I also tried adding the internal_mail_filter_classes option to main.cf, but it also had no effect:
      internal_mail_filter_classes = bounce,notify
    I am open to any suggestions, including handling our current content-filtering loop in a different way. If it's not clear what I'm asking, please let me know and I can try to clarify.

    Read the article

  • installing mod_wsgi giving 403 error

    - by John Smiith
    Installing mod_wsgi is giving a 403 error. In httpd.conf I added the code below:
      WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py"
      <Directory "C:/xampp/www/htdocs/wsgi_app/">
          AllowOverride None
          Options None
          Order deny,allow
          Allow from all
      </Directory>
    wsgi_handler.py:
      status = '200 OK'
      output = 'Hello World!'
      response_headers = [('Content-type', 'text/plain'),
                          ('Content-Length', str(len(output)))]
      start_response(status, response_headers)
      return [output]
    Note: localhost is my virtual host domain and it is working fine, but when I request http://localhost/wsgi/ I get a 403 error.
      <VirtualHost *:80>
          ServerAdmin [email protected]
          DocumentRoot "C:/xampp/www/htdocs/localhost"
          ServerName localhost
          ServerAlias www.localhost
          ErrorLog "logs/localhost-error.log"
          CustomLog "logs/localhost-access.log" combined
      </VirtualHost>
    Error log:
      [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico
      [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
      [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] Options ExecCGI is off in this directory: C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py
      [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
      [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico
      [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
    Note: my Apache is not in C:/xampp/bin/apache; it is in C:/xampp/bin/server-apache/.
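    The "Options ExecCGI is off in this directory" error suggests the request is being handled by mod_cgi rather than mod_wsgi, which usually means the wsgi module is not actually loaded (or is loaded after the alias is defined). A hedged sketch of what the relevant httpd.conf lines might look like; mod_wsgi also expects the script to expose a callable named application(environ, start_response):
      # assumption: the module file name/path may differ in this XAMPP install
      LoadModule wsgi_module modules/mod_wsgi.so

      WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py"
      <Directory "C:/xampp/www/htdocs/wsgi_app/">
          Order deny,allow
          Allow from all
      </Directory>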

    Read the article

  • Samba PDC plus universal folder

    - by skids89
    I know how to configure Samba on my Ubuntu box to become a PDC, but I also need some select files to be accessible to multiple users, beyond their personal files. For example, users A-C need to be able to access a schedule saved as a spreadsheet, but user D does not; and users B-D need to be able to access confidential employee info, but user A does not. How do I set this up on top of the PDC structure? Any video tutorials would be a plus. I'm new to Linux, so documentation is a confusing, slow slog to learn from. Thanks so much in advance!
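    On top of the PDC configuration this is normally just extra share definitions in smb.conf, restricted per share with valid users; a minimal sketch (share names, paths and usernames are placeholders):
      [schedule]
          path = /srv/shares/schedule
          valid users = userA userB userC
          writable = yes

      [hr]
          path = /srv/shares/hr
          valid users = userB userC userD
          writable = yes
    The filesystem permissions on the underlying directories still have to allow those users as well.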

    Read the article

  • MacOS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed Mac Pro machines running MacOS X 10.6.6, connected to our MacOS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write all of them.
      < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO)
      < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO)
      < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO

    Read the article

  • Advanced merge directory tree with cp in Linux

    - by mtt
    I need to:
    - Copy all of a tree's folders (with all files, including hidden ones) under /sourcefolder/*, preserving user privileges, to /destfolder/
    - If there is a conflict with a file (a file with the same name exists in destfolder), rename the file in destfolder following a standard rule, like adding an "old" prefix to the filename (readme.txt becomes oldreadme.txt), and then copy the conflicting file from source to destination
    - Conflicts between folders should be transparent: if the same directory exists in both sourcefolder and destfolder, preserve it and recursively copy its contents according to the above rules
    I also need a .txt report that describes all files/folders added to destfolder and all files that were renamed. How can I accomplish this?
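    rsync comes close to this under one assumption - the renamed copy in destfolder gets an .old suffix rather than an old prefix. A sketch, with --itemize-changes captured as the report:
      rsync -a --backup --suffix=.old --itemize-changes /sourcefolder/ /destfolder/ | tee merge-report.txt
    With --backup, a destination file that is about to be overwritten is renamed first, and -a preserves ownership and permissions (when run as root). If the old prefix is a hard requirement, a small find/cp wrapper script would be needed instead.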

    Read the article

  • ffmpeg volume parameter format

    - by tanon
    ffmpeg's -vol parameter is confusing me.
      256 => normal (I guess meaning the same as the input volume, no change)
      512 => double the volume (read this somewhere)
    So what do I use for 3 times the volume? 1.5 times the volume? Basically, let's say I have the maximum sound amplitudes (Audacity levels) in 3 files as 0.8, 0.6 and 0.9. I want to amplify the first two files so that max = 0.9 in all files. What values of -vol would I use?
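    Since 256 means unity gain, the scale is linear: 3x volume would be -vol 768 and 1.5x would be -vol 384. For the example levels, bringing 0.8 and 0.6 up to 0.9 needs gains of 0.9/0.8 = 1.125 and 0.9/0.6 = 1.5, i.e. roughly -vol 288 and -vol 384. A sketch (file names are placeholders; newer ffmpeg builds prefer the volume audio filter):
      ffmpeg -i input1.wav -vol 288 output1.wav
      ffmpeg -i input2.wav -vol 384 output2.wav
      # equivalent with the volume filter
      ffmpeg -i input1.wav -af "volume=1.125" output1.wav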

    Read the article

  • Bootstrapping a SparkleShare project

    - by WoJ
    I just tried SparkleShare as a possible replacement for Dropbox/insynch. It looks quite promising, being based on open standards. I was wondering if someone has gone through the process of "bootstrapping" a SparkleShare project. I have the initial files I would like to keep synchronized already on the two clients and the server (as plain files). I was wondering if there is a way to set up a project so that I would not need to download/upload all the files back and forth (as they are readily available on all three systems). I guess this would involve some git kung-fu I am far from mastering. Thanks!

    Read the article

  • Connect two networks

    - by Meek Barrios
    Connecting two different offices with a wireless link and Linux boxes. Hardware: 2 Cisco RV42s, 2 dual-homed Linux boxes running Debian, 2 2Wires and 2 AirMax 5s. The configuration is:
      Office A
        LAN A (10.1.1.0/24) -> RV42 A (WAN1 - 10.1.1.254) -> 2Wire A (Internet)
        LINUX A (ETH0 (LAN) 10.1.1.253, ETH1 (LINK) 10.1.3.3)
      Wireless link: AirMax A <-> AirMax B, connected as a wireless bridge
      Office B
        LAN B (10.1.2.0/24) -> RV42 B (WAN1 - 10.1.2.254) -> 2Wire B (Internet)
        LINUX B (ETH0 (LAN) 10.1.2.253, ETH1 (LINK) 10.1.3.4)
    Network configuration is:
      LAN A    - default gateway 10.1.1.254
      RV42 A   - static route 10.1.3.0/24 via 10.1.1.253
                 static route 10.1.2.0/24 via 10.1.1.253
                 default on 192.168.1.1 (WAN1 internet access)
      Linux A  - ETH0 10.1.1.253 netmask 255.255.255.0 gw 10.1.1.254
                 ETH1 10.1.3.3 netmask 255.255.255.0 gw 10.1.3.1
      AirMax A - 10.1.3.1 netmask 255.255.255.0 gw 10.1.3.1
      LAN B    - default gateway 10.1.2.254
      RV42 B   - static route 10.1.3.0/24 via 10.1.2.253
                 static route 10.1.1.0/24 via 10.1.2.253
                 default on 192.168.1.1 (WAN1 internet access)
      Linux B  - ETH0 10.1.2.253 netmask 255.255.255.0 gw 10.1.2.254
                 ETH1 10.1.3.4 netmask 255.255.255.0 gw 10.1.3.2
      AirMax B - 10.1.3.2 netmask 255.255.255.0 gw 10.1.3.2
    Both Linux boxes have ip_forward set to 1 and the following iptables rules:
      iptables -F
      iptables -X
      iptables -P FORWARD ACCEPT
      iptables -P INPUT ACCEPT
      iptables -P OUTPUT ACCEPT
    I can ping any IP on the 10.1.1.0/24 segment from Linux B, and any IP on the 10.1.2.0/24 segment from Linux A, but I cannot connect to HTTP or FTP on those machines. From LAN A I cannot see any other network. I'm looking for advice on this configuration, or a better solution. Regards
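    When ping works across the link but TCP services like HTTP and FTP do not, two usual suspects in an asymmetric, dual-gateway setup like this are reverse-path filtering dropping replies and an MTU/MSS problem over the wireless bridge. A hedged sketch for checking both on the Linux boxes (interface names as above):
      # relax reverse-path filtering so asymmetrically routed packets are not dropped
      sysctl -w net.ipv4.conf.all.rp_filter=0
      sysctl -w net.ipv4.conf.eth1.rp_filter=0
      # clamp TCP MSS to the path MTU in case the AirMax link cannot carry full-size segments
      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu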

    Read the article

  • Move some iTunes library items to different drive?

    - by Sören Kuklau
    My internal hard drive is somewhat small, and I only regularly listen to a fraction of my iTunes library anyway, so I'd like to keep large portions of it on an external drive for archival purposes. Since dealing with multiple iTunes libraries is somewhat painful, the solution I'm looking for is to move individual items of the library to a different location without compromising the "Keep organized" and "Copy files" settings. I found an AppleScript that I assume is supposed to do this, Move Files To Folder…, but it instead copies them and doesn't update the library accordingly. I can do this manually by moving a file and then accessing it in iTunes - it'll prompt me for the new location. I just don't intend to do this one by one for thousands of files.

    Read the article

  • Setup CENTOS Centralized AUDIT and RSYSLOG server

    - by Warron.French
    Attempting to use these links: Sending audit logs to SYSLOG server or http://wiki.rsyslog.com/index.php/Centralizing_the_audit_log - I have been unable to get centralized audit logging to work in my all-CentOS network environment. I have 6 workstations, dt1...dt6, and the log files are not generated at all; I cannot tell whether the messages are even being sent from the workstations dt1..dt6 over to the server (srv1). I have configured rsyslog.conf on the workstations as shown in the link "Sending audit logs to SYSLOG server", and added the additional touches for generating the log files into a separate directory per YEAR/MONTH/DAY (using proper syntax) and into separate HOSTNAME-based_audit.log files. Note: rsyslog messaging does appear to work from the workstations over to the server, but the audit logging portion is not working. I am running CentOS 6.5 with RPMs audit-2.2-4.el6_5.x86_64, audit-libs-2.2-4.el6_5.x86_64, and rsyslog-5.8.10-8.el6.x86_64. I have gotten zero responses from wiki.rsyslog.com and really need this to work. If needed I can send files from one of my workstations and the server to aid in the process. Thanks, Warron
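    One piece that is easy to miss on CentOS 6: auditd only hands records to rsyslog through the audispd syslog plugin, so that plugin has to be active on every workstation. A sketch of the plugin file and a matching rsyslog forwarding rule (the local6 facility is an assumption; any free facility works):
      # /etc/audisp/plugins.d/syslog.conf on dt1..dt6
      active = yes
      direction = out
      path = builtin_syslog
      type = builtin
      args = LOG_INFO LOG_LOCAL6
      format = string

      # /etc/rsyslog.conf on dt1..dt6 - forward that facility to srv1
      local6.*    @@srv1:514
    After changing the plugin file, auditd needs a restart (service auditd restart) for audispd to pick it up.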

    Read the article

  • Directory comparison in Meld but ignoring changes that only involve file timestamp?

    - by creamcheese
    I'm using Meld to compare two directories of source code on Ubuntu. However, because all of the files in one of the directories have been 'touched' so that all of their timestamps were updated, Meld is showing them as different, even though the contents of the files have not changed. But I'm only trying to find files that have different content. I don't see an option to get Meld just to look at changed contents. Any ideas for how to do this in Meld or is there a better GUI directory comparison tool for Ubuntu?
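    If Meld itself will not do it, a quick content-only comparison from the command line is diff in recursive, brief mode, which ignores timestamps entirely:
      diff -rq dir1 dir2
    Newer Meld releases also have a folder-comparison preference controlling whether files are compared by content or only by size and timestamp, if upgrading is an option (worth verifying against the installed version).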

    Read the article

  • How can I view a PDF in Firefox when the server specifies the wrong content type?

    - by Sam
    I am using Mozilla Firefox with a PDF viewer plug-in. The plug-in has been correctly associated with Adobe Reader files to view them in the browser in the settings. I would like to be able to view PDF files in Firefox rather than downloading them. This already works correctly when a web server indicates that a file has the Content-Type of application/pdf. However, some web servers provide other Content-Types for PDFs, such as application/octet-stream. (See this example of a PDF served with a non-pdf Content-Type.) I have looked at Firefox's MimeTypes.rdf file, and it appears to only support mapping applications based on file types for non-Internet-based files. How can I have Firefox view all PDF documents in-browser rather than only the ones with the application/pdf Content-Type?

    Read the article

  • Windows 7 Ult machine can see XP Pro but not vice versa

    - by Chadworthington
    My new Windows 7 Ultimate PC can attach to my work laptop and pull files from it, but my work laptop cannot find my Windows 7 machine on the network. Is there some sort of limitation going on here? Before I got the new Windows 7 machine, my old XP Pro PC could pull files from the XP Pro laptop, but not vice versa. The common thread seems to be that the work laptop cannot see other PCs, Windows 7 or not. Could it be because that PC is on a work domain? When I pull files from the work PC, I am prompted for domain credentials, which I provide.

    Read the article

  • Puzzled about PHP file permission and shared webhosting - what are some explanations?

    - by extrakun
    I have an issue with different web hosts, in particular with upload scripts that can only upload to a folder if it has 777 permissions (which is risky). On the test server (on a different web host), 755 works well. On another web host, log files generated by PHP file functions sometimes cannot be written to, yet other files are mysteriously unaffected (for instance, the log files for the entire week are 655 and they work fine, but today's log file doesn't work unless it is set to 777). I am more of an application developer than a server backend expert, so these behaviours puzzle me to no end. Why are they happening? What can be done?
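    Behaviour like this usually comes down to which user PHP runs as on each host: with mod_php it runs as the web server user, while suPHP/FastCGI setups run scripts as the account owner, so the same permission bits behave differently. A quick, hedged check on each server is to compare the web server's user with the owner of the problem files:
      # which user are the web server / PHP processes running as?
      ps aux | egrep 'httpd|apache|php' | grep -v grep
      # compare with the numeric owner of the files (path is a placeholder)
      ls -ln /path/to/logs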

    Read the article

  • Hyperlink to doc file slow opening

    - by mserioli
    I have two Excel files containing links to .doc and .pdf files. Both the Excel files and the linked files are on a network shared folder. The first Excel file is an .xls, the second an .xlsm. While opening a link to a .pdf file is very fast (the file opens in a few seconds), it takes a long time to open the .doc files (about 40 seconds). I have searched the internet but have found no solution so far. I have this problem with both Excel 2007 and 2010. Does anyone know how to solve this? Thanks a lot, Marco

    Read the article

  • How to get filename of job in cups?

    - by Grook
    I have printed a couple of files and lpstat shows that they are completed, but the output is something like this:
      # lpstat -W completed -l
      Canon-1      root      1086464   Sat May 21 22:47:03 2011
          Alerts: job-canceled-by-user
          queued for Canon
      Canon-2      root      337920    Mon May 23 20:18:02 2011
          Alerts: job-canceled-by-user
          queued for Canon
      CanonWin-3   root      17408     Mon May 23 20:29:40 2011
          Alerts: job-completed-successfully
          queued for CanonWin
    How can I get the names of the files that have been printed? P.S. Is there any bash script that would let me get the names of all files that have been printed?
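    One place the names survive is the CUPS spool: as long as PreserveJobHistory is on, each completed job keeps a c* control file under /var/spool/cups, and the job name is among the IPP attributes stored in it. A rough, hedged sketch that just dumps the readable strings from each control file:
      #!/bin/bash
      # list each job control file with the text fragments it contains (job name included)
      for f in /var/spool/cups/c*; do
          printf '%s: ' "$f"
          strings "$f" | tr '\n' ' '
          echo
      done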

    Read the article

  • A desktop Wiki editor/viewer: is there anything out there?

    - by MrBertie
    I'm a big user of wikis, mainly Dokuwiki; I really like the clarity and ease of use of simple text files. However, all good wikis seem to require a web server of some kind. Has anyone come across a good desktop wiki editor/viewer that works with plain-text files and lets me work with wiki text files just like any other document file type? (Note: not a desktop wiki running inside a local web server.) Before you rush to suggest something (I hope!), know that I have done months of research on this and have tried Wixi, Wikidpad, zulupad... Any ideas, anyone?

    Read the article

  • rsync remote to local automatic backup

    - by Mark Molina
    Because all my work is stored on a remote server, I would like to back up my server automatically, monthly and weekly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I did my first backup manually by using this command in a terminal:
      sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
    I am then prompted for that user's password and Bob's your uncle. This backs up the necessary files from my remote server to my local device, but does somebody know how I can automate this? Like running this script automatically every Sunday? EDIT: I forgot to mention that I let DirectAdmin back up the files I need and then copy those files from the remote server to a local server.
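    To run unattended, the password prompt has to go first, which is what an SSH key is for; a sketch (key path, host and schedule are placeholders) that then runs the same rsync line from cron every Sunday at 03:00:
      # one-time setup: create a key and install it on the remote server
      ssh-keygen -t rsa -f ~/.ssh/backup_key -N ""
      ssh-copy-id -i ~/.ssh/backup_key.pub USERNAME@IPADDRESS

      # crontab entry (crontab -e): every Sunday at 03:00
      0 3 * * 0 rsync -chavzP -e "ssh -i $HOME/.ssh/backup_key" USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP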

    Read the article
