Search Results

Search found 39200 results on 1568 pages for 'zip files'.


  • Unable to browse some pdfs and docs.

    - by JamesEggers
    I have a web site that uses Microsoft Indexing Service to index and query a directory that holds various documents of type pdf, rtf, mht, and doc. The indexing and querying works well (for the most part); however, some files will load while others will not. This is a Windows Server 2003 box running the site using IIS 6. The indexed directory is a subdirectory of the site's root directory (i.e. http://my.domain.com/files/). The file paths are accurate in the URL; however, I can only access some of the files of each file type. The files that I cannot access give a 404 File Not Found. I am able to open all files via Windows Explorer; however, attempting to open them via a browser over HTTP is hit and miss. Has anyone experienced this issue and know how to resolve it? Does anyone have any idea why I can access some files but not others? Does anyone have any recommendations on what to look into (e.g. does the file owner matter)?
    EDIT: Here are the request and response headers for a bad file:
      GET /files/file1.pdf HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */* Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Host: my.domain.com
      HTTP/1.1 404 Not Found Content-Length: 1635 Content-Type: text/html Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Mon, 01 Jun 2009 15:38:54 GMT [typical 404 page markup excluded]
    And here are the request/response headers for the good file:
      GET /files/file2.pdf HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */* Accept-Language: en-us User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Proxy-Connection: Keep-Alive Host: my.domain.com
      HTTP/1.1 200 OK Content-Length: 352464 Content-Type: application/pdf Last-Modified: Tue, 13 Jan 2009 15:27:35 GMT Accept-Ranges: bytes ETag: "74ccc5759375c91:2a47" Server: Microsoft-IIS/6.0 X-Powered-By: ASP.NET Date: Mon, 01 Jun 2009 15:50:33 GMT

    Read the article

  • Using the FZip lib in an Adobe AIR app

    - by Rafael Oliveira
    I'm currently working on a project for Adobe AIR (1.5.3) and I need to unzip a file and copy some of its contents to another file. I saw people talking about the FZip library (http://codeazur.com.br/lab/fzip). The problem is that I don't know how to "import" or use this library with JavaScript and Adobe AIR, since JavaScript doesn't have an import directive. How can I manage to do that?

    Read the article

  • Renaming ICSharpCode.SharpZipLib.dll

    - by John B.
    Hi, I am having a problem renaming the ICSharpCode.SharpZipLib.dll file to anything else; I am trying to shorten the file name. I reference the assembly in the project, but when the program reaches the statements where I use the library, it throws an error that it could not find the assembly or file 'ICSharpCode.SharpZipLib'. When I change the file name back to ICSharpCode.SharpZipLib.dll the application works normally. So, is there any way to change the file name? Also, am I allowed to change it without violating the license (I am going to use it in a commercial application)? Thanks.

    Read the article

  • TurboPower Abbrevia in C++Builder2009

    - by Fintch
    I want to install TurboPower Abbrevia 3.05 from http://sourceforge.net/projects/tpabbrevia/ but it's not working. :( The documentation says: "4. Open & compile the runtime package specific to the IDE being used (e.g. B305vr2007.dpk for Delphi 2007)." I start C++Builder 2009, choose "Open Project..", select "B305vr2009.dpk" and click "Open", but nothing happens. What is my mistake?

    Read the article

  • Ubuntu's file-roller problem 5010. "Cannot open display"

    - by Denis
    Hello everybody! I use "file-roller" to manage archives on my Ubuntu box, but it fails when run without a user interface (just a terminal). The error is shown below. ** (file-roller:5453): CRITICAL **: Failed to parse arguments: Cannot open display: How can I prevent file-roller from using a GUI, or can you recommend any other archive manager I can use from the terminal? It would be perfect if this manager could handle as many formats as possible.
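
    A couple of purely terminal-driven alternatives may help here, since file-roller is a GTK+ front end and needs a graphical display even when scripted. This is just a sketch; the package names (p7zip-full, atool) and archive names are assumptions:

      7z x archive.zip          # extract with 7-Zip's CLI (handles many formats)
      7z l archive.zip          # list an archive's contents
      aunpack backup.tar.gz     # atool wraps most archivers behind one consistent interface

    Both are typically available from the Ubuntu repositories.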

    Read the article

  • Logfiles go blank after logrotate rotates them.

    - by Hilt86
    I have an Ubuntu 8.04 LTS server that runs OpenVPN. The OpenVPN server writes to a standard logfile under /var/log, and prior to a month ago logrotate would automatically rotate the files and compress them. The files are still being rotated, however the new logfile (ovpn.log) is empty. Restarting the OpenVPN daemon fixes the issue (i.e. OpenVPN writes status events to the file), but after about 10 days the file is rotated again and OpenVPN can't write to the logfile anymore. This is also strange because logrotate is set to rotate every 6 months. OpenVPN runs as nobody and the logfiles are owned by root and admin, which is strange because it should either work at all times or not work at all if the permissions are the cause, unless OpenVPN runs as root temporarily and then drops down to nobody after initializing?

    Read the article

  • IIS6 Indexing Service indexing asp.net codebehind (.aspx.cs) files

    - by Patrick F
    I've set up a few catalogs on a Windows Server 2003 IIS6 install, each tracking files within a website. In the Properties - Generation dialog for each catalog, 'Index files with unknown extensions' is turned OFF. 'Inherit above settings from Service' in that dialog is also turned off. However, the index is returning results for .cs files, along with abstracts for those files. I've emptied and restarted the catalogs, but the files are still appearing. My understanding was that the Indexing Service would by default only index HTML, ASCII, and Office documents. What's going on?

    Read the article

  • Measuring accesses to files - apache

    - by George
    So, I run a website that among other things serves some files (usually PDFs). All of these are stored under a specific directory on the server: /var/www/vhosts/mysite.com/httpdocs/site/pdf_files Due to storage issues on my VPS I am thinking of getting some S3 or other cloud storage and mounting it as a drive using S3QL/S3FS. Then I will be able to have the pdf_files folder symlinked to the cloud folder and serve those files from there, without any changes to the web app (is that a good plan?). Now, before doing that, to estimate costs, I need to measure how many file accesses people make: how many times those PDF files are downloaded each month, for example. Basically, how many times those PDF files are accessed through the webserver. I'd like to do it at the Apache level. What's the best way this can be done? Measuring the bandwidth used by files in that specific folder would also be nice, but estimating the GET requests I'll be making to Amazon is more important.
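
    One way to get these numbers without touching the web app is to pull them straight out of the Apache access log. A rough sketch (the log location and the public URL prefix /site/pdf_files/ are assumptions; adjust both to your vhost):

      # requests per PDF in this log
      grep -o '/site/pdf_files/[^ ]*\.pdf' /var/log/apache2/access.log | sort | uniq -c | sort -rn
      # approximate bandwidth: sum the response-size field (field 10 in the combined log format)
      grep '/site/pdf_files/.*\.pdf' /var/log/apache2/access.log | awk '{ sum += $10 } END { print sum / 1024 / 1024, "MB" }'

    Run against a month's worth of (rotated) logs, the first command approximates the number of GET requests that would go to Amazon.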

    Read the article

  • .htaccess to deny access to most xml files

    - by CEich
    I recently had a Joomla site hacked, so I'm trying to harden the site a bit. There's a section in the recommended .htaccess that restricts outside access to the XML files that come with extensions. However, it also keeps my sitemap.xml file from being accessed. How do I allow a certain file while keeping the rest blocked? Here's the default code: <Files ~ "\.xml$"> Order allow,deny Deny from all Satisfy all </Files> and my modification that caused a 500 error: <Files ~ "(?!sitemap)\.xml$"> Order allow,deny Deny from all Satisfy all </Files>
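
    One commonly suggested workaround (an untested sketch) is to keep the blanket deny and add a more specific <Files> section for sitemap.xml after it; sections that appear later in the .htaccess are merged on top of earlier ones, so the sitemap rule wins for that one file:

      <Files ~ "\.xml$">
        Order allow,deny
        Deny from all
        Satisfy all
      </Files>
      <Files sitemap.xml>
        Order deny,allow
        Allow from all
      </Files>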

    Read the article

  • Problem modifying read-only files on Samba NAS

    - by Felix Dombek
    Hi, I have files on a Samba server on the local company network and access them from a Windows Vista machine. Usually, if I want to delete a directory containing write-protected (read-only) files, Windows asks "This file is read-only, are you sure?". However, when I do this with a directory on the server, Windows just tells me that I need permissions. The workaround is to remove the read-only flag from the directory and all contained files and then delete. However, I have a TortoiseSVN-versioned directory on the server, and the .svn dirs contain read-only files. I need to remove the read-only flags from the directory before every commit, or else it fails. This is quite distressing and shouldn't be so. Does someone know how to attack this problem? (If someone knows how to tell TortoiseSVN not to make its files read-only, that would probably be OK as well.) Thanks!

    Read the article

  • Nginx, logrotate and empty files

    - by user37887
    Hi. I have a problem with nginx/logrotate. Nginx is logging access to 2 files (main and data). I have the following crontab setting: 0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx And the file "logrotate-nginx" has the following content: /tmp/data.log { rotate 90 daily missingok notifempty size 1 sharedscripts postrotate [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid` MORE THINGS endscript } /tmp/main.log { rotate 90 daily missingok notifempty size 1 sharedscripts postrotate [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid` MORE THINGS endscript } The rotation is performed on both files, but nginx then stops logging to them. Both new files are created, but they stay empty. Any ideas why nginx stops logging to both files?

    Read the article

  • Cannot read/access Apache2 access logs

    - by webworm
    I have been asked to take a look at some access logs for an Apache2 web server running on Ubuntu. I have been told by the administrator of the machine that my login has "admin" access, yet I cannot seem to copy the access logs from Apache2 to my local machine via FTP for analysis. I figure one of two things is happening: either I don't really have full admin access, or some other process (perhaps Apache2) has control of the log files and won't let me copy them. How can I tell if I truly have admin access? What type of access do I need to request? Root access? Something else? Should I be able to copy these log files with admin access?
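
    A few quick checks, run over SSH on the server, can show what access you actually have. A sketch (the log path is the Ubuntu default and may differ on your box):

      id                        # your groups: 'sudo'/'admin' implies sudo rights, 'adm' usually grants read access to logs
      sudo -l                   # what, if anything, sudo will let you run
      ls -l /var/log/apache2/   # owner, group and permissions on the access logs themselves

    If your user is not in a group that can read the files, you will need either sudo/root access or membership of the group that owns the logs (often adm); the files being in use by Apache does not normally prevent copying them.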

    Read the article

  • Is there a diff utility that allows you to exclude columns?

    - by user39160
    For example, I have a text file where each line is a long string. I want to exclude 2 "segments" of this string, say columns 1-7 and 20-22. So the bottom 2 lines below would be a match: 123456789012345678901234567890 ------------------------------ xxxxxxxAAAAAAAAAAAAxxxBBBBBBBB yyyyyyyAAAAAAAAAAAAyyyBBBBBBBB I know WinMerge has an "IgnoreColumns" plugin, but I have never got this working. In this example I would rename it IgnoreColumns_1-7, 20-22.dll, select it in the plugins menu, and choose "Pre-Differ", but it has never worked. I am going to be comparing huge files that I don't want to modify. I'm not opposed to stream editing them in the comparison with sed or something like that, but I would prefer not to modify the actual files. I have not chosen to feed sed's output to diff yet just because I was hoping for a more visual view of the data.
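
    If a Unix-like environment (or Cygwin) is an option, GNU cut can drop those column ranges on the fly, and bash process substitution feeds the trimmed streams to diff without modifying the files on disk. A sketch with hypothetical file names:

      diff <(cut --complement -c1-7,20-22 file1.txt) <(cut --complement -c1-7,20-22 file2.txt)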

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure due to us not knowing what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync only over ssh to known MAC address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to potentially push up the files. Would I need to manage hundreds of access keys and have each server customized to include this (doable, but key management becomes nightmarish)? Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system.) Having a bucket per remote machine does not seem feasible due to bucket limits. I don't think I want to use a single common key, since if one machine is breached then, potentially, a malicious hacker could get access to the filestore key and start deleting files for all clients, correct? I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here, perhaps it can be summarised thus: In a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk. Pipedream or feasible? TIA, Aitch
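
    For illustration only, the "push files home" step itself is a one-liner once a box has some credentials; everything here (bucket name, paths, per-host prefix) is a made-up example, and it does not by itself solve the key-management question:

      s3cmd put /var/data/sensors/*.csv "s3://example-sensor-bucket/$(hostname)/$(date +%Y-%m-%d)/"

    Keeping each machine under its own prefix at least confines what a compromised box can overwrite, provided its credentials are limited to that prefix.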

    Read the article

  • DFS keeps constantly replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently it has gotten worse (for no apparent reason) to the point that it's becoming harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication always propagates from the master to the four other servers. The replicated directory has about 900,000 files. In recent weeks, every time we check DFS, the DFS backlogs contain hundreds of thousands of files. For instance, right now the master server is replicating about 700,000 files to three of the four servers while the fourth one is fine. Sometimes only one is off, sometimes two, and this time three. Also, it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files. The biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?

    Read the article

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/

    Read the article

  • Merging Two KML Files to Display Them with Different Marker Icons on Google Maps

    - by Maxim Z.
    Let's say that I have two spreadsheets with addresses. I uploaded these spreadsheets into Google Fusion Tables, geocoded the addresses, and exported the results as KML files. Now, I want to take these two KML files and merge them, while maintaining the location data and using it to map the points with Google Maps. Well, I found a way to easily merge the KML files: import both of them into a "My Maps" map with Google Maps! However, my problem is this: when I do that, all of the locations in my data have the same marker icon on the map. From past experience, I know that these markers can be somehow defined inside the KML files. Is it possible to combine these two KML files while giving one's points one marker icon and the other's points another marker icon? Just in case my question is confusing, what I mean is giving the first set of points blue markers, for example, and the other set of points red markers, so that they can be overlaid.

    Read the article

  • How to rotate Tomcat 6 logs on Windows every night

    - by Danilo Brambilla
    Hi all, our Tomcat 6 is running on a Windows Server 2003 server, producing logs in the Program Files\Apache Software Foundation\Tomcat 6.0\logs folder. Only catalina.YYYY-MM-DD.log rotates every night. The admin, host-manager, jakarta, localhost, manager, stderr, and stdout logs do not rotate and are dated at the last server restart. These files are mostly empty and always locked. How can I set Tomcat to rotate all these logs every night (if possible without a server/service restart)? Thank you in advance for your help.

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system. The larger the maximum UID, the larger this file is. Thankfully it's a sparse file so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk). On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in lastlog shows up as 280GB. Its real size is still small, thankfully. This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing: dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive so the ext3 partition shouldn't be a factor. This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL5.4 systems it is no go.

    Read the article

  • Open source command line tools for indexing a large number of text files

    - by ergosys
    I'm looking for any open source command line tool or tools which will allow me to index and search a large number of plain text files. Approximate search would be a plus. The tool only needs to print the files that match, although some match context would be useful. A GUI tool isn't useful for my application, nor is anything that searches files one by one (grep for example). I'm basically targeting unix platforms (osx, linux, bsd). EDIT: I'm not interested in any sort of tool that is system-wide, or needs to run in the background. Basically, I want to build an index for a directory tree full of text files and then later be able to search against it. Preferably the index is one or a few files that I can specify the location of. Any ideas?

    Read the article

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads via the web are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving - with no effect at all. No I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Does anybody have an idea what I can check?

    Read the article

  • How to check for duplicate files?

    - by miorel
    I have an external hard drive on which I have backed up files several times. Some files were modified between backups, others were not. Some may have been renamed. Now I'm running out of space, and I'd like to clean up duplicate files. My idea was to md5sum every file on the drive, then look for duplicates, and diff the relevant files (just in case, haha). Is this the best way to do this? What are some other methods of checking for duplicate files?
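
    A minimal command-line sketch of the md5sum idea (GNU coreutils assumed; the mount path is a placeholder):

      find /media/backup -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate

    Sorting on the checksum and asking uniq to compare only the first 32 characters (the md5 digest) prints each group of identical files separated by a blank line, so renamed copies are caught too. Dedicated tools such as fdupes automate the same hash-then-compare approach.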

    Read the article

  • Recursively move files in sub-dirs to new sub-dirs of same name

    - by Gabriel
    I have a batch of files all ending with the same string, ie: *_ext.dat located in several sub-dirs along with several other files, in a given main dir. This is the structure: /main_dir/subdir1/file11_ext.dat /main_dir/subdir1/file12_ext.dat /main_dir/subdir1/file13_ext.dat /main_dir/subdir1/file14_other.dat /main_dir/subdir1/file15_other.dat /main_dir/subdir2/file21_ext.dat /main_dir/subdir2/file22_ext.dat /main_dir/subdir2/file23_ext.dat /main_dir/subdir2/file24_other.dat /main_dir/subdir2/file25_other.dat /main_dir/subdir3/file31_ext.dat /main_dir/subdir3/file32_ext.dat /main_dir/subdir3/file33_ext.dat /main_dir/subdir3/file34_other.dat /main_dir/subdir3/file35_other.dat I need to recursively move only the files ending in *_ext.dat into a new main dir, new_dir, respecting the sub-dir structure so the files will end up in an equivalent dir structure like this: /new_dir/subdir1/file11_ext.dat /new_dir/subdir1/file12_ext.dat /new_dir/subdir1/file13_ext.dat /new_dir/subdir2/file21_ext.dat /new_dir/subdir2/file22_ext.dat /new_dir/subdir2/file23_ext.dat /new_dir/subdir3/file31_ext.dat /new_dir/subdir3/file32_ext.dat /new_dir/subdir3/file33_ext.dat Because of this the command should also create those sub-dirs with their corresponding names. I know that with a line like this one: find . -name "*_ext.dat" -print0 | xargs -0 rm -rf I can delete all those files, but I don't know how to modify it to do what I need (or if it is even possible).
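
    One way to do this, assuming rsync is available, is to let its filter rules pick out just the *_ext.dat files, recreate the needed sub-dirs on the destination, and delete the transferred source files. A sketch (test on a copy first):

      rsync -am --include='*/' --include='*_ext.dat' --exclude='*' --remove-source-files /main_dir/ /new_dir/

    Here --include='*/' lets rsync descend into every directory, -m prunes sub-dirs that would end up empty in new_dir, and --remove-source-files turns the copy into a move for the matched files while leaving the other files in place.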

    Read the article
