Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • Create "duplicate copy" of github repository

    - by user1483934
    I see answers to similar questions, and I may just not be familiar enough with Git and GitHub terminology to know whether they apply to my question. What I need to do is clone an existing GitHub remote repository (a private repo under another person's username that I have contributor access to) and create a new private remote repo under my account. The owner of the existing repo is going to make significant alterations to it, delete it, and re-push; before they do, they want me to clone it and create a duplicate so we can continue working from the repo under my user. I want to preserve the commit history if possible. I've cloned locally, but I can't seem to figure out how to push it to a new remote whose origin isn't the original user's repository.
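
    A minimal sketch of one common approach, a bare clone followed by a mirror push (the repository URLs below are placeholders):

        # Bare-clone the original private repo (full history, all refs) ...
        git clone --bare git@github.com:original-user/their-repo.git
        cd their-repo.git

        # ... then mirror-push everything to a new, empty repo under your account.
        git push --mirror git@github.com:your-user/new-repo.git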

    Read the article

  • Duplicate file descriptor after popen

    - by alaamh
    I am using popen to execute a command under Linux, and then four processes will use the same output. I am trying to duplicate the file descriptor so that I can pass one to each process. Here is my code:

        FILE *file_source = popen(source_command, "r");
        int fd = fileno(file_source);
        fdatasync(fd);
        int dest_fd[4], y, total = 4;
        for (y = 0; y < total; y++) {
            dest_fd[y] = dup(fd);
        }

    If total is set to 1 it works fine; after changing total to 4 it no longer works.
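
    A self-contained version of the idea, as a compilable sketch (the command is a placeholder; note that descriptors created with dup() share one file offset with the original, so all four duplicates read from the same stream position):

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            const char *source_command = "ls -l";      /* placeholder command */
            FILE *file_source = popen(source_command, "r");
            if (file_source == NULL)
                return 1;

            int fd = fileno(file_source);
            int dest_fd[4];
            for (int y = 0; y < 4; y++)
                dest_fd[y] = dup(fd);    /* each duplicate shares fd's offset */

            /* ... hand dest_fd[0..3] to the consumers ... */
            pclose(file_source);
            return 0;
        }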

    Read the article

  • function to remove duplicate characters in a string

    - by Codenotguru
    The following code tries to remove any duplicate characters in a string. I am not sure if the code is right. Can anybody help me with the working of the code, i.e. what actually happens when there is a match in characters?

        public static void removeDuplicates(char[] str) {
            if (str == null) return;
            int len = str.length;
            if (len < 2) return;
            int tail = 1;
            for (int i = 1; i < len; ++i) {
                int j;
                for (j = 0; j < tail; ++j) {
                    if (str[i] == str[j]) break;
                }
                if (j == tail) {
                    str[tail] = str[i];
                    ++tail;
                }
            }
            str[tail] = 0;
        }
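
    For readers tracing the loop, here is the inner comparison again with comments added (an annotated copy of the code above, not a verdict on its correctness):

        for (j = 0; j < tail; ++j) {
            if (str[i] == str[j]) break;   // duplicate found: loop exits with j < tail
        }
        if (j == tail) {                   // no match in the unique prefix str[0..tail-1]
            str[tail] = str[i];            // keep str[i] and grow the prefix
            ++tail;
        }
        // on a match, str[i] is simply skipped and tail stays put

    One detail worth checking: when the input contains no duplicates at all, tail ends up equal to len, so the final str[tail] = 0 indexes one past the end of the array.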

    Read the article

  • Do duplicate IDs screw up jQuery selectors?

    - by Matt
    If I had two divs, both with id="myDiv", would $("#myDiv").fadeOut(); fade both divs out? Or would it fade only the first or the second? Or none at all? How do I change which one it fades out? Note: I know duplicate IDs are against standards, but I'm using the Fancybox modal popup, and it duplicates specified content on your page for the content of the popup. If anyone knows a way around this (maybe I'm using Fancybox wrong), please let me know.
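
    A small sketch of the behavior in question (jQuery resolves a bare ID selector through document.getElementById, which returns only the first matching element):

        // With two <div id="myDiv"> elements on the page:
        $("#myDiv").fadeOut();            // fades only the first div

        // An attribute selector scans every element, so it matches both
        // (one possible workaround, not necessarily the cleanest):
        $("div[id='myDiv']").fadeOut();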

    Read the article

  • Duplicate array but maintain pointer links

    - by St. John Johnson
    Suppose I have an array of nodes (objects). I need to create a duplicate of this array that I can modify without affecting the source array, but changing the nodes should affect the source nodes: basically, maintaining pointers to the objects instead of duplicating their values.

        // node(x, y)
        $array[0] = new node(15, 10);
        $array[1] = new node(30, -10);
        $array[2] = new node(-2, 49);

        // Some sort of copy system
        $array2 = $array;

        // Just to show modification to the array doesn't affect the source array
        array_pop($array2);
        if (count($array) == count($array2)) echo "Fail";

        // Changing the node value should affect the source array
        $array2[0]->x = 30;
        if ($array2[0]->x == $array[0]->x) echo "Goal";

    What would be the best way to do this?
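
    For reference, a runnable sketch of these semantics: in PHP 5 and later, assigning an array copies the array structure while its elements remain handles to the same objects, which matches the behavior asked for (the node class here is assumed from the question).

        <?php
        class node {
            public $x, $y;
            public function __construct($x, $y) { $this->x = $x; $this->y = $y; }
        }

        $array = array(new node(15, 10), new node(30, -10), new node(-2, 49));

        $array2 = $array;          // copies the array, not the node objects

        array_pop($array2);        // shrinks only $array2
        $array2[0]->x = 30;        // visible through $array as well

        var_dump(count($array) != count($array2));  // bool(true)
        var_dump($array[0]->x == 30);               // bool(true)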

    Read the article

  • Insert statement with CASE to avoid duplicate record insertion

    - by rama
    I have written the stored procedure below to pre-check for duplicate records before inserting into the table, but it does not allow me to write an INSERT statement inside the CASE. How can I write the stored procedure so that it first checks for the value @OrderName in the table and then, if it is not present, inserts the row into the database?

        CREATE PROCEDURE [Test Procedure]
        (
            @section varchar(70),
            @mark varchar(70),
            @qty decimal(18,2),
            @Weight decimal(18,2),
            @dateupdateremark int,
            @OrderName varchar(70)
        )
        AS
        BEGIN
            SET NOCOUNT ON;
            select case (@OrderName)
                when (select OrderName from dbo.tbl_insertxmldetails
                      where (@OrderName) not in (select OrderName from tbl_insertxmldetails))
                then
                    insert into dbo.tbl_insertxmldetails
                        (Section, Mark, QTY, Weight, Dateupdateremark, OrderName, SystemDate)
                    values
                        (@Section, @Mark, @QTY, @Weight, @Dateupdateremark, @OrderName, GETDATE())
                else 'File already Exists'
            end
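
    For reference, a sketch of the shape T-SQL does allow for this pre-check (CASE is an expression and cannot contain statements, so the test has to be an IF; table and column names are taken from the question):

        -- Check-then-insert with IF NOT EXISTS, inside the procedure body.
        IF NOT EXISTS (SELECT 1 FROM dbo.tbl_insertxmldetails WHERE OrderName = @OrderName)
        BEGIN
            INSERT INTO dbo.tbl_insertxmldetails
                (Section, Mark, QTY, Weight, Dateupdateremark, OrderName, SystemDate)
            VALUES
                (@Section, @Mark, @QTY, @Weight, @Dateupdateremark, @OrderName, GETDATE());
        END
        ELSE
            SELECT 'File already Exists';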

    Read the article

  • javascript: what are immediate functions used for [duplicate]

    - by tkoomzaaskz
    This question already has an answer here: "Why using self executing function in JavaScript? [duplicate]" (4 answers). I've been programming in JS for some time, but I have never come across a need for immediate functions, for example:

        (function(){
            console.log('hello, I am an immediate function');
        }())

    What would be the difference if I just wrote:

        console.log('hello, I am an immediate function');

    I don't have any access to this function anyway (it is not assigned anywhere). I think (but I'm not sure) that I can implement everything without immediate functions, so why do people use them?
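
    One widely cited use of the pattern, as a sketch: before ES6 block scoping, an immediately invoked function was the usual way to create a private scope, so helper variables do not leak into the global namespace.

        // Without the wrapper, `counter` would become a global variable.
        var increment = (function () {
            var counter = 0;              // private to this scope
            return function () {          // only the returned closure escapes
                return ++counter;
            };
        }());

        console.log(increment()); // 1
        console.log(increment()); // 2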

    Read the article

  • Socket.io Duplicate clients on a namespace

    - by Servernumber
    Hi, I'm trying to use dynamic namespaces, creating them on demand. It's working, except that I get duplicate (or more) clients for some reason. Server side:

        io.of("/" + group).on("connection", function (socket_group) {
            socket_group.groupId = group;
            socket_group.on("infos", function () {
                console.log("on the group !");
            });
            socket_group.on('list', function () {
                Urls.find({'group': '...'}).sort({'_id': -1}).limit(10).exec(function (err, data) {
                    socket_group.emit('links', data);
                });
            });
            [...]
        });

    Client side:

        socket.emit('list', { ... });

    On the client side only one command is sent, but the server always responds with two or more responses. Every time I close and reopen my app, the number of responses increases. Thanks if you can figure it out.
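
    One pattern sometimes used to keep on-demand namespaces from stacking handlers, in case the registration code above runs more than once per group (a sketch; the function and variable names are placeholders):

        // Register the connection handler only once per namespace, so repeated
        // calls to the setup code do not attach duplicate listeners.
        var initialized = {};

        function getGroupNamespace(io, group) {
            var nsp = io.of("/" + group);
            if (!initialized[group]) {
                initialized[group] = true;
                nsp.on("connection", function (socket) {
                    /* ... handlers as above ... */
                });
            }
            return nsp;
        }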

    Read the article

  • C#: Using regular expression (Regex) to duplicate a specific character in a string

    - by user3703944
    Does anyone know how to use a regex to duplicate a specific character in a string? I have a path that is entered like this: C:/Example/example. I would like to use a regex (or any other method) to display it like this: C://Example//example. Is it possible? This is where I'm getting the file path:

        private void btnSearchImage_Click_1(object sender, EventArgs e)
        {
            OpenFileDialog ofd = new OpenFileDialog();
            ofd.Filter = "Image Files(*.jpg; *.jpeg; *.gif; *.bmp)|*.jpg; *.jpeg; *.gif; *.bmp";
            if (ofd.ShowDialog() == System.Windows.Forms.DialogResult.OK)
            {
                string filenName = ofd.FileName;
                pictureBox1.Image = new Bitmap(filenName);
                string path = filenName;
                txtimgPath.Text = path;
            }
        }

    Thanks
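
    For the transformation itself, a small sketch (both a plain replacement and a regex substitution are shown; the input path is the one from the question):

        using System.Text.RegularExpressions;

        string path = "C:/Example/example";

        // Simple approach: plain string replacement.
        string doubledSimple = path.Replace("/", "//");

        // Regex approach: "$0" re-inserts the whole match, so each "/"
        // becomes "//".
        string doubledRegex = Regex.Replace(path, "/", "$0$0");
        // both yield "C://Example//example"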

    Read the article

  • List boxes (A, B) with some duplicate values

    - by Dev
    I have two list boxes (A, B) with some duplicate values, and I can send values from A to B or from B to A using a Send button; I also have a Save button. If I click Save without having sent anything, I show a message like "No changes or Done". But if I send one item from A to B and then send that same item back from B to A, that also means no changes were made, and I want to show the same "No changes or Done" message. I am unable to determine that state. Can anyone give me code or tips for finding the default status of the list boxes in JavaScript? Thanks
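
    One way to detect "no net change" is to snapshot each list's option values up front and compare on save (a sketch; the element IDs "listA" and "listB" are placeholders):

        // Capture a <select>'s option values as a single comparable string.
        function snapshot(id) {
            return Array.prototype.map.call(
                document.getElementById(id).options,
                function (o) { return o.value; }
            ).join(",");
        }

        var initialA = snapshot("listA");
        var initialB = snapshot("listB");

        function hasChanges() {
            return snapshot("listA") !== initialA || snapshot("listB") !== initialB;
        }
        // on save: if (!hasChanges()) alert("No changes or Done");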

    Read the article

  • Traversing ORM relationships returns duplicate results

    - by NKing253
    I have 4 tables -- store, catalog_galleries, catalog_images, and catalog_financials. When I traverse the relationship from store --> catalog_galleries --> catalog_images, in other words store.getCatalogGallery().getCatalogImages(), I get duplicate records. Does anyone know what could be the cause of this? Any suggestions on where to look? The store table has a OneToOne relationship with catalog_galleries, which in turn has a OneToMany relationship with catalog_images and an eager fetch type. The store table also has a OneToMany relationship with catalog_financials.
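
    One cause often cited for this pattern in JPA/Hibernate setups (a hypothesis, not a diagnosis): an eager join fetch of a collection multiplies the parent rows, and the duplicates can be suppressed with a DISTINCT query or by mapping the collection as a Set rather than a List. A sketch of the former, with entity and field names guessed from the table names above:

        // JPA sketch: DISTINCT collapses the row duplication that a join
        // fetch of a collection can introduce.
        List<CatalogImage> images = entityManager.createQuery(
                "SELECT DISTINCT img FROM CatalogGallery g JOIN g.catalogImages img "
              + "WHERE g.store.id = :storeId", CatalogImage.class)
            .setParameter("storeId", storeId)
            .getResultList();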

    Read the article

  • How To View and Write To System Log Files on Ubuntu

    - by Chris Hoffman
    Linux logs a large number of events to the disk, where they’re mostly stored in the /var/log directory in plain text. Most log entries go through the system logging daemon, syslogd, and are written to the system log. Ubuntu includes a number of ways of viewing these logs, either graphically or from the command line. You can also write your own log messages to the system log, which is particularly useful in scripts.
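
    For example, a typical invocation of the standard logger utility, which writes a tagged message through syslogd:

        # Write a tagged message to the system log; check /var/log/syslog afterwards.
        logger -t mybackup "nightly backup finished"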

    Read the article

  • Unable to browse some PDFs and docs

    - by JamesEggers
    I have a web site that uses Microsoft Indexing Service to index and query a directory that holds various documents of type pdf, rtf, mht, and doc. The indexing and querying work well (for the most part); however, some files will load while others will not. This is a Windows Server 2003 box running the site on IIS 6. The indexed directory is a subdirectory off of the site's root directory (i.e. http://my.domain.com/files/). The file paths are accurate in the URL; however, I can only access some of the files of each file type. The files that I cannot access give a 404 File Not Found. I am able to open all files via Windows Explorer; however, attempting to open them via a browser over HTTP is hit and miss. Has anyone experienced this issue and knows how to resolve it? Does anyone have any idea why I can access some files but not others? Does anyone have any recommendations on what to look into (e.g. does the file's owner matter, or something like that)? EDIT: Here are the request and response headers for a bad file:

        GET /files/file1.pdf HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: my.domain.com

        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Mon, 01 Jun 2009 15:38:54 GMT

        [typical 404 page markup excluded]

    Here are the request and response headers for the good file:

        GET /files/file2.pdf HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: my.domain.com

        HTTP/1.1 200 OK
        Content-Length: 352464
        Content-Type: application/pdf
        Last-Modified: Tue, 13 Jan 2009 15:27:35 GMT
        Accept-Ranges: bytes
        ETag: "74ccc5759375c91:2a47"
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Mon, 01 Jun 2009 15:50:33 GMT

    Read the article

  • Logfiles go blank after logrotate rotates them.

    - by Hilt86
    I have an Ubuntu 8.04 LTS server that runs OpenVPN. The OpenVPN server writes to a standard log file under /var/log, and prior to a month ago logrotate would automatically rotate the files and compress them. The files are still being rotated; however, the new log file (ovpn.log) is empty. Restarting the OpenVPN daemon fixes the issue (i.e. OpenVPN writes status events to the file), but after about 10 days the file is rotated again and OpenVPN can't write to the log file anymore. This is also strange because logrotate is set to rotate every 6 months. OpenVPN runs as nobody, and the log files are owned by root and admin, which is strange because it should either work at all times or not work at all if permissions were the cause, unless OpenVPN runs as root temporarily and then drops down to nobody after initializing?
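
    One commonly used remedy when a daemon keeps its log file open across a rotation (a sketch of a logrotate stanza using copytruncate, which copies the file and truncates it in place instead of renaming it; the path and schedule mirror the question):

        /var/log/ovpn.log {
            monthly
            rotate 6
            compress
            missingok
            notifempty
            copytruncate   # truncate in place so the daemon's open fd stays valid
        }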

    Read the article

  • IIS6 Indexing Service indexing asp.net codebehind (.aspx.cs) files

    - by Patrick F
    I've set up a few catalogs on a Windows Server 2003 IIS 6 install, each tracking files within a website. In the Properties - Generation dialog for each catalog, 'Index files with unknown extensions' is turned OFF. 'Inherit above settings from Service' in that dialog is also turned off. However, the index is returning results for .cs files, along with abstracts for those files. I've emptied and restarted the catalogs, but the files are still appearing. My understanding was that the Indexing Service would by default only index HTML, ASCII, and Office documents. What's going on?

    Read the article

  • Measuring accesses to files - apache

    - by George
    So, I run a website that, among other things, serves some files (usually PDFs). All of these are stored under a specific directory on the server: /var/www/vhosts/mysite.com/httpdocs/site/pdf_files. Due to storage issues on my VPS I am thinking of getting some S3 or other cloud storage and mounting it as a drive using S3QL/S3FS. Then I will be able to have the pdf_files folder symlinked to the cloud folder and serve those files through it, without any changes to the web app (is that a good plan?). Now, before doing that, to estimate costs, I need to measure how many file accesses people make: how many times those PDF files are downloaded each month, i.e. how many times they are accessed through the web server. I'd like to do it at the Apache level. What's the best way this can be done? Measuring the bandwidth used by files in that specific folder would also be nice, but estimating the GET requests I'll be making to Amazon is more important.
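
    One low-tech way to pull both numbers out of an existing combined-format access log (a sketch; the log path and URL prefix are assumptions based on the paths above):

        # Count downloads of PDFs under the directory, current log only:
        grep -c 'GET /site/pdf_files/.*\.pdf' /var/log/apache2/access.log

        # Approximate bandwidth: sum the response-size field (field 10 in
        # the combined log format) for matching requests:
        grep 'GET /site/pdf_files/.*\.pdf' /var/log/apache2/access.log \
          | awk '{ sum += $10 } END { print sum " bytes" }'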

    Read the article

  • .htaccess to deny access to most xml files

    - by CEich
    I recently had a Joomla site hacked, so I'm trying to harden the site a bit. There's a section in the recommended .htaccess that restricts outside access to the XML files that come with extensions. However, it also keeps my sitemap.xml file from being accessed. How do I allow a certain file while blocking the rest? Here's the default code:

        <Files ~ "\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

    and my modification that caused a 500 error:

        <Files ~ "(?!sitemap)\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>
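
    One arrangement commonly used for this kind of exception (a sketch: a later <Files> section that matches a specific name overrides the earlier blanket deny for that file, so no negative-lookahead regex is needed):

        <Files ~ "\.xml$">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        # Later sections win for the files they match, re-opening the sitemap only.
        <Files "sitemap.xml">
            Order deny,allow
            Allow from all
        </Files>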

    Read the article

  • Problem modifying read-only files on Samba NAS

    - by Felix Dombek
    Hi, I have files on a Samba server in the local company network and am accessing them from a Windows Vista machine. Usually, if I want to delete a directory containing write-protected (read-only) files, Windows asks "This file is read-only, are you sure?". However, when I do this with a directory on the server, Windows just tells me that I need permissions. The workaround is to remove the read-only flag from the directory and all contained files before deleting. However, I have a TortoiseSVN-versioned directory on the server, and the .svn directories contain read-only files. I need to remove the read-only flags from the directory before every commit, or else it fails. This is quite distressing and shouldn't be so. Does someone know how to attack this problem? (If someone knows how to tell TortoiseSVN not to make its files read-only, that would probably be OK as well.) Thanks!
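
    If the server side can be changed, one Samba option that matches this symptom is delete readonly, which lets clients delete files carrying the DOS read-only bit, as plain Windows would allow (a sketch; the share name and path are placeholders):

        # smb.conf share section (sketch)
        [projects]
            path = /srv/projects
            read only = no
            delete readonly = yes   # permit deletion of files flagged read-only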

    Read the article

  • Nginx, logrotate and empty files

    - by user37887
    Hi. I have a problem with nginx/logrotate. Nginx is logging access to two files (main and data). I have the following crontab entry:

        0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx

    And the file "logrotate-nginx" has the following content:

        /tmp/data.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

        /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

    The rotation is done on both files, but there is a problem: nginx stops logging into those files. Both new files are created, but they are empty. Any ideas why nginx stops logging info to both files?

    Read the article

  • Cannot read/access Apache2 access logs

    - by webworm
    I have been asked to take a look at some access logs for an Apache2 web server running on Ubuntu. I have been told by the administrator of the machine that my login has "admin" access, yet I cannot seem to copy the access logs from Apache2 to my local machine via FTP for analysis. I figure one of two things is happening: I don't really have full admin access, or some other process (perhaps Apache2) has control of the log files and won't let me copy them. How can I tell if I truly have admin access? What type of access do I need to request? Root access? Something else? Should I be able to copy these log files with admin access?
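
    Two quick checks that usually settle the first question (a sketch; on Ubuntu the Apache logs are typically owned by root with group adm, so membership in adm is what normally grants read access):

        id                                  # list your groups; look for "adm"
        ls -l /var/log/apache2/access.log   # see the file's owner, group, and mode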

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, and therefore generic, with standard tools and nothing bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync, only over ssh, to known MAC address directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to push up the files; the questions are listed below:

    - Would I need to manage hundreds of access keys and have each server customized to include this (doable, but key management becomes nightmarish)?
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client that was running the script? (I could deal with a flood of data if someone got into a system.)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key, since if one machine were breached, a malicious hacker could potentially get access to the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here, so perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location, and it automagically starts sending files home with little or no intervention, minimising risk. Pipe dream or feasible? TIA, Aitch
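
    On the write-only question, one shape this can take (a sketch of an IAM policy granting only s3:PutObject under a per-device key prefix; the bucket name and prefix are placeholders, and each device would still carry its own credentials):

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::sensor-archive/device-0042/*"
          }]
        }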

    Read the article

  • DFS keeps constantly replicating almost all files

    - by Adrian Godong
    We have always had problems with DFS, but recently it has gotten worse (for no apparent reason), to the point where it's becoming harmful. We have one master server and DFS connections to four other servers. The four servers don't modify any files, so all replication always propagates from the master to the four other servers. The replicated directory has about 900,000 files. In recent weeks, every time we check DFS, the DFS backlogs have hundreds of thousands of files. For instance, right now the master server is replicating about 700,000 files to three of the four servers, while the fourth one is fine. Sometimes only one is off, sometimes two, and this time three. Also, it is never the same set of servers. It is inconceivable that something periodically touches all 900,000 files. The biggest change that happens is a scheduled update of several thousand files every six hours. Does anybody have the same problem? Is it a known issue?

    Read the article

  • Disable .htaccess in Apache (AllowOverride None set), still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. However, after restarting Apache it is still reading the .htaccess files. How can I completely turn off reading these files? Below is an update of all files containing "AllowOverride".

    /etc/apache2/mods-available/userdir.conf

        <IfModule mod_userdir.c>
            UserDir public_html
            UserDir disabled root

            <Directory /home/*/public_html>
                AllowOverride FileInfo AuthConfig Limit Indexes
                Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
                <Limit GET POST OPTIONS>
                    Order allow,deny
                    Allow from all
                </Limit>
                <LimitExcept GET POST OPTIONS>
                    Order deny,allow
                    Deny from all
                </LimitExcept>
            </Directory>
        </IfModule>

    /etc/apache2/mods-available/alias.conf

        <IfModule alias_module>
            # Aliases: Add here as many aliases as you need (with no limit).
            # The format is:
            #   Alias fakename realname
            #
            # Note that if you include a trailing / on fakename then the server will
            # require it to be present in the URL. So "/icons" isn't aliased in this
            # example, only "/icons/". If the fakename is slash-terminated, then the
            # realname must also be slash terminated, and if the fakename omits the
            # trailing slash, the realname must also omit it.
            #
            # We include the /icons/ alias for FancyIndexed directory listings. If
            # you do not use FancyIndexing, you may comment this out.
            Alias /icons/ "/usr/share/apache2/icons/"

            <Directory "/usr/share/apache2/icons">
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>

    /etc/apache2/httpd.conf

        # Directives to allow use of AWStats as a CGI
        Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/"
        Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/"
        Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/"
        ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/"

        # This is to permit URL access to scripts/files in AWStats directory.
        <Directory "/usr/share/doc/awstats/examples/wwwroot">
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        Alias /awstats-icon/ /usr/share/awstats/icon/
        <Directory /usr/share/awstats/icon>
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    /etc/apache2/sites-available/default-ssl

        <IfModule mod_ssl.c>
        <VirtualHost _default_:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key

            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt

            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication or alternatively one
            # huge file containing all of them (file must be PEM encoded)
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt

            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication or alternatively one huge file containing all
            # of them (file must be PEM encoded)
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl

            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth  10

            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            #            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            #            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            #            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            #            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            #           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>

            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            #   Translate the client X.509 into a Basic Authorisation. This means that
            #   the standard Auth/DBMAuth methods can be used for access control. The
            #   user name is the `one line' version of the client's X.509 certificate.
            #   Note that no password is obtained from the user. Every entry in the user
            #   file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            #   This exports two additional environment variables: SSL_CLIENT_CERT and
            #   SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            #   server (always existing) and the client (only existing when client
            #   authentication is used). This can be used to import the certificates
            #   into CGI scripts.
            # o StdEnvVars:
            #   This exports the standard SSL/TLS related `SSL_*' environment variables.
            #   Per default this exportation is switched off for performance reasons,
            #   because the extraction step is an expensive operation and is usually
            #   useless for serving static content. So one usually enables the
            #   exportation for CGI and SSI requests only.
            # o StrictRequire:
            #   This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            #   under a "Satisfy any" situation, i.e. when it applies access is denied
            #   and no other module can change it.
            # o OptRenegotiate:
            #   This enables optimized SSL connection renegotiation handling when SSL
            #   directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>

            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            #   This forces an unclean shutdown when the connection is closed, i.e. no
            #   SSL close notify alert is send or allowed to received. This violates
            #   the SSL/TLS standard but is needed for some brain-dead browsers. Use
            #   this when you receive I/O errors because of the standard approach where
            #   mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            #   This forces an accurate shutdown when the connection is closed, i.e. a
            #   SSL close notify alert is send and mod_ssl waits for the close notify
            #   alert of the client. This is 100% SSL/TLS standard compliant, but in
            #   practice often causes hanging connections with brain-dead browsers. Use
            #   this only for browsers where you know that their SSL implementation
            #   works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            # MSIE 7 and newer should be able to use keepalive
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
        </VirtualHost>
        </IfModule>

    /etc/apache2/sites-available/default

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            Alias /delboy /usr/share/phpmyadmin
            <Directory /usr/share/phpmyadmin>
                # Restrict phpmyadmin access
                Order Deny,Allow
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    /etc/apache2/conf.d/security

        # Disable access to the entire file system except for the directories that
        # are explicitly allowed later.
        #
        # This currently breaks the configurations that come with some web application
        # Debian packages.
        #
        #<Directory />
        #    AllowOverride None
        #    Order Deny,Allow
        #    Deny from all
        #</Directory>

        # Changing the following options will not really affect the security of the
        # server, but might make attacks slightly more difficult in some cases.

        # ServerTokens
        # This directive configures what you return as the Server HTTP response
        # Header. The default is 'Full' which sends information about the OS-Type
        # and compiled in modules.
        # Set to one of: Full | OS | Minimal | Minor | Major | Prod
        # where Full conveys the most information, and Prod the least.
        #ServerTokens Minimal
        ServerTokens OS
        #ServerTokens Full

        # Optionally add a line containing the server version and virtual host
        # name to server-generated pages (internal error documents, FTP directory
        # listings, mod_status and mod_info output etc., but not CGI generated
        # documents or custom error documents).
        # Set to "EMail" to also include a mailto: link to the ServerAdmin.
        # Set to one of: On | Off | EMail
        #ServerSignature Off
        ServerSignature On

        # Allow TRACE method
        #
        # Set to "extended" to also reflect the request body (only for testing and
        # diagnostic purposes).
        #
        # Set to one of: On | Off | extended
        TraceEnable Off
        #TraceEnable On

    /etc/apache2/apache2.conf

        # Based upon the NCSA server configuration files originally by Rob McCool.
        #
        # This is the main Apache server configuration file. It contains the
        # configuration directives that give the server its instructions.
        # See http://httpd.apache.org/docs/2.2/ for detailed information about
        # the directives.
        #
        # Do NOT simply read the instructions in here without understanding
        # what they do. They're here only as hints or reminders. If you are unsure
        # consult the online docs. You have been warned.
        #
        # The configuration directives are grouped into three basic sections:
        # 1. Directives that control the operation of the Apache server process as a
        #    whole (the 'global environment').
        # 2. Directives that define the parameters of the 'main' or 'default' server,
        #    which responds to requests that aren't handled by a virtual host.
        #    These directives also provide default values for the settings
        #    of all virtual hosts.
        # 3. Settings for virtual hosts, which allow Web requests to be sent to
        #    different IP addresses or hostnames and have them handled by the
        #    same Apache server process.
        #
        # Configuration and logfile names: If the filenames you specify for many
        # of the server's control files begin with "/" (or "drive:/" for Win32), the
        # server will use that explicit path. If the filenames do *not* begin
        # with "/", the value of ServerRoot is prepended -- so "foo.log"
        # with ServerRoot set to "/etc/apache2" will be interpreted by the
        # server as "/etc/apache2/foo.log".

        ### Section 1: Global Environment
        #
        # The directives in this section affect the overall operation of Apache,
        # such as the number of concurrent requests it can handle or where it
        # can find its configuration files.

        # ServerRoot: The top of the directory tree under which the server's
        # configuration, error, and log files are kept.
        #
        # NOTE! If you intend to place this on an NFS (or otherwise network)
        # mounted filesystem then please read the LockFile documentation (available
        # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>);
        # you will save yourself a lot of trouble.
        #
        # Do NOT add a slash at the end of the directory path.
        #
        #ServerRoot "/etc/apache2"

        # The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
        LockFile ${APACHE_LOCK_DIR}/accept.lock

        # PidFile: The file in which the server should record its process
        # identification number when it starts.
        # This needs to be set in /etc/apache2/envvars
        PidFile ${APACHE_PID_FILE}

        # Timeout: The number of seconds before receives and sends time out.
        Timeout 300

        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        KeepAlive On

        # MaxKeepAliveRequests: The maximum number of requests to allow
        # during a persistent connection. Set to 0 to allow an unlimited amount.
        # We recommend you leave this number high, for maximum performance.
        MaxKeepAliveRequests 100

        # KeepAliveTimeout: Number of seconds to wait for the next request from the
        # same client on the same connection.
        KeepAliveTimeout 4

        ##
        ## Server-Pool Size Regulation (MPM specific)
        ##

        # prefork MPM
        # StartServers: number of server processes to start
        # MinSpareServers: minimum number of server processes which are kept spare
        # MaxSpareServers: maximum number of server processes which are kept spare
        # MaxClients: maximum number of server processes allowed to start
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild 500
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a
        #              graceful restart. ThreadLimit can only be changed by stopping
        #              and starting Apache.
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # event MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

        # AccessFileName: The name of the file to look for in each directory
        # for additional configuration directives. See also the AllowOverride
        # directive.
        AccessFileName .htaccess

        # The following lines prevent .htaccess and .htpasswd files from being
        # viewed by Web clients.
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        # DefaultType is the default MIME type the server will use for a document
        # if it cannot otherwise determine one, such as from filename extensions.
        # If your server contains mostly text or HTML documents, "text/plain" is
        # a good value. If most of your content is binary, such as applications
        # or images, you may want to use "application/octet-stream" instead to
        # keep browsers from trying to display binary files as though they are
        # text.
        DefaultType text/plain

        # HostnameLookups: Log the names of clients or just their IP addresses
        # e.g., www.apache.org (on) or 204.62.129.132 (off).
        # The default is off because it'd be overall better for the net if people
        # had to knowingly turn this feature on, since enabling it means that
        # each client request will result in AT LEAST one lookup request to the
        # nameserver.
        HostnameLookups Off

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        ErrorLog ${APACHE_LOG_DIR}/error.log

        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        # Include module configuration:
        Include mods-enabled/*.load
        Include mods-enabled/*.conf

        # Include all the user configurations:
        Include httpd.conf

        # Include ports listing
        Include ports.conf

        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into
        # %{X-Forwarded-For}i
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        # Include of directories ignores editors' and dpkg's backup files,
        # see README.Debian for details.

        # Include generic snippets of statements
        Include conf.d/

        # Include the virtual host configurations:
        Include sites-enabled/

    Read the article

  • How to rotate Tomcat 6 logs on Windows every night

    - by Danilo Brambilla
    Hi all, our Tomcat 6 runs on a Windows Server 2003 server, producing logs in the Program Files\Apache Software Foundation\Tomcat 6.0\logs folder. Only catalina.YYYY-MM-DD.log rotates every night; the admin, host-manager, jakarta, localhost, manager, stderr, and stdout logs do not rotate and are dated at the last server restart. These files are mostly empty and always locked. How can I set Tomcat to rotate all these logs every night (if possible without a server/service restart)? Thank you in advance for the help.
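
    For the juli-managed logs, rotation is configured per handler in conf/logging.properties; a sketch of the stock entry format is below (names follow Tomcat 6's defaults; note that stderr and stdout are written by the Windows service wrapper rather than juli, so they fall outside this file):

        # conf/logging.properties (excerpt); FileHandler entries of this
        # type produce one date-stamped file per day.
        2localhost.org.apache.juli.FileHandler.level = FINE
        2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
        2localhost.org.apache.juli.FileHandler.prefix = localhost.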

    Read the article

  • Merging Two KML Files to Display Them with Different Marker Icons on Google Maps

    - by Maxim Z.
    Let's say that I have two spreadsheets with addresses. I uploaded these spreadsheets into Google Fusion Tables, geocoded the addresses, and exported the results as KML files. Now, I want to take these two KML files and merge them, while maintaining the location data and using it to map the points with Google Maps. Well, I found a way to easily merge the KML files: import both of them into a "My Maps" map with Google Maps! However, my problem is this: when I do that, all of the locations in my data have the same marker icon on the map. From past experience, I know that these markers can somehow be defined inside the KML files. Is it possible to combine these two KML files while giving one file's points one marker icon and the other file's points another marker icon? Just in case my question is confusing, what I mean is giving the first set of points blue markers, for example, and the other set of points red markers, so that they can be told apart when overlaid.
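
    In KML, a marker icon can be declared once as a shared <Style> and referenced from each placemark with <styleUrl>; a sketch of a merged file is below (the icon URLs and coordinates are placeholders):

        <Document>
          <Style id="setA"><IconStyle><Icon>
            <href>http://maps.google.com/mapfiles/ms/icons/blue-dot.png</href>
          </Icon></IconStyle></Style>
          <Style id="setB"><IconStyle><Icon>
            <href>http://maps.google.com/mapfiles/ms/icons/red-dot.png</href>
          </Icon></IconStyle></Style>

          <Placemark>
            <styleUrl>#setA</styleUrl>   <!-- point from the first file -->
            <Point><coordinates>-71.06,42.36,0</coordinates></Point>
          </Placemark>
          <Placemark>
            <styleUrl>#setB</styleUrl>   <!-- point from the second file -->
            <Point><coordinates>-73.99,40.73,0</coordinates></Point>
          </Placemark>
        </Document>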

    Read the article
