Search Results

Search found 3471 results on 139 pages for 'docs'.

Page 102/139 | < Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >

  • Apache in OS X not displaying localhost nor vhosts correctly

    - by Marcus
    I've encountered a really odd problem in my development environment, and I really can't make any sense of it. It started when a locally developed PHP site refused to update any content I edited in a file – no text, nothing. So if the document was <h2>Hello!</h2> and I edited it to <h2>What's wrong?!</h2>, it still output <h2>Hello!</h2>. I thought it was some kind of caching problem, but neither "hard reloads" in the browser nor sudo apachectl -k restart sorted it out. Only a restart of my Mac finally fixed it.

    Now, a few days later, even stranger issues are appearing. I have a LAMP stack installed via Homebrew. In httpd-vhosts.conf I've set ~/Dev/ as my localhost document root, and I set up a <VirtualHost *:80> for each project ("ServerName projectname.dev", for example). However, whatever files or folders I put in ~/Dev/ have stopped showing up on localhost, and new VirtualHost directives don't work. There are three projects plus a "docs" folder in the directory, but "localhost" only displays the two older projects...?

    So, as I've said – I've tried restarting Apache (without errors), clearing browser caches (tried in three browsers: Chrome, Safari and Firefox) and even rebooting the Mac. Nothing. Any ideas? Running OS X 10.8.5 and Apache 2.2.24.
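
    For reference, a minimal name-based vhost of the kind described above might look like the sketch below. The paths, the projectname.dev name and the /etc/hosts entry are illustrative assumptions, not the poster's actual files; with a made-up .dev hostname, each name also has to resolve locally (e.g. via /etc/hosts) or the browser never reaches the vhost at all.

        NameVirtualHost *:80

        <VirtualHost *:80>
            # Hypothetical project vhost; ServerName and paths are placeholders
            ServerName projectname.dev
            DocumentRoot "/Users/marcus/Dev/projectname"
            <Directory "/Users/marcus/Dev/projectname">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        # /etc/hosts – assumed entry so the .dev name resolves to the local machine
        127.0.0.1    projectname.dev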

    Read the article

  • How to resolve virtual disk degraded in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks:

        FriendlyName    CanPool    OperationalStatus    HealthStatus    Usage          Size
        ------------    -------    -----------------    ------------    -----          ----
        PhysicalDisk2   False      OK                   Healthy         Auto-Select    2.73 TB
        PhysicalDisk3   False      OK                   Healthy         Auto-Select    2.73 TB
        PhysicalDisk4   False      OK                   Healthy         Auto-Select    2.73 TB
        PhysicalDisk5   False      OK                   Healthy         Auto-Select    2.73 TB

    There is also a separate OS disk. The above disks are part of a single storage pool:

        FriendlyName    OperationalStatus    HealthStatus    IsPrimordial    IsReadOnly
        ------------    -----------------    ------------    ------------    ----------
        Pool            OK                   Healthy         False           False

    Within this storage pool some virtual disks are defined, see below:

        FriendlyName    ResiliencySettingName    OperationalStatus    HealthStatus    IsManualAttach    Size
        ------------    ---------------------    -----------------    ------------    --------------    ----
        Docs            Mirror                   OK                   Healthy         False             500 GB
        Data            Mirror                   Degraded             Warning         False             500 GB
        Work            Mirror                   Degraded             Warning         False             2 TB

    The virtual disks are all running as normal 2-way mirrors, but two of them are degraded. This is probably because one of the physical disks was offline for a short period of time. However, now the virtual disks cannot be repaired, even though all physical disks are healthy and there is plenty of available space in the storage pool. This I cannot understand, so I was hoping for some help on how to resolve it.

    Below I have listed the full output from the Get-VirtualDisk cmdlet for the "Work" disk:

        ObjectId                          : {XXXXXXXX}
        PassThroughClass                  :
        PassThroughIds                    :
        PassThroughNamespace              :
        PassThroughServer                 :
        UniqueId                          : XXXXXXXX
        Access                            : Read/Write
        AllocatedSize                     : 412316860416
        DetachedReason                    : None
        FootprintOnPool                   : 824633720832
        FriendlyName                      : Work
        HealthStatus                      : Warning
        Interleave                        : 262144
        IsDeduplicationEnabled            : False
        IsEnclosureAware                  : False
        IsManualAttach                    : False
        IsSnapshot                        : False
        LogicalSectorSize                 : 512
        Name                              :
        NameFormat                        :
        NumberOfAvailableCopies           : 0
        NumberOfColumns                   : 2
        NumberOfDataCopies                : 2
        OperationalStatus                 : Degraded
        OtherOperationalStatusDescription :
        OtherUsageDescription             : Disk for data being worked on (not backed up)
        ParityLayout                      :
        PhysicalDiskRedundancy            : 1
        PhysicalSectorSize                : 4096
        ProvisioningType                  : Thin
        RequestNoSinglePointOfFailure     : True
        ResiliencySettingName             : Mirror
        Size                              : 2199023255552
        UniqueIdFormat                    : Vendor Specific
        UniqueIdFormatDescription         :
        Usage                             : Other
        PSComputerName                    :
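
    A sketch of the Storage-module cmdlets that usually apply in this situation, using the disk names from the listing above. Retiring/resetting a physical disk is only an assumption about what might be blocking the repair, not a confirmed fix for this case:

        # Kick off a repair of each degraded space and watch the rebuild job
        Repair-VirtualDisk -FriendlyName "Work"
        Repair-VirtualDisk -FriendlyName "Data"
        Get-StorageJob                         # shows regeneration progress, if any

        # If the repair refuses to start, check whether the pool still treats the
        # briefly-offline disk as retired or in lost communication
        Get-PhysicalDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus, Usage

        # Assumption: if one disk shows Usage = Retired, returning it to Auto-Select
        # may be needed before the mirror will resilver (disk name is a placeholder)
        Set-PhysicalDisk -FriendlyName "PhysicalDisk4" -Usage AutoSelect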

    Read the article

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. I did install the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install and it starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in another answer, but this is a kludge; I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity.

    Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced as preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs – however this does leave the array temporarily degraded, so for this example suppose we install D without offlining or removing C. The Solaris docs indicate that we can replace a disk without first offlining it, using a command such as:

        zpool replace pool C D

    This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor" (I don't know the actual terminology used in the internal implementation). Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable: above the cursor, B and D contain sufficient parity, and below the cursor, B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't state in certain terms).

    If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing to A and B below the cursor, then we're toast. Some experimenting could answer this question, but I was hoping maybe someone here already knows which way ZFS handles this situation. Thank you in advance for any insight!
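
    For reference, the commands involved in running and watching such a replacement, as a sketch using the pool and device names from the example above:

        # start the in-place replacement (C stays attached while D resilvers)
        zpool replace pool C D

        # watch the resilver; while it runs, zpool status reports the old and new
        # device together as a "replacing" vdev and shows the scan progress
        zpool status -v pool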

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit.

    I'm trying to connect remotely to a Postgres instance in order to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working.

    To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgres is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit ip/dbname values, all asterisks, and every combination in between, and tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to skip the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied". Any insights are appreciated. As a hack workaround, if there was a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
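
    For comparison, the format libpq documents for pgpass.conf is five colon-separated fields, and on Windows the file is looked up under %APPDATA%\postgresql\ on the machine running the client (pg_dump here), not on the server. A sketch, with a placeholder password:

        # %APPDATA%\postgresql\pgpass.conf
        # (usually C:\Users\<user>\AppData\Roaming\postgresql\pgpass.conf)
        # hostname:port:database:username:password
        *:5432:*:postgres:mypassword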

    Read the article

  • Virtualizing an Inline network appliance with VirtualBox (or VMWare)

    - by Tzury Bar Yochay
    My device, which is a Linux-based IP in-liner, is transparent to the network peripherals; that is, no IP address is assigned to any of its interfaces. For the sake of the conversation, let's use an ADSL connection as an example: while the device is inspecting the bi-directional traffic, the network behaves the same as if the device were not there, attached to the wire (see the physical setup in the attached diagram).

    I wonder if I can enclose that "device" within a Windows machine and have it operate virtually, so that it still sits inline between the ADSL router and the Windows networking interface by using virtual NICs (or whatever their name is in Windows), inspecting the traffic the same as if it were on a separate physical device. The drawing under "Virtual Setup" in the attached diagram shows what I am trying to achieve.

    Reading a bit of the VirtualBox docs, it seems like binding the right side is relatively simple: perhaps I should have one network adapter set to Bridged Networking, and VirtualBox will connect it to the physical NIC on the host machine, with network packets exchanged directly, circumventing the host operating system's network stack (WinXP in my case). However, I have no idea how to achieve the left side of my diagram, which requires adding virtual NICs to Windows and configuring them correctly in a way that makes that pipeline possible. I would appreciate any help. By the way, if this is not possible with VirtualBox but is with another virtualization solution (e.g. VMware), I would accept that as well.
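
    A sketch of how the two guest NICs could be attached with VirtualBox's VBoxManage CLI. The VM name "inliner" and the adapter names are placeholders, and the left side relies on a host-only adapter (which shows up as a virtual NIC inside Windows) – an assumption about the topology, not a confirmed recipe:

        rem Right side: bridge guest NIC 1 onto the physical NIC facing the ADSL router
        VBoxManage modifyvm "inliner" --nic1 bridged --bridgeadapter1 "Realtek PCIe GBE Family Controller"

        rem Left side: attach guest NIC 2 to a host-only network; the matching
        rem "VirtualBox Host-Only Network" adapter is the virtual NIC Windows would use
        VBoxManage modifyvm "inliner" --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"

        rem Put both guest NICs in promiscuous mode so the bridge inside the guest sees all frames
        VBoxManage modifyvm "inliner" --nicpromisc1 allow-all --nicpromisc2 allow-all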

    Read the article

  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses OpenPanel to automatically create VirtualHosts within apache2. My ideal situation is to have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example:

        /home/user/virtualenv1
        /home/user/virtualenv2

    My Django applications reside in /var/www, for example:

        /var/www/djangosite1
        /var/www/djangosite2

    Now I've read up on the OpenPanel docs and figured out the best thing to do is to create a django.conf file inside the mydomain.com.inc folder, which looks something like this:

        /etc/apache2/openpanel.d/mydomain.com.inc/django.conf

        DocumentRoot /var/www/djangosite1/project
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
        <Directory /var/www/djangosite1/project>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static /var/www/djangosite1/project/static-root

    Now my problem is that this setup seems unable to find the virtualenv site-packages, and thus doesn't recognize any dependencies available in the given virtualenv. Also, commenting out this line doesn't seem to break or change a thing:

        WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

        > service apache2 start
        ImportError: No module named South

    When I install South outside the virtualenv, everything works.
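
    One thing the snippet above never does is tell mod_wsgi to actually run the application inside the named daemon process, which would also explain why removing the WSGIDaemonProcess line changes nothing. A sketch of the shape that is usually suggested – the directive names are mod_wsgi's, the paths are the ones from this question, and the exact python-path contents are an assumption:

        WSGIDaemonProcess mydomain python-path=/var/www/djangosite1/project:/home/user/virtualenv1/lib/python2.6/site-packages
        WSGIProcessGroup mydomain
        WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py

    Without a matching WSGIProcessGroup, the script runs in embedded mode and never sees the daemon process's python-path at all.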

    Read the article

  • emacs ORG-mode "headless" export-as commands?

    - by Seamus
    When I use org-export-as-latex or org-export-as-html, org-mode turns my buffer into a .tex file or .html file. But I don't want all the extra junk that it adds to the file: I want to handle the documentclass and everything myself and just \input the org-mode-generated file (or the analogous thing for HTML with PHP). So if my org file just has:

        * Section
        - Stuff
        - Things

    I want the org-mode command to output just:

        \section{Section}
        \begin{itemize}
        \item Stuff
        \item Things
        \end{itemize}

    without any of the extra \tableofcontents junk that org adds to it. I know I could define my own kind of #+LaTeX_CLASS that could add the packages I want and so on, but I don't want to do things that way (and that wouldn't remove the \maketitle or the spurious \vspace* that org insists on inserting). Is there a command to do this "headless" parsing and converting? I had a look, but it's not obvious from the documentation. Presumably some low-level org command is doing the parsing and converting I want, but I couldn't find what it was called from looking at the docs and C-h pages... This is not a question about HTML or LaTeX but about emacs org-mode, so don't kick it off to some other site.

    Read the article

  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things, and clean up some other things which are automated but not in the prettiest way. :-)

    I've been looking around at various tools for automating software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like:

    1. a human being pushes The Button
    2. pull branch A from version-control repository B
    3. run command C to compile it
    4. copy the binaries D to servers E1 through E10
    5. on each server, run command F to make all changes take effect

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it?

    What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, manually update the Puppet config, which would trigger Puppet to update the servers ... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
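
    For a feel of what steps 4 and 5 can look like on the Puppet side, here is a minimal manifest sketch. The module paths, file names and restart command are invented placeholders, and pulling/compiling (steps 2–3) is assumed to happen outside Puppet on a build box that publishes the artifact to the master's file server:

        # A sketch: ship one pre-built binary and run a command whenever it changes
        file { '/opt/myapp/bin/myapp':
          ensure => file,
          mode   => '0755',
          source => 'puppet:///modules/myapp/myapp',   # artifact published by the build step
          notify => Exec['reload-myapp'],
        }

        exec { 'reload-myapp':
          command     => '/usr/sbin/service myapp restart',   # "command F"
          refreshonly => true,                                # only runs when the file above changes
        }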

    Read the article

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    I've been surfing around for a solution for a couple of days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    ... and this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    ... both of which hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on Server Fault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond TestString and the RewriteRule substitution arguments.
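
    One approach that is sometimes suggested, as a sketch rather than a known-good recipe: concatenate the referer and the Host header into a single TestString and compare the two hostnames with a back-reference inside the same pattern, so no domain is hardcoded. The separator and the exact pattern are assumptions and would need testing on real traffic:

        RewriteCond %{HTTP_REFERER} !^$
        # Block only when the referer's hostname differs from the requested Host
        # ("@@" is just an arbitrary separator unlikely to appear in either value)
        RewriteCond %{HTTP_REFERER}@@%{HTTP_HOST} !^https?://(?:www\.)?([^/:]+)[^@]*@@(?:www\.)?\1(?::\d+)?$ [NC]
        RewriteRule \.(gif|ico|jpe?g|png)$ - [NC,F]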

    Read the article

  • 802.11g -> wired ethernet bridging not working

    - by Malachi
    Usually people want to go the other direction, but I want to take our relatively fast and stable house 802.11g signal and bridge it to Ethernet. I have tried using an AirPort Express (the b/g flavor) and my i7 MacBook Pro, both to no avail. Word is that the b/g flavor of AirPort Express maxes out at firmware 6.3, which doesn't support this kind of bridging properly. However, I expected my MacBook Pro to do the job with its "Internet Sharing" feature. Alas, although my wired PC does sort of see it, it doesn't work out. Strangely, using DHCP the PC receives the same IP address as my MBP uses on the network. Less strangely, but still surprisingly, the wired Ethernet port on my Mac registers as the IP address of the gateway when queried with ifconfig. It sort of makes sense that the Mac would "pretend" to be the gateway, but the whole thing just isn't working and seems configured wrong – yet all the docs I see say basically "OS X Internet Sharing: click it and go". What do I do? Do I really have to buy more hardware, even though I have plenty of would-be candidates for bridging? Incidentally, the host router originating the 802.11g signal is a Belkin 802.11g router, and is documented to support WDS.

    Read the article

  • plesk: how to configure reverse proxy rules properly?

    - by rvdb
    I'm trying to configure reverse proxy rules in vhost.conf. I have Apache 2.2.8 on Ubuntu 8.04, managed by Plesk 10.4.4. What I'm trying to achieve is a reverse proxy rule that defers all traffic to – say – http://mydomain/tomcat/ to the Tomcat server running on port 8080. I have mod_rewrite and mod_proxy loaded in Apache. As far as I understand the mod_proxy docs, entering the following rules in /var/www/vhosts/mydomain/conf/vhost.conf should work:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyRequests off
        RewriteRule ^/tomcat/(.*)$ http://mydomain:8080/$1 [P]

    Yet, I am getting an HTTP 500: internal server error when requesting the above URL. (Note: I decided to use a rewrite rule in order to at least get some information logged.) I have made mod_rewrite log extensively, and find the following entries in the logs (note: due to a limitation of max. 2 URLs in posts by new users, I have modified all following URLs so that they only contain 1 slash after "http:" – in case you're suspecting typos, this was done on purpose):

        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) init rewrite engine with requested uri /tomcat/testApp/
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (3) applying pattern '^/tomcat/(.*)$' to uri '/tomcat/testApp/'
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) rewrite '/tomcat/testApp/' -> 'http:/mydomain:8080/testApp/'
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (2) forcing proxy-throughput with http:/mydomain:8080/testApp/
        81.241.230.23 - - [19/Mar/2012:16:42:59 +0100] [mydomain/sid#b06ab8][rid#1024af8/initial] (1) go-ahead with proxy request proxy:http:/mydomain:8080/testApp/ [OK]

    This suggests that the rewrite and proxy part is processed OK; still the proxied request produces a 500 error. Yet: addressing the test app directly via http:/mydomain:8080/testApp does work, and the same setup does work on my local computer. Is there something else (Plesk-related, perhaps?) I should configure? Many thanks for any pointers!
    Ron
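
    For comparison, the plain mod_proxy form of the same mapping, as a sketch. The ProxyPassReverse line is there so redirects issued by Tomcat are rewritten back; whether Plesk preserves vhost.conf edits across config regeneration is a separate question:

        ProxyRequests Off
        ProxyPass        /tomcat/ http://localhost:8080/
        ProxyPassReverse /tomcat/ http://localhost:8080/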

    Read the article

  • What is proper relationship between /etc/hosts and DNS A records for a Linux server?

    - by MountainX
    I have an Ubuntu server. It is going to be a web server with a URI of www.example.com. I have a DNS A record pointing www.example.com to the server's IP address. Let's say I pick "trinity" as the hostname for this server. I want to set up the DNS records correctly. I need reverse DNS for www.example.com, so a CNAME for www.example.com doesn't seem appropriate.

    Here's my question: is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?) If so, would the following be a proper /etc/hosts file?

        $ cat /etc/hosts
        127.0.1.1       trinity.local trinity
        99.100.101.102  trinity.example.com trinity www.example.com

    This server is a Linode, and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section; I bolded the line that seems to apply here.

        Update /etc/hosts

        Next, edit your /etc/hosts file to resemble the following example, replacing "plato" with your chosen
        hostname, "example.com" with your system's domain name, and "12.34.56.78" with your system's IP address.
        As with the hostname, the domain name part of your FQDN does not necessarily need to have any relationship
        to websites or other services hosted on the server (although it may if you wish). As an example, you might
        host "www.something.com" on your server, but the system's FQDN might be "mars.somethingelse.com".

        File: /etc/hosts

            127.0.0.1    localhost.localdomain localhost
            12.34.56.78  plato.example.com plato

        The value you assign as your system's FQDN should have an "A" record in DNS pointing to your Linode's
        IP address. For more information on configuring DNS, please see our guide on configuring DNS with the
        Linode Manager.
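
    A quick way to sanity-check the resulting records from any machine, as a sketch using dig with the placeholder names and address from the question:

        # forward lookups: both names should return the server's address
        dig +short www.example.com A
        dig +short trinity.example.com A

        # reverse lookup: the PTR for the address would normally point at the FQDN (trinity.example.com)
        dig +short -x 99.100.101.102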

    Read the article

  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here. I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD and happens to be larger than FAT32 will allow for in a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs:

        CentOS-5.9-x86_64-bin-DVD-1of2.iso  (4.4 GB)
        CentOS-5.9-x86_64-bin-DVD-2of2.iso  (718 MB)

    I have a USB stick that I want to somehow get these two ISOs onto. Since the first one is 4.4 GB, I can't use ISO2USB because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image – of the same distro; I'm not trying for some fancy multi-boot thing – to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the md5 checker. There's no IMG file of the whole thing (only a net-install version, which I don't want – I want to pre-download everything), otherwise I would've gone for that.

    So, given that I have these two DVD ISOs, how can I get them on a stick that will boot and make use of both of them properly to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows, and I am using Windows to prepare the stick.

    Read the article

  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane. I'm planning on connecting SATA (non-SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card into a certain number of SATA cables), how many drives can I connect to this card?

    The way I see it, there are two limitations: a limitation imposed by the theoretical max number of devices supported by the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more?

    Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around in the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!

    Read the article

  • Postfix additional transports - is it working?

    - by threecheeseopera
    I have enabled two additional transports in my Postfix config to deal with recipient domains that demand connection limiting, per the instructions here at Server Fault. However, I have no idea whether this is working or not; in fact, I think it is not working, given the send speeds I am seeing in the logs. How might I determine if my additional transports are working? If they aren't, do you have any tips on figuring out why? And do you have any comments on my particular configuration? (Am I a bucket of fail?)

    I have enabled the additional transports in master.cf:

        smtp      inet  n       -       -       -       -       smtpd
        careful   unix  -       -       n       -       10      smtp
            -o smtp_connect_timeout=5
            -o smtp_helo_timeout=5
        cautious  unix  -       -       n       -       -       smtp
            -o smtp_connect_timeout=5
            -o smtp_helo_timeout=5

    I have set up the transport mapping file /etc/postfix/transport:

        hotmail.com      cautious:
        yahoo.com        careful:
        gmail.com        cautious:
        earthlink.net    cautious:
        msn.com          cautious:
        live.com         cautious:
        aol.com          careful:

    I have set up the transport mapping and some connection-limiting settings in main.cf:

        transport_maps = hash:/etc/postfix/transport
        careful_initial_destination_concurrency = 5
        careful_destination_concurrency_limit = 10
        cautious_destination_concurrency_limit = 50

    Finally, I have converted the transport file to a db per the Postfix docs:

        #> postmap /etc/postfix/transport

    And then restarted Postfix. I do see my transport_maps setting when I run postconf, but I do not see any of the transport-specific settings ('careful_xxx_yyy_zzz'). Also the mail logs do not appear to be any different from what they were previously. Thanks!!!
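
    One low-level check worth running, as a sketch (the domain is one from the table above): query the compiled map directly and confirm what the concurrency overrides in main.cf actually say.

        # Does the lookup table resolve this domain to the expected transport?
        postmap -q hotmail.com hash:/etc/postfix/transport
        # expected output, given the table shown above:  cautious:

        # postconf does not necessarily list per-transport overrides;
        # grepping main.cf confirms what was actually set
        grep -E '^(careful|cautious)_' /etc/postfix/main.cf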

    Read the article

  • Apache2 default vhost in alphabetical order or override with _default_ vhost?

    - by benbradley
    I've got multiple named vhosts on an Apache web server (CentOS 5, Apache 2.2.3). Each vhost has its own config file in /etc/httpd/vhosts.d, and these vhost config files are included from the main httpd conf with:

        Include vhosts.d/*.conf

    Here's an example of one of the vhost confs:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.domain.biz
            ServerAlias domain.biz www.domain.biz
            DocumentRoot /var/www/www.domain.biz
            <Directory /var/www/www.domain.biz>
                Options +FollowSymLinks
                Order Allow,Deny
                Allow from all
            </Directory>
            CustomLog /var/log/httpd/www.domain.biz_access.log combined
            ErrorLog /var/log/httpd/www.domain.biz_error.log
        </VirtualHost>

    Now when anyone tries to access the server directly by its public IP address, they get the first vhost specified in the aggregated config (in my case, alphabetical order from the vhosts.d directory). I'd like anyone accessing the server directly by IP address to just get a 403 or a 404. I've discovered several ways to set a default/catch-all vhost, and some conflicting opinions:

    - I could create a new vhost conf in vhosts.d called 000aaadefault.conf or something, but that feels a bit nasty.
    - I could have a <VirtualHost> block in my main httpd.conf before the vhosts.d directory is included.
    - I could just specify a DocumentRoot in my main httpd.conf.
    - What about specifying a default vhost in httpd.conf with _default_? http://httpd.apache.org/docs/2.2/vhosts/examples.html#default

    Would having a <VirtualHost _default_:*> block in my httpd.conf before I Include vhosts.d/*.conf be the best way to get a catch-all?
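
    A sketch of the first-loaded catch-all variant mentioned above. The filename is only there so it sorts before the real vhosts; with name-based hosting on *:80, the first <VirtualHost *:80> Apache loads acts as the default for requests whose Host header matches no ServerName, which includes bare-IP requests. The ServerName and DocumentRoot here are placeholders:

        # /etc/httpd/vhosts.d/000-default.conf  (name chosen so it is included first)
        <VirtualHost *:80>
            ServerName default.invalid
            DocumentRoot /var/www/empty
            <Location />
                Order Allow,Deny
                Deny from all           # bare-IP / unknown-Host requests get a 403
            </Location>
        </VirtualHost>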

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for Gzip. We set up a new distribution with a custom origin pointing back to our application servers which are behind a elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this set up, we get nothing but Miss and RefreshHits from CloudFront, which so far has defeated the purpose. Is there any additional settings in order to use an ELB as your custom origin? In the docs, it references this as a viable solution. It appears when we point the distribution to a single server in our production cluster, cloudfront properly caches our assets. Is it possible that the sticky sessions cookie and the subsequent header that gets added by it could be an issue? Cache-Control: no-cache="set-cookie" //Added by load balancer Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers: curl -I http://static.quick-cdn.com/css/9850999.css HTTP/1.0 200 OK Accept-Ranges: bytes Cache-Control: max-age=3700 Cache-Control: no-cache="set-cookie" Content-Length: 23038 Content-Type: text/css Date: Thu, 12 Apr 2012 23:03:52 GMT Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT Server: Apache/2.2.17 (Ubuntu) Vary: Accept-Encoding X-Cache: RefreshHit from cloudfront X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A== Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront) Connection: close

    Read the article

  • nginx hackery : change image file every X request

    - by Vangel
    Let me describe what I am trying to do first. I have a bunch of pictures in a directory called /images/*.(jpg|gif|png|blah blah). Now say these images are embedded in an HTML page, and I don't really care which image or where it's embedded. For every 10th request for the same picture file (if possible), or for any picture, I want to display a fixed image (e.g. trollface.jpg). That's it!

    I have searched around a bit, but I am not even sure what I am looking for. Rewrite might help, but then it's a permanent thing; this has got to do something with requests. I have heard perl scripts can be used with nginx. I can't write an nginx module (though I did bravely look up the docs and then gave up). Before you ask "But why don't you do it in the application, noob?" – this is a static-files-only server. The point is to not execute any binary at all.
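
    A configuration-only sketch of something close to this, using nginx's split_clients module: it serves the fixed image for roughly 10% of requests (hashed on time plus client address) rather than strictly every 10th request, which may or may not be close enough. The paths and the variable name are assumptions:

        http {
            # ~10% of requests get the substitute image; the rest fall through untouched
            split_clients "${msec}${remote_addr}" $troll {
                10%     "1";
                *       "";
            }

            server {
                location /images/ {
                    if ($troll) {
                        rewrite ^ /images/trollface.jpg break;
                    }
                }
            }
        }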

    Read the article

  • How do I count the times each number appears in columns of numbers?

    - by Andy C.
    I am sure this must be easy, but I am inexperienced. The best way to think of my problem is as trying to sort and then count lottery numbers. To stay simple, let's do a Pick 3 game and look at 10 drawings. I would split each drawn number into a separate column:

        DATE   BALL#1   BALL#2   BALL#3
        3/1    1        3        5
        3/2    3        7        8
        3/3    2        2        1
        3/4    5        7        6
        3/5    2        3        1
        3/6    0        5        9
        3/7    3        7        0
        3/8    6        8        4
        3/9    2        4        3
        3/10   7        1        2

    I would like to be able to build formulas into cells that would tell me how many times each number appeared overall, and how many times it appeared in each position. Like this (using the above example):

        Number   Overall Count   Ball#1 Count   Ball#2 Count   Ball#3 Count
        0        2               1              0              1
        1        4               1              1              2

    (That is, the number zero appears twice overall: once as the first number drawn, zero times as the middle ball, and once as the third ball. Likewise, the number 1 was drawn four times in our 10-day period: it was the first ball once, the second ball once and the third ball twice.) And so on.

    All help appreciated. I have access to Excel and Microsoft Works, or of course a Google Docs way to handle this would work too. Thanks for any help.
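
    A sketch of the formulas that do this in Excel or Google Sheets, assuming the drawn numbers sit in B2:D11 and the candidate number (0, 1, 2, ...) sits in column F starting at F2; the cell layout is an assumption, COUNTIF itself is the point:

        Overall count (all three positions):   =COUNTIF($B$2:$D$11, $F2)
        Ball#1 count:                          =COUNTIF($B$2:$B$11, $F2)
        Ball#2 count:                          =COUNTIF($C$2:$C$11, $F2)
        Ball#3 count:                          =COUNTIF($D$2:$D$11, $F2)

    Filling the four formulas down the rows for numbers 0 through 9 gives the whole summary table.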

    Read the article

  • nginx: Rewrite PHP does not work

    - by Ton Hoekstra
    I have a suffix proxy installed and I'm using the following rewrite, with wildcard subdomain DNS enabled:

        location / {
            if (!-e $request_filename) {
                rewrite ^(.*)$ /index.php last;
                break;
            }
        }

    My suffix proxy has the following URL format: (subdomain and/or domain + domain extension to proxy).proxy.org/(request-uri to proxy). I have this PHP code in my index.php:

        if (preg_match('#([\w\.-]+)\.example\.com(.+)#', $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'], $match)) {
            header('Location: http://example.com/browse.php?u=http://'.$match[1].$match[2]);
            die;
        }

    But when a page with a .php extension is requested, I get a 404 not found error:

        http://www.php.net.proxy.org/docs.php - HTTP/1.1 404 Not Found
        http://www.utexas.edu.proxy.org/learn/php/ex3.php - HTTP/1.1 404 Not Found

    But everything else is working (also index.php is working):

        http://php.net.proxy.org/index.php - HTTP/1.1 200 OK
        http://www.php-scripts.com.proxy.org/php_diary/example2.php3 - HTTP/1.1 200 OK
        http://www.utexas.edu.proxy.org/learn/php/ex3.phps - HTTP/1.1 200 OK
        http://www.w3schools.com.proxy.org/html/default.asp - HTTP/1.1 200 OK

    Does somebody have an answer? I don't know why it's not working; on Apache it works fine. Thanks in advance.

    Update: I've removed the location block and now it's working perfectly:

        if (!-e $request_filename) {
            rewrite ^(.*)$ /index.php last;
            break;
        }
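
    For reference, the more idiomatic nginx form of this fallback, as a sketch that avoids the if/rewrite combination; it assumes every request that does not match an existing file should land on /index.php:

        location / {
            try_files $uri $uri/ /index.php;
        }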

    Read the article

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use Ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up front, which then seems to be tried for all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:

    - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
    - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required." (emphasis mine)
    - this Security StackExchange question which takes it as read that NOPASSWD is required
    - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
    - the article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't". My thoughts exactly, but then how to extend beyond a single server?
    - Ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."

    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing the password across hosts, or enabling root SSH login all seem like rather drastic reductions in security.
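
    One pattern worth mentioning, as a sketch rather than a recommendation: per-host variable files can carry a per-host sudo password, assuming an Ansible version that recognizes the ansible_sudo_pass inventory/host variable; ideally the files would be encrypted with ansible-vault rather than stored in the clear.

        # host_vars/host1.yml   (one file per host; e.g. `ansible-vault edit host_vars/host1.yml`)
        ansible_sudo_pass: "the-sudo-password-for-host1"

        # host_vars/host2.yml
        ansible_sudo_pass: "a-different-password-for-host2"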

    Read the article

  • How could I embed formatted XML source in WORD documents?

    - by eckes
    I'm writing documentation in Word that contains XML source code (whole files) as examples. The way I'm currently embedding the XML is quite cumbersome and doesn't seem very maintainable:

    1. I finish editing the document in Word and create a PDF from it using Acrobat.
    2. Next, I open my XML files (2x input files, 2x generated output files) with IE and print them with the PDF printer supplied by Acrobat.
    3. Then I open Acrobat Pro and attach the four XML PDFs to my original document.

    The problem with this workflow, for me, is that it involves too much manual labor to get the documentation done. What I've tried up to now is not really satisfying:

    - converting each XML to PDF and appending them as described above
    - opening the XML files with SciTE, copying as RTF and pasting into Word
    - playing around with the LaTeX packages minted, pygments and listings (I could write the docs with LaTeX too), but I found some unsolvable problems in each of these packages

    I'm searching for a way to produce my documentation more automatically – for example, embedding the XML files including IE's formatting (which I find quite readable). The files should be included by reference, so that I don't have to paste the XML sources manually every time the XML changes.

    Read the article

  • RAID 6 that can read at least 1000 Mbit/s?

    - by Diblo Dk
    I purchased a Dell PERC 6/i which I expected to be able to read at 1000 Mbps. There is not much to do about it now, but there are some things I wanted to understand for another time.

    I have configured it with four 2 TB drives in RAID 6. It has 256 MB of RAM and a transfer rate of 300 Mbps. The benchmark test showed:

        Min read rate: 136.3 Mbps
        Max read rate: 329.6 Mbps
        Avg read rate: 242.2 Mbps

    What could I have done to get at least 1000 Mbps? Is it normal for internal and external RAID controllers to have a lower transfer rate, e.g. 300 Mbps? (I did not notice at the time that it was not 3 Gbps.) How would a RAID 10 have performed compared to RAID 6 or 5? Would it have been better to use software RAID (Linux) with the internal 3 Gbps SATA controller?

    UPDATE: The drives are SATA III, 6 Gbps:
    http://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/desktop-hdd-data-sheet-ds1770-1-1212us.pdf (2 TB)

    Read the article
