Search Results

Search found 21235 results on 850 pages for 'www'.


  • fan adapter required for 2 controllers

    - by spy
    Good day. I would like to ask if I can connect the same fan to two fan controllers at the same time, and if so, how do I go about it? I bought http://www.ebay.co.uk/itm/121255084003?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1497.l2649, which has a longer cable and sits on my table, and http://www.ebay.co.uk/itm/111365380971?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1497.l2649, which sits in the tower facing away from me.

    Read the article

  • TIF opens via IE, but not within a page

    - by tsilb
    I have a picture: http://www.kconnolly.net/Gallery/Panoramas/Mountainscapes/DSCN2049_stitch_sm.tif It loads fine when I paste the URL into my address bar, but when I embed it within a page I get a broken image icon, and all browsers treat it as a download. Conversely, this image, which was made the same way, works fine when I embed it within a page; browsers treat it as a document rather than a download: http://www.kconnolly.net/Gallery/Panoramas/Mountainscapes/DSCN2060_stitch_sm.tif Why is that?
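
    A useful first step is to compare the response headers the server sends for the two files; the Content-Type and Content-Disposition headers usually decide whether a browser renders a file inline or offers it as a download. A minimal check, assuming curl is available:

        curl -I http://www.kconnolly.net/Gallery/Panoramas/Mountainscapes/DSCN2049_stitch_sm.tif
        curl -I http://www.kconnolly.net/Gallery/Panoramas/Mountainscapes/DSCN2060_stitch_sm.tif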

    Read the article

  • How to change the Nginx default folder?

    - by Ido Bukin
    I set up a server with Nginx and set my public_html to /home/user/public_html/website.com/public, but it always redirects to /usr/local/nginx/html/. How can I change this? Nginx.conf - user www-data www-data; worker_processes 4; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 5; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include /usr/local/nginx/sites-enabled/*; } /usr/local/nginx/sites-enabled/default - server { listen 80; server_name localhost; location / { root html; index index.php index.html index.htm; } # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } /usr/local/nginx/sites-available/website.com - server { listen 80; server_name website.com; rewrite ^/(.*) http://www.website.com/$1 permanent; } server { listen 80; server_name www.website.com; access_log /home/user/public_html/website.com/log/access.log; error_log /home/user/public_html/website.com/log/error.log; location / { root /home/user/public_html/website.com/public/; index index.php index.html; } # pass the PHP scripts to FastCGI server listening on # 127.0.0.1:9000 location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /usr/local/nginx/conf/fastcgi_params; fastcgi_param SCRIPT_FILENAME /home/user/public_html/website.com/public/$fastcgi_script_name; } } The error message I get is: Fatal error: require_once() [function.require]: Failed opening required '/usr/local/nginx/html/202-config/functions.php'. The server tries to find the file in the Nginx folder and not in my public_html.
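
    One thing visible in the config above: the http block only includes /usr/local/nginx/sites-enabled/*, while the website.com server blocks live in sites-available, so requests can fall through to the default server whose root is the compiled-in html directory. If that file is not yet linked into sites-enabled, a minimal sketch of enabling it would be the following (the nginx binary path assumes the default build prefix shown in the config and may differ):

        ln -s /usr/local/nginx/sites-available/website.com /usr/local/nginx/sites-enabled/website.com
        /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload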

    Read the article

  • two different virtual hosts, one page displayed

    - by majdal
    Hello! I have two different sites configured using virtual hosts (the content of the virtualhost files is posted below) i just copied the default file and edited a few lines... When i direct my browser to either of the two sites, only the content of the first of the two appears... Why? <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/hunterprojects.com/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/hunterprojects.com/public_html> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> AND THE SECOND ONE: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /var/www/dodolabarchive.ca/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/dodolabarchive.ca/public_html> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost>
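
    Worth noting from the two files above: neither <VirtualHost *:80> block declares a ServerName, and Apache's name-based virtual hosting uses ServerName/ServerAlias to decide which block handles a given Host header; when nothing matches, the first vhost loaded answers for everything. A minimal sketch of what each block might declare - the hostnames are inferred from the DocumentRoot paths shown, so treat them as placeholders, and the rest of each block (Directory, cgi-bin, logging) would stay as in the originals. On Apache 2.2 it is also worth confirming that NameVirtualHost *:80 is declared somewhere (on Debian/Ubuntu it usually is, in ports.conf):

        <VirtualHost *:80>
            ServerName hunterprojects.com
            ServerAlias www.hunterprojects.com
            DocumentRoot /var/www/hunterprojects.com/public_html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName dodolabarchive.ca
            ServerAlias www.dodolabarchive.ca
            DocumentRoot /var/www/dodolabarchive.ca/public_html
        </VirtualHost>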

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances: master.ourdomain.com - the file syncing "hub" of the webservers www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load I'm using lsyncd to keep all of the webservers in sync, and for the most part, it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving,though. It occurs under these circumstances: When changes are made on master (perhaps after we've pushed new code), while some of the redundant webservers are sleeping And then a sleeping webserver wakes-up to absorb load Under that circumstance, I would like the following to happen: First, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up-to-date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, when lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. Thus, before lsyncd starts, I'd like to be able to synchronize the webservers code against master's, perhaps by running a simple one-way rsync against the two machines. We're running lsyncd v.2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this: settings = { logfile = "/home/user/log/lsyncd/log.txt", statusFile = "/home/user/log/lsyncd/status.txt", maxProcesses = 2, nodaemon = false, } bash = { onStartup = "rsync [email protected]:/home/user/www /home/user/www" } sync{ default.rsyncssh, source="/home/user/www/", host="[email protected]", targetdir="/home/user/www/", rsyncOpts="-ltus", excludeFrom="/home/user/conf/lsyncd/exclude" } (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
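
    As for sequencing the one-way catch-up sync before lsyncd starts pushing changes, one approach that sidesteps lsyncd's bash hooks entirely is to wrap the two steps in whatever startup script brings the webserver online: run a blocking rsync pull from master first, and only start the lsyncd daemon once it succeeds. A sketch using the host and paths from the config above - the script location, lsyncd config path, and the use of --delete (which makes master authoritative at startup) are assumptions to adapt:

        #!/bin/bash
        # /usr/local/bin/start-sync.sh (hypothetical) - run at boot on www1/www2/www3

        # 1. one-way catch-up: pull master's current code before pushing anything back
        rsync -az --delete -e ssh [email protected]:/home/user/www/ /home/user/www/ || exit 1

        # 2. only now start the two-way lsyncd daemon
        exec lsyncd /etc/lsyncd/lsyncd.conf.lua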

    Read the article

  • In CPanel, cron job is not being executed and not sending any mail

    - by Abhishek
    Though many of us have asked questions related to cron jobs, let me ask mine... I want to execute a PHP script periodically. As a cron command I'm using: php -q http://www.example.com/cron.php?action=getA I also tried this one: php -q /home/myuser/www.example.com/cron.php?action=getA It is not getting executed and not sending any mail. I set the mail ID to my Gmail ID. What am I doing wrong?
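
    For context on why neither command runs: the PHP CLI binary expects a filesystem path, not a URL, and it cannot take a query string appended to the path; arguments reach a CLI script through $argv instead, or you fetch the URL with something like wget so $_GET still works. A minimal crontab sketch along those lines - the schedule, PHP binary path, and log file are placeholders:

        # run every 15 minutes; the script reads the action from $argv[1] instead of $_GET
        */15 * * * * /usr/bin/php -q /home/myuser/www.example.com/cron.php getA >> /home/myuser/cron.log 2>&1

        # or, if the script must be hit over HTTP so $_GET['action'] keeps working:
        */15 * * * * wget -q -O /dev/null "http://www.example.com/cron.php?action=getA"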

    Read the article

  • Cisco ASA5505 8.2 Multiple Outside IP to Multiple Inside IP

    - by GriffJ
    Trying to setup ASA5505. Semi working but having issues with accessing services from the outside. ASA5505 Basic License, Version 8.2. (plus upgrade to unlimited inside hosts). Alert: I'm a Cisco Noob. 321.321.39.X is a place holder for privacy. I came up with this config and tested it tonight. ASA Version 8.2(1) ! hostname <removed> domain-name <removed> enable password <removed> encrypted passwd <removed> encrypted names ! interface Vlan1 nameif inside security-level 100 ip address 172.21.36.1 255.255.252.0 ! interface Vlan2 nameif outside security-level 0 ip address 321.321.39.10 255.255.255.248 ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! ftp mode passive dns server-group DefaultDNS domain-name <removed> access-list outside_inbound extended permit tcp any host 321.321.39.10 eq pptp access-list outside_inbound extended permit tcp any host 321.321.39.11 eq https access-list outside_inbound extended permit tcp any host 321.321.39.11 eq 993 access-list outside_inbound extended permit tcp any host 321.321.39.11 eq smtp access-list outside_inbound extended permit tcp any host 321.321.39.11 eq 1001 access-list outside_inbound extended permit tcp any host 321.321.39.11 eq 465 access-list outside_inbound extended permit tcp any host 321.321.39.11 eq domain access-list outside_inbound extended permit udp any eq domain host 321.321.39.11 eq domain access-list outside_inbound extended permit tcp any host 321.321.39.12 eq www access-list outside_inbound extended permit tcp any host 321.321.39.12 eq https access-list outside_inbound extended permit tcp any host 321.321.39.13 eq www access-list outside_inbound extended permit tcp any host 321.321.39.13 eq https access-list outside_inbound extended permit icmp any any echo-reply access-list outside_inbound extended permit icmp any any source-quench access-list outside_inbound extended permit icmp any any unreachable access-list outside_inbound extended permit icmp any any time-exceeded access-list outside_inbound extended permit icmp any any traceroute access-list outside_inbound extended permit icmp any any echo pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 2 321.321.39.11-321.321.39.14 netmask 255.255.255.248 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface pptp 172.21.37.20 pptp netmask 255.255.255.255 static (inside,outside) 321.321.39.11 172.21.37.14 netmask 255.255.255.255 static (inside,outside) 321.321.39.12 172.21.37.24 netmask 255.255.255.255 static (inside,outside) 321.321.39.13 172.21.37.17 netmask 255.255.255.255 access-group outside_inbound in interface outside route outside 0.0.0.0 0.0.0.0 321.321.39.9 1 route inside 192.168.15.0 255.255.255.0 172.21.36.52 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 172.21.36.0 255.255.252.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication 
linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 telnet 172.21.36.0 255.255.252.0 inside telnet timeout 60 ssh timeout 5 console timeout 0 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect pptp inspect ipsec-pass-thru inspect http ! service-policy global_policy global prompt hostname context The servers that had static forwards did not have any outside network access. couldn't ping google.com for instance. mail server couldn't Domain POP the Barracuda spam filter from our ISP etc. So after doing some reading I removed the statics for 172.21.37.11, 12 and 13, and replaced those three with what's below.. static (inside,outside) tcp 321.321.39.11 https 172.21.37.14 https netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 993 172.21.37.14 993 netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 smtp 172.21.37.14 smtp netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 1001 172.21.37.14 1001 netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 465 172.21.37.14 465 netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.11 domain 172.21.37.14 domain netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.12 www 172.21.37.24 www netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.12 https 172.21.37.24 https netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.13 www 172.21.37.17 www netmask 255.255.255.255 static (inside,outside) tcp 321.321.39.13 https 172.21.37.17 https netmask 255.255.255.255 Now the servers (for instance 172.21.37.14) could ping the outside world again. Mail started flowing (Domain POP was successful) etc. etc. But I forgot to check if webmail worked from the outside admittedly. But the webservers at 172.21.37.17 and 172.21.37.24 still didn't respond from the outside world. Although I was able to PPTP VPN in on 321.321.39.10 (interface) which is the outside interface IP address. and it is static mapped to 172.21.37.20. So I'm thinking there must be something wrong with NAT somewhere? no response from 321.321.39.11 to 321.321.39.14.. Could anyone look over the config and please let me know what I've done wrong? Is there something I've missed? well obviously but.. please help! Thank you.

    Read the article

  • bind named server - forward some requests to other servers

    - by Pentium100
    Is there a way to make bind answer some queries but forward all other queries (for the same domain) to another server, as in: example.com A 127.0.0.1 www.example.com A 127.0.0.1 everything not on this list (example.com MX, ftp.example.com A, etc.) - ask 192.168.0.1 (another DNS server). Essentially I want to intercept some (but not all) queries going to (in this example) 192.168.0.1 and answer them myself. example.com A - intercept; www.example.com - intercept; example.com MX - pass through; ftp.example.com - pass through
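
    For the per-hostname part of this, one common BIND arrangement is to declare a small authoritative zone for each name you want to intercept and a forward zone for the rest of the domain. Note this cannot split record types at the same name (example.com A locally but example.com MX forwarded), because a zone answers authoritatively for all types at the names it owns. A minimal named.conf sketch, with hypothetical file paths:

        // intercepted name: served from a local zone file containing the A record you want
        zone "www.example.com" {
            type master;
            file "/etc/bind/db.www.example.com";
        };

        // everything else under example.com is forwarded to the other server
        zone "example.com" {
            type forward;
            forward only;
            forwarders { 192.168.0.1; };
        };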

    Read the article

  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    The error log shows: *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream. My server block: server { listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default ipv6only=on; ## listen for ipv6 server_name .site.com; root /var/www/site; error_page 404 /404.php; access_log /var/log/nginx/site.access.log; index index.html index.php; if ($http_host != "www.site.com") { rewrite ^ http://www.site.com$request_uri permanent; } location ~* \.php$ { fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_read_timeout 240; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } location ~ /\. { access_log off; log_not_found off; deny all; } location ~ /(libraries|setup/frames|setup/libs) { deny all; return 404; } location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ { alias /var/www/site/images/missing.gif; #i need to modify this to show only missing files. right now it is showing missing for all the files. } location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { access_log off; expires 20d; } location /user_uploads/ { location ~ .*\.(php)?$ { deny all; } } location ~ /\.ht { deny all; } } The php-fpm config is the default and has not been touched. The problem is a little strange to me: error pages show "File not found" only for .php files, while other missing files correctly call the 404.php page - site.com/test calls 404.php, but site.com/test.php returns "File not found". I keep searching and making changes, but it hasn't solved the problem.
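
    On the symptom described at the end - URLs for missing .php files being handed to PHP-FPM and coming back as "File not found" instead of the 404 page - one commonly used arrangement is to let nginx itself reject scripts that do not exist on disk before they reach FastCGI, and to let error_page apply to FastCGI responses. A sketch of the two directives involved, meant to be merged into the existing php location rather than taken as a drop-in fix:

        location ~* \.php$ {
            try_files $uri =404;            # nginx returns 404 for .php files that do not exist on disk
            fastcgi_intercept_errors on;    # lets error_page 404 /404.php handle FastCGI error statuses
            # ... existing fastcgi_* directives unchanged ...
        }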

    Read the article

  • User-started daemon .pid Permission denied

    - by kornnflake
    Trying to start a unicorn daemon as a non-root user but failing hard. Unicorn gives the following error: directory for pid=/var/run/sinatra_test/sinatra_test.pid not writable So I ran the following: sudo mkdir /var/run/sinatra_test sudo chown ruby:www-data /var/run/sinatra_test sudo chmod g+w /var/run/sinatra_test ls -ld /var/run/sinatra_test returns: drwxrwxr-x 2 ruby www-data 60 Oct 27 09:55 /var/run/sinatra_test What am I missing? I am still getting Permission denied errors.
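
    A quick way to narrow this down is to check which user the Unicorn process actually runs as, and whether that user can create the pid file by hand; if the daemon is started as someone other than ruby (or a user outside the www-data group), the group-write bit above will not help. A minimal check - the ruby account name is taken from the chown command above:

        ps aux | grep [u]nicorn                                      # who is the unicorn process running as?
        sudo -u ruby touch /var/run/sinatra_test/sinatra_test.pid    # can that user create the pid file?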

    Read the article

  • NFC SD card available in stores anywhere?

    - by Joqn
    In recent years, several companies announced SD cards adding NFC communication to smartphones: http://www.engadget.com/2010/03/16/first-data-and-tyfone-announce-partnership-for-nfc-payments-by-m/ http://www.digitimes.com/supply_chain_window/story.asp?datepublish=2011/06/27&pages=PR&seq=200 However, I cannot find any online store that would sell such a card. Any ideas, or did they all not make it to market?

    Read the article

  • nginx and awstats and 404's

    - by user66700
    I'm running awstats against a server stats log from another machine. nginx.conf: http://pastebin.com/buKAdfdd cgi-bin.conf: http://pastebin.com/qbbJ1rwK But I'm getting a 404 when I visit http://domain2.com/awstats/awstats.domain.com.html (and the 23 files are there in /var/www/awstats/). I was following this guide, but I'm not sure how to resolve the 404s: http://www.goitworld.com/analysis-access-logs-of-nginx-by-awstats/
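
    Since the linked configs are not visible here, as a point of comparison: when the reports are pre-generated static HTML in /var/www/awstats/ (as the question suggests), a plain static location is enough to serve them. This sketch assumes that layout and is not a reconstruction of the pastebin config:

        location /awstats/ {
            alias /var/www/awstats/;                  # directory holding the generated *.html reports
            index awstats.domain.com.html;
        }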

    Read the article

  • How can I log in with my Google OpenID

    - by compie
    I'm trying to log in to www.refactormycode.com with my Google OpenID, but the site keeps saying: "Sorry, the OpenID server couldn't be found" What am I doing wrong? I have tried https://www.google.com/accounts/o8/id and http://google.com/profiles/me

    Read the article

  • Create a redirect for IIS7

    - by justSteve
    When I created my (Server 2008/IIS7) server's SSL certificate I mistakenly defined the common name as 'myDomain.com' instead of 'www.myDomain.com', meaning that https://www.myDomain.com comes up untrusted. I understand that I can create a server redirect to correct the problem, but I don't see where/how to do that from IIS's server manager. Thanks.
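
    If the goal is to send visitors from www.myDomain.com to the name the certificate actually covers, one way on IIS7 is a redirect rule in web.config via the URL Rewrite module (a separate install, so this assumes it is present). Note that an HTTPS request to the www name will still show the certificate warning before the redirect is ever seen; the rule mainly helps plain-HTTP visitors land on the covered name. A minimal sketch:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- redirect the www host to the hostname in the certificate's common name -->
              <rule name="Send www to certificate name" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^www\.mydomain\.com$" />
                </conditions>
                <action type="Redirect" url="https://mydomain.com/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>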

    Read the article

  • Help needed - subdomains

    - by user205296
    Hi, I have a subdomain, http://arun.rocks.com, and a page at www.rocks.com/projects/main.php/. I want the subdomain to always redirect to www.rocks.com/projects/main.php/. How do I do this? Kindly help.
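
    The question does not say which web server hosts arun.rocks.com; assuming it is Apache with the stock mod_alias module, a single directive in that subdomain's vhost (or an .htaccess in its document root) is one way to express the "always redirect" intent:

        # send every request on arun.rocks.com to the fixed landing page
        RedirectMatch 301 ^.*$ http://www.rocks.com/projects/main.php/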

    Read the article

  • Trying to get DNS services running on Windows Server 2008 R2, what am I getting wrong?

    - by LaserBeak
    Ok, So I am basically trying to get a home server pc up that will provide Domain name services, act as Mail server and web server. I have one static IP, well it's not officially static but hasn't changed in two years so I'll call it static. I have done the following: Configured router NAT/virtual port forward UDP/TCP port 53 to the internal IP of my server 192.168.1.16, in adapter settings specified the manual settings: 192.168.1.16 IP, gateway 192.168.1.1, Subnet: 255.255.255.0 and loopback DNS: 127.0.0.1 Using my public my public IP Checked using http://www.canyouseeme.org/ that port 53 is open and is not being blocked by my ISP. It can see services on this port. Registered Domain name (mydomain.com.au) Updated whois database through the domain registrars site and registered NameServer names: ns0.mydomain.com.au and ns2.mydomain.com.au, both have been associated with my single public IP. (Waited 24 hours) Update the nameserver for mydomain.com.au: primary ns0.mydomain.com.au secondary: ns2.mydomain.com.au (waited 24+ hours) Installed Server 2008 R2, install web server role and DNS role. Webserver works when I enter my public IP into browser of any PC/mobile, get IIS7 welcome page. In DNS server: Created new forward lookup zone: ; ; Database file mydoman.com.au.dns for mydomain.com.au zone. ; Zone version: 10 ; @ IN SOA mydomain.com.au. mydomain.testdomain.com. ( 10 ; serial number 900 ; refresh 600 ; retry 86400 ; expire 3600 ) ; default TTL ; ; Zone NS records ; @ NS ns0.mydomain.com.au. @ NS ns1.mydomain.com.au. ; ; Zone records ; @ A 192.168.1.16 www A 192.168.1.16 The Domain name services will however not work, the whois database updated with ns0.mydomain.com.au etc. but when I type in my site name www.mydomain.com.au from an external machine it will not open site and I can't even ping it (Can't find host) When I check the ns0.mydomain.com.au NS record using a tool Like: http://www.squish.net/dnscheck/ I get: Security: Server ns0.mydomain.com.au (XXX.XXX.XXX.XX <- my public IP) is recursive Domain exists but there is no such record Any ideas, thanks...
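
    Two things stand out in the zone above, independent of the registrar delegation: the A records publish the private 192.168.1.16 address, which outside clients cannot reach (an externally visible zone normally carries the public IP, with the router forwarding ports 80/443 inward), and the zone's NS records list ns0/ns1 while the registrar entries mention ns0/ns2, which is worth reconciling. A couple of quick checks from a machine outside the LAN - nslookup is assumed since it ships with Windows:

        rem does the public delegation actually reach your nameservers?
        nslookup -type=NS mydomain.com.au

        rem what A record does your own server hand out for the web name?
        nslookup www.mydomain.com.au ns0.mydomain.com.au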

    Read the article

  • Localhost has just stopped working (using xampp)

    - by Joe Taylor
    I installed Xampp to use for local development of a Drupal site. Its been working fine out of the box until now. The main Xampp localhost welcome menu loads, however my subdirectory (localhost/drupal) doesn't. It just spins in the browser for ages and nothing happens. Just a blank screen. I've tried the edit people suggest in the hosts file but that hasn't work and I'm getting no errors so not sure what to do. Anyone have any ideas what might be wrong? PS I'm running Windows 7 edit: Log files: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41 Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41 [Tue Nov 05 20:52:07.242454 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:07.331459 2013] [core:warn] [pid 8432:tid 260] AH00098: pid file C:/xampp/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run? [Tue Nov 05 20:52:07.820487 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00455: Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 configured -- resuming normal operations [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00456: Server built: Feb 23 2013 13:07:34 [Tue Nov 05 20:52:07.898492 2013] [core:notice] [pid 8432:tid 260] AH00094: Command line: 'c:\xampp\apache\bin\httpd.exe -d C:/xampp/apache' [Tue Nov 05 20:52:07.905492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00418: Parent: Created child process 7588 [Tue Nov 05 20:52:08.882548 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:09.467582 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:09.534585 2013] [mpm_winnt:notice] [pid 7588:tid 272] AH00354: Child: Starting 150 worker threads. Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41 Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
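
    The log excerpt itself points at the immediate failure: PHP is hitting its 128 MB memory_limit (134217728 bytes) while rendering node--job.tpl.php, and a fatal error with display_errors off shows up as exactly this kind of blank, spinning page. One way to test that theory is to raise the limit in php.ini and restart Apache from the XAMPP control panel - the path below is the XAMPP default and may differ - though a single allocation of ~118 MB also suggests line 41 of that template deserves a look:

        ; C:\xampp\php\php.ini
        memory_limit = 256M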

    Read the article

  • Varnish server in front of nginx server with multiple virtualhosts

    - by Garreth 00
    I have tried to search for a solution for this, but can't find any documentation/tips on my specific setup. My setup: Backendserver: ngnix: 2 different websites (2 top domains) in virtualenv, running gunicorn/python/django Backendserver hardware(VPS) 2gb ram, 8 CPU Databaseserver: postgresql - pg_bouncer Backendserver hardware (VPS) 1gb ram, 8 CPU Varnishserver: only running varnish Varnishserver hardware (VPS) 1gb ram, 8 CPU I'm trying to set up a varnish server to handle rare spike in traffic (20 000 unique req/s) The spike happens when a tv program mention one of the sites. What do I need to do, to make the varnish server cache both sites/domains on my backendserver? My /etc/varnish/default.vcl : backend django_backend { .host = "local.backendserver.com"; .port = "8080"; } My /usr/local/nginx/site-avaible/domain1.com upstream gunicorn_domain1 { server unix:/home/<USER>/.virtualenvs/<DOMAIN1>/<APP1>/run/gunicorn.sock fail_timeout=0; } server { listen 80; listen 8080; server_name domain1.com; rewrite ^ http://www.domains.com$request_uri? permanent; } server { listen 80 default_server; listen 8080; client_max_body_size 4G; server_name www.domain1.com; keepalive_timeout 5; # path for static files root /home/<USER>/<APP>-media/; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (!-f $request_filename) { proxy_pass http://gunicorn_domain1; break; } } } My /usr/local/nginx/site-avaible/domain2.com upstream gunicorn_domain2 { server unix:/home/<USER>/.virtualenvs/<DOMAIN2>/<APP2>/run/gunicorn.sock fail_timeout=0; } server { listen 80; listen 8080; server_name domain2.com; rewrite ^ http://www.domains.com$request_uri? permanent; } server { listen 80; listen 8080; client_max_body_size 4G; server_name www.domain2.com; keepalive_timeout 5; # path for static files root /home/<USER>/<APP>-media/; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (!-f $request_filename) { proxy_pass http://gunicorn_domain2; break; } } } Right now, If I try the Ip of the varnishserver I only get served domain1.com. Would everything be correct if I change the DNS of the two domain to point to the varnishserver, or is there extra setup before it would work? Question 2: Do I need a dedicated server for varnish, or could I just install varnish on my backendserver, or would the server run out of memory quick?
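
    On the first question - getting both domains served through one Varnish - the backend definition above already points at the nginx box, and since nginx picks the site from the Host header, Varnish mostly needs to receive requests under both hostnames (i.e. both domains' DNS pointing at the Varnish IP) and leave the Host header alone, which the default vcl_recv does. Hitting the Varnish box by bare IP sends a Host header that matches neither site, so nginx answers with its default_server block, which is why only domain1 appears. A minimal VCL sketch, assuming Varnish 3 syntax since the version is not stated:

        sub vcl_recv {
            # one backend serves both hostnames; Varnish forwards the client's Host header untouched
            set req.backend = django_backend;
        }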

    Read the article

  • Monit checking URL follow redirects

    - by beck
    I am looking to use monit to keep an eye on my site. I want it to treat the site like an external user, so I am testing the URL, but monit doesn't seem to follow redirects: the content check is being performed on the HTML of the redirect itself. #request works: if failed url http://www.sharelatex.com/blog/posts/future.html content == "301" #request fails if failed url http://www.sharelatex.com/blog/posts/future.html content == "actual content" Finding out how to get the url check to follow 30X would be great.
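
    Whatever the monit-side answer turns out to be, it helps to see exactly where the 301 points, since checking the final URL directly sidesteps the redirect entirely. curl can show the headers of each hop (assuming curl is installed):

        curl -sIL http://www.sharelatex.com/blog/posts/future.html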

    Read the article

  • Autounattend.xml not being recognized in VirtualBox

    - by beagle
    I am working my way through the steps on this page to prepare an unattended installation of Windows 7 Enterprise x64 for purposes of a college assignment which simply requires the process to be carried out and documented. Both the "technician" and "reference" computers are virtual machines created in VirtualBox 4.3.12, as will be the destination computer. I seem to have successfully completed Step 1, building an Autounattend.xml answer file using Windows System Image Manager, in as far as the answer file validates successfully. The problem arises when I try to install Windows on the reference machine from the DVD image in conjunction with the Autounattend file on a USB drive. I have tried a couple of different USB devices, and the devices themselves seem to be recognized, but the answer file does not, as instead of taking the configuration settings from the file the user interface appears as in a manual installation. Has anyone come across this problem or a solution? The xml created by Windows SIM is below for reference in case the problem is with the file itself. <?xml version="1.0" encoding="utf-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="oobeSystem"> <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Reseal> <Mode>Audit</Mode> </Reseal> </component> <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <OOBE> <HideEULAPage>true</HideEULAPage> <ProtectYourPC>3</ProtectYourPC> </OOBE> </component> </settings> <settings pass="windowsPE"> <component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <SetupUILanguage> <UILanguage>en-IE</UILanguage> </SetupUILanguage> <InputLocale>en-IE</InputLocale> <SystemLocale>en-IE</SystemLocale> <UILanguage>en-IE</UILanguage> <UserLocale>en-IE</UserLocale> </component> <component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <DiskConfiguration> <Disk wcm:action="add"> <CreatePartitions> <CreatePartition wcm:action="add"> <Order>1</Order> <Size>300</Size> <Type>Primary</Type> </CreatePartition> <CreatePartition wcm:action="add"> <Order>2</Order> <Extend>true</Extend> <Type>Primary</Type> </CreatePartition> </CreatePartitions> <ModifyPartitions> <ModifyPartition wcm:action="add"> <Active>true</Active> <Format>NTFS</Format> <Label>System</Label> <Order>1</Order> <PartitionID>1</PartitionID> </ModifyPartition> <ModifyPartition wcm:action="add"> <Format>NTFS</Format> <Label>Windows</Label> <Order>2</Order> <PartitionID>2</PartitionID> </ModifyPartition> </ModifyPartitions> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> </Disk> <WillShowUI>OnError</WillShowUI> </DiskConfiguration> <ImageInstall> <OSImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>2</PartitionID> 
</InstallTo> <InstallToAvailablePartition>false</InstallToAvailablePartition> <WillShowUI>OnError</WillShowUI> </OSImage> </ImageInstall> <UserData> <ProductKey> <WillShowUI>OnError</WillShowUI> </ProductKey> <AcceptEula>true</AcceptEula> </UserData> </component> </settings> <settings pass="specialize"> <component name="Microsoft-Windows-IE-InternetExplorer" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Home_Page>http://www.example.com</Home_Page> </component> </settings> <cpi:offlineImage cpi:source="wim://technician/users/user/desktop/install.wim#Windows 7 ENTERPRISE" xmlns:cpi="urn:schemas-microsoft-com:cpi" />

    Read the article

  • Apache to Nginx rewrite to a directory confusion

    - by Robin
    I could use a little help with rewrite and nginx... Basically the structure of my app looks like this: Headdirectory -- APPBase - SomeMoreStuff - WWWDirectory - .htaccess So I need to redirect into the WWWDirectory when I open the Headdirectory. In Apache it's done with an .htaccess containing: RewriteEngine On RewriteRule ^(.*) www/$1 I already tried in Nginx: location /Headdirectory { rewrite ^/(.*) /www/$1; } And I tried to create an alias, but that didn't work... Would be nice if someone could help me out. Have a nice day.
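
    Since the Apache rule simply maps everything under the head directory into its www subdirectory, the usual nginx equivalent is not a rewrite at all but pointing root (or alias) straight at that subdirectory. A minimal sketch - the filesystem path and server_name are placeholders, and the lower-case www directory name is taken from the Apache rule above:

        server {
            listen 80;
            server_name example.org;                   # placeholder
            root /path/to/Headdirectory/www;           # serve the www subdirectory at /
            index index.html index.php;

            # or, to keep a /Headdirectory URL prefix instead of changing root:
            # location /Headdirectory/ { alias /path/to/Headdirectory/www/; }
        }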

    Read the article

  • Allow SFTP to a single folder not in home directory

    - by Brandon
    I have a web server that I use to host my websites, all of them in folders under /srv/www. There is a WordPress site to which I want to give another developer SFTP access; how would I go about doing this so that they only have access to /srv/www/thesite.com and not any other directories? Running Ubuntu 9.04.
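
    The stock OpenSSH on Ubuntu of that era already supports chrooted, SFTP-only accounts, which is the usual way to scope a user to one tree. A sketch of the sshd_config side, assuming a dedicated account (devuser is a placeholder); note the chroot directory itself must be owned by root and not writable by the user, so write access is normally granted on a subdirectory inside it (e.g. wp-content):

        # /etc/ssh/sshd_config
        Subsystem sftp internal-sftp

        Match User devuser
            ChrootDirectory /srv/www/thesite.com
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no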

    Read the article

  • nginx location rewrite but proxy_rewrite off

    - by Jan
    I'm trying to use nginx for proxying requests to my internal backend. My configuration reads as follows: location /Shibboleth.sso { proxy_pass internal-backend; # ip proxy_redirect off; } But my redirects are always rewritten: my backend returns a response like https://www.google.de/test and my browser receives https://www.mydomain.de/test. How do I get nginx to just forward the response?
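
    With proxy_redirect off, nginx itself should not touch the Location header, so it is worth checking whether the backend is the one generating the public hostname (for example from the Host header nginx forwards to it). Comparing the header straight from the backend with what arrives through nginx makes that visible; the backend address is a placeholder since it is redacted above:

        # through nginx
        curl -skI https://www.mydomain.de/Shibboleth.sso | grep -i '^location'
        # straight from the backend (placeholder IP)
        curl -skI https://BACKEND-IP/Shibboleth.sso | grep -i '^location'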

    Read the article
