Search Results

Search found 56549 results on 2262 pages for 'zeynep koch(at)oracle com'.

  • Is .htaccess slowing down my dedicated server?

    - by David Robles
    First of all, I consider myself more of a programmer than a server guy. I have a website that receives about 3,000 visits per day, which I think is well below the capacity of a dedicated server. However, I've noticed that the connection to the website is pretty slow, e.g., when loading images or connecting to it via SSH. I recently configured .htaccess to prevent hotlinking of images on my server (i.e. .jpg, .gif and .png), and I was wondering if that could be slowing down my website. This is the configuration that I have:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteEngine on
        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com/.*$ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www.mysite.com$ [NC]
        RewriteRule .*\.(jpg|jpeg|gif|png|bmp|swf)$ http://www.google.com/ [R,NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    I found this code on Google and just copied it into .htaccess, since I'm not an expert in Apache. It works, but I don't know if it is the best way to do it. How can I see if that is the reason why the server is slow? Are there any tools to monitor it? What would you guys do? Thanks in advance!
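
    As an aside (not part of the original question): one crude but direct way to answer the "how can I see if that is the reason" part is to time repeated requests to a protected image with the rewrite rules enabled, and again with them commented out. A minimal Python sketch; the image URL below is hypothetical.

        import time
        import urllib.request

        # Hypothetical image URL on the site; substitute a real protected image path.
        url = "http://www.mysite.com/wp-content/uploads/sample.jpg"

        for _ in range(5):
            start = time.time()
            with urllib.request.urlopen(url) as resp:
                resp.read()  # read the full body so the timing covers the whole transfer
            print("%.3f seconds" % (time.time() - start))

    If the timings barely change with the hotlink block disabled, the slowdown is unlikely to come from .htaccess processing.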

  • Display maintenance site to requesters based on their IP address

    - by user64294
    Hi all. I would like to set up a special configuration on our Apache web server so that sites are displayed to users according to their IP address. We plan to upgrade our web sites, and during the upgrade we'll put up a maintenance site, so every user who connects to one of our web sites will get that site instead. There are 200 websites affected by the upgrade, so I don't want to change the Apache settings for each one. In order to test the upgrade, I need to set Apache to let only my IP address access the requested site. If my IP address is a.b.c.d and I ask for test.com, I want to see it, but all other users, having a different IP address, should get the maintenance site even if they ask for test.com. Our web server is hosted outside the office (ovh.com, France). The testers are the developers at our office and me. We could take some sites and enable them for testing by implementing IP restrictions in each website: the idea is that on these websites, if the visitor's IP address is different from our office IP address, we redirect the visitor to our maintenance website; otherwise we display the website. Is there a way to do this? Thank you.

  • OpenLDAP replication fails, "syncrepl_entry: rid=666 be_modify failed (20)"

    - by Pavel
    I've configured a second host to replicate the main LDAP server via syncrepl in slapd.conf:

        syncrepl rid=666
                 provider=ldaps://my-main-server.com
                 type=refreshAndPersist
                 searchBase="dc=Staff,dc=my-main-server,dc=com"
                 filter="(objectClass=*)"
                 scope=sub
                 schemachecking=off
                 bindmethod=simple
                 binddn="cn=repadmin,dc=my-main-server,dc=com"
                 credentials=mypassword

    When I restart slapd, it writes the following to /var/log/debug:

        Jun 11 15:48:33 cluster-mn-04 slapd[29441]: @(#) $OpenLDAP: slapd 2.4.9 (Mar 31 2009 07:18:37) $ ^Ibuildd@yellow:/build/buildd/openldap2.3-2.4.9/debian/build/servers/slapd
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: slapd starting
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: null_callback : error code 0x14
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: syncrepl_entry: rid=666 be_modify failed (20)
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: do_syncrepl: rid=666 quitting

    I've looked into the sources for the return code and found only "#define LDAP_TYPE_OR_VALUE_EXISTS 0x14" in include/ldap.h. Anyway, I don't quite get what the error message means. Can you help me debug this problem and figure out why the LDAP replication doesn't work? I've managed to put a "manual" copy into the database via slapcat and slapadd, but I'd like it to sync automatically.

    UPDATE: "Solved" by removing /var/lib/ldap/* and re-importing the database with slapadd.

  • Connecting jconsole using SOCKS to Amazon EC2

    - by freshfunk
    I'm trying to use jconsole to view stats on an EC2 instance by using a SOCKS proxy created by SSH. I've tried the various scripts mentioned in the links below, but to no avail:

        http://simplygenius.com/2010/08/jconsole-via-socks-ssh-tunnel.html
        http://gabrielcain.com/blog/2010/11/02/using-ssh-proxying-to-connect-jconsole-to-remote-cassandra-instances/

    I'm running

        ssh -f -ND 8123 myuser@mymachine

    and verified that at least Firefox goes through it as a proxy. I then run

        jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=8123 service:jmx:rmi:///jndi/rmi://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8080/jmxrmi

    I run netstat -n on my EC2 instance and I see a connection created by my machine. However, the connection eventually disappears and I get 'channel 2: open failed: connect failed: Operation timed out' from my SSH tunnel. I've opened the JMX port through the security group and I've checked the port on the EC2 instance to make sure it's open (by telnet-ing to it). I'm not sure where to look next. Are there some properties in sshd_config or ssh_config I need to enable for tunneling? Or anything in Mac OS X? I feel like a serious noob, but sys administration is really not my strong point. I've spent several hours and can't get this to work.

  • Server unreachable without www

    - by deamon
    My server is unreachable without the "www." prefix, even when trying it with ping. The DNS entry looks like this:

        $TTL 86400
        @          IN SOA    ns1.first-ns.de. postmaster.robot.first-ns.de. (
                             2011010600   ; serial
                             14400        ; refresh
                             1800         ; retry
                             604800       ; expire
                             86400 )      ; minimum
        @          IN NS     robotns3.second-ns.com.
        @          IN NS     robotns2.second-ns.de.
        @          IN NS     ns1.first-ns.de.
        @          IN A      1.2.3.4
        localhost  IN A      127.0.0.1
        mail       IN A      1.2.3.4
        www        IN A      1.2.3.4
        ftp        IN CNAME  www
        imap       IN CNAME  www
        loopback   IN CNAME  localhost
        pop        IN CNAME  www
        relay      IN CNAME  www
        smtp       IN CNAME  www
        @

    A DNS record of the same type for another domain on the same server is working with and without "www". And the VirtualHost config looks like this:

        <VirtualHost *:80>
            ServerName somewhere.com
            ServerAlias www.somewhere.com
            ServerSignature Off
            ...
        </VirtualHost>

    Any idea what could be wrong?
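
    As a side note (not from the original post), a quick way to tell a DNS problem from an Apache problem is to resolve both names and compare the results. A minimal Python sketch, using the placeholder domain from the question:

        import socket

        # somewhere.com is the placeholder domain used in the question.
        for host in ("somewhere.com", "www.somewhere.com"):
            try:
                print(host, "->", socket.gethostbyname(host))
            except socket.gaierror as exc:
                print(host, "-> resolution failed:", exc)

    If the bare name fails to resolve while the www name succeeds, the problem is in the zone; if both resolve to the same address, the cause is more likely on the web server side.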

  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell. I add line breaks and dates every time I add a new note. I need to copy this to another program, but it has a line limit of 50 characters. I want a line break for each new date and for when each date's comment goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input. If needed for a SUBSTITUTE or REPLACE function, I could add a ~ before each date in my input as a delimiter.

    Sample input:

        07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to customeremail@customer.com
        05/14 - Copy sent to John Public and email@customer.com

    Ideal output:

        07/03 - FU on query. Copies and history included.
        CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong
        PO on query form. Responded with inv sent dates an
        d locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to customeremail@customer.c
        om
        05/14 - Copy sent to John Public and email@custome
        r.com
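
    For reference (not in the original question), the wrapping logic itself (one block per dated entry, wrapped at 50 characters, splitting words only when unavoidable) is easy to sketch outside Excel. A rough Python illustration, using the ~ delimiter idea mentioned above and an abbreviated sample string:

        import textwrap

        # Abbreviated sample, with ~ marking the start of each dated entry
        # (the delimiter idea suggested in the question).
        notes = ("~07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public"
                 "~06/27 - New ticket opened using query form")

        wrapped = []
        for entry in filter(None, notes.split("~")):
            # wrap() breaks at spaces, so a word is only split when a single
            # token is longer than the 50-character limit.
            wrapped.extend(textwrap.wrap(entry, width=50))

        print("\n".join(wrapped))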

  • How can I cornify a SharePoint site?

    - by Chris Farmer
    Inspired by the April 1 gravatar changes and the memory of last year's cornification of Stack Overflow, I wanted to add a cornify button to my company's SharePoint app. I just added their HTML snippet to a Content Editor Web Part:

        <a href="http://www.cornify.com" onclick="cornify_add();return false;">
          <img src="http://www.cornify.com/assets/cornifycorn.gif" width="52" height="51" border="0" alt="Cornify" />
        </a>
        <script type="text/javascript" src="http://www.cornify.com/js/cornify.js"></script>

    The button renders all glittery and beautiful, and the magical functionality works fine in Chrome and Firefox for me (I'm on Windows 7). But in IE8, all the gorgeous unicorn images get added at the bottom of the page such that you can't see them unless you scroll down. Since most of our users are IE users, I fear that this just isn't going to be all that much fun. So, is there some known way to force this to work better in IE8, or is there another similarly fun site adornment utility that I could use that might behave better in a SharePoint 2007 app running in IE7/8?

  • Is a timeout in tracert output an indication of an error?

    - by nitramk
    TCP/IP packets sent from my computer to a remote server do not always reach the destination and end up being retransmitted, sometimes several times, before they succeed. To troubleshoot this, I'm running a tracert to the server:

        Tracing route to <site> [<address>]
        Over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms   mymachine
          2    <1 ms    <1 ms    <1 ms   gw.levonline.com [217.70.32.30]
          3    <1 ms    <1 ms    <1 ms   81.201.213.218
          4    <1 ms    <1 ms    <1 ms   bmf1-hmf1.driften.net [81.201.213.12]
          5    <1 ms    <1 ms    <1 ms   10ge-2-4-cr2.a1.sth.ownit.se [84.246.88.157]
          6    <1 ms      *      <1 ms   netnod-ix-ge-b-sth-4470.microsoft.com [195.69.11.181]
          7    26 ms      *        *     ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.1]
          8    48 ms    57 ms    56 ms   ten9-1.lts-76e-1.ntwk.msn.net [207.46.42.133]
          9      *        *        *     Request timed out.

    At hops 6 and 7 I'm seeing timeouts while waiting for the reply from the server (as seen above). Running the same tracert many times gives varying output; sometimes the response is fine, but sometimes I get this timeout for 1, 2, and sometimes for all 3 packets. The timeout always starts at the same server, netnod-ix-ge-b-sth-4470.microsoft.com. I've tried setting the tracert timeout to 10 seconds, but I am still getting the timeout. Running tracert towards other servers does not give me the same timeout. Microsoft network technicians tell me that the problem is not on "their" side. Are these timeouts an indicator of a lost packet on the specific node which did not respond? Are the timeouts an indication of a problem, or is this normal?

  • IIS 7 rewriting subdomain to point at a specific port.

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs. Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, and when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried the following rule, but it does not work; I get a 404.

        <rewrite>
          <globalRules>
            <rule name="TFS" stopProcessing="true">
              <match url="^(?:tfs/)(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
              </conditions>
              <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
            </rule>
          </globalRules>
        </rewrite>

  • Sparing level on HP EVA 4000

    - by Samuel
    One of the disks of our EVA4000 died today. This disk group (all volumes vraid5 with sparing level 1 and almost no space left for more volumes, 1 TiB drives) is being rebuilt with "spare space" right now, and it will take at least 15 hours to do the leveling/rebuilding. We can't get a new disk until Friday. So, the question is: what would happen if another disk dies before the leveling completes? Would we lose data? And after that, how many additional disks could die before losing data, 1 or 2? In a "usual" RAID we would be vulnerable to data loss while the rebuild takes place, but in this case the space reserved for sparing is two times the size of the biggest disk, so at the very least the effect should be the same as having two spares. Thanks in advance.

    Update: I have found some interesting threads about this question but still can't answer it, so I'm starting a bounty.

        http://blog.thestoragearchitect.com/2008/10/27/understanding-eva/
        http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&url=http%3A%2F%2Fwww.experts-exchange.com%2FStorage%2FStorage_Technology%2FQ_25548177.html (Experts Exchange question from Google)

  • One Apache VirtualHost entry overrides another?

    - by johnlai2004
    I can't tell why one Apache virtual host entry keeps overriding another. The following file (filename: cbl)

        <VirtualHost 74.207.237.23:80>
            ServerAdmin utopia@completebeautylist.com
            ServerName completebeautylist.com
            ServerAlias www.completebeautylist.com
            DocumentRoot /srv/www/cbl/production/public_html/
            ErrorLog /srv/www/cbl/production/logs/error.log
            CustomLog /srv/www/cbl/production/logs/access.log combined
        </VirtualHost>

    keeps overriding this file (filename: theccco.org):

        <VirtualHost 74.207.237.23:80>
            SuexecUserGroup "#1010" "#1010"
            ServerName theccco.org
            ServerAlias www.theccco.org
            ServerAlias webmail.theccco.org
            ServerAlias admin.theccco.org
            DocumentRoot /home/theccco/public_html
            ErrorLog /var/log/virtualmin/theccco.org_error_log
            CustomLog /var/log/virtualmin/theccco.org_access_log combined
            ScriptAlias /cgi-bin/ /home/theccco/cgi-bin/
            DirectoryIndex index.html index.htm index.php index.php4 index.php5
            <Directory /home/theccco/public_html>
                Options -Indexes +IncludesNOEXEC +FollowSymLinks
                allow from all
                AllowOverride All
            </Directory>
            <Directory /home/theccco/cgi-bin>
                allow from all
            </Directory>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} =webmail.theccco.org
            RewriteRule ^(.*) https://theccco.org:20000/ [R]
            RewriteCond %{HTTP_HOST} =admin.theccco.org
            RewriteRule ^(.*) https://theccco.org:10000/ [R]
            Alias /dav /home/theccco/public_html
            <Location /dav>
                DAV On
                AuthType Basic
                AuthName theccco.org
                AuthUserFile /home/theccco/etc/dav.digest.passwd
                Require valid-user
                ForceType text/plain
                Satisfy All
                RewriteEngine off
            </Location>
        </VirtualHost>

    I tried a2ensite, a2dissite, and reloading, and I get this message:

        * Reloading web server config apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Thu Apr 15 10:47:36 2010] [warn] NameVirtualHost 74.207.237.23:443 has no VirtualHosts

    Aside from that, I don't know what else could be wrong. Can anyone tell me what to do?

  • Nginx tries to download file when rewriting non-existent URL

    - by Vince Kronlein
    All requests to a non-existent file should be rewritten to index.php?name=$1. All other requests should be processed as normal. With this server block, the server is trying to download all non-existent URLs:

        server {
            server_name www.domain.com;
            rewrite ^(.*) http://domain.com$1 permanent;
        }

        server {
            listen 80;
            server_name domain.com;
            client_max_body_size 500M;
            index index.php index.html index.htm;
            root /home/username/public_html;

            location ~ /\.ht {
                deny all;
            }

            location ~ \.php$ {
                try_files $uri = 404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9002;
            }

            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }

            location /plg {
            }

            location / {
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /index.php?name=$1 break;
                }
            }
        }

    I've checked that my default_type is text/html instead of octet-stream; I'm not sure what the deal is.

  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:

    - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
    - A fresh file server, which stores newly uploaded content that is probably (and usually) rapidly accessed; it has 500 GB of RAIDed SSD storage and can push over 3 Gbit of traffic.
    - 3 cheap node servers, each containing 2 x 750 GB SATA drives in RAID 1, to which files older than 2 weeks are archived from the SSD server mentioned above.

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.).

    Where I have the problem is this. I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com, via a different modsec script with a different pass phrase. That all works fine and dandy... except that the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since they cannot read off the drives fast enough. What would be a good way to make files on the site accessible via 2 different network routes, one of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?

  • NGINX rewrite for vanity URLs when file doesn't exist (try_files and rewrite together)

    - by user1721724
    I'm trying to get vanity URLs on my server. If the file path from the URL doesn't exist, I want to rewrite the URL to profile.php, but if my users have periods in their usernames, their vanity URL doesn't work. Here is my conf block:

        server {
            listen 80;
            server_name www.example.com;
            rewrite ^/([a-zA-Z0-9-_]+)$ /profile.php?url=$1 last;
            root /var/www/html/example.com;
            error_page 404 = /404.php;

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 1y;
                log_not_found off;
            }

            location ~ \.php$ {
                fastcgi_pass example_fast_cgi;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/example.com$fastcgi_script_name;
                include fastcgi_params;
            }

            location / {
                index index.php index.html index.htm;
            }

            location ~ /\.ht {
                deny all;
            }

            location /404.php {
                internal;
                return 404;
            }
        }

    Any help would be appreciated. Thanks!
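
    Not part of the original post, but the behaviour described above can be reproduced with the rewrite pattern alone: the character class contains no literal ".", so paths for usernames with periods never match the vanity rule and fall through to the other locations. A small Python check with hypothetical usernames:

        import re

        # Same pattern as in the nginx rewrite line above.
        vanity = re.compile(r"^/([a-zA-Z0-9-_]+)$")

        print(bool(vanity.match("/john_doe")))   # True  - would be rewritten to profile.php
        print(bool(vanity.match("/john.doe")))   # False - the "." falls outside the class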

  • Connecting to my home router web interface from work

    - by Joe
    Hi, I'm trying to connect to my home router's web interface from work. I use DynDNS because I don't have a static IP at home, and it works perfectly from any other place except my workplace (update: I made a mistake, see the edit below). When trying to access the web interface from work I get a "500 Server Error" with the code SERVER_RESPONSE_RESET. I'm not trying to use any protocol such as remote desktop; I'm only trying to access the web interface. I can access any other web page from my workplace with no problems, and I think my router's web interface is like any other web page, isn't it? I thought maybe my workplace proxy blocks addresses of services like DynDNS, so I also tried another trick. Since I have a web page on my own domain (say www.mydomain.com) which I can access from work, I tried adding a CNAME to my domain that points to the DynDNS address (router.mydomain.com). This way, if anyone enters the address router.mydomain.com from anywhere, they reach my home router's web interface, and there's no way of knowing it's a DynDNS address (or is there?). However, it still doesn't work from my workplace (I get the same error message). Any ideas?

    Edit: I'm sorry to say I made a mistake earlier. I used to be able to access my home router's web interface from my old workplace, and I thought it was still possible since I don't recall making any configuration changes. However, after reading the replies, I went over to my old workplace and checked, and it doesn't work from there either. I'm very sorry for giving out wrong and misleading information about my problem. So to summarize: my problem is that I can't access my home router's web interface from anywhere.

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following: patch management, server event viewing/tracking, AD change management, ticketing with an internal/external KB, remote access (the ability to shadow user sessions or create new ones), imaging, and inventory. Our environment contains Windows servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls. This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with some of this:

        http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users
        http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy
        http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server

    What I'm ideally looking for is a reasonably cheap solution that integrates these features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper, the better, but we have a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble something together to make it work for us?

  • Apache HTTPd FollowSymLinks path permission

    - by apast
    Hi, I'm configuring my development environment with a basic Apache HTTPd configuration. But, to avoid a common problem, I want to map my test URL to my development folder. I'm using Ubuntu. My development path is located under the following example path:

        /home/myusername/myworkspace/hptargetpath/src/pages

    with the following symbolic link mapping:

        # ls -l /opt/share/www/mydevelopmentrootpath
        lrwxrwxrwx 1 root root 77 2011-02-13 18:53 /opt/share/www/mydevelopmentrootpath -> /home/myusername/myworkspace/hptargetpath/src/pages

    With this folder mapping, I configured Apache HTTPd with the following configuration:

        <VirtualHost *:*>
            ServerName local.server.com
            ServerAdmin some@test.com
            DirectoryIndex index.html
            DocumentRoot /opt/share/www/mydevelopmentrootpath
            <Directory /opt/share/www/mydevelopmentrootpath/ >
                Options +Indexes
                Options +FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    But I'm receiving a 403 Forbidden error when I try to access index.html at http://local.server.com/index.html:

        403 Forbidden
        You don't have permission to access /index.html on this server.

    In the httpd debug log, I see the following message:

        [Sun Feb 13 19:34:47 2011] [error] [client 127.0.1.1] Symbolic link not allowed or link target not accessible: /opt/share/www/mydevelopmentrootpath

    I'm thinking that this problem is caused by some path permission: not a direct permission on the directory, but a permission on some intermediate directory in the path. There is a directive in the httpd core Options, SymLinksIfOwnerMatch ("The server will only follow symbolic links for which the target file or directory is owned by the same user id as the link"), but I tested it without effect. Can somebody help me? I think this is a trivial configuration on a development environment. Best regards, And Past
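
    As a quick aid for the "some intermediate directory in the path" theory above, here is a small Python sketch (using the example path from the question) that prints the permission bits of every directory leading to the symlink target. The user Apache runs as needs execute permission on each of these directories in order to follow the link:

        import os
        import stat

        # Example development path from the question.
        target = "/home/myusername/myworkspace/hptargetpath/src/pages"

        parts = target.strip("/").split("/")
        for i in range(1, len(parts) + 1):
            path = "/" + "/".join(parts[:i])
            st = os.stat(path)
            # Prints e.g. "drwx------ 1000 1000 /home/myusername"
            print(stat.filemode(st.st_mode), st.st_uid, st.st_gid, path)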

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers; however, for several days I have been stumped by one thing. This article, http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533, states:

        Keep in mind that when you use an expires header the files are cached in the browser until it expires, so do not use this on files that change frequently.

    Other sites I've read say the same. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from the article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has expires headers set on it.

    Request headers:

        Host              domain.com
        User-Agent        Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept            image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive
        Referer           http://domain.com/index.php
        Cookie            __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight         activate
        If-Modified-Since Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control     max-age=0

    Response headers:

        Date           Mon, 26 Mar 2012 22:06:50 GMT
        Server         Apache/2.2.3 (CentOS)
        Connection     close
        Expires        Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control  max-age=2592000
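
    Worth noting: the request above carries If-Modified-Since and Cache-Control: max-age=0, which is what a manual browser refresh typically sends, and a refresh revalidates with the server regardless of the Expires date. To see what the server returns without a browser in the way, a minimal Python sketch (the URL is a placeholder matching the anonymized domain in the question):

        import urllib.request

        # Placeholder URL, matching the anonymized domain used in the question.
        req = urllib.request.Request("http://domain.com/images/sample.png", method="HEAD")

        with urllib.request.urlopen(req) as resp:
            for name in ("Expires", "Cache-Control", "Last-Modified", "ETag"):
                print(name + ":", resp.headers.get(name))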

  • Firefox will not remember local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003), and I have sites on all of them. They all use cookies. On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, and it will not replay the logo animation (top left-hand side) when you click through to another page. This works for all browsers on the production server. I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This happens when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine. You can test this on the Supernova Interactive site by watching the logo in the top left corner. It uses a cookie to detect whether you've already seen the animation; once you've seen it, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009. Any ideas on this one? Firewalls are off on the server. Notice that you do not get a cookie from exchange.supernova.com.
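
    To take the browser out of the equation, it can help to compare the raw Set-Cookie headers the two servers actually send. A small Python sketch using the URLs mentioned above:

        import urllib.request

        # URLs taken from the question; the second one is the staging box
        # exposed on port 10009.
        for url in ("http://www.supernovainteractive.com/",
                    "http://exchange.supernova.com:10009/"):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    print(url, "->", resp.headers.get_all("Set-Cookie"))
            except OSError as exc:
                print(url, "-> request failed:", exc)

    If the staging server sends no Set-Cookie header at all, the problem is server-side rather than a Firefox setting.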

  • Nginx and Gunicorn hanging on GET requests

    - by whatWhat
    I'm using Nginx + Gunicorn to serve my Django project. All GET requests hang for about a minute. The content seems to be available immediately, as I can see it in the browser inspector, but the browser itself looks like it's still waiting for more data. Here's my Nginx config:

        # allow for up to 3 connections per second.
        limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com/example/;

            # serve directly - analogous for static/staticfiles
            location /media/ {
                # this changes depending on your python version
                root /home/example/;
            }

            location /static/ {
                # if asset versioning is used
                if ($query_string) {
                    expires max;
                }
                root /var/www/example.com;
            }

            location / {
                # Allow for a burst of 50.
                limit_req zone=one burst=50 nodelay;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 10;
                proxy_pass http://localhost:8001/;
            }

            # what to serve if upstream is not available or crashes
            error_page 500 502 503 504 /media/50x.html;
        }

    And my Gunicorn config:

        bind = "127.0.0.1:8001"
        workers = 3
        worker_class = "gevent"

    Is there anything obvious that would be causing the requests to stay open for so long?
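
    As an aside, the symptom above (content visible immediately, but the connection held open for about a minute) can be narrowed down by timing the first byte and the end of the response separately. A rough Python sketch with a placeholder URL:

        import time
        import urllib.request

        url = "http://example.com/"  # placeholder for the affected Django page
        start = time.time()

        with urllib.request.urlopen(url, timeout=120) as resp:
            resp.read(1024)  # first chunk of the body
            print("first bytes after %.2f s" % (time.time() - start))
            resp.read()      # rest of the body; blocks until the server finishes or closes
            print("full response after %.2f s" % (time.time() - start))

    If the first bytes arrive quickly but the second timestamp lands near a minute, the body is being delivered promptly and something upstream is simply keeping the connection open.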

  • Can't upgrade NVIDIA GeForce 310M display driver on Acer Aspire 5745PG

    - by Emerson
    I've been trying for days to update my video driver. I have an Acer Aspire 5745PG with an NVIDIA GeForce 310M board, and I was trying to run the Sony Vegas video editor with Boris Continuum plugins. It turned out that some of the plugins, like BCC Text Extrude, wouldn't work, showing the message "Insufficient depth resolution to run Blue". I then read somewhere that updating the display driver would do the trick. That was when my nightmares started; I've already lost a good 3 nights trying to sort this out, without success :(

    The display driver I had before (and that I currently have after restoring) was version 8.16.11.8997. The first thing I tried was downloading the 8.17.12.6619 driver directly from Acer, which was shown as the latest version on the Acer website: http://support.acer.com/product/default.aspx?modelId=2466. Running it would say "Driver Package Failure - Setup failed to read the required Display Driver to be used with this package". I then tried NVIDIA's own driver, version 296.10: http://us.download.nvidia.com/Windows/296.10/296.10-notebook-win7-winvista-64bit-international-whql.exe. That gave me a similar error message :/

    After some research I found out that some people had the same issue and had to change the configuration file so the installer would recognize this NVIDIA board: http://forums.nvidia.com/index.php?showtopic=222904. That topic said to look for the "Device Instance Id" property of the NVIDIA GeForce 310M display adapter, which I couldn't find; instead I found the "Hardware Id", which seemed to be the right one. I followed the instructions and changed the .inf file, first for the Acer installation and then for NVIDIA's own driver. It actually managed to go ahead with the installation in both cases, but the only thing I got was a black screen, while the computer still appeared to be running fine. I had to hard reset, and then it would come back with the generic VGA driver. I could only get my display back using the recovery function. I imagine thousands of these notebooks were sold; can't their driver be updated?? Could someone help me with this?? Thanks, Echo

  • Site resolves fine without "www", "www" creates database error

    - by PatrickS
    Working with BOA (Barracuda / Octopus / Aegir), I've installed a few Drupal sites without any problems, following the same process for all of them. BOA is running on Nginx. All sites go through Cloudflare's network, where I set the same DNS settings:

        A  example.com  points to IPADDRESS
        A  www          points to IPADDRESS

    The nameservers of each domain point to Cloudflare's respective nameservers. It all works, except for one site that works perfectly without "www" but with "www" returns the typical database error shown when Drupal can't find the site's database:

        Site off-line
        The site is currently not available due to technical problems. Please try again later. Thank you for your understanding.
        If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider.

    In BOA, all sites have the same alias, basically a symlink, redirecting like this: www.example.com -> example.com

  • Hardware choice: ASUS Eee Pad Slider or ASUS Eee Pad Transformer for web development?

    - by JamesM
    I was just wondering which of the following tablets seems better to get. I am a web developer, always using Unix/Linux/BSD, and I want a tablet that has a keyboard.

        http://gdgt.com/asus/eee/pad/slider/
        http://gdgt.com/asus/eee/pad/transformer/
        http://www.tweaktown.com/news/18311/asus_eee_pad_slider_transformer_tablets_with_physical_keyboard/index.html

    I know both are similar, but I'm not sure which one I should get. The Slider seems very nice, but its keyboard is fixed to the tablet, unlike the Transformer's.

    P.S.: I'm going to use one of the above to showcase my programming work at school, as well as simply being a cheaper notebook than the $300 Windows 7 locked-down notebooks. By locked down, I mean we pay $300 for them and after 3 years we can do whatever we want with them; they are Lenovo ThinkPad mini-10s, and what they have installed is all you get; they don't let us install whatever OS we want on them. As for the question on both of those links, I think the Transformer would be better, but that is only taking into account it being both a tablet and a notebook. What I really care about is power: which one is more powerful? It will be running kFreeBSD-Debian-Squeeze with a Linux Mint theme and several other packages. Though I'm not going to run Windows (which I feel is bloated), I still want power. To help keep my computer from slowing down with cache, I will have a cron.d/hourly script cleaning out the cache memory.

  • Configuration of Zend Framework in Ubuntu

    - by Rahul Mehta
    Hi, I have created a project zfapi with the zf command in Ubuntu. Now http://myserverpath.com/zfapi/ gives me a listing of the folders public, application and others, and http://myserverpath.com/zfapi/public gives me the index page index.php. I have made UserController.php in application/controllers, but http://myserverpath.com/zfapi/user/ says the user is not found. What configuration do I need to set for it to run properly? I added the following at the end of my /etc/apache2/apache2.conf:

        <VirtualHost *:80>
            DocumentRoot "/var/www/mypath/zfapi"
            <Directory "/var/www/mypath/zfapi">
                Order allow,deny
                Allow from all
                AllowOverride all
            </Directory>
        </VirtualHost>

    which gives me this error while restarting the server:

        [Sat Jan 08 13:32:53 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sat Jan 08 13:33:03 2011] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    What should I do?

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the results list (and of course the sites they link to)?
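
    A side note, not from the original post: the query string contains several & characters and percent-encoded dates, so it can help to build the URL explicitly and quote it in the shell. A small Python sketch using the parameters from the command above:

        from urllib.parse import urlencode

        # Parameter values copied from the wget command above.
        params = {
            "st": "article",
            "k": "germany",
            "df": "08/21/2013",
            "dt": "09/20/2013",
            "ob": "dt",
        }

        print("http://www.voanews.com/search/?" + urlencode(params))
        # When passing this URL to wget in a shell, wrap it in quotes;
        # otherwise each unquoted & starts a new background command.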
