Search Results

Search found 15780 results on 632 pages for 'pass board elections 2011'.

  • Still failing a function, not sure why...ideas on test cases to run?

    - by igor
    I've been trying to get this Sudoku game working, and I am still failing some of the individual functions. As a whole the game works, but when I run it through an "autograder", some test cases fail. Currently I am stuck on the following function, placeValue. I have the output I get vs. what the correct output should be, but I'm confused about what is going on. EDIT: I do not know what input/calls the autograder makes to the function. What happens is that "Invalid row" is printed after every placeValue call, and I can't trace why. Here is the output (mine + correct one) if it's at all helpful: http://pastebin.com/Wd3P3nDA Here is placeValue, followed by getCoords, which placeValue calls:

        void placeValue(Square board[BOARD_SIZE][BOARD_SIZE]) {
            int x, y, value;
            if (getCoords(x, y)) {
                cin >> value;
                if (board[x][y].permanent) {
                    cout << endl << "That location cannot be changed";
                } else if (!(value >= 1 && value <= 9)) {
                    cout << "Invalid number" << endl;
                    clearInput();
                } else if (validMove(board, x, y, value)) {
                    board[x][y].number = value;
                }
            }
        }

        bool getCoords(int & x, int & y) {
            char row;
            y = 0;
            cin >> row >> y;
            x = static_cast<int>(toupper(row));
            if (isalpha(row) && (x >= 'A' && x <= 'I') && y >= 1 && y <= 9) {
                x = x - 'A';  // converts x from a letter to corresponding index in matrix
                y = y - 1;    // converts y to corresponding index in matrix
                return (true);
            } else if (!(x >= 'A' && x <= 'I')) {
                cout << "Invalid row" << endl;
                clearInput();
                return false;
            } else {
                cout << "Invalid column" << endl;
                clearInput();
                return false;
            }
        }
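
    Since the autograder's exact calls aren't known, a quick way to narrow it down is to drive getCoords directly with a handful of boundary inputs by swapping std::cin's buffer. This is only a minimal sketch; the input strings below are guesses at the kinds of cases a grader might send (lowercase rows, out-of-range rows and columns, coordinates in the wrong order), not the actual test data, and the stand-in clearInput() is an assumption so the harness compiles on its own.

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <cctype>
        using namespace std;

        // Assumed stand-in so this file compiles by itself; in the real project
        // the game's own clearInput() and getCoords() are linked in instead.
        void clearInput() { cin.clear(); cin.ignore(10000, '\n'); }
        bool getCoords(int & x, int & y);   // the function under test

        void tryInput(const string & s) {
            istringstream in(s);
            streambuf * old = cin.rdbuf(in.rdbuf());   // redirect cin to the test string
            int x = -1, y = -1;
            bool ok = getCoords(x, y);
            cout << "input \"" << s << "\" -> " << (ok ? "accepted" : "rejected")
                 << " (x=" << x << ", y=" << y << ")\n";
            cin.rdbuf(old);                            // restore cin
        }

        int main() {
            tryInput("A 1");    // lower boundary
            tryInput("i 9");    // lowercase row, upper boundary
            tryInput("J 5");    // row just out of range
            tryInput("A 0");    // column below range
            tryInput("A 10");   // column above range
            tryInput("1 A");    // coordinates swapped -- a likely grader trick
        }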

    Read the article

  • WordPress wp_nav_menu specific sub menu only

    - by HWD
    I want to create a menu that pulls only the sub-menu options for a particular page. For example, if the menu item DASHBOARD has the sub-menus MESSAGE BOARD and CALENDAR, I want to be able to create a separate menu containing just MESSAGE BOARD and CALENDAR. I'd like to do this without the wp_list_pages function, so that the menu options can still be managed from the Appearance > Menus screen in WordPress. Thanks!
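
    One way to approach this, sketched below under the assumption that the registered menu is named 'main-menu' and that the parent item is titled 'Dashboard' (both are placeholders), is to read the full menu with wp_get_nav_menu_items() and print only the items whose menu_item_parent points at that parent. Because the items still come from the menu defined under Appearance > Menus, they stay editable there.

        <?php
        // A minimal sketch, e.g. in the theme's functions.php: list the children of one
        // menu item from a menu managed in Appearance > Menus. 'main-menu' and
        // 'Dashboard' are assumptions -- substitute your own menu name and parent title.
        function hwd_sub_menu( $menu_name = 'main-menu', $parent_title = 'Dashboard' ) {
            $items = wp_get_nav_menu_items( $menu_name );
            if ( ! $items ) {
                return;
            }
            $parent_id = 0;
            foreach ( $items as $item ) {
                if ( $item->title === $parent_title ) {
                    $parent_id = $item->ID;
                    break;
                }
            }
            if ( ! $parent_id ) {
                return; // parent item not found in this menu
            }
            echo '<ul class="sub-menu">';
            foreach ( $items as $item ) {
                if ( (int) $item->menu_item_parent === $parent_id ) {
                    printf( '<li><a href="%s">%s</a></li>', esc_url( $item->url ), esc_html( $item->title ) );
                }
            }
            echo '</ul>';
        }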

    Read the article

  • What is the lightest way to make a huge chess-like grid?

    - by Sotkra
    Hey there, I'm working on a browser game and I can't help but wonder what the lightest way is to make the grid/board on which the game takes place. Right now, as a mere sample, I have this: http://sotkra.com/game/ As the grid gets bigger and bigger, the table and its td's produce a very heavy page, which in turn eats up more resources in the browser engine and the computer. So, is a table of td's the most lightweight way to craft a huge grid-like board, or is there something lighter that you would recommend? Cheers, Sotkra
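
    One commonly suggested lighter alternative is to draw the board on a single canvas element instead of emitting one DOM node per cell, so page weight stays constant no matter how many squares there are. The sketch below assumes a plain square grid and made-up dimensions; hit-testing is done with arithmetic on mouse coordinates rather than per-cell event handlers.

        <canvas id="board" width="800" height="800"></canvas>
        <script>
        // Minimal sketch: draw an N x N grid on one canvas instead of N*N <td> elements.
        var N = 100, size = 8;                       // assumed board size and cell size in px
        var board = document.getElementById('board');
        var ctx = board.getContext('2d');
        ctx.beginPath();
        for (var i = 0; i <= N; i++) {
            ctx.moveTo(i * size + 0.5, 0);           // +0.5 keeps 1px lines crisp
            ctx.lineTo(i * size + 0.5, N * size);
            ctx.moveTo(0, i * size + 0.5);
            ctx.lineTo(N * size, i * size + 0.5);
        }
        ctx.strokeStyle = '#999';
        ctx.stroke();
        // Hit-testing by arithmetic instead of per-cell listeners:
        board.onclick = function (e) {
            var r = board.getBoundingClientRect();
            var col = Math.floor((e.clientX - r.left) / size);
            var row = Math.floor((e.clientY - r.top) / size);
            console.log('clicked cell', row, col);
        };
        </script>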

    Read the article

  • Website Reviewing Application/Interface

    - by Eric Di Bari
    I am the technology director at a small nonprofit, and we are in the process of making a new website. We have several proposed mock-ups of different homepage designs, and need to receive input from our board members. Is there an online application/program/framework that will receive and organize user comments? I'm looking for something that will allow commenting while viewing the page, rather than just a message board or wiki.

    Read the article

  • How to implement "short" nested vanity urls in rails?

    - by UnSandpiper
    I understand how to create a vanity URL in Rails in order to translate http://mysite.com/forum/1 into http://mysite.com/some-forum-name But I'd like to take it a step further and get the following working (if it is possible at all). Instead of http://mysite.com/forum/1/board/99/thread/321, as a first step I'd like to get to something like http://mysite.com/1/99/321, and ultimately end up with http://mysite.com/some-forum-name/some-board-name/this-is-the-thread-subject. Is this possible?
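
    If the goal is just to drop the /forum/:id/board/:id/thread/:id scaffolding, one hedged sketch (Rails 3-era routing syntax, with made-up model and parameter names) is to route three bare segments straight to the threads controller and override to_param on each model so the generated URLs carry slugs instead of bare ids:

        # config/routes.rb -- a sketch, not a drop-in; keep it near the bottom
        # so the catch-all three-segment route doesn't shadow other routes.
        match ':forum_slug/:board_slug/:thread_slug' => 'threads#show',
              :as => :pretty_thread

        # app/models/forum.rb (Board and the thread model would do the same)
        class Forum < ActiveRecord::Base
          def to_param
            "#{id}-#{name.parameterize}"   # e.g. "1-some-forum-name"
          end
        end

        # app/controllers/threads_controller.rb
        class ThreadsController < ApplicationController
          def show
            @forum  = Forum.find(params[:forum_slug])      # find() still works: the numeric prefix wins
            @board  = @forum.boards.find(params[:board_slug])
            @thread = @board.threads.find(params[:thread_slug])
          end
        end

    Getting fully worded URLs with no numeric prefix at all would additionally need a slug column on each model and find_by_slug lookups instead of find.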

    Read the article

  • Pfsense mbuf full, what to do?

    - by sathia
    I noticed today that MBUF usage has hit its limit, and the site I'm running behind pfSense is having some trouble too. I'd like to know whether it would be safe to just run sysctl kern.ipc.nmbclusters=65536. I would rather not reboot the server; is it safe (or useful) to do this from the pfSense shell? Thank you very much.

        2.0-RELEASE (amd64) built on Tue Sep 13 17:05:32 EDT 2011
        State table size  35573/550000
        MBUF Usage        25600/25600
        CPU usage         2%
        Memory usage      17% (2GB)
        Swap              0%
        CPU: Intel(R) Xeon(R) CPU E5450 @ 3.00GHz
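
    For what it's worth, the persistent way to raise the limit on pfSense is a loader tunable, since whether the runtime sysctl takes effect immediately depends on the FreeBSD base the release is built on. A hedged sketch (the 65536 value is just the figure from the question, not a recommendation):

        # From the pfSense shell (console menu option 8, or Diagnostics > Command Prompt):
        # try the runtime change first; if the kernel refuses it, only a reboot will apply it.
        sysctl kern.ipc.nmbclusters=65536

        # Make it survive reboots: loader.conf.local is preserved across firmware upgrades,
        # unlike loader.conf itself.
        echo 'kern.ipc.nmbclusters="65536"' >> /boot/loader.conf.local

        # Watch mbuf usage afterwards:
        netstat -m | head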

    Read the article

  • WatchGuard 'Internal Policy' intermittently blocking outbound web traffic

    - by vfilby
    I have a lot of legitimate outbound traffic intermittently being denied by WatchGuard's "Internal Policy." Today I tried to go to Splunk's homepage and my traffic was denied by my WatchGuard XTM 22 with Pro upgrade. What is the "Internal Policy" and what can I do to control it? Example of traffic being blocked:

        Type     Date                  Action  Source IP  Port   Interface      Destination IP  Port  Policy
        Traffic  2011-09-21T18:24:43   Deny    10.0.0.90  49627  3-Primary LAN  64.127.105.40   80    Firebox Internal Policy http/tcp

    Top three firewall policies:

    Read the article

  • Gmail - error adding pop3 account from my mail server (postfix+courier)

    - by Lucas Lobosque
    I use courier to add pop3/imap support to my mail server, and I get this when I try to add a new pop3 account in gmail: Server returned error: "Missing +OK response upon connecting to the server: * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information." Any help on how to fix this would be appreciated.
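
    The banner quoted in the error is Courier's IMAP greeting rather than a POP3 "+OK" line, which suggests Gmail's fetcher is reaching the IMAP service instead of the POP3 one (for instance because the IMAP port was entered in the Gmail form, or the POP3 port is forwarded to the wrong daemon). A quick hedged check from any shell, with mail.example.com standing in for the real hostname:

        # POP3 should greet with "+OK ..." on port 110 (or 995 for POP3S):
        nc mail.example.com 110

        # IMAP greets with "* OK [CAPABILITY ...] Courier-IMAP ready ..." on 143/993 --
        # exactly the banner Gmail reported:
        nc mail.example.com 143

        # For the SSL ports, openssl s_client shows the same banners:
        openssl s_client -quiet -connect mail.example.com:995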

    Read the article

  • passenger-status - ERROR: Phusion Passenger doesn't seem to be running

    - by Casual Coder
    My server is: Server version: Apache/2.2.11 (Ubuntu) Server built: Aug 16 2010 17:44:11 My ruby version ruby 1.9.2p136 (2010-12-25 revision 30365) [x86_64-linux]. I've installed passenger 3.0.7 via RubyGems. I've run passenger-install-apache2-module and everything went fine. I've modified configuration (load module, edit virtualhost etc.) and restarted Apache. Module is loading fine (apache does not complain). But Passenger is obviously not working: sudo passenger-status ERROR: Phusion Passenger doesn't seem to be running. How can I get it working ? Edit 1: /etc/apache2/mods-enabled/passenger.load LoadModule passenger_module /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7/ext/apache2/mod_passenger.so Root of passenger: passenger-config --root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7 Apache VirtualHost sub URI configuration in /etc/apache2/sites-enabled/railsapps: <VirtualHost <IP ADDRESS>:80> ServerAdmin webmaster@localhost ServerName my.server.name PassengerRoot /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7 PassengerRuby /usr/bin/ruby RailsEnv development DocumentRoot /www/vhosts/railsapps <Directory /www/vhosts/railsapps> Options FollowSymlinks -MultiViews AllowOverride None Order allow,deny Allow from all </Directory> RailsBaseURI /siteA <Directory /www/vhosts/railsapps/siteA> Options -MultiViews AllowOverride All Order allow,deny Allow from all </Directory> RailsBaseURI /siteB <Directory /www/vhosts/railsapps/siteB> AllowOverride All Options -MultiViews Order allow,deny Allow from all </Directory> LogLevel info ErrorLog /var/log/apache2/railsapps_error.log CustomLog /var/log/apache2/railsapps_access.log combined </VirtualHost> Of course as in 'users guide apache.html' siteA and siteB are symlinks to siteA/public and siteB/public absolute paths respectively. Edit 2: In logs there is nothing related to passenger. Permissions are also fine (read and executable) on directories in paths. Even if it was some misconfiguration or permission problem isn't passenger suppose to be running? I mean sudo passenger-status should at least output --- general information ---. When I place some test html file in railsapps directory it is served fine. /var/log/apache2/railsapps_error.log [Sun Jun 19 12:19:08 2011] [error] [client <IP>] Directory index forbidden by Options directive: /www/vhosts/railsapps/siteA/ [Sun Jun 19 12:19:08 2011] [error] [client <IP>] File does not exist: /www/vhosts/railsapps/favicon.ico
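
    passenger-status talks to the Passenger instance that Apache itself spawned, so two things worth confirming in a setup like this (a hedged checklist using the paths already shown above, and assuming the gem keeps its tools in bin/ under the root reported by passenger-config --root) are that the module is really loaded into the running Apache and that passenger-status is run from the same gem the module was built from:

        # Is mod_passenger actually loaded into the running Apache?
        apache2ctl -t -D DUMP_MODULES | grep -i passenger

        # Run the status tool that ships with the same gem the module was built from:
        sudo /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7/bin/passenger-status

        # Request one of the Rails sub-URIs and re-check; if the page is served but the
        # counters stay empty, the Apache doing the serving may be the other installation
        # mentioned in the logs, not the one this module was configured into.
        curl -s http://my.server.name/siteA/ > /dev/null && sudo passenger-status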

    Read the article

  • How to install ceph on EC2 Amazon Linux AMI

    - by takaomag
    I want to test Ceph (a distributed network storage and file system) on some EC2 hosts derived from the Amazon Linux AMI (amzn-ami-2011.09.2.x86_64-ebs). The kernel version is 3.2 and btrfs is enabled, but the kernel config options related to Ceph (CONFIG_CEPH_FS and CONFIG_BLK_DEV_RBD) seem to be disabled. Do I have to build a new kernel and register it with Amazon, or does someone know an easier way?
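
    Worth noting as a hedge before rebuilding anything: CONFIG_CEPH_FS and CONFIG_BLK_DEV_RBD only matter for the kernel clients (mounting CephFS or mapping RBD devices); the Ceph daemons themselves and ceph-fuse run entirely in user space. A quick check of what the running AMI kernel actually provides, assuming the usual /boot/config layout:

        # Are the options built in (=y), modules (=m), or absent?
        grep -E 'CONFIG_CEPH_FS|CONFIG_CEPH_LIB|CONFIG_BLK_DEV_RBD' /boot/config-$(uname -r)

        # If they show up as =m, the modules just need loading:
        sudo modprobe ceph && sudo modprobe rbd

        # If they are absent, the userspace route still works without a rebuilt kernel:
        # run the MON/OSD/MDS daemons normally and mount the filesystem with ceph-fuse
        # (<monitor-host> is a placeholder for one of your monitor nodes).
        sudo ceph-fuse -m <monitor-host>:6789 /mnt/ceph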

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do you people think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euro for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services, and Amazon doesn't charge for stopped instances. I was wondering: "Should I have known this beforehand, or is Microsoft doing a bad job of communicating the pricing model clearly?"

    Quote from http://www.microsoft.com/windowsazure/pricing/ : "Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours."

    I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just "stopped" them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only uses (read: should only use / needs only) a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for instances that are not running... well, that's Azure's choice, not mine. I don't want to start a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be clearer about their pricing model, or do you think it's clear enough? Second question: did anyone get refunded for a similar case? Thanks in advance!

    UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess that didn't reach any human being because I haven't heard anything back. So I made a telephone call today to a Dutch customer support representative (I live in Holland). She totally understood the problem and is trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem.

    UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been made yet, and my credit card will first be charged, after which it will be refunded, but hey, that's no problem for me! I'm glad with the way it was solved. Thanks everybody!

    Read the article

  • Should Windows Multipoint Server stations on individual video cards support hardware video acceleration?

    - by villares
    I've set up a test machine with multiple PCI-e nVidia GF440 video cards and installed Windows MultiPoint Server 2011. I use the same kind of hardware setup with a BeTwin multiseat solution to run a class lab for teaching Google SketchUp (highly OpenGL-dependent) and it works OK. On the MultiPoint test machine the drivers seem to be installed correctly, but I don't seem to get any hardware video acceleration. Is this an intrinsic limitation of this solution, or am I doing something wrong?

    Read the article

  • Bitnami stacks compatible with Ubuntu 11?

    - by pthesis
    Has anyone successfully installed BitNami native stacks on Ubuntu 11? After changing the .bin files' permissions to make them executable, I get the following errors in the terminal. For the DokuWiki stack:

        bitnami-dokuwiki-2011-05-25a-0-linux-x64-installer.bin: 1: Syntax error: Unterminated quoted string

    and for the LAMP stack:

        bitnami-lampstack-3.0.6-0-linux-x64-installer.bin: 1: Syntax error: Unterminated quoted string
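
    That particular message is what a POSIX shell prints when it ends up interpreting the binary payload as a script, which typically happens when the kernel can't execute the binary itself, e.g. an x64 installer on a 32-bit Ubuntu, a truncated download, or the file being invoked via sh rather than directly. A couple of hedged checks before anything else:

        # The -x64 installers need a 64-bit system; i686 here means you want the x86 build instead.
        uname -m

        # Confirm the download isn't truncated or corrupted (compare against BitNami's published size/checksum).
        ls -l bitnami-lampstack-3.0.6-0-linux-x64-installer.bin
        md5sum bitnami-lampstack-3.0.6-0-linux-x64-installer.bin

        # Run it directly rather than through "sh installer.bin", which parses the binary as a script.
        chmod +x bitnami-lampstack-3.0.6-0-linux-x64-installer.bin
        ./bitnami-lampstack-3.0.6-0-linux-x64-installer.bin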

    Read the article

  • ffmpeg hangs when creating a video

    - by FearUs
    I am trying to insert an audio channel with a video: first of all I extract the audio from the original video for processing: ffmpeg -i lotr.mp4 lotr.wav I then extract all frames for later processing too: ffmpeg -i lotr.mp4 -f image2 %d.jpg When done processing audio and video streams, I try to create the video ffmpeg -f image2 -r 15 -i %d.jpg new.mp4 then merge with the audio: ffmpeg -i new.mp4 -i lotr.wav -map 0:0 -map 1:0 new_w_audio.mp4 Result: CPU activity = 100%, the process hangs and never returns. PS: I even tried it without modifying the images or the audio (so just trying to unpack then repack the video) but still the same output FFmpeg version SVN-r26400, Copyright (c) 2000-2011 the FFmpeg developers built on Jan 18 2011 04:07:05 with gcc 4.4.2 configuration: --enable-gpl --enable-version3 --enable-libgsm --enable-libvorb is --enable-libtheora --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libschroedinger --enable-libopencore_amrwb --enable-libopencore_amrnb --enable-libvpx --disable-decoder=libvpx --arch=x86 --enable-runtime-cpudetect - -enable-libxvid --enable-libx264 --enable-librtmp --extra-libs='-lrtmp -lpolarss l -lws2_32 -lwinmm' --target-os=mingw32 --enable-avisynth --enable-w32threads -- cross-prefix=i686-mingw32- --cc='ccache i686-mingw32-gcc' --enable-memalign-hack libavutil 50.36. 0 / 50.36. 0 libavcore 0.16. 1 / 0.16. 1 libavcodec 52.108. 0 / 52.108. 0 libavformat 52.93. 0 / 52.93. 0 libavdevice 52. 2. 3 / 52. 2. 3 libavfilter 1.74. 0 / 1.74. 0 libswscale 0.12. 0 / 0.12. 0 Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'new.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf52.93.0 Duration: 00:00:29.66, start: 0.000000, bitrate: 193 kb/s Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], 192 k b/s, 15 fps, 15 tbr, 15 tbn, 15 tbc Metadata: creation_time : 1970-01-01 00:00:00 [wav @ 01fed010] max_analyze_duration reached Input #1, wav, from 'lotr.wav': Duration: 00:00:29.90, bitrate: 176 kb/s Stream #1.0: Audio: pcm_s16le, 11025 Hz, 1 channels, s16, 176 kb/s File 'new_w_audio.mp4' already exists. Overwrite ? [y/N] y [buffer @ 01b03820] w:200 h:134 pixfmt:yuv420p Output #0, mp4, to 'new_w_audio.mp4': Metadata: major_brand : isom minor_version : 512 compatible_brands: isomiso2mp41 creation_time : 1970-01-01 00:00:00 encoder : Lavf52.93.0 Stream #0.0(und): Video: mpeg4, yuv420p, 200x134 [PAR 1:1 DAR 100:67], q=2-3 1, 200 kb/s, 15 tbn, 15 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1: Audio: aac, 11025 Hz, 1 channels, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #1.0 -> #0.1 Press [q] to stop encoding
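
    One thing that may be worth trying, as a hedged suggestion since the build above hangs rather than erroring out, is to skip the intermediate new.mp4 and mux the frames and the processed audio in a single pass, giving the audio an explicit codec and a more conventional sample rate than 11025 Hz mono; -shortest keeps the output from waiting on the longer stream. Whether this avoids the hang depends on where that build is actually getting stuck.

        # Single pass: image sequence + processed audio straight to the final file.
        ffmpeg -f image2 -r 15 -i %d.jpg -i lotr.wav \
               -vcodec mpeg4 -r 15 \
               -acodec libmp3lame -ar 44100 \
               -shortest new_w_audio.mp4

        # If the target players refuse MP3-in-MP4, the era-appropriate AAC route was:
        #   -acodec aac -strict experimental -ar 44100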

    Read the article

  • Powerline Networking messages

    - by Griffo
    I happened to fire up Console.app today and found lots of these messages in the log:

        13/12/2011 22:54:09.291 de.devolo.networkservice: [pktifc] write: Device not configured

    They are being generated 8 times in every 5-second interval. My network is set up as shown in the attached diagram: two Devolo powerline network adapters connect the wireless access point upstairs to my modem/wifi access point/DHCP server downstairs. I want to know what these messages mean, whether they are a problem, and if so, how to solve it.

    Read the article

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up Nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but they get redirected to an http request with a trailing slash, which ends in connection refused; I only want these services to be available through https. So if a link to https://example.com/internal/errorlogs is loaded in a browser, it gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads. I've tried various trailing forward slashes appended to the proxy_pass path and to location in proxy.conf to no effect, and have also added server_name_in_redirect off;. This happens with more than one app under nginx, and the same apps work behind an Apache reverse proxy. Config files:

    proxy.conf

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 443;
            server_name example.com;
            server_name_in_redirect off;
            include proxy.conf;
            ssl on;
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Proto https;

    curl output

        -$ curl -I -k https://example.com/internal/errorlogs/
        HTTP/1.1 200 OK
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:07 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 14327

        -$ curl -I -k https://example.com/internal/errorlogs
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:11 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 127
        Location: http://example.com/internal/errorlogs/
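
    Judging from the curl output, the 301 with the http:// Location is coming back from the backend: it redirects directory requests to add the slash and doesn't know the original request was https. Since proxy.inc currently sets proxy_redirect off, one hedged thing to try is letting nginx rewrite the scheme in Location headers on the way back out; the exact replacement form may need adjusting to the backend's hostname.

        # Remove "proxy_redirect off;" from proxy.inc (or override it here), then:
        location /internal {
            proxy_pass     http://localhost:8081/internal;
            include        proxy.inc;
            # Rewrite backend redirects so the client stays on https.
            proxy_redirect http://$host/          https://$host/;
            proxy_redirect http://localhost:8081/ https://$host/;
        }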

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches tpiphem for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }
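
    One thing the config above doesn't include is a health probe on the backend: grace only keeps serving stale objects usefully if Varnish can tell the backend is down, and the object has to still be in cache in the first place (anything that went through return(hit_for_pass) won't be). A minimal sketch of the documented grace-mode pattern, in Varnish 3 syntax with an assumed Node.js backend on port 8080:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";                 # assumed node.js port
            .probe = {
                .url       = "/";
                .interval  = 5s;
                .timeout   = 1s;
                .window    = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            if (req.backend.healthy) {
                set req.grace = 30s;        # small grace while the backend is up
            } else {
                set req.grace = 6h;         # accept stale content while it is down
            }
        }

        sub vcl_fetch {
            set beresp.ttl   = 10m;
            set beresp.grace = 6h;          # keep objects around long enough to cover outages
        }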

    Read the article

  • I just restarted Apache and now the server is down

    - by James
    I am pretty terrified right now. I'm scared I'm going to get a call in a couple minutes from a hundred people saying the website doesn't work. I was at the terminal changing some configuration files when I went to restart the server to update the .conf files with this command:

        /etc/init.d/apache2 graceful

    After I ran that, none of the websites work and I have no idea what to do. There are about 100 errors I am getting according to the log files. They all begin with "PHP Notice" and most relate to "use of undefined constant". Also, I just spoke with a coworker, describing what I did, and he noticed that there are two installations of Apache on the server and that I restarted the one that we don't use. This is what the error log says (assuming it's the correct error log):

        [Wed Jan 05 11:52:06 2011] [notice] Graceful restart requested, doing restart
        Warning: DocumentRoot [/u/apps/staging/antetr/current/public/] does not exist
        [Wed Jan 05 11:52:08 2011] [warn] NameVirtualHost *:80 has no VirtualHosts
        (98)Address already in use: make_sock: could not bind to address [::]:80
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs
        (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log.
        Unable to open logs

    FINAL UPDATE: Ok, I fixed it. The problem was (as you experts facepalming probably know) that it couldn't access an error log in the directory I was working in. I created an empty error log file and tried the restart command again and now all the sites are back up... Though my original problem is still there. Thanks to all those who offered advice, it really helped and let me breathe for a moment.
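
    For the next time a restart has to happen under pressure, a small hedged pre-flight habit helps: validate the config and see which vhosts and ports the binary would actually bind before touching the running service, and make sure every log path the config references exists (a missing log file is exactly what kept this instance down).

        # Check syntax and dump the vhost/port layout of the Apache you are about to restart:
        apache2ctl -t
        apache2ctl -S

        # Create any log paths the config references but that don't exist yet
        # (path taken from the error above):
        mkdir -p /u/apps/production/madfilmdash/current/log
        touch /u/apps/production/madfilmdash/current/log/apache-error.log

        # Only then:
        /etc/init.d/apache2 graceful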

    Read the article

  • Gnome 3 release date

    - by extropy
    Is there a timeline for Gnome 3 development? When do we get our hands on the first beta builds? The only thing I know is that Gnome 3 is planned to take the place of Gnome 2.30. The last Gnome release was 2.26 in March 2009; sticking to six-month releases, 2.30 should arrive in March 2010. Is there some other date?

    Read the article

  • App to log network user's file activities

    - by Luke
    My employer recently purchased a Mac Pro server (2011 model) and we've installed OS X Lion Server from the Mac App Store. I'm definitely no server admin, but I have been tasked with finding out the following: Can we monitor or log what users do (i.e., open a file, copy a file, or other file-related tasks)? Can we stop files from being copied, or be alerted when this occurs? Thanks in advance for any assistance.
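
    For the logging half of the question, OS X ships the OpenBSM audit framework, which can record file reads, writes, creates and deletes per user; it doesn't prevent copying, only records it. A hedged sketch of turning on the file-access classes (the exact class list is an assumption to adjust, and broad flags can generate a lot of trail data):

        # /etc/security/audit_control -- add file read/write/create/delete classes
        # to the default "flags" line, e.g.:
        #   flags:lo,aa,fr,fw,fc,fd

        # Tell auditd to re-read its configuration:
        sudo audit -s

        # Inspect the active trail in readable form:
        sudo praudit /var/audit/current | less

        # Or watch events live:
        sudo praudit /dev/auditpipe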

    Read the article

  • Secret, unlogged, transparent, case-sensitive proxy in IIS6?

    - by Ian Boyd
    Does IIS have a secret, unlogged, transparent, case-sensitive proxy built into it? A file exists on the web-server: GET http://www.stackoverflow.com/javascript/ModifyQuoteArea.js HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-US User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.stackoverflow.com HTTP/1.1 200 OK Connection: Keep-Alive Content-Length: 29246 Date: Mon, 07 Mar 2011 14:20:07 GMT Content-Type: application/x-javascript ETag: "5a0a6178edacb1:1c51" Server: Microsoft-IIS/6.0 Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET ... Problem is that a changes made to the file will not get served, the old (i.e. February of last year) version keeps getting served: HTTP/1.1 200 OK Connection: Keep-Alive Content-Length: 29246 Date: Mon, 07 Mar 2011 14:23:07 GMT Content-Type: application/x-javascript ETag: "5a0a6178edacb1:1c51" Server: Microsoft-IIS/6.0 Last-Modified: Fri, 02 Tue 2010 17:03:32 GMT Accept-Ranges: bytes X-Powered-By: ASP.NET ... The same old file gets served, even though we've: renamed the file deleted the file restarted IIS The request for this file does not appear in the IIS logs (e.g. C:\WINNT\System32\LogFiles\W3SVC7\) And this only happens from the outside (i.e. the internet). If you issue the request locally on the server, then you will: - get the current file (file there) - 404 (file renamed) - 404 (file deleted) But if i change the case of the requested resource, i.e.: GET http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js HTTP/1.1 Accept: text/html, application/xhtml+xml, */* Accept-Language: en-US User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0) Accept-Encoding: gzip, deflate Connection: Keep-Alive Host: www.stackoverflow.com Note: MoDiFyQuOtEArEa.js verses ModifyQuoteArea.js Then i do get the proper file (or get the 404 as i expect if the file is renamed or deleted). But any subsequent changes to the file will not show up until i change the case of the file i'm asking for. Checking the IIS logs all indicate that the (internet) requests are all coming the correct client on the internet (i.e. not from some intermediate proxy). Since the file doesn't exist on the hard drive anymore, i conclude that there is a proxy. The requests serviced from this proxy are not logged in the IIS logs. The requests for new files are logged, and from the client IP, not a proxy IP. The proxy is case sensitivie. This does not sound like something Microsoft, or IIS, would do: - a transparent proxy - case-sensitivie - unlogged - surviving restarts of IIS - surviving in a cache for hours i can't believe that our customer's IIS is doing these things. i'm assuming there is some other transparent proxy in front of IIS. Or, does IIS have a transparent, unlogged, case-sensitive, memory based, proxy, that caches content for at least 7 hours? (Come Monday morning, IIS is serving the correct file, unlogged).
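
    A hedged way to gather more evidence that something is sitting in front of IIS is to compare the response headers for the lower-case and mixed-case URLs and look for anything a cache layer typically adds (Via, Age, X-Cache), and to repeat the request with caching disallowed. This doesn't identify the proxy, but it helps separate "IIS is serving stale content" from "an intermediary is".

        # Same resource, two spellings -- a case-sensitive cache treats them as different objects:
        curl -sI http://www.stackoverflow.com/javascript/ModifyQuoteArea.js  | grep -iE '^(via|age|x-cache|last-modified|etag)'
        curl -sI http://www.stackoverflow.com/javascript/MoDiFyQuOtEArEa.js | grep -iE '^(via|age|x-cache|last-modified|etag)'

        # Ask any intermediary to revalidate instead of serving from cache:
        curl -sI -H "Pragma: no-cache" -H "Cache-Control: no-cache" \
             http://www.stackoverflow.com/javascript/ModifyQuoteArea.js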

    Read the article

  • How to antialias trendline in Excel?

    - by user23122
    I have created a couple of line charts in Excel 2011 for Mac. The actual data line looks good and is antialiased in a nice way but when I then add a trendline it is jagged and ugly: I have tried "all" options available under Format Trendline, to no avail. There is an option "Soft Edges" but it doesn't seem to work as expected: when I increase the value there the trend line gets more and more narrow until it disappears.

    Read the article

  • Comparing columns in Excel

    - by Regan
    I need to take columns A, B, and C and compare them with D, E, and F. Here's an example:

        A     B       C   D     E      F
        Jump  Smith   5   Jump  Smith  8
        Run   Naylor  2   Swim  Fran   4
        Swim  Fran    7   Jog   Dylan  1
        Jump  Fran    3   Jog   Smith  4

    So I want to match columns A and B against D and E, but still keep both numbers: C for 2011 and F for 2012. Can anyone please help with that formula? My data runs from A3-C4344 and D3 - D4470.
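
    A hedged sketch of one way to do the lookup (assuming Excel 2007 or later for SUMIFS, and that each activity/name pair appears at most once per year): put this in G3 and fill down alongside the 2011 rows to pull the matching 2012 number from column F, with 0 meaning no match was found.

        =SUMIFS($F$3:$F$4470, $D$3:$D$4470, A3, $E$3:$E$4470, B3)

    An INDEX/MATCH array formula, =INDEX($F$3:$F$4470, MATCH(1, ($D$3:$D$4470=A3)*($E$3:$E$4470=B3), 0)) entered with Ctrl+Shift+Enter, does the same job and distinguishes "no match" from a genuine 0, at the cost of being slower over thousands of rows.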

    Read the article

  • Problem connecting to SSH in office network

    - by Jeune
    I have trouble connecting via SSH to a server whenever I am in the office. I get as far as being prompted for my password and then after that there's a long wait which always ends in a Write failed: Broken pipe This is only for connecting via SSH. I use svn to commit files to a repository hosted on the same server and there are no hitches. Furthermore, this only happens in our office. When I go the university or whenever I am at home or at the coffee shop I am able to connect seamlessly. There are no firewalls in our office. It's just a basic wireless router connected to a modem setup. It's the same setup I have at home and I guess the same setup in the coffee shop. What are the causes for a broken pipe and why does this phenomenon only happen when I try connect via SSH and not when I work with svn on the same server? Updated: Some debug logs after authentication: debug3: packet_send2: adding 48 (len 64 padlen 16 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env SSH_AGENT_PID debug3: Ignored env TERM debug3: Ignored env SHELL debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env GTK_MODULES debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env LIBGL_DRIVERS_PATH debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env DEFAULTS_PATH debug3: Ignored env SESSION_MANAGER debug3: Ignored env USERNAME debug3: Ignored env XDG_CONFIG_DIRS debug3: Ignored env DESKTOP_SESSION debug3: Ignored env LIBGL_ALWAYS_INDIRECT debug3: Ignored env PATH debug3: Ignored env PWD debug3: Ignored env GDM_KEYBOARD_LAYOUT debug1: Sending env LANG = en_PH.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env GNOME_KEYRING_PID debug3: Ignored env MANDATORY_PATH debug3: Ignored env GDM_LANG debug3: Ignored env GDMSESSION debug3: Ignored env SHLVL debug3: Ignored env HOME debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env XDG_DATA_DIRS debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env DISPLAY debug3: Ignored env LESSCLOSE debug3: Ignored env XAUTHORITY debug3: Ignored env COLORTERM debug3: Ignored env OLDPWD debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 UPDATE 2011-14-07: I am able to connect to the server via SSH now. I didn't do anything but that's because there is no one in the office but me! Having said that, is it possible that it has something to do with the number of sessions an SSH server can handle? UPDATE 2011-14-07: I try to login via SSH through Putty on another machine running windows together with my current SSH session in Ubuntu and now it seems my SSH session in Ubuntu has been dropped. I can't type into the terminal. Is Putty the culprit now?
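
    Two things that are cheap to rule out when SSH dies right after authentication on one particular network while svn against the same server works: an idle/NAT timeout on the office router killing the interactive session, and a path-MTU problem that only bites once the encrypted session traffic starts. A hedged sketch of both checks; hostnames and sizes are placeholders.

        # ~/.ssh/config -- application-level keepalives so the office router
        # doesn't drop the NAT entry for the session
        Host myserver
            HostName server.example.com
            ServerAliveInterval 30
            ServerAliveCountMax 4
            TCPKeepAlive yes

        # Path-MTU check from the office network: 1472 payload + 28 bytes of headers = 1500.
        # If this fails but smaller sizes work, try lowering the interface MTU (e.g. 1400) and retry SSH.
        ping -M do -s 1472 server.example.com

        # Verbose client output often shows exactly where the session stalls:
        ssh -vvv myserver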

    Read the article
