Search Results

Search found 21908 results on 877 pages for 'content catalog'.


  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see the disclaimer below), but I don't want to inconvenience the 99.9% of folks who keep things above-board. My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25, and 110 at a minimum; my own email service uses 995 and 465 for SSL, some guests may use IMAP, and I personally use SSH and FTP, so I'll open those as well. Roughly, I figure I need to open access to the privileged ports and close 1024 and above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP at 1024 and above?

    Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
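
    A minimal sketch of the kind of outbound whitelist described above, assuming a Linux-based router; the LAN interface name and the exact port list are assumptions, not a recommendation:

        # Allow DNS, web, mail, SSH and FTP out of the guest LAN; reject everything else
        iptables -A FORWARD -i br-lan -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -i br-lan -p tcp -m multiport \
            --dports 21,22,25,53,80,110,143,443,465,587,993,995 -j ACCEPT
        iptables -A FORWARD -i br-lan -j REJECT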

    Read the article

  • How to set up Python with Lighttpd and FastCGI (like PHP)

    - by johndir
    Running Lighttpd on Linux, I would like to be able to execute Python scripts just the way I execute PHP scripts. The goal is to be able to execute arbitrary script files stored in the WWW directory, e.g. http://www.example.com/*.py. I would not like to spawn a new Python instance (interpreter) for every request (as plain CGI does, if I'm not mistaken), which is why I'm using FastCGI. Following Lighttpd's documentation, the FastCGI part of my config file is below. The problem is that it always runs the /usr/local/bin/python-fcgi script for every *.py file, regardless of the content of that file:

        http://www.example.com/script.py  [output=>]  "python-fcgi: test"  (regardless of the content of script.py)

    I'm not interested in using any framework, but simply in executing individual [web] scripts. How can I make it act like PHP, executing any script in the WWW directory by requesting its path?

    /etc/lighttpd/conf.d/fastcgi.conf:

        server.modules += ( "mod_fastcgi" )
        index-file.names += ( "index.php" )
        fastcgi.server = (
            ".php" => (
                "localhost" => (
                    "bin-path" => "/usr/bin/php-cgi",
                    "socket" => "/var/run/lighttpd/php-fastcgi.sock",
                    "max-procs" => 4, # default value
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "1", # default value
                    ),
                    "broken-scriptfilename" => "enable"
                )
            ),
            ".py" => (
                "python-fcgi" => (
                    "socket" => "/var/run/lighttpd/fastcgi.python.socket",
                    "bin-path" => "/usr/local/bin/python-fcgi",
                    "check-local" => "disable",
                    "max-procs" => 1,
                )
            )
        )

    /usr/local/bin/python-fcgi:

        #!/usr/bin/python2
        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['python-fcgi: test\n']

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()
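
    Since every *.py request is dispatched to the single FastCGI process, that process has to look at which script was actually requested. A minimal diagnostic sketch, assuming the config above stays in place: it echoes the path lighttpd hands over in SCRIPT_FILENAME (a standard CGI variable); the actual per-script dispatch logic would still have to be written on top of this.

        #!/usr/bin/python2
        # Hypothetical replacement body for /usr/local/bin/python-fcgi:
        # report which script file the request was mapped to.
        def myapp(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return ['SCRIPT_FILENAME: %s\n' % environ.get('SCRIPT_FILENAME', '(unset)')]

        if __name__ == '__main__':
            from flup.server.fcgi import WSGIServer
            WSGIServer(myapp).run()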

    Read the article

  • How to set Cache-Control to public in IIS 7.5

    - by ivymike
    I'm trying to set the Cache-Control header to max-age using the following snippet in my web.config:

        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />
          </staticContent>
        </system.webServer>

    Somehow this isn't being reflected in the response. Instead I see a Cache-Control: private header on the responses. I'm using the NancyFx framework (which is a layer on top of ASP.NET). Is there anything else I need to do? Below are the response headers I receive:

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: application/x-javascript
        Content-Encoding: gzip
        Last-Modified: Mon, 19 Mar 2012 16:42:03 GMT
        ETag: 8ced406593e38e7
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        Nancy-Version: 0.9.0.0
        Set-Cookie: NCSRF=AAEAAAD%2f%2f%2f%2f%2fAQAAAAAAAAAMAgAAADxOYW5jeSwgVmVyc2lvbj0wLjkuMC4wLCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPW51bGwFAQAAABhOYW5jeS5TZWN1cml0eS5Dc3JmVG9rZW4DAAAAHDxSYW5kb21CeXRlcz5rX19CYWNraW5nRmllbGQcPENyZWF0ZWREYXRlPmtfX0JhY2tpbmdGaWVsZBU8SG1hYz5rX19CYWNraW5nRmllbGQHAAcCDQICAAAACQMAAADTubwoldTOiAkEAAAADwMAAAAKAAAAAkpT5d9aTSzL3BAPBAAAACAAAAACPUCyrmSXQhkp%2bfrDz7lZa7O7ja%2fIg7HV9AW6RbPPRLYLAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA%3d; path=/; HttpOnly
        X-AspNet-Version: 4.0.30319
        Date: Tue, 20 Mar 2012 09:44:20 GMT
        Content-Length: 1624
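
    For what it's worth, the clientCache element under staticContent only affects responses served by the IIS static-file handler; anything generated by the Nancy handler carries whatever Cache-Control the application sets (ASP.NET defaults to private). A quick way to compare the two paths, with hypothetical file paths:

        # A file served by the static handler vs. one routed through Nancy (paths are placeholders)
        curl -sI http://localhost/content/site.css | grep -i '^cache-control'
        curl -sI http://localhost/scripts/app.js   | grep -i '^cache-control'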

    Read the article

  • Managing Linux Directory Permissions & SFTP

    - by Dizzle
    Good morning. I have a RHEL 5.7 web server configured to allow SSH/SFTP only by specific groups. I'd like content managers to upload content to their respective directories and have that content inherit the user/group ownership of the directory, regardless of upload method or application. For example: John is in group "web" for SSH/SFTP rights and "finance" for directory permissions, and uploads to directory "webstuff" via SFTP. Directory "webstuff" has permissions of "2760" (rwxrws---) and ownership of "apache:finance". If John uploads an update to an existing file in "webstuff", the ownership of the file stays at "apache:finance". If John uploads a new file to "webstuff", the ownership of the file is "john:finance". My desire is for any file John uploads to "webstuff" to take on the directory's owner. I've tried with setuid and setgid both set, but the user ownership didn't take. I've seen mentions on ServerFault of using ACLs, or a chrooted jail for SFTP, but I have yet to configure and test them, and I don't know if they're a viable solution (they could be, I just don't know because I've never done either). Any thoughts and assistance would be greatly appreciated.
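
    The group half of this is what the setgid bit and default ACLs handle; the user-ownership half generally cannot be inherited on Linux, since the setuid bit is ignored on directories. A sketch of the group-inheritance piece only, with the path, mode, and group name treated as placeholders taken from the example above:

        chgrp finance /path/to/webstuff
        chmod 2770 /path/to/webstuff                         # setgid: new files get group "finance"
        setfacl -R -m g:finance:rwX /path/to/webstuff        # existing content
        setfacl -R -d -m g:finance:rwX /path/to/webstuff     # default ACL applied to new content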

    Read the article

  • nginx and proxy_hide_header

    - by giskard
    When I curl a URL I get this answer back:

        < HTTP/1.1 200 OK
        < Server: nginx/0.7.65
        < Date: Thu, 04 Mar 2010 12:18:27 GMT
        < Content-Type: application/json
        < Connection: close
        < Expires: Thu, 04 Mar 2010 12:18:27 UTC
        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200
        < Transfer-Encoding: chunked

    I want to remove these headers:

        http.context.path: /1/
        jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        http.custom.headers: {Content-Type=text/plain}
        http.request.path: /2/messages/latest.json
        http.status: 200

    So I used the proxy_hide_header directive this way:

        location / {
            if ($arg_id) {
                proxy_pass http://authorized;
                break;
            }
            proxy_pass http://anonymous;
            proxy_hide_header http.context.path;
            proxy_hide_header jersey.response;
            proxy_hide_header http.request.path;
            proxy_hide_header http.status;
        }

    But it doesn't work. Any clues?

    Read the article

  • OpenSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing, and testing, a little tutorial for my groupmates involved in an OpenSSL homework. We have a bunch of PDF files; I'm the CA and each one should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it by myself):

    1. Request and obtain a certificate (I'll skip this part)
    2. Create a MIME message with the PDF file in it:

        makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg

    3. Sign with OpenSSL:

        openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt

    4. Verify with:

        openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt

    5. Extract the attachment with:

        munpack Elaborato-verified.msg

    6. View with Acrobat Reader

    The problem is that even if I get a file that (from its binary content) resembles a PDF file, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
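
    One variant worth trying (the cause is an assumption, not a confirmed diagnosis): signing a binary file without -binary lets S/MIME canonicalize line endings, which can change the PDF's bytes. A sketch that signs the PDF directly and skips makemime/munpack entirely, reusing the same key and certificate names:

        # Sign, keeping the PDF bytes untouched
        openssl smime -sign -binary -in Elaborato.pdf -out Elaborato.pdf.p7m \
            -certfile ca.pem -signer nomegruppo.crt -inkey nomegruppo.key

        # Verify and write the recovered PDF
        openssl smime -verify -in Elaborato.pdf.p7m -CAfile ca.pem -out Elaborato-verified.pdf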

    Read the article

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, out of intellectual curiosity. I have read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from requests by users all over the world?

    Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites?

    Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and JavaScript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.
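
    A rough way to poke at this from the command line (the host names are just the examples mentioned above, and a single vantage point cannot prove or disprove anycast; comparing results from several networks or from public looking-glass servers is what actually hints at it):

        dig +short wikipedia.org A              # the handful of A records DNS actually returns
        dig +short profile.ak.fbcdn.net         # CDN name; repeated lookups may rotate addresses
        traceroute wikipedia.org                # compare the path taken from different networks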

    Read the article

  • Can't access one directory via HTTPS + public FQDN

    - by Justin James
    Hello - I have the strangest IIS error that I've ever seen in my life. I have an application/directory on an IIS server that throws an error 500 when accessing ANY of the content in it, including HTML documents, when accessed via HTTPS AND the machine's FQDN. When I access it with "localhost" it works fine. When I added a bogus entry for the NIC's IP in the hosts file, it worked fine. When I access it with the machine's name and HTTP, it works fine. Here's a chart (the machine's name is "lofn.titaniumcrowbar.com"):

        http  - lofn.titaniumcrowbar.com: works
        https - lofn.titaniumcrowbar.com: broken
        https - localhost: works
        https - temp.titaniumcrowbar.com (put into hosts file): works

    I set up tracing, and I got some useless information: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)" This would make sense, except this happens when pulling up static content. While the directory may be an "application", the content in it is all static. Any/all suggestions, no matter how strange, are VERY appreciated. Thanks! J.Ja
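
    Two things sometimes worth checking when HTTPS behaves differently per host name (a sketch, not a diagnosis): which certificate binding HTTP.sys selects for each name, and what the site bindings actually are.

        netsh http show sslcert                              # SSL bindings as HTTP.sys sees them
        %windir%\system32\inetsrv\appcmd list sites          # site bindings, including the https ones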

    Read the article

  • My URL has been identified as a phishing site

    - by user2118559
    Some months ago I ordered a VPS at RamNode. Following this tutorial (ZPanelCP on CentOS 6.4), http://www.zvps.co.uk/zpanelcp/centos-6, I installed CentOS and ZPanel. Today I received this email:

        We are requesting that you secure and investigate the phishing website identified below. This URL has been identified as a phishing site and is currently involved in identity theft activities.
        URL: hxxp://111.11.111.111/www.connet-itunes.fr/iTunesConnect.woasp/
        //IP is modified (not real)
        This site is being used to display false or spoofed content in an apparent effort to steal personal and financial information. This matter is URGENT. We believe that individuals are being falsely directed to this page and may be persuaded into divulging personal information to a criminal, if the content is not immediately disabled.

    I'm trying to understand what happened. Did some hacker break into the VPS and place a file whose content redirects to www.connet-itunes.fr/iTunesConnect.woasp? Then my questions:

    1) How can I find the file? Where might it be located? The URL is hxxp://111.11.111.111/ (an IP address, not a domain name).

    2) What should I do to protect the VPS (with CentOS)? Any tutorial? Where might the security problem be? I mean, maybe someone has faced something similar...
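
    A rough starting point for question 1 (the paths are guesses at common web roots on a ZPanel/CentOS box, not known locations):

        # Look for the phishing content and for anything recently dropped into a web root
        grep -ril "connet-itunes" /var/www /home /etc/zpanel /var/zpanel 2>/dev/null
        find /var/www /home -type f -mtime -30 \( -name '*.php' -o -name '*.html' \) 2>/dev/null | head
        last -a | head            # recent logins, in case stolen credentials were used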

    Read the article

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them to my environment. The two important files are:

    /etc/apache/site-available/example.com:

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot "/var/www/sites/example.com/current/public"
          ErrorLog "/var/log/apache2/example.com-error_log"
          CustomLog "/var/log/apache2/example.com-access_log" common

          <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>

          <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
          </Directory>

          <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>

          RewriteEngine On
          # Remove the www
          RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
          RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess:

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]

        <FilesMatch \.css\.gz$>
          ForceType text/css
          Header set Content-Encoding gzip
        </FilesMatch>

        <FilesMatch \.js\.gz$>
          ForceType text/javascript
          Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files, because the test site loses all styles and Firebug doesn't find any content for the CSS files. Although if I call the assets path directly I get some gibberish that looks like binary data. If I move the .htaccess file, everything is back to normal. How could I find out where/what went wrong, or do you have any suggestions as to what error I made?

    System:

        > apache2 -v
        Server version: Apache/2.2.14 (Ubuntu)
        Server built: Mar 5 2012 16:42:17
        > uname -a
        Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
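
    A quick way to see what the rewrite actually serves for a precompiled asset (the asset file name is hypothetical; the interesting parts are the Content-Encoding/Content-Type headers and whether the gzip-negotiated body actually inflates to CSS):

        curl -sI -H 'Accept-Encoding: gzip' http://example.com/assets/application.css
        curl -s  -H 'Accept-Encoding: gzip' http://example.com/assets/application.css | gunzip | head
        curl -sI http://example.com/assets/application.css     # without gzip negotiation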

    Read the article

  • How to create a hash or yml file from the top-level attribute values of a node?

    - by Sarah Haskins
    I have a Chef recipe where I want to take all of the attributes under node['cfn']['environment'] and write them to a YAML file. I could do something like this (it works fine):

        content = {
          "environment_class" => node['cfn']['environment']['environment_class'],
          "node_id" => node['cfn']['environment']['node_id'],
          "reporting_prefix" => node['cfn']['environment']['reporting_prefix'],
          "cfn_signal_url" => node['cfn']['environment']['signal_url']
        }
        yml_string = YAML::dump(content)

        file "/etc/configuration/environment/platform.yml" do
          mode 0644
          action :create
          content "#{yml_string}"
        end

    But I don't like that I have to explicitly list out the names of the attributes. If I later add a new attribute, it would be nice if it were automatically included in the written-out YAML file. So I tried something like this:

        yml_string = node['cfn']['environment'].to_yaml

    But because the node is actually a Mash, I get a platform.yml file like this (it contains a lot of unexpected nesting that I don't want):

        --- !ruby/object:Chef::Node::Attribute
        normal:
          tags: []
          cfn:
            environment: &25793640
              reporting_prefix: Platform2
              signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/...
              environment_class: Dev
              node_id: i-908adf9
        ...

    But what I want is this:

        ---
        reporting_prefix: Platform2
        signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/...
        environment_class: Dev
        node_id: i-908adf9

    How can I achieve the desired YAML output without explicitly listing the attributes by name?
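
    A minimal sketch of one way this is often handled (an assumption to verify against the Chef version in use: that the attribute Mash supports to_hash, which converts just that subtree into a plain Hash before dumping):

        # Dump only the node['cfn']['environment'] subtree, whatever keys it holds
        env = node['cfn']['environment'].to_hash
        file "/etc/configuration/environment/platform.yml" do
          mode 0644
          action :create
          content YAML::dump(env)
        end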

    Read the article

  • Why doesn't Wireshark recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl, uses the CGI module, and specifies only the most basic headers:

        print $q->header(
            -type => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP; it's marked as TCP. Here are the request and response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No. Time     Source      srcp Destination dstp  Protocol Info                               tcp.stream abstime
        5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now when I see this in Wireshark:

    - there's the usual TCP handshake
    - then the GET request shown as HTTP with preview
    - then the next packet contains the response, but it is not marked as an HTTP response, just a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter.

    Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
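
    One thing to keep in mind, and a sketch for checking it (the capture file name is hypothetical, and older tshark versions use -R rather than -Y for the display filter): when Wireshark reassembles TCP streams, the "HTTP 200 OK" row is attributed to the last segment of the response body, so with a ~1 MB body the response line shows up hundreds of packets after the packet carrying the headers rather than on this one.

        tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE -R "http.response"
        tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE -o http.desegment_body:TRUE -R "http.response"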

    Read the article

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have 2 Hyper-V hosts at present, running 1 virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and use the backup server as a standby in case the main server fails. The AppAssure backup software has a feature called Virtual Standby, so the VHDs can be ready to be fired up on the backup server if necessary. Off-site backups will be done via external USB drive for now. I'm just seeking some input/suggestions on how I'm planning to split the roles out amongst various virtual servers. Also, I'm curious how to set up the storage on the servers. We do not have any NAS or SAN, or any budget for this. What would the best RAID level be to use? I'm thinking either RAID6 (which is currently used), though I'm concerned about the write speeds, or RAID10, but again I'm worried that I can only lose 1 drive (from the same mirror) as opposed to any 2 with RAID6. I realise I have a hot swap for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10? Or will it be too slow with all the roles I am planning, so that RAID10 is my only real option? The reason for the needed redundancy is that I am the only technician and I'm not always on-site.

    Options I've considered:

    1) 5 drives in a RAID6 set, 200GB for host OS, rest for VM storage; 1 drive for hot swap - this is how it is currently set up
    2) 4 drives in a RAID10 set, 200GB for host OS, rest for VM storage; 2 drives for hot swap
    3) 4 drives in a RAID10 set for VM storage, 2 drives in a RAID1 set for host OS; no drives for hot swap - while this is probably the best option with the amount of drives I have, I don't like the idea of having no hot swap
    4) 3 drives in a RAID6 set for VM storage, 2 drives in a RAID1 set for host OS; 1 drive for hot swap

    All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot-swap HD chassis for the servers. We have about 70 clients and about 150 users.

    MAIN SERVER: Intel Xeon 5520 @ 2.27 GHz (2 processors), 16GB RAM, 6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives, Intel SRCSATAWB RAID controller. Virtual machine workload using Hyper-V on Windows Server 2008 R2:

    - DC01 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM
    - DC02 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM
    - Member Server - DHCP server, file server, print server - 1GB RAM
    - SCCM Member Server - 4GB RAM
    - Third-Party Software Member Server - A/V server, ticketing software, etc. - 4GB RAM
    - Exchange 2007 - 4GB RAM - however we are probably migrating to a hosted solution, therefore freeing up resources

    BACKUP SERVER: Intel Xeon E5410 @ 2.33GHz (2 processors), 16GB RAM, 6 x 2TB WD RE4 SATA drives, Intel SRCSASRB RAID controller. Virtual machine workload using Hyper-V on Windows Server 2008 R2:

    - AppAssure backup software - 8GB RAM

    Read the article

  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use but still maintain security in the following scenario? The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper he uses Remote Desktop again to access the server. Finally, on the server, the user can access content using a WWW browser. Every step from the VPN client to the WWW browser requires authentication using a SmartCard token. This seems quite secure to me. Content only gets mirrored on the Remote Desktop session between server and jumper, so there are no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience accessing my server without compromising security? Thanks.

    Read the article

  • Bacula backup process always blocks the restore

    - by georgehu
    Every day we have a long-running catalog backup process, and I found there is no way to restore a file during the backup. So, is Bacula designed to block restores while a backup is running? I'm using disk backup, and I can't understand why I can't restore files from earlier written volumes, since the backup process is not supposed to be writing to the same volume file.

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the handler mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example. I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error message in the body of the response. When the server response actually comes back to me I get the following:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others where I can change or eliminate the X-Powered-By header element, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body and replaces it with "Bad Request", and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
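
    One thing that commonly causes exactly this substitution is IIS replacing error-status bodies with its own error page. A hedged web.config sketch that asks IIS to pass the handler's response through untouched; whether it covers the CGI path in this setup would need testing:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Keep the body produced by the script, even for 4xx/5xx statuses -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>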

    Read the article

  • Graphite not running

    - by River
    I'm currently trying to install Graphite 0.9.9 on a Gentoo box using these instructions from the Graphite wiki. Essentially, it fronts Graphite using Apache and mod_wsgi. Everything seems to have gone well, except that Apache / the Graphite webapp never seem to return a response to the web browser (the browser continuously waits to load the page). I've turned on the Graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up Graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the Graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
          ServerName # Server name
          DocumentRoot "/opt/graphite/webapp"
          ErrorLog /opt/graphite/storage/log/webapp/error.log
          CustomLog /opt/graphite/storage/log/webapp/access.log common

          # I've found that an equal number of processes & threads tends
          # to show the best performance for Graphite (ymmv).
          WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
          WSGIProcessGroup graphite
          WSGIApplicationGroup %{GLOBAL}
          WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

          WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

          Alias /content/ /opt/graphite/webapp/content/
          <Location "/content/">
            SetHandler None
          </Location>

          # XXX In order for the django admin site media to work you
          # must change @DJANGO_ROOT@ to be the path to your django
          # installation, which is probably something like:
          # /usr/lib/python2.6/site-packages/django
          Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
          <Location "/media/">
            SetHandler None
          </Location>

          # The graphite.wsgi file has to be accessible by apache. It won't
          # be visible to clients because of the DocumentRoot though.
          <Directory /opt/graphite/conf/>
            Order deny,allow
            Allow from all
          </Directory>
        </VirtualHost>
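
    Two quick things worth checking when a mod_wsgi daemon hangs without logging anything (paths taken from the vhost above; the socket-directory permission issue is a guess, not a known cause): the WSGISocketPrefix directory must exist and be writable by the Apache user, and the daemon's errors land in the vhost's ErrorLog.

        ls -ld /etc/httpd/wsgi/                               # must exist; apache needs to create sockets here
        tail -f /opt/graphite/storage/log/webapp/error.log    # mod_wsgi daemon errors for this vhost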

    Read the article

  • How to override puppet class arguments in child node?

    - by Jon Skarpeteig
    I'm attempting to accomplish something like the below:

        node 'basenode' {
          class { 'puppet':
            disable => false,
          }
        }

        node 'child' inherits 'basenode' {
          class { 'puppet':
            disable => true,
          }
        }

    This gives me:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Class[Puppet] is already defined

    How can I override this setting for this single node and still have a parameterised class?

    Read the article

  • Hybrid cable for QSFP to CX4 conversion

    - by John-ZFS
    Here is a hybrid cable for QSFP to CX4. Will this fit SFP+ ports? I'm deeply confused by the standards and stuck in a situation with the wrong hardware selection! Personally I have not seen the ports/hardware, hence the obviously stupid question. Thanks for stopping by and bearing with me. http://www.cablesondemand.com/pcategory/72/category/QSFP+-+CX4/URvars/Catalog/Library/InfoManage/QSFP_TO_CX4_COPPER_CABLES.htm

    Read the article

  • 'Can't convert nil into String' error upon Puppet run

    - by Adrian
    When attempting to use modules copied into the Puppet modules directory, my Puppet client returns "Could not retrieve catalog from remote server: Error 400 on SERVER: can't convert nil into String" errors when connecting to the Puppet master server.

        [root@puppetmaster modules]# rpm -qa *puppet*
        puppet-2.7.18-1.el6.noarch
        puppet-server-2.7.18-1.el6.noarch
        [root@puppetmaster modules]# uname -sr
        Linux 2.6.32-279.el6.x86_64

    The code all checks out and is valid. SELinux is turned on.
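
    Two server-side checks that sometimes narrow this down (module and node names below are placeholders): validate the copied manifests' syntax, and compile the catalog on the master so the failing manifest shows up in the backtrace.

        puppet parser validate /etc/puppet/modules/mymodule/manifests/init.pp   # "mymodule" is a placeholder
        puppet master --compile client.example.com --trace                      # node name is a placeholder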

    Read the article

  • Some emails incoming to Outlook 2007 are blank; the same emails work fine on webmail, iPhone, etc.

    - by Funran
    This is a pretty easy problem to describe. Basically, users who have just been upgraded to Outlook 2007 (yeah, I know 2010 is out) are not receiving SOME emails (from outside our domain, i.e. Hotmail, Yahoo). "Receiving" is not the correct word: these emails come in, along with their attachments, subjects, to/from lines, etc., but the body is blank. If the same user goes into their webmail, iPhone, or BlackBerry instead, they can read the message fine. It's clear to me that something in Outlook 2007 is not generating the body correctly, so it just strips it. I just don't know WHY. Our mail server was recently upgraded to Exchange 2010; users on 2010 running Outlook 2003 are working fine, it's just the random emails for users using 2007. I hope I made that clear enough. Thank you for any future help, guys.

    EDIT: I don't see RTF, but I swear I've seen it before. Here is the view source on a recent email:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"><html><head>
        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
        <meta name="GENERATOR" content="MSHTML 8.00.6001.19120">
        <DEFANGED_style_0 <="" style=""> </head>
        <body bgcolor="#ffffff">
        <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">MS,</font></p><DEFANGED_DIV>
        <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">Could you tell me please what the legal descrip &amp; Topo Quad name is for this Monroe P.ID Site?</font></p><DEFANGED_DIV>
        <p><DEFANGED_DIV><em><font color="#0000ff" size="2" face="Calibri">Thanks, Henry Roye</font></em></p><DEFANGED_DIV></body></html>

    Read the article

  • Force encoding with IIS 7

    - by Cédric Boivin
    I'm trying to force encoding with IIS 7. When I add, in the HTTP response headers, the key Content-Type with the value charset=utf-8, I get this header: content-type: text/html,content-type=utf-8. Is there a way to remove the comma?

    Thanks Justin for your answer, but it seems it doesn't work. Here is my config; I need to do this for Classic ASP:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <remove fileExtension=".html" />
              <remove fileExtension=".hxt" />
              <remove fileExtension=".htm" />
              <remove fileExtension=".asp" />
              <mimeMap fileExtension=".htm" mimeType="text/html" />
              <mimeMap fileExtension=".hxt" mimeType="text/html" />
              <mimeMap fileExtension=".html" mimeType="text/html" />
              <mimeMap fileExtension=".asp" mimeType="text/html; charset=UTF-8" />
            </staticContent>
          </system.webServer>
        </configuration>
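
    For Classic ASP specifically, the charset can also be set per page rather than through a custom header; a sketch, and whether it fits this setup is an assumption:

        <%@ Language="VBScript" CodePage=65001 %>
        <%
          Response.CodePage = 65001        ' treat the page's output as UTF-8
          Response.Charset  = "UTF-8"      ' emits Content-Type: text/html; charset=UTF-8
        %>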

    Read the article

  • Nginx server 301 Moved permanently

    - by user145714
    When I do a curl -v http://site-wordpress.com:81 I receive this result:

        About to connect() to site-wordpress.com port 81 (#0)
        Trying ip... connected
        Connected to site-wordpress.com (ip) port 81 (#0)
        GET / HTTP/1.1
        User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        Host: site-wordpress.com:81
        Accept: */*

        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    It seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK, but I get a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
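
    One thing worth ruling out (an assumption, not a confirmed cause): WordPress itself issues a 301 canonical redirect when the request's host/port doesn't match its configured site URL, and a Location header that only appears once index.php is actually executed would fit that. A wp-config.php sketch pinning the URL to the non-standard port:

        // wp-config.php - make WordPress expect the :81 URL it is actually served on
        define('WP_HOME',    'http://site-wordpress.com:81');
        define('WP_SITEURL', 'http://site-wordpress.com:81');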

    Read the article

  • SQL Connection String to access localhost\SQLEXPRESS

    - by user34683
    I've installed SQL Server Express on my PC, hoping to practice creating tables and then modifying them. I coded a web page in Visual Studio to, basically, SELECT * from a table in the SQLEXPRESS instance, but I can never get the connection string to work. Please help.

    My connection string:

        Data Source=localhost\SQLEXPRESS;Initial Catalog=test;User Id=xaa9-PC\xaa9;Password=abcd;

    The query is:

        select * from tblCustomers where username='johndoe'

    The error is:

        Login failed for user 'xaa9-PC\xaa9'.
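
    'xaa9-PC\xaa9' looks like a Windows account, while User Id/Password in a connection string means SQL authentication, which needs an actual SQL login (and mixed-mode authentication enabled on the instance). Two hedged variants, assuming the database "test" exists on the named instance:

        Windows (integrated) authentication - uses the account the web app runs as:
            Data Source=localhost\SQLEXPRESS;Initial Catalog=test;Integrated Security=True;

        SQL authentication - "appuser" is a hypothetical SQL login created on the instance:
            Data Source=localhost\SQLEXPRESS;Initial Catalog=test;User Id=appuser;Password=abcd;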

    Read the article
