Search Results

Search found 3554 results on 143 pages for 'django 1 1'.

Page 132/143

  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a Django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a Python subprocess, reading the stdout into a StringIO and then parsing it for things like Registrar, Name Servers, etc. My question is: there seem to be many paid whois services, which suggests there must be a reason people don't just use this Ubuntu package. Is there a limit on the number of requests a single IP can make to the package's whois server? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable:

        qmul.ac.uk is searchable
        kat.ph is not searchable
        ahram.org.eg is not searchable

    Any particular reason for that?
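
    A minimal sketch of the subprocess approach described above (the parsed field names are only examples; whois is the stock Ubuntu client):

        import subprocess

        def whois_lookup(domain):
            """Run the system whois binary and pull a few fields out of its output."""
            proc = subprocess.Popen(['whois', domain],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, _ = proc.communicate()
            record = {}
            for line in out.decode('utf-8', 'replace').splitlines():
                if ':' not in line:
                    continue
                key, value = line.split(':', 1)
                key = key.strip()
                if key in ('Registrar', 'Name Server', 'Creation Date'):
                    record.setdefault(key, []).append(value.strip())
            return record

        print(whois_lookup('qmul.ac.uk'))

    Note that the whois package is only a client: any rate limits come from the registry servers it queries, and domains it can't find are often under TLDs whose registries don't expose a standard port-43 whois server, which is presumably part of what the paid services smooth over.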

    Read the article

  • How to install python modules for specific python version

    - by Zayatzz
    I needed to install a UCS2 build of Python alongside my UCS4 Python, so I asked about it on comp.lang.python (probably not the best place to ask, but they answered: https://groups.google.com/forum/?fromgroups#!topic/comp.lang.python/bGuAfqa76W8), and now I have a brand new Python 2.7.3 UCS2 build installed as /opt/bin/python. What I need now is a way to install, for that Python version, all the other modules I already have installed - stuff like PIL, the PostgreSQL bindings and mod_wsgi, basically everything needed to run Django under that interpreter. Is this the right place to ask?

    Read the article

  • How can I make Amazon SES the default method of sending mail from my server?

    - by Jake
    I'd like to start using Amazon SES for all emails from our server. We have a few freelance designers with PHP hosting, some Django/Python web apps and also some system utilities which send emails. So I'd like PHP's mail function, the command-line mail command and our Python apps all to be able to use it, preferably without having to configure each of them separately. I think what I need is something like Postfix running on localhost and using SES for its delivery, but I don't know how to do that. Amazon's docs state I need to set up my mail transfer agent (MTA) so that it invokes the ses-send-email.pl script. I have the script but I'm not sure how to achieve this. Am I on the right track? If so, how can I configure Postfix to use that script?
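
    For the Django apps specifically, one sketch of a route that bypasses the local MTA entirely is pointing Django's mailer at the SES SMTP interface; the endpoint, region and credentials below are placeholders, and PHP's mail() and the command-line mail command would still need a relaying MTA such as Postfix:

        # settings.py - route Django's outgoing mail straight to Amazon SES
        EMAIL_HOST = 'email-smtp.us-east-1.amazonaws.com'   # assumed region/endpoint
        EMAIL_PORT = 587
        EMAIL_USE_TLS = True
        EMAIL_HOST_USER = 'YOUR-SES-SMTP-USERNAME'          # SES SMTP credentials,
        EMAIL_HOST_PASSWORD = 'YOUR-SES-SMTP-PASSWORD'      # not your AWS keys
        DEFAULT_FROM_EMAIL = 'noreply@example.com'          # must be a verified SES sender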

    Read the article

  • osx 10.6.3 how does apache config work?

    - by w-
    Hi, I just got a MacBook Pro 15", so I'm unfamiliar with how the filesystem is laid out. I noticed that I've got a few paths specifying httpd.conf:

        /etc/apache2/httpd.conf
        /opt/local/apache2/conf/httpd.conf
        /private/etc/apache2/httpd.conf

    The confs differ in lots of ways - user, group, server_root, which modules are loaded, etc. - and the apache2 folders themselves also differ greatly. It seems that the one being used is either /etc/apache2/httpd.conf or /private/etc/apache2/httpd.conf. I'm wondering if I might have messed up my system after installing some packages (php5, django, etc.) via MacPorts and ended up with two Apache instances. My questions are:

        - Which httpd.conf is the one being used?
        - What are the other files for?

    Thanks
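
    One quick way to check which config the running Apache reads is to ask the binary itself; a small sketch, assuming the apachectl on your PATH is the copy actually serving requests (the MacPorts install ships its own under /opt/local/apache2/bin):

        import subprocess

        # "apachectl -V" prints the compiled-in defaults, including which
        # httpd.conf the binary reads (SERVER_CONFIG_FILE, relative to HTTPD_ROOT).
        out = subprocess.check_output(['apachectl', '-V']).decode()
        for line in out.splitlines():
            if 'HTTPD_ROOT' in line or 'SERVER_CONFIG_FILE' in line:
                print(line.strip())

    Note also that on OS X /etc is a symlink to /private/etc, so two of the three httpd.conf paths above are actually the same file.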

    Read the article

  • Limit which processes a user can restart with supervisor?

    - by dvcolgan
    I have used supervisor to manage a Gunicorn process running a Django site, though this question could pertain to anything being managed by supervisor. Previously I was the only person managing and using our server, and supervisor just ran as root and I would use sudo to run supervisorctl restart myapp when needed. Now our server has to support multiple users working on different sites, and each project needs to be able to restart their own gunicorn processes without being able to restart other users' processes. I followed this blog post: http://drumcoder.co.uk/blog/2010/nov/24/running-supervisorctl-non-root/ and was able to allow non-root users to use supervisorctl, but now anyone can restart anyone else's processes. From the looks of it, supervisor doesn't have a way of doing per-user access control. Anyone have any ideas on how to allow users to restart only their own processes without root?

    Read the article

  • How do I get started with Chef?

    - by Brad Wright
    The Chef documentation is pretty bad, and Google isn't helping me. Can anyone point me at a decent article or something that would help me get started? My specific issues are:

        - How do I get a client to read my configuration? chef-solo seems like the best start (I don't want to run an OpenID server or Merb).
        - How do I configure Apache to serve Django? I already know how to do this via regular server configuration, but I figure an example Chef recipe would be a good start.

    Read the article

  • I'd like to rebuild my web server without web management software; what knowledge, skills, and tools will I require? [closed]

    - by Joe Zeng
    I've been using Webmin for a while on the web server that runs my personal website and a host of other sites, but I feel I should be able to manage the server more directly: I haven't even touched Webmin for the past year or so, and it has a lot of functionality I'd have to click through the next time I want to create a new subdomain or database. I want to try something lighter and more wholly manageable, now that I'm more comfortable with ssh and command-line tools. I've decided to try Django as a framework, but obviously that's only part of the picture. What sort of knowledge will I require?

    Read the article

  • Screen startup apps

    - by stillinbeta
    I know that most people don't bother with things like screen anymore, but I happen to really like it, even in this GUI day and age. I still do most of my development from a bash prompt, so it's extremely useful to me. What I'm wondering is the easiest way to start an instance of screen (from a shell script, a .screenrc, or somewhere else) so that it comes up with set commands already running in set windows. For example, I use the Django test server, so I'd like one window to come up running "python manage.py runserver" and another blank, waiting for commands. The man page is wholly indecipherable. These old Unix utilities can do quite nearly everything, so I'm sure this is possible, but I can't for the life of me figure out how.

    Read the article

  • Apache whitelist a single location, but require basic auth for everything else

    - by Chris Lawlor
    I'm sure this is simple, but Google is not my friend this morning. The goal is: /public... is openly accessible; everything else (including /) requires basic auth. This is a WSGI app with a single WSGI script (it's a Django site, if that matters). I have this:

        <Location /public>
            Order deny,allow
            Allow from all
        </Location>

        <Directory />
            AuthType Basic
            AuthName "My Test Server"
            AuthUserFile /path/to/.htpasswd
            Require valid-user
        </Directory>

    With this configuration, basic auth works fine, but the Location directive is totally ignored. I'm not surprised: according to the Apache docs (see "How the Sections are Merged"), the Directory directive is processed first. I'm sure I'm missing something, but Directory applies to a filesystem location, I really only have the one Directory at /, and it's a Location that I wish to allow access to - yet Directory always overrides Location.

    EDIT: I'm using Apache 2.2, which doesn't support AuthType None.

    Read the article

  • WSGI - narrow user permissions

    - by Tomasz Wysocki
    I have the following Apache configuration and my application is working fine:

        <VirtualHost *:80>
            ServerName ig-test.example.com
            WSGIScriptAlias / /home/ig-test/src/repository/django.wsgi
            WSGIDaemonProcess ig-test user=ig-test
        </VirtualHost>

    But I want to protect my files from other users, so I do:

        chown ig-test /home/ig-test/ -R
        chmod og-rwx /home/ig-test/ -R

    and the application stops working:

        (13)Permission denied: /home/ig-test/.htaccess
        pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is it possible to achieve what I'm doing with WSGI? If I have to give read permissions to some files, that's fine - but there are files I have to protect (like the file with the DB configuration, or the business logic of the application).

    Read the article

  • Apache+FastCGI Timeout Problem

    - by Sadjad Fouladi
    Hi all. I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script (test.fcgi):

        #!/bin/sh
        echo sadjad

    But when I invoke "mysite.com/test.fcgi" I see an "Internal Server Error" message after a short period of time. The error.log file shows this error message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    I'm very confused - please help me! (Sorry for my poor English!)

    Read the article

  • Installing the http_ssl_module on a running NGINX server

    - by Rob
    Hi, new to NGINX - we inherited a project that runs Django/FCGI/NGINX on a hosted RHEL box. A requirement has come in that the site now needs SSL enabled, and the client was pretty sure the person who built the site had set it up so they could use SSL. I backed up the conf file, added the server block for the SSL instance and tried to reload. The reload failed because nginx didn't recognize the ssl in this line:

        ssl on;

    I'm not an NGINX expert, but the David Caruso in me tells me that the server (sunglasses on) is not secure. I know that you need to configure NGINX with this module at install time. If that didn't happen, how hard/risky is it to rebuild a running nginx box with this module, given that we didn't configure it in the first place?

    Read the article

  • First time setting up a MySQL database.

    - by Wilduck
    In trying to learn how to work with the LAMP stack, I've hit a wall with MySQL. I can't seem to find a good reference for the first-time setup of MySQL for use with Apache and Python. So, my question is four-fold:

        1) Under what circumstances should I create my first database? That is, what user do I use (Apache's http user? root?)
        2) How do permissions work?
        3) Do I have to do anything on the MySQL side to make MySQL talk to Apache, or to make MySQL talk to Python/Django?
        4) Is there a good resource online that describes setting all of this up? I've found a bunch for using a database once it's in place, but none for the initial setup.

    Notes: I'm running my LAMP stack on a dedicated little box for testing/learning purposes only, so I don't have access to any DBA who could help me, as much as I'd like one.
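
    On point 3: MySQL doesn't need to know anything about Apache - Django simply connects to it as an ordinary client. A minimal sketch of the Django side, assuming a database and a dedicated MySQL account have been created for it (the names and password below are placeholders):

        # settings.py (Django) - assumes something like:
        #   CREATE DATABASE mysite;
        #   GRANT ALL ON mysite.* TO 'django'@'localhost' IDENTIFIED BY 'secret';
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.mysql',
                'NAME': 'mysite',      # placeholder database name
                'USER': 'django',      # a dedicated account - not root, not Apache's user
                'PASSWORD': 'secret',  # placeholder
                'HOST': 'localhost',
                'PORT': '3306',
            }
        }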

    Read the article

  • phpbb behind a reverse proxy

    - by asciitaxi
    Hi, I've got a Django app running on Apache behind an nginx reverse proxy. Nginx takes requests on port 80 and forwards them to Apache on 127.0.0.1:81. This works fine. Now I want to run phpBB on Apache under /forums. My problem is that when phpBB does a redirect, it seems to redirect to the internal Apache port rather than port 80. So, for instance, when I first go to http://my-dev-server/forums to configure phpBB, it immediately redirects to http://127.0.0.1:81/forums/install/index.php. Is there something I need to do in the nginx/Apache/phpBB config to get it to redirect to the external port? Thanks very much!

    Read the article

  • Server memory issues, and expected level of service from hosting company

    - by Greg
    I'm involved in maintaining an Ubuntu VPS which runs our django websites (nginx/apache/mod_wsgi) and we've been having some memory spikes which have either caused the database to die, or induced kernel panic when the memory management system can't find any killable processes. I'm working on fixing the memory spikes, but I'm wondering whether there's anything I can do to better deal with the problem if it occurs again. Are there any tools I could use to detect the memory spikes and then, say, kill the offending process and email the server admin to fix it up? Killing off one website so that the server can remain operational is certainly preferable to the whole thing falling over. Also, we were charged $600 for after-hours service because we had to get the hosting company to restart the server - is this standard practice among hosting companies? Another provider I work with provides a panel with which I can stop and start the server myself, and given that a restart was all that was needed, $600 seems mightily excessive. (That's NZD, it's around $445 USD)
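
    Tools like monit are built for exactly this kind of watchdogging (alert or restart when a resource threshold is crossed). As a rough illustration of the idea, here is a cron-able sketch that just checks overall memory use and emails an alert - the threshold, addresses and reliance on a local MTA are all assumptions to adapt:

        #!/usr/bin/env python
        # Minimal memory-alert sketch, intended to be run from cron every minute.
        import smtplib
        from email.mime.text import MIMEText

        THRESHOLD = 90.0                 # alert above this percentage of RAM in use
        ADMIN = 'admin@example.com'      # placeholder address

        def used_percent():
            info = {}
            with open('/proc/meminfo') as f:
                for line in f:
                    key, rest = line.split(':', 1)
                    info[key] = int(rest.split()[0])   # values are in kB
            # Count buffers/cache as reclaimable, the way "free" does.
            free = info['MemFree'] + info.get('Buffers', 0) + info.get('Cached', 0)
            return 100.0 * (info['MemTotal'] - free) / info['MemTotal']

        pct = used_percent()
        if pct > THRESHOLD:
            msg = MIMEText('Memory usage is at %.1f%%' % pct)
            msg['Subject'] = 'Memory alert'
            msg['From'] = ADMIN
            msg['To'] = ADMIN
            smtplib.SMTP('localhost').sendmail(ADMIN, [ADMIN], msg.as_string())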

    Read the article

  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much needed breathing room. Currently, this is the refresh pattern that we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid then re-validates the content. My question is on the "100%" parameter, which sets the lm-factor. I'm not sure if setting that to 100% is doing what we want it to. The assumption was that by setting it to 100%, it would ensure that objects stay in the cache for the max cache time. Is this an incorrect assumption? What are the best practices that one should follow when setting up a refresh pattern like this?
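
    A simplified sketch of the documented heuristic (ignoring Expires/Cache-Control, which always take precedence) helps show what the 100% actually controls - lm-factor only matters for objects whose age falls between MIN and MAX, and it is evaluated against how old the object already was when Squid fetched it, not against the max cache time:

        def is_fresh(age, min_age, percent, max_age, lm_age=None):
            """Simplified sketch of Squid's refresh_pattern freshness check.
            All times in the same unit; lm_age is the object's age at fetch time
            (Date minus Last-Modified)."""
            if age > max_age:
                return False                          # older than MAX: always stale
            if lm_age and float(age) / lm_age < percent / 100.0:
                return True                           # lm-factor says fresh
            if age < min_age:
                return True                           # younger than MIN: always fresh
            return False

        # With "refresh_pattern . 60 100% 60", MIN equals MAX, so everything up to
        # an hour old is fresh from the MIN/MAX bounds alone and the 100% never
        # actually comes into play (under this simplified reading).
        print(is_fresh(age=30, min_age=60, percent=100, max_age=60, lm_age=120))   # True
        print(is_fresh(age=61, min_age=60, percent=100, max_age=60, lm_age=120))   # False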

    Read the article

  • Subdomain only accessible from one computer

    - by Edan Maor
    I recently added a wildcard A record to my domain (*.root.com), mapping it to a certain elastic ip on AWS. I've configured apache to redirect all references to something.root.com to root.com, except for one specific "dev" subdomain, which is hosting its own site (a Django app, specifically). The Problem: This setup works perfectly for me on my computer. But on other computers around the office, it doesn't seem to work. Specifically, trying to visit dev.root.com gives an "unable to find server" error. Pinging dev.root.com gives a "cannot resolve hostname" error. The weird thing: pinging any other subdomain of root.com does work, from all machines. I would think this was all due to DNS propagation, except all the computers are behind the same office router, so how could that be the case? Any ideas?

    Read the article

  • Unreasonably slow stunnel

    - by Kit Sunde
    I set up stunnel on OS X to tunnel traffic to my Django dev server, because Facebook needs HTTPS these days, but I noticed it's absurdly slow. It seems like it can only handle a single connection at a time, and even that connection is slow when I'm connecting to localhost. I've tried some performance tips found online, so my config is set up as:

        pid=
        # foreground=yes
        cert=./cacert.pem
        key=./privkey.pem
        libwrap=no
        debug=0
        socket = l:TCP_NODELAY=1
        socket = r:TCP_NODELAY=1

        [https]
        accept=8443
        connect=8000

    Is there a way to get more performance, or a more suitable way of setting up HTTPS for my dev server?

    Read the article

  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL database server that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads/backups. At the moment I simply use mysqldump with a few arguments and it's proven to be fine... until now. mysqldump is a quick method but a slow one, and I believe we've outgrown it. I now need a good alternative and have been looking into Maatkit's mk-parallel-dump utility or an LVM snapshot solution. Succinct short version:

        - I have some fairly large MySQL databases I need to back up
        - The current mysqldump method is inefficient and slow (causing issues)
        - I'm looking into something such as mk-parallel-dump or LVM snapshots

    Any recommendations or ideas would be appreciated - since I have to redo how we're doing things, I'd rather have it done properly and as efficiently as possible :).
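
    A rough sketch of the per-database parallel-dump idea (in the spirit of mk-parallel-dump, not a substitute for it); the database names, credentials file, worker count and output path are placeholders, and note that --single-transaction only gives a consistent snapshot for the InnoDB tables - the MyISAM ones would still need locking:

        import subprocess
        from multiprocessing import Pool

        DATABASES = ['shop', 'blog', 'cms']        # placeholder database names

        def dump(db):
            outfile = '/backups/%s.sql.gz' % db
            cmd = ['mysqldump', '--defaults-file=/etc/mysql/backup.cnf',
                   '--single-transaction', db]
            with open(outfile, 'wb') as out:
                dump_proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
                gzip_proc = subprocess.Popen(['gzip'], stdin=dump_proc.stdout, stdout=out)
                dump_proc.stdout.close()           # let mysqldump see SIGPIPE if gzip dies
                gzip_proc.communicate()
            return outfile

        if __name__ == '__main__':
            pool = Pool(processes=3)               # dump up to three databases at once
            print(pool.map(dump, DATABASES))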

    Read the article

  • "Cannot allocate memory" while no process seems to be using up memory

    - by omat
    I am not competent on server issues, so any help is much appreciated. When I try to start a Python/Django shell on a Linux box, I get OSError: [Errno 12] Cannot allocate memory. free -m seems to confirm I am out of memory:

                     total       used       free     shared    buffers     cached
        Mem:           590        560         29          0          3         37
        -/+ buffers/cache:        518         71
        Swap:            0          0          0

    But I cannot see what is eating up the memory with top or ps aux:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      20   0 24336  908    0 S  0.0  0.2   0:00.68 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root      20   0     0    0    0 S  0.0  0.0   0:04.85 ksoftirqd/0

    How can I identify the leak? Thanks. BTW, I am not sure if it is relevant, but the machine I am talking about is an AWS EC2 instance running Ubuntu 12.
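
    A small sketch that sums the resident set size of every process from /proc, to compare against what free reports; it is only an approximation (shared pages are counted once per process, and kernel memory such as slab or tmpfs doesn't appear at all, which is one common place "missing" memory hides):

        #!/usr/bin/env python
        # Sum VmRSS across all processes and compare the total with "free -m".
        import glob

        total_kb = 0
        for path in glob.glob('/proc/[0-9]*/status'):
            try:
                with open(path) as f:
                    for line in f:
                        if line.startswith('VmRSS:'):
                            total_kb += int(line.split()[1])   # value is in kB
                            break
            except IOError:
                pass   # the process exited while we were reading it

        print('Total RSS of userland processes: %d MB' % (total_kb // 1024))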

    Read the article

  • How to Architect a system on AWS for scaling (with a MySQL back-end)

    - by Edan Maor
    I'm trying to understand how to architect an Amazon Web Services application. As I understand it, the whole point of using something like AWS is to make eventual scaling easier, so I'm trying to understand how to do that. I have an instance running off of EBS (an EBS-backed instance, not a regular instance). My application (a Django app) uses MySQL as a back-end. So the question is, where am I supposed to install MySQL? Do I install it on the same instance? In that case, as far as I can tell, I can't simply create more server instances from that image. Or am I supposed to spin up another server as a DB server and run off of that? Thanks for any help!

    Read the article

  • Wordpress serving PHP but not CSS or JS

    - by Jason
    I'm trying to set up an Amazon EC2 instance to run a Django app and a WordPress instance side by side, differing only by the incoming URL. Initially, accessing the site via mysite.com/wordpress worked, but I also needed to catch incoming requests for the subdomain blog.mysite.com. To do that, I created a default file in /etc/apache2/sites-enabled and included two VirtualHost directives, one of which was:

        <VirtualHost *:80>
            ServerName www.blog.mysite.com
            <Directory /var/www/wordpress>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>

    This created some errors with the other VirtualHost, so I restored the default 000-default configuration and restarted. Now, accessing mysite.com/wordpress takes forever, and even then the CSS and JS files are not loading. Inside the Firebug Net tab, I can see the HTML response, but the CSS and JS files are not loading at all. What happened here?

    Read the article

  • Apache+FastCGI Timeout Error: "has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds"

    - by Sadjad Fouladi
    I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script (test.fcgi):

        #!/bin/sh
        echo sadjad

    But when I invoke 'mysite.com/test.fcgi' I see an "Internal Server Error" after a short period of time. The error.log file shows this error message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    What could the problem be? Is it my .htaccess file?

    Read the article

  • Site's performance slows over time until Apache is restarted

    - by udbhav
    I'm running a Django app w/ Nginx and Apache. All our static media is stored on S3, and basically it takes a while for the app to check if thumbnails have been created every time a page is loaded. To alleviate this problem, I'm caching the output of the templates w/ memcached. Over the course of an hour or two, the site's speed goes down significantly, until I restart apache, and then all is good for a little while. I have very little sysadmin experience, and was hoping somebody could at least point me in the right direction.

    Read the article

  • Is there any way to automatically prevent running out of memory?

    - by NoahY
    I am often running out of memory on my Ubuntu VPS. I wish there was a way to simply restart apache2 when it starts running out of memory, as that seems to solve the problem. Or am I just too lazy to fix the real problem? I do have limited memory on the server... Okay, more information: I'm running apache2 with the prefork MPM; here are my memory-related settings (I've been tweaking them...):

        StartServers          3
        MinSpareServers       1
        MaxSpareServers       5
        MaxClients          150
        MaxRequestsPerChild 1000

    The VPS has 1 GB of RAM and runs Ubuntu 11.04 32-bit. As for scripts, I have a WordPress network with 5 blogs, an install of AskBot (a Python/Django Stack Exchange clone), and an install of MediaWiki that isn't really used. There is also a homebrewed MP3 script that uses the getid3 library to display information on lists of podcasts, and it seems to be throwing some PHP errors - not sure if that's the culprit...

    Read the article
