Search Results

Search found 19664 results on 787 pages for 'python for ever'.


  • Distributing standalone PyQt4 programs on Windows with cx_Freeze, an article by Jean-Paul Vidal

    Hello, since I needed it myself, I wrote two tutorials, which I now offer to be ported over to developpez (=> thanks in advance to dourouc05: let me know if you need the dokuwiki source). The idea is to build PyQt4 programs bundled with the Python interpreter and all the required libraries (including PyQt4), so that they can run on PCs without any installation of either Python or PyQt4: - On Windows (XP, Vista, 7) - On Linux (Ubuntu 10.10) I think this kind of tutorial...
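
    A minimal cx_Freeze setup.py for this kind of standalone PyQt4 bundle might look like the sketch below; the script name main.py and the package list are illustrative assumptions, not taken from the article.

        # Sketch only: freeze a PyQt4 program together with the Python interpreter.
        from cx_Freeze import setup, Executable

        build_options = {
            "packages": ["PyQt4.QtCore", "PyQt4.QtGui"],  # make sure PyQt4 gets bundled
            "include_files": [],                          # data files (icons, .qm files, ...)
        }

        setup(
            name="myapp",
            version="1.0",
            description="Standalone PyQt4 application",
            options={"build_exe": build_options},
            # base="Win32GUI" hides the console window on Windows
            executables=[Executable("main.py", base="Win32GUI")],
        )

    Running python setup.py build then produces a build/ directory containing the executable, the Python interpreter and the required libraries, which can be copied to a machine with neither Python nor PyQt4 installed.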

    Read the article

  • Keeping translations with cx_Freeze and PyQt4, an article by Jean-Paul Vidal

    Hello, since I needed it myself, I wrote two tutorials, which I now offer to be ported over to developpez (=> thanks in advance to dourouc05: let me know if you need the dokuwiki source). The idea is to build PyQt4 programs bundled with the Python interpreter and all the required libraries (including PyQt4), so that they can run on PCs without any installation of either Python or PyQt4: - On Windows (XP, Vista, 7) - On Linux (Ubuntu 10.10) I think this kind of tutorial...
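
    A sketch of how compiled .qm translation files can be carried along with such a frozen program: list them in the include_files option of the cx_Freeze setup script, and load them at runtime relative to the frozen executable. The file name app_fr.qm is an illustrative assumption.

        # Sketch only: load a bundled Qt translation from a cx_Freeze-frozen program.
        import os
        import sys
        from PyQt4 import QtCore, QtGui

        def resource_dir():
            # cx_Freeze sets sys.frozen; data files then sit next to the executable.
            if getattr(sys, "frozen", False):
                return os.path.dirname(sys.executable)
            return os.path.dirname(os.path.abspath(__file__))

        app = QtGui.QApplication(sys.argv)
        translator = QtCore.QTranslator()
        if translator.load("app_fr.qm", resource_dir()):
            app.installTranslator(translator)

    On the build side, adding "app_fr.qm" to include_files copies the file next to the executable, so the lookup above finds it.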

    Read the article

  • Is there a simple "Hello World" for making games?

    - by a.m.
    Does anyone know of a simple "Hello World" for making games for Ubuntu? I've seen the Getting Started with Quickly video. Any examples for platformers or something like that? EDIT: Just a recap of the answers. Blender Game Engine -- uses Python. Pygame -- Python. MonoGame http://monogame.codeplex.com/ -- some sort of XNA? QuakeC -- a Quake-flavored C-like language. See: Steel Storm http://one.steel-storm.com/
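
    Of the options recapped above, Pygame is the quickest to try. A minimal window that just draws a line of text (an illustrative sketch, not code from the original answers) looks like this:

        # Minimal Pygame "hello world": open a window and draw some text.
        import pygame

        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        pygame.display.set_caption("Hello, games!")
        font = pygame.font.Font(None, 48)                    # default font, 48 px
        text = font.render("Hello, world!", True, (255, 255, 255))

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:                # window close button
                    running = False
            screen.fill((0, 0, 0))
            screen.blit(text, (200, 220))
            pygame.display.flip()

        pygame.quit()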

    Read the article

  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said apache was not running. So, I started apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding.
    /var/log/unattended-upgrades/unattended-upgrades.log
        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed
    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows:
    /etc/apt/apt.conf.d/50unattended-upgrades
        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
        //  "${distro_id}:${distro_codename}-updates";
        //  "${distro_id}:${distro_codename}-proposed";
        //  "${distro_id}:${distro_codename}-backports";
        };
        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };
    /etc/apt/apt.conf.d/20auto-upgrades
        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";
    I've struggled to find documentation about what happens to running processes during an upgrade.
    - Is this expected behaviour? Or should unattended-upgrades restart apache after upgrading it?
    - What can I do to ensure apache is restarted correctly? Should I just blacklist the apache package?
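
    If blacklisting turns out to be the chosen workaround, the change would go in the (currently empty) Package-Blacklist block shown above. A sketch, with package names taken from the upgrade log rather than from any authoritative list:

        // Sketch only: keep unattended-upgrades away from the web server packages.
        Unattended-Upgrade::Package-Blacklist {
            "apache2";
            "apache2-mpm-prefork";
            "apache2.2-bin";
            "apache2.2-common";
        };

    The obvious trade-off is that security updates for those packages then have to be applied by hand.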

    Read the article

  • Rewrote GNU GPL v2 code in another language: can I change a license?

    - by Anton Gogolev
    I rewrote some parts of Mercurial (which is licensed under GNU GPL v2) in C#. Naturally, I looked a lot at the original Python code, and some parts are direct translations from Python to C#. Is it possible to have "my code" licensed under different terms, or even to make it part of a closed-source commercial application? If not, can I re-license "my code" under the LGPL, open-source it, and then use this open-sourced C# library in my closed-source commercial application?

    Read the article

  • I have permanent connections to Canonical servers, what are they for?

    - by Dan Dman
    After the recent upgrade to 12, I notice permanent connections to Canonical servers. Running netstat -tp gives:
        Foreign Address          State       PID/Program name
        mulberry.canonical:http  CLOSE_WAIT  6537/ubuntu-geoip-p
        alkes.canonical.co:http  CLOSE_WAIT  6667/python
        alkes.canonical.co:http  CLOSE_WAIT  6667/python
    Why are there permanent connections, and how could I stop this behavior? And if this is intentional, who is responsible? I would like to understand why this was done, because to me it seems like a bad idea.

    Read the article

  • I have permanent connections to Canonical servers, what are they for and how can I turn them off?

    - by Dan Dman
    After the recent upgrade to 12, I notice permanent connections to Canonical servers. Running netstat -tp gives:
        Foreign Address          State       PID/Program name
        mulberry.canonical:http  CLOSE_WAIT  6537/ubuntu-geoip-p
        alkes.canonical.co:http  CLOSE_WAIT  6667/python
        alkes.canonical.co:http  CLOSE_WAIT  6667/python
    Why are there permanent connections, and how could I stop this behavior? And if this is intentional, who is responsible? I would like to understand why this was done, because to me it seems like a bad idea.

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When I attempt to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows:
    - Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/); the site has anonymous password protection with self-signed SSL
    - Mercurial 1.5.3
    - Python 2.6.5
    - Python for Windows 32 extensions 214 py2.6
    - isapi-wsgi 0.4.2
    The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy to follow). Other problems with the repository server:
    - Before doing a clone/push/etc I have to browse the repositories first, otherwise hg on my local machine cannot locate the site.
    - I have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host". I will be asking another question about this problem.
    What can I do to start tracking down this problem?
    hgwebdir_wsgi.py
        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__ == '__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)
    hgweb.config
        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip ; Allows archive downloads.
        allow_push = ######## ; Users that are allowed to push.

    Read the article

  • apt-get upgrade stuck at the same package (openjdk-6-jre-headless)

    - by decibyte
    I'm stuck, can't upgrade my system. Running sudo apt-get upgrade gives me the following:
        mmm@alalunga:~$ sudo apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae
        The following packages will be upgraded:
          apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics
        74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
        Need to get 317 MB/327 MB of archives.
        After this operation, 1.481 kB of additional disk space will be used.
        Do you want to continue [Y/n]?
        Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB]
        9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%]
    It keeps downloading the package openjdk-6-jre-headless, then does nothing for a while (hanging on the last line shown above), then downloads the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to be done just fine, but whatever it does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result, hanging at openjdk-7-jre-headless. I also tried changing servers from my local (Danish) one to the main server. No luck. It's also keeping me from upgrading all the other packages. What to do?

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today’s hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged further any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular of offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current Optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam’s contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn’t be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I’m interested to hear though, when and how often people feel the force of the optimizer’s shortcomings. Barring mistakes, such as stale statistics, how often do you feel the Optimizer fails to find the plan you think it should, and what are the most common causes? 
Is it fighting to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • Are there any technical obstacles to implementing `function* ()` syntax?

    - by shabunc
    In Python we have yield, which is very similar to the one proposed in ES6 (in fact, Pythonic coroutines were the main source of inspiration for implementing coroutines in ES6). I wonder what the reasons are for choosing a separate function* () syntax for generators, compared to just defining "regular" functions with yields - just like in Python, by the way? I'm talking strictly about technical issues and peculiarities. Why was it decided that a separate form would be more appropriate?
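
    For reference, this is the Python behaviour the question is comparing against: a plain def becomes a generator simply because its body contains yield, with no separate syntax (illustrative sketch).

        # A normal-looking function; the yield inside makes it a generator.
        def countdown(n):
            while n > 0:
                yield n
                n -= 1

        for value in countdown(3):
            print(value)   # prints 3, 2, 1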

    Read the article

  • How to install freesteam, the only open-source steam table software for Ubuntu?

    - by gunjan parashar
    While installing freesteam, downloaded from http://sourceforge.net/projects/freesteam/, which is the only steam table software for Ubuntu, I am getting the following error: Dependency is not satisfiable: libfreesteam1. The description of the package: python-freesteam, file name: python-freesteam_2.0_i386(1).deb. Please help me out, as it is the only available open-source steam table software for mechanical engineering students.

    Read the article

  • PyPy 1.2 Released

    PyPy is a reimplementation of Python in Python, using advanced techniques to try to attain better performance than CPython. Many years of hard work have finally paid...

    Read the article

  • Ubuntu 12.10 blender problem

    - by SamueLL
    Please, I have a problem: I want to install an older version of Blender because I don't feel comfortable in the new UI. I downloaded blender-2.46-linux-glibc236-py25-x86_64 from the official site and I had some problems, like a missing python-2.5 library, but I fixed them, and now I have a problem that I can't solve... Compiled with Python version 2.7.3. Warning: could not set Blender.sys.progname Can you help me with this please?

    Read the article

  • Offline web app options

    - by L. De Leo
    For a game web app that runs Python on the server side and JavaScript / HTML on the client side, I'd like to build an offline version that runs in Chrome and on mobile devices. What is the most convenient way currently available to target Chrome, Win 8 Desktop (with a Win packaged app) and mobile devices while reusing most of the code? Options could be PhoneGap for the mobile devices and PyJs for the offline browser versions, or maybe translating Python to Dart manually (because of the closer semantics of the two languages) and compiling to JavaScript.

    Read the article

  • Other processes take over port 80 when restarting Apache - why, and how to solve?

    - by user72149
    I have a CentOS 5.5 server running Apache on port 80 as well as some other applications. All works fine until I for some reason need to restart the httpd process. Doing so returns: sudo /etc/init.d/httpd restart Stopping httpd: [ OK ] Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs First I thought perhaps httpd had frozen and was still running, but that was not the case. So I ran netstat to find out what was using port 80: sudo netstat -tlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:7203 *:* LISTEN 24012/java tcp 0 0 localhost.localdomain:smux *:* LISTEN 3547/snmpd tcp 0 0 *:mysql *:* LISTEN 21966/mysqld tcp 0 0 *:ssh *:* LISTEN 3562/sshd tcp 0 0 *:http *:* LISTEN 3780/python26 Turns out that my python process had taken over listening to http in the instant that httpd was restarting. So, I killed python and tried starting httpd again - but ran into the same error. Netstat again: sudo netstat -tlp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:7203 *:* LISTEN 24012/java tcp 0 0 localhost.localdomain:smux *:* LISTEN 3547/snmpd tcp 0 0 *:mysql *:* LISTEN 21966/mysqld tcp 0 0 *:ssh *:* LISTEN 3562/sshd tcp 0 0 *:http *:* LISTEN 24012/java Now my java process had taken over listening to http. I killed that too and could then successfully restart httpd. But this is a terrible workaround. Why will these python and java processes start listening to port 80 as soon as httpd is restarted? How to solve? Two other comments. 1) Both java and python processes are started by apache from a php script. But when apache is restarted, they should not be affected. And 2) I have the same setup on two other machines running Ubuntu and there's no problem there. Any ideas? Edit: The Java process listens to port 7203 and the python process supposedly doesn't listen to any port. For some reason, they start listening to port 80 when apache is restarted. This hasn't happened before. On Ubuntu it runs fine. For some reason, on my current CentOS 5.5 machine, this problem arises.
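
    One mechanism that can produce exactly this netstat picture (a sketch of the general effect, not a diagnosis of this particular server): a child process spawned by the web server inherits the listening socket, so the port stays bound even after the parent stops.

        # Sketch: a spawned child inherits the listening socket and keeps the port busy.
        import socket
        import subprocess
        import time

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("0.0.0.0", 8080))
        server.listen(5)

        # close_fds=False (the Python 2 default) lets the child inherit the listening fd.
        child = subprocess.Popen(["sleep", "60"], close_fds=False)

        server.close()   # the "parent web server" stops...
        time.sleep(1)
        retry = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            retry.bind(("0.0.0.0", 8080))   # ...but rebinding fails while the child lives
        except socket.error as exc:
            print("bind failed while the child is alive: %s" % exc)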

    Read the article

  • Excellent C Tutorials

    - by nebffa
    I've looked high and low for C tutorials with lots of exercises to do along the way, but in my experience all the guides I've found are mostly explanation with a bit of pre-written code, lacking exercises for you to do. I started learning Python using Learn Python the Hard Way, and for almost all other standard languages there are good sites for learning and grappling with the syntax - for example codecademy.com or programr.com. Is there any site like any of the above for C?

    Read the article

  • httpHandler - subfolder issue

    - by Patto
    Hi, I am trying to redirect my old Typepad blog to my new blog, which runs on WordPress (permanent 301 redirect). The new blog will also be on a new server. The old blog had the following structure: http://subdomain.domain.com/weblog/year/month/what-ever-article.html The new blog looks like this: http://www.domain.com/Blog/year/month/what-ever-article.html I am using an HTTP handler that I found online and tried to work with it:
        public class MyHttpModule : IHttpModule
        {
            public MyHttpModule()
            {
                //
                // TODO: Add constructor logic here
                //
            }

            #region IHttpModule Members

            public void Dispose()
            {
            }

            public void Init(HttpApplication context)
            {
                context.BeginRequest += new EventHandler(context_BeginRequest);
            }

            void context_BeginRequest(object sender, EventArgs e)
            {
                string oldURL = System.Web.HttpContext.Current.Request.Url.ToString();
                string newURL = String.Empty;
                //oldURL =
                if (oldURL.ToString().ToLower().IndexOf("articles") >= 0 || System.Web.HttpContext.Current.Request.Url.ToString().ToLower().IndexOf("weblog") >= 0)
                {
                    newURL = oldURL.Replace("subdomain.domain.com/weblog", "www.domain.com/Blog/index.php");
                    if (newURL.ToLower().Contains("subdomain"))
                    {
                        newURL = "http://www.domain.com/Blog";
                    }
                }
                else
                {
                    newURL = "http://www.domain.com/Blog";
                }
                System.Web.HttpContext.Current.Response.Clear();
                System.Web.HttpContext.Current.Response.StatusCode = 301;
                System.Web.HttpContext.Current.Response.AddHeader("Location", newURL);
                System.Web.HttpContext.Current.Response.End();
            }

            #endregion
        }
    To use this code, I put the handler into the web.config:
        <httpModules>
            <add name="MyHttpModule" type="MyHttpModule, App_Code"/>
        </httpModules>
    The issue that I have is that when I want to redirect from http://subdomain.domain.com/weblog/year/month/what-ever-article.html, I get an error that the folder does not exist. Is there any way to change my script or add a catch-all to the web.config that forwards the URL to my script? When I use "http://subdomain.domain.com/weblog/year/month/what-ever-article.html" in the oldURL string, then the redirect works just fine... so I must have some IIS or web.config settings wrong. Thanks in advance, Patrick

    Read the article

  • How do I daemonize an arbitrary script in unix?

    - by dreeves
    I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon. There are two common cases I'd like to deal with: I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case). I have a simple script or command line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once. Of course it's trivial to write a "while(true)" loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly since that applies to the script in case 1 as well (you may just want a shorter or no pause if the script is not intended to ever die (of course if the script really does never die then the pause doesn't actually matter)). Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts. More specifically, I'd like a program "daemonize" that I can run like % daemonize myscript arg1 arg2 or, for example, % daemonize 'echo `date` >> /tmp/times.txt' which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly). NB: The daemonize script will need to remember the command string it is daemonizing so that if the same command string is daemonized again it does not launch a second copy. Also, the solution should ideally work on both OS X and linux but solutions for one or the other are welcome. (If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)
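
    A rough Python sketch of the behaviour described above - a restart loop plus a lock keyed on the command string, so a second copy of the same command refuses to start. The script name daemonize.py and the lock-file location are hypothetical choices for illustration.

        #!/usr/bin/env python
        # daemonize.py (sketch): keep re-running a command, at most one copy per command string.
        import fcntl
        import hashlib
        import os
        import subprocess
        import sys
        import time

        def main():
            command = " ".join(sys.argv[1:])
            # One lock file per command string, so the same command is never daemonized twice.
            digest = hashlib.md5(command.encode("utf-8")).hexdigest()
            lock_file = open(os.path.join("/tmp", "daemonize-%s.lock" % digest), "w")
            try:
                fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except IOError:
                sys.exit(0)                            # another copy already holds the lock
            while True:
                subprocess.call(command, shell=True)   # returns when the command dies
                time.sleep(5)                          # short pause before restarting

        if __name__ == "__main__":
            main()

    Actually detaching from the terminal (and surviving reboots) would still come from cron, an init script, or a supervisor sitting on top of this.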

    Read the article

  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and I just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted:
        # ssh [email protected]
        [email protected]'s password: ********
        Welcome to Wireless SSH Console!! ['help' or '?' to see commands]
        Wireless Driver Rev 4.0.0.167
        D-Link Access Point
        wlan1 -> reboot
    Sounds great? Well, unfortunately, the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh), then kill the ssh process using kill <pid>. That's quite annoying. So I decided to write a Python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and it tries to launch the "reboot" command. Here's what the script looks like:
        #!/usr/bin/python
        # usage: python reboot_ap.py {host} {user} {password}
        import pexpect
        import sys
        import time

        command = "ssh %(user)s@%(host)s"%{"user":sys.argv[2], "host":sys.argv[1]}
        session = pexpect.spawn(command, timeout=30)  # start ssh process
        response = session.expect(r"password:")       # wait for password prompt
        session.sendline(sys.argv[3])                 # send password
        session.expect(" -> ")                        # wait for D-Link CLI prompt
        session.sendline("reboot")                    # send the reboot command
        time.sleep(60)                # make sure the reboot has time to actually take place
        session.close(force=True)     # kill the ssh process
    The script connects properly to the AP (I tried running some commands other than reboot, and they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?
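
    One variation worth trying (an untested sketch, not a confirmed fix): instead of sleeping for a fixed minute, wait for the AP to drop the connection by expecting EOF, with a timeout as a fallback. This would replace the last three lines of the script above.

        session.sendline("reboot")                 # send the reboot command
        # Wait for the AP to close the connection; give up after 90 seconds.
        index = session.expect([pexpect.EOF, pexpect.TIMEOUT], timeout=90)
        if index == 1:
            # The session is still open; close it ourselves.
            session.close(force=True)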

    Read the article

  • How do I write an init script for django-supervisor?

    - by amateur
    Pardon me, as this is my first time attempting to write an init script for CentOS 5. I am using django + supervisor to manage my celery workers and scheduler. Now, this is my naive, simple attempt at /etc/init.d/supervisor:
        #!/bin/sh
        #
        # /etc/rc.d/init.d/supervisord
        #
        # Supervisor is a client/server system that
        # allows its users to monitor and control a
        # number of processes on UNIX-like operating
        # systems.
        #
        # chkconfig: - 64 36
        # description: Supervisor Server
        # processname: supervisord

        # Source init functions

        /home/foo/virtualenv/property_env/bin/python /home/foo/bar/manage.py supervisor --daemonize
    Inside my supervisor.conf:
        [program:celerybeat]
        command=/home/property/virtualenv/property_env/bin/python manage.py celerybeat --loglevel=INFO --logfile=/home/property/property_buyer/logfiles/celerybeat.log

        [program:celeryd]
        command=/home/foo/virtualenv/property_env/bin/python manage.py celeryd --loglevel=DEBUG --logfile=/home/foo/bar/logfiles/celeryd.log --concurrency=1 -E

        [program:celerycam]
        command=/home/foo/virtualenv/property_env/bin/python manage.py celerycam
    I couldn't get it to work:
        2013-08-06 00:21:03,108 INFO exited: celerybeat (exit status 2; not expected)
        2013-08-06 00:21:06,114 INFO spawned: 'celeryd' with pid 11772
        2013-08-06 00:21:06,116 INFO spawned: 'celerycam' with pid 11773
        2013-08-06 00:21:06,119 INFO spawned: 'celerybeat' with pid 11774
        2013-08-06 00:21:06,146 INFO exited: celerycam (exit status 2; not expected)
        2013-08-06 00:21:06,147 INFO gave up: celerycam entered FATAL state, too many start retries too quickly
        2013-08-06 00:21:06,147 INFO exited: celeryd (exit status 2; not expected)
        2013-08-06 00:21:06,152 INFO gave up: celeryd entered FATAL state, too many start retries too quickly
        2013-08-06 00:21:06,152 INFO exited: celerybeat (exit status 2; not expected)
        2013-08-06 00:21:07,153 INFO gave up: celerybeat entered FATAL state, too many start retries too quickly
    I believe it is the init script, but please help me understand what is wrong.

    Read the article

  • Upstart Script on Centos 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:
        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec su nonrootuser -c "python /usr/local/scripts/script.py"
        end script
    The test script I want it to run is currently a simple Python script that runs without any issue when run from a terminal:
        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range (10000):
                message = "shotgunUpstartTest " , i , time.asctime() , " - Username: " , os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()
    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.

    Read the article

  • Unable to ssh in Beagle Bone Black

    - by SamuraiT
    I wanted to install pip onto the BeagleBone Black, and I tried this:
        /usr/bin/ntpdate -b -s -u pool.ntp.org
        opkg update && opkg install python-pip python-setuptools
    Then it threw errors, but unfortunately I didn't log them. This happened a week ago and hasn't been solved yet. I wanted to solve it now, so I tried to connect by ssh, but failed. When I ping the BeagleBone it responds, and the Cloud9 IDE is working too, but not ssh. I don't think this is a serious problem, since I can connect to the BeagleBone by other methods, such as Cloud9. However, to use Python on the BeagleBone, I need to connect by ssh. Before trying to update and install python-pip, I could connect by ssh. Do you have any ideas to solve this connection problem? Note: I use the default OS (Angstrom), I don't use an SD card, the host PC is a Mac running OS X 10.9, and I connect by USB serial. I checked this but it wasn't helpful: http://stackoverflow.com/questions/19233516/cannot-connect-to-beagle-bone-black I can connect with the GateOne SSH client, but am still unable to connect from the terminal.

    Read the article

  • Is there any functional-like unix shell?

    - by Caruccio
    I'm a (real) newbie to functional programming (in fact I've only had contact with it using Python), but it seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this:
        $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ]
    Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc...) from within Python (the idea is to use Python's interactive interpreter as a replacement for bash)? EDIT: Maybe a comparative example would clarify. Let's say I have a list composed of dir/file:
        $ FILES=( build/project.rpm build/project.src.rpm )
    And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process). Using bash:
        $ cp ${files[*]} dist/
        $ cd dist && rpm -Uvh $(for f in ${files[*]}; do basename $f; done))
    Using a "pythonic shell" approach (caution: this is imaginary code):
        $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist'
    Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of stuff built in not exist yet? It's a real pain to handle lists in a shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: use syntax/tools/features everybody already knows - sh and Python. IPython seems to be heading in a good direction, but it's bloated: if a var name starts with '$', it does this; if '$$', it does that. Its syntax is not "natural" - so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error).
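
    Plain Python (no special shell) can already get fairly close to the imaginary example above; a sketch using only the standard library, with the paths and package names from the example reused purely for illustration:

        # Sketch: the "copy to dist/ and install" task written as ordinary Python.
        import os
        import shutil
        import subprocess

        FILES = ["build/project.rpm", "build/project.src.rpm"]

        # copy all files to dist/
        for f in FILES:
            shutil.copy(f, os.path.join("dist", os.path.basename(f)))

        # install them, equivalent to: cd dist && rpm -Uvh <basenames>
        subprocess.check_call(["rpm", "-Uvh"] + [os.path.basename(f) for f in FILES], cwd="dist")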

    Read the article
