Search Results

Search found 13776 results on 552 pages for 'python appengine'.


  • List comprehension in Ruby

    - by Readonly
    To do the equivalent of Python list comprehensions, I'm doing the following:

        some_array.select{|x| x % 2 == 0 }.collect{|x| x * 3}

    Is there a better way to do this... perhaps with one method call?
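    For reference, the Python list comprehension being emulated does the filter and the map in a single expression - a minimal sketch (my own illustration, assuming some_array holds integers):

        # Python equivalent of the Ruby select/collect chain above:
        # keep the even elements, then triple each one.
        some_array = [1, 2, 3, 4, 5, 6]
        result = [x * 3 for x in some_array if x % 2 == 0]
        print(result)  # [6, 12, 18]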


  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
    The command and error message:

        gtwy ~ # ip rule add from 64.251.23.186 table t1
        RTNETLINK answers: Operation not supported

    Here is an older article about the same problem, but it did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html

    I have searched Google at great length to try to find a solution. It seems that my kernel configuration is missing something? Any help or ideas would be appreciated. My system/kernel is:

        2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux

    I am posting this on SuperUser since this system is used as a workstation, and this problem is unrelated to specific tasks that are handled exclusively by servers. iproute2 is installed:

        gtwy etc # emerge --search iproute2
        Searching...
        [ Results for search key : iproute2 ]
        [ Applications found : 1 ]
        *  sys-apps/iproute2
           Latest version available: 2.6.35-r2
           Latest version installed: 2.6.35-r2
           Size of files: 378 kB
           Homepage: http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2
           Description: kernel routing and traffic control utilities
           License: GPL-2

    A small snippet of my kernel .config (view entire .config):

        gtwy linux # cat .config | grep NETLINK
        CONFIG_NETFILTER_NETLINK=y
        CONFIG_NETFILTER_NETLINK_QUEUE=y
        CONFIG_NETFILTER_NETLINK_LOG=y
        CONFIG_NF_CT_NETLINK=y
        CONFIG_SCSI_NETLINK=y
        gtwy linux # cat .config | grep IP_ADVANCED_ROUTER
        CONFIG_IP_ADVANCED_ROUTER=y
        gtwy linux # cat .config | grep INGRESS
        CONFIG_NET_SCH_INGRESS=y
        gtwy linux # cat .config | grep NET_SCHED
        CONFIG_NET_SCHED=y

    Output of emerge --info:

        Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64)
        =================================================================
        System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13
        Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000
        app-shells/bash:     4.0_p37
        dev-java/java-config: 1.3.7-r1, 2.1.10
        dev-lang/python:     2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3
        sys-apps/baselayout: 1.12.13
        sys-apps/sandbox:    1.6-r2
        sys-devel/autoconf:  2.13, 2.65
        sys-devel/automake:  1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1
        sys-devel/binutils:  2.20.1-r1
        sys-devel/gcc:       4.1.2, 4.3.4, 4.4.3-r2
        sys-devel/gcc-config: 1.4.1
        sys-devel/libtool:   2.2.6b
        sys-devel/make:      3.81
        virtual/os-headers:  2.6.30-r1 (sys-kernel/linux-headers)
        ACCEPT_KEYWORDS="amd64"
        ACCEPT_LICENSE="*"
        CBUILD="x86_64-pc-linux-gnu"
        CFLAGS="-march=nocona -O2 -pipe"
        CHOST="x86_64-pc-linux-gnu"
        CONFIG_PROTECT="/etc /var/bind"
        CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo"
        CXXFLAGS="-march=nocona -O2 -pipe"
        DISTDIR="/usr/portage/distfiles"
        FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch"
        GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo"
        LC_ALL="en_US.UTF-8"
        LDFLAGS="-Wl,-O1 -Wl,--as-needed"
        LINGUAS="en"
        MAKEOPTS="-j5"
        PKGDIR="/usr/portage/packages"
        PORTAGE_CONFIGROOT="/"
        PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
        PORTAGE_TMPDIR="/var/tmp"
        PORTDIR="/usr/portage"
        PORTDIR_OVERLAY="/usr/local/portage"
        SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage"
        USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib"
        ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci"
        ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol"
        APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias"
        COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog"
        ELIBC="glibc"
        GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx"
        INPUT_DEVICES="keyboard mouse evdev"
        KERNEL="linux"
        LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text"
        LINGUAS="en"
        PHP_TARGETS="php5-3"
        RUBY_TARGETS="ruby18"
        USERLAND="GNU"
        VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l"
        XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
        Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS
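    One observation about the .config snippets above: `ip rule ... table` needs policy routing (multiple routing tables), and CONFIG_IP_MULTIPLE_TABLES never appears in the greps shown. A minimal sketch to check for it (my own suggestion, not from the post; the option name and the .config path are assumptions):

        # Hedged sketch: check a kernel .config for the policy-routing option
        # that `ip rule` relies on. CONFIG_IP_MULTIPLE_TABLES is my guess at
        # the missing piece, based on the error, not something confirmed above.
        wanted = ("CONFIG_IP_MULTIPLE_TABLES", "CONFIG_IP_ADVANCED_ROUTER")

        found = set()
        for line in open("/usr/src/linux/.config"):  # assumed location
            if line.startswith("CONFIG_"):
                found.add(line.split("=")[0])

        for option in wanted:
            status = "set" if option in found else "MISSING - enable and rebuild"
            print("%s: %s" % (option, status))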


  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
        <head>
        <meta charset="utf-8" />
        <title>IPC test</title>
        </head>
        <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
        </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started, only to be started after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en">
        <!-- $Id: $-->
        <head>
        <meta charset="utf-8" />
        <title>IPC test</title>
        </head>
        <body>
        <p>
        <?php
        function echofl($str) {
            echo $str . "</b>\n";
            ob_flush();
            flush();
        }

        define('tempfn', '/tmp/emailsync');

        if (file_exists(tempfn))
            $perms = 'r+';
        else
            $perms = 'w';

        assert($fsync = fopen(tempfn, $perms));
        assert(chmod(tempfn, 0600));

        if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
            assert($wouldblock);
            $master = false;
        }
        else
            $master = true;

        if ($master) {
            echofl('Running as master.');
            assert(fwrite($fsync, 'content') != false);
            assert(sleep(5) == 0);
            assert(flock($fsync, LOCK_UN));
        }
        else {
            echofl('Running as slave.');
            echofl(fgets($fsync));
        }

        assert(fclose($fsync));
        echofl('Done.');
        ?>
        </p>
        </body>
        </html>
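    As an aside, the Python script imports fcntl but never takes the advisory lock that the PHP version uses for master election. A minimal sketch of the same non-blocking election in Python (my own illustration mirroring the PHP logic; it does not by itself explain the serialized requests):

        # Hedged sketch: the flock-based master/slave election the PHP version
        # performs, written in Python. Mirrors the post's logic only.
        import fcntl

        fsync = open('/tmp/emailsync', 'a+')
        try:
            # Non-blocking exclusive lock: the first request to arrive wins.
            fcntl.flock(fsync, fcntl.LOCK_EX | fcntl.LOCK_NB)
            master = True
        except IOError:
            master = False

        print('Running as %s.' % ('master' if master else 'slave'))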


  • Where is my VMware-WS FreeNAS CIFS (ZFS) bottleneck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm generally happy with things so far; I was just expecting a little better IO performance, and I have no clue whether my expectations are unrealistic. The NAS is there as general-purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement.

    Question: Where is my bottleneck, and is there anything I can do, config-wise, to improve my performance? I'm thinking the VM disk settings could be something, but I really have no idea where to go, since I'm experienced with neither FreeNAS nor VMware-WS.

    Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop, wired to the network, I get about the same figures. The tests I've done are with a 16 GB ISO and with about 200 MB of RARs, and I've tried avoiding the RAM cache by reading different files than the ones I'm writing (> 10 GB). It feels like having fewer CPU cores is a lot more efficient, since the resource manager in Windows reports less CPU usage: with 4 cores in VMware, CPU usage was 50-80%; with 1 core it was 25-60%.

    EDIT: HD ActiveTime was quite high on the SSD, so I moved the page file, disabled hibernation and enabled Windows disk caching on both the SSD and the RAID. This resulted in no real performance difference for one file, but if I transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The average ActiveTime also went down a lot (to ~20%) but now has higher bursts. Disk IO averages ~30-35 Mbytes/s, with ~100 Mb bursts. The network is at 200-250 Mbits/s with ~45 active TCP connections.

    Hardware:
    - Asus F2A85-M Pro
    - A10-5700
    - 16GB DDR3 1600
    - OCZ Vertex 2 128GB SSD
    - 2x generic 1TB 7200 RPM drives as RAID0 (in Win7)
    - Intel Gigabit Desktop CT

    Software:
    - Host OS: Win7 (SSD)
    - VMware Workstation 9 (SSD)
    - FreeNAS 8.3 VM (20GB VDisk on SSD); CPU: I've tried 1, 2 and 4 cores; virtualisation engine, preferred mode: Automatic; 10.24 GB RAM; 50GB SCSI VDisk on the RAID0, formatted as ZFS and exposed through CIFS via FreeNAS; NIC bridge, replicate physical network state

    Below are two typical process printouts while I'm transferring one file to the CIFS share:

        last pid:  2707;  load averages:  0.60,  0.43,  0.24   up 0+00:07:05  00:34:26
        32 processes:  2 running, 30 sleeping
        Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free
        Swap: 4096M Total, 4096M Free

          PID USERNAME  THR PRI NICE   SIZE    RES STATE   TIME   WCPU COMMAND
         2640 root        1 102    0 50164K 10364K RUN     0:25 25.98% smbd
         1897 root        6  44    0   168M 74808K uwait   0:02  0.00% python

        last pid:  2746;  load averages:  0.93,  0.60,  0.33   up 0+00:08:53  00:36:14
        33 processes:  2 running, 31 sleeping
        Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free
        Swap: 4096M Total, 4096M Free

          PID USERNAME  THR PRI NICE   SIZE    RES STATE   TIME   WCPU COMMAND
         2640 root        1  76    0 50164K 10364K RUN     0:52 16.99% smbd
         1897 root        6  44    0   168M 74816K uwait   0:02  0.00% python

    I'm sorry if my question isn't phrased right; I'm really bad at this kind of thing, and it is the first time I post here at SU. I'd also appreciate any other suggestions about something I could have missed.


  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, taking away more packages than I expected (such as ubuntu-studio video):

        sudo apt-get -qq remove ffmpeg x264 libx264-dev

    When the script gets to the line below, it bombs:

        sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg

    The error message is:

        E: Unable to correct problems, you have held broken packages.

    I've since run Update Manager, run sudo apt-get update, rebooted, tried the above script line manually, and still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above: still the same problem of held broken packages.

    I just ran sudo apt-get -u dist-upgrade. Part of the response was:

        The following packages have been kept back:
          gstreamer0.10-ffmpeg

    I'm not sure what that means. I do know that it shows up in my Update Manager and cannot be checked. I then ran sudo dpkg --configure -a and then reran sudo apt-get -f install, and the package was still not upgraded, though there was this very interesting comment:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or
                                         libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed
        E: Unable to correct problems, you have held broken packages.

    Then I ran sudo apt-get -u dist-upgrade again. It showed I had one held package, so I ran:

        sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade

    It also exited without upgrading the package, so I ran:

        sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386

    And it gave me:

        The following packages will be REMOVED:
          arista gstreamer0.10-ffmpeg
        0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
        Remv arista [0.9.7-3ubuntu1]
        Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]

    But when I reran sudo apt-get -u dist-upgrade, it showed the package was still there:

        The following packages have been kept back:
          gstreamer0.10-ffmpeg
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    Update: I just went into Synaptic Package Manager and completely removed gstreamer0.10-ffmpeg, then reran sudo apt-get -u dist-upgrade and was told:

        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    However, when I ran the original apt-get to install OpenCV (the first command at the top of this question), it still gave me the same broken-package errors.

    So I tried cat /etc/apt/sources.list:

        #
        # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe
        # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted

        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ precise universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe

        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse

        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://security.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
        deb http://security.ubuntu.com/ubuntu precise-security universe
        deb-src http://security.ubuntu.com/ubuntu precise-security universe
        deb http://security.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://security.ubuntu.com/ubuntu precise-security multiverse

        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner

        ## Uncomment the following two lines to add software from Ubuntu's
        ## 'extras' repository.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu oneiric main
        # deb-src http://extras.ubuntu.com/ubuntu oneiric main

        # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise

    and then cat /etc/apt/sources.list.d/*, but I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback.

    Then I tried:

        $ sudo apt-get check
        [sudo] password for <abcd>:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done

    However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?


  • Strange Flash AS3 xml Socket behavior

    - by Rnd_d
    I have a problem which I can't understand. To understand it I wrote a socket client in AS3 and a server in Python/Twisted; you can see the code of both applications below. Launch two clients at the same time, arrange them so that you can see both windows, and press the connection button in both windows. Then press and hold any key.

    What I'm expecting: the client whose key is pressed sends the message "some data" to the server, then the server sends this message to all the clients (including the original sender). Each client then moves the 'connectButton' button to the right and prints a message to the log with the time in the format "min:secs:milliseconds".

    What is going wrong: the motion is smooth in the client that sends the message, but in all the other clients the motion is jerky. This happens because messages arrive at those clients later than at the original sending client. And if we have three clients (let's name them A, B and C) and we send a message from A, the arrival-time logs of B and C will be the same. Why do the other clients receive these messages later than the original sender? By the way, on Ubuntu 10.04/Chrome all the motion is smooth (two clients launched in separate Chromes). Windows screenshot; I can't post the Linux screenshot, as I need more than 10 reputation to post more hyperlinks.

    Listing of the log, four clients simultaneously:

        [16:29:33.280858] 62.140.224.1 >> some data
        [16:29:33.280912] 87.249.9.98 << some data
        [16:29:33.280970] 87.249.9.98 << some data
        [16:29:33.281025] 87.249.9.98 << some data
        [16:29:33.281079] 62.140.224.1 << some data
        [16:29:33.323267] 62.140.224.1 >> some data
        [16:29:33.323326] 87.249.9.98 << some data
        [16:29:33.323386] 87.249.9.98 << some data
        [16:29:33.323440] 87.249.9.98 << some data
        [16:29:33.323493] 62.140.224.1 << some data
        [16:29:34.123435] 62.140.224.1 >> some data
        [16:29:34.123525] 87.249.9.98 << some data
        [16:29:34.123593] 87.249.9.98 << some data
        [16:29:34.123648] 87.249.9.98 << some data
        [16:29:34.123702] 62.140.224.1 << some data

    AS3 client code:

        package {
            import adobe.utils.CustomActions;
            import flash.display.Sprite;
            import flash.events.DataEvent;
            import flash.events.Event;
            import flash.events.IOErrorEvent;
            import flash.events.KeyboardEvent;
            import flash.events.MouseEvent;
            import flash.events.SecurityErrorEvent;
            import flash.net.XMLSocket;
            import flash.system.Security;
            import flash.text.TextField;

            public class Main extends Sprite {
                private var socket        :XMLSocket;
                private var textField     :TextField = new TextField;
                private var connectButton :TextField = new TextField;

                public function Main():void {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(event:Event = null):void {
                    socket = new XMLSocket();
                    socket.addEventListener(Event.CONNECT, connectHandler);
                    socket.addEventListener(DataEvent.DATA, dataHandler);
                    stage.addEventListener(KeyboardEvent.KEY_DOWN, keyDownHandler);

                    addChild(textField);
                    textField.y = 50;
                    textField.width = 780;
                    textField.height = 500;
                    textField.border = true;

                    connectButton.selectable = false;
                    connectButton.border = true;
                    connectButton.addEventListener(MouseEvent.MOUSE_DOWN, connectMouseDownHandler);
                    connectButton.width = 105;
                    connectButton.height = 20;
                    connectButton.text = "click here to connect";
                    addChild(connectButton);
                }

                private function connectHandler(event:Event):void {
                    textField.appendText("Connect\n");
                    textField.appendText("Press and hold any key\n");
                }

                private function dataHandler(event:DataEvent):void {
                    var now:Date = new Date();
                    textField.appendText(event.data + " time = " + now.getMinutes() + ":" + now.getSeconds() + ":" + now.getMilliseconds() + "\n");
                    connectButton.x += 2;
                }

                private function keyDownHandler(event:KeyboardEvent):void {
                    socket.send("some data");
                }

                private function connectMouseDownHandler(event:MouseEvent):void {
                    var connectAddress:String = "ep1c.org";
                    var connectPort:Number = 13250;
                    Security.loadPolicyFile("xmlsocket://" + connectAddress + ":" + String(connectPort));
                    socket.connect(connectAddress, connectPort);
                }
            }
        }

    Python server code:

        from twisted.internet import reactor
        from twisted.internet.protocol import ServerFactory
        from twisted.protocols.basic import LineOnlyReceiver
        import datetime

        class EchoProtocol(LineOnlyReceiver):
            #####
            name = ""
            id = 0
            delimiter = chr(0)
            #####

            def getName(self):
                return self.transport.getPeer().host

            def connectionMade(self):
                self.id = self.factory.getNextId()
                print "New connection from %s - id:%s" % (self.getName(), self.id)
                self.factory.clientProtocols[self.id] = self

            def connectionLost(self, reason):
                print "Lost connection from " + self.getName()
                del self.factory.clientProtocols[self.id]
                self.factory.sendMessageToAllClients(self.getName() + " has disconnected.")

            def lineReceived(self, line):
                print "[%s] %s >> %s" % (datetime.datetime.now().time(), self, line)
                if line == "<policy-file-request/>":
                    data = """<?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <!-- Policy file for xmlsocket://ep1c.org -->
        <cross-domain-policy>
          <allow-access-from domain="*" to-ports="%s" />
        </cross-domain-policy>""" % PORT
                    self.send(data)
                else:
                    self.factory.sendMessageToAllClients(line)

            def send(self, line):
                print "[%s] %s << %s" % (datetime.datetime.now().time(), self, line)
                if line:
                    self.transport.write(str(line) + chr(0))
                else:
                    print "Nothing to send"

            def __str__(self):
                return self.getName()

        class ChatProtocolFactory(ServerFactory):
            protocol = EchoProtocol

            def __init__(self):
                self.clientProtocols = {}
                self.nextId = 0

            def getNextId(self):
                id = self.nextId
                self.nextId += 1
                return id

            def sendMessageToAllClients(self, msg):
                for client in self.clientProtocols:
                    self.clientProtocols[client].send(msg)

            def sendMessageToClient(self, id, msg):
                self.clientProtocols[id].send(msg)

        PORT = 13250

        print "Starting Server"
        factory = ChatProtocolFactory()
        reactor.listenTCP(PORT, factory)
        reactor.run()
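    One hedged guess (mine, not from the post): the staggered arrival at the non-sending clients looks like what Nagle's algorithm plus delayed ACKs produces on many small writes. Twisted can disable Nagle per connection; a minimal sketch, written as a drop-in replacement for EchoProtocol.connectionMade in the server above:

        # Hedged sketch: disable Nagle's algorithm on each accepted connection.
        # setTcpNoDelay() is part of Twisted's TCP transport interface; whether
        # it actually cures the jerky motion here is an assumption, not a tested fix.
        def connectionMade(self):
            self.transport.setTcpNoDelay(True)  # flush small frames immediately
            self.id = self.factory.getNextId()
            print "New connection from %s - id:%s" % (self.getName(), self.id)
            self.factory.clientProtocols[self.id] = self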


  • Which third party website thumbnailing services do you use?

    - by Ben Delarre
    I've got a requirement for showing thumbnails of arbitrary websites. I need to be able to show small thumbnails (120px by 90px) and larger thumbnails of around 480px wide. I'll need to specify the queued and invalid placeholder images, and preferably have a pingback when the queued images are processed so I can respond appropriately. I'd also need a simple API I can use either directly embedded in my HTML or from a simple web request to queue the images. I've been looking at various services, ranging from low-fi to large-scale ones - here are some examples:

    - www.bitpixels.com - Uses Google AppEngine; seems like a prototype or a toy. Free!
    - www.websnapr.com - Tried using this: I made a free account and requested a thumbnail, waited a few minutes, refreshed a couple of times, and ended up having the account banned. Free is tricky, yes, but if I can't try it out successfully I'm disinclined to pay.
    - www.shrinktheweb.com - The free account seems to be very quick. Lots of documentation on the site, even covering local caching of the images to your own server (documentation mostly in PHP). Quality of thumbnails looks good, and there appear to be sufficient options for setting thumbnail placeholder images and parameters for altering how the thumbnailing is done. Also supports large 'screenshots' of URLs - very useful for me. Discovered the PRO pricing is an à la carte menu, allowing me to select just the features I want and keep the monthly cost low. Excellent stuff; I have decided to use this service.
    - www.thumbalizr.com - Good coverage of thumbnail sizes and control options, even allowing specification of browser width when thumbnailing. No pingback, but I can live without that. Supports local caching of images with a PHP API; I would prefer .NET but can port it if necessary. Looks like a fairly professional service, but seems fairly expensive for the number of thumbnails you get to generate.

    (Apologies for the lack of proper linking - spam protection!)

    I'm not entirely convinced by any of them, and since this will be a long-term service I'd like some stability and support. I'm willing to pay for the service, but I'd want something that fulfils most if not all of my requirements for that. I should also mention that we're hosted on Windows under IIS, so local solutions involving Xvfb and the like sadly can't be used for this project. So my question is: which services do you use? How have they panned out, and are you happy with them?
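    For flavour, the kind of integration described here - queueing a thumbnail via a simple web request and embedding the result in HTML - can be sketched as follows. The endpoint, parameters and key are entirely hypothetical; every provider listed above has its own real API:

        # Hypothetical thumbnail-service call: the endpoint and parameter names
        # are invented for illustration and belong to no real provider.
        import urllib

        API_ENDPOINT = "http://thumbnails.example.com/queue"   # hypothetical
        API_KEY = "MY-API-KEY"                                 # hypothetical

        def thumbnail_url(page_url, width=120, height=90):
            query = urllib.urlencode({
                "url": page_url,
                "width": width,
                "height": height,
                "key": API_KEY,
            })
            return "%s?%s" % (API_ENDPOINT, query)

        # Embed directly in HTML; the service would serve the queued/invalid
        # placeholder until the real thumbnail has been rendered.
        print('<img src="%s" alt="thumbnail" />' % thumbnail_url("http://superuser.com"))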


  • Google Bookmarks Accelerator for IE8

    - by MACHiNESMiTH
    To the editors: sorry, I didn't realise the question thing till you pointed it out. Also, I didn't know about the appengine tag; I'm assuming it was selected by mistake from that JS autosuggestion box (it usually happens with me - I run a P3). My apologies for that too. I've edited it now; if it doesn't work, let me know.

    Hi all, I'm making (or at least trying to make) a Google Bookmarks accelerator for IE8, but I keep running into "Internet Explorer could not install this accelerator. There was a problem with the Accelerator's information." Does anyone know why this is happening? Here is the code I've made:

        <?xml version="1.0" encoding="UTF-8" ?>
        <os:openServiceDescription xmlns="http://www.microsoft.com/schemas/openservicedescription/1.0">
          <os:homepageUrl>http://www.google.com/bookmarks</os:homepageUrl>
          <os:display>
            <os:description>Add this page to GoogleBookmarks</os:description>
            <os:name>Add to GoogleBookmarks</os:name>
            <os:icon>http://www.google.com/favicon.ico</os:icon>
          </os:display>
          <os:activity category="Bookmarks">
            <os:activityAction context="document">
              <os:execute method="get" action="http://www.google.com/bookmarks/mark?op=add&output=popup">
                <os:parameter name="bkmk" value="{documentUrl}" />
                <os:parameter name="title" value="{documentTitle}" />
                <os:parameter name="annotation" value="{selection}" />
              </os:execute>
            </os:activityAction>
          </os:activity>
        </os:openServiceDescription>

    These are the references I used:

    - MSDN Developer's Guide
    - MSDN Format Specification
    - Unofficial Google API (used as a lookup so far)

    And yes, I know that I cannot send document variables between schemes (HTTP - HTTPS, for example) or between servers in different security zones (Intranet - Internet, etc.).

    Also, if this is the wrong place, what are the recommended sites and/or forums for XML-related posts? (Not really questions, but more like full-blown, detailed, rant-like posts.) Thanks


  • One-to-many relationship with JDO in Google App Engine

    - by Marvin
    I've followed the GAE docs on setting up a one-to-many relationship in JDO, but I'm still having trouble retrieving the collection data. I have no problem getting the other, non-collection fields back. Here are my classes:

        @PersistenceCapable
        public class User {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String uniqueId;

            @Persistent
            private String email;

            @Persistent
            private List<Address> addresses = new ArrayList<Address>();
            ...
        }

        @PersistenceCapable
        public class Phone {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String number;
            ...
        }

        public class UserDaoImpl implements UserDao {
            public void insertUser(User user) {
                if (user.getKey() == null) {
                    com.google.appengine.api.datastore.Key key =
                        KeyFactory.createKey(User.class.getSimpleName(), user.getEmail());
                    user.setKey(key);
                }
                PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
                notNull(user);
                try {
                    pm.makePersistent(user);
                } finally {
                    pm.close();
                }
            }

            @SuppressWarnings("unchecked")
            public User getUser(String uniqueId) {
                PersistenceManager pm = PersistenceManagerWrapper.getPersistenceManager();
                Query query = pm.newQuery(User.class);
                query.setFilter("uniqueId == uniqueIdParam");
                query.declareParameters("String uniqueIdParam");
                User user = null;
                try {
                    List<User> users = (List<User>) (query.execute(uniqueId));
                    // TODO abstract this
                    if (users.size() > 0)
                        user = users.get(0);
                } finally {
                    pm.close();
                }
                return user;
            }
        }

        public class UserDaoImplTest {
            @Test
            public void getUserTest() {
                User user = createTestUser();
                assertNotNull("The user object should not be null", user);
                userDao.insertUser(user);
                User returnedUser = userDao.getUser(TEST_USER_ID);
                assertNotNull("The returnedUser object should not be null", returnedUser);
                Assert.assertPropertyEqualsExcludeProperties("User Object", user, returnedUser, "");
            }
        }

    When I run the test, all the properties for User are populated, but the list of Phone I get is empty.


  • How do I apply a servlet filter when serving an HTML page directly?

    - by Philippe Beaudoin
    First off, I'm using Google AppEngine and Guice, but I suspect my problem is not related to these. When the user connects to my (GWT) webapp, the URL is a direct HTML page. For example, in development mode, it is: http://127.0.0.1:8888/Puzzlebazar.html?gwt.codesvr=127.0.0.1:9997. Now, I set up my web.xml in the following way:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
                 version="2.5">

          <display-name>PuzzleBazar</display-name>

          <!-- Default page to serve -->
          <welcome-file-list>
            <welcome-file>Puzzlebazar.html</welcome-file>
          </welcome-file-list>

          <filter>
            <filter-name>guiceFilter</filter-name>
            <filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
          </filter>

          <filter-mapping>
            <filter-name>guiceFilter</filter-name>
            <url-pattern>/*</url-pattern>
          </filter-mapping>

          <!-- This Guice listener hijacks all further filters and servlets. Extra
               filters and servlets have to be configured in your
               ServletModule#configureServlets() by calling
               serve(String).with(Class<? extends HttpServlet>) and
               filter(String).through(Class<? extends Filter>) -->
          <listener>
            <listener-class>com.puzzlebazar.server.guice.MyGuiceServletContextListener</listener-class>
          </listener>

        </web-app>

    Since I'm using Guice, I have to configure extra filters in my ServletModule, where I do this:

        filter("*.html").through(SecurityCookieFilter.class);

    But my SecurityCookieFilter.doFilter is never called. I tried things like "*.html*" or <url-pattern>*</url-pattern>, but to no avail. Any idea how I should do this?


  • Sessions not persisting between requests

    - by klonq
    My session objects are only stored within the request scope on Google App Engine, and I can't figure out how to persist objects between requests. The docs are next to useless on this matter, and I can't find anyone who's experienced a similar problem. Please help.

    When I store session objects in the servlet and forward the request to a JSP using:

        getServletContext().getRequestDispatcher("/example.jsp").forward(request, response);

    everything works like it should. But when I store objects in the session and redirect the request using:

        response.sendRedirect("/example/url");

    the session objects are lost to the ether. In fact, when I dump session key/value pairs on new requests, there is absolutely nothing; session objects only appear within the request scope of the servlets which create them. It appears to me that the objects are not being written to Memcache or the Datastore.

    In terms of configuring sessions for my application, I have set

        <sessions-enabled>true</sessions-enabled>

    in appengine-web.xml. Is there anything else I am missing? The single paragraph of documentation on sessions also notes that only objects which implement Serializable can be stored in the session between requests. I have included an example of the code which is not working below. The obvious solution is to not use redirects, and this might be OK for the example given below, but some application data does need to be stored in the session between requests, so I need to find a solution to this problem.

    EXAMPLE: The class FlashMessage gives feedback to the user from server-side operations.

        if (email.send()) {
            FlashMessage flash = new FlashMessage(FlashMessage.SUCCESS, "Your message has been sent.");
            session.setAttribute(FlashMessage.SESSION_KEY, flash);
            // The flash message will not be available in the session object in the next request
            response.sendRedirect(URL.HOME);
        } else {
            FlashMessage flash = new FlashMessage(FlashMessage.ERROR, FlashMessage.INVALID_FORM_DATA);
            session.setAttribute(FlashMessage.SESSION_KEY, flash);
            // The flash message is displayed without problem
            getServletContext().getRequestDispatcher(Templates.CONTACT_FORM).forward(request, response);
        }

    FlashMessage.java:

        import java.io.Serializable;

        public class FlashMessage implements Serializable {

            private static final long serialVersionUID = 8109520737272565760L;
            // I have tried using different, default and no serialVersionUID

            public static final String SESSION_KEY = "flashMessage";
            public static final String ERROR = "error";
            public static final String SUCCESS = "success";
            public static final String INVALID_FORM_DATA = "Your request failed to validate.";

            private String message;
            private String type;

            public FlashMessage(String type, String message) {
                this.type = type;
                this.message = message;
            }

            public String display() {
                return "<div id='flash' class='" + type + "'>" + message + "</div>";
            }
        }


  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application in addition to their own database. The setup has each web application instance deployed on two different IIS boxes, both for load-balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering (as it doesn't provide storage-level failover: we don't have a shared storage box). Because it's a Windows setup, there are also two Domain Controllers (we cheat: they're both Mac Minis, 17W each, which keeps our colo power costs low). Finally, there's also an Exchange server (Mailbox, Hub Transport and Client Access); one of the SQL Servers also doubles up as an Exchange Hub Transport.

    Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), plus about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs, etc.

    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008, when the cloud was taking off, I was reading up on the proper "cloud" services like Google AppEngine, where you write in Python against Google's API and that's how they scale your application across servers, also using their database provider for scaling storage. Simple enough to understand. Then along came Amazon, and I understand how Amazon Storage works, but I'm not sure how Amazon Compute works: web application pages don't take much CPU time to compute, so how do you even quantify usage?

    Finally, Rackspace gets in on the act and now I'm really confused. Rackspace advertises "Cloud" SQL Server 2012 available for about "$0.70 per hour". Going by how they advertise it, I thought the "hour" meant the sum of CPU time, IO blocking time, and maybe time spent transferring data, so for a low-intensity application that works out pretty cheap, then? Nope. I went into a sales chat window and spoke to one of their advisors. They told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at $520 a month, which is ridiculously expensive for SQL Server; an SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack the cost of the host Windows server instance on top.

    From what I can tell, Rackspace's "Cloud" products seem like a cynical rebranding of an overpriced VPS service, but priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe the products available, but I think that's because Azure offers both traditional shared webhosting and their own APIs you can target for scalable applications.
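    A quick check of the pricing arithmetic quoted above (plain Python, just reproducing the figures in the post):

        # Reproduce the cost arithmetic quoted above.
        hourly = 0.70        # advertised "Cloud" SQL Server rate, $/hour
        hours = 24 * 31      # a full month, running continuously
        monthly = hourly * hours
        print("SQL Server at $%.2f/hour, 24x31: $%.2f/month" % (hourly, monthly))
        # -> $520.80/month, versus roughly $50/month quoted for an SPLA licence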


  • Google Talk Chat/Conference Solutions

    - by Adam Davis
    I started using the old confbot Python conference script in 2005 for my family. It essentially implements an IRC-like conference room over Google Talk (or any Jabber/XMPP server). It has significantly increased family communication and has become rather indispensable as a result. Recently it has begun to have severe problems (people can't see each other in the conference room) which have nearly killed its usefulness. Before I develop my own software or debug confbot (probably not - it uses an older Jabber library that hasn't been updated since 2003), I wanted to see what other solutions exist that meet our needs:

    - Supports Google Talk (sorry, I'm not going to try to convince everyone involved to move to a new IM or other client)
    - Free and open source (ideal, but not required)
    - Runs on Windows (not a web service run by someone else)
    - Implements basic functionality such as kick/ban and emotes
    - Remembers who joined the conference room across restarts
    - Obeys Do Not Disturb and Busy status
    - Archives all activity

    -Adam


  • Unusual backspace behavior in the Mac terminal

    - by Brandon
    I'm trying to figure out how to get SSH sessions to work the way I want using the Terminal app on Mac OS X. I'm used to PuTTY on Windows, where backspace means backspace. On the Mac, when I press delete/backspace, it deletes the character following the cursor instead of the one before. I turned on "Delete sends Ctrl+H", and that works most of the time, but sometimes it just shows on the screen as ^H; this is typically at prompts from some custom Python scripts on the box I log into. This doesn't happen with PuTTY on Windows. By the way, I'm logging into an Ubuntu Linux server running OpenSSH. Any idea what I need to do so that backspace is consistently backspace?


  • Sending ctrl-backslash from Windows to Linux using Synergy

    - by jrbushell
    I'm using Synergy+ to share the keyboard and mouse between a Windows and a Linux (Red Hat) PC. The Windows box is the server, Linux is the client, and both are running version 1.3.4. My Windows box is set up for an English UK keyboard. In a Linux terminal window, Ctrl+\ (backslash) sends a quit signal to the currently running program - useful to kill a Python script that's run amok, for example. When I try to do this via Synergy, Ctrl+- (minus) is sent instead, which has the undesired effect of resizing my terminal window :( Backslash on its own and Shift+backslash (= pipe) both work fine. Any ideas what's happening?


  • Linux Email Server Auto-Reply

    - by Robert Smith
    I need to set up a mail server with the following functionality: if a user sends an email to a specific address on this server, the server must first check whether the email has a PDF attachment, do some processing on that PDF file, and then reply to the user's initial mail with the new PDF file attached. My question is how it would be possible to achieve this functionality, and what software/mail server you recommend. I'm thinking it can be solved the following way: when the server receives a new email, it executes an external Python script that checks the attachment, processes the PDF file, and then sends it back to the user's mailbox. What mail server would be able to do this, and what configuration does it need?
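    As a sketch of the scripting half (my own illustration; the mail-server hook, such as a Postfix alias or a procmail rule, is assumed and not shown), detecting and extracting a PDF attachment in Python might look like this:

        # Hedged sketch: read a raw RFC 2822 message on stdin, pull out the
        # first PDF attachment, and hand it to a hypothetical process_pdf()
        # step. Wiring this up to the MTA (aliases, procmail, etc.) is not shown.
        import email
        import sys

        def extract_pdf(raw_message):
            msg = email.message_from_string(raw_message)
            for part in msg.walk():
                if part.get_content_type() == 'application/pdf':
                    filename = part.get_filename() or 'attachment.pdf'
                    return filename, part.get_payload(decode=True)
            return None, None

        if __name__ == '__main__':
            name, data = extract_pdf(sys.stdin.read())
            if data:
                print('Found PDF attachment: %s (%d bytes)' % (name, len(data)))
                # process_pdf(data)  # hypothetical processing step; then build
                # and send the reply with smtplib / email.mime
            else:
                print('No PDF attachment found')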


  • Stop a runaway launchd process on OS X Leopard

    - by Skylarking
    I created a launchd .plist file in order to have a Python script run every hour. I ended up editing and later deleting the .plist file (in /Library/LaunchDaemons)... but somehow launchd is still trying to run the script. The file that the original .plist was calling is also no longer present (it was in /usr/bin). Now, every 10 seconds, launchd attempts to run the script, fails, and respawns. I tried fixing this with Lingon, to no avail. Is there a way to kill this process for good? I tried logging out and restarting as well. The machine is running 10.5.8.


  • Redirection problem for my sites

    - by redirect-p
    I have a site example.com and another one, test.example.com. Both have different configuration files, but when I enter the URL test.example.com it redirects to example.com. The configuration file for example.com:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DirectoryIndex index.html
            DocumentRoot my-document-path
            Options -Indexes
            ErrorDocument 404 /errors/404.html
            ErrorDocument 403 /errors/404.html

            <Location "/">
                SetHandler python-program
                PythonHandler django.core.handlers.modpython
                SetEnv DJANGO_SETTINGS_MODULE example.settings
                PythonPath "['path', 'path'] + sys.path"
                PythonInterpreter example
                PythonAutoReload On
                PythonDebug On
            </Location>
        </VirtualHost>


  • Don't know why this small shell script won't work

    - by tominated
    Hi, I'm trying to make a small script to start up gunicorn for a Python website I'm making. I have slightly modified the script found at https://github.com/benoitc/gunicorn/blob/master/examples/gunicorn_rc. Here's my version:

        #!/bin/sh

        GUNICORN=/usr/local/bin/gunicorn
        ROOT=/srv/mobile-site/app
        PID=/var/run/gunicorn.pid
        APP=mobilecms:app

        if [ -f $PID ]; then
            rm $PID
        fi

        cd $ROOT
        exec $GUNICORN -b 127.0.0.1:8080 -w 8 -k gevent --pidfile=$PID $APP

    When I try to run the script, though, it shows this error:

        /etc/init.d/gunicorn: 13: Syntax error: end of file unexpected (expecting "fi")

    Does anyone know what's wrong?
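    One hedged guess (mine, not from the post), since the script itself parses cleanly: this exact dash error often comes from Windows (CRLF) line endings, which make the shell see "fi" followed by a carriage return instead of "fi". A quick check in Python:

        # Hedged sketch: count CRLF line endings in the init script. If this
        # prints a non-zero number, the "expecting fi" error is likely just
        # carriage returns; re-save the file with Unix (LF) endings.
        data = open('/etc/init.d/gunicorn', 'rb').read()
        print('CRLF endings found: %d' % data.count(b'\r\n'))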


  • How do I calculate the amount of tuning needed for my server?

    - by Low Kian Seong
    I have a server which runs a few discrete Python and Java applications that, most of the time, import data into a PostgreSQL database. I would like to know, from people out there with experience tuning enterprise-grade servers, how I go about calculating, in a holistic way, the amount of tuning needed for my server: for example vm.swappiness, vm.overcommit_ratio, and other numerical tunings. I tried enabling sar on my server to capture daily numbers, but these are more along the lines of totals, and I can't figure out how to allocate memory for my applications. Help would be appreciated. Thanks.


  • mod_fcgid process doesn't respawn

    - by aaronsw
    I have a Python script running on my server as a FastCGI using Apache2 and mod_fcgid. I let it spawn up to five processes. But I soon get messages like these in the Apache logs:

        [Wed Sep 02 23:16:34 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
        [Wed Sep 02 23:16:35 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function

    and then Apache doesn't seem to recognize that all its processes are dead (I have a max of 5 backends) and refuses to spawn new ones:

        [Wed Sep 02 23:26:16 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request
        [Wed Sep 02 23:26:17 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request

    at which point it refuses to respond to requests from the outside world. This doesn't seem to happen with my other FastCGIs, which all use the same Apache config:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            IPCConnectTimeout 20
            MaxProcessCount 5
            DefaultMaxClassProcessCount 2
            DefaultMinClassProcessCount 1
        </IfModule>

    Any idea what causes it?


  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data has been sent. For example, I've got a view which looks like this:

        def debug_big_file(request):
            return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

        $ curl https://example.com/debug/big-file | wc
        …
        curl: (18) transfer closed with outstanding read data remaining
              0       1   65283

    Note that everything works as expected when accessing gunicorn directly:

        $ curl http://localhost:1234/debug/big-file | wc
        …
              0       1  500000

    The relevant nginx config:

        location / {
            proxy_pass http://localhost:1234/;
            proxy_redirect off;
            proxy_headers_hash_bucket_size 96;
        }

    And the nginx version is 1.7.0. Some other facts:

    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283)
    - 110k bytes are sent correctly (i.e., "x" * 110000 returns all 110,000 bytes), but 120k bytes are not
    - tcpdump suggests that nginx is sending a RST packet to gunicorn:


  • How to create a tunnel to use for a telnet connection

    - by Z12
    The scenario is as follows: Machine A is located behind a client firewall and runs telnetd. It is a Linux machine with Python 2.5.4 installed. I do not know the IP address of the router; the firewall is closed for incoming connections but open for outgoing ones. Machine B (a Windows machine) is a server with a well-known IP address. I can install any programs I want on either machine. The idea is that I want Machine A to open a socket to Machine B, then hold that socket and use it to run a telnet session from Machine B to Machine A's telnetd server. Is there any freeware that does this? Thoughts? Thanks!
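    For what it's worth, the mechanism described - a reverse tunnel - is small enough to sketch in the Python 2.5 available on Machine A. This is a minimal illustration only: no error handling, authentication or reconnection, and SERVER_B is a placeholder for Machine B's address:

        # Hedged sketch of a reverse tunnel for Machine A (Python 2.5 era).
        # It dials out to Machine B (allowed, since outgoing is open), then
        # bridges that connection to the local telnetd, so B can drive a
        # telnet session back through the firewall.
        import select
        import socket

        SERVER_B = ('b.example.com', 9999)   # placeholder for Machine B
        TELNETD = ('127.0.0.1', 23)          # local telnetd on Machine A

        outbound = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        outbound.connect(SERVER_B)           # outgoing, so the firewall allows it

        local = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        local.connect(TELNETD)

        # Pump bytes in both directions until either side closes.
        sockets = [outbound, local]
        peer = {outbound: local, local: outbound}
        while True:
            readable, _, _ = select.select(sockets, [], [])
            for s in readable:
                data = s.recv(4096)
                if not data:
                    raise SystemExit('connection closed')
                peer[s].sendall(data)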


  • Apache/Django subdomains problem

    - by thomasgg
    I currently have an Apache configuration that works only with the localhost domain (http://localhost/):

        Alias /media/ "/sciezka/do/instalacji/django/contrib/admin/media/"
        Alias /site_media/ "/sciezka/do/plikow/site_media/"

        <Location "/">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE settings
            PythonPath "['/thomas/django_projects/project'] + sys.path"
            PythonDebug On
        </Location>

        <Location "/site_media">
            SetHandler none
        </Location>

    How can I make it work for subdomains like pl.localhost or uk.localhost? These subdomains should display the same page as the domain (localhost). Second question: is it possible to change the default localhost address (http://localhost/) to (http://localhost.com/) or (http://www.localhost.com/) or something else?

